==> mongo-ruby-driver-2.21.3/.dockerignore <==

/.git
/yard-docs
Gemfile.lock
gemfiles/*.lock
.env.private*

==> mongo-ruby-driver-2.21.3/.evergreen/.gitignore <==

/Dockerfile

==> mongo-ruby-driver-2.21.3/.evergreen/README.md <==

# Evergreen Tests

This directory contains configuration and scripts used to run the driver's
test suite in Evergreen, MongoDB's continuous integration system.

## Testing In Docker

It is possible to run the test suite in Docker. This executes all of the
shell scripts as if they were running in the Evergreen environment.

Use the following command:

    ./.evergreen/test-on-docker -d debian92 RVM_RUBY=ruby-2.7

The `-d` option specifies the distro to use. This must be one of the
Evergreen-recognized distros. The arguments are the environment variables as
would be set by Evergreen configuration (i.e. `config.yml` in this
directory). All arguments are optional.

By default the entire test suite is run (using mlaunch to launch the
server); to specify another script, use the `-s` option:

    ./.evergreen/test-on-docker -s .evergreen/run-tests-kerberos-unit.sh

To override just the test command (but maintain the setup performed by
Evergreen shell scripts), use TEST_CMD:

    ./.evergreen/test-on-docker TEST_CMD='rspec spec/mongo/auth'

### Toolchain and Server Preloading

The Docker test runner supports preloading Ruby interpreters and server
binaries in the Docker image, which reduces the runtime of subsequent test
runs. To turn on preloading, use the `-p` option:

    ./.evergreen/test-on-docker -p

It is possible to run the test suite offline (without Internet access)
provided the full process has already been executed. This is accomplished
with the `-e` option and only makes sense when `-p` is also used:

    ./.evergreen/test-on-docker -pe

### Private Environment Variables

Normally the environment variables are specified on the command line as
positional arguments. However, the Ruby driver Evergreen projects also have
private variables containing various passwords which are not echoed in the
build logs, and therefore are not conveniently providable using the normal
environment variable handling. Instead, these variables can be collected
into a [.env](https://github.com/bkeepers/dotenv)-compatible configuration
file, and the path to this configuration file can be provided via the `-a`
option to the test runner. The `-a` option may be given multiple times.

When creating the .env files from Evergreen private variables, the variable
names must be uppercased.

For example, to execute Kerberos integration tests which require private
variables pertaining to the test Kerberos server, you could run:

    ./.evergreen/test-on-docker -d rhel70 RVM_RUBY=ruby-2.5 \
      -s .evergreen/run-tests-kerberos-integration.sh -pa .env.private

The `.env.private` path specifically is listed in .gitignore and
.dockerignore files, and is thus ignored by both Git and Docker.
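For illustration, a minimal `.env.private` might look like the following.
The variable names are the Kerberos ones used elsewhere in this Evergreen
configuration; the values shown are placeholders, not real credentials:

    # .env.private -- placeholder values for illustration only
    SASL_HOST=krb.example.com
    SASL_PORT=28000
    SASL_USER=user
    SASL_PASS=secret
    KEYTAB_BASE64=ZmFrZSBrZXl0YWIgZGF0YQ==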
The private environment variables provided via the `-a` argument are
specified in the `docker run` invocation and are not part of the image
created by `docker build`. Because of this, they override any environment
variables provided as positional arguments.

### Field-Level Encryption (FLE)

The Docker testing script supports running tests with field-level
encryption (FLE). To enable FLE, set the FLE environment variable to true.

Some FLE tests require other environment variables to be set as well. You
may specify these environment variables in a private .env file as explained
in the [Private Environment Variables](#private-environment-variables)
section. The following is a list of required environment variables:

- MONGO_RUBY_DRIVER_AWS_KEY
- MONGO_RUBY_DRIVER_AWS_SECRET
- MONGO_RUBY_DRIVER_AWS_REGION
- MONGO_RUBY_DRIVER_AWS_ARN
- MONGO_RUBY_DRIVER_AZURE_TENANT_ID
- MONGO_RUBY_DRIVER_AZURE_CLIENT_ID
- MONGO_RUBY_DRIVER_AZURE_CLIENT_SECRET
- MONGO_RUBY_DRIVER_AZURE_IDENTITY_PLATFORM_ENDPOINT
- MONGO_RUBY_DRIVER_AZURE_KEY_VAULT_ENDPOINT
- MONGO_RUBY_DRIVER_AZURE_KEY_NAME
- MONGO_RUBY_DRIVER_GCP_EMAIL
- MONGO_RUBY_DRIVER_GCP_PRIVATE_KEY

Here's an example of how to run FLE tests in Docker:

    ./.evergreen/test-on-docker FLE=true -pa .env.private

### rhel62

To run the rhel62 distro in Docker, the host system must be configured to
[emulate syscalls](https://github.com/CentOS/sig-cloud-instance-images/issues/103).
Note that this defeats one of the patches for the Spectre set of processor
vulnerabilities.

## Running MongoDB Server In Docker

It is possible to use the Docker infrastructure provided by the test suite
to provision a MongoDB server deployment in Docker and expose it to the
host. Doing so allows testing on all server versions supported by the test
suite without having to build and install them on the host system, as well
as running the deployment on a distro that differs from that of the host
system.

To provision a deployment, use the `-m` option. This option requires one
argument, which is the port number on the host system to use as the
starting port for the deployment. Use the Evergreen environment variable
syntax to specify the desired server version, topology, authentication and
other parameters. The `-p` argument is supported to preload the server into
the Docker image; its use is recommended with `-m`.

To run a standalone server and expose it on the default port, 27017:

    ./.evergreen/test-on-docker -pm 27017

To run a replica set deployment with authentication and expose its members
on ports 30000 through 30002:

    ./.evergreen/test-on-docker -pm 30000 -d debian92 TOPOLOGY=replica-set AUTH=auth

When OCSP is enabled, the test OCSP responder will be launched on port 8100
and this port will be exposed to the host OS. There must not be another
service using this port on the host OS.

## Testing in AWS

The scripts described in this section assist in running the driver test
suite on EC2 instances and in ECS tasks.

It is recommended to test via Docker on EC2 instances, as this produces
shorter test cycles since all of the cleanup is handled by Docker. Docker
is not usable in ECS (because ECS tasks are already running in Docker
themselves); therefore, testing in ECS tasks requires the non-Docker
scripts, which generally rebuild more of the target instance and thus have
longer test cycles.

### Instance Types

The test suite, as well as the Docker infrastructure if it is used,
requires a decent amount of memory to run. Starting with 2 GB generally
works well, for example via the `t3a.small` instance type.
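As a point of reference, such an instance can be launched with the AWS CLI.
This is a sketch only, not part of the Evergreen tooling; the AMI ID and
key pair name below are placeholders:

    # Launch a t3a.small instance from a placeholder Debian/Ubuntu AMI
    aws ec2 run-instances \
      --image-id ami-0123456789abcdef0 \
      --instance-type t3a.small \
      --key-name my-key-pair \
      --count 1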
### Supported Operating Systems

Currently Debian and Ubuntu operating systems are supported. Support for
other operating systems may be added in the future.

### `ssh-agent` Setup

The AWS testing scripts do not provide a way to specify the private key to
use for authentication. This functionality is instead delegated to
`ssh-agent`. If you do not already have it configured, you can run from
your shell:

    eval `ssh-agent`

This launches an `ssh-agent` instance for the shell in which you run this
command. It is more efficient to run a single `ssh-agent` for the entire
machine, but the procedure for setting this up is outside the scope of this
readme file.

With the agent running, add the private key corresponding to the key pair
used to launch the EC2 instance you wish to use for testing:

    ssh-add path/to/key-pair.pem

### Provision

Given an EC2 instance running a supported Debian or Ubuntu version at IP
`12.34.56.78`, use the `provision-remote` command to prepare it for running
the driver's test suite. This command takes two arguments: the target, in
the form of `username@ip`, and the type of provisioning to perform, which
can be `docker` or `local`. Note that the username for Debian instances is
`admin` and the username for Ubuntu instances is `ubuntu`:

    # Configure a Debian instance to run the test suite via Docker
    ./.evergreen/provision-remote admin@12.34.56.78 docker

    # Configure an Ubuntu instance to run the test suite without Docker
    ./.evergreen/provision-remote ubuntu@12.34.56.78 local

This only needs to be done once per instance.

### Run Tests - Docker

When testing on an EC2 instance, it is recommended to run the tests via
Docker. In this scenario a Docker image is created on the EC2 instance with
the appropriate configuration, then a container is run using this image,
which executes the test suite. All parameters supported by the Docker test
script described above are supported.

Note that the private environment files (`.env.private*`), if any exist,
are copied to the EC2 instance. This is done so that, for example, AWS auth
may be tested in EC2, which generally requires private environment
variables.

Run the `test-docker-remote` script as follows:

    ./.evergreen/test-docker-remote ubuntu@12.34.56.78 MONGODB_VERSION=4.2 -p

The first argument is the target on which to run the tests. All subsequent
arguments are passed to the `test-on-docker` script. In this case,
`test-docker-remote` will execute the following script on the target
instance:

    ./.evergreen/test-on-docker MONGODB_VERSION=4.2 -p

All arguments that `test-on-docker` accepts are accepted by
`test-docker-remote`. For example, to verify that all of the tooling is
working correctly but not run any tests, you could issue:

    ./.evergreen/test-on-docker -p TEST_CMD=true

The private environment files need to be specified explicitly, just like
they need to be explicitly specified to `test-on-docker`. For example:

    ./.evergreen/test-on-docker MONGODB_VERSION=4.2 -pa .env.private

### Run Tests - Local

When testing in an ECS task, the only option is to execute the test suite
locally to the task. This strategy can also be used on an EC2 instance,
although this is not recommended because the test cycle is longer compared
to the Docker testing strategy.

To run the tests in the task, use the `test-remote` script as follows:

    ./.evergreen/test-remote ubuntu@12.34.56.78 \
      env MONGODB_VERSION=4.4 AUTH=aws-regular .evergreen/run-tests-aws-auth.sh

The first argument is the target in the `username@ip` format.
The script first copies the current directory to the target, then executes
the remaining arguments as a shell command on the target. This example uses
`env` to set environment variables that are referenced by the
`.evergreen/run-tests-aws-auth.sh` script.

==> mongo-ruby-driver-2.21.3/.evergreen/atlas <==

symlink -> ../.mod/drivers-evergreen-tools/.evergreen/atlas

==> mongo-ruby-driver-2.21.3/.evergreen/auth_aws <==

symlink -> ../.mod/drivers-evergreen-tools/.evergreen/auth_aws

==> mongo-ruby-driver-2.21.3/.evergreen/aws <==

#!/usr/bin/env ruby

$: << File.join(File.dirname(__FILE__), '../spec')

require 'support/aws_utils'
require 'optparse'

def parse_options
  options = {}
  OptionParser.new do |opts|
    opts.banner = "Usage: aws [options] command ..."

    opts.on("-a", "--access-key-id=ID", "AWS access key ID") do |v|
      options[:access_key_id] = v
    end

    opts.on("-s", "--secret-access-key=SECRET", "AWS secret access key") do |v|
      options[:secret_access_key] = v
    end

    opts.on("-r", "--region=REGION", "AWS region") do |v|
      options[:region] = v
    end

    # launch-ecs options
    opts.on('--ec2', 'Use EC2 launch type instead of Fargate') do |v|
      options[:ec2] = true
    end
  end.parse!
  options
end

def assume_role(arn, options)
  orchestrator = AwsUtils::Orchestrator.new(**options)

  if arn.nil?
    arn = AwsUtils::Inspector.new(**options).assume_role_arn
  end

  credentials = orchestrator.assume_role(arn)
  puts "AWS_ACCESS_KEY_ID=#{credentials.access_key_id}"
  puts "AWS_SECRET_ACCESS_KEY=#{credentials.secret_access_key}"
  puts "AWS_SESSION_TOKEN=#{credentials.session_token}"
  puts
end

def set_instance_profile(instance_id, options)
  unless instance_id
    raise 'Instance id is required'
  end

  orchestrator = AwsUtils::Orchestrator.new(**options)
  orchestrator.set_instance_profile(instance_id)
end

def clear_instance_profile(instance_id, options)
  unless instance_id
    raise 'Instance id is required'
  end

  orchestrator = AwsUtils::Orchestrator.new(**options)
  orchestrator.clear_instance_profile(instance_id)
end

def launch_ec2(public_key_path, options)
  unless public_key_path
    raise "Public key path must be given"
  end

  orchestrator = AwsUtils::Orchestrator.new(**options)
  orchestrator.provision_auth_ec2_instance(public_key_path: public_key_path)
end

def launch_ecs(public_key_path, options)
  unless public_key_path
    raise "Public key path must be given"
  end

  orchestrator = AwsUtils::Orchestrator.new(**options)
  orchestrator.provision_auth_ecs_task(
    public_key_path: public_key_path,
  )
end

options = parse_options

case cmd = ARGV.shift
when 'setup-resources'
  AwsUtils::Provisioner.new.setup_aws_auth_resources
when 'reset-keys'
  AwsUtils::Provisioner.new.reset_keys
when 'assume-role'
  arn = ARGV.shift
  assume_role(arn, options)
when 'set-instance-profile'
  instance_id = ARGV.shift
  set_instance_profile(instance_id, options)
when 'clear-instance-profile'
  instance_id = ARGV.shift
  clear_instance_profile(instance_id, options)
when 'key-pairs'
  AwsUtils::Inspector.new(**options).list_key_pairs
when 'launch-ec2'
  public_key_path, = ARGV
  launch_ec2(public_key_path, options)
when 'stop-ec2'
  orchestrator = AwsUtils::Orchestrator.new(**options)
  orchestrator.terminate_auth_ec2_instance
when 'launch-ecs'
  public_key_path, = ARGV
  launch_ecs(public_key_path, options)
when 'stop-ecs'
  orchestrator = AwsUtils::Orchestrator.new(**options)
  orchestrator.terminate_auth_ecs_task
when 'ecs-status'
  AwsUtils::Inspector.new(**options).ecs_status
when nil
  raise "Command must be given"
else
  raise "Bogus command #{cmd}"
end

==> mongo-ruby-driver-2.21.3/.evergreen/aws_lambda <==

symlink -> ../.mod/drivers-evergreen-tools/.evergreen/aws_lambda

==> mongo-ruby-driver-2.21.3/.evergreen/config.yml <==

# GENERATED FILE - DO NOT EDIT.
# Run `rake eg` to regenerate this file.

# When a task that used to pass starts to fail, go through all versions that
# may have been skipped to detect when the task started failing.
stepback: true

# Fail builds when pre tasks fail.
pre_error_fails_task: true

# Mark a failure as a system/bootstrap failure (purple box) rather than a task
# failure by default.
# Actual testing tasks are marked with `type: test`
command_type: system

# Protect ourselves against a rogue test case, or curl gone wild, that runs forever.
exec_timeout_secs: 5400

# What to do when evergreen hits the timeout (`post:` tasks are run automatically)
timeout:
  - command: shell.exec
    params:
      script: |
        true

functions:
  "fetch source":
    # Executes git clone and applies the submitted patch, if any
    - command: git.get_project
      params:
        directory: "src"
    - command: shell.exec
      params:
        working_dir: "src"
        script: |
          set -ex

          git submodule update --init --recursive

  "create expansions":
    # Make an evergreen expansion file with dynamic values
    - command: shell.exec
      params:
        working_dir: "src"
        script: |
          # Get the current unique version of this checkout
          if [ "${is_patch}" = "true" ]; then
            CURRENT_VERSION=$(git describe)-patch-${version_id}
          else
            CURRENT_VERSION=latest
          fi

          export DRIVERS_TOOLS="$(pwd)/.mod/drivers-evergreen-tools"

          # Python has cygwin path problems on Windows. Detect prospective mongo-orchestration home directory
          if [ "Windows_NT" = "$OS" ]; then # Magic variable in cygwin
            export DRIVERS_TOOLS=$(cygpath -m $DRIVERS_TOOLS)
          fi

          export MONGO_ORCHESTRATION_HOME="$DRIVERS_TOOLS/.evergreen/orchestration"
          export MONGODB_BINARIES="$DRIVERS_TOOLS/mongodb/bin"
          export UPLOAD_BUCKET="${project}"
          export PROJECT_DIRECTORY="$(pwd)"

          cat <<EOT > expansion.yml
          CURRENT_VERSION: "$CURRENT_VERSION"
          DRIVERS_TOOLS: "$DRIVERS_TOOLS"
          MONGO_ORCHESTRATION_HOME: "$MONGO_ORCHESTRATION_HOME"
          MONGODB_BINARIES: "$MONGODB_BINARIES"
          UPLOAD_BUCKET: "$UPLOAD_BUCKET"
          PROJECT_DIRECTORY: "$PROJECT_DIRECTORY"
          PREPARE_SHELL: |
            set -o errexit
            #set -o xtrace
            export DRIVERS_TOOLS="$DRIVERS_TOOLS"
            export MONGO_ORCHESTRATION_HOME="$MONGO_ORCHESTRATION_HOME"
            export MONGODB_BINARIES="$MONGODB_BINARIES"
            export UPLOAD_BUCKET="$UPLOAD_BUCKET"
            export PROJECT_DIRECTORY="$PROJECT_DIRECTORY"

            # TMPDIR cannot be too long, see
            # https://github.com/broadinstitute/cromwell/issues/3647.
            # Why is it even set at all?
            #export TMPDIR="$MONGO_ORCHESTRATION_HOME/db"
            export PATH="$MONGODB_BINARIES:$PATH"
            export PROJECT="${project}"

            export AUTH=${AUTH}
            export SSL=${SSL}
            export TOPOLOGY=${TOPOLOGY}
            export COMPRESSOR=${COMPRESSOR}
            export RVM_RUBY="${RVM_RUBY}"
            export MONGODB_VERSION=${MONGODB_VERSION}
            export CRYPT_SHARED_VERSION=${CRYPT_SHARED_VERSION}
            export FCV=${FCV}
            export MONGO_RUBY_DRIVER_LINT=${LINT}
            export RETRY_READS=${RETRY_READS}
            export RETRY_WRITES=${RETRY_WRITES}
            export WITH_ACTIVE_SUPPORT="${WITH_ACTIVE_SUPPORT}"
            export SINGLE_MONGOS="${SINGLE_MONGOS}"
            export BSON="${BSON}"
            export MMAPV1="${MMAPV1}"
            export FLE="${FLE}"
            export FORK="${FORK}"
            export SOLO="${SOLO}"
            export EXTRA_URI_OPTIONS="${EXTRA_URI_OPTIONS}"
            export API_VERSION_REQUIRED="${API_VERSION_REQUIRED}"
            export DOCKER_DISTRO="${DOCKER_DISTRO}"

            export STRESS="${STRESS}"
            export OCSP_ALGORITHM="${OCSP_ALGORITHM}"
            export OCSP_STATUS="${OCSP_STATUS}"
            export OCSP_DELEGATE="${OCSP_DELEGATE}"
            export OCSP_MUST_STAPLE="${OCSP_MUST_STAPLE}"
            export OCSP_CONNECTIVITY="${OCSP_CONNECTIVITY}"
            export OCSP_VERIFIER="${OCSP_VERIFIER}"

            export ATLAS_REPLICA_SET_URI="${atlas_replica_set_uri}"
            export ATLAS_SHARDED_URI="${atlas_sharded_uri}"
            export ATLAS_FREE_TIER_URI="${atlas_free_tier_uri}"
            export ATLAS_TLS11_URI="${atlas_tls11_uri}"
            export ATLAS_TLS12_URI="${atlas_tls12_uri}"
            export ATLAS_SERVERLESS_URI="${atlas_serverless_uri}"
            export ATLAS_SERVERLESS_LB_URI="${atlas_serverless_lb_uri}"
            export ATLAS_X509_CERT_BASE64="${atlas_x509_cert_base64}"
            export ATLAS_X509_URI="${atlas_x509}"
            export ATLAS_X509_DEV_CERT_BASE64="${atlas_x509_dev_cert_base64}"
            export ATLAS_X509_DEV_URI="${atlas_x509_dev}"

            export RVM_RUBY="${RVM_RUBY}"

            export SERVERLESS_DRIVERS_GROUP="${SERVERLESS_DRIVERS_GROUP}"
            export SERVERLESS_API_PUBLIC_KEY="${SERVERLESS_API_PUBLIC_KEY}"
            export SERVERLESS_API_PRIVATE_KEY="${SERVERLESS_API_PRIVATE_KEY}"
            export SERVERLESS_ATLAS_USER="${SERVERLESS_ATLAS_USER}"
            export SERVERLESS_ATLAS_PASSWORD="${SERVERLESS_ATLAS_PASSWORD}"
          EOT
          # See what we've done
          cat expansion.yml

    # Load the expansion file to make an evergreen variable with the current
    # unique version
    - command: expansions.update
      params:
        file: src/expansion.yml

  "export AWS auth credentials":
    - command: shell.exec
      type: test
      params:
        silent: true
        working_dir: "src"
        script: |
          cat <<EOT > .env.private
          IAM_AUTH_ASSUME_AWS_ACCOUNT="${iam_auth_assume_aws_account}"
          IAM_AUTH_ASSUME_AWS_SECRET_ACCESS_KEY="${iam_auth_assume_aws_secret_access_key}"
          IAM_AUTH_ASSUME_ROLE_NAME="${iam_auth_assume_role_name}"
          IAM_AUTH_EC2_INSTANCE_ACCOUNT="${iam_auth_ec2_instance_account}"
          IAM_AUTH_EC2_INSTANCE_PROFILE="${iam_auth_ec2_instance_profile}"
          IAM_AUTH_EC2_INSTANCE_SECRET_ACCESS_KEY="${iam_auth_ec2_instance_secret_access_key}"
          IAM_AUTH_ECS_ACCOUNT="${iam_auth_ecs_account}"
          IAM_AUTH_ECS_ACCOUNT_ARN="${iam_auth_ecs_account_arn}"
          IAM_AUTH_ECS_CLUSTER="${iam_auth_ecs_cluster}"
          IAM_AUTH_ECS_SECRET_ACCESS_KEY="${iam_auth_ecs_secret_access_key}"
          IAM_AUTH_ECS_SECURITY_GROUP="${iam_auth_ecs_security_group}"
          IAM_AUTH_ECS_SUBNET_A="${iam_auth_ecs_subnet_a}"
          IAM_AUTH_ECS_SUBNET_B="${iam_auth_ecs_subnet_b}"
          IAM_AUTH_ECS_TASK_DEFINITION="${iam_auth_ecs_task_definition_ubuntu2004}"
          IAM_WEB_IDENTITY_ISSUER="${iam_web_identity_issuer}"
          IAM_WEB_IDENTITY_JWKS_URI="${iam_web_identity_jwks_uri}"
          IAM_WEB_IDENTITY_RSA_KEY="${iam_web_identity_rsa_key}"
          IAM_WEB_IDENTITY_TOKEN_FILE="${iam_web_identity_token_file}"
          IAM_AUTH_ASSUME_WEB_ROLE_NAME="${iam_auth_assume_web_role_name}"
          EOT

  "run CSOT tests":
    - command: shell.exec
      type: test
      params:
        shell: bash
working_dir: "src" script: | ${PREPARE_SHELL} # Needed for generating temporary aws credentials. if [ -n "${FLE}" ]; then export AWS_ACCESS_KEY_ID="${fle_aws_key}" export AWS_SECRET_ACCESS_KEY="${fle_aws_secret}" export AWS_DEFAULT_REGION="${fle_aws_region}" fi export CSOT_SPEC_TESTS=1 TEST_CMD="bundle exec rspec spec/spec_tests/client_side_operations_timeout_spec.rb" \ .evergreen/run-tests.sh "export FLE credentials": - command: shell.exec type: test params: silent: true working_dir: "src" script: | cat < .env.private MONGO_RUBY_DRIVER_AWS_KEY="${fle_aws_key}" MONGO_RUBY_DRIVER_AWS_SECRET="${fle_aws_secret}" MONGO_RUBY_DRIVER_AWS_REGION="${fle_aws_region}" MONGO_RUBY_DRIVER_AWS_ARN="${fle_aws_arn}" MONGO_RUBY_DRIVER_AZURE_TENANT_ID="${fle_azure_tenant_id}" MONGO_RUBY_DRIVER_AZURE_CLIENT_ID="${fle_azure_client_id}" MONGO_RUBY_DRIVER_AZURE_CLIENT_SECRET="${fle_azure_client_secret}" MONGO_RUBY_DRIVER_AZURE_IDENTITY_PLATFORM_ENDPOINT="${fle_azure_identity_platform_endpoint}" MONGO_RUBY_DRIVER_AZURE_KEY_VAULT_ENDPOINT="${fle_azure_key_vault_endpoint}" MONGO_RUBY_DRIVER_AZURE_KEY_NAME="${fle_azure_key_name}" MONGO_RUBY_DRIVER_GCP_EMAIL="${fle_gcp_email}" MONGO_RUBY_DRIVER_GCP_PRIVATE_KEY="${fle_gcp_private_key}" MONGO_RUBY_DRIVER_GCP_PROJECT_ID="${fle_gcp_project_id}" MONGO_RUBY_DRIVER_GCP_LOCATION="${fle_gcp_location}" MONGO_RUBY_DRIVER_GCP_KEY_RING="${fle_gcp_key_ring}" MONGO_RUBY_DRIVER_GCP_KEY_NAME="${fle_gcp_key_name}" MONGO_RUBY_DRIVER_MONGOCRYPTD_PORT="${fle_mongocryptd_port}" EOT "export Kerberos credentials": - command: shell.exec type: test params: silent: true working_dir: "src" script: | cat < .env.private SASL_HOST=${sasl_host} SASL_PORT=${sasl_port} SASL_USER=${sasl_user} SASL_PASS=${sasl_pass} SASL_DB=${sasl_db} PRINCIPAL=${principal} KERBEROS_DB=${kerberos_db} KEYTAB_BASE64=${keytab_base64} EOT "exec script" : - command: shell.exec type: test params: working_dir: "src" script: | ${PREPARE_SHELL} sh ${PROJECT_DIRECTORY}/${file} "upload mo artifacts": - command: shell.exec params: script: | ${PREPARE_SHELL} find $MONGO_ORCHESTRATION_HOME -name \*.log\* | xargs tar czf mongodb-logs.tar.gz - command: s3.put params: aws_key: ${aws_key} aws_secret: ${aws_secret} local_file: mongodb-logs.tar.gz remote_file: ${UPLOAD_BUCKET}/${build_variant}/${revision}/${version_id}/${build_id}/logs/${task_id}-${execution}-mongodb-logs.tar.gz bucket: mciuploads permissions: public-read content_type: ${content_type|application/x-gzip} display_name: "mongodb-logs.tar.gz" "upload working dir": - command: archive.targz_pack params: target: "working-dir.tar.gz" source_dir: ${PROJECT_DIRECTORY}/ include: - "./**" - command: s3.put params: aws_key: ${aws_key} aws_secret: ${aws_secret} local_file: working-dir.tar.gz remote_file: ${UPLOAD_BUCKET}/${build_variant}/${revision}/${version_id}/${build_id}/artifacts/${task_id}-${execution}-working-dir.tar.gz bucket: mciuploads permissions: public-read content_type: ${content_type|application/x-gzip} display_name: "working-dir.tar.gz" - command: archive.targz_pack params: target: "drivers-dir.tar.gz" source_dir: ${DRIVERS_TOOLS} include: - "./**" - command: s3.put params: aws_key: ${aws_key} aws_secret: ${aws_secret} local_file: drivers-dir.tar.gz remote_file: ${UPLOAD_BUCKET}/${build_variant}/${revision}/${version_id}/${build_id}/artifacts/${task_id}-${execution}-drivers-dir.tar.gz bucket: mciuploads permissions: public-read content_type: ${content_type|application/x-gzip} display_name: "drivers-dir.tar.gz" "upload test results to s3": - command: s3.put params: 
        aws_key: ${aws_key}
        aws_secret: ${aws_secret}
        # src is the relative path to repo checkout,
        # This is specified in this yaml file earlier.
        local_file: ./src/tmp/rspec.json
        display_name: rspec.json
        remote_file: ${UPLOAD_BUCKET}/${version_id}/${build_id}/artifacts/${build_variant}/rspec.json
        content_type: application/json
        permissions: public-read
        bucket: mciuploads
    # AWS does not appear to support on-the-fly gzip encoding; compress
    # the results manually and upload a compressed file.
    # Typical size reduction: 50 MB -> 800 KB
    - command: shell.exec
      params:
        script: |
          gzip <src/tmp/rspec.json >src/tmp/rspec.json.gz
    - command: s3.put
      params:
        aws_key: ${aws_key}
        aws_secret: ${aws_secret}
        # src is the relative path to repo checkout,
        # This is specified in this yaml file earlier.
        local_file: ./src/tmp/rspec.json.gz
        display_name: rspec.json.gz
        remote_file: ${UPLOAD_BUCKET}/${version_id}/${build_id}/artifacts/${build_variant}/rspec.json.gz
        content_type: application/gzip
        permissions: public-read
        bucket: mciuploads
    - command: shell.exec
      params:
        script: |
          xz -9 <src/tmp/rspec.json >src/tmp/rspec.json.xz
    - command: s3.put
      params:
        aws_key: ${aws_key}
        aws_secret: ${aws_secret}
        # src is the relative path to repo checkout,
        # This is specified in this yaml file earlier.
        local_file: ./src/tmp/rspec.json.xz
        display_name: rspec.json.xz
        remote_file: ${UPLOAD_BUCKET}/${version_id}/${build_id}/artifacts/${build_variant}/rspec.json.xz
        content_type: application/x-xz
        permissions: public-read
        bucket: mciuploads

  "upload test results":
    - command: attach.xunit_results
      params:
        file: ./src/rspec.xml

  "delete private environment":
    - command: shell.exec
      type: test
      params:
        silent: true
        working_dir: "src"
        script: |
          rm -f .env.private

  "build and test docker image":
    - command: shell.exec
      type: test
      params:
        shell: bash
        working_dir: "src"
        script: |
          ${PREPARE_SHELL}
          set -x
          .evergreen/test-on-docker -d ${os} MONGODB_VERSION=${mongodb-version} TOPOLOGY=${topology} RVM_RUBY=${ruby} -s .evergreen/run-tests.sh TEST_CMD=true ${PRELOAD_ARG}

  "run benchmarks":
    - command: shell.exec
      type: test
      params:
        shell: bash
        working_dir: "src"
        script: |
          ${PREPARE_SHELL}
          TEST_CMD="bundle exec rake driver_bench" PERFORMANCE_RESULTS_FILE="$PROJECT_DIRECTORY/perf.json" .evergreen/run-tests.sh
    - command: perf.send
      params:
        file: "${PROJECT_DIRECTORY}/perf.json"

  "run tests":
    - command: shell.exec
      type: test
      params:
        shell: bash
        working_dir: "src"
        script: |
          ${PREPARE_SHELL}
          # Needed for generating temporary aws credentials.
          if [ -n "${FLE}" ]; then
            export AWS_ACCESS_KEY_ID="${fle_aws_key}"
            export AWS_SECRET_ACCESS_KEY="${fle_aws_secret}"
            export AWS_DEFAULT_REGION="${fle_aws_region}"
          fi
          .evergreen/run-tests.sh

  "run tests via docker":
    - command: shell.exec
      type: test
      params:
        shell: bash
        working_dir: "src"
        script: |
          ${PREPARE_SHELL}
          # Needed for generating temporary aws credentials.
          if [ -n "${FLE}" ]; then
            export AWS_ACCESS_KEY_ID="${fle_aws_key}"
            export AWS_SECRET_ACCESS_KEY="${fle_aws_secret}"
            export AWS_DEFAULT_REGION="${fle_aws_region}"
          fi
          .evergreen/run-tests-docker.sh

  "run AWS auth tests":
    - command: shell.exec
      type: test
      params:
        shell: bash
        working_dir: "src"
        script: |
          ${PREPARE_SHELL}
          .evergreen/run-tests-aws-auth.sh

  "run Kerberos unit tests":
    - command: shell.exec
      type: test
      params:
        shell: bash
        working_dir: "src"
        script: |
          ${PREPARE_SHELL}
          .evergreen/run-tests-kerberos-unit.sh

  "run Kerberos integration tests":
    - command: shell.exec
      type: test
      params:
        shell: bash
        working_dir: "src"
        script: |
          ${PREPARE_SHELL}
          .evergreen/run-tests-kerberos-integration.sh

  "run Atlas tests":
    - command: shell.exec
      type: test
      params:
        shell: bash
        working_dir: "src"
        script: |
          ${PREPARE_SHELL}
          AUTH=${AUTH} SSL=${SSL} TOPOLOGY=${TOPOLOGY} RVM_RUBY="${RVM_RUBY}" \
            ATLAS_REPLICA_SET_URI=${atlas_replica_set_uri} ATLAS_SHARDED_URI=${atlas_sharded_uri} \
            ATLAS_FREE_TIER_URI=${atlas_free_tier_uri} ATLAS_TLS11_URI=${atlas_tls11_uri} \
            ATLAS_TLS12_URI=${atlas_tls12_uri} ATLAS_SERVERLESS_URI=${atlas_serverless_uri} \
            ATLAS_SERVERLESS_LB_URI=${atlas_serverless_lb_uri} \
            ATLAS_X509_CERT_BASE64="${atlas_x509_cert_base64}" \
            ATLAS_X509_URI="${atlas_x509}" \
            ATLAS_X509_DEV_CERT_BASE64="${atlas_x509_dev_cert_base64}" \
            ATLAS_X509_DEV_URI="${atlas_x509_dev}" \
            .evergreen/run-tests-atlas.sh

  "run serverless tests":
    - command: shell.exec
      type: test
      params:
        shell: bash
        working_dir: "src"
        script: |
          ${PREPARE_SHELL}
          # Needed for generating temporary aws credentials.
          if [ -n "${FLE}" ]; then
            export AWS_ACCESS_KEY_ID="${fle_aws_key}"
            export AWS_SECRET_ACCESS_KEY="${fle_aws_secret}"
            export AWS_DEFAULT_REGION="${fle_aws_region}"
          fi
          CRYPT_SHARED_LIB_PATH="${CRYPT_SHARED_LIB_PATH}" SERVERLESS=1 SSL=ssl RVM_RUBY="${RVM_RUBY}" SINGLE_MONGOS="${SINGLE_MONGOS}" SERVERLESS_URI="${SERVERLESS_URI}" FLE="${FLE}" SERVERLESS_MONGODB_VERSION="${SERVERLESS_MONGODB_VERSION}" .evergreen/run-tests-serverless.sh

pre:
  - func: "fetch source"
  - func: "create expansions"

post:
  - func: "delete private environment"
  # Removed, causing timeouts
  # - func: "upload working dir"
  - func: "upload mo artifacts"
  # - func: "upload test results"
  - func: "upload test results to s3"

task_groups:
  - name: serverless_task_group
    setup_group_can_fail_task: true
    setup_group_timeout_secs: 1800 # 30 minutes
    setup_group:
      - func: "fetch source"
      - func: "create expansions"
      - command: ec2.assume_role
        params:
          role_arn: ${aws_test_secrets_role}
      - command: shell.exec
        params:
          shell: "bash"
          script: |
            ${PREPARE_SHELL}
            bash ${DRIVERS_TOOLS}/.evergreen/serverless/setup-secrets.sh
            bash ${DRIVERS_TOOLS}/.evergreen/serverless/create-instance.sh
      - command: expansions.update
        params:
          file: serverless-expansion.yml
    teardown_task:
      - command: shell.exec
        params:
          script: |
            ${PREPARE_SHELL}
            bash ${DRIVERS_TOOLS}/.evergreen/serverless/delete-instance.sh
      - func: "upload test results"
    tasks:
      - "test-serverless"

  - name: testatlas_full_task_group
    setup_group_can_fail_task: true
    setup_group_timeout_secs: 1800 # 30 minutes
    setup_group:
      - func: fetch source
      - func: create expansions
      - command: shell.exec
        params:
          shell: "bash"
          working_dir: "src"
          script: |
            ${PREPARE_SHELL}
            echo "Setting up Atlas cluster"
            DRIVERS_ATLAS_PUBLIC_API_KEY="${DRIVERS_ATLAS_PUBLIC_API_KEY}" \
            DRIVERS_ATLAS_PRIVATE_API_KEY="${DRIVERS_ATLAS_PRIVATE_API_KEY}" \
            DRIVERS_ATLAS_GROUP_ID="${DRIVERS_ATLAS_GROUP_ID}" \
            DRIVERS_ATLAS_LAMBDA_USER="${DRIVERS_ATLAS_LAMBDA_USER}" \
            DRIVERS_ATLAS_LAMBDA_PASSWORD="${DRIVERS_ATLAS_LAMBDA_PASSWORD}" \
            DRIVERS_ATLAS_BASE_URL="${DRIVERS_ATLAS_BASE_URL}" \
            LAMBDA_STACK_NAME="dbx-ruby-lambda" \
            MONGODB_VERSION="7.0" \
            task_id="${task_id}" \
            execution="${execution}" \
            $DRIVERS_TOOLS/.evergreen/atlas/setup-atlas-cluster.sh
            echo "MONGODB_URI=${MONGODB_URI}"
      - command: expansions.update
        params:
          file: src/atlas-expansion.yml
    teardown_group:
      - command: shell.exec
        params:
          shell: "bash"
          working_dir: "src"
          script: |
            ${PREPARE_SHELL}
            DRIVERS_ATLAS_PUBLIC_API_KEY="${DRIVERS_ATLAS_PUBLIC_API_KEY}" \
            DRIVERS_ATLAS_PRIVATE_API_KEY="${DRIVERS_ATLAS_PRIVATE_API_KEY}" \
            DRIVERS_ATLAS_GROUP_ID="${DRIVERS_ATLAS_GROUP_ID}" \
            DRIVERS_ATLAS_BASE_URL="${DRIVERS_ATLAS_BASE_URL}" \
            LAMBDA_STACK_NAME="dbx-ruby-lambda" \
            task_id="${task_id}" \
            execution="${execution}" \
            $DRIVERS_TOOLS/.evergreen/atlas/teardown-atlas-cluster.sh
    tasks:
      - test-full-atlas-task

  - name: test_aws_lambda_task_group
    setup_group_can_fail_task: true
    setup_group_timeout_secs: 1800 # 30 minutes
    setup_group:
      - func: fetch source
      - func: create expansions
      - command: shell.exec
        params:
          shell: "bash"
          working_dir: "src"
          script: |
            ${PREPARE_SHELL}
            echo "Setting up Atlas cluster"
            DRIVERS_ATLAS_PUBLIC_API_KEY="${DRIVERS_ATLAS_PUBLIC_API_KEY}" \
            DRIVERS_ATLAS_PRIVATE_API_KEY="${DRIVERS_ATLAS_PRIVATE_API_KEY}" \
            DRIVERS_ATLAS_GROUP_ID="${DRIVERS_ATLAS_GROUP_ID}" \
            DRIVERS_ATLAS_LAMBDA_USER="${DRIVERS_ATLAS_LAMBDA_USER}" \
            DRIVERS_ATLAS_LAMBDA_PASSWORD="${DRIVERS_ATLAS_LAMBDA_PASSWORD}" \
            DRIVERS_ATLAS_BASE_URL="${DRIVERS_ATLAS_BASE_URL}" \
            LAMBDA_STACK_NAME="dbx-ruby-lambda" \
            MONGODB_VERSION="7.0" \
            task_id="${task_id}" \
            execution="${execution}" \
            $DRIVERS_TOOLS/.evergreen/atlas/setup-atlas-cluster.sh
            echo "MONGODB_URI=${MONGODB_URI}"
      - command: expansions.update
        params:
          file: src/atlas-expansion.yml
    teardown_group:
      - command: shell.exec
        params:
          shell: "bash"
          working_dir: "src"
          script: |
            ${PREPARE_SHELL}
            DRIVERS_ATLAS_PUBLIC_API_KEY="${DRIVERS_ATLAS_PUBLIC_API_KEY}" \
            DRIVERS_ATLAS_PRIVATE_API_KEY="${DRIVERS_ATLAS_PRIVATE_API_KEY}" \
            DRIVERS_ATLAS_GROUP_ID="${DRIVERS_ATLAS_GROUP_ID}" \
            DRIVERS_ATLAS_BASE_URL="${DRIVERS_ATLAS_BASE_URL}" \
            LAMBDA_STACK_NAME="dbx-ruby-lambda" \
            task_id="${task_id}" \
            execution="${execution}" \
            $DRIVERS_TOOLS/.evergreen/atlas/teardown-atlas-cluster.sh
    tasks:
      - test-aws-lambda-deployed

  - name: testgcpkms_task_group
    setup_group_can_fail_task: true
    setup_group_timeout_secs: 1800 # 30 minutes
    setup_group:
      - func: fetch source
      - func: "create expansions"
      - command: shell.exec
        params:
          shell: "bash"
          working_dir: "src"
          script: |
            ${PREPARE_SHELL}
            echo '${testgcpkms_key_file}' > /tmp/testgcpkms_key_file.json
            export GCPKMS_KEYFILE=/tmp/testgcpkms_key_file.json
            export GCPKMS_DRIVERS_TOOLS=$DRIVERS_TOOLS
            export GCPKMS_SERVICEACCOUNT="${testgcpkms_service_account}"
            export GCPKMS_MACHINETYPE="e2-standard-4"
            .evergreen/csfle/gcpkms/create-and-setup-instance.sh
      # Load the GCPKMS_GCLOUD, GCPKMS_INSTANCE, GCPKMS_REGION, and GCPKMS_ZONE expansions.
      - command: expansions.update
        params:
          file: src/testgcpkms-expansions.yml
    teardown_group:
      - command: shell.exec
        params:
          shell: "bash"
          working_dir: "src"
          script: |
            ${PREPARE_SHELL}
            export GCPKMS_GCLOUD=${GCPKMS_GCLOUD}
            export GCPKMS_PROJECT=${GCPKMS_PROJECT}
            export GCPKMS_ZONE=${GCPKMS_ZONE}
            export GCPKMS_INSTANCENAME=${GCPKMS_INSTANCENAME}
            .evergreen/csfle/gcpkms/delete-instance.sh
    tasks:
      - testgcpkms-task

  - name: testazurekms_task_group
    setup_group_can_fail_task: true
    setup_group_timeout_secs: 1800 # 30 minutes
    setup_group:
      - func: fetch source
      - func: "create expansions"
      - command: shell.exec
        params:
          shell: "bash"
          working_dir: "src"
          script: |
            ${PREPARE_SHELL}
            export AZUREKMS_VMNAME_PREFIX=RUBY
            export AZUREKMS_CLIENTID="${testazurekms_clientid}"
            export AZUREKMS_TENANTID="${testazurekms_tenantid}"
            export AZUREKMS_SECRET="${testazurekms_secret}"
            export AZUREKMS_DRIVERS_TOOLS=$DRIVERS_TOOLS
            export AZUREKMS_RESOURCEGROUP="${testazurekms_resourcegroup}"
            echo '${testazurekms_publickey}' > /tmp/testazurekms_public_key_file
            export AZUREKMS_PUBLICKEYPATH="/tmp/testazurekms_public_key_file"
            echo '${testazurekms_privatekey}' > /tmp/testazurekms_private_key_file
            chmod 600 /tmp/testazurekms_private_key_file
            export AZUREKMS_PRIVATEKEYPATH="/tmp/testazurekms_private_key_file"
            export AZUREKMS_SCOPE="${testazurekms_scope}"
            .evergreen/csfle/azurekms/create-and-setup-vm.sh
      # Load the AZUREKMS_GCLOUD, AZUREKMS_INSTANCE, AZUREKMS_REGION, and AZUREKMS_ZONE expansions.
      - command: expansions.update
        params:
          file: src/testazurekms-expansions.yml
    teardown_group:
      - command: expansions.update
        params:
          file: src/testazurekms-expansions.yml
      - command: shell.exec
        params:
          shell: "bash"
          working_dir: "src"
          script: |
            ${PREPARE_SHELL}
            export AZUREKMS_RESOURCEGROUP="${testazurekms_resourcegroup}"
            .evergreen/csfle/azurekms/delete-vm.sh
    tasks:
      - testazurekms-task

tasks:
  - name: "test-atlas"
    commands:
      - func: "run Atlas tests"
  - name: "test-serverless"
    commands:
      - func: "export FLE credentials"
      - func: "run serverless tests"
  - name: "test-docker"
    commands:
      - func: "build and test docker image"
  - name: "test-mlaunch"
    commands:
      - func: "run tests"
  - name: "driver-bench"
    commands:
      - func: "run benchmarks"
  - name: "test-via-docker"
    commands:
      - func: "run tests via docker"
  - name: "test-kerberos-integration"
    commands:
      - func: "export Kerberos credentials"
      - func: "run Kerberos integration tests"
  - name: "test-kerberos"
    commands:
      - func: "run Kerberos unit tests"
  - name: "test-csot"
    commands:
      - func: "run CSOT tests"
  - name: "test-fle"
    commands:
      - func: "export FLE credentials"
      - func: "run tests"
  - name: "test-fle-via-docker"
    commands:
      - func: "export FLE credentials"
      - func: "run tests via docker"
  - name: "test-aws-auth"
    commands:
      - func: "export AWS auth credentials"
      - func: "run AWS auth tests"
  - name: "test-full-atlas-task"
    commands:
      - command: shell.exec
        type: test
        params:
          working_dir: "src"
          shell: "bash"
          script: |
            ${PREPARE_SHELL}
            MONGODB_URI="${MONGODB_URI}" .evergreen/run-tests-atlas-full.sh
  - name: "testgcpkms-task"
    commands:
      - command: shell.exec
        type: setup
        params:
          working_dir: "src"
          shell: "bash"
          script: |
            ${PREPARE_SHELL}
            echo "Copying files ... begin"
            export GCPKMS_GCLOUD=${GCPKMS_GCLOUD}
            export GCPKMS_PROJECT=${GCPKMS_PROJECT}
            export GCPKMS_ZONE=${GCPKMS_ZONE}
            export GCPKMS_INSTANCENAME=${GCPKMS_INSTANCENAME}
            tar czf /tmp/mongo-ruby-driver.tgz .
            GCPKMS_SRC=/tmp/mongo-ruby-driver.tgz GCPKMS_DST=$GCPKMS_INSTANCENAME: .evergreen/csfle/gcpkms/copy-file.sh
            echo "Copying files ... end"
begin" GCPKMS_CMD="tar xf mongo-ruby-driver.tgz" .evergreen/csfle/gcpkms/run-command.sh echo "Untarring file ... end" - command: shell.exec type: test params: working_dir: "src" shell: "bash" script: | ${PREPARE_SHELL} export GCPKMS_GCLOUD=${GCPKMS_GCLOUD} export GCPKMS_PROJECT=${GCPKMS_PROJECT} export GCPKMS_ZONE=${GCPKMS_ZONE} export GCPKMS_INSTANCENAME=${GCPKMS_INSTANCENAME} GCPKMS_CMD="TEST_FLE_GCP_AUTO=1 RVM_RUBY=ruby-3.1 FLE=helper TOPOLOGY=standalone MONGODB_VERSION=6.0 MONGO_RUBY_DRIVER_GCP_EMAIL="${fle_gcp_email}" MONGO_RUBY_DRIVER_GCP_PRIVATE_KEY='${fle_gcp_private_key}' MONGO_RUBY_DRIVER_GCP_PROJECT_ID='${fle_gcp_project_id}' MONGO_RUBY_DRIVER_GCP_LOCATION='${fle_gcp_location}' MONGO_RUBY_DRIVER_GCP_KEY_RING='${fle_gcp_key_ring}' MONGO_RUBY_DRIVER_GCP_KEY_NAME='${fle_gcp_key_name}' ./.evergreen/run-tests-gcp.sh" .evergreen/csfle/gcpkms/run-command.sh - name: "testazurekms-task" commands: - command: shell.exec type: setup params: working_dir: "src" shell: "bash" script: | ${PREPARE_SHELL} echo "Copying files ... begin" export AZUREKMS_RESOURCEGROUP=${testazurekms_resourcegroup} export AZUREKMS_VMNAME=${AZUREKMS_VMNAME} export AZUREKMS_PRIVATEKEYPATH="/tmp/testazurekms_private_key_file" tar czf /tmp/mongo-ruby-driver.tgz . AZUREKMS_SRC=/tmp/mongo-ruby-driver.tgz AZUREKMS_DST="~/" .evergreen/csfle/azurekms/copy-file.sh echo "Copying files ... end" echo "Untarring file ... begin" AZUREKMS_CMD="tar xf mongo-ruby-driver.tgz" .evergreen/csfle/azurekms/run-command.sh echo "Untarring file ... end" - command: shell.exec type: test params: working_dir: "src" shell: "bash" script: | ${PREPARE_SHELL} export AZUREKMS_RESOURCEGROUP=${testazurekms_resourcegroup} export AZUREKMS_VMNAME=${AZUREKMS_VMNAME} export AZUREKMS_PRIVATEKEYPATH="/tmp/testazurekms_private_key_file" AZUREKMS_CMD="TEST_FLE_AZURE_AUTO=1 RVM_RUBY=ruby-3.1 FLE=helper TOPOLOGY=standalone MONGODB_VERSION=6.0 MONGO_RUBY_DRIVER_AZURE_TENANT_ID="${MONGO_RUBY_DRIVER_AZURE_TENANT_ID}" MONGO_RUBY_DRIVER_AZURE_CLIENT_ID="${MONGO_RUBY_DRIVER_AZURE_CLIENT_ID}" MONGO_RUBY_DRIVER_AZURE_CLIENT_SECRET="${MONGO_RUBY_DRIVER_AZURE_CLIENT_SECRET}" MONGO_RUBY_DRIVER_AZURE_IDENTITY_PLATFORM_ENDPOINT="${MONGO_RUBY_DRIVER_AZURE_IDENTITY_PLATFORM_ENDPOINT}" MONGO_RUBY_DRIVER_AZURE_KEY_VAULT_ENDPOINT="${testazurekms_keyvaultendpoint}" MONGO_RUBY_DRIVER_AZURE_KEY_NAME="${testazurekms_keyname}" ./.evergreen/run-tests-azure.sh" .evergreen/csfle/azurekms/run-command.sh - name: "test-aws-lambda-deployed" commands: - command: ec2.assume_role params: role_arn: ${LAMBDA_AWS_ROLE_ARN} duration_seconds: 3600 - command: shell.exec type: test params: working_dir: "src" shell: "bash" script: | ${PREPARE_SHELL} export MONGODB_URI=${MONGODB_URI} export FUNCTION_NAME="ruby-driver-lambda" .evergreen/run-tests-deployed-lambda.sh env: TEST_LAMBDA_DIRECTORY: ${PROJECT_DIRECTORY}/spec/faas/ruby-sam-app AWS_REGION: us-east-1 PROJECT_DIRECTORY: ${PROJECT_DIRECTORY} DRIVERS_TOOLS: ${DRIVERS_TOOLS} DRIVERS_ATLAS_PUBLIC_API_KEY: ${DRIVERS_ATLAS_PUBLIC_API_KEY} DRIVERS_ATLAS_PRIVATE_API_KEY: ${DRIVERS_ATLAS_PRIVATE_API_KEY} DRIVERS_ATLAS_LAMBDA_USER: ${DRIVERS_ATLAS_LAMBDA_USER} DRIVERS_ATLAS_LAMBDA_PASSWORD: ${DRIVERS_ATLAS_LAMBDA_PASSWORD} DRIVERS_ATLAS_GROUP_ID: ${DRIVERS_ATLAS_GROUP_ID} DRIVERS_ATLAS_BASE_URL: ${DRIVERS_ATLAS_BASE_URL} AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID} AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY} AWS_SESSION_TOKEN: ${AWS_SESSION_TOKEN} LAMBDA_STACK_NAME: "dbx-ruby-lambda" CLUSTER_PREFIX: "dbx-ruby-lambda" RVM_RUBY: ruby-3.2 MONGODB_URI: 
            MONGODB_URI: ${MONGODB_URI}

axes:
  - id: preload
    display_name: Preload server
    values:
      - id: nopreload
        display_name: Do not preload
      - id: preload
        display_name: Preload
        variables:
          PRELOAD_ARG: -p

  - id: "mongodb-version"
    display_name: MongoDB Version
    values:
      - id: "latest"
        display_name: "latest"
        variables:
          MONGODB_VERSION: "latest"
          CRYPT_SHARED_VERSION: "latest"
      - id: "8.0"
        display_name: "8.0"
        variables:
          MONGODB_VERSION: "8.0"
      - id: "7.0"
        display_name: "7.0"
        variables:
          MONGODB_VERSION: "7.0"
      - id: "6.0"
        display_name: "6.0"
        variables:
          MONGODB_VERSION: "6.0"
      - id: "5.0"
        display_name: "5.0"
        variables:
          MONGODB_VERSION: "5.0"
          CRYPT_SHARED_VERSION: "6.0.5"
      - id: "4.4"
        display_name: "4.4"
        variables:
          MONGODB_VERSION: "4.4"
          CRYPT_SHARED_VERSION: "6.0.5"
      - id: "4.2"
        display_name: "4.2"
        variables:
          MONGODB_VERSION: "4.2"
          CRYPT_SHARED_VERSION: "6.0.5"
      - id: "4.0"
        display_name: "4.0"
        variables:
          MONGODB_VERSION: "4.0"
      - id: "3.6"
        display_name: "3.6"
        variables:
          MONGODB_VERSION: "3.6"

  - id: fcv
    display_name: FCV
    values:
      - id: '3.4'
        display_name: '3.4'
        variables:
          FCV: '3.4'

  - id: "topology"
    display_name: Topology
    values:
      - id: "standalone"
        display_name: Standalone
        variables:
          TOPOLOGY: standalone
      - id: "replica-set"
        display_name: Replica Set
        variables:
          TOPOLOGY: replica-set
      - id: "replica-set-single-node"
        display_name: Replica Set (Single Node)
        variables:
          TOPOLOGY: replica-set-single-node
      - id: "sharded-cluster"
        display_name: Sharded
        variables:
          TOPOLOGY: sharded-cluster
      - id: "load-balanced"
        display_name: Load Balanced
        variables:
          TOPOLOGY: load-balanced

  - id: "single-mongos"
    display_name: Single Mongos
    values:
      - id: "single-mongos"
        display_name: Single Mongos
        variables:
          SINGLE_MONGOS: 'true'

  - id: "auth-and-ssl"
    display_name: Authentication and SSL
    values:
      - id: "auth-and-ssl"
        display_name: Auth SSL
        variables:
          AUTH: "auth"
          SSL: "ssl"
      - id: "auth-and-nossl"
        display_name: Auth NoSSL
        variables:
          AUTH: "auth"
      - id: "noauth-and-ssl"
        display_name: NoAuth SSL
        variables:
          SSL: "ssl"
      - id: "noauth-and-nossl"
        display_name: NoAuth NoSSL
      - id: "x509"
        display_name: X.509
        variables:
          AUTH: "x509"
          SSL: "ssl"
      - id: kerberos
        display_name: Kerberos
        variables:
          AUTH: kerberos
      - id: aws-regular
        display_name: AWS Auth Regular Credentials
        variables:
          AUTH: aws-regular
      - id: aws-assume-role
        display_name: AWS Auth Assume Role
        variables:
          AUTH: aws-assume-role
      - id: aws-ec2
        display_name: AWS Auth EC2 Role
        variables:
          AUTH: aws-ec2
      - id: aws-ecs
        display_name: AWS Auth ECS Task
        variables:
          AUTH: aws-ecs
      - id: aws-web-identity
        display_name: AWS Auth Web Identity Task
        variables:
          AUTH: aws-web-identity

  - id: "ruby"
    display_name: Ruby Version
    values:
      - id: "ruby-3.4"
        display_name: ruby-3.4
        variables:
          RVM_RUBY: "ruby-3.4"
      - id: "ruby-3.3"
        display_name: ruby-3.3
        variables:
          RVM_RUBY: "ruby-3.3"
      - id: "ruby-3.2"
        display_name: ruby-3.2
        variables:
          RVM_RUBY: "ruby-3.2"
      - id: "ruby-3.1"
        display_name: ruby-3.1
        variables:
          RVM_RUBY: "ruby-3.1"
      - id: "ruby-3.0"
        display_name: ruby-3.0
        variables:
          RVM_RUBY: "ruby-3.0"
      - id: "ruby-2.7"
        display_name: ruby-2.7
        variables:
          RVM_RUBY: "ruby-2.7"
      - id: "ruby-head"
        display_name: ruby-head
        variables:
          RVM_RUBY: "ruby-head"
      - id: "jruby-9.3"
        display_name: jruby-9.3
        variables:
          RVM_RUBY: "jruby-9.3"
      - id: "jruby-9.4"
        display_name: jruby-9.4
        variables:
          RVM_RUBY: "jruby-9.4"

  - id: "os"
    display_name: OS
    values:
      - id: debian11
        display_name: "Debian 11"
        run_on: debian11-small
      - id: ubuntu2404
        display_name: "Ubuntu 24.04"
        run_on: ubuntu2404-small
      - id: ubuntu2404-arm
display_name: "Ubuntu 24.04 ARM64" run_on: ubuntu2404-arm64-small - id: ubuntu2204 display_name: "Ubuntu 22.04" run_on: ubuntu2204-small - id: ubuntu2204-arm display_name: "Ubuntu 22.04 ARM64" run_on: ubuntu2204-arm64-small - id: ubuntu2004 display_name: "Ubuntu 20.04" run_on: ubuntu2004-small - id: ubuntu1804 display_name: "Ubuntu 18.04" run_on: ubuntu1804-small - id: docker-distro display_name: Docker Distro values: - id: debian11 display_name: debian11 variables: DOCKER_DISTRO: debian11 - id: ubuntu2204 display_name: ubuntu2204 variables: DOCKER_DISTRO: ubuntu2204 - id: "compressor" display_name: Compressor values: - id: "zlib" display_name: Zlib variables: COMPRESSOR: "zlib" - id: "snappy" display_name: Snappy variables: COMPRESSOR: "snappy" - id: "zstd" display_name: Zstd variables: COMPRESSOR: "zstd" - id: retry-reads display_name: Retry Reads values: - id: no-retry-reads display_name: No Retry Reads variables: RETRY_READS: 'false' - id: retry-writes display_name: Retry Writes values: - id: no-retry-writes display_name: No Retry Writes variables: RETRY_WRITES: 'false' - id: lint display_name: Lint values: - id: on display_name: On variables: LINT: '1' - id: stress display_name: Stress values: - id: on display_name: On variables: STRESS: '1' - id: fork display_name: Fork values: - id: on display_name: On variables: FORK: '1' - id: solo display_name: Solo values: - id: on display_name: On variables: SOLO: '1' - id: "as" display_name: ActiveSupport values: - id: "as" display_name: AS variables: WITH_ACTIVE_SUPPORT: true - id: bson display_name: BSON values: - id: master display_name: master variables: BSON: master - id: 4-stable display_name: 4-stable variables: BSON: 4-stable - id: min display_name: min variables: BSON: min - id: storage-engine display_name: Storage Engine values: - id: mmapv1 display_name: MMAPv1 run_on: ubuntu1804-small variables: MMAPV1: 'true' - id: "fle" display_name: FLE values: - id: "helper" display_name: via LMC helper variables: FLE: helper - id: "path" display_name: via LMC path variables: FLE: path - id: ocsp-algorithm display_name: OCSP Algorithm values: - id: rsa display_name: RSA variables: OCSP_ALGORITHM: rsa - id: ecdsa display_name: ECDSA variables: OCSP_ALGORITHM: ecdsa - id: ocsp-status display_name: OCSP Status values: - id: valid display_name: Valid - id: revoked display_name: Revoked variables: OCSP_STATUS: revoked - id: unknown display_name: Unknown variables: OCSP_STATUS: unknown - id: ocsp-delegate display_name: OCSP Delegate values: - id: on display_name: on variables: OCSP_DELEGATE: 1 - id: ocsp-must-staple display_name: OCSP Must Staple values: - id: on display_name: on variables: OCSP_MUST_STAPLE: 1 - id: ocsp-verifier display_name: OCSP Verifier values: - id: true display_name: true variables: OCSP_VERIFIER: 1 - id: ocsp-connectivity display_name: OCSP Connectivity values: - id: pass display_name: pass variables: OCSP_CONNECTIVITY: pass - id: fail display_name: fail variables: OCSP_CONNECTIVITY: fail - id: extra-uri-options display_name: extra URI options values: - id: none display_name: None - id: "tlsInsecure=true" variables: EXTRA_URI_OPTIONS: "tlsInsecure=true" - id: "tlsAllowInvalidCertificates=true" variables: EXTRA_URI_OPTIONS: "tlsAllowInvalidCertificates=true" - id: api-version-required display_name: API version required values: - id: yes display_name: Yes variables: API_VERSION_REQUIRED: 1 - id: no display_name: No buildvariants: - matrix_name: DriverBench matrix_spec: ruby: "ruby-3.3" mongodb-version: "8.0" topology: standalone 
      os: ubuntu2204
    display_name: DriverBench
    tasks:
      - name: "driver-bench"

  - matrix_name: "auth/ssl"
    matrix_spec:
      auth-and-ssl: ["auth-and-ssl", "noauth-and-nossl"]
      ruby: "ruby-3.3"
      mongodb-version: ["latest", "8.0", "7.0"]
      topology: ["standalone", "replica-set", "sharded-cluster"]
      os: ubuntu2204
    display_name: ${auth-and-ssl} ${ruby} db-${mongodb-version} ${topology}
    tasks:
      - name: "test-mlaunch"

  - matrix_name: "mongo-recent"
    matrix_spec:
      ruby: ["ruby-3.3", "ruby-3.2", "jruby-9.4"]
      mongodb-version: ["latest", "8.0", "7.0"]
      topology: ["standalone", "replica-set", "sharded-cluster"]
      os: ubuntu2204
    display_name: "${mongodb-version} ${os} ${topology} ${auth-and-ssl} ${ruby}"
    tasks:
      - name: "test-mlaunch"

  - matrix_name: "mongo-8-arm"
    matrix_spec:
      ruby: "ruby-3.3"
      mongodb-version: [ '8.0' ]
      topology: ["standalone", "replica-set", "sharded-cluster"]
      os: ubuntu2404-arm
    display_name: "${mongodb-version} ${os} ${topology} ${auth-and-ssl} ${ruby}"
    tasks:
      - name: "test-mlaunch"

  - matrix_name: "mongo-5.x"
    matrix_spec:
      ruby: ["ruby-3.3", "ruby-3.2", "jruby-9.4"]
      mongodb-version: ['5.0']
      topology: ["standalone", "replica-set", "sharded-cluster"]
      os: ubuntu1804
    display_name: "${mongodb-version} ${topology} ${auth-and-ssl} ${ruby}"
    tasks:
      - name: "test-mlaunch"

  - matrix_name: "mongo-4.x"
    matrix_spec:
      ruby: ["ruby-3.0", "ruby-2.7"]
      mongodb-version: ['4.4', '4.2', '4.0']
      topology: ["standalone", "replica-set", "sharded-cluster"]
      os: ubuntu1804
    display_name: "${mongodb-version} ${topology} ${auth-and-ssl} ${ruby}"
    tasks:
      - name: "test-mlaunch"

  - matrix_name: "mongo-3.6"
    matrix_spec:
      ruby: "ruby-2.7"
      mongodb-version: ['3.6']
      topology: ["standalone", "replica-set", "sharded-cluster"]
      os: ubuntu1804
    display_name: "${mongodb-version} ${topology} ${auth-and-ssl} ${ruby}"
    tasks:
      - name: "test-mlaunch"

  - matrix_name: "single-lb"
    matrix_spec:
      ruby: "ruby-3.3"
      mongodb-version: "8.0"
      topology: load-balanced
      single-mongos: single-mongos
      os: ubuntu2204
    display_name: "${mongodb-version} ${topology} single-lb ${auth-and-ssl} ${ruby}"
    tasks:
      - name: "test-mlaunch"

  - matrix_name: "mongo-api-version"
    matrix_spec:
      ruby: "ruby-3.3"
      mongodb-version: '7.0'
      topology: standalone
      api-version-required: yes
      os: ubuntu2204
    display_name: "${mongodb-version} api-version-required ${topology} ${auth-and-ssl} ${ruby}"
    tasks:
      - name: "test-mlaunch"

  - matrix_name: "single-mongos"
    matrix_spec:
      ruby: "ruby-3.3"
      mongodb-version: "8.0"
      topology: "sharded-cluster"
      single-mongos: single-mongos
      os: ubuntu2204
    display_name: "${mongodb-version} ${topology} single-mongos ${auth-and-ssl} ${ruby}"
    tasks:
      - name: "test-mlaunch"

  - matrix_name: CSOT
    matrix_spec:
      ruby: "ruby-3.3"
      mongodb-version: "8.0"
      topology: replica-set-single-node
      os: ubuntu2204
    display_name: "CSOT - ${mongodb-version}"
    tasks:
      - name: test-csot

  - matrix_name: "no-retry-reads"
    matrix_spec:
      retry-reads: no-retry-reads
      ruby: "ruby-3.3"
      mongodb-version: "8.0"
      topology: ["standalone", "replica-set", "sharded-cluster"]
      os: ubuntu2204
    display_name: "${mongodb-version} ${topology} ${retry-reads} ${ruby}"
    tasks:
      - name: "test-mlaunch"

  - matrix_name: "no-retry-writes"
    matrix_spec:
      retry-writes: no-retry-writes
      ruby: "ruby-3.3"
      mongodb-version: "8.0"
      topology: [replica-set, sharded-cluster]
      os: ubuntu2204
    display_name: "${mongodb-version} ${topology} ${retry-writes} ${ruby}"
    tasks:
      - name: "test-mlaunch"

  - matrix_name: mmapv1
    matrix_spec:
      ruby: "ruby-2.7"
      mongodb-version: ['3.6', '4.0']
      topology: ["standalone", "replica-set", "sharded-cluster"]
      storage-engine: mmapv1
      os: ubuntu1804
"${mongodb-version} ${topology} mmapv1 ${ruby}" tasks: - name: "test-mlaunch" - matrix_name: "lint" matrix_spec: lint: on ruby: "ruby-3.3" mongodb-version: "8.0" topology: ["standalone", "replica-set", "sharded-cluster"] os: ubuntu2204 display_name: "${mongodb-version} ${topology} ${lint} ${ruby}" tasks: - name: "test-mlaunch" - matrix_name: "fork" matrix_spec: fork: on ruby: "ruby-3.3" mongodb-version: "8.0" topology: ["standalone", "replica-set", "sharded-cluster"] os: ubuntu2204 display_name: "${mongodb-version} ${topology} fork ${ruby}" tasks: - name: "test-mlaunch" - matrix_name: "solo" matrix_spec: solo: on ruby: ["ruby-3.3", "ruby-3.2", "ruby-3.1"] mongodb-version: "8.0" topology: ["standalone", "replica-set", "sharded-cluster"] os: ubuntu2204 display_name: "${mongodb-version} ${topology} solo ${ruby}" tasks: - name: "test-mlaunch" - matrix_name: "stress older" matrix_spec: stress: on ruby: "ruby-2.7" mongodb-version: ['4.2', '4.0', '3.6'] topology: replica-set os: ubuntu1804 display_name: "${mongodb-version} ${topology} stress ${ruby}" tasks: - name: "test-mlaunch" - matrix_name: "stress" matrix_spec: stress: on ruby: "ruby-3.3" mongodb-version: ["8.0", "7.0"] topology: replica-set os: ubuntu2204 display_name: "${mongodb-version} ${topology} stress ${ruby}" tasks: - name: "test-mlaunch" - matrix_name: "x509-tests" matrix_spec: auth-and-ssl: "x509" ruby: "ruby-3.3" mongodb-version: "8.0" topology: standalone os: ubuntu2204 display_name: "${mongodb-version} ${topology} ${auth-and-ssl} ${ruby}" tasks: - name: "test-mlaunch" - matrix_name: "jruby-auth" matrix_spec: auth-and-ssl: [ "auth-and-ssl", "noauth-and-nossl" ] ruby: jruby-9.4 mongodb-version: "8.0" topology: ["standalone", "replica-set", "sharded-cluster"] os: ubuntu2204 display_name: "${mongodb-version} ${topology} ${auth-and-ssl} ${ruby}" tasks: - name: "test-mlaunch" - matrix_name: zlib-"ruby-3.3" matrix_spec: auth-and-ssl: [ "auth-and-ssl", "noauth-and-nossl" ] ruby: "ruby-3.3" mongodb-version: "8.0" topology: "replica-set" compressor: 'zlib' os: ubuntu2204 display_name: "${compressor} ${mongodb-version} ${topology} ${auth-and-ssl} ${ruby}" tasks: - name: "test-mlaunch" - matrix_name: snappy-"ruby-3.3" matrix_spec: auth-and-ssl: [ "auth-and-ssl", "noauth-and-nossl" ] ruby: "ruby-3.3" mongodb-version: "8.0" topology: "replica-set" compressor: 'snappy' os: ubuntu2204 display_name: "${compressor} ${mongodb-version} ${topology} ${auth-and-ssl} ${ruby}" tasks: - name: "test-mlaunch" # the zstd-ruby gem does not support JRuby (explicitly). However, there is # apparently a zstd-jni gem for JRuby that we could investigate here; if # this test is ever supported to support jruby, the `sample_mri_rubies` # reference should be replaced with `sample_rubies`. 
  - matrix_name: zstd-auth-"ruby-3.3"
    matrix_spec:
      auth-and-ssl: [ "auth-and-ssl", "noauth-and-nossl" ]
      ruby: "ruby-3.3"
      mongodb-version: "8.0"
      topology: "replica-set"
      compressor: 'zstd'
      os: ubuntu2204
    display_name: "${compressor} ${mongodb-version} ${topology} ${auth-and-ssl} ${ruby}"
    tasks:
      - name: "test-mlaunch"

  - matrix_name: activesupport-"ruby-3.3"
    matrix_spec:
      ruby: "ruby-3.3"
      mongodb-version: "8.0"
      topology: replica-set
      as: as
      os: ubuntu2204
    display_name: "AS ${mongodb-version} ${topology} ${ruby}"
    tasks:
      - name: "test-mlaunch"

  - matrix_name: bson-"ruby-3.3"
    matrix_spec:
      ruby: "ruby-3.3"
      mongodb-version: "8.0"
      topology: replica-set
      bson: "*"
      os: ubuntu2204
    display_name: "bson-${bson} ${mongodb-version} ${topology} ${ruby}"
    tasks:
      - name: "test-mlaunch"

  - matrix_name: zlib-"ruby-2.7"
    matrix_spec:
      auth-and-ssl: [ "auth-and-ssl", "noauth-and-nossl" ]
      ruby: "ruby-2.7"
      mongodb-version: "6.0"
      topology: "replica-set"
      compressor: 'zlib'
      os: ubuntu2004
    display_name: "${compressor} ${mongodb-version} ${topology} ${auth-and-ssl} ${ruby}"
    tasks:
      - name: "test-mlaunch"

  - matrix_name: snappy-"ruby-2.7"
    matrix_spec:
      auth-and-ssl: [ "auth-and-ssl", "noauth-and-nossl" ]
      ruby: "ruby-2.7"
      mongodb-version: "6.0"
      topology: "replica-set"
      compressor: 'snappy'
      os: ubuntu2004
    display_name: "${compressor} ${mongodb-version} ${topology} ${auth-and-ssl} ${ruby}"
    tasks:
      - name: "test-mlaunch"

  # the zstd-ruby gem does not support JRuby (explicitly). However, there is
  # apparently a zstd-jni gem for JRuby that we could investigate here; if
  # this test is ever updated to support jruby, the `sample_mri_rubies`
  # reference should be replaced with `sample_rubies`.
  - matrix_name: zstd-auth-"ruby-2.7"
    matrix_spec:
      auth-and-ssl: [ "auth-and-ssl", "noauth-and-nossl" ]
      ruby: "ruby-2.7"
      mongodb-version: "6.0"
      topology: "replica-set"
      compressor: 'zstd'
      os: ubuntu2004
    display_name: "${compressor} ${mongodb-version} ${topology} ${auth-and-ssl} ${ruby}"
    tasks:
      - name: "test-mlaunch"

  - matrix_name: activesupport-"ruby-2.7"
    matrix_spec:
      ruby: "ruby-2.7"
      mongodb-version: "6.0"
      topology: replica-set
      as: as
      os: ubuntu2004
    display_name: "AS ${mongodb-version} ${topology} ${ruby}"
    tasks:
      - name: "test-mlaunch"

  - matrix_name: bson-"ruby-2.7"
    matrix_spec:
      ruby: "ruby-2.7"
      mongodb-version: "6.0"
      topology: replica-set
      bson: "*"
      os: ubuntu2004
    display_name: "bson-${bson} ${mongodb-version} ${topology} ${ruby}"
    tasks:
      - name: "test-mlaunch"

  - matrix_name: "fle above 4.4"
    matrix_spec:
      auth-and-ssl: "noauth-and-nossl"
      ruby: ["ruby-3.3", "ruby-3.2", "ruby-3.1"]
      topology: [replica-set, sharded-cluster]
      mongodb-version: [ '6.0', '7.0', '8.0' ]
      os: ubuntu2204
      fle: helper
    display_name: "FLE: ${mongodb-version} ${topology} ${ruby}"
    tasks:
      - name: "test-fle"

  # kerberos integration tests are broken (RUBY-3266)
  # - matrix_name: "kerberos-integration"
  #   matrix_spec:
  #     ruby: ["ruby-3.3", "ruby-2.7", "jruby-9.4"]
  #     os: rhel8
  #   display_name: "Kerberos integration ${os} ${ruby}"
  #   tasks:
  #     - name: "test-kerberos-integration"

  - matrix_name: "kerberos-unit"
    matrix_spec:
      ruby: "ruby-3.3"
      mongodb-version: "8.0"
      topology: standalone
      os: ubuntu2204
      auth-and-ssl: kerberos
    display_name: "Kerberos Tests"
    tasks:
      - name: "test-kerberos"

  # - matrix_name: "fle-latest"
  #   matrix_spec:
  #     auth-and-ssl: "noauth-and-nossl"
  #     ruby:
  #     topology: [replica-set, sharded-cluster]
  #     mongodb-version: [ 'latest' ]
  #     os: ubuntu2204
  #     fle: helper
  #   display_name: "FLE: ${mongodb-version} ${topology} ${ruby}"
  #   tasks:
  #     - name: "test-fle"

  - matrix_name: aws-auth-regular
    matrix_spec:
https://jira.mongodb.org/browse/RUBY-3311 # auth-and-ssl: [ aws-regular, aws-assume-role, aws-ec2, aws-ecs, aws-web-identity ] # auth-and-ssl: [ aws-regular, aws-assume-role, aws-ecs, aws-web-identity ] # https://jira.mongodb.org/browse/RUBY-3659 auth-and-ssl: [ aws-regular, aws-assume-role, aws-web-identity ] ruby: "ruby-3.3" topology: standalone mongodb-version: "8.0" os: ubuntu2204 display_name: "AWS ${auth-and-ssl} ${mongodb-version} ${ruby}" tasks: - name: "test-aws-auth" - matrix_name: ocsp-verifier matrix_spec: ocsp-verifier: true # No JRuby due to https://github.com/jruby/jruby-openssl/issues/210 ruby: ["ruby-3.3", "ruby-3.2", "ruby-3.1"] topology: standalone mongodb-version: "8.0" os: ubuntu2204 display_name: "OCSP verifier: ${mongodb-version} ${ruby}" tasks: - name: test-mlaunch - matrix_name: ocsp-must-staple matrix_spec: ocsp-algorithm: ecdsa ocsp-must-staple: on ocsp-delegate: on ruby: "ruby-3.3" topology: standalone mongodb-version: "8.0" os: ubuntu2204 auth-and-ssl: noauth-and-ssl display_name: "OCSP integration - must staple: ${mongodb-version} ${ruby}" tasks: - name: test-mlaunch - matrix_name: ocsp-unknown matrix_spec: ocsp-algorithm: rsa ocsp-status: unknown ruby: "ruby-3.3" topology: standalone mongodb-version: "8.0" os: ubuntu2204 auth-and-ssl: noauth-and-ssl display_name: "OCSP integration - unknown: ${mongodb-version} ${ruby}" tasks: - name: test-mlaunch - matrix_name: ocsp-connectivity matrix_spec: ocsp-algorithm: '*' ocsp-status: valid ocsp-delegate: '*' ocsp-connectivity: pass extra-uri-options: "none" ruby: "ruby-3.3" topology: standalone mongodb-version: "8.0" os: ubuntu2204 display_name: "OCSP connectivity: ${ocsp-algorithm} ${ocsp-status} ${ocsp-delegate} ${extra-uri-options} ${mongodb-version} ${ruby}" tasks: - name: test-mlaunch - matrix_name: ocsp-connectivity matrix_spec: ocsp-algorithm: '*' ocsp-status: unknown ocsp-delegate: '*' ocsp-connectivity: pass extra-uri-options: "none" ruby: "ruby-3.3" topology: standalone mongodb-version: "8.0" os: ubuntu2204 display_name: "OCSP connectivity: ${ocsp-algorithm} ${ocsp-status} ${ocsp-delegate} ${extra-uri-options} ${mongodb-version} ${ruby}" tasks: - name: test-mlaunch - matrix_name: ocsp-connectivity matrix_spec: ocsp-algorithm: '*' ocsp-status: revoked ocsp-delegate: '*' ocsp-connectivity: fail extra-uri-options: "none" ruby: "ruby-3.3" topology: standalone mongodb-version: "8.0" os: ubuntu2204 display_name: "OCSP connectivity: ${ocsp-algorithm} ${ocsp-status} ${ocsp-delegate} ${extra-uri-options} ${mongodb-version} ${ruby}" tasks: - name: test-mlaunch - matrix_name: ocsp-connectivity matrix_spec: ocsp-algorithm: '*' ocsp-status: valid ocsp-delegate: '*' ocsp-connectivity: pass extra-uri-options: "tlsInsecure=true" ruby: "ruby-3.3" topology: standalone mongodb-version: "8.0" os: ubuntu2204 display_name: "OCSP connectivity: ${ocsp-algorithm} ${ocsp-status} ${ocsp-delegate} ${extra-uri-options} ${mongodb-version} ${ruby}" tasks: - name: test-mlaunch - matrix_name: ocsp-connectivity matrix_spec: ocsp-algorithm: '*' ocsp-status: unknown ocsp-delegate: '*' ocsp-connectivity: pass extra-uri-options: "tlsInsecure=true" ruby: "ruby-3.3" topology: standalone mongodb-version: "8.0" os: ubuntu2204 display_name: "OCSP connectivity: ${ocsp-algorithm} ${ocsp-status} ${ocsp-delegate} ${extra-uri-options} ${mongodb-version} ${ruby}" tasks: - name: test-mlaunch - matrix_name: ocsp-connectivity matrix_spec: ocsp-algorithm: '*' ocsp-status: revoked ocsp-delegate: '*' ocsp-connectivity: pass extra-uri-options: "tlsInsecure=true" 
ruby: "ruby-3.3" topology: standalone mongodb-version: "8.0" os: ubuntu2204 display_name: "OCSP connectivity: ${ocsp-algorithm} ${ocsp-status} ${ocsp-delegate} ${extra-uri-options} ${mongodb-version} ${ruby}" tasks: - name: test-mlaunch - matrix_name: ocsp-connectivity matrix_spec: ocsp-algorithm: '*' ocsp-status: valid ocsp-delegate: '*' ocsp-connectivity: pass extra-uri-options: "tlsAllowInvalidCertificates=true" ruby: "ruby-3.3" topology: standalone mongodb-version: "8.0" os: ubuntu2204 display_name: "OCSP connectivity: ${ocsp-algorithm} ${ocsp-status} ${ocsp-delegate} ${extra-uri-options} ${mongodb-version} ${ruby}" tasks: - name: test-mlaunch - matrix_name: ocsp-connectivity matrix_spec: ocsp-algorithm: '*' ocsp-status: unknown ocsp-delegate: '*' ocsp-connectivity: pass extra-uri-options: "tlsAllowInvalidCertificates=true" ruby: "ruby-3.3" topology: standalone mongodb-version: "8.0" os: ubuntu2204 display_name: "OCSP connectivity: ${ocsp-algorithm} ${ocsp-status} ${ocsp-delegate} ${extra-uri-options} ${mongodb-version} ${ruby}" tasks: - name: test-mlaunch - matrix_name: ocsp-connectivity matrix_spec: ocsp-algorithm: '*' ocsp-status: revoked ocsp-delegate: '*' ocsp-connectivity: pass extra-uri-options: "tlsAllowInvalidCertificates=true" ruby: "ruby-3.3" topology: standalone mongodb-version: "8.0" os: ubuntu2204 display_name: "OCSP connectivity: ${ocsp-algorithm} ${ocsp-status} ${ocsp-delegate} ${extra-uri-options} ${mongodb-version} ${ruby}" tasks: - name: test-mlaunch - matrix_name: ocsp-connectivity-jruby matrix_spec: # ECDSA does not work on JRuby. # https://github.com/jruby/jruby-openssl/issues/213 ocsp-algorithm: rsa # We do not perform OCSP verification on JRuby, therefore the revoked # configuration fails (connection succeeds due to lack of verification # when it is expected to fail). 
# https://github.com/jruby/jruby-openssl/issues/210 ocsp-status: [valid, unknown] ocsp-delegate: '*' ocsp-connectivity: pass ruby: jruby-9.4 topology: standalone mongodb-version: "8.0" os: ubuntu2204 display_name: "OCSP connectivity: ${ocsp-algorithm} ${ocsp-status} ${ocsp-delegate} ${mongodb-version} ${ruby}" tasks: - name: test-mlaunch # https://jira.mongodb.org/browse/RUBY-3540 #- matrix_name: testgcpkms-variant # matrix_spec: # ruby: "ruby-3.3" # fle: helper # topology: standalone # os: ubuntu2204 # mongodb-version: "8.0" # display_name: "GCP KMS" # tasks: # - name: testgcpkms_task_group # batchtime: 20160 # Use a batchtime of 14 days as suggested by the CSFLE test README # https://jira.mongodb.org/browse/RUBY-3672 #- matrix_name: testazurekms-variant # matrix_spec: # ruby: ruby-3.0 # fle: helper # topology: standalone # os: debian11 # could eventually look at updating this to rhel80 # mongodb-version: 6.0 # display_name: "AZURE KMS" # tasks: # - name: testazurekms_task_group # batchtime: 20160 # Use a batchtime of 14 days as suggested by the CSFLE test README - matrix_name: atlas-full matrix_spec: ruby: "ruby-3.3" os: ubuntu2204 display_name: "Atlas (Full)" tasks: - name: testatlas_full_task_group - matrix_name: "atlas" matrix_spec: ruby: ["ruby-3.3", "ruby-3.2", "ruby-3.1"] os: ubuntu2204 display_name: "Atlas connectivity tests ${ruby}" tasks: - name: test-atlas - matrix_name: "aws-lambda" matrix_spec: ruby: 'ruby-3.2' os: ubuntu2204 display_name: "AWS Lambda" tasks: - name: test_aws_lambda_task_group mongo-ruby-driver-2.21.3/.evergreen/config/000077500000000000000000000000001505113246500204755ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/.evergreen/config/axes.yml.erb000066400000000000000000000226271505113246500227400ustar00rootroot00000000000000axes: - id: preload display_name: Preload server values: - id: nopreload display_name: Do not preload - id: preload display_name: Preload variables: PRELOAD_ARG: -p - id: "mongodb-version" display_name: MongoDB Version values: - id: "latest" display_name: "latest" variables: MONGODB_VERSION: "latest" CRYPT_SHARED_VERSION: "latest" - id: "8.0" display_name: "8.0" variables: MONGODB_VERSION: "8.0" - id: "7.0" display_name: "7.0" variables: MONGODB_VERSION: "7.0" - id: "6.0" display_name: "6.0" variables: MONGODB_VERSION: "6.0" - id: "5.0" display_name: "5.0" variables: MONGODB_VERSION: "5.0" CRYPT_SHARED_VERSION: "6.0.5" - id: "4.4" display_name: "4.4" variables: MONGODB_VERSION: "4.4" CRYPT_SHARED_VERSION: "6.0.5" - id: "4.2" display_name: "4.2" variables: MONGODB_VERSION: "4.2" CRYPT_SHARED_VERSION: "6.0.5" - id: "4.0" display_name: "4.0" variables: MONGODB_VERSION: "4.0" - id: "3.6" display_name: "3.6" variables: MONGODB_VERSION: "3.6" - id: fcv display_name: FCV values: - id: '3.4' display_name: '3.4' variables: FCV: '3.4' - id: "topology" display_name: Topology values: - id: "standalone" display_name: Standalone variables: TOPOLOGY: standalone - id: "replica-set" display_name: Replica Set variables: TOPOLOGY: replica-set - id: "replica-set-single-node" display_name: Replica Set (Single Node) variables: TOPOLOGY: replica-set-single-node - id: "sharded-cluster" display_name: Sharded variables: TOPOLOGY: sharded-cluster - id: "load-balanced" display_name: Load Balanced variables: TOPOLOGY: load-balanced - id: "single-mongos" display_name: Single Mongos values: - id: "single-mongos" display_name: Single Mongos variables: SINGLE_MONGOS: 'true' - id: "auth-and-ssl" display_name: Authentication and SSL values: - id: "auth-and-ssl" 
display_name: Auth SSL variables: AUTH: "auth" SSL: "ssl" - id: "auth-and-nossl" display_name: Auth NoSSL variables: AUTH: "auth" - id: "noauth-and-ssl" display_name: NoAuth SSL variables: SSL: "ssl" - id: "noauth-and-nossl" display_name: NoAuth NoSSL - id: "x509" display_name: X.509 variables: AUTH: "x509" SSL: "ssl" - id: kerberos display_name: Kerberos variables: AUTH: kerberos - id: aws-regular display_name: AWS Auth Regular Credentials variables: AUTH: aws-regular - id: aws-assume-role display_name: AWS Auth Assume Role variables: AUTH: aws-assume-role - id: aws-ec2 display_name: AWS Auth EC2 Role variables: AUTH: aws-ec2 - id: aws-ecs display_name: AWS Auth ECS Task variables: AUTH: aws-ecs - id: aws-web-identity display_name: AWS Auth Web Identity Task variables: AUTH: aws-web-identity - id: "ruby" display_name: Ruby Version values: - id: "ruby-3.4" display_name: ruby-3.4 variables: RVM_RUBY: "ruby-3.4" - id: "ruby-3.3" display_name: ruby-3.3 variables: RVM_RUBY: "ruby-3.3" - id: "ruby-3.2" display_name: ruby-3.2 variables: RVM_RUBY: "ruby-3.2" - id: "ruby-3.1" display_name: ruby-3.1 variables: RVM_RUBY: "ruby-3.1" - id: "ruby-3.0" display_name: ruby-3.0 variables: RVM_RUBY: "ruby-3.0" - id: "ruby-2.7" display_name: ruby-2.7 variables: RVM_RUBY: "ruby-2.7" - id: "ruby-head" display_name: ruby-head variables: RVM_RUBY: "ruby-head" - id: "jruby-9.3" display_name: jruby-9.3 variables: RVM_RUBY: "jruby-9.3" - id: "jruby-9.4" display_name: jruby-9.4 variables: RVM_RUBY: "jruby-9.4" - id: "os" display_name: OS values: - id: debian11 display_name: "Debian 11" run_on: debian11-small - id: ubuntu2404 display_name: "Ubuntu 24.04" run_on: ubuntu2404-small - id: ubuntu2404-arm display_name: "Ubuntu 24.04 ARM64" run_on: ubuntu2404-arm64-small - id: ubuntu2204 display_name: "Ubuntu 22.04" run_on: ubuntu2204-small - id: ubuntu2204-arm display_name: "Ubuntu 22.04 ARM64" run_on: ubuntu2204-arm64-small - id: ubuntu2004 display_name: "Ubuntu 20.04" run_on: ubuntu2004-small - id: ubuntu1804 display_name: "Ubuntu 18.04" run_on: ubuntu1804-small - id: docker-distro display_name: Docker Distro values: <% %w(debian11 ubuntu2204).each do |distro| %> - id: <%= distro %> display_name: <%= distro %> variables: DOCKER_DISTRO: <%= distro %> <% end %> - id: "compressor" display_name: Compressor values: - id: "zlib" display_name: Zlib variables: COMPRESSOR: "zlib" - id: "snappy" display_name: Snappy variables: COMPRESSOR: "snappy" - id: "zstd" display_name: Zstd variables: COMPRESSOR: "zstd" - id: retry-reads display_name: Retry Reads values: - id: no-retry-reads display_name: No Retry Reads variables: RETRY_READS: 'false' - id: retry-writes display_name: Retry Writes values: - id: no-retry-writes display_name: No Retry Writes variables: RETRY_WRITES: 'false' - id: lint display_name: Lint values: - id: on display_name: On variables: LINT: '1' - id: stress display_name: Stress values: - id: on display_name: On variables: STRESS: '1' - id: fork display_name: Fork values: - id: on display_name: On variables: FORK: '1' - id: solo display_name: Solo values: - id: on display_name: On variables: SOLO: '1' - id: "as" display_name: ActiveSupport values: - id: "as" display_name: AS variables: WITH_ACTIVE_SUPPORT: true - id: bson display_name: BSON values: - id: master display_name: master variables: BSON: master - id: 4-stable display_name: 4-stable variables: BSON: 4-stable - id: min display_name: min variables: BSON: min - id: storage-engine
display_name: Storage Engine values: - id: mmapv1 display_name: MMAPv1 run_on: ubuntu1804-small variables: MMAPV1: 'true' - id: "fle" display_name: FLE values: - id: "helper" display_name: via LMC helper variables: FLE: helper - id: "path" display_name: via LMC path variables: FLE: path - id: ocsp-algorithm display_name: OCSP Algorithm values: - id: rsa display_name: RSA variables: OCSP_ALGORITHM: rsa - id: ecdsa display_name: ECDSA variables: OCSP_ALGORITHM: ecdsa - id: ocsp-status display_name: OCSP Status values: - id: valid display_name: Valid - id: revoked display_name: Revoked variables: OCSP_STATUS: revoked - id: unknown display_name: Unknown variables: OCSP_STATUS: unknown - id: ocsp-delegate display_name: OCSP Delegate values: - id: on display_name: on variables: OCSP_DELEGATE: 1 - id: ocsp-must-staple display_name: OCSP Must Staple values: - id: on display_name: on variables: OCSP_MUST_STAPLE: 1 - id: ocsp-verifier display_name: OCSP Verifier values: - id: true display_name: true variables: OCSP_VERIFIER: 1 - id: ocsp-connectivity display_name: OCSP Connectivity values: <% %w(pass fail).each do |value| %> - id: <%= value %> display_name: <%= value %> variables: OCSP_CONNECTIVITY: <%= value %> <% end %> - id: extra-uri-options display_name: extra URI options values: - id: none display_name: None <% %w(tlsInsecure=true tlsAllowInvalidCertificates=true).each do |value| %> - id: "<%= value %>" variables: EXTRA_URI_OPTIONS: "<%= value %>" <% end %> - id: api-version-required display_name: API version required values: - id: yes display_name: Yes variables: API_VERSION_REQUIRED: 1 - id: no display_name: No mongo-ruby-driver-2.21.3/.evergreen/config/common.yml.erb000066400000000000000000000776321505113246500232720ustar00rootroot00000000000000# When a task that used to pass starts to fail, go through all versions that # may have been skipped to detect when the task started failing. stepback: true # Fail builds when pre tasks fail. pre_error_fails_task: true # Mark a failure as a system/bootstrap failure (purple box) rather than a task # failure by default. # Actual testing tasks are marked with `type: test` command_type: system # Protect ourselves against a rogue test case, or curl gone wild, that runs forever. exec_timeout_secs: 5400 # What to do when evergreen hits the timeout (`post:` tasks are run automatically) timeout: - command: shell.exec params: script: | true functions: "fetch source": # Executes git clone and applies the submitted patch, if any - command: git.get_project params: directory: "src" - command: shell.exec params: working_dir: "src" script: | set -ex git submodule update --init --recursive "create expansions": # Make an evergreen expansion file with dynamic values - command: shell.exec params: working_dir: "src" script: | # Get the current unique version of this checkout if [ "${is_patch}" = "true" ]; then CURRENT_VERSION=$(git describe)-patch-${version_id} else CURRENT_VERSION=latest fi export DRIVERS_TOOLS="$(pwd)/.mod/drivers-evergreen-tools" # Python has cygwin path problems on Windows.
Detect prospective mongo-orchestration home directory if [ "Windows_NT" = "$OS" ]; then # Magic variable in cygwin export DRIVERS_TOOLS=$(cygpath -m $DRIVERS_TOOLS) fi export MONGO_ORCHESTRATION_HOME="$DRIVERS_TOOLS/.evergreen/orchestration" export MONGODB_BINARIES="$DRIVERS_TOOLS/mongodb/bin" export UPLOAD_BUCKET="${project}" export PROJECT_DIRECTORY="$(pwd)" cat <<EOT > expansion.yml CURRENT_VERSION: "$CURRENT_VERSION" DRIVERS_TOOLS: "$DRIVERS_TOOLS" MONGO_ORCHESTRATION_HOME: "$MONGO_ORCHESTRATION_HOME" MONGODB_BINARIES: "$MONGODB_BINARIES" UPLOAD_BUCKET: "$UPLOAD_BUCKET" PROJECT_DIRECTORY: "$PROJECT_DIRECTORY" PREPARE_SHELL: | set -o errexit #set -o xtrace export DRIVERS_TOOLS="$DRIVERS_TOOLS" export MONGO_ORCHESTRATION_HOME="$MONGO_ORCHESTRATION_HOME" export MONGODB_BINARIES="$MONGODB_BINARIES" export UPLOAD_BUCKET="$UPLOAD_BUCKET" export PROJECT_DIRECTORY="$PROJECT_DIRECTORY" # TMPDIR cannot be too long, see # https://github.com/broadinstitute/cromwell/issues/3647. # Why is it even set at all? #export TMPDIR="$MONGO_ORCHESTRATION_HOME/db" export PATH="$MONGODB_BINARIES:$PATH" export PROJECT="${project}" export AUTH=${AUTH} export SSL=${SSL} export TOPOLOGY=${TOPOLOGY} export COMPRESSOR=${COMPRESSOR} export RVM_RUBY="${RVM_RUBY}" export MONGODB_VERSION=${MONGODB_VERSION} export CRYPT_SHARED_VERSION=${CRYPT_SHARED_VERSION} export FCV=${FCV} export MONGO_RUBY_DRIVER_LINT=${LINT} export RETRY_READS=${RETRY_READS} export RETRY_WRITES=${RETRY_WRITES} export WITH_ACTIVE_SUPPORT="${WITH_ACTIVE_SUPPORT}" export SINGLE_MONGOS="${SINGLE_MONGOS}" export BSON="${BSON}" export MMAPV1="${MMAPV1}" export FLE="${FLE}" export FORK="${FORK}" export SOLO="${SOLO}" export EXTRA_URI_OPTIONS="${EXTRA_URI_OPTIONS}" export API_VERSION_REQUIRED="${API_VERSION_REQUIRED}" export DOCKER_DISTRO="${DOCKER_DISTRO}" export STRESS="${STRESS}" export OCSP_ALGORITHM="${OCSP_ALGORITHM}" export OCSP_STATUS="${OCSP_STATUS}" export OCSP_DELEGATE="${OCSP_DELEGATE}" export OCSP_MUST_STAPLE="${OCSP_MUST_STAPLE}" export OCSP_CONNECTIVITY="${OCSP_CONNECTIVITY}" export OCSP_VERIFIER="${OCSP_VERIFIER}" export ATLAS_REPLICA_SET_URI="${atlas_replica_set_uri}" export ATLAS_SHARDED_URI="${atlas_sharded_uri}" export ATLAS_FREE_TIER_URI="${atlas_free_tier_uri}" export ATLAS_TLS11_URI="${atlas_tls11_uri}" export ATLAS_TLS12_URI="${atlas_tls12_uri}" export ATLAS_SERVERLESS_URI="${atlas_serverless_uri}" export ATLAS_SERVERLESS_LB_URI="${atlas_serverless_lb_uri}" export ATLAS_X509_CERT_BASE64="${atlas_x509_cert_base64}" export ATLAS_X509_URI="${atlas_x509}" export ATLAS_X509_DEV_CERT_BASE64="${atlas_x509_dev_cert_base64}" export ATLAS_X509_DEV_URI="${atlas_x509_dev}" export RVM_RUBY="${RVM_RUBY}" export SERVERLESS_DRIVERS_GROUP="${SERVERLESS_DRIVERS_GROUP}" export SERVERLESS_API_PUBLIC_KEY="${SERVERLESS_API_PUBLIC_KEY}" export SERVERLESS_API_PRIVATE_KEY="${SERVERLESS_API_PRIVATE_KEY}" export SERVERLESS_ATLAS_USER="${SERVERLESS_ATLAS_USER}" export SERVERLESS_ATLAS_PASSWORD="${SERVERLESS_ATLAS_PASSWORD}" EOT # See what we've done cat expansion.yml # Load the expansion file to make an evergreen variable with the current # unique version - command: expansions.update params: file: src/expansion.yml "export AWS auth credentials": - command: shell.exec type: test params: silent: true working_dir: "src" script: | cat <<EOT > .env.private IAM_AUTH_ASSUME_AWS_ACCOUNT="${iam_auth_assume_aws_account}" IAM_AUTH_ASSUME_AWS_SECRET_ACCESS_KEY="${iam_auth_assume_aws_secret_access_key}" IAM_AUTH_ASSUME_ROLE_NAME="${iam_auth_assume_role_name}"
IAM_AUTH_EC2_INSTANCE_ACCOUNT="${iam_auth_ec2_instance_account}" IAM_AUTH_EC2_INSTANCE_PROFILE="${iam_auth_ec2_instance_profile}" IAM_AUTH_EC2_INSTANCE_SECRET_ACCESS_KEY="${iam_auth_ec2_instance_secret_access_key}" IAM_AUTH_ECS_ACCOUNT="${iam_auth_ecs_account}" IAM_AUTH_ECS_ACCOUNT_ARN="${iam_auth_ecs_account_arn}" IAM_AUTH_ECS_CLUSTER="${iam_auth_ecs_cluster}" IAM_AUTH_ECS_SECRET_ACCESS_KEY="${iam_auth_ecs_secret_access_key}" IAM_AUTH_ECS_SECURITY_GROUP="${iam_auth_ecs_security_group}" IAM_AUTH_ECS_SUBNET_A="${iam_auth_ecs_subnet_a}" IAM_AUTH_ECS_SUBNET_B="${iam_auth_ecs_subnet_b}" IAM_AUTH_ECS_TASK_DEFINITION="${iam_auth_ecs_task_definition_ubuntu2004}" IAM_WEB_IDENTITY_ISSUER="${iam_web_identity_issuer}" IAM_WEB_IDENTITY_JWKS_URI="${iam_web_identity_jwks_uri}" IAM_WEB_IDENTITY_RSA_KEY="${iam_web_identity_rsa_key}" IAM_WEB_IDENTITY_TOKEN_FILE="${iam_web_identity_token_file}" IAM_AUTH_ASSUME_WEB_ROLE_NAME="${iam_auth_assume_web_role_name}" EOT "run CSOT tests": - command: shell.exec type: test params: shell: bash working_dir: "src" script: | ${PREPARE_SHELL} # Needed for generating temporary aws credentials. if [ -n "${FLE}" ]; then export AWS_ACCESS_KEY_ID="${fle_aws_key}" export AWS_SECRET_ACCESS_KEY="${fle_aws_secret}" export AWS_DEFAULT_REGION="${fle_aws_region}" fi export CSOT_SPEC_TESTS=1 TEST_CMD="bundle exec rspec spec/spec_tests/client_side_operations_timeout_spec.rb" \ .evergreen/run-tests.sh "export FLE credentials": - command: shell.exec type: test params: silent: true working_dir: "src" script: | cat <<EOT > .env.private MONGO_RUBY_DRIVER_AWS_KEY="${fle_aws_key}" MONGO_RUBY_DRIVER_AWS_SECRET="${fle_aws_secret}" MONGO_RUBY_DRIVER_AWS_REGION="${fle_aws_region}" MONGO_RUBY_DRIVER_AWS_ARN="${fle_aws_arn}" MONGO_RUBY_DRIVER_AZURE_TENANT_ID="${fle_azure_tenant_id}" MONGO_RUBY_DRIVER_AZURE_CLIENT_ID="${fle_azure_client_id}" MONGO_RUBY_DRIVER_AZURE_CLIENT_SECRET="${fle_azure_client_secret}" MONGO_RUBY_DRIVER_AZURE_IDENTITY_PLATFORM_ENDPOINT="${fle_azure_identity_platform_endpoint}" MONGO_RUBY_DRIVER_AZURE_KEY_VAULT_ENDPOINT="${fle_azure_key_vault_endpoint}" MONGO_RUBY_DRIVER_AZURE_KEY_NAME="${fle_azure_key_name}" MONGO_RUBY_DRIVER_GCP_EMAIL="${fle_gcp_email}" MONGO_RUBY_DRIVER_GCP_PRIVATE_KEY="${fle_gcp_private_key}" MONGO_RUBY_DRIVER_GCP_PROJECT_ID="${fle_gcp_project_id}" MONGO_RUBY_DRIVER_GCP_LOCATION="${fle_gcp_location}" MONGO_RUBY_DRIVER_GCP_KEY_RING="${fle_gcp_key_ring}" MONGO_RUBY_DRIVER_GCP_KEY_NAME="${fle_gcp_key_name}" MONGO_RUBY_DRIVER_MONGOCRYPTD_PORT="${fle_mongocryptd_port}" EOT "export Kerberos credentials": - command: shell.exec type: test params: silent: true working_dir: "src" script: | cat <<EOT > .env.private SASL_HOST=${sasl_host} SASL_PORT=${sasl_port} SASL_USER=${sasl_user} SASL_PASS=${sasl_pass} SASL_DB=${sasl_db} PRINCIPAL=${principal} KERBEROS_DB=${kerberos_db} KEYTAB_BASE64=${keytab_base64} EOT "exec script": - command: shell.exec type: test params: working_dir: "src" script: | ${PREPARE_SHELL} sh ${PROJECT_DIRECTORY}/${file} "upload mo artifacts": - command: shell.exec params: script: | ${PREPARE_SHELL} find $MONGO_ORCHESTRATION_HOME -name \*.log\* | xargs tar czf mongodb-logs.tar.gz - command: s3.put params: aws_key: ${aws_key} aws_secret: ${aws_secret} local_file: mongodb-logs.tar.gz remote_file: ${UPLOAD_BUCKET}/${build_variant}/${revision}/${version_id}/${build_id}/logs/${task_id}-${execution}-mongodb-logs.tar.gz bucket: mciuploads permissions: public-read content_type: ${content_type|application/x-gzip} display_name: "mongodb-logs.tar.gz" "upload working dir": -
command: archive.targz_pack params: target: "working-dir.tar.gz" source_dir: ${PROJECT_DIRECTORY}/ include: - "./**" - command: s3.put params: aws_key: ${aws_key} aws_secret: ${aws_secret} local_file: working-dir.tar.gz remote_file: ${UPLOAD_BUCKET}/${build_variant}/${revision}/${version_id}/${build_id}/artifacts/${task_id}-${execution}-working-dir.tar.gz bucket: mciuploads permissions: public-read content_type: ${content_type|application/x-gzip} display_name: "working-dir.tar.gz" - command: archive.targz_pack params: target: "drivers-dir.tar.gz" source_dir: ${DRIVERS_TOOLS} include: - "./**" - command: s3.put params: aws_key: ${aws_key} aws_secret: ${aws_secret} local_file: drivers-dir.tar.gz remote_file: ${UPLOAD_BUCKET}/${build_variant}/${revision}/${version_id}/${build_id}/artifacts/${task_id}-${execution}-drivers-dir.tar.gz bucket: mciuploads permissions: public-read content_type: ${content_type|application/x-gzip} display_name: "drivers-dir.tar.gz" "upload test results to s3": - command: s3.put params: aws_key: ${aws_key} aws_secret: ${aws_secret} # src is the relative path to repo checkout, # This is specified in this yaml file earlier. local_file: ./src/tmp/rspec.json display_name: rspec.json remote_file: ${UPLOAD_BUCKET}/${version_id}/${build_id}/artifacts/${build_variant}/rspec.json content_type: application/json permissions: public-read bucket: mciuploads # AWS does not appear to support on-the-fly gzip encoding; compress # the results manually and upload a compressed file. # Typical size reduction: 50 MB -> 800 KB # Keep the original rspec.json around (-k) so it can also be xz-compressed below. - command: shell.exec params: script: | gzip -k src/tmp/rspec.json - command: s3.put params: aws_key: ${aws_key} aws_secret: ${aws_secret} # src is the relative path to repo checkout, # This is specified in this yaml file earlier. local_file: ./src/tmp/rspec.json.gz display_name: rspec.json.gz remote_file: ${UPLOAD_BUCKET}/${version_id}/${build_id}/artifacts/${build_variant}/rspec.json.gz content_type: application/gzip permissions: public-read bucket: mciuploads - command: shell.exec params: script: | xz -9 -k src/tmp/rspec.json - command: s3.put params: aws_key: ${aws_key} aws_secret: ${aws_secret} # src is the relative path to repo checkout, # This is specified in this yaml file earlier. local_file: ./src/tmp/rspec.json.xz display_name: rspec.json.xz remote_file: ${UPLOAD_BUCKET}/${version_id}/${build_id}/artifacts/${build_variant}/rspec.json.xz content_type: application/x-xz permissions: public-read bucket: mciuploads "upload test results": - command: attach.xunit_results params: file: ./src/rspec.xml "delete private environment": - command: shell.exec type: test params: silent: true working_dir: "src" script: | rm -f .env.private "build and test docker image": - command: shell.exec type: test params: shell: bash working_dir: "src" script: | ${PREPARE_SHELL} set -x .evergreen/test-on-docker -d ${os} MONGODB_VERSION=${mongodb-version} TOPOLOGY=${topology} RVM_RUBY=${ruby} -s .evergreen/run-tests.sh TEST_CMD=true ${PRELOAD_ARG} "run benchmarks": - command: shell.exec type: test params: shell: bash working_dir: "src" script: | ${PREPARE_SHELL} TEST_CMD="bundle exec rake driver_bench" PERFORMANCE_RESULTS_FILE="$PROJECT_DIRECTORY/perf.json" .evergreen/run-tests.sh - command: perf.send params: file: "${PROJECT_DIRECTORY}/perf.json" "run tests": - command: shell.exec type: test params: shell: bash working_dir: "src" script: | ${PREPARE_SHELL} # Needed for generating temporary aws credentials.
if [ -n "${FLE}" ]; then export AWS_ACCESS_KEY_ID="${fle_aws_key}" export AWS_SECRET_ACCESS_KEY="${fle_aws_secret}" export AWS_DEFAULT_REGION="${fle_aws_region}" fi .evergreen/run-tests.sh "run tests via docker": - command: shell.exec type: test params: shell: bash working_dir: "src" script: | ${PREPARE_SHELL} # Needed for generating temporary aws credentials. if [ -n "${FLE}" ]; then export AWS_ACCESS_KEY_ID="${fle_aws_key}" export AWS_SECRET_ACCESS_KEY="${fle_aws_secret}" export AWS_DEFAULT_REGION="${fle_aws_region}" fi .evergreen/run-tests-docker.sh "run AWS auth tests": - command: shell.exec type: test params: shell: bash working_dir: "src" script: | ${PREPARE_SHELL} .evergreen/run-tests-aws-auth.sh "run Kerberos unit tests": - command: shell.exec type: test params: shell: bash working_dir: "src" script: | ${PREPARE_SHELL} .evergreen/run-tests-kerberos-unit.sh "run Kerberos integration tests": - command: shell.exec type: test params: shell: bash working_dir: "src" script: | ${PREPARE_SHELL} .evergreen/run-tests-kerberos-integration.sh "run Atlas tests": - command: shell.exec type: test params: shell: bash working_dir: "src" script: | ${PREPARE_SHELL} AUTH=${AUTH} SSL=${SSL} TOPOLOGY=${TOPOLOGY} RVM_RUBY="${RVM_RUBY}" \ ATLAS_REPLICA_SET_URI=${atlas_replica_set_uri} ATLAS_SHARDED_URI=${atlas_sharded_uri} \ ATLAS_FREE_TIER_URI=${atlas_free_tier_uri} ATLAS_TLS11_URI=${atlas_tls11_uri} \ ATLAS_TLS12_URI=${atlas_tls12_uri} ATLAS_SERVERLESS_URI=${atlas_serverless_uri} \ ATLAS_SERVERLESS_LB_URI=${atlas_serverless_lb_uri} \ ATLAS_X509_CERT_BASE64="${atlas_x509_cert_base64}" \ ATLAS_X509_URI="${atlas_x509}" \ ATLAS_X509_DEV_CERT_BASE64="${atlas_x509_dev_cert_base64}" \ ATLAS_X509_DEV_URI="${atlas_x509_dev}" \ .evergreen/run-tests-atlas.sh "run serverless tests": - command: shell.exec type: test params: shell: bash working_dir: "src" script: | ${PREPARE_SHELL} # Needed for generating temporary aws credentials. 
if [ -n "${FLE}" ]; then export AWS_ACCESS_KEY_ID="${fle_aws_key}" export AWS_SECRET_ACCESS_KEY="${fle_aws_secret}" export AWS_DEFAULT_REGION="${fle_aws_region}" fi CRYPT_SHARED_LIB_PATH="${CRYPT_SHARED_LIB_PATH}" SERVERLESS=1 SSL=ssl RVM_RUBY="${RVM_RUBY}" SINGLE_MONGOS="${SINGLE_MONGOS}" SERVERLESS_URI="${SERVERLESS_URI}" FLE="${FLE}" SERVERLESS_MONGODB_VERSION="${SERVERLESS_MONGODB_VERSION}" .evergreen/run-tests-serverless.sh pre: - func: "fetch source" - func: "create expansions" post: - func: "delete private environment" # Removed, causing timeouts # - func: "upload working dir" - func: "upload mo artifacts" # - func: "upload test results" - func: "upload test results to s3" task_groups: - name: serverless_task_group setup_group_can_fail_task: true setup_group_timeout_secs: 1800 # 30 minutes setup_group: - func: "fetch source" - func: "create expansions" - command: ec2.assume_role params: role_arn: ${aws_test_secrets_role} - command: shell.exec params: shell: "bash" script: | ${PREPARE_SHELL} bash ${DRIVERS_TOOLS}/.evergreen/serverless/setup-secrets.sh bash ${DRIVERS_TOOLS}/.evergreen/serverless/create-instance.sh - command: expansions.update params: file: serverless-expansion.yml teardown_task: - command: shell.exec params: script: | ${PREPARE_SHELL} bash ${DRIVERS_TOOLS}/.evergreen/serverless/delete-instance.sh - func: "upload test results" tasks: - "test-serverless" - name: testatlas_full_task_group setup_group_can_fail_task: true setup_group_timeout_secs: 1800 # 30 minutes setup_group: - func: fetch source - func: create expansions - command: shell.exec params: shell: "bash" working_dir: "src" script: | ${PREPARE_SHELL} echo "Setting up Atlas cluster" DRIVERS_ATLAS_PUBLIC_API_KEY="${DRIVERS_ATLAS_PUBLIC_API_KEY}" \ DRIVERS_ATLAS_PRIVATE_API_KEY="${DRIVERS_ATLAS_PRIVATE_API_KEY}" \ DRIVERS_ATLAS_GROUP_ID="${DRIVERS_ATLAS_GROUP_ID}" \ DRIVERS_ATLAS_LAMBDA_USER="${DRIVERS_ATLAS_LAMBDA_USER}" \ DRIVERS_ATLAS_LAMBDA_PASSWORD="${DRIVERS_ATLAS_LAMBDA_PASSWORD}" \ DRIVERS_ATLAS_BASE_URL="${DRIVERS_ATLAS_BASE_URL}" \ LAMBDA_STACK_NAME="dbx-ruby-lambda" \ MONGODB_VERSION="7.0" \ task_id="${task_id}" \ execution="${execution}" \ $DRIVERS_TOOLS/.evergreen/atlas/setup-atlas-cluster.sh echo "MONGODB_URI=${MONGODB_URI}" - command: expansions.update params: file: src/atlas-expansion.yml teardown_group: - command: shell.exec params: shell: "bash" working_dir: "src" script: | ${PREPARE_SHELL} DRIVERS_ATLAS_PUBLIC_API_KEY="${DRIVERS_ATLAS_PUBLIC_API_KEY}" \ DRIVERS_ATLAS_PRIVATE_API_KEY="${DRIVERS_ATLAS_PRIVATE_API_KEY}" \ DRIVERS_ATLAS_GROUP_ID="${DRIVERS_ATLAS_GROUP_ID}" \ DRIVERS_ATLAS_BASE_URL="${DRIVERS_ATLAS_BASE_URL}" \ LAMBDA_STACK_NAME="dbx-ruby-lambda" \ task_id="${task_id}" \ execution="${execution}" \ $DRIVERS_TOOLS/.evergreen/atlas/teardown-atlas-cluster.sh tasks: - test-full-atlas-task - name: test_aws_lambda_task_group setup_group_can_fail_task: true setup_group_timeout_secs: 1800 # 30 minutes setup_group: - func: fetch source - func: create expansions - command: shell.exec params: shell: "bash" working_dir: "src" script: | ${PREPARE_SHELL} echo "Setting up Atlas cluster" DRIVERS_ATLAS_PUBLIC_API_KEY="${DRIVERS_ATLAS_PUBLIC_API_KEY}" \ DRIVERS_ATLAS_PRIVATE_API_KEY="${DRIVERS_ATLAS_PRIVATE_API_KEY}" \ DRIVERS_ATLAS_GROUP_ID="${DRIVERS_ATLAS_GROUP_ID}" \ DRIVERS_ATLAS_LAMBDA_USER="${DRIVERS_ATLAS_LAMBDA_USER}" \ DRIVERS_ATLAS_LAMBDA_PASSWORD="${DRIVERS_ATLAS_LAMBDA_PASSWORD}" \ DRIVERS_ATLAS_BASE_URL="${DRIVERS_ATLAS_BASE_URL}" \ LAMBDA_STACK_NAME="dbx-ruby-lambda" \ 
MONGODB_VERSION="7.0" \ task_id="${task_id}" \ execution="${execution}" \ $DRIVERS_TOOLS/.evergreen/atlas/setup-atlas-cluster.sh echo "MONGODB_URI=${MONGODB_URI}" - command: expansions.update params: file: src/atlas-expansion.yml teardown_group: - command: shell.exec params: shell: "bash" working_dir: "src" script: | ${PREPARE_SHELL} DRIVERS_ATLAS_PUBLIC_API_KEY="${DRIVERS_ATLAS_PUBLIC_API_KEY}" \ DRIVERS_ATLAS_PRIVATE_API_KEY="${DRIVERS_ATLAS_PRIVATE_API_KEY}" \ DRIVERS_ATLAS_GROUP_ID="${DRIVERS_ATLAS_GROUP_ID}" \ DRIVERS_ATLAS_BASE_URL="${DRIVERS_ATLAS_BASE_URL}" \ LAMBDA_STACK_NAME="dbx-ruby-lambda" \ task_id="${task_id}" \ execution="${execution}" \ $DRIVERS_TOOLS/.evergreen/atlas/teardown-atlas-cluster.sh tasks: - test-aws-lambda-deployed - name: testgcpkms_task_group setup_group_can_fail_task: true setup_group_timeout_secs: 1800 # 30 minutes setup_group: - func: fetch source - func: "create expansions" - command: shell.exec params: shell: "bash" working_dir: "src" script: | ${PREPARE_SHELL} echo '${testgcpkms_key_file}' > /tmp/testgcpkms_key_file.json export GCPKMS_KEYFILE=/tmp/testgcpkms_key_file.json export GCPKMS_DRIVERS_TOOLS=$DRIVERS_TOOLS export GCPKMS_SERVICEACCOUNT="${testgcpkms_service_account}" export GCPKMS_MACHINETYPE="e2-standard-4" .evergreen/csfle/gcpkms/create-and-setup-instance.sh # Load the GCPKMS_GCLOUD, GCPKMS_INSTANCE, GCPKMS_REGION, and GCPKMS_ZONE expansions. - command: expansions.update params: file: src/testgcpkms-expansions.yml teardown_group: - command: shell.exec params: shell: "bash" working_dir: "src" script: | ${PREPARE_SHELL} export GCPKMS_GCLOUD=${GCPKMS_GCLOUD} export GCPKMS_PROJECT=${GCPKMS_PROJECT} export GCPKMS_ZONE=${GCPKMS_ZONE} export GCPKMS_INSTANCENAME=${GCPKMS_INSTANCENAME} .evergreen/csfle/gcpkms/delete-instance.sh tasks: - testgcpkms-task - name: testazurekms_task_group setup_group_can_fail_task: true setup_group_timeout_secs: 1800 # 30 minutes setup_group: - func: fetch source - func: "create expansions" - command: shell.exec params: shell: "bash" working_dir: "src" script: | ${PREPARE_SHELL} export AZUREKMS_VMNAME_PREFIX=RUBY export AZUREKMS_CLIENTID="${testazurekms_clientid}" export AZUREKMS_TENANTID="${testazurekms_tenantid}" export AZUREKMS_SECRET="${testazurekms_secret}" export AZUREKMS_DRIVERS_TOOLS=$DRIVERS_TOOLS export AZUREKMS_RESOURCEGROUP="${testazurekms_resourcegroup}" echo '${testazurekms_publickey}' > /tmp/testazurekms_public_key_file export AZUREKMS_PUBLICKEYPATH="/tmp/testazurekms_public_key_file" echo '${testazurekms_privatekey}' > /tmp/testazurekms_private_key_file chmod 600 /tmp/testazurekms_private_key_file export AZUREKMS_PRIVATEKEYPATH="/tmp/testazurekms_private_key_file" export AZUREKMS_SCOPE="${testazurekms_scope}" .evergreen/csfle/azurekms/create-and-setup-vm.sh # Load the AZUREKMS_GCLOUD, AZUREKMS_INSTANCE, AZUREKMS_REGION, and AZUREKMS_ZONE expansions. 
- command: expansions.update params: file: src/testazurekms-expansions.yml teardown_group: - command: expansions.update params: file: src/testazurekms-expansions.yml - command: shell.exec params: shell: "bash" working_dir: "src" script: | ${PREPARE_SHELL} export AZUREKMS_RESOURCEGROUP="${testazurekms_resourcegroup}" .evergreen/csfle/azurekms/delete-vm.sh tasks: - testazurekms-task tasks: - name: "test-atlas" commands: - func: "run Atlas tests" - name: "test-serverless" commands: - func: "export FLE credentials" - func: "run serverless tests" - name: "test-docker" commands: - func: "build and test docker image" - name: "test-mlaunch" commands: - func: "run tests" - name: "driver-bench" commands: - func: "run benchmarks" - name: "test-via-docker" commands: - func: "run tests via docker" - name: "test-kerberos-integration" commands: - func: "export Kerberos credentials" - func: "run Kerberos integration tests" - name: "test-kerberos" commands: - func: "run Kerberos unit tests" - name: "test-csot" commands: - func: "run CSOT tests" - name: "test-fle" commands: - func: "export FLE credentials" - func: "run tests" - name: "test-fle-via-docker" commands: - func: "export FLE credentials" - func: "run tests via docker" - name: "test-aws-auth" commands: - func: "export AWS auth credentials" - func: "run AWS auth tests" - name: "test-full-atlas-task" commands: - command: shell.exec type: test params: working_dir: "src" shell: "bash" script: | ${PREPARE_SHELL} MONGODB_URI="${MONGODB_URI}" .evergreen/run-tests-atlas-full.sh - name: "testgcpkms-task" commands: - command: shell.exec type: setup params: working_dir: "src" shell: "bash" script: | ${PREPARE_SHELL} echo "Copying files ... begin" export GCPKMS_GCLOUD=${GCPKMS_GCLOUD} export GCPKMS_PROJECT=${GCPKMS_PROJECT} export GCPKMS_ZONE=${GCPKMS_ZONE} export GCPKMS_INSTANCENAME=${GCPKMS_INSTANCENAME} tar czf /tmp/mongo-ruby-driver.tgz . GCPKMS_SRC=/tmp/mongo-ruby-driver.tgz GCPKMS_DST=$GCPKMS_INSTANCENAME: .evergreen/csfle/gcpkms/copy-file.sh echo "Copying files ... end" echo "Untarring file ... begin" GCPKMS_CMD="tar xf mongo-ruby-driver.tgz" .evergreen/csfle/gcpkms/run-command.sh echo "Untarring file ... end" - command: shell.exec type: test params: working_dir: "src" shell: "bash" script: | ${PREPARE_SHELL} export GCPKMS_GCLOUD=${GCPKMS_GCLOUD} export GCPKMS_PROJECT=${GCPKMS_PROJECT} export GCPKMS_ZONE=${GCPKMS_ZONE} export GCPKMS_INSTANCENAME=${GCPKMS_INSTANCENAME} GCPKMS_CMD="TEST_FLE_GCP_AUTO=1 RVM_RUBY=ruby-3.1 FLE=helper TOPOLOGY=standalone MONGODB_VERSION=6.0 MONGO_RUBY_DRIVER_GCP_EMAIL="${fle_gcp_email}" MONGO_RUBY_DRIVER_GCP_PRIVATE_KEY='${fle_gcp_private_key}' MONGO_RUBY_DRIVER_GCP_PROJECT_ID='${fle_gcp_project_id}' MONGO_RUBY_DRIVER_GCP_LOCATION='${fle_gcp_location}' MONGO_RUBY_DRIVER_GCP_KEY_RING='${fle_gcp_key_ring}' MONGO_RUBY_DRIVER_GCP_KEY_NAME='${fle_gcp_key_name}' ./.evergreen/run-tests-gcp.sh" .evergreen/csfle/gcpkms/run-command.sh - name: "testazurekms-task" commands: - command: shell.exec type: setup params: working_dir: "src" shell: "bash" script: | ${PREPARE_SHELL} echo "Copying files ... begin" export AZUREKMS_RESOURCEGROUP=${testazurekms_resourcegroup} export AZUREKMS_VMNAME=${AZUREKMS_VMNAME} export AZUREKMS_PRIVATEKEYPATH="/tmp/testazurekms_private_key_file" tar czf /tmp/mongo-ruby-driver.tgz . AZUREKMS_SRC=/tmp/mongo-ruby-driver.tgz AZUREKMS_DST="~/" .evergreen/csfle/azurekms/copy-file.sh echo "Copying files ... end" echo "Untarring file ... 
begin" AZUREKMS_CMD="tar xf mongo-ruby-driver.tgz" .evergreen/csfle/azurekms/run-command.sh echo "Untarring file ... end" - command: shell.exec type: test params: working_dir: "src" shell: "bash" script: | ${PREPARE_SHELL} export AZUREKMS_RESOURCEGROUP=${testazurekms_resourcegroup} export AZUREKMS_VMNAME=${AZUREKMS_VMNAME} export AZUREKMS_PRIVATEKEYPATH="/tmp/testazurekms_private_key_file" AZUREKMS_CMD="TEST_FLE_AZURE_AUTO=1 RVM_RUBY=ruby-3.1 FLE=helper TOPOLOGY=standalone MONGODB_VERSION=6.0 MONGO_RUBY_DRIVER_AZURE_TENANT_ID="${MONGO_RUBY_DRIVER_AZURE_TENANT_ID}" MONGO_RUBY_DRIVER_AZURE_CLIENT_ID="${MONGO_RUBY_DRIVER_AZURE_CLIENT_ID}" MONGO_RUBY_DRIVER_AZURE_CLIENT_SECRET="${MONGO_RUBY_DRIVER_AZURE_CLIENT_SECRET}" MONGO_RUBY_DRIVER_AZURE_IDENTITY_PLATFORM_ENDPOINT="${MONGO_RUBY_DRIVER_AZURE_IDENTITY_PLATFORM_ENDPOINT}" MONGO_RUBY_DRIVER_AZURE_KEY_VAULT_ENDPOINT="${testazurekms_keyvaultendpoint}" MONGO_RUBY_DRIVER_AZURE_KEY_NAME="${testazurekms_keyname}" ./.evergreen/run-tests-azure.sh" .evergreen/csfle/azurekms/run-command.sh - name: "test-aws-lambda-deployed" commands: - command: ec2.assume_role params: role_arn: ${LAMBDA_AWS_ROLE_ARN} duration_seconds: 3600 - command: shell.exec type: test params: working_dir: "src" shell: "bash" script: | ${PREPARE_SHELL} export MONGODB_URI=${MONGODB_URI} export FUNCTION_NAME="ruby-driver-lambda" .evergreen/run-tests-deployed-lambda.sh env: TEST_LAMBDA_DIRECTORY: ${PROJECT_DIRECTORY}/spec/faas/ruby-sam-app AWS_REGION: us-east-1 PROJECT_DIRECTORY: ${PROJECT_DIRECTORY} DRIVERS_TOOLS: ${DRIVERS_TOOLS} DRIVERS_ATLAS_PUBLIC_API_KEY: ${DRIVERS_ATLAS_PUBLIC_API_KEY} DRIVERS_ATLAS_PRIVATE_API_KEY: ${DRIVERS_ATLAS_PRIVATE_API_KEY} DRIVERS_ATLAS_LAMBDA_USER: ${DRIVERS_ATLAS_LAMBDA_USER} DRIVERS_ATLAS_LAMBDA_PASSWORD: ${DRIVERS_ATLAS_LAMBDA_PASSWORD} DRIVERS_ATLAS_GROUP_ID: ${DRIVERS_ATLAS_GROUP_ID} DRIVERS_ATLAS_BASE_URL: ${DRIVERS_ATLAS_BASE_URL} AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID} AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY} AWS_SESSION_TOKEN: ${AWS_SESSION_TOKEN} LAMBDA_STACK_NAME: "dbx-ruby-lambda" CLUSTER_PREFIX: "dbx-ruby-lambda" RVM_RUBY: ruby-3.2 MONGODB_URI: ${MONGODB_URI} mongo-ruby-driver-2.21.3/.evergreen/config/standard.yml.erb000066400000000000000000000401371505113246500235740ustar00rootroot00000000000000<% topologies = %w( standalone replica-set sharded-cluster ) # latest_ruby = the most recently released, stable version of Ruby # (make sure this version is being built by 10gen/mongo-ruby-toolchain) latest_ruby = "ruby-3.3".inspect # so it gets quoted as a string # these are used for testing against a few recent ruby versions recent_rubies = %w( ruby-3.3 ruby-3.2 jruby-9.4 ) # this is a list of the most most recent 3.x and 2.x MRI ruby versions sample_mri_rubies = %w( ruby-3.3 ruby-2.7 ) # as above, but including the most recent JRuby release sample_rubies = sample_mri_rubies + %w( jruby-9.4 ) # older Ruby versions provided by 10gen/mongo-ruby-toolchain older_rubies = %w( ruby-3.0 ruby-2.7 ) # all supported JRuby versions provided by 10gen/mongo-ruby-toolchain jrubies = %w( jruby-9.4 jruby-9.3 ) supported_mri_rubies_3 = %w( ruby-3.3 ruby-3.2 ruby-3.1 ruby-3.0 ) supported_mri_rubies_3_ubuntu = %w( ruby-3.3 ruby-3.2 ruby-3.1 ) supported_mri_ruby_2 = "ruby-2.7".inspect supported_rubies = supported_mri_rubies_3 + %w( ruby-2.7 ) + jrubies # The latest stable version of MongoDB latest_stable_mdb = "8.0".inspect # so it gets quoted as a string # A few of the most recent MongoDB versions actual_and_upcoming_mdb = %w( latest 8.0 7.0 ) recent_mdb = 
%w( 8.0 7.0 ) all_dbs = %w(latest 8.0 7.0 6.0 5.0 4.4 4.2 4.0 3.6) %> buildvariants: - matrix_name: DriverBench matrix_spec: ruby: <%= latest_ruby %> mongodb-version: <%= latest_stable_mdb %> topology: standalone os: ubuntu2204 display_name: DriverBench tasks: - name: "driver-bench" - matrix_name: "auth/ssl" matrix_spec: auth-and-ssl: ["auth-and-ssl", "noauth-and-nossl"] ruby: <%= latest_ruby %> mongodb-version: <%= actual_and_upcoming_mdb %> topology: <%= topologies %> os: ubuntu2204 display_name: ${auth-and-ssl} ${ruby} db-${mongodb-version} ${topology} tasks: - name: "test-mlaunch" - matrix_name: "mongo-recent" matrix_spec: ruby: <%= recent_rubies %> mongodb-version: <%= actual_and_upcoming_mdb %> topology: <%= topologies %> os: ubuntu2204 display_name: "${mongodb-version} ${os} ${topology} ${auth-and-ssl} ${ruby}" tasks: - name: "test-mlaunch" - matrix_name: "mongo-8-arm" matrix_spec: ruby: <%= latest_ruby %> mongodb-version: [ '8.0' ] topology: <%= topologies %> os: ubuntu2404-arm display_name: "${mongodb-version} ${os} ${topology} ${auth-and-ssl} ${ruby}" tasks: - name: "test-mlaunch" - matrix_name: "mongo-5.x" matrix_spec: ruby: <%= recent_rubies %> mongodb-version: ['5.0'] topology: <%= topologies %> os: ubuntu1804 display_name: "${mongodb-version} ${topology} ${auth-and-ssl} ${ruby}" tasks: - name: "test-mlaunch" - matrix_name: "mongo-4.x" matrix_spec: ruby: <%= older_rubies %> mongodb-version: ['4.4', '4.2', '4.0'] topology: <%= topologies %> os: ubuntu1804 display_name: "${mongodb-version} ${topology} ${auth-and-ssl} ${ruby}" tasks: - name: "test-mlaunch" - matrix_name: "mongo-3.6" matrix_spec: ruby: <%= supported_mri_ruby_2 %> mongodb-version: ['3.6'] topology: <%= topologies %> os: ubuntu1804 display_name: "${mongodb-version} ${topology} ${auth-and-ssl} ${ruby}" tasks: - name: "test-mlaunch" - matrix_name: "single-lb" matrix_spec: ruby: <%= latest_ruby %> mongodb-version: <%= latest_stable_mdb %> topology: load-balanced single-mongos: single-mongos os: ubuntu2204 display_name: "${mongodb-version} ${topology} single-lb ${auth-and-ssl} ${ruby}" tasks: - name: "test-mlaunch" - matrix_name: "mongo-api-version" matrix_spec: ruby: <%= latest_ruby %> mongodb-version: '7.0' topology: standalone api-version-required: yes os: ubuntu2204 display_name: "${mongodb-version} api-version-required ${topology} ${auth-and-ssl} ${ruby}" tasks: - name: "test-mlaunch" - matrix_name: "single-mongos" matrix_spec: ruby: <%= latest_ruby %> mongodb-version: <%= latest_stable_mdb %> topology: "sharded-cluster" single-mongos: single-mongos os: ubuntu2204 display_name: "${mongodb-version} ${topology} single-mongos ${auth-and-ssl} ${ruby}" tasks: - name: "test-mlaunch" - matrix_name: CSOT matrix_spec: ruby: <%= latest_ruby %> mongodb-version: <%= latest_stable_mdb %> topology: replica-set-single-node os: ubuntu2204 display_name: "CSOT - ${mongodb-version}" tasks: - name: test-csot - matrix_name: "no-retry-reads" matrix_spec: retry-reads: no-retry-reads ruby: <%= latest_ruby %> mongodb-version: <%= latest_stable_mdb %> topology: <%= topologies %> os: ubuntu2204 display_name: "${mongodb-version} ${topology} ${retry-reads} ${ruby}" tasks: - name: "test-mlaunch" - matrix_name: "no-retry-writes" matrix_spec: retry-writes: no-retry-writes ruby: <%= latest_ruby %> mongodb-version: <%= latest_stable_mdb %> topology: [replica-set, sharded-cluster] os: ubuntu2204 display_name: "${mongodb-version} ${topology} ${retry-writes} ${ruby}" tasks: - name: "test-mlaunch" - matrix_name: mmapv1 matrix_spec: ruby: <%= 
supported_mri_ruby_2 %> mongodb-version: ['3.6', '4.0'] topology: <%= topologies %> storage-engine: mmapv1 os: ubuntu1804 display_name: "${mongodb-version} ${topology} mmapv1 ${ruby}" tasks: - name: "test-mlaunch" - matrix_name: "lint" matrix_spec: lint: on ruby: <%= latest_ruby %> mongodb-version: <%= latest_stable_mdb %> topology: <%= topologies %> os: ubuntu2204 display_name: "${mongodb-version} ${topology} ${lint} ${ruby}" tasks: - name: "test-mlaunch" - matrix_name: "fork" matrix_spec: fork: on ruby: <%= latest_ruby %> mongodb-version: <%= latest_stable_mdb %> topology: <%= topologies %> os: ubuntu2204 display_name: "${mongodb-version} ${topology} fork ${ruby}" tasks: - name: "test-mlaunch" - matrix_name: "solo" matrix_spec: solo: on ruby: <%= supported_mri_rubies_3_ubuntu %> mongodb-version: <%= latest_stable_mdb %> topology: <%= topologies %> os: ubuntu2204 display_name: "${mongodb-version} ${topology} solo ${ruby}" tasks: - name: "test-mlaunch" - matrix_name: "stress older" matrix_spec: stress: on ruby: <%= supported_mri_ruby_2 %> mongodb-version: ['4.2', '4.0', '3.6'] topology: replica-set os: ubuntu1804 display_name: "${mongodb-version} ${topology} stress ${ruby}" tasks: - name: "test-mlaunch" - matrix_name: "stress" matrix_spec: stress: on ruby: <%= latest_ruby %> mongodb-version: <%= recent_mdb %> topology: replica-set os: ubuntu2204 display_name: "${mongodb-version} ${topology} stress ${ruby}" tasks: - name: "test-mlaunch" - matrix_name: "x509-tests" matrix_spec: auth-and-ssl: "x509" ruby: <%= latest_ruby %> mongodb-version: <%= latest_stable_mdb %> topology: standalone os: ubuntu2204 display_name: "${mongodb-version} ${topology} ${auth-and-ssl} ${ruby}" tasks: - name: "test-mlaunch" - matrix_name: "jruby-auth" matrix_spec: auth-and-ssl: [ "auth-and-ssl", "noauth-and-nossl" ] ruby: <%= jrubies.first %> mongodb-version: <%= latest_stable_mdb %> topology: <%= topologies %> os: ubuntu2204 display_name: "${mongodb-version} ${topology} ${auth-and-ssl} ${ruby}" tasks: - name: "test-mlaunch" <% [ [latest_ruby, latest_stable_mdb, 'ubuntu2204'], [supported_mri_ruby_2, '"6.0"', 'ubuntu2004'] ].each do |rubies, mdb, distro| %> - matrix_name: <%= "zlib-#{rubies}" %> matrix_spec: auth-and-ssl: [ "auth-and-ssl", "noauth-and-nossl" ] ruby: <%= rubies %> mongodb-version: <%= mdb %> topology: "replica-set" compressor: 'zlib' os: <%= distro %> display_name: "${compressor} ${mongodb-version} ${topology} ${auth-and-ssl} ${ruby}" tasks: - name: "test-mlaunch" - matrix_name: <%= "snappy-#{rubies}" %> matrix_spec: auth-and-ssl: [ "auth-and-ssl", "noauth-and-nossl" ] ruby: <%= rubies %> mongodb-version: <%= mdb %> topology: "replica-set" compressor: 'snappy' os: <%= distro %> display_name: "${compressor} ${mongodb-version} ${topology} ${auth-and-ssl} ${ruby}" tasks: - name: "test-mlaunch" # the zstd-ruby gem does not support JRuby (explicitly). However, there is # apparently a zstd-jni gem for JRuby that we could investigate here; if # this test is ever updated to support JRuby, the `sample_mri_rubies` # reference should be replaced with `sample_rubies`.
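# A commented-out sketch (hypothetical, not wired up) of what a JRuby zstd
# variant could look like if a zstd-jni-backed gem were ever adopted; every
# value below reuses existing axis ids from axes.yml.erb:
#
# - matrix_name: zstd-auth-jruby
#   matrix_spec:
#     auth-and-ssl: [ "auth-and-ssl", "noauth-and-nossl" ]
#     ruby: jruby-9.4
#     mongodb-version: "8.0"
#     topology: "replica-set"
#     compressor: 'zstd'
#     os: ubuntu2204
#   display_name: "${compressor} ${mongodb-version} ${topology} ${auth-and-ssl} ${ruby}"
#   tasks:
#     - name: "test-mlaunch"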
- matrix_name: <%= "zstd-auth-#{rubies}" %> matrix_spec: auth-and-ssl: [ "auth-and-ssl", "noauth-and-nossl" ] ruby: <%= rubies %> mongodb-version: <%= mdb %> topology: "replica-set" compressor: 'zstd' os: <%= distro %> display_name: "${compressor} ${mongodb-version} ${topology} ${auth-and-ssl} ${ruby}" tasks: - name: "test-mlaunch" - matrix_name: <%= "activesupport-#{rubies}" %> matrix_spec: ruby: <%= rubies %> mongodb-version: <%= mdb %> topology: replica-set as: as os: <%= distro %> display_name: "AS ${mongodb-version} ${topology} ${ruby}" tasks: - name: "test-mlaunch" - matrix_name: <%= "bson-#{rubies}" %> matrix_spec: ruby: <%= rubies %> mongodb-version: <%= mdb %> topology: replica-set bson: "*" os: <%= distro %> display_name: "bson-${bson} ${mongodb-version} ${topology} ${ruby}" tasks: - name: "test-mlaunch" <% end %> - matrix_name: "fle above 4.4" matrix_spec: auth-and-ssl: "noauth-and-nossl" ruby: <%= supported_mri_rubies_3_ubuntu %> topology: [replica-set, sharded-cluster] mongodb-version: [ '6.0', '7.0', '8.0' ] os: ubuntu2204 fle: helper display_name: "FLE: ${mongodb-version} ${topology} ${ruby}" tasks: - name: "test-fle" # kerberos integration tests are broken (RUBY-3266) # - matrix_name: "kerberos-integration" # matrix_spec: # ruby: <%= sample_rubies %> # os: rhel8 # display_name: "Kerberos integration ${os} ${ruby}" # tasks: # - name: "test-kerberos-integration" - matrix_name: "kerberos-unit" matrix_spec: ruby: <%= latest_ruby %> mongodb-version: <%= latest_stable_mdb %> topology: standalone os: ubuntu2204 auth-and-ssl: kerberos display_name: "Kerberos Tests" tasks: - name: "test-kerberos" # - matrix_name: "fle-latest" # matrix_spec: # auth-and-ssl: "noauth-and-nossl" # ruby: <%#= latest_ruby %> # topology: [replica-set, sharded-cluster] # mongodb-version: [ 'latest' ] # os: ubuntu2204 # fle: helper # display_name: "FLE: ${mongodb-version} ${topology} ${ruby}" # tasks: # - name: "test-fle" - matrix_name: aws-auth-regular matrix_spec: # https://jira.mongodb.org/browse/RUBY-3311 # auth-and-ssl: [ aws-regular, aws-assume-role, aws-ec2, aws-ecs, aws-web-identity ] # auth-and-ssl: [ aws-regular, aws-assume-role, aws-ecs, aws-web-identity ] # https://jira.mongodb.org/browse/RUBY-3659 auth-and-ssl: [ aws-regular, aws-assume-role, aws-web-identity ] ruby: <%= latest_ruby %> topology: standalone mongodb-version: <%= latest_stable_mdb %> os: ubuntu2204 display_name: "AWS ${auth-and-ssl} ${mongodb-version} ${ruby}" tasks: - name: "test-aws-auth" - matrix_name: ocsp-verifier matrix_spec: ocsp-verifier: true # No JRuby due to https://github.com/jruby/jruby-openssl/issues/210 ruby: <%= supported_mri_rubies_3_ubuntu %> topology: standalone mongodb-version: <%= latest_stable_mdb %> os: ubuntu2204 display_name: "OCSP verifier: ${mongodb-version} ${ruby}" tasks: - name: test-mlaunch - matrix_name: ocsp-must-staple matrix_spec: ocsp-algorithm: ecdsa ocsp-must-staple: on ocsp-delegate: on ruby: <%= latest_ruby %> topology: standalone mongodb-version: <%= latest_stable_mdb %> os: ubuntu2204 auth-and-ssl: noauth-and-ssl display_name: "OCSP integration - must staple: ${mongodb-version} ${ruby}" tasks: - name: test-mlaunch - matrix_name: ocsp-unknown matrix_spec: ocsp-algorithm: rsa ocsp-status: unknown ruby: <%= latest_ruby %> topology: standalone mongodb-version: <%= latest_stable_mdb %> os: ubuntu2204 auth-and-ssl: noauth-and-ssl display_name: "OCSP integration - unknown: ${mongodb-version} ${ruby}" tasks: - name: test-mlaunch <% [ %w(valid none pass), %w(unknown none pass), %w(revoked none 
fail), %w(valid tlsInsecure=true pass), %w(unknown tlsInsecure=true pass), %w(revoked tlsInsecure=true pass), %w(valid tlsAllowInvalidCertificates=true pass), %w(unknown tlsAllowInvalidCertificates=true pass), %w(revoked tlsAllowInvalidCertificates=true pass), ].each do |status, extra_uri_options, outcome| %> - matrix_name: ocsp-connectivity matrix_spec: ocsp-algorithm: '*' ocsp-status: <%= status %> ocsp-delegate: '*' ocsp-connectivity: <%= outcome %> extra-uri-options: "<%= extra_uri_options %>" ruby: <%= latest_ruby %> topology: standalone mongodb-version: <%= latest_stable_mdb %> os: ubuntu2204 display_name: "OCSP connectivity: ${ocsp-algorithm} ${ocsp-status} ${ocsp-delegate} ${extra-uri-options} ${mongodb-version} ${ruby}" tasks: - name: test-mlaunch <% end %> - matrix_name: ocsp-connectivity-jruby matrix_spec: # ECDSA does not work on JRuby. # https://github.com/jruby/jruby-openssl/issues/213 ocsp-algorithm: rsa # We do not perform OCSP verification on JRuby, therefore the revoked # configuration fails (connection succeeds due to lack of verification # when it is expected to fail). # https://github.com/jruby/jruby-openssl/issues/210 ocsp-status: [valid, unknown] ocsp-delegate: '*' ocsp-connectivity: pass ruby: <%= jrubies.first %> topology: standalone mongodb-version: <%= latest_stable_mdb %> os: ubuntu2204 display_name: "OCSP connectivity: ${ocsp-algorithm} ${ocsp-status} ${ocsp-delegate} ${mongodb-version} ${ruby}" tasks: - name: test-mlaunch # https://jira.mongodb.org/browse/RUBY-3540 #- matrix_name: testgcpkms-variant # matrix_spec: # ruby: <%= latest_ruby %> # fle: helper # topology: standalone # os: ubuntu2204 # mongodb-version: <%= latest_stable_mdb %> # display_name: "GCP KMS" # tasks: # - name: testgcpkms_task_group # batchtime: 20160 # Use a batchtime of 14 days as suggested by the CSFLE test README # https://jira.mongodb.org/browse/RUBY-3672 #- matrix_name: testazurekms-variant # matrix_spec: # ruby: ruby-3.0 # fle: helper # topology: standalone # os: debian11 # could eventually look at updating this to rhel80 # mongodb-version: 6.0 # display_name: "AZURE KMS" # tasks: # - name: testazurekms_task_group # batchtime: 20160 # Use a batchtime of 14 days as suggested by the CSFLE test README - matrix_name: atlas-full matrix_spec: ruby: <%= latest_ruby %> os: ubuntu2204 display_name: "Atlas (Full)" tasks: - name: testatlas_full_task_group - matrix_name: "atlas" matrix_spec: ruby: <%= supported_mri_rubies_3_ubuntu %> os: ubuntu2204 display_name: "Atlas connectivity tests ${ruby}" tasks: - name: test-atlas - matrix_name: "aws-lambda" matrix_spec: ruby: 'ruby-3.2' os: ubuntu2204 display_name: "AWS Lambda" tasks: - name: test_aws_lambda_task_group mongo-ruby-driver-2.21.3/.evergreen/csfle000077700000000000000000000000001505113246500311652../.mod/drivers-evergreen-tools/.evergreen/csfleustar00rootroot00000000000000mongo-ruby-driver-2.21.3/.evergreen/download-mongodb.sh000077700000000000000000000000001505113246500365032../.mod/drivers-evergreen-tools/.evergreen/download-mongodb.shustar00rootroot00000000000000mongo-ruby-driver-2.21.3/.evergreen/functions-aws.sh000066400000000000000000000025551505113246500223730ustar00rootroot00000000000000clear_instance_profile() { # The tests check, for example, failure to authenticate when no credentials # are explicitly provided. If an instance profile happens to be assigned # to the running instance, those tests will fail; clear instance profile # (if any) for regular and assume role configurations. 
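# (Illustrative note: the "( ... )" subshell used below keeps exported
# credentials scoped to it; with a hypothetical command:
#   ( export AWS_KEY=secret; use-aws-key )   # AWS_KEY is visible only here
#   echo "${AWS_KEY:-unset}"                 # prints "unset" in the parent shell
# )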
#
  # To clear the instance profile, we need to use the EC2 credentials.
  # Set them in a subshell to ensure they are not accidentally leaked into
  # the main shell environment, which uses different credentials for
  # regular and assume role configurations.
  (
    # When running in Evergreen, credentials are written to this file.
    # In Docker they are already in the environment and the file does not exist.
    if test -f .env.private; then
      . ./.env.private
    fi

    export MONGO_RUBY_DRIVER_AWS_AUTH_ACCESS_KEY_ID="`get_var IAM_AUTH_EC2_INSTANCE_ACCOUNT`"
    export MONGO_RUBY_DRIVER_AWS_AUTH_SECRET_ACCESS_KEY="`get_var IAM_AUTH_EC2_INSTANCE_SECRET_ACCESS_KEY`"
    export MONGO_RUBY_DRIVER_AWS_AUTH_INSTANCE_PROFILE_ARN="`get_var IAM_AUTH_EC2_INSTANCE_PROFILE`"

    # Region is not specified in Evergreen but can be specified when
    # testing locally.
    export MONGO_RUBY_DRIVER_AWS_AUTH_REGION=${MONGO_RUBY_DRIVER_AWS_AUTH_REGION:=us-east-1}

    ruby -Ispec -Ilib -I.evergreen/lib -rec2_setup -e Ec2Setup.new.clear_instance_profile
  )
}

mongo-ruby-driver-2.21.3/.evergreen/functions-config.sh

# This file contains functions pertaining to driver configuration in Evergreen.

show_local_instructions() {
  show_local_instructions_impl "$arch" \
    MONGODB_VERSION \
    TOPOLOGY \
    RVM_RUBY \
    AUTH \
    SSL \
    COMPRESSOR \
    FLE \
    FCV \
    MONGO_RUBY_DRIVER_LINT \
    RETRY_READS \
    RETRY_WRITES \
    WITH_ACTIVE_SUPPORT \
    SINGLE_MONGOS \
    BSON \
    MMAPV1 \
    STRESS \
    FORK \
    SOLO \
    OCSP_ALGORITHM \
    OCSP_STATUS \
    OCSP_DELEGATE \
    OCSP_MUST_STAPLE \
    OCSP_CONNECTIVITY \
    OCSP_VERIFIER \
    EXTRA_URI_OPTIONS \
    API_VERSION_REQUIRED
}

mongo-ruby-driver-2.21.3/.evergreen/functions-kerberos.sh

configure_for_external_kerberos() {
  echo "Setting krb5 config file"
  touch ${PROJECT_DIRECTORY}/.evergreen/krb5.conf.empty
  export KRB5_CONFIG=${PROJECT_DIRECTORY}/.evergreen/krb5.conf.empty

  if test -z "$KEYTAB_BASE64"; then
    echo KEYTAB_BASE64 must be set in the environment 1>&2
    exit 5
  fi

  echo "Writing keytab"
  echo "$KEYTAB_BASE64" | base64 --decode > ${PROJECT_DIRECTORY}/.evergreen/drivers.keytab

  if test -z "$PRINCIPAL"; then
    echo PRINCIPAL must be set in the environment 1>&2
    exit 5
  fi

  echo "Running kinit"
  kinit -k -t ${PROJECT_DIRECTORY}/.evergreen/drivers.keytab -p "$PRINCIPAL"

  # Realm must be uppercased.
  export SASL_REALM=`echo "$SASL_HOST" |tr a-z A-Z`
}

configure_local_kerberos() {
  # This configuration should only be run in a Docker environment
  # because it overwrites files in /etc.
  #
  # https://stackoverflow.com/questions/20010199/how-to-determine-if-a-process-runs-inside-lxc-docker
  if ! grep -q docker /proc/1/cgroup; then
    echo Local Kerberos configuration should only be done in Docker containers 1>&2
    exit 43
  fi

  cp .evergreen/local-kerberos/krb5.conf /etc/
  mkdir -p /etc/krb5kdc
  cp .evergreen/local-kerberos/kdc.conf /etc/krb5kdc/kdc.conf
  cp .evergreen/local-kerberos/kadm5.acl /etc/krb5kdc/
  cat .evergreen/local-kerberos/test.keytab.base64 |\
    base64 --decode > ${PROJECT_DIRECTORY}/.evergreen/drivers.keytab

  (echo masterp; echo masterp) |kdb5_util create -s
  (echo testp; echo testp) |kadmin.local addprinc rubytest@LOCALKRB

  krb5kdc
  kadmind

  echo 127.0.0.1 krb.local |tee -a /etc/hosts

  echo testp |kinit rubytest@LOCALKRB

  (echo hostp; echo hostp) |kadmin.local addprinc mongodb/`hostname`@LOCALKRB
  kadmin.local ktadd mongodb/`hostname`

  # Server is installed here in the Docker environment.
  export BINDIR=/opt/mongodb/bin

  if ! "$BINDIR"/mongod --version |grep enterprise; then
    echo MongoDB server is not an enterprise one 1>&2
    exit 44
  fi

  mkdir /db
  "$BINDIR"/mongod --dbpath /db --fork --logpath /db/mongod.log

  create_user_cmd="`cat <<'EOT'
    db.getSiblingDB("$external").runCommand(
      {
        createUser: "rubytest@LOCALKRB",
        roles: [
          { role: "root", db: "admin" },
        ],
        writeConcern: { w: "majority" , wtimeout: 5000 },
      }
    )
EOT
`"

  "$BINDIR"/mongosh --eval "$create_user_cmd"
  "$BINDIR"/mongosh --eval 'db.getSiblingDB("kerberos").test.insert({kerberos: true, authenticated: "yeah"})'

  pkill mongod
  sleep 1

  # https://mongodb.com/docs/manual/tutorial/control-access-to-mongodb-with-kerberos-authentication/
  "$BINDIR"/mongod --dbpath /db --fork --logpath /db/mongod.log \
    --bind_ip 0.0.0.0 \
    --auth --setParameter authenticationMechanisms=GSSAPI &

  export SASL_USER=rubytest
  export SASL_PASS=testp
  export SASL_HOST=`hostname`
  export SASL_REALM=LOCALKRB
  export SASL_PORT=27017
  export SASL_DB='$external'
  export KERBEROS_DB=kerberos
}

configure_kerberos_ip_addr() {
  # TODO Find out if $OS is set here; right now we only test on Linux, thus
  # it doesn't matter if it is set.
  case "$OS" in
    cygwin*)
      IP_ADDR=`getent hosts ${SASL_HOST} | head -n 1 | awk '{print $1}'`
      ;;

    darwin)
      IP_ADDR=`dig ${SASL_HOST} +short | tail -1`
      ;;

    *)
      IP_ADDR=`getent hosts ${SASL_HOST} | head -n 1 | awk '{print $1}'`
  esac

  export IP_ADDR
}

mongo-ruby-driver-2.21.3/.evergreen/functions-remote.sh

determine_user() {
  user=`echo $target |awk -F@ '{print $1}'`
  if test -z "$user"; then
    user=`whoami`
  fi
  echo "$user"
}

do_ssh() {
  ssh -o StrictHostKeyChecking=no "$@"
}

do_rsync() {
  rsync -e "ssh -o StrictHostKeyChecking=no" "$@"
}

mongo-ruby-driver-2.21.3/.evergreen/functions.sh

# This file contains basic functions common between all Ruby driver team
# projects: toolchain, bson-ruby, driver and Mongoid.

get_var() {
  var=$1
  value=${!var}
  if test -z "$value"; then
    echo "Missing value for $var" 1>&2
    exit 1
  fi
  echo "$value"
}

set_home() {
  if test -z "$HOME"; then
    export HOME=$(pwd)
  fi
}

uri_escape() {
  echo "$1" |ruby -rcgi -e 'puts CGI.escape(STDIN.read.strip).gsub("+", "%20")'
}

set_env_vars() {
  DRIVERS_TOOLS=${DRIVERS_TOOLS:-}

  if test -n "$AUTH"; then
    export ROOT_USER_NAME="bob"
    export ROOT_USER_PWD="pwd123"
  fi

  if test -n "$MONGODB_URI"; then
    export MONGODB_URI
  else
    unset MONGODB_URI
  fi

  export CI=1

  # JRUBY_OPTS were initially set for Mongoid
  export JRUBY_OPTS="-J-Xms512m -J-Xmx1536M"

  if test "$BSON" = min; then
    export BUNDLE_GEMFILE=gemfiles/bson_min.gemfile
  elif test "$BSON" = master; then
    export MONGO_RUBY_DRIVER_BSON_MASTER=1
    export BUNDLE_GEMFILE=gemfiles/bson_master.gemfile
  elif test "$BSON" = 4-stable; then
    export BUNDLE_GEMFILE=gemfiles/bson_4-stable.gemfile
  elif test "$COMPRESSOR" = snappy; then
    export BUNDLE_GEMFILE=gemfiles/snappy_compression.gemfile
  elif test "$COMPRESSOR" = zstd; then
    export BUNDLE_GEMFILE=gemfiles/zstd_compression.gemfile
  fi

  # rhel62 ships with Python 2.6
  if test -d /opt/python/2.7/bin; then
    export PATH=/opt/python/2.7/bin:$PATH
  fi
}

bundle_install() {
  args=--quiet

  if test "$BSON" = master || test "$BSON" = 4-stable; then
    # In Docker bson is installed in the image, remove it if we need bson master.
    gem uni bson || true
  fi

  # On JRuby we can test against bson master but not in a conventional way.
# See https://jira.mongodb.org/browse/RUBY-2156
  if echo $RVM_RUBY |grep -q jruby && (test "$BSON" = master || test "$BSON" = 4-stable); then
    unset BUNDLE_GEMFILE
    git clone https://github.com/mongodb/bson-ruby
    (cd bson-ruby &&
      git checkout "origin/$BSON" &&
      bundle install &&
      rake compile &&
      gem build *.gemspec &&
      gem install *.gem)

    # TODO redirect output of bundle install to file.
    # Then we don't have to see it in evergreen output.
    args=
  fi

  #which bundle
  #bundle --version

  if test -n "$BUNDLE_GEMFILE"; then
    args="$args --gemfile=$BUNDLE_GEMFILE"
  fi

  echo "Running bundle install $args"
  # Sometimes bundler fails for no apparent reason, run it again then.
  # The failures happen on both MRI and JRuby and have different manifestations.
  bundle install $args || bundle install $args
}

kill_jruby() {
  set +o pipefail
  jruby_running=`ps -ef | grep 'jruby' | grep -v grep | awk '{print $2}'`
  set -o pipefail
  if [ -n "$jruby_running" ];then
    echo "terminating remaining jruby processes"
    for pid in $jruby_running; do kill -9 $pid; done
  fi
}

mongo-ruby-driver-2.21.3/.evergreen/get-mongodb-download-url

#!/usr/bin/env ruby
load File.join(File.dirname(__FILE__), '../spec/shared/bin/get-mongodb-download-url')

mongo-ruby-driver-2.21.3/.evergreen/handle-paths.sh -> ../.mod/drivers-evergreen-tools/.evergreen/handle-paths.sh

mongo-ruby-driver-2.21.3/.evergreen/lib/ec2_setup.rb

autoload :AwsUtils, 'support/aws_utils'
autoload :Utils, 'support/utils'

class Ec2Setup
  def assign_instance_profile
    opts = {
      region: ENV.fetch('MONGO_RUBY_DRIVER_AWS_AUTH_REGION'),
      access_key_id: ENV.fetch('MONGO_RUBY_DRIVER_AWS_AUTH_ACCESS_KEY_ID'),
      secret_access_key: ENV.fetch('MONGO_RUBY_DRIVER_AWS_AUTH_SECRET_ACCESS_KEY'),
    }
    ip_arn = ENV.fetch('MONGO_RUBY_DRIVER_AWS_AUTH_INSTANCE_PROFILE_ARN')
    puts "Setting instance profile to #{ip_arn} on #{Utils.ec2_instance_id}"
    orchestrator = AwsUtils::Orchestrator.new(**opts)
    orchestrator.set_instance_profile(Utils.ec2_instance_id,
      instance_profile_name: nil,
      instance_profile_arn: ip_arn,
    )
    Utils.wait_for_instance_profile
  end

  def clear_instance_profile
    opts = {
      region: ENV.fetch('MONGO_RUBY_DRIVER_AWS_AUTH_REGION'),
      access_key_id: ENV.fetch('MONGO_RUBY_DRIVER_AWS_AUTH_ACCESS_KEY_ID'),
      secret_access_key: ENV.fetch('MONGO_RUBY_DRIVER_AWS_AUTH_SECRET_ACCESS_KEY'),
    }
    puts "Clearing instance profile on #{Utils.ec2_instance_id}"
    orchestrator = AwsUtils::Orchestrator.new(**opts)
    orchestrator.clear_instance_profile(Utils.ec2_instance_id)
    Utils.wait_for_no_instance_profile
  end
end

mongo-ruby-driver-2.21.3/.evergreen/lib/ecs_setup.rb

autoload :AwsUtils, 'support/aws_utils'

class EcsSetup
  def run
    opts = {
      region: ENV.fetch('MONGO_RUBY_DRIVER_AWS_AUTH_REGION'),
      access_key_id: ENV.fetch('MONGO_RUBY_DRIVER_AWS_AUTH_ACCESS_KEY_ID'),
      secret_access_key: ENV.fetch('MONGO_RUBY_DRIVER_AWS_AUTH_SECRET_ACCESS_KEY'),
    }
    inspector = AwsUtils::Inspector.new(**opts)
    cluster = inspector.ecs_client.describe_clusters(
      clusters: [ENV.fetch('MONGO_RUBY_DRIVER_AWS_AUTH_ECS_CLUSTER_ARN')],
    ).clusters.first

    orchestrator = AwsUtils::Orchestrator.new(**opts)
    service_name = "mdb-ruby_test_#{SecureRandom.uuid}"
    puts
"Using service name: #{service_name}" service = orchestrator.provision_auth_ecs_task( cluster_name: cluster.cluster_name, service_name: service_name, security_group_id: ENV.fetch('MONGO_RUBY_DRIVER_AWS_AUTH_ECS_SECURITY_GROUP'), subnet_ids: ENV.fetch('MONGO_RUBY_DRIVER_AWS_AUTH_ECS_SUBNETS').split(','), task_definition_ref: ENV.fetch('MONGO_RUBY_DRIVER_AWS_AUTH_ECS_TASK_DEFINITION_ARN'), ) puts "Waiting for #{service_name} to become ready" orchestrator.wait_for_ecs_ready( cluster_name: cluster.cluster_name, service_name: service_name, ) puts "... OK" status = inspector.ecs_status( cluster_name: cluster.cluster_name, service_name: service.service_name, get_public_ip: false, get_logs: false, ) # Wait for the task to provision itself. In Evergreen I assume the image # already comes with SSH configured therefore this step is probably not # needed, but when we test using the driver tooling there is a reasonably # lengthy post-boot provisioning process that we need to wait for to # complete. begin Timeout.timeout(180) do begin Timeout.timeout(5) do # The StrictHostKeyChecking=no option is important here. # Note also that once this connection succeeds, this option # need not be passed again when connecting to the same IP. puts "Try to connect to #{status[:private_ip]}" puts `ssh -o StrictHostKeyChecking=no root@#{status[:private_ip]} id` end rescue Timeout::Error retry end end rescue Timeout::Error raise 'The task did not provision itself in 3 minutes' end File.open('.env.private.ecs', 'w') do |f| status.each do |k, v| f << "#{k.upcase}=#{v}\n" end end end end mongo-ruby-driver-2.21.3/.evergreen/lib/server_setup.rb000066400000000000000000000051021505113246500230470ustar00rootroot00000000000000require 'mongo' Mongo::Logger.logger.level = :WARN class ServerSetup def setup_aws_auth arn = env!('MONGO_RUBY_DRIVER_AWS_AUTH_USER_ARN') puts "Adding AWS-mapped user #{wildcard_arn(arn)} for #{arn}" create_aws_user(arn) puts 'Setup done' end def setup_tags cfg = client.command(replSetGetConfig: 1).documents.first.fetch('config') members = cfg['members'].sort_by { |info| info['host'] } members.each_with_index do |member, index| # For case-sensitive tag set testing, add a mixed case tag. unless member['arbiterOnly'] member['tags']['nodeIndex'] = index.to_s end end cfg['members'] = members cfg['version'] = cfg['version'] + 1 client.command(replSetReconfig: cfg) end def require_api_version client.cluster.next_primary # In sharded clusters, the parameter must be set on each mongos. if Mongo::Cluster::Topology::Sharded === client.cluster.topology client.cluster.servers.each do |server| host = server.address.seed Mongo::Client.new([host], client.options.merge(connect: :direct)) do |c| c.command(setParameter: 1, requireApiVersion: true) end end else client.command(setParameter: 1, requireApiVersion: true) end end private # Creates an appropriate AWS mapped user for the provided ARN. # # The mapped user does not use the specified ARN directly but instead # uses a derived wildcard ARN. Because of this, multiple ARNs can map # to the same user. def create_aws_user(arn) bootstrap_client.use('$external').database.users.create( wildcard_arn(arn), roles: [{role: 'root', db: 'admin'}], write_concern: {w: :majority, wtimeout: 5000}, ) end def wildcard_arn(arn) if arn.start_with?('arn:aws:sts::') arn.sub(%r,/[^/]+\z,, '/*') else arn end end def require_env_vars(vars) vars.each do |var| unless env?(var) raise "#{var} must be set in environment" end end end def env?(key) ENV[key] && !ENV[key].empty? 
end def env!(key) ENV[key].tap do |value| if value.nil? || value.empty? raise "Value for #{key} is required in environment" end end end def env_true?(key) %w(1 true yes).include?(ENV[key]&.downcase) end def client @client ||= Mongo::Client.new(ENV.fetch('MONGODB_URI')) end def bootstrap_client @bootstrap_client ||= Mongo::Client.new(ENV['MONGODB_URI'] || %w(localhost), user: 'bootstrap', password: 'bootstrap', auth_mech: :scram, auth_mech_properties: nil, ) end end mongo-ruby-driver-2.21.3/.evergreen/local-kerberos/000077500000000000000000000000001505113246500221345ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/.evergreen/local-kerberos/Dockerfile000066400000000000000000000011151505113246500241240ustar00rootroot00000000000000# https://help.ubuntu.com/lts/serverguide/kerberos.html FROM ubuntu:bionic ENV DEBIAN_FRONTEND=noninteractive RUN apt-get update RUN apt-get install -y krb5-kdc krb5-admin-server nvi less iproute2 COPY krb5.conf /etc/krb5.conf COPY kdc.conf /etc/krb5kdc/kdc.conf COPY kadm5.acl /etc/krb5kdc/kadm5.acl RUN (echo masterp; echo masterp) |kdb5_util create -s RUN (echo testp; echo testp) |kadmin.local addprinc test/test@LOCALKRB COPY entrypoint.sh entrypoint.sh ENTRYPOINT ["./entrypoint.sh"] CMD ["tail", "-f", "/var/log/kdc.log"] # Kerberos ports EXPOSE 88 #EXPOSE 464 #EXPOSE 749 mongo-ruby-driver-2.21.3/.evergreen/local-kerberos/README.md000066400000000000000000000031621505113246500234150ustar00rootroot00000000000000# Local Kerberos The scripts and configuration files in this directory provision a local Kerberos server via Docker. ## Usage Build the Docker image: docker build -t local-kerberos Run the container with the Kerberos server: docker run -it --init local-kerberos Note: the `--init` flag is important to be able to stop the container with Ctrl-C. The container by default tails the KDC log which should show authentication attempts by clients. When the container starts, it prints the instructions that need to be followed to use it, including its IP address. For convenience the instructions are repeated below. 1. Add the container's IP address to `/etc/hosts` on the host machine. For example, if the container's IP address is `172.17.0.3`, run: echo 172.17.0.3 krb.local | sudo tee -a /etc/hosts 2. Install `krb5-user` on the host machine: sudo apt-get install krb5-user This step may vary based on the host operating system. 3. Create `/etc/krb5.conf` with the contents of `krb5.conf` in this directory. 4. Log in using `kinit`: kinit test/test@LOCALKRB The password is `testp`. ## References The following resources were used to develop the provisioner: - [Kerberos instructions for Ubuntu](https://help.ubuntu.com/lts/serverguide/kerberos.html) - [Kerberos upstream instructions for configuring a KDC](https://web.mit.edu/kerberos/krb5-devel/doc/admin/install_kdc.html) - [kadm5.acl syntax](https://web.mit.edu/kerberos/krb5-devel/doc/admin/conf_files/kadm5_acl.html#kadm5-acl-5) - [Kerberos instructions for RHEL](https://www.rootusers.com/how-to-configure-linux-to-authenticate-using-kerberos/) mongo-ruby-driver-2.21.3/.evergreen/local-kerberos/entrypoint.sh000077500000000000000000000021611505113246500247060ustar00rootroot00000000000000#!/bin/sh echo 127.0.0.1 krb.local >>/etc/hosts krb5kdc kadmind # Check that the daemons are running: #ps awwxu # Check that kerberos is set up successfully and a user can authenticate: echo testp |kinit test/test@LOCALKRB echo Authentication test succeeded if ! 
grep docker-init /proc/1/cmdline; then echo echo NOTE: container is running without --init. Ctrl-C will not stop it. fi ip=`ip a |grep eth0 |grep inet |awk '{print $2}' |sed -e 's,/.*,,'` echo echo '===================================================================' echo echo To use this container for Kerberos authentication: echo echo 1. Add its IP address, $ip, to /etc/hosts: echo echo " echo $ip krb.local | sudo tee -a /etc/hosts" echo echo 2. Install krb5-user: echo echo ' sudo apt-get install krb5-user' echo echo 3. Create /etc/krb5.conf with the following contents: echo cat /etc/krb5.conf |sed -e 's/^/ /' echo echo "4. Log in using kinit with the password 'testp':" echo echo ' kinit test/test@LOCALKRB' echo echo '===================================================================' echo # sudo apt-get install krb5-user exec "$@" mongo-ruby-driver-2.21.3/.evergreen/local-kerberos/kadm5.acl000066400000000000000000000000041505113246500236100ustar00rootroot00000000000000* * mongo-ruby-driver-2.21.3/.evergreen/local-kerberos/kdc.conf000066400000000000000000000013571505113246500235520ustar00rootroot00000000000000[kdcdefaults] kdc_listen = 88 kdc_tcp_listen = 88 [realms] LOCALKRB = { kadmind_port = 749 max_life = 12h 0m 0s max_renewable_life = 7d 0h 0m 0s master_key_type = aes256-cts supported_enctypes = aes256-cts:normal aes128-cts:normal # If the default location does not suit your setup, # explicitly configure the following values: # database_name = /var/krb5kdc/principal # key_stash_file = /var/krb5kdc/.k5.ATHENA.MIT.EDU # acl_file = /var/krb5kdc/kadm5.acl } [logging] # By default, the KDC and kadmind will log output using # syslog. You can instead send log output to files like this: kdc = FILE:/var/log/kdc.log admin_server = FILE:/var/log/kadmin.log default = FILE:/var/log/klib.log mongo-ruby-driver-2.21.3/.evergreen/local-kerberos/kdc.conf.default000066400000000000000000000007771505113246500252020ustar00rootroot00000000000000[kdcdefaults] kdc_ports = 750,88 [realms] EXAMPLE.COM = { database_name = /var/lib/krb5kdc/principal admin_keytab = FILE:/etc/krb5kdc/kadm5.keytab acl_file = /etc/krb5kdc/kadm5.acl key_stash_file = /etc/krb5kdc/stash kdc_ports = 750,88 max_life = 10h 0m 0s max_renewable_life = 7d 0h 0m 0s master_key_type = des3-hmac-sha1 #supported_enctypes = aes256-cts:normal aes128-cts:normal default_principal_flags = +preauth } mongo-ruby-driver-2.21.3/.evergreen/local-kerberos/krb5.conf000066400000000000000000000001671505113246500236520ustar00rootroot00000000000000[libdefaults] default_realm = LOCALKRB [realms] LOCALKRB = { kdc = krb.local admin_server = krb.local } mongo-ruby-driver-2.21.3/.evergreen/local-kerberos/krb5.conf.default000066400000000000000000000053541505113246500253000ustar00rootroot00000000000000[libdefaults] default_realm = ATHENA.MIT.EDU # The following krb5.conf variables are only for MIT Kerberos. kdc_timesync = 1 ccache_type = 4 forwardable = true proxiable = true # The following encryption type specification will be used by MIT Kerberos # if uncommented. In general, the defaults in the MIT Kerberos code are # correct and overriding these specifications only serves to disable new # encryption types as they are added, creating interoperability problems. # # The only time when you might need to uncomment these lines and change # the enctypes is if you have local software that will break on ticket # caches containing ticket encryption types it doesn't know about (such as # old versions of Sun Java). 
# default_tgs_enctypes = des3-hmac-sha1 # default_tkt_enctypes = des3-hmac-sha1 # permitted_enctypes = des3-hmac-sha1 # The following libdefaults parameters are only for Heimdal Kerberos. fcc-mit-ticketflags = true [realms] ATHENA.MIT.EDU = { kdc = kerberos.mit.edu kdc = kerberos-1.mit.edu kdc = kerberos-2.mit.edu:88 admin_server = kerberos.mit.edu default_domain = mit.edu } ZONE.MIT.EDU = { kdc = casio.mit.edu kdc = seiko.mit.edu admin_server = casio.mit.edu } CSAIL.MIT.EDU = { admin_server = kerberos.csail.mit.edu default_domain = csail.mit.edu } IHTFP.ORG = { kdc = kerberos.ihtfp.org admin_server = kerberos.ihtfp.org } 1TS.ORG = { kdc = kerberos.1ts.org admin_server = kerberos.1ts.org } ANDREW.CMU.EDU = { admin_server = kerberos.andrew.cmu.edu default_domain = andrew.cmu.edu } CS.CMU.EDU = { kdc = kerberos-1.srv.cs.cmu.edu kdc = kerberos-2.srv.cs.cmu.edu kdc = kerberos-3.srv.cs.cmu.edu admin_server = kerberos.cs.cmu.edu } DEMENTIA.ORG = { kdc = kerberos.dementix.org kdc = kerberos2.dementix.org admin_server = kerberos.dementix.org } stanford.edu = { kdc = krb5auth1.stanford.edu kdc = krb5auth2.stanford.edu kdc = krb5auth3.stanford.edu master_kdc = krb5auth1.stanford.edu admin_server = krb5-admin.stanford.edu default_domain = stanford.edu } UTORONTO.CA = { kdc = kerberos1.utoronto.ca kdc = kerberos2.utoronto.ca kdc = kerberos3.utoronto.ca admin_server = kerberos1.utoronto.ca default_domain = utoronto.ca } [domain_realm] .mit.edu = ATHENA.MIT.EDU mit.edu = ATHENA.MIT.EDU .media.mit.edu = MEDIA-LAB.MIT.EDU media.mit.edu = MEDIA-LAB.MIT.EDU .csail.mit.edu = CSAIL.MIT.EDU csail.mit.edu = CSAIL.MIT.EDU .whoi.edu = ATHENA.MIT.EDU whoi.edu = ATHENA.MIT.EDU .stanford.edu = stanford.edu .slac.stanford.edu = SLAC.STANFORD.EDU .toronto.edu = UTORONTO.CA .utoronto.ca = UTORONTO.CA mongo-ruby-driver-2.21.3/.evergreen/mongodl.py000077700000000000000000000000001505113246500331512../.mod/drivers-evergreen-tools/.evergreen/mongodl.pyustar00rootroot00000000000000mongo-ruby-driver-2.21.3/.evergreen/patch-debuggers000077500000000000000000000010071505113246500222200ustar00rootroot00000000000000#!/bin/sh # Patches debuggers to not ask for confirmation on exit. # byebug tracking issue: https://github.com/deivid-rodriguez/byebug/issues/404 # byebug proposed patch: https://github.com/deivid-rodriguez/byebug/pull/605 root="$1" if test -z "$root"; then root=$HOME/.rbenv fi find "$root" -name quit.rb -path '*/byebug/*' -exec \ sed -e '/quit.confirmations.really/d' -i {} \; # JRuby ruby-debug find "$root" -name quit.rb -path '*/ruby-debug/*' -exec \ sed -e 's/confirm("Really quit.*")/true/' -i {} \; mongo-ruby-driver-2.21.3/.evergreen/provision-docker000077500000000000000000000011761505113246500224600ustar00rootroot00000000000000#!/bin/sh # Provisions the machine on which this script is running with the # required software to be able to build and run a Docker container with the # driver's test suite. # # After this script runs for the first time, the user needs to log out and # log back in to be able to issue Docker commands. # # This script may be run more than once, in which case it will try to attain # the same final machine state as it would have attained on a fresh instance. 
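# A quick way to verify the result without logging out (sketch only; assumes
# this script just added the current user to the "docker" group): `newgrp
# docker` starts a subshell with the new group membership already applied:
#
#   newgrp docker
#   docker info >/dev/null && echo docker is usable
#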
set -e sudo env DEBIAN_FRONTEND=noninteractive \ apt-get -y install docker.io ruby zsh sudo gem install dotenv --no-document user=`whoami` sudo usermod -aG docker "$user" mongo-ruby-driver-2.21.3/.evergreen/provision-local000077500000000000000000000023131505113246500222750ustar00rootroot00000000000000#!/bin/sh # Provisions the machine on which this script is running with the # required software to be able to run the Ruby driver test suite. # # This script may be run more than once, in which case it will try to attain # the same final machine state as it would have attained on a fresh instance. set -e # https://askubuntu.com/questions/132059/how-to-make-a-package-manager-wait-if-another-instance-of-apt-is-running while sudo fuser /var/{lib/{dpkg,apt/lists},cache/apt/archives}/lock; do echo Waiting for existing package manager commands to finish... 1>&2 sleep 1 done # psmisc is for fuser, which is used for detecting concurrent apt-get runs sudo env DEBIAN_FRONTEND=noninteractive \ apt-get -y install psmisc sudo env DEBIAN_FRONTEND=noninteractive \ apt-get -y install ruby curl zsh #sudo env DEBIAN_FRONTEND=noninteractive \ # apt-get -y install libcurl4 || sudo apt-get -y install libcurl3 # Need binutils for `strings` utility per # https://aws.amazon.com/premiumsupport/knowledge-center/ecs-iam-task-roles-config-errors/ sudo env DEBIAN_FRONTEND=noninteractive \ apt-get install -y libsnmp35 libyaml-0-2 gcc make git lsb-release \ krb5-user bzip2 libgmp-dev python3-pip python2.7-dev binutils mongo-ruby-driver-2.21.3/.evergreen/provision-remote000077500000000000000000000031661505113246500225050ustar00rootroot00000000000000#!/bin/bash # Copies the current directory to the specified target, then runs the # provision script on the target. # # The current directory is copied into the `work` subdirectory of the user's # home directory on the target. # # The target is meant to be an EC2 instance which will be provisioned with the # required software to be able to build and run a Docker container with the # driver's test suite. set -e target="$1" if test -z "$target"; then echo Usage: `basename $0` user@host 1>&2 exit 1 fi shift method="$1" . `dirname $0`/functions-remote.sh # Waiting for previous apt runs: # https://askubuntu.com/questions/132059/how-to-make-a-package-manager-wait-if-another-instance-of-apt-is-running # FIXME: Assumes we are running on ubuntu1804 which is true in Evergreen # but not necessarily true in local testing. do_ssh "$target" ' while sudo fuser /var/{lib/{dpkg,apt/lists},cache/apt/archives}/lock; do echo Waiting for existing package manager commands to finish... 1>&2 && sleep 1 done && if test `id -u` = 0; then apt-get update && env DEBIAN_FRONTEND=noninteractive apt-get -y install rsync sudo psmisc else sudo apt-get update && sudo env DEBIAN_FRONTEND=noninteractive apt-get -y install rsync psmisc fi && curl -fL --retry 3 https://github.com/p-mongodb/deps/raw/main/ubuntu1804-python37.tar.xz | \ tar xfJ - -C /opt ' do_rsync --delete --exclude .git --exclude .env.private\* -a \ --exclude gem-private_key.pem \ . $target:work if test "$method" = local; then script=provision-local else script=provision-docker fi do_ssh "$target" "cd work && ./.evergreen/$script" mongo-ruby-driver-2.21.3/.evergreen/run-tests-atlas-full.sh000077500000000000000000000006111505113246500235730ustar00rootroot00000000000000#!/bin/bash set -ex . `dirname "$0"`/../spec/shared/shlib/distro.sh . `dirname "$0"`/../spec/shared/shlib/set_env.sh . 
`dirname "$0"`/functions.sh set_env_vars set_env_python set_env_ruby bundle_install ATLAS_URI=$MONGODB_URI \ SERVERLESS=1 \ EXAMPLE_TIMEOUT=600 \ bundle exec rspec -fd spec/integration/search_indexes_prose_spec.rb test_status=$? kill_jruby exit ${test_status} mongo-ruby-driver-2.21.3/.evergreen/run-tests-atlas.sh000077500000000000000000000004371505113246500226410ustar00rootroot00000000000000#!/bin/bash set -ex . `dirname "$0"`/../spec/shared/shlib/distro.sh . `dirname "$0"`/../spec/shared/shlib/set_env.sh . `dirname "$0"`/functions.sh set_env_vars set_env_python set_env_ruby bundle_install echo "Running specs" export ATLAS_TESTING=1 bundle exec rspec spec/atlas -fd mongo-ruby-driver-2.21.3/.evergreen/run-tests-aws-auth.sh000077500000000000000000000146731505113246500232750ustar00rootroot00000000000000#!/bin/bash set -e # IMPORTANT: Don't set trace (-x) to avoid secrets showing up in the logs. set +x . `dirname "$0"`/functions.sh # When running in Evergreen, credentials are written to this file. # In Docker they are already in the environment and the file does not exist. if test -f .env.private; then . ./.env.private fi # The AWS auth-related Evergreen variables are set the same way for most/all # drivers. Therefore we don't want to change the variable names in order to # transparently benefit from possible updates to these credentials in # the future. # # At the same time, the chosen names do not cleanly map to our configurations, # therefore to keep the rest of our test suite readable we perform the # remapping in this file. case "$AUTH" in aws-regular) export MONGO_RUBY_DRIVER_AWS_AUTH_ACCESS_KEY_ID="`get_var IAM_AUTH_ECS_ACCOUNT`" export MONGO_RUBY_DRIVER_AWS_AUTH_SECRET_ACCESS_KEY="`get_var IAM_AUTH_ECS_SECRET_ACCESS_KEY`" export MONGO_RUBY_DRIVER_AWS_AUTH_USER_ARN="`get_var IAM_AUTH_ECS_ACCOUNT_ARN`" ;; aws-assume-role) export MONGO_RUBY_DRIVER_AWS_AUTH_ACCESS_KEY_ID="`get_var IAM_AUTH_ASSUME_AWS_ACCOUNT`" export MONGO_RUBY_DRIVER_AWS_AUTH_SECRET_ACCESS_KEY="`get_var IAM_AUTH_ASSUME_AWS_SECRET_ACCESS_KEY`" # This is the ARN provided in the AssumeRole request. It is different # from the ARN that the credentials returned by the AssumeRole request # resolve to. export MONGO_RUBY_DRIVER_AWS_AUTH_ASSUME_ROLE_ARN="`get_var IAM_AUTH_ASSUME_ROLE_NAME`" # This is the ARN that the credentials obtained by the AssumeRole # request resolve to. It is hardcoded in # https://github.com/mongodb-labs/drivers-evergreen-tools/blob/master/.evergreen/auth_aws/aws_e2e_assume_role.js # and is not given as an Evergreen variable. # Note: the asterisk at the end is manufactured by the server and not # obtained from STS. See https://jira.mongodb.org/browse/RUBY-2425. export MONGO_RUBY_DRIVER_AWS_AUTH_USER_ARN="arn:aws:sts::557821124784:assumed-role/authtest_user_assume_role/*" ;; aws-ec2) export MONGO_RUBY_DRIVER_AWS_AUTH_ACCESS_KEY_ID="`get_var IAM_AUTH_EC2_INSTANCE_ACCOUNT`" export MONGO_RUBY_DRIVER_AWS_AUTH_SECRET_ACCESS_KEY="`get_var IAM_AUTH_EC2_INSTANCE_SECRET_ACCESS_KEY`" export MONGO_RUBY_DRIVER_AWS_AUTH_INSTANCE_PROFILE_ARN="`get_var IAM_AUTH_EC2_INSTANCE_PROFILE`" # Region is not specified in Evergreen but can be specified when # testing locally. export MONGO_RUBY_DRIVER_AWS_AUTH_REGION=${MONGO_RUBY_DRIVER_AWS_AUTH_REGION:=us-east-1} if test -z "$MONGO_RUBY_DRIVER_AWS_AUTH_USER_ARN"; then # This is the ARN that the credentials obtained via EC2 instance metadata # resolve to. 
It is hardcoded in # https://github.com/mongodb-labs/drivers-evergreen-tools/blob/master/.evergreen/auth_aws/aws_e2e_ec2.js # and is not given as an Evergreen variable. # If you are testing with a different AWS account, your user ARN will be # different. You can specify your ARN by populating the environment # variable manually. export MONGO_RUBY_DRIVER_AWS_AUTH_USER_ARN="arn:aws:sts::557821124784:assumed-role/authtest_instance_profile_role/*" fi export TEST_CMD=${TEST_CMD:=rspec spec/integration/aws*spec.rb spec/integration/client_construction_aws*spec.rb} ;; aws-ecs) export MONGO_RUBY_DRIVER_AWS_AUTH_ACCESS_KEY_ID="`get_var IAM_AUTH_ECS_ACCOUNT`" export MONGO_RUBY_DRIVER_AWS_AUTH_SECRET_ACCESS_KEY="`get_var IAM_AUTH_ECS_SECRET_ACCESS_KEY`" export MONGO_RUBY_DRIVER_AWS_AUTH_ECS_CLUSTER_ARN="`get_var IAM_AUTH_ECS_CLUSTER`" export MONGO_RUBY_DRIVER_AWS_AUTH_ECS_SECURITY_GROUP="`get_var IAM_AUTH_ECS_SECURITY_GROUP`" export MONGO_RUBY_DRIVER_AWS_AUTH_ECS_SUBNETS="`get_var IAM_AUTH_ECS_SUBNET_A`,`get_var IAM_AUTH_ECS_SUBNET_B`" export MONGO_RUBY_DRIVER_AWS_AUTH_ECS_TASK_DEFINITION_ARN="`get_var IAM_AUTH_ECS_TASK_DEFINITION`" # Region is not specified in Evergreen but can be specified when # testing locally. export MONGO_RUBY_DRIVER_AWS_AUTH_REGION=${MONGO_RUBY_DRIVER_AWS_AUTH_REGION:=us-east-1} if test -z "$MONGO_RUBY_DRIVER_AWS_AUTH_USER_ARN"; then # This is the ARN that the credentials obtained via ECS task metadata # resolve to. It is hardcoded in # https://github.com/mongodb-labs/drivers-evergreen-tools/blob/master/.evergreen/auth_aws/lib/ecs_hosted_test.js # and is not given as an Evergreen variable. # If you are testing with a different AWS account, your user ARN will be # different. You can specify your ARN by populating the environment # variable manually. export MONGO_RUBY_DRIVER_AWS_AUTH_USER_ARN="arn:aws:sts::557821124784:assumed-role/ecsTaskExecutionRole/*" fi export TEST_CMD=${TEST_CMD:=rspec spec/integration/aws*spec.rb spec/integration/client_construction_aws*spec.rb} exec `dirname $0`/run-tests-ecs.sh ;; aws-web-identity) cd `dirname "$0"`/auth_aws echo "Activating virtual environment 'authawsvenv'..." . ./activate-authawsvenv.sh export AWS_ACCESS_KEY_ID="`get_var IAM_AUTH_EC2_INSTANCE_ACCOUNT`" export AWS_SECRET_ACCESS_KEY="`get_var IAM_AUTH_EC2_INSTANCE_SECRET_ACCESS_KEY`" echo "Unassigning instance profile..." 
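# An instance profile attached to the running instance would supply ambient
# credentials and interfere with the web identity tests (see the
# clear_instance_profile comment in functions-aws.sh), so unassign it first.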
python -u lib/aws_unassign_instance_profile.py unset AWS_ACCESS_KEY_ID unset AWS_SECRET_ACCESS_KEY export IDP_ISSUER="`get_var IAM_WEB_IDENTITY_ISSUER`" export IDP_JWKS_URI="`get_var IAM_WEB_IDENTITY_JWKS_URI`" export IDP_RSA_KEY="`get_var IAM_WEB_IDENTITY_RSA_KEY`" export AWS_WEB_IDENTITY_TOKEN_FILE="`get_var IAM_WEB_IDENTITY_TOKEN_FILE`" python -u lib/aws_handle_oidc_creds.py token unset IDP_ISSUER unset IDP_JWKS_URI unset IDP_RSA_KEY deactivate cd - export MONGO_RUBY_DRIVER_AWS_AUTH_ACCESS_KEY_ID="`get_var IAM_AUTH_EC2_INSTANCE_ACCOUNT`" export MONGO_RUBY_DRIVER_AWS_AUTH_SECRET_ACCESS_KEY="`get_var IAM_AUTH_EC2_INSTANCE_SECRET_ACCESS_KEY`" export AWS_WEB_IDENTITY_TOKEN_FILE="`get_var IAM_WEB_IDENTITY_TOKEN_FILE`" export AWS_ROLE_ARN="`get_var IAM_AUTH_ASSUME_WEB_ROLE_NAME`" export MONGO_RUBY_DRIVER_AWS_AUTH_ASSUME_ROLE_ARN="`get_var IAM_AUTH_ASSUME_WEB_ROLE_NAME`" export MONGO_RUBY_DRIVER_AWS_AUTH_USER_ARN="arn:aws:sts::857654397073:assumed-role/webIdentityTestRole/*" export TEST_CMD=${TEST_CMD:=rspec spec/integration/aws*spec.rb spec/integration/client_construction_aws*spec.rb} ;; *) echo "Unknown AUTH value $AUTH" 1>&2 exit 1 ;; esac exec `dirname $0`/run-tests.sh mongo-ruby-driver-2.21.3/.evergreen/run-tests-azure.sh000077500000000000000000000007511505113246500226620ustar00rootroot00000000000000#!/bin/bash set -ex . `dirname "$0"`/../spec/shared/shlib/distro.sh . `dirname "$0"`/../spec/shared/shlib/set_env.sh . `dirname "$0"`/functions.sh set_env_vars set_env_python set_env_ruby sudo apt-get -y install libyaml-dev cmake bundle_install echo "Running specs" export MONGO_RUBY_DRIVER_CRYPT_SHARED_LIB_PATH=${CRYPT_SHARED_LIB_PATH} bundle exec rake spec:prepare bundle exec rspec spec/integration/client_side_encryption/on_demand_azure_credentials_spec.rb exit ${test_status} mongo-ruby-driver-2.21.3/.evergreen/run-tests-deployed-lambda.sh000077500000000000000000000006131505113246500245540ustar00rootroot00000000000000#!/bin/bash set -ex . `dirname "$0"`/../spec/shared/shlib/distro.sh . `dirname "$0"`/../spec/shared/shlib/set_env.sh . `dirname "$0"`/functions.sh set_env_vars set_env_python set_env_ruby export MONGODB_URI=${MONGODB_URI} export CLUSTER_PREFIX="ruby-driver-" export TEST_LAMBDA_DIRECTORY=`dirname "$0"`/../spec/faas/ruby-sam-app . `dirname "$0"`/aws_lambda/run-deployed-lambda-aws-tests.sh mongo-ruby-driver-2.21.3/.evergreen/run-tests-docker.sh000077500000000000000000000014051505113246500230000ustar00rootroot00000000000000#!/bin/bash set -e set -o pipefail if echo "$AUTH" |grep -q ^aws; then # Do not set -x as this will expose passwords in Evergreen logs set +x else set -x fi params= for var in MONGODB_VERSION TOPOLOGY RVM_RUBY \ OCSP_ALGORITHM OCSP_STATUS OCSP_DELEGATE OCSP_MUST_STAPLE \ OCSP_CONNECTIVITY OCSP_VERIFIER FLE \ AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_DEFAULT_REGION CRYPT_SHARED_VERSION MONGO_RUBY_DRIVER_AZURE_METADATA_HOST do value="${!var}" if test -n "$value"; then params="$params $var=${!var}" fi done if test -f .env.private; then params="$params -a .env.private" gem install dotenv || gem install --user dotenv fi # OCSP verifier tests need debian10 so that ocsp mock works ./.evergreen/test-on-docker -p -d $DOCKER_DISTRO $params mongo-ruby-driver-2.21.3/.evergreen/run-tests-ecs.sh000077500000000000000000000015651505113246500223120ustar00rootroot00000000000000#!/bin/bash set -e # IMPORTANT: Don't set trace (-x) to avoid secrets showing up in the logs. set +x MRSS_ROOT=`dirname "$0"`/../spec/shared . $MRSS_ROOT/shlib/distro.sh . $MRSS_ROOT/shlib/set_env.sh . 
$MRSS_ROOT/shlib/config.sh . `dirname "$0"`/functions.sh . `dirname "$0"`/functions-config.sh show_local_instructions set_home set_env_vars set_env_python set_env_ruby bundle install --quiet ruby -I.evergreen/lib -Ispec -recs_setup -e EcsSetup.new.run eval `cat .env.private.ecs` ./.evergreen/provision-remote root@$PRIVATE_IP local ./.evergreen/test-remote root@$PRIVATE_IP \ env AUTH=aws-ecs \ RVM_RUBY=$RVM_RUBY MONGODB_VERSION=$MONGODB_VERSION \ MONGO_RUBY_DRIVER_AWS_AUTH_USER_ARN="$MONGO_RUBY_DRIVER_AWS_AUTH_USER_ARN" \ TOPOLOGY="$TOPOLOGY" \ TEST_CMD="$TEST_CMD" .evergreen/run-tests.sh mkdir -p tmp scp root@$PRIVATE_IP:work/tmp/rspec.json tmp/ mongo-ruby-driver-2.21.3/.evergreen/run-tests-gcp.sh000077500000000000000000000010271505113246500223020ustar00rootroot00000000000000#!/bin/bash set -ex . `dirname "$0"`/../spec/shared/shlib/distro.sh . `dirname "$0"`/../spec/shared/shlib/set_env.sh . `dirname "$0"`/../spec/shared/shlib/server.sh . `dirname "$0"`/functions.sh set_env_vars set_env_python set_env_ruby sudo apt-get -y install libyaml-dev cmake bundle_install echo "Running specs" export MONGO_RUBY_DRIVER_CRYPT_SHARED_LIB_PATH=${CRYPT_SHARED_LIB_PATH} bundle exec rake spec:prepare bundle exec rspec spec/integration/client_side_encryption/on_demand_gcp_credentials_spec.rb exit ${test_status} mongo-ruby-driver-2.21.3/.evergreen/run-tests-kerberos-integration.sh000077500000000000000000000046641505113246500257000ustar00rootroot00000000000000#!/bin/bash set -e # IMPORTANT: Don't set trace (-x) to avoid secrets showing up in the logs. set +x MRSS_ROOT=`dirname "$0"`/../spec/shared . $MRSS_ROOT/shlib/distro.sh . $MRSS_ROOT/shlib/set_env.sh . $MRSS_ROOT/shlib/config.sh . `dirname "$0"`/functions.sh . `dirname "$0"`/functions-kerberos.sh . `dirname "$0"`/functions-config.sh arch=`host_distro` show_local_instructions set_env_vars set_env_python set_env_ruby # Note that: # # 1. .env.private is supposed to be in Dotenv format which supports # multi-line values. Currently all values set for Kerberos tests are # single-line hence this isn't an issue. # # 2. The database for Kerberos is $external. This means the file cannot be # simply sourced into the shell, as that would expand $external as a # variable. # # To assign variables in a loop: # https://unix.stackexchange.com/questions/348175/bash-scope-of-variables-in-a-for-loop-using-tee # # When running the tests via Docker, .env.private does not exist and instead # all of the variables in it are written into the image (and are already # available at this point). if test -f ./.env.private; then while read line; do k=`echo "$line" |awk -F= '{print $1}'` v=`echo "$line" |awk -F= '{print $2}'` eval export $k="'"$v"'" done < <(cat ./.env.private) fi if test -n "$SASL_HOST"; then configure_for_external_kerberos else configure_local_kerberos fi configure_kerberos_ip_addr # To test authentication using the mongo shell, note that the host name # must be uppercased when it is used in the username. # The following call works when using the docker image: # /opt/mongodb/bin/mongosh --host $SASL_HOST --authenticationMechanism=GSSAPI \ # --authenticationDatabase='$external' --username $SASL_USER@`echo $SASL_HOST |tr a-z A-Z` echo "Install dependencies" export BUNDLE_GEMFILE=gemfiles/mongo_kerberos.gemfile bundle_install # need to build the native extension, since it doesn't seem to build correctly # when installed via github. 
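# To do that: change into the installed gem's directory, run bundle install
# and rake compile there, then restore the original BUNDLE_GEMFILE before
# running the tests.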
curdir=`pwd`
cd `bundle info --path mongo_kerberos`
# unset the BUNDLE_GEMFILE variable so the mongo_kerberos rakefile doesn't
# get confused by it...
saved_gemfile=$BUNDLE_GEMFILE
unset BUNDLE_GEMFILE
bundle install
rake compile
cd $curdir
export BUNDLE_GEMFILE=$saved_gemfile

bundle list

export MONGO_RUBY_DRIVER_KERBEROS=1
export MONGO_RUBY_DRIVER_KERBEROS_INTEGRATION=1

if test -n "$TEST_CMD"; then
  eval $TEST_CMD
else
  echo "Running tests"
  bundle exec rspec spec/kerberos
fi

mongo-ruby-driver-2.21.3/.evergreen/run-tests-kerberos-unit.sh

#!/bin/bash

set -ex

MRSS_ROOT=`dirname "$0"`/../spec/shared

. $MRSS_ROOT/shlib/distro.sh
. $MRSS_ROOT/shlib/set_env.sh
. $MRSS_ROOT/shlib/config.sh
. `dirname "$0"`/functions.sh
. `dirname "$0"`/functions-config.sh

arch=`host_distro`

show_local_instructions

set_env_vars
set_env_python
set_env_ruby

export BUNDLE_GEMFILE=gemfiles/mongo_kerberos.gemfile
bundle_install

export MONGO_RUBY_DRIVER_KERBEROS=1

bundle exec rspec \
  spec/spec_tests/uri_options_spec.rb \
  spec/spec_tests/connection_string_spec.rb \
  spec/mongo/uri/srv_protocol_spec.rb \
  spec/mongo/uri_spec.rb \
  spec/integration/client_authentication_options_spec.rb

mongo-ruby-driver-2.21.3/.evergreen/run-tests-serverless.sh

#!/bin/bash

set -ex

. `dirname "$0"`/../spec/shared/shlib/distro.sh
. `dirname "$0"`/../spec/shared/shlib/set_env.sh
. `dirname "$0"`/functions.sh

set_env_vars
set_env_python
set_env_ruby

source ${DRIVERS_TOOLS}/.evergreen/serverless/secrets-export.sh

bundle_install

export MONGODB_URI=`echo ${SERVERLESS_URI} | sed -r 's/mongodb\+srv:\/\//mongodb\+srv:\/\/'"${SERVERLESS_ATLAS_USER}"':'"${SERVERLESS_ATLAS_PASSWORD}@"'/g'`
export TOPOLOGY="load-balanced"

if [ -n "${CRYPT_SHARED_LIB_PATH}" ]; then
  echo crypt_shared already present at ${CRYPT_SHARED_LIB_PATH} -- using this version
  export MONGO_RUBY_DRIVER_CRYPT_SHARED_LIB_PATH=$CRYPT_SHARED_LIB_PATH
else
  python3 -u .evergreen/mongodl.py --component crypt_shared -V ${SERVERLESS_MONGODB_VERSION} --out `pwd`/csfle_lib --target `host_distro` || true
  if test -f `pwd`/csfle_lib/lib/mongo_crypt_v1.so
  then
    echo Using crypt shared library version ${SERVERLESS_MONGODB_VERSION}
    export MONGO_RUBY_DRIVER_CRYPT_SHARED_LIB_PATH=`pwd`/csfle_lib/lib/mongo_crypt_v1.so
  else
    echo Failed to download crypt shared library
    exit -1
  fi
fi

if ! ( test -f /etc/os-release && grep -q ^ID.*ubuntu /etc/os-release && grep -q ^VERSION_ID.*22.04 /etc/os-release ); then
  echo Serverless tests assume ubuntu2204
  echo If this has changed, update .evergreen/run-tests-serverless.sh as necessary
  exit -1
fi

mkdir libmongocrypt
cd libmongocrypt
curl --retry 3 -fLo libmongocrypt-all.tar.gz "https://s3.amazonaws.com/mciuploads/libmongocrypt/all/master/latest/libmongocrypt-all.tar.gz"
tar xf libmongocrypt-all.tar.gz
# We assume that serverless tests always use ubuntu2204
export LIBMONGOCRYPT_PATH=`pwd`/ubuntu2204-64/nocrypto/lib/libmongocrypt.so
cd -

cd .evergreen/csfle
.
./activate-kmstlsvenv.sh pip install boto3~=1.19 'cryptography<3.4' pykmip~=0.10.0 'sqlalchemy<2.0.0' python -u ./kms_http_server.py --ca_file ../x509gen/ca.pem --cert_file ../x509gen/server.pem --port 7999 & python -u ./kms_http_server.py --ca_file ../x509gen/ca.pem --cert_file ../x509gen/expired.pem --port 8000 & python -u ./kms_http_server.py --ca_file ../x509gen/ca.pem --cert_file ../x509gen/wrong-host.pem --port 8001 & python -u ./kms_http_server.py --ca_file ../x509gen/ca.pem --cert_file ../x509gen/server.pem --port 8002 --require_client_cert & python -u ./kms_kmip_server.py & echo "Waiting for mock KMS servers to start..." wait_for_kms_server() { for i in $(seq 60); do if curl -s "localhost:$1"; test $? -ne 7; then return 0 else sleep 1 fi done echo "Could not detect mock KMS server on port $1" return 1 } wait_for_kms_server 8000 wait_for_kms_server 8001 wait_for_kms_server 8002 wait_for_kms_server 5698 echo "Waiting for mock KMS servers to start... done." # Obtain temporary AWS credentials pip3 install boto3 PYTHON=python3 . ./set-temp-creds.sh cd - echo "Running specs" bundle exec rspec \ spec/spec_tests/client_side_encryption_spec.rb \ spec/spec_tests/crud_spec.rb \ spec/spec_tests/retryable_reads_spec.rb \ spec/spec_tests/retryable_writes_spec.rb \ spec/spec_tests/transactions_spec.rb \ spec/spec_tests/change_streams_unified_spec.rb \ spec/spec_tests/client_side_encryption_unified_spec.rb \ spec/spec_tests/command_monitoring_unified_spec.rb \ spec/spec_tests/crud_unified_spec.rb \ spec/spec_tests/gridfs_unified_spec.rb \ spec/spec_tests/retryable_reads_unified_spec.rb \ spec/spec_tests/retryable_writes_unified_spec.rb \ spec/spec_tests/sdam_unified_spec.rb \ spec/spec_tests/sessions_unified_spec.rb \ spec/spec_tests/transactions_unified_spec.rb kill_jruby # Terminate all kmip servers... and whatever else happens to be running # that is a python script. pkill python exit ${test_status} mongo-ruby-driver-2.21.3/.evergreen/run-tests.sh000077500000000000000000000316221505113246500215370ustar00rootroot00000000000000#!/bin/bash # Note that mlaunch is executed with (and therefore installed with) Python 2. # The reason for this is that in the past, some of the distros we tested on # had an ancient version of Python 3 that was unusable (e.g. it couldn't # install anything from PyPI due to outdated TLS/SSL implementation). # It is likely that all of the current distros we use have a recent enough # and working Python 3 implementation, such that we could use Python 3 for # everything. # # Note that some distros (e.g. ubuntu2004) do not contain a `python' binary # at all, thus python2 or python3 must be explicitly specified depending on # the desired version. set -e set -o pipefail if echo "$AUTH" |grep -q ^aws; then # Do not set -x as this will expose passwords in Evergreen logs set +x else set -x fi if test -z "$PROJECT_DIRECTORY"; then PROJECT_DIRECTORY=`realpath $(dirname $0)/..` fi MRSS_ROOT=`dirname "$0"`/../spec/shared . $MRSS_ROOT/shlib/distro.sh . $MRSS_ROOT/shlib/set_env.sh . $MRSS_ROOT/shlib/server.sh . $MRSS_ROOT/shlib/config.sh . `dirname "$0"`/functions.sh . `dirname "$0"`/functions-aws.sh . 
`dirname "$0"`/functions-config.sh arch=`host_distro` show_local_instructions set_home set_env_vars set_env_python set_env_ruby prepare_server if test "$DOCKER_PRELOAD" != 1; then install_mlaunch_venv fi # Make sure cmake is installed (in case we need to install the libmongocrypt # helper) if [ "$FLE" = "helper" ]; then install_cmake fi if test "$TOPOLOGY" = load-balanced; then install_haproxy fi # Launching mongod under $MONGO_ORCHESTRATION_HOME # makes its log available through log collecting machinery export dbdir="$MONGO_ORCHESTRATION_HOME"/db mkdir -p "$dbdir" if test -z "$TOPOLOGY"; then export TOPOLOGY=standalone fi calculate_server_args launch_ocsp_mock launch_server "$dbdir" uri_options="$URI_OPTIONS" bundle_install if test "$TOPOLOGY" = sharded-cluster; then if test -n "$SINGLE_MONGOS"; then # Some tests may run into https://jira.mongodb.org/browse/SERVER-16836 # when executing against a multi-sharded mongos. # At the same time, due to pinning in sharded transactions, # it is beneficial to test a single shard to ensure that server # monitoring and selection are working correctly and recover the driver's # ability to operate in reasonable time after errors and fail points trigger # on a single shard echo Restricting to a single mongos hosts=localhost:27017 else hosts=localhost:27017,localhost:27018 fi elif test "$TOPOLOGY" = replica-set; then # To set FCV we use mongo shell, it needs to be placed in replica set topology # or it can try to send the commands to secondaries. hosts=localhost:27017,localhost:27018 uri_options="$uri_options&replicaSet=test-rs" elif test "$TOPOLOGY" = replica-set-single-node; then hosts=localhost:27017 uri_options="$uri_options&replicaSet=test-rs" else hosts=localhost:27017 fi if test "$AUTH" = auth; then hosts="bob:pwd123@$hosts" elif test "$AUTH" = x509; then create_user_cmd="`cat <<'EOT' db.getSiblingDB("$external").runCommand( { createUser: "C=US,ST=New York,L=New York City,O=MongoDB,OU=x509,CN=localhost", roles: [ { role: "root", db: "admin" }, ], writeConcern: { w: "majority" , wtimeout: 5000 }, } ) EOT `" "$BINDIR"/mongosh --tls \ --tlsCAFile spec/support/certificates/ca.crt \ --tlsCertificateKeyFile spec/support/certificates/client-x509.pem \ -u bootstrap -p bootstrap \ --eval "$create_user_cmd" elif test "$AUTH" = aws-regular; then clear_instance_profile ruby -Ilib -I.evergreen/lib -rserver_setup -e ServerSetup.new.setup_aws_auth hosts="`uri_escape $MONGO_RUBY_DRIVER_AWS_AUTH_ACCESS_KEY_ID`:`uri_escape $MONGO_RUBY_DRIVER_AWS_AUTH_SECRET_ACCESS_KEY`@$hosts" elif test "$AUTH" = aws-assume-role; then clear_instance_profile ./.evergreen/aws -a "$MONGO_RUBY_DRIVER_AWS_AUTH_ACCESS_KEY_ID" \ -s "$MONGO_RUBY_DRIVER_AWS_AUTH_SECRET_ACCESS_KEY" \ -r us-east-1 \ assume-role "$MONGO_RUBY_DRIVER_AWS_AUTH_ASSUME_ROLE_ARN" >.env.private.gen eval `cat .env.private.gen` export MONGO_RUBY_DRIVER_AWS_AUTH_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID export MONGO_RUBY_DRIVER_AWS_AUTH_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY export MONGO_RUBY_DRIVER_AWS_AUTH_SESSION_TOKEN=$AWS_SESSION_TOKEN ruby -Ilib -I.evergreen/lib -rserver_setup -e ServerSetup.new.setup_aws_auth export AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID export AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY export AWS_SESSION_TOKEN=$AWS_SESSION_TOKEN aws sts get-caller-identity hosts="`uri_escape $MONGO_RUBY_DRIVER_AWS_AUTH_ACCESS_KEY_ID`:`uri_escape $MONGO_RUBY_DRIVER_AWS_AUTH_SECRET_ACCESS_KEY`@$hosts" uri_options="$uri_options&"\ "authMechanismProperties=AWS_SESSION_TOKEN:`uri_escape 
$MONGO_RUBY_DRIVER_AWS_AUTH_SESSION_TOKEN`" elif test "$AUTH" = aws-ec2; then ruby -Ilib -I.evergreen/lib -rserver_setup -e ServerSetup.new.setup_aws_auth # We need to assign an instance profile to the current instance, otherwise # since we don't place credentials into the environment the test suite # cannot connect to the MongoDB server while bootstrapping. # The EC2 credential retrieval tests clears the instance profile as part # of one of the tests. ruby -Ispec -Ilib -I.evergreen/lib -rec2_setup -e Ec2Setup.new.assign_instance_profile elif test "$AUTH" = aws-ecs; then if test -z "$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI"; then # drivers-evergreen-tools performs this operation in its ECS E2E tester. eval export `strings /proc/1/environ |grep ^AWS_CONTAINER_CREDENTIALS_RELATIVE_URI` fi ruby -Ilib -I.evergreen/lib -rserver_setup -e ServerSetup.new.setup_aws_auth elif test "$AUTH" = aws-web-identity; then clear_instance_profile ruby -Ilib -I.evergreen/lib -rserver_setup -e ServerSetup.new.setup_aws_auth elif test "$AUTH" = kerberos; then export MONGO_RUBY_DRIVER_KERBEROS=1 fi if test -n "$FLE"; then # Downloading crypt shared lib if [ -z "$MONGO_CRYPT_SHARED_DOWNLOAD_URL" ]; then crypt_shared_version=${CRYPT_SHARED_VERSION:-$("${BINDIR}"/mongod --version | grep -oP 'db version v\K.*')} python3 -u .evergreen/mongodl.py --component crypt_shared -V ${crypt_shared_version} --out $(pwd)/csfle_lib --target $(host_distro) || true if test -f $(pwd)/csfle_lib/lib/mongo_crypt_v1.so then export MONGO_RUBY_DRIVER_CRYPT_SHARED_LIB_PATH=$(pwd)/csfle_lib/lib/mongo_crypt_v1.so else echo 'Could not find crypt_shared library' fi else echo "Downloading crypt_shared package from $MONGO_CRYPT_SHARED_DOWNLOAD_URL" mkdir -p $(pwd)/csfle_lib cd $(pwd)/csfle_lib curl --retry 3 -fL $MONGO_CRYPT_SHARED_DOWNLOAD_URL | tar zxf - export MONGO_RUBY_DRIVER_CRYPT_SHARED_LIB_PATH=$(pwd)/lib/mongo_crypt_v1.so cd - fi # Start the KMS servers first so that they are launching while we are # fetching libmongocrypt. if test "$DOCKER_PRELOAD" != 1; then # We already have a virtualenv activated for mlaunch, # install kms dependencies into it. #. .evergreen/csfle/activate_venv.sh # Adjusted package versions: # cryptography 3.4 requires rust, see # https://github.com/pyca/cryptography/issues/5771. #pip install boto3~=1.19 cryptography~=3.4.8 pykmip~=0.10.0 pip3 install boto3~=1.19 'cryptography<3.4' pykmip~=0.10.0 'sqlalchemy<2.0.0' fi python3 -u .evergreen/csfle/kms_http_server.py --ca_file .evergreen/x509gen/ca.pem --cert_file .evergreen/x509gen/server.pem --port 7999 & python3 -u .evergreen/csfle/kms_http_server.py --ca_file .evergreen/x509gen/ca.pem --cert_file .evergreen/x509gen/expired.pem --port 8000 & python3 -u .evergreen/csfle/kms_http_server.py --ca_file .evergreen/x509gen/ca.pem --cert_file .evergreen/x509gen/wrong-host.pem --port 8001 & python3 -u .evergreen/csfle/kms_http_server.py --ca_file .evergreen/x509gen/ca.pem --cert_file .evergreen/x509gen/server.pem --port 8002 --require_client_cert & python3 -u .evergreen/csfle/kms_kmip_server.py & python3 -u .evergreen/csfle/fake_azure.py & python3 -u .evergreen/csfle/kms_failpoint_server.py --port 9003 & # Obtain temporary AWS credentials PYTHON=python3 . 
.evergreen/csfle/set-temp-creds.sh if test "$FLE" = helper; then echo "Using helper gem" elif test "$FLE" = path; then if false; then # We would ideally like to use the actual libmongocrypt binary here, # however there isn't a straightforward way to obtain a binary that # 1) is of a release version and 2) doesn't contain crypto. # These could be theoretically spelunked out of libmongocrypt's # evergreen tasks. curl --retry 3 -fLo libmongocrypt-all.tar.gz "https://s3.amazonaws.com/mciuploads/libmongocrypt/all/master/latest/libmongocrypt-all.tar.gz" tar xf libmongocrypt-all.tar.gz export LIBMONGOCRYPT_PATH=`pwd`/rhel-70-64-bit/nocrypto/lib64/libmongocrypt.so else # So, install the helper for the binary. gem install libmongocrypt-helper --pre # https://stackoverflow.com/questions/19072070/how-to-find-where-gem-files-are-installed path=$(find `gem env |grep INSTALLATION |awk -F: '{print $2}'` -name libmongocrypt.so |head -1 || true) if test -z "$path"; then echo Failed to find libmongocrypt.so in installed gems 1>&2 exit 1 fi cp $path . export LIBMONGOCRYPT_PATH=`pwd`/libmongocrypt.so gem uni libmongocrypt-helper fi test -f "$LIBMONGOCRYPT_PATH" ldd "$LIBMONGOCRYPT_PATH" else echo "Unknown FLE value: $FLE" 1>&2 exit 1 fi echo "Waiting for mock KMS servers to start..." wait_for_kms_server() { for i in $(seq 60); do if curl -s "localhost:$1"; test $? -ne 7; then return 0 else sleep 1 fi done echo "Could not detect mock KMS server on port $1" return 1 } wait_for_kms_server 8000 wait_for_kms_server 8001 wait_for_kms_server 8002 wait_for_kms_server 5698 wait_for_kms_server 8080 echo "Waiting for mock KMS servers to start... done." fi if test -n "$OCSP_CONNECTIVITY"; then # TODO Maybe OCSP_CONNECTIVITY=* should set SSL=ssl instead. uri_options="$uri_options&tls=true" fi if test -n "$EXTRA_URI_OPTIONS"; then uri_options="$uri_options&$EXTRA_URI_OPTIONS" fi export MONGODB_URI="mongodb://$hosts/?serverSelectionTimeoutMS=30000$uri_options" if echo "$AUTH" |grep -q ^aws-assume-role; then $BINDIR/mongosh "$MONGODB_URI" --eval 'db.runCommand({serverStatus: 1})' | wc fi set_fcv if test "$TOPOLOGY" = replica-set || test "$TOPOLOGY" = replica-set-single-node; then ruby -Ilib -I.evergreen/lib -rbundler/setup -rserver_setup -e ServerSetup.new.setup_tags fi if test "$API_VERSION_REQUIRED" = 1; then ruby -Ilib -I.evergreen/lib -rbundler/setup -rserver_setup -e ServerSetup.new.require_api_version export SERVER_API='version: "1"' fi if ! test "$OCSP_VERIFIER" = 1 && ! test -n "$OCSP_CONNECTIVITY"; then echo Preparing the test suite bundle exec rake spec:prepare fi if test "$TOPOLOGY" = sharded-cluster && test $MONGODB_VERSION = 3.6; then # On 3.6 server the sessions collection is not immediately available, # wait for it to spring into existence bundle exec rake spec:wait_for_sessions fi export MONGODB_URI="mongodb://$hosts/?appName=test-suite$uri_options" # Compression is handled via an environment variable, convert to URI option if test "$COMPRESSOR" = zlib && ! 
echo $MONGODB_URI |grep -q compressors=; then add_uri_option compressors=zlib fi if test "$COMPRESSOR" = snappy; then add_uri_option compressors=snappy fi if test "$COMPRESSOR" = zstd; then add_uri_option compressors=zstd fi echo "Running tests" set +e if test -n "$TEST_CMD"; then eval $TEST_CMD elif test "$FORK" = 1; then bundle exec rspec spec/integration/fork*spec.rb spec/stress/fork*spec.rb elif test "$STRESS" = 1; then bundle exec rspec spec/integration/fork*spec.rb spec/stress elif test "$OCSP_VERIFIER" = 1; then bundle exec rspec spec/integration/ocsp_verifier_spec.rb elif test -n "$OCSP_CONNECTIVITY"; then bundle exec rspec spec/integration/ocsp_connectivity_spec.rb elif test "$SOLO" = 1; then for attempt in `seq 10`; do echo "Attempt $attempt" bundle exec rspec spec/solo/clean_exit_spec.rb 2>&1 |tee test.log if grep -qi 'segmentation fault' test.log; then echo 'Test failed - Ruby crashed' 1>&2 exit 1 fi if fgrep -i '[BUG]' test.log; then echo 'Test failed - Ruby complained about a bug' 1>&2 exit 1 fi done else export JRUBY_OPTS=-J-Xmx2g bundle exec rake spec:ci fi test_status=$? echo "TEST STATUS: ${test_status}" set -e if test -f tmp/rspec-all.json; then mv tmp/rspec-all.json tmp/rspec.json fi kill_jruby || true if test -n "$OCSP_MOCK_PID"; then kill "$OCSP_MOCK_PID" fi python3 -m mtools.mlaunch.mlaunch stop --dir "$dbdir" || true if test -n "$FLE" && test "$DOCKER_PRELOAD" != 1; then # Terminate all kmip servers... and whatever else happens to be running # that is a python script. pkill python3 || true fi exit ${test_status} mongo-ruby-driver-2.21.3/.evergreen/serverless000077700000000000000000000000001505113246500333672../.mod/drivers-evergreen-tools/.evergreen/serverlessustar00rootroot00000000000000mongo-ruby-driver-2.21.3/.evergreen/shell-escape000077500000000000000000000001511505113246500215200ustar00rootroot00000000000000#!/usr/bin/env ruby require 'shellwords' puts ARGV.map { |arg| Shellwords.shellescape(arg) }.join(' ') mongo-ruby-driver-2.21.3/.evergreen/test-docker-remote000077500000000000000000000025571505113246500227040ustar00rootroot00000000000000#!/bin/sh # Copies the current directory to the specified target, then runs the # test-on-docker script on the target with the remaining arguments. # # The current directory is copied into the `work` subdirectory of the user's # home directory on the target. # # There is no provision in this script to specify the private SSH key to use # for authentication. It is recommended to use ssh-agent and add the key to the # agent. If target allows password authentication (EC2 instances do not # generally have password authentication initially configured on them) # it is possible to omit ssh-agent setup and enter the password each time it # is prompted for. # # Example: # # ./.evergreen/test-docker-remote admin@12.34.56.78 -p MONGODB_VERSION=4.2 # # Note: the private environment files (.env.private*) are copied to the target. # This is done in order to be able to test, for example, AWS authentication # from EC2 instances. set -e target="$1" if test -z "$target"; then echo Usage: `basename $0` user@host 1>&2 exit 1 fi shift . `dirname $0`/functions-remote.sh do_rsync --delete --exclude .git -av --exclude gem-private_key.pem \ . 
"$target":work cmd=`./.evergreen/shell-escape "$@"` # To debug the test-on-docker invocation: # do_ssh "$target" -t "cd work && set -x && ./.evergreen/test-on-docker $cmd" do_ssh "$target" -t "cd work && ./.evergreen/test-on-docker $cmd" mongo-ruby-driver-2.21.3/.evergreen/test-on-docker000077500000000000000000000004601505113246500220140ustar00rootroot00000000000000#!/usr/bin/env ruby $: << File.join(File.dirname(__FILE__), '../spec/shared/lib') require 'mrss/docker_runner' Mrss::DockerRunner.new( image_tag: 'ruby-driver-test', dockerfile_path: '.evergreen/Dockerfile', default_script: 'bash -x .evergreen/run-tests.sh', project_lib_subdir: 'mongo', ).run mongo-ruby-driver-2.21.3/.evergreen/test-remote000077500000000000000000000031211505113246500214230ustar00rootroot00000000000000#!/bin/sh # Copies the current directory to the specified target, then executes the # remaining arguments on the target as a shell command. # # The current directory is copied into the `work` subdirectory of the user's # home directory on the target. # # There is no provision in this script to specify the private SSH key to use # for authentication. It is recommended to use ssh-agent and add the key to the # agent. If target allows password authentication (EC2 instances do not # generally have password authentication initially configured on them) # it is possible to omit ssh-agent setup and enter the password each time it # is prompted for. # # Example: # # ./.evergreen/test-remote admin@12.34.56.78 \ # env MONGODB_VERSION=4.4 AUTH=aws-regular .evergreen/run-tests-aws-auth.sh # # Note: the private environment files (.env.private*) are copied to the target. # This is done in order to be able to test, for example, AWS authentication # from EC2 instances. set -e exec_only=false while getopts :e option; do case $option in e) exec_only=true ;; *) echo "Unknown option $option" 1>&2 exit 1 ;; esac done shift $(($OPTIND - 1)) target="$1" if test -z "$target"; then echo Usage: `basename $0` user@host 1>&2 exit 1 fi shift . `dirname $0`/functions-remote.sh if ! $exec_only; then do_ssh "$target" -t "sudo pkill -9 mongod; sudo pkill -9 mongos; sudo rm -rf work; sudo rm -rf /db" fi do_rsync --exclude .git -a --exclude gem-private_key.pem \ . "$target":work cmd=`./.evergreen/shell-escape "$@"` do_ssh "$target" -t "cd work && $cmd" mongo-ruby-driver-2.21.3/.evergreen/update-evergreen-configs000077500000000000000000000020741505113246500240510ustar00rootroot00000000000000#!/usr/bin/env ruby require 'erubi' require 'erubi/capture_end' require 'tilt' autoload :YAML, 'yaml' class Runner def run transform('config.yml') end def transform(output_file_name) contents = <<-EOT # GENERATED FILE - DO NOT EDIT. # Run `rake eg` to regenerate this file. 
EOT template_path = File.join(File.dirname(__FILE__), 'config/common.yml.erb') #contents << ERB.new(File.read(template_path)).result(get_binding) contents << Tilt.new(template_path, engine_class: Erubi::CaptureEndEngine).render(self) template_path = File.join(File.dirname(__FILE__), 'config/axes.yml.erb') contents << Tilt.new(template_path, engine_class: Erubi::CaptureEndEngine).render(self) template_path = File.join(File.dirname(__FILE__), 'config/standard.yml.erb') contents << Tilt.new(template_path, engine_class: Erubi::CaptureEndEngine).render(self) output_path = File.join(File.dirname(__FILE__), output_file_name) File.open(output_path, 'w') do |f| f << contents end end def get_binding binding end end Runner.new.run mongo-ruby-driver-2.21.3/.evergreen/validate000077500000000000000000000007371505113246500207560ustar00rootroot00000000000000#!/bin/sh # This script can be used to validate the contents of Evergreen configuration # and shell scripts (as much as possible) locally, prior to starting patch # and/or pull request builds. Invoke it with a relative or absolute path # like so: # # ./.evergreen/validate set -e this_dir="$(dirname "$0")" for yml in "$this_dir"/*.yml; do echo "Validating $yml" evergreen validate "$yml" done for sh in "$this_dir"/*.sh; do echo "Validating $sh" bash -n "$sh" done mongo-ruby-driver-2.21.3/.evergreen/venv-utils.sh000077700000000000000000000000001505113246500342472../.mod/drivers-evergreen-tools/.evergreen/venv-utils.shustar00rootroot00000000000000mongo-ruby-driver-2.21.3/.evergreen/x509gen000077700000000000000000000000001505113246500314522../.mod/drivers-evergreen-tools/.evergreen/x509gen/ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/.github/000077500000000000000000000000001505113246500165305ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/.github/CODEOWNERS000066400000000000000000000000241505113246500201170ustar00rootroot00000000000000* @mongodb/dbx-ruby mongo-ruby-driver-2.21.3/.github/release.yml000066400000000000000000000003461505113246500206760ustar00rootroot00000000000000# For configuring how release notes are auto-generated. # Requires the use of labels to categorize pull requests. 
# # See: https://docs.github.com/en/repositories/releasing-projects-on-github/automatically-generated-release-notes mongo-ruby-driver-2.21.3/.github/workflows/000077500000000000000000000000001505113246500205655ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/.github/workflows/cleanup.yml000066400000000000000000000022401505113246500227350ustar00rootroot00000000000000name: "Dry-Run Cleanup" run-name: "Dry Run Cleanup for ${{ github.ref }}" on: workflow_dispatch: inputs: confirm: description: Indicate whether you want this workflow to run (must be "true") required: true type: string tag: description: The name of the tag (and release) to clean up required: true type: string jobs: release: name: "Dry-Run Cleanup" environment: release runs-on: 'ubuntu-latest' if: ${{ inputs.confirm == 'true' }} permissions: # required for all workflows security-events: write # required to fetch internal or private CodeQL packs packages: read # only required for workflows in private repositories actions: read contents: write # required by the mongodb-labs/drivers-github-tools/setup@v2 step # also required by `rubygems/release-gem` id-token: write steps: - name: "Run the cleanup action" uses: mongodb-labs/drivers-github-tools/ruby/cleanup@v2 with: app_id: ${{ vars.APP_ID }} app_private_key: ${{ secrets.APP_PRIVATE_KEY }} tag: ${{ inputs.tag }} mongo-ruby-driver-2.21.3/.github/workflows/codeql.yml000066400000000000000000000055101505113246500225600ustar00rootroot00000000000000name: "CodeQL" on: push: branches: [ "master" ] pull_request: branches: [ "master" ] schedule: - cron: '20 0 * * 0' jobs: analyze: name: Analyze (${{ matrix.language }}) # Runner size impacts CodeQL analysis time. To learn more, please see: # - https://gh.io/recommended-hardware-resources-for-running-codeql # - https://gh.io/supported-runners-and-hardware-resources # - https://gh.io/using-larger-runners (GitHub.com only) # Consider using larger runners or machines with greater resources for possible analysis time improvements. runs-on: 'ubuntu-latest' timeout-minutes: 360 permissions: # required for all workflows security-events: write # required to fetch internal or private CodeQL packs packages: read # only required for workflows in private repositories actions: read contents: read strategy: fail-fast: false matrix: include: - language: ruby build-mode: none steps: - name: Checkout repository uses: actions/checkout@v4 # Initializes the CodeQL tools for scanning. - name: Initialize CodeQL uses: github/codeql-action/init@v3 with: languages: ${{ matrix.language }} build-mode: ${{ matrix.build-mode }} config: | paths-ignore: - .evergreen - spec # If you wish to specify custom queries, you can do so here or in a config file. # By default, queries listed here will override any specified in a config file. # Prefix the list here with "+" to use these queries and those in the config file. # For more details on CodeQL's query packs, refer to: https://docs.github.com/en/code-security/code-scanning/automatically-scanning-your-code-for-vulnerabilities-and-errors/configuring-code-scanning#using-queries-in-ql-packs # queries: security-extended,security-and-quality # If the analyze step fails for one of the languages you are analyzing with # "We were unable to automatically build your code", modify the matrix above # to set the build mode to "manual" for that language. Then modify this step # to build your code. # ℹ️ Command-line programs to run using the OS shell. 
# 📚 See https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idstepsrun - if: matrix.build-mode == 'manual' run: | echo 'If you are using a "manual" build mode for one or more of the' \ 'languages you are analyzing, replace this with the commands to build' \ 'your code, for example:' echo ' make bootstrap' echo ' make release' exit 1 - name: Perform CodeQL Analysis uses: github/codeql-action/analyze@v3 with: category: "/language:${{matrix.language}}" mongo-ruby-driver-2.21.3/.github/workflows/release.yml000066400000000000000000000047401505113246500227350ustar00rootroot00000000000000name: "Gem Release" run-name: "Gem Release for ${{ github.ref }}" on: # for auto-deploy when merging a release-candidate PR push: branches: - 'master' - '*-stable' # for manual release workflow_dispatch: inputs: pr: description: "The number of the merged release candidate PR" required: true env: SILK_ASSET_GROUP: mongodb-ruby-driver GEM_NAME: mongo PRODUCT_NAME: Ruby Driver PRODUCT_ID: mongodb-ruby-driver permissions: # required for all workflows security-events: write # required to fetch internal or private CodeQL packs packages: read # only required for workflows in private repositories actions: read pull-requests: read contents: write # required by the mongodb-labs/drivers-github-tools/setup@v2 step # also required by `rubygems/release-gem` id-token: write jobs: check: name: "Check Release" runs-on: ubuntu-latest outputs: message: ${{ steps.check.outputs.message }} ref: ${{ steps.check.outputs.ref }} steps: - name: "Run the check action" id: check uses: jamis/drivers-github-tools/ruby/pr-check@ruby-3643-update-release-process build: name: "Build Gems" needs: check environment: release runs-on: ubuntu-latest steps: - name: "Run the build action" uses: jamis/drivers-github-tools/ruby/build@ruby-3643-update-release-process with: app_id: ${{ vars.APP_ID }} app_private_key: ${{ secrets.APP_PRIVATE_KEY }} artifact: 'ruby-3.2' gem_name: ${{ env.GEM_NAME }} ruby_version: 'ruby-3.2' ref: ${{ needs.check.outputs.ref }} publish: name: "Publish Gems" needs: [ check, build ] environment: release runs-on: 'ubuntu-latest' steps: - name: "Run the publish action" uses: jamis/drivers-github-tools/ruby/publish@ruby-3643-update-release-process with: app_id: ${{ vars.APP_ID }} app_private_key: ${{ secrets.APP_PRIVATE_KEY }} aws_role_arn: ${{ secrets.AWS_ROLE_ARN }} aws_region_name: ${{ vars.AWS_REGION_NAME }} aws_secret_id: ${{ secrets.AWS_SECRET_ID }} dry_run: false gem_name: ${{ env.GEM_NAME }} product_name: ${{ env.PRODUCT_NAME }} product_id: ${{ env.PRODUCT_ID }} release_message: ${{ needs.check.outputs.message }} silk_asset_group: ${{ env.SILK_ASSET_GROUP }} ref: ${{ needs.check.outputs.ref }} mongo-ruby-driver-2.21.3/.github/workflows/rubocop.yml000066400000000000000000000005761505113246500227710ustar00rootroot00000000000000--- name: Rubocop on: [push, pull_request] jobs: build: runs-on: ubuntu-latest env: CI: true TESTOPTS: "-v" steps: - uses: actions/checkout@v3 - name: Set up Ruby 3.0 uses: ruby/setup-ruby@v1 with: ruby-version: 3.0 bundler-cache: true - name: Run RuboCop run: bundle exec rubocop --parallel mongo-ruby-driver-2.21.3/.github/workflows/test.yml000066400000000000000000000035751505113246500223010ustar00rootroot00000000000000# Note on topology: server: # The GH actions use mongo-orchestration, which uses a "server" topology for # the standalone one. 
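#
# The matrix below currently expands to four jobs (two server versions x two
# topologies); each job's deployment is provisioned by the
# mongodb-labs/drivers-evergreen-tools action and handed to the test suite
# through the MONGODB_URI environment variable (the start-mongodb step's
# cluster-uri output).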
name: Run Driver Tests on: [push, pull_request] jobs: build: name: "${{matrix.os}} ruby-${{matrix.ruby}} mongodb-${{matrix.mongodb}} ${{matrix.topology}}" env: CI: true TESTOPTS: "-v" runs-on: ubuntu-22.04 continue-on-error: true strategy: fail-fast: false matrix: os: [ ubuntu-22.04 ] ruby: [ "3.2" ] mongodb: [ "7.0", "8.0" ] topology: [ replica_set, sharded_cluster ] steps: - name: repo checkout uses: actions/checkout@v2 with: submodules: recursive - id: start-mongodb name: start mongodb uses: mongodb-labs/drivers-evergreen-tools@master with: version: "${{matrix.mongodb}}" topology: "${{matrix.topology}}" - name: load ruby uses: ruby/setup-ruby@v1 with: ruby-version: "${{matrix.ruby}}" bundler: 2 - name: bundle run: bundle install --jobs 4 --retry 3 - name: prepare test suite run: bundle exec rake spec:prepare env: MONGODB_URI: ${{ steps.start-mongodb.outputs.cluster-uri }} - name: prepare replica set run: ruby -Ilib -I.evergreen/lib -rbundler/setup -rserver_setup -e ServerSetup.new.setup_tags if: ${{ matrix.topology == 'replica_set' }} env: MONGODB_URI: ${{ steps.start-mongodb.outputs.cluster-uri }} - name: wait for sessions run: bundle exec rake spec:wait_for_sessions if: ${{ matrix.topology == 'sharded_cluster' && matrix.mongodb == '3.6' }} env: MONGODB_URI: ${{ steps.start-mongodb.outputs.cluster-uri }} - name: test timeout-minutes: 60 continue-on-error: false run: bundle exec rake spec:ci env: MONGODB_URI: ${{ steps.start-mongodb.outputs.cluster-uri }} mongo-ruby-driver-2.21.3/.gitignore000066400000000000000000000006561505113246500171670ustar00rootroot00000000000000*#* *.bundle *.class *.gem *.log *.o *.pid *.so *.swp *~ .DS_Store .idea/* .yardoc coverage yard-docs Gemfile.lock .ruby-gemset .ruby-version gem-private_key.pem nbproject tmp sandbox/* data/* .byebug_history gemfiles/*.gemfile.lock .env.private* .env build profile/data secrets-export.sh secrets-expansion.yml atlas-expansion.yml # AWS SAM-generated files spec/faas/ruby-sam-app/.aws-sam spec/faas/ruby-sam-app/events/event.json mongo-ruby-driver-2.21.3/.gitmodules000066400000000000000000000004021505113246500173410ustar00rootroot00000000000000[submodule ".mod/drivers-evergreen-tools"] path = .mod/drivers-evergreen-tools url = https://github.com/mongodb-labs/drivers-evergreen-tools [submodule "spec/shared"] path = spec/shared url = https://github.com/mongodb-labs/mongo-ruby-spec-shared mongo-ruby-driver-2.21.3/.mod/000077500000000000000000000000001505113246500160255ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/.mod/drivers-evergreen-tools/000077500000000000000000000000001505113246500226215ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/.rspec000066400000000000000000000001441505113246500163040ustar00rootroot00000000000000--tty --colour --format <%= %w(1 true yes).include?(ENV['CI']&.downcase) ? 
'Rfc::Riff' : 'Fuubar'%> mongo-ruby-driver-2.21.3/.rubocop.yml000066400000000000000000000034551505113246500174510ustar00rootroot00000000000000require: - rubocop-performance - rubocop-rake - rubocop-rspec AllCops: TargetRubyVersion: 2.5 NewCops: enable Exclude: - 'spec/shared/**/*' - 'spec/faas/**/*' - 'vendor/**/*' Bundler: Enabled: true Gemspec: Enabled: true Layout: Enabled: true Lint: Enabled: true Metrics: Enabled: true Naming: Enabled: true Security: Enabled: true Style: Enabled: true # -------------------------------------- # Cops below this line set intentionally # -------------------------------------- Bundler/OrderedGems: Enabled: false Gemspec/OrderedDependencies: Enabled: false Layout/SpaceInsideArrayLiteralBrackets: EnforcedStyle: space Layout/SpaceInsidePercentLiteralDelimiters: Enabled: false Metrics/ClassLength: Max: 200 Metrics/ModuleLength: Enabled: false Metrics/MethodLength: Max: 20 Naming/MethodParameterName: AllowedNames: [ id, op ] RSpec/BeforeAfterAll: Enabled: false # Ideally, we'd use this one, too, but our tests have not historically followed # this style and it's not worth changing right now, IMO RSpec/DescribeClass: Enabled: false Style/FetchEnvVar: Enabled: false RSpec/ImplicitExpect: EnforcedStyle: is_expected RSpec/MultipleExpectations: Enabled: false RSpec/MultipleMemoizedHelpers: Enabled: false RSpec/NestedGroups: Enabled: false Style/Documentation: Exclude: - 'spec/**/*' Style/FormatStringToken: Enabled: false Style/ModuleFunction: EnforcedStyle: extend_self Style/OptionalBooleanParameter: Enabled: false Style/ParallelAssignment: Enabled: false Style/TernaryParentheses: EnforcedStyle: require_parentheses_when_complex Style/TrailingCommaInArrayLiteral: Enabled: false Style/TrailingCommaInHashLiteral: Enabled: false RSpec/ExampleLength: Max: 10 RSpec/MessageSpies: EnforcedStyle: receive mongo-ruby-driver-2.21.3/.yardopts000066400000000000000000000000311505113246500170300ustar00rootroot00000000000000lib/**/*.rb -o yard-docs mongo-ruby-driver-2.21.3/CONTRIBUTING.md000066400000000000000000000012531505113246500174220ustar00rootroot00000000000000# Contributing to the MongoDB Ruby Driver Thank you for your interest in contributing to the MongoDB Ruby driver. We are building this software together and appreciate and encourage contributions from the community. Pull Requests ------------- Pull requests should be made against the `master` branch and include relevant tests, if applicable. The Ruby driver team will backport the changes to the stable branches, if needed. JIRA Tickets ------------ The Ruby driver team uses [MongoDB JIRA](https://jira.mongodb.org/browse/RUBY) to schedule and track work. A JIRA ticket is not required when submitting a pull request, but is appreciated especially for non-trivial changes. mongo-ruby-driver-2.21.3/Gemfile000066400000000000000000000002301505113246500164560ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all source 'https://rubygems.org' gemspec require_relative './gemfiles/standard' standard_dependencies mongo-ruby-driver-2.21.3/LICENSE000066400000000000000000000250171505113246500162020ustar00rootroot00000000000000 Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. 
"Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. 
Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. 
This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS Copyright (C) 2009-2020 MongoDB, Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. mongo-ruby-driver-2.21.3/NOTICE000066400000000000000000000000721505113246500160730ustar00rootroot00000000000000MongoDB Ruby Driver Copyright (C) 2009-2020 MongoDB, Inc. mongo-ruby-driver-2.21.3/README.md000066400000000000000000000162701505113246500164550ustar00rootroot00000000000000MongoDB Ruby Driver [![Gem Version][rubygems-img]][rubygems-url] [![Inline docs][inch-img]][inch-url] ================================================================ The officially supported Ruby driver for [MongoDB](https://www.mongodb.org/). The Ruby driver supports Ruby 2.7-3.3 and JRuby 9.3-9.4. 
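As a quick orientation before the installation details below, here is a minimal, illustrative usage sketch. It assumes a `mongod` is listening on the default local port; the connection string, database, and collection names are placeholders:

    require 'mongo'

    client = Mongo::Client.new('mongodb://127.0.0.1:27017/test')
    collection = client[:artists]
    collection.insert_one(name: 'FKA Twigs')
    collection.find(name: 'FKA Twigs').each { |doc| puts doc }
    client.close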
## Installation Install via RubyGems, either via the command-line for ad-hoc uses: $ gem install mongo Or via a Gemfile for more general use: gem 'mongo' ### Release Integrity Each release of the MongoDB Ruby driver after version 2.20.0 has been automatically built and signed using the team's GPG key. To verify the driver's gem file: 1. [Download the GPG key](https://pgp.mongodb.com/ruby-driver.asc). 2. Import the key into your GPG keyring with `gpg --import ruby-driver.asc`. 3. Download the gem file (if you don't already have it). You can download it from RubyGems with `gem fetch mongo`, or you can download it from the [releases page](https://github.com/mongodb/mongo-ruby-driver/releases) on GitHub. 4. Download the corresponding detached signature file from the [same release](https://github.com/mongodb/mongo-ruby-driver/releases). Look at the bottom of the release that corresponds to the gem file, under the 'Assets' list, for a `.sig` file with the same version number as the gem you wish to install. 5. Verify the gem with `gpg --verify mongo-X.Y.Z.gem.sig mongo-X.Y.Z.gem` (replacing `X.Y.Z` with the actual version number). You are looking for text like "Good signature from \"MongoDB Ruby Driver Release Signing Key\"" in the output. If you see that, the signature was found to correspond to the given gem file. (Note that other output, like "This key is not certified with a trusted signature!", is related to *web of trust* and depends on how strongly you, personally, trust the `ruby-driver.asc` key that you downloaded from us. To learn more, see https://www.gnupg.org/gph/en/manual/x334.html) ### Why not use RubyGems' gem-signing functionality? RubyGems' own gem signing is problematic, most significantly because there is no established chain of trust related to the keys used to sign gems. RubyGems' own documentation admits that "this method of signing gems is not widely used" (see https://guides.rubygems.org/security/). Discussions about this in the RubyGems community have been off-and-on for more than a decade, and while a solution will eventually arrive, we have settled on using GPG instead for the following reasons: 1. Many of the other driver teams at MongoDB are using GPG to sign their product releases. Consistency with the other teams means that we can reuse existing tooling for our own product releases. 2. GPG is widely available and has existing tools and procedures for dealing with web of trust (though they are admittedly quite arcane and intimidating to the uninitiated, unfortunately). Ultimately, most users do not bother to verify gems, and will not be impacted by our choice of GPG over RubyGems' native method. ## Documentation High level documentation and usage examples are located [here](https://www.mongodb.com/docs/ecosystem/drivers/ruby/). API documentation for the most recent release can be found [here](https://mongodb.com/docs/ruby-driver/current/api/). To build API documentation for the master branch, check out the repository locally and run `rake docs`. High-level driver documentation, including the tutorials and reference that were previously in the docs folder, can now be found in the docs-ruby repository, [here](https://github.com/mongodb/docs-ruby). ## Support Commercial support for the driver is available through the [MongoDB Support Portal](https://support.mongodb.com/). For questions, discussions or general technical support, please visit the [MongoDB Community Forum](https://developer.mongodb.com/community/forums/tags/c/drivers-odms-connectors/7/ruby-driver).
Please see [Technical Support](https://mongodb.com/docs/manual/support/) page in the documentation for other support resources. ## Bugs & Feature Requests To report a bug in the driver or request a feature specific to the Ruby driver: 1. Visit [our issue tracker](https://jira.mongodb.org/) and login (or create an account if you do not have one already). 2. Navigate to the [RUBY project](https://jira.mongodb.org/browse/RUBY). 3. Click 'Create Issue' and fill out all of the applicable form fields. When creating an issue, please keep in mind that all information in JIRA for the RUBY project, as well as the core server (the SERVER project), is publicly visible. **PLEASE DO:** - Provide as much information as possible about the issue. - Provide detailed steps for reproducing the issue. - Provide any applicable code snippets, stack traces and log data. Do not include any sensitive data or server logs. - Specify version numbers of the driver and MongoDB server. **PLEASE DO NOT:** - Provide any sensitive data or server logs. - Report potential security issues publicly (see 'Security Issues' below). ## Security Issues If you have identified a potential security-related issue in the Ruby driver (or any other MongoDB product), please report it by following the [instructions here](https://www.mongodb.com/docs/manual/tutorial/create-a-vulnerability-report). ## Product Feature Requests To request a feature which is not specific to the Ruby driver, or which affects more than the driver alone (for example, a feature which requires MongoDB server support), please submit your idea through the [MongoDB Feedback Forum](https://feedback.mongodb.com/forums/924286-drivers). ## Maintenance and Bug Fix Policy New driver functionality is generally added in a backwards-compatible manner and results in new minor driver releases (2.x). Bug fixes are generally made on master first and are backported to the current minor driver release. Exceptions may be made on a case-by-case basis, for example security fixes may be backported to older stable branches. Only the most recent minor driver release is officially supported. Customers should use the most recent driver release in their applications. ## Running Tests Please refer to [spec/README.md](https://github.com/mongodb/mongo-ruby-driver/blob/master/spec/README.md) for instructions on how to run the driver's test suite. ## Releases Full release notes and release history are available [on the GitHub releases page](https://github.com/mongodb/mongo-ruby-driver/releases). The MongoDB Ruby driver follows [semantic versioning](https://semver.org/) for its releases. ## License Copyright (C) 2009-2020 MongoDB, Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. 
[rubygems-img]: https://badge.fury.io/rb/mongo.svg [rubygems-url]: http://badge.fury.io/rb/mongo [inch-img]: http://inch-ci.org/github/mongodb/mongo-ruby-driver.svg?branch=master [inch-url]: http://inch-ci.org/github/mongodb/mongo-ruby-driver mongo-ruby-driver-2.21.3/Rakefile000066400000000000000000000130541505113246500166400ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'bundler' require 'rspec/core/rake_task' if File.exist?('./spec/shared/lib/tasks/candidate.rake') load 'spec/shared/lib/tasks/candidate.rake' end ROOT = File.expand_path(File.join(File.dirname(__FILE__))) $: << File.join(ROOT, 'spec/shared/lib') CLASSIFIERS = [ [%r,^mongo/server,, :unit_server], [%r,^mongo,, :unit], [%r,^kerberos,, :unit], [%r,^integration/sdam_error_handling,, :sdam_integration], [%r,^integration/cursor_reaping,, :cursor_reaping], [%r,^integration/query_cache,, :query_cache], [%r,^integration/transactions_examples,, :tx_examples], [%r,^(atlas|integration),, :integration], [%r,^spec_tests/sdam_integration,, :spec_sdam_integration], [%r,^spec_tests,, :spec], ] RUN_PRIORITY = (ENV['RUN_PRIORITY'] || %( tx_examples unit unit_server integration sdam_integration cursor_reaping query_cache spec spec_sdam_integration )).split.map(&:to_sym) RSpec::Core::RakeTask.new(:spec) do |t| #t.rspec_opts = "--profile 5" if ENV['CI'] end task :default => ['spec:prepare', :spec] desc 'Build the gem' task :build do command = %w[ gem build ] command << "--output=#{ENV['GEM_FILE_NAME']}" if ENV['GEM_FILE_NAME'] command << (ENV['GEMSPEC'] || 'mongo.gemspec') system(*command) end # `rake version` is used by the deployment system to get the release version # of the product being deployed. It must do nothing more than print the # product version number. # # See the mongodb-labs/driver-github-tools/ruby/publish GitHub action. desc "Print the current value of Mongo::VERSION" task :version do require 'mongo/version' puts Mongo::VERSION end # overrides the default Bundler-provided `release` task, which also # builds the gem. Our release process assumes the gem has already # been built (and signed via GPG), so we just need `rake release` to # push the gem to rubygems. task :release do require 'mongo/version' if ENV['GITHUB_ACTION'].nil? abort <<~WARNING `rake release` must be invoked from the `Driver Release` GitHub action, and must not be invoked locally. This ensures the gem is properly signed and distributed by the appropriate user. Note that it is the `rubygems/release-gem@v1` step in the `Driver Release` action that invokes this task. Do not rename or remove this task, or the release-gem step will fail. Reimplement this task with caution. mongo-#{Mongo::VERSION}.gem was NOT pushed to RubyGems. WARNING end system 'gem', 'push', "mongo-#{Mongo::VERSION}.gem" end task :mongo do require 'mongo' end namespace :spec do desc 'Creates necessary user accounts in the cluster' task prepare: :mongo do $: << File.join(File.dirname(__FILE__), 'spec') require 'support/utils' require 'support/spec_setup' SpecSetup.new.run end desc 'Waits for sessions to be available in the deployment' task wait_for_sessions: :mongo do $: << File.join(File.dirname(__FILE__), 'spec') require 'support/utils' require 'support/spec_config' require 'support/client_registry' client = ClientRegistry.instance.global_client('authorized') client.database.command(ping: 1) deadline = Process.clock_gettime(Process::CLOCK_MONOTONIC) + 300 loop do begin client.cluster.validate_session_support!
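# validate_session_support! raises Mongo::Error::SessionsNotSupported
# until the deployment reports session support; the rescue below
# rescans the cluster and retries until the call succeeds or the
# 300-second deadline passes.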
break rescue Mongo::Error::SessionsNotSupported if Process.clock_gettime(Process::CLOCK_MONOTONIC) >= deadline raise "Sessions did not become supported in 300 seconds" end client.cluster.scan! end end end desc 'Prints configuration used by the test suite' task config: :mongo do $: << File.join(File.dirname(__FILE__), 'spec') # Since this task is usually used for troubleshooting of test suite # configuration, leave driver log level at the default of debug to # have connection diagnostics printed during handshakes and such. require 'support/utils' require 'support/spec_config' require 'support/client_registry' SpecConfig.instance.print_summary end def spec_organizer require 'mrss/spec_organizer' Mrss::SpecOrganizer.new( root: ROOT, classifiers: CLASSIFIERS, priority_order: RUN_PRIORITY, ) end task :ci => ['spec:prepare'] do spec_organizer.run end desc 'Show test buckets' task :buckets do spec_organizer.ordered_buckets.each do |category, paths| puts "#{category || 'remaining'}: #{paths&.join(' ') || ''}" end end end desc 'Build and validate the evergreen config' task eg: %w[ eg:build eg:validate ] # 'eg' == 'evergreen', but evergreen is too many letters for convenience namespace :eg do desc 'Builds the .evergreen/config.yml file from the templates' task :build do ruby '.evergreen/update-evergreen-configs' end desc 'Validates the .evergreen/config.yml file' task :validate do system 'evergreen validate --project mongo-ruby-driver .evergreen/config.yml' end desc 'Updates the evergreen executable to the latest available version' task :update do system 'evergreen get-update --install' end desc 'Runs the current branch as an evergreen patch' task :patch do system 'evergreen patch --uncommitted --project mongo-ruby-driver --browse --auto-description --yes' end end desc "Generate all documentation" task :docs => 'docs:yard' namespace :docs do desc "Generate yard documentation" task :yard do out = File.join('yard-docs', Mongo::VERSION) FileUtils.rm_rf(out) system "yardoc -o #{out} --title mongo-#{Mongo::VERSION}" end end load 'profile/driver_bench/rake/tasks.rake' mongo-ruby-driver-2.21.3/THIRD-PARTY-NOTICES000066400000000000000000000066311505113246500177700ustar00rootroot00000000000000The `mongo` gem uses third-party libraries or other resources that may be distributed under licenses different than the `mongo` gem. In the event that we accidentally failed to list a required notice, please bring it to our attention by creating a ticket at: https://jira.mongodb.org/browse/RUBY The attached notices are provided for information only. For any licenses that require disclosure of source, sources are available at https://github.com/mongodb/mongo-ruby-driver. 1) License Notice for the files https://github.com/ruby/ruby/blob/v2_5_1/lib/unicode_normalize/normalize.rb and https://github.com/ruby/ruby/blob/v2_5_1/lib/unicode_normalize/tables.rb ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Ruby is copyrighted free software by Yukihiro Matsumoto . You can redistribute it and/or modify it under either the terms of the 2-clause BSDL (see the file BSDL), or the conditions below: 1. You may make and give away verbatim copies of the source form of the software without restriction, provided that you duplicate all of the original copyright notices and associated disclaimers. 2.
You may modify your copy of the software in any way, provided that you do at least ONE of the following: a) place your modifications in the Public Domain or otherwise make them Freely Available, such as by posting said modifications to Usenet or an equivalent medium, or by allowing the author to include your modifications in the software. b) use the modified software only within your corporation or organization. c) give non-standard binaries non-standard names, with instructions on where to get the original software distribution. d) make other distribution arrangements with the author. 3. You may distribute the software in object code or binary form, provided that you do at least ONE of the following: a) distribute the binaries and library files of the software, together with instructions (in the manual page or equivalent) on where to get the original distribution. b) accompany the distribution with the machine-readable source of the software. c) give non-standard binaries non-standard names, with instructions on where to get the original software distribution. d) make other distribution arrangements with the author. 4. You may modify and include the part of the software into any other software (possibly commercial). But some files in the distribution are not written by the author, so that they are not under these terms. For the list of those files and their copying conditions, see the file LEGAL. 5. The scripts and library files supplied as input to or produced as output from the software do not automatically fall under the copyright of the software, but belong to whomever generated them, and may be sold commercially, and may be aggregated with this software. 6. THIS SOFTWARE IS PROVIDED "AS IS" AND WITHOUT ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. Copyright Ayumu Nojima (野島 歩) and Martin J. Dürst (duerst@it.aoyama.ac.jp) mongo-ruby-driver-2.21.3/bin/000077500000000000000000000000001505113246500157405ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/bin/mongo_console000077500000000000000000000006761505113246500205400ustar00rootroot00000000000000#!/usr/bin/env ruby # frozen_string_literal: true # rubocop:todo all $LOAD_PATH[0, 0] = File.join(File.dirname(__FILE__), '..', 'lib') require 'mongo' # include the mongo namespace include Mongo begin require 'pry' rescue LoadError end begin require 'irb' rescue LoadError end if defined?(Pry) Pry.config.prompt_name = 'mongo' Pry.start elsif defined?(IRB) IRB.start else abort 'LoadError: mongo_console requires Pry or IRB' end mongo-ruby-driver-2.21.3/examples/000077500000000000000000000000001505113246500170065ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/examples/aggregate.rb000066400000000000000000000015151505113246500212630ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Group documents by field and calculate count. 
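# NOTE: the examples in this directory assume `client` is an existing,
# connected Mongo::Client whose database contains the sample `restaurants`
# collection, e.g. (the connection string shown is illustrative):
#
#   client = Mongo::Client.new('mongodb://127.0.0.1:27017/test')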
coll = client[:restaurants] results = coll.find.aggregate([ { '$group' => { '_id' => '$borough', 'count' => { '$sum' => 1 } } } ]) results.each do |result| puts result end # Filter and group documents results = coll.find.aggregate([ { '$match' => { 'borough' => 'Queens', 'cuisine' => 'Brazilian' } }, { '$group' => { '_id' => '$address.zipcode', 'count' => { '$sum' => 1 } } } ]) results.each do |result| puts result end mongo-ruby-driver-2.21.3/examples/create.rb000066400000000000000000000014261505113246500206010ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Insert a document require 'date' result = client[:restaurants].insert_one({ address: { street: '2 Avenue', zipcode: 10075, building: 1480, coord: [-73.9557413, 40.7720266] }, borough: 'Manhattan', cuisine: 'Italian', grades: [ { date: DateTime.strptime('2014-10-01', '%Y-%m-%d'), grade: 'A', score: 11 }, { date: DateTime.strptime('2014-01-16', '%Y-%m-%d'), grade: 'B', score: 17 } ], name: 'Vella', restaurant_id: '41704620' }) result.n #=> returns 1, because 1 document was inserted. mongo-ruby-driver-2.21.3/examples/delete.rb000066400000000000000000000006021505113246500205730ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Delete all documents matching a condition client[:restaurants].find('borough' => 'Manhattan').delete_many # Delete one document matching a condition client[:restaurants].find('borough' => 'Queens').delete_one # Delete all documents in a collection client[:restaurants].delete_many # Drop a collection client[:restaurants].drop mongo-ruby-driver-2.21.3/examples/index.rb000066400000000000000000000006541505113246500204470ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Create a single field index result = client[:restaurants].indexes.create_one(cuisine: Mongo::Index::ASCENDING) # Create a compound index result = client[:restaurants].indexes.create_one(cuisine: 1, zipcode: Mongo::Index::DESCENDING) # Create a single field unique index result = client[:restaurants].indexes.create_one({ cuisine: Mongo::Index::ASCENDING }, unique: true) mongo-ruby-driver-2.21.3/examples/query.rb000066400000000000000000000031731505113246500205040ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Query for all documents in a collection cursor = client[:restaurants].find cursor.each do |doc| puts doc end # Query for equality on a top level field cursor = client[:restaurants].find('borough' => 'Manhattan') cursor.each do |doc| puts doc end # Query by a field in an embedded document cursor = client[:restaurants].find('address.zipcode' => '10075') cursor.each do |doc| puts doc end # Query by a field in an array cursor = client[:restaurants].find('grades.grade' => 'B') cursor.each do |doc| puts doc end # Query with the greater-than operator cursor = client[:restaurants].find('grades.score' => { '$gt' => 30 }) cursor.each do |doc| puts doc end # Query with the less-than operator cursor = client[:restaurants].find('grades.score' => { '$lt' => 10 }) cursor.each do |doc| puts doc end # Query with a logical conjunction (AND) of query conditions cursor = client[:restaurants].find({ 'cuisine' => 'Italian', 'address.zipcode' => '10075'}) cursor.each do |doc| puts doc end # Query with a logical disjunction (OR) of query conditions cursor = client[:restaurants].find('$or' => [{ 'cuisine' => 'Italian' }, { 'address.zipcode' => '10075'} ] ) cursor.each do |doc| puts doc end # Sort query results cursor = client[:restaurants].find.sort('borough'
=> Mongo::Index::ASCENDING, 'address.zipcode' => Mongo::Index::DESCENDING) cursor.each do |doc| puts doc end mongo-ruby-driver-2.21.3/examples/update.rb000066400000000000000000000026151505113246500206210ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Update top-level fields in a single document client[:restaurants].find(name: 'Juni').update_one('$set'=> { 'cuisine' => 'American (New)' }, '$currentDate' => { 'lastModified' => true }) # Update an embedded document in a single document client[:restaurants].find(restaurant_id: '41156888').update_one('$set'=> { 'address.street' => 'East 31st Street' }) # Update multiple documents client[:restaurants].find('address.zipcode' => '10016').update_many('$set'=> { 'borough' => 'Manhattan' }, '$currentDate' => { 'lastModified' => true }) # Replace the contents of a single document client[:restaurants].find(restaurant_id: '41704620').replace_one( 'name' => 'Vella 2', 'address' => { 'coord' => [-73.9557413, 40.7720266], 'building' => '1480', 'street' => '2 Avenue', 'zipcode' => '10075' } ) mongo-ruby-driver-2.21.3/gemfiles/000077500000000000000000000000001505113246500167635ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/gemfiles/bson_4-stable.gemfile000066400000000000000000000003071505113246500227510ustar00rootroot00000000000000# rubocop:todo all source "https://rubygems.org" gemspec path: '..' gem 'bson', git: 'https://github.com/mongodb/bson-ruby', branch: '4-stable' require_relative './standard' standard_dependencies mongo-ruby-driver-2.21.3/gemfiles/bson_master.gemfile000066400000000000000000000003051505113246500226270ustar00rootroot00000000000000# rubocop:todo all source "https://rubygems.org" gemspec path: '..' gem 'bson', git: 'https://github.com/mongodb/bson-ruby', branch: 'master' require_relative './standard' standard_dependencies mongo-ruby-driver-2.21.3/gemfiles/bson_min.gemfile000066400000000000000000000002201505113246500221130ustar00rootroot00000000000000# rubocop:todo all source "https://rubygems.org" gemspec path: '..' gem 'bson', '4.14.1' require_relative './standard' standard_dependencies mongo-ruby-driver-2.21.3/gemfiles/mongo_kerberos.gemfile000066400000000000000000000003311505113246500233250ustar00rootroot00000000000000# rubocop:todo all source "https://rubygems.org" gemspec path: '..' gem 'mongo_kerberos', git: 'https://github.com/mongodb/mongo-ruby-kerberos', branch: 'master' require_relative './standard' standard_dependencies mongo-ruby-driver-2.21.3/gemfiles/snappy_compression.gemfile000066400000000000000000000002101505113246500242410ustar00rootroot00000000000000# rubocop:todo all source "https://rubygems.org" gemspec path: '..' 
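# Installing the snappy gem alongside the driver enables snappy wire-protocol
# compression (selected via compressors=snappy in the connection URI, as the
# Evergreen run-tests.sh does when COMPRESSOR=snappy).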
gem 'snappy' require_relative './standard' standard_dependencies mongo-ruby-driver-2.21.3/gemfiles/standard.rb000066400000000000000000000034761505113246500211220ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:disable Metrics/AbcSize, Metrics/MethodLength, Metrics/BlockLength def standard_dependencies gem 'yard', '>= 0.9.35' gem 'ffi' group :development, :testing do gem 'jruby-openssl', platforms: :jruby gem 'json', platforms: :jruby gem 'rspec', '~> 3.12' gem 'activesupport', '<7.1' gem 'rake' gem 'webrick' gem 'byebug', platforms: :mri gem 'ruby-debug', platforms: :jruby gem 'aws-sdk-core', '~> 3' gem 'aws-sdk-cloudwatchlogs' gem 'aws-sdk-ec2' gem 'aws-sdk-ecs' gem 'aws-sdk-iam' gem 'aws-sdk-sts' gem 'paint' # for benchmark tests gem 'yajl-ruby', platforms: :mri, require: false gem 'celluloid', platforms: :mri, require: false gem 'rubocop', '~> 1.45.1' gem 'rubocop-performance', '~> 1.16.0' gem 'rubocop-rake', '~> 0.6.0' gem 'rubocop-rspec', '~> 2.18.1' platform :mri do # Debugger for VSCode. if !ENV['CI'] && !ENV['DOCKER'] && RUBY_VERSION < '3.0' gem 'debase' gem 'ruby-debug-ide' end end end group :testing do gem 'timecop' gem 'ice_nine' gem 'async', '2.23.1', platforms: :mri if RUBY_VERSION.match?(/^3\.1/) gem 'rubydns', platforms: :mri gem 'rspec-retry' gem 'rfc', '~> 0.2.0' gem 'fuubar' gem 'timeout-interrupt', platforms: :mri gem 'concurrent-ruby', platforms: :jruby gem 'dotenv' gem 'childprocess' end group :development do gem 'ruby-prof', platforms: :mri gem 'erubi' gem 'tilt' # solargraph depends on rbs, which won't build on jruby for some reason gem 'solargraph', platforms: :mri gem 'ruby-lsp', platforms: :mri end gem 'libmongocrypt-helper', '~> 1.14.0' if ENV['FLE'] == 'helper' end # rubocop:enable Metrics/AbcSize, Metrics/MethodLength, Metrics/BlockLength mongo-ruby-driver-2.21.3/gemfiles/zstd_compression.gemfile000066400000000000000000000002131505113246500237160ustar00rootroot00000000000000# rubocop:todo all source "https://rubygems.org" gemspec path: '..' gem 'zstd-ruby' require_relative './standard' standard_dependencies mongo-ruby-driver-2.21.3/lib/000077500000000000000000000000001505113246500157365ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/lib/mongo.rb000066400000000000000000000074301505113246500174060ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
require 'base64' require 'forwardable' require 'ipaddr' require 'logger' require 'openssl' require 'rbconfig' require 'resolv' require 'securerandom' require 'set' require 'socket' require 'stringio' require 'timeout' require 'uri' require 'zlib' autoload :CGI, 'cgi' require 'bson' require 'mongo/id' require 'mongo/bson' require 'mongo/semaphore' require 'mongo/distinguishing_semaphore' require 'mongo/condition_variable' require 'mongo/csot_timeout_holder' require 'mongo/options' require 'mongo/loggable' require 'mongo/cluster_time' require 'mongo/topology_version' require 'mongo/monitoring' require 'mongo/logger' require 'mongo/retryable' require 'mongo/operation' require 'mongo/error' require 'mongo/event' require 'mongo/address' require 'mongo/auth' require 'mongo/protocol' require 'mongo/background_thread' require 'mongo/cluster' require 'mongo/cursor' require 'mongo/caching_cursor' require 'mongo/collection' require 'mongo/database' require 'mongo/crypt' require 'mongo/client' # Purposely out-of-order so that database is loaded first require 'mongo/client_encryption' require 'mongo/dbref' require 'mongo/grid' require 'mongo/index' require 'mongo/search_index/view' require 'mongo/lint' require 'mongo/query_cache' require 'mongo/server' require 'mongo/server_selector' require 'mongo/session' require 'mongo/socket' require 'mongo/srv' require 'mongo/timeout' require 'mongo/uri' require 'mongo/version' require 'mongo/write_concern' require 'mongo/utils' require 'mongo/config' module Mongo class << self extend Forwardable # Delegate the given option along with its = and ? methods to the given # object. # # @param [ Object ] obj The object to delegate to. # @param [ Symbol ] opt The method to delegate. def self.delegate_option(obj, opt) def_delegators obj, opt, "#{opt}=", "#{opt}?" end # Take all the public instance methods from the Config singleton and allow # them to be accessed through the Mongo module directly. def_delegators Config, :options= delegate_option Config, :broken_view_aggregate delegate_option Config, :broken_view_options delegate_option Config, :validate_update_replace end # Clears the driver's OCSP response cache. module_function def clear_ocsp_cache Socket::OcspCache.clear end # This is a user-settable list of hooks that will be invoked when any new # TLS socket is connected. Each hook should be a Proc that takes # an OpenSSL::SSL::SSLContext object as an argument. These hooks can be used # to modify the TLS context (for example to disallow certain ciphers). # # @return [ Array ] The list of procs to be invoked when a TLS socket # is connected (may be an empty Array). module_function def tls_context_hooks @tls_context_hooks ||= [] end # Set the TLS context hooks. # # @param [ Array ] hooks An Array of Procs, each of which should take # an OpenSSL::SSL::SSLContext object as an argument. module_function def tls_context_hooks=(hooks) unless hooks.is_a?(Array) && hooks.all? { |hook| hook.is_a?(Proc) } raise ArgumentError, "TLS context hooks must be an array of Procs" end @tls_context_hooks = hooks end end mongo-ruby-driver-2.21.3/lib/mongo/000077500000000000000000000000001505113246500170555ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/lib/mongo/active_support.rb000066400000000000000000000013311505113246500224470ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2019-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # Require this file if using the Ruby driver with ActiveSupport. require "bson/active_support" mongo-ruby-driver-2.21.3/lib/mongo/address.rb000066400000000000000000000247471505113246500210420ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the 'License'); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an 'AS IS' BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'mongo/address/ipv4' require 'mongo/address/ipv6' require 'mongo/address/unix' require 'mongo/address/validator' module Mongo # Represents an address to a server, either with an IP address or socket # path. # # @since 2.0.0 class Address extend Forwardable # Mapping from socket family to resolver class. # # @since 2.0.0 FAMILY_MAP = { ::Socket::PF_UNIX => Unix, ::Socket::AF_INET6 => IPv6, ::Socket::AF_INET => IPv4 }.freeze # The localhost constant. # # @since 2.1.0 LOCALHOST = 'localhost'.freeze # Initialize the address. # # @example Initialize the address with a DNS entry and port. # Mongo::Address.new("app.example.com:27017") # # @example Initialize the address with a DNS entry and no port. # Mongo::Address.new("app.example.com") # # @example Initialize the address with an IPV4 address and port. # Mongo::Address.new("127.0.0.1:27017") # # @example Initialize the address with an IPV4 address and no port. # Mongo::Address.new("127.0.0.1") # # @example Initialize the address with an IPV6 address and port. # Mongo::Address.new("[::1]:27017") # # @example Initialize the address with an IPV6 address and no port. # Mongo::Address.new("[::1]") # # @example Initialize the address with a unix socket. # Mongo::Address.new("/path/to/socket.sock") # # @param [ String ] seed The provided address. # @param [ Hash ] options The address options. # # @option options [ Float ] :connect_timeout Connect timeout. # # @since 2.0.0 def initialize(seed, options = {}) if seed.nil? raise ArgumentError, "address must not be nil" end @seed = seed @host, @port = parse_host_port @options = Hash[options.map { |k, v| [k.to_sym, v] }] end # @return [ String ] seed The seed address. attr_reader :seed # @return [ String ] host The original host name. attr_reader :host # @return [ Integer ] port The port. attr_reader :port # @api private attr_reader :options # Check equality of the address to another. # # @example Check address equality. # address == other # # @param [ Object ] other The other object. # # @return [ true, false ] If the objects are equal. # # @since 2.0.0 def ==(other) return false unless other.is_a?(Address) host == other.host && port == other.port end # Check equality for hashing. # # @example Check hashing equality.
    #   address.eql?(other)
    #
    # @param [ Object ] other The other object.
    #
    # @return [ true, false ] If the objects are equal.
    #
    # @since 2.2.0
    def eql?(other)
      self == other
    end

    # Calculate the hash value for the address.
    #
    # @example Calculate the hash value.
    #   address.hash
    #
    # @return [ Integer ] The hash value.
    #
    # @since 2.0.0
    def hash
      [ host, port ].hash
    end

    # Get a pretty printed address inspection.
    #
    # @example Get the address inspection.
    #   address.inspect
    #
    # @return [ String ] The nice inspection string.
    #
    # @since 2.0.0
    def inspect
      "#<Mongo::Address:0x#{object_id} address=#{to_s}>"
    end

    # Get a socket for the address stored in this object, given the options.
    #
    # If the address stored in this object looks like a Unix path, this method
    # returns a Unix domain socket for this path.
    #
    # Otherwise, this method attempts to resolve the address stored in
    # this object to IPv4 and IPv6 addresses using +Socket#getaddrinfo+, then
    # connects to the resulting addresses and returns the socket of the first
    # successful connection. The order in which address families (IPv4/IPv6)
    # are tried is the same order in which the addresses are returned by
    # +getaddrinfo+, and is determined by the host system.
    #
    # Name resolution is performed on each +socket+ call. This is done so that
    # any changes to which addresses the host names used as seeds or in
    # server configuration resolve to are immediately noticed by the driver,
    # even if a socket has been connected to the affected host name/address
    # before. However, note that DNS TTL values may still affect when a change
    # to a host address is noticed by the driver.
    #
    # This method propagates any exceptions raised during DNS resolution and
    # subsequent connection attempts. In case of a host name resolving to
    # multiple IP addresses, the error raised by the last attempt is propagated
    # to the caller. This method does not map exceptions to Mongo::Error
    # subclasses, and may raise any subclass of Exception.
    #
    # @example Get a socket.
    #   address.socket(5, :ssl => true)
    #
    # @param [ Float ] socket_timeout The socket timeout.
    # @param [ Hash ] opts The options.
    #
    # @option opts [ Float ] :connect_timeout Connect timeout.
    # @option opts [ Boolean ] :csot Whether the client-side operation timeout
    #   should be considered when connecting the socket. This option influences
    #   only what errors will be raised if timeout expires.
    # @option opts [ true | false ] :ssl Whether to use SSL.
    # @option opts [ String ] :ssl_ca_cert
    #   Same as the corresponding Client/Socket::SSL option.
    # @option opts [ Array<OpenSSL::X509::Certificate> ] :ssl_ca_cert_object
    #   Same as the corresponding Client/Socket::SSL option.
    # @option opts [ String ] :ssl_ca_cert_string
    #   Same as the corresponding Client/Socket::SSL option.
    # @option opts [ String ] :ssl_cert
    #   Same as the corresponding Client/Socket::SSL option.
    # @option opts [ OpenSSL::X509::Certificate ] :ssl_cert_object
    #   Same as the corresponding Client/Socket::SSL option.
    # @option opts [ String ] :ssl_cert_string
    #   Same as the corresponding Client/Socket::SSL option.
    # @option opts [ String ] :ssl_key
    #   Same as the corresponding Client/Socket::SSL option.
    # @option opts [ OpenSSL::PKey ] :ssl_key_object
    #   Same as the corresponding Client/Socket::SSL option.
    # @option opts [ String ] :ssl_key_pass_phrase
    #   Same as the corresponding Client/Socket::SSL option.
    # @option opts [ String ] :ssl_key_string
    #   Same as the corresponding Client/Socket::SSL option.
    # @option opts [ true, false ] :ssl_verify
    #   Same as the corresponding Client/Socket::SSL option.
# @option opts [ true, false ] :ssl_verify_certificate # Same as the corresponding Client/Socket::SSL option. # @option opts [ true, false ] :ssl_verify_hostname # Same as the corresponding Client/Socket::SSL option. # # @return [ Mongo::Socket::SSL | Mongo::Socket::TCP | Mongo::Socket::Unix ] # The socket. # # @raise [ Mongo::Error ] If network connection failed. # # @since 2.0.0 # @api private def socket(socket_timeout, opts = {}) csot = !!opts[:csot] opts = { connect_timeout: Server::CONNECT_TIMEOUT, }.update(options).update(Hash[opts.map { |k, v| [k.to_sym, v] }]) map_exceptions(csot) do if seed.downcase =~ Unix::MATCH specific_address = Unix.new(seed.downcase) return specific_address.socket(socket_timeout, opts) end # When the driver connects to "localhost", it only attempts IPv4 # connections. When the driver connects to other hosts, it will # attempt both IPv4 and IPv6 connections. family = (host == LOCALHOST) ? ::Socket::AF_INET : ::Socket::AF_UNSPEC error = nil # Sometimes Socket#getaddrinfo returns the same info more than once # (multiple identical items in the returned array). It does not make # sense to try to connect to the same address more than once, thus # eliminate duplicates here. infos = ::Socket.getaddrinfo(host, nil, family, ::Socket::SOCK_STREAM) results = infos.map do |info| [info[4], info[3]] end.uniq results.each do |family, address_str| begin specific_address = FAMILY_MAP[family].new(address_str, port, host) socket = specific_address.socket(socket_timeout, opts) return socket rescue IOError, SystemCallError, Error::SocketTimeoutError, Error::SocketError => e error = e end end raise error end end # Get the address as a string. # # @example Get the address as a string. # address.to_s # # @return [ String ] The nice string. # # @since 2.0.0 def to_s if port if host.include?(':') "[#{host}]:#{port}" else "#{host}:#{port}" end else host end end private def parse_host_port address = seed.downcase case address when Unix::MATCH then Unix.parse(address) when IPv6::MATCH then IPv6.parse(address) else IPv4.parse(address) end end # Maps some errors to different ones, mostly low-level errors to driver # level errors # # @param [ Boolean ] csot Whether the client-side operation timeout # should be considered when connecting the socket. def map_exceptions(csot) begin yield rescue Errno::ETIMEDOUT => e if csot raise Error::TimeoutError, "#{e.class}: #{e} (for #{self})" else raise Error::SocketTimeoutError, "#{e.class}: #{e} (for #{self})" end rescue Error::SocketTimeoutError => e if csot raise Error::TimeoutError, "#{e.class}: #{e} (for #{self})" else raise e end rescue IOError, SystemCallError => e raise Error::SocketError, "#{e.class}: #{e} (for #{self})" rescue OpenSSL::SSL::SSLError => e raise Error::SocketError, "#{e.class}: #{e} (for #{self})" end end end end mongo-ruby-driver-2.21.3/lib/mongo/address/000077500000000000000000000000001505113246500205025ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/lib/mongo/address/ipv4.rb000066400000000000000000000106141505113246500217130ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the 'License'); # you may not use this file except in compliance with the License. 
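# A short usage sketch for the Address class above. Address#socket is
# marked @api private and the connection line needs a reachable server;
# host and timeout values here are illustrative.
require 'mongo'

address = Mongo::Address.new('127.0.0.1:27017')
address.host #=> "127.0.0.1"
address.port #=> 27017
address.to_s #=> "127.0.0.1:27017"

# Name resolution happens on every #socket call; for "localhost" only
# IPv4 is attempted, otherwise IPv4 and IPv6 results from getaddrinfo
# are tried in order and the first successful connection wins.
socket = address.socket(5, connect_timeout: 10)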
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an 'AS IS' BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

module Mongo
  class Address

    # Sets up resolution with IPv4 support if the address is an ip
    # address.
    #
    # @since 2.0.0
    class IPv4

      # @return [ String ] host The host.
      attr_reader :host

      # @return [ String ] host_name The original host name.
      attr_reader :host_name

      # @return [ Integer ] port The port.
      attr_reader :port

      # The regular expression to use to match an IPv4 ip address.
      #
      # @since 2.0.0
      MATCH = Regexp.new('\.').freeze

      # Split value constant.
      #
      # @since 2.1.0
      SPLIT = ':'.freeze

      # Parse an IPv4 address into its host and port.
      #
      # @example Parse the address.
      #   IPv4.parse("127.0.0.1:28011")
      #
      # @param [ String ] address The address to parse.
      #
      # @return [ Array<String, Integer> ] The host and port pair.
      #
      # @since 2.0.0
      def self.parse(address)
        parts = address.split(SPLIT)
        host = parts[0]
        port = (parts[1] || 27017).to_i
        [ host, port ]
      end

      # Initialize the IPv4 resolver.
      #
      # @example Initialize the resolver.
      #   IPv4.new("127.0.0.1", 27017, 'localhost')
      #
      # @param [ String ] host The host.
      # @param [ Integer ] port The port.
      #
      # @since 2.0.0
      def initialize(host, port, host_name=nil)
        @host = host
        @port = port
        @host_name = host_name
      end

      # Get a socket for the provided address type, given the options.
      #
      # @example Get an IPv4 socket.
      #   ipv4.socket(5, :ssl => true)
      #
      # @param [ Float ] socket_timeout The socket timeout.
      # @param [ Hash ] options The options.
      #
      # @option options [ Float ] :connect_timeout Connect timeout.
      # @option options [ true | false ] :ssl Whether to use TLS.
      # @option options [ String ] :ssl_ca_cert
      #   Same as the corresponding Client/Socket::SSL option.
      # @option options [ Array<OpenSSL::X509::Certificate> ] :ssl_ca_cert_object
      #   Same as the corresponding Client/Socket::SSL option.
      # @option options [ String ] :ssl_ca_cert_string
      #   Same as the corresponding Client/Socket::SSL option.
      # @option options [ String ] :ssl_cert
      #   Same as the corresponding Client/Socket::SSL option.
      # @option options [ OpenSSL::X509::Certificate ] :ssl_cert_object
      #   Same as the corresponding Client/Socket::SSL option.
      # @option options [ String ] :ssl_cert_string
      #   Same as the corresponding Client/Socket::SSL option.
      # @option options [ String ] :ssl_key
      #   Same as the corresponding Client/Socket::SSL option.
      # @option options [ OpenSSL::PKey ] :ssl_key_object
      #   Same as the corresponding Client/Socket::SSL option.
      # @option options [ String ] :ssl_key_pass_phrase
      #   Same as the corresponding Client/Socket::SSL option.
      # @option options [ String ] :ssl_key_string
      #   Same as the corresponding Client/Socket::SSL option.
      # @option options [ true, false ] :ssl_verify
      #   Same as the corresponding Client/Socket::SSL option.
      # @option options [ true, false ] :ssl_verify_certificate
      #   Same as the corresponding Client/Socket::SSL option.
      # @option options [ true, false ] :ssl_verify_hostname
      #   Same as the corresponding Client/Socket::SSL option.
      #
      # @return [ Mongo::Socket::SSL, Mongo::Socket::TCP ] The socket.
# # @since 2.0.0 # @api private def socket(socket_timeout, options = {}) if options[:ssl] Socket::SSL.new(host, port, host_name, socket_timeout, Socket::PF_INET, options) else Socket::TCP.new(host, port, socket_timeout, Socket::PF_INET, options) end end end end end mongo-ruby-driver-2.21.3/lib/mongo/address/ipv6.rb000066400000000000000000000117161505113246500217210ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the 'License'); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an 'AS IS' BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Address # Sets up resolution with IPv6 support if the address is an ip # address. # # @since 2.0.0 class IPv6 # @return [ String ] host The host. attr_reader :host # @return [ String ] host_name The original host name. attr_reader :host_name # @return [ Integer ] port The port. attr_reader :port # The regular expression to use to match an IPv6 ip address. # # @since 2.0.0 MATCH = Regexp.new('::').freeze # Parse an IPv6 address into its host and port. # # @example Parse the address. # IPv6.parse("[::1]:28011") # # @param [ String ] address The address to parse. # # @return [ Array ] The host and port pair. # # @since 2.0.0 def self.parse(address) # IPAddr's parser handles IP address only, not port. # Therefore we need to handle the port ourselves if address =~ /[\[\]]/ parts = address.match(/\A\[(.+)\](?::(\d+))?\z/) if parts.nil? raise ArgumentError, "Invalid IPv6 address: #{address}" end host = parts[1] port = (parts[2] || 27017).to_i else host = address port = 27017 end # Validate host. # This will raise IPAddr::InvalidAddressError # on newer rubies which is a subclass of ArgumentError # if host is invalid begin IPAddr.new(host) rescue ArgumentError raise ArgumentError, "Invalid IPv6 address: #{address}" end [ host, port ] end # Initialize the IPv6 resolver. # # @example Initialize the resolver. # IPv6.new("::1", 28011, 'localhost') # # @param [ String ] host The host. # @param [ Integer ] port The port. # # @since 2.0.0 def initialize(host, port, host_name=nil) @host = host @port = port @host_name = host_name end # Get a socket for the provided address type, given the options. # # @example Get an IPv6 socket. # ipv4.socket(5, :ssl => true) # # @param [ Float ] socket_timeout The socket timeout. # @param [ Hash ] options The options. # # @option options [ Float ] :connect_timeout Connect timeout. # @option options [ true | false ] :ssl Whether to use TLS. # @option options [ String ] :ssl_ca_cert # Same as the corresponding Client/Socket::SSL option. # @option options [ Array ] :ssl_ca_cert_object # Same as the corresponding Client/Socket::SSL option. # @option options [ String ] :ssl_ca_cert_string # Same as the corresponding Client/Socket::SSL option. # @option options [ String ] :ssl_cert # Same as the corresponding Client/Socket::SSL option. # @option options [ OpenSSL::X509::Certificate ] :ssl_cert_object # Same as the corresponding Client/Socket::SSL option. # @option options [ String ] :ssl_cert_string # Same as the corresponding Client/Socket::SSL option. 
# @option options [ String ] :ssl_key # Same as the corresponding Client/Socket::SSL option. # @option options [ OpenSSL::PKey ] :ssl_key_object # Same as the corresponding Client/Socket::SSL option. # @option options [ String ] :ssl_key_pass_phrase # Same as the corresponding Client/Socket::SSL option. # @option options [ String ] :ssl_key_string # Same as the corresponding Client/Socket::SSL option. # @option options [ true, false ] :ssl_verify # Same as the corresponding Client/Socket::SSL option. # @option options [ true, false ] :ssl_verify_certificate # Same as the corresponding Client/Socket::SSL option. # @option options [ true, false ] :ssl_verify_hostname # Same as the corresponding Client/Socket::SSL option. # # @return [ Mongo::Socket::SSL, Mongo::Socket::TCP ] The socket. # # @since 2.0.0 # @api private def socket(socket_timeout, options = {}) if options[:ssl] Socket::SSL.new(host, port, host_name, socket_timeout, Socket::PF_INET6, options) else Socket::TCP.new(host, port, socket_timeout, Socket::PF_INET6, options) end end end end end mongo-ruby-driver-2.21.3/lib/mongo/address/unix.rb000066400000000000000000000043161505113246500220160ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the 'License'); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an 'AS IS' BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Address # Sets up socket addresses. # # @since 2.0.0 class Unix # @return [ String ] host The host. attr_reader :host # @return [ nil ] port Will always be nil. attr_reader :port # The regular expression to use to match a socket path. # # @since 2.0.0 MATCH = Regexp.new('\.sock').freeze # Parse a socket path. # # @example Parse the address. # Unix.parse("/path/to/socket.sock") # # @param [ String ] address The address to parse. # # @return [ Array ] A list with the host (socket path). # # @since 2.0.0 def self.parse(address) [ address ] end # Initialize the socket resolver. # # @example Initialize the resolver. # Unix.new("/path/to/socket.sock", "/path/to/socket.sock") # # @param [ String ] host The host. # # @since 2.0.0 def initialize(host, port=nil, host_name=nil) @host = host end # Get a socket for the provided address type, given the options. # # @example Get a Unix socket. # address.socket(5) # # @param [ Float ] socket_timeout The socket timeout. # @param [ Hash ] options The options. # # @option options [ Float ] :connect_timeout Connect timeout. # # @return [ Mongo::Socket::Unix ] The socket. # # @since 2.0.0 # @api private def socket(socket_timeout, options = {}) Socket::Unix.new(host, socket_timeout, options) end end end end mongo-ruby-driver-2.21.3/lib/mongo/address/validator.rb000066400000000000000000000074651505113246500230300ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2017-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the 'License'); # you may not use this file except in compliance with the License. 
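# How the three resolver classes above split seed strings. These calls are
# side-effect free and the results follow directly from each parse
# implementation; 27017 is the default port.
Mongo::Address::IPv4.parse('127.0.0.1:28011')      #=> ["127.0.0.1", 28011]
Mongo::Address::IPv4.parse('127.0.0.1')            #=> ["127.0.0.1", 27017]
Mongo::Address::IPv6.parse('[::1]:28011')          #=> ["::1", 28011]
Mongo::Address::IPv6.parse('[::1]')                #=> ["::1", 27017]
Mongo::Address::Unix.parse('/path/to/socket.sock') #=> ["/path/to/socket.sock"]

# IPv6.parse raises ArgumentError for malformed input such as '[::1',
# via both the bracket regexp and the IPAddr validity check.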
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an 'AS IS' BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Address # @api private module Validator # Takes an address string in ipv4/ipv6/hostname/socket path format and # validates its format. def validate_address_str!(address_str) case address_str when /\A\[[\d:]+\](?::(\d+))?\z/ # ipv6 with optional port if port_str = $1 validate_port_str!(port_str) end when /\A\//, /\.sock\z/ # Unix socket path. # Spec requires us to validate that the path has no unescaped # slashes, but if this were to be the case, parsing would have # already failed elsewhere because the URI would've been split in # a weird place. # The spec also allows relative socket paths and requires that # socket paths end in ".sock". We accept all paths but special case # the .sock extension to avoid relative paths falling into the # host:port case below. when /[\/\[\]]/ # Not a host:port nor an ipv4 address with optional port. # Possibly botched ipv6 address with e.g. port delimiter present and # port missing, or extra junk before or after. raise Error::InvalidAddress, "Invalid hostname: #{address_str}" when /:.*:/m raise Error::InvalidAddress, "Multiple port delimiters are not allowed: #{address_str}" else # host:port or ipv4 address with optional port number host, port = address_str.split(':') if host.empty? raise Error::InvalidAddress, "Host is empty: #{address_str}" end validate_hostname!(host) if port && port.empty? raise Error::InvalidAddress, "Port is empty: #{address_str}" end validate_port_str!(port) end end private # Validates format of the hostname, in particular for further use as # the origin in same origin verification. # # The hostname must have been normalized to remove the trailing dot if # it was obtained from a DNS record. This method prohibits trailing dots. def validate_hostname!(host) # Since we are performing same origin verification during SRV # processing, prohibit leading dots in hostnames, trailing dots # and runs of multiple dots. DNS resolution of SRV records yields # hostnames with trailing dots, those trailing dots are removed # during normalization process prior to validation. if host.start_with?('.') raise Error::InvalidAddress, "Hostname cannot start with a dot: #{host}" end if host.end_with?('.') raise Error::InvalidAddress, "Hostname cannot end with a dot: #{host}" end if host.include?('..') raise Error::InvalidAddress, "Runs of multiple dots are not allowed in hostname: #{host}" end end def validate_port_str!(port) unless port.nil? || (port.length > 0 && port.to_i > 0 && port.to_i <= 65535) raise Error::InvalidAddress, "Invalid port: #{port}. Port must be an integer greater than 0 and less than 65536" end end end end end mongo-ruby-driver-2.21.3/lib/mongo/auth.rb000066400000000000000000000133731505113246500203520ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
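# A demonstration of the Validator module above. AddressChecker is a
# hypothetical host class; each call below is independent, since the
# failing calls raise immediately.
class AddressChecker
  include Mongo::Address::Validator
end

checker = AddressChecker.new
checker.validate_address_str!('db.example.com:27017') # passes
checker.validate_address_str!('/tmp/mongod.sock')     # passes (socket path)
checker.validate_address_str!('db.example.com.')      # raises Mongo::Error::InvalidAddress (trailing dot)
checker.validate_address_str!('host:123:456')         # raises Mongo::Error::InvalidAddress (two port delimiters)
checker.validate_address_str!('host:70000')           # raises Mongo::Error::InvalidAddress (port out of range)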
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'mongo/auth/credential_cache' require 'mongo/auth/stringprep' require 'mongo/auth/conversation_base' require 'mongo/auth/sasl_conversation_base' require 'mongo/auth/scram_conversation_base' require 'mongo/auth/user' require 'mongo/auth/roles' require 'mongo/auth/base' require 'mongo/auth/aws' require 'mongo/auth/cr' require 'mongo/auth/gssapi' require 'mongo/auth/ldap' require 'mongo/auth/scram' require 'mongo/auth/scram256' require 'mongo/auth/x509' require 'mongo/error/read_write_retryable' require 'mongo/error/labelable' module Mongo # This namespace contains all authentication related behavior. # # @since 2.0.0 module Auth extend self # The external database name. # # @since 2.0.0 # @api private EXTERNAL = '$external'.freeze # Constant for the nonce command. # # @since 2.0.0 # @api private GET_NONCE = { getnonce: 1 }.freeze # Constant for the nonce field. # # @since 2.0.0 # @api private NONCE = 'nonce'.freeze # Map the symbols parsed from the URI connection string to strategies. # # @note This map is not frozen because when mongo_kerberos is loaded, # it mutates this map by adding the Kerberos authenticator. # # @since 2.0.0 SOURCES = { aws: Aws, gssapi: Gssapi, mongodb_cr: CR, mongodb_x509: X509, plain: LDAP, scram: Scram, scram256: Scram256, } # Get an authenticator for the provided user to authenticate over the # provided connection. # # @param [ Auth::User ] user The user to authenticate. # @param [ Mongo::Connection ] connection The connection to authenticate over. # # @option opts [ String | nil ] speculative_auth_client_nonce The client # nonce used in speculative auth on the specified connection that # produced the specified speculative auth result. # @option opts [ BSON::Document | nil ] speculative_auth_result The # value of speculativeAuthenticate field of hello response of # the handshake on the specified connection. # # @return [ Auth::Aws | Auth::CR | Auth::Gssapi | Auth::LDAP | # Auth::Scram | Auth::Scram256 | Auth::X509 ] The authenticator. # # @since 2.0.0 # @api private def get(user, connection, **opts) mechanism = user.mechanism raise InvalidMechanism.new(mechanism) if !SOURCES.has_key?(mechanism) SOURCES[mechanism].new(user, connection, **opts) end # Raised when trying to authorize with an invalid configuration # # @since 2.11.0 class InvalidConfiguration < Mongo::Error::AuthError; end # Raised when trying to get an invalid authorization mechanism. # # @since 2.0.0 class InvalidMechanism < InvalidConfiguration # Instantiate the new error. # # @example Instantiate the error. # Mongo::Auth::InvalidMechanism.new(:test) # # @param [ Symbol ] mechanism The provided mechanism. # # @since 2.0.0 def initialize(mechanism) known_mechanisms = SOURCES.keys.sort.map do |key| key.inspect end.join(', ') super("#{mechanism.inspect} is invalid, please use one of the following mechanisms: #{known_mechanisms}") end end # Raised when a user is not authorized on a database. # # @since 2.0.0 class Unauthorized < Mongo::Error::AuthError include Error::ReadWriteRetryable include Error::Labelable # @return [ Integer ] The error code. attr_reader :code # Instantiate the new error. 
# # @example Instantiate the error. # Mongo::Auth::Unauthorized.new(user) # # @param [ Mongo::Auth::User ] user The unauthorized user. # @param [ String ] used_mechanism Auth mechanism actually used for # authentication. This is a full string like SCRAM-SHA-256. # @param [ String ] message The error message returned by the server. # @param [ Server ] server The server instance that authentication # was attempted against. # @param [ Integer ] The error code. # # @since 2.0.0 def initialize(user, used_mechanism: nil, message: nil, server: nil, code: nil ) @code = code configured_bits = [] used_bits = [ "auth source: #{user.auth_source}", ] if user.mechanism configured_bits << "mechanism: #{user.mechanism}" end if used_mechanism used_bits << "used mechanism: #{used_mechanism}" end if server used_bits << "used server: #{server.address} (#{server.status})" end used_user = if user.mechanism == :mongodb_x509 'Client certificate' else "User #{user.name}" end if configured_bits.empty? configured_bits = '' else configured_bits = " (#{configured_bits.join(', ')})" end used_bits = " (#{used_bits.join(', ')})" msg = "#{used_user}#{configured_bits} is not authorized to access #{user.database}#{used_bits}" if message msg += ': ' + message end super(msg) end end end end mongo-ruby-driver-2.21.3/lib/mongo/auth/000077500000000000000000000000001505113246500200165ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/lib/mongo/auth/aws.rb000066400000000000000000000023161505113246500211370ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Auth class Aws < Base MECHANISM = 'MONGODB-AWS'.freeze # Log the user in on the current connection. # # @return [ BSON::Document ] The document of the authentication response. def login converse_2_step(connection, conversation) rescue StandardError CredentialsCache.instance.clear raise end end end end require 'mongo/auth/aws/conversation' require 'mongo/auth/aws/credentials' require 'mongo/auth/aws/credentials_cache' require 'mongo/auth/aws/credentials_retriever' require 'mongo/auth/aws/request' mongo-ruby-driver-2.21.3/lib/mongo/auth/aws/000077500000000000000000000000001505113246500206105ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/lib/mongo/auth/aws/conversation.rb000066400000000000000000000104371505113246500236540ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
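# How Auth.get in auth.rb above maps a user's mechanism to an authenticator
# class via the SOURCES table. The connection argument is assumed to be an
# established Mongo::Server::Connection and is not constructed here; the
# credentials are illustrative.
user = Mongo::Auth::User.new(
  user: 'alice',
  password: 'secret',
  auth_mech: :scram256
)

authenticator = Mongo::Auth.get(user, connection)
authenticator.class #=> Mongo::Auth::Scram256
authenticator.login # runs the SASL conversation on the connection

# An unrecognized mechanism symbol raises Auth::InvalidMechanism, whose
# message lists the valid choices.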
module Mongo module Auth class Aws # Defines behavior around a single MONGODB-AWS conversation between the # client and server. # # @see https://github.com/mongodb/specifications/blob/master/source/auth/auth.md#mongodb-aws # # @api private class Conversation < SaslConversationBase # Continue the AWS conversation. This sends the client final message # to the server after setting the reply from the previous server # communication. # # @param [ BSON::Document ] reply_document The reply document of the # previous message. # @param [ Server::Connection ] connection The connection being # authenticated. # # @return [ Protocol::Message ] The next message to send. def continue(reply_document, connection) @conversation_id = reply_document[:conversationId] payload = reply_document[:payload].data payload = BSON::Document.from_bson(BSON::ByteBuffer.new(payload)) @server_nonce = payload[:s].data validate_server_nonce! @sts_host = payload[:h] unless (1..255).include?(@sts_host.bytesize) raise Error::InvalidServerAuthConfiguration, "STS host name length is not in 1..255 bytes range: #{@sts_host}" end selector = CLIENT_CONTINUE_MESSAGE.merge( payload: BSON::Binary.new(client_final_payload), conversationId: conversation_id, ) build_message(connection, user.auth_source, selector) end private # @return [ String ] The server nonce. attr_reader :server_nonce # Get the id of the conversation. # # @return [ Integer ] The conversation id. attr_reader :conversation_id def client_first_data { r: BSON::Binary.new(client_nonce), p: 110, } end def client_first_payload client_first_data.to_bson.to_s end def wrap_data(data) BSON::Binary.new(data.to_bson.to_s) end def client_nonce @client_nonce ||= SecureRandom.random_bytes(32) end def client_final_payload credentials = CredentialsRetriever.new(user).credentials request = Request.new( access_key_id: credentials.access_key_id, secret_access_key: credentials.secret_access_key, session_token: credentials.session_token, host: @sts_host, server_nonce: server_nonce, ) # Uncomment this line to validate obtained credentials on the # client side prior to sending them to the server. # This generally produces informative diagnostics as to why # the credentials are not valid (e.g., they could be expired) # whereas the server normally does not elaborate on why # authentication failed (but the reason usually is logged into # the server logs). # # Note that credential validation requires that the client is # able to access AWS STS. If this is not permitted by firewall # rules, validation will fail but credentials may be perfectly OK # and the server may be able to authenticate using them just fine # (provided the server is allowed to communicate with STS). #request.validate! payload = { a: request.authorization, d: request.formatted_time, } if credentials.session_token payload[:t] = credentials.session_token end payload.to_bson.to_s end end end end end mongo-ruby-driver-2.21.3/lib/mongo/auth/aws/credentials.rb000066400000000000000000000023201505113246500234270ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2023-present MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
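# The shape of the MONGODB-AWS client-first payload assembled by
# client_first_data in the Conversation class above: a BSON document with
# the 32-byte client nonce under "r" and the gs2-cb-flag byte under "p"
# (110 is ASCII 'n', i.e. no channel binding). A standalone sketch; the
# driver wires this into the saslStart command for the $external database.
require 'bson'
require 'securerandom'

nonce = SecureRandom.random_bytes(32)
payload = { r: BSON::Binary.new(nonce), p: 110 }.to_bson.to_s
# The server replies with its own nonce ("s", which must extend the client
# nonce) and the STS host ("h") that the client then signs against.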
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Auth class Aws # The AWS credential set. # # @api private Credentials = Struct.new(:access_key_id, :secret_access_key, :session_token, :expiration) do # @return [ true | false ] Whether the credentials have expired. def expired? if expiration.nil? false else # According to the spec, Credentials are considered # valid if they are more than five minutes away from expiring. Time.now.utc >= expiration - 300 end end end end end end mongo-ruby-driver-2.21.3/lib/mongo/auth/aws/credentials_cache.rb000066400000000000000000000041241505113246500245560ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2023-present MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Auth class Aws # Thread safe cache to store AWS credentials. # # @api private class CredentialsCache # Get or create the singleton instance of the cache. # # @return [ CredentialsCache ] The singleton instance. def self.instance @instance ||= new end def initialize @lock = Mutex.new @credentials = nil end # Set the credentials in the cache. # # @param [ Aws::Credentials ] credentials The credentials to cache. def credentials=(credentials) @lock.synchronize do @credentials = credentials end end # Get the credentials from the cache. # # @return [ Aws::Credentials ] The cached credentials. def credentials @lock.synchronize do @credentials end end # Fetch the credentials from the cache or yield to get them # if they are not in the cache or have expired. # # @return [ Aws::Credentials ] The cached credentials. def fetch @lock.synchronize do @credentials = yield if @credentials.nil? || @credentials.expired? @credentials end end # Clear the credentials from the cache. def clear @lock.synchronize do @credentials = nil end end end end end end mongo-ruby-driver-2.21.3/lib/mongo/auth/aws/credentials_retriever.rb000066400000000000000000000413101505113246500255200ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
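# The five-minute expiry buffer implemented by Credentials#expired? above,
# shown with made-up key material. Credentials with a nil expiration (for
# example, static keys from environment variables) never expire.
fresh = Mongo::Auth::Aws::Credentials.new('AKIDEXAMPLE', 'secret', nil, Time.now.utc + 3600)
fresh.expired?   #=> false, more than 300 seconds of validity remain

closing = Mongo::Auth::Aws::Credentials.new('AKIDEXAMPLE', 'secret', nil, Time.now.utc + 120)
closing.expired? #=> true, within the 300-second buffer

# CredentialsCache#fetch yields only when the cached set is nil or expired,
# so callers share one retrieval:
cache = Mongo::Auth::Aws::CredentialsCache.instance
cache.fetch { fresh } # stores and returns the block's result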
module Mongo module Auth class Aws # Raised when trying to authorize with an invalid configuration # # @api private class CredentialsNotFound < Mongo::Error::AuthError def initialize super("Could not locate AWS credentials (checked Client URI and Ruby options, environment variables, ECS and EC2 metadata, and Web Identity)") end end # Retrieves AWS credentials from a variety of sources. # # This class provides for AWS credentials retrieval from: # - the passed user (which receives the credentials passed to the # client via URI options and Ruby options) # - AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_SESSION_TOKEN # environment variables (commonly used by AWS SDKs and various tools, # as well as AWS Lambda) # - AssumeRoleWithWebIdentity API call # - EC2 metadata endpoint # - ECS metadata endpoint # # The sources listed above are consulted in the order specified. # The first source that contains any of the three credential components # (access key id, secret access key or session token) is used. # The credential components must form a valid set if any of the components # is specified; meaning, access key id and secret access key must # always be provided together, and if a session token is provided # the key id and secret key must also be provided. If a source provides # partial credentials, credential retrieval fails with an exception. # # @api private class CredentialsRetriever # Timeout for metadata operations, in seconds. # # The auth spec suggests a 10 second timeout but this seems # excessively long given that the endpoint is essentially local. METADATA_TIMEOUT = 5 # @param [ Auth::User | nil ] user The user object, if one was provided. # @param [ Auth::Aws::CredentialsCache ] credentials_cache The credentials cache. def initialize(user = nil, credentials_cache: CredentialsCache.instance) @user = user @credentials_cache = credentials_cache end # @return [ Auth::User | nil ] The user object, if one was provided. attr_reader :user # Retrieves a valid set of credentials, if possible, or raises # Auth::InvalidConfiguration. # # @param [ CsotTimeoutHolder | nil ] timeout_holder CSOT timeout, if any. # # @return [ Auth::Aws::Credentials ] A valid set of credentials. # # @raise Auth::InvalidConfiguration if a source contains an invalid set # of credentials. # @raise Auth::Aws::CredentialsNotFound if credentials could not be # retrieved from any source. # @raise Error::TimeoutError if credentials cannot be retrieved within # the timeout defined on the operation context. def credentials(timeout_holder = nil) credentials = credentials_from_user(user) return credentials unless credentials.nil? credentials = credentials_from_environment return credentials unless credentials.nil? credentials = @credentials_cache.fetch { obtain_credentials_from_endpoints(timeout_holder) } return credentials unless credentials.nil? raise Auth::Aws::CredentialsNotFound end private # Returns credentials from the user object. # # @param [ Auth::User | nil ] user The user object, if one was provided. # # @return [ Auth::Aws::Credentials | nil ] A set of credentials, or nil # # @raise Auth::InvalidConfiguration if a source contains an invalid set # of credentials. def credentials_from_user(user) return nil unless user credentials = Credentials.new( user.name, user.password, user.auth_mech_properties['aws_session_token'] ) return credentials if credentials_valid?(credentials, 'Mongo::Client URI or Ruby options') end # Returns credentials from environment variables. 
# # @return [ Auth::Aws::Credentials | nil ] A set of credentials, or nil # if retrieval failed or the obtained credentials are invalid. # # @raise Auth::InvalidConfiguration if a source contains an invalid set # of credentials. def credentials_from_environment credentials = Credentials.new( ENV['AWS_ACCESS_KEY_ID'], ENV['AWS_SECRET_ACCESS_KEY'], ENV['AWS_SESSION_TOKEN'] ) credentials if credentials && credentials_valid?(credentials, 'environment variables') end # Returns credentials from the AWS metadata endpoints. # # @param [ CsotTimeoutHolder ] timeout_holder CSOT timeout. # # @return [ Auth::Aws::Credentials | nil ] A set of credentials, or nil # if retrieval failed or the obtained credentials are invalid. # # @raise Auth::InvalidConfiguration if a source contains an invalid set # of credentials. # @ raise Error::TimeoutError if credentials cannot be retrieved within # the timeout defined on the operation context. def obtain_credentials_from_endpoints(timeout_holder = nil) if (credentials = web_identity_credentials(timeout_holder)) && credentials_valid?(credentials, 'Web identity token') credentials elsif (credentials = ecs_metadata_credentials(timeout_holder)) && credentials_valid?(credentials, 'ECS task metadata') credentials elsif (credentials = ec2_metadata_credentials(timeout_holder)) && credentials_valid?(credentials, 'EC2 instance metadata') credentials end end # Returns credentials from the EC2 metadata endpoint. The credentials # could be empty, partial or invalid. # # @param [ CsotTimeoutHolder ] timeout_holder CSOT timeout. # # @return [ Auth::Aws::Credentials | nil ] A set of credentials, or nil # if retrieval failed. # @ raise Error::TimeoutError if credentials cannot be retrieved within # the timeout. def ec2_metadata_credentials(timeout_holder = nil) timeout_holder&.check_timeout! http = Net::HTTP.new('169.254.169.254') req = Net::HTTP::Put.new('/latest/api/token', # The TTL is required in order to obtain the metadata token. {'x-aws-ec2-metadata-token-ttl-seconds' => '30'}) resp = with_timeout(timeout_holder) do http.request(req) end if resp.code != '200' return nil end metadata_token = resp.body resp = with_timeout(timeout_holder) do http_get(http, '/latest/meta-data/iam/security-credentials', metadata_token) end if resp.code != '200' return nil end role_name = resp.body escaped_role_name = CGI.escape(role_name).gsub('+', '%20') resp = with_timeout(timeout_holder) do http_get(http, "/latest/meta-data/iam/security-credentials/#{escaped_role_name}", metadata_token) end if resp.code != '200' return nil end payload = JSON.parse(resp.body) unless payload['Code'] == 'Success' return nil end Credentials.new( payload['AccessKeyId'], payload['SecretAccessKey'], payload['Token'], DateTime.parse(payload['Expiration']).to_time ) # When trying to use the EC2 metadata endpoint on ECS: # Errno::EINVAL: Failed to open TCP connection to 169.254.169.254:80 (Invalid argument - connect(2) for "169.254.169.254" port 80) rescue ::Timeout::Error, IOError, SystemCallError, TypeError return nil end # Returns credentials from the ECS metadata endpoint. The credentials # could be empty, partial or invalid. # # @param [ CsotTimeoutHolder | nil ] timeout_holder CSOT timeout. # # @return [ Auth::Aws::Credentials | nil ] A set of credentials, or nil # if retrieval failed. # @ raise Error::TimeoutError if credentials cannot be retrieved within # the timeout defined on the operation context. def ecs_metadata_credentials(timeout_holder = nil) timeout_holder&.check_timeout! 
relative_uri = ENV['AWS_CONTAINER_CREDENTIALS_RELATIVE_URI'] if relative_uri.nil? || relative_uri.empty? return nil end http = Net::HTTP.new('169.254.170.2') # Per https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html # the value in AWS_CONTAINER_CREDENTIALS_RELATIVE_URI includes # the leading slash. # The current language in MONGODB-AWS specification implies that # a leading slash must be added by the driver, but this is not # in fact needed. req = Net::HTTP::Get.new(relative_uri) resp = with_timeout(timeout_holder) do http.request(req) end if resp.code != '200' return nil end payload = JSON.parse(resp.body) Credentials.new( payload['AccessKeyId'], payload['SecretAccessKey'], payload['Token'], DateTime.parse(payload['Expiration']).to_time ) rescue ::Timeout::Error, IOError, SystemCallError, TypeError return nil end # Returns credentials associated with web identity token that is # stored in a file. This authentication mechanism is used to authenticate # inside EKS. See https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html # for further details. # # @param [ CsotTimeoutHolder | nil ] timeout_holder CSOT timeout. # # @return [ Auth::Aws::Credentials | nil ] A set of credentials, or nil # if retrieval failed. def web_identity_credentials(timeout_holder = nil) web_identity_token, role_arn, role_session_name = prepare_web_identity_inputs return nil if web_identity_token.nil? response = request_web_identity_credentials( web_identity_token, role_arn, role_session_name, timeout_holder ) return if response.nil? credentials_from_web_identity_response(response) end # Returns inputs for the AssumeRoleWithWebIdentity AWS API call. # # @return [ Array ] Web # identity token, role arn, and role session name. def prepare_web_identity_inputs token_file = ENV['AWS_WEB_IDENTITY_TOKEN_FILE'] role_arn = ENV['AWS_ROLE_ARN'] if token_file.nil? || role_arn.nil? return nil end web_identity_token = File.open(token_file).read role_session_name = ENV['AWS_ROLE_SESSION_NAME'] if role_session_name.nil? role_session_name = "ruby-app-#{SecureRandom.alphanumeric(50)}" end [web_identity_token, role_arn, role_session_name] rescue Errno::ENOENT, IOError, SystemCallError nil end # Calls AssumeRoleWithWebIdentity to obtain credentials for the # given web identity token. # # @param [ String ] token The OAuth 2.0 access token or # OpenID Connect ID token that is provided by the identity provider. # @param [ String ] role_arn The Amazon Resource Name (ARN) of the role # that the caller is assuming. # @param [ String ] role_session_name An identifier for the assumed # role session. # @param [ CsotTimeoutHolder | nil ] timeout_holder CSOT timeout. # # @return [ Net::HTTPResponse | nil ] AWS API response if successful, # otherwise nil. # # @ raise Error::TimeoutError if credentials cannot be retrieved within # the timeout defined on the operation context. def request_web_identity_credentials(token, role_arn, role_session_name, timeout_holder) timeout_holder&.check_timeout! 
uri = URI('https://sts.amazonaws.com/') params = { 'Action' => 'AssumeRoleWithWebIdentity', 'Version' => '2011-06-15', 'RoleArn' => role_arn, 'WebIdentityToken' => token, 'RoleSessionName' => role_session_name } uri.query = ::URI.encode_www_form(params) req = Net::HTTP::Post.new(uri) req['Accept'] = 'application/json' resp = with_timeout(timeout_holder) do Net::HTTP.start(uri.hostname, uri.port, use_ssl: true) do |https| https.request(req) end end if resp.code != '200' return nil end resp rescue Errno::ENOENT, IOError, SystemCallError nil end # Extracts credentials from AssumeRoleWithWebIdentity response. # # @param [ Net::HTTPResponse ] response AssumeRoleWithWebIdentity # call response. # # @return [ Auth::Aws::Credentials | nil ] A set of credentials, or nil # if response parsing failed. def credentials_from_web_identity_response(response) payload = JSON.parse(response.body).dig( 'AssumeRoleWithWebIdentityResponse', 'AssumeRoleWithWebIdentityResult', 'Credentials' ) || {} Credentials.new( payload['AccessKeyId'], payload['SecretAccessKey'], payload['SessionToken'], Time.at(payload['Expiration']) ) rescue JSON::ParserError, TypeError nil end def http_get(http, uri, metadata_token) req = Net::HTTP::Get.new(uri, {'x-aws-ec2-metadata-token' => metadata_token}) http.request(req) end # Checks whether the credentials provided are valid. # # Returns true if they are valid, false if they are empty, and # raises Auth::InvalidConfiguration if the credentials are # incomplete (i.e. some of the components are missing). def credentials_valid?(credentials, source) unless credentials.access_key_id || credentials.secret_access_key || credentials.session_token then return false end if credentials.access_key_id || credentials.secret_access_key if credentials.access_key_id && !credentials.secret_access_key raise Auth::InvalidConfiguration, "Access key ID is provided without secret access key (source: #{source})" end if credentials.secret_access_key && !credentials.access_key_id raise Auth::InvalidConfiguration, "Secret access key is provided without access key ID (source: #{source})" end elsif credentials.session_token raise Auth::InvalidConfiguration, "Session token is provided without access key ID or secret access key (source: #{source})" end true end # Execute the given block considering the timeout defined on the context, # or the default timeout value. # # We use +Timeout.timeout+ here because there is no other acceptable easy # way to time limit http requests. # # @param [ CsotTimeoutHolder | nil ] timeout_holder CSOT timeout. # # @ raise Error::TimeoutError if deadline exceeded. def with_timeout(timeout_holder) timeout = timeout_holder&.remaining_timeout_sec! || METADATA_TIMEOUT exception_class = if timeout_holder&.csot? Error::TimeoutError else nil end ::Timeout.timeout(timeout, exception_class) do yield end end end end end end mongo-ruby-driver-2.21.3/lib/mongo/auth/aws/request.rb000066400000000000000000000246351505113246500226370ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
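# A usage sketch for the CredentialsRetriever defined above. The user
# variable is assumed to be a Mongo::Auth::User configured for MONGODB-AWS;
# sources are consulted in the documented order (client options, then
# environment variables, then the web identity, ECS and EC2 endpoints,
# with endpoint results cached via CredentialsCache).
retriever = Mongo::Auth::Aws::CredentialsRetriever.new(user)

begin
  credentials = retriever.credentials
  credentials.access_key_id # a complete, validated credential set
rescue Mongo::Auth::Aws::CredentialsNotFound
  # No source yielded credentials; authentication cannot proceed.
end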
# See the License for the specific language governing permissions and # limitations under the License. module Net autoload :HTTP, 'net/http' end module Mongo module Auth class Aws # Helper class for working with AWS requests. # # The primary purpose of this class is to produce the canonical AWS # STS request and calculate the signed headers and signature for it. # # @api private class Request # The body of the STS GetCallerIdentity request. # # This is currently the only request that this class supports making. STS_REQUEST_BODY = "Action=GetCallerIdentity&Version=2011-06-15".freeze # The timeout, in seconds, to use for validating credentials via STS. VALIDATE_TIMEOUT = 10 # Constructs the request. # # @note By overriding the time, it is possible to create reproducible # requests (in other words, replay a request). # # @param [ String ] access_key_id The access key id. # @param [ String ] secret_access_key The secret access key. # @param [ String ] session_token The session token for temporary # credentials. # @param [ String ] host The value of Host HTTP header to use. # @param [ String ] server_nonce The server nonce binary string. # @param [ Time ] time The time of the request. def initialize(access_key_id:, secret_access_key:, session_token: nil, host:, server_nonce:, time: Time.now ) @access_key_id = access_key_id @secret_access_key = secret_access_key @session_token = session_token @host = host @server_nonce = server_nonce @time = time %i(access_key_id secret_access_key host server_nonce).each do |arg| value = instance_variable_get("@#{arg}") if value.nil? || value.empty? raise Error::InvalidServerAuthResponse, "Value for '#{arg}' is required" end end if host && host.length > 255 raise Error::InvalidServerAuthHost, "Value for 'host' is too long: #{@host}" end end # @return [ String ] access_key_id The access key id. attr_reader :access_key_id # @return [ String ] secret_access_key The secret access key. attr_reader :secret_access_key # @return [ String ] session_token The session token for temporary # credentials. attr_reader :session_token # @return [ String ] host The value of Host HTTP header to use. attr_reader :host # @return [ String ] server_nonce The server nonce binary string. attr_reader :server_nonce # @return [ Time ] time The time of the request. attr_reader :time # @return [ String ] formatted_time ISO8601-formatted time of the # request, as would be used in X-Amz-Date header. def formatted_time @formatted_time ||= @time.getutc.strftime('%Y%m%dT%H%M%SZ') end # @return [ String ] formatted_date YYYYMMDD formatted date of the request. def formatted_date formatted_time[0, 8] end # @return [ String ] region The region of the host, derived from the host. def region # Common case if host == 'sts.amazonaws.com' return 'us-east-1' end if host.start_with?('.') raise Error::InvalidServerAuthHost, "Host begins with a period: #{host}" end if host.end_with?('.') raise Error::InvalidServerAuthHost, "Host ends with a period: #{host}" end parts = host.split('.') if parts.any? { |part| part.empty? } raise Error::InvalidServerAuthHost, "Host has an empty component: #{host}" end if parts.length == 1 'us-east-1' else parts[1] end end # Returns the scope of the request, per the AWS signature V4 specification. # # @return [ String ] The scope. def scope "#{formatted_date}/#{region}/sts/aws4_request" end # Returns the hash containing the headers of the calculated canonical # request. 
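      #
      # @example Deriving the request's region, scope and signature timestamp
      #   (a sketch with illustrative credentials; server_nonce is any
      #   non-empty 32-byte string, and the fixed time makes the output
      #   reproducible, as noted in the constructor docs above).
      #   request = Mongo::Auth::Aws::Request.new(
      #     access_key_id: 'AKIDEXAMPLE',
      #     secret_access_key: 'wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY',
      #     host: 'sts.amazonaws.com',
      #     server_nonce: "\x00" * 32,
      #     time: Time.utc(2020, 1, 1)
      #   )
      #   request.formatted_time #=> "20200101T000000Z"
      #   request.region         #=> "us-east-1"
      #   request.scope          #=> "20200101/us-east-1/sts/aws4_request"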
# # @note Not all of these headers are part of the signed headers list, # the keys of the hash are not necessarily ordered lexicographically, # and the keys may be in any case. # # @return [ ] headers The headers. def headers headers = { 'content-length' => STS_REQUEST_BODY.length.to_s, 'content-type' => 'application/x-www-form-urlencoded', 'host' => host, 'x-amz-date' => formatted_time, 'x-mongodb-gs2-cb-flag' => 'n', 'x-mongodb-server-nonce' => Base64.encode64(server_nonce).gsub("\n", ''), } if session_token headers['x-amz-security-token'] = session_token end headers end # Returns the hash containing the headers of the calculated canonical # request that should be signed, in a ready to sign form. # # The differences between #headers and this method is this method: # # - Removes any headers that are not to be signed. Per AWS # specifications it should be possible to sign all headers, but # MongoDB server expects only some headers to be signed and will # not form the correct request if other headers are signed. # - Lowercases all header names. # - Orders the headers lexicographically in the hash. # # @return [ ] headers The headers. def headers_to_sign headers_to_sign = {} headers.keys.sort_by { |k| k.downcase }.each do |key| write_key = key.downcase headers_to_sign[write_key] = headers[key] end headers_to_sign end # Returns semicolon-separated list of names of signed headers, per # the AWS signature V4 specification. # # @return [ String ] The signed header list. def signed_headers_string headers_to_sign.keys.join(';') end # Returns the canonical request used during calculation of AWS V4 # signature. # # @return [ String ] The canonical request. def canonical_request headers = headers_to_sign serialized_headers = headers.map do |k, v| "#{k}:#{v}" end.join("\n") hashed_payload = Digest::SHA256.new.update(STS_REQUEST_BODY).hexdigest "POST\n/\n\n" + # There are two newlines after serialized headers because the # signature V4 specification treats each header as containing the # terminating newline, and there is an additional newline # separating headers from the signed header names. "#{serialized_headers}\n\n" + "#{signed_headers_string}\n" + hashed_payload end # Returns the calculated signature of the canonical request, per # the AWS signature V4 specification. # # @return [ String ] The signature. def signature hashed_canonical_request = Digest::SHA256.hexdigest(canonical_request) string_to_sign = "AWS4-HMAC-SHA256\n" + "#{formatted_time}\n" + "#{scope}\n" + hashed_canonical_request # All of the intermediate HMAC operations are not hex-encoded. mac = hmac("AWS4#{secret_access_key}", formatted_date) mac = hmac(mac, region) mac = hmac(mac, 'sts') signing_key = hmac(mac, 'aws4_request') # Only the final HMAC operation is hex-encoded. hmac_hex(signing_key, string_to_sign) end # Returns the value of the Authorization header, per the AWS # signature V4 specification. # # @return [ String ] Authorization header value. def authorization "AWS4-HMAC-SHA256 Credential=#{access_key_id}/#{scope}, SignedHeaders=#{signed_headers_string}, Signature=#{signature}" end # Validates the credentials and the constructed request components # by sending a real STS GetCallerIdentity request. # # @return [ Hash ] GetCallerIdentity result. def validate! 
sts_request = Net::HTTP::Post.new("https://#{host}").tap do |req| headers.each do |k, v| req[k] = v end req['authorization'] = authorization req['accept'] = 'application/json' req.body = STS_REQUEST_BODY end http = Net::HTTP.new(host, 443) http.use_ssl = true http.start do resp = Timeout.timeout(VALIDATE_TIMEOUT, Error::CredentialCheckError, 'GetCallerIdentity request timed out') do http.request(sts_request) end payload = JSON.parse(resp.body) if resp.code != '200' aws_code = payload.fetch('Error').fetch('Code') aws_message = payload.fetch('Error').fetch('Message') msg = "Credential check for user #{access_key_id} failed with HTTP status code #{resp.code}: #{aws_code}: #{aws_message}" msg += '.' unless msg.end_with?('.') msg += " Please check that the credentials are valid, and if they are temporary (i.e. use the session token) that the session token is provided and not expired" raise Error::CredentialCheckError, msg end payload.fetch('GetCallerIdentityResponse').fetch('GetCallerIdentityResult') end end private def hmac(key, data) OpenSSL::HMAC.digest("SHA256", key, data) end def hmac_hex(key, data) OpenSSL::HMAC.hexdigest("SHA256", key, data) end end end end end mongo-ruby-driver-2.21.3/lib/mongo/auth/base.rb000066400000000000000000000125201505113246500212550ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Auth # Base class for authenticators. # # Each authenticator is instantiated for authentication over a particular # connection. # # @api private class Base # @return [ Mongo::Auth::User ] The user to authenticate. attr_reader :user # @return [ Mongo::Connection ] The connection to authenticate over. attr_reader :connection # Initializes the authenticator. # # @param [ Auth::User ] user The user to authenticate. # @param [ Mongo::Connection ] connection The connection to authenticate # over. def initialize(user, connection, **opts) @user = user @connection = connection end def conversation @conversation ||= self.class.const_get(:Conversation).new(user, connection) end private # Performs a single-step conversation on the given connection. def converse_1_step(connection, conversation) msg = conversation.start(connection) dispatch_msg(connection, conversation, msg) end # Performs a two-step conversation on the given connection. # # The implementation is very similar to +converse_multi_step+, but # conversations using this method do not involve the server replying # with {done: true} to indicate the end of the conversation. def converse_2_step(connection, conversation) msg = conversation.start(connection) reply_document = dispatch_msg(connection, conversation, msg) msg = conversation.continue(reply_document, connection) dispatch_msg(connection, conversation, msg) end # Performs the variable-length SASL conversation on the given connection. # # @param [ Server::Connection ] connection The connection. # @param [ Auth::*::Conversation ] conversation The conversation. 
# @param [ BSON::Document | nil ] speculative_auth_result The # value of speculativeAuthenticate field of hello response of # the handshake on the specified connection. def converse_multi_step(connection, conversation, speculative_auth_result: nil ) # Although the SASL conversation in theory can have any number of # steps, all defined authentication methods have a predefined number # of steps, and therefore all of our authenticators have a fixed set # of methods that generate payloads with one method per step. # We support a maximum of 3 total exchanges (start, continue and # finalize) and in practice the first two exchanges always happen. if speculative_auth_result reply_document = speculative_auth_result else msg = conversation.start(connection) reply_document = dispatch_msg(connection, conversation, msg) end msg = conversation.continue(reply_document, connection) reply_document = dispatch_msg(connection, conversation, msg) conversation.process_continue_response(reply_document) unless reply_document[:done] msg = conversation.finalize(connection) reply_document = dispatch_msg(connection, conversation, msg) end unless reply_document[:done] raise Error::InvalidServerAuthResponse, 'Server did not respond with {done: true} after finalizing the conversation' end reply_document end def dispatch_msg(connection, conversation, msg) context = Operation::Context.new(options: { server_api: connection.options[:server_api], }) if server_api = context.server_api msg = msg.maybe_add_server_api(server_api) end reply = connection.dispatch([msg], context) reply_document = reply.documents.first validate_reply!(connection, conversation, reply_document) connection_global_id = if connection.respond_to?(:global_id) connection.global_id else nil end result = Operation::Result.new(reply, connection.description, connection_global_id, context: context) connection.update_cluster_time(result) reply_document end # Checks whether reply is successful (i.e. has {ok: 1} set) and # raises Unauthorized if not. def validate_reply!(connection, conversation, doc) if doc[:ok] != 1 message = Error::Parser.build_message( code: doc[:code], code_name: doc[:codeName], message: doc[:errmsg], ) raise Unauthorized.new(user, used_mechanism: self.class.const_get(:MECHANISM), message: message, server: connection.server, code: doc[:code] ) end end end end end mongo-ruby-driver-2.21.3/lib/mongo/auth/conversation_base.rb000066400000000000000000000054111505113246500240500ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Auth # Defines common behavior around authentication conversations between # the client and the server. # # @api private class ConversationBase # Create the new conversation. # # @param [ Auth::User ] user The user to authenticate. # @param [ Mongo::Connection ] connection The connection to authenticate # over. 
def initialize(user, connection, **opts) @user = user @connection = connection end # @return [ Auth::User ] user The user for the conversation. attr_reader :user # @return [ Mongo::Connection ] The connection to authenticate over. attr_reader :connection # Returns the hash to provide to the server in the handshake # as value of the speculativeAuthenticate key. # # If the auth mechanism does not support speculative authentication, # this method returns nil. # # @return [ Hash | nil ] Speculative authentication document. def speculative_auth_document nil end # @return [ Protocol::Message ] The message to send. def build_message(connection, auth_source, selector) if connection && connection.features.op_msg_enabled? selector = selector.dup selector[Protocol::Msg::DATABASE_IDENTIFIER] = auth_source cluster_time = connection.mongos? && connection.cluster_time if cluster_time selector[Operation::CLUSTER_TIME] = cluster_time end Protocol::Msg.new([], {}, selector) else Protocol::Query.new( auth_source, Database::COMMAND, selector, limit: -1, ) end end def validate_external_auth_source if user.auth_source != '$external' user_name_msg = if user.name " #{user.name}" else '' end mechanism = user.mechanism raise Auth::InvalidConfiguration, "User#{user_name_msg} specifies auth source '#{user.auth_source}', but the only valid auth source for #{mechanism} is '$external'" end end end end end mongo-ruby-driver-2.21.3/lib/mongo/auth/cr.rb000066400000000000000000000025271505113246500207550ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Auth # Defines behavior for MongoDB-CR authentication. # # @since 2.0.0 # @deprecated MONGODB-CR authentication mechanism is deprecated # as of MongoDB 3.6. Support for it in the Ruby driver will be # removed in driver version 3.0. Please use SCRAM instead. # @api private class CR < Base # The authentication mechanism string. # # @since 2.0.0 MECHANISM = 'MONGODB-CR'.freeze # Log the user in on the current connection. # # @return [ BSON::Document ] The document of the authentication response. def login converse_2_step(connection, conversation) end end end end require 'mongo/auth/cr/conversation' mongo-ruby-driver-2.21.3/lib/mongo/auth/cr/000077500000000000000000000000001505113246500204225ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/lib/mongo/auth/cr/conversation.rb000066400000000000000000000051561505113246500234700ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Auth class CR # Defines behavior around a single MONGODB-CR conversation between the # client and server. # # @since 2.0.0 # @deprecated MONGODB-CR authentication mechanism is deprecated # as of MongoDB 3.6. Support for it in the Ruby driver will be # removed in driver version 3.0. Please use SCRAM instead. # @api private class Conversation < ConversationBase # The login message base. # # @since 2.0.0 LOGIN = { authenticate: 1 }.freeze # @return [ String ] database The database to authenticate against. attr_reader :database # @return [ String ] nonce The initial auth nonce. attr_reader :nonce # Start the CR conversation. This returns the first message that # needs to be sent to the server. # # @param [ Server::Connection ] connection The connection being # authenticated. # # @return [ Protocol::Message ] The first CR conversation message. # # @since 2.0.0 def start(connection) selector = Auth::GET_NONCE build_message(connection, user.auth_source, selector) end # Continue the CR conversation. This sends the client final message # to the server after setting the reply from the previous server # communication. # # @param [ BSON::Document ] reply_document The reply document of the # previous message. # @param [ Mongo::Server::Connection ] connection The connection being # authenticated. # # @return [ Protocol::Message ] The next message to send. # # @since 2.0.0 def continue(reply_document, connection) @nonce = reply_document[Auth::NONCE] selector = LOGIN.merge(user: user.name, nonce: nonce, key: user.auth_key(nonce)) build_message(connection, user.auth_source, selector) end end end end end mongo-ruby-driver-2.21.3/lib/mongo/auth/credential_cache.rb000066400000000000000000000026161505113246500236050ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2019-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Auth # Cache store for computed SCRAM credentials. # # @api private module CredentialCache class << self attr_reader :store end MUTEX = Mutex.new module_function def get(key) MUTEX.synchronize do @store ||= {} @store[key] end end module_function def set(key, value) MUTEX.synchronize do @store ||= {} @store[key] = value end end module_function def cache(key) value = get(key) if value.nil? 
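          # Cache miss: compute the value with the caller-supplied block and
          # memoize it for subsequent lookups under the same key.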
value = yield set(key, value) end value end module_function def clear MUTEX.synchronize do @store = {} end end end end end mongo-ruby-driver-2.21.3/lib/mongo/auth/gssapi.rb000066400000000000000000000021771505113246500216400ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Auth # Defines behavior for Kerberos authentication. # # @api private class Gssapi < Base # The authentication mechanism string. # # @since 2.0.0 MECHANISM = 'GSSAPI'.freeze # Log the user in on the current connection. # # @return [ BSON::Document ] The document of the authentication response. def login converse_multi_step(connection, conversation) end end end end require 'mongo/auth/gssapi/conversation' mongo-ruby-driver-2.21.3/lib/mongo/auth/gssapi/000077500000000000000000000000001505113246500213045ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/lib/mongo/auth/gssapi/conversation.rb000066400000000000000000000065331505113246500243520ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2015-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Auth class Gssapi # Defines behaviour around a single Kerberos conversation between the # client and the server. # # @api private class Conversation < SaslConversationBase # The base client first message. START_MESSAGE = { saslStart: 1, autoAuthorize: 1 }.freeze # The base client continue message. CONTINUE_MESSAGE = { saslContinue: 1 }.freeze # Create the new conversation. # # @example Create the new conversation. # Conversation.new(user, 'test.example.com') # # @param [ Auth::User ] user The user to converse about. # @param [ Mongo::Connection ] connection The connection to # authenticate over. # # @since 2.0.0 def initialize(user, connection, **opts) super host = connection.address.host unless defined?(Mongo::GssapiNative) require 'mongo_kerberos' end @authenticator = Mongo::GssapiNative::Authenticator.new( user.name, host, user.auth_mech_properties[:service_name] || 'mongodb', user.auth_mech_properties[:canonicalize_host_name] || false, ) end # @return [ Authenticator ] authenticator The native SASL authenticator. attr_reader :authenticator # Get the id of the conversation. # # @return [ Integer ] The conversation id. 
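        # (The id is assigned from the server's conversationId field in
        # #continue.)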
attr_reader :id def client_first_document start_token = authenticator.initialize_challenge START_MESSAGE.merge(mechanism: Gssapi::MECHANISM, payload: start_token) end # Continue the conversation. # # @param [ BSON::Document ] reply_document The reply document of the # previous message. # # @return [ Protocol::Message ] The next query to execute. def continue(reply_document, connection) @id = reply_document['conversationId'] payload = reply_document['payload'] continue_token = authenticator.evaluate_challenge(payload) selector = CONTINUE_MESSAGE.merge(payload: continue_token, conversationId: id) build_message(connection, '$external', selector) end def process_continue_response(reply_document) payload = reply_document['payload'] @continue_token = authenticator.evaluate_challenge(payload) end # @return [ Protocol::Message ] The next query to execute. def finalize(connection) selector = CONTINUE_MESSAGE.merge(payload: @continue_token, conversationId: id) build_message(connection, '$external', selector) end end end end end mongo-ruby-driver-2.21.3/lib/mongo/auth/ldap.rb000066400000000000000000000022141505113246500212620ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Auth # Defines behavior for LDAP Proxy authentication. # # @since 2.0.0 # @api private class LDAP < Base # The authentication mechanism string. # # @since 2.0.0 MECHANISM = 'PLAIN'.freeze # Log the user in on the current connection. # # @return [ BSON::Document ] The document of the authentication response. def login converse_1_step(connection, conversation) end end end end require 'mongo/auth/ldap/conversation' mongo-ruby-driver-2.21.3/lib/mongo/auth/ldap/000077500000000000000000000000001505113246500207365ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/lib/mongo/auth/ldap/conversation.rb000066400000000000000000000032471505113246500240030ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Auth class LDAP # Defines behavior around a single PLAIN conversation between the # client and server. # # @since 2.0.0 # @api private class Conversation < ConversationBase # The login message. # # @since 2.0.0 LOGIN = { saslStart: 1, autoAuthorize: 1 }.freeze # Start the PLAIN conversation. This returns the first message that # needs to be sent to the server. 
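        # The payload is the SASL PLAIN message per RFC 4616:
        # "authzid NUL authcid NUL password", with an empty authorization
        # identity (see #payload below).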
# # @param [ Server::Connection ] connection The connection being # authenticated. # # @return [ Protocol::Query ] The first PLAIN conversation message. # # @since 2.0.0 def start(connection) validate_external_auth_source selector = LOGIN.merge(payload: payload, mechanism: LDAP::MECHANISM) build_message(connection, '$external', selector) end private def payload BSON::Binary.new("\x00#{user.name}\x00#{user.password}") end end end end end mongo-ruby-driver-2.21.3/lib/mongo/auth/roles.rb000066400000000000000000000064371505113246500215010ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Auth # Provides constants for the built in roles provided by MongoDB. # # @since 2.0.0 module Roles # @see https://www.mongodb.com/docs/manual/reference/built-in-roles/#backup # # @since 2.0.0 BACKUP = 'backup'.freeze # @see https://www.mongodb.com/docs/manual/reference/built-in-roles/#clusterAdmin # # @since 2.0.0 CLUSTER_ADMIN = 'clusterAdmin'.freeze # @see https://www.mongodb.com/docs/manual/reference/built-in-roles/#clusterManager # # @since 2.0.0 CLUSTER_MANAGER = 'clusterManager'.freeze # @see https://www.mongodb.com/docs/manual/reference/built-in-roles/#clusterMonitor # # @since 2.0.0 CLUSTER_MONITOR = 'clusterMonitor'.freeze # @see https://www.mongodb.com/docs/manual/reference/built-in-roles/#dbAdmin # # @since 2.0.0 DATABASE_ADMIN = 'dbAdmin'.freeze # @see https://www.mongodb.com/docs/manual/reference/built-in-roles/#dbAdminAnyDatabase # # @since 2.0.0 DATABASE_ADMIN_ANY_DATABASE = 'dbAdminAnyDatabase'.freeze # @see https://www.mongodb.com/docs/manual/reference/built-in-roles/#dbOwner # # @since 2.0.0 DATABASE_OWNER = 'dbOwner'.freeze # @see https://www.mongodb.com/docs/manual/reference/built-in-roles/#hostManager # # @since 2.0.0 HOST_MANAGER = 'hostManager'.freeze # @see https://www.mongodb.com/docs/manual/reference/built-in-roles/#read # # @since 2.0.0 READ = 'read'.freeze # @see https://www.mongodb.com/docs/manual/reference/built-in-roles/#readAnyDatabase # # @since 2.0.0 READ_ANY_DATABASE = 'readAnyDatabase'.freeze # @see https://www.mongodb.com/docs/manual/reference/built-in-roles/#readWriteAnyDatabase # # @since 2.0.0 READ_WRITE_ANY_DATABASE = 'readWriteAnyDatabase'.freeze # @see https://www.mongodb.com/docs/manual/reference/built-in-roles/#readWrite # # @since 2.0.0 READ_WRITE = 'readWrite'.freeze # @see https://www.mongodb.com/docs/manual/reference/built-in-roles/#restore # # @since 2.0.0 RESTORE = 'restore'.freeze # @see https://www.mongodb.com/docs/manual/reference/built-in-roles/#root # # @since 2.0.0 ROOT = 'root'.freeze # @see https://www.mongodb.com/docs/manual/reference/built-in-roles/#userAdmin # # @since 2.0.0 USER_ADMIN = 'userAdmin'.freeze # @see https://www.mongodb.com/docs/manual/reference/built-in-roles/#userAdminAnyDatabase # # @since 2.0.0 USER_ADMIN_ANY_DATABASE = 'userAdminAnyDatabase'.freeze end end end 
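
# A minimal illustrative sketch of how these role constants are used when
# creating users. It assumes a connected +client+ (hypothetical variable)
# authorized to create users on its current database:
#
#   client.database.users.create(
#     'app_user',
#     password: 'secret',
#     roles: [ Mongo::Auth::Roles::READ_WRITE ],
#   )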
mongo-ruby-driver-2.21.3/lib/mongo/auth/sasl_conversation_base.rb000066400000000000000000000065451505113246500251020ustar00rootroot00000000000000
# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2020 MongoDB Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

module Mongo
  module Auth

    # Defines common behavior around SASL conversations between
    # the client and the server.
    #
    # @api private
    class SaslConversationBase < ConversationBase

      # The base client first message.
      CLIENT_FIRST_MESSAGE = { saslStart: 1, autoAuthorize: 1 }.freeze

      # The base client continue message.
      CLIENT_CONTINUE_MESSAGE = { saslContinue: 1 }.freeze

      # Start the SASL conversation. This returns the first message that
      # needs to be sent to the server.
      #
      # @param [ Server::Connection ] connection The connection being authenticated.
      #
      # @return [ Protocol::Message ] The first SASL conversation message.
      def start(connection)
        selector = client_first_document
        build_message(connection, user.auth_source, selector)
      end

      private

      # Gets the auth mechanism name for the conversation class.
      #
      # Example return: SCRAM-SHA-1.
      #
      # @return [ String ] Auth mechanism name.
      def auth_mechanism_name
        # self.class.name is e.g. Mongo::Auth::Scram256::Conversation.
        # We need Mongo::Auth::Scram256::MECHANISM.
        # Pull out the Scram256 part, get that class off of Auth,
        # then get the value of MECHANISM constant in Scram256.
        # With ActiveSupport, this method would be:
        # self.class.module_parent.const_get(:MECHANISM)
        parts = self.class.name.split('::')
        parts.pop
        Auth.const_get(parts.last).const_get(:MECHANISM)
      end

      def client_first_message_options
        nil
      end

      def client_first_document
        payload = client_first_payload
        if Lint.enabled?
          unless payload.is_a?(String)
            raise Error::LintError, "Payload must be a string but is a #{payload.class}: #{payload}"
          end
        end
        doc = CLIENT_FIRST_MESSAGE.merge(
          mechanism: auth_mechanism_name,
          payload: BSON::Binary.new(payload),
        )
        if options = client_first_message_options
          # Short SCRAM conversation,
          # https://jira.mongodb.org/browse/DRIVERS-707
          doc[:options] = options
        end
        doc
      end

      # Helper method to validate that server nonce starts with the client
      # nonce.
      #
      # Note that this class does not define the client_nonce or server_nonce
      # attributes - derived classes must do so.
      def validate_server_nonce!
        if client_nonce.nil? || client_nonce.empty?
          raise ArgumentError, 'Cannot validate server nonce when client nonce is nil or empty'
        end

        unless server_nonce.start_with?(client_nonce)
          raise Error::InvalidNonce.new(client_nonce, server_nonce)
        end
      end
    end
  end
end
mongo-ruby-driver-2.21.3/lib/mongo/auth/scram.rb000066400000000000000000000052051505113246500214520ustar00rootroot00000000000000
# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2014-2020 MongoDB Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Auth # Defines behavior for SCRAM authentication. # # @api private class Scram < Base # The authentication mechanism string. MECHANISM = 'SCRAM-SHA-1'.freeze # Initializes the Scram authenticator. # # @param [ Auth::User ] user The user to authenticate. # @param [ Mongo::Connection ] connection The connection to authenticate over. # # @option opts [ String | nil ] speculative_auth_client_nonce The client # nonce used in speculative auth on the specified connection that # produced the specified speculative auth result. # @option opts [ BSON::Document | nil ] speculative_auth_result The # value of speculativeAuthenticate field of hello response of # the handshake on the specified connection. def initialize(user, connection, **opts) super @speculative_auth_client_nonce = opts[:speculative_auth_client_nonce] @speculative_auth_result = opts[:speculative_auth_result] end # @return [ String | nil ] The client nonce used in speculative auth on # the current connection. attr_reader :speculative_auth_client_nonce # @return [ BSON::Document | nil ] The value of speculativeAuthenticate # field of hello response of the handshake on the current connection. attr_reader :speculative_auth_result def conversation @conversation ||= self.class.const_get(:Conversation).new( user, connection, client_nonce: speculative_auth_client_nonce) end # Log the user in on the current connection. # # @return [ BSON::Document ] The document of the authentication response. def login converse_multi_step(connection, conversation, speculative_auth_result: speculative_auth_result, ).tap do unless conversation.server_verified? raise Error::MissingScramServerSignature end end end end end end require 'mongo/auth/scram/conversation' mongo-ruby-driver-2.21.3/lib/mongo/auth/scram/000077500000000000000000000000001505113246500211235ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/lib/mongo/auth/scram/conversation.rb000066400000000000000000000032551505113246500241670ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Auth class Scram # Defines behavior around a single SCRAM-SHA-1 conversation between # the client and server. # # @api private class Conversation < ScramConversationBase private # HI algorithm implementation. # # @api private # # @see http://tools.ietf.org/html/rfc5802#section-2.2 # # @since 2.0.0 def hi(data) OpenSSL::PKCS5.pbkdf2_hmac_sha1( data, salt, iterations, digest.size, ) end # Salted password algorithm implementation. 
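          # (Here the Hi() input is the MongoDB "hashed password" for
          # SCRAM-SHA-1, i.e. the MD5 hex digest of "user:mongo:password",
          # rather than the cleartext password.)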
# # @api private # # @see http://tools.ietf.org/html/rfc5802#section-3 # # @since 2.0.0 def salted_password @salted_password ||= CredentialCache.cache(cache_key(:salted_password)) do hi(user.hashed_password) end end def digest @digest ||= OpenSSL::Digest::SHA1.new.freeze end end end end end mongo-ruby-driver-2.21.3/lib/mongo/auth/scram256.rb000066400000000000000000000020001505113246500216750ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Auth # Defines behavior for SCRAM-SHA-256 authentication. # # The purpose of this class is to provide the namespace for the # Scram256::Conversation class. # # @api private class Scram256 < Scram # The authentication mechanism string. MECHANISM = 'SCRAM-SHA-256'.freeze end end end require 'mongo/auth/scram256/conversation' mongo-ruby-driver-2.21.3/lib/mongo/auth/scram256/000077500000000000000000000000001505113246500213605ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/lib/mongo/auth/scram256/conversation.rb000066400000000000000000000033041505113246500244170ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Auth class Scram256 # Defines behavior around a single SCRAM-SHA-256 conversation between # the client and server. # # @api private class Conversation < ScramConversationBase private # HI algorithm implementation. # # @api private # # @see http://tools.ietf.org/html/rfc5802#section-2.2 # # @since 2.0.0 def hi(data) OpenSSL::PKCS5.pbkdf2_hmac( data, salt, iterations, digest.size, digest, ) end # Salted password algorithm implementation. # # @api private # # @see http://tools.ietf.org/html/rfc5802#section-3 # # @since 2.0.0 def salted_password @salted_password ||= CredentialCache.cache(cache_key(:salted_password)) do hi(user.sasl_prepped_password) end end def digest @digest ||= OpenSSL::Digest::SHA256.new.freeze end end end end end mongo-ruby-driver-2.21.3/lib/mongo/auth/scram_conversation_base.rb000066400000000000000000000252411505113246500252400ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

module Mongo
  module Auth

    # Defines common behavior around SCRAM authentication conversations
    # between the client and the server.
    #
    # @api private
    class ScramConversationBase < SaslConversationBase

      # The minimum iteration count for SCRAM-SHA-1 and SCRAM-SHA-256.
      MIN_ITER_COUNT = 4096

      # Create the new conversation.
      #
      # @param [ Auth::User ] user The user to converse about.
      # @param [ Mongo::Connection ] connection The connection to
      #   authenticate over.
      # @param [ String | nil ] client_nonce The client nonce to use.
      #   If this conversation is created for a connection that performed
      #   speculative authentication, this client nonce must be equal to the
      #   client nonce used for speculative authentication; otherwise, the
      #   client nonce must not be specified.
      def initialize(user, connection, client_nonce: nil)
        super
        @client_nonce = client_nonce || SecureRandom.base64
      end

      # @return [ String ] client_nonce The client nonce.
      attr_reader :client_nonce

      # Get the id of the conversation.
      #
      # @example Get the id of the conversation.
      #   conversation.id
      #
      # @return [ Integer ] The conversation id.
      attr_reader :id

      # Whether the client verified the ServerSignature from the server.
      #
      # @see https://jira.mongodb.org/browse/SECURITY-621
      #
      # @return [ true | false ] Whether the server's signature was verified.
      def server_verified?
        !!@server_verified
      end

      # Continue the SCRAM conversation. This sends the client final message
      # to the server after setting the reply from the previous server
      # communication.
      #
      # @param [ BSON::Document ] reply_document The reply document of the
      #   previous message.
      # @param [ Server::Connection ] connection The connection being
      #   authenticated.
      #
      # @return [ Protocol::Message ] The next message to send.
      def continue(reply_document, connection)
        @id = reply_document['conversationId']
        payload_data = reply_document['payload'].data
        parsed_data = parse_payload(payload_data)
        @server_nonce = parsed_data.fetch('r')
        @salt = Base64.strict_decode64(parsed_data.fetch('s'))
        @iterations = parsed_data.fetch('i').to_i.tap do |i|
          if i < MIN_ITER_COUNT
            raise Error::InsufficientIterationCount.new(
              Error::InsufficientIterationCount.message(MIN_ITER_COUNT, i))
          end
        end
        @auth_message = "#{first_bare},#{payload_data},#{without_proof}"

        validate_server_nonce!

        selector = CLIENT_CONTINUE_MESSAGE.merge(
          payload: client_final_message,
          conversationId: id,
        )
        build_message(connection, user.auth_source, selector)
      end

      # Processes the second response from the server.
      #
      # @param [ BSON::Document ] reply_document The reply document of the
      #   continue response.
      def process_continue_response(reply_document)
        payload_data = parse_payload(reply_document['payload'].data)
        check_server_signature(payload_data)
      end

      # Finalize the SCRAM conversation. This is meant to be iterated until
      # the provided reply indicates the conversation is finished.
      #
      # @param [ Server::Connection ] connection The connection being
      #   authenticated.
      #
      # @return [ Protocol::Message ] The next message to send.
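      #
      # (When the short SCRAM conversation is negotiated via the
      # skipEmptyExchange option, the server replies with {done: true} to
      # the client-final message and this empty exchange never happens;
      # see #converse_multi_step in Auth::Base.)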
def finalize(connection) selector = CLIENT_CONTINUE_MESSAGE.merge( payload: client_empty_message, conversationId: id, ) build_message(connection, user.auth_source, selector) end # Returns the hash to provide to the server in the handshake # as value of the speculativeAuthenticate key. # # If the auth mechanism does not support speculative authentication, # this method returns nil. # # @return [ Hash | nil ] Speculative authentication document. def speculative_auth_document client_first_document.merge(db: user.auth_source) end private # Parses a payload like a=value,b=value2 into a hash like # {'a' => 'value', 'b' => 'value2'}. # # @param [ String ] payload The payload to parse. # # @return [ Hash ] Parsed key-value pairs. def parse_payload(payload) Hash[payload.split(',').reject { |v| v == '' }.map do |pair| k, v, = pair.split('=', 2) if k == '' raise Error::InvalidServerAuthResponse, 'Payload malformed: missing key' end [k, v] end] end def client_first_message_options {skipEmptyExchange: true} end # @see http://tools.ietf.org/html/rfc5802#section-3 def client_first_payload "n,,#{first_bare}" end # Auth message algorithm implementation. # # @api private # # @see http://tools.ietf.org/html/rfc5802#section-3 # # @since 2.0.0 attr_reader :auth_message # Get the empty client message. # # @api private # # @since 2.0.0 def client_empty_message BSON::Binary.new('') end # Get the final client message. # # @api private # # @see http://tools.ietf.org/html/rfc5802#section-3 # # @since 2.0.0 def client_final_message BSON::Binary.new("#{without_proof},p=#{client_final}") end # Client final implementation. # # @api private # # @see http://tools.ietf.org/html/rfc5802#section-7 # # @since 2.0.0 def client_final @client_final ||= client_proof(client_key, client_signature(stored_key(client_key), auth_message)) end # Looks for field 'v' in payload data, if it is present verifies the # server signature. If verification succeeds, sets @server_verified # to true. If verification fails, raises InvalidSignature. # # This method can be called from different conversation steps # depending on whether the short SCRAM conversation is used. def check_server_signature(payload_data) if verifier = payload_data['v'] if compare_digest(verifier, server_signature) @server_verified = true else raise Error::InvalidSignature.new(verifier, server_signature) end end end # Client key algorithm implementation. # # @api private # # @see http://tools.ietf.org/html/rfc5802#section-3 # # @since 2.0.0 def client_key @client_key ||= CredentialCache.cache(cache_key(:client_key)) do hmac(salted_password, 'Client Key') end end # Client proof algorithm implementation. # # @api private # # @see http://tools.ietf.org/html/rfc5802#section-3 # # @since 2.0.0 def client_proof(key, signature) @client_proof ||= Base64.strict_encode64(xor(key, signature)) end # Client signature algorithm implementation. # # @api private # # @see http://tools.ietf.org/html/rfc5802#section-3 # # @since 2.0.0 def client_signature(key, message) @client_signature ||= hmac(key, message) end # First bare implementation. # # @api private # # @see http://tools.ietf.org/html/rfc5802#section-7 # # @since 2.0.0 def first_bare @first_bare ||= "n=#{user.encoded_name},r=#{client_nonce}" end # H algorithm implementation. # # @api private # # @see http://tools.ietf.org/html/rfc5802#section-2.2 # # @since 2.0.0 def h(string) digest.digest(string) end # HMAC algorithm implementation. 
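      # (Note the argument order: +data+ is passed to OpenSSL as the HMAC
      # key and +key+ as the message, so calls such as
      # hmac(salted_password, 'Client Key') match the RFC 5802 notation
      # HMAC(key, str).)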
# # @api private # # @see http://tools.ietf.org/html/rfc5802#section-2.2 # # @since 2.0.0 def hmac(data, key) OpenSSL::HMAC.digest(digest, data, key) end # Get the iterations from the server response. # # @api private # # @since 2.0.0 attr_reader :iterations # Get the data from the returned payload. # # @api private # # @since 2.0.0 attr_reader :payload_data # Get the server nonce from the payload. # # @api private # # @since 2.0.0 attr_reader :server_nonce # Gets the salt from the server response. # # @api private # # @since 2.0.0 attr_reader :salt # @api private def cache_key(*extra) [user.password, salt, iterations, @mechanism] + extra end # Server key algorithm implementation. # # @api private # # @see http://tools.ietf.org/html/rfc5802#section-3 # # @since 2.0.0 def server_key @server_key ||= CredentialCache.cache(cache_key(:server_key)) do hmac(salted_password, 'Server Key') end end # Server signature algorithm implementation. # # @api private # # @see http://tools.ietf.org/html/rfc5802#section-3 # # @since 2.0.0 def server_signature @server_signature ||= Base64.strict_encode64(hmac(server_key, auth_message)) end # Stored key algorithm implementation. # # @api private # # @see http://tools.ietf.org/html/rfc5802#section-3 # # @since 2.0.0 def stored_key(key) h(key) end # Get the without proof message. # # @api private # # @see http://tools.ietf.org/html/rfc5802#section-7 # # @since 2.0.0 def without_proof @without_proof ||= "c=biws,r=#{server_nonce}" end # XOR operation for two strings. # # @api private # # @since 2.0.0 def xor(first, second) first.bytes.zip(second.bytes).map{ |(a,b)| (a ^ b).chr }.join('') end def compare_digest(a, b) check = a.bytesize ^ b.bytesize a.bytes.zip(b.bytes){ |x, y| check |= x ^ y.to_i } check == 0 end end end end mongo-ruby-driver-2.21.3/lib/mongo/auth/stringprep.rb000066400000000000000000000102751505113246500225450ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2018-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'mongo/auth/stringprep/tables' require 'mongo/auth/stringprep/profiles/sasl' module Mongo module Auth # This namespace contains all behavior related to string preparation # (RFC 3454). It's used to implement SCRAM-SHA-256 authentication, # which is available in MongoDB server versions 4.0 and later. # # @since 2.6.0 # @api private module StringPrep extend self # Prepare a string given a set of mappings and prohibited character tables. # # @example Prepare a string. # StringPrep.prepare("some string", # StringPrep::Profiles::SASL::MAPPINGS, # StringPrep::Profiles::SASL::PROHIBITED, # normalize: true, bidi: true) # # @param [ String ] data The string to prepare. # @param [ Array ] mappings A list of mappings to apply to the data. # @param [ Array ] prohibited A list of prohibited character lists to ensure the data doesn't # contain after mapping and normalizing the data. # @param [ Hash ] options Optional operations to perform during string preparation. 
# # @option options [ Boolean ] :normalize Whether or not to apply Unicode normalization to the # data. # @option options [ Boolean ] :bidi Whether or not to ensure that the data contains valid # bidirectional input. # # @raise [ Error::FailedStringPrepValidation ] If stringprep validations fails. # # @since 2.6.0 def prepare(data, mappings, prohibited, options = {}) apply_maps(data, mappings).tap do |mapped| normalize!(mapped) if options[:normalize] check_prohibited!(mapped, prohibited) check_bidi!(mapped) if options[:bidi] end end private def apply_maps(data, mappings) data.each_char.inject(+'') do |out, c| out << mapping(c.ord, mappings) end end def check_bidi!(out) if out.each_char.any? { |c| table_contains?(Tables::C8, c) } raise Mongo::Error::FailedStringPrepValidation.new(Error::FailedStringPrepValidation::INVALID_BIDIRECTIONAL) end if out.each_char.any? { |c| table_contains?(Tables::D1, c) } if out.each_char.any? { |c| table_contains?(Tables::D2, c) } raise Mongo::Error::FailedStringPrepValidation.new(Error::FailedStringPrepValidation::INVALID_BIDIRECTIONAL) end unless table_contains?(Tables::D1, out[0]) && table_contains?(Tables::D1, out[-1]) raise Mongo::Error::FailedStringPrepValidation.new(Error::FailedStringPrepValidation::INVALID_BIDIRECTIONAL) end end end def check_prohibited!(out, prohibited) out.each_char do |c| prohibited.each do |table| if table_contains?(table, c) raise Error::FailedStringPrepValidation.new(Error::FailedStringPrepValidation::PROHIBITED_CHARACTER) end end end end def mapping(c, mappings) m = mappings.find { |m| m.has_key?(c) } mapped = (m && m[c]) || [c] mapped.map { |i| i.chr(Encoding::UTF_8) }.join end def normalize!(out) if String.method_defined?(:unicode_normalize!) out.unicode_normalize!(:nfkc) else require 'mongo/auth/stringprep/unicode_normalize/normalize' out.replace(UnicodeNormalize.normalize(out, :nfkc)) end end def table_contains?(table, c) table.any? do |r| r.member?(c.ord) end end end end end mongo-ruby-driver-2.21.3/lib/mongo/auth/stringprep/000077500000000000000000000000001505113246500222135ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/lib/mongo/auth/stringprep/profiles/000077500000000000000000000000001505113246500240365ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/lib/mongo/auth/stringprep/profiles/sasl.rb000066400000000000000000000042341505113246500253300ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2018-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Auth module StringPrep module Profiles # Contains the mappings and prohibited lists for SASLPrep (RFC 4013). # # @note Only available for Ruby versions 2.2.0 and up. 
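        #
        # A small worked illustration (hypothetical input): the soft hyphen
        # U+00AD is mapped to nothing (table B1) and the no-break space
        # U+00A0 is mapped to an ASCII space, so
        #
        #   StringPrep.prepare("a\u00ADb\u00A0c",
        #     StringPrep::Profiles::SASL::MAPPINGS,
        #     StringPrep::Profiles::SASL::PROHIBITED,
        #     normalize: true, bidi: true)
        #   # => "ab c"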
# # @since 2.6.0 # @api private module SASL MAP_NON_ASCII_TO_SPACE = { 0x00A0 => [0x0020], 0x1680 => [0x0020], 0x2000 => [0x0020], 0x2001 => [0x0020], 0x2002 => [0x0020], 0x2003 => [0x0020], 0x2004 => [0x0020], 0x2005 => [0x0020], 0x2006 => [0x0020], 0x2007 => [0x0020], 0x2008 => [0x0020], 0x2009 => [0x0020], 0x200A => [0x0020], 0x200B => [0x0020], 0x202F => [0x0020], 0x205F => [0x0020], 0x3000 => [0x0020], }.freeze # The mappings to use for SASL string preparation. # # @since 2.6.0 MAPPINGS = [ Tables::B1, MAP_NON_ASCII_TO_SPACE, ].freeze # The prohibited character lists to use for SASL string preparation. # # @since 2.6.0 PROHIBITED = [ Tables::A1, Tables::C1_2, Tables::C2_1, Tables::C2_2, Tables::C3, Tables::C4, Tables::C5, Tables::C6, Tables::C7, Tables::C8, Tables::C9, ].freeze end end end end end mongo-ruby-driver-2.21.3/lib/mongo/auth/stringprep/tables.rb000066400000000000000000003716561505113246500240340ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2018-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Auth module StringPrep # Contains character tables defined by RFC 3454 (string preparation). # # @since 2.6.0 # @api private module Tables # Table A1 as defined by RFC 3454 (string preparation). 
# # @since 2.6.0 A1 = [ 0x0221..0x0221, 0x0234..0x024F, 0x02AE..0x02AF, 0x02EF..0x02FF, 0x0350..0x035F, 0x0370..0x0373, 0x0376..0x0379, 0x037B..0x037D, 0x037F..0x0383, 0x038B..0x038B, 0x038D..0x038D, 0x03A2..0x03A2, 0x03CF..0x03CF, 0x03F7..0x03FF, 0x0487..0x0487, 0x04CF..0x04CF, 0x04F6..0x04F7, 0x04FA..0x04FF, 0x0510..0x0530, 0x0557..0x0558, 0x0560..0x0560, 0x0588..0x0588, 0x058B..0x0590, 0x05A2..0x05A2, 0x05BA..0x05BA, 0x05C5..0x05CF, 0x05EB..0x05EF, 0x05F5..0x060B, 0x060D..0x061A, 0x061C..0x061E, 0x0620..0x0620, 0x063B..0x063F, 0x0656..0x065F, 0x06EE..0x06EF, 0x06FF..0x06FF, 0x070E..0x070E, 0x072D..0x072F, 0x074B..0x077F, 0x07B2..0x0900, 0x0904..0x0904, 0x093A..0x093B, 0x094E..0x094F, 0x0955..0x0957, 0x0971..0x0980, 0x0984..0x0984, 0x098D..0x098E, 0x0991..0x0992, 0x09A9..0x09A9, 0x09B1..0x09B1, 0x09B3..0x09B5, 0x09BA..0x09BB, 0x09BD..0x09BD, 0x09C5..0x09C6, 0x09C9..0x09CA, 0x09CE..0x09D6, 0x09D8..0x09DB, 0x09DE..0x09DE, 0x09E4..0x09E5, 0x09FB..0x0A01, 0x0A03..0x0A04, 0x0A0B..0x0A0E, 0x0A11..0x0A12, 0x0A29..0x0A29, 0x0A31..0x0A31, 0x0A34..0x0A34, 0x0A37..0x0A37, 0x0A3A..0x0A3B, 0x0A3D..0x0A3D, 0x0A43..0x0A46, 0x0A49..0x0A4A, 0x0A4E..0x0A58, 0x0A5D..0x0A5D, 0x0A5F..0x0A65, 0x0A75..0x0A80, 0x0A84..0x0A84, 0x0A8C..0x0A8C, 0x0A8E..0x0A8E, 0x0A92..0x0A92, 0x0AA9..0x0AA9, 0x0AB1..0x0AB1, 0x0AB4..0x0AB4, 0x0ABA..0x0ABB, 0x0AC6..0x0AC6, 0x0ACA..0x0ACA, 0x0ACE..0x0ACF, 0x0AD1..0x0ADF, 0x0AE1..0x0AE5, 0x0AF0..0x0B00, 0x0B04..0x0B04, 0x0B0D..0x0B0E, 0x0B11..0x0B12, 0x0B29..0x0B29, 0x0B31..0x0B31, 0x0B34..0x0B35, 0x0B3A..0x0B3B, 0x0B44..0x0B46, 0x0B49..0x0B4A, 0x0B4E..0x0B55, 0x0B58..0x0B5B, 0x0B5E..0x0B5E, 0x0B62..0x0B65, 0x0B71..0x0B81, 0x0B84..0x0B84, 0x0B8B..0x0B8D, 0x0B91..0x0B91, 0x0B96..0x0B98, 0x0B9B..0x0B9B, 0x0B9D..0x0B9D, 0x0BA0..0x0BA2, 0x0BA5..0x0BA7, 0x0BAB..0x0BAD, 0x0BB6..0x0BB6, 0x0BBA..0x0BBD, 0x0BC3..0x0BC5, 0x0BC9..0x0BC9, 0x0BCE..0x0BD6, 0x0BD8..0x0BE6, 0x0BF3..0x0C00, 0x0C04..0x0C04, 0x0C0D..0x0C0D, 0x0C11..0x0C11, 0x0C29..0x0C29, 0x0C34..0x0C34, 0x0C3A..0x0C3D, 0x0C45..0x0C45, 0x0C49..0x0C49, 0x0C4E..0x0C54, 0x0C57..0x0C5F, 0x0C62..0x0C65, 0x0C70..0x0C81, 0x0C84..0x0C84, 0x0C8D..0x0C8D, 0x0C91..0x0C91, 0x0CA9..0x0CA9, 0x0CB4..0x0CB4, 0x0CBA..0x0CBD, 0x0CC5..0x0CC5, 0x0CC9..0x0CC9, 0x0CCE..0x0CD4, 0x0CD7..0x0CDD, 0x0CDF..0x0CDF, 0x0CE2..0x0CE5, 0x0CF0..0x0D01, 0x0D04..0x0D04, 0x0D0D..0x0D0D, 0x0D11..0x0D11, 0x0D29..0x0D29, 0x0D3A..0x0D3D, 0x0D44..0x0D45, 0x0D49..0x0D49, 0x0D4E..0x0D56, 0x0D58..0x0D5F, 0x0D62..0x0D65, 0x0D70..0x0D81, 0x0D84..0x0D84, 0x0D97..0x0D99, 0x0DB2..0x0DB2, 0x0DBC..0x0DBC, 0x0DBE..0x0DBF, 0x0DC7..0x0DC9, 0x0DCB..0x0DCE, 0x0DD5..0x0DD5, 0x0DD7..0x0DD7, 0x0DE0..0x0DF1, 0x0DF5..0x0E00, 0x0E3B..0x0E3E, 0x0E5C..0x0E80, 0x0E83..0x0E83, 0x0E85..0x0E86, 0x0E89..0x0E89, 0x0E8B..0x0E8C, 0x0E8E..0x0E93, 0x0E98..0x0E98, 0x0EA0..0x0EA0, 0x0EA4..0x0EA4, 0x0EA6..0x0EA6, 0x0EA8..0x0EA9, 0x0EAC..0x0EAC, 0x0EBA..0x0EBA, 0x0EBE..0x0EBF, 0x0EC5..0x0EC5, 0x0EC7..0x0EC7, 0x0ECE..0x0ECF, 0x0EDA..0x0EDB, 0x0EDE..0x0EFF, 0x0F48..0x0F48, 0x0F6B..0x0F70, 0x0F8C..0x0F8F, 0x0F98..0x0F98, 0x0FBD..0x0FBD, 0x0FCD..0x0FCE, 0x0FD0..0x0FFF, 0x1022..0x1022, 0x1028..0x1028, 0x102B..0x102B, 0x1033..0x1035, 0x103A..0x103F, 0x105A..0x109F, 0x10C6..0x10CF, 0x10F9..0x10FA, 0x10FC..0x10FF, 0x115A..0x115E, 0x11A3..0x11A7, 0x11FA..0x11FF, 0x1207..0x1207, 0x1247..0x1247, 0x1249..0x1249, 0x124E..0x124F, 0x1257..0x1257, 0x1259..0x1259, 0x125E..0x125F, 0x1287..0x1287, 0x1289..0x1289, 0x128E..0x128F, 0x12AF..0x12AF, 0x12B1..0x12B1, 0x12B6..0x12B7, 0x12BF..0x12BF, 0x12C1..0x12C1, 0x12C6..0x12C7, 
0x12CF..0x12CF, 0x12D7..0x12D7, 0x12EF..0x12EF, 0x130F..0x130F, 0x1311..0x1311, 0x1316..0x1317, 0x131F..0x131F, 0x1347..0x1347, 0x135B..0x1360, 0x137D..0x139F, 0x13F5..0x1400, 0x1677..0x167F, 0x169D..0x169F, 0x16F1..0x16FF, 0x170D..0x170D, 0x1715..0x171F, 0x1737..0x173F, 0x1754..0x175F, 0x176D..0x176D, 0x1771..0x1771, 0x1774..0x177F, 0x17DD..0x17DF, 0x17EA..0x17FF, 0x180F..0x180F, 0x181A..0x181F, 0x1878..0x187F, 0x18AA..0x1DFF, 0x1E9C..0x1E9F, 0x1EFA..0x1EFF, 0x1F16..0x1F17, 0x1F1E..0x1F1F, 0x1F46..0x1F47, 0x1F4E..0x1F4F, 0x1F58..0x1F58, 0x1F5A..0x1F5A, 0x1F5C..0x1F5C, 0x1F5E..0x1F5E, 0x1F7E..0x1F7F, 0x1FB5..0x1FB5, 0x1FC5..0x1FC5, 0x1FD4..0x1FD5, 0x1FDC..0x1FDC, 0x1FF0..0x1FF1, 0x1FF5..0x1FF5, 0x1FFF..0x1FFF, 0x2053..0x2056, 0x2058..0x205E, 0x2064..0x2069, 0x2072..0x2073, 0x208F..0x209F, 0x20B2..0x20CF, 0x20EB..0x20FF, 0x213B..0x213C, 0x214C..0x2152, 0x2184..0x218F, 0x23CF..0x23FF, 0x2427..0x243F, 0x244B..0x245F, 0x24FF..0x24FF, 0x2614..0x2615, 0x2618..0x2618, 0x267E..0x267F, 0x268A..0x2700, 0x2705..0x2705, 0x270A..0x270B, 0x2728..0x2728, 0x274C..0x274C, 0x274E..0x274E, 0x2753..0x2755, 0x2757..0x2757, 0x275F..0x2760, 0x2795..0x2797, 0x27B0..0x27B0, 0x27BF..0x27CF, 0x27EC..0x27EF, 0x2B00..0x2E7F, 0x2E9A..0x2E9A, 0x2EF4..0x2EFF, 0x2FD6..0x2FEF, 0x2FFC..0x2FFF, 0x3040..0x3040, 0x3097..0x3098, 0x3100..0x3104, 0x312D..0x3130, 0x318F..0x318F, 0x31B8..0x31EF, 0x321D..0x321F, 0x3244..0x3250, 0x327C..0x327E, 0x32CC..0x32CF, 0x32FF..0x32FF, 0x3377..0x337A, 0x33DE..0x33DF, 0x33FF..0x33FF, 0x4DB6..0x4DFF, 0x9FA6..0x9FFF, 0xA48D..0xA48F, 0xA4C7..0xABFF, 0xD7A4..0xD7FF, 0xFA2E..0xFA2F, 0xFA6B..0xFAFF, 0xFB07..0xFB12, 0xFB18..0xFB1C, 0xFB37..0xFB37, 0xFB3D..0xFB3D, 0xFB3F..0xFB3F, 0xFB42..0xFB42, 0xFB45..0xFB45, 0xFBB2..0xFBD2, 0xFD40..0xFD4F, 0xFD90..0xFD91, 0xFDC8..0xFDCF, 0xFDFD..0xFDFF, 0xFE10..0xFE1F, 0xFE24..0xFE2F, 0xFE47..0xFE48, 0xFE53..0xFE53, 0xFE67..0xFE67, 0xFE6C..0xFE6F, 0xFE75..0xFE75, 0xFEFD..0xFEFE, 0xFF00..0xFF00, 0xFFBF..0xFFC1, 0xFFC8..0xFFC9, 0xFFD0..0xFFD1, 0xFFD8..0xFFD9, 0xFFDD..0xFFDF, 0xFFE7..0xFFE7, 0xFFEF..0xFFF8, 0x10000..0x102FF, 0x1031F..0x1031F, 0x10324..0x1032F, 0x1034B..0x103FF, 0x10426..0x10427, 0x1044E..0x1CFFF, 0x1D0F6..0x1D0FF, 0x1D127..0x1D129, 0x1D1DE..0x1D3FF, 0x1D455..0x1D455, 0x1D49D..0x1D49D, 0x1D4A0..0x1D4A1, 0x1D4A3..0x1D4A4, 0x1D4A7..0x1D4A8, 0x1D4AD..0x1D4AD, 0x1D4BA..0x1D4BA, 0x1D4BC..0x1D4BC, 0x1D4C1..0x1D4C1, 0x1D4C4..0x1D4C4, 0x1D506..0x1D506, 0x1D50B..0x1D50C, 0x1D515..0x1D515, 0x1D51D..0x1D51D, 0x1D53A..0x1D53A, 0x1D53F..0x1D53F, 0x1D545..0x1D545, 0x1D547..0x1D549, 0x1D551..0x1D551, 0x1D6A4..0x1D6A7, 0x1D7CA..0x1D7CD, 0x1D800..0x1FFFD, 0x2A6D7..0x2F7FF, 0x2FA1E..0x2FFFD, 0x30000..0x3FFFD, 0x40000..0x4FFFD, 0x50000..0x5FFFD, 0x60000..0x6FFFD, 0x70000..0x7FFFD, 0x80000..0x8FFFD, 0x90000..0x9FFFD, 0xA0000..0xAFFFD, 0xB0000..0xBFFFD, 0xC0000..0xCFFFD, 0xD0000..0xDFFFD, 0xE0000..0xE0000, 0xE0002..0xE001F, 0xE0080..0xEFFFD, ].freeze # Table B1 as defined by RFC 3454 (string preparation). 
# # @since 2.6.0 B1 = { 0x00AD => [], # Map to nothing 0x034F => [], # Map to nothing 0x180B => [], # Map to nothing 0x180C => [], # Map to nothing 0x180D => [], # Map to nothing 0x200B => [], # Map to nothing 0x200C => [], # Map to nothing 0x200D => [], # Map to nothing 0x2060 => [], # Map to nothing 0xFE00 => [], # Map to nothing 0xFE01 => [], # Map to nothing 0xFE02 => [], # Map to nothing 0xFE03 => [], # Map to nothing 0xFE04 => [], # Map to nothing 0xFE05 => [], # Map to nothing 0xFE06 => [], # Map to nothing 0xFE07 => [], # Map to nothing 0xFE08 => [], # Map to nothing 0xFE09 => [], # Map to nothing 0xFE0A => [], # Map to nothing 0xFE0B => [], # Map to nothing 0xFE0C => [], # Map to nothing 0xFE0D => [], # Map to nothing 0xFE0E => [], # Map to nothing 0xFE0F => [], # Map to nothing 0xFEFF => [], # Map to nothing }.freeze # Table B2 as defined by RFC 3454 (string preparation). # # @since 2.6.0 B2 = { 0x0041 => [0x0061], # Case map 0x0042 => [0x0062], # Case map 0x0043 => [0x0063], # Case map 0x0044 => [0x0064], # Case map 0x0045 => [0x0065], # Case map 0x0046 => [0x0066], # Case map 0x0047 => [0x0067], # Case map 0x0048 => [0x0068], # Case map 0x0049 => [0x0069], # Case map 0x004A => [0x006A], # Case map 0x004B => [0x006B], # Case map 0x004C => [0x006C], # Case map 0x004D => [0x006D], # Case map 0x004E => [0x006E], # Case map 0x004F => [0x006F], # Case map 0x0050 => [0x0070], # Case map 0x0051 => [0x0071], # Case map 0x0052 => [0x0072], # Case map 0x0053 => [0x0073], # Case map 0x0054 => [0x0074], # Case map 0x0055 => [0x0075], # Case map 0x0056 => [0x0076], # Case map 0x0057 => [0x0077], # Case map 0x0058 => [0x0078], # Case map 0x0059 => [0x0079], # Case map 0x005A => [0x007A], # Case map 0x00B5 => [0x03BC], # Case map 0x00C0 => [0x00E0], # Case map 0x00C1 => [0x00E1], # Case map 0x00C2 => [0x00E2], # Case map 0x00C3 => [0x00E3], # Case map 0x00C4 => [0x00E4], # Case map 0x00C5 => [0x00E5], # Case map 0x00C6 => [0x00E6], # Case map 0x00C7 => [0x00E7], # Case map 0x00C8 => [0x00E8], # Case map 0x00C9 => [0x00E9], # Case map 0x00CA => [0x00EA], # Case map 0x00CB => [0x00EB], # Case map 0x00CC => [0x00EC], # Case map 0x00CD => [0x00ED], # Case map 0x00CE => [0x00EE], # Case map 0x00CF => [0x00EF], # Case map 0x00D0 => [0x00F0], # Case map 0x00D1 => [0x00F1], # Case map 0x00D2 => [0x00F2], # Case map 0x00D3 => [0x00F3], # Case map 0x00D4 => [0x00F4], # Case map 0x00D5 => [0x00F5], # Case map 0x00D6 => [0x00F6], # Case map 0x00D8 => [0x00F8], # Case map 0x00D9 => [0x00F9], # Case map 0x00DA => [0x00FA], # Case map 0x00DB => [0x00FB], # Case map 0x00DC => [0x00FC], # Case map 0x00DD => [0x00FD], # Case map 0x00DE => [0x00FE], # Case map 0x00DF => [0x0073, 0x0073], # Case map 0x0100 => [0x0101], # Case map 0x0102 => [0x0103], # Case map 0x0104 => [0x0105], # Case map 0x0106 => [0x0107], # Case map 0x0108 => [0x0109], # Case map 0x010A => [0x010B], # Case map 0x010C => [0x010D], # Case map 0x010E => [0x010F], # Case map 0x0110 => [0x0111], # Case map 0x0112 => [0x0113], # Case map 0x0114 => [0x0115], # Case map 0x0116 => [0x0117], # Case map 0x0118 => [0x0119], # Case map 0x011A => [0x011B], # Case map 0x011C => [0x011D], # Case map 0x011E => [0x011F], # Case map 0x0120 => [0x0121], # Case map 0x0122 => [0x0123], # Case map 0x0124 => [0x0125], # Case map 0x0126 => [0x0127], # Case map 0x0128 => [0x0129], # Case map 0x012A => [0x012B], # Case map 0x012C => [0x012D], # Case map 0x012E => [0x012F], # Case map 0x0130 => [0x0069, 0x0307], # Case map 0x0132 => [0x0133], # Case map 0x0134 => 
[0x0135], # Case map 0x0136 => [0x0137], # Case map 0x0139 => [0x013A], # Case map 0x013B => [0x013C], # Case map 0x013D => [0x013E], # Case map 0x013F => [0x0140], # Case map 0x0141 => [0x0142], # Case map 0x0143 => [0x0144], # Case map 0x0145 => [0x0146], # Case map 0x0147 => [0x0148], # Case map 0x0149 => [0x02BC, 0x006E], # Case map 0x014A => [0x014B], # Case map 0x014C => [0x014D], # Case map 0x014E => [0x014F], # Case map 0x0150 => [0x0151], # Case map 0x0152 => [0x0153], # Case map 0x0154 => [0x0155], # Case map 0x0156 => [0x0157], # Case map 0x0158 => [0x0159], # Case map 0x015A => [0x015B], # Case map 0x015C => [0x015D], # Case map 0x015E => [0x015F], # Case map 0x0160 => [0x0161], # Case map 0x0162 => [0x0163], # Case map 0x0164 => [0x0165], # Case map 0x0166 => [0x0167], # Case map 0x0168 => [0x0169], # Case map 0x016A => [0x016B], # Case map 0x016C => [0x016D], # Case map 0x016E => [0x016F], # Case map 0x0170 => [0x0171], # Case map 0x0172 => [0x0173], # Case map 0x0174 => [0x0175], # Case map 0x0176 => [0x0177], # Case map 0x0178 => [0x00FF], # Case map 0x0179 => [0x017A], # Case map 0x017B => [0x017C], # Case map 0x017D => [0x017E], # Case map 0x017F => [0x0073], # Case map 0x0181 => [0x0253], # Case map 0x0182 => [0x0183], # Case map 0x0184 => [0x0185], # Case map 0x0186 => [0x0254], # Case map 0x0187 => [0x0188], # Case map 0x0189 => [0x0256], # Case map 0x018A => [0x0257], # Case map 0x018B => [0x018C], # Case map 0x018E => [0x01DD], # Case map 0x018F => [0x0259], # Case map 0x0190 => [0x025B], # Case map 0x0191 => [0x0192], # Case map 0x0193 => [0x0260], # Case map 0x0194 => [0x0263], # Case map 0x0196 => [0x0269], # Case map 0x0197 => [0x0268], # Case map 0x0198 => [0x0199], # Case map 0x019C => [0x026F], # Case map 0x019D => [0x0272], # Case map 0x019F => [0x0275], # Case map 0x01A0 => [0x01A1], # Case map 0x01A2 => [0x01A3], # Case map 0x01A4 => [0x01A5], # Case map 0x01A6 => [0x0280], # Case map 0x01A7 => [0x01A8], # Case map 0x01A9 => [0x0283], # Case map 0x01AC => [0x01AD], # Case map 0x01AE => [0x0288], # Case map 0x01AF => [0x01B0], # Case map 0x01B1 => [0x028A], # Case map 0x01B2 => [0x028B], # Case map 0x01B3 => [0x01B4], # Case map 0x01B5 => [0x01B6], # Case map 0x01B7 => [0x0292], # Case map 0x01B8 => [0x01B9], # Case map 0x01BC => [0x01BD], # Case map 0x01C4 => [0x01C6], # Case map 0x01C5 => [0x01C6], # Case map 0x01C7 => [0x01C9], # Case map 0x01C8 => [0x01C9], # Case map 0x01CA => [0x01CC], # Case map 0x01CB => [0x01CC], # Case map 0x01CD => [0x01CE], # Case map 0x01CF => [0x01D0], # Case map 0x01D1 => [0x01D2], # Case map 0x01D3 => [0x01D4], # Case map 0x01D5 => [0x01D6], # Case map 0x01D7 => [0x01D8], # Case map 0x01D9 => [0x01DA], # Case map 0x01DB => [0x01DC], # Case map 0x01DE => [0x01DF], # Case map 0x01E0 => [0x01E1], # Case map 0x01E2 => [0x01E3], # Case map 0x01E4 => [0x01E5], # Case map 0x01E6 => [0x01E7], # Case map 0x01E8 => [0x01E9], # Case map 0x01EA => [0x01EB], # Case map 0x01EC => [0x01ED], # Case map 0x01EE => [0x01EF], # Case map 0x01F0 => [0x006A, 0x030C], # Case map 0x01F1 => [0x01F3], # Case map 0x01F2 => [0x01F3], # Case map 0x01F4 => [0x01F5], # Case map 0x01F6 => [0x0195], # Case map 0x01F7 => [0x01BF], # Case map 0x01F8 => [0x01F9], # Case map 0x01FA => [0x01FB], # Case map 0x01FC => [0x01FD], # Case map 0x01FE => [0x01FF], # Case map 0x0200 => [0x0201], # Case map 0x0202 => [0x0203], # Case map 0x0204 => [0x0205], # Case map 0x0206 => [0x0207], # Case map 0x0208 => [0x0209], # Case map 0x020A => [0x020B], # Case map 0x020C => 
  0x020C => [0x020D], 0x020E => [0x020F], 0x0210 => [0x0211], 0x0212 => [0x0213],
  0x0214 => [0x0215], 0x0216 => [0x0217], 0x0218 => [0x0219], 0x021A => [0x021B],
  0x021C => [0x021D], 0x021E => [0x021F], 0x0220 => [0x019E], 0x0222 => [0x0223],
  0x0224 => [0x0225], 0x0226 => [0x0227], 0x0228 => [0x0229], 0x022A => [0x022B],
  0x022C => [0x022D], 0x022E => [0x022F], 0x0230 => [0x0231], 0x0232 => [0x0233],
  0x0345 => [0x03B9],
  0x037A => [0x0020, 0x03B9], # Additional folding
  # Case map
  0x0386 => [0x03AC], 0x0388 => [0x03AD], 0x0389 => [0x03AE], 0x038A => [0x03AF],
  0x038C => [0x03CC], 0x038E => [0x03CD], 0x038F => [0x03CE],
  0x0390 => [0x03B9, 0x0308, 0x0301],
  0x0391 => [0x03B1], 0x0392 => [0x03B2], 0x0393 => [0x03B3], 0x0394 => [0x03B4],
  0x0395 => [0x03B5], 0x0396 => [0x03B6], 0x0397 => [0x03B7], 0x0398 => [0x03B8],
  0x0399 => [0x03B9], 0x039A => [0x03BA], 0x039B => [0x03BB], 0x039C => [0x03BC],
  0x039D => [0x03BD], 0x039E => [0x03BE], 0x039F => [0x03BF], 0x03A0 => [0x03C0],
  0x03A1 => [0x03C1], 0x03A3 => [0x03C3], 0x03A4 => [0x03C4], 0x03A5 => [0x03C5],
  0x03A6 => [0x03C6], 0x03A7 => [0x03C7], 0x03A8 => [0x03C8], 0x03A9 => [0x03C9],
  0x03AA => [0x03CA], 0x03AB => [0x03CB], 0x03B0 => [0x03C5, 0x0308, 0x0301],
  0x03C2 => [0x03C3], 0x03D0 => [0x03B2], 0x03D1 => [0x03B8],
  # Additional folding
  0x03D2 => [0x03C5], 0x03D3 => [0x03CD], 0x03D4 => [0x03CB],
  # Case map
  0x03D5 => [0x03C6], 0x03D6 => [0x03C0], 0x03D8 => [0x03D9], 0x03DA => [0x03DB],
  0x03DC => [0x03DD], 0x03DE => [0x03DF], 0x03E0 => [0x03E1], 0x03E2 => [0x03E3],
  0x03E4 => [0x03E5], 0x03E6 => [0x03E7], 0x03E8 => [0x03E9], 0x03EA => [0x03EB],
  0x03EC => [0x03ED], 0x03EE => [0x03EF], 0x03F0 => [0x03BA], 0x03F1 => [0x03C1],
  0x03F2 => [0x03C3], 0x03F4 => [0x03B8], 0x03F5 => [0x03B5],
  0x0400 => [0x0450], 0x0401 => [0x0451], 0x0402 => [0x0452], 0x0403 => [0x0453],
  0x0404 => [0x0454], 0x0405 => [0x0455], 0x0406 => [0x0456], 0x0407 => [0x0457],
  0x0408 => [0x0458], 0x0409 => [0x0459], 0x040A => [0x045A], 0x040B => [0x045B],
  0x040C => [0x045C], 0x040D => [0x045D], 0x040E => [0x045E], 0x040F => [0x045F],
  0x0410 => [0x0430], 0x0411 => [0x0431], 0x0412 => [0x0432], 0x0413 => [0x0433],
  0x0414 => [0x0434], 0x0415 => [0x0435], 0x0416 => [0x0436], 0x0417 => [0x0437],
  0x0418 => [0x0438], 0x0419 => [0x0439], 0x041A => [0x043A], 0x041B => [0x043B],
  0x041C => [0x043C], 0x041D => [0x043D],
  0x041E => [0x043E], 0x041F => [0x043F], 0x0420 => [0x0440], 0x0421 => [0x0441],
  0x0422 => [0x0442], 0x0423 => [0x0443], 0x0424 => [0x0444], 0x0425 => [0x0445],
  0x0426 => [0x0446], 0x0427 => [0x0447], 0x0428 => [0x0448], 0x0429 => [0x0449],
  0x042A => [0x044A], 0x042B => [0x044B], 0x042C => [0x044C], 0x042D => [0x044D],
  0x042E => [0x044E], 0x042F => [0x044F],
  0x0460 => [0x0461], 0x0462 => [0x0463], 0x0464 => [0x0465], 0x0466 => [0x0467],
  0x0468 => [0x0469], 0x046A => [0x046B], 0x046C => [0x046D], 0x046E => [0x046F],
  0x0470 => [0x0471], 0x0472 => [0x0473], 0x0474 => [0x0475], 0x0476 => [0x0477],
  0x0478 => [0x0479], 0x047A => [0x047B], 0x047C => [0x047D], 0x047E => [0x047F],
  0x0480 => [0x0481], 0x048A => [0x048B], 0x048C => [0x048D], 0x048E => [0x048F],
  0x0490 => [0x0491], 0x0492 => [0x0493], 0x0494 => [0x0495], 0x0496 => [0x0497],
  0x0498 => [0x0499], 0x049A => [0x049B], 0x049C => [0x049D], 0x049E => [0x049F],
  0x04A0 => [0x04A1], 0x04A2 => [0x04A3], 0x04A4 => [0x04A5], 0x04A6 => [0x04A7],
  0x04A8 => [0x04A9], 0x04AA => [0x04AB], 0x04AC => [0x04AD], 0x04AE => [0x04AF],
  0x04B0 => [0x04B1], 0x04B2 => [0x04B3], 0x04B4 => [0x04B5], 0x04B6 => [0x04B7],
  0x04B8 => [0x04B9], 0x04BA => [0x04BB], 0x04BC => [0x04BD], 0x04BE => [0x04BF],
  0x04C1 => [0x04C2], 0x04C3 => [0x04C4], 0x04C5 => [0x04C6], 0x04C7 => [0x04C8],
  0x04C9 => [0x04CA], 0x04CB => [0x04CC], 0x04CD => [0x04CE],
  0x04D0 => [0x04D1], 0x04D2 => [0x04D3], 0x04D4 => [0x04D5], 0x04D6 => [0x04D7],
  0x04D8 => [0x04D9], 0x04DA => [0x04DB], 0x04DC => [0x04DD], 0x04DE => [0x04DF],
  0x04E0 => [0x04E1], 0x04E2 => [0x04E3], 0x04E4 => [0x04E5], 0x04E6 => [0x04E7],
  0x04E8 => [0x04E9], 0x04EA => [0x04EB], 0x04EC => [0x04ED], 0x04EE => [0x04EF],
  0x04F0 => [0x04F1], 0x04F2 => [0x04F3], 0x04F4 => [0x04F5], 0x04F8 => [0x04F9],
  0x0500 => [0x0501], 0x0502 => [0x0503], 0x0504 => [0x0505], 0x0506 => [0x0507],
  0x0508 => [0x0509], 0x050A => [0x050B], 0x050C => [0x050D], 0x050E => [0x050F],
  0x0531 => [0x0561], 0x0532 => [0x0562], 0x0533 => [0x0563], 0x0534 => [0x0564],
  0x0535 => [0x0565], 0x0536 => [0x0566], 0x0537 => [0x0567], 0x0538 => [0x0568],
  0x0539 => [0x0569], 0x053A => [0x056A], 0x053B => [0x056B], 0x053C => [0x056C],
  0x053D => [0x056D], 0x053E => [0x056E], 0x053F => [0x056F], 0x0540 => [0x0570],
  0x0541 => [0x0571], 0x0542 => [0x0572],
  0x0543 => [0x0573], 0x0544 => [0x0574], 0x0545 => [0x0575], 0x0546 => [0x0576],
  0x0547 => [0x0577], 0x0548 => [0x0578], 0x0549 => [0x0579], 0x054A => [0x057A],
  0x054B => [0x057B], 0x054C => [0x057C], 0x054D => [0x057D], 0x054E => [0x057E],
  0x054F => [0x057F], 0x0550 => [0x0580], 0x0551 => [0x0581], 0x0552 => [0x0582],
  0x0553 => [0x0583], 0x0554 => [0x0584], 0x0555 => [0x0585], 0x0556 => [0x0586],
  0x0587 => [0x0565, 0x0582],
  0x1E00 => [0x1E01], 0x1E02 => [0x1E03], 0x1E04 => [0x1E05], 0x1E06 => [0x1E07],
  0x1E08 => [0x1E09], 0x1E0A => [0x1E0B], 0x1E0C => [0x1E0D], 0x1E0E => [0x1E0F],
  0x1E10 => [0x1E11], 0x1E12 => [0x1E13], 0x1E14 => [0x1E15], 0x1E16 => [0x1E17],
  0x1E18 => [0x1E19], 0x1E1A => [0x1E1B], 0x1E1C => [0x1E1D], 0x1E1E => [0x1E1F],
  0x1E20 => [0x1E21], 0x1E22 => [0x1E23], 0x1E24 => [0x1E25], 0x1E26 => [0x1E27],
  0x1E28 => [0x1E29], 0x1E2A => [0x1E2B], 0x1E2C => [0x1E2D], 0x1E2E => [0x1E2F],
  0x1E30 => [0x1E31], 0x1E32 => [0x1E33], 0x1E34 => [0x1E35], 0x1E36 => [0x1E37],
  0x1E38 => [0x1E39], 0x1E3A => [0x1E3B], 0x1E3C => [0x1E3D], 0x1E3E => [0x1E3F],
  0x1E40 => [0x1E41], 0x1E42 => [0x1E43], 0x1E44 => [0x1E45], 0x1E46 => [0x1E47],
  0x1E48 => [0x1E49], 0x1E4A => [0x1E4B], 0x1E4C => [0x1E4D], 0x1E4E => [0x1E4F],
  0x1E50 => [0x1E51], 0x1E52 => [0x1E53], 0x1E54 => [0x1E55], 0x1E56 => [0x1E57],
  0x1E58 => [0x1E59], 0x1E5A => [0x1E5B], 0x1E5C => [0x1E5D], 0x1E5E => [0x1E5F],
  0x1E60 => [0x1E61], 0x1E62 => [0x1E63], 0x1E64 => [0x1E65], 0x1E66 => [0x1E67],
  0x1E68 => [0x1E69], 0x1E6A => [0x1E6B], 0x1E6C => [0x1E6D], 0x1E6E => [0x1E6F],
  0x1E70 => [0x1E71], 0x1E72 => [0x1E73], 0x1E74 => [0x1E75], 0x1E76 => [0x1E77],
  0x1E78 => [0x1E79], 0x1E7A => [0x1E7B], 0x1E7C => [0x1E7D], 0x1E7E => [0x1E7F],
  0x1E80 => [0x1E81], 0x1E82 => [0x1E83], 0x1E84 => [0x1E85], 0x1E86 => [0x1E87],
  0x1E88 => [0x1E89], 0x1E8A => [0x1E8B], 0x1E8C => [0x1E8D], 0x1E8E => [0x1E8F],
  0x1E90 => [0x1E91], 0x1E92 => [0x1E93], 0x1E94 => [0x1E95],
  0x1E96 => [0x0068, 0x0331], 0x1E97 => [0x0074, 0x0308], 0x1E98 => [0x0077, 0x030A],
  0x1E99 => [0x0079, 0x030A], 0x1E9A => [0x0061, 0x02BE], 0x1E9B => [0x1E61],
  0x1EA0 => [0x1EA1], 0x1EA2 => [0x1EA3], 0x1EA4 => [0x1EA5], 0x1EA6 => [0x1EA7],
  0x1EA8 => [0x1EA9], 0x1EAA => [0x1EAB], 0x1EAC => [0x1EAD], 0x1EAE => [0x1EAF],
  0x1EB0 => [0x1EB1], 0x1EB2 => [0x1EB3], 0x1EB4 => [0x1EB5],
  0x1EB6 => [0x1EB7], 0x1EB8 => [0x1EB9], 0x1EBA => [0x1EBB], 0x1EBC => [0x1EBD],
  0x1EBE => [0x1EBF], 0x1EC0 => [0x1EC1], 0x1EC2 => [0x1EC3], 0x1EC4 => [0x1EC5],
  0x1EC6 => [0x1EC7], 0x1EC8 => [0x1EC9], 0x1ECA => [0x1ECB], 0x1ECC => [0x1ECD],
  0x1ECE => [0x1ECF], 0x1ED0 => [0x1ED1], 0x1ED2 => [0x1ED3], 0x1ED4 => [0x1ED5],
  0x1ED6 => [0x1ED7], 0x1ED8 => [0x1ED9], 0x1EDA => [0x1EDB], 0x1EDC => [0x1EDD],
  0x1EDE => [0x1EDF], 0x1EE0 => [0x1EE1], 0x1EE2 => [0x1EE3], 0x1EE4 => [0x1EE5],
  0x1EE6 => [0x1EE7], 0x1EE8 => [0x1EE9], 0x1EEA => [0x1EEB], 0x1EEC => [0x1EED],
  0x1EEE => [0x1EEF], 0x1EF0 => [0x1EF1], 0x1EF2 => [0x1EF3], 0x1EF4 => [0x1EF5],
  0x1EF6 => [0x1EF7], 0x1EF8 => [0x1EF9],
  0x1F08 => [0x1F00], 0x1F09 => [0x1F01], 0x1F0A => [0x1F02], 0x1F0B => [0x1F03],
  0x1F0C => [0x1F04], 0x1F0D => [0x1F05], 0x1F0E => [0x1F06], 0x1F0F => [0x1F07],
  0x1F18 => [0x1F10], 0x1F19 => [0x1F11], 0x1F1A => [0x1F12], 0x1F1B => [0x1F13],
  0x1F1C => [0x1F14], 0x1F1D => [0x1F15],
  0x1F28 => [0x1F20], 0x1F29 => [0x1F21], 0x1F2A => [0x1F22], 0x1F2B => [0x1F23],
  0x1F2C => [0x1F24], 0x1F2D => [0x1F25], 0x1F2E => [0x1F26], 0x1F2F => [0x1F27],
  0x1F38 => [0x1F30], 0x1F39 => [0x1F31], 0x1F3A => [0x1F32], 0x1F3B => [0x1F33],
  0x1F3C => [0x1F34], 0x1F3D => [0x1F35], 0x1F3E => [0x1F36], 0x1F3F => [0x1F37],
  0x1F48 => [0x1F40], 0x1F49 => [0x1F41], 0x1F4A => [0x1F42], 0x1F4B => [0x1F43],
  0x1F4C => [0x1F44], 0x1F4D => [0x1F45],
  0x1F50 => [0x03C5, 0x0313], 0x1F52 => [0x03C5, 0x0313, 0x0300],
  0x1F54 => [0x03C5, 0x0313, 0x0301], 0x1F56 => [0x03C5, 0x0313, 0x0342],
  0x1F59 => [0x1F51], 0x1F5B => [0x1F53], 0x1F5D => [0x1F55], 0x1F5F => [0x1F57],
  0x1F68 => [0x1F60], 0x1F69 => [0x1F61], 0x1F6A => [0x1F62], 0x1F6B => [0x1F63],
  0x1F6C => [0x1F64], 0x1F6D => [0x1F65], 0x1F6E => [0x1F66], 0x1F6F => [0x1F67],
  0x1F80 => [0x1F00, 0x03B9], 0x1F81 => [0x1F01, 0x03B9], 0x1F82 => [0x1F02, 0x03B9],
  0x1F83 => [0x1F03, 0x03B9], 0x1F84 => [0x1F04, 0x03B9], 0x1F85 => [0x1F05, 0x03B9],
  0x1F86 => [0x1F06, 0x03B9], 0x1F87 => [0x1F07, 0x03B9], 0x1F88 => [0x1F00, 0x03B9],
  0x1F89 => [0x1F01, 0x03B9], 0x1F8A => [0x1F02, 0x03B9], 0x1F8B => [0x1F03, 0x03B9],
  0x1F8C => [0x1F04, 0x03B9], 0x1F8D => [0x1F05, 0x03B9], 0x1F8E => [0x1F06, 0x03B9],
  0x1F8F => [0x1F07, 0x03B9], 0x1F90 => [0x1F20, 0x03B9], 0x1F91 => [0x1F21, 0x03B9],
  0x1F92 => [0x1F22, 0x03B9], 0x1F93 => [0x1F23, 0x03B9], 0x1F94 => [0x1F24, 0x03B9],
  0x1F95 => [0x1F25, 0x03B9], 0x1F96 => [0x1F26, 0x03B9], 0x1F97 => [0x1F27, 0x03B9],
  0x1F98 => [0x1F20, 0x03B9], 0x1F99 => [0x1F21, 0x03B9], 0x1F9A => [0x1F22, 0x03B9],
  0x1F9B => [0x1F23, 0x03B9], 0x1F9C => [0x1F24, 0x03B9], 0x1F9D => [0x1F25, 0x03B9],
  0x1F9E => [0x1F26, 0x03B9], 0x1F9F => [0x1F27, 0x03B9], 0x1FA0 => [0x1F60, 0x03B9],
  0x1FA1 => [0x1F61, 0x03B9], 0x1FA2 => [0x1F62, 0x03B9], 0x1FA3 => [0x1F63, 0x03B9],
  0x1FA4 => [0x1F64, 0x03B9], 0x1FA5 => [0x1F65, 0x03B9], 0x1FA6 => [0x1F66, 0x03B9],
  0x1FA7 => [0x1F67, 0x03B9], 0x1FA8 => [0x1F60, 0x03B9], 0x1FA9 => [0x1F61, 0x03B9],
  0x1FAA => [0x1F62, 0x03B9], 0x1FAB => [0x1F63, 0x03B9], 0x1FAC => [0x1F64, 0x03B9],
  0x1FAD => [0x1F65, 0x03B9], 0x1FAE => [0x1F66, 0x03B9], 0x1FAF => [0x1F67, 0x03B9],
  0x1FB2 => [0x1F70, 0x03B9], 0x1FB3 => [0x03B1, 0x03B9], 0x1FB4 => [0x03AC, 0x03B9],
  0x1FB6 => [0x03B1, 0x0342], 0x1FB7 => [0x03B1, 0x0342, 0x03B9], 0x1FB8 => [0x1FB0],
  0x1FB9 => [0x1FB1], 0x1FBA => [0x1F70], 0x1FBB => [0x1F71], 0x1FBC => [0x03B1, 0x03B9],
  0x1FBE => [0x03B9], 0x1FC2 => [0x1F74, 0x03B9], 0x1FC3 => [0x03B7, 0x03B9],
  0x1FC4 => [0x03AE, 0x03B9], 0x1FC6 => [0x03B7, 0x0342], 0x1FC7 => [0x03B7, 0x0342, 0x03B9],
  0x1FC8 => [0x1F72], 0x1FC9 => [0x1F73], 0x1FCA => [0x1F74], 0x1FCB => [0x1F75],
  0x1FCC => [0x03B7, 0x03B9], 0x1FD2 => [0x03B9, 0x0308, 0x0300],
  0x1FD3 => [0x03B9, 0x0308, 0x0301], 0x1FD6 => [0x03B9, 0x0342],
  0x1FD7 => [0x03B9, 0x0308, 0x0342], 0x1FD8 => [0x1FD0], 0x1FD9 => [0x1FD1],
  0x1FDA => [0x1F76], 0x1FDB => [0x1F77], 0x1FE2 => [0x03C5, 0x0308, 0x0300],
  0x1FE3 => [0x03C5, 0x0308, 0x0301], 0x1FE4 => [0x03C1, 0x0313], 0x1FE6 => [0x03C5, 0x0342],
  0x1FE7 => [0x03C5, 0x0308, 0x0342], 0x1FE8 => [0x1FE0], 0x1FE9 => [0x1FE1],
  0x1FEA => [0x1F7A], 0x1FEB => [0x1F7B], 0x1FEC => [0x1FE5], 0x1FF2 => [0x1F7C, 0x03B9],
  0x1FF3 => [0x03C9, 0x03B9], 0x1FF4 => [0x03CE, 0x03B9], 0x1FF6 => [0x03C9, 0x0342],
  0x1FF7 => [0x03C9, 0x0342, 0x03B9], 0x1FF8 => [0x1F78], 0x1FF9 => [0x1F79],
  0x1FFA => [0x1F7C], 0x1FFB => [0x1F7D], 0x1FFC => [0x03C9, 0x03B9],
  # Additional folding
  0x20A8 => [0x0072, 0x0073], 0x2102 => [0x0063], 0x2103 => [0x00B0, 0x0063],
  0x2107 => [0x025B], 0x2109 => [0x00B0, 0x0066], 0x210B => [0x0068], 0x210C => [0x0068],
  0x210D => [0x0068], 0x2110 => [0x0069], 0x2111 => [0x0069], 0x2112 => [0x006C],
  0x2115 => [0x006E], 0x2116 => [0x006E, 0x006F], 0x2119 => [0x0070], 0x211A => [0x0071],
  0x211B => [0x0072],
  0x211C => [0x0072], 0x211D => [0x0072], 0x2120 => [0x0073, 0x006D],
  0x2121 => [0x0074, 0x0065, 0x006C], 0x2122 => [0x0074, 0x006D], 0x2124 => [0x007A],
  0x2126 => [0x03C9], # Case map
  0x2128 => [0x007A], # Additional folding
  0x212A => [0x006B], # Case map
  0x212B => [0x00E5], # Case map
  # Additional folding
  0x212C => [0x0062], 0x212D => [0x0063], 0x2130 => [0x0065], 0x2131 => [0x0066],
  0x2133 => [0x006D], 0x213E => [0x03B3], 0x213F => [0x03C0], 0x2145 => [0x0064],
  # Case map
  0x2160 => [0x2170], 0x2161 => [0x2171], 0x2162 => [0x2172], 0x2163 => [0x2173],
  0x2164 => [0x2174], 0x2165 => [0x2175], 0x2166 => [0x2176], 0x2167 => [0x2177],
  0x2168 => [0x2178], 0x2169 => [0x2179], 0x216A => [0x217A], 0x216B => [0x217B],
  0x216C => [0x217C], 0x216D => [0x217D], 0x216E => [0x217E], 0x216F => [0x217F],
  0x24B6 => [0x24D0], 0x24B7 => [0x24D1], 0x24B8 => [0x24D2], 0x24B9 => [0x24D3],
  0x24BA => [0x24D4], 0x24BB => [0x24D5], 0x24BC => [0x24D6], 0x24BD => [0x24D7],
  0x24BE => [0x24D8], 0x24BF => [0x24D9], 0x24C0 => [0x24DA], 0x24C1 => [0x24DB],
  0x24C2 => [0x24DC], 0x24C3 => [0x24DD], 0x24C4 => [0x24DE], 0x24C5 => [0x24DF],
  0x24C6 => [0x24E0], 0x24C7 => [0x24E1], 0x24C8 => [0x24E2], 0x24C9 => [0x24E3],
  0x24CA => [0x24E4], 0x24CB => [0x24E5], 0x24CC => [0x24E6], 0x24CD => [0x24E7],
  0x24CE => [0x24E8], 0x24CF => [0x24E9],
  # Additional folding
  0x3371 => [0x0068, 0x0070, 0x0061], 0x3373 => [0x0061, 0x0075], 0x3375 => [0x006F, 0x0076],
  0x3380 => [0x0070, 0x0061], 0x3381 => [0x006E, 0x0061], 0x3382 => [0x03BC, 0x0061],
  0x3383 => [0x006D, 0x0061], 0x3384 => [0x006B, 0x0061], 0x3385 => [0x006B, 0x0062],
  0x3386 => [0x006D, 0x0062], 0x3387 => [0x0067, 0x0062], 0x338A => [0x0070, 0x0066],
  0x338B => [0x006E, 0x0066], 0x338C => [0x03BC, 0x0066], 0x3390 => [0x0068, 0x007A],
  0x3391 => [0x006B, 0x0068, 0x007A], 0x3392 => [0x006D, 0x0068, 0x007A],
  0x3393 => [0x0067, 0x0068, 0x007A], 0x3394 => [0x0074, 0x0068, 0x007A],
  0x33A9 => [0x0070, 0x0061], 0x33AA => [0x006B, 0x0070, 0x0061],
  0x33AB => [0x006D, 0x0070, 0x0061], 0x33AC => [0x0067, 0x0070, 0x0061],
  0x33B4 => [0x0070, 0x0076], 0x33B5 => [0x006E, 0x0076], 0x33B6 => [0x03BC, 0x0076],
  0x33B7 => [0x006D, 0x0076], 0x33B8 => [0x006B, 0x0076], 0x33B9 => [0x006D, 0x0076],
  0x33BA => [0x0070, 0x0077],
  0x33BB => [0x006E, 0x0077], 0x33BC => [0x03BC, 0x0077], 0x33BD => [0x006D, 0x0077],
  0x33BE => [0x006B, 0x0077], 0x33BF => [0x006D, 0x0077], 0x33C0 => [0x006B, 0x03C9],
  0x33C1 => [0x006D, 0x03C9], 0x33C3 => [0x0062, 0x0071],
  0x33C6 => [0x0063, 0x2215, 0x006B, 0x0067], 0x33C7 => [0x0063, 0x006F, 0x002E],
  0x33C8 => [0x0064, 0x0062], 0x33C9 => [0x0067, 0x0079], 0x33CB => [0x0068, 0x0070],
  0x33CD => [0x006B, 0x006B], 0x33CE => [0x006B, 0x006D], 0x33D7 => [0x0070, 0x0068],
  0x33D9 => [0x0070, 0x0070, 0x006D], 0x33DA => [0x0070, 0x0072], 0x33DC => [0x0073, 0x0076],
  0x33DD => [0x0077, 0x0062],
  # Case map
  0xFB00 => [0x0066, 0x0066], 0xFB01 => [0x0066, 0x0069], 0xFB02 => [0x0066, 0x006C],
  0xFB03 => [0x0066, 0x0066, 0x0069], 0xFB04 => [0x0066, 0x0066, 0x006C],
  0xFB05 => [0x0073, 0x0074], 0xFB06 => [0x0073, 0x0074], 0xFB13 => [0x0574, 0x0576],
  0xFB14 => [0x0574, 0x0565], 0xFB15 => [0x0574, 0x056B], 0xFB16 => [0x057E, 0x0576],
  0xFB17 => [0x0574, 0x056D],
  0xFF21 => [0xFF41], 0xFF22 => [0xFF42], 0xFF23 => [0xFF43], 0xFF24 => [0xFF44],
  0xFF25 => [0xFF45], 0xFF26 => [0xFF46], 0xFF27 => [0xFF47], 0xFF28 => [0xFF48],
  0xFF29 => [0xFF49], 0xFF2A => [0xFF4A], 0xFF2B => [0xFF4B], 0xFF2C => [0xFF4C],
  0xFF2D => [0xFF4D], 0xFF2E => [0xFF4E], 0xFF2F => [0xFF4F], 0xFF30 => [0xFF50],
  0xFF31 => [0xFF51], 0xFF32 => [0xFF52], 0xFF33 => [0xFF53], 0xFF34 => [0xFF54],
  0xFF35 => [0xFF55], 0xFF36 => [0xFF56], 0xFF37 => [0xFF57], 0xFF38 => [0xFF58],
  0xFF39 => [0xFF59], 0xFF3A => [0xFF5A],
  0x10400 => [0x10428], 0x10401 => [0x10429], 0x10402 => [0x1042A], 0x10403 => [0x1042B],
  0x10404 => [0x1042C], 0x10405 => [0x1042D], 0x10406 => [0x1042E], 0x10407 => [0x1042F],
  0x10408 => [0x10430], 0x10409 => [0x10431], 0x1040A => [0x10432], 0x1040B => [0x10433],
  0x1040C => [0x10434], 0x1040D => [0x10435], 0x1040E => [0x10436], 0x1040F => [0x10437],
  0x10410 => [0x10438], 0x10411 => [0x10439], 0x10412 => [0x1043A], 0x10413 => [0x1043B],
  0x10414 => [0x1043C], 0x10415 => [0x1043D], 0x10416 => [0x1043E], 0x10417 => [0x1043F],
  0x10418 => [0x10440], 0x10419 => [0x10441], 0x1041A => [0x10442], 0x1041B => [0x10443],
  0x1041C => [0x10444], 0x1041D => [0x10445], 0x1041E => [0x10446], 0x1041F => [0x10447],
  0x10420 => [0x10448], 0x10421 => [0x10449], 0x10422 => [0x1044A], 0x10423 => [0x1044B],
  0x10424 => [0x1044C],
  0x10425 => [0x1044D],
  # Additional folding
  0x1D400 => [0x0061], 0x1D401 => [0x0062], 0x1D402 => [0x0063], 0x1D403 => [0x0064],
  0x1D404 => [0x0065], 0x1D405 => [0x0066], 0x1D406 => [0x0067], 0x1D407 => [0x0068],
  0x1D408 => [0x0069], 0x1D409 => [0x006A], 0x1D40A => [0x006B], 0x1D40B => [0x006C],
  0x1D40C => [0x006D], 0x1D40D => [0x006E], 0x1D40E => [0x006F], 0x1D40F => [0x0070],
  0x1D410 => [0x0071], 0x1D411 => [0x0072], 0x1D412 => [0x0073], 0x1D413 => [0x0074],
  0x1D414 => [0x0075], 0x1D415 => [0x0076], 0x1D416 => [0x0077], 0x1D417 => [0x0078],
  0x1D418 => [0x0079], 0x1D419 => [0x007A],
  0x1D434 => [0x0061], 0x1D435 => [0x0062], 0x1D436 => [0x0063], 0x1D437 => [0x0064],
  0x1D438 => [0x0065], 0x1D439 => [0x0066], 0x1D43A => [0x0067], 0x1D43B => [0x0068],
  0x1D43C => [0x0069], 0x1D43D => [0x006A], 0x1D43E => [0x006B], 0x1D43F => [0x006C],
  0x1D440 => [0x006D], 0x1D441 => [0x006E], 0x1D442 => [0x006F], 0x1D443 => [0x0070],
  0x1D444 => [0x0071], 0x1D445 => [0x0072], 0x1D446 => [0x0073], 0x1D447 => [0x0074],
  0x1D448 => [0x0075], 0x1D449 => [0x0076], 0x1D44A => [0x0077], 0x1D44B => [0x0078],
  0x1D44C => [0x0079], 0x1D44D => [0x007A],
  0x1D468 => [0x0061], 0x1D469 => [0x0062], 0x1D46A => [0x0063], 0x1D46B => [0x0064],
  0x1D46C => [0x0065], 0x1D46D => [0x0066], 0x1D46E => [0x0067], 0x1D46F => [0x0068],
  0x1D470 => [0x0069], 0x1D471 => [0x006A], 0x1D472 => [0x006B], 0x1D473 => [0x006C],
  0x1D474 => [0x006D], 0x1D475 => [0x006E], 0x1D476 => [0x006F], 0x1D477 => [0x0070],
  0x1D478 => [0x0071], 0x1D479 => [0x0072], 0x1D47A => [0x0073], 0x1D47B => [0x0074],
  0x1D47C => [0x0075], 0x1D47D => [0x0076], 0x1D47E => [0x0077], 0x1D47F => [0x0078],
  0x1D480 => [0x0079], 0x1D481 => [0x007A],
  0x1D49C => [0x0061], 0x1D49E => [0x0063], 0x1D49F => [0x0064], 0x1D4A2 => [0x0067],
  0x1D4A5 => [0x006A], 0x1D4A6 => [0x006B],
  0x1D4A9 => [0x006E], 0x1D4AA => [0x006F], 0x1D4AB => [0x0070], 0x1D4AC => [0x0071],
  0x1D4AE => [0x0073], 0x1D4AF => [0x0074], 0x1D4B0 => [0x0075], 0x1D4B1 => [0x0076],
  0x1D4B2 => [0x0077], 0x1D4B3 => [0x0078], 0x1D4B4 => [0x0079], 0x1D4B5 => [0x007A],
  0x1D4D0 => [0x0061], 0x1D4D1 => [0x0062], 0x1D4D2 => [0x0063], 0x1D4D3 => [0x0064],
  0x1D4D4 => [0x0065], 0x1D4D5 => [0x0066], 0x1D4D6 => [0x0067], 0x1D4D7 => [0x0068],
  0x1D4D8 => [0x0069], 0x1D4D9 => [0x006A], 0x1D4DA => [0x006B], 0x1D4DB => [0x006C],
  0x1D4DC => [0x006D], 0x1D4DD => [0x006E], 0x1D4DE => [0x006F], 0x1D4DF => [0x0070],
  0x1D4E0 => [0x0071], 0x1D4E1 => [0x0072], 0x1D4E2 => [0x0073], 0x1D4E3 => [0x0074],
  0x1D4E4 => [0x0075], 0x1D4E5 => [0x0076], 0x1D4E6 => [0x0077], 0x1D4E7 => [0x0078],
  0x1D4E8 => [0x0079], 0x1D4E9 => [0x007A],
  0x1D504 => [0x0061], 0x1D505 => [0x0062], 0x1D507 => [0x0064], 0x1D508 => [0x0065],
  0x1D509 => [0x0066], 0x1D50A => [0x0067], 0x1D50D => [0x006A], 0x1D50E => [0x006B],
  0x1D50F => [0x006C], 0x1D510 => [0x006D], 0x1D511 => [0x006E], 0x1D512 => [0x006F],
  0x1D513 => [0x0070], 0x1D514 => [0x0071], 0x1D516 => [0x0073], 0x1D517 => [0x0074],
  0x1D518 => [0x0075], 0x1D519 => [0x0076], 0x1D51A => [0x0077], 0x1D51B => [0x0078],
  0x1D51C => [0x0079],
  0x1D538 => [0x0061], 0x1D539 => [0x0062], 0x1D53B => [0x0064], 0x1D53C => [0x0065],
  0x1D53D => [0x0066], 0x1D53E => [0x0067], 0x1D540 => [0x0069], 0x1D541 => [0x006A],
  0x1D542 => [0x006B], 0x1D543 => [0x006C], 0x1D544 => [0x006D], 0x1D546 => [0x006F],
  0x1D54A => [0x0073], 0x1D54B => [0x0074], 0x1D54C => [0x0075], 0x1D54D => [0x0076],
  0x1D54E => [0x0077], 0x1D54F => [0x0078], 0x1D550 => [0x0079],
  0x1D56C => [0x0061], 0x1D56D => [0x0062], 0x1D56E => [0x0063], 0x1D56F => [0x0064],
  0x1D570 => [0x0065], 0x1D571 => [0x0066], 0x1D572 => [0x0067],
  0x1D573 => [0x0068], 0x1D574 => [0x0069], 0x1D575 => [0x006A], 0x1D576 => [0x006B],
  0x1D577 => [0x006C], 0x1D578 => [0x006D], 0x1D579 => [0x006E], 0x1D57A => [0x006F],
  0x1D57B => [0x0070], 0x1D57C => [0x0071], 0x1D57D => [0x0072], 0x1D57E => [0x0073],
  0x1D57F => [0x0074], 0x1D580 => [0x0075], 0x1D581 => [0x0076], 0x1D582 => [0x0077],
  0x1D583 => [0x0078], 0x1D584 => [0x0079], 0x1D585 => [0x007A],
  0x1D5A0 => [0x0061], 0x1D5A1 => [0x0062], 0x1D5A2 => [0x0063], 0x1D5A3 => [0x0064],
  0x1D5A4 => [0x0065], 0x1D5A5 => [0x0066], 0x1D5A6 => [0x0067], 0x1D5A7 => [0x0068],
  0x1D5A8 => [0x0069], 0x1D5A9 => [0x006A], 0x1D5AA => [0x006B], 0x1D5AB => [0x006C],
  0x1D5AC => [0x006D], 0x1D5AD => [0x006E], 0x1D5AE => [0x006F], 0x1D5AF => [0x0070],
  0x1D5B0 => [0x0071], 0x1D5B1 => [0x0072], 0x1D5B2 => [0x0073], 0x1D5B3 => [0x0074],
  0x1D5B4 => [0x0075], 0x1D5B5 => [0x0076], 0x1D5B6 => [0x0077], 0x1D5B7 => [0x0078],
  0x1D5B8 => [0x0079], 0x1D5B9 => [0x007A],
  0x1D5D4 => [0x0061], 0x1D5D5 => [0x0062], 0x1D5D6 => [0x0063], 0x1D5D7 => [0x0064],
  0x1D5D8 => [0x0065], 0x1D5D9 => [0x0066], 0x1D5DA => [0x0067], 0x1D5DB => [0x0068],
  0x1D5DC => [0x0069], 0x1D5DD => [0x006A], 0x1D5DE => [0x006B], 0x1D5DF => [0x006C],
  0x1D5E0 => [0x006D], 0x1D5E1 => [0x006E], 0x1D5E2 => [0x006F], 0x1D5E3 => [0x0070],
  0x1D5E4 => [0x0071], 0x1D5E5 => [0x0072], 0x1D5E6 => [0x0073], 0x1D5E7 => [0x0074],
  0x1D5E8 => [0x0075], 0x1D5E9 => [0x0076], 0x1D5EA => [0x0077], 0x1D5EB => [0x0078],
  0x1D5EC => [0x0079], 0x1D5ED => [0x007A],
  0x1D608 => [0x0061], 0x1D609 => [0x0062], 0x1D60A => [0x0063], 0x1D60B => [0x0064],
  0x1D60C => [0x0065], 0x1D60D => [0x0066], 0x1D60E => [0x0067], 0x1D60F => [0x0068],
  0x1D610 => [0x0069], 0x1D611 => [0x006A], 0x1D612 => [0x006B], 0x1D613 => [0x006C],
  0x1D614 => [0x006D],
  0x1D615 => [0x006E], 0x1D616 => [0x006F], 0x1D617 => [0x0070], 0x1D618 => [0x0071],
  0x1D619 => [0x0072], 0x1D61A => [0x0073], 0x1D61B => [0x0074], 0x1D61C => [0x0075],
  0x1D61D => [0x0076], 0x1D61E => [0x0077], 0x1D61F => [0x0078], 0x1D620 => [0x0079],
  0x1D621 => [0x007A],
  0x1D63C => [0x0061], 0x1D63D => [0x0062], 0x1D63E => [0x0063], 0x1D63F => [0x0064],
  0x1D640 => [0x0065], 0x1D641 => [0x0066], 0x1D642 => [0x0067], 0x1D643 => [0x0068],
  0x1D644 => [0x0069], 0x1D645 => [0x006A], 0x1D646 => [0x006B], 0x1D647 => [0x006C],
  0x1D648 => [0x006D], 0x1D649 => [0x006E], 0x1D64A => [0x006F], 0x1D64B => [0x0070],
  0x1D64C => [0x0071], 0x1D64D => [0x0072], 0x1D64E => [0x0073], 0x1D64F => [0x0074],
  0x1D650 => [0x0075], 0x1D651 => [0x0076], 0x1D652 => [0x0077], 0x1D653 => [0x0078],
  0x1D654 => [0x0079], 0x1D655 => [0x007A],
  0x1D670 => [0x0061], 0x1D671 => [0x0062], 0x1D672 => [0x0063], 0x1D673 => [0x0064],
  0x1D674 => [0x0065], 0x1D675 => [0x0066], 0x1D676 => [0x0067], 0x1D677 => [0x0068],
  0x1D678 => [0x0069], 0x1D679 => [0x006A], 0x1D67A => [0x006B], 0x1D67B => [0x006C],
  0x1D67C => [0x006D], 0x1D67D => [0x006E], 0x1D67E => [0x006F], 0x1D67F => [0x0070],
  0x1D680 => [0x0071], 0x1D681 => [0x0072], 0x1D682 => [0x0073], 0x1D683 => [0x0074],
  0x1D684 => [0x0075], 0x1D685 => [0x0076], 0x1D686 => [0x0077], 0x1D687 => [0x0078],
  0x1D688 => [0x0079], 0x1D689 => [0x007A],
  0x1D6A8 => [0x03B1], 0x1D6A9 => [0x03B2], 0x1D6AA => [0x03B3], 0x1D6AB => [0x03B4],
  0x1D6AC => [0x03B5], 0x1D6AD => [0x03B6], 0x1D6AE => [0x03B7], 0x1D6AF => [0x03B8],
  0x1D6B0 => [0x03B9], 0x1D6B1 => [0x03BA], 0x1D6B2 => [0x03BB], 0x1D6B3 => [0x03BC],
  0x1D6B4 => [0x03BD], 0x1D6B5 => [0x03BE], 0x1D6B6 => [0x03BF], 0x1D6B7 => [0x03C0],
  0x1D6B8 => [0x03C1], 0x1D6B9 => [0x03B8], 0x1D6BA => [0x03C3], 0x1D6BB => [0x03C4],
  0x1D6BC => [0x03C5], 0x1D6BD => [0x03C6], 0x1D6BE => [0x03C7], 0x1D6BF => [0x03C8],
  0x1D6C0 => [0x03C9], 0x1D6D3 => [0x03C3],
  0x1D6E2 => [0x03B1], 0x1D6E3 => [0x03B2], 0x1D6E4 => [0x03B3], 0x1D6E5 => [0x03B4],
  0x1D6E6 => [0x03B5], 0x1D6E7 => [0x03B6], 0x1D6E8 => [0x03B7], 0x1D6E9 => [0x03B8],
  0x1D6EA => [0x03B9], 0x1D6EB => [0x03BA], 0x1D6EC => [0x03BB], 0x1D6ED => [0x03BC],
  0x1D6EE => [0x03BD], 0x1D6EF => [0x03BE], 0x1D6F0 => [0x03BF], 0x1D6F1 => [0x03C0],
  0x1D6F2 => [0x03C1], 0x1D6F3 => [0x03B8], 0x1D6F4 => [0x03C3], 0x1D6F5 => [0x03C4],
  0x1D6F6 => [0x03C5], 0x1D6F7 => [0x03C6], 0x1D6F8 => [0x03C7], 0x1D6F9 => [0x03C8],
  0x1D6FA => [0x03C9], 0x1D70D => [0x03C3],
  0x1D71C => [0x03B1], 0x1D71D => [0x03B2], 0x1D71E => [0x03B3], 0x1D71F => [0x03B4],
  0x1D720 => [0x03B5], 0x1D721 => [0x03B6], 0x1D722 => [0x03B7], 0x1D723 => [0x03B8],
  0x1D724 => [0x03B9], 0x1D725 => [0x03BA], 0x1D726 => [0x03BB], 0x1D727 => [0x03BC],
  0x1D728 => [0x03BD], 0x1D729 => [0x03BE], 0x1D72A => [0x03BF], 0x1D72B => [0x03C0],
  0x1D72C => [0x03C1], 0x1D72D => [0x03B8], 0x1D72E => [0x03C3], 0x1D72F => [0x03C4],
  0x1D730 => [0x03C5], 0x1D731 => [0x03C6], 0x1D732 => [0x03C7], 0x1D733 => [0x03C8],
  0x1D734 => [0x03C9], 0x1D747 => [0x03C3],
  0x1D756 => [0x03B1], 0x1D757 => [0x03B2], 0x1D758 => [0x03B3], 0x1D759 => [0x03B4],
  0x1D75A => [0x03B5], 0x1D75B => [0x03B6], 0x1D75C => [0x03B7], 0x1D75D => [0x03B8],
  0x1D75E => [0x03B9], 0x1D75F => [0x03BA], 0x1D760 => [0x03BB], 0x1D761 => [0x03BC],
  0x1D762 => [0x03BD], 0x1D763 => [0x03BE], 0x1D764 => [0x03BF], 0x1D765 => [0x03C0],
  0x1D766 => [0x03C1], 0x1D767 => [0x03B8], 0x1D768 => [0x03C3], 0x1D769 => [0x03C4],
  0x1D76A => [0x03C5], 0x1D76B => [0x03C6], 0x1D76C => [0x03C7], 0x1D76D => [0x03C8],
  0x1D76E => [0x03C9], 0x1D781 => [0x03C3],
  0x1D790 => [0x03B1], 0x1D791 => [0x03B2], 0x1D792 => [0x03B3], 0x1D793 => [0x03B4],
  0x1D794 => [0x03B5], 0x1D795 => [0x03B6], 0x1D796 => [0x03B7], 0x1D797 => [0x03B8],
  0x1D798 => [0x03B9], 0x1D799 => [0x03BA], 0x1D79A => [0x03BB], 0x1D79B => [0x03BC],
  0x1D79C => [0x03BD], 0x1D79D => [0x03BE], 0x1D79E => [0x03BF], 0x1D79F => [0x03C0],
  0x1D7A0 => [0x03C1], 0x1D7A1 => [0x03B8], 0x1D7A2 => [0x03C3], 0x1D7A3 => [0x03C4],
  0x1D7A4 => [0x03C5], 0x1D7A5 => [0x03C6], 0x1D7A6 => [0x03C7], 0x1D7A7 => [0x03C8],
  0x1D7A8 => [0x03C9], 0x1D7BB => [0x03C3],
}.freeze
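# A minimal usage sketch (not part of the upstream API; the helper name
# below is hypothetical). A mapping table such as B2 replaces each mapped
# code point with its replacement sequence and leaves other code points
# unchanged:
#
#   def self.map_codepoints(str, table)
#     str.each_char.flat_map { |c| table.fetch(c.ord, [c.ord]) }.pack('U*')
#   end
#
#   map_codepoints("Stra\u00DFe", B2) # => "strasse"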
# Table B3 as defined by RFC 3454 (string preparation).
#
# @since 2.6.0
B3 = {
  # Case map
  0x0041 => [0x0061], 0x0042 => [0x0062], 0x0043 => [0x0063], 0x0044 => [0x0064],
  0x0045 => [0x0065], 0x0046 => [0x0066], 0x0047 => [0x0067], 0x0048 => [0x0068],
  0x0049 => [0x0069], 0x004A => [0x006A], 0x004B => [0x006B], 0x004C => [0x006C],
  0x004D => [0x006D], 0x004E => [0x006E], 0x004F => [0x006F], 0x0050 => [0x0070],
  0x0051 => [0x0071], 0x0052 => [0x0072], 0x0053 => [0x0073], 0x0054 => [0x0074],
  0x0055 => [0x0075], 0x0056 => [0x0076], 0x0057 => [0x0077], 0x0058 => [0x0078],
  0x0059 => [0x0079], 0x005A => [0x007A], 0x00B5 => [0x03BC], 0x00C0 => [0x00E0],
  0x00C1 => [0x00E1], 0x00C2 => [0x00E2], 0x00C3 => [0x00E3], 0x00C4 => [0x00E4],
  0x00C5 => [0x00E5], 0x00C6 => [0x00E6], 0x00C7 => [0x00E7], 0x00C8 => [0x00E8],
  0x00C9 => [0x00E9], 0x00CA => [0x00EA], 0x00CB => [0x00EB], 0x00CC => [0x00EC],
  0x00CD => [0x00ED], 0x00CE => [0x00EE], 0x00CF => [0x00EF], 0x00D0 => [0x00F0],
  0x00D1 => [0x00F1], 0x00D2 => [0x00F2], 0x00D3 => [0x00F3], 0x00D4 => [0x00F4],
  0x00D5 => [0x00F5], 0x00D6 => [0x00F6], 0x00D8 => [0x00F8], 0x00D9 => [0x00F9],
  0x00DA => [0x00FA], 0x00DB => [0x00FB], 0x00DC => [0x00FC], 0x00DD => [0x00FD],
  0x00DE => [0x00FE], 0x00DF => [0x0073, 0x0073], 0x0100 => [0x0101], 0x0102 => [0x0103],
  0x0104 => [0x0105], 0x0106 => [0x0107], 0x0108 => [0x0109], 0x010A => [0x010B],
  0x010C => [0x010D], 0x010E => [0x010F], 0x0110 => [0x0111], 0x0112 => [0x0113],
  0x0114 => [0x0115], 0x0116 => [0x0117], 0x0118 => [0x0119], 0x011A => [0x011B],
  0x011C => [0x011D], 0x011E => [0x011F], 0x0120 => [0x0121], 0x0122 => [0x0123],
  0x0124 => [0x0125], 0x0126 => [0x0127], 0x0128 => [0x0129], 0x012A => [0x012B],
  0x012C => [0x012D], 0x012E => [0x012F], 0x0130 => [0x0069, 0x0307], 0x0132 => [0x0133],
  0x0134 => [0x0135], 0x0136 => [0x0137], 0x0139 => [0x013A], 0x013B => [0x013C],
  0x013D => [0x013E], 0x013F => [0x0140], 0x0141 => [0x0142], 0x0143 => [0x0144],
  0x0145 => [0x0146], 0x0147 => [0x0148], 0x0149 => [0x02BC, 0x006E], 0x014A => [0x014B],
  0x014C => [0x014D], 0x014E => [0x014F], 0x0150 => [0x0151], 0x0152 => [0x0153],
  0x0154 => [0x0155], 0x0156 => [0x0157], 0x0158 => [0x0159], 0x015A => [0x015B],
  0x015C => [0x015D], 0x015E => [0x015F], 0x0160 => [0x0161], 0x0162 => [0x0163],
  0x0164 => [0x0165], 0x0166 => [0x0167], 0x0168 => [0x0169], 0x016A => [0x016B],
  0x016C => [0x016D], 0x016E => [0x016F], 0x0170 => [0x0171], 0x0172 => [0x0173],
  0x0174 => [0x0175], 0x0176 => [0x0177], 0x0178 => [0x00FF], 0x0179 => [0x017A],
  0x017B => [0x017C], 0x017D => [0x017E], 0x017F => [0x0073], 0x0181 => [0x0253],
  0x0182 => [0x0183], 0x0184 => [0x0185], 0x0186 => [0x0254], 0x0187 => [0x0188],
  0x0189 => [0x0256], 0x018A => [0x0257], 0x018B => [0x018C], 0x018E => [0x01DD],
  0x018F => [0x0259], 0x0190 => [0x025B], 0x0191 => [0x0192], 0x0193 => [0x0260],
  0x0194 => [0x0263], 0x0196 => [0x0269], 0x0197 => [0x0268], 0x0198 => [0x0199],
  0x019C => [0x026F], 0x019D => [0x0272], 0x019F => [0x0275], 0x01A0 => [0x01A1],
  0x01A2 => [0x01A3], 0x01A4 => [0x01A5], 0x01A6 => [0x0280], 0x01A7 => [0x01A8],
  0x01A9 => [0x0283], 0x01AC => [0x01AD], 0x01AE => [0x0288], 0x01AF => [0x01B0],
  0x01B1 => [0x028A], 0x01B2 => [0x028B], 0x01B3 => [0x01B4], 0x01B5 => [0x01B6],
  0x01B7 => [0x0292], 0x01B8 => [0x01B9], 0x01BC => [0x01BD], 0x01C4 => [0x01C6],
  0x01C5 => [0x01C6], 0x01C7 => [0x01C9], 0x01C8 => [0x01C9], 0x01CA => [0x01CC],
  0x01CB => [0x01CC], 0x01CD => [0x01CE], 0x01CF => [0x01D0], 0x01D1 => [0x01D2],
  0x01D3 => [0x01D4], 0x01D5 => [0x01D6], 0x01D7 => [0x01D8], 0x01D9 => [0x01DA],
  0x01DB => [0x01DC], 0x01DE => [0x01DF], 0x01E0 => [0x01E1], 0x01E2 => [0x01E3],
  0x01E4 => [0x01E5], 0x01E6 => [0x01E7], 0x01E8 => [0x01E9], 0x01EA => [0x01EB],
  0x01EC => [0x01ED], 0x01EE => [0x01EF], 0x01F0 => [0x006A, 0x030C], 0x01F1 => [0x01F3],
  0x01F2 => [0x01F3], 0x01F4 => [0x01F5], 0x01F6 => [0x0195], 0x01F7 => [0x01BF],
  0x01F8 => [0x01F9], 0x01FA => [0x01FB],
  0x01FC => [0x01FD], 0x01FE => [0x01FF], 0x0200 => [0x0201], 0x0202 => [0x0203],
  0x0204 => [0x0205], 0x0206 => [0x0207], 0x0208 => [0x0209], 0x020A => [0x020B],
  0x020C => [0x020D], 0x020E => [0x020F], 0x0210 => [0x0211], 0x0212 => [0x0213],
  0x0214 => [0x0215], 0x0216 => [0x0217], 0x0218 => [0x0219], 0x021A => [0x021B],
  0x021C => [0x021D], 0x021E => [0x021F], 0x0220 => [0x019E], 0x0222 => [0x0223],
  0x0224 => [0x0225], 0x0226 => [0x0227], 0x0228 => [0x0229], 0x022A => [0x022B],
  0x022C => [0x022D], 0x022E => [0x022F], 0x0230 => [0x0231], 0x0232 => [0x0233],
  0x0345 => [0x03B9],
  0x0386 => [0x03AC], 0x0388 => [0x03AD], 0x0389 => [0x03AE], 0x038A => [0x03AF],
  0x038C => [0x03CC], 0x038E => [0x03CD], 0x038F => [0x03CE],
  0x0390 => [0x03B9, 0x0308, 0x0301],
  0x0391 => [0x03B1], 0x0392 => [0x03B2], 0x0393 => [0x03B3], 0x0394 => [0x03B4],
  0x0395 => [0x03B5], 0x0396 => [0x03B6], 0x0397 => [0x03B7], 0x0398 => [0x03B8],
  0x0399 => [0x03B9], 0x039A => [0x03BA], 0x039B => [0x03BB], 0x039C => [0x03BC],
  0x039D => [0x03BD], 0x039E => [0x03BE], 0x039F => [0x03BF], 0x03A0 => [0x03C0],
  0x03A1 => [0x03C1], 0x03A3 => [0x03C3], 0x03A4 => [0x03C4], 0x03A5 => [0x03C5],
  0x03A6 => [0x03C6], 0x03A7 => [0x03C7], 0x03A8 => [0x03C8], 0x03A9 => [0x03C9],
  0x03AA => [0x03CA], 0x03AB => [0x03CB], 0x03B0 => [0x03C5, 0x0308, 0x0301],
  0x03C2 => [0x03C3], 0x03D0 => [0x03B2], 0x03D1 => [0x03B8], 0x03D5 => [0x03C6],
  0x03D6 => [0x03C0], 0x03D8 => [0x03D9], 0x03DA => [0x03DB], 0x03DC => [0x03DD],
  0x03DE => [0x03DF], 0x03E0 => [0x03E1], 0x03E2 => [0x03E3], 0x03E4 => [0x03E5],
  0x03E6 => [0x03E7], 0x03E8 => [0x03E9], 0x03EA => [0x03EB], 0x03EC => [0x03ED],
  0x03EE => [0x03EF], 0x03F0 => [0x03BA], 0x03F1 => [0x03C1], 0x03F2 => [0x03C3],
  0x03F4 => [0x03B8], 0x03F5 => [0x03B5],
  0x0400 => [0x0450], 0x0401 => [0x0451], 0x0402 => [0x0452], 0x0403 => [0x0453],
  0x0404 => [0x0454], 0x0405 => [0x0455], 0x0406 => [0x0456], 0x0407 => [0x0457],
  0x0408 => [0x0458], 0x0409 => [0x0459], 0x040A => [0x045A], 0x040B => [0x045B],
  0x040C => [0x045C], 0x040D => [0x045D], 0x040E => [0x045E], 0x040F => [0x045F],
  0x0410 => [0x0430], 0x0411 => [0x0431], 0x0412 => [0x0432], 0x0413 => [0x0433],
  0x0414 => [0x0434], 0x0415 => [0x0435], 0x0416 => [0x0436], 0x0417 => [0x0437],
  0x0418 => [0x0438], 0x0419 => [0x0439], 0x041A => [0x043A], 0x041B => [0x043B],
  0x041C => [0x043C], 0x041D => [0x043D], 0x041E => [0x043E], 0x041F => [0x043F],
  0x0420 => [0x0440], 0x0421 => [0x0441], 0x0422 => [0x0442], 0x0423 => [0x0443],
  0x0424 => [0x0444], 0x0425 => [0x0445], 0x0426 => [0x0446], 0x0427 => [0x0447],
  0x0428 => [0x0448], 0x0429 => [0x0449], 0x042A => [0x044A], 0x042B => [0x044B],
  0x042C => [0x044C], 0x042D => [0x044D], 0x042E => [0x044E], 0x042F => [0x044F],
  0x0460 => [0x0461], 0x0462 => [0x0463], 0x0464 => [0x0465], 0x0466 => [0x0467],
  0x0468 => [0x0469], 0x046A => [0x046B], 0x046C => [0x046D], 0x046E => [0x046F],
  0x0470 => [0x0471], 0x0472 => [0x0473], 0x0474 => [0x0475], 0x0476 => [0x0477],
  0x0478 => [0x0479], 0x047A => [0x047B], 0x047C => [0x047D], 0x047E => [0x047F],
  0x0480 => [0x0481], 0x048A => [0x048B], 0x048C => [0x048D], 0x048E => [0x048F],
  0x0490 => [0x0491], 0x0492 => [0x0493], 0x0494 => [0x0495], 0x0496 => [0x0497],
  0x0498 => [0x0499], 0x049A => [0x049B], 0x049C => [0x049D], 0x049E => [0x049F],
  0x04A0 => [0x04A1], 0x04A2 => [0x04A3], 0x04A4 => [0x04A5], 0x04A6 => [0x04A7],
  0x04A8 => [0x04A9], 0x04AA => [0x04AB], 0x04AC => [0x04AD], 0x04AE => [0x04AF],
  0x04B0 => [0x04B1], 0x04B2 => [0x04B3], 0x04B4 => [0x04B5], 0x04B6 => [0x04B7],
  0x04B8 => [0x04B9], 0x04BA => [0x04BB], 0x04BC => [0x04BD], 0x04BE => [0x04BF],
  0x04C1 => [0x04C2], 0x04C3 => [0x04C4], 0x04C5 => [0x04C6], 0x04C7 => [0x04C8],
  0x04C9 => [0x04CA], 0x04CB => [0x04CC], 0x04CD => [0x04CE],
  0x04D0 => [0x04D1], 0x04D2 => [0x04D3], 0x04D4 => [0x04D5], 0x04D6 => [0x04D7],
  0x04D8 => [0x04D9], 0x04DA => [0x04DB], 0x04DC => [0x04DD], 0x04DE => [0x04DF],
  0x04E0 => [0x04E1], 0x04E2 => [0x04E3], 0x04E4 => [0x04E5], 0x04E6 => [0x04E7],
  0x04E8 => [0x04E9], 0x04EA => [0x04EB], 0x04EC => [0x04ED], 0x04EE => [0x04EF],
  0x04F0 => [0x04F1], 0x04F2 => [0x04F3], 0x04F4 => [0x04F5], 0x04F8 => [0x04F9],
  0x0500 => [0x0501], 0x0502 => [0x0503], 0x0504 => [0x0505], 0x0506 => [0x0507],
  0x0508 => [0x0509], 0x050A => [0x050B], 0x050C => [0x050D], 0x050E => [0x050F],
  0x0531 => [0x0561], 0x0532 => [0x0562], 0x0533 => [0x0563], 0x0534 => [0x0564],
  0x0535 => [0x0565], 0x0536 => [0x0566], 0x0537 => [0x0567], 0x0538 => [0x0568],
  0x0539 => [0x0569], 0x053A => [0x056A], 0x053B => [0x056B], 0x053C => [0x056C],
  0x053D => [0x056D], 0x053E => [0x056E], 0x053F => [0x056F],
  0x0540 => [0x0570], 0x0541 => [0x0571], 0x0542 => [0x0572], 0x0543 => [0x0573],
  0x0544 => [0x0574], 0x0545 => [0x0575], 0x0546 => [0x0576], 0x0547 => [0x0577],
  0x0548 => [0x0578], 0x0549 => [0x0579], 0x054A => [0x057A], 0x054B => [0x057B],
  0x054C => [0x057C], 0x054D => [0x057D], 0x054E => [0x057E], 0x054F => [0x057F],
  0x0550 => [0x0580], 0x0551 => [0x0581], 0x0552 => [0x0582], 0x0553 => [0x0583],
  0x0554 => [0x0584], 0x0555 => [0x0585], 0x0556 => [0x0586],
  0x0587 => [0x0565, 0x0582],
  0x1E00 => [0x1E01], 0x1E02 => [0x1E03], 0x1E04 => [0x1E05], 0x1E06 => [0x1E07],
  0x1E08 => [0x1E09], 0x1E0A => [0x1E0B], 0x1E0C => [0x1E0D], 0x1E0E => [0x1E0F],
  0x1E10 => [0x1E11], 0x1E12 => [0x1E13], 0x1E14 => [0x1E15], 0x1E16 => [0x1E17],
  0x1E18 => [0x1E19], 0x1E1A => [0x1E1B], 0x1E1C => [0x1E1D], 0x1E1E => [0x1E1F],
  0x1E20 => [0x1E21], 0x1E22 => [0x1E23], 0x1E24 => [0x1E25], 0x1E26 => [0x1E27],
  0x1E28 => [0x1E29], 0x1E2A => [0x1E2B], 0x1E2C => [0x1E2D], 0x1E2E => [0x1E2F],
  0x1E30 => [0x1E31], 0x1E32 => [0x1E33], 0x1E34 => [0x1E35], 0x1E36 => [0x1E37],
  0x1E38 => [0x1E39], 0x1E3A => [0x1E3B], 0x1E3C => [0x1E3D], 0x1E3E => [0x1E3F],
  0x1E40 => [0x1E41], 0x1E42 => [0x1E43], 0x1E44 => [0x1E45], 0x1E46 => [0x1E47],
  0x1E48 => [0x1E49], 0x1E4A => [0x1E4B], 0x1E4C => [0x1E4D], 0x1E4E => [0x1E4F],
  0x1E50 => [0x1E51], 0x1E52 => [0x1E53], 0x1E54 => [0x1E55], 0x1E56 => [0x1E57],
  0x1E58 => [0x1E59], 0x1E5A => [0x1E5B], 0x1E5C => [0x1E5D], 0x1E5E => [0x1E5F],
  0x1E60 => [0x1E61], 0x1E62 => [0x1E63], 0x1E64 => [0x1E65], 0x1E66 => [0x1E67],
  0x1E68 => [0x1E69], 0x1E6A => [0x1E6B], 0x1E6C => [0x1E6D], 0x1E6E => [0x1E6F],
  0x1E70 => [0x1E71], 0x1E72 => [0x1E73], 0x1E74 => [0x1E75], 0x1E76 => [0x1E77],
  0x1E78 => [0x1E79], 0x1E7A => [0x1E7B], 0x1E7C => [0x1E7D], 0x1E7E => [0x1E7F],
  0x1E80 => [0x1E81], 0x1E82 => [0x1E83], 0x1E84 => [0x1E85], 0x1E86 => [0x1E87],
  0x1E88 => [0x1E89], 0x1E8A => [0x1E8B], 0x1E8C => [0x1E8D], 0x1E8E => [0x1E8F],
  0x1E90 => [0x1E91], 0x1E92 => [0x1E93], 0x1E94 => [0x1E95],
  0x1E96 => [0x0068, 0x0331], 0x1E97 => [0x0074, 0x0308], 0x1E98 => [0x0077, 0x030A],
  0x1E99 => [0x0079, 0x030A], 0x1E9A => [0x0061, 0x02BE], 0x1E9B => [0x1E61],
  0x1EA0 => [0x1EA1], 0x1EA2 => [0x1EA3], 0x1EA4 => [0x1EA5], 0x1EA6 => [0x1EA7],
  0x1EA8 => [0x1EA9], 0x1EAA => [0x1EAB], 0x1EAC => [0x1EAD], 0x1EAE => [0x1EAF],
[0x1EB1], # Case map 0x1EB2 => [0x1EB3], # Case map 0x1EB4 => [0x1EB5], # Case map 0x1EB6 => [0x1EB7], # Case map 0x1EB8 => [0x1EB9], # Case map 0x1EBA => [0x1EBB], # Case map 0x1EBC => [0x1EBD], # Case map 0x1EBE => [0x1EBF], # Case map 0x1EC0 => [0x1EC1], # Case map 0x1EC2 => [0x1EC3], # Case map 0x1EC4 => [0x1EC5], # Case map 0x1EC6 => [0x1EC7], # Case map 0x1EC8 => [0x1EC9], # Case map 0x1ECA => [0x1ECB], # Case map 0x1ECC => [0x1ECD], # Case map 0x1ECE => [0x1ECF], # Case map 0x1ED0 => [0x1ED1], # Case map 0x1ED2 => [0x1ED3], # Case map 0x1ED4 => [0x1ED5], # Case map 0x1ED6 => [0x1ED7], # Case map 0x1ED8 => [0x1ED9], # Case map 0x1EDA => [0x1EDB], # Case map 0x1EDC => [0x1EDD], # Case map 0x1EDE => [0x1EDF], # Case map 0x1EE0 => [0x1EE1], # Case map 0x1EE2 => [0x1EE3], # Case map 0x1EE4 => [0x1EE5], # Case map 0x1EE6 => [0x1EE7], # Case map 0x1EE8 => [0x1EE9], # Case map 0x1EEA => [0x1EEB], # Case map 0x1EEC => [0x1EED], # Case map 0x1EEE => [0x1EEF], # Case map 0x1EF0 => [0x1EF1], # Case map 0x1EF2 => [0x1EF3], # Case map 0x1EF4 => [0x1EF5], # Case map 0x1EF6 => [0x1EF7], # Case map 0x1EF8 => [0x1EF9], # Case map 0x1F08 => [0x1F00], # Case map 0x1F09 => [0x1F01], # Case map 0x1F0A => [0x1F02], # Case map 0x1F0B => [0x1F03], # Case map 0x1F0C => [0x1F04], # Case map 0x1F0D => [0x1F05], # Case map 0x1F0E => [0x1F06], # Case map 0x1F0F => [0x1F07], # Case map 0x1F18 => [0x1F10], # Case map 0x1F19 => [0x1F11], # Case map 0x1F1A => [0x1F12], # Case map 0x1F1B => [0x1F13], # Case map 0x1F1C => [0x1F14], # Case map 0x1F1D => [0x1F15], # Case map 0x1F28 => [0x1F20], # Case map 0x1F29 => [0x1F21], # Case map 0x1F2A => [0x1F22], # Case map 0x1F2B => [0x1F23], # Case map 0x1F2C => [0x1F24], # Case map 0x1F2D => [0x1F25], # Case map 0x1F2E => [0x1F26], # Case map 0x1F2F => [0x1F27], # Case map 0x1F38 => [0x1F30], # Case map 0x1F39 => [0x1F31], # Case map 0x1F3A => [0x1F32], # Case map 0x1F3B => [0x1F33], # Case map 0x1F3C => [0x1F34], # Case map 0x1F3D => [0x1F35], # Case map 0x1F3E => [0x1F36], # Case map 0x1F3F => [0x1F37], # Case map 0x1F48 => [0x1F40], # Case map 0x1F49 => [0x1F41], # Case map 0x1F4A => [0x1F42], # Case map 0x1F4B => [0x1F43], # Case map 0x1F4C => [0x1F44], # Case map 0x1F4D => [0x1F45], # Case map 0x1F50 => [0x03C5, 0x0313], # Case map 0x1F52 => [0x03C5, 0x0313, 0x0300], # Case map 0x1F54 => [0x03C5, 0x0313, 0x0301], # Case map 0x1F56 => [0x03C5, 0x0313, 0x0342], # Case map 0x1F59 => [0x1F51], # Case map 0x1F5B => [0x1F53], # Case map 0x1F5D => [0x1F55], # Case map 0x1F5F => [0x1F57], # Case map 0x1F68 => [0x1F60], # Case map 0x1F69 => [0x1F61], # Case map 0x1F6A => [0x1F62], # Case map 0x1F6B => [0x1F63], # Case map 0x1F6C => [0x1F64], # Case map 0x1F6D => [0x1F65], # Case map 0x1F6E => [0x1F66], # Case map 0x1F6F => [0x1F67], # Case map 0x1F80 => [0x1F00, 0x03B9], # Case map 0x1F81 => [0x1F01, 0x03B9], # Case map 0x1F82 => [0x1F02, 0x03B9], # Case map 0x1F83 => [0x1F03, 0x03B9], # Case map 0x1F84 => [0x1F04, 0x03B9], # Case map 0x1F85 => [0x1F05, 0x03B9], # Case map 0x1F86 => [0x1F06, 0x03B9], # Case map 0x1F87 => [0x1F07, 0x03B9], # Case map 0x1F88 => [0x1F00, 0x03B9], # Case map 0x1F89 => [0x1F01, 0x03B9], # Case map 0x1F8A => [0x1F02, 0x03B9], # Case map 0x1F8B => [0x1F03, 0x03B9], # Case map 0x1F8C => [0x1F04, 0x03B9], # Case map 0x1F8D => [0x1F05, 0x03B9], # Case map 0x1F8E => [0x1F06, 0x03B9], # Case map 0x1F8F => [0x1F07, 0x03B9], # Case map 0x1F90 => [0x1F20, 0x03B9], # Case map 0x1F91 => [0x1F21, 0x03B9], # Case map 0x1F92 => [0x1F22, 0x03B9], # Case map 0x1F93 
=> [0x1F23, 0x03B9], # Case map 0x1F94 => [0x1F24, 0x03B9], # Case map 0x1F95 => [0x1F25, 0x03B9], # Case map 0x1F96 => [0x1F26, 0x03B9], # Case map 0x1F97 => [0x1F27, 0x03B9], # Case map 0x1F98 => [0x1F20, 0x03B9], # Case map 0x1F99 => [0x1F21, 0x03B9], # Case map 0x1F9A => [0x1F22, 0x03B9], # Case map 0x1F9B => [0x1F23, 0x03B9], # Case map 0x1F9C => [0x1F24, 0x03B9], # Case map 0x1F9D => [0x1F25, 0x03B9], # Case map 0x1F9E => [0x1F26, 0x03B9], # Case map 0x1F9F => [0x1F27, 0x03B9], # Case map 0x1FA0 => [0x1F60, 0x03B9], # Case map 0x1FA1 => [0x1F61, 0x03B9], # Case map 0x1FA2 => [0x1F62, 0x03B9], # Case map 0x1FA3 => [0x1F63, 0x03B9], # Case map 0x1FA4 => [0x1F64, 0x03B9], # Case map 0x1FA5 => [0x1F65, 0x03B9], # Case map 0x1FA6 => [0x1F66, 0x03B9], # Case map 0x1FA7 => [0x1F67, 0x03B9], # Case map 0x1FA8 => [0x1F60, 0x03B9], # Case map 0x1FA9 => [0x1F61, 0x03B9], # Case map 0x1FAA => [0x1F62, 0x03B9], # Case map 0x1FAB => [0x1F63, 0x03B9], # Case map 0x1FAC => [0x1F64, 0x03B9], # Case map 0x1FAD => [0x1F65, 0x03B9], # Case map 0x1FAE => [0x1F66, 0x03B9], # Case map 0x1FAF => [0x1F67, 0x03B9], # Case map 0x1FB2 => [0x1F70, 0x03B9], # Case map 0x1FB3 => [0x03B1, 0x03B9], # Case map 0x1FB4 => [0x03AC, 0x03B9], # Case map 0x1FB6 => [0x03B1, 0x0342], # Case map 0x1FB7 => [0x03B1, 0x0342, 0x03B9], # Case map 0x1FB8 => [0x1FB0], # Case map 0x1FB9 => [0x1FB1], # Case map 0x1FBA => [0x1F70], # Case map 0x1FBB => [0x1F71], # Case map 0x1FBC => [0x03B1, 0x03B9], # Case map 0x1FBE => [0x03B9], # Case map 0x1FC2 => [0x1F74, 0x03B9], # Case map 0x1FC3 => [0x03B7, 0x03B9], # Case map 0x1FC4 => [0x03AE, 0x03B9], # Case map 0x1FC6 => [0x03B7, 0x0342], # Case map 0x1FC7 => [0x03B7, 0x0342, 0x03B9], # Case map 0x1FC8 => [0x1F72], # Case map 0x1FC9 => [0x1F73], # Case map 0x1FCA => [0x1F74], # Case map 0x1FCB => [0x1F75], # Case map 0x1FCC => [0x03B7, 0x03B9], # Case map 0x1FD2 => [0x03B9, 0x0308, 0x0300], # Case map 0x1FD3 => [0x03B9, 0x0308, 0x0301], # Case map 0x1FD6 => [0x03B9, 0x0342], # Case map 0x1FD7 => [0x03B9, 0x0308, 0x0342], # Case map 0x1FD8 => [0x1FD0], # Case map 0x1FD9 => [0x1FD1], # Case map 0x1FDA => [0x1F76], # Case map 0x1FDB => [0x1F77], # Case map 0x1FE2 => [0x03C5, 0x0308, 0x0300], # Case map 0x1FE3 => [0x03C5, 0x0308, 0x0301], # Case map 0x1FE4 => [0x03C1, 0x0313], # Case map 0x1FE6 => [0x03C5, 0x0342], # Case map 0x1FE7 => [0x03C5, 0x0308, 0x0342], # Case map 0x1FE8 => [0x1FE0], # Case map 0x1FE9 => [0x1FE1], # Case map 0x1FEA => [0x1F7A], # Case map 0x1FEB => [0x1F7B], # Case map 0x1FEC => [0x1FE5], # Case map 0x1FF2 => [0x1F7C, 0x03B9], # Case map 0x1FF3 => [0x03C9, 0x03B9], # Case map 0x1FF4 => [0x03CE, 0x03B9], # Case map 0x1FF6 => [0x03C9, 0x0342], # Case map 0x1FF7 => [0x03C9, 0x0342, 0x03B9], # Case map 0x1FF8 => [0x1F78], # Case map 0x1FF9 => [0x1F79], # Case map 0x1FFA => [0x1F7C], # Case map 0x1FFB => [0x1F7D], # Case map 0x1FFC => [0x03C9, 0x03B9], # Case map 0x2126 => [0x03C9], # Case map 0x212A => [0x006B], # Case map 0x212B => [0x00E5], # Case map 0x2160 => [0x2170], # Case map 0x2161 => [0x2171], # Case map 0x2162 => [0x2172], # Case map 0x2163 => [0x2173], # Case map 0x2164 => [0x2174], # Case map 0x2165 => [0x2175], # Case map 0x2166 => [0x2176], # Case map 0x2167 => [0x2177], # Case map 0x2168 => [0x2178], # Case map 0x2169 => [0x2179], # Case map 0x216A => [0x217A], # Case map 0x216B => [0x217B], # Case map 0x216C => [0x217C], # Case map 0x216D => [0x217D], # Case map 0x216E => [0x217E], # Case map 0x216F => [0x217F], # Case map 0x24B6 => [0x24D0], # Case map 
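# (Illustrative aside, not part of the generated table data: RFC 3454 applies
# a case-folding table like this one by replacing each mapped code point with
# its expansion, so one folding pass over a string can be sketched as
#   str.codepoints.flat_map { |cp| table[cp] || [cp] }.pack('U*')
# where `table` stands for whatever constant names the frozen hash these
# entries belong to. For example, 0x2126 (OHM SIGN, above) folds to 0x03C9,
# i.e. "ω", which also normalizes the letterlike symbol away.)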
0x24B7 => [0x24D1], # Case map 0x24B8 => [0x24D2], # Case map 0x24B9 => [0x24D3], # Case map 0x24BA => [0x24D4], # Case map 0x24BB => [0x24D5], # Case map 0x24BC => [0x24D6], # Case map 0x24BD => [0x24D7], # Case map 0x24BE => [0x24D8], # Case map 0x24BF => [0x24D9], # Case map 0x24C0 => [0x24DA], # Case map 0x24C1 => [0x24DB], # Case map 0x24C2 => [0x24DC], # Case map 0x24C3 => [0x24DD], # Case map 0x24C4 => [0x24DE], # Case map 0x24C5 => [0x24DF], # Case map 0x24C6 => [0x24E0], # Case map 0x24C7 => [0x24E1], # Case map 0x24C8 => [0x24E2], # Case map 0x24C9 => [0x24E3], # Case map 0x24CA => [0x24E4], # Case map 0x24CB => [0x24E5], # Case map 0x24CC => [0x24E6], # Case map 0x24CD => [0x24E7], # Case map 0x24CE => [0x24E8], # Case map 0x24CF => [0x24E9], # Case map 0xFB00 => [0x0066, 0x0066], # Case map 0xFB01 => [0x0066, 0x0069], # Case map 0xFB02 => [0x0066, 0x006C], # Case map 0xFB03 => [0x0066, 0x0066, 0x0069], # Case map 0xFB04 => [0x0066, 0x0066, 0x006C], # Case map 0xFB05 => [0x0073, 0x0074], # Case map 0xFB06 => [0x0073, 0x0074], # Case map 0xFB13 => [0x0574, 0x0576], # Case map 0xFB14 => [0x0574, 0x0565], # Case map 0xFB15 => [0x0574, 0x056B], # Case map 0xFB16 => [0x057E, 0x0576], # Case map 0xFB17 => [0x0574, 0x056D], # Case map 0xFF21 => [0xFF41], # Case map 0xFF22 => [0xFF42], # Case map 0xFF23 => [0xFF43], # Case map 0xFF24 => [0xFF44], # Case map 0xFF25 => [0xFF45], # Case map 0xFF26 => [0xFF46], # Case map 0xFF27 => [0xFF47], # Case map 0xFF28 => [0xFF48], # Case map 0xFF29 => [0xFF49], # Case map 0xFF2A => [0xFF4A], # Case map 0xFF2B => [0xFF4B], # Case map 0xFF2C => [0xFF4C], # Case map 0xFF2D => [0xFF4D], # Case map 0xFF2E => [0xFF4E], # Case map 0xFF2F => [0xFF4F], # Case map 0xFF30 => [0xFF50], # Case map 0xFF31 => [0xFF51], # Case map 0xFF32 => [0xFF52], # Case map 0xFF33 => [0xFF53], # Case map 0xFF34 => [0xFF54], # Case map 0xFF35 => [0xFF55], # Case map 0xFF36 => [0xFF56], # Case map 0xFF37 => [0xFF57], # Case map 0xFF38 => [0xFF58], # Case map 0xFF39 => [0xFF59], # Case map 0xFF3A => [0xFF5A], # Case map 0x10400 => [0x10428], # Case map 0x10401 => [0x10429], # Case map 0x10402 => [0x1042A], # Case map 0x10403 => [0x1042B], # Case map 0x10404 => [0x1042C], # Case map 0x10405 => [0x1042D], # Case map 0x10406 => [0x1042E], # Case map 0x10407 => [0x1042F], # Case map 0x10408 => [0x10430], # Case map 0x10409 => [0x10431], # Case map 0x1040A => [0x10432], # Case map 0x1040B => [0x10433], # Case map 0x1040C => [0x10434], # Case map 0x1040D => [0x10435], # Case map 0x1040E => [0x10436], # Case map 0x1040F => [0x10437], # Case map 0x10410 => [0x10438], # Case map 0x10411 => [0x10439], # Case map 0x10412 => [0x1043A], # Case map 0x10413 => [0x1043B], # Case map 0x10414 => [0x1043C], # Case map 0x10415 => [0x1043D], # Case map 0x10416 => [0x1043E], # Case map 0x10417 => [0x1043F], # Case map 0x10418 => [0x10440], # Case map 0x10419 => [0x10441], # Case map 0x1041A => [0x10442], # Case map 0x1041B => [0x10443], # Case map 0x1041C => [0x10444], # Case map 0x1041D => [0x10445], # Case map 0x1041E => [0x10446], # Case map 0x1041F => [0x10447], # Case map 0x10420 => [0x10448], # Case map 0x10421 => [0x10449], # Case map 0x10422 => [0x1044A], # Case map 0x10423 => [0x1044B], # Case map 0x10424 => [0x1044C], # Case map 0x10425 => [0x1044D], # Case map }.freeze # Table C1.1 as defined by RFC 3454 (string preparation). # # @since 2.6.0 C1_1 = [ 0x0020..0x0020, # SPACE ] # Table C1.2 as defined by RFC 3454 (string preparation). 
# # @since 2.6.0 C1_2 = [ 0x00A0..0x00A0, # NO-BREAK SPACE 0x1680..0x1680, # OGHAM SPACE MARK 0x2000..0x2000, # EN QUAD 0x2001..0x2001, # EM QUAD 0x2002..0x2002, # EN SPACE 0x2003..0x2003, # EM SPACE 0x2004..0x2004, # THREE-PER-EM SPACE 0x2005..0x2005, # FOUR-PER-EM SPACE 0x2006..0x2006, # SIX-PER-EM SPACE 0x2007..0x2007, # FIGURE SPACE 0x2008..0x2008, # PUNCTUATION SPACE 0x2009..0x2009, # THIN SPACE 0x200A..0x200A, # HAIR SPACE 0x200B..0x200B, # ZERO WIDTH SPACE 0x202F..0x202F, # NARROW NO-BREAK SPACE 0x205F..0x205F, # MEDIUM MATHEMATICAL SPACE 0x3000..0x3000, # IDEOGRAPHIC SPACE ].freeze # Table C2.1 as defined by RFC 3454 (string preparation). # # @since 2.6.0 C2_1 = [ 0x0000..0x001F, # [CONTROL CHARACTERS] 0x007F..0x007F, # DELETE ].freeze # Table C2.2 as defined by RFC 3454 (string preparation). # # @since 2.6.0 C2_2 = [ 0x0080..0x009F, # [CONTROL CHARACTERS] 0x06DD..0x06DD, # ARABIC END OF AYAH 0x070F..0x070F, # SYRIAC ABBREVIATION MARK 0x180E..0x180E, # MONGOLIAN VOWEL SEPARATOR 0x200C..0x200C, # ZERO WIDTH NON-JOINER 0x200D..0x200D, # ZERO WIDTH JOINER 0x2028..0x2028, # LINE SEPARATOR 0x2029..0x2029, # PARAGRAPH SEPARATOR 0x2060..0x2060, # WORD JOINER 0x2061..0x2061, # FUNCTION APPLICATION 0x2062..0x2062, # INVISIBLE TIMES 0x2063..0x2063, # INVISIBLE SEPARATOR 0x206A..0x206F, # [CONTROL CHARACTERS] 0xFEFF..0xFEFF, # ZERO WIDTH NO-BREAK SPACE 0xFFF9..0xFFFC, # [CONTROL CHARACTERS] 0x1D173..0x1D17A, # [MUSICAL CONTROL CHARACTERS] ].freeze # Table C3 as defined by RFC 3454 (string preparation). # # @since 2.6.0 C3 = [ 0xE000..0xF8FF, # [PRIVATE USE, PLANE 0] 0xF0000..0xFFFFD, # [PRIVATE USE, PLANE 15] 0x100000..0x10FFFD, # [PRIVATE USE, PLANE 16] ].freeze # Table C4 as defined by RFC 3454 (string preparation). # # @since 2.6.0 C4 = [ 0xFDD0..0xFDEF, # [NONCHARACTER CODE POINTS] 0xFFFE..0xFFFF, # [NONCHARACTER CODE POINTS] 0x1FFFE..0x1FFFF, # [NONCHARACTER CODE POINTS] 0x2FFFE..0x2FFFF, # [NONCHARACTER CODE POINTS] 0x3FFFE..0x3FFFF, # [NONCHARACTER CODE POINTS] 0x4FFFE..0x4FFFF, # [NONCHARACTER CODE POINTS] 0x5FFFE..0x5FFFF, # [NONCHARACTER CODE POINTS] 0x6FFFE..0x6FFFF, # [NONCHARACTER CODE POINTS] 0x7FFFE..0x7FFFF, # [NONCHARACTER CODE POINTS] 0x8FFFE..0x8FFFF, # [NONCHARACTER CODE POINTS] 0x9FFFE..0x9FFFF, # [NONCHARACTER CODE POINTS] 0xAFFFE..0xAFFFF, # [NONCHARACTER CODE POINTS] 0xBFFFE..0xBFFFF, # [NONCHARACTER CODE POINTS] 0xCFFFE..0xCFFFF, # [NONCHARACTER CODE POINTS] 0xDFFFE..0xDFFFF, # [NONCHARACTER CODE POINTS] 0xEFFFE..0xEFFFF, # [NONCHARACTER CODE POINTS] 0xFFFFE..0xFFFFF, # [NONCHARACTER CODE POINTS] 0x10FFFE..0x10FFFF, # [NONCHARACTER CODE POINTS] ].freeze # Table C5 as defined by RFC 3454 (string preparation). # # @since 2.6.0 C5 = [ 0xD800..0xDFFF, # [SURROGATE CODES] ].freeze # Table C6 as defined by RFC 3454 (string preparation). # # @since 2.6.0 C6 = [ 0xFFF9..0xFFF9, # INTERLINEAR ANNOTATION ANCHOR 0xFFFA..0xFFFA, # INTERLINEAR ANNOTATION SEPARATOR 0xFFFB..0xFFFB, # INTERLINEAR ANNOTATION TERMINATOR 0xFFFC..0xFFFC, # OBJECT REPLACEMENT CHARACTER 0xFFFD..0xFFFD, # REPLACEMENT CHARACTER ].freeze # Table C7 as defined by RFC 3454 (string preparation). # # @since 2.6.0 C7 = [ 0x2FF0..0x2FFB, # [IDEOGRAPHIC DESCRIPTION CHARACTERS] ].freeze # Table C8 as defined by RFC 3454 (string preparation). 
# # @since 2.6.0 C8 = [ 0x0340..0x0340, # COMBINING GRAVE TONE MARK 0x0341..0x0341, # COMBINING ACUTE TONE MARK 0x200E..0x200E, # LEFT-TO-RIGHT MARK 0x200F..0x200F, # RIGHT-TO-LEFT MARK 0x202A..0x202A, # LEFT-TO-RIGHT EMBEDDING 0x202B..0x202B, # RIGHT-TO-LEFT EMBEDDING 0x202C..0x202C, # POP DIRECTIONAL FORMATTING 0x202D..0x202D, # LEFT-TO-RIGHT OVERRIDE 0x202E..0x202E, # RIGHT-TO-LEFT OVERRIDE 0x206A..0x206A, # INHIBIT SYMMETRIC SWAPPING 0x206B..0x206B, # ACTIVATE SYMMETRIC SWAPPING 0x206C..0x206C, # INHIBIT ARABIC FORM SHAPING 0x206D..0x206D, # ACTIVATE ARABIC FORM SHAPING 0x206E..0x206E, # NATIONAL DIGIT SHAPES 0x206F..0x206F, # NOMINAL DIGIT SHAPES ].freeze # Table C9 as defined by RFC 3454 (string preparation). # # @since 2.6.0 C9 = [ 0xE0001..0xE0001, # LANGUAGE TAG 0xE0020..0xE007F, # [TAGGING CHARACTERS] ].freeze # Table D1 as defined by RFC 3454 (string preparation). # # @since 2.6.0 D1 = [ 0x05BE..0x05BE, 0x05C0..0x05C0, 0x05C3..0x05C3, 0x05D0..0x05EA, 0x05F0..0x05F4, 0x061B..0x061B, 0x061F..0x061F, 0x0621..0x063A, 0x0640..0x064A, 0x066D..0x066F, 0x0671..0x06D5, 0x06DD..0x06DD, 0x06E5..0x06E6, 0x06FA..0x06FE, 0x0700..0x070D, 0x0710..0x0710, 0x0712..0x072C, 0x0780..0x07A5, 0x07B1..0x07B1, 0x200F..0x200F, 0xFB1D..0xFB1D, 0xFB1F..0xFB28, 0xFB2A..0xFB36, 0xFB38..0xFB3C, 0xFB3E..0xFB3E, 0xFB40..0xFB41, 0xFB43..0xFB44, 0xFB46..0xFBB1, 0xFBD3..0xFD3D, 0xFD50..0xFD8F, 0xFD92..0xFDC7, 0xFDF0..0xFDFC, 0xFE70..0xFE74, 0xFE76..0xFEFC, ].freeze # Table D2 as defined by RFC 3454 (string preparation). # # @since 2.6.0 D2 = [ 0x0041..0x005A, 0x0061..0x007A, 0x00AA..0x00AA, 0x00B5..0x00B5, 0x00BA..0x00BA, 0x00C0..0x00D6, 0x00D8..0x00F6, 0x00F8..0x0220, 0x0222..0x0233, 0x0250..0x02AD, 0x02B0..0x02B8, 0x02BB..0x02C1, 0x02D0..0x02D1, 0x02E0..0x02E4, 0x02EE..0x02EE, 0x037A..0x037A, 0x0386..0x0386, 0x0388..0x038A, 0x038C..0x038C, 0x038E..0x03A1, 0x03A3..0x03CE, 0x03D0..0x03F5, 0x0400..0x0482, 0x048A..0x04CE, 0x04D0..0x04F5, 0x04F8..0x04F9, 0x0500..0x050F, 0x0531..0x0556, 0x0559..0x055F, 0x0561..0x0587, 0x0589..0x0589, 0x0903..0x0903, 0x0905..0x0939, 0x093D..0x0940, 0x0949..0x094C, 0x0950..0x0950, 0x0958..0x0961, 0x0964..0x0970, 0x0982..0x0983, 0x0985..0x098C, 0x098F..0x0990, 0x0993..0x09A8, 0x09AA..0x09B0, 0x09B2..0x09B2, 0x09B6..0x09B9, 0x09BE..0x09C0, 0x09C7..0x09C8, 0x09CB..0x09CC, 0x09D7..0x09D7, 0x09DC..0x09DD, 0x09DF..0x09E1, 0x09E6..0x09F1, 0x09F4..0x09FA, 0x0A05..0x0A0A, 0x0A0F..0x0A10, 0x0A13..0x0A28, 0x0A2A..0x0A30, 0x0A32..0x0A33, 0x0A35..0x0A36, 0x0A38..0x0A39, 0x0A3E..0x0A40, 0x0A59..0x0A5C, 0x0A5E..0x0A5E, 0x0A66..0x0A6F, 0x0A72..0x0A74, 0x0A83..0x0A83, 0x0A85..0x0A8B, 0x0A8D..0x0A8D, 0x0A8F..0x0A91, 0x0A93..0x0AA8, 0x0AAA..0x0AB0, 0x0AB2..0x0AB3, 0x0AB5..0x0AB9, 0x0ABD..0x0AC0, 0x0AC9..0x0AC9, 0x0ACB..0x0ACC, 0x0AD0..0x0AD0, 0x0AE0..0x0AE0, 0x0AE6..0x0AEF, 0x0B02..0x0B03, 0x0B05..0x0B0C, 0x0B0F..0x0B10, 0x0B13..0x0B28, 0x0B2A..0x0B30, 0x0B32..0x0B33, 0x0B36..0x0B39, 0x0B3D..0x0B3E, 0x0B40..0x0B40, 0x0B47..0x0B48, 0x0B4B..0x0B4C, 0x0B57..0x0B57, 0x0B5C..0x0B5D, 0x0B5F..0x0B61, 0x0B66..0x0B70, 0x0B83..0x0B83, 0x0B85..0x0B8A, 0x0B8E..0x0B90, 0x0B92..0x0B95, 0x0B99..0x0B9A, 0x0B9C..0x0B9C, 0x0B9E..0x0B9F, 0x0BA3..0x0BA4, 0x0BA8..0x0BAA, 0x0BAE..0x0BB5, 0x0BB7..0x0BB9, 0x0BBE..0x0BBF, 0x0BC1..0x0BC2, 0x0BC6..0x0BC8, 0x0BCA..0x0BCC, 0x0BD7..0x0BD7, 0x0BE7..0x0BF2, 0x0C01..0x0C03, 0x0C05..0x0C0C, 0x0C0E..0x0C10, 0x0C12..0x0C28, 0x0C2A..0x0C33, 0x0C35..0x0C39, 0x0C41..0x0C44, 0x0C60..0x0C61, 0x0C66..0x0C6F, 0x0C82..0x0C83, 0x0C85..0x0C8C, 0x0C8E..0x0C90, 0x0C92..0x0CA8, 0x0CAA..0x0CB3, 
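# (Illustrative aside, not part of the generated table data: tables D.1 and
# D.2 drive the bidi check of RFC 3454 section 6. A string containing any
# RandALCat character (D.1) must not contain any LCat character (D.2), and
# must both begin and end with a RandALCat character. A membership test over
# these sorted range lists can be sketched as
#   has_ral = s.codepoints.any? { |c| D1.any? { |r| r.cover?(c) } }
#   has_l   = s.codepoints.any? { |c| D2.any? { |r| r.cover?(c) } }
#   raise StandardError, 'bidi violation' if has_ral && has_l
# with the remaining first/last-character rule checked the same way.)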
0x0CB5..0x0CB9, 0x0CBE..0x0CBE, 0x0CC0..0x0CC4, 0x0CC7..0x0CC8, 0x0CCA..0x0CCB, 0x0CD5..0x0CD6, 0x0CDE..0x0CDE, 0x0CE0..0x0CE1, 0x0CE6..0x0CEF, 0x0D02..0x0D03, 0x0D05..0x0D0C, 0x0D0E..0x0D10, 0x0D12..0x0D28, 0x0D2A..0x0D39, 0x0D3E..0x0D40, 0x0D46..0x0D48, 0x0D4A..0x0D4C, 0x0D57..0x0D57, 0x0D60..0x0D61, 0x0D66..0x0D6F, 0x0D82..0x0D83, 0x0D85..0x0D96, 0x0D9A..0x0DB1, 0x0DB3..0x0DBB, 0x0DBD..0x0DBD, 0x0DC0..0x0DC6, 0x0DCF..0x0DD1, 0x0DD8..0x0DDF, 0x0DF2..0x0DF4, 0x0E01..0x0E30, 0x0E32..0x0E33, 0x0E40..0x0E46, 0x0E4F..0x0E5B, 0x0E81..0x0E82, 0x0E84..0x0E84, 0x0E87..0x0E88, 0x0E8A..0x0E8A, 0x0E8D..0x0E8D, 0x0E94..0x0E97, 0x0E99..0x0E9F, 0x0EA1..0x0EA3, 0x0EA5..0x0EA5, 0x0EA7..0x0EA7, 0x0EAA..0x0EAB, 0x0EAD..0x0EB0, 0x0EB2..0x0EB3, 0x0EBD..0x0EBD, 0x0EC0..0x0EC4, 0x0EC6..0x0EC6, 0x0ED0..0x0ED9, 0x0EDC..0x0EDD, 0x0F00..0x0F17, 0x0F1A..0x0F34, 0x0F36..0x0F36, 0x0F38..0x0F38, 0x0F3E..0x0F47, 0x0F49..0x0F6A, 0x0F7F..0x0F7F, 0x0F85..0x0F85, 0x0F88..0x0F8B, 0x0FBE..0x0FC5, 0x0FC7..0x0FCC, 0x0FCF..0x0FCF, 0x1000..0x1021, 0x1023..0x1027, 0x1029..0x102A, 0x102C..0x102C, 0x1031..0x1031, 0x1038..0x1038, 0x1040..0x1057, 0x10A0..0x10C5, 0x10D0..0x10F8, 0x10FB..0x10FB, 0x1100..0x1159, 0x115F..0x11A2, 0x11A8..0x11F9, 0x1200..0x1206, 0x1208..0x1246, 0x1248..0x1248, 0x124A..0x124D, 0x1250..0x1256, 0x1258..0x1258, 0x125A..0x125D, 0x1260..0x1286, 0x1288..0x1288, 0x128A..0x128D, 0x1290..0x12AE, 0x12B0..0x12B0, 0x12B2..0x12B5, 0x12B8..0x12BE, 0x12C0..0x12C0, 0x12C2..0x12C5, 0x12C8..0x12CE, 0x12D0..0x12D6, 0x12D8..0x12EE, 0x12F0..0x130E, 0x1310..0x1310, 0x1312..0x1315, 0x1318..0x131E, 0x1320..0x1346, 0x1348..0x135A, 0x1361..0x137C, 0x13A0..0x13F4, 0x1401..0x1676, 0x1681..0x169A, 0x16A0..0x16F0, 0x1700..0x170C, 0x170E..0x1711, 0x1720..0x1731, 0x1735..0x1736, 0x1740..0x1751, 0x1760..0x176C, 0x176E..0x1770, 0x1780..0x17B6, 0x17BE..0x17C5, 0x17C7..0x17C8, 0x17D4..0x17DA, 0x17DC..0x17DC, 0x17E0..0x17E9, 0x1810..0x1819, 0x1820..0x1877, 0x1880..0x18A8, 0x1E00..0x1E9B, 0x1EA0..0x1EF9, 0x1F00..0x1F15, 0x1F18..0x1F1D, 0x1F20..0x1F45, 0x1F48..0x1F4D, 0x1F50..0x1F57, 0x1F59..0x1F59, 0x1F5B..0x1F5B, 0x1F5D..0x1F5D, 0x1F5F..0x1F7D, 0x1F80..0x1FB4, 0x1FB6..0x1FBC, 0x1FBE..0x1FBE, 0x1FC2..0x1FC4, 0x1FC6..0x1FCC, 0x1FD0..0x1FD3, 0x1FD6..0x1FDB, 0x1FE0..0x1FEC, 0x1FF2..0x1FF4, 0x1FF6..0x1FFC, 0x200E..0x200E, 0x2071..0x2071, 0x207F..0x207F, 0x2102..0x2102, 0x2107..0x2107, 0x210A..0x2113, 0x2115..0x2115, 0x2119..0x211D, 0x2124..0x2124, 0x2126..0x2126, 0x2128..0x2128, 0x212A..0x212D, 0x212F..0x2131, 0x2133..0x2139, 0x213D..0x213F, 0x2145..0x2149, 0x2160..0x2183, 0x2336..0x237A, 0x2395..0x2395, 0x249C..0x24E9, 0x3005..0x3007, 0x3021..0x3029, 0x3031..0x3035, 0x3038..0x303C, 0x3041..0x3096, 0x309D..0x309F, 0x30A1..0x30FA, 0x30FC..0x30FF, 0x3105..0x312C, 0x3131..0x318E, 0x3190..0x31B7, 0x31F0..0x321C, 0x3220..0x3243, 0x3260..0x327B, 0x327F..0x32B0, 0x32C0..0x32CB, 0x32D0..0x32FE, 0x3300..0x3376, 0x337B..0x33DD, 0x33E0..0x33FE, 0x3400..0x4DB5, 0x4E00..0x9FA5, 0xA000..0xA48C, 0xAC00..0xD7A3, 0xD800..0xFA2D, 0xFA30..0xFA6A, 0xFB00..0xFB06, 0xFB13..0xFB17, 0xFF21..0xFF3A, 0xFF41..0xFF5A, 0xFF66..0xFFBE, 0xFFC2..0xFFC7, 0xFFCA..0xFFCF, 0xFFD2..0xFFD7, 0xFFDA..0xFFDC, 0x10300..0x1031E, 0x10320..0x10323, 0x10330..0x1034A, 0x10400..0x10425, 0x10428..0x1044D, 0x1D000..0x1D0F5, 0x1D100..0x1D126, 0x1D12A..0x1D166, 0x1D16A..0x1D172, 0x1D183..0x1D184, 0x1D18C..0x1D1A9, 0x1D1AE..0x1D1DD, 0x1D400..0x1D454, 0x1D456..0x1D49C, 0x1D49E..0x1D49F, 0x1D4A2..0x1D4A2, 0x1D4A5..0x1D4A6, 0x1D4A9..0x1D4AC, 0x1D4AE..0x1D4B9, 0x1D4BB..0x1D4BB, 0x1D4BD..0x1D4C0, 
0x1D4C2..0x1D4C3, 0x1D4C5..0x1D505, 0x1D507..0x1D50A, 0x1D50D..0x1D514,
0x1D516..0x1D51C, 0x1D51E..0x1D539, 0x1D53B..0x1D53E, 0x1D540..0x1D544,
0x1D546..0x1D546, 0x1D54A..0x1D550, 0x1D552..0x1D6A3, 0x1D6A8..0x1D7C9,
0x20000..0x2A6D6, 0x2F800..0x2FA1D, 0xF0000..0xFFFFD, 0x100000..0x10FFFD,
].freeze
end
end
end
end
mongo-ruby-driver-2.21.3/lib/mongo/auth/stringprep/unicode_normalize/normalize.rb
# frozen_string_literal: true
# rubocop:todo all

# Copyright Ayumu Nojima (野島 歩) and Martin J. Dürst (duerst@it.aoyama.ac.jp)

# This file, the companion file tables.rb (autogenerated), and the module,
# constants, and method defined herein are part of the implementation of the
# built-in String class, not part of the standard library. They should
# therefore never be gemified. They implement the methods
# String#unicode_normalize, String#unicode_normalize!, and String#unicode_normalized?.
#
# They are placed here because they are written in Ruby. They are loaded on
# demand when any of the three methods mentioned above is executed for the
# first time. This reduces the memory footprint and startup time for scripts
# and applications that do not use those methods.
#
# The name and even the existence of the module UnicodeNormalize and all of its
# content are purely an implementation detail, and should not be exposed in
# any test or spec or otherwise.

require 'mongo/auth/stringprep/unicode_normalize/tables'

# @api private
module UnicodeNormalize # :nodoc:
  ## Constant for max hash capacity to avoid DoS attack
  MAX_HASH_LENGTH = 18000 # enough for all test cases, otherwise tests get slow

  ## Regular Expressions and Hash Constants
  REGEXP_D = Regexp.compile(REGEXP_D_STRING, Regexp::EXTENDED)
  REGEXP_C = Regexp.compile(REGEXP_C_STRING, Regexp::EXTENDED)
  REGEXP_K = Regexp.compile(REGEXP_K_STRING, Regexp::EXTENDED)
  NF_HASH_D = Hash.new do |hash, key|
    hash.shift if hash.length > MAX_HASH_LENGTH # prevent DoS attack
    hash[key] = nfd_one(key)
  end
  NF_HASH_C = Hash.new do |hash, key|
    hash.shift if hash.length > MAX_HASH_LENGTH # prevent DoS attack
    hash[key] = nfc_one(key)
  end

  ## Constants For Hangul
  # for details such as the meaning of the identifiers below, please see
  # http://www.unicode.org/versions/Unicode7.0.0/ch03.pdf, pp. 144/145
  SBASE = 0xAC00
  LBASE = 0x1100
  VBASE = 0x1161
  TBASE = 0x11A7
  LCOUNT = 19
  VCOUNT = 21
  TCOUNT = 28
  NCOUNT = VCOUNT * TCOUNT
  SCOUNT = LCOUNT * NCOUNT

  # Unicode-based encodings (except UTF-8)
  UNICODE_ENCODINGS = [Encoding::UTF_16BE, Encoding::UTF_16LE, Encoding::UTF_32BE,
                       Encoding::UTF_32LE, Encoding::GB18030, Encoding::UCS_2BE,
                       Encoding::UCS_4BE]

  ## Hangul Algorithm
  def self.hangul_decomp_one(target)
    syllable_index = target.ord - SBASE
    return target if syllable_index < 0 || syllable_index >= SCOUNT
    l = LBASE + syllable_index / NCOUNT
    v = VBASE + (syllable_index % NCOUNT) / TCOUNT
    t = TBASE + syllable_index % TCOUNT
    (t == TBASE ? [l, v] : [l, v, t]).pack('U*') + target[1..-1]
  end

  def self.hangul_comp_one(string)
    length = string.length
    if length > 1 and 0 <= (lead = string[0].ord - LBASE) and lead < LCOUNT and
       0 <= (vowel = string[1].ord - VBASE) and vowel < VCOUNT
      lead_vowel = SBASE + (lead * VCOUNT + vowel) * TCOUNT
      if length > 2 and 0 <= (trail = string[2].ord - TBASE) and trail < TCOUNT
        (lead_vowel + trail).chr(Encoding::UTF_8) + string[3..-1]
      else
        lead_vowel.chr(Encoding::UTF_8) + string[2..-1]
      end
    else
      string
    end
  end

  ## Canonical Ordering
  def self.canonical_ordering_one(string)
    sorting = string.each_char.collect { |c| [c, CLASS_TABLE[c]] }
    (sorting.length - 2).downto(0) do |i| # almost, but not exactly bubble sort
      (0..i).each do |j|
        later_class = sorting[j + 1].last
        if 0 < later_class and later_class < sorting[j].last
          sorting[j], sorting[j + 1] = sorting[j + 1], sorting[j]
        end
      end
    end
    sorting.collect(&:first).join('')
  end

mongo-ruby-driver-2.21.3/lib/mongo/auth/stringprep/unicode_normalize/tables.rb
A-PR-Za-pr-z\u00A8\u00C6\u00D8" \
"\u00E6\u00F8\u017F\u01B7\u0292\u0391\u0395\u0397" \
"\u0399\u039F\u03A1\u03A5\u03A9\u03B1\u03B5\u03B7" \
"\u03B9\u03BF\u03C1\u03C5\u03C9\u03D2\u0406\u0410" \
"\u0413\u0415-\u0418\u041A\u041E\u0423\u0427\u042B\u042D" \
"\u0430\u0433\u0435-\u0438\u043A\u043E\u0443\u0447\u044B" \
"\u044D\u0456\u0474\u0475\u04D8\u04D9\u04E8\u04E9\u0627\u0648\u064A" \
"\u06C1\u06D2\u06D5\u0928\u0930\u0933\u09C7\u0B47" \
"\u0B92\u0BC6\u0BC7\u0C46\u0CBF\u0CC6\u0D46\u0D47\u0DD9\u1025" \
"\u1B05\u1B07\u1B09\u1B0B\u1B0D\u1B11\u1B3A\u1B3C" \
"\u1B3E\u1B3F\u1B42\u1FBF\u1FFE\u2190\u2192\u2194\u21D0" \
"\u21D2\u21D4\u2203\u2208\u220B\u2223\u2225\u223C" \
"\u2243\u2245\u2248\u224D\u2261\u2264\u2265\u2272\u2273\u2276\u2277" \
"\u227A-\u227D\u2282\u2283\u2286\u2287\u2291\u2292\u22A2\u22A8\u22A9\u22AB\u22B2-\u22B5" \
"\u3046\u304B\u304D\u304F\u3051\u3053\u3055\u3057" \
"\u3059\u305B\u305D\u305F\u3061\u3064\u3066\u3068" \
"\u306F\u3072\u3075\u3078\u307B\u309D\u30A6\u30AB" \
"\u30AD\u30AF\u30B1\u30B3\u30B5\u30B7\u30B9\u30BB" \
"\u30BD\u30BF\u30C1\u30C4\u30C6\u30C8\u30CF\u30D2" \
"\u30D5\u30D8\u30DB\u30EF-\u30F2\u30FD\u{11099}\u{1109B}\u{110A5}" \
"\u{11131}\u{11132}\u{11347}\u{114B9}\u{115B8}\u{115B9}" \
"]?#{accents}+" \
"|#{'' # precomposed Hangul syllables
}" \
"[\u{AC00}-\u{D7A4}]"

REGEXP_C_STRING = "#{'' # composition exclusions
}" \
"[\u0340\u0341\u0343\u0344\u0374\u037E\u0387\u0958-\u095F\u09DC\u09DD\u09DF" \
"\u0A33\u0A36\u0A59-\u0A5B\u0A5E\u0B5C\u0B5D\u0F43\u0F4D\u0F52" \
"\u0F57\u0F5C\u0F69\u0F73\u0F75\u0F76\u0F78\u0F81\u0F93" \
"\u0F9D\u0FA2\u0FA7\u0FAC\u0FB9\u1F71\u1F73\u1F75" \
"\u1F77\u1F79\u1F7B\u1F7D\u1FBB\u1FBE\u1FC9\u1FCB" \
"\u1FD3\u1FDB\u1FE3\u1FEB\u1FEE\u1FEF\u1FF9\u1FFB\u1FFD" \
"\u2000\u2001\u2126\u212A\u212B\u2329\u232A\u2ADC\uF900-\uFA0D\uFA10\uFA12" \
"\uFA15-\uFA1E\uFA20\uFA22\uFA25\uFA26\uFA2A-\uFA6D\uFA70-\uFAD9\uFB1D\uFB1F" \
"\uFB2A-\uFB36\uFB38-\uFB3C\uFB3E\uFB40\uFB41\uFB43\uFB44\uFB46-\uFB4E\u{1D15E}-\u{1D164}\u{1D1BB}-\u{1D1C0}" \
"\u{2F800}-\u{2FA1D}" \
"]#{accents}*" \
"|#{'' # composition starters and characters that can be the result of a composition
}" \
"[<->A-PR-Za-pr-z\u00A8\u00C0-\u00CF\u00D1-\u00D6" \
"\u00D8-\u00DD\u00E0-\u00EF\u00F1-\u00F6\u00F8-\u00FD\u00FF-\u010F\u0112-\u0125\u0128-\u0130\u0134-\u0137" \
"\u0139-\u013E\u0143-\u0148\u014C-\u0151\u0154-\u0165\u0168-\u017F\u01A0\u01A1\u01AF\u01B0\u01B7" \
"\u01CD-\u01DC\u01DE-\u01E3\u01E6-\u01F0\u01F4\u01F5\u01F8-\u021B\u021E\u021F\u0226-\u0233\u0292" \
"\u0385\u0386\u0388-\u038A\u038C\u038E-\u0391\u0395\u0397\u0399\u039F" \
"\u03A1\u03A5\u03A9-\u03B1\u03B5\u03B7\u03B9\u03BF\u03C1" \
"\u03C5\u03C9-\u03CE\u03D2-\u03D4\u0400\u0401\u0403\u0406\u0407\u040C-\u040E\u0410" \
"\u0413\u0415-\u041A\u041E\u0423\u0427\u042B\u042D\u0430" \
"\u0433\u0435-\u043A\u043E\u0443\u0447\u044B\u044D\u0450\u0451" \ "\u0453\u0456\u0457\u045C-\u045E\u0474-\u0477\u04C1\u04C2\u04D0-\u04D3\u04D6-\u04DF\u04E2-\u04F5" \ "\u04F8\u04F9\u0622-\u0627\u0648\u064A\u06C0-\u06C2\u06D2\u06D3\u06D5\u0928\u0929" \ "\u0930\u0931\u0933\u0934\u09C7\u09CB\u09CC\u0B47\u0B48\u0B4B\u0B4C\u0B92\u0B94" \ "\u0BC6\u0BC7\u0BCA-\u0BCC\u0C46\u0C48\u0CBF\u0CC0\u0CC6-\u0CC8\u0CCA\u0CCB\u0D46\u0D47" \ "\u0D4A-\u0D4C\u0DD9\u0DDA\u0DDC-\u0DDE\u1025\u1026\u1B05-\u1B0E\u1B11\u1B12\u1B3A-\u1B43\u1E00-\u1E99" \ "\u1E9B\u1EA0-\u1EF9\u1F00-\u1F15\u1F18-\u1F1D\u1F20-\u1F45\u1F48-\u1F4D\u1F50-\u1F57\u1F59" \ "\u1F5B\u1F5D\u1F5F-\u1F70\u1F72\u1F74\u1F76\u1F78\u1F7A" \ "\u1F7C\u1F80-\u1FB4\u1FB6-\u1FBA\u1FBC\u1FBF\u1FC1-\u1FC4\u1FC6-\u1FC8\u1FCA" \ "\u1FCC-\u1FD2\u1FD6-\u1FDA\u1FDD-\u1FE2\u1FE4-\u1FEA\u1FEC\u1FED\u1FF2-\u1FF4\u1FF6-\u1FF8\u1FFA" \ "\u1FFC\u1FFE\u2190\u2192\u2194\u219A\u219B\u21AE\u21CD-\u21D0" \ "\u21D2\u21D4\u2203\u2204\u2208\u2209\u220B\u220C\u2223-\u2226\u223C\u2241" \ "\u2243-\u2245\u2247-\u2249\u224D\u2260-\u2262\u2264\u2265\u226D-\u227D\u2280-\u2289\u2291\u2292" \ "\u22A2\u22A8\u22A9\u22AB-\u22AF\u22B2-\u22B5\u22E0-\u22E3\u22EA-\u22ED\u3046\u304B-\u3062" \ "\u3064-\u3069\u306F-\u307D\u3094\u309D\u309E\u30A6\u30AB-\u30C2\u30C4-\u30C9\u30CF-\u30DD" \ "\u30EF-\u30F2\u30F4\u30F7-\u30FA\u30FD\u30FE\u{11099}-\u{1109C}\u{110A5}\u{110AB}\u{1112E}\u{1112F}" \ "\u{11131}\u{11132}\u{11347}\u{1134B}\u{1134C}\u{114B9}\u{114BB}\u{114BC}\u{114BE}\u{115B8}-\u{115BB}" \ "]?#{accents}+" \ "|#{'' # Hangul syllables with separate trailer }" \ "[\uAC00\uAC1C\uAC38\uAC54\uAC70\uAC8C\uACA8\uACC4" \ "\uACE0\uACFC\uAD18\uAD34\uAD50\uAD6C\uAD88\uADA4" \ "\uADC0\uADDC\uADF8\uAE14\uAE30\uAE4C\uAE68\uAE84" \ "\uAEA0\uAEBC\uAED8\uAEF4\uAF10\uAF2C\uAF48\uAF64" \ "\uAF80\uAF9C\uAFB8\uAFD4\uAFF0\uB00C\uB028\uB044" \ "\uB060\uB07C\uB098\uB0B4\uB0D0\uB0EC\uB108\uB124" \ "\uB140\uB15C\uB178\uB194\uB1B0\uB1CC\uB1E8\uB204" \ "\uB220\uB23C\uB258\uB274\uB290\uB2AC\uB2C8\uB2E4" \ "\uB300\uB31C\uB338\uB354\uB370\uB38C\uB3A8\uB3C4" \ "\uB3E0\uB3FC\uB418\uB434\uB450\uB46C\uB488\uB4A4" \ "\uB4C0\uB4DC\uB4F8\uB514\uB530\uB54C\uB568\uB584" \ "\uB5A0\uB5BC\uB5D8\uB5F4\uB610\uB62C\uB648\uB664" \ "\uB680\uB69C\uB6B8\uB6D4\uB6F0\uB70C\uB728\uB744" \ "\uB760\uB77C\uB798\uB7B4\uB7D0\uB7EC\uB808\uB824" \ "\uB840\uB85C\uB878\uB894\uB8B0\uB8CC\uB8E8\uB904" \ "\uB920\uB93C\uB958\uB974\uB990\uB9AC\uB9C8\uB9E4" \ "\uBA00\uBA1C\uBA38\uBA54\uBA70\uBA8C\uBAA8\uBAC4" \ "\uBAE0\uBAFC\uBB18\uBB34\uBB50\uBB6C\uBB88\uBBA4" \ "\uBBC0\uBBDC\uBBF8\uBC14\uBC30\uBC4C\uBC68\uBC84" \ "\uBCA0\uBCBC\uBCD8\uBCF4\uBD10\uBD2C\uBD48\uBD64" \ "\uBD80\uBD9C\uBDB8\uBDD4\uBDF0\uBE0C\uBE28\uBE44" \ "\uBE60\uBE7C\uBE98\uBEB4\uBED0\uBEEC\uBF08\uBF24" \ "\uBF40\uBF5C\uBF78\uBF94\uBFB0\uBFCC\uBFE8\uC004" \ "\uC020\uC03C\uC058\uC074\uC090\uC0AC\uC0C8\uC0E4" \ "\uC100\uC11C\uC138\uC154\uC170\uC18C\uC1A8\uC1C4" \ "\uC1E0\uC1FC\uC218\uC234\uC250\uC26C\uC288\uC2A4" \ "\uC2C0\uC2DC\uC2F8\uC314\uC330\uC34C\uC368\uC384" \ "\uC3A0\uC3BC\uC3D8\uC3F4\uC410\uC42C\uC448\uC464" \ "\uC480\uC49C\uC4B8\uC4D4\uC4F0\uC50C\uC528\uC544" \ "\uC560\uC57C\uC598\uC5B4\uC5D0\uC5EC\uC608\uC624" \ "\uC640\uC65C\uC678\uC694\uC6B0\uC6CC\uC6E8\uC704" \ "\uC720\uC73C\uC758\uC774\uC790\uC7AC\uC7C8\uC7E4" \ "\uC800\uC81C\uC838\uC854\uC870\uC88C\uC8A8\uC8C4" \ "\uC8E0\uC8FC\uC918\uC934\uC950\uC96C\uC988\uC9A4" \ "\uC9C0\uC9DC\uC9F8\uCA14\uCA30\uCA4C\uCA68\uCA84" \ "\uCAA0\uCABC\uCAD8\uCAF4\uCB10\uCB2C\uCB48\uCB64" \ "\uCB80\uCB9C\uCBB8\uCBD4\uCBF0\uCC0C\uCC28\uCC44" \ 
"\uCC60\uCC7C\uCC98\uCCB4\uCCD0\uCCEC\uCD08\uCD24" \ "\uCD40\uCD5C\uCD78\uCD94\uCDB0\uCDCC\uCDE8\uCE04" \ "\uCE20\uCE3C\uCE58\uCE74\uCE90\uCEAC\uCEC8\uCEE4" \ "\uCF00\uCF1C\uCF38\uCF54\uCF70\uCF8C\uCFA8\uCFC4" \ "\uCFE0\uCFFC\uD018\uD034\uD050\uD06C\uD088\uD0A4" \ "\uD0C0\uD0DC\uD0F8\uD114\uD130\uD14C\uD168\uD184" \ "\uD1A0\uD1BC\uD1D8\uD1F4\uD210\uD22C\uD248\uD264" \ "\uD280\uD29C\uD2B8\uD2D4\uD2F0\uD30C\uD328\uD344" \ "\uD360\uD37C\uD398\uD3B4\uD3D0\uD3EC\uD408\uD424" \ "\uD440\uD45C\uD478\uD494\uD4B0\uD4CC\uD4E8\uD504" \ "\uD520\uD53C\uD558\uD574\uD590\uD5AC\uD5C8\uD5E4" \ "\uD600\uD61C\uD638\uD654\uD670\uD68C\uD6A8\uD6C4" \ "\uD6E0\uD6FC\uD718\uD734\uD750\uD76C\uD788" \ "][\u11A8-\u11C2]" \ "|#{'' # decomposed Hangul syllables }" \ "[\u1100-\u1112][\u1161-\u1175][\u11A8-\u11C2]?" REGEXP_K_STRING = "" \ "[\u00A0\u00A8\u00AA\u00AF\u00B2-\u00B5\u00B8-\u00BA\u00BC-\u00BE\u0132\u0133" \ "\u013F\u0140\u0149\u017F\u01C4-\u01CC\u01F1-\u01F3\u02B0-\u02B8\u02D8-\u02DD\u02E0-\u02E4" \ "\u037A\u0384\u0385\u03D0-\u03D6\u03F0-\u03F2\u03F4\u03F5\u03F9\u0587\u0675-\u0678" \ "\u0E33\u0EB3\u0EDC\u0EDD\u0F0C\u0F77\u0F79\u10FC\u1D2C-\u1D2E" \ "\u1D30-\u1D3A\u1D3C-\u1D4D\u1D4F-\u1D6A\u1D78\u1D9B-\u1DBF\u1E9A\u1E9B\u1FBD\u1FBF-\u1FC1" \ "\u1FCD-\u1FCF\u1FDD-\u1FDF\u1FED\u1FEE\u1FFD\u1FFE\u2000-\u200A\u2011\u2017\u2024-\u2026" \ "\u202F\u2033\u2034\u2036\u2037\u203C\u203E\u2047-\u2049\u2057\u205F" \ "\u2070\u2071\u2074-\u208E\u2090-\u209C\u20A8\u2100-\u2103\u2105-\u2107\u2109-\u2113\u2115\u2116" \ "\u2119-\u211D\u2120-\u2122\u2124\u2128\u212C\u212D\u212F-\u2131\u2133-\u2139\u213B-\u2140" \ "\u2145-\u2149\u2150-\u217F\u2189\u222C\u222D\u222F\u2230\u2460-\u24EA\u2A0C\u2A74-\u2A76" \ "\u2C7C\u2C7D\u2D6F\u2E9F\u2EF3\u2F00-\u2FD5\u3000\u3036\u3038-\u303A" \ "\u309B\u309C\u309F\u30FF\u3131-\u318E\u3192-\u319F\u3200-\u321E\u3220-\u3247\u3250-\u327E" \ "\u3280-\u32FE\u3300-\u33FF\uA69C\uA69D\uA770\uA7F8\uA7F9\uAB5C-\uAB5F\uFB00-\uFB06\uFB13-\uFB17" \ "\uFB20-\uFB29\uFB4F-\uFBB1\uFBD3-\uFD3D\uFD50-\uFD8F\uFD92-\uFDC7\uFDF0-\uFDFC\uFE10-\uFE19\uFE30-\uFE44" \ "\uFE47-\uFE52\uFE54-\uFE66\uFE68-\uFE6B\uFE70-\uFE72\uFE74\uFE76-\uFEFC\uFF01-\uFFBE\uFFC2-\uFFC7" \ "\uFFCA-\uFFCF\uFFD2-\uFFD7\uFFDA-\uFFDC\uFFE0-\uFFE6\uFFE8-\uFFEE\u{1D400}-\u{1D454}\u{1D456}-\u{1D49C}\u{1D49E}\u{1D49F}" \ "\u{1D4A2}\u{1D4A5}\u{1D4A6}\u{1D4A9}-\u{1D4AC}\u{1D4AE}-\u{1D4B9}\u{1D4BB}\u{1D4BD}-\u{1D4C3}\u{1D4C5}-\u{1D505}\u{1D507}-\u{1D50A}" \ "\u{1D50D}-\u{1D514}\u{1D516}-\u{1D51C}\u{1D51E}-\u{1D539}\u{1D53B}-\u{1D53E}\u{1D540}-\u{1D544}\u{1D546}\u{1D54A}-\u{1D550}\u{1D552}-\u{1D6A5}" \ "\u{1D6A8}-\u{1D7CB}\u{1D7CE}-\u{1D7FF}\u{1EE00}-\u{1EE03}\u{1EE05}-\u{1EE1F}\u{1EE21}\u{1EE22}\u{1EE24}\u{1EE27}\u{1EE29}-\u{1EE32}" \ "\u{1EE34}-\u{1EE37}\u{1EE39}\u{1EE3B}\u{1EE42}\u{1EE47}\u{1EE49}\u{1EE4B}\u{1EE4D}-\u{1EE4F}" \ "\u{1EE51}\u{1EE52}\u{1EE54}\u{1EE57}\u{1EE59}\u{1EE5B}\u{1EE5D}\u{1EE5F}\u{1EE61}\u{1EE62}" \ "\u{1EE64}\u{1EE67}-\u{1EE6A}\u{1EE6C}-\u{1EE72}\u{1EE74}-\u{1EE77}\u{1EE79}-\u{1EE7C}\u{1EE7E}\u{1EE80}-\u{1EE89}\u{1EE8B}-\u{1EE9B}" \ "\u{1EEA1}-\u{1EEA3}\u{1EEA5}-\u{1EEA9}\u{1EEAB}-\u{1EEBB}\u{1F100}-\u{1F10A}\u{1F110}-\u{1F12E}\u{1F130}-\u{1F14F}\u{1F16A}\u{1F16B}\u{1F190}" \ "\u{1F200}-\u{1F202}\u{1F210}-\u{1F23B}\u{1F240}-\u{1F248}\u{1F250}\u{1F251}" \ "]" class_table = { "\u0300"=>230, "\u0301"=>230, "\u0302"=>230, "\u0303"=>230, "\u0304"=>230, "\u0305"=>230, "\u0306"=>230, "\u0307"=>230, "\u0308"=>230, "\u0309"=>230, "\u030A"=>230, "\u030B"=>230, "\u030C"=>230, "\u030D"=>230, "\u030E"=>230, "\u030F"=>230, "\u0310"=>230, 
"\u0311"=>230, "\u0312"=>230, "\u0313"=>230, "\u0314"=>230, "\u0315"=>232, "\u0316"=>220, "\u0317"=>220, "\u0318"=>220, "\u0319"=>220, "\u031A"=>232, "\u031B"=>216, "\u031C"=>220, "\u031D"=>220, "\u031E"=>220, "\u031F"=>220, "\u0320"=>220, "\u0321"=>202, "\u0322"=>202, "\u0323"=>220, "\u0324"=>220, "\u0325"=>220, "\u0326"=>220, "\u0327"=>202, "\u0328"=>202, "\u0329"=>220, "\u032A"=>220, "\u032B"=>220, "\u032C"=>220, "\u032D"=>220, "\u032E"=>220, "\u032F"=>220, "\u0330"=>220, "\u0331"=>220, "\u0332"=>220, "\u0333"=>220, "\u0334"=>1, "\u0335"=>1, "\u0336"=>1, "\u0337"=>1, "\u0338"=>1, "\u0339"=>220, "\u033A"=>220, "\u033B"=>220, "\u033C"=>220, "\u033D"=>230, "\u033E"=>230, "\u033F"=>230, "\u0340"=>230, "\u0341"=>230, "\u0342"=>230, "\u0343"=>230, "\u0344"=>230, "\u0345"=>240, "\u0346"=>230, "\u0347"=>220, "\u0348"=>220, "\u0349"=>220, "\u034A"=>230, "\u034B"=>230, "\u034C"=>230, "\u034D"=>220, "\u034E"=>220, "\u0350"=>230, "\u0351"=>230, "\u0352"=>230, "\u0353"=>220, "\u0354"=>220, "\u0355"=>220, "\u0356"=>220, "\u0357"=>230, "\u0358"=>232, "\u0359"=>220, "\u035A"=>220, "\u035B"=>230, "\u035C"=>233, "\u035D"=>234, "\u035E"=>234, "\u035F"=>233, "\u0360"=>234, "\u0361"=>234, "\u0362"=>233, "\u0363"=>230, "\u0364"=>230, "\u0365"=>230, "\u0366"=>230, "\u0367"=>230, "\u0368"=>230, "\u0369"=>230, "\u036A"=>230, "\u036B"=>230, "\u036C"=>230, "\u036D"=>230, "\u036E"=>230, "\u036F"=>230, "\u0483"=>230, "\u0484"=>230, "\u0485"=>230, "\u0486"=>230, "\u0487"=>230, "\u0591"=>220, "\u0592"=>230, "\u0593"=>230, "\u0594"=>230, "\u0595"=>230, "\u0596"=>220, "\u0597"=>230, "\u0598"=>230, "\u0599"=>230, "\u059A"=>222, "\u059B"=>220, "\u059C"=>230, "\u059D"=>230, "\u059E"=>230, "\u059F"=>230, "\u05A0"=>230, "\u05A1"=>230, "\u05A2"=>220, "\u05A3"=>220, "\u05A4"=>220, "\u05A5"=>220, "\u05A6"=>220, "\u05A7"=>220, "\u05A8"=>230, "\u05A9"=>230, "\u05AA"=>220, "\u05AB"=>230, "\u05AC"=>230, "\u05AD"=>222, "\u05AE"=>228, "\u05AF"=>230, "\u05B0"=>10, "\u05B1"=>11, "\u05B2"=>12, "\u05B3"=>13, "\u05B4"=>14, "\u05B5"=>15, "\u05B6"=>16, "\u05B7"=>17, "\u05B8"=>18, "\u05B9"=>19, "\u05BA"=>19, "\u05BB"=>20, "\u05BC"=>21, "\u05BD"=>22, "\u05BF"=>23, "\u05C1"=>24, "\u05C2"=>25, "\u05C4"=>230, "\u05C5"=>220, "\u05C7"=>18, "\u0610"=>230, "\u0611"=>230, "\u0612"=>230, "\u0613"=>230, "\u0614"=>230, "\u0615"=>230, "\u0616"=>230, "\u0617"=>230, "\u0618"=>30, "\u0619"=>31, "\u061A"=>32, "\u064B"=>27, "\u064C"=>28, "\u064D"=>29, "\u064E"=>30, "\u064F"=>31, "\u0650"=>32, "\u0651"=>33, "\u0652"=>34, "\u0653"=>230, "\u0654"=>230, "\u0655"=>220, "\u0656"=>220, "\u0657"=>230, "\u0658"=>230, "\u0659"=>230, "\u065A"=>230, "\u065B"=>230, "\u065C"=>220, "\u065D"=>230, "\u065E"=>230, "\u065F"=>220, "\u0670"=>35, "\u06D6"=>230, "\u06D7"=>230, "\u06D8"=>230, "\u06D9"=>230, "\u06DA"=>230, "\u06DB"=>230, "\u06DC"=>230, "\u06DF"=>230, "\u06E0"=>230, "\u06E1"=>230, "\u06E2"=>230, "\u06E3"=>220, "\u06E4"=>230, "\u06E7"=>230, "\u06E8"=>230, "\u06EA"=>220, "\u06EB"=>230, "\u06EC"=>230, "\u06ED"=>220, "\u0711"=>36, "\u0730"=>230, "\u0731"=>220, "\u0732"=>230, "\u0733"=>230, "\u0734"=>220, "\u0735"=>230, "\u0736"=>230, "\u0737"=>220, "\u0738"=>220, "\u0739"=>220, "\u073A"=>230, "\u073B"=>220, "\u073C"=>220, "\u073D"=>230, "\u073E"=>220, "\u073F"=>230, "\u0740"=>230, "\u0741"=>230, "\u0742"=>220, "\u0743"=>230, "\u0744"=>220, "\u0745"=>230, "\u0746"=>220, "\u0747"=>230, "\u0748"=>220, "\u0749"=>230, "\u074A"=>230, "\u07EB"=>230, "\u07EC"=>230, "\u07ED"=>230, "\u07EE"=>230, "\u07EF"=>230, "\u07F0"=>230, "\u07F1"=>230, "\u07F2"=>220, "\u07F3"=>230, 
"\u0816"=>230, "\u0817"=>230, "\u0818"=>230, "\u0819"=>230, "\u081B"=>230, "\u081C"=>230, "\u081D"=>230, "\u081E"=>230, "\u081F"=>230, "\u0820"=>230, "\u0821"=>230, "\u0822"=>230, "\u0823"=>230, "\u0825"=>230, "\u0826"=>230, "\u0827"=>230, "\u0829"=>230, "\u082A"=>230, "\u082B"=>230, "\u082C"=>230, "\u082D"=>230, "\u0859"=>220, "\u085A"=>220, "\u085B"=>220, "\u08D4"=>230, "\u08D5"=>230, "\u08D6"=>230, "\u08D7"=>230, "\u08D8"=>230, "\u08D9"=>230, "\u08DA"=>230, "\u08DB"=>230, "\u08DC"=>230, "\u08DD"=>230, "\u08DE"=>230, "\u08DF"=>230, "\u08E0"=>230, "\u08E1"=>230, "\u08E3"=>220, "\u08E4"=>230, "\u08E5"=>230, "\u08E6"=>220, "\u08E7"=>230, "\u08E8"=>230, "\u08E9"=>220, "\u08EA"=>230, "\u08EB"=>230, "\u08EC"=>230, "\u08ED"=>220, "\u08EE"=>220, "\u08EF"=>220, "\u08F0"=>27, "\u08F1"=>28, "\u08F2"=>29, "\u08F3"=>230, "\u08F4"=>230, "\u08F5"=>230, "\u08F6"=>220, "\u08F7"=>230, "\u08F8"=>230, "\u08F9"=>220, "\u08FA"=>220, "\u08FB"=>230, "\u08FC"=>230, "\u08FD"=>230, "\u08FE"=>230, "\u08FF"=>230, "\u093C"=>7, "\u094D"=>9, "\u0951"=>230, "\u0952"=>220, "\u0953"=>230, "\u0954"=>230, "\u09BC"=>7, "\u09CD"=>9, "\u0A3C"=>7, "\u0A4D"=>9, "\u0ABC"=>7, "\u0ACD"=>9, "\u0B3C"=>7, "\u0B4D"=>9, "\u0BCD"=>9, "\u0C4D"=>9, "\u0C55"=>84, "\u0C56"=>91, "\u0CBC"=>7, "\u0CCD"=>9, "\u0D3B"=>9, "\u0D3C"=>9, "\u0D4D"=>9, "\u0DCA"=>9, "\u0E38"=>103, "\u0E39"=>103, "\u0E3A"=>9, "\u0E48"=>107, "\u0E49"=>107, "\u0E4A"=>107, "\u0E4B"=>107, "\u0EB8"=>118, "\u0EB9"=>118, "\u0EC8"=>122, "\u0EC9"=>122, "\u0ECA"=>122, "\u0ECB"=>122, "\u0F18"=>220, "\u0F19"=>220, "\u0F35"=>220, "\u0F37"=>220, "\u0F39"=>216, "\u0F71"=>129, "\u0F72"=>130, "\u0F74"=>132, "\u0F7A"=>130, "\u0F7B"=>130, "\u0F7C"=>130, "\u0F7D"=>130, "\u0F80"=>130, "\u0F82"=>230, "\u0F83"=>230, "\u0F84"=>9, "\u0F86"=>230, "\u0F87"=>230, "\u0FC6"=>220, "\u1037"=>7, "\u1039"=>9, "\u103A"=>9, "\u108D"=>220, "\u135D"=>230, "\u135E"=>230, "\u135F"=>230, "\u1714"=>9, "\u1734"=>9, "\u17D2"=>9, "\u17DD"=>230, "\u18A9"=>228, "\u1939"=>222, "\u193A"=>230, "\u193B"=>220, "\u1A17"=>230, "\u1A18"=>220, "\u1A60"=>9, "\u1A75"=>230, "\u1A76"=>230, "\u1A77"=>230, "\u1A78"=>230, "\u1A79"=>230, "\u1A7A"=>230, "\u1A7B"=>230, "\u1A7C"=>230, "\u1A7F"=>220, "\u1AB0"=>230, "\u1AB1"=>230, "\u1AB2"=>230, "\u1AB3"=>230, "\u1AB4"=>230, "\u1AB5"=>220, "\u1AB6"=>220, "\u1AB7"=>220, "\u1AB8"=>220, "\u1AB9"=>220, "\u1ABA"=>220, "\u1ABB"=>230, "\u1ABC"=>230, "\u1ABD"=>220, "\u1B34"=>7, "\u1B44"=>9, "\u1B6B"=>230, "\u1B6C"=>220, "\u1B6D"=>230, "\u1B6E"=>230, "\u1B6F"=>230, "\u1B70"=>230, "\u1B71"=>230, "\u1B72"=>230, "\u1B73"=>230, "\u1BAA"=>9, "\u1BAB"=>9, "\u1BE6"=>7, "\u1BF2"=>9, "\u1BF3"=>9, "\u1C37"=>7, "\u1CD0"=>230, "\u1CD1"=>230, "\u1CD2"=>230, "\u1CD4"=>1, "\u1CD5"=>220, "\u1CD6"=>220, "\u1CD7"=>220, "\u1CD8"=>220, "\u1CD9"=>220, "\u1CDA"=>230, "\u1CDB"=>230, "\u1CDC"=>220, "\u1CDD"=>220, "\u1CDE"=>220, "\u1CDF"=>220, "\u1CE0"=>230, "\u1CE2"=>1, "\u1CE3"=>1, "\u1CE4"=>1, "\u1CE5"=>1, "\u1CE6"=>1, "\u1CE7"=>1, "\u1CE8"=>1, "\u1CED"=>220, "\u1CF4"=>230, "\u1CF8"=>230, "\u1CF9"=>230, "\u1DC0"=>230, "\u1DC1"=>230, "\u1DC2"=>220, "\u1DC3"=>230, "\u1DC4"=>230, "\u1DC5"=>230, "\u1DC6"=>230, "\u1DC7"=>230, "\u1DC8"=>230, "\u1DC9"=>230, "\u1DCA"=>220, "\u1DCB"=>230, "\u1DCC"=>230, "\u1DCD"=>234, "\u1DCE"=>214, "\u1DCF"=>220, "\u1DD0"=>202, "\u1DD1"=>230, "\u1DD2"=>230, "\u1DD3"=>230, "\u1DD4"=>230, "\u1DD5"=>230, "\u1DD6"=>230, "\u1DD7"=>230, "\u1DD8"=>230, "\u1DD9"=>230, "\u1DDA"=>230, "\u1DDB"=>230, "\u1DDC"=>230, "\u1DDD"=>230, "\u1DDE"=>230, "\u1DDF"=>230, "\u1DE0"=>230, "\u1DE1"=>230, "\u1DE2"=>230, 
"\u1DE3"=>230, "\u1DE4"=>230, "\u1DE5"=>230, "\u1DE6"=>230, "\u1DE7"=>230, "\u1DE8"=>230, "\u1DE9"=>230, "\u1DEA"=>230, "\u1DEB"=>230, "\u1DEC"=>230, "\u1DED"=>230, "\u1DEE"=>230, "\u1DEF"=>230, "\u1DF0"=>230, "\u1DF1"=>230, "\u1DF2"=>230, "\u1DF3"=>230, "\u1DF4"=>230, "\u1DF5"=>230, "\u1DF6"=>232, "\u1DF7"=>228, "\u1DF8"=>228, "\u1DF9"=>220, "\u1DFB"=>230, "\u1DFC"=>233, "\u1DFD"=>220, "\u1DFE"=>230, "\u1DFF"=>220, "\u20D0"=>230, "\u20D1"=>230, "\u20D2"=>1, "\u20D3"=>1, "\u20D4"=>230, "\u20D5"=>230, "\u20D6"=>230, "\u20D7"=>230, "\u20D8"=>1, "\u20D9"=>1, "\u20DA"=>1, "\u20DB"=>230, "\u20DC"=>230, "\u20E1"=>230, "\u20E5"=>1, "\u20E6"=>1, "\u20E7"=>230, "\u20E8"=>220, "\u20E9"=>230, "\u20EA"=>1, "\u20EB"=>1, "\u20EC"=>220, "\u20ED"=>220, "\u20EE"=>220, "\u20EF"=>220, "\u20F0"=>230, "\u2CEF"=>230, "\u2CF0"=>230, "\u2CF1"=>230, "\u2D7F"=>9, "\u2DE0"=>230, "\u2DE1"=>230, "\u2DE2"=>230, "\u2DE3"=>230, "\u2DE4"=>230, "\u2DE5"=>230, "\u2DE6"=>230, "\u2DE7"=>230, "\u2DE8"=>230, "\u2DE9"=>230, "\u2DEA"=>230, "\u2DEB"=>230, "\u2DEC"=>230, "\u2DED"=>230, "\u2DEE"=>230, "\u2DEF"=>230, "\u2DF0"=>230, "\u2DF1"=>230, "\u2DF2"=>230, "\u2DF3"=>230, "\u2DF4"=>230, "\u2DF5"=>230, "\u2DF6"=>230, "\u2DF7"=>230, "\u2DF8"=>230, "\u2DF9"=>230, "\u2DFA"=>230, "\u2DFB"=>230, "\u2DFC"=>230, "\u2DFD"=>230, "\u2DFE"=>230, "\u2DFF"=>230, "\u302A"=>218, "\u302B"=>228, "\u302C"=>232, "\u302D"=>222, "\u302E"=>224, "\u302F"=>224, "\u3099"=>8, "\u309A"=>8, "\uA66F"=>230, "\uA674"=>230, "\uA675"=>230, "\uA676"=>230, "\uA677"=>230, "\uA678"=>230, "\uA679"=>230, "\uA67A"=>230, "\uA67B"=>230, "\uA67C"=>230, "\uA67D"=>230, "\uA69E"=>230, "\uA69F"=>230, "\uA6F0"=>230, "\uA6F1"=>230, "\uA806"=>9, "\uA8C4"=>9, "\uA8E0"=>230, "\uA8E1"=>230, "\uA8E2"=>230, "\uA8E3"=>230, "\uA8E4"=>230, "\uA8E5"=>230, "\uA8E6"=>230, "\uA8E7"=>230, "\uA8E8"=>230, "\uA8E9"=>230, "\uA8EA"=>230, "\uA8EB"=>230, "\uA8EC"=>230, "\uA8ED"=>230, "\uA8EE"=>230, "\uA8EF"=>230, "\uA8F0"=>230, "\uA8F1"=>230, "\uA92B"=>220, "\uA92C"=>220, "\uA92D"=>220, "\uA953"=>9, "\uA9B3"=>7, "\uA9C0"=>9, "\uAAB0"=>230, "\uAAB2"=>230, "\uAAB3"=>230, "\uAAB4"=>220, "\uAAB7"=>230, "\uAAB8"=>230, "\uAABE"=>230, "\uAABF"=>230, "\uAAC1"=>230, "\uAAF6"=>9, "\uABED"=>9, "\uFB1E"=>26, "\uFE20"=>230, "\uFE21"=>230, "\uFE22"=>230, "\uFE23"=>230, "\uFE24"=>230, "\uFE25"=>230, "\uFE26"=>230, "\uFE27"=>220, "\uFE28"=>220, "\uFE29"=>220, "\uFE2A"=>220, "\uFE2B"=>220, "\uFE2C"=>220, "\uFE2D"=>220, "\uFE2E"=>230, "\uFE2F"=>230, "\u{101FD}"=>220, "\u{102E0}"=>220, "\u{10376}"=>230, "\u{10377}"=>230, "\u{10378}"=>230, "\u{10379}"=>230, "\u{1037A}"=>230, "\u{10A0D}"=>220, "\u{10A0F}"=>230, "\u{10A38}"=>230, "\u{10A39}"=>1, "\u{10A3A}"=>220, "\u{10A3F}"=>9, "\u{10AE5}"=>230, "\u{10AE6}"=>220, "\u{11046}"=>9, "\u{1107F}"=>9, "\u{110B9}"=>9, "\u{110BA}"=>7, "\u{11100}"=>230, "\u{11101}"=>230, "\u{11102}"=>230, "\u{11133}"=>9, "\u{11134}"=>9, "\u{11173}"=>7, "\u{111C0}"=>9, "\u{111CA}"=>7, "\u{11235}"=>9, "\u{11236}"=>7, "\u{112E9}"=>7, "\u{112EA}"=>9, "\u{1133C}"=>7, "\u{1134D}"=>9, "\u{11366}"=>230, "\u{11367}"=>230, "\u{11368}"=>230, "\u{11369}"=>230, "\u{1136A}"=>230, "\u{1136B}"=>230, "\u{1136C}"=>230, "\u{11370}"=>230, "\u{11371}"=>230, "\u{11372}"=>230, "\u{11373}"=>230, "\u{11374}"=>230, "\u{11442}"=>9, "\u{11446}"=>7, "\u{114C2}"=>9, "\u{114C3}"=>7, "\u{115BF}"=>9, "\u{115C0}"=>7, "\u{1163F}"=>9, "\u{116B6}"=>9, "\u{116B7}"=>7, "\u{1172B}"=>9, "\u{11A34}"=>9, "\u{11A47}"=>9, "\u{11A99}"=>9, "\u{11C3F}"=>9, "\u{11D42}"=>7, "\u{11D44}"=>9, "\u{11D45}"=>9, "\u{16AF0}"=>1, "\u{16AF1}"=>1, 
"\u{16AF2}"=>1, "\u{16AF3}"=>1, "\u{16AF4}"=>1, "\u{16B30}"=>230, "\u{16B31}"=>230, "\u{16B32}"=>230, "\u{16B33}"=>230, "\u{16B34}"=>230, "\u{16B35}"=>230, "\u{16B36}"=>230, "\u{1BC9E}"=>1, "\u{1D165}"=>216, "\u{1D166}"=>216, "\u{1D167}"=>1, "\u{1D168}"=>1, "\u{1D169}"=>1, "\u{1D16D}"=>226, "\u{1D16E}"=>216, "\u{1D16F}"=>216, "\u{1D170}"=>216, "\u{1D171}"=>216, "\u{1D172}"=>216, "\u{1D17B}"=>220, "\u{1D17C}"=>220, "\u{1D17D}"=>220, "\u{1D17E}"=>220, "\u{1D17F}"=>220, "\u{1D180}"=>220, "\u{1D181}"=>220, "\u{1D182}"=>220, "\u{1D185}"=>230, "\u{1D186}"=>230, "\u{1D187}"=>230, "\u{1D188}"=>230, "\u{1D189}"=>230, "\u{1D18A}"=>220, "\u{1D18B}"=>220, "\u{1D1AA}"=>230, "\u{1D1AB}"=>230, "\u{1D1AC}"=>230, "\u{1D1AD}"=>230, "\u{1D242}"=>230, "\u{1D243}"=>230, "\u{1D244}"=>230, "\u{1E000}"=>230, "\u{1E001}"=>230, "\u{1E002}"=>230, "\u{1E003}"=>230, "\u{1E004}"=>230, "\u{1E005}"=>230, "\u{1E006}"=>230, "\u{1E008}"=>230, "\u{1E009}"=>230, "\u{1E00A}"=>230, "\u{1E00B}"=>230, "\u{1E00C}"=>230, "\u{1E00D}"=>230, "\u{1E00E}"=>230, "\u{1E00F}"=>230, "\u{1E010}"=>230, "\u{1E011}"=>230, "\u{1E012}"=>230, "\u{1E013}"=>230, "\u{1E014}"=>230, "\u{1E015}"=>230, "\u{1E016}"=>230, "\u{1E017}"=>230, "\u{1E018}"=>230, "\u{1E01B}"=>230, "\u{1E01C}"=>230, "\u{1E01D}"=>230, "\u{1E01E}"=>230, "\u{1E01F}"=>230, "\u{1E020}"=>230, "\u{1E021}"=>230, "\u{1E023}"=>230, "\u{1E024}"=>230, "\u{1E026}"=>230, "\u{1E027}"=>230, "\u{1E028}"=>230, "\u{1E029}"=>230, "\u{1E02A}"=>230, "\u{1E8D0}"=>220, "\u{1E8D1}"=>220, "\u{1E8D2}"=>220, "\u{1E8D3}"=>220, "\u{1E8D4}"=>220, "\u{1E8D5}"=>220, "\u{1E8D6}"=>220, "\u{1E944}"=>230, "\u{1E945}"=>230, "\u{1E946}"=>230, "\u{1E947}"=>230, "\u{1E948}"=>230, "\u{1E949}"=>230, "\u{1E94A}"=>7, } class_table.default = 0 CLASS_TABLE = class_table.freeze DECOMPOSITION_TABLE = { "\u00C0"=>"A\u0300", "\u00C1"=>"A\u0301", "\u00C2"=>"A\u0302", "\u00C3"=>"A\u0303", "\u00C4"=>"A\u0308", "\u00C5"=>"A\u030A", "\u00C7"=>"C\u0327", "\u00C8"=>"E\u0300", "\u00C9"=>"E\u0301", "\u00CA"=>"E\u0302", "\u00CB"=>"E\u0308", "\u00CC"=>"I\u0300", "\u00CD"=>"I\u0301", "\u00CE"=>"I\u0302", "\u00CF"=>"I\u0308", "\u00D1"=>"N\u0303", "\u00D2"=>"O\u0300", "\u00D3"=>"O\u0301", "\u00D4"=>"O\u0302", "\u00D5"=>"O\u0303", "\u00D6"=>"O\u0308", "\u00D9"=>"U\u0300", "\u00DA"=>"U\u0301", "\u00DB"=>"U\u0302", "\u00DC"=>"U\u0308", "\u00DD"=>"Y\u0301", "\u00E0"=>"a\u0300", "\u00E1"=>"a\u0301", "\u00E2"=>"a\u0302", "\u00E3"=>"a\u0303", "\u00E4"=>"a\u0308", "\u00E5"=>"a\u030A", "\u00E7"=>"c\u0327", "\u00E8"=>"e\u0300", "\u00E9"=>"e\u0301", "\u00EA"=>"e\u0302", "\u00EB"=>"e\u0308", "\u00EC"=>"i\u0300", "\u00ED"=>"i\u0301", "\u00EE"=>"i\u0302", "\u00EF"=>"i\u0308", "\u00F1"=>"n\u0303", "\u00F2"=>"o\u0300", "\u00F3"=>"o\u0301", "\u00F4"=>"o\u0302", "\u00F5"=>"o\u0303", "\u00F6"=>"o\u0308", "\u00F9"=>"u\u0300", "\u00FA"=>"u\u0301", "\u00FB"=>"u\u0302", "\u00FC"=>"u\u0308", "\u00FD"=>"y\u0301", "\u00FF"=>"y\u0308", "\u0100"=>"A\u0304", "\u0101"=>"a\u0304", "\u0102"=>"A\u0306", "\u0103"=>"a\u0306", "\u0104"=>"A\u0328", "\u0105"=>"a\u0328", "\u0106"=>"C\u0301", "\u0107"=>"c\u0301", "\u0108"=>"C\u0302", "\u0109"=>"c\u0302", "\u010A"=>"C\u0307", "\u010B"=>"c\u0307", "\u010C"=>"C\u030C", "\u010D"=>"c\u030C", "\u010E"=>"D\u030C", "\u010F"=>"d\u030C", "\u0112"=>"E\u0304", "\u0113"=>"e\u0304", "\u0114"=>"E\u0306", "\u0115"=>"e\u0306", "\u0116"=>"E\u0307", "\u0117"=>"e\u0307", "\u0118"=>"E\u0328", "\u0119"=>"e\u0328", "\u011A"=>"E\u030C", "\u011B"=>"e\u030C", "\u011C"=>"G\u0302", "\u011D"=>"g\u0302", "\u011E"=>"G\u0306", "\u011F"=>"g\u0306", 
"\u0120"=>"G\u0307", "\u0121"=>"g\u0307", "\u0122"=>"G\u0327", "\u0123"=>"g\u0327", "\u0124"=>"H\u0302", "\u0125"=>"h\u0302", "\u0128"=>"I\u0303", "\u0129"=>"i\u0303", "\u012A"=>"I\u0304", "\u012B"=>"i\u0304", "\u012C"=>"I\u0306", "\u012D"=>"i\u0306", "\u012E"=>"I\u0328", "\u012F"=>"i\u0328", "\u0130"=>"I\u0307", "\u0134"=>"J\u0302", "\u0135"=>"j\u0302", "\u0136"=>"K\u0327", "\u0137"=>"k\u0327", "\u0139"=>"L\u0301", "\u013A"=>"l\u0301", "\u013B"=>"L\u0327", "\u013C"=>"l\u0327", "\u013D"=>"L\u030C", "\u013E"=>"l\u030C", "\u0143"=>"N\u0301", "\u0144"=>"n\u0301", "\u0145"=>"N\u0327", "\u0146"=>"n\u0327", "\u0147"=>"N\u030C", "\u0148"=>"n\u030C", "\u014C"=>"O\u0304", "\u014D"=>"o\u0304", "\u014E"=>"O\u0306", "\u014F"=>"o\u0306", "\u0150"=>"O\u030B", "\u0151"=>"o\u030B", "\u0154"=>"R\u0301", "\u0155"=>"r\u0301", "\u0156"=>"R\u0327", "\u0157"=>"r\u0327", "\u0158"=>"R\u030C", "\u0159"=>"r\u030C", "\u015A"=>"S\u0301", "\u015B"=>"s\u0301", "\u015C"=>"S\u0302", "\u015D"=>"s\u0302", "\u015E"=>"S\u0327", "\u015F"=>"s\u0327", "\u0160"=>"S\u030C", "\u0161"=>"s\u030C", "\u0162"=>"T\u0327", "\u0163"=>"t\u0327", "\u0164"=>"T\u030C", "\u0165"=>"t\u030C", "\u0168"=>"U\u0303", "\u0169"=>"u\u0303", "\u016A"=>"U\u0304", "\u016B"=>"u\u0304", "\u016C"=>"U\u0306", "\u016D"=>"u\u0306", "\u016E"=>"U\u030A", "\u016F"=>"u\u030A", "\u0170"=>"U\u030B", "\u0171"=>"u\u030B", "\u0172"=>"U\u0328", "\u0173"=>"u\u0328", "\u0174"=>"W\u0302", "\u0175"=>"w\u0302", "\u0176"=>"Y\u0302", "\u0177"=>"y\u0302", "\u0178"=>"Y\u0308", "\u0179"=>"Z\u0301", "\u017A"=>"z\u0301", "\u017B"=>"Z\u0307", "\u017C"=>"z\u0307", "\u017D"=>"Z\u030C", "\u017E"=>"z\u030C", "\u01A0"=>"O\u031B", "\u01A1"=>"o\u031B", "\u01AF"=>"U\u031B", "\u01B0"=>"u\u031B", "\u01CD"=>"A\u030C", "\u01CE"=>"a\u030C", "\u01CF"=>"I\u030C", "\u01D0"=>"i\u030C", "\u01D1"=>"O\u030C", "\u01D2"=>"o\u030C", "\u01D3"=>"U\u030C", "\u01D4"=>"u\u030C", "\u01D5"=>"U\u0308\u0304", "\u01D6"=>"u\u0308\u0304", "\u01D7"=>"U\u0308\u0301", "\u01D8"=>"u\u0308\u0301", "\u01D9"=>"U\u0308\u030C", "\u01DA"=>"u\u0308\u030C", "\u01DB"=>"U\u0308\u0300", "\u01DC"=>"u\u0308\u0300", "\u01DE"=>"A\u0308\u0304", "\u01DF"=>"a\u0308\u0304", "\u01E0"=>"A\u0307\u0304", "\u01E1"=>"a\u0307\u0304", "\u01E2"=>"\u00C6\u0304", "\u01E3"=>"\u00E6\u0304", "\u01E6"=>"G\u030C", "\u01E7"=>"g\u030C", "\u01E8"=>"K\u030C", "\u01E9"=>"k\u030C", "\u01EA"=>"O\u0328", "\u01EB"=>"o\u0328", "\u01EC"=>"O\u0328\u0304", "\u01ED"=>"o\u0328\u0304", "\u01EE"=>"\u01B7\u030C", "\u01EF"=>"\u0292\u030C", "\u01F0"=>"j\u030C", "\u01F4"=>"G\u0301", "\u01F5"=>"g\u0301", "\u01F8"=>"N\u0300", "\u01F9"=>"n\u0300", "\u01FA"=>"A\u030A\u0301", "\u01FB"=>"a\u030A\u0301", "\u01FC"=>"\u00C6\u0301", "\u01FD"=>"\u00E6\u0301", "\u01FE"=>"\u00D8\u0301", "\u01FF"=>"\u00F8\u0301", "\u0200"=>"A\u030F", "\u0201"=>"a\u030F", "\u0202"=>"A\u0311", "\u0203"=>"a\u0311", "\u0204"=>"E\u030F", "\u0205"=>"e\u030F", "\u0206"=>"E\u0311", "\u0207"=>"e\u0311", "\u0208"=>"I\u030F", "\u0209"=>"i\u030F", "\u020A"=>"I\u0311", "\u020B"=>"i\u0311", "\u020C"=>"O\u030F", "\u020D"=>"o\u030F", "\u020E"=>"O\u0311", "\u020F"=>"o\u0311", "\u0210"=>"R\u030F", "\u0211"=>"r\u030F", "\u0212"=>"R\u0311", "\u0213"=>"r\u0311", "\u0214"=>"U\u030F", "\u0215"=>"u\u030F", "\u0216"=>"U\u0311", "\u0217"=>"u\u0311", "\u0218"=>"S\u0326", "\u0219"=>"s\u0326", "\u021A"=>"T\u0326", "\u021B"=>"t\u0326", "\u021E"=>"H\u030C", "\u021F"=>"h\u030C", "\u0226"=>"A\u0307", "\u0227"=>"a\u0307", "\u0228"=>"E\u0327", "\u0229"=>"e\u0327", "\u022A"=>"O\u0308\u0304", "\u022B"=>"o\u0308\u0304", 
"\u022C"=>"O\u0303\u0304", "\u022D"=>"o\u0303\u0304", "\u022E"=>"O\u0307", "\u022F"=>"o\u0307", "\u0230"=>"O\u0307\u0304", "\u0231"=>"o\u0307\u0304", "\u0232"=>"Y\u0304", "\u0233"=>"y\u0304", "\u0340"=>"\u0300", "\u0341"=>"\u0301", "\u0343"=>"\u0313", "\u0344"=>"\u0308\u0301", "\u0374"=>"\u02B9", "\u037E"=>";", "\u0385"=>"\u00A8\u0301", "\u0386"=>"\u0391\u0301", "\u0387"=>"\u00B7", "\u0388"=>"\u0395\u0301", "\u0389"=>"\u0397\u0301", "\u038A"=>"\u0399\u0301", "\u038C"=>"\u039F\u0301", "\u038E"=>"\u03A5\u0301", "\u038F"=>"\u03A9\u0301", "\u0390"=>"\u03B9\u0308\u0301", "\u03AA"=>"\u0399\u0308", "\u03AB"=>"\u03A5\u0308", "\u03AC"=>"\u03B1\u0301", "\u03AD"=>"\u03B5\u0301", "\u03AE"=>"\u03B7\u0301", "\u03AF"=>"\u03B9\u0301", "\u03B0"=>"\u03C5\u0308\u0301", "\u03CA"=>"\u03B9\u0308", "\u03CB"=>"\u03C5\u0308", "\u03CC"=>"\u03BF\u0301", "\u03CD"=>"\u03C5\u0301", "\u03CE"=>"\u03C9\u0301", "\u03D3"=>"\u03D2\u0301", "\u03D4"=>"\u03D2\u0308", "\u0400"=>"\u0415\u0300", "\u0401"=>"\u0415\u0308", "\u0403"=>"\u0413\u0301", "\u0407"=>"\u0406\u0308", "\u040C"=>"\u041A\u0301", "\u040D"=>"\u0418\u0300", "\u040E"=>"\u0423\u0306", "\u0419"=>"\u0418\u0306", "\u0439"=>"\u0438\u0306", "\u0450"=>"\u0435\u0300", "\u0451"=>"\u0435\u0308", "\u0453"=>"\u0433\u0301", "\u0457"=>"\u0456\u0308", "\u045C"=>"\u043A\u0301", "\u045D"=>"\u0438\u0300", "\u045E"=>"\u0443\u0306", "\u0476"=>"\u0474\u030F", "\u0477"=>"\u0475\u030F", "\u04C1"=>"\u0416\u0306", "\u04C2"=>"\u0436\u0306", "\u04D0"=>"\u0410\u0306", "\u04D1"=>"\u0430\u0306", "\u04D2"=>"\u0410\u0308", "\u04D3"=>"\u0430\u0308", "\u04D6"=>"\u0415\u0306", "\u04D7"=>"\u0435\u0306", "\u04DA"=>"\u04D8\u0308", "\u04DB"=>"\u04D9\u0308", "\u04DC"=>"\u0416\u0308", "\u04DD"=>"\u0436\u0308", "\u04DE"=>"\u0417\u0308", "\u04DF"=>"\u0437\u0308", "\u04E2"=>"\u0418\u0304", "\u04E3"=>"\u0438\u0304", "\u04E4"=>"\u0418\u0308", "\u04E5"=>"\u0438\u0308", "\u04E6"=>"\u041E\u0308", "\u04E7"=>"\u043E\u0308", "\u04EA"=>"\u04E8\u0308", "\u04EB"=>"\u04E9\u0308", "\u04EC"=>"\u042D\u0308", "\u04ED"=>"\u044D\u0308", "\u04EE"=>"\u0423\u0304", "\u04EF"=>"\u0443\u0304", "\u04F0"=>"\u0423\u0308", "\u04F1"=>"\u0443\u0308", "\u04F2"=>"\u0423\u030B", "\u04F3"=>"\u0443\u030B", "\u04F4"=>"\u0427\u0308", "\u04F5"=>"\u0447\u0308", "\u04F8"=>"\u042B\u0308", "\u04F9"=>"\u044B\u0308", "\u0622"=>"\u0627\u0653", "\u0623"=>"\u0627\u0654", "\u0624"=>"\u0648\u0654", "\u0625"=>"\u0627\u0655", "\u0626"=>"\u064A\u0654", "\u06C0"=>"\u06D5\u0654", "\u06C2"=>"\u06C1\u0654", "\u06D3"=>"\u06D2\u0654", "\u0929"=>"\u0928\u093C", "\u0931"=>"\u0930\u093C", "\u0934"=>"\u0933\u093C", "\u0958"=>"\u0915\u093C", "\u0959"=>"\u0916\u093C", "\u095A"=>"\u0917\u093C", "\u095B"=>"\u091C\u093C", "\u095C"=>"\u0921\u093C", "\u095D"=>"\u0922\u093C", "\u095E"=>"\u092B\u093C", "\u095F"=>"\u092F\u093C", "\u09CB"=>"\u09C7\u09BE", "\u09CC"=>"\u09C7\u09D7", "\u09DC"=>"\u09A1\u09BC", "\u09DD"=>"\u09A2\u09BC", "\u09DF"=>"\u09AF\u09BC", "\u0A33"=>"\u0A32\u0A3C", "\u0A36"=>"\u0A38\u0A3C", "\u0A59"=>"\u0A16\u0A3C", "\u0A5A"=>"\u0A17\u0A3C", "\u0A5B"=>"\u0A1C\u0A3C", "\u0A5E"=>"\u0A2B\u0A3C", "\u0B48"=>"\u0B47\u0B56", "\u0B4B"=>"\u0B47\u0B3E", "\u0B4C"=>"\u0B47\u0B57", "\u0B5C"=>"\u0B21\u0B3C", "\u0B5D"=>"\u0B22\u0B3C", "\u0B94"=>"\u0B92\u0BD7", "\u0BCA"=>"\u0BC6\u0BBE", "\u0BCB"=>"\u0BC7\u0BBE", "\u0BCC"=>"\u0BC6\u0BD7", "\u0C48"=>"\u0C46\u0C56", "\u0CC0"=>"\u0CBF\u0CD5", "\u0CC7"=>"\u0CC6\u0CD5", "\u0CC8"=>"\u0CC6\u0CD6", "\u0CCA"=>"\u0CC6\u0CC2", "\u0CCB"=>"\u0CC6\u0CC2\u0CD5", "\u0D4A"=>"\u0D46\u0D3E", "\u0D4B"=>"\u0D47\u0D3E", "\u0D4C"=>"\u0D46\u0D57", 
"\u0DDA"=>"\u0DD9\u0DCA", "\u0DDC"=>"\u0DD9\u0DCF", "\u0DDD"=>"\u0DD9\u0DCF\u0DCA", "\u0DDE"=>"\u0DD9\u0DDF", "\u0F43"=>"\u0F42\u0FB7", "\u0F4D"=>"\u0F4C\u0FB7", "\u0F52"=>"\u0F51\u0FB7", "\u0F57"=>"\u0F56\u0FB7", "\u0F5C"=>"\u0F5B\u0FB7", "\u0F69"=>"\u0F40\u0FB5", "\u0F73"=>"\u0F71\u0F72", "\u0F75"=>"\u0F71\u0F74", "\u0F76"=>"\u0FB2\u0F80", "\u0F78"=>"\u0FB3\u0F80", "\u0F81"=>"\u0F71\u0F80", "\u0F93"=>"\u0F92\u0FB7", "\u0F9D"=>"\u0F9C\u0FB7", "\u0FA2"=>"\u0FA1\u0FB7", "\u0FA7"=>"\u0FA6\u0FB7", "\u0FAC"=>"\u0FAB\u0FB7", "\u0FB9"=>"\u0F90\u0FB5", "\u1026"=>"\u1025\u102E", "\u1B06"=>"\u1B05\u1B35", "\u1B08"=>"\u1B07\u1B35", "\u1B0A"=>"\u1B09\u1B35", "\u1B0C"=>"\u1B0B\u1B35", "\u1B0E"=>"\u1B0D\u1B35", "\u1B12"=>"\u1B11\u1B35", "\u1B3B"=>"\u1B3A\u1B35", "\u1B3D"=>"\u1B3C\u1B35", "\u1B40"=>"\u1B3E\u1B35", "\u1B41"=>"\u1B3F\u1B35", "\u1B43"=>"\u1B42\u1B35", "\u1E00"=>"A\u0325", "\u1E01"=>"a\u0325", "\u1E02"=>"B\u0307", "\u1E03"=>"b\u0307", "\u1E04"=>"B\u0323", "\u1E05"=>"b\u0323", "\u1E06"=>"B\u0331", "\u1E07"=>"b\u0331", "\u1E08"=>"C\u0327\u0301", "\u1E09"=>"c\u0327\u0301", "\u1E0A"=>"D\u0307", "\u1E0B"=>"d\u0307", "\u1E0C"=>"D\u0323", "\u1E0D"=>"d\u0323", "\u1E0E"=>"D\u0331", "\u1E0F"=>"d\u0331", "\u1E10"=>"D\u0327", "\u1E11"=>"d\u0327", "\u1E12"=>"D\u032D", "\u1E13"=>"d\u032D", "\u1E14"=>"E\u0304\u0300", "\u1E15"=>"e\u0304\u0300", "\u1E16"=>"E\u0304\u0301", "\u1E17"=>"e\u0304\u0301", "\u1E18"=>"E\u032D", "\u1E19"=>"e\u032D", "\u1E1A"=>"E\u0330", "\u1E1B"=>"e\u0330", "\u1E1C"=>"E\u0327\u0306", "\u1E1D"=>"e\u0327\u0306", "\u1E1E"=>"F\u0307", "\u1E1F"=>"f\u0307", "\u1E20"=>"G\u0304", "\u1E21"=>"g\u0304", "\u1E22"=>"H\u0307", "\u1E23"=>"h\u0307", "\u1E24"=>"H\u0323", "\u1E25"=>"h\u0323", "\u1E26"=>"H\u0308", "\u1E27"=>"h\u0308", "\u1E28"=>"H\u0327", "\u1E29"=>"h\u0327", "\u1E2A"=>"H\u032E", "\u1E2B"=>"h\u032E", "\u1E2C"=>"I\u0330", "\u1E2D"=>"i\u0330", "\u1E2E"=>"I\u0308\u0301", "\u1E2F"=>"i\u0308\u0301", "\u1E30"=>"K\u0301", "\u1E31"=>"k\u0301", "\u1E32"=>"K\u0323", "\u1E33"=>"k\u0323", "\u1E34"=>"K\u0331", "\u1E35"=>"k\u0331", "\u1E36"=>"L\u0323", "\u1E37"=>"l\u0323", "\u1E38"=>"L\u0323\u0304", "\u1E39"=>"l\u0323\u0304", "\u1E3A"=>"L\u0331", "\u1E3B"=>"l\u0331", "\u1E3C"=>"L\u032D", "\u1E3D"=>"l\u032D", "\u1E3E"=>"M\u0301", "\u1E3F"=>"m\u0301", "\u1E40"=>"M\u0307", "\u1E41"=>"m\u0307", "\u1E42"=>"M\u0323", "\u1E43"=>"m\u0323", "\u1E44"=>"N\u0307", "\u1E45"=>"n\u0307", "\u1E46"=>"N\u0323", "\u1E47"=>"n\u0323", "\u1E48"=>"N\u0331", "\u1E49"=>"n\u0331", "\u1E4A"=>"N\u032D", "\u1E4B"=>"n\u032D", "\u1E4C"=>"O\u0303\u0301", "\u1E4D"=>"o\u0303\u0301", "\u1E4E"=>"O\u0303\u0308", "\u1E4F"=>"o\u0303\u0308", "\u1E50"=>"O\u0304\u0300", "\u1E51"=>"o\u0304\u0300", "\u1E52"=>"O\u0304\u0301", "\u1E53"=>"o\u0304\u0301", "\u1E54"=>"P\u0301", "\u1E55"=>"p\u0301", "\u1E56"=>"P\u0307", "\u1E57"=>"p\u0307", "\u1E58"=>"R\u0307", "\u1E59"=>"r\u0307", "\u1E5A"=>"R\u0323", "\u1E5B"=>"r\u0323", "\u1E5C"=>"R\u0323\u0304", "\u1E5D"=>"r\u0323\u0304", "\u1E5E"=>"R\u0331", "\u1E5F"=>"r\u0331", "\u1E60"=>"S\u0307", "\u1E61"=>"s\u0307", "\u1E62"=>"S\u0323", "\u1E63"=>"s\u0323", "\u1E64"=>"S\u0301\u0307", "\u1E65"=>"s\u0301\u0307", "\u1E66"=>"S\u030C\u0307", "\u1E67"=>"s\u030C\u0307", "\u1E68"=>"S\u0323\u0307", "\u1E69"=>"s\u0323\u0307", "\u1E6A"=>"T\u0307", "\u1E6B"=>"t\u0307", "\u1E6C"=>"T\u0323", "\u1E6D"=>"t\u0323", "\u1E6E"=>"T\u0331", "\u1E6F"=>"t\u0331", "\u1E70"=>"T\u032D", "\u1E71"=>"t\u032D", "\u1E72"=>"U\u0324", "\u1E73"=>"u\u0324", "\u1E74"=>"U\u0330", "\u1E75"=>"u\u0330", "\u1E76"=>"U\u032D", "\u1E77"=>"u\u032D", 
"\u1E78"=>"U\u0303\u0301", "\u1E79"=>"u\u0303\u0301", "\u1E7A"=>"U\u0304\u0308", "\u1E7B"=>"u\u0304\u0308", "\u1E7C"=>"V\u0303", "\u1E7D"=>"v\u0303", "\u1E7E"=>"V\u0323", "\u1E7F"=>"v\u0323", "\u1E80"=>"W\u0300", "\u1E81"=>"w\u0300", "\u1E82"=>"W\u0301", "\u1E83"=>"w\u0301", "\u1E84"=>"W\u0308", "\u1E85"=>"w\u0308", "\u1E86"=>"W\u0307", "\u1E87"=>"w\u0307", "\u1E88"=>"W\u0323", "\u1E89"=>"w\u0323", "\u1E8A"=>"X\u0307", "\u1E8B"=>"x\u0307", "\u1E8C"=>"X\u0308", "\u1E8D"=>"x\u0308", "\u1E8E"=>"Y\u0307", "\u1E8F"=>"y\u0307", "\u1E90"=>"Z\u0302", "\u1E91"=>"z\u0302", "\u1E92"=>"Z\u0323", "\u1E93"=>"z\u0323", "\u1E94"=>"Z\u0331", "\u1E95"=>"z\u0331", "\u1E96"=>"h\u0331", "\u1E97"=>"t\u0308", "\u1E98"=>"w\u030A", "\u1E99"=>"y\u030A", "\u1E9B"=>"\u017F\u0307", "\u1EA0"=>"A\u0323", "\u1EA1"=>"a\u0323", "\u1EA2"=>"A\u0309", "\u1EA3"=>"a\u0309", "\u1EA4"=>"A\u0302\u0301", "\u1EA5"=>"a\u0302\u0301", "\u1EA6"=>"A\u0302\u0300", "\u1EA7"=>"a\u0302\u0300", "\u1EA8"=>"A\u0302\u0309", "\u1EA9"=>"a\u0302\u0309", "\u1EAA"=>"A\u0302\u0303", "\u1EAB"=>"a\u0302\u0303", "\u1EAC"=>"A\u0323\u0302", "\u1EAD"=>"a\u0323\u0302", "\u1EAE"=>"A\u0306\u0301", "\u1EAF"=>"a\u0306\u0301", "\u1EB0"=>"A\u0306\u0300", "\u1EB1"=>"a\u0306\u0300", "\u1EB2"=>"A\u0306\u0309", "\u1EB3"=>"a\u0306\u0309", "\u1EB4"=>"A\u0306\u0303", "\u1EB5"=>"a\u0306\u0303", "\u1EB6"=>"A\u0323\u0306", "\u1EB7"=>"a\u0323\u0306", "\u1EB8"=>"E\u0323", "\u1EB9"=>"e\u0323", "\u1EBA"=>"E\u0309", "\u1EBB"=>"e\u0309", "\u1EBC"=>"E\u0303", "\u1EBD"=>"e\u0303", "\u1EBE"=>"E\u0302\u0301", "\u1EBF"=>"e\u0302\u0301", "\u1EC0"=>"E\u0302\u0300", "\u1EC1"=>"e\u0302\u0300", "\u1EC2"=>"E\u0302\u0309", "\u1EC3"=>"e\u0302\u0309", "\u1EC4"=>"E\u0302\u0303", "\u1EC5"=>"e\u0302\u0303", "\u1EC6"=>"E\u0323\u0302", "\u1EC7"=>"e\u0323\u0302", "\u1EC8"=>"I\u0309", "\u1EC9"=>"i\u0309", "\u1ECA"=>"I\u0323", "\u1ECB"=>"i\u0323", "\u1ECC"=>"O\u0323", "\u1ECD"=>"o\u0323", "\u1ECE"=>"O\u0309", "\u1ECF"=>"o\u0309", "\u1ED0"=>"O\u0302\u0301", "\u1ED1"=>"o\u0302\u0301", "\u1ED2"=>"O\u0302\u0300", "\u1ED3"=>"o\u0302\u0300", "\u1ED4"=>"O\u0302\u0309", "\u1ED5"=>"o\u0302\u0309", "\u1ED6"=>"O\u0302\u0303", "\u1ED7"=>"o\u0302\u0303", "\u1ED8"=>"O\u0323\u0302", "\u1ED9"=>"o\u0323\u0302", "\u1EDA"=>"O\u031B\u0301", "\u1EDB"=>"o\u031B\u0301", "\u1EDC"=>"O\u031B\u0300", "\u1EDD"=>"o\u031B\u0300", "\u1EDE"=>"O\u031B\u0309", "\u1EDF"=>"o\u031B\u0309", "\u1EE0"=>"O\u031B\u0303", "\u1EE1"=>"o\u031B\u0303", "\u1EE2"=>"O\u031B\u0323", "\u1EE3"=>"o\u031B\u0323", "\u1EE4"=>"U\u0323", "\u1EE5"=>"u\u0323", "\u1EE6"=>"U\u0309", "\u1EE7"=>"u\u0309", "\u1EE8"=>"U\u031B\u0301", "\u1EE9"=>"u\u031B\u0301", "\u1EEA"=>"U\u031B\u0300", "\u1EEB"=>"u\u031B\u0300", "\u1EEC"=>"U\u031B\u0309", "\u1EED"=>"u\u031B\u0309", "\u1EEE"=>"U\u031B\u0303", "\u1EEF"=>"u\u031B\u0303", "\u1EF0"=>"U\u031B\u0323", "\u1EF1"=>"u\u031B\u0323", "\u1EF2"=>"Y\u0300", "\u1EF3"=>"y\u0300", "\u1EF4"=>"Y\u0323", "\u1EF5"=>"y\u0323", "\u1EF6"=>"Y\u0309", "\u1EF7"=>"y\u0309", "\u1EF8"=>"Y\u0303", "\u1EF9"=>"y\u0303", "\u1F00"=>"\u03B1\u0313", "\u1F01"=>"\u03B1\u0314", "\u1F02"=>"\u03B1\u0313\u0300", "\u1F03"=>"\u03B1\u0314\u0300", "\u1F04"=>"\u03B1\u0313\u0301", "\u1F05"=>"\u03B1\u0314\u0301", "\u1F06"=>"\u03B1\u0313\u0342", "\u1F07"=>"\u03B1\u0314\u0342", "\u1F08"=>"\u0391\u0313", "\u1F09"=>"\u0391\u0314", "\u1F0A"=>"\u0391\u0313\u0300", "\u1F0B"=>"\u0391\u0314\u0300", "\u1F0C"=>"\u0391\u0313\u0301", "\u1F0D"=>"\u0391\u0314\u0301", "\u1F0E"=>"\u0391\u0313\u0342", "\u1F0F"=>"\u0391\u0314\u0342", "\u1F10"=>"\u03B5\u0313", "\u1F11"=>"\u03B5\u0314", 
"\u1F12"=>"\u03B5\u0313\u0300", "\u1F13"=>"\u03B5\u0314\u0300", "\u1F14"=>"\u03B5\u0313\u0301", "\u1F15"=>"\u03B5\u0314\u0301", "\u1F18"=>"\u0395\u0313", "\u1F19"=>"\u0395\u0314", "\u1F1A"=>"\u0395\u0313\u0300", "\u1F1B"=>"\u0395\u0314\u0300", "\u1F1C"=>"\u0395\u0313\u0301", "\u1F1D"=>"\u0395\u0314\u0301", "\u1F20"=>"\u03B7\u0313", "\u1F21"=>"\u03B7\u0314", "\u1F22"=>"\u03B7\u0313\u0300", "\u1F23"=>"\u03B7\u0314\u0300", "\u1F24"=>"\u03B7\u0313\u0301", "\u1F25"=>"\u03B7\u0314\u0301", "\u1F26"=>"\u03B7\u0313\u0342", "\u1F27"=>"\u03B7\u0314\u0342", "\u1F28"=>"\u0397\u0313", "\u1F29"=>"\u0397\u0314", "\u1F2A"=>"\u0397\u0313\u0300", "\u1F2B"=>"\u0397\u0314\u0300", "\u1F2C"=>"\u0397\u0313\u0301", "\u1F2D"=>"\u0397\u0314\u0301", "\u1F2E"=>"\u0397\u0313\u0342", "\u1F2F"=>"\u0397\u0314\u0342", "\u1F30"=>"\u03B9\u0313", "\u1F31"=>"\u03B9\u0314", "\u1F32"=>"\u03B9\u0313\u0300", "\u1F33"=>"\u03B9\u0314\u0300", "\u1F34"=>"\u03B9\u0313\u0301", "\u1F35"=>"\u03B9\u0314\u0301", "\u1F36"=>"\u03B9\u0313\u0342", "\u1F37"=>"\u03B9\u0314\u0342", "\u1F38"=>"\u0399\u0313", "\u1F39"=>"\u0399\u0314", "\u1F3A"=>"\u0399\u0313\u0300", "\u1F3B"=>"\u0399\u0314\u0300", "\u1F3C"=>"\u0399\u0313\u0301", "\u1F3D"=>"\u0399\u0314\u0301", "\u1F3E"=>"\u0399\u0313\u0342", "\u1F3F"=>"\u0399\u0314\u0342", "\u1F40"=>"\u03BF\u0313", "\u1F41"=>"\u03BF\u0314", "\u1F42"=>"\u03BF\u0313\u0300", "\u1F43"=>"\u03BF\u0314\u0300", "\u1F44"=>"\u03BF\u0313\u0301", "\u1F45"=>"\u03BF\u0314\u0301", "\u1F48"=>"\u039F\u0313", "\u1F49"=>"\u039F\u0314", "\u1F4A"=>"\u039F\u0313\u0300", "\u1F4B"=>"\u039F\u0314\u0300", "\u1F4C"=>"\u039F\u0313\u0301", "\u1F4D"=>"\u039F\u0314\u0301", "\u1F50"=>"\u03C5\u0313", "\u1F51"=>"\u03C5\u0314", "\u1F52"=>"\u03C5\u0313\u0300", "\u1F53"=>"\u03C5\u0314\u0300", "\u1F54"=>"\u03C5\u0313\u0301", "\u1F55"=>"\u03C5\u0314\u0301", "\u1F56"=>"\u03C5\u0313\u0342", "\u1F57"=>"\u03C5\u0314\u0342", "\u1F59"=>"\u03A5\u0314", "\u1F5B"=>"\u03A5\u0314\u0300", "\u1F5D"=>"\u03A5\u0314\u0301", "\u1F5F"=>"\u03A5\u0314\u0342", "\u1F60"=>"\u03C9\u0313", "\u1F61"=>"\u03C9\u0314", "\u1F62"=>"\u03C9\u0313\u0300", "\u1F63"=>"\u03C9\u0314\u0300", "\u1F64"=>"\u03C9\u0313\u0301", "\u1F65"=>"\u03C9\u0314\u0301", "\u1F66"=>"\u03C9\u0313\u0342", "\u1F67"=>"\u03C9\u0314\u0342", "\u1F68"=>"\u03A9\u0313", "\u1F69"=>"\u03A9\u0314", "\u1F6A"=>"\u03A9\u0313\u0300", "\u1F6B"=>"\u03A9\u0314\u0300", "\u1F6C"=>"\u03A9\u0313\u0301", "\u1F6D"=>"\u03A9\u0314\u0301", "\u1F6E"=>"\u03A9\u0313\u0342", "\u1F6F"=>"\u03A9\u0314\u0342", "\u1F70"=>"\u03B1\u0300", "\u1F71"=>"\u03B1\u0301", "\u1F72"=>"\u03B5\u0300", "\u1F73"=>"\u03B5\u0301", "\u1F74"=>"\u03B7\u0300", "\u1F75"=>"\u03B7\u0301", "\u1F76"=>"\u03B9\u0300", "\u1F77"=>"\u03B9\u0301", "\u1F78"=>"\u03BF\u0300", "\u1F79"=>"\u03BF\u0301", "\u1F7A"=>"\u03C5\u0300", "\u1F7B"=>"\u03C5\u0301", "\u1F7C"=>"\u03C9\u0300", "\u1F7D"=>"\u03C9\u0301", "\u1F80"=>"\u03B1\u0313\u0345", "\u1F81"=>"\u03B1\u0314\u0345", "\u1F82"=>"\u03B1\u0313\u0300\u0345", "\u1F83"=>"\u03B1\u0314\u0300\u0345", "\u1F84"=>"\u03B1\u0313\u0301\u0345", "\u1F85"=>"\u03B1\u0314\u0301\u0345", "\u1F86"=>"\u03B1\u0313\u0342\u0345", "\u1F87"=>"\u03B1\u0314\u0342\u0345", "\u1F88"=>"\u0391\u0313\u0345", "\u1F89"=>"\u0391\u0314\u0345", "\u1F8A"=>"\u0391\u0313\u0300\u0345", "\u1F8B"=>"\u0391\u0314\u0300\u0345", "\u1F8C"=>"\u0391\u0313\u0301\u0345", "\u1F8D"=>"\u0391\u0314\u0301\u0345", "\u1F8E"=>"\u0391\u0313\u0342\u0345", "\u1F8F"=>"\u0391\u0314\u0342\u0345", "\u1F90"=>"\u03B7\u0313\u0345", "\u1F91"=>"\u03B7\u0314\u0345", "\u1F92"=>"\u03B7\u0313\u0300\u0345", 
"\u1F93"=>"\u03B7\u0314\u0300\u0345", "\u1F94"=>"\u03B7\u0313\u0301\u0345", "\u1F95"=>"\u03B7\u0314\u0301\u0345", "\u1F96"=>"\u03B7\u0313\u0342\u0345", "\u1F97"=>"\u03B7\u0314\u0342\u0345", "\u1F98"=>"\u0397\u0313\u0345", "\u1F99"=>"\u0397\u0314\u0345", "\u1F9A"=>"\u0397\u0313\u0300\u0345", "\u1F9B"=>"\u0397\u0314\u0300\u0345", "\u1F9C"=>"\u0397\u0313\u0301\u0345", "\u1F9D"=>"\u0397\u0314\u0301\u0345", "\u1F9E"=>"\u0397\u0313\u0342\u0345", "\u1F9F"=>"\u0397\u0314\u0342\u0345", "\u1FA0"=>"\u03C9\u0313\u0345", "\u1FA1"=>"\u03C9\u0314\u0345", "\u1FA2"=>"\u03C9\u0313\u0300\u0345", "\u1FA3"=>"\u03C9\u0314\u0300\u0345", "\u1FA4"=>"\u03C9\u0313\u0301\u0345", "\u1FA5"=>"\u03C9\u0314\u0301\u0345", "\u1FA6"=>"\u03C9\u0313\u0342\u0345", "\u1FA7"=>"\u03C9\u0314\u0342\u0345", "\u1FA8"=>"\u03A9\u0313\u0345", "\u1FA9"=>"\u03A9\u0314\u0345", "\u1FAA"=>"\u03A9\u0313\u0300\u0345", "\u1FAB"=>"\u03A9\u0314\u0300\u0345", "\u1FAC"=>"\u03A9\u0313\u0301\u0345", "\u1FAD"=>"\u03A9\u0314\u0301\u0345", "\u1FAE"=>"\u03A9\u0313\u0342\u0345", "\u1FAF"=>"\u03A9\u0314\u0342\u0345", "\u1FB0"=>"\u03B1\u0306", "\u1FB1"=>"\u03B1\u0304", "\u1FB2"=>"\u03B1\u0300\u0345", "\u1FB3"=>"\u03B1\u0345", "\u1FB4"=>"\u03B1\u0301\u0345", "\u1FB6"=>"\u03B1\u0342", "\u1FB7"=>"\u03B1\u0342\u0345", "\u1FB8"=>"\u0391\u0306", "\u1FB9"=>"\u0391\u0304", "\u1FBA"=>"\u0391\u0300", "\u1FBB"=>"\u0391\u0301", "\u1FBC"=>"\u0391\u0345", "\u1FBE"=>"\u03B9", "\u1FC1"=>"\u00A8\u0342", "\u1FC2"=>"\u03B7\u0300\u0345", "\u1FC3"=>"\u03B7\u0345", "\u1FC4"=>"\u03B7\u0301\u0345", "\u1FC6"=>"\u03B7\u0342", "\u1FC7"=>"\u03B7\u0342\u0345", "\u1FC8"=>"\u0395\u0300", "\u1FC9"=>"\u0395\u0301", "\u1FCA"=>"\u0397\u0300", "\u1FCB"=>"\u0397\u0301", "\u1FCC"=>"\u0397\u0345", "\u1FCD"=>"\u1FBF\u0300", "\u1FCE"=>"\u1FBF\u0301", "\u1FCF"=>"\u1FBF\u0342", "\u1FD0"=>"\u03B9\u0306", "\u1FD1"=>"\u03B9\u0304", "\u1FD2"=>"\u03B9\u0308\u0300", "\u1FD3"=>"\u03B9\u0308\u0301", "\u1FD6"=>"\u03B9\u0342", "\u1FD7"=>"\u03B9\u0308\u0342", "\u1FD8"=>"\u0399\u0306", "\u1FD9"=>"\u0399\u0304", "\u1FDA"=>"\u0399\u0300", "\u1FDB"=>"\u0399\u0301", "\u1FDD"=>"\u1FFE\u0300", "\u1FDE"=>"\u1FFE\u0301", "\u1FDF"=>"\u1FFE\u0342", "\u1FE0"=>"\u03C5\u0306", "\u1FE1"=>"\u03C5\u0304", "\u1FE2"=>"\u03C5\u0308\u0300", "\u1FE3"=>"\u03C5\u0308\u0301", "\u1FE4"=>"\u03C1\u0313", "\u1FE5"=>"\u03C1\u0314", "\u1FE6"=>"\u03C5\u0342", "\u1FE7"=>"\u03C5\u0308\u0342", "\u1FE8"=>"\u03A5\u0306", "\u1FE9"=>"\u03A5\u0304", "\u1FEA"=>"\u03A5\u0300", "\u1FEB"=>"\u03A5\u0301", "\u1FEC"=>"\u03A1\u0314", "\u1FED"=>"\u00A8\u0300", "\u1FEE"=>"\u00A8\u0301", "\u1FEF"=>"`", "\u1FF2"=>"\u03C9\u0300\u0345", "\u1FF3"=>"\u03C9\u0345", "\u1FF4"=>"\u03C9\u0301\u0345", "\u1FF6"=>"\u03C9\u0342", "\u1FF7"=>"\u03C9\u0342\u0345", "\u1FF8"=>"\u039F\u0300", "\u1FF9"=>"\u039F\u0301", "\u1FFA"=>"\u03A9\u0300", "\u1FFB"=>"\u03A9\u0301", "\u1FFC"=>"\u03A9\u0345", "\u1FFD"=>"\u00B4", "\u2000"=>"\u2002", "\u2001"=>"\u2003", "\u2126"=>"\u03A9", "\u212A"=>"K", "\u212B"=>"A\u030A", "\u219A"=>"\u2190\u0338", "\u219B"=>"\u2192\u0338", "\u21AE"=>"\u2194\u0338", "\u21CD"=>"\u21D0\u0338", "\u21CE"=>"\u21D4\u0338", "\u21CF"=>"\u21D2\u0338", "\u2204"=>"\u2203\u0338", "\u2209"=>"\u2208\u0338", "\u220C"=>"\u220B\u0338", "\u2224"=>"\u2223\u0338", "\u2226"=>"\u2225\u0338", "\u2241"=>"\u223C\u0338", "\u2244"=>"\u2243\u0338", "\u2247"=>"\u2245\u0338", "\u2249"=>"\u2248\u0338", "\u2260"=>"=\u0338", "\u2262"=>"\u2261\u0338", "\u226D"=>"\u224D\u0338", "\u226E"=>"<\u0338", "\u226F"=>">\u0338", "\u2270"=>"\u2264\u0338", "\u2271"=>"\u2265\u0338", "\u2274"=>"\u2272\u0338", 
"\u2275"=>"\u2273\u0338", "\u2278"=>"\u2276\u0338", "\u2279"=>"\u2277\u0338", "\u2280"=>"\u227A\u0338", "\u2281"=>"\u227B\u0338", "\u2284"=>"\u2282\u0338", "\u2285"=>"\u2283\u0338", "\u2288"=>"\u2286\u0338", "\u2289"=>"\u2287\u0338", "\u22AC"=>"\u22A2\u0338", "\u22AD"=>"\u22A8\u0338", "\u22AE"=>"\u22A9\u0338", "\u22AF"=>"\u22AB\u0338", "\u22E0"=>"\u227C\u0338", "\u22E1"=>"\u227D\u0338", "\u22E2"=>"\u2291\u0338", "\u22E3"=>"\u2292\u0338", "\u22EA"=>"\u22B2\u0338", "\u22EB"=>"\u22B3\u0338", "\u22EC"=>"\u22B4\u0338", "\u22ED"=>"\u22B5\u0338", "\u2329"=>"\u3008", "\u232A"=>"\u3009", "\u2ADC"=>"\u2ADD\u0338", "\u304C"=>"\u304B\u3099", "\u304E"=>"\u304D\u3099", "\u3050"=>"\u304F\u3099", "\u3052"=>"\u3051\u3099", "\u3054"=>"\u3053\u3099", "\u3056"=>"\u3055\u3099", "\u3058"=>"\u3057\u3099", "\u305A"=>"\u3059\u3099", "\u305C"=>"\u305B\u3099", "\u305E"=>"\u305D\u3099", "\u3060"=>"\u305F\u3099", "\u3062"=>"\u3061\u3099", "\u3065"=>"\u3064\u3099", "\u3067"=>"\u3066\u3099", "\u3069"=>"\u3068\u3099", "\u3070"=>"\u306F\u3099", "\u3071"=>"\u306F\u309A", "\u3073"=>"\u3072\u3099", "\u3074"=>"\u3072\u309A", "\u3076"=>"\u3075\u3099", "\u3077"=>"\u3075\u309A", "\u3079"=>"\u3078\u3099", "\u307A"=>"\u3078\u309A", "\u307C"=>"\u307B\u3099", "\u307D"=>"\u307B\u309A", "\u3094"=>"\u3046\u3099", "\u309E"=>"\u309D\u3099", "\u30AC"=>"\u30AB\u3099", "\u30AE"=>"\u30AD\u3099", "\u30B0"=>"\u30AF\u3099", "\u30B2"=>"\u30B1\u3099", "\u30B4"=>"\u30B3\u3099", "\u30B6"=>"\u30B5\u3099", "\u30B8"=>"\u30B7\u3099", "\u30BA"=>"\u30B9\u3099", "\u30BC"=>"\u30BB\u3099", "\u30BE"=>"\u30BD\u3099", "\u30C0"=>"\u30BF\u3099", "\u30C2"=>"\u30C1\u3099", "\u30C5"=>"\u30C4\u3099", "\u30C7"=>"\u30C6\u3099", "\u30C9"=>"\u30C8\u3099", "\u30D0"=>"\u30CF\u3099", "\u30D1"=>"\u30CF\u309A", "\u30D3"=>"\u30D2\u3099", "\u30D4"=>"\u30D2\u309A", "\u30D6"=>"\u30D5\u3099", "\u30D7"=>"\u30D5\u309A", "\u30D9"=>"\u30D8\u3099", "\u30DA"=>"\u30D8\u309A", "\u30DC"=>"\u30DB\u3099", "\u30DD"=>"\u30DB\u309A", "\u30F4"=>"\u30A6\u3099", "\u30F7"=>"\u30EF\u3099", "\u30F8"=>"\u30F0\u3099", "\u30F9"=>"\u30F1\u3099", "\u30FA"=>"\u30F2\u3099", "\u30FE"=>"\u30FD\u3099", "\uF900"=>"\u8C48", "\uF901"=>"\u66F4", "\uF902"=>"\u8ECA", "\uF903"=>"\u8CC8", "\uF904"=>"\u6ED1", "\uF905"=>"\u4E32", "\uF906"=>"\u53E5", "\uF907"=>"\u9F9C", "\uF908"=>"\u9F9C", "\uF909"=>"\u5951", "\uF90A"=>"\u91D1", "\uF90B"=>"\u5587", "\uF90C"=>"\u5948", "\uF90D"=>"\u61F6", "\uF90E"=>"\u7669", "\uF90F"=>"\u7F85", "\uF910"=>"\u863F", "\uF911"=>"\u87BA", "\uF912"=>"\u88F8", "\uF913"=>"\u908F", "\uF914"=>"\u6A02", "\uF915"=>"\u6D1B", "\uF916"=>"\u70D9", "\uF917"=>"\u73DE", "\uF918"=>"\u843D", "\uF919"=>"\u916A", "\uF91A"=>"\u99F1", "\uF91B"=>"\u4E82", "\uF91C"=>"\u5375", "\uF91D"=>"\u6B04", "\uF91E"=>"\u721B", "\uF91F"=>"\u862D", "\uF920"=>"\u9E1E", "\uF921"=>"\u5D50", "\uF922"=>"\u6FEB", "\uF923"=>"\u85CD", "\uF924"=>"\u8964", "\uF925"=>"\u62C9", "\uF926"=>"\u81D8", "\uF927"=>"\u881F", "\uF928"=>"\u5ECA", "\uF929"=>"\u6717", "\uF92A"=>"\u6D6A", "\uF92B"=>"\u72FC", "\uF92C"=>"\u90CE", "\uF92D"=>"\u4F86", "\uF92E"=>"\u51B7", "\uF92F"=>"\u52DE", "\uF930"=>"\u64C4", "\uF931"=>"\u6AD3", "\uF932"=>"\u7210", "\uF933"=>"\u76E7", "\uF934"=>"\u8001", "\uF935"=>"\u8606", "\uF936"=>"\u865C", "\uF937"=>"\u8DEF", "\uF938"=>"\u9732", "\uF939"=>"\u9B6F", "\uF93A"=>"\u9DFA", "\uF93B"=>"\u788C", "\uF93C"=>"\u797F", "\uF93D"=>"\u7DA0", "\uF93E"=>"\u83C9", "\uF93F"=>"\u9304", "\uF940"=>"\u9E7F", "\uF941"=>"\u8AD6", "\uF942"=>"\u58DF", "\uF943"=>"\u5F04", "\uF944"=>"\u7C60", "\uF945"=>"\u807E", "\uF946"=>"\u7262", 
"\uF947"=>"\u78CA", "\uF948"=>"\u8CC2", "\uF949"=>"\u96F7", "\uF94A"=>"\u58D8", "\uF94B"=>"\u5C62", "\uF94C"=>"\u6A13", "\uF94D"=>"\u6DDA", "\uF94E"=>"\u6F0F", "\uF94F"=>"\u7D2F", "\uF950"=>"\u7E37", "\uF951"=>"\u964B", "\uF952"=>"\u52D2", "\uF953"=>"\u808B", "\uF954"=>"\u51DC", "\uF955"=>"\u51CC", "\uF956"=>"\u7A1C", "\uF957"=>"\u7DBE", "\uF958"=>"\u83F1", "\uF959"=>"\u9675", "\uF95A"=>"\u8B80", "\uF95B"=>"\u62CF", "\uF95C"=>"\u6A02", "\uF95D"=>"\u8AFE", "\uF95E"=>"\u4E39", "\uF95F"=>"\u5BE7", "\uF960"=>"\u6012", "\uF961"=>"\u7387", "\uF962"=>"\u7570", "\uF963"=>"\u5317", "\uF964"=>"\u78FB", "\uF965"=>"\u4FBF", "\uF966"=>"\u5FA9", "\uF967"=>"\u4E0D", "\uF968"=>"\u6CCC", "\uF969"=>"\u6578", "\uF96A"=>"\u7D22", "\uF96B"=>"\u53C3", "\uF96C"=>"\u585E", "\uF96D"=>"\u7701", "\uF96E"=>"\u8449", "\uF96F"=>"\u8AAA", "\uF970"=>"\u6BBA", "\uF971"=>"\u8FB0", "\uF972"=>"\u6C88", "\uF973"=>"\u62FE", "\uF974"=>"\u82E5", "\uF975"=>"\u63A0", "\uF976"=>"\u7565", "\uF977"=>"\u4EAE", "\uF978"=>"\u5169", "\uF979"=>"\u51C9", "\uF97A"=>"\u6881", "\uF97B"=>"\u7CE7", "\uF97C"=>"\u826F", "\uF97D"=>"\u8AD2", "\uF97E"=>"\u91CF", "\uF97F"=>"\u52F5", "\uF980"=>"\u5442", "\uF981"=>"\u5973", "\uF982"=>"\u5EEC", "\uF983"=>"\u65C5", "\uF984"=>"\u6FFE", "\uF985"=>"\u792A", "\uF986"=>"\u95AD", "\uF987"=>"\u9A6A", "\uF988"=>"\u9E97", "\uF989"=>"\u9ECE", "\uF98A"=>"\u529B", "\uF98B"=>"\u66C6", "\uF98C"=>"\u6B77", "\uF98D"=>"\u8F62", "\uF98E"=>"\u5E74", "\uF98F"=>"\u6190", "\uF990"=>"\u6200", "\uF991"=>"\u649A", "\uF992"=>"\u6F23", "\uF993"=>"\u7149", "\uF994"=>"\u7489", "\uF995"=>"\u79CA", "\uF996"=>"\u7DF4", "\uF997"=>"\u806F", "\uF998"=>"\u8F26", "\uF999"=>"\u84EE", "\uF99A"=>"\u9023", "\uF99B"=>"\u934A", "\uF99C"=>"\u5217", "\uF99D"=>"\u52A3", "\uF99E"=>"\u54BD", "\uF99F"=>"\u70C8", "\uF9A0"=>"\u88C2", "\uF9A1"=>"\u8AAA", "\uF9A2"=>"\u5EC9", "\uF9A3"=>"\u5FF5", "\uF9A4"=>"\u637B", "\uF9A5"=>"\u6BAE", "\uF9A6"=>"\u7C3E", "\uF9A7"=>"\u7375", "\uF9A8"=>"\u4EE4", "\uF9A9"=>"\u56F9", "\uF9AA"=>"\u5BE7", "\uF9AB"=>"\u5DBA", "\uF9AC"=>"\u601C", "\uF9AD"=>"\u73B2", "\uF9AE"=>"\u7469", "\uF9AF"=>"\u7F9A", "\uF9B0"=>"\u8046", "\uF9B1"=>"\u9234", "\uF9B2"=>"\u96F6", "\uF9B3"=>"\u9748", "\uF9B4"=>"\u9818", "\uF9B5"=>"\u4F8B", "\uF9B6"=>"\u79AE", "\uF9B7"=>"\u91B4", "\uF9B8"=>"\u96B8", "\uF9B9"=>"\u60E1", "\uF9BA"=>"\u4E86", "\uF9BB"=>"\u50DA", "\uF9BC"=>"\u5BEE", "\uF9BD"=>"\u5C3F", "\uF9BE"=>"\u6599", "\uF9BF"=>"\u6A02", "\uF9C0"=>"\u71CE", "\uF9C1"=>"\u7642", "\uF9C2"=>"\u84FC", "\uF9C3"=>"\u907C", "\uF9C4"=>"\u9F8D", "\uF9C5"=>"\u6688", "\uF9C6"=>"\u962E", "\uF9C7"=>"\u5289", "\uF9C8"=>"\u677B", "\uF9C9"=>"\u67F3", "\uF9CA"=>"\u6D41", "\uF9CB"=>"\u6E9C", "\uF9CC"=>"\u7409", "\uF9CD"=>"\u7559", "\uF9CE"=>"\u786B", "\uF9CF"=>"\u7D10", "\uF9D0"=>"\u985E", "\uF9D1"=>"\u516D", "\uF9D2"=>"\u622E", "\uF9D3"=>"\u9678", "\uF9D4"=>"\u502B", "\uF9D5"=>"\u5D19", "\uF9D6"=>"\u6DEA", "\uF9D7"=>"\u8F2A", "\uF9D8"=>"\u5F8B", "\uF9D9"=>"\u6144", "\uF9DA"=>"\u6817", "\uF9DB"=>"\u7387", "\uF9DC"=>"\u9686", "\uF9DD"=>"\u5229", "\uF9DE"=>"\u540F", "\uF9DF"=>"\u5C65", "\uF9E0"=>"\u6613", "\uF9E1"=>"\u674E", "\uF9E2"=>"\u68A8", "\uF9E3"=>"\u6CE5", "\uF9E4"=>"\u7406", "\uF9E5"=>"\u75E2", "\uF9E6"=>"\u7F79", "\uF9E7"=>"\u88CF", "\uF9E8"=>"\u88E1", "\uF9E9"=>"\u91CC", "\uF9EA"=>"\u96E2", "\uF9EB"=>"\u533F", "\uF9EC"=>"\u6EBA", "\uF9ED"=>"\u541D", "\uF9EE"=>"\u71D0", "\uF9EF"=>"\u7498", "\uF9F0"=>"\u85FA", "\uF9F1"=>"\u96A3", "\uF9F2"=>"\u9C57", "\uF9F3"=>"\u9E9F", "\uF9F4"=>"\u6797", "\uF9F5"=>"\u6DCB", "\uF9F6"=>"\u81E8", "\uF9F7"=>"\u7ACB", 
"\uF9F8"=>"\u7B20", "\uF9F9"=>"\u7C92", "\uF9FA"=>"\u72C0", "\uF9FB"=>"\u7099", "\uF9FC"=>"\u8B58", "\uF9FD"=>"\u4EC0", "\uF9FE"=>"\u8336", "\uF9FF"=>"\u523A", "\uFA00"=>"\u5207", "\uFA01"=>"\u5EA6", "\uFA02"=>"\u62D3", "\uFA03"=>"\u7CD6", "\uFA04"=>"\u5B85", "\uFA05"=>"\u6D1E", "\uFA06"=>"\u66B4", "\uFA07"=>"\u8F3B", "\uFA08"=>"\u884C", "\uFA09"=>"\u964D", "\uFA0A"=>"\u898B", "\uFA0B"=>"\u5ED3", "\uFA0C"=>"\u5140", "\uFA0D"=>"\u55C0", "\uFA10"=>"\u585A", "\uFA12"=>"\u6674", "\uFA15"=>"\u51DE", "\uFA16"=>"\u732A", "\uFA17"=>"\u76CA", "\uFA18"=>"\u793C", "\uFA19"=>"\u795E", "\uFA1A"=>"\u7965", "\uFA1B"=>"\u798F", "\uFA1C"=>"\u9756", "\uFA1D"=>"\u7CBE", "\uFA1E"=>"\u7FBD", "\uFA20"=>"\u8612", "\uFA22"=>"\u8AF8", "\uFA25"=>"\u9038", "\uFA26"=>"\u90FD", "\uFA2A"=>"\u98EF", "\uFA2B"=>"\u98FC", "\uFA2C"=>"\u9928", "\uFA2D"=>"\u9DB4", "\uFA2E"=>"\u90DE", "\uFA2F"=>"\u96B7", "\uFA30"=>"\u4FAE", "\uFA31"=>"\u50E7", "\uFA32"=>"\u514D", "\uFA33"=>"\u52C9", "\uFA34"=>"\u52E4", "\uFA35"=>"\u5351", "\uFA36"=>"\u559D", "\uFA37"=>"\u5606", "\uFA38"=>"\u5668", "\uFA39"=>"\u5840", "\uFA3A"=>"\u58A8", "\uFA3B"=>"\u5C64", "\uFA3C"=>"\u5C6E", "\uFA3D"=>"\u6094", "\uFA3E"=>"\u6168", "\uFA3F"=>"\u618E", "\uFA40"=>"\u61F2", "\uFA41"=>"\u654F", "\uFA42"=>"\u65E2", "\uFA43"=>"\u6691", "\uFA44"=>"\u6885", "\uFA45"=>"\u6D77", "\uFA46"=>"\u6E1A", "\uFA47"=>"\u6F22", "\uFA48"=>"\u716E", "\uFA49"=>"\u722B", "\uFA4A"=>"\u7422", "\uFA4B"=>"\u7891", "\uFA4C"=>"\u793E", "\uFA4D"=>"\u7949", "\uFA4E"=>"\u7948", "\uFA4F"=>"\u7950", "\uFA50"=>"\u7956", "\uFA51"=>"\u795D", "\uFA52"=>"\u798D", "\uFA53"=>"\u798E", "\uFA54"=>"\u7A40", "\uFA55"=>"\u7A81", "\uFA56"=>"\u7BC0", "\uFA57"=>"\u7DF4", "\uFA58"=>"\u7E09", "\uFA59"=>"\u7E41", "\uFA5A"=>"\u7F72", "\uFA5B"=>"\u8005", "\uFA5C"=>"\u81ED", "\uFA5D"=>"\u8279", "\uFA5E"=>"\u8279", "\uFA5F"=>"\u8457", "\uFA60"=>"\u8910", "\uFA61"=>"\u8996", "\uFA62"=>"\u8B01", "\uFA63"=>"\u8B39", "\uFA64"=>"\u8CD3", "\uFA65"=>"\u8D08", "\uFA66"=>"\u8FB6", "\uFA67"=>"\u9038", "\uFA68"=>"\u96E3", "\uFA69"=>"\u97FF", "\uFA6A"=>"\u983B", "\uFA6B"=>"\u6075", "\uFA6C"=>"\u{242EE}", "\uFA6D"=>"\u8218", "\uFA70"=>"\u4E26", "\uFA71"=>"\u51B5", "\uFA72"=>"\u5168", "\uFA73"=>"\u4F80", "\uFA74"=>"\u5145", "\uFA75"=>"\u5180", "\uFA76"=>"\u52C7", "\uFA77"=>"\u52FA", "\uFA78"=>"\u559D", "\uFA79"=>"\u5555", "\uFA7A"=>"\u5599", "\uFA7B"=>"\u55E2", "\uFA7C"=>"\u585A", "\uFA7D"=>"\u58B3", "\uFA7E"=>"\u5944", "\uFA7F"=>"\u5954", "\uFA80"=>"\u5A62", "\uFA81"=>"\u5B28", "\uFA82"=>"\u5ED2", "\uFA83"=>"\u5ED9", "\uFA84"=>"\u5F69", "\uFA85"=>"\u5FAD", "\uFA86"=>"\u60D8", "\uFA87"=>"\u614E", "\uFA88"=>"\u6108", "\uFA89"=>"\u618E", "\uFA8A"=>"\u6160", "\uFA8B"=>"\u61F2", "\uFA8C"=>"\u6234", "\uFA8D"=>"\u63C4", "\uFA8E"=>"\u641C", "\uFA8F"=>"\u6452", "\uFA90"=>"\u6556", "\uFA91"=>"\u6674", "\uFA92"=>"\u6717", "\uFA93"=>"\u671B", "\uFA94"=>"\u6756", "\uFA95"=>"\u6B79", "\uFA96"=>"\u6BBA", "\uFA97"=>"\u6D41", "\uFA98"=>"\u6EDB", "\uFA99"=>"\u6ECB", "\uFA9A"=>"\u6F22", "\uFA9B"=>"\u701E", "\uFA9C"=>"\u716E", "\uFA9D"=>"\u77A7", "\uFA9E"=>"\u7235", "\uFA9F"=>"\u72AF", "\uFAA0"=>"\u732A", "\uFAA1"=>"\u7471", "\uFAA2"=>"\u7506", "\uFAA3"=>"\u753B", "\uFAA4"=>"\u761D", "\uFAA5"=>"\u761F", "\uFAA6"=>"\u76CA", "\uFAA7"=>"\u76DB", "\uFAA8"=>"\u76F4", "\uFAA9"=>"\u774A", "\uFAAA"=>"\u7740", "\uFAAB"=>"\u78CC", "\uFAAC"=>"\u7AB1", "\uFAAD"=>"\u7BC0", "\uFAAE"=>"\u7C7B", "\uFAAF"=>"\u7D5B", "\uFAB0"=>"\u7DF4", "\uFAB1"=>"\u7F3E", "\uFAB2"=>"\u8005", "\uFAB3"=>"\u8352", "\uFAB4"=>"\u83EF", "\uFAB5"=>"\u8779", "\uFAB6"=>"\u8941", 
"\uFAB7"=>"\u8986", "\uFAB8"=>"\u8996", "\uFAB9"=>"\u8ABF", "\uFABA"=>"\u8AF8", "\uFABB"=>"\u8ACB", "\uFABC"=>"\u8B01", "\uFABD"=>"\u8AFE", "\uFABE"=>"\u8AED", "\uFABF"=>"\u8B39", "\uFAC0"=>"\u8B8A", "\uFAC1"=>"\u8D08", "\uFAC2"=>"\u8F38", "\uFAC3"=>"\u9072", "\uFAC4"=>"\u9199", "\uFAC5"=>"\u9276", "\uFAC6"=>"\u967C", "\uFAC7"=>"\u96E3", "\uFAC8"=>"\u9756", "\uFAC9"=>"\u97DB", "\uFACA"=>"\u97FF", "\uFACB"=>"\u980B", "\uFACC"=>"\u983B", "\uFACD"=>"\u9B12", "\uFACE"=>"\u9F9C", "\uFACF"=>"\u{2284A}", "\uFAD0"=>"\u{22844}", "\uFAD1"=>"\u{233D5}", "\uFAD2"=>"\u3B9D", "\uFAD3"=>"\u4018", "\uFAD4"=>"\u4039", "\uFAD5"=>"\u{25249}", "\uFAD6"=>"\u{25CD0}", "\uFAD7"=>"\u{27ED3}", "\uFAD8"=>"\u9F43", "\uFAD9"=>"\u9F8E", "\uFB1D"=>"\u05D9\u05B4", "\uFB1F"=>"\u05F2\u05B7", "\uFB2A"=>"\u05E9\u05C1", "\uFB2B"=>"\u05E9\u05C2", "\uFB2C"=>"\u05E9\u05BC\u05C1", "\uFB2D"=>"\u05E9\u05BC\u05C2", "\uFB2E"=>"\u05D0\u05B7", "\uFB2F"=>"\u05D0\u05B8", "\uFB30"=>"\u05D0\u05BC", "\uFB31"=>"\u05D1\u05BC", "\uFB32"=>"\u05D2\u05BC", "\uFB33"=>"\u05D3\u05BC", "\uFB34"=>"\u05D4\u05BC", "\uFB35"=>"\u05D5\u05BC", "\uFB36"=>"\u05D6\u05BC", "\uFB38"=>"\u05D8\u05BC", "\uFB39"=>"\u05D9\u05BC", "\uFB3A"=>"\u05DA\u05BC", "\uFB3B"=>"\u05DB\u05BC", "\uFB3C"=>"\u05DC\u05BC", "\uFB3E"=>"\u05DE\u05BC", "\uFB40"=>"\u05E0\u05BC", "\uFB41"=>"\u05E1\u05BC", "\uFB43"=>"\u05E3\u05BC", "\uFB44"=>"\u05E4\u05BC", "\uFB46"=>"\u05E6\u05BC", "\uFB47"=>"\u05E7\u05BC", "\uFB48"=>"\u05E8\u05BC", "\uFB49"=>"\u05E9\u05BC", "\uFB4A"=>"\u05EA\u05BC", "\uFB4B"=>"\u05D5\u05B9", "\uFB4C"=>"\u05D1\u05BF", "\uFB4D"=>"\u05DB\u05BF", "\uFB4E"=>"\u05E4\u05BF", "\u{1109A}"=>"\u{11099}\u{110BA}", "\u{1109C}"=>"\u{1109B}\u{110BA}", "\u{110AB}"=>"\u{110A5}\u{110BA}", "\u{1112E}"=>"\u{11131}\u{11127}", "\u{1112F}"=>"\u{11132}\u{11127}", "\u{1134B}"=>"\u{11347}\u{1133E}", "\u{1134C}"=>"\u{11347}\u{11357}", "\u{114BB}"=>"\u{114B9}\u{114BA}", "\u{114BC}"=>"\u{114B9}\u{114B0}", "\u{114BE}"=>"\u{114B9}\u{114BD}", "\u{115BA}"=>"\u{115B8}\u{115AF}", "\u{115BB}"=>"\u{115B9}\u{115AF}", "\u{1D15E}"=>"\u{1D157}\u{1D165}", "\u{1D15F}"=>"\u{1D158}\u{1D165}", "\u{1D160}"=>"\u{1D158}\u{1D165}\u{1D16E}", "\u{1D161}"=>"\u{1D158}\u{1D165}\u{1D16F}", "\u{1D162}"=>"\u{1D158}\u{1D165}\u{1D170}", "\u{1D163}"=>"\u{1D158}\u{1D165}\u{1D171}", "\u{1D164}"=>"\u{1D158}\u{1D165}\u{1D172}", "\u{1D1BB}"=>"\u{1D1B9}\u{1D165}", "\u{1D1BC}"=>"\u{1D1BA}\u{1D165}", "\u{1D1BD}"=>"\u{1D1B9}\u{1D165}\u{1D16E}", "\u{1D1BE}"=>"\u{1D1BA}\u{1D165}\u{1D16E}", "\u{1D1BF}"=>"\u{1D1B9}\u{1D165}\u{1D16F}", "\u{1D1C0}"=>"\u{1D1BA}\u{1D165}\u{1D16F}", "\u{2F800}"=>"\u4E3D", "\u{2F801}"=>"\u4E38", "\u{2F802}"=>"\u4E41", "\u{2F803}"=>"\u{20122}", "\u{2F804}"=>"\u4F60", "\u{2F805}"=>"\u4FAE", "\u{2F806}"=>"\u4FBB", "\u{2F807}"=>"\u5002", "\u{2F808}"=>"\u507A", "\u{2F809}"=>"\u5099", "\u{2F80A}"=>"\u50E7", "\u{2F80B}"=>"\u50CF", "\u{2F80C}"=>"\u349E", "\u{2F80D}"=>"\u{2063A}", "\u{2F80E}"=>"\u514D", "\u{2F80F}"=>"\u5154", "\u{2F810}"=>"\u5164", "\u{2F811}"=>"\u5177", "\u{2F812}"=>"\u{2051C}", "\u{2F813}"=>"\u34B9", "\u{2F814}"=>"\u5167", "\u{2F815}"=>"\u518D", "\u{2F816}"=>"\u{2054B}", "\u{2F817}"=>"\u5197", "\u{2F818}"=>"\u51A4", "\u{2F819}"=>"\u4ECC", "\u{2F81A}"=>"\u51AC", "\u{2F81B}"=>"\u51B5", "\u{2F81C}"=>"\u{291DF}", "\u{2F81D}"=>"\u51F5", "\u{2F81E}"=>"\u5203", "\u{2F81F}"=>"\u34DF", "\u{2F820}"=>"\u523B", "\u{2F821}"=>"\u5246", "\u{2F822}"=>"\u5272", "\u{2F823}"=>"\u5277", "\u{2F824}"=>"\u3515", "\u{2F825}"=>"\u52C7", "\u{2F826}"=>"\u52C9", "\u{2F827}"=>"\u52E4", "\u{2F828}"=>"\u52FA", "\u{2F829}"=>"\u5305", 
"\u{2F82A}"=>"\u5306", "\u{2F82B}"=>"\u5317", "\u{2F82C}"=>"\u5349", "\u{2F82D}"=>"\u5351", "\u{2F82E}"=>"\u535A", "\u{2F82F}"=>"\u5373", "\u{2F830}"=>"\u537D", "\u{2F831}"=>"\u537F", "\u{2F832}"=>"\u537F", "\u{2F833}"=>"\u537F", "\u{2F834}"=>"\u{20A2C}", "\u{2F835}"=>"\u7070", "\u{2F836}"=>"\u53CA", "\u{2F837}"=>"\u53DF", "\u{2F838}"=>"\u{20B63}", "\u{2F839}"=>"\u53EB", "\u{2F83A}"=>"\u53F1", "\u{2F83B}"=>"\u5406", "\u{2F83C}"=>"\u549E", "\u{2F83D}"=>"\u5438", "\u{2F83E}"=>"\u5448", "\u{2F83F}"=>"\u5468", "\u{2F840}"=>"\u54A2", "\u{2F841}"=>"\u54F6", "\u{2F842}"=>"\u5510", "\u{2F843}"=>"\u5553", "\u{2F844}"=>"\u5563", "\u{2F845}"=>"\u5584", "\u{2F846}"=>"\u5584", "\u{2F847}"=>"\u5599", "\u{2F848}"=>"\u55AB", "\u{2F849}"=>"\u55B3", "\u{2F84A}"=>"\u55C2", "\u{2F84B}"=>"\u5716", "\u{2F84C}"=>"\u5606", "\u{2F84D}"=>"\u5717", "\u{2F84E}"=>"\u5651", "\u{2F84F}"=>"\u5674", "\u{2F850}"=>"\u5207", "\u{2F851}"=>"\u58EE", "\u{2F852}"=>"\u57CE", "\u{2F853}"=>"\u57F4", "\u{2F854}"=>"\u580D", "\u{2F855}"=>"\u578B", "\u{2F856}"=>"\u5832", "\u{2F857}"=>"\u5831", "\u{2F858}"=>"\u58AC", "\u{2F859}"=>"\u{214E4}", "\u{2F85A}"=>"\u58F2", "\u{2F85B}"=>"\u58F7", "\u{2F85C}"=>"\u5906", "\u{2F85D}"=>"\u591A", "\u{2F85E}"=>"\u5922", "\u{2F85F}"=>"\u5962", "\u{2F860}"=>"\u{216A8}", "\u{2F861}"=>"\u{216EA}", "\u{2F862}"=>"\u59EC", "\u{2F863}"=>"\u5A1B", "\u{2F864}"=>"\u5A27", "\u{2F865}"=>"\u59D8", "\u{2F866}"=>"\u5A66", "\u{2F867}"=>"\u36EE", "\u{2F868}"=>"\u36FC", "\u{2F869}"=>"\u5B08", "\u{2F86A}"=>"\u5B3E", "\u{2F86B}"=>"\u5B3E", "\u{2F86C}"=>"\u{219C8}", "\u{2F86D}"=>"\u5BC3", "\u{2F86E}"=>"\u5BD8", "\u{2F86F}"=>"\u5BE7", "\u{2F870}"=>"\u5BF3", "\u{2F871}"=>"\u{21B18}", "\u{2F872}"=>"\u5BFF", "\u{2F873}"=>"\u5C06", "\u{2F874}"=>"\u5F53", "\u{2F875}"=>"\u5C22", "\u{2F876}"=>"\u3781", "\u{2F877}"=>"\u5C60", "\u{2F878}"=>"\u5C6E", "\u{2F879}"=>"\u5CC0", "\u{2F87A}"=>"\u5C8D", "\u{2F87B}"=>"\u{21DE4}", "\u{2F87C}"=>"\u5D43", "\u{2F87D}"=>"\u{21DE6}", "\u{2F87E}"=>"\u5D6E", "\u{2F87F}"=>"\u5D6B", "\u{2F880}"=>"\u5D7C", "\u{2F881}"=>"\u5DE1", "\u{2F882}"=>"\u5DE2", "\u{2F883}"=>"\u382F", "\u{2F884}"=>"\u5DFD", "\u{2F885}"=>"\u5E28", "\u{2F886}"=>"\u5E3D", "\u{2F887}"=>"\u5E69", "\u{2F888}"=>"\u3862", "\u{2F889}"=>"\u{22183}", "\u{2F88A}"=>"\u387C", "\u{2F88B}"=>"\u5EB0", "\u{2F88C}"=>"\u5EB3", "\u{2F88D}"=>"\u5EB6", "\u{2F88E}"=>"\u5ECA", "\u{2F88F}"=>"\u{2A392}", "\u{2F890}"=>"\u5EFE", "\u{2F891}"=>"\u{22331}", "\u{2F892}"=>"\u{22331}", "\u{2F893}"=>"\u8201", "\u{2F894}"=>"\u5F22", "\u{2F895}"=>"\u5F22", "\u{2F896}"=>"\u38C7", "\u{2F897}"=>"\u{232B8}", "\u{2F898}"=>"\u{261DA}", "\u{2F899}"=>"\u5F62", "\u{2F89A}"=>"\u5F6B", "\u{2F89B}"=>"\u38E3", "\u{2F89C}"=>"\u5F9A", "\u{2F89D}"=>"\u5FCD", "\u{2F89E}"=>"\u5FD7", "\u{2F89F}"=>"\u5FF9", "\u{2F8A0}"=>"\u6081", "\u{2F8A1}"=>"\u393A", "\u{2F8A2}"=>"\u391C", "\u{2F8A3}"=>"\u6094", "\u{2F8A4}"=>"\u{226D4}", "\u{2F8A5}"=>"\u60C7", "\u{2F8A6}"=>"\u6148", "\u{2F8A7}"=>"\u614C", "\u{2F8A8}"=>"\u614E", "\u{2F8A9}"=>"\u614C", "\u{2F8AA}"=>"\u617A", "\u{2F8AB}"=>"\u618E", "\u{2F8AC}"=>"\u61B2", "\u{2F8AD}"=>"\u61A4", "\u{2F8AE}"=>"\u61AF", "\u{2F8AF}"=>"\u61DE", "\u{2F8B0}"=>"\u61F2", "\u{2F8B1}"=>"\u61F6", "\u{2F8B2}"=>"\u6210", "\u{2F8B3}"=>"\u621B", "\u{2F8B4}"=>"\u625D", "\u{2F8B5}"=>"\u62B1", "\u{2F8B6}"=>"\u62D4", "\u{2F8B7}"=>"\u6350", "\u{2F8B8}"=>"\u{22B0C}", "\u{2F8B9}"=>"\u633D", "\u{2F8BA}"=>"\u62FC", "\u{2F8BB}"=>"\u6368", "\u{2F8BC}"=>"\u6383", "\u{2F8BD}"=>"\u63E4", "\u{2F8BE}"=>"\u{22BF1}", "\u{2F8BF}"=>"\u6422", "\u{2F8C0}"=>"\u63C5", "\u{2F8C1}"=>"\u63A9", 
"\u{2F8C2}"=>"\u3A2E", "\u{2F8C3}"=>"\u6469", "\u{2F8C4}"=>"\u647E", "\u{2F8C5}"=>"\u649D", "\u{2F8C6}"=>"\u6477", "\u{2F8C7}"=>"\u3A6C", "\u{2F8C8}"=>"\u654F", "\u{2F8C9}"=>"\u656C", "\u{2F8CA}"=>"\u{2300A}", "\u{2F8CB}"=>"\u65E3", "\u{2F8CC}"=>"\u66F8", "\u{2F8CD}"=>"\u6649", "\u{2F8CE}"=>"\u3B19", "\u{2F8CF}"=>"\u6691", "\u{2F8D0}"=>"\u3B08", "\u{2F8D1}"=>"\u3AE4", "\u{2F8D2}"=>"\u5192", "\u{2F8D3}"=>"\u5195", "\u{2F8D4}"=>"\u6700", "\u{2F8D5}"=>"\u669C", "\u{2F8D6}"=>"\u80AD", "\u{2F8D7}"=>"\u43D9", "\u{2F8D8}"=>"\u6717", "\u{2F8D9}"=>"\u671B", "\u{2F8DA}"=>"\u6721", "\u{2F8DB}"=>"\u675E", "\u{2F8DC}"=>"\u6753", "\u{2F8DD}"=>"\u{233C3}", "\u{2F8DE}"=>"\u3B49", "\u{2F8DF}"=>"\u67FA", "\u{2F8E0}"=>"\u6785", "\u{2F8E1}"=>"\u6852", "\u{2F8E2}"=>"\u6885", "\u{2F8E3}"=>"\u{2346D}", "\u{2F8E4}"=>"\u688E", "\u{2F8E5}"=>"\u681F", "\u{2F8E6}"=>"\u6914", "\u{2F8E7}"=>"\u3B9D", "\u{2F8E8}"=>"\u6942", "\u{2F8E9}"=>"\u69A3", "\u{2F8EA}"=>"\u69EA", "\u{2F8EB}"=>"\u6AA8", "\u{2F8EC}"=>"\u{236A3}", "\u{2F8ED}"=>"\u6ADB", "\u{2F8EE}"=>"\u3C18", "\u{2F8EF}"=>"\u6B21", "\u{2F8F0}"=>"\u{238A7}", "\u{2F8F1}"=>"\u6B54", "\u{2F8F2}"=>"\u3C4E", "\u{2F8F3}"=>"\u6B72", "\u{2F8F4}"=>"\u6B9F", "\u{2F8F5}"=>"\u6BBA", "\u{2F8F6}"=>"\u6BBB", "\u{2F8F7}"=>"\u{23A8D}", "\u{2F8F8}"=>"\u{21D0B}", "\u{2F8F9}"=>"\u{23AFA}", "\u{2F8FA}"=>"\u6C4E", "\u{2F8FB}"=>"\u{23CBC}", "\u{2F8FC}"=>"\u6CBF", "\u{2F8FD}"=>"\u6CCD", "\u{2F8FE}"=>"\u6C67", "\u{2F8FF}"=>"\u6D16", "\u{2F900}"=>"\u6D3E", "\u{2F901}"=>"\u6D77", "\u{2F902}"=>"\u6D41", "\u{2F903}"=>"\u6D69", "\u{2F904}"=>"\u6D78", "\u{2F905}"=>"\u6D85", "\u{2F906}"=>"\u{23D1E}", "\u{2F907}"=>"\u6D34", "\u{2F908}"=>"\u6E2F", "\u{2F909}"=>"\u6E6E", "\u{2F90A}"=>"\u3D33", "\u{2F90B}"=>"\u6ECB", "\u{2F90C}"=>"\u6EC7", "\u{2F90D}"=>"\u{23ED1}", "\u{2F90E}"=>"\u6DF9", "\u{2F90F}"=>"\u6F6E", "\u{2F910}"=>"\u{23F5E}", "\u{2F911}"=>"\u{23F8E}", "\u{2F912}"=>"\u6FC6", "\u{2F913}"=>"\u7039", "\u{2F914}"=>"\u701E", "\u{2F915}"=>"\u701B", "\u{2F916}"=>"\u3D96", "\u{2F917}"=>"\u704A", "\u{2F918}"=>"\u707D", "\u{2F919}"=>"\u7077", "\u{2F91A}"=>"\u70AD", "\u{2F91B}"=>"\u{20525}", "\u{2F91C}"=>"\u7145", "\u{2F91D}"=>"\u{24263}", "\u{2F91E}"=>"\u719C", "\u{2F91F}"=>"\u{243AB}", "\u{2F920}"=>"\u7228", "\u{2F921}"=>"\u7235", "\u{2F922}"=>"\u7250", "\u{2F923}"=>"\u{24608}", "\u{2F924}"=>"\u7280", "\u{2F925}"=>"\u7295", "\u{2F926}"=>"\u{24735}", "\u{2F927}"=>"\u{24814}", "\u{2F928}"=>"\u737A", "\u{2F929}"=>"\u738B", "\u{2F92A}"=>"\u3EAC", "\u{2F92B}"=>"\u73A5", "\u{2F92C}"=>"\u3EB8", "\u{2F92D}"=>"\u3EB8", "\u{2F92E}"=>"\u7447", "\u{2F92F}"=>"\u745C", "\u{2F930}"=>"\u7471", "\u{2F931}"=>"\u7485", "\u{2F932}"=>"\u74CA", "\u{2F933}"=>"\u3F1B", "\u{2F934}"=>"\u7524", "\u{2F935}"=>"\u{24C36}", "\u{2F936}"=>"\u753E", "\u{2F937}"=>"\u{24C92}", "\u{2F938}"=>"\u7570", "\u{2F939}"=>"\u{2219F}", "\u{2F93A}"=>"\u7610", "\u{2F93B}"=>"\u{24FA1}", "\u{2F93C}"=>"\u{24FB8}", "\u{2F93D}"=>"\u{25044}", "\u{2F93E}"=>"\u3FFC", "\u{2F93F}"=>"\u4008", "\u{2F940}"=>"\u76F4", "\u{2F941}"=>"\u{250F3}", "\u{2F942}"=>"\u{250F2}", "\u{2F943}"=>"\u{25119}", "\u{2F944}"=>"\u{25133}", "\u{2F945}"=>"\u771E", "\u{2F946}"=>"\u771F", "\u{2F947}"=>"\u771F", "\u{2F948}"=>"\u774A", "\u{2F949}"=>"\u4039", "\u{2F94A}"=>"\u778B", "\u{2F94B}"=>"\u4046", "\u{2F94C}"=>"\u4096", "\u{2F94D}"=>"\u{2541D}", "\u{2F94E}"=>"\u784E", "\u{2F94F}"=>"\u788C", "\u{2F950}"=>"\u78CC", "\u{2F951}"=>"\u40E3", "\u{2F952}"=>"\u{25626}", "\u{2F953}"=>"\u7956", "\u{2F954}"=>"\u{2569A}", "\u{2F955}"=>"\u{256C5}", "\u{2F956}"=>"\u798F", "\u{2F957}"=>"\u79EB", 
"\u{2F958}"=>"\u412F", "\u{2F959}"=>"\u7A40", "\u{2F95A}"=>"\u7A4A", "\u{2F95B}"=>"\u7A4F", "\u{2F95C}"=>"\u{2597C}", "\u{2F95D}"=>"\u{25AA7}", "\u{2F95E}"=>"\u{25AA7}", "\u{2F95F}"=>"\u7AEE", "\u{2F960}"=>"\u4202", "\u{2F961}"=>"\u{25BAB}", "\u{2F962}"=>"\u7BC6", "\u{2F963}"=>"\u7BC9", "\u{2F964}"=>"\u4227", "\u{2F965}"=>"\u{25C80}", "\u{2F966}"=>"\u7CD2", "\u{2F967}"=>"\u42A0", "\u{2F968}"=>"\u7CE8", "\u{2F969}"=>"\u7CE3", "\u{2F96A}"=>"\u7D00", "\u{2F96B}"=>"\u{25F86}", "\u{2F96C}"=>"\u7D63", "\u{2F96D}"=>"\u4301", "\u{2F96E}"=>"\u7DC7", "\u{2F96F}"=>"\u7E02", "\u{2F970}"=>"\u7E45", "\u{2F971}"=>"\u4334", "\u{2F972}"=>"\u{26228}", "\u{2F973}"=>"\u{26247}", "\u{2F974}"=>"\u4359", "\u{2F975}"=>"\u{262D9}", "\u{2F976}"=>"\u7F7A", "\u{2F977}"=>"\u{2633E}", "\u{2F978}"=>"\u7F95", "\u{2F979}"=>"\u7FFA", "\u{2F97A}"=>"\u8005", "\u{2F97B}"=>"\u{264DA}", "\u{2F97C}"=>"\u{26523}", "\u{2F97D}"=>"\u8060", "\u{2F97E}"=>"\u{265A8}", "\u{2F97F}"=>"\u8070", "\u{2F980}"=>"\u{2335F}", "\u{2F981}"=>"\u43D5", "\u{2F982}"=>"\u80B2", "\u{2F983}"=>"\u8103", "\u{2F984}"=>"\u440B", "\u{2F985}"=>"\u813E", "\u{2F986}"=>"\u5AB5", "\u{2F987}"=>"\u{267A7}", "\u{2F988}"=>"\u{267B5}", "\u{2F989}"=>"\u{23393}", "\u{2F98A}"=>"\u{2339C}", "\u{2F98B}"=>"\u8201", "\u{2F98C}"=>"\u8204", "\u{2F98D}"=>"\u8F9E", "\u{2F98E}"=>"\u446B", "\u{2F98F}"=>"\u8291", "\u{2F990}"=>"\u828B", "\u{2F991}"=>"\u829D", "\u{2F992}"=>"\u52B3", "\u{2F993}"=>"\u82B1", "\u{2F994}"=>"\u82B3", "\u{2F995}"=>"\u82BD", "\u{2F996}"=>"\u82E6", "\u{2F997}"=>"\u{26B3C}", "\u{2F998}"=>"\u82E5", "\u{2F999}"=>"\u831D", "\u{2F99A}"=>"\u8363", "\u{2F99B}"=>"\u83AD", "\u{2F99C}"=>"\u8323", "\u{2F99D}"=>"\u83BD", "\u{2F99E}"=>"\u83E7", "\u{2F99F}"=>"\u8457", "\u{2F9A0}"=>"\u8353", "\u{2F9A1}"=>"\u83CA", "\u{2F9A2}"=>"\u83CC", "\u{2F9A3}"=>"\u83DC", "\u{2F9A4}"=>"\u{26C36}", "\u{2F9A5}"=>"\u{26D6B}", "\u{2F9A6}"=>"\u{26CD5}", "\u{2F9A7}"=>"\u452B", "\u{2F9A8}"=>"\u84F1", "\u{2F9A9}"=>"\u84F3", "\u{2F9AA}"=>"\u8516", "\u{2F9AB}"=>"\u{273CA}", "\u{2F9AC}"=>"\u8564", "\u{2F9AD}"=>"\u{26F2C}", "\u{2F9AE}"=>"\u455D", "\u{2F9AF}"=>"\u4561", "\u{2F9B0}"=>"\u{26FB1}", "\u{2F9B1}"=>"\u{270D2}", "\u{2F9B2}"=>"\u456B", "\u{2F9B3}"=>"\u8650", "\u{2F9B4}"=>"\u865C", "\u{2F9B5}"=>"\u8667", "\u{2F9B6}"=>"\u8669", "\u{2F9B7}"=>"\u86A9", "\u{2F9B8}"=>"\u8688", "\u{2F9B9}"=>"\u870E", "\u{2F9BA}"=>"\u86E2", "\u{2F9BB}"=>"\u8779", "\u{2F9BC}"=>"\u8728", "\u{2F9BD}"=>"\u876B", "\u{2F9BE}"=>"\u8786", "\u{2F9BF}"=>"\u45D7", "\u{2F9C0}"=>"\u87E1", "\u{2F9C1}"=>"\u8801", "\u{2F9C2}"=>"\u45F9", "\u{2F9C3}"=>"\u8860", "\u{2F9C4}"=>"\u8863", "\u{2F9C5}"=>"\u{27667}", "\u{2F9C6}"=>"\u88D7", "\u{2F9C7}"=>"\u88DE", "\u{2F9C8}"=>"\u4635", "\u{2F9C9}"=>"\u88FA", "\u{2F9CA}"=>"\u34BB", "\u{2F9CB}"=>"\u{278AE}", "\u{2F9CC}"=>"\u{27966}", "\u{2F9CD}"=>"\u46BE", "\u{2F9CE}"=>"\u46C7", "\u{2F9CF}"=>"\u8AA0", "\u{2F9D0}"=>"\u8AED", "\u{2F9D1}"=>"\u8B8A", "\u{2F9D2}"=>"\u8C55", "\u{2F9D3}"=>"\u{27CA8}", "\u{2F9D4}"=>"\u8CAB", "\u{2F9D5}"=>"\u8CC1", "\u{2F9D6}"=>"\u8D1B", "\u{2F9D7}"=>"\u8D77", "\u{2F9D8}"=>"\u{27F2F}", "\u{2F9D9}"=>"\u{20804}", "\u{2F9DA}"=>"\u8DCB", "\u{2F9DB}"=>"\u8DBC", "\u{2F9DC}"=>"\u8DF0", "\u{2F9DD}"=>"\u{208DE}", "\u{2F9DE}"=>"\u8ED4", "\u{2F9DF}"=>"\u8F38", "\u{2F9E0}"=>"\u{285D2}", "\u{2F9E1}"=>"\u{285ED}", "\u{2F9E2}"=>"\u9094", "\u{2F9E3}"=>"\u90F1", "\u{2F9E4}"=>"\u9111", "\u{2F9E5}"=>"\u{2872E}", "\u{2F9E6}"=>"\u911B", "\u{2F9E7}"=>"\u9238", "\u{2F9E8}"=>"\u92D7", "\u{2F9E9}"=>"\u92D8", "\u{2F9EA}"=>"\u927C", "\u{2F9EB}"=>"\u93F9", "\u{2F9EC}"=>"\u9415", 
"\u{2F9ED}"=>"\u{28BFA}", "\u{2F9EE}"=>"\u958B", "\u{2F9EF}"=>"\u4995", "\u{2F9F0}"=>"\u95B7", "\u{2F9F1}"=>"\u{28D77}", "\u{2F9F2}"=>"\u49E6", "\u{2F9F3}"=>"\u96C3", "\u{2F9F4}"=>"\u5DB2", "\u{2F9F5}"=>"\u9723", "\u{2F9F6}"=>"\u{29145}", "\u{2F9F7}"=>"\u{2921A}", "\u{2F9F8}"=>"\u4A6E", "\u{2F9F9}"=>"\u4A76", "\u{2F9FA}"=>"\u97E0", "\u{2F9FB}"=>"\u{2940A}", "\u{2F9FC}"=>"\u4AB2", "\u{2F9FD}"=>"\u{29496}", "\u{2F9FE}"=>"\u980B", "\u{2F9FF}"=>"\u980B", "\u{2FA00}"=>"\u9829", "\u{2FA01}"=>"\u{295B6}", "\u{2FA02}"=>"\u98E2", "\u{2FA03}"=>"\u4B33", "\u{2FA04}"=>"\u9929", "\u{2FA05}"=>"\u99A7", "\u{2FA06}"=>"\u99C2", "\u{2FA07}"=>"\u99FE", "\u{2FA08}"=>"\u4BCE", "\u{2FA09}"=>"\u{29B30}", "\u{2FA0A}"=>"\u9B12", "\u{2FA0B}"=>"\u9C40", "\u{2FA0C}"=>"\u9CFD", "\u{2FA0D}"=>"\u4CCE", "\u{2FA0E}"=>"\u4CED", "\u{2FA0F}"=>"\u9D67", "\u{2FA10}"=>"\u{2A0CE}", "\u{2FA11}"=>"\u4CF8", "\u{2FA12}"=>"\u{2A105}", "\u{2FA13}"=>"\u{2A20E}", "\u{2FA14}"=>"\u{2A291}", "\u{2FA15}"=>"\u9EBB", "\u{2FA16}"=>"\u4D56", "\u{2FA17}"=>"\u9EF9", "\u{2FA18}"=>"\u9EFE", "\u{2FA19}"=>"\u9F05", "\u{2FA1A}"=>"\u9F0F", "\u{2FA1B}"=>"\u9F16", "\u{2FA1C}"=>"\u9F3B", "\u{2FA1D}"=>"\u{2A600}", }.freeze KOMPATIBLE_TABLE = { "\u00A0"=>" ", "\u00A8"=>" \u0308", "\u00AA"=>"a", "\u00AF"=>" \u0304", "\u00B2"=>"2", "\u00B3"=>"3", "\u00B4"=>" \u0301", "\u00B5"=>"\u03BC", "\u00B8"=>" \u0327", "\u00B9"=>"1", "\u00BA"=>"o", "\u00BC"=>"1\u20444", "\u00BD"=>"1\u20442", "\u00BE"=>"3\u20444", "\u0132"=>"IJ", "\u0133"=>"ij", "\u013F"=>"L\u00B7", "\u0140"=>"l\u00B7", "\u0149"=>"\u02BCn", "\u017F"=>"s", "\u01C4"=>"D\u017D", "\u01C5"=>"D\u017E", "\u01C6"=>"d\u017E", "\u01C7"=>"LJ", "\u01C8"=>"Lj", "\u01C9"=>"lj", "\u01CA"=>"NJ", "\u01CB"=>"Nj", "\u01CC"=>"nj", "\u01F1"=>"DZ", "\u01F2"=>"Dz", "\u01F3"=>"dz", "\u02B0"=>"h", "\u02B1"=>"\u0266", "\u02B2"=>"j", "\u02B3"=>"r", "\u02B4"=>"\u0279", "\u02B5"=>"\u027B", "\u02B6"=>"\u0281", "\u02B7"=>"w", "\u02B8"=>"y", "\u02D8"=>" \u0306", "\u02D9"=>" \u0307", "\u02DA"=>" \u030A", "\u02DB"=>" \u0328", "\u02DC"=>" \u0303", "\u02DD"=>" \u030B", "\u02E0"=>"\u0263", "\u02E1"=>"l", "\u02E2"=>"s", "\u02E3"=>"x", "\u02E4"=>"\u0295", "\u037A"=>" \u0345", "\u0384"=>" \u0301", "\u03D0"=>"\u03B2", "\u03D1"=>"\u03B8", "\u03D2"=>"\u03A5", "\u03D5"=>"\u03C6", "\u03D6"=>"\u03C0", "\u03F0"=>"\u03BA", "\u03F1"=>"\u03C1", "\u03F2"=>"\u03C2", "\u03F4"=>"\u0398", "\u03F5"=>"\u03B5", "\u03F9"=>"\u03A3", "\u0587"=>"\u0565\u0582", "\u0675"=>"\u0627\u0674", "\u0676"=>"\u0648\u0674", "\u0677"=>"\u06C7\u0674", "\u0678"=>"\u064A\u0674", "\u0E33"=>"\u0E4D\u0E32", "\u0EB3"=>"\u0ECD\u0EB2", "\u0EDC"=>"\u0EAB\u0E99", "\u0EDD"=>"\u0EAB\u0EA1", "\u0F0C"=>"\u0F0B", "\u0F77"=>"\u0FB2\u0F81", "\u0F79"=>"\u0FB3\u0F81", "\u10FC"=>"\u10DC", "\u1D2C"=>"A", "\u1D2D"=>"\u00C6", "\u1D2E"=>"B", "\u1D30"=>"D", "\u1D31"=>"E", "\u1D32"=>"\u018E", "\u1D33"=>"G", "\u1D34"=>"H", "\u1D35"=>"I", "\u1D36"=>"J", "\u1D37"=>"K", "\u1D38"=>"L", "\u1D39"=>"M", "\u1D3A"=>"N", "\u1D3C"=>"O", "\u1D3D"=>"\u0222", "\u1D3E"=>"P", "\u1D3F"=>"R", "\u1D40"=>"T", "\u1D41"=>"U", "\u1D42"=>"W", "\u1D43"=>"a", "\u1D44"=>"\u0250", "\u1D45"=>"\u0251", "\u1D46"=>"\u1D02", "\u1D47"=>"b", "\u1D48"=>"d", "\u1D49"=>"e", "\u1D4A"=>"\u0259", "\u1D4B"=>"\u025B", "\u1D4C"=>"\u025C", "\u1D4D"=>"g", "\u1D4F"=>"k", "\u1D50"=>"m", "\u1D51"=>"\u014B", "\u1D52"=>"o", "\u1D53"=>"\u0254", "\u1D54"=>"\u1D16", "\u1D55"=>"\u1D17", "\u1D56"=>"p", "\u1D57"=>"t", "\u1D58"=>"u", "\u1D59"=>"\u1D1D", "\u1D5A"=>"\u026F", "\u1D5B"=>"v", "\u1D5C"=>"\u1D25", "\u1D5D"=>"\u03B2", "\u1D5E"=>"\u03B3", 
"\u1D5F"=>"\u03B4", "\u1D60"=>"\u03C6", "\u1D61"=>"\u03C7", "\u1D62"=>"i", "\u1D63"=>"r", "\u1D64"=>"u", "\u1D65"=>"v", "\u1D66"=>"\u03B2", "\u1D67"=>"\u03B3", "\u1D68"=>"\u03C1", "\u1D69"=>"\u03C6", "\u1D6A"=>"\u03C7", "\u1D78"=>"\u043D", "\u1D9B"=>"\u0252", "\u1D9C"=>"c", "\u1D9D"=>"\u0255", "\u1D9E"=>"\u00F0", "\u1D9F"=>"\u025C", "\u1DA0"=>"f", "\u1DA1"=>"\u025F", "\u1DA2"=>"\u0261", "\u1DA3"=>"\u0265", "\u1DA4"=>"\u0268", "\u1DA5"=>"\u0269", "\u1DA6"=>"\u026A", "\u1DA7"=>"\u1D7B", "\u1DA8"=>"\u029D", "\u1DA9"=>"\u026D", "\u1DAA"=>"\u1D85", "\u1DAB"=>"\u029F", "\u1DAC"=>"\u0271", "\u1DAD"=>"\u0270", "\u1DAE"=>"\u0272", "\u1DAF"=>"\u0273", "\u1DB0"=>"\u0274", "\u1DB1"=>"\u0275", "\u1DB2"=>"\u0278", "\u1DB3"=>"\u0282", "\u1DB4"=>"\u0283", "\u1DB5"=>"\u01AB", "\u1DB6"=>"\u0289", "\u1DB7"=>"\u028A", "\u1DB8"=>"\u1D1C", "\u1DB9"=>"\u028B", "\u1DBA"=>"\u028C", "\u1DBB"=>"z", "\u1DBC"=>"\u0290", "\u1DBD"=>"\u0291", "\u1DBE"=>"\u0292", "\u1DBF"=>"\u03B8", "\u1E9A"=>"a\u02BE", "\u1FBD"=>" \u0313", "\u1FBF"=>" \u0313", "\u1FC0"=>" \u0342", "\u1FFE"=>" \u0314", "\u2002"=>" ", "\u2003"=>" ", "\u2004"=>" ", "\u2005"=>" ", "\u2006"=>" ", "\u2007"=>" ", "\u2008"=>" ", "\u2009"=>" ", "\u200A"=>" ", "\u2011"=>"\u2010", "\u2017"=>" \u0333", "\u2024"=>".", "\u2025"=>"..", "\u2026"=>"...", "\u202F"=>" ", "\u2033"=>"\u2032\u2032", "\u2034"=>"\u2032\u2032\u2032", "\u2036"=>"\u2035\u2035", "\u2037"=>"\u2035\u2035\u2035", "\u203C"=>"!!", "\u203E"=>" \u0305", "\u2047"=>"??", "\u2048"=>"?!", "\u2049"=>"!?", "\u2057"=>"\u2032\u2032\u2032\u2032", "\u205F"=>" ", "\u2070"=>"0", "\u2071"=>"i", "\u2074"=>"4", "\u2075"=>"5", "\u2076"=>"6", "\u2077"=>"7", "\u2078"=>"8", "\u2079"=>"9", "\u207A"=>"+", "\u207B"=>"\u2212", "\u207C"=>"=", "\u207D"=>"(", "\u207E"=>")", "\u207F"=>"n", "\u2080"=>"0", "\u2081"=>"1", "\u2082"=>"2", "\u2083"=>"3", "\u2084"=>"4", "\u2085"=>"5", "\u2086"=>"6", "\u2087"=>"7", "\u2088"=>"8", "\u2089"=>"9", "\u208A"=>"+", "\u208B"=>"\u2212", "\u208C"=>"=", "\u208D"=>"(", "\u208E"=>")", "\u2090"=>"a", "\u2091"=>"e", "\u2092"=>"o", "\u2093"=>"x", "\u2094"=>"\u0259", "\u2095"=>"h", "\u2096"=>"k", "\u2097"=>"l", "\u2098"=>"m", "\u2099"=>"n", "\u209A"=>"p", "\u209B"=>"s", "\u209C"=>"t", "\u20A8"=>"Rs", "\u2100"=>"a/c", "\u2101"=>"a/s", "\u2102"=>"C", "\u2103"=>"\u00B0C", "\u2105"=>"c/o", "\u2106"=>"c/u", "\u2107"=>"\u0190", "\u2109"=>"\u00B0F", "\u210A"=>"g", "\u210B"=>"H", "\u210C"=>"H", "\u210D"=>"H", "\u210E"=>"h", "\u210F"=>"\u0127", "\u2110"=>"I", "\u2111"=>"I", "\u2112"=>"L", "\u2113"=>"l", "\u2115"=>"N", "\u2116"=>"No", "\u2119"=>"P", "\u211A"=>"Q", "\u211B"=>"R", "\u211C"=>"R", "\u211D"=>"R", "\u2120"=>"SM", "\u2121"=>"TEL", "\u2122"=>"TM", "\u2124"=>"Z", "\u2128"=>"Z", "\u212C"=>"B", "\u212D"=>"C", "\u212F"=>"e", "\u2130"=>"E", "\u2131"=>"F", "\u2133"=>"M", "\u2134"=>"o", "\u2135"=>"\u05D0", "\u2136"=>"\u05D1", "\u2137"=>"\u05D2", "\u2138"=>"\u05D3", "\u2139"=>"i", "\u213B"=>"FAX", "\u213C"=>"\u03C0", "\u213D"=>"\u03B3", "\u213E"=>"\u0393", "\u213F"=>"\u03A0", "\u2140"=>"\u2211", "\u2145"=>"D", "\u2146"=>"d", "\u2147"=>"e", "\u2148"=>"i", "\u2149"=>"j", "\u2150"=>"1\u20447", "\u2151"=>"1\u20449", "\u2152"=>"1\u204410", "\u2153"=>"1\u20443", "\u2154"=>"2\u20443", "\u2155"=>"1\u20445", "\u2156"=>"2\u20445", "\u2157"=>"3\u20445", "\u2158"=>"4\u20445", "\u2159"=>"1\u20446", "\u215A"=>"5\u20446", "\u215B"=>"1\u20448", "\u215C"=>"3\u20448", "\u215D"=>"5\u20448", "\u215E"=>"7\u20448", "\u215F"=>"1\u2044", "\u2160"=>"I", "\u2161"=>"II", "\u2162"=>"III", "\u2163"=>"IV", "\u2164"=>"V", "\u2165"=>"VI", 
"\u2166"=>"VII", "\u2167"=>"VIII", "\u2168"=>"IX", "\u2169"=>"X", "\u216A"=>"XI", "\u216B"=>"XII", "\u216C"=>"L", "\u216D"=>"C", "\u216E"=>"D", "\u216F"=>"M", "\u2170"=>"i", "\u2171"=>"ii", "\u2172"=>"iii", "\u2173"=>"iv", "\u2174"=>"v", "\u2175"=>"vi", "\u2176"=>"vii", "\u2177"=>"viii", "\u2178"=>"ix", "\u2179"=>"x", "\u217A"=>"xi", "\u217B"=>"xii", "\u217C"=>"l", "\u217D"=>"c", "\u217E"=>"d", "\u217F"=>"m", "\u2189"=>"0\u20443", "\u222C"=>"\u222B\u222B", "\u222D"=>"\u222B\u222B\u222B", "\u222F"=>"\u222E\u222E", "\u2230"=>"\u222E\u222E\u222E", "\u2460"=>"1", "\u2461"=>"2", "\u2462"=>"3", "\u2463"=>"4", "\u2464"=>"5", "\u2465"=>"6", "\u2466"=>"7", "\u2467"=>"8", "\u2468"=>"9", "\u2469"=>"10", "\u246A"=>"11", "\u246B"=>"12", "\u246C"=>"13", "\u246D"=>"14", "\u246E"=>"15", "\u246F"=>"16", "\u2470"=>"17", "\u2471"=>"18", "\u2472"=>"19", "\u2473"=>"20", "\u2474"=>"(1)", "\u2475"=>"(2)", "\u2476"=>"(3)", "\u2477"=>"(4)", "\u2478"=>"(5)", "\u2479"=>"(6)", "\u247A"=>"(7)", "\u247B"=>"(8)", "\u247C"=>"(9)", "\u247D"=>"(10)", "\u247E"=>"(11)", "\u247F"=>"(12)", "\u2480"=>"(13)", "\u2481"=>"(14)", "\u2482"=>"(15)", "\u2483"=>"(16)", "\u2484"=>"(17)", "\u2485"=>"(18)", "\u2486"=>"(19)", "\u2487"=>"(20)", "\u2488"=>"1.", "\u2489"=>"2.", "\u248A"=>"3.", "\u248B"=>"4.", "\u248C"=>"5.", "\u248D"=>"6.", "\u248E"=>"7.", "\u248F"=>"8.", "\u2490"=>"9.", "\u2491"=>"10.", "\u2492"=>"11.", "\u2493"=>"12.", "\u2494"=>"13.", "\u2495"=>"14.", "\u2496"=>"15.", "\u2497"=>"16.", "\u2498"=>"17.", "\u2499"=>"18.", "\u249A"=>"19.", "\u249B"=>"20.", "\u249C"=>"(a)", "\u249D"=>"(b)", "\u249E"=>"(c)", "\u249F"=>"(d)", "\u24A0"=>"(e)", "\u24A1"=>"(f)", "\u24A2"=>"(g)", "\u24A3"=>"(h)", "\u24A4"=>"(i)", "\u24A5"=>"(j)", "\u24A6"=>"(k)", "\u24A7"=>"(l)", "\u24A8"=>"(m)", "\u24A9"=>"(n)", "\u24AA"=>"(o)", "\u24AB"=>"(p)", "\u24AC"=>"(q)", "\u24AD"=>"(r)", "\u24AE"=>"(s)", "\u24AF"=>"(t)", "\u24B0"=>"(u)", "\u24B1"=>"(v)", "\u24B2"=>"(w)", "\u24B3"=>"(x)", "\u24B4"=>"(y)", "\u24B5"=>"(z)", "\u24B6"=>"A", "\u24B7"=>"B", "\u24B8"=>"C", "\u24B9"=>"D", "\u24BA"=>"E", "\u24BB"=>"F", "\u24BC"=>"G", "\u24BD"=>"H", "\u24BE"=>"I", "\u24BF"=>"J", "\u24C0"=>"K", "\u24C1"=>"L", "\u24C2"=>"M", "\u24C3"=>"N", "\u24C4"=>"O", "\u24C5"=>"P", "\u24C6"=>"Q", "\u24C7"=>"R", "\u24C8"=>"S", "\u24C9"=>"T", "\u24CA"=>"U", "\u24CB"=>"V", "\u24CC"=>"W", "\u24CD"=>"X", "\u24CE"=>"Y", "\u24CF"=>"Z", "\u24D0"=>"a", "\u24D1"=>"b", "\u24D2"=>"c", "\u24D3"=>"d", "\u24D4"=>"e", "\u24D5"=>"f", "\u24D6"=>"g", "\u24D7"=>"h", "\u24D8"=>"i", "\u24D9"=>"j", "\u24DA"=>"k", "\u24DB"=>"l", "\u24DC"=>"m", "\u24DD"=>"n", "\u24DE"=>"o", "\u24DF"=>"p", "\u24E0"=>"q", "\u24E1"=>"r", "\u24E2"=>"s", "\u24E3"=>"t", "\u24E4"=>"u", "\u24E5"=>"v", "\u24E6"=>"w", "\u24E7"=>"x", "\u24E8"=>"y", "\u24E9"=>"z", "\u24EA"=>"0", "\u2A0C"=>"\u222B\u222B\u222B\u222B", "\u2A74"=>"::=", "\u2A75"=>"==", "\u2A76"=>"===", "\u2C7C"=>"j", "\u2C7D"=>"V", "\u2D6F"=>"\u2D61", "\u2E9F"=>"\u6BCD", "\u2EF3"=>"\u9F9F", "\u2F00"=>"\u4E00", "\u2F01"=>"\u4E28", "\u2F02"=>"\u4E36", "\u2F03"=>"\u4E3F", "\u2F04"=>"\u4E59", "\u2F05"=>"\u4E85", "\u2F06"=>"\u4E8C", "\u2F07"=>"\u4EA0", "\u2F08"=>"\u4EBA", "\u2F09"=>"\u513F", "\u2F0A"=>"\u5165", "\u2F0B"=>"\u516B", "\u2F0C"=>"\u5182", "\u2F0D"=>"\u5196", "\u2F0E"=>"\u51AB", "\u2F0F"=>"\u51E0", "\u2F10"=>"\u51F5", "\u2F11"=>"\u5200", "\u2F12"=>"\u529B", "\u2F13"=>"\u52F9", "\u2F14"=>"\u5315", "\u2F15"=>"\u531A", "\u2F16"=>"\u5338", "\u2F17"=>"\u5341", "\u2F18"=>"\u535C", "\u2F19"=>"\u5369", "\u2F1A"=>"\u5382", "\u2F1B"=>"\u53B6", "\u2F1C"=>"\u53C8", 
"\u2F1D"=>"\u53E3", "\u2F1E"=>"\u56D7", "\u2F1F"=>"\u571F", "\u2F20"=>"\u58EB", "\u2F21"=>"\u5902", "\u2F22"=>"\u590A", "\u2F23"=>"\u5915", "\u2F24"=>"\u5927", "\u2F25"=>"\u5973", "\u2F26"=>"\u5B50", "\u2F27"=>"\u5B80", "\u2F28"=>"\u5BF8", "\u2F29"=>"\u5C0F", "\u2F2A"=>"\u5C22", "\u2F2B"=>"\u5C38", "\u2F2C"=>"\u5C6E", "\u2F2D"=>"\u5C71", "\u2F2E"=>"\u5DDB", "\u2F2F"=>"\u5DE5", "\u2F30"=>"\u5DF1", "\u2F31"=>"\u5DFE", "\u2F32"=>"\u5E72", "\u2F33"=>"\u5E7A", "\u2F34"=>"\u5E7F", "\u2F35"=>"\u5EF4", "\u2F36"=>"\u5EFE", "\u2F37"=>"\u5F0B", "\u2F38"=>"\u5F13", "\u2F39"=>"\u5F50", "\u2F3A"=>"\u5F61", "\u2F3B"=>"\u5F73", "\u2F3C"=>"\u5FC3", "\u2F3D"=>"\u6208", "\u2F3E"=>"\u6236", "\u2F3F"=>"\u624B", "\u2F40"=>"\u652F", "\u2F41"=>"\u6534", "\u2F42"=>"\u6587", "\u2F43"=>"\u6597", "\u2F44"=>"\u65A4", "\u2F45"=>"\u65B9", "\u2F46"=>"\u65E0", "\u2F47"=>"\u65E5", "\u2F48"=>"\u66F0", "\u2F49"=>"\u6708", "\u2F4A"=>"\u6728", "\u2F4B"=>"\u6B20", "\u2F4C"=>"\u6B62", "\u2F4D"=>"\u6B79", "\u2F4E"=>"\u6BB3", "\u2F4F"=>"\u6BCB", "\u2F50"=>"\u6BD4", "\u2F51"=>"\u6BDB", "\u2F52"=>"\u6C0F", "\u2F53"=>"\u6C14", "\u2F54"=>"\u6C34", "\u2F55"=>"\u706B", "\u2F56"=>"\u722A", "\u2F57"=>"\u7236", "\u2F58"=>"\u723B", "\u2F59"=>"\u723F", "\u2F5A"=>"\u7247", "\u2F5B"=>"\u7259", "\u2F5C"=>"\u725B", "\u2F5D"=>"\u72AC", "\u2F5E"=>"\u7384", "\u2F5F"=>"\u7389", "\u2F60"=>"\u74DC", "\u2F61"=>"\u74E6", "\u2F62"=>"\u7518", "\u2F63"=>"\u751F", "\u2F64"=>"\u7528", "\u2F65"=>"\u7530", "\u2F66"=>"\u758B", "\u2F67"=>"\u7592", "\u2F68"=>"\u7676", "\u2F69"=>"\u767D", "\u2F6A"=>"\u76AE", "\u2F6B"=>"\u76BF", "\u2F6C"=>"\u76EE", "\u2F6D"=>"\u77DB", "\u2F6E"=>"\u77E2", "\u2F6F"=>"\u77F3", "\u2F70"=>"\u793A", "\u2F71"=>"\u79B8", "\u2F72"=>"\u79BE", "\u2F73"=>"\u7A74", "\u2F74"=>"\u7ACB", "\u2F75"=>"\u7AF9", "\u2F76"=>"\u7C73", "\u2F77"=>"\u7CF8", "\u2F78"=>"\u7F36", "\u2F79"=>"\u7F51", "\u2F7A"=>"\u7F8A", "\u2F7B"=>"\u7FBD", "\u2F7C"=>"\u8001", "\u2F7D"=>"\u800C", "\u2F7E"=>"\u8012", "\u2F7F"=>"\u8033", "\u2F80"=>"\u807F", "\u2F81"=>"\u8089", "\u2F82"=>"\u81E3", "\u2F83"=>"\u81EA", "\u2F84"=>"\u81F3", "\u2F85"=>"\u81FC", "\u2F86"=>"\u820C", "\u2F87"=>"\u821B", "\u2F88"=>"\u821F", "\u2F89"=>"\u826E", "\u2F8A"=>"\u8272", "\u2F8B"=>"\u8278", "\u2F8C"=>"\u864D", "\u2F8D"=>"\u866B", "\u2F8E"=>"\u8840", "\u2F8F"=>"\u884C", "\u2F90"=>"\u8863", "\u2F91"=>"\u897E", "\u2F92"=>"\u898B", "\u2F93"=>"\u89D2", "\u2F94"=>"\u8A00", "\u2F95"=>"\u8C37", "\u2F96"=>"\u8C46", "\u2F97"=>"\u8C55", "\u2F98"=>"\u8C78", "\u2F99"=>"\u8C9D", "\u2F9A"=>"\u8D64", "\u2F9B"=>"\u8D70", "\u2F9C"=>"\u8DB3", "\u2F9D"=>"\u8EAB", "\u2F9E"=>"\u8ECA", "\u2F9F"=>"\u8F9B", "\u2FA0"=>"\u8FB0", "\u2FA1"=>"\u8FB5", "\u2FA2"=>"\u9091", "\u2FA3"=>"\u9149", "\u2FA4"=>"\u91C6", "\u2FA5"=>"\u91CC", "\u2FA6"=>"\u91D1", "\u2FA7"=>"\u9577", "\u2FA8"=>"\u9580", "\u2FA9"=>"\u961C", "\u2FAA"=>"\u96B6", "\u2FAB"=>"\u96B9", "\u2FAC"=>"\u96E8", "\u2FAD"=>"\u9751", "\u2FAE"=>"\u975E", "\u2FAF"=>"\u9762", "\u2FB0"=>"\u9769", "\u2FB1"=>"\u97CB", "\u2FB2"=>"\u97ED", "\u2FB3"=>"\u97F3", "\u2FB4"=>"\u9801", "\u2FB5"=>"\u98A8", "\u2FB6"=>"\u98DB", "\u2FB7"=>"\u98DF", "\u2FB8"=>"\u9996", "\u2FB9"=>"\u9999", "\u2FBA"=>"\u99AC", "\u2FBB"=>"\u9AA8", "\u2FBC"=>"\u9AD8", "\u2FBD"=>"\u9ADF", "\u2FBE"=>"\u9B25", "\u2FBF"=>"\u9B2F", "\u2FC0"=>"\u9B32", "\u2FC1"=>"\u9B3C", "\u2FC2"=>"\u9B5A", "\u2FC3"=>"\u9CE5", "\u2FC4"=>"\u9E75", "\u2FC5"=>"\u9E7F", "\u2FC6"=>"\u9EA5", "\u2FC7"=>"\u9EBB", "\u2FC8"=>"\u9EC3", "\u2FC9"=>"\u9ECD", "\u2FCA"=>"\u9ED1", "\u2FCB"=>"\u9EF9", "\u2FCC"=>"\u9EFD", "\u2FCD"=>"\u9F0E", 
"\u2FCE"=>"\u9F13", "\u2FCF"=>"\u9F20", "\u2FD0"=>"\u9F3B", "\u2FD1"=>"\u9F4A", "\u2FD2"=>"\u9F52", "\u2FD3"=>"\u9F8D", "\u2FD4"=>"\u9F9C", "\u2FD5"=>"\u9FA0", "\u3000"=>" ", "\u3036"=>"\u3012", "\u3038"=>"\u5341", "\u3039"=>"\u5344", "\u303A"=>"\u5345", "\u309B"=>" \u3099", "\u309C"=>" \u309A", "\u309F"=>"\u3088\u308A", "\u30FF"=>"\u30B3\u30C8", "\u3131"=>"\u1100", "\u3132"=>"\u1101", "\u3133"=>"\u11AA", "\u3134"=>"\u1102", "\u3135"=>"\u11AC", "\u3136"=>"\u11AD", "\u3137"=>"\u1103", "\u3138"=>"\u1104", "\u3139"=>"\u1105", "\u313A"=>"\u11B0", "\u313B"=>"\u11B1", "\u313C"=>"\u11B2", "\u313D"=>"\u11B3", "\u313E"=>"\u11B4", "\u313F"=>"\u11B5", "\u3140"=>"\u111A", "\u3141"=>"\u1106", "\u3142"=>"\u1107", "\u3143"=>"\u1108", "\u3144"=>"\u1121", "\u3145"=>"\u1109", "\u3146"=>"\u110A", "\u3147"=>"\u110B", "\u3148"=>"\u110C", "\u3149"=>"\u110D", "\u314A"=>"\u110E", "\u314B"=>"\u110F", "\u314C"=>"\u1110", "\u314D"=>"\u1111", "\u314E"=>"\u1112", "\u314F"=>"\u1161", "\u3150"=>"\u1162", "\u3151"=>"\u1163", "\u3152"=>"\u1164", "\u3153"=>"\u1165", "\u3154"=>"\u1166", "\u3155"=>"\u1167", "\u3156"=>"\u1168", "\u3157"=>"\u1169", "\u3158"=>"\u116A", "\u3159"=>"\u116B", "\u315A"=>"\u116C", "\u315B"=>"\u116D", "\u315C"=>"\u116E", "\u315D"=>"\u116F", "\u315E"=>"\u1170", "\u315F"=>"\u1171", "\u3160"=>"\u1172", "\u3161"=>"\u1173", "\u3162"=>"\u1174", "\u3163"=>"\u1175", "\u3164"=>"\u1160", "\u3165"=>"\u1114", "\u3166"=>"\u1115", "\u3167"=>"\u11C7", "\u3168"=>"\u11C8", "\u3169"=>"\u11CC", "\u316A"=>"\u11CE", "\u316B"=>"\u11D3", "\u316C"=>"\u11D7", "\u316D"=>"\u11D9", "\u316E"=>"\u111C", "\u316F"=>"\u11DD", "\u3170"=>"\u11DF", "\u3171"=>"\u111D", "\u3172"=>"\u111E", "\u3173"=>"\u1120", "\u3174"=>"\u1122", "\u3175"=>"\u1123", "\u3176"=>"\u1127", "\u3177"=>"\u1129", "\u3178"=>"\u112B", "\u3179"=>"\u112C", "\u317A"=>"\u112D", "\u317B"=>"\u112E", "\u317C"=>"\u112F", "\u317D"=>"\u1132", "\u317E"=>"\u1136", "\u317F"=>"\u1140", "\u3180"=>"\u1147", "\u3181"=>"\u114C", "\u3182"=>"\u11F1", "\u3183"=>"\u11F2", "\u3184"=>"\u1157", "\u3185"=>"\u1158", "\u3186"=>"\u1159", "\u3187"=>"\u1184", "\u3188"=>"\u1185", "\u3189"=>"\u1188", "\u318A"=>"\u1191", "\u318B"=>"\u1192", "\u318C"=>"\u1194", "\u318D"=>"\u119E", "\u318E"=>"\u11A1", "\u3192"=>"\u4E00", "\u3193"=>"\u4E8C", "\u3194"=>"\u4E09", "\u3195"=>"\u56DB", "\u3196"=>"\u4E0A", "\u3197"=>"\u4E2D", "\u3198"=>"\u4E0B", "\u3199"=>"\u7532", "\u319A"=>"\u4E59", "\u319B"=>"\u4E19", "\u319C"=>"\u4E01", "\u319D"=>"\u5929", "\u319E"=>"\u5730", "\u319F"=>"\u4EBA", "\u3200"=>"(\u1100)", "\u3201"=>"(\u1102)", "\u3202"=>"(\u1103)", "\u3203"=>"(\u1105)", "\u3204"=>"(\u1106)", "\u3205"=>"(\u1107)", "\u3206"=>"(\u1109)", "\u3207"=>"(\u110B)", "\u3208"=>"(\u110C)", "\u3209"=>"(\u110E)", "\u320A"=>"(\u110F)", "\u320B"=>"(\u1110)", "\u320C"=>"(\u1111)", "\u320D"=>"(\u1112)", "\u320E"=>"(\u1100\u1161)", "\u320F"=>"(\u1102\u1161)", "\u3210"=>"(\u1103\u1161)", "\u3211"=>"(\u1105\u1161)", "\u3212"=>"(\u1106\u1161)", "\u3213"=>"(\u1107\u1161)", "\u3214"=>"(\u1109\u1161)", "\u3215"=>"(\u110B\u1161)", "\u3216"=>"(\u110C\u1161)", "\u3217"=>"(\u110E\u1161)", "\u3218"=>"(\u110F\u1161)", "\u3219"=>"(\u1110\u1161)", "\u321A"=>"(\u1111\u1161)", "\u321B"=>"(\u1112\u1161)", "\u321C"=>"(\u110C\u116E)", "\u321D"=>"(\u110B\u1169\u110C\u1165\u11AB)", "\u321E"=>"(\u110B\u1169\u1112\u116E)", "\u3220"=>"(\u4E00)", "\u3221"=>"(\u4E8C)", "\u3222"=>"(\u4E09)", "\u3223"=>"(\u56DB)", "\u3224"=>"(\u4E94)", "\u3225"=>"(\u516D)", "\u3226"=>"(\u4E03)", "\u3227"=>"(\u516B)", "\u3228"=>"(\u4E5D)", "\u3229"=>"(\u5341)", 
"\u322A"=>"(\u6708)", "\u322B"=>"(\u706B)", "\u322C"=>"(\u6C34)", "\u322D"=>"(\u6728)", "\u322E"=>"(\u91D1)", "\u322F"=>"(\u571F)", "\u3230"=>"(\u65E5)", "\u3231"=>"(\u682A)", "\u3232"=>"(\u6709)", "\u3233"=>"(\u793E)", "\u3234"=>"(\u540D)", "\u3235"=>"(\u7279)", "\u3236"=>"(\u8CA1)", "\u3237"=>"(\u795D)", "\u3238"=>"(\u52B4)", "\u3239"=>"(\u4EE3)", "\u323A"=>"(\u547C)", "\u323B"=>"(\u5B66)", "\u323C"=>"(\u76E3)", "\u323D"=>"(\u4F01)", "\u323E"=>"(\u8CC7)", "\u323F"=>"(\u5354)", "\u3240"=>"(\u796D)", "\u3241"=>"(\u4F11)", "\u3242"=>"(\u81EA)", "\u3243"=>"(\u81F3)", "\u3244"=>"\u554F", "\u3245"=>"\u5E7C", "\u3246"=>"\u6587", "\u3247"=>"\u7B8F", "\u3250"=>"PTE", "\u3251"=>"21", "\u3252"=>"22", "\u3253"=>"23", "\u3254"=>"24", "\u3255"=>"25", "\u3256"=>"26", "\u3257"=>"27", "\u3258"=>"28", "\u3259"=>"29", "\u325A"=>"30", "\u325B"=>"31", "\u325C"=>"32", "\u325D"=>"33", "\u325E"=>"34", "\u325F"=>"35", "\u3260"=>"\u1100", "\u3261"=>"\u1102", "\u3262"=>"\u1103", "\u3263"=>"\u1105", "\u3264"=>"\u1106", "\u3265"=>"\u1107", "\u3266"=>"\u1109", "\u3267"=>"\u110B", "\u3268"=>"\u110C", "\u3269"=>"\u110E", "\u326A"=>"\u110F", "\u326B"=>"\u1110", "\u326C"=>"\u1111", "\u326D"=>"\u1112", "\u326E"=>"\u1100\u1161", "\u326F"=>"\u1102\u1161", "\u3270"=>"\u1103\u1161", "\u3271"=>"\u1105\u1161", "\u3272"=>"\u1106\u1161", "\u3273"=>"\u1107\u1161", "\u3274"=>"\u1109\u1161", "\u3275"=>"\u110B\u1161", "\u3276"=>"\u110C\u1161", "\u3277"=>"\u110E\u1161", "\u3278"=>"\u110F\u1161", "\u3279"=>"\u1110\u1161", "\u327A"=>"\u1111\u1161", "\u327B"=>"\u1112\u1161", "\u327C"=>"\u110E\u1161\u11B7\u1100\u1169", "\u327D"=>"\u110C\u116E\u110B\u1174", "\u327E"=>"\u110B\u116E", "\u3280"=>"\u4E00", "\u3281"=>"\u4E8C", "\u3282"=>"\u4E09", "\u3283"=>"\u56DB", "\u3284"=>"\u4E94", "\u3285"=>"\u516D", "\u3286"=>"\u4E03", "\u3287"=>"\u516B", "\u3288"=>"\u4E5D", "\u3289"=>"\u5341", "\u328A"=>"\u6708", "\u328B"=>"\u706B", "\u328C"=>"\u6C34", "\u328D"=>"\u6728", "\u328E"=>"\u91D1", "\u328F"=>"\u571F", "\u3290"=>"\u65E5", "\u3291"=>"\u682A", "\u3292"=>"\u6709", "\u3293"=>"\u793E", "\u3294"=>"\u540D", "\u3295"=>"\u7279", "\u3296"=>"\u8CA1", "\u3297"=>"\u795D", "\u3298"=>"\u52B4", "\u3299"=>"\u79D8", "\u329A"=>"\u7537", "\u329B"=>"\u5973", "\u329C"=>"\u9069", "\u329D"=>"\u512A", "\u329E"=>"\u5370", "\u329F"=>"\u6CE8", "\u32A0"=>"\u9805", "\u32A1"=>"\u4F11", "\u32A2"=>"\u5199", "\u32A3"=>"\u6B63", "\u32A4"=>"\u4E0A", "\u32A5"=>"\u4E2D", "\u32A6"=>"\u4E0B", "\u32A7"=>"\u5DE6", "\u32A8"=>"\u53F3", "\u32A9"=>"\u533B", "\u32AA"=>"\u5B97", "\u32AB"=>"\u5B66", "\u32AC"=>"\u76E3", "\u32AD"=>"\u4F01", "\u32AE"=>"\u8CC7", "\u32AF"=>"\u5354", "\u32B0"=>"\u591C", "\u32B1"=>"36", "\u32B2"=>"37", "\u32B3"=>"38", "\u32B4"=>"39", "\u32B5"=>"40", "\u32B6"=>"41", "\u32B7"=>"42", "\u32B8"=>"43", "\u32B9"=>"44", "\u32BA"=>"45", "\u32BB"=>"46", "\u32BC"=>"47", "\u32BD"=>"48", "\u32BE"=>"49", "\u32BF"=>"50", "\u32C0"=>"1\u6708", "\u32C1"=>"2\u6708", "\u32C2"=>"3\u6708", "\u32C3"=>"4\u6708", "\u32C4"=>"5\u6708", "\u32C5"=>"6\u6708", "\u32C6"=>"7\u6708", "\u32C7"=>"8\u6708", "\u32C8"=>"9\u6708", "\u32C9"=>"10\u6708", "\u32CA"=>"11\u6708", "\u32CB"=>"12\u6708", "\u32CC"=>"Hg", "\u32CD"=>"erg", "\u32CE"=>"eV", "\u32CF"=>"LTD", "\u32D0"=>"\u30A2", "\u32D1"=>"\u30A4", "\u32D2"=>"\u30A6", "\u32D3"=>"\u30A8", "\u32D4"=>"\u30AA", "\u32D5"=>"\u30AB", "\u32D6"=>"\u30AD", "\u32D7"=>"\u30AF", "\u32D8"=>"\u30B1", "\u32D9"=>"\u30B3", "\u32DA"=>"\u30B5", "\u32DB"=>"\u30B7", "\u32DC"=>"\u30B9", "\u32DD"=>"\u30BB", "\u32DE"=>"\u30BD", "\u32DF"=>"\u30BF", "\u32E0"=>"\u30C1", 
"\u32E1"=>"\u30C4", "\u32E2"=>"\u30C6", "\u32E3"=>"\u30C8", "\u32E4"=>"\u30CA", "\u32E5"=>"\u30CB", "\u32E6"=>"\u30CC", "\u32E7"=>"\u30CD", "\u32E8"=>"\u30CE", "\u32E9"=>"\u30CF", "\u32EA"=>"\u30D2", "\u32EB"=>"\u30D5", "\u32EC"=>"\u30D8", "\u32ED"=>"\u30DB", "\u32EE"=>"\u30DE", "\u32EF"=>"\u30DF", "\u32F0"=>"\u30E0", "\u32F1"=>"\u30E1", "\u32F2"=>"\u30E2", "\u32F3"=>"\u30E4", "\u32F4"=>"\u30E6", "\u32F5"=>"\u30E8", "\u32F6"=>"\u30E9", "\u32F7"=>"\u30EA", "\u32F8"=>"\u30EB", "\u32F9"=>"\u30EC", "\u32FA"=>"\u30ED", "\u32FB"=>"\u30EF", "\u32FC"=>"\u30F0", "\u32FD"=>"\u30F1", "\u32FE"=>"\u30F2", "\u3300"=>"\u30A2\u30D1\u30FC\u30C8", "\u3301"=>"\u30A2\u30EB\u30D5\u30A1", "\u3302"=>"\u30A2\u30F3\u30DA\u30A2", "\u3303"=>"\u30A2\u30FC\u30EB", "\u3304"=>"\u30A4\u30CB\u30F3\u30B0", "\u3305"=>"\u30A4\u30F3\u30C1", "\u3306"=>"\u30A6\u30A9\u30F3", "\u3307"=>"\u30A8\u30B9\u30AF\u30FC\u30C9", "\u3308"=>"\u30A8\u30FC\u30AB\u30FC", "\u3309"=>"\u30AA\u30F3\u30B9", "\u330A"=>"\u30AA\u30FC\u30E0", "\u330B"=>"\u30AB\u30A4\u30EA", "\u330C"=>"\u30AB\u30E9\u30C3\u30C8", "\u330D"=>"\u30AB\u30ED\u30EA\u30FC", "\u330E"=>"\u30AC\u30ED\u30F3", "\u330F"=>"\u30AC\u30F3\u30DE", "\u3310"=>"\u30AE\u30AC", "\u3311"=>"\u30AE\u30CB\u30FC", "\u3312"=>"\u30AD\u30E5\u30EA\u30FC", "\u3313"=>"\u30AE\u30EB\u30C0\u30FC", "\u3314"=>"\u30AD\u30ED", "\u3315"=>"\u30AD\u30ED\u30B0\u30E9\u30E0", "\u3316"=>"\u30AD\u30ED\u30E1\u30FC\u30C8\u30EB", "\u3317"=>"\u30AD\u30ED\u30EF\u30C3\u30C8", "\u3318"=>"\u30B0\u30E9\u30E0", "\u3319"=>"\u30B0\u30E9\u30E0\u30C8\u30F3", "\u331A"=>"\u30AF\u30EB\u30BC\u30A4\u30ED", "\u331B"=>"\u30AF\u30ED\u30FC\u30CD", "\u331C"=>"\u30B1\u30FC\u30B9", "\u331D"=>"\u30B3\u30EB\u30CA", "\u331E"=>"\u30B3\u30FC\u30DD", "\u331F"=>"\u30B5\u30A4\u30AF\u30EB", "\u3320"=>"\u30B5\u30F3\u30C1\u30FC\u30E0", "\u3321"=>"\u30B7\u30EA\u30F3\u30B0", "\u3322"=>"\u30BB\u30F3\u30C1", "\u3323"=>"\u30BB\u30F3\u30C8", "\u3324"=>"\u30C0\u30FC\u30B9", "\u3325"=>"\u30C7\u30B7", "\u3326"=>"\u30C9\u30EB", "\u3327"=>"\u30C8\u30F3", "\u3328"=>"\u30CA\u30CE", "\u3329"=>"\u30CE\u30C3\u30C8", "\u332A"=>"\u30CF\u30A4\u30C4", "\u332B"=>"\u30D1\u30FC\u30BB\u30F3\u30C8", "\u332C"=>"\u30D1\u30FC\u30C4", "\u332D"=>"\u30D0\u30FC\u30EC\u30EB", "\u332E"=>"\u30D4\u30A2\u30B9\u30C8\u30EB", "\u332F"=>"\u30D4\u30AF\u30EB", "\u3330"=>"\u30D4\u30B3", "\u3331"=>"\u30D3\u30EB", "\u3332"=>"\u30D5\u30A1\u30E9\u30C3\u30C9", "\u3333"=>"\u30D5\u30A3\u30FC\u30C8", "\u3334"=>"\u30D6\u30C3\u30B7\u30A7\u30EB", "\u3335"=>"\u30D5\u30E9\u30F3", "\u3336"=>"\u30D8\u30AF\u30BF\u30FC\u30EB", "\u3337"=>"\u30DA\u30BD", "\u3338"=>"\u30DA\u30CB\u30D2", "\u3339"=>"\u30D8\u30EB\u30C4", "\u333A"=>"\u30DA\u30F3\u30B9", "\u333B"=>"\u30DA\u30FC\u30B8", "\u333C"=>"\u30D9\u30FC\u30BF", "\u333D"=>"\u30DD\u30A4\u30F3\u30C8", "\u333E"=>"\u30DC\u30EB\u30C8", "\u333F"=>"\u30DB\u30F3", "\u3340"=>"\u30DD\u30F3\u30C9", "\u3341"=>"\u30DB\u30FC\u30EB", "\u3342"=>"\u30DB\u30FC\u30F3", "\u3343"=>"\u30DE\u30A4\u30AF\u30ED", "\u3344"=>"\u30DE\u30A4\u30EB", "\u3345"=>"\u30DE\u30C3\u30CF", "\u3346"=>"\u30DE\u30EB\u30AF", "\u3347"=>"\u30DE\u30F3\u30B7\u30E7\u30F3", "\u3348"=>"\u30DF\u30AF\u30ED\u30F3", "\u3349"=>"\u30DF\u30EA", "\u334A"=>"\u30DF\u30EA\u30D0\u30FC\u30EB", "\u334B"=>"\u30E1\u30AC", "\u334C"=>"\u30E1\u30AC\u30C8\u30F3", "\u334D"=>"\u30E1\u30FC\u30C8\u30EB", "\u334E"=>"\u30E4\u30FC\u30C9", "\u334F"=>"\u30E4\u30FC\u30EB", "\u3350"=>"\u30E6\u30A2\u30F3", "\u3351"=>"\u30EA\u30C3\u30C8\u30EB", "\u3352"=>"\u30EA\u30E9", "\u3353"=>"\u30EB\u30D4\u30FC", "\u3354"=>"\u30EB\u30FC\u30D6\u30EB", 
"\u3355"=>"\u30EC\u30E0", "\u3356"=>"\u30EC\u30F3\u30C8\u30B2\u30F3", "\u3357"=>"\u30EF\u30C3\u30C8", "\u3358"=>"0\u70B9", "\u3359"=>"1\u70B9", "\u335A"=>"2\u70B9", "\u335B"=>"3\u70B9", "\u335C"=>"4\u70B9", "\u335D"=>"5\u70B9", "\u335E"=>"6\u70B9", "\u335F"=>"7\u70B9", "\u3360"=>"8\u70B9", "\u3361"=>"9\u70B9", "\u3362"=>"10\u70B9", "\u3363"=>"11\u70B9", "\u3364"=>"12\u70B9", "\u3365"=>"13\u70B9", "\u3366"=>"14\u70B9", "\u3367"=>"15\u70B9", "\u3368"=>"16\u70B9", "\u3369"=>"17\u70B9", "\u336A"=>"18\u70B9", "\u336B"=>"19\u70B9", "\u336C"=>"20\u70B9", "\u336D"=>"21\u70B9", "\u336E"=>"22\u70B9", "\u336F"=>"23\u70B9", "\u3370"=>"24\u70B9", "\u3371"=>"hPa", "\u3372"=>"da", "\u3373"=>"AU", "\u3374"=>"bar", "\u3375"=>"oV", "\u3376"=>"pc", "\u3377"=>"dm", "\u3378"=>"dm2", "\u3379"=>"dm3", "\u337A"=>"IU", "\u337B"=>"\u5E73\u6210", "\u337C"=>"\u662D\u548C", "\u337D"=>"\u5927\u6B63", "\u337E"=>"\u660E\u6CBB", "\u337F"=>"\u682A\u5F0F\u4F1A\u793E", "\u3380"=>"pA", "\u3381"=>"nA", "\u3382"=>"\u03BCA", "\u3383"=>"mA", "\u3384"=>"kA", "\u3385"=>"KB", "\u3386"=>"MB", "\u3387"=>"GB", "\u3388"=>"cal", "\u3389"=>"kcal", "\u338A"=>"pF", "\u338B"=>"nF", "\u338C"=>"\u03BCF", "\u338D"=>"\u03BCg", "\u338E"=>"mg", "\u338F"=>"kg", "\u3390"=>"Hz", "\u3391"=>"kHz", "\u3392"=>"MHz", "\u3393"=>"GHz", "\u3394"=>"THz", "\u3395"=>"\u03BCl", "\u3396"=>"ml", "\u3397"=>"dl", "\u3398"=>"kl", "\u3399"=>"fm", "\u339A"=>"nm", "\u339B"=>"\u03BCm", "\u339C"=>"mm", "\u339D"=>"cm", "\u339E"=>"km", "\u339F"=>"mm2", "\u33A0"=>"cm2", "\u33A1"=>"m2", "\u33A2"=>"km2", "\u33A3"=>"mm3", "\u33A4"=>"cm3", "\u33A5"=>"m3", "\u33A6"=>"km3", "\u33A7"=>"m\u2215s", "\u33A8"=>"m\u2215s2", "\u33A9"=>"Pa", "\u33AA"=>"kPa", "\u33AB"=>"MPa", "\u33AC"=>"GPa", "\u33AD"=>"rad", "\u33AE"=>"rad\u2215s", "\u33AF"=>"rad\u2215s2", "\u33B0"=>"ps", "\u33B1"=>"ns", "\u33B2"=>"\u03BCs", "\u33B3"=>"ms", "\u33B4"=>"pV", "\u33B5"=>"nV", "\u33B6"=>"\u03BCV", "\u33B7"=>"mV", "\u33B8"=>"kV", "\u33B9"=>"MV", "\u33BA"=>"pW", "\u33BB"=>"nW", "\u33BC"=>"\u03BCW", "\u33BD"=>"mW", "\u33BE"=>"kW", "\u33BF"=>"MW", "\u33C0"=>"k\u03A9", "\u33C1"=>"M\u03A9", "\u33C2"=>"a.m.", "\u33C3"=>"Bq", "\u33C4"=>"cc", "\u33C5"=>"cd", "\u33C6"=>"C\u2215kg", "\u33C7"=>"Co.", "\u33C8"=>"dB", "\u33C9"=>"Gy", "\u33CA"=>"ha", "\u33CB"=>"HP", "\u33CC"=>"in", "\u33CD"=>"KK", "\u33CE"=>"KM", "\u33CF"=>"kt", "\u33D0"=>"lm", "\u33D1"=>"ln", "\u33D2"=>"log", "\u33D3"=>"lx", "\u33D4"=>"mb", "\u33D5"=>"mil", "\u33D6"=>"mol", "\u33D7"=>"PH", "\u33D8"=>"p.m.", "\u33D9"=>"PPM", "\u33DA"=>"PR", "\u33DB"=>"sr", "\u33DC"=>"Sv", "\u33DD"=>"Wb", "\u33DE"=>"V\u2215m", "\u33DF"=>"A\u2215m", "\u33E0"=>"1\u65E5", "\u33E1"=>"2\u65E5", "\u33E2"=>"3\u65E5", "\u33E3"=>"4\u65E5", "\u33E4"=>"5\u65E5", "\u33E5"=>"6\u65E5", "\u33E6"=>"7\u65E5", "\u33E7"=>"8\u65E5", "\u33E8"=>"9\u65E5", "\u33E9"=>"10\u65E5", "\u33EA"=>"11\u65E5", "\u33EB"=>"12\u65E5", "\u33EC"=>"13\u65E5", "\u33ED"=>"14\u65E5", "\u33EE"=>"15\u65E5", "\u33EF"=>"16\u65E5", "\u33F0"=>"17\u65E5", "\u33F1"=>"18\u65E5", "\u33F2"=>"19\u65E5", "\u33F3"=>"20\u65E5", "\u33F4"=>"21\u65E5", "\u33F5"=>"22\u65E5", "\u33F6"=>"23\u65E5", "\u33F7"=>"24\u65E5", "\u33F8"=>"25\u65E5", "\u33F9"=>"26\u65E5", "\u33FA"=>"27\u65E5", "\u33FB"=>"28\u65E5", "\u33FC"=>"29\u65E5", "\u33FD"=>"30\u65E5", "\u33FE"=>"31\u65E5", "\u33FF"=>"gal", "\uA69C"=>"\u044A", "\uA69D"=>"\u044C", "\uA770"=>"\uA76F", "\uA7F8"=>"\u0126", "\uA7F9"=>"\u0153", "\uAB5C"=>"\uA727", "\uAB5D"=>"\uAB37", "\uAB5E"=>"\u026B", "\uAB5F"=>"\uAB52", "\uFB00"=>"ff", "\uFB01"=>"fi", "\uFB02"=>"fl", "\uFB03"=>"ffi", 
"\uFB04"=>"ffl", "\uFB05"=>"st", "\uFB06"=>"st", "\uFB13"=>"\u0574\u0576", "\uFB14"=>"\u0574\u0565", "\uFB15"=>"\u0574\u056B", "\uFB16"=>"\u057E\u0576", "\uFB17"=>"\u0574\u056D", "\uFB20"=>"\u05E2", "\uFB21"=>"\u05D0", "\uFB22"=>"\u05D3", "\uFB23"=>"\u05D4", "\uFB24"=>"\u05DB", "\uFB25"=>"\u05DC", "\uFB26"=>"\u05DD", "\uFB27"=>"\u05E8", "\uFB28"=>"\u05EA", "\uFB29"=>"+", "\uFB4F"=>"\u05D0\u05DC", "\uFB50"=>"\u0671", "\uFB51"=>"\u0671", "\uFB52"=>"\u067B", "\uFB53"=>"\u067B", "\uFB54"=>"\u067B", "\uFB55"=>"\u067B", "\uFB56"=>"\u067E", "\uFB57"=>"\u067E", "\uFB58"=>"\u067E", "\uFB59"=>"\u067E", "\uFB5A"=>"\u0680", "\uFB5B"=>"\u0680", "\uFB5C"=>"\u0680", "\uFB5D"=>"\u0680", "\uFB5E"=>"\u067A", "\uFB5F"=>"\u067A", "\uFB60"=>"\u067A", "\uFB61"=>"\u067A", "\uFB62"=>"\u067F", "\uFB63"=>"\u067F", "\uFB64"=>"\u067F", "\uFB65"=>"\u067F", "\uFB66"=>"\u0679", "\uFB67"=>"\u0679", "\uFB68"=>"\u0679", "\uFB69"=>"\u0679", "\uFB6A"=>"\u06A4", "\uFB6B"=>"\u06A4", "\uFB6C"=>"\u06A4", "\uFB6D"=>"\u06A4", "\uFB6E"=>"\u06A6", "\uFB6F"=>"\u06A6", "\uFB70"=>"\u06A6", "\uFB71"=>"\u06A6", "\uFB72"=>"\u0684", "\uFB73"=>"\u0684", "\uFB74"=>"\u0684", "\uFB75"=>"\u0684", "\uFB76"=>"\u0683", "\uFB77"=>"\u0683", "\uFB78"=>"\u0683", "\uFB79"=>"\u0683", "\uFB7A"=>"\u0686", "\uFB7B"=>"\u0686", "\uFB7C"=>"\u0686", "\uFB7D"=>"\u0686", "\uFB7E"=>"\u0687", "\uFB7F"=>"\u0687", "\uFB80"=>"\u0687", "\uFB81"=>"\u0687", "\uFB82"=>"\u068D", "\uFB83"=>"\u068D", "\uFB84"=>"\u068C", "\uFB85"=>"\u068C", "\uFB86"=>"\u068E", "\uFB87"=>"\u068E", "\uFB88"=>"\u0688", "\uFB89"=>"\u0688", "\uFB8A"=>"\u0698", "\uFB8B"=>"\u0698", "\uFB8C"=>"\u0691", "\uFB8D"=>"\u0691", "\uFB8E"=>"\u06A9", "\uFB8F"=>"\u06A9", "\uFB90"=>"\u06A9", "\uFB91"=>"\u06A9", "\uFB92"=>"\u06AF", "\uFB93"=>"\u06AF", "\uFB94"=>"\u06AF", "\uFB95"=>"\u06AF", "\uFB96"=>"\u06B3", "\uFB97"=>"\u06B3", "\uFB98"=>"\u06B3", "\uFB99"=>"\u06B3", "\uFB9A"=>"\u06B1", "\uFB9B"=>"\u06B1", "\uFB9C"=>"\u06B1", "\uFB9D"=>"\u06B1", "\uFB9E"=>"\u06BA", "\uFB9F"=>"\u06BA", "\uFBA0"=>"\u06BB", "\uFBA1"=>"\u06BB", "\uFBA2"=>"\u06BB", "\uFBA3"=>"\u06BB", "\uFBA4"=>"\u06C0", "\uFBA5"=>"\u06C0", "\uFBA6"=>"\u06C1", "\uFBA7"=>"\u06C1", "\uFBA8"=>"\u06C1", "\uFBA9"=>"\u06C1", "\uFBAA"=>"\u06BE", "\uFBAB"=>"\u06BE", "\uFBAC"=>"\u06BE", "\uFBAD"=>"\u06BE", "\uFBAE"=>"\u06D2", "\uFBAF"=>"\u06D2", "\uFBB0"=>"\u06D3", "\uFBB1"=>"\u06D3", "\uFBD3"=>"\u06AD", "\uFBD4"=>"\u06AD", "\uFBD5"=>"\u06AD", "\uFBD6"=>"\u06AD", "\uFBD7"=>"\u06C7", "\uFBD8"=>"\u06C7", "\uFBD9"=>"\u06C6", "\uFBDA"=>"\u06C6", "\uFBDB"=>"\u06C8", "\uFBDC"=>"\u06C8", "\uFBDD"=>"\u06C7\u0674", "\uFBDE"=>"\u06CB", "\uFBDF"=>"\u06CB", "\uFBE0"=>"\u06C5", "\uFBE1"=>"\u06C5", "\uFBE2"=>"\u06C9", "\uFBE3"=>"\u06C9", "\uFBE4"=>"\u06D0", "\uFBE5"=>"\u06D0", "\uFBE6"=>"\u06D0", "\uFBE7"=>"\u06D0", "\uFBE8"=>"\u0649", "\uFBE9"=>"\u0649", "\uFBEA"=>"\u0626\u0627", "\uFBEB"=>"\u0626\u0627", "\uFBEC"=>"\u0626\u06D5", "\uFBED"=>"\u0626\u06D5", "\uFBEE"=>"\u0626\u0648", "\uFBEF"=>"\u0626\u0648", "\uFBF0"=>"\u0626\u06C7", "\uFBF1"=>"\u0626\u06C7", "\uFBF2"=>"\u0626\u06C6", "\uFBF3"=>"\u0626\u06C6", "\uFBF4"=>"\u0626\u06C8", "\uFBF5"=>"\u0626\u06C8", "\uFBF6"=>"\u0626\u06D0", "\uFBF7"=>"\u0626\u06D0", "\uFBF8"=>"\u0626\u06D0", "\uFBF9"=>"\u0626\u0649", "\uFBFA"=>"\u0626\u0649", "\uFBFB"=>"\u0626\u0649", "\uFBFC"=>"\u06CC", "\uFBFD"=>"\u06CC", "\uFBFE"=>"\u06CC", "\uFBFF"=>"\u06CC", "\uFC00"=>"\u0626\u062C", "\uFC01"=>"\u0626\u062D", "\uFC02"=>"\u0626\u0645", "\uFC03"=>"\u0626\u0649", "\uFC04"=>"\u0626\u064A", "\uFC05"=>"\u0628\u062C", 
"\uFC06"=>"\u0628\u062D", "\uFC07"=>"\u0628\u062E", "\uFC08"=>"\u0628\u0645", "\uFC09"=>"\u0628\u0649", "\uFC0A"=>"\u0628\u064A", "\uFC0B"=>"\u062A\u062C", "\uFC0C"=>"\u062A\u062D", "\uFC0D"=>"\u062A\u062E", "\uFC0E"=>"\u062A\u0645", "\uFC0F"=>"\u062A\u0649", "\uFC10"=>"\u062A\u064A", "\uFC11"=>"\u062B\u062C", "\uFC12"=>"\u062B\u0645", "\uFC13"=>"\u062B\u0649", "\uFC14"=>"\u062B\u064A", "\uFC15"=>"\u062C\u062D", "\uFC16"=>"\u062C\u0645", "\uFC17"=>"\u062D\u062C", "\uFC18"=>"\u062D\u0645", "\uFC19"=>"\u062E\u062C", "\uFC1A"=>"\u062E\u062D", "\uFC1B"=>"\u062E\u0645", "\uFC1C"=>"\u0633\u062C", "\uFC1D"=>"\u0633\u062D", "\uFC1E"=>"\u0633\u062E", "\uFC1F"=>"\u0633\u0645", "\uFC20"=>"\u0635\u062D", "\uFC21"=>"\u0635\u0645", "\uFC22"=>"\u0636\u062C", "\uFC23"=>"\u0636\u062D", "\uFC24"=>"\u0636\u062E", "\uFC25"=>"\u0636\u0645", "\uFC26"=>"\u0637\u062D", "\uFC27"=>"\u0637\u0645", "\uFC28"=>"\u0638\u0645", "\uFC29"=>"\u0639\u062C", "\uFC2A"=>"\u0639\u0645", "\uFC2B"=>"\u063A\u062C", "\uFC2C"=>"\u063A\u0645", "\uFC2D"=>"\u0641\u062C", "\uFC2E"=>"\u0641\u062D", "\uFC2F"=>"\u0641\u062E", "\uFC30"=>"\u0641\u0645", "\uFC31"=>"\u0641\u0649", "\uFC32"=>"\u0641\u064A", "\uFC33"=>"\u0642\u062D", "\uFC34"=>"\u0642\u0645", "\uFC35"=>"\u0642\u0649", "\uFC36"=>"\u0642\u064A", "\uFC37"=>"\u0643\u0627", "\uFC38"=>"\u0643\u062C", "\uFC39"=>"\u0643\u062D", "\uFC3A"=>"\u0643\u062E", "\uFC3B"=>"\u0643\u0644", "\uFC3C"=>"\u0643\u0645", "\uFC3D"=>"\u0643\u0649", "\uFC3E"=>"\u0643\u064A", "\uFC3F"=>"\u0644\u062C", "\uFC40"=>"\u0644\u062D", "\uFC41"=>"\u0644\u062E", "\uFC42"=>"\u0644\u0645", "\uFC43"=>"\u0644\u0649", "\uFC44"=>"\u0644\u064A", "\uFC45"=>"\u0645\u062C", "\uFC46"=>"\u0645\u062D", "\uFC47"=>"\u0645\u062E", "\uFC48"=>"\u0645\u0645", "\uFC49"=>"\u0645\u0649", "\uFC4A"=>"\u0645\u064A", "\uFC4B"=>"\u0646\u062C", "\uFC4C"=>"\u0646\u062D", "\uFC4D"=>"\u0646\u062E", "\uFC4E"=>"\u0646\u0645", "\uFC4F"=>"\u0646\u0649", "\uFC50"=>"\u0646\u064A", "\uFC51"=>"\u0647\u062C", "\uFC52"=>"\u0647\u0645", "\uFC53"=>"\u0647\u0649", "\uFC54"=>"\u0647\u064A", "\uFC55"=>"\u064A\u062C", "\uFC56"=>"\u064A\u062D", "\uFC57"=>"\u064A\u062E", "\uFC58"=>"\u064A\u0645", "\uFC59"=>"\u064A\u0649", "\uFC5A"=>"\u064A\u064A", "\uFC5B"=>"\u0630\u0670", "\uFC5C"=>"\u0631\u0670", "\uFC5D"=>"\u0649\u0670", "\uFC5E"=>" \u064C\u0651", "\uFC5F"=>" \u064D\u0651", "\uFC60"=>" \u064E\u0651", "\uFC61"=>" \u064F\u0651", "\uFC62"=>" \u0650\u0651", "\uFC63"=>" \u0651\u0670", "\uFC64"=>"\u0626\u0631", "\uFC65"=>"\u0626\u0632", "\uFC66"=>"\u0626\u0645", "\uFC67"=>"\u0626\u0646", "\uFC68"=>"\u0626\u0649", "\uFC69"=>"\u0626\u064A", "\uFC6A"=>"\u0628\u0631", "\uFC6B"=>"\u0628\u0632", "\uFC6C"=>"\u0628\u0645", "\uFC6D"=>"\u0628\u0646", "\uFC6E"=>"\u0628\u0649", "\uFC6F"=>"\u0628\u064A", "\uFC70"=>"\u062A\u0631", "\uFC71"=>"\u062A\u0632", "\uFC72"=>"\u062A\u0645", "\uFC73"=>"\u062A\u0646", "\uFC74"=>"\u062A\u0649", "\uFC75"=>"\u062A\u064A", "\uFC76"=>"\u062B\u0631", "\uFC77"=>"\u062B\u0632", "\uFC78"=>"\u062B\u0645", "\uFC79"=>"\u062B\u0646", "\uFC7A"=>"\u062B\u0649", "\uFC7B"=>"\u062B\u064A", "\uFC7C"=>"\u0641\u0649", "\uFC7D"=>"\u0641\u064A", "\uFC7E"=>"\u0642\u0649", "\uFC7F"=>"\u0642\u064A", "\uFC80"=>"\u0643\u0627", "\uFC81"=>"\u0643\u0644", "\uFC82"=>"\u0643\u0645", "\uFC83"=>"\u0643\u0649", "\uFC84"=>"\u0643\u064A", "\uFC85"=>"\u0644\u0645", "\uFC86"=>"\u0644\u0649", "\uFC87"=>"\u0644\u064A", "\uFC88"=>"\u0645\u0627", "\uFC89"=>"\u0645\u0645", "\uFC8A"=>"\u0646\u0631", "\uFC8B"=>"\u0646\u0632", "\uFC8C"=>"\u0646\u0645", "\uFC8D"=>"\u0646\u0646", 
"\uFC8E"=>"\u0646\u0649", "\uFC8F"=>"\u0646\u064A", "\uFC90"=>"\u0649\u0670", "\uFC91"=>"\u064A\u0631", "\uFC92"=>"\u064A\u0632", "\uFC93"=>"\u064A\u0645", "\uFC94"=>"\u064A\u0646", "\uFC95"=>"\u064A\u0649", "\uFC96"=>"\u064A\u064A", "\uFC97"=>"\u0626\u062C", "\uFC98"=>"\u0626\u062D", "\uFC99"=>"\u0626\u062E", "\uFC9A"=>"\u0626\u0645", "\uFC9B"=>"\u0626\u0647", "\uFC9C"=>"\u0628\u062C", "\uFC9D"=>"\u0628\u062D", "\uFC9E"=>"\u0628\u062E", "\uFC9F"=>"\u0628\u0645", "\uFCA0"=>"\u0628\u0647", "\uFCA1"=>"\u062A\u062C", "\uFCA2"=>"\u062A\u062D", "\uFCA3"=>"\u062A\u062E", "\uFCA4"=>"\u062A\u0645", "\uFCA5"=>"\u062A\u0647", "\uFCA6"=>"\u062B\u0645", "\uFCA7"=>"\u062C\u062D", "\uFCA8"=>"\u062C\u0645", "\uFCA9"=>"\u062D\u062C", "\uFCAA"=>"\u062D\u0645", "\uFCAB"=>"\u062E\u062C", "\uFCAC"=>"\u062E\u0645", "\uFCAD"=>"\u0633\u062C", "\uFCAE"=>"\u0633\u062D", "\uFCAF"=>"\u0633\u062E", "\uFCB0"=>"\u0633\u0645", "\uFCB1"=>"\u0635\u062D", "\uFCB2"=>"\u0635\u062E", "\uFCB3"=>"\u0635\u0645", "\uFCB4"=>"\u0636\u062C", "\uFCB5"=>"\u0636\u062D", "\uFCB6"=>"\u0636\u062E", "\uFCB7"=>"\u0636\u0645", "\uFCB8"=>"\u0637\u062D", "\uFCB9"=>"\u0638\u0645", "\uFCBA"=>"\u0639\u062C", "\uFCBB"=>"\u0639\u0645", "\uFCBC"=>"\u063A\u062C", "\uFCBD"=>"\u063A\u0645", "\uFCBE"=>"\u0641\u062C", "\uFCBF"=>"\u0641\u062D", "\uFCC0"=>"\u0641\u062E", "\uFCC1"=>"\u0641\u0645", "\uFCC2"=>"\u0642\u062D", "\uFCC3"=>"\u0642\u0645", "\uFCC4"=>"\u0643\u062C", "\uFCC5"=>"\u0643\u062D", "\uFCC6"=>"\u0643\u062E", "\uFCC7"=>"\u0643\u0644", "\uFCC8"=>"\u0643\u0645", "\uFCC9"=>"\u0644\u062C", "\uFCCA"=>"\u0644\u062D", "\uFCCB"=>"\u0644\u062E", "\uFCCC"=>"\u0644\u0645", "\uFCCD"=>"\u0644\u0647", "\uFCCE"=>"\u0645\u062C", "\uFCCF"=>"\u0645\u062D", "\uFCD0"=>"\u0645\u062E", "\uFCD1"=>"\u0645\u0645", "\uFCD2"=>"\u0646\u062C", "\uFCD3"=>"\u0646\u062D", "\uFCD4"=>"\u0646\u062E", "\uFCD5"=>"\u0646\u0645", "\uFCD6"=>"\u0646\u0647", "\uFCD7"=>"\u0647\u062C", "\uFCD8"=>"\u0647\u0645", "\uFCD9"=>"\u0647\u0670", "\uFCDA"=>"\u064A\u062C", "\uFCDB"=>"\u064A\u062D", "\uFCDC"=>"\u064A\u062E", "\uFCDD"=>"\u064A\u0645", "\uFCDE"=>"\u064A\u0647", "\uFCDF"=>"\u0626\u0645", "\uFCE0"=>"\u0626\u0647", "\uFCE1"=>"\u0628\u0645", "\uFCE2"=>"\u0628\u0647", "\uFCE3"=>"\u062A\u0645", "\uFCE4"=>"\u062A\u0647", "\uFCE5"=>"\u062B\u0645", "\uFCE6"=>"\u062B\u0647", "\uFCE7"=>"\u0633\u0645", "\uFCE8"=>"\u0633\u0647", "\uFCE9"=>"\u0634\u0645", "\uFCEA"=>"\u0634\u0647", "\uFCEB"=>"\u0643\u0644", "\uFCEC"=>"\u0643\u0645", "\uFCED"=>"\u0644\u0645", "\uFCEE"=>"\u0646\u0645", "\uFCEF"=>"\u0646\u0647", "\uFCF0"=>"\u064A\u0645", "\uFCF1"=>"\u064A\u0647", "\uFCF2"=>"\u0640\u064E\u0651", "\uFCF3"=>"\u0640\u064F\u0651", "\uFCF4"=>"\u0640\u0650\u0651", "\uFCF5"=>"\u0637\u0649", "\uFCF6"=>"\u0637\u064A", "\uFCF7"=>"\u0639\u0649", "\uFCF8"=>"\u0639\u064A", "\uFCF9"=>"\u063A\u0649", "\uFCFA"=>"\u063A\u064A", "\uFCFB"=>"\u0633\u0649", "\uFCFC"=>"\u0633\u064A", "\uFCFD"=>"\u0634\u0649", "\uFCFE"=>"\u0634\u064A", "\uFCFF"=>"\u062D\u0649", "\uFD00"=>"\u062D\u064A", "\uFD01"=>"\u062C\u0649", "\uFD02"=>"\u062C\u064A", "\uFD03"=>"\u062E\u0649", "\uFD04"=>"\u062E\u064A", "\uFD05"=>"\u0635\u0649", "\uFD06"=>"\u0635\u064A", "\uFD07"=>"\u0636\u0649", "\uFD08"=>"\u0636\u064A", "\uFD09"=>"\u0634\u062C", "\uFD0A"=>"\u0634\u062D", "\uFD0B"=>"\u0634\u062E", "\uFD0C"=>"\u0634\u0645", "\uFD0D"=>"\u0634\u0631", "\uFD0E"=>"\u0633\u0631", "\uFD0F"=>"\u0635\u0631", "\uFD10"=>"\u0636\u0631", "\uFD11"=>"\u0637\u0649", "\uFD12"=>"\u0637\u064A", "\uFD13"=>"\u0639\u0649", "\uFD14"=>"\u0639\u064A", "\uFD15"=>"\u063A\u0649", 
"\uFD16"=>"\u063A\u064A", "\uFD17"=>"\u0633\u0649", "\uFD18"=>"\u0633\u064A", "\uFD19"=>"\u0634\u0649", "\uFD1A"=>"\u0634\u064A", "\uFD1B"=>"\u062D\u0649", "\uFD1C"=>"\u062D\u064A", "\uFD1D"=>"\u062C\u0649", "\uFD1E"=>"\u062C\u064A", "\uFD1F"=>"\u062E\u0649", "\uFD20"=>"\u062E\u064A", "\uFD21"=>"\u0635\u0649", "\uFD22"=>"\u0635\u064A", "\uFD23"=>"\u0636\u0649", "\uFD24"=>"\u0636\u064A", "\uFD25"=>"\u0634\u062C", "\uFD26"=>"\u0634\u062D", "\uFD27"=>"\u0634\u062E", "\uFD28"=>"\u0634\u0645", "\uFD29"=>"\u0634\u0631", "\uFD2A"=>"\u0633\u0631", "\uFD2B"=>"\u0635\u0631", "\uFD2C"=>"\u0636\u0631", "\uFD2D"=>"\u0634\u062C", "\uFD2E"=>"\u0634\u062D", "\uFD2F"=>"\u0634\u062E", "\uFD30"=>"\u0634\u0645", "\uFD31"=>"\u0633\u0647", "\uFD32"=>"\u0634\u0647", "\uFD33"=>"\u0637\u0645", "\uFD34"=>"\u0633\u062C", "\uFD35"=>"\u0633\u062D", "\uFD36"=>"\u0633\u062E", "\uFD37"=>"\u0634\u062C", "\uFD38"=>"\u0634\u062D", "\uFD39"=>"\u0634\u062E", "\uFD3A"=>"\u0637\u0645", "\uFD3B"=>"\u0638\u0645", "\uFD3C"=>"\u0627\u064B", "\uFD3D"=>"\u0627\u064B", "\uFD50"=>"\u062A\u062C\u0645", "\uFD51"=>"\u062A\u062D\u062C", "\uFD52"=>"\u062A\u062D\u062C", "\uFD53"=>"\u062A\u062D\u0645", "\uFD54"=>"\u062A\u062E\u0645", "\uFD55"=>"\u062A\u0645\u062C", "\uFD56"=>"\u062A\u0645\u062D", "\uFD57"=>"\u062A\u0645\u062E", "\uFD58"=>"\u062C\u0645\u062D", "\uFD59"=>"\u062C\u0645\u062D", "\uFD5A"=>"\u062D\u0645\u064A", "\uFD5B"=>"\u062D\u0645\u0649", "\uFD5C"=>"\u0633\u062D\u062C", "\uFD5D"=>"\u0633\u062C\u062D", "\uFD5E"=>"\u0633\u062C\u0649", "\uFD5F"=>"\u0633\u0645\u062D", "\uFD60"=>"\u0633\u0645\u062D", "\uFD61"=>"\u0633\u0645\u062C", "\uFD62"=>"\u0633\u0645\u0645", "\uFD63"=>"\u0633\u0645\u0645", "\uFD64"=>"\u0635\u062D\u062D", "\uFD65"=>"\u0635\u062D\u062D", "\uFD66"=>"\u0635\u0645\u0645", "\uFD67"=>"\u0634\u062D\u0645", "\uFD68"=>"\u0634\u062D\u0645", "\uFD69"=>"\u0634\u062C\u064A", "\uFD6A"=>"\u0634\u0645\u062E", "\uFD6B"=>"\u0634\u0645\u062E", "\uFD6C"=>"\u0634\u0645\u0645", "\uFD6D"=>"\u0634\u0645\u0645", "\uFD6E"=>"\u0636\u062D\u0649", "\uFD6F"=>"\u0636\u062E\u0645", "\uFD70"=>"\u0636\u062E\u0645", "\uFD71"=>"\u0637\u0645\u062D", "\uFD72"=>"\u0637\u0645\u062D", "\uFD73"=>"\u0637\u0645\u0645", "\uFD74"=>"\u0637\u0645\u064A", "\uFD75"=>"\u0639\u062C\u0645", "\uFD76"=>"\u0639\u0645\u0645", "\uFD77"=>"\u0639\u0645\u0645", "\uFD78"=>"\u0639\u0645\u0649", "\uFD79"=>"\u063A\u0645\u0645", "\uFD7A"=>"\u063A\u0645\u064A", "\uFD7B"=>"\u063A\u0645\u0649", "\uFD7C"=>"\u0641\u062E\u0645", "\uFD7D"=>"\u0641\u062E\u0645", "\uFD7E"=>"\u0642\u0645\u062D", "\uFD7F"=>"\u0642\u0645\u0645", "\uFD80"=>"\u0644\u062D\u0645", "\uFD81"=>"\u0644\u062D\u064A", "\uFD82"=>"\u0644\u062D\u0649", "\uFD83"=>"\u0644\u062C\u062C", "\uFD84"=>"\u0644\u062C\u062C", "\uFD85"=>"\u0644\u062E\u0645", "\uFD86"=>"\u0644\u062E\u0645", "\uFD87"=>"\u0644\u0645\u062D", "\uFD88"=>"\u0644\u0645\u062D", "\uFD89"=>"\u0645\u062D\u062C", "\uFD8A"=>"\u0645\u062D\u0645", "\uFD8B"=>"\u0645\u062D\u064A", "\uFD8C"=>"\u0645\u062C\u062D", "\uFD8D"=>"\u0645\u062C\u0645", "\uFD8E"=>"\u0645\u062E\u062C", "\uFD8F"=>"\u0645\u062E\u0645", "\uFD92"=>"\u0645\u062C\u062E", "\uFD93"=>"\u0647\u0645\u062C", "\uFD94"=>"\u0647\u0645\u0645", "\uFD95"=>"\u0646\u062D\u0645", "\uFD96"=>"\u0646\u062D\u0649", "\uFD97"=>"\u0646\u062C\u0645", "\uFD98"=>"\u0646\u062C\u0645", "\uFD99"=>"\u0646\u062C\u0649", "\uFD9A"=>"\u0646\u0645\u064A", "\uFD9B"=>"\u0646\u0645\u0649", "\uFD9C"=>"\u064A\u0645\u0645", "\uFD9D"=>"\u064A\u0645\u0645", "\uFD9E"=>"\u0628\u062E\u064A", "\uFD9F"=>"\u062A\u062C\u064A", 
"\uFDA0"=>"\u062A\u062C\u0649", "\uFDA1"=>"\u062A\u062E\u064A", "\uFDA2"=>"\u062A\u062E\u0649", "\uFDA3"=>"\u062A\u0645\u064A", "\uFDA4"=>"\u062A\u0645\u0649", "\uFDA5"=>"\u062C\u0645\u064A", "\uFDA6"=>"\u062C\u062D\u0649", "\uFDA7"=>"\u062C\u0645\u0649", "\uFDA8"=>"\u0633\u062E\u0649", "\uFDA9"=>"\u0635\u062D\u064A", "\uFDAA"=>"\u0634\u062D\u064A", "\uFDAB"=>"\u0636\u062D\u064A", "\uFDAC"=>"\u0644\u062C\u064A", "\uFDAD"=>"\u0644\u0645\u064A", "\uFDAE"=>"\u064A\u062D\u064A", "\uFDAF"=>"\u064A\u062C\u064A", "\uFDB0"=>"\u064A\u0645\u064A", "\uFDB1"=>"\u0645\u0645\u064A", "\uFDB2"=>"\u0642\u0645\u064A", "\uFDB3"=>"\u0646\u062D\u064A", "\uFDB4"=>"\u0642\u0645\u062D", "\uFDB5"=>"\u0644\u062D\u0645", "\uFDB6"=>"\u0639\u0645\u064A", "\uFDB7"=>"\u0643\u0645\u064A", "\uFDB8"=>"\u0646\u062C\u062D", "\uFDB9"=>"\u0645\u062E\u064A", "\uFDBA"=>"\u0644\u062C\u0645", "\uFDBB"=>"\u0643\u0645\u0645", "\uFDBC"=>"\u0644\u062C\u0645", "\uFDBD"=>"\u0646\u062C\u062D", "\uFDBE"=>"\u062C\u062D\u064A", "\uFDBF"=>"\u062D\u062C\u064A", "\uFDC0"=>"\u0645\u062C\u064A", "\uFDC1"=>"\u0641\u0645\u064A", "\uFDC2"=>"\u0628\u062D\u064A", "\uFDC3"=>"\u0643\u0645\u0645", "\uFDC4"=>"\u0639\u062C\u0645", "\uFDC5"=>"\u0635\u0645\u0645", "\uFDC6"=>"\u0633\u062E\u064A", "\uFDC7"=>"\u0646\u062C\u064A", "\uFDF0"=>"\u0635\u0644\u06D2", "\uFDF1"=>"\u0642\u0644\u06D2", "\uFDF2"=>"\u0627\u0644\u0644\u0647", "\uFDF3"=>"\u0627\u0643\u0628\u0631", "\uFDF4"=>"\u0645\u062D\u0645\u062F", "\uFDF5"=>"\u0635\u0644\u0639\u0645", "\uFDF6"=>"\u0631\u0633\u0648\u0644", "\uFDF7"=>"\u0639\u0644\u064A\u0647", "\uFDF8"=>"\u0648\u0633\u0644\u0645", "\uFDF9"=>"\u0635\u0644\u0649", "\uFDFA"=>"\u0635\u0644\u0649 \u0627\u0644\u0644\u0647 \u0639\u0644\u064A\u0647 \u0648\u0633\u0644\u0645", "\uFDFB"=>"\u062C\u0644 \u062C\u0644\u0627\u0644\u0647", "\uFDFC"=>"\u0631\u06CC\u0627\u0644", "\uFE10"=>",", "\uFE11"=>"\u3001", "\uFE12"=>"\u3002", "\uFE13"=>":", "\uFE14"=>";", "\uFE15"=>"!", "\uFE16"=>"?", "\uFE17"=>"\u3016", "\uFE18"=>"\u3017", "\uFE19"=>"...", "\uFE30"=>"..", "\uFE31"=>"\u2014", "\uFE32"=>"\u2013", "\uFE33"=>"_", "\uFE34"=>"_", "\uFE35"=>"(", "\uFE36"=>")", "\uFE37"=>"{", "\uFE38"=>"}", "\uFE39"=>"\u3014", "\uFE3A"=>"\u3015", "\uFE3B"=>"\u3010", "\uFE3C"=>"\u3011", "\uFE3D"=>"\u300A", "\uFE3E"=>"\u300B", "\uFE3F"=>"\u3008", "\uFE40"=>"\u3009", "\uFE41"=>"\u300C", "\uFE42"=>"\u300D", "\uFE43"=>"\u300E", "\uFE44"=>"\u300F", "\uFE47"=>"[", "\uFE48"=>"]", "\uFE49"=>" \u0305", "\uFE4A"=>" \u0305", "\uFE4B"=>" \u0305", "\uFE4C"=>" \u0305", "\uFE4D"=>"_", "\uFE4E"=>"_", "\uFE4F"=>"_", "\uFE50"=>",", "\uFE51"=>"\u3001", "\uFE52"=>".", "\uFE54"=>";", "\uFE55"=>":", "\uFE56"=>"?", "\uFE57"=>"!", "\uFE58"=>"\u2014", "\uFE59"=>"(", "\uFE5A"=>")", "\uFE5B"=>"{", "\uFE5C"=>"}", "\uFE5D"=>"\u3014", "\uFE5E"=>"\u3015", "\uFE5F"=>"#", "\uFE60"=>"&", "\uFE61"=>"*", "\uFE62"=>"+", "\uFE63"=>"-", "\uFE64"=>"<", "\uFE65"=>">", "\uFE66"=>"=", "\uFE68"=>"\\", "\uFE69"=>"$", "\uFE6A"=>"%", "\uFE6B"=>"@", "\uFE70"=>" \u064B", "\uFE71"=>"\u0640\u064B", "\uFE72"=>" \u064C", "\uFE74"=>" \u064D", "\uFE76"=>" \u064E", "\uFE77"=>"\u0640\u064E", "\uFE78"=>" \u064F", "\uFE79"=>"\u0640\u064F", "\uFE7A"=>" \u0650", "\uFE7B"=>"\u0640\u0650", "\uFE7C"=>" \u0651", "\uFE7D"=>"\u0640\u0651", "\uFE7E"=>" \u0652", "\uFE7F"=>"\u0640\u0652", "\uFE80"=>"\u0621", "\uFE81"=>"\u0622", "\uFE82"=>"\u0622", "\uFE83"=>"\u0623", "\uFE84"=>"\u0623", "\uFE85"=>"\u0624", "\uFE86"=>"\u0624", "\uFE87"=>"\u0625", "\uFE88"=>"\u0625", "\uFE89"=>"\u0626", "\uFE8A"=>"\u0626", "\uFE8B"=>"\u0626", 
"\uFE8C"=>"\u0626", "\uFE8D"=>"\u0627", "\uFE8E"=>"\u0627", "\uFE8F"=>"\u0628", "\uFE90"=>"\u0628", "\uFE91"=>"\u0628", "\uFE92"=>"\u0628", "\uFE93"=>"\u0629", "\uFE94"=>"\u0629", "\uFE95"=>"\u062A", "\uFE96"=>"\u062A", "\uFE97"=>"\u062A", "\uFE98"=>"\u062A", "\uFE99"=>"\u062B", "\uFE9A"=>"\u062B", "\uFE9B"=>"\u062B", "\uFE9C"=>"\u062B", "\uFE9D"=>"\u062C", "\uFE9E"=>"\u062C", "\uFE9F"=>"\u062C", "\uFEA0"=>"\u062C", "\uFEA1"=>"\u062D", "\uFEA2"=>"\u062D", "\uFEA3"=>"\u062D", "\uFEA4"=>"\u062D", "\uFEA5"=>"\u062E", "\uFEA6"=>"\u062E", "\uFEA7"=>"\u062E", "\uFEA8"=>"\u062E", "\uFEA9"=>"\u062F", "\uFEAA"=>"\u062F", "\uFEAB"=>"\u0630", "\uFEAC"=>"\u0630", "\uFEAD"=>"\u0631", "\uFEAE"=>"\u0631", "\uFEAF"=>"\u0632", "\uFEB0"=>"\u0632", "\uFEB1"=>"\u0633", "\uFEB2"=>"\u0633", "\uFEB3"=>"\u0633", "\uFEB4"=>"\u0633", "\uFEB5"=>"\u0634", "\uFEB6"=>"\u0634", "\uFEB7"=>"\u0634", "\uFEB8"=>"\u0634", "\uFEB9"=>"\u0635", "\uFEBA"=>"\u0635", "\uFEBB"=>"\u0635", "\uFEBC"=>"\u0635", "\uFEBD"=>"\u0636", "\uFEBE"=>"\u0636", "\uFEBF"=>"\u0636", "\uFEC0"=>"\u0636", "\uFEC1"=>"\u0637", "\uFEC2"=>"\u0637", "\uFEC3"=>"\u0637", "\uFEC4"=>"\u0637", "\uFEC5"=>"\u0638", "\uFEC6"=>"\u0638", "\uFEC7"=>"\u0638", "\uFEC8"=>"\u0638", "\uFEC9"=>"\u0639", "\uFECA"=>"\u0639", "\uFECB"=>"\u0639", "\uFECC"=>"\u0639", "\uFECD"=>"\u063A", "\uFECE"=>"\u063A", "\uFECF"=>"\u063A", "\uFED0"=>"\u063A", "\uFED1"=>"\u0641", "\uFED2"=>"\u0641", "\uFED3"=>"\u0641", "\uFED4"=>"\u0641", "\uFED5"=>"\u0642", "\uFED6"=>"\u0642", "\uFED7"=>"\u0642", "\uFED8"=>"\u0642", "\uFED9"=>"\u0643", "\uFEDA"=>"\u0643", "\uFEDB"=>"\u0643", "\uFEDC"=>"\u0643", "\uFEDD"=>"\u0644", "\uFEDE"=>"\u0644", "\uFEDF"=>"\u0644", "\uFEE0"=>"\u0644", "\uFEE1"=>"\u0645", "\uFEE2"=>"\u0645", "\uFEE3"=>"\u0645", "\uFEE4"=>"\u0645", "\uFEE5"=>"\u0646", "\uFEE6"=>"\u0646", "\uFEE7"=>"\u0646", "\uFEE8"=>"\u0646", "\uFEE9"=>"\u0647", "\uFEEA"=>"\u0647", "\uFEEB"=>"\u0647", "\uFEEC"=>"\u0647", "\uFEED"=>"\u0648", "\uFEEE"=>"\u0648", "\uFEEF"=>"\u0649", "\uFEF0"=>"\u0649", "\uFEF1"=>"\u064A", "\uFEF2"=>"\u064A", "\uFEF3"=>"\u064A", "\uFEF4"=>"\u064A", "\uFEF5"=>"\u0644\u0622", "\uFEF6"=>"\u0644\u0622", "\uFEF7"=>"\u0644\u0623", "\uFEF8"=>"\u0644\u0623", "\uFEF9"=>"\u0644\u0625", "\uFEFA"=>"\u0644\u0625", "\uFEFB"=>"\u0644\u0627", "\uFEFC"=>"\u0644\u0627", "\uFF01"=>"!", "\uFF02"=>"\"", "\uFF03"=>"#", "\uFF04"=>"$", "\uFF05"=>"%", "\uFF06"=>"&", "\uFF07"=>"'", "\uFF08"=>"(", "\uFF09"=>")", "\uFF0A"=>"*", "\uFF0B"=>"+", "\uFF0C"=>",", "\uFF0D"=>"-", "\uFF0E"=>".", "\uFF0F"=>"/", "\uFF10"=>"0", "\uFF11"=>"1", "\uFF12"=>"2", "\uFF13"=>"3", "\uFF14"=>"4", "\uFF15"=>"5", "\uFF16"=>"6", "\uFF17"=>"7", "\uFF18"=>"8", "\uFF19"=>"9", "\uFF1A"=>":", "\uFF1B"=>";", "\uFF1C"=>"<", "\uFF1D"=>"=", "\uFF1E"=>">", "\uFF1F"=>"?", "\uFF20"=>"@", "\uFF21"=>"A", "\uFF22"=>"B", "\uFF23"=>"C", "\uFF24"=>"D", "\uFF25"=>"E", "\uFF26"=>"F", "\uFF27"=>"G", "\uFF28"=>"H", "\uFF29"=>"I", "\uFF2A"=>"J", "\uFF2B"=>"K", "\uFF2C"=>"L", "\uFF2D"=>"M", "\uFF2E"=>"N", "\uFF2F"=>"O", "\uFF30"=>"P", "\uFF31"=>"Q", "\uFF32"=>"R", "\uFF33"=>"S", "\uFF34"=>"T", "\uFF35"=>"U", "\uFF36"=>"V", "\uFF37"=>"W", "\uFF38"=>"X", "\uFF39"=>"Y", "\uFF3A"=>"Z", "\uFF3B"=>"[", "\uFF3C"=>"\\", "\uFF3D"=>"]", "\uFF3E"=>"^", "\uFF3F"=>"_", "\uFF40"=>"`", "\uFF41"=>"a", "\uFF42"=>"b", "\uFF43"=>"c", "\uFF44"=>"d", "\uFF45"=>"e", "\uFF46"=>"f", "\uFF47"=>"g", "\uFF48"=>"h", "\uFF49"=>"i", "\uFF4A"=>"j", "\uFF4B"=>"k", "\uFF4C"=>"l", "\uFF4D"=>"m", "\uFF4E"=>"n", "\uFF4F"=>"o", "\uFF50"=>"p", "\uFF51"=>"q", "\uFF52"=>"r", "\uFF53"=>"s", 
"\uFF54"=>"t", "\uFF55"=>"u", "\uFF56"=>"v", "\uFF57"=>"w", "\uFF58"=>"x", "\uFF59"=>"y", "\uFF5A"=>"z", "\uFF5B"=>"{", "\uFF5C"=>"|", "\uFF5D"=>"}", "\uFF5E"=>"~", "\uFF5F"=>"\u2985", "\uFF60"=>"\u2986", "\uFF61"=>"\u3002", "\uFF62"=>"\u300C", "\uFF63"=>"\u300D", "\uFF64"=>"\u3001", "\uFF65"=>"\u30FB", "\uFF66"=>"\u30F2", "\uFF67"=>"\u30A1", "\uFF68"=>"\u30A3", "\uFF69"=>"\u30A5", "\uFF6A"=>"\u30A7", "\uFF6B"=>"\u30A9", "\uFF6C"=>"\u30E3", "\uFF6D"=>"\u30E5", "\uFF6E"=>"\u30E7", "\uFF6F"=>"\u30C3", "\uFF70"=>"\u30FC", "\uFF71"=>"\u30A2", "\uFF72"=>"\u30A4", "\uFF73"=>"\u30A6", "\uFF74"=>"\u30A8", "\uFF75"=>"\u30AA", "\uFF76"=>"\u30AB", "\uFF77"=>"\u30AD", "\uFF78"=>"\u30AF", "\uFF79"=>"\u30B1", "\uFF7A"=>"\u30B3", "\uFF7B"=>"\u30B5", "\uFF7C"=>"\u30B7", "\uFF7D"=>"\u30B9", "\uFF7E"=>"\u30BB", "\uFF7F"=>"\u30BD", "\uFF80"=>"\u30BF", "\uFF81"=>"\u30C1", "\uFF82"=>"\u30C4", "\uFF83"=>"\u30C6", "\uFF84"=>"\u30C8", "\uFF85"=>"\u30CA", "\uFF86"=>"\u30CB", "\uFF87"=>"\u30CC", "\uFF88"=>"\u30CD", "\uFF89"=>"\u30CE", "\uFF8A"=>"\u30CF", "\uFF8B"=>"\u30D2", "\uFF8C"=>"\u30D5", "\uFF8D"=>"\u30D8", "\uFF8E"=>"\u30DB", "\uFF8F"=>"\u30DE", "\uFF90"=>"\u30DF", "\uFF91"=>"\u30E0", "\uFF92"=>"\u30E1", "\uFF93"=>"\u30E2", "\uFF94"=>"\u30E4", "\uFF95"=>"\u30E6", "\uFF96"=>"\u30E8", "\uFF97"=>"\u30E9", "\uFF98"=>"\u30EA", "\uFF99"=>"\u30EB", "\uFF9A"=>"\u30EC", "\uFF9B"=>"\u30ED", "\uFF9C"=>"\u30EF", "\uFF9D"=>"\u30F3", "\uFF9E"=>"\u3099", "\uFF9F"=>"\u309A", "\uFFA0"=>"\u1160", "\uFFA1"=>"\u1100", "\uFFA2"=>"\u1101", "\uFFA3"=>"\u11AA", "\uFFA4"=>"\u1102", "\uFFA5"=>"\u11AC", "\uFFA6"=>"\u11AD", "\uFFA7"=>"\u1103", "\uFFA8"=>"\u1104", "\uFFA9"=>"\u1105", "\uFFAA"=>"\u11B0", "\uFFAB"=>"\u11B1", "\uFFAC"=>"\u11B2", "\uFFAD"=>"\u11B3", "\uFFAE"=>"\u11B4", "\uFFAF"=>"\u11B5", "\uFFB0"=>"\u111A", "\uFFB1"=>"\u1106", "\uFFB2"=>"\u1107", "\uFFB3"=>"\u1108", "\uFFB4"=>"\u1121", "\uFFB5"=>"\u1109", "\uFFB6"=>"\u110A", "\uFFB7"=>"\u110B", "\uFFB8"=>"\u110C", "\uFFB9"=>"\u110D", "\uFFBA"=>"\u110E", "\uFFBB"=>"\u110F", "\uFFBC"=>"\u1110", "\uFFBD"=>"\u1111", "\uFFBE"=>"\u1112", "\uFFC2"=>"\u1161", "\uFFC3"=>"\u1162", "\uFFC4"=>"\u1163", "\uFFC5"=>"\u1164", "\uFFC6"=>"\u1165", "\uFFC7"=>"\u1166", "\uFFCA"=>"\u1167", "\uFFCB"=>"\u1168", "\uFFCC"=>"\u1169", "\uFFCD"=>"\u116A", "\uFFCE"=>"\u116B", "\uFFCF"=>"\u116C", "\uFFD2"=>"\u116D", "\uFFD3"=>"\u116E", "\uFFD4"=>"\u116F", "\uFFD5"=>"\u1170", "\uFFD6"=>"\u1171", "\uFFD7"=>"\u1172", "\uFFDA"=>"\u1173", "\uFFDB"=>"\u1174", "\uFFDC"=>"\u1175", "\uFFE0"=>"\u00A2", "\uFFE1"=>"\u00A3", "\uFFE2"=>"\u00AC", "\uFFE3"=>" \u0304", "\uFFE4"=>"\u00A6", "\uFFE5"=>"\u00A5", "\uFFE6"=>"\u20A9", "\uFFE8"=>"\u2502", "\uFFE9"=>"\u2190", "\uFFEA"=>"\u2191", "\uFFEB"=>"\u2192", "\uFFEC"=>"\u2193", "\uFFED"=>"\u25A0", "\uFFEE"=>"\u25CB", "\u{1D400}"=>"A", "\u{1D401}"=>"B", "\u{1D402}"=>"C", "\u{1D403}"=>"D", "\u{1D404}"=>"E", "\u{1D405}"=>"F", "\u{1D406}"=>"G", "\u{1D407}"=>"H", "\u{1D408}"=>"I", "\u{1D409}"=>"J", "\u{1D40A}"=>"K", "\u{1D40B}"=>"L", "\u{1D40C}"=>"M", "\u{1D40D}"=>"N", "\u{1D40E}"=>"O", "\u{1D40F}"=>"P", "\u{1D410}"=>"Q", "\u{1D411}"=>"R", "\u{1D412}"=>"S", "\u{1D413}"=>"T", "\u{1D414}"=>"U", "\u{1D415}"=>"V", "\u{1D416}"=>"W", "\u{1D417}"=>"X", "\u{1D418}"=>"Y", "\u{1D419}"=>"Z", "\u{1D41A}"=>"a", "\u{1D41B}"=>"b", "\u{1D41C}"=>"c", "\u{1D41D}"=>"d", "\u{1D41E}"=>"e", "\u{1D41F}"=>"f", "\u{1D420}"=>"g", "\u{1D421}"=>"h", "\u{1D422}"=>"i", "\u{1D423}"=>"j", "\u{1D424}"=>"k", "\u{1D425}"=>"l", "\u{1D426}"=>"m", "\u{1D427}"=>"n", "\u{1D428}"=>"o", "\u{1D429}"=>"p", 
"\u{1D42A}"=>"q", "\u{1D42B}"=>"r", "\u{1D42C}"=>"s", "\u{1D42D}"=>"t", "\u{1D42E}"=>"u", "\u{1D42F}"=>"v", "\u{1D430}"=>"w", "\u{1D431}"=>"x", "\u{1D432}"=>"y", "\u{1D433}"=>"z", "\u{1D434}"=>"A", "\u{1D435}"=>"B", "\u{1D436}"=>"C", "\u{1D437}"=>"D", "\u{1D438}"=>"E", "\u{1D439}"=>"F", "\u{1D43A}"=>"G", "\u{1D43B}"=>"H", "\u{1D43C}"=>"I", "\u{1D43D}"=>"J", "\u{1D43E}"=>"K", "\u{1D43F}"=>"L", "\u{1D440}"=>"M", "\u{1D441}"=>"N", "\u{1D442}"=>"O", "\u{1D443}"=>"P", "\u{1D444}"=>"Q", "\u{1D445}"=>"R", "\u{1D446}"=>"S", "\u{1D447}"=>"T", "\u{1D448}"=>"U", "\u{1D449}"=>"V", "\u{1D44A}"=>"W", "\u{1D44B}"=>"X", "\u{1D44C}"=>"Y", "\u{1D44D}"=>"Z", "\u{1D44E}"=>"a", "\u{1D44F}"=>"b", "\u{1D450}"=>"c", "\u{1D451}"=>"d", "\u{1D452}"=>"e", "\u{1D453}"=>"f", "\u{1D454}"=>"g", "\u{1D456}"=>"i", "\u{1D457}"=>"j", "\u{1D458}"=>"k", "\u{1D459}"=>"l", "\u{1D45A}"=>"m", "\u{1D45B}"=>"n", "\u{1D45C}"=>"o", "\u{1D45D}"=>"p", "\u{1D45E}"=>"q", "\u{1D45F}"=>"r", "\u{1D460}"=>"s", "\u{1D461}"=>"t", "\u{1D462}"=>"u", "\u{1D463}"=>"v", "\u{1D464}"=>"w", "\u{1D465}"=>"x", "\u{1D466}"=>"y", "\u{1D467}"=>"z", "\u{1D468}"=>"A", "\u{1D469}"=>"B", "\u{1D46A}"=>"C", "\u{1D46B}"=>"D", "\u{1D46C}"=>"E", "\u{1D46D}"=>"F", "\u{1D46E}"=>"G", "\u{1D46F}"=>"H", "\u{1D470}"=>"I", "\u{1D471}"=>"J", "\u{1D472}"=>"K", "\u{1D473}"=>"L", "\u{1D474}"=>"M", "\u{1D475}"=>"N", "\u{1D476}"=>"O", "\u{1D477}"=>"P", "\u{1D478}"=>"Q", "\u{1D479}"=>"R", "\u{1D47A}"=>"S", "\u{1D47B}"=>"T", "\u{1D47C}"=>"U", "\u{1D47D}"=>"V", "\u{1D47E}"=>"W", "\u{1D47F}"=>"X", "\u{1D480}"=>"Y", "\u{1D481}"=>"Z", "\u{1D482}"=>"a", "\u{1D483}"=>"b", "\u{1D484}"=>"c", "\u{1D485}"=>"d", "\u{1D486}"=>"e", "\u{1D487}"=>"f", "\u{1D488}"=>"g", "\u{1D489}"=>"h", "\u{1D48A}"=>"i", "\u{1D48B}"=>"j", "\u{1D48C}"=>"k", "\u{1D48D}"=>"l", "\u{1D48E}"=>"m", "\u{1D48F}"=>"n", "\u{1D490}"=>"o", "\u{1D491}"=>"p", "\u{1D492}"=>"q", "\u{1D493}"=>"r", "\u{1D494}"=>"s", "\u{1D495}"=>"t", "\u{1D496}"=>"u", "\u{1D497}"=>"v", "\u{1D498}"=>"w", "\u{1D499}"=>"x", "\u{1D49A}"=>"y", "\u{1D49B}"=>"z", "\u{1D49C}"=>"A", "\u{1D49E}"=>"C", "\u{1D49F}"=>"D", "\u{1D4A2}"=>"G", "\u{1D4A5}"=>"J", "\u{1D4A6}"=>"K", "\u{1D4A9}"=>"N", "\u{1D4AA}"=>"O", "\u{1D4AB}"=>"P", "\u{1D4AC}"=>"Q", "\u{1D4AE}"=>"S", "\u{1D4AF}"=>"T", "\u{1D4B0}"=>"U", "\u{1D4B1}"=>"V", "\u{1D4B2}"=>"W", "\u{1D4B3}"=>"X", "\u{1D4B4}"=>"Y", "\u{1D4B5}"=>"Z", "\u{1D4B6}"=>"a", "\u{1D4B7}"=>"b", "\u{1D4B8}"=>"c", "\u{1D4B9}"=>"d", "\u{1D4BB}"=>"f", "\u{1D4BD}"=>"h", "\u{1D4BE}"=>"i", "\u{1D4BF}"=>"j", "\u{1D4C0}"=>"k", "\u{1D4C1}"=>"l", "\u{1D4C2}"=>"m", "\u{1D4C3}"=>"n", "\u{1D4C5}"=>"p", "\u{1D4C6}"=>"q", "\u{1D4C7}"=>"r", "\u{1D4C8}"=>"s", "\u{1D4C9}"=>"t", "\u{1D4CA}"=>"u", "\u{1D4CB}"=>"v", "\u{1D4CC}"=>"w", "\u{1D4CD}"=>"x", "\u{1D4CE}"=>"y", "\u{1D4CF}"=>"z", "\u{1D4D0}"=>"A", "\u{1D4D1}"=>"B", "\u{1D4D2}"=>"C", "\u{1D4D3}"=>"D", "\u{1D4D4}"=>"E", "\u{1D4D5}"=>"F", "\u{1D4D6}"=>"G", "\u{1D4D7}"=>"H", "\u{1D4D8}"=>"I", "\u{1D4D9}"=>"J", "\u{1D4DA}"=>"K", "\u{1D4DB}"=>"L", "\u{1D4DC}"=>"M", "\u{1D4DD}"=>"N", "\u{1D4DE}"=>"O", "\u{1D4DF}"=>"P", "\u{1D4E0}"=>"Q", "\u{1D4E1}"=>"R", "\u{1D4E2}"=>"S", "\u{1D4E3}"=>"T", "\u{1D4E4}"=>"U", "\u{1D4E5}"=>"V", "\u{1D4E6}"=>"W", "\u{1D4E7}"=>"X", "\u{1D4E8}"=>"Y", "\u{1D4E9}"=>"Z", "\u{1D4EA}"=>"a", "\u{1D4EB}"=>"b", "\u{1D4EC}"=>"c", "\u{1D4ED}"=>"d", "\u{1D4EE}"=>"e", "\u{1D4EF}"=>"f", "\u{1D4F0}"=>"g", "\u{1D4F1}"=>"h", "\u{1D4F2}"=>"i", "\u{1D4F3}"=>"j", "\u{1D4F4}"=>"k", "\u{1D4F5}"=>"l", "\u{1D4F6}"=>"m", "\u{1D4F7}"=>"n", "\u{1D4F8}"=>"o", "\u{1D4F9}"=>"p", "\u{1D4FA}"=>"q", 
"\u{1D4FB}"=>"r", "\u{1D4FC}"=>"s", "\u{1D4FD}"=>"t", "\u{1D4FE}"=>"u", "\u{1D4FF}"=>"v", "\u{1D500}"=>"w", "\u{1D501}"=>"x", "\u{1D502}"=>"y", "\u{1D503}"=>"z", "\u{1D504}"=>"A", "\u{1D505}"=>"B", "\u{1D507}"=>"D", "\u{1D508}"=>"E", "\u{1D509}"=>"F", "\u{1D50A}"=>"G", "\u{1D50D}"=>"J", "\u{1D50E}"=>"K", "\u{1D50F}"=>"L", "\u{1D510}"=>"M", "\u{1D511}"=>"N", "\u{1D512}"=>"O", "\u{1D513}"=>"P", "\u{1D514}"=>"Q", "\u{1D516}"=>"S", "\u{1D517}"=>"T", "\u{1D518}"=>"U", "\u{1D519}"=>"V", "\u{1D51A}"=>"W", "\u{1D51B}"=>"X", "\u{1D51C}"=>"Y", "\u{1D51E}"=>"a", "\u{1D51F}"=>"b", "\u{1D520}"=>"c", "\u{1D521}"=>"d", "\u{1D522}"=>"e", "\u{1D523}"=>"f", "\u{1D524}"=>"g", "\u{1D525}"=>"h", "\u{1D526}"=>"i", "\u{1D527}"=>"j", "\u{1D528}"=>"k", "\u{1D529}"=>"l", "\u{1D52A}"=>"m", "\u{1D52B}"=>"n", "\u{1D52C}"=>"o", "\u{1D52D}"=>"p", "\u{1D52E}"=>"q", "\u{1D52F}"=>"r", "\u{1D530}"=>"s", "\u{1D531}"=>"t", "\u{1D532}"=>"u", "\u{1D533}"=>"v", "\u{1D534}"=>"w", "\u{1D535}"=>"x", "\u{1D536}"=>"y", "\u{1D537}"=>"z", "\u{1D538}"=>"A", "\u{1D539}"=>"B", "\u{1D53B}"=>"D", "\u{1D53C}"=>"E", "\u{1D53D}"=>"F", "\u{1D53E}"=>"G", "\u{1D540}"=>"I", "\u{1D541}"=>"J", "\u{1D542}"=>"K", "\u{1D543}"=>"L", "\u{1D544}"=>"M", "\u{1D546}"=>"O", "\u{1D54A}"=>"S", "\u{1D54B}"=>"T", "\u{1D54C}"=>"U", "\u{1D54D}"=>"V", "\u{1D54E}"=>"W", "\u{1D54F}"=>"X", "\u{1D550}"=>"Y", "\u{1D552}"=>"a", "\u{1D553}"=>"b", "\u{1D554}"=>"c", "\u{1D555}"=>"d", "\u{1D556}"=>"e", "\u{1D557}"=>"f", "\u{1D558}"=>"g", "\u{1D559}"=>"h", "\u{1D55A}"=>"i", "\u{1D55B}"=>"j", "\u{1D55C}"=>"k", "\u{1D55D}"=>"l", "\u{1D55E}"=>"m", "\u{1D55F}"=>"n", "\u{1D560}"=>"o", "\u{1D561}"=>"p", "\u{1D562}"=>"q", "\u{1D563}"=>"r", "\u{1D564}"=>"s", "\u{1D565}"=>"t", "\u{1D566}"=>"u", "\u{1D567}"=>"v", "\u{1D568}"=>"w", "\u{1D569}"=>"x", "\u{1D56A}"=>"y", "\u{1D56B}"=>"z", "\u{1D56C}"=>"A", "\u{1D56D}"=>"B", "\u{1D56E}"=>"C", "\u{1D56F}"=>"D", "\u{1D570}"=>"E", "\u{1D571}"=>"F", "\u{1D572}"=>"G", "\u{1D573}"=>"H", "\u{1D574}"=>"I", "\u{1D575}"=>"J", "\u{1D576}"=>"K", "\u{1D577}"=>"L", "\u{1D578}"=>"M", "\u{1D579}"=>"N", "\u{1D57A}"=>"O", "\u{1D57B}"=>"P", "\u{1D57C}"=>"Q", "\u{1D57D}"=>"R", "\u{1D57E}"=>"S", "\u{1D57F}"=>"T", "\u{1D580}"=>"U", "\u{1D581}"=>"V", "\u{1D582}"=>"W", "\u{1D583}"=>"X", "\u{1D584}"=>"Y", "\u{1D585}"=>"Z", "\u{1D586}"=>"a", "\u{1D587}"=>"b", "\u{1D588}"=>"c", "\u{1D589}"=>"d", "\u{1D58A}"=>"e", "\u{1D58B}"=>"f", "\u{1D58C}"=>"g", "\u{1D58D}"=>"h", "\u{1D58E}"=>"i", "\u{1D58F}"=>"j", "\u{1D590}"=>"k", "\u{1D591}"=>"l", "\u{1D592}"=>"m", "\u{1D593}"=>"n", "\u{1D594}"=>"o", "\u{1D595}"=>"p", "\u{1D596}"=>"q", "\u{1D597}"=>"r", "\u{1D598}"=>"s", "\u{1D599}"=>"t", "\u{1D59A}"=>"u", "\u{1D59B}"=>"v", "\u{1D59C}"=>"w", "\u{1D59D}"=>"x", "\u{1D59E}"=>"y", "\u{1D59F}"=>"z", "\u{1D5A0}"=>"A", "\u{1D5A1}"=>"B", "\u{1D5A2}"=>"C", "\u{1D5A3}"=>"D", "\u{1D5A4}"=>"E", "\u{1D5A5}"=>"F", "\u{1D5A6}"=>"G", "\u{1D5A7}"=>"H", "\u{1D5A8}"=>"I", "\u{1D5A9}"=>"J", "\u{1D5AA}"=>"K", "\u{1D5AB}"=>"L", "\u{1D5AC}"=>"M", "\u{1D5AD}"=>"N", "\u{1D5AE}"=>"O", "\u{1D5AF}"=>"P", "\u{1D5B0}"=>"Q", "\u{1D5B1}"=>"R", "\u{1D5B2}"=>"S", "\u{1D5B3}"=>"T", "\u{1D5B4}"=>"U", "\u{1D5B5}"=>"V", "\u{1D5B6}"=>"W", "\u{1D5B7}"=>"X", "\u{1D5B8}"=>"Y", "\u{1D5B9}"=>"Z", "\u{1D5BA}"=>"a", "\u{1D5BB}"=>"b", "\u{1D5BC}"=>"c", "\u{1D5BD}"=>"d", "\u{1D5BE}"=>"e", "\u{1D5BF}"=>"f", "\u{1D5C0}"=>"g", "\u{1D5C1}"=>"h", "\u{1D5C2}"=>"i", "\u{1D5C3}"=>"j", "\u{1D5C4}"=>"k", "\u{1D5C5}"=>"l", "\u{1D5C6}"=>"m", "\u{1D5C7}"=>"n", "\u{1D5C8}"=>"o", "\u{1D5C9}"=>"p", "\u{1D5CA}"=>"q", "\u{1D5CB}"=>"r", 
"\u{1D5CC}"=>"s", "\u{1D5CD}"=>"t", "\u{1D5CE}"=>"u", "\u{1D5CF}"=>"v", "\u{1D5D0}"=>"w", "\u{1D5D1}"=>"x", "\u{1D5D2}"=>"y", "\u{1D5D3}"=>"z", "\u{1D5D4}"=>"A", "\u{1D5D5}"=>"B", "\u{1D5D6}"=>"C", "\u{1D5D7}"=>"D", "\u{1D5D8}"=>"E", "\u{1D5D9}"=>"F", "\u{1D5DA}"=>"G", "\u{1D5DB}"=>"H", "\u{1D5DC}"=>"I", "\u{1D5DD}"=>"J", "\u{1D5DE}"=>"K", "\u{1D5DF}"=>"L", "\u{1D5E0}"=>"M", "\u{1D5E1}"=>"N", "\u{1D5E2}"=>"O", "\u{1D5E3}"=>"P", "\u{1D5E4}"=>"Q", "\u{1D5E5}"=>"R", "\u{1D5E6}"=>"S", "\u{1D5E7}"=>"T", "\u{1D5E8}"=>"U", "\u{1D5E9}"=>"V", "\u{1D5EA}"=>"W", "\u{1D5EB}"=>"X", "\u{1D5EC}"=>"Y", "\u{1D5ED}"=>"Z", "\u{1D5EE}"=>"a", "\u{1D5EF}"=>"b", "\u{1D5F0}"=>"c", "\u{1D5F1}"=>"d", "\u{1D5F2}"=>"e", "\u{1D5F3}"=>"f", "\u{1D5F4}"=>"g", "\u{1D5F5}"=>"h", "\u{1D5F6}"=>"i", "\u{1D5F7}"=>"j", "\u{1D5F8}"=>"k", "\u{1D5F9}"=>"l", "\u{1D5FA}"=>"m", "\u{1D5FB}"=>"n", "\u{1D5FC}"=>"o", "\u{1D5FD}"=>"p", "\u{1D5FE}"=>"q", "\u{1D5FF}"=>"r", "\u{1D600}"=>"s", "\u{1D601}"=>"t", "\u{1D602}"=>"u", "\u{1D603}"=>"v", "\u{1D604}"=>"w", "\u{1D605}"=>"x", "\u{1D606}"=>"y", "\u{1D607}"=>"z", "\u{1D608}"=>"A", "\u{1D609}"=>"B", "\u{1D60A}"=>"C", "\u{1D60B}"=>"D", "\u{1D60C}"=>"E", "\u{1D60D}"=>"F", "\u{1D60E}"=>"G", "\u{1D60F}"=>"H", "\u{1D610}"=>"I", "\u{1D611}"=>"J", "\u{1D612}"=>"K", "\u{1D613}"=>"L", "\u{1D614}"=>"M", "\u{1D615}"=>"N", "\u{1D616}"=>"O", "\u{1D617}"=>"P", "\u{1D618}"=>"Q", "\u{1D619}"=>"R", "\u{1D61A}"=>"S", "\u{1D61B}"=>"T", "\u{1D61C}"=>"U", "\u{1D61D}"=>"V", "\u{1D61E}"=>"W", "\u{1D61F}"=>"X", "\u{1D620}"=>"Y", "\u{1D621}"=>"Z", "\u{1D622}"=>"a", "\u{1D623}"=>"b", "\u{1D624}"=>"c", "\u{1D625}"=>"d", "\u{1D626}"=>"e", "\u{1D627}"=>"f", "\u{1D628}"=>"g", "\u{1D629}"=>"h", "\u{1D62A}"=>"i", "\u{1D62B}"=>"j", "\u{1D62C}"=>"k", "\u{1D62D}"=>"l", "\u{1D62E}"=>"m", "\u{1D62F}"=>"n", "\u{1D630}"=>"o", "\u{1D631}"=>"p", "\u{1D632}"=>"q", "\u{1D633}"=>"r", "\u{1D634}"=>"s", "\u{1D635}"=>"t", "\u{1D636}"=>"u", "\u{1D637}"=>"v", "\u{1D638}"=>"w", "\u{1D639}"=>"x", "\u{1D63A}"=>"y", "\u{1D63B}"=>"z", "\u{1D63C}"=>"A", "\u{1D63D}"=>"B", "\u{1D63E}"=>"C", "\u{1D63F}"=>"D", "\u{1D640}"=>"E", "\u{1D641}"=>"F", "\u{1D642}"=>"G", "\u{1D643}"=>"H", "\u{1D644}"=>"I", "\u{1D645}"=>"J", "\u{1D646}"=>"K", "\u{1D647}"=>"L", "\u{1D648}"=>"M", "\u{1D649}"=>"N", "\u{1D64A}"=>"O", "\u{1D64B}"=>"P", "\u{1D64C}"=>"Q", "\u{1D64D}"=>"R", "\u{1D64E}"=>"S", "\u{1D64F}"=>"T", "\u{1D650}"=>"U", "\u{1D651}"=>"V", "\u{1D652}"=>"W", "\u{1D653}"=>"X", "\u{1D654}"=>"Y", "\u{1D655}"=>"Z", "\u{1D656}"=>"a", "\u{1D657}"=>"b", "\u{1D658}"=>"c", "\u{1D659}"=>"d", "\u{1D65A}"=>"e", "\u{1D65B}"=>"f", "\u{1D65C}"=>"g", "\u{1D65D}"=>"h", "\u{1D65E}"=>"i", "\u{1D65F}"=>"j", "\u{1D660}"=>"k", "\u{1D661}"=>"l", "\u{1D662}"=>"m", "\u{1D663}"=>"n", "\u{1D664}"=>"o", "\u{1D665}"=>"p", "\u{1D666}"=>"q", "\u{1D667}"=>"r", "\u{1D668}"=>"s", "\u{1D669}"=>"t", "\u{1D66A}"=>"u", "\u{1D66B}"=>"v", "\u{1D66C}"=>"w", "\u{1D66D}"=>"x", "\u{1D66E}"=>"y", "\u{1D66F}"=>"z", "\u{1D670}"=>"A", "\u{1D671}"=>"B", "\u{1D672}"=>"C", "\u{1D673}"=>"D", "\u{1D674}"=>"E", "\u{1D675}"=>"F", "\u{1D676}"=>"G", "\u{1D677}"=>"H", "\u{1D678}"=>"I", "\u{1D679}"=>"J", "\u{1D67A}"=>"K", "\u{1D67B}"=>"L", "\u{1D67C}"=>"M", "\u{1D67D}"=>"N", "\u{1D67E}"=>"O", "\u{1D67F}"=>"P", "\u{1D680}"=>"Q", "\u{1D681}"=>"R", "\u{1D682}"=>"S", "\u{1D683}"=>"T", "\u{1D684}"=>"U", "\u{1D685}"=>"V", "\u{1D686}"=>"W", "\u{1D687}"=>"X", "\u{1D688}"=>"Y", "\u{1D689}"=>"Z", "\u{1D68A}"=>"a", "\u{1D68B}"=>"b", "\u{1D68C}"=>"c", "\u{1D68D}"=>"d", "\u{1D68E}"=>"e", "\u{1D68F}"=>"f", "\u{1D690}"=>"g", 
"\u{1D691}"=>"h", "\u{1D692}"=>"i", "\u{1D693}"=>"j", "\u{1D694}"=>"k", "\u{1D695}"=>"l", "\u{1D696}"=>"m", "\u{1D697}"=>"n", "\u{1D698}"=>"o", "\u{1D699}"=>"p", "\u{1D69A}"=>"q", "\u{1D69B}"=>"r", "\u{1D69C}"=>"s", "\u{1D69D}"=>"t", "\u{1D69E}"=>"u", "\u{1D69F}"=>"v", "\u{1D6A0}"=>"w", "\u{1D6A1}"=>"x", "\u{1D6A2}"=>"y", "\u{1D6A3}"=>"z", "\u{1D6A4}"=>"\u0131", "\u{1D6A5}"=>"\u0237", "\u{1D6A8}"=>"\u0391", "\u{1D6A9}"=>"\u0392", "\u{1D6AA}"=>"\u0393", "\u{1D6AB}"=>"\u0394", "\u{1D6AC}"=>"\u0395", "\u{1D6AD}"=>"\u0396", "\u{1D6AE}"=>"\u0397", "\u{1D6AF}"=>"\u0398", "\u{1D6B0}"=>"\u0399", "\u{1D6B1}"=>"\u039A", "\u{1D6B2}"=>"\u039B", "\u{1D6B3}"=>"\u039C", "\u{1D6B4}"=>"\u039D", "\u{1D6B5}"=>"\u039E", "\u{1D6B6}"=>"\u039F", "\u{1D6B7}"=>"\u03A0", "\u{1D6B8}"=>"\u03A1", "\u{1D6B9}"=>"\u0398", "\u{1D6BA}"=>"\u03A3", "\u{1D6BB}"=>"\u03A4", "\u{1D6BC}"=>"\u03A5", "\u{1D6BD}"=>"\u03A6", "\u{1D6BE}"=>"\u03A7", "\u{1D6BF}"=>"\u03A8", "\u{1D6C0}"=>"\u03A9", "\u{1D6C1}"=>"\u2207", "\u{1D6C2}"=>"\u03B1", "\u{1D6C3}"=>"\u03B2", "\u{1D6C4}"=>"\u03B3", "\u{1D6C5}"=>"\u03B4", "\u{1D6C6}"=>"\u03B5", "\u{1D6C7}"=>"\u03B6", "\u{1D6C8}"=>"\u03B7", "\u{1D6C9}"=>"\u03B8", "\u{1D6CA}"=>"\u03B9", "\u{1D6CB}"=>"\u03BA", "\u{1D6CC}"=>"\u03BB", "\u{1D6CD}"=>"\u03BC", "\u{1D6CE}"=>"\u03BD", "\u{1D6CF}"=>"\u03BE", "\u{1D6D0}"=>"\u03BF", "\u{1D6D1}"=>"\u03C0", "\u{1D6D2}"=>"\u03C1", "\u{1D6D3}"=>"\u03C2", "\u{1D6D4}"=>"\u03C3", "\u{1D6D5}"=>"\u03C4", "\u{1D6D6}"=>"\u03C5", "\u{1D6D7}"=>"\u03C6", "\u{1D6D8}"=>"\u03C7", "\u{1D6D9}"=>"\u03C8", "\u{1D6DA}"=>"\u03C9", "\u{1D6DB}"=>"\u2202", "\u{1D6DC}"=>"\u03B5", "\u{1D6DD}"=>"\u03B8", "\u{1D6DE}"=>"\u03BA", "\u{1D6DF}"=>"\u03C6", "\u{1D6E0}"=>"\u03C1", "\u{1D6E1}"=>"\u03C0", "\u{1D6E2}"=>"\u0391", "\u{1D6E3}"=>"\u0392", "\u{1D6E4}"=>"\u0393", "\u{1D6E5}"=>"\u0394", "\u{1D6E6}"=>"\u0395", "\u{1D6E7}"=>"\u0396", "\u{1D6E8}"=>"\u0397", "\u{1D6E9}"=>"\u0398", "\u{1D6EA}"=>"\u0399", "\u{1D6EB}"=>"\u039A", "\u{1D6EC}"=>"\u039B", "\u{1D6ED}"=>"\u039C", "\u{1D6EE}"=>"\u039D", "\u{1D6EF}"=>"\u039E", "\u{1D6F0}"=>"\u039F", "\u{1D6F1}"=>"\u03A0", "\u{1D6F2}"=>"\u03A1", "\u{1D6F3}"=>"\u0398", "\u{1D6F4}"=>"\u03A3", "\u{1D6F5}"=>"\u03A4", "\u{1D6F6}"=>"\u03A5", "\u{1D6F7}"=>"\u03A6", "\u{1D6F8}"=>"\u03A7", "\u{1D6F9}"=>"\u03A8", "\u{1D6FA}"=>"\u03A9", "\u{1D6FB}"=>"\u2207", "\u{1D6FC}"=>"\u03B1", "\u{1D6FD}"=>"\u03B2", "\u{1D6FE}"=>"\u03B3", "\u{1D6FF}"=>"\u03B4", "\u{1D700}"=>"\u03B5", "\u{1D701}"=>"\u03B6", "\u{1D702}"=>"\u03B7", "\u{1D703}"=>"\u03B8", "\u{1D704}"=>"\u03B9", "\u{1D705}"=>"\u03BA", "\u{1D706}"=>"\u03BB", "\u{1D707}"=>"\u03BC", "\u{1D708}"=>"\u03BD", "\u{1D709}"=>"\u03BE", "\u{1D70A}"=>"\u03BF", "\u{1D70B}"=>"\u03C0", "\u{1D70C}"=>"\u03C1", "\u{1D70D}"=>"\u03C2", "\u{1D70E}"=>"\u03C3", "\u{1D70F}"=>"\u03C4", "\u{1D710}"=>"\u03C5", "\u{1D711}"=>"\u03C6", "\u{1D712}"=>"\u03C7", "\u{1D713}"=>"\u03C8", "\u{1D714}"=>"\u03C9", "\u{1D715}"=>"\u2202", "\u{1D716}"=>"\u03B5", "\u{1D717}"=>"\u03B8", "\u{1D718}"=>"\u03BA", "\u{1D719}"=>"\u03C6", "\u{1D71A}"=>"\u03C1", "\u{1D71B}"=>"\u03C0", "\u{1D71C}"=>"\u0391", "\u{1D71D}"=>"\u0392", "\u{1D71E}"=>"\u0393", "\u{1D71F}"=>"\u0394", "\u{1D720}"=>"\u0395", "\u{1D721}"=>"\u0396", "\u{1D722}"=>"\u0397", "\u{1D723}"=>"\u0398", "\u{1D724}"=>"\u0399", "\u{1D725}"=>"\u039A", "\u{1D726}"=>"\u039B", "\u{1D727}"=>"\u039C", "\u{1D728}"=>"\u039D", "\u{1D729}"=>"\u039E", "\u{1D72A}"=>"\u039F", "\u{1D72B}"=>"\u03A0", "\u{1D72C}"=>"\u03A1", "\u{1D72D}"=>"\u0398", "\u{1D72E}"=>"\u03A3", "\u{1D72F}"=>"\u03A4", "\u{1D730}"=>"\u03A5", 
"\u{1D731}"=>"\u03A6", "\u{1D732}"=>"\u03A7", "\u{1D733}"=>"\u03A8", "\u{1D734}"=>"\u03A9", "\u{1D735}"=>"\u2207", "\u{1D736}"=>"\u03B1", "\u{1D737}"=>"\u03B2", "\u{1D738}"=>"\u03B3", "\u{1D739}"=>"\u03B4", "\u{1D73A}"=>"\u03B5", "\u{1D73B}"=>"\u03B6", "\u{1D73C}"=>"\u03B7", "\u{1D73D}"=>"\u03B8", "\u{1D73E}"=>"\u03B9", "\u{1D73F}"=>"\u03BA", "\u{1D740}"=>"\u03BB", "\u{1D741}"=>"\u03BC", "\u{1D742}"=>"\u03BD", "\u{1D743}"=>"\u03BE", "\u{1D744}"=>"\u03BF", "\u{1D745}"=>"\u03C0", "\u{1D746}"=>"\u03C1", "\u{1D747}"=>"\u03C2", "\u{1D748}"=>"\u03C3", "\u{1D749}"=>"\u03C4", "\u{1D74A}"=>"\u03C5", "\u{1D74B}"=>"\u03C6", "\u{1D74C}"=>"\u03C7", "\u{1D74D}"=>"\u03C8", "\u{1D74E}"=>"\u03C9", "\u{1D74F}"=>"\u2202", "\u{1D750}"=>"\u03B5", "\u{1D751}"=>"\u03B8", "\u{1D752}"=>"\u03BA", "\u{1D753}"=>"\u03C6", "\u{1D754}"=>"\u03C1", "\u{1D755}"=>"\u03C0", "\u{1D756}"=>"\u0391", "\u{1D757}"=>"\u0392", "\u{1D758}"=>"\u0393", "\u{1D759}"=>"\u0394", "\u{1D75A}"=>"\u0395", "\u{1D75B}"=>"\u0396", "\u{1D75C}"=>"\u0397", "\u{1D75D}"=>"\u0398", "\u{1D75E}"=>"\u0399", "\u{1D75F}"=>"\u039A", "\u{1D760}"=>"\u039B", "\u{1D761}"=>"\u039C", "\u{1D762}"=>"\u039D", "\u{1D763}"=>"\u039E", "\u{1D764}"=>"\u039F", "\u{1D765}"=>"\u03A0", "\u{1D766}"=>"\u03A1", "\u{1D767}"=>"\u0398", "\u{1D768}"=>"\u03A3", "\u{1D769}"=>"\u03A4", "\u{1D76A}"=>"\u03A5", "\u{1D76B}"=>"\u03A6", "\u{1D76C}"=>"\u03A7", "\u{1D76D}"=>"\u03A8", "\u{1D76E}"=>"\u03A9", "\u{1D76F}"=>"\u2207", "\u{1D770}"=>"\u03B1", "\u{1D771}"=>"\u03B2", "\u{1D772}"=>"\u03B3", "\u{1D773}"=>"\u03B4", "\u{1D774}"=>"\u03B5", "\u{1D775}"=>"\u03B6", "\u{1D776}"=>"\u03B7", "\u{1D777}"=>"\u03B8", "\u{1D778}"=>"\u03B9", "\u{1D779}"=>"\u03BA", "\u{1D77A}"=>"\u03BB", "\u{1D77B}"=>"\u03BC", "\u{1D77C}"=>"\u03BD", "\u{1D77D}"=>"\u03BE", "\u{1D77E}"=>"\u03BF", "\u{1D77F}"=>"\u03C0", "\u{1D780}"=>"\u03C1", "\u{1D781}"=>"\u03C2", "\u{1D782}"=>"\u03C3", "\u{1D783}"=>"\u03C4", "\u{1D784}"=>"\u03C5", "\u{1D785}"=>"\u03C6", "\u{1D786}"=>"\u03C7", "\u{1D787}"=>"\u03C8", "\u{1D788}"=>"\u03C9", "\u{1D789}"=>"\u2202", "\u{1D78A}"=>"\u03B5", "\u{1D78B}"=>"\u03B8", "\u{1D78C}"=>"\u03BA", "\u{1D78D}"=>"\u03C6", "\u{1D78E}"=>"\u03C1", "\u{1D78F}"=>"\u03C0", "\u{1D790}"=>"\u0391", "\u{1D791}"=>"\u0392", "\u{1D792}"=>"\u0393", "\u{1D793}"=>"\u0394", "\u{1D794}"=>"\u0395", "\u{1D795}"=>"\u0396", "\u{1D796}"=>"\u0397", "\u{1D797}"=>"\u0398", "\u{1D798}"=>"\u0399", "\u{1D799}"=>"\u039A", "\u{1D79A}"=>"\u039B", "\u{1D79B}"=>"\u039C", "\u{1D79C}"=>"\u039D", "\u{1D79D}"=>"\u039E", "\u{1D79E}"=>"\u039F", "\u{1D79F}"=>"\u03A0", "\u{1D7A0}"=>"\u03A1", "\u{1D7A1}"=>"\u0398", "\u{1D7A2}"=>"\u03A3", "\u{1D7A3}"=>"\u03A4", "\u{1D7A4}"=>"\u03A5", "\u{1D7A5}"=>"\u03A6", "\u{1D7A6}"=>"\u03A7", "\u{1D7A7}"=>"\u03A8", "\u{1D7A8}"=>"\u03A9", "\u{1D7A9}"=>"\u2207", "\u{1D7AA}"=>"\u03B1", "\u{1D7AB}"=>"\u03B2", "\u{1D7AC}"=>"\u03B3", "\u{1D7AD}"=>"\u03B4", "\u{1D7AE}"=>"\u03B5", "\u{1D7AF}"=>"\u03B6", "\u{1D7B0}"=>"\u03B7", "\u{1D7B1}"=>"\u03B8", "\u{1D7B2}"=>"\u03B9", "\u{1D7B3}"=>"\u03BA", "\u{1D7B4}"=>"\u03BB", "\u{1D7B5}"=>"\u03BC", "\u{1D7B6}"=>"\u03BD", "\u{1D7B7}"=>"\u03BE", "\u{1D7B8}"=>"\u03BF", "\u{1D7B9}"=>"\u03C0", "\u{1D7BA}"=>"\u03C1", "\u{1D7BB}"=>"\u03C2", "\u{1D7BC}"=>"\u03C3", "\u{1D7BD}"=>"\u03C4", "\u{1D7BE}"=>"\u03C5", "\u{1D7BF}"=>"\u03C6", "\u{1D7C0}"=>"\u03C7", "\u{1D7C1}"=>"\u03C8", "\u{1D7C2}"=>"\u03C9", "\u{1D7C3}"=>"\u2202", "\u{1D7C4}"=>"\u03B5", "\u{1D7C5}"=>"\u03B8", "\u{1D7C6}"=>"\u03BA", "\u{1D7C7}"=>"\u03C6", "\u{1D7C8}"=>"\u03C1", "\u{1D7C9}"=>"\u03C0", "\u{1D7CA}"=>"\u03DC", 
"\u{1D7CB}"=>"\u03DD", "\u{1D7CE}"=>"0", "\u{1D7CF}"=>"1", "\u{1D7D0}"=>"2", "\u{1D7D1}"=>"3", "\u{1D7D2}"=>"4", "\u{1D7D3}"=>"5", "\u{1D7D4}"=>"6", "\u{1D7D5}"=>"7", "\u{1D7D6}"=>"8", "\u{1D7D7}"=>"9", "\u{1D7D8}"=>"0", "\u{1D7D9}"=>"1", "\u{1D7DA}"=>"2", "\u{1D7DB}"=>"3", "\u{1D7DC}"=>"4", "\u{1D7DD}"=>"5", "\u{1D7DE}"=>"6", "\u{1D7DF}"=>"7", "\u{1D7E0}"=>"8", "\u{1D7E1}"=>"9", "\u{1D7E2}"=>"0", "\u{1D7E3}"=>"1", "\u{1D7E4}"=>"2", "\u{1D7E5}"=>"3", "\u{1D7E6}"=>"4", "\u{1D7E7}"=>"5", "\u{1D7E8}"=>"6", "\u{1D7E9}"=>"7", "\u{1D7EA}"=>"8", "\u{1D7EB}"=>"9", "\u{1D7EC}"=>"0", "\u{1D7ED}"=>"1", "\u{1D7EE}"=>"2", "\u{1D7EF}"=>"3", "\u{1D7F0}"=>"4", "\u{1D7F1}"=>"5", "\u{1D7F2}"=>"6", "\u{1D7F3}"=>"7", "\u{1D7F4}"=>"8", "\u{1D7F5}"=>"9", "\u{1D7F6}"=>"0", "\u{1D7F7}"=>"1", "\u{1D7F8}"=>"2", "\u{1D7F9}"=>"3", "\u{1D7FA}"=>"4", "\u{1D7FB}"=>"5", "\u{1D7FC}"=>"6", "\u{1D7FD}"=>"7", "\u{1D7FE}"=>"8", "\u{1D7FF}"=>"9", "\u{1EE00}"=>"\u0627", "\u{1EE01}"=>"\u0628", "\u{1EE02}"=>"\u062C", "\u{1EE03}"=>"\u062F", "\u{1EE05}"=>"\u0648", "\u{1EE06}"=>"\u0632", "\u{1EE07}"=>"\u062D", "\u{1EE08}"=>"\u0637", "\u{1EE09}"=>"\u064A", "\u{1EE0A}"=>"\u0643", "\u{1EE0B}"=>"\u0644", "\u{1EE0C}"=>"\u0645", "\u{1EE0D}"=>"\u0646", "\u{1EE0E}"=>"\u0633", "\u{1EE0F}"=>"\u0639", "\u{1EE10}"=>"\u0641", "\u{1EE11}"=>"\u0635", "\u{1EE12}"=>"\u0642", "\u{1EE13}"=>"\u0631", "\u{1EE14}"=>"\u0634", "\u{1EE15}"=>"\u062A", "\u{1EE16}"=>"\u062B", "\u{1EE17}"=>"\u062E", "\u{1EE18}"=>"\u0630", "\u{1EE19}"=>"\u0636", "\u{1EE1A}"=>"\u0638", "\u{1EE1B}"=>"\u063A", "\u{1EE1C}"=>"\u066E", "\u{1EE1D}"=>"\u06BA", "\u{1EE1E}"=>"\u06A1", "\u{1EE1F}"=>"\u066F", "\u{1EE21}"=>"\u0628", "\u{1EE22}"=>"\u062C", "\u{1EE24}"=>"\u0647", "\u{1EE27}"=>"\u062D", "\u{1EE29}"=>"\u064A", "\u{1EE2A}"=>"\u0643", "\u{1EE2B}"=>"\u0644", "\u{1EE2C}"=>"\u0645", "\u{1EE2D}"=>"\u0646", "\u{1EE2E}"=>"\u0633", "\u{1EE2F}"=>"\u0639", "\u{1EE30}"=>"\u0641", "\u{1EE31}"=>"\u0635", "\u{1EE32}"=>"\u0642", "\u{1EE34}"=>"\u0634", "\u{1EE35}"=>"\u062A", "\u{1EE36}"=>"\u062B", "\u{1EE37}"=>"\u062E", "\u{1EE39}"=>"\u0636", "\u{1EE3B}"=>"\u063A", "\u{1EE42}"=>"\u062C", "\u{1EE47}"=>"\u062D", "\u{1EE49}"=>"\u064A", "\u{1EE4B}"=>"\u0644", "\u{1EE4D}"=>"\u0646", "\u{1EE4E}"=>"\u0633", "\u{1EE4F}"=>"\u0639", "\u{1EE51}"=>"\u0635", "\u{1EE52}"=>"\u0642", "\u{1EE54}"=>"\u0634", "\u{1EE57}"=>"\u062E", "\u{1EE59}"=>"\u0636", "\u{1EE5B}"=>"\u063A", "\u{1EE5D}"=>"\u06BA", "\u{1EE5F}"=>"\u066F", "\u{1EE61}"=>"\u0628", "\u{1EE62}"=>"\u062C", "\u{1EE64}"=>"\u0647", "\u{1EE67}"=>"\u062D", "\u{1EE68}"=>"\u0637", "\u{1EE69}"=>"\u064A", "\u{1EE6A}"=>"\u0643", "\u{1EE6C}"=>"\u0645", "\u{1EE6D}"=>"\u0646", "\u{1EE6E}"=>"\u0633", "\u{1EE6F}"=>"\u0639", "\u{1EE70}"=>"\u0641", "\u{1EE71}"=>"\u0635", "\u{1EE72}"=>"\u0642", "\u{1EE74}"=>"\u0634", "\u{1EE75}"=>"\u062A", "\u{1EE76}"=>"\u062B", "\u{1EE77}"=>"\u062E", "\u{1EE79}"=>"\u0636", "\u{1EE7A}"=>"\u0638", "\u{1EE7B}"=>"\u063A", "\u{1EE7C}"=>"\u066E", "\u{1EE7E}"=>"\u06A1", "\u{1EE80}"=>"\u0627", "\u{1EE81}"=>"\u0628", "\u{1EE82}"=>"\u062C", "\u{1EE83}"=>"\u062F", "\u{1EE84}"=>"\u0647", "\u{1EE85}"=>"\u0648", "\u{1EE86}"=>"\u0632", "\u{1EE87}"=>"\u062D", "\u{1EE88}"=>"\u0637", "\u{1EE89}"=>"\u064A", "\u{1EE8B}"=>"\u0644", "\u{1EE8C}"=>"\u0645", "\u{1EE8D}"=>"\u0646", "\u{1EE8E}"=>"\u0633", "\u{1EE8F}"=>"\u0639", "\u{1EE90}"=>"\u0641", "\u{1EE91}"=>"\u0635", "\u{1EE92}"=>"\u0642", "\u{1EE93}"=>"\u0631", "\u{1EE94}"=>"\u0634", "\u{1EE95}"=>"\u062A", "\u{1EE96}"=>"\u062B", "\u{1EE97}"=>"\u062E", "\u{1EE98}"=>"\u0630", "\u{1EE99}"=>"\u0636", 
"\u{1EE9A}"=>"\u0638", "\u{1EE9B}"=>"\u063A", "\u{1EEA1}"=>"\u0628", "\u{1EEA2}"=>"\u062C", "\u{1EEA3}"=>"\u062F", "\u{1EEA5}"=>"\u0648", "\u{1EEA6}"=>"\u0632", "\u{1EEA7}"=>"\u062D", "\u{1EEA8}"=>"\u0637", "\u{1EEA9}"=>"\u064A", "\u{1EEAB}"=>"\u0644", "\u{1EEAC}"=>"\u0645", "\u{1EEAD}"=>"\u0646", "\u{1EEAE}"=>"\u0633", "\u{1EEAF}"=>"\u0639", "\u{1EEB0}"=>"\u0641", "\u{1EEB1}"=>"\u0635", "\u{1EEB2}"=>"\u0642", "\u{1EEB3}"=>"\u0631", "\u{1EEB4}"=>"\u0634", "\u{1EEB5}"=>"\u062A", "\u{1EEB6}"=>"\u062B", "\u{1EEB7}"=>"\u062E", "\u{1EEB8}"=>"\u0630", "\u{1EEB9}"=>"\u0636", "\u{1EEBA}"=>"\u0638", "\u{1EEBB}"=>"\u063A", "\u{1F100}"=>"0.", "\u{1F101}"=>"0,", "\u{1F102}"=>"1,", "\u{1F103}"=>"2,", "\u{1F104}"=>"3,", "\u{1F105}"=>"4,", "\u{1F106}"=>"5,", "\u{1F107}"=>"6,", "\u{1F108}"=>"7,", "\u{1F109}"=>"8,", "\u{1F10A}"=>"9,", "\u{1F110}"=>"(A)", "\u{1F111}"=>"(B)", "\u{1F112}"=>"(C)", "\u{1F113}"=>"(D)", "\u{1F114}"=>"(E)", "\u{1F115}"=>"(F)", "\u{1F116}"=>"(G)", "\u{1F117}"=>"(H)", "\u{1F118}"=>"(I)", "\u{1F119}"=>"(J)", "\u{1F11A}"=>"(K)", "\u{1F11B}"=>"(L)", "\u{1F11C}"=>"(M)", "\u{1F11D}"=>"(N)", "\u{1F11E}"=>"(O)", "\u{1F11F}"=>"(P)", "\u{1F120}"=>"(Q)", "\u{1F121}"=>"(R)", "\u{1F122}"=>"(S)", "\u{1F123}"=>"(T)", "\u{1F124}"=>"(U)", "\u{1F125}"=>"(V)", "\u{1F126}"=>"(W)", "\u{1F127}"=>"(X)", "\u{1F128}"=>"(Y)", "\u{1F129}"=>"(Z)", "\u{1F12A}"=>"\u3014S\u3015", "\u{1F12B}"=>"C", "\u{1F12C}"=>"R", "\u{1F12D}"=>"CD", "\u{1F12E}"=>"WZ", "\u{1F130}"=>"A", "\u{1F131}"=>"B", "\u{1F132}"=>"C", "\u{1F133}"=>"D", "\u{1F134}"=>"E", "\u{1F135}"=>"F", "\u{1F136}"=>"G", "\u{1F137}"=>"H", "\u{1F138}"=>"I", "\u{1F139}"=>"J", "\u{1F13A}"=>"K", "\u{1F13B}"=>"L", "\u{1F13C}"=>"M", "\u{1F13D}"=>"N", "\u{1F13E}"=>"O", "\u{1F13F}"=>"P", "\u{1F140}"=>"Q", "\u{1F141}"=>"R", "\u{1F142}"=>"S", "\u{1F143}"=>"T", "\u{1F144}"=>"U", "\u{1F145}"=>"V", "\u{1F146}"=>"W", "\u{1F147}"=>"X", "\u{1F148}"=>"Y", "\u{1F149}"=>"Z", "\u{1F14A}"=>"HV", "\u{1F14B}"=>"MV", "\u{1F14C}"=>"SD", "\u{1F14D}"=>"SS", "\u{1F14E}"=>"PPV", "\u{1F14F}"=>"WC", "\u{1F16A}"=>"MC", "\u{1F16B}"=>"MD", "\u{1F190}"=>"DJ", "\u{1F200}"=>"\u307B\u304B", "\u{1F201}"=>"\u30B3\u30B3", "\u{1F202}"=>"\u30B5", "\u{1F210}"=>"\u624B", "\u{1F211}"=>"\u5B57", "\u{1F212}"=>"\u53CC", "\u{1F213}"=>"\u30C7", "\u{1F214}"=>"\u4E8C", "\u{1F215}"=>"\u591A", "\u{1F216}"=>"\u89E3", "\u{1F217}"=>"\u5929", "\u{1F218}"=>"\u4EA4", "\u{1F219}"=>"\u6620", "\u{1F21A}"=>"\u7121", "\u{1F21B}"=>"\u6599", "\u{1F21C}"=>"\u524D", "\u{1F21D}"=>"\u5F8C", "\u{1F21E}"=>"\u518D", "\u{1F21F}"=>"\u65B0", "\u{1F220}"=>"\u521D", "\u{1F221}"=>"\u7D42", "\u{1F222}"=>"\u751F", "\u{1F223}"=>"\u8CA9", "\u{1F224}"=>"\u58F0", "\u{1F225}"=>"\u5439", "\u{1F226}"=>"\u6F14", "\u{1F227}"=>"\u6295", "\u{1F228}"=>"\u6355", "\u{1F229}"=>"\u4E00", "\u{1F22A}"=>"\u4E09", "\u{1F22B}"=>"\u904A", "\u{1F22C}"=>"\u5DE6", "\u{1F22D}"=>"\u4E2D", "\u{1F22E}"=>"\u53F3", "\u{1F22F}"=>"\u6307", "\u{1F230}"=>"\u8D70", "\u{1F231}"=>"\u6253", "\u{1F232}"=>"\u7981", "\u{1F233}"=>"\u7A7A", "\u{1F234}"=>"\u5408", "\u{1F235}"=>"\u6E80", "\u{1F236}"=>"\u6709", "\u{1F237}"=>"\u6708", "\u{1F238}"=>"\u7533", "\u{1F239}"=>"\u5272", "\u{1F23A}"=>"\u55B6", "\u{1F23B}"=>"\u914D", "\u{1F240}"=>"\u3014\u672C\u3015", "\u{1F241}"=>"\u3014\u4E09\u3015", "\u{1F242}"=>"\u3014\u4E8C\u3015", "\u{1F243}"=>"\u3014\u5B89\u3015", "\u{1F244}"=>"\u3014\u70B9\u3015", "\u{1F245}"=>"\u3014\u6253\u3015", "\u{1F246}"=>"\u3014\u76D7\u3015", "\u{1F247}"=>"\u3014\u52DD\u3015", "\u{1F248}"=>"\u3014\u6557\u3015", "\u{1F250}"=>"\u5F97", "\u{1F251}"=>"\u53EF", 
"\u0385"=>" \u0308\u0301", "\u03D3"=>"\u03A5\u0301", "\u03D4"=>"\u03A5\u0308", "\u1E9B"=>"s\u0307", "\u1FC1"=>" \u0308\u0342", "\u1FCD"=>" \u0313\u0300", "\u1FCE"=>" \u0313\u0301", "\u1FCF"=>" \u0313\u0342", "\u1FDD"=>" \u0314\u0300", "\u1FDE"=>" \u0314\u0301", "\u1FDF"=>" \u0314\u0342", "\u1FED"=>" \u0308\u0300", "\u1FEE"=>" \u0308\u0301", "\u1FFD"=>" \u0301", "\u2000"=>" ", "\u2001"=>" ", }.freeze COMPOSITION_TABLE = { "A\u0300"=>"\u00C0", "A\u0301"=>"\u00C1", "A\u0302"=>"\u00C2", "A\u0303"=>"\u00C3", "A\u0308"=>"\u00C4", "A\u030A"=>"\u00C5", "C\u0327"=>"\u00C7", "E\u0300"=>"\u00C8", "E\u0301"=>"\u00C9", "E\u0302"=>"\u00CA", "E\u0308"=>"\u00CB", "I\u0300"=>"\u00CC", "I\u0301"=>"\u00CD", "I\u0302"=>"\u00CE", "I\u0308"=>"\u00CF", "N\u0303"=>"\u00D1", "O\u0300"=>"\u00D2", "O\u0301"=>"\u00D3", "O\u0302"=>"\u00D4", "O\u0303"=>"\u00D5", "O\u0308"=>"\u00D6", "U\u0300"=>"\u00D9", "U\u0301"=>"\u00DA", "U\u0302"=>"\u00DB", "U\u0308"=>"\u00DC", "Y\u0301"=>"\u00DD", "a\u0300"=>"\u00E0", "a\u0301"=>"\u00E1", "a\u0302"=>"\u00E2", "a\u0303"=>"\u00E3", "a\u0308"=>"\u00E4", "a\u030A"=>"\u00E5", "c\u0327"=>"\u00E7", "e\u0300"=>"\u00E8", "e\u0301"=>"\u00E9", "e\u0302"=>"\u00EA", "e\u0308"=>"\u00EB", "i\u0300"=>"\u00EC", "i\u0301"=>"\u00ED", "i\u0302"=>"\u00EE", "i\u0308"=>"\u00EF", "n\u0303"=>"\u00F1", "o\u0300"=>"\u00F2", "o\u0301"=>"\u00F3", "o\u0302"=>"\u00F4", "o\u0303"=>"\u00F5", "o\u0308"=>"\u00F6", "u\u0300"=>"\u00F9", "u\u0301"=>"\u00FA", "u\u0302"=>"\u00FB", "u\u0308"=>"\u00FC", "y\u0301"=>"\u00FD", "y\u0308"=>"\u00FF", "A\u0304"=>"\u0100", "a\u0304"=>"\u0101", "A\u0306"=>"\u0102", "a\u0306"=>"\u0103", "A\u0328"=>"\u0104", "a\u0328"=>"\u0105", "C\u0301"=>"\u0106", "c\u0301"=>"\u0107", "C\u0302"=>"\u0108", "c\u0302"=>"\u0109", "C\u0307"=>"\u010A", "c\u0307"=>"\u010B", "C\u030C"=>"\u010C", "c\u030C"=>"\u010D", "D\u030C"=>"\u010E", "d\u030C"=>"\u010F", "E\u0304"=>"\u0112", "e\u0304"=>"\u0113", "E\u0306"=>"\u0114", "e\u0306"=>"\u0115", "E\u0307"=>"\u0116", "e\u0307"=>"\u0117", "E\u0328"=>"\u0118", "e\u0328"=>"\u0119", "E\u030C"=>"\u011A", "e\u030C"=>"\u011B", "G\u0302"=>"\u011C", "g\u0302"=>"\u011D", "G\u0306"=>"\u011E", "g\u0306"=>"\u011F", "G\u0307"=>"\u0120", "g\u0307"=>"\u0121", "G\u0327"=>"\u0122", "g\u0327"=>"\u0123", "H\u0302"=>"\u0124", "h\u0302"=>"\u0125", "I\u0303"=>"\u0128", "i\u0303"=>"\u0129", "I\u0304"=>"\u012A", "i\u0304"=>"\u012B", "I\u0306"=>"\u012C", "i\u0306"=>"\u012D", "I\u0328"=>"\u012E", "i\u0328"=>"\u012F", "I\u0307"=>"\u0130", "J\u0302"=>"\u0134", "j\u0302"=>"\u0135", "K\u0327"=>"\u0136", "k\u0327"=>"\u0137", "L\u0301"=>"\u0139", "l\u0301"=>"\u013A", "L\u0327"=>"\u013B", "l\u0327"=>"\u013C", "L\u030C"=>"\u013D", "l\u030C"=>"\u013E", "N\u0301"=>"\u0143", "n\u0301"=>"\u0144", "N\u0327"=>"\u0145", "n\u0327"=>"\u0146", "N\u030C"=>"\u0147", "n\u030C"=>"\u0148", "O\u0304"=>"\u014C", "o\u0304"=>"\u014D", "O\u0306"=>"\u014E", "o\u0306"=>"\u014F", "O\u030B"=>"\u0150", "o\u030B"=>"\u0151", "R\u0301"=>"\u0154", "r\u0301"=>"\u0155", "R\u0327"=>"\u0156", "r\u0327"=>"\u0157", "R\u030C"=>"\u0158", "r\u030C"=>"\u0159", "S\u0301"=>"\u015A", "s\u0301"=>"\u015B", "S\u0302"=>"\u015C", "s\u0302"=>"\u015D", "S\u0327"=>"\u015E", "s\u0327"=>"\u015F", "S\u030C"=>"\u0160", "s\u030C"=>"\u0161", "T\u0327"=>"\u0162", "t\u0327"=>"\u0163", "T\u030C"=>"\u0164", "t\u030C"=>"\u0165", "U\u0303"=>"\u0168", "u\u0303"=>"\u0169", "U\u0304"=>"\u016A", "u\u0304"=>"\u016B", "U\u0306"=>"\u016C", "u\u0306"=>"\u016D", "U\u030A"=>"\u016E", "u\u030A"=>"\u016F", "U\u030B"=>"\u0170", "u\u030B"=>"\u0171", "U\u0328"=>"\u0172", 
"u\u0328"=>"\u0173", "W\u0302"=>"\u0174", "w\u0302"=>"\u0175", "Y\u0302"=>"\u0176", "y\u0302"=>"\u0177", "Y\u0308"=>"\u0178", "Z\u0301"=>"\u0179", "z\u0301"=>"\u017A", "Z\u0307"=>"\u017B", "z\u0307"=>"\u017C", "Z\u030C"=>"\u017D", "z\u030C"=>"\u017E", "O\u031B"=>"\u01A0", "o\u031B"=>"\u01A1", "U\u031B"=>"\u01AF", "u\u031B"=>"\u01B0", "A\u030C"=>"\u01CD", "a\u030C"=>"\u01CE", "I\u030C"=>"\u01CF", "i\u030C"=>"\u01D0", "O\u030C"=>"\u01D1", "o\u030C"=>"\u01D2", "U\u030C"=>"\u01D3", "u\u030C"=>"\u01D4", "\u00DC\u0304"=>"\u01D5", "\u00FC\u0304"=>"\u01D6", "\u00DC\u0301"=>"\u01D7", "\u00FC\u0301"=>"\u01D8", "\u00DC\u030C"=>"\u01D9", "\u00FC\u030C"=>"\u01DA", "\u00DC\u0300"=>"\u01DB", "\u00FC\u0300"=>"\u01DC", "\u00C4\u0304"=>"\u01DE", "\u00E4\u0304"=>"\u01DF", "\u0226\u0304"=>"\u01E0", "\u0227\u0304"=>"\u01E1", "\u00C6\u0304"=>"\u01E2", "\u00E6\u0304"=>"\u01E3", "G\u030C"=>"\u01E6", "g\u030C"=>"\u01E7", "K\u030C"=>"\u01E8", "k\u030C"=>"\u01E9", "O\u0328"=>"\u01EA", "o\u0328"=>"\u01EB", "\u01EA\u0304"=>"\u01EC", "\u01EB\u0304"=>"\u01ED", "\u01B7\u030C"=>"\u01EE", "\u0292\u030C"=>"\u01EF", "j\u030C"=>"\u01F0", "G\u0301"=>"\u01F4", "g\u0301"=>"\u01F5", "N\u0300"=>"\u01F8", "n\u0300"=>"\u01F9", "\u00C5\u0301"=>"\u01FA", "\u00E5\u0301"=>"\u01FB", "\u00C6\u0301"=>"\u01FC", "\u00E6\u0301"=>"\u01FD", "\u00D8\u0301"=>"\u01FE", "\u00F8\u0301"=>"\u01FF", "A\u030F"=>"\u0200", "a\u030F"=>"\u0201", "A\u0311"=>"\u0202", "a\u0311"=>"\u0203", "E\u030F"=>"\u0204", "e\u030F"=>"\u0205", "E\u0311"=>"\u0206", "e\u0311"=>"\u0207", "I\u030F"=>"\u0208", "i\u030F"=>"\u0209", "I\u0311"=>"\u020A", "i\u0311"=>"\u020B", "O\u030F"=>"\u020C", "o\u030F"=>"\u020D", "O\u0311"=>"\u020E", "o\u0311"=>"\u020F", "R\u030F"=>"\u0210", "r\u030F"=>"\u0211", "R\u0311"=>"\u0212", "r\u0311"=>"\u0213", "U\u030F"=>"\u0214", "u\u030F"=>"\u0215", "U\u0311"=>"\u0216", "u\u0311"=>"\u0217", "S\u0326"=>"\u0218", "s\u0326"=>"\u0219", "T\u0326"=>"\u021A", "t\u0326"=>"\u021B", "H\u030C"=>"\u021E", "h\u030C"=>"\u021F", "A\u0307"=>"\u0226", "a\u0307"=>"\u0227", "E\u0327"=>"\u0228", "e\u0327"=>"\u0229", "\u00D6\u0304"=>"\u022A", "\u00F6\u0304"=>"\u022B", "\u00D5\u0304"=>"\u022C", "\u00F5\u0304"=>"\u022D", "O\u0307"=>"\u022E", "o\u0307"=>"\u022F", "\u022E\u0304"=>"\u0230", "\u022F\u0304"=>"\u0231", "Y\u0304"=>"\u0232", "y\u0304"=>"\u0233", "\u00A8\u0301"=>"\u0385", "\u0391\u0301"=>"\u0386", "\u0395\u0301"=>"\u0388", "\u0397\u0301"=>"\u0389", "\u0399\u0301"=>"\u038A", "\u039F\u0301"=>"\u038C", "\u03A5\u0301"=>"\u038E", "\u03A9\u0301"=>"\u038F", "\u03CA\u0301"=>"\u0390", "\u0399\u0308"=>"\u03AA", "\u03A5\u0308"=>"\u03AB", "\u03B1\u0301"=>"\u03AC", "\u03B5\u0301"=>"\u03AD", "\u03B7\u0301"=>"\u03AE", "\u03B9\u0301"=>"\u03AF", "\u03CB\u0301"=>"\u03B0", "\u03B9\u0308"=>"\u03CA", "\u03C5\u0308"=>"\u03CB", "\u03BF\u0301"=>"\u03CC", "\u03C5\u0301"=>"\u03CD", "\u03C9\u0301"=>"\u03CE", "\u03D2\u0301"=>"\u03D3", "\u03D2\u0308"=>"\u03D4", "\u0415\u0300"=>"\u0400", "\u0415\u0308"=>"\u0401", "\u0413\u0301"=>"\u0403", "\u0406\u0308"=>"\u0407", "\u041A\u0301"=>"\u040C", "\u0418\u0300"=>"\u040D", "\u0423\u0306"=>"\u040E", "\u0418\u0306"=>"\u0419", "\u0438\u0306"=>"\u0439", "\u0435\u0300"=>"\u0450", "\u0435\u0308"=>"\u0451", "\u0433\u0301"=>"\u0453", "\u0456\u0308"=>"\u0457", "\u043A\u0301"=>"\u045C", "\u0438\u0300"=>"\u045D", "\u0443\u0306"=>"\u045E", "\u0474\u030F"=>"\u0476", "\u0475\u030F"=>"\u0477", "\u0416\u0306"=>"\u04C1", "\u0436\u0306"=>"\u04C2", "\u0410\u0306"=>"\u04D0", "\u0430\u0306"=>"\u04D1", "\u0410\u0308"=>"\u04D2", "\u0430\u0308"=>"\u04D3", 
"\u0415\u0306"=>"\u04D6", "\u0435\u0306"=>"\u04D7", "\u04D8\u0308"=>"\u04DA", "\u04D9\u0308"=>"\u04DB", "\u0416\u0308"=>"\u04DC", "\u0436\u0308"=>"\u04DD", "\u0417\u0308"=>"\u04DE", "\u0437\u0308"=>"\u04DF", "\u0418\u0304"=>"\u04E2", "\u0438\u0304"=>"\u04E3", "\u0418\u0308"=>"\u04E4", "\u0438\u0308"=>"\u04E5", "\u041E\u0308"=>"\u04E6", "\u043E\u0308"=>"\u04E7", "\u04E8\u0308"=>"\u04EA", "\u04E9\u0308"=>"\u04EB", "\u042D\u0308"=>"\u04EC", "\u044D\u0308"=>"\u04ED", "\u0423\u0304"=>"\u04EE", "\u0443\u0304"=>"\u04EF", "\u0423\u0308"=>"\u04F0", "\u0443\u0308"=>"\u04F1", "\u0423\u030B"=>"\u04F2", "\u0443\u030B"=>"\u04F3", "\u0427\u0308"=>"\u04F4", "\u0447\u0308"=>"\u04F5", "\u042B\u0308"=>"\u04F8", "\u044B\u0308"=>"\u04F9", "\u0627\u0653"=>"\u0622", "\u0627\u0654"=>"\u0623", "\u0648\u0654"=>"\u0624", "\u0627\u0655"=>"\u0625", "\u064A\u0654"=>"\u0626", "\u06D5\u0654"=>"\u06C0", "\u06C1\u0654"=>"\u06C2", "\u06D2\u0654"=>"\u06D3", "\u0928\u093C"=>"\u0929", "\u0930\u093C"=>"\u0931", "\u0933\u093C"=>"\u0934", "\u09C7\u09BE"=>"\u09CB", "\u09C7\u09D7"=>"\u09CC", "\u0B47\u0B56"=>"\u0B48", "\u0B47\u0B3E"=>"\u0B4B", "\u0B47\u0B57"=>"\u0B4C", "\u0B92\u0BD7"=>"\u0B94", "\u0BC6\u0BBE"=>"\u0BCA", "\u0BC7\u0BBE"=>"\u0BCB", "\u0BC6\u0BD7"=>"\u0BCC", "\u0C46\u0C56"=>"\u0C48", "\u0CBF\u0CD5"=>"\u0CC0", "\u0CC6\u0CD5"=>"\u0CC7", "\u0CC6\u0CD6"=>"\u0CC8", "\u0CC6\u0CC2"=>"\u0CCA", "\u0CCA\u0CD5"=>"\u0CCB", "\u0D46\u0D3E"=>"\u0D4A", "\u0D47\u0D3E"=>"\u0D4B", "\u0D46\u0D57"=>"\u0D4C", "\u0DD9\u0DCA"=>"\u0DDA", "\u0DD9\u0DCF"=>"\u0DDC", "\u0DDC\u0DCA"=>"\u0DDD", "\u0DD9\u0DDF"=>"\u0DDE", "\u1025\u102E"=>"\u1026", "\u1B05\u1B35"=>"\u1B06", "\u1B07\u1B35"=>"\u1B08", "\u1B09\u1B35"=>"\u1B0A", "\u1B0B\u1B35"=>"\u1B0C", "\u1B0D\u1B35"=>"\u1B0E", "\u1B11\u1B35"=>"\u1B12", "\u1B3A\u1B35"=>"\u1B3B", "\u1B3C\u1B35"=>"\u1B3D", "\u1B3E\u1B35"=>"\u1B40", "\u1B3F\u1B35"=>"\u1B41", "\u1B42\u1B35"=>"\u1B43", "A\u0325"=>"\u1E00", "a\u0325"=>"\u1E01", "B\u0307"=>"\u1E02", "b\u0307"=>"\u1E03", "B\u0323"=>"\u1E04", "b\u0323"=>"\u1E05", "B\u0331"=>"\u1E06", "b\u0331"=>"\u1E07", "\u00C7\u0301"=>"\u1E08", "\u00E7\u0301"=>"\u1E09", "D\u0307"=>"\u1E0A", "d\u0307"=>"\u1E0B", "D\u0323"=>"\u1E0C", "d\u0323"=>"\u1E0D", "D\u0331"=>"\u1E0E", "d\u0331"=>"\u1E0F", "D\u0327"=>"\u1E10", "d\u0327"=>"\u1E11", "D\u032D"=>"\u1E12", "d\u032D"=>"\u1E13", "\u0112\u0300"=>"\u1E14", "\u0113\u0300"=>"\u1E15", "\u0112\u0301"=>"\u1E16", "\u0113\u0301"=>"\u1E17", "E\u032D"=>"\u1E18", "e\u032D"=>"\u1E19", "E\u0330"=>"\u1E1A", "e\u0330"=>"\u1E1B", "\u0228\u0306"=>"\u1E1C", "\u0229\u0306"=>"\u1E1D", "F\u0307"=>"\u1E1E", "f\u0307"=>"\u1E1F", "G\u0304"=>"\u1E20", "g\u0304"=>"\u1E21", "H\u0307"=>"\u1E22", "h\u0307"=>"\u1E23", "H\u0323"=>"\u1E24", "h\u0323"=>"\u1E25", "H\u0308"=>"\u1E26", "h\u0308"=>"\u1E27", "H\u0327"=>"\u1E28", "h\u0327"=>"\u1E29", "H\u032E"=>"\u1E2A", "h\u032E"=>"\u1E2B", "I\u0330"=>"\u1E2C", "i\u0330"=>"\u1E2D", "\u00CF\u0301"=>"\u1E2E", "\u00EF\u0301"=>"\u1E2F", "K\u0301"=>"\u1E30", "k\u0301"=>"\u1E31", "K\u0323"=>"\u1E32", "k\u0323"=>"\u1E33", "K\u0331"=>"\u1E34", "k\u0331"=>"\u1E35", "L\u0323"=>"\u1E36", "l\u0323"=>"\u1E37", "\u1E36\u0304"=>"\u1E38", "\u1E37\u0304"=>"\u1E39", "L\u0331"=>"\u1E3A", "l\u0331"=>"\u1E3B", "L\u032D"=>"\u1E3C", "l\u032D"=>"\u1E3D", "M\u0301"=>"\u1E3E", "m\u0301"=>"\u1E3F", "M\u0307"=>"\u1E40", "m\u0307"=>"\u1E41", "M\u0323"=>"\u1E42", "m\u0323"=>"\u1E43", "N\u0307"=>"\u1E44", "n\u0307"=>"\u1E45", "N\u0323"=>"\u1E46", "n\u0323"=>"\u1E47", "N\u0331"=>"\u1E48", "n\u0331"=>"\u1E49", "N\u032D"=>"\u1E4A", "n\u032D"=>"\u1E4B", 
"\u00D5\u0301"=>"\u1E4C", "\u00F5\u0301"=>"\u1E4D", "\u00D5\u0308"=>"\u1E4E", "\u00F5\u0308"=>"\u1E4F", "\u014C\u0300"=>"\u1E50", "\u014D\u0300"=>"\u1E51", "\u014C\u0301"=>"\u1E52", "\u014D\u0301"=>"\u1E53", "P\u0301"=>"\u1E54", "p\u0301"=>"\u1E55", "P\u0307"=>"\u1E56", "p\u0307"=>"\u1E57", "R\u0307"=>"\u1E58", "r\u0307"=>"\u1E59", "R\u0323"=>"\u1E5A", "r\u0323"=>"\u1E5B", "\u1E5A\u0304"=>"\u1E5C", "\u1E5B\u0304"=>"\u1E5D", "R\u0331"=>"\u1E5E", "r\u0331"=>"\u1E5F", "S\u0307"=>"\u1E60", "s\u0307"=>"\u1E61", "S\u0323"=>"\u1E62", "s\u0323"=>"\u1E63", "\u015A\u0307"=>"\u1E64", "\u015B\u0307"=>"\u1E65", "\u0160\u0307"=>"\u1E66", "\u0161\u0307"=>"\u1E67", "\u1E62\u0307"=>"\u1E68", "\u1E63\u0307"=>"\u1E69", "T\u0307"=>"\u1E6A", "t\u0307"=>"\u1E6B", "T\u0323"=>"\u1E6C", "t\u0323"=>"\u1E6D", "T\u0331"=>"\u1E6E", "t\u0331"=>"\u1E6F", "T\u032D"=>"\u1E70", "t\u032D"=>"\u1E71", "U\u0324"=>"\u1E72", "u\u0324"=>"\u1E73", "U\u0330"=>"\u1E74", "u\u0330"=>"\u1E75", "U\u032D"=>"\u1E76", "u\u032D"=>"\u1E77", "\u0168\u0301"=>"\u1E78", "\u0169\u0301"=>"\u1E79", "\u016A\u0308"=>"\u1E7A", "\u016B\u0308"=>"\u1E7B", "V\u0303"=>"\u1E7C", "v\u0303"=>"\u1E7D", "V\u0323"=>"\u1E7E", "v\u0323"=>"\u1E7F", "W\u0300"=>"\u1E80", "w\u0300"=>"\u1E81", "W\u0301"=>"\u1E82", "w\u0301"=>"\u1E83", "W\u0308"=>"\u1E84", "w\u0308"=>"\u1E85", "W\u0307"=>"\u1E86", "w\u0307"=>"\u1E87", "W\u0323"=>"\u1E88", "w\u0323"=>"\u1E89", "X\u0307"=>"\u1E8A", "x\u0307"=>"\u1E8B", "X\u0308"=>"\u1E8C", "x\u0308"=>"\u1E8D", "Y\u0307"=>"\u1E8E", "y\u0307"=>"\u1E8F", "Z\u0302"=>"\u1E90", "z\u0302"=>"\u1E91", "Z\u0323"=>"\u1E92", "z\u0323"=>"\u1E93", "Z\u0331"=>"\u1E94", "z\u0331"=>"\u1E95", "h\u0331"=>"\u1E96", "t\u0308"=>"\u1E97", "w\u030A"=>"\u1E98", "y\u030A"=>"\u1E99", "\u017F\u0307"=>"\u1E9B", "A\u0323"=>"\u1EA0", "a\u0323"=>"\u1EA1", "A\u0309"=>"\u1EA2", "a\u0309"=>"\u1EA3", "\u00C2\u0301"=>"\u1EA4", "\u00E2\u0301"=>"\u1EA5", "\u00C2\u0300"=>"\u1EA6", "\u00E2\u0300"=>"\u1EA7", "\u00C2\u0309"=>"\u1EA8", "\u00E2\u0309"=>"\u1EA9", "\u00C2\u0303"=>"\u1EAA", "\u00E2\u0303"=>"\u1EAB", "\u1EA0\u0302"=>"\u1EAC", "\u1EA1\u0302"=>"\u1EAD", "\u0102\u0301"=>"\u1EAE", "\u0103\u0301"=>"\u1EAF", "\u0102\u0300"=>"\u1EB0", "\u0103\u0300"=>"\u1EB1", "\u0102\u0309"=>"\u1EB2", "\u0103\u0309"=>"\u1EB3", "\u0102\u0303"=>"\u1EB4", "\u0103\u0303"=>"\u1EB5", "\u1EA0\u0306"=>"\u1EB6", "\u1EA1\u0306"=>"\u1EB7", "E\u0323"=>"\u1EB8", "e\u0323"=>"\u1EB9", "E\u0309"=>"\u1EBA", "e\u0309"=>"\u1EBB", "E\u0303"=>"\u1EBC", "e\u0303"=>"\u1EBD", "\u00CA\u0301"=>"\u1EBE", "\u00EA\u0301"=>"\u1EBF", "\u00CA\u0300"=>"\u1EC0", "\u00EA\u0300"=>"\u1EC1", "\u00CA\u0309"=>"\u1EC2", "\u00EA\u0309"=>"\u1EC3", "\u00CA\u0303"=>"\u1EC4", "\u00EA\u0303"=>"\u1EC5", "\u1EB8\u0302"=>"\u1EC6", "\u1EB9\u0302"=>"\u1EC7", "I\u0309"=>"\u1EC8", "i\u0309"=>"\u1EC9", "I\u0323"=>"\u1ECA", "i\u0323"=>"\u1ECB", "O\u0323"=>"\u1ECC", "o\u0323"=>"\u1ECD", "O\u0309"=>"\u1ECE", "o\u0309"=>"\u1ECF", "\u00D4\u0301"=>"\u1ED0", "\u00F4\u0301"=>"\u1ED1", "\u00D4\u0300"=>"\u1ED2", "\u00F4\u0300"=>"\u1ED3", "\u00D4\u0309"=>"\u1ED4", "\u00F4\u0309"=>"\u1ED5", "\u00D4\u0303"=>"\u1ED6", "\u00F4\u0303"=>"\u1ED7", "\u1ECC\u0302"=>"\u1ED8", "\u1ECD\u0302"=>"\u1ED9", "\u01A0\u0301"=>"\u1EDA", "\u01A1\u0301"=>"\u1EDB", "\u01A0\u0300"=>"\u1EDC", "\u01A1\u0300"=>"\u1EDD", "\u01A0\u0309"=>"\u1EDE", "\u01A1\u0309"=>"\u1EDF", "\u01A0\u0303"=>"\u1EE0", "\u01A1\u0303"=>"\u1EE1", "\u01A0\u0323"=>"\u1EE2", "\u01A1\u0323"=>"\u1EE3", "U\u0323"=>"\u1EE4", "u\u0323"=>"\u1EE5", "U\u0309"=>"\u1EE6", "u\u0309"=>"\u1EE7", "\u01AF\u0301"=>"\u1EE8", 
"\u01B0\u0301"=>"\u1EE9", "\u01AF\u0300"=>"\u1EEA", "\u01B0\u0300"=>"\u1EEB", "\u01AF\u0309"=>"\u1EEC", "\u01B0\u0309"=>"\u1EED", "\u01AF\u0303"=>"\u1EEE", "\u01B0\u0303"=>"\u1EEF", "\u01AF\u0323"=>"\u1EF0", "\u01B0\u0323"=>"\u1EF1", "Y\u0300"=>"\u1EF2", "y\u0300"=>"\u1EF3", "Y\u0323"=>"\u1EF4", "y\u0323"=>"\u1EF5", "Y\u0309"=>"\u1EF6", "y\u0309"=>"\u1EF7", "Y\u0303"=>"\u1EF8", "y\u0303"=>"\u1EF9", "\u03B1\u0313"=>"\u1F00", "\u03B1\u0314"=>"\u1F01", "\u1F00\u0300"=>"\u1F02", "\u1F01\u0300"=>"\u1F03", "\u1F00\u0301"=>"\u1F04", "\u1F01\u0301"=>"\u1F05", "\u1F00\u0342"=>"\u1F06", "\u1F01\u0342"=>"\u1F07", "\u0391\u0313"=>"\u1F08", "\u0391\u0314"=>"\u1F09", "\u1F08\u0300"=>"\u1F0A", "\u1F09\u0300"=>"\u1F0B", "\u1F08\u0301"=>"\u1F0C", "\u1F09\u0301"=>"\u1F0D", "\u1F08\u0342"=>"\u1F0E", "\u1F09\u0342"=>"\u1F0F", "\u03B5\u0313"=>"\u1F10", "\u03B5\u0314"=>"\u1F11", "\u1F10\u0300"=>"\u1F12", "\u1F11\u0300"=>"\u1F13", "\u1F10\u0301"=>"\u1F14", "\u1F11\u0301"=>"\u1F15", "\u0395\u0313"=>"\u1F18", "\u0395\u0314"=>"\u1F19", "\u1F18\u0300"=>"\u1F1A", "\u1F19\u0300"=>"\u1F1B", "\u1F18\u0301"=>"\u1F1C", "\u1F19\u0301"=>"\u1F1D", "\u03B7\u0313"=>"\u1F20", "\u03B7\u0314"=>"\u1F21", "\u1F20\u0300"=>"\u1F22", "\u1F21\u0300"=>"\u1F23", "\u1F20\u0301"=>"\u1F24", "\u1F21\u0301"=>"\u1F25", "\u1F20\u0342"=>"\u1F26", "\u1F21\u0342"=>"\u1F27", "\u0397\u0313"=>"\u1F28", "\u0397\u0314"=>"\u1F29", "\u1F28\u0300"=>"\u1F2A", "\u1F29\u0300"=>"\u1F2B", "\u1F28\u0301"=>"\u1F2C", "\u1F29\u0301"=>"\u1F2D", "\u1F28\u0342"=>"\u1F2E", "\u1F29\u0342"=>"\u1F2F", "\u03B9\u0313"=>"\u1F30", "\u03B9\u0314"=>"\u1F31", "\u1F30\u0300"=>"\u1F32", "\u1F31\u0300"=>"\u1F33", "\u1F30\u0301"=>"\u1F34", "\u1F31\u0301"=>"\u1F35", "\u1F30\u0342"=>"\u1F36", "\u1F31\u0342"=>"\u1F37", "\u0399\u0313"=>"\u1F38", "\u0399\u0314"=>"\u1F39", "\u1F38\u0300"=>"\u1F3A", "\u1F39\u0300"=>"\u1F3B", "\u1F38\u0301"=>"\u1F3C", "\u1F39\u0301"=>"\u1F3D", "\u1F38\u0342"=>"\u1F3E", "\u1F39\u0342"=>"\u1F3F", "\u03BF\u0313"=>"\u1F40", "\u03BF\u0314"=>"\u1F41", "\u1F40\u0300"=>"\u1F42", "\u1F41\u0300"=>"\u1F43", "\u1F40\u0301"=>"\u1F44", "\u1F41\u0301"=>"\u1F45", "\u039F\u0313"=>"\u1F48", "\u039F\u0314"=>"\u1F49", "\u1F48\u0300"=>"\u1F4A", "\u1F49\u0300"=>"\u1F4B", "\u1F48\u0301"=>"\u1F4C", "\u1F49\u0301"=>"\u1F4D", "\u03C5\u0313"=>"\u1F50", "\u03C5\u0314"=>"\u1F51", "\u1F50\u0300"=>"\u1F52", "\u1F51\u0300"=>"\u1F53", "\u1F50\u0301"=>"\u1F54", "\u1F51\u0301"=>"\u1F55", "\u1F50\u0342"=>"\u1F56", "\u1F51\u0342"=>"\u1F57", "\u03A5\u0314"=>"\u1F59", "\u1F59\u0300"=>"\u1F5B", "\u1F59\u0301"=>"\u1F5D", "\u1F59\u0342"=>"\u1F5F", "\u03C9\u0313"=>"\u1F60", "\u03C9\u0314"=>"\u1F61", "\u1F60\u0300"=>"\u1F62", "\u1F61\u0300"=>"\u1F63", "\u1F60\u0301"=>"\u1F64", "\u1F61\u0301"=>"\u1F65", "\u1F60\u0342"=>"\u1F66", "\u1F61\u0342"=>"\u1F67", "\u03A9\u0313"=>"\u1F68", "\u03A9\u0314"=>"\u1F69", "\u1F68\u0300"=>"\u1F6A", "\u1F69\u0300"=>"\u1F6B", "\u1F68\u0301"=>"\u1F6C", "\u1F69\u0301"=>"\u1F6D", "\u1F68\u0342"=>"\u1F6E", "\u1F69\u0342"=>"\u1F6F", "\u03B1\u0300"=>"\u1F70", "\u03B5\u0300"=>"\u1F72", "\u03B7\u0300"=>"\u1F74", "\u03B9\u0300"=>"\u1F76", "\u03BF\u0300"=>"\u1F78", "\u03C5\u0300"=>"\u1F7A", "\u03C9\u0300"=>"\u1F7C", "\u1F00\u0345"=>"\u1F80", "\u1F01\u0345"=>"\u1F81", "\u1F02\u0345"=>"\u1F82", "\u1F03\u0345"=>"\u1F83", "\u1F04\u0345"=>"\u1F84", "\u1F05\u0345"=>"\u1F85", "\u1F06\u0345"=>"\u1F86", "\u1F07\u0345"=>"\u1F87", "\u1F08\u0345"=>"\u1F88", "\u1F09\u0345"=>"\u1F89", "\u1F0A\u0345"=>"\u1F8A", "\u1F0B\u0345"=>"\u1F8B", "\u1F0C\u0345"=>"\u1F8C", "\u1F0D\u0345"=>"\u1F8D", 
"\u1F0E\u0345"=>"\u1F8E", "\u1F0F\u0345"=>"\u1F8F", "\u1F20\u0345"=>"\u1F90", "\u1F21\u0345"=>"\u1F91", "\u1F22\u0345"=>"\u1F92", "\u1F23\u0345"=>"\u1F93", "\u1F24\u0345"=>"\u1F94", "\u1F25\u0345"=>"\u1F95", "\u1F26\u0345"=>"\u1F96", "\u1F27\u0345"=>"\u1F97", "\u1F28\u0345"=>"\u1F98", "\u1F29\u0345"=>"\u1F99", "\u1F2A\u0345"=>"\u1F9A", "\u1F2B\u0345"=>"\u1F9B", "\u1F2C\u0345"=>"\u1F9C", "\u1F2D\u0345"=>"\u1F9D", "\u1F2E\u0345"=>"\u1F9E", "\u1F2F\u0345"=>"\u1F9F", "\u1F60\u0345"=>"\u1FA0", "\u1F61\u0345"=>"\u1FA1", "\u1F62\u0345"=>"\u1FA2", "\u1F63\u0345"=>"\u1FA3", "\u1F64\u0345"=>"\u1FA4", "\u1F65\u0345"=>"\u1FA5", "\u1F66\u0345"=>"\u1FA6", "\u1F67\u0345"=>"\u1FA7", "\u1F68\u0345"=>"\u1FA8", "\u1F69\u0345"=>"\u1FA9", "\u1F6A\u0345"=>"\u1FAA", "\u1F6B\u0345"=>"\u1FAB", "\u1F6C\u0345"=>"\u1FAC", "\u1F6D\u0345"=>"\u1FAD", "\u1F6E\u0345"=>"\u1FAE", "\u1F6F\u0345"=>"\u1FAF", "\u03B1\u0306"=>"\u1FB0", "\u03B1\u0304"=>"\u1FB1", "\u1F70\u0345"=>"\u1FB2", "\u03B1\u0345"=>"\u1FB3", "\u03AC\u0345"=>"\u1FB4", "\u03B1\u0342"=>"\u1FB6", "\u1FB6\u0345"=>"\u1FB7", "\u0391\u0306"=>"\u1FB8", "\u0391\u0304"=>"\u1FB9", "\u0391\u0300"=>"\u1FBA", "\u0391\u0345"=>"\u1FBC", "\u00A8\u0342"=>"\u1FC1", "\u1F74\u0345"=>"\u1FC2", "\u03B7\u0345"=>"\u1FC3", "\u03AE\u0345"=>"\u1FC4", "\u03B7\u0342"=>"\u1FC6", "\u1FC6\u0345"=>"\u1FC7", "\u0395\u0300"=>"\u1FC8", "\u0397\u0300"=>"\u1FCA", "\u0397\u0345"=>"\u1FCC", "\u1FBF\u0300"=>"\u1FCD", "\u1FBF\u0301"=>"\u1FCE", "\u1FBF\u0342"=>"\u1FCF", "\u03B9\u0306"=>"\u1FD0", "\u03B9\u0304"=>"\u1FD1", "\u03CA\u0300"=>"\u1FD2", "\u03B9\u0342"=>"\u1FD6", "\u03CA\u0342"=>"\u1FD7", "\u0399\u0306"=>"\u1FD8", "\u0399\u0304"=>"\u1FD9", "\u0399\u0300"=>"\u1FDA", "\u1FFE\u0300"=>"\u1FDD", "\u1FFE\u0301"=>"\u1FDE", "\u1FFE\u0342"=>"\u1FDF", "\u03C5\u0306"=>"\u1FE0", "\u03C5\u0304"=>"\u1FE1", "\u03CB\u0300"=>"\u1FE2", "\u03C1\u0313"=>"\u1FE4", "\u03C1\u0314"=>"\u1FE5", "\u03C5\u0342"=>"\u1FE6", "\u03CB\u0342"=>"\u1FE7", "\u03A5\u0306"=>"\u1FE8", "\u03A5\u0304"=>"\u1FE9", "\u03A5\u0300"=>"\u1FEA", "\u03A1\u0314"=>"\u1FEC", "\u00A8\u0300"=>"\u1FED", "\u1F7C\u0345"=>"\u1FF2", "\u03C9\u0345"=>"\u1FF3", "\u03CE\u0345"=>"\u1FF4", "\u03C9\u0342"=>"\u1FF6", "\u1FF6\u0345"=>"\u1FF7", "\u039F\u0300"=>"\u1FF8", "\u03A9\u0300"=>"\u1FFA", "\u03A9\u0345"=>"\u1FFC", "\u2190\u0338"=>"\u219A", "\u2192\u0338"=>"\u219B", "\u2194\u0338"=>"\u21AE", "\u21D0\u0338"=>"\u21CD", "\u21D4\u0338"=>"\u21CE", "\u21D2\u0338"=>"\u21CF", "\u2203\u0338"=>"\u2204", "\u2208\u0338"=>"\u2209", "\u220B\u0338"=>"\u220C", "\u2223\u0338"=>"\u2224", "\u2225\u0338"=>"\u2226", "\u223C\u0338"=>"\u2241", "\u2243\u0338"=>"\u2244", "\u2245\u0338"=>"\u2247", "\u2248\u0338"=>"\u2249", "=\u0338"=>"\u2260", "\u2261\u0338"=>"\u2262", "\u224D\u0338"=>"\u226D", "<\u0338"=>"\u226E", ">\u0338"=>"\u226F", "\u2264\u0338"=>"\u2270", "\u2265\u0338"=>"\u2271", "\u2272\u0338"=>"\u2274", "\u2273\u0338"=>"\u2275", "\u2276\u0338"=>"\u2278", "\u2277\u0338"=>"\u2279", "\u227A\u0338"=>"\u2280", "\u227B\u0338"=>"\u2281", "\u2282\u0338"=>"\u2284", "\u2283\u0338"=>"\u2285", "\u2286\u0338"=>"\u2288", "\u2287\u0338"=>"\u2289", "\u22A2\u0338"=>"\u22AC", "\u22A8\u0338"=>"\u22AD", "\u22A9\u0338"=>"\u22AE", "\u22AB\u0338"=>"\u22AF", "\u227C\u0338"=>"\u22E0", "\u227D\u0338"=>"\u22E1", "\u2291\u0338"=>"\u22E2", "\u2292\u0338"=>"\u22E3", "\u22B2\u0338"=>"\u22EA", "\u22B3\u0338"=>"\u22EB", "\u22B4\u0338"=>"\u22EC", "\u22B5\u0338"=>"\u22ED", "\u304B\u3099"=>"\u304C", "\u304D\u3099"=>"\u304E", "\u304F\u3099"=>"\u3050", "\u3051\u3099"=>"\u3052", "\u3053\u3099"=>"\u3054", 
"\u3055\u3099"=>"\u3056", "\u3057\u3099"=>"\u3058", "\u3059\u3099"=>"\u305A", "\u305B\u3099"=>"\u305C", "\u305D\u3099"=>"\u305E", "\u305F\u3099"=>"\u3060", "\u3061\u3099"=>"\u3062", "\u3064\u3099"=>"\u3065", "\u3066\u3099"=>"\u3067", "\u3068\u3099"=>"\u3069", "\u306F\u3099"=>"\u3070", "\u306F\u309A"=>"\u3071", "\u3072\u3099"=>"\u3073", "\u3072\u309A"=>"\u3074", "\u3075\u3099"=>"\u3076", "\u3075\u309A"=>"\u3077", "\u3078\u3099"=>"\u3079", "\u3078\u309A"=>"\u307A", "\u307B\u3099"=>"\u307C", "\u307B\u309A"=>"\u307D", "\u3046\u3099"=>"\u3094", "\u309D\u3099"=>"\u309E", "\u30AB\u3099"=>"\u30AC", "\u30AD\u3099"=>"\u30AE", "\u30AF\u3099"=>"\u30B0", "\u30B1\u3099"=>"\u30B2", "\u30B3\u3099"=>"\u30B4", "\u30B5\u3099"=>"\u30B6", "\u30B7\u3099"=>"\u30B8", "\u30B9\u3099"=>"\u30BA", "\u30BB\u3099"=>"\u30BC", "\u30BD\u3099"=>"\u30BE", "\u30BF\u3099"=>"\u30C0", "\u30C1\u3099"=>"\u30C2", "\u30C4\u3099"=>"\u30C5", "\u30C6\u3099"=>"\u30C7", "\u30C8\u3099"=>"\u30C9", "\u30CF\u3099"=>"\u30D0", "\u30CF\u309A"=>"\u30D1", "\u30D2\u3099"=>"\u30D3", "\u30D2\u309A"=>"\u30D4", "\u30D5\u3099"=>"\u30D6", "\u30D5\u309A"=>"\u30D7", "\u30D8\u3099"=>"\u30D9", "\u30D8\u309A"=>"\u30DA", "\u30DB\u3099"=>"\u30DC", "\u30DB\u309A"=>"\u30DD", "\u30A6\u3099"=>"\u30F4", "\u30EF\u3099"=>"\u30F7", "\u30F0\u3099"=>"\u30F8", "\u30F1\u3099"=>"\u30F9", "\u30F2\u3099"=>"\u30FA", "\u30FD\u3099"=>"\u30FE", "\u{11099}\u{110BA}"=>"\u{1109A}", "\u{1109B}\u{110BA}"=>"\u{1109C}", "\u{110A5}\u{110BA}"=>"\u{110AB}", "\u{11131}\u{11127}"=>"\u{1112E}", "\u{11132}\u{11127}"=>"\u{1112F}", "\u{11347}\u{1133E}"=>"\u{1134B}", "\u{11347}\u{11357}"=>"\u{1134C}", "\u{114B9}\u{114BA}"=>"\u{114BB}", "\u{114B9}\u{114B0}"=>"\u{114BC}", "\u{114B9}\u{114BD}"=>"\u{114BE}", "\u{115B8}\u{115AF}"=>"\u{115BA}", "\u{115B9}\u{115AF}"=>"\u{115BB}", }.freeze end mongo-ruby-driver-2.21.3/lib/mongo/auth/user.rb000066400000000000000000000154741505113246500213340ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'mongo/auth/user/view' module Mongo module Auth # Represents a user in MongoDB. # # @since 2.0.0 class User include Loggable # @return [ String ] The authorization source, either a database or # external name. attr_reader :auth_source # @return [ String ] The database the user is created in. attr_reader :database # @return [ Hash ] The authentication mechanism properties. attr_reader :auth_mech_properties # @return [ Symbol ] The authorization mechanism. attr_reader :mechanism # @return [ String ] The username. attr_reader :name # @return [ String ] The cleartext password. attr_reader :password # @return [ Array ] roles The user roles. attr_reader :roles # Loggable requires an options attribute. We don't have any options # hence provide this as a stub. # # @api private def options {} end # Determine if this user is equal to another. # # @example Check user equality. # user == other # # @param [ Object ] other The object to compare against. 
# # @return [ true, false ] If the objects are equal. # # @since 2.0.0 def ==(other) return false unless other.is_a?(User) name == other.name && database == other.database && password == other.password end # Get an authentication key for the user based on a nonce from the # server. # # @example Get the authentication key. # user.auth_key(nonce) # # @param [ String ] nonce The response from the server. # # @return [ String ] The authentication key. # # @since 2.0.0 def auth_key(nonce) Digest::MD5.hexdigest("#{nonce}#{name}#{hashed_password}") end # Get the UTF-8 encoded name with escaped special characters for use with # SCRAM authentication. # # @example Get the encoded name. # user.encoded_name # # @return [ String ] The encoded user name. # # @since 2.0.0 def encoded_name name.encode(BSON::UTF8).gsub('=','=3D').gsub(',','=2C') end # Get the hash key for the user. # # @example Get the hash key. # user.hash # # @return [ Integer ] The user hash key. # # @since 2.0.0 def hash [ name, database, password ].hash end # Get the user's hashed password for SCRAM-SHA-1. # # @example Get the user's hashed password. # user.hashed_password # # @return [ String ] The hashed password. # # @since 2.0.0 def hashed_password unless password raise Error::MissingPassword end @hashed_password ||= Digest::MD5.hexdigest("#{name}:mongo:#{password}").encode(BSON::UTF8) end # Get the user's stringprepped password for SCRAM-SHA-256. # # @api private def sasl_prepped_password unless password raise Error::MissingPassword end @sasl_prepped_password ||= StringPrep.prepare(password, StringPrep::Profiles::SASL::MAPPINGS, StringPrep::Profiles::SASL::PROHIBITED, normalize: true, bidi: true).encode(BSON::UTF8) end # Create the new user. # # @example Create a new user. # Mongo::Auth::User.new(options) # # @param [ Hash ] options The options to create the user from. # # @option options [ String ] :auth_source The authorization database or # external source. # @option options [ String ] :database The database the user is # authorized for. # @option options [ String ] :user The user name. # @option options [ String ] :password The user's password. # @option options [ String ] :pwd Legacy option for the user's password. # If :password and :pwd are both specified, :password takes precedence. # @option options [ Symbol ] :auth_mech The authorization mechanism. # @option options [ Array<String>, Array<Hash> ] :roles The user roles. # # @since 2.0.0 def initialize(options) @database = options[:database] || Database::ADMIN @auth_source = options[:auth_source] || self.class.default_auth_source(options) @name = options[:user] @password = options[:password] || options[:pwd] @mechanism = options[:auth_mech] if @mechanism # Since the driver must select an authentication class for # the specified mechanism, mechanisms that the driver does not # know about, and cannot translate to an authentication class, # need to be rejected. unless @mechanism.is_a?(Symbol) # Although we documented auth_mech option as being a symbol, we # have not enforced this; warn, reject in lint mode if Lint.enabled? raise Error::LintError, "Auth mechanism #{@mechanism.inspect} must be specified as a symbol" else log_warn("Auth mechanism #{@mechanism.inspect} should be specified as a symbol") @mechanism = @mechanism.to_sym end end unless Auth::SOURCES.key?(@mechanism) raise InvalidMechanism.new(options[:auth_mech]) end end @auth_mech_properties = options[:auth_mech_properties] || {} @roles = options[:roles] || [] end # Get the specification for the user, used in creation.
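# The spec is the portion of the createUser/updateUser command payload # derived from this object: it always carries the roles, and includes the # password under the :pwd key when one is set.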
# # @example Get the user's specification. # user.spec # # @return [ Hash ] The user spec. # # @since 2.0.0 def spec {roles: roles}.tap do |spec| if password spec[:pwd] = password end end end private # Generate default auth source based on the URI and options # # @api private def self.default_auth_source(options) case options[:auth_mech] when :aws, :gssapi, :mongodb_x509 '$external' when :plain options[:database] || '$external' else options[:database] || Database::ADMIN end end end end end mongo-ruby-driver-2.21.3/lib/mongo/auth/user/000077500000000000000000000000001505113246500207745ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/lib/mongo/auth/user/view.rb000066400000000000000000000127021505113246500222750ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Auth class User # Defines behavior for user-related operations on databases. # # @since 2.0.0 class View extend Forwardable # @return [ Database ] database The view's database. attr_reader :database def_delegators :database, :cluster, :read_preference, :client def_delegators :cluster, :next_primary # Create a new user in the database. # # @example Create a new read/write user. # view.create('user', password: 'password', roles: [ 'readWrite' ]) # # @param [ Auth::User, String ] user_or_name The user object or user name. # @param [ Hash ] options The user options. # # @option options [ Session ] :session The session to use for the operation. # @option options [ Hash ] :write_concern The write concern options. # # @return [ Result ] The command response. # # @since 2.0.0 def create(user_or_name, options = {}) user = generate(user_or_name, options) execute_operation(options) do |session| Operation::CreateUser.new( user: user, db_name: database.name, session: session, write_concern: options[:write_concern] && WriteConcern.get(options[:write_concern]), ) end end # Initialize the new user view. # # @example Initialize the user view. # User::View.new(database) # # @param [ Mongo::Database ] database The database the view is for. # # @since 2.0.0 def initialize(database) @database = database end # Remove a user from the database. # # @example Remove the user from the database. # view.remove('user') # # @param [ String ] name The user name. # @param [ Hash ] options The options for the remove operation. # # @option options [ Session ] :session The session to use for the operation. # @option options [ Hash ] :write_concern The write concern options. # # @return [ Result ] The command response. # # @since 2.0.0 def remove(name, options = {}) execute_operation(options) do |session| Operation::RemoveUser.new( user_name: name, db_name: database.name, session: session, write_concern: options[:write_concern] && WriteConcern.get(options[:write_concern]), ) end end # Update a user in the database. # # @example Update a user.
# view.update('name', password: 'testpwd') # # @param [ Auth::User, String ] user_or_name The user object or user name. # @param [ Hash ] options The user options. # # @option options [ Session ] :session The session to use for the operation. # @option options [ Hash ] :write_concern The write concern options. # # @return [ Result ] The response. # # @since 2.0.0 def update(user_or_name, options = {}) user = generate(user_or_name, options) execute_operation(options) do |session| Operation::UpdateUser.new( user: user, db_name: database.name, session: session, write_concern: options[:write_concern] && WriteConcern.get(options[:write_concern]), ) end end # Get info for a particular user in the database. # # @example Get a particular user's info. # view.info('emily') # # @param [ String ] name The user name. # @param [ Hash ] options The options for the info operation. # # @option options [ Session ] :session The session to use for the operation. # # @return [ Array ] An array wrapping a document containing information on a particular user. # # @since 2.1.0 def info(name, options = {}) user_query(name, options).documents end private def user_query(name, options = {}) execute_operation(options) do |session| Operation::UsersInfo.new( user_name: name, db_name: database.name, session: session ) end end def generate(user, options) user.is_a?(String) ? Auth::User.new({ user: user }.merge(options)) : user end def execute_operation(options) client.send(:with_session, options) do |session| op = yield session op.execute(next_primary(nil, session), context: Operation::Context.new(client: client, session: session)) end end end end end end mongo-ruby-driver-2.21.3/lib/mongo/auth/x509.rb000066400000000000000000000034651505113246500210600ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Auth # Defines behavior for X.509 authentication. # # @since 2.0.0 # @api private class X509 < Base # The authentication mechanism string. # # @since 2.0.0 MECHANISM = 'MONGODB-X509'.freeze # Initializes the X.509 authenticator. # # @param [ Auth::User ] user The user to authenticate. # @param [ Mongo::Connection ] connection The connection to authenticate over. def initialize(user, connection, **opts) # The only valid database for X.509 authentication is $external. if user.auth_source != '$external' user_name_msg = if user.name " #{user.name}" else '' end raise Auth::InvalidConfiguration, "User#{user_name_msg} specifies auth source '#{user.auth_source}', but the only valid auth source for X.509 is '$external'" end super end # Log the user in on the current connection. # # @return [ BSON::Document ] The document of the authentication response. 
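# # @example Authenticate on a connection (illustrative sketch; assumes # +user+ and +connection+ have been obtained elsewhere): # authenticator = Mongo::Auth::X509.new(user, connection) # reply = authenticator.login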
def login converse_1_step(connection, conversation) end end end end require 'mongo/auth/x509/conversation' mongo-ruby-driver-2.21.3/lib/mongo/auth/x509/000077500000000000000000000000001505113246500205235ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/lib/mongo/auth/x509/conversation.rb000066400000000000000000000041351505113246500235650ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Auth class X509 # Defines behavior around a single X.509 conversation between the # client and server. # # @since 2.0.0 # @api private class Conversation < ConversationBase # The login message. # # @since 2.0.0 LOGIN = { authenticate: 1, mechanism: X509::MECHANISM }.freeze # Start the X.509 conversation. This returns the first message that # needs to be sent to the server. # # @param [ Server::Connection ] connection The connection being # authenticated. # # @return [ Protocol::Message ] The first X.509 conversation message. # # @since 2.0.0 def start(connection) validate_external_auth_source selector = client_first_document build_message(connection, '$external', selector) end # Returns the hash to provide to the server in the handshake # as value of the speculativeAuthenticate key. # # If the auth mechanism does not support speculative authentication, # this method returns nil. # # @return [ Hash | nil ] Speculative authentication document. def speculative_auth_document client_first_document end private def client_first_document LOGIN.dup.tap do |payload| payload[:user] = user.name if user.name end end end end end end mongo-ruby-driver-2.21.3/lib/mongo/background_thread.rb000066400000000000000000000136611505113246500230570ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2019-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo # The run!, running? and stop! methods used to be part of the public API # in some of the classes which now include this module. Therefore these # methods must be considered part of the driver's public API for backwards # compatibility reasons. However using these methods outside of the driver # is deprecated. # # @note Do not start or stop background threads in finalizers. See # https://jira.mongodb.org/browse/RUBY-2453 and # https://bugs.ruby-lang.org/issues/16288. 
When the interpreter exits, # background threads are stopped first and finalizers are invoked next, # and MRI's internal data structures are basically corrupt at this point # if threads are being referenced. Prior to interpreter shutdown this # means threads cannot be stopped by objects going out of scope, but # most likely the threads hold references to said objects anyway if # work is being performed thus the objects wouldn't go out of scope in # the first place. # # @api private module BackgroundThread include Loggable # Start the background thread. # # If the thread is already running, this method does nothing. # # @api public for backwards compatibility only def run! if @stop_requested && @thread wait_for_stop if @thread.alive? log_warn("Starting a new background thread in #{self}, but the previous background thread is still running") @thread = nil end @stop_requested = false end if running? @thread else start! end end # @api public for backwards compatibility only def running? if @thread @thread.alive? else false end end # Stop the background thread and wait for it to terminate for a reasonable # amount of time. # # @return [ true | false ] Whether the thread was terminated. # # @api public for backwards compatibility only def stop! # If the thread was not started, there is nothing to stop. # # Classes including this module may want to perform additional # cleanup, which they can do by overriding this method. return true unless @thread # Background threads generally perform operations in a loop. # This flag is meant to be checked on each iteration of the # working loops and the thread should stop working when this flag # is set. @stop_requested = true # Besides setting the flag, a particular class may have additional # ways of signaling the background thread to either stop working or # wake up to check the stop flag, for example, setting a semaphore. # This can be accomplished by providing the pre_stop method. pre_stop # Now we have requested the graceful termination, and we could wait # for the thread to exit of its own accord. A future version of the # driver may allow a certain amount of time for the thread to quit. # For now, we additionally use the Ruby machinery to request the thread # be terminated, and do so immediately. # # Note that this may cause the background thread to terminate in # the middle of an operation. @thread.kill wait_for_stop end private def start! @thread = Thread.new do catch(:done) do until @stop_requested do_work end end end end # Waits for the thread to die, with a timeout. # # Returns true if the thread died, false otherwise. def wait_for_stop # Wait for the thread to die. This is important in order to reliably # clean up resources like connections knowing that no background # thread will reconnect because it is still working. # # However, we do not want to wait indefinitely because in theory # a background thread could be performing, say, network I/O and if # the network is no longer available that could take a long time. start_time = Utils.monotonic_time ([0.1, 0.15] + [0.2] * 5 + [0.3] * 20).each do |interval| begin Timeout.timeout(interval) do @thread.join end break rescue ::Timeout::Error end end # Some driver objects can be reconnected, for backwards compatibility # reasons. Clear the thread instance variable to support this cleanly. if @thread.alive?
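# The timed join attempts above have been exhausted (several seconds in # total) and the thread is still alive; log enough detail to diagnose # what the thread is stuck on.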
log_warn("Failed to stop the background thread in #{self} in #{(Utils.monotonic_time - start_time).to_i} seconds: #{@thread.inspect} (thread status: #{@thread.status})") # On JRuby the thread may be stuck in aborting state # seemingly indefinitely. If the thread is aborting, consider it dead # for our purposes (we will create a new thread if needed, and # the background thread monitor will not detect the aborting thread # as being alive). if @thread.status == 'aborting' @thread = nil @stop_requested = false end false else @thread = nil @stop_requested = false true end end # Override this method to do the work in the background thread. def do_work end # Override this method to perform additional signaling for the background # thread to stop. def pre_stop end end end mongo-ruby-driver-2.21.3/lib/mongo/bson.rb000066400000000000000000000017301505113246500203440ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2015-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # Patch for allowing deprecated symbols to be used. # # @since 2.2.1 class Symbol # Overrides the default BSON type to use the symbol type instead of a # string type. # # @example Get the bson type. # :test.bson_type # # @return [ String ] The BSON symbol type byte (0x0E, i.e. character 14). # # @since 2.2.1 def bson_type BSON::Symbol::BSON_TYPE end end mongo-ruby-driver-2.21.3/lib/mongo/bulk_write.rb000066400000000000000000000341511505113246500215550ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'mongo/bulk_write/result' require 'mongo/bulk_write/transformable' require 'mongo/bulk_write/validatable' require 'mongo/bulk_write/combineable' require 'mongo/bulk_write/ordered_combiner' require 'mongo/bulk_write/unordered_combiner' require 'mongo/bulk_write/result_combiner' module Mongo class BulkWrite extend Forwardable include Operation::ResponseHandling # @return [ Mongo::Collection ] collection The collection. attr_reader :collection # @return [ Array ] requests The requests. attr_reader :requests # @return [ Hash, BSON::Document ] options The options. attr_reader :options # Delegate various methods to the collection. def_delegators :@collection, :database, :cluster, :write_with_retry, :nro_write_with_retry, :next_primary def_delegators :database, :client # Execute the bulk write operation. # # @example Execute the bulk write. # bulk_write.execute # # @return [ Mongo::BulkWrite::Result ] The result.
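# # @raise [ Error::BulkWriteError ] If the combined result contains write # errors or write concern errors.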
# # @since 2.1.0 def execute operation_id = Monitoring.next_operation_id result_combiner = ResultCombiner.new operations = op_combiner.combine validate_requests! deadline = calculate_deadline client.with_session(@options) do |session| operations.each do |operation| context = Operation::Context.new( client: client, session: session, operation_timeouts: { operation_timeout_ms: op_timeout_ms(deadline) } ) if single_statement?(operation) write_concern = write_concern(session) write_with_retry(write_concern, context: context) do |connection, txn_num, context| execute_operation( operation.keys.first, operation.values.flatten, connection, context, operation_id, result_combiner, session, txn_num) end else nro_write_with_retry(write_concern, context: context) do |connection, txn_num, context| execute_operation( operation.keys.first, operation.values.flatten, connection, context, operation_id, result_combiner, session) end end end end result_combiner.result end # Create the new bulk write operation. # # @api private # # @example Create an ordered bulk write. # Mongo::BulkWrite.new(collection, [{ insert_one: { _id: 1 }}]) # # @example Create an unordered bulk write. # Mongo::BulkWrite.new(collection, [{ insert_one: { _id: 1 }}], ordered: false) # # @example Create an ordered mixed bulk write. # Mongo::BulkWrite.new( # collection, # [ # { insert_one: { _id: 1 }}, # { update_one: { filter: { _id: 0 }, update: { '$set' => { name: 'test' }}}}, # { delete_one: { filter: { _id: 2 }}} # ] # ) # # @param [ Mongo::Collection ] collection The collection. # @param [ Enumerable ] requests The requests, # cannot be empty. # @param [ Hash, BSON::Document ] options The options. # # @since 2.1.0 def initialize(collection, requests, options = {}) @collection = collection @requests = requests @options = options || {} if @options[:timeout_ms] && @options[:timeout_ms] < 0 raise ArgumentError, "the timeout_ms option must be a non-negative integer" end end # Is the bulk write ordered? # # @api private # # @example Is the bulk write ordered? # bulk_write.ordered? # # @return [ true, false ] If the bulk write is ordered. # # @since 2.1.0 def ordered? @ordered ||= options.fetch(:ordered, true) end # Get the write concern for the bulk write. # # @api private # # @example Get the write concern. # bulk_write.write_concern # # @return [ WriteConcern ] The write concern. # # @since 2.1.0 def write_concern(session = nil) @write_concern ||= options[:write_concern] ? WriteConcern.get(options[:write_concern]) : collection.write_concern_with_session(session) end private SINGLE_STATEMENT_OPS = [ :delete_one, :update_one, :insert_one ].freeze # @return [ Float | nil ] Deadline for the batch of operations, if set. def calculate_deadline timeout_ms = @options[:timeout_ms] || collection.timeout_ms return nil if timeout_ms.nil? if timeout_ms == 0 0 else Utils.monotonic_time + (timeout_ms / 1_000.0) end end # @param [ Float | nil ] deadline Deadline for the batch of operations. # # @return [ Integer | nil ] Timeout in milliseconds for the next operation. def op_timeout_ms(deadline) return nil if deadline.nil?
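# A deadline of 0 is the sentinel produced by calculate_deadline when # timeout_ms is 0, meaning "no timeout"; pass it through unchanged. # Otherwise convert the time remaining until the deadline into # milliseconds for the next operation.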
if deadline == 0 0 else ((deadline - Utils.monotonic_time) * 1_000).to_i end end def single_statement?(operation) SINGLE_STATEMENT_OPS.include?(operation.keys.first) end def base_spec(operation_id, session) { :db_name => database.name, :coll_name => collection.name, :write_concern => write_concern(session), :ordered => ordered?, :operation_id => operation_id, :bypass_document_validation => !!options[:bypass_document_validation], :max_time_ms => options[:max_time_ms], :options => options, :id_generator => client.options[:id_generator], :session => session, :comment => options[:comment], :let => options[:let], } end def execute_operation(name, values, connection, context, operation_id, result_combiner, session, txn_num = nil) validate_collation!(connection) validate_array_filters!(connection) validate_hint!(connection) unpin_maybe(session, connection) do if values.size > connection.description.max_write_batch_size split_execute(name, values, connection, context, operation_id, result_combiner, session, txn_num) else result = send(name, values, connection, context, operation_id, session, txn_num) add_server_diagnostics(connection) do add_error_labels(connection, context) do result_combiner.combine!(result, values.size) end end end end # With OP_MSG (3.6+ servers), the size of each section in the message # is independently capped at 16m and each bulk operation becomes # its own section. The size of the entire bulk write is limited to 48m. # With OP_QUERY (pre-3.6 servers), the entire bulk write is sent as a # single document and is thus subject to the 16m document size limit. # This means the splits differ between pre-3.6 and 3.6+ servers, with # 3.6+ servers being able to split less. rescue Error::MaxBSONSize, Error::MaxMessageSize => e raise e if values.size <= 1 unpin_maybe(session, connection) do split_execute(name, values, connection, context, operation_id, result_combiner, session, txn_num) end end def op_combiner @op_combiner ||= ordered? ? OrderedCombiner.new(requests) : UnorderedCombiner.new(requests) end def split_execute(name, values, connection, context, operation_id, result_combiner, session, txn_num) execute_operation(name, values.shift(values.size / 2), connection, context, operation_id, result_combiner, session, txn_num) txn_num = session.next_txn_num if txn_num && !session.in_transaction? 
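# Outside of a transaction, each retryable batch must carry its own # transaction number, hence the fresh number obtained above for the # second half of the split; inside a transaction the session's existing # transaction number continues to apply.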
execute_operation(name, values, connection, context, operation_id, result_combiner, session, txn_num) end def delete_one(documents, connection, context, operation_id, session, txn_num) QueryCache.clear_namespace(collection.namespace) spec = base_spec(operation_id, session).merge(:deletes => documents, :txn_num => txn_num) Operation::Delete.new(spec).bulk_execute(connection, context: context) end def delete_many(documents, connection, context, operation_id, session, txn_num) QueryCache.clear_namespace(collection.namespace) spec = base_spec(operation_id, session).merge(:deletes => documents) Operation::Delete.new(spec).bulk_execute(connection, context: context) end def insert_one(documents, connection, context, operation_id, session, txn_num) QueryCache.clear_namespace(collection.namespace) spec = base_spec(operation_id, session).merge(:documents => documents, :txn_num => txn_num) Operation::Insert.new(spec).bulk_execute(connection, context: context) end def update_one(documents, connection, context, operation_id, session, txn_num) QueryCache.clear_namespace(collection.namespace) spec = base_spec(operation_id, session).merge(:updates => documents, :txn_num => txn_num) Operation::Update.new(spec).bulk_execute(connection, context: context) end alias :replace_one :update_one def update_many(documents, connection, context, operation_id, session, txn_num) QueryCache.clear_namespace(collection.namespace) spec = base_spec(operation_id, session).merge(:updates => documents) Operation::Update.new(spec).bulk_execute(connection, context: context) end private def validate_collation!(connection) if op_combiner.has_collation? && !connection.features.collation_enabled? raise Error::UnsupportedCollation.new end end def validate_array_filters!(connection) if op_combiner.has_array_filters? && !connection.features.array_filters_enabled? raise Error::UnsupportedArrayFilters.new end end def validate_hint!(connection) if op_combiner.has_hint? if !can_hint?(connection) && write_concern && !write_concern.acknowledged? raise Error::UnsupportedOption.hint_error(unacknowledged_write: true) elsif !connection.features.update_delete_option_validation_enabled? raise Error::UnsupportedOption.hint_error end end end # Loop through the requests and check whether each operation is allowed # to send a hint on the given server version. # # For the following operations, the client can send a hint for servers >= 4.2 # and for the rest, the client can only send it for 4.4+: # - updateOne # - updateMany # - replaceOne # # @param [ Connection ] connection The connection object. # # @return [ true | false ] Whether the request is able to send hints for # the current server version. def can_hint?(connection) gte_4_2 = connection.server.description.server_version_gte?('4.2') gte_4_4 = connection.server.description.server_version_gte?('4.4') op_combiner.requests.all? do |req| op = req.keys.first if req[op].keys.include?(:hint) if [:update_one, :update_many, :replace_one].include?(op) gte_4_2 else gte_4_4 end else true end end end # Perform the request document validation required by driver specifications. # This method validates the first key of each update request document to be # an operator (i.e. start with $) and the first key of each replacement # document to not be an operator (i.e. not start with $).
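# For example, { '$set' => { 'x' => 1 } } is a valid update document # because its first key is an operator, while { 'x' => 1 } is a valid # replacement document because its first key is not.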
The request document # may be invalid without this method flagging it as such (for example an # update or replacement document containing some keys which are operators # and some which are not), in which case the driver expects the server to # fail the operation with an error. # # Raise an ArgumentError if requests is empty. # # @raise [ Error::InvalidUpdateDocument, Error::InvalidReplacementDocument, # ArgumentError ] # if the document is invalid. def validate_requests! requests_empty = true @requests.each do |req| requests_empty = false if op = req.keys.first if [:update_one, :update_many].include?(op) if doc = maybe_first(req.dig(op, :update)) if key = doc.keys&.first unless key.to_s.start_with?("$") if Mongo.validate_update_replace raise Error::InvalidUpdateDocument.new(key: key) else Error::InvalidUpdateDocument.warn(Logger.logger, key) end end end end elsif op == :replace_one if key = req.dig(op, :replacement)&.keys&.first if key.to_s.start_with?("$") if Mongo.validate_update_replace raise Error::InvalidReplacementDocument.new(key: key) else Error::InvalidReplacementDocument.warn(Logger.logger, key) end end end end end end.tap do raise ArgumentError, "Bulk write requests cannot be empty" if requests_empty end end # If the given object is an array return the first element, otherwise # return the given object. # # @param [ Object ] obj The given object. # # @return [ Object ] The first element of the array or the given object. def maybe_first(obj) obj.is_a?(Array) ? obj.first : obj end end end mongo-ruby-driver-2.21.3/lib/mongo/bulk_write/000077500000000000000000000000001505113246500212245ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/lib/mongo/bulk_write/combineable.rb000066400000000000000000000037521505113246500240200ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2015-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class BulkWrite # Defines behavior around combiners # # @api private # # @since 2.1.0 module Combineable # @return [ Array ] requests The provided requests. attr_reader :requests # Create the ordered combiner. # # @api private # # @example Create the ordered combiner. # OrderedCombiner.new([{ insert_one: { _id: 0 }}]) # # @param [ Array ] requests The bulk requests. # # @since 2.1.0 def initialize(requests) @requests = requests @has_collation = false @has_array_filters = false @has_hint = false end # @return [ Boolean ] Whether one or more operation specifies the collation # option. def has_collation? @has_collation end # @return [ Boolean ] Whether one or more operation specifies the # array_filters option. def has_array_filters? @has_array_filters end # @return [ Boolean ] Whether one or more operation specifies the # hint option. def has_hint? 
@has_hint end private def combine_requests(ops) requests.reduce(ops) do |operations, request| add(operations, request.keys.first, request.values.first) end end end end end mongo-ruby-driver-2.21.3/lib/mongo/bulk_write/ordered_combiner.rb000066400000000000000000000027521505113246500250610ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2015-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class BulkWrite # Combines groups of bulk write operations in order. # # @api private # # @since 2.1.0 class OrderedCombiner include Transformable include Validatable include Combineable # Combine the requests in order. # # @api private # # @example Combine the requests. # combiner.combine # # @return [ Array ] The combined requests. # # @since 2.1.0 def combine combine_requests([]) end private def add(operations, name, document) operations.push({ name => []}) if next_group?(name, operations) operations[-1][name].push(transform(name, document)) operations end def next_group?(name, operations) !operations[-1] || !operations[-1].key?(name) end end end end mongo-ruby-driver-2.21.3/lib/mongo/bulk_write/result.rb000066400000000000000000000120101505113246500230610ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2015-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class BulkWrite # Wraps a series of bulk write operations in a result object. # # @since 2.0.6 class Result # @return [ Boolean ] Is the result acknowledged? def acknowledged? @acknowledged end # Constant for number removed. # # @since 2.1.0 REMOVED_COUNT = 'n_removed'.freeze # Constant for number inserted. # # @since 2.1.0 INSERTED_COUNT = 'n_inserted'.freeze # Constant for inserted ids. # # @since 2.1.0 INSERTED_IDS = 'inserted_ids'.freeze # Constant for number matched. # # @since 2.1.0 MATCHED_COUNT = 'n_matched'.freeze # Constant for number modified. # # @since 2.1.0 MODIFIED_COUNT = 'n_modified'.freeze # Constant for upserted. # # @since 2.1.0 UPSERTED = 'upserted'.freeze # Constant for number upserted. # # @since 2.1.0 UPSERTED_COUNT = 'n_upserted'.freeze # Constant for upserted ids. # # @since 2.1.0 UPSERTED_IDS = 'upserted_ids'.freeze # The fields contained in the result document returned from executing the # operations. # # @since 2.1.0. FIELDS = [ INSERTED_COUNT, REMOVED_COUNT, MODIFIED_COUNT, UPSERTED_COUNT, MATCHED_COUNT, Operation::Result::N ].freeze # Returns the number of documents deleted. 
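# This is the total accumulated across all delete_one and delete_many # operations in the bulk.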
# # @example Get the number of deleted documents. # result.deleted_count # # @return [ Integer ] The number deleted. # # @since 2.1.0 def deleted_count @results[REMOVED_COUNT] end # Create the new result object from the results document. # # @example Create the new result. # Result.new({ 'n_inserted' => 10 }) # # @param [ BSON::Document, Hash ] results The results document. # @param [ Boolean ] acknowledged Is the result acknowledged? # # @since 2.1.0 # # @api private def initialize(results, acknowledged) @results = results @acknowledged = acknowledged end # Returns the number of documents inserted. # # @example Get the number of inserted documents. # result.inserted_count # # @return [ Integer ] The number inserted. # # @since 2.1.0 def inserted_count @results[INSERTED_COUNT] end # Get the inserted document ids, if the operation has inserts. # # @example Get the inserted ids. # result.inserted_ids # # @return [ Array ] The inserted ids. # # @since 2.1.0 def inserted_ids @results[INSERTED_IDS] end # Returns the number of documents matched. # # @example Get the number of matched documents. # result.matched_count # # @return [ Integer ] The number matched. # # @since 2.1.0 def matched_count @results[MATCHED_COUNT] end # Returns the number of documents modified. # # @example Get the number of modified documents. # result.modified_count # # @return [ Integer ] The number modified. # # @since 2.1.0 def modified_count @results[MODIFIED_COUNT] end # Returns the number of documents upserted. # # @example Get the number of upserted documents. # result.upserted_count # # @return [ Integer ] The number upserted. # # @since 2.1.0 def upserted_count @results[UPSERTED_COUNT] end # Get the upserted document ids, if the operation has inserts. # # @example Get the upserted ids. # result.upserted_ids # # @return [ Array ] The upserted ids. # # @since 2.1.0 def upserted_ids @results[UPSERTED_IDS] || [] end # Validates the bulk write result. # # @example Validate the result. # result.validate! # # @raise [ Error::BulkWriteError ] If the result contains errors. # # @return [ Result ] The result. # # @since 2.1.0 def validate! if @results['writeErrors'] || @results['writeConcernErrors'] raise Error::BulkWriteError.new(@results) else self end end end end end mongo-ruby-driver-2.21.3/lib/mongo/bulk_write/result_combiner.rb000066400000000000000000000100361505113246500247450ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2015-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class BulkWrite # Combines bulk write results together. # # @api private # # @since 2.1.0 class ResultCombiner # @return [ Integer ] count The number of documents in the entire batch. attr_reader :count # @return [ Hash ] results The results hash. attr_reader :results # Create the new result combiner. # # @api private # # @example Create the result combiner. # ResultCombiner.new # # @since 2.1.0 def initialize @results = {} @count = 0 end # Adds a result to the overall results. 
# # @api private # # @example Add the result. # combiner.combine!(result, count) # # @param [ Operation::Result ] result The result to combine. # @param [ Integer ] count The count of requests in the batch. # # @since 2.1.0 def combine!(result, count) # Errors can be communicated by the server in a variety of fields: # writeError, writeErrors, writeConcernError, writeConcernErrors. # Currently only errors given in writeConcernErrors will cause # counts not to be added, because this behavior is covered by the # retryable writes tests. It is possible that some or all of the # other errors should also be excluded when combining counts and # ids, and it is also possible that only a subset of these error # fields is actually possible in the context of bulk writes. unless result.write_concern_error? combine_counts!(result) combine_ids!(result) end combine_errors!(result) @count += count @acknowledged = result.acknowledged? end # Get the final result. # # @api private # # @return [ BulkWrite::Result ] The final result. # # @since 2.1.0 def result BulkWrite::Result.new(results, @acknowledged).validate! end private def combine_counts!(result) Result::FIELDS.each do |field| if result.respond_to?(field) && value = result.send(field) results.merge!(field => (results[field] || 0) + value) end end end def combine_ids!(result) if result.respond_to?(Result::INSERTED_IDS) results[Result::INSERTED_IDS] = (results[Result::INSERTED_IDS] || []) + result.inserted_ids end if result.respond_to?(Result::UPSERTED) results[Result::UPSERTED_IDS] = (results[Result::UPSERTED_IDS] || []) + result.upserted.map{ |doc| doc['_id'] } end end def combine_errors!(result) combine_write_errors!(result) combine_write_concern_errors!(result) end def combine_write_errors!(result) if write_errors = result.aggregate_write_errors(count) results.merge!( 'writeErrors' => ((results['writeErrors'] || []) << write_errors).flatten ) else result.validate! end end def combine_write_concern_errors!(result) if write_concern_errors = result.aggregate_write_concern_errors(count) results['writeConcernErrors'] = (results['writeConcernErrors'] || []) + write_concern_errors end end end end end mongo-ruby-driver-2.21.3/lib/mongo/bulk_write/transformable.rb000066400000000000000000000102351505113246500244110ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2015-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class BulkWrite # Defines behavior around transformations. # # @api private # # @since 2.1.0 module Transformable # The delete many model constant. # # @since 2.1.0 DELETE_MANY = :delete_many.freeze # The delete one model constant. # # @since 2.1.0 DELETE_ONE = :delete_one.freeze # The insert one model constant. # # @since 2.1.0 INSERT_ONE = :insert_one.freeze # The replace one model constant. # # @since 2.1.0 REPLACE_ONE = :replace_one.freeze # The update many model constant. # # @since 2.1.0 UPDATE_MANY = :update_many.freeze # The update one model constant. 
# # @since 2.1.0 UPDATE_ONE = :update_one.freeze # Proc to transform delete many ops. # # @since 2.1.0 DELETE_MANY_TRANSFORM = ->(doc){ { Operation::Q => doc[:filter], Operation::LIMIT => 0, }.tap do |d| d[Operation::COLLATION] = doc[:collation] if doc[:collation] d['hint'] = doc[:hint] if doc[:hint] end } # Proc to transform delete one ops. # # @since 2.1.0 DELETE_ONE_TRANSFORM = ->(doc){ { Operation::Q => doc[:filter], Operation::LIMIT => 1, }.tap do |d| d[Operation::COLLATION] = doc[:collation] if doc[:collation] d['hint'] = doc[:hint] if doc[:hint] end } # Proc to transform insert one ops. # # @since 2.1.0 INSERT_ONE_TRANSFORM = ->(doc){ doc } # Proc to transform replace one ops. # # @since 2.1.0 REPLACE_ONE_TRANSFORM = ->(doc){ { Operation::Q => doc[:filter], Operation::U => doc[:replacement], }.tap do |d| d['upsert'] = true if doc[:upsert] d[Operation::COLLATION] = doc[:collation] if doc[:collation] d['hint'] = doc[:hint] if doc[:hint] end } # Proc to transform update many ops. # # @since 2.1.0 UPDATE_MANY_TRANSFORM = ->(doc){ { Operation::Q => doc[:filter], Operation::U => doc[:update], Operation::MULTI => true, }.tap do |d| d['upsert'] = true if doc[:upsert] d[Operation::COLLATION] = doc[:collation] if doc[:collation] d[Operation::ARRAY_FILTERS] = doc[:array_filters] if doc[:array_filters] d['hint'] = doc[:hint] if doc[:hint] end } # Proc to transform update one ops. # # @since 2.1.0 UPDATE_ONE_TRANSFORM = ->(doc){ { Operation::Q => doc[:filter], Operation::U => doc[:update], }.tap do |d| d['upsert'] = true if doc[:upsert] d[Operation::COLLATION] = doc[:collation] if doc[:collation] d[Operation::ARRAY_FILTERS] = doc[:array_filters] if doc[:array_filters] d['hint'] = doc[:hint] if doc[:hint] end } # Document mappers from the bulk api input into proper commands. # # @since 2.1.0 MAPPERS = { DELETE_MANY => DELETE_MANY_TRANSFORM, DELETE_ONE => DELETE_ONE_TRANSFORM, INSERT_ONE => INSERT_ONE_TRANSFORM, REPLACE_ONE => REPLACE_ONE_TRANSFORM, UPDATE_MANY => UPDATE_MANY_TRANSFORM, UPDATE_ONE => UPDATE_ONE_TRANSFORM }.freeze private def transform(name, document) validate(name, document) MAPPERS[name].call(document) end end end end mongo-ruby-driver-2.21.3/lib/mongo/bulk_write/unordered_combiner.rb000066400000000000000000000025741505113246500254260ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2015-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class BulkWrite # Combines groups of bulk write operations in no order. # # @api private # # @since 2.1.0 class UnorderedCombiner include Transformable include Validatable include Combineable # Combine the requests in no particular order. # # @api private # # @example Combine the requests. # combiner.combine # # @return [ Array ] The combined requests.
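# # A sketch of the resulting shape: given requests such as # [{ insert_one: a }, { delete_one: b }, { insert_one: c }], all requests # of the same type are merged into a single group regardless of position, # yielding [{ insert_one: [a', c'] }, { delete_one: [b'] }] (a', b', c' # being the transformed documents), whereas the ordered combiner only # merges adjacent requests of the same type.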
# # @since 2.1.0 def combine combine_requests({}).map do |name, ops| { name => ops } end end private def add(operations, name, document) (operations[name] ||= []).push(transform(name, document)) operations end end end end mongo-ruby-driver-2.21.3/lib/mongo/bulk_write/validatable.rb000066400000000000000000000041231505113246500240210ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2015-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class BulkWrite # Defines behavior around validations. # # @api private # # @since 2.1.0 module Validatable # Validate the document. # # @api private # # @example Validate the document. # validatable.validate(:insert_one, { _id: 0 }) # # @param [ Symbol ] name The operation name. # @param [ Hash, BSON::Document ] document The document. # # @raise [ InvalidBulkOperation ] If not valid. # # @return [ Hash, BSON::Document ] The document. # # @since 2.1.0 def validate(name, document) validate_operation(name) validate_document(name, document) if document.respond_to?(:keys) && (document[:collation] || document[Operation::COLLATION]) @has_collation = true end if document.respond_to?(:keys) && document[:array_filters] @has_array_filters = true end if document.respond_to?(:keys) && document[:hint] @has_hint = true end end private def validate_document(name, document) if document.respond_to?(:keys) || document.respond_to?(:data) document else raise Error::InvalidBulkOperation.new(name, document) end end def validate_operation(name) unless Transformable::MAPPERS.key?(name) raise Error::InvalidBulkOperationType.new(name) end end end end end mongo-ruby-driver-2.21.3/lib/mongo/caching_cursor.rb000066400000000000000000000041011505113246500223700ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo # A Cursor that attempts to load documents from memory first before hitting # the database if the same query has already been executed. # # @api semiprivate class CachingCursor < Cursor # @return [ Array ] The cursor's cached documents. # @api private attr_reader :cached_docs # We iterate over the cached documents if they already exist in the # cursor; otherwise we proceed as normal. # # @example Iterate over the documents. # cursor.each do |doc| # # ... # end def each if @cached_docs @cached_docs.each do |doc| yield doc end unless closed? # StopIteration raised by try_next ends this loop.
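# Note that each document fetched below is also appended to @cached_docs # by try_next (defined further down), so a subsequent #each call can # replay the complete result set from memory.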
loop do document = try_next yield document if document end end else super end end # Get a human-readable string representation of +Cursor+. # # @example Inspect the cursor. # cursor.inspect # # @return [ String ] A string representation of a +Cursor+ instance. def inspect "#<Mongo::CachingCursor:0x#{object_id} @view=#{@view.inspect}>" end # Acquires the next document for cursor iteration and then # inserts that document in the @cached_docs array. # # @api private def try_next @cached_docs ||= [] document = super @cached_docs << document if document document end end end mongo-ruby-driver-2.21.3/lib/mongo/client.rb000066400000000000000000002057101505113246500206650ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the 'License'); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an 'AS IS' BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo # The client is the entry point to the driver and is the main object that # will be interacted with. # # @since 2.0.0 class Client extend Forwardable include Loggable # The options that do not affect the behavior of a cluster and its # subcomponents. # # @since 2.1.0 CRUD_OPTIONS = [ :auto_encryption_options, :database, :read, :read_concern, :write, :write_concern, :retry_reads, :max_read_retries, :read_retry_interval, :retry_writes, :max_write_retries, # Options which cannot currently be here: # # :server_selection_timeout # Server selection timeout is used by cluster constructor to figure out # how long to wait for initial scan in compatibility mode, but once # the cluster is initialized it no longer uses this timeout. # Unfortunately server selector reads server selection timeout out of # the cluster, and this behavior is required by Cluster#next_primary # which takes no arguments. When next_primary is removed we can revisit # using the same cluster object with different server selection timeouts. ].freeze # Valid client options. # # @since 2.1.2 VALID_OPTIONS = [ :app_name, :auth_mech, :auth_mech_properties, :auth_source, :auto_encryption_options, :bg_error_backtrace, :cleanup, :compressors, :direct_connection, :connect, :connect_timeout, :database, :heartbeat_frequency, :id_generator, :load_balanced, :local_threshold, :logger, :log_prefix, :max_connecting, :max_idle_time, :max_pool_size, :max_read_retries, :max_write_retries, :min_pool_size, :monitoring, :monitoring_io, :password, :platform, :populator_io, :read, :read_concern, :read_retry_interval, :replica_set, :resolv_options, :retry_reads, :retry_writes, :scan, :sdam_proc, :server_api, :server_selection_timeout, :socket_timeout, :srv_max_hosts, :srv_service_name, :ssl, :ssl_ca_cert, :ssl_ca_cert_object, :ssl_ca_cert_string, :ssl_cert, :ssl_cert_object, :ssl_cert_string, :ssl_key, :ssl_key_object, :ssl_key_pass_phrase, :ssl_key_string, :ssl_verify, :ssl_verify_certificate, :ssl_verify_hostname, :ssl_verify_ocsp_endpoint, :timeout_ms, :truncate_logs, :user, :wait_queue_timeout, :wrapping_libraries, :write, :write_concern, :zlib_compression_level, ].freeze # The compression algorithms supported by the driver.
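# For example, a client can be asked to prefer zstd and fall back to zlib; # the first compressor in the list that the server also supports is used # (a usage sketch): # # Mongo::Client.new([ '127.0.0.1:27017' ], compressors: %w[zstd zlib])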
# # @since 2.5.0 VALID_COMPRESSORS = [ Mongo::Protocol::Compressed::ZSTD, Mongo::Protocol::Compressed::SNAPPY, Mongo::Protocol::Compressed::ZLIB ].freeze # The known server API versions. VALID_SERVER_API_VERSIONS = %w( 1 ).freeze # @return [ Mongo::Cluster ] cluster The cluster of servers for the client. attr_reader :cluster # @return [ Mongo::Database ] database The database the client is operating on. attr_reader :database # @return [ Hash ] options The configuration options. attr_reader :options # @return [ Mongo::Crypt::AutoEncrypter ] The object that encapsulates # auto-encryption behavior attr_reader :encrypter # Delegate command and collections execution to the current database. def_delegators :@database, :command, :collections # Delegate subscription to monitoring. def_delegators :monitoring, :subscribe, :unsubscribe # @return [ Monitoring ] monitoring The monitoring. # @api private def monitoring if cluster cluster.monitoring else @monitoring end end private :monitoring # Determine if this client is equivalent to another object. # # @example Check client equality. # client == other # # @param [ Object ] other The object to compare to. # # @return [ true, false ] If the objects are equal. # # @since 2.0.0 def ==(other) return false unless other.is_a?(Client) cluster == other.cluster && options == other.options end alias_method :eql?, :== # Get a collection object for the provided collection name. # # @example Get the collection. # client[:users] # # @param [ String, Symbol ] collection_name The name of the collection. # @param [ Hash ] options The options to the collection. # # @return [ Mongo::Collection ] The collection. # # @since 2.0.0 def [](collection_name, options = {}) database[collection_name, options] end # Get the hash value of the client. # # @example Get the client hash value. # client.hash # # @return [ Integer ] The client hash value. # # @since 2.0.0 def hash [cluster, options].hash end # Instantiate a new driver client. # # @example Instantiate a single server or mongos client. # Mongo::Client.new(['127.0.0.1:27017']) # # @example Instantiate a client for a replica set. # Mongo::Client.new(['127.0.0.1:27017', '127.0.0.1:27021']) # # @example Directly connect to a mongod in a replica set # Mongo::Client.new(['127.0.0.1:27017'], :connect => :direct) # # without `:connect => :direct`, Mongo::Client will discover and # # connect to the replica set if given the address of a server in # # a replica set # # @param [ Array | String ] addresses_or_uri The array of server addresses in the # form of host:port or a MongoDB URI connection string. # @param [ Hash ] options The options to be used by the client. If a MongoDB URI # connection string is also provided, these options take precedence over any # analogous options present in the URI string. # # @option options [ String, Symbol ] :app_name Application name that is # printed to the mongod logs upon establishing a connection in server # versions >= 3.4. # @option options [ Symbol ] :auth_mech The authentication mechanism to # use. One of :mongodb_cr, :mongodb_x509, :plain, :scram, :scram256 # @option options [ Hash ] :auth_mech_properties # @option options [ String ] :auth_source The source to authenticate from. # @option options [ true | false | nil | Integer ] :bg_error_backtrace # Experimental. Set to true to log complete backtraces for errors in # background threads. Set to false or nil to not log backtraces. Provide # a positive integer to log up to that many backtrace lines. 
# @option options [ Array ] :compressors A list of potential # compressors to use, in order of preference. The driver chooses the # first compressor that is also supported by the server. Currently the # driver only supports 'zstd', 'snappy' and 'zlib'. # @option options [ true | false ] :direct_connection Whether to connect # directly to the specified seed, bypassing topology discovery. Exactly # one seed must be provided. # @option options [ Symbol ] :connect Deprecated - use :direct_connection # option instead of this option. The connection method to use. This # forces the cluster to behave in the specified way instead of # auto-discovering. One of :direct, :replica_set, :sharded, # :load_balanced. If :connect is set to :load_balanced, the driver # will behave as if the server is a load balancer even if it isn't # connected to a load balancer. # @option options [ Float ] :connect_timeout The timeout, in seconds, to # attempt a connection. # @option options [ String ] :database The database to connect to. # @option options [ Float ] :heartbeat_frequency The interval, in seconds, # for the server monitor to refresh its description via hello. # @option options [ Object ] :id_generator A custom object to generate ids # for documents. Must respond to #generate. # @option options [ true | false ] :load_balanced Whether to expect to # connect to a load balancer. # @option options [ Integer ] :local_threshold The local threshold boundary # in seconds for selecting a near server for an operation. # @option options [ Logger ] :logger A custom logger to use. # @option options [ String ] :log_prefix A custom log prefix to use when # logging. This option is experimental and subject to change in a future # version of the driver. # @option options [ Integer ] :max_connecting The maximum number of # connections that can be connecting simultaneously. The default is 2. # This option should be increased if there are many threads that share # the same client and the application is experiencing timeouts # while waiting for connections to be established. # @option options [ Integer ] :max_idle_time The maximum seconds a socket can remain idle # since it has been checked in to the pool. # @option options [ Integer ] :max_pool_size The maximum size of the # connection pool. Setting this option to zero creates an unlimited connection pool. # @option options [ Integer ] :max_read_retries The maximum number of read # retries when legacy read retries are in use. # @option options [ Integer ] :max_write_retries The maximum number of write # retries when legacy write retries are in use. # @option options [ Integer ] :min_pool_size The minimum size of the # connection pool. # @option options [ true, false ] :monitoring If false is given, the # client is initialized without global SDAM event subscribers and # will not publish SDAM events. Command monitoring and legacy events # will still be published, and the driver will still perform SDAM and # monitor its cluster in order to perform server selection. Built-in # driver logging of SDAM events will be disabled because it is # implemented through SDAM event subscription. Client#subscribe will # succeed for all event types, but subscribers to SDAM events will # not be invoked. Values other than false result in default behavior # which is to perform normal SDAM event publication. # @option options [ true, false ] :monitoring_io For internal driver # use only.
Set to false to prevent SDAM-related I/O from being # done by this client or servers under it. Note: setting this option # to false will make the client non-functional. It is intended for # use in tests which manually invoke SDAM state transitions. # @option options [ true | false ] :cleanup For internal driver use only. # Set to false to prevent endSessions command being sent to the server # to clean up server sessions when the cluster is disconnected, and to # not start the periodic executor. If :monitoring_io is false, # :cleanup automatically defaults to false as well. # @option options [ String ] :password The user's password. # @option options [ String ] :platform Platform information to include in # the metadata printed to the mongod logs upon establishing a connection # in server versions >= 3.4. # @option options [ Hash ] :read The read preference options. The hash # may have the following items: # - *:mode* -- read preference specified as a symbol; valid values are # *:primary*, *:primary_preferred*, *:secondary*, *:secondary_preferred* # and *:nearest*. # - *:tag_sets* -- an array of hashes. # - *:local_threshold*. # @option options [ Hash ] :read_concern The read concern option. # @option options [ Float ] :read_retry_interval The interval, in seconds, # in which reads on a mongos are retried. # @option options [ Symbol ] :replica_set The name of the replica set to # connect to. Servers not in this replica set will be ignored. # @option options [ true | false ] :retry_reads If true, modern retryable # reads are enabled (which is the default). If false, modern retryable # reads are disabled and legacy retryable reads are enabled. # @option options [ true | false ] :retry_writes Retry writes once when # connected to a replica set or sharded cluster versions 3.6 and up. # (Default is true.) # @option options [ true | false ] :scan Whether to scan all seeds # in constructor. The default in driver version 2.x is to do so; # driver version 3.x will not scan seeds in constructor. Opt in to the # new behavior by setting this option to false. *Note:* setting # this option to nil enables scanning seeds in constructor in driver # version 2.x. Driver version 3.x will recognize this option but # will ignore it and will never scan seeds in the constructor. # @option options [ Proc ] :sdam_proc A Proc to invoke with the client # as the argument prior to performing server discovery and monitoring. # Use this to set up SDAM event listeners to receive events published # during client construction. # # Note: the client is not fully constructed when sdam_proc is invoked, # in particular the cluster is nil at this time. sdam_proc should # limit itself to calling #subscribe and #unsubscribe methods on the # client only. # @option options [ Hash ] :server_api The requested server API version. # This hash can have the following items: # - *:version* -- string # - *:strict* -- boolean # - *:deprecation_errors* -- boolean # @option options [ Integer ] :server_selection_timeout The timeout in seconds # for selecting a server for an operation. # @option options [ Float ] :socket_timeout The timeout, in seconds, to # execute operations on a socket. This option is deprecated, use # :timeout_ms instead. # @option options [ Integer ] :srv_max_hosts The maximum number of mongoses # that the driver will communicate with for sharded topologies. If this # option is 0, then there will be no maximum number of mongoses.
If the # given URI resolves to more hosts than ``:srv_max_hosts``, the client # will randomly choose an ``:srv_max_hosts`` sized subset of hosts. # @option options [ String ] :srv_service_name The service name to use in # the SRV DNS query. # @option options [ true, false ] :ssl Whether to use TLS. # @option options [ String ] :ssl_ca_cert The file containing concatenated # certificate authority certificates used to validate certs passed from the # other end of the connection. Intermediate certificates should NOT be # specified in files referenced by this option. One of :ssl_ca_cert, # :ssl_ca_cert_string or :ssl_ca_cert_object (in order of priority) is # required when using :ssl_verify. # @option options [ Array ] :ssl_ca_cert_object # An array of OpenSSL::X509::Certificate objects representing the # certificate authority certificates used to validate certs passed from # the other end of the connection. Intermediate certificates should NOT # be specified in files referenced by this option. One of :ssl_ca_cert, # :ssl_ca_cert_string or :ssl_ca_cert_object (in order of priority) # is required when using :ssl_verify. # @option options [ String ] :ssl_ca_cert_string A string containing # certificate authority certificate used to validate certs passed from the # other end of the connection. This option allows passing only one CA # certificate to the driver. Intermediate certificates should NOT # be specified in files referenced by this option. One of :ssl_ca_cert, # :ssl_ca_cert_string or :ssl_ca_cert_object (in order of priority) is # required when using :ssl_verify. # @option options [ String ] :ssl_cert The certificate file used to identify # the connection against MongoDB. A certificate chain may be passed by # specifying the client certificate first followed by any intermediate # certificates up to the CA certificate. The file may also contain the # certificate's private key, which will be ignored. This option, if present, # takes precedence over the values of :ssl_cert_string and :ssl_cert_object # @option options [ OpenSSL::X509::Certificate ] :ssl_cert_object The OpenSSL::X509::Certificate # used to identify the connection against MongoDB. Only one certificate # may be passed through this option. # @option options [ String ] :ssl_cert_string A string containing the PEM-encoded # certificate used to identify the connection against MongoDB. A certificate # chain may be passed by specifying the client certificate first followed # by any intermediate certificates up to the CA certificate. The string # may also contain the certificate's private key, which will be ignored. # This option, if present, takes precedence over the value of :ssl_cert_object # @option options [ String ] :ssl_key The private keyfile used to identify the # connection against MongoDB. Note that even if the key is stored in the same # file as the certificate, both need to be explicitly specified. This option, # if present, takes precedence over the values of :ssl_key_string and :ssl_key_object # @option options [ OpenSSL::PKey ] :ssl_key_object The private key used to identify the # connection against MongoDB # @option options [ String ] :ssl_key_pass_phrase A passphrase for the private key. # @option options [ String ] :ssl_key_string A string containing the PEM-encoded private key # used to identify the connection against MongoDB.
This parameter, if present, # takes precedence over the value of option :ssl_key_object # @option options [ true, false ] :ssl_verify Whether to perform peer certificate validation and # hostname verification. Note that the decision of whether to validate certificates will be # overridden if :ssl_verify_certificate is set, and the decision of whether to validate # hostnames will be overridden if :ssl_verify_hostname is set. # @option options [ true, false ] :ssl_verify_certificate Whether to perform peer certificate # validation. This setting overrides :ssl_verify with respect to whether certificate # validation is performed. # @option options [ true, false ] :ssl_verify_hostname Whether to perform peer hostname # validation. This setting overrides :ssl_verify with respect to whether hostname validation # is performed. # @option options [ Integer ] :timeout_ms The operation timeout in milliseconds. # Must be a non-negative integer. An explicit value of 0 means infinite. # The default value is unset which means the feature is not enabled. # @option options [ true, false ] :truncate_logs Whether to truncate the # logs at the default 250 characters. # @option options [ String ] :user The user name. # @option options [ Float ] :wait_queue_timeout The time to wait, in # seconds, in the connection pool for a connection to be checked in. # This option is deprecated, use :timeout_ms instead. # @option options [ Array ] :wrapping_libraries Information about # libraries such as ODMs that are wrapping the driver, to be added to # metadata sent to the server. Specify the lower level libraries first. # Allowed hash keys: :name, :version, :platform. # @option options [ Hash ] :write Deprecated. Equivalent to :write_concern # option. # @option options [ Hash ] :write_concern The write concern options. # Can be :w => Integer|String, :wtimeout => Integer (in milliseconds, deprecated), # :j => Boolean, :fsync => Boolean. # @option options [ Integer ] :zlib_compression_level The Zlib compression level to use, if using compression. # See Ruby's Zlib module for valid levels. # @option options [ Hash ] :resolv_options For internal driver use only. # Options to pass through to Resolv::DNS constructor for SRV lookups. # @option options [ Hash ] :auto_encryption_options Auto-encryption related # options. # - :key_vault_client => Client | nil, a client connected to the MongoDB # instance containing the encryption key vault # - :key_vault_namespace => String, the namespace of the key vault in the # format database.collection # - :kms_providers => Hash, A hash of key management service (KMS) configuration # information. Valid hash keys are :aws, :azure, :gcp, :kmip, :local. # There may be more than one kms provider specified. # - :kms_tls_options => Hash, A hash of TLS options to authenticate to # KMS providers, usually used for KMIP servers. Valid hash keys # are :aws, :azure, :gcp, :kmip, :local. There may be more than one # kms provider specified. # - :schema_map => Hash | nil, JSONSchema for one or more collections # specifying which fields should be encrypted. This option is # mutually exclusive with :schema_map_path. # - Note: Schemas supplied in the schema_map only apply to configuring # automatic encryption for client side encryption. Other validation # rules in the JSON schema will not be enforced by the driver and will # result in an error. # - Note: Supplying a schema_map provides more security than relying on # JSON Schemas obtained from the server. 
It protects against a # malicious server advertising a false JSON Schema, which could trick # the client into sending unencrypted data that should be encrypted. # - Note: If a collection is present on both the :encrypted_fields_map # and :schema_map, an error will be raised. # - :schema_map_path => String | nil A path to a file that contains the JSON schema # of the collection that stores auto encrypted documents. This option is # mutually exclusive with :schema_map. # - :bypass_auto_encryption => Boolean, when true, disables auto encryption; # defaults to false. # - :extra_options => Hash | nil, options related to spawning mongocryptd # (this part of the API is subject to change). # - :encrypted_fields_map => Hash | nil, maps a collection namespace to # a hash describing encrypted fields for queryable encryption. # - Note: If a collection is present on both the encryptedFieldsMap # and schemaMap, an error will be raised. # - :bypass_query_analysis => Boolean | nil, when true disables automatic # analysis of outgoing commands. # - :crypt_shared_lib_path => [ String | nil ] Path that should # be used to load the crypt shared library. Providing this option # overrides default crypt shared library load paths for libmongocrypt. # - :crypt_shared_lib_required => [ Boolean | nil ] Whether # crypt shared library is required. If 'true', an error will be raised # if a crypt_shared library cannot be loaded by libmongocrypt. # # Notes on automatic encryption: # - Automatic encryption is an enterprise-only feature that only applies # to operations on a collection. # - Automatic encryption is not supported for operations on a database or # view. # - Automatic encryption requires the authenticated user to have the # listCollections privilege. # - At worst, automatic encryption may triple the number of connections # used by the Client at any one time. # - If automatic encryption fails on an operation, use a MongoClient # configured with bypass_auto_encryption: true and use # ClientEncryption.encrypt to manually encrypt values. # - Enabling Client Side Encryption reduces the maximum write batch size # and may have a negative performance impact. # # @since 2.0.0 def initialize(addresses_or_uri, options = nil) options = options ? options.dup : {} processed = process_addresses(addresses_or_uri, options) uri = processed[:uri] addresses = processed[:addresses] options = processed[:options] # If the URI is an SRV URI, note this so that we can start # SRV polling if the topology is a sharded cluster. srv_uri = uri if uri.is_a?(URI::SRVProtocol) options = self.class.canonicalize_ruby_options(options) # The server API version is specified to be a string. # However, it is very annoying to always provide the number 1 as a string, # therefore cast to the string type here. if server_api = options[:server_api] if server_api.is_a?(Hash) server_api = Options::Redacted.new(server_api) if (version = server_api[:version]).is_a?(Integer) options[:server_api] = server_api.merge(version: version.to_s) end end end # Special handling for sdam_proc as it is only used during client # construction sdam_proc = options.delete(:sdam_proc) # For gssapi service_name, the default option is given in a hash # (one level down from the top level). merged_options = default_options(options) options.each do |k, v| default_v = merged_options[k] if Hash === default_v v = default_v.merge(v) end merged_options[k] = v end options = merged_options options.keys.each do |k| if options[k].nil?
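# Strip nil values so that explicitly-nil options behave the same as # options that were never given.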
options.delete(k) end end @options = validate_new_options!(options) =begin WriteConcern object support if @options[:write_concern].is_a?(WriteConcern::Base) # Cache the instance so that we do not needlessly reconstruct it. @write_concern = @options[:write_concern] @options[:write_concern] = @write_concern.options end =end @options.freeze validate_options!(addresses, is_srv: uri.is_a?(URI::SRVProtocol)) validate_authentication_options! database_options = @options.dup database_options.delete(:server_api) @database = Database.new(self, @options[:database], database_options) # Temporarily set monitoring so that event subscriptions can be # set up without there being a cluster @monitoring = Monitoring.new(@options) if sdam_proc sdam_proc.call(self) end @connect_lock = Mutex.new @connect_lock.synchronize do @cluster = Cluster.new(addresses, @monitoring, cluster_options.merge(srv_uri: srv_uri)) end begin # Unset monitoring, it will be taken out of cluster from now on remove_instance_variable('@monitoring') if @options[:auto_encryption_options] @connect_lock.synchronize do build_encrypter end end rescue begin @cluster.close rescue => e log_warn("Error closing cluster in client constructor's exception handler: #{e.class}: #{e}") # Drop this exception so that the original exception is raised end raise end if block_given? begin yield(self) ensure close end end end # @api private def cluster_options # We share clusters when a new client with different CRUD_OPTIONS # is requested; therefore, cluster should not be getting any of these # options upon instantiation options.reject do |key, value| CRUD_OPTIONS.include?(key.to_sym) end.merge( # but need to put the database back in for auth... database: options[:database], # Put these options in for legacy compatibility, but note that # their values on the client and the cluster do not have to match - # applications should read these values from client, not from cluster max_read_retries: options[:max_read_retries], read_retry_interval: options[:read_retry_interval], ).tap do |options| # If the client has a cluster already, forward srv_uri to the new # cluster to maintain SRV monitoring. If the client is brand new, # its constructor sets srv_uri manually. if cluster options.update(srv_uri: cluster.options[:srv_uri]) end end end # Get the maximum number of times the client can retry a read operation # when using legacy read retries. # # @return [ Integer ] The maximum number of retries. # # @api private def max_read_retries options[:max_read_retries] || Cluster::MAX_READ_RETRIES end # Get the interval, in seconds, in which reads are retried when using legacy # read retries. # # @return [ Float ] The interval. # # @api private def read_retry_interval options[:read_retry_interval] || Cluster::READ_RETRY_INTERVAL end # Get the maximum number of times the client can retry a write operation # when using legacy write retries. # # @return [ Integer ] The maximum number of retries. # # @api private def max_write_retries options[:max_write_retries] || Cluster::MAX_WRITE_RETRIES end # Get an inspection of the client as a string. # # @example Inspect the client. # client.inspect # # @return [ String ] The inspection string. # # @since 2.0.0 def inspect "#<Mongo::Client:0x#{object_id} cluster=#{cluster.summary}>" end # Get a summary of the client state. # # @note The exact format and layout of the returned summary string is # not part of the driver's public API and may be changed at any time. # # @return [ String ] The summary string. # # @since 2.7.0 def summary "#<Client cluster=#{cluster.summary}>" end # Get the server selector.
It either uses the read preference # defined in the client options or defaults to a Primary server selector. # # @example Get the server selector. # client.server_selector # # @return [ Mongo::ServerSelector ] The server selector using the # user-defined read preference or a Primary server selector default. # # @since 2.5.0 def server_selector @server_selector ||= if read_preference ServerSelector.get(read_preference) else ServerSelector.primary end end # Get the read preference from the options passed to the client. # # @example Get the read preference. # client.read_preference # # @return [ BSON::Document ] The user-defined read preference. # The document may have the following fields: # - *:mode* -- read preference specified as a symbol; valid values are # *:primary*, *:primary_preferred*, *:secondary*, *:secondary_preferred* # and *:nearest*. # - *:tag_sets* -- an array of hashes. # - *:local_threshold*. # # @since 2.0.0 def read_preference @read_preference ||= options[:read] end # Creates a new client configured to use the database with the provided # name, and using the other options configured in this client. # # @note The new client shares the cluster with the original client, # and as a result also shares the monitoring instance and monitoring # event subscribers. # # @example Create a client for the `users' database. # client.use(:users) # # @param [ String, Symbol ] name The name of the database to use. # # @return [ Mongo::Client ] A new client instance. # # @since 2.0.0 def use(name) with(database: name) end # Creates a new client with the passed options merged over the existing # options of this client. Useful for one-offs to change specific options # without altering the original client. # # @note Depending on options given, the returned client may share the # cluster with the original client or be created with a new cluster. # If a new cluster is created, the monitoring event subscribers on # the new client are set to the default event subscriber set and # none of the subscribers on the original client are copied over. # # @example Get a client with changed options. # client.with(:read => { :mode => :primary_preferred }) # # @param [ Hash ] new_options The new options to use. # # @return [ Mongo::Client ] A new client instance. # # @since 2.0.0 def with(new_options = nil) clone.tap do |client| opts = client.update_options(new_options || Options::Redacted.new) Database.create(client) # We can't use the same cluster if some options that would affect it # have changed. if cluster_modifying?(opts) Cluster.create(client, monitoring: opts[:monitoring]) end end end # Updates this client's options from new_options, validating all options. # # The new options may be transformed according to various rules. # The final hash of options actually applied to the client is returned. # # If options fail validation, this method may warn or raise an exception. # If this method raises an exception, the client should be discarded # (similarly to if a constructor raised an exception). # # @param [ Hash ] new_options The new options to use. # # @return [ Hash ] Modified new options written into the client. 
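# @example A hypothetical call, as made internally by #with: # client.update_options(read: { mode: :secondary })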
# # @api private def update_options(new_options) old_options = @options new_options = self.class.canonicalize_ruby_options(new_options || {}) validate_new_options!(new_options).tap do |opts| # Our options are frozen options = @options.dup if options[:write] && opts[:write_concern] options.delete(:write) end if options[:write_concern] && opts[:write] options.delete(:write_concern) end options.update(opts) @options = options.freeze auto_encryption_options_changed = @options[:auto_encryption_options] != old_options[:auto_encryption_options] # If there are new auto_encryption_options, create a new encrypter. # Otherwise, allow the new client to share an encrypter with the # original client. # # If auto_encryption_options are nil, set @encrypter to nil, but do not # close the encrypter because it may still be used by the original client. if @options[:auto_encryption_options] && auto_encryption_options_changed @connect_lock.synchronize do build_encrypter end elsif @options[:auto_encryption_options].nil? @connect_lock.synchronize do @encrypter = nil end end validate_options! validate_authentication_options! end end # Get the read concern for this client. # # @example Get the client read concern. # client.read_concern # # @return [ Hash ] The read concern. # # @since 2.6.0 def read_concern options[:read_concern] end # Get the write concern for this client. If no option was provided, then a # default single server acknowledgement will be used. # # @example Get the client write concern. # client.write_concern # # @return [ Mongo::WriteConcern ] The write concern. # # @since 2.0.0 def write_concern @write_concern ||= WriteConcern.get(options[:write_concern] || options[:write]) end def closed? !!@closed end # Close all connections. # # @return [ true ] Always true. # # @since 2.1.0 def close @connect_lock.synchronize do @closed = true do_close end true end # Close encrypter and clean up auto-encryption resources. # # @return [ true ] Always true. def close_encrypter @encrypter.close if @encrypter true end # Reconnect the client. # # @example Reconnect the client. # client.reconnect # # @return [ true ] Always true. # # @since 2.1.0 def reconnect addresses = cluster.addresses.map(&:to_s) @connect_lock.synchronize do do_close rescue nil @cluster = Cluster.new(addresses, monitoring, cluster_options) if @options[:auto_encryption_options] build_encrypter end @closed = false end true end # Get the names of all databases. # # @example Get the database names. # client.database_names # # @param [ Hash ] filter The filter criteria for getting a list of databases. # @param [ Hash ] opts The command options. # # @option opts [ true, false ] :authorized_databases A flag that determines # which databases are returned based on user privileges when access control # is enabled # # See https://mongodb.com/docs/manual/reference/command/listDatabases/ # for more information and usage. # @option opts [ Session ] :session The session to use. # @option opts [ Object ] :comment A user-provided # comment to attach to this command. # @option options [ Integer ] :timeout_ms The operation timeout in milliseconds. # Must be a non-negative integer. An explicit value of 0 means infinite. # The default value is unset which means the feature is not enabled. # # @return [ Array ] The names of the databases. # # @since 2.0.5 def database_names(filter = {}, opts = {}) list_databases(filter, true, opts).collect{ |info| info['name'] } end # Get info for each database. # # @example Get the info for each database. 
# client.list_databases # # @param [ Hash ] filter The filter criteria for getting a list of databases. # @param [ true, false ] name_only Whether to only return each database name without full metadata. # @param [ Hash ] opts The command options. # # @option opts [ true, false ] :authorized_databases A flag that determines # which databases are returned based on user privileges when access control # is enabled. # @option opts [ Object ] :comment A user-provided # comment to attach to this command. # @option options [ Integer ] :timeout_ms The operation timeout in milliseconds. # Must be a non-negative integer. An explicit value of 0 means infinite. # The default value is unset which means the feature is not enabled. # # See https://mongodb.com/docs/manual/reference/command/listDatabases/ # for more information and usage. # @option opts [ Session ] :session The session to use. # # @return [ Array ] The info for each database. # # @since 2.0.5 def list_databases(filter = {}, name_only = false, opts = {}) cmd = { listDatabases: 1 } cmd[:nameOnly] = !!name_only cmd[:filter] = filter unless filter.empty? cmd[:authorizedDatabases] = true if opts[:authorized_databases] use(Database::ADMIN).database.read_command(cmd, opts).first[Database::DATABASES] end # Returns a list of Mongo::Database objects. # # @example Get a list of Mongo::Database objects. # client.list_mongo_databases # # @param [ Hash ] filter The filter criteria for getting a list of databases. # @param [ Hash ] opts The command options. # # @option opts [ Session ] :session The session to use. # @option options [ Object ] :comment A user-provided # comment to attach to this command. # # @return [ Array ] The list of database objects. # # @since 2.5.0 def list_mongo_databases(filter = {}, opts = {}) database_names(filter, opts).collect do |name| Database.new(self, name, options) end end # Start a session. # # If the deployment does not support sessions, raises # Mongo::Error::InvalidSession. This exception can also be raised when # the driver is not connected to a data-bearing server, for example # during failover. # # @example Start a session. # client.start_session(causal_consistency: true) # # @param [ Hash ] options The session options. Accepts the options # that Session#initialize accepts. # # @note A Session cannot be used by multiple threads at once; session # objects are not thread-safe. # # @return [ Session ] The session. # # @since 2.5.0 def start_session(options = {}) session = get_session!(options.merge(implicit: false)) if block_given? begin yield session ensure session.end_session end else session end end # As of version 3.6 of the MongoDB server, a ``$changeStream`` pipeline stage is supported # in the aggregation framework. As of version 4.0, this stage allows users to request that # notifications are sent for all changes that occur in the client's cluster. # # @example Get change notifications for the client's cluster. # client.watch([{ '$match' => { operationType: { '$in' => ['insert', 'replace'] } } }]) # # @param [ Array ] pipeline Optional additional filter operators. # @param [ Hash ] options The change stream options. # @option options [ String ] :full_document Allowed values: nil, 'default', # 'updateLookup', 'whenAvailable', 'required'. # # The default is to not send a value (i.e. nil), which is equivalent to # 'default'.
By default, the change notification for partial updates will # include a delta describing the changes to the document. # # When set to 'updateLookup', the change notification for partial updates # will include both a delta describing the changes to the document as well # as a copy of the entire document that was changed from some time after # the change occurred. # # When set to 'whenAvailable', configures the change stream to return the # post-image of the modified document for replace and update change events # if the post-image for this event is available. # # When set to 'required', the same behavior as 'whenAvailable' except that # an error is raised if the post-image is not available. # @option options [ String ] :full_document_before_change Allowed values: nil, # 'whenAvailable', 'required', 'off'. # # The default is to not send a value (i.e. nil), which is equivalent to 'off'. # # When set to 'whenAvailable', configures the change stream to return the # pre-image of the modified document for replace, update, and delete change # events if it is available. # # When set to 'required', the same behavior as 'whenAvailable' except that # an error is raised if the pre-image is not available. # @option options [ BSON::Document, Hash ] :resume_after Specifies the logical starting point # for the new change stream. # @option options [ Integer ] :max_await_time_ms The maximum amount of time for the server to # wait on new documents to satisfy a change stream query. # @option options [ Integer ] :batch_size The number of documents to return per batch. # @option options [ BSON::Document, Hash ] :collation The collation to use. # @option options [ Session ] :session The session to use. # @option options [ BSON::Timestamp ] :start_at_operation_time Only return # changes that occurred at or after the specified timestamp. Any command run # against the server will return a cluster time that can be used here. # Only recognized by server versions 4.0+. # @option options [ Object ] :comment A user-provided # comment to attach to this command. # @option options [ Boolean ] :show_expanded_events Enables the server to # send the 'expanded' list of change stream events. The list of additional # events included with this flag set are: createIndexes, dropIndexes, # modify, create, shardCollection, reshardCollection, # refineCollectionShardKey. # # @note A change stream only allows 'majority' read concern. # @note This helper method is preferable to running a raw aggregation with a $changeStream # stage, for the purpose of supporting resumability. # # @return [ ChangeStream ] The change stream object. # # @since 2.6.0 def watch(pipeline = [], options = {}) return use(Database::ADMIN).watch(pipeline, options) unless database.name == Database::ADMIN view_options = options.dup view_options[:cursor_type] = :tailable_await if options[:max_await_time_ms] Mongo::Collection::View::ChangeStream.new( Mongo::Collection::View.new(self["#{Database::COMMAND}.aggregate"], {}, view_options), pipeline, Mongo::Collection::View::ChangeStream::CLUSTER, options) end # Returns a session to use for operations if possible. # # If :session option is set, validates that session and returns it. # Otherwise, if deployment supports sessions, creates a new session and # returns it. When a new session is created, the session will be implicit # (lifecycle is managed by the driver) if the :implicit option is given, # otherwise the session will be explicit (lifecycle managed by the # application). 
If the deployment does not support sessions, returns nil. # # @option options [ true | false ] :implicit When no session is passed in, # whether to create an implicit session. # @option options [ Session ] :session The session to validate and return. # # @return [ Session | nil ] Session object or nil if sessions are not # supported by the deployment. # # @api private def get_session(options = {}) get_session!(options) rescue Error::SessionsNotSupported nil end # Creates a session to use for operations if possible and yields it to # the provided block. # # If :session option is set, validates that session and uses it. # Otherwise, if deployment supports sessions, creates a new session and # uses it. When a new session is created, the session will be implicit # (lifecycle is managed by the driver) if the :implicit option is given, # otherwise the session will be explicit (lifecycle managed by the # application). If the deployment does not support sessions, yields nil to # the block. # # When the block finishes, if the session was created and was implicit, # or if an implicit session was passed in, the session is ended which # returns it to the pool of available sessions. # # @option options [ true | false ] :implicit When no session is passed in, # whether to create an implicit session. # @option options [ Session ] :session The session to validate and return. # # @api private def with_session(options = {}, &block) # TODO: Add this back in RUBY-3174. # assert_not_closed session = get_session(options) yield session ensure if session && session.implicit? session.end_session end end class << self # Lowercases auth mechanism properties, if given, in the specified # options, then converts the options to an instance of Options::Redacted. # # @api private def canonicalize_ruby_options(options) Options::Redacted.new(Hash[options.map do |k, v| if k == :auth_mech_properties || k == 'auth_mech_properties' if v v = Hash[v.map { |pk, pv| [pk.downcase, pv] }] end end [k, v] end]) end end # Returns encrypted field map hash if provided when creating the client. # # @return [ Hash | nil ] Encrypted field map hash, or nil if not set. # @api private def encrypted_fields_map @encrypted_fields_map ||= @options.fetch(:auto_encryption_options, {})[:encrypted_fields_map] end # @return [ Integer | nil ] Value of timeout_ms option if set. # @api private def timeout_ms @options[:timeout_ms] end # @return [ Float | nil ] Value of timeout_ms option converted to seconds. # @api private def timeout_sec if timeout_ms.nil? nil else timeout_ms / 1_000.0 end end private # Attempts to parse the given list of addresses, using the provided options. # # @param [ String | Array ] addresses the list of addresses # @param [ Hash ] options the options that may drive how the list is # processed. # # @return [ Hash<:uri, :addresses, :options> ] the results of processing the # list of addresses. def process_addresses(addresses, options) if addresses.is_a?(String) process_addresses_string(addresses, options) else process_addresses_array(addresses, options) end end # Attempts to parse the given list of addresses, using the provided options. # # @param [ String ] addresses the list of addresses # @param [ Hash ] options the options that may drive how the list is # processed. # # @return [ Hash<:uri, :addresses, :options> ] the results of processing the # list of addresses.
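# A sketch of the returned shape for a URI seed (values illustrative): # { uri: <result of URI.get>, addresses: [ '127.0.0.1:27017' ], # options: <URI-derived options merged with explicit Ruby options> }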
def process_addresses_string(addresses, options) {}.tap do |processed| processed[:uri] = uri = URI.get(addresses, options) processed[:addresses] = uri.servers uri_options = uri.client_options.dup # Special handling for :write and :write_concern: allow client Ruby # options to override URI options, even when the Ruby option uses the # deprecated :write key and the URI option uses the current # :write_concern key if options[:write] uri_options.delete(:write_concern) end processed[:options] = uri_options.merge(options) @srv_records = uri.srv_records end end # Attempts to parse the given list of addresses, using the provided options. # # @param [ Array ] addresses the list of addresses # @param [ Hash ] options the options that may drive how the list is # processed. # # @return [ Hash<:uri, :addresses, :options> ] the results of processing the # list of addresses. def process_addresses_array(addresses, options) {}.tap do |processed| processed[:addresses] = addresses processed[:options] = options addresses.each do |addr| if addr =~ /\Amongodb(\+srv)?:\/\//i raise ArgumentError, "Host '#{addr}' should not contain protocol. Did you mean to not use an array?" end end @srv_records = nil end end # Create a new encrypter object using the client's auto encryption options def build_encrypter @encrypter = Crypt::AutoEncrypter.new( @options[:auto_encryption_options].merge(client: self) ) end # Generate default client options based on the URI and options # passed into the Client constructor. def default_options(options) Database::DEFAULT_OPTIONS.dup.tap do |default_options| if options[:auth_mech] || options[:user] default_options[:auth_source] = Auth::User.default_auth_source(options) end if options[:auth_mech] == :gssapi default_options[:auth_mech_properties] = { service_name: 'mongodb' } end default_options[:retry_reads] = true default_options[:retry_writes] = true end end # Implementation for #close, assumes the connect lock is already acquired. def do_close @cluster.close close_encrypter end # Returns a session to use for operations. # # If :session option is set, validates that session and returns it. # Otherwise, if deployment supports sessions, creates a new session and # returns it. When a new session is created, the session will be implicit # (lifecycle is managed by the driver) if the :implicit option is given, # otherwise the session will be explicit (lifecycle managed by the # application). If the deployment does not support sessions, raises # Error::SessionsNotSupported. # # @option options [ true | false ] :implicit When no session is passed in, # whether to create an implicit session. # @option options [ Session ] :session The session to validate and return. # @option options [ Operation::Context | nil ] :context Context of the # operation the session is used for. # # @return [ Session ] A session object. # # @raise Error::SessionsNotSupported if sessions are not supported by # the deployment. # # @api private def get_session!(options = {}) if options[:session] return options[:session].validate!(self) end cluster.validate_session_support!(timeout: timeout_sec) options = {implicit: true}.update(options) server_session = if options[:implicit] nil else cluster.session_pool.checkout end Session.new(server_session, self, options) end # Auxiliary method that is called by interpreter when copying the client # via dup or clone. # # @param [ Mongo::Client ] original Client that is being cloned.
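# Fresh @options, a new @connect_lock and cleared memoized state # (@database, @read_preference, @write_concern) are set up below so that # the copy does not share mutable state with the original; #with relies # on this when deriving clients via #clone.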
# # @api private def initialize_copy(original) @options = original.options.dup @connect_lock = Mutex.new @monitoring = @cluster ? monitoring : Monitoring.new(options) @database = nil @read_preference = nil @write_concern = nil end def cluster_modifying?(new_options) cluster_options = new_options.reject do |name| CRUD_OPTIONS.include?(name.to_sym) end cluster_options.any? do |name, value| options[name] != value end end # Validates options in the provided argument for validity. # The argument may contain a subset of options that the client will # eventually have; this method validates each of the provided options # but does not check for interactions between combinations of options. def validate_new_options!(opts) return Options::Redacted.new unless opts if opts[:read_concern] # Raise an error for non user-settable options if opts[:read_concern][:after_cluster_time] raise Mongo::Error::InvalidReadConcern.new( 'The after_cluster_time read_concern option cannot be specified by the user' ) end given_keys = opts[:read_concern].keys.map(&:to_s) allowed_keys = ['level'] invalid_keys = given_keys - allowed_keys # Warn that options are invalid but keep it and forward to the server unless invalid_keys.empty? log_warn("Read concern has invalid keys: #{invalid_keys.join(',')}.") end end if server_api = opts[:server_api] unless server_api.is_a?(Hash) raise ArgumentError, ":server_api value must be a hash: #{server_api}" end extra_keys = server_api.keys - %w(version strict deprecation_errors) unless extra_keys.empty? raise ArgumentError, "Unknown keys under :server_api: #{extra_keys.map(&:inspect).join(', ')}" end if version = server_api[:version] unless VALID_SERVER_API_VERSIONS.include?(version) raise ArgumentError, "Unknown server API version: #{version}" end end end Lint.validate_underscore_read_preference(opts[:read]) Lint.validate_read_concern_option(opts[:read_concern]) opts.each.inject(Options::Redacted.new) do |_options, (k, v)| key = k.to_sym if VALID_OPTIONS.include?(key) validate_max_min_pool_size!(key, opts) validate_max_connecting!(key, opts) validate_read!(key, opts) if key == :compressors compressors = valid_compressors(v) if compressors.include?('snappy') validate_snappy_compression! end if compressors.include?('zstd') validate_zstd_compression! end _options[key] = compressors unless compressors.empty? elsif key == :srv_max_hosts if v && (!v.is_a?(Integer) || v < 0) log_warn("#{v} is not a valid integer for srv_max_hosts") else _options[key] = v end else _options[key] = v end else log_warn("Unsupported client option '#{k}'. It will be ignored.") end _options end end # Validates all options after they are set on the client. # This method is intended to catch combinations of options which are # not allowed. 
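    #
    # @example Illustrative only (hypothetical hosts): a conflicting
    #   combination is rejected by this validation.
    #   Mongo::Client.new(['a.example.com:27017', 'b.example.com:27017'],
    #     direct_connection: true)
    #   # => ArgumentError: direct_connection=true cannot be used with multiple seeds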
def validate_options!(addresses = nil, is_srv: nil) if options[:write] && options[:write_concern] && options[:write] != options[:write_concern] raise ArgumentError, "If :write and :write_concern are both given, they must be identical: #{options.inspect}" end connect = options[:connect]&.to_sym if connect && !%i(direct replica_set sharded load_balanced).include?(connect) raise ArgumentError, "Invalid :connect option value: #{connect}" end if options[:direct_connection] if connect && connect != :direct raise ArgumentError, "Conflicting client options: direct_connection=true and connect=#{connect}" end # When a new client is created, we get the list of seed addresses if addresses && addresses.length > 1 raise ArgumentError, "direct_connection=true cannot be used with multiple seeds" end # When a client is copied using #with, we have a cluster if cluster && !cluster.topology.is_a?(Mongo::Cluster::Topology::Single) raise ArgumentError, "direct_connection=true cannot be used with topologies other than Single (this client is #{cluster.topology.class.name.sub(/.*::/, '')})" end end if options[:load_balanced] if addresses && addresses.length > 1 raise ArgumentError, "load_balanced=true cannot be used with multiple seeds" end if options[:direct_connection] raise ArgumentError, "direct_connection=true cannot be used with load_balanced=true" end if connect && connect != :load_balanced raise ArgumentError, "connect=#{connect} cannot be used with load_balanced=true" end if options[:replica_set] raise ArgumentError, "load_balanced=true cannot be used with replica_set option" end end if connect == :load_balanced if addresses && addresses.length > 1 raise ArgumentError, "connect=load_balanced cannot be used with multiple seeds" end if options[:replica_set] raise ArgumentError, "connect=load_balanced cannot be used with replica_set option" end end if options[:direct_connection] == false && connect && connect == :direct raise ArgumentError, "Conflicting client options: direct_connection=false and connect=#{connect}" end %i(connect_timeout socket_timeout).each do |key| if value = options[key] unless Numeric === value raise ArgumentError, "#{key} must be a non-negative number: #{value}" end if value < 0 raise ArgumentError, "#{key} must be a non-negative number: #{value}" end end end if value = options[:bg_error_backtrace] case value when Integer if value <= 0 raise ArgumentError, ":bg_error_backtrace option value must be true, false, nil or a positive integer: #{value}" end when true # OK else raise ArgumentError, ":bg_error_backtrace option value must be true, false, nil or a positive integer: #{value}" end end if libraries = options[:wrapping_libraries] unless Array === libraries raise ArgumentError, ":wrapping_libraries must be an array of hashes: #{libraries}" end libraries = libraries.map do |library| Utils.shallow_symbolize_keys(library) end libraries.each do |library| unless Hash === library raise ArgumentError, ":wrapping_libraries element is not a hash: #{library}" end if library.empty? raise ArgumentError, ":wrapping_libraries element is empty" end unless (library.keys - %i(name platform version)).empty? 
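            # Illustrative only: a well-formed entry looks like
            #   { name: 'MyWrapper', version: '1.2.3', platform: 'ruby 3.2' }
            # (hypothetical values); any other key lands in this branch.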
raise ArgumentError, ":wrapping_libraries element has invalid keys (allowed keys: :name, :platform, :version): #{library}" end library.each do |key, value| if value.include?('|') raise ArgumentError, ":wrapping_libraries element value cannot include '|': #{value}" end end end end if options[:srv_max_hosts] && options[:srv_max_hosts] > 0 if options[:replica_set] raise ArgumentError, ":srv_max_hosts > 0 cannot be used with :replica_set option" end if options[:load_balanced] raise ArgumentError, ":srv_max_hosts > 0 cannot be used with :load_balanced=true" end end unless is_srv.nil? || is_srv if options[:srv_max_hosts] raise ArgumentError, ":srv_max_hosts cannot be used on non-SRV URI" end if options[:srv_service_name] raise ArgumentError, ":srv_service_name cannot be used on non-SRV URI" end end end # Validates all authentication-related options after they are set on the client # This method is intended to catch combinations of options which are not allowed def validate_authentication_options! auth_mech = options[:auth_mech] user = options[:user] password = options[:password] auth_source = options[:auth_source] mech_properties = options[:auth_mech_properties] if auth_mech.nil? if user && user.empty? raise Mongo::Auth::InvalidConfiguration, 'Empty username is not supported for default auth mechanism' end if auth_source == '' raise Mongo::Auth::InvalidConfiguration, 'Auth source cannot be empty for default auth mechanism' end return end if !Mongo::Auth::SOURCES.key?(auth_mech) raise Mongo::Auth::InvalidMechanism.new(auth_mech) end if user.nil? && !%i(aws mongodb_x509).include?(auth_mech) raise Mongo::Auth::InvalidConfiguration, "Username is required for auth mechanism #{auth_mech}" end if password.nil? && !%i(aws gssapi mongodb_x509).include?(auth_mech) raise Mongo::Auth::InvalidConfiguration, "Password is required for auth mechanism #{auth_mech}" end if password && auth_mech == :mongodb_x509 raise Mongo::Auth::InvalidConfiguration, 'Password is not supported for :mongodb_x509 auth mechanism' end if auth_mech == :aws && user && !password raise Mongo::Auth::InvalidConfiguration, 'Username is provided but password is not provided for :aws auth mechanism' end if %i(aws gssapi mongodb_x509).include?(auth_mech) if !['$external', nil].include?(auth_source) raise Mongo::Auth::InvalidConfiguration, "#{auth_source} is an invalid auth source for #{auth_mech}; valid options are $external and nil" end else # Auth source is the database name, and thus cannot be the empty string. if auth_source == '' raise Mongo::Auth::InvalidConfiguration, "Auth source cannot be empty for auth mechanism #{auth_mech}" end end if mech_properties && !%i(aws gssapi).include?(auth_mech) raise Mongo::Auth::InvalidConfiguration, ":mechanism_properties are not supported for auth mechanism #{auth_mech}" end end def valid_compressors(compressors) compressors.select do |compressor| if !VALID_COMPRESSORS.include?(compressor) log_warn("Unsupported compressor '#{compressor}' in list '#{compressors}'. " + "This compressor will not be used.") false else true end end end def validate_snappy_compression! return if defined?(Snappy) require 'snappy' rescue LoadError => e raise Error::UnmetDependency, "Cannot enable snappy compression because the snappy gem " \ "has not been installed. Add \"gem 'snappy'\" to your Gemfile and run " \ "\"bundle install\" to install the gem. (#{e.class}: #{e})" end def validate_zstd_compression! 
      return if defined?(Zstd)

      require 'zstd-ruby'
    rescue LoadError => e
      raise Error::UnmetDependency, "Cannot enable zstd compression because the zstd-ruby gem " \
        "has not been installed. Add \"gem 'zstd-ruby'\" to your Gemfile and run " \
        "\"bundle install\" to install the gem. (#{e.class}: #{e})"
    end

    def validate_max_min_pool_size!(option, opts)
      if option == :min_pool_size && opts[:min_pool_size]
        max = opts[:max_pool_size] || Server::ConnectionPool::DEFAULT_MAX_SIZE
        if max != 0 && opts[:min_pool_size] > max
          raise Error::InvalidMinPoolSize.new(opts[:min_pool_size], max)
        end
      end
      true
    end

    # Validates whether the max_connecting option is valid.
    #
    # @param [ Symbol ] option The option to validate.
    # @param [ Hash ] opts The client options.
    #
    # @return [ true ] If the option is valid.
    # @raise [ Error::InvalidMaxConnecting ] If the option is invalid.
    def validate_max_connecting!(option, opts)
      if option == :max_connecting && opts.key?(:max_connecting)
        max_connecting = opts[:max_connecting] || Server::ConnectionPool::DEFAULT_MAX_CONNECTING
        if max_connecting <= 0
          raise Error::InvalidMaxConnecting.new(opts[:max_connecting])
        end
      end
      true
    end

    def validate_read!(option, opts)
      if option == :read && opts.has_key?(:read)
        read = opts[:read]
        # We could check if read is a Hash, but this would fail
        # for custom classes implementing key access ([]).
        # Instead reject common cases of strings and symbols.
        if read.is_a?(String) || read.is_a?(Symbol)
          raise Error::InvalidReadOption.new(read, "the read preference must be specified as a hash: { mode: #{read.inspect} }")
        end

        if mode = read[:mode]
          mode = mode.to_sym
          unless Mongo::ServerSelector::PREFERENCES.include?(mode)
            raise Error::InvalidReadOption.new(read, "mode #{mode} is not one of recognized modes")
          end
        end
      end
      true
    end

    def assert_not_closed
      if closed?
        raise Error::ClientClosed, "The client was closed and is not usable for operations. Call #reconnect to reset this client instance or create a new client instance"
      end
    end
  end
end

mongo-ruby-driver-2.21.3/lib/mongo/client_encryption.rb

# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2019-2020 MongoDB Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

module Mongo
  # ClientEncryption encapsulates explicit operations on a key vault
  # collection that cannot be done directly on a MongoClient. It
  # provides an API for explicitly encrypting and decrypting values,
  # and creating data keys.
  class ClientEncryption
    # Create a new ClientEncryption object with the provided options.
    #
    # @param [ Mongo::Client ] key_vault_client A Mongo::Client
    #   that is connected to the MongoDB instance where the key vault
    #   collection is stored.
    # @param [ Hash ] options The ClientEncryption options.
    #
    # @option options [ String ] :key_vault_namespace The name of the
    #   key vault collection in the format "database.collection".
    # @option options [ Hash ] :kms_providers A hash of key management service
    #   configuration information.
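    #   Illustrative shape only, e.g. for a local provider (the key is a
    #   hypothetical 96-byte master key): { local: { key: local_master_key } }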
    #   @see Mongo::Crypt::KMS::Credentials for list of options for every
    #   supported provider.
    #   @note There may be more than one KMS provider specified.
    # @option options [ Hash ] :kms_tls_options TLS options to connect to KMS
    #   providers. Keys of the hash should be KMS provider names; values
    #   should be hashes of TLS connection options. The options are equivalent
    #   to TLS connection options of Mongo::Client.
    #   @see Mongo::Client#initialize for list of TLS options.
    # @option options [ Integer ] :timeout_ms The operation timeout in milliseconds.
    #   Must be a non-negative integer. An explicit value of 0 means infinite.
    #   The default value is unset which means the feature is disabled.
    #
    # @raise [ ArgumentError ] If required options are missing or incorrectly
    #   formatted.
    def initialize(key_vault_client, options = {})
      @encrypter = Crypt::ExplicitEncrypter.new(
        key_vault_client,
        options[:key_vault_namespace],
        Crypt::KMS::Credentials.new(options[:kms_providers]),
        Crypt::KMS::Validations.validate_tls_options(options[:kms_tls_options])
      )
    end

    # Generates a data key used for encryption/decryption and stores
    # that key in the KMS collection. The generated key is encrypted with
    # the KMS master key.
    #
    # @param [ String ] kms_provider The KMS provider to use. Valid values are
    #   "aws" and "local".
    # @param [ Hash ] options
    #
    # @option options [ Hash ] :master_key Information about the AWS master key.
    #   Required if kms_provider is "aws".
    #   - :region [ String ] The AWS region of the master key (required).
    #   - :key [ String ] The Amazon Resource Name (ARN) of the master key (required).
    #   - :endpoint [ String ] An alternate host to send KMS requests to (optional).
    #     endpoint should be a host name with an optional port number separated
    #     by a colon (e.g. "kms.us-east-1.amazonaws.com" or
    #     "kms.us-east-1.amazonaws.com:443"). An endpoint in any other format
    #     will not be properly parsed.
    # @option options [ Array<String> ] :key_alt_names An optional array of
    #   strings specifying alternate names for the new data key.
    # @option options [ String | nil ] :key_material Optional
    #   96 bytes to use as custom key material for the data key being created.
    #   If :key_material option is given, the custom key material is used
    #   for encrypting and decrypting data.
    #
    # @return [ BSON::Binary ] The 16-byte UUID of the new data key as a
    #   BSON::Binary object with type :uuid.
    def create_data_key(kms_provider, options={})
      key_document = Crypt::KMS::MasterKeyDocument.new(kms_provider, options)

      key_alt_names = options[:key_alt_names]
      key_material = options[:key_material]
      @encrypter.create_and_insert_data_key(key_document, key_alt_names, key_material)
    end

    # Encrypts a value using the specified encryption key and algorithm.
    #
    # @param [ Object ] value The value to encrypt.
    # @param [ Hash ] options
    #
    # @option options [ BSON::Binary ] :key_id A BSON::Binary object of type :uuid
    #   representing the UUID of the encryption key as it is stored in the key
    #   vault collection.
    # @option options [ String ] :key_alt_name The alternate name for the
    #   encryption key.
    # @option options [ String ] :algorithm The algorithm used to encrypt the value.
    #   Valid algorithms are "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic",
    #   "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "Indexed", "Unindexed".
    # @option options [ Integer | nil ] :contention_factor Contention factor
    #   to be applied if encryption algorithm is set to "Indexed". If not
    #   provided, it defaults to a value of 0. Contention factor should be set
    #   only if encryption algorithm is set to "Indexed".
    # @option options [ String | nil ] :query_type Query type to be applied
    #   if encryption algorithm is set to "Indexed". Query type should be set
    #   only if encryption algorithm is set to "Indexed". The only allowed
    #   value is "equality".
    #
    # @note The :key_id and :key_alt_name options are mutually exclusive. Only
    #   one is required to perform explicit encryption.
    #
    # @return [ BSON::Binary ] A BSON Binary object of subtype 6 (ciphertext)
    #   representing the encrypted value.
    #
    # @raise [ ArgumentError ] if either contention_factor or query_type
    #   is set, and algorithm is not "Indexed".
    def encrypt(value, options={})
      @encrypter.encrypt(value, options)
    end

    # Encrypts a Match Expression or Aggregate Expression to query a range index.
    #
    # @example Encrypt Match Expression.
    #   encryption.encrypt_expression(
    #     {'$and' => [{'field' => {'$gt' => 10}}, {'field' => {'$lt' => 20 }}]}
    #   )
    # @example Encrypt Aggregate Expression.
    #   encryption.encrypt_expression(
    #     {'$and' => [{'$gt' => ['$field', 10]}, {'$lt' => ['$field', 20]}]}
    #   )
    #   {$and: [{$gt: [<fieldpath>, <value1>]}, {$lt: [<fieldpath>, <value2>]}]}
    # Only supported when queryType is "range" and algorithm is "Range".
    # @note: The Range algorithm is experimental only. It is not intended
    #   for public use. It is subject to breaking changes.
    #
    # @param [ Hash ] expression Expression to encrypt.
    #
    # @param [ Hash ] options
    # @option options [ BSON::Binary ] :key_id A BSON::Binary object of type :uuid
    #   representing the UUID of the encryption key as it is stored in the key
    #   vault collection.
    # @option options [ String ] :key_alt_name The alternate name for the
    #   encryption key.
    # @option options [ String ] :algorithm The algorithm used to encrypt the
    #   expression. The only allowed value is "Range".
    # @option options [ Integer | nil ] :contention_factor Contention factor
    #   to be applied. If not provided, it defaults to a value of 0.
    # @option options [ String | nil ] :query_type Query type to be applied.
    #   The only allowed value is "range".
    #
    # @note The :key_id and :key_alt_name options are mutually exclusive. Only
    #   one is required to perform explicit encryption.
    #
    # @return [ BSON::Binary ] A BSON Binary object of subtype 6 (ciphertext)
    #   representing the encrypted expression.
    #
    # @raise [ ArgumentError ] if disallowed values in options are set.
    def encrypt_expression(expression, options = {})
      @encrypter.encrypt_expression(expression, options)
    end

    # Decrypts a value that has already been encrypted.
    #
    # @param [ BSON::Binary ] value A BSON Binary object of subtype 6 (ciphertext)
    #   that will be decrypted.
    #
    # @return [ Object ] The decrypted value.
    def decrypt(value)
      @encrypter.decrypt(value)
    end

    # Adds a key_alt_name for the key in the key vault collection with the given id.
    #
    # @param [ BSON::Binary ] id Id of the key to add new key alt name.
    # @param [ String ] key_alt_name New key alt name to add.
    #
    # @return [ BSON::Document | nil ] Document describing the identified key
    #   before adding the key alt name, or nil if no such key.
    def add_key_alt_name(id, key_alt_name)
      @encrypter.add_key_alt_name(id, key_alt_name)
    end

    # Removes the key with the given id from the key vault collection.
    #
    # @param [ BSON::Binary ] id Id of the key to delete.
    #
    # @return [ Operation::Result ] The response from the database for the delete_one
    #   operation that deletes the key.
    def delete_key(id)
      @encrypter.delete_key(id)
    end

    # Finds a single key with the given id.
    #
    # @param [ BSON::Binary ] id Id of the key to get.
    #
    # @return [ BSON::Document | nil ] The found key document or nil
    #   if not found.
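    #
    # @example Illustrative only (assumes a data key id previously returned
    #   by #create_data_key).
    #   client_encryption.get_key(data_key_id)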
    def get_key(id)
      @encrypter.get_key(id)
    end

    # Returns a key in the key vault collection with the given key_alt_name.
    #
    # @param [ String ] key_alt_name Key alt name to find a key.
    #
    # @return [ BSON::Document | nil ] The found key document or nil
    #   if not found.
    def get_key_by_alt_name(key_alt_name)
      @encrypter.get_key_by_alt_name(key_alt_name)
    end

    # Returns all keys in the key vault collection.
    #
    # @return [ Collection::View ] Keys in the key vault collection.
    def get_keys
      @encrypter.get_keys
    end
    alias :keys :get_keys

    # Removes a key_alt_name from a key in the key vault collection with the given id.
    #
    # @param [ BSON::Binary ] id Id of the key to remove key alt name.
    # @param [ String ] key_alt_name Key alt name to remove.
    #
    # @return [ BSON::Document | nil ] Document describing the identified key
    #   before removing the key alt name, or nil if no such key.
    def remove_key_alt_name(id, key_alt_name)
      @encrypter.remove_key_alt_name(id, key_alt_name)
    end

    # Decrypts multiple data keys and (re-)encrypts them with a new master_key,
    # or with their current master_key if a new one is not given.
    #
    # @param [ Hash ] filter Filter used to find keys to be updated.
    # @param [ Hash ] opts
    #
    # @option opts [ String ] :provider KMS provider to encrypt keys.
    # @option opts [ Hash | nil ] :master_key Document describing master key
    #   to encrypt keys.
    #
    # @return [ Crypt::RewrapManyDataKeyResult ] Result of the operation.
    def rewrap_many_data_key(filter, opts = {})
      @encrypter.rewrap_many_data_key(filter, opts)
    end

    # Create collection with encrypted fields.
    #
    # If :encrypted_fields contains a keyId with a null value, a data key
    # will be automatically generated and assigned to keyId value.
    #
    # @note This method does not update the :encrypted_fields_map in the client's
    #   :auto_encryption_options. Therefore, in order to use the collection
    #   created by this method with automatic encryption, the user must create
    #   a new client after calling this function with the :encrypted_fields returned.
    #
    # @param [ Mongo::Database ] database Database to create collection in.
    # @param [ String ] coll_name Name of collection to create.
    # @param [ Hash ] coll_opts Options for collection to create.
    # @param [ String ] kms_provider KMS provider to encrypt fields.
    # @param [ Hash | nil ] master_key Document describing master key to encrypt fields.
    #
    # @return [ Array<Operation::Result, Hash> ] The result of the create
    #   collection operation and the encrypted fields map used to create
    #   the collection.
    def create_encrypted_collection(database, coll_name, coll_opts, kms_provider, master_key)
      raise ArgumentError, 'coll_opts must contain :encrypted_fields' unless coll_opts[:encrypted_fields]

      encrypted_fields = create_data_keys(coll_opts[:encrypted_fields], kms_provider, master_key)
      begin
        new_coll_opts = coll_opts.dup.merge(encrypted_fields: encrypted_fields)
        [database[coll_name].create(new_coll_opts), encrypted_fields]
      rescue Mongo::Error => e
        raise Error::CryptError, "Error creating collection with encrypted fields \
#{encrypted_fields}: #{e.class}: #{e.message}"
      end
    end

    private

    # Create data keys for fields in encrypted_fields that have a :keyId key
    # whose value is nil.
    #
    # @param [ Hash ] encrypted_fields Encrypted fields map.
    # @param [ String ] kms_provider KMS provider to encrypt fields.
    # @param [ Hash | nil ] master_key Document describing master key to encrypt fields.
    #
    # @return [ Hash ] Encrypted fields map with keyIds for fields
    #   that did not have one.
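    #
    # @example Illustrative shape only (hypothetical field): an entry whose
    #   :keyId is nil receives a generated data key id.
    #   { fields: [ { path: 'ssn', bsonType: 'string', keyId: nil } ] }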
    def create_data_keys(encrypted_fields, kms_provider, master_key)
      encrypted_fields = encrypted_fields.dup
      # We must return the partially formed encrypted_fields hash if an error
      # occurs - https://github.com/mongodb/specifications/blob/master/source/client-side-encryption/client-side-encryption.md#create-encrypted-collection-helper
      # Therefore, we do this in a loop instead of using #map.
      encrypted_fields[:fields].size.times do |i|
        field = encrypted_fields[:fields][i]
        next unless field.is_a?(Hash) && field.fetch(:keyId, false).nil?

        begin
          encrypted_fields[:fields][i][:keyId] = create_data_key(kms_provider, master_key: master_key)
        rescue Error::CryptError => e
          raise Error::CryptError, "Error creating data key for field #{field[:path]} \
with encrypted fields #{encrypted_fields}: #{e.class}: #{e.message}"
        end
      end
      encrypted_fields
    end
  end
end

mongo-ruby-driver-2.21.3/lib/mongo/cluster.rb

# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2014-2020 MongoDB Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

require 'mongo/cluster/topology'
require 'mongo/cluster/reapers/socket_reaper'
require 'mongo/cluster/reapers/cursor_reaper'
require 'mongo/cluster/periodic_executor'

module Mongo
  # Represents a group of servers on the server side, either as a
  # single server, a replica set, or a single or multiple mongos.
  #
  # @since 2.0.0
  class Cluster
    extend Forwardable
    include Monitoring::Publishable
    include Event::Subscriber
    include Loggable
    include ClusterTime::Consumer

    # The default number of legacy read retries.
    #
    # @since 2.1.1
    MAX_READ_RETRIES = 1

    # The default number of legacy write retries.
    #
    # @since 2.4.2
    MAX_WRITE_RETRIES = 1

    # The default read retry interval, in seconds, when using legacy read
    # retries.
    #
    # @since 2.1.1
    READ_RETRY_INTERVAL = 5

    # How often an idle primary writes a no-op to the oplog.
    #
    # @since 2.4.0
    IDLE_WRITE_PERIOD_SECONDS = 10

    # The cluster time key in responses from mongos servers.
    #
    # @since 2.5.0
    # @deprecated
    CLUSTER_TIME = 'clusterTime'.freeze

    # Instantiate the new cluster.
    #
    # @api private
    #
    # @example Instantiate the cluster.
    #   Mongo::Cluster.new(["127.0.0.1:27017"], monitoring)
    #
    # @note Cluster should never be directly instantiated outside of a Client.
    #
    # @note When connecting to a mongodb+srv:// URI, the client expands such a
    #   URI into a list of servers and passes that list to the Cluster
    #   constructor. When connecting to a standalone mongod, the Cluster
    #   constructor receives the corresponding address as an array of one string.
    #
    # @param [ Array<String> ] seeds The addresses of the configured servers
    # @param [ Monitoring ] monitoring The monitoring.
    # @param [ Hash ] options Options. Client constructor forwards its
    #   options to Cluster constructor, although Cluster recognizes
    #   only a subset of the options recognized by Client.
    #
    # @option options [ true | false ] :direct_connection Whether to connect
    #   directly to the specified seed, bypassing topology discovery.
    #   Exactly one seed must be provided.
    # @option options [ Symbol ] :connect Deprecated - use :direct_connection
    #   option instead of this option. The connection method to use. This
    #   forces the cluster to behave in the specified way instead of
    #   auto-discovering. One of :direct, :replica_set, :sharded
    # @option options [ Symbol ] :replica_set The name of the replica set to
    #   connect to. Servers not in this replica set will be ignored.
    # @option options [ true | false ] :scan Whether to scan all seeds
    #   in constructor. The default in driver version 2.x is to do so;
    #   driver version 3.x will not scan seeds in constructor. Opt in to the
    #   new behavior by setting this option to false. *Note:* setting
    #   this option to nil enables scanning seeds in constructor in driver
    #   version 2.x. Driver version 3.x will recognize this option but
    #   will ignore it and will never scan seeds in the constructor.
    # @option options [ true | false ] :monitoring_io For internal driver
    #   use only. Set to false to prevent SDAM-related I/O from being
    #   done by this cluster or servers under it. Note: setting this option
    #   to false will make the cluster non-functional. It is intended for
    #   use in tests which manually invoke SDAM state transitions.
    # @option options [ true | false ] :cleanup For internal driver use only.
    #   Set to false to prevent endSessions command being sent to the server
    #   to clean up server sessions when the cluster is disconnected, and
    #   to not start the periodic executor. If :monitoring_io is false,
    #   :cleanup automatically defaults to false as well.
    # @option options [ Float ] :heartbeat_frequency The interval, in seconds,
    #   for the server monitor to refresh its description via hello.
    # @option options [ Hash ] :resolv_options For internal driver use only.
    #   Options to pass through to Resolv::DNS constructor for SRV lookups.
    # @option options [ Hash ] :server_api The requested server API version.
    #   This hash can have the following items:
    #   - *:version* -- string
    #   - *:strict* -- boolean
    #   - *:deprecation_errors* -- boolean
    #
    # @since 2.0.0
    def initialize(seeds, monitoring, options = Options::Redacted.new)
      if seeds.nil?
        raise ArgumentError, 'Seeds cannot be nil'
      end

      options = options.dup
      if options[:monitoring_io] == false && !options.key?(:cleanup)
        options[:cleanup] = false
      end
      @options = options.freeze

      # @update_lock covers @servers, @connecting, @connected, @topology and
      # @sessions_supported. Generally instance variables that do not have a
      # lock designated for them should only be modified under the update lock.
      # Note that topology change is locked by @update_lock and not by
      # @sdam_flow_lock.
      @update_lock = Mutex.new
      @servers = []
      @monitoring = monitoring
      @event_listeners = Event::Listeners.new
      @app_metadata = Server::AppMetadata.new(@options.merge(purpose: :application))
      @monitor_app_metadata = Server::Monitor::AppMetadata.new(@options.merge(purpose: :monitor))
      @push_monitor_app_metadata = Server::Monitor::AppMetadata.new(@options.merge(purpose: :push_monitor))
      @cluster_time_lock = Mutex.new
      @cluster_time = nil
      @srv_monitor_lock = Mutex.new
      @srv_monitor = nil
      @server_selection_semaphore = Semaphore.new
      @topology = Topology.initial(self, monitoring, options)
      # State change lock is similar to the sdam flow lock, but is designed
      # to serialize state changes initiated by consumers of Cluster
      # (e.g. application connecting or disconnecting the cluster), so that
      # e.g. an application calling disconnect-connect-disconnect rapidly
      # does not put the cluster into an inconsistent state.
      # Monitoring updates performed internally by the driver do not take
      # the state change lock.
      @state_change_lock = Mutex.new
      # @sdam_flow_lock covers just the sdam flow. Note it does not apply
      # to @topology replacements which are done under @update_lock.
      @sdam_flow_lock = Mutex.new
      @session_pool = Session::SessionPool.new(self)

      if seeds.empty? && load_balanced?
        raise ArgumentError, 'Load-balanced clusters with no seeds are prohibited'
      end

      # The opening topology is always unknown with no servers.
      # https://github.com/mongodb/specifications/pull/388
      opening_topology = Topology::Unknown.new(options, monitoring, self)

      publish_sdam_event(
        Monitoring::TOPOLOGY_OPENING,
        Monitoring::Event::TopologyOpening.new(opening_topology)
      )

      @seeds = seeds = seeds.uniq
      servers = seeds.map do |seed|
        # Server opening events must be sent after topology change events.
        # Therefore separate server addition, done here before topology change
        # event is published, from starting to monitor the server which is
        # done later.
        add(seed, monitor: false)
      end

      if seeds.size >= 1
        # Recreate the topology to get the current server list into it
        recreate_topology(topology, opening_topology)
      end

      possibly_warn_about_compatibility!

      if load_balanced?
        # We are required by the specifications to produce certain SDAM events
        # when in load-balanced topology.
        # These events don't make a lot of sense from the standpoint of the
        # driver's SDAM implementation, nor from the standpoint of the
        # driver's load balancer implementation.
        # They are just required boilerplate.
        #
        # Note that this call must be done above the monitoring_io check
        # because that short-circuits the rest of the constructor.
        fabricate_lb_sdam_events_and_set_server_type
      end

      if options[:monitoring_io] == false
        # Omit periodic executor construction, because without servers
        # no commands can be sent to the cluster and there shouldn't ever
        # be anything that needs to be cleaned up.
        #
        # Omit monitoring individual servers and the legacy single round
        # of SDAM on the main thread, as it would race with tests that mock
        # SDAM responses.
        @connecting = @connected = false
        return
      end

      # Update instance variables prior to starting monitoring threads.
      @connecting = false
      @connected = true

      if options[:cleanup] != false
        @cursor_reaper = CursorReaper.new(self)
        @socket_reaper = SocketReaper.new(self)
        @periodic_executor = PeriodicExecutor.new([
          @cursor_reaper,
          @socket_reaper,
        ], options)

        @periodic_executor.run!
      end

      unless load_balanced?
        # Need to record start time prior to starting monitoring
        start_monotime = Utils.monotonic_time

        servers.each do |server|
          server.start_monitoring
        end

        if options[:scan] != false
          server_selection_timeout = options[:server_selection_timeout] || ServerSelector::SERVER_SELECTION_TIMEOUT
          # The server selection timeout can be very short especially in
          # tests, when the client waits for a synchronous scan before
          # starting server selection. Limiting the scan to server selection time
          # then aborts the scan before it can process even local servers.
          # Therefore, allow at least 3 seconds for the scan here.
          if server_selection_timeout < 3
            server_selection_timeout = 3
          end
          deadline = start_monotime + server_selection_timeout
          # Wait for the first scan of each server to complete, for
          # backwards compatibility.
          # If any servers are discovered during this SDAM round we are going to
          # wait for these servers to also be queried, and so on, up to the
          # server selection timeout or the 3 second minimum.
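          # Illustrative timeline only (hypothetical timings): with the
          # 3-second floor above, even a 1-second server_selection_timeout
          # lets the loop below wait up to ~3 seconds, rechecking roughly
          # every 0.5 seconds as each semaphore wait times out.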
          loop do
            # Ensure we do not try to read the servers list while SDAM is running
            servers = @sdam_flow_lock.synchronize do
              servers_list.dup
            end
            if servers.all? { |server| server.last_scan_monotime && server.last_scan_monotime >= start_monotime }
              break
            end
            if (time_remaining = deadline - Utils.monotonic_time) <= 0
              break
            end
            log_debug("Waiting for up to #{'%.2f' % time_remaining} seconds for servers to be scanned: #{summary}")
            # Since the semaphore may have been signaled between us checking
            # the servers list above and the wait call below, we should not
            # wait for the full remaining time - wait for up to 0.5 second, then
            # recheck the state.
            begin
              server_selection_semaphore.wait([time_remaining, 0.5].min)
            rescue ::Timeout::Error
              # nothing
            end
          end
        end

        start_stop_srv_monitor
      end
    end

    # Create a cluster for the provided client, for use when we don't want the
    # client's original cluster instance to be the same.
    #
    # @example Create a cluster for the client.
    #   Cluster.create(client)
    #
    # @param [ Client ] client The client to create on.
    # @param [ Monitoring | nil ] monitoring The monitoring instance to use
    #   with the new cluster. If nil, a new instance of Monitoring will be
    #   created.
    #
    # @return [ Cluster ] The cluster.
    #
    # @since 2.0.0
    # @api private
    def self.create(client, monitoring: nil)
      cluster = Cluster.new(
        client.cluster.addresses.map(&:to_s),
        monitoring || Monitoring.new,
        client.cluster_options,
      )
      client.instance_variable_set(:@cluster, cluster)
    end

    # @return [ Hash ] The options hash.
    attr_reader :options

    # @return [ Monitoring ] monitoring The monitoring.
    attr_reader :monitoring

    # @return [ Object ] The cluster topology.
    attr_reader :topology

    # @return [ Mongo::Server::AppMetadata ] The application metadata, used for
    #   connection handshakes.
    #
    # @since 2.4.0
    attr_reader :app_metadata

    # @api private
    attr_reader :monitor_app_metadata

    # @api private
    attr_reader :push_monitor_app_metadata

    # @return [ Array<String> ] The addresses of seed servers. Contains
    #   addresses that were given to Cluster when it was instantiated, not
    #   current addresses that the cluster is using as a result of SDAM.
    #
    # @since 2.7.0
    # @api private
    attr_reader :seeds

    # @private
    #
    # @since 2.5.1
    attr_reader :session_pool

    def_delegators :topology, :replica_set?, :replica_set_name, :sharded?,
                   :single?, :unknown?

    # Returns whether the cluster is configured to be in the load-balanced
    # topology.
    #
    # @return [ true | false ] Whether the topology is load-balanced.
    def load_balanced?
      topology.is_a?(Topology::LoadBalanced)
    end

    [:register_cursor, :schedule_kill_cursor, :unregister_cursor].each do |m|
      define_method(m) do |*args|
        if options[:cleanup] != false
          @cursor_reaper.send(m, *args)
        end
      end
    end

    # @api private
    attr_reader :srv_monitor

    # Get the maximum number of times the client can retry a read operation
    # when using legacy read retries.
    #
    # @note max_read_retries should be retrieved from the Client instance,
    #   not from a Cluster instance, because clusters may be shared between
    #   clients with different values for max read retries.
    #
    # @example Get the max read retries.
    #   cluster.max_read_retries
    #
    # @return [ Integer ] The maximum number of retries.
    #
    # @since 2.1.1
    # @deprecated
    def max_read_retries
      options[:max_read_retries] || MAX_READ_RETRIES
    end

    # Get the interval, in seconds, in which reads are retried when using
    # legacy read retries.
    #
    # @note read_retry_interval should be retrieved from the Client instance,
    #   not from a Cluster instance, because clusters may be shared between
    #   clients with different values for the read retry interval.
    #
    # @example Get the read retry interval.
    #   cluster.read_retry_interval
    #
    # @return [ Float ] The interval.
    #
    # @since 2.1.1
    # @deprecated
    def read_retry_interval
      options[:read_retry_interval] || READ_RETRY_INTERVAL
    end

    # Get the refresh interval for the server. This will be defined via an
    # option or will default to 10.
    #
    # @return [ Float ] The heartbeat interval, in seconds.
    #
    # @since 2.10.0
    # @api private
    def heartbeat_interval
      options[:heartbeat_frequency] || Server::Monitor::DEFAULT_HEARTBEAT_INTERVAL
    end

    # Whether the cluster object is in the process of connecting to its cluster.
    #
    # @return [ true|false ] Whether the cluster is connecting.
    #
    # @api private
    def connecting?
      @update_lock.synchronize do
        !!@connecting
      end
    end

    # Whether the cluster object is connected to its cluster.
    #
    # @return [ true|false ] Whether the cluster is connected.
    #
    # @api private
    # @since 2.7.0
    def connected?
      @update_lock.synchronize do
        !!@connected
      end
    end

    # Get a list of server candidates from the cluster that can have operations
    # executed on them.
    #
    # @example Get the server candidates for an operation.
    #   cluster.servers
    #
    # @return [ Array<Server> ] The candidate servers.
    #
    # @since 2.0.0
    def servers
      topology.servers(servers_list)
    end

    # The addresses in the cluster.
    #
    # @example Get the addresses in the cluster.
    #   cluster.addresses
    #
    # @return [ Array<Mongo::Address> ] The addresses.
    #
    # @since 2.0.6
    def addresses
      servers_list.map(&:address)
    end

    # The logical session timeout value in minutes.
    #
    # @example Get the logical session timeout in minutes.
    #   cluster.logical_session_timeout
    #
    # @return [ Integer, nil ] The logical session timeout.
    #
    # @since 2.5.0
    def_delegators :topology, :logical_session_timeout

    # Get the nicer formatted string for use in inspection.
    #
    # @example Inspect the cluster.
    #   cluster.inspect
    #
    # @return [ String ] The cluster inspection.
    #
    # @since 2.0.0
    def inspect
      "#<Mongo::Cluster:0x#{object_id} servers=#{servers} topology=#{topology.summary}>"
    end

    # @note This method is experimental and subject to change.
    #
    # @api experimental
    # @since 2.7.0
    def summary
      "#<Cluster " +
        "topology=#{topology.summary} " +
        "servers=[#{servers_list.map(&:summary).join(',')}]>"
    end

    # @api private
    attr_reader :server_selection_semaphore

    # Closes the cluster.
    #
    # @note Applications should call Client#close to disconnect from
    #   the cluster rather than calling this method. This method is for
    #   internal driver use only.
    #
    # Disconnects all servers in the cluster, publishing appropriate SDAM
    # events in the process. Stops SRV monitoring if it is active.
    # Marks the cluster disconnected.
    #
    # A closed cluster is no longer usable. If the client is reconnected,
    # it will create a new cluster instance.
    #
    # @return [ nil ] Always nil.
    #
    # @api private
    def close
      @state_change_lock.synchronize do
        unless connecting? || connected?
          return nil
        end
        if options[:cleanup] != false
          session_pool.end_sessions
          @periodic_executor.stop!
        end
        @srv_monitor_lock.synchronize do
          if @srv_monitor
            @srv_monitor.stop!
          end
        end
        @servers.each do |server|
          if server.connected?
            server.close
            publish_sdam_event(
              Monitoring::SERVER_CLOSED,
              Monitoring::Event::ServerClosed.new(server.address, topology)
            )
          end
        end
        publish_sdam_event(
          Monitoring::TOPOLOGY_CLOSED,
          Monitoring::Event::TopologyClosed.new(topology)
        )
        @update_lock.synchronize do
          @connecting = @connected = false
        end
      end
      nil
    end

    # Reconnect all servers.
    #
    # @example Reconnect the cluster's servers.
    #   cluster.reconnect!
    #
    # @return [ true ] Always true.
# # @since 2.1.0 # @deprecated Use Client#reconnect to reconnect to the cluster instead of # calling this method. This method does not send SDAM events. def reconnect! @state_change_lock.synchronize do @update_lock.synchronize do @connecting = true end scan! servers.each do |server| server.reconnect! end @periodic_executor.restart! @srv_monitor_lock.synchronize do if @srv_monitor @srv_monitor.run! end end @update_lock.synchronize do @connecting = false @connected = true end end end # Force a scan of all known servers in the cluster. # # If the sync parameter is true which is the default, the scan is # performed synchronously in the thread which called this method. # Each server in the cluster is checked sequentially. If there are # many servers in the cluster or they are slow to respond, this # can be a long running operation. # # If the sync parameter is false, this method instructs all server # monitor threads to perform an immediate scan and returns without # waiting for scan results. # # @note In both synchronous and asynchronous scans, each monitor # thread maintains a minimum interval between scans, meaning # calling this method may not initiate a scan on a particular server # the very next instant. # # @example Force a full cluster scan. # cluster.scan! # # @return [ true ] Always true. # # @since 2.0.0 def scan!(sync=true) if sync servers_list.each do |server| if server.monitor server.monitor.scan! else log_warn("Synchronous scan requested on cluster #{summary} but server #{server} has no monitor") end end else servers_list.each do |server| server.scan_semaphore.signal end end true end # Runs SDAM flow on the cluster. # # This method can be invoked to process a new server description returned # by the server on a monitoring or non-monitoring connection, and also # by the driver when it marks a server unknown as a result of a (network) # error. # # @param [ Server::Description ] previous_desc Previous server description. # @param [ Server::Description ] updated_desc The changed description. # @param [ Hash ] options Options. # # @option options [ true | false ] :keep_connection_pool Usually when the # new server description is unknown, the connection pool on the # respective server is cleared. Set this option to true to keep the # existing connection pool (required when handling not master errors # on 4.2+ servers). # @option options [ true | false ] :awaited Whether the updated description # was a result of processing an awaited hello. # @option options [ Object ] :service_id Change state for the specified # service id only. # @option options [ Mongo::Error | nil ] :scan_error The error encountered # while scanning, or nil if no error was raised. # # @api private def run_sdam_flow(previous_desc, updated_desc, options = {}) if load_balanced? if updated_desc.config.empty? unless options[:keep_connection_pool] servers_list.each do |server| # TODO should service id be taken out of updated_desc? 
# We could also assert that # options[:service_id] == updated_desc.service_id err = options[:scan_error] interrupt = err && (err.is_a?(Error::SocketError) || err.is_a?(Error::SocketTimeoutError)) server.clear_connection_pool(service_id: options[:service_id], interrupt_in_use_connections: interrupt) end end end return end @sdam_flow_lock.synchronize do flow = SdamFlow.new(self, previous_desc, updated_desc, awaited: options[:awaited]) flow.server_description_changed # SDAM flow may alter the updated description - grab the final # version for the purposes of broadcasting if a server is available updated_desc = flow.updated_desc unless options[:keep_connection_pool] if flow.became_unknown? servers_list.each do |server| if server.address == updated_desc.address err = options[:scan_error] interrupt = err && (err.is_a?(Error::SocketError) || err.is_a?(Error::SocketTimeoutError)) server.clear_connection_pool(interrupt_in_use_connections: interrupt) end end end end start_stop_srv_monitor end # Some updated descriptions, e.g. a mismatched me one, result in the # server whose description we are processing being removed from # the topology. When this happens, the server's monitoring thread gets # killed. As a result, any code after the flow invocation may not run # a particular monitor instance, hence there should generally not be # any code in this method past the flow invocation. # # However, this broadcast call can be here because if the monitoring # thread got killed the server should have been closed and no client # should be currently waiting for it, thus not signaling the semaphore # shouldn't cause any problems. unless updated_desc.unknown? server_selection_semaphore.broadcast end end # Sets the list of servers to the addresses in the provided list of address # strings. # # This method is called by the SRV monitor after receiving new DNS records # for the monitored hostname. # # Removes servers in the cluster whose addresses are not in the passed # list of server addresses, and adds servers for any addresses in the # argument which are not already in the cluster. # # @param [ Array ] server_address_strs List of server addresses # to sync the cluster servers to. # # @api private def set_server_list(server_address_strs) @sdam_flow_lock.synchronize do # If one of the new addresses is not in the current servers list, # add it to the servers list. server_address_strs.each do |address_str| unless servers_list.any? { |server| server.address.seed == address_str } add(address_str) end end # If one of the servers' addresses are not in the new address list, # remove that server from the servers list. servers_list.each do |server| unless server_address_strs.any? { |address_str| server.address.seed == address_str } remove(server.address.seed) end end end end # Determine if this cluster of servers is equal to another object. Checks the # servers currently in the cluster, not what was configured. # # @example Is the cluster equal to the object? # cluster == other # # @param [ Object ] other The object to compare to. # # @return [ true, false ] If the objects are equal. # # @since 2.0.0 def ==(other) return false unless other.is_a?(Cluster) addresses == other.addresses && options == other.options end # Determine if the cluster would select a readable server for the # provided read preference. # # @example Is a readable server present? # topology.has_readable_server?(server_selector) # # @param [ ServerSelector ] server_selector The server # selector. 
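    #
    # @example Illustrative only: probe with an explicit selector
    #   (assumes a constructed cluster).
    #   cluster.has_readable_server?(
    #     Mongo::ServerSelector.get(mode: :secondary_preferred))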
    #
    # @return [ true, false ] If a readable server is present.
    #
    # @since 2.4.0
    def has_readable_server?(server_selector = nil)
      topology.has_readable_server?(self, server_selector)
    end

    # Determine if the cluster would select a writable server.
    #
    # @example Is a writable server present?
    #   topology.has_writable_server?
    #
    # @return [ true, false ] If a writable server is present.
    #
    # @since 2.4.0
    def has_writable_server?
      topology.has_writable_server?(self)
    end

    # Get the next primary server we can send an operation to.
    #
    # @example Get the next primary server.
    #   cluster.next_primary
    #
    # @param [ true, false ] ping Whether to ping the server before selection.
    #   Deprecated and ignored.
    # @param [ Session | nil ] session Optional session to take into account
    #   for mongos pinning.
    # @param [ Float | nil ] :timeout Timeout in seconds for the operation,
    #   if any.
    #
    # @return [ Mongo::Server ] A primary server.
    #
    # @since 2.0.0
    def next_primary(ping = nil, session = nil, timeout: nil)
      ServerSelector.primary.select_server(
        self,
        nil,
        session,
        timeout: timeout
      )
    end

    # Get the connection pool for the server.
    #
    # @example Get the connection pool.
    #   cluster.pool(server)
    #
    # @param [ Server ] server The server.
    #
    # @return [ Server::ConnectionPool ] The connection pool.
    #
    # @since 2.2.0
    # @deprecated
    def pool(server)
      server.pool
    end

    # Update the max cluster time seen in a response.
    #
    # @example Update the cluster time.
    #   cluster.update_cluster_time(result)
    #
    # @param [ Operation::Result ] result The operation result containing the cluster time.
    #
    # @return [ Object ] The cluster time.
    #
    # @since 2.5.0
    def update_cluster_time(result)
      if cluster_time_doc = result.cluster_time
        @cluster_time_lock.synchronize do
          advance_cluster_time(cluster_time_doc)
        end
      end
    end

    # Add a server to the cluster with the provided address. Useful in
    # auto-discovery of new servers when an existing server executes a hello
    # and potentially non-configured servers were included.
    #
    # @example Add the server for the address to the cluster.
    #   cluster.add('127.0.0.1:27018')
    #
    # @param [ String ] host The address of the server to add.
    #
    # @option options [ Boolean ] :monitor For internal driver use only:
    #   whether to monitor the newly added server.
    #
    # @return [ Server ] The newly added server, if not present already.
    #
    # @since 2.0.0
    def add(host, add_options=nil)
      address = Address.new(host, options)
      if !addresses.include?(address)
        opts = options.merge(monitor: false)
        # If we aren't starting the monitoring threads, we also don't want to
        # start the pool's populator thread.
        opts.merge!(populator_io: false) unless options.fetch(:monitoring_io, true)
        # Note that in a load-balanced topology, every server must be a
        # load balancer (load_balancer: true is specified in the options)
        # but this option isn't set here because we are required by the
        # specifications to pretend the server started out as an unknown one
        # and publish a server description change event into the load balancer
        # one. The actual correct description for this server will be set
        # by the fabricate_lb_sdam_events_and_set_server_type method.
        server = Server.new(address, self, @monitoring, event_listeners, opts)
        @update_lock.synchronize do
          # Need to recheck whether server is present in @servers, because
          # the previous check was not under a lock.
          # Since we are under the update lock here, we cannot call servers_list.
          return if @servers.map(&:address).include?(address)

          @servers.push(server)
        end
        if add_options.nil? || add_options[:monitor] != false
          server.start_monitoring
        end
        server
      end
    end

    # Remove the server from the cluster for the provided address, if it
    # exists.
    #
    # @example Remove the server from the cluster.
    #   server.remove('127.0.0.1:27017')
    #
    # @param [ String ] host The host/port or socket address.
    # @param [ true | false ] disconnect Whether to disconnect the servers
    #   being removed. For internal driver use only.
    #
    # @return [ Array<Server> | true | false ] If disconnect is any value other
    #   than false, including nil, returns whether any servers were removed.
    #   If disconnect is false, returns an array of servers that were removed
    #   (and should be disconnected by the caller).
    #
    # @note The return value of this method is not part of the driver's
    #   public API.
    #
    # @since 2.0.0
    def remove(host, disconnect: true)
      address = Address.new(host)
      removed_servers = []
      @update_lock.synchronize do
        @servers.delete_if do |server|
          (server.address == address).tap do |delete|
            if delete
              removed_servers << server
            end
          end
        end
      end
      if disconnect != false
        removed_servers.each do |server|
          disconnect_server_if_connected(server)
        end
      end
      if disconnect != false
        removed_servers.any?
      else
        removed_servers
      end
    end

    # @api private
    def update_topology(new_topology)
      old_topology = nil
      @update_lock.synchronize do
        old_topology = topology
        @topology = new_topology
      end

      # If new topology has data bearing servers, we know for sure whether
      # sessions are supported - update our cached value.
      # If new topology has no data bearing servers, leave the old value
      # as it is and sessions_supported? method will perform server selection
      # to try to determine session support accurately, falling back to the
      # last known value.
      if topology.data_bearing_servers?
        sessions_supported = !!topology.logical_session_timeout
        @update_lock.synchronize do
          @sessions_supported = sessions_supported
        end
      end

      publish_sdam_event(
        Monitoring::TOPOLOGY_CHANGED,
        Monitoring::Event::TopologyChanged.new(old_topology, topology)
      )
    end

    # @api private
    def servers_list
      @update_lock.synchronize do
        @servers.dup
      end
    end

    # @api private
    def disconnect_server_if_connected(server)
      if server.connected?
        server.clear_description
        server.disconnect!
        publish_sdam_event(
          Monitoring::SERVER_CLOSED,
          Monitoring::Event::ServerClosed.new(server.address, topology)
        )
      end
    end

    # Raises Error::SessionsNotSupported if the deployment that the driver
    # is connected to does not support sessions.
    #
    # Session support may change over time, for example due to servers in the
    # deployment being upgraded or downgraded. If the client isn't connected to
    # any servers and doesn't find any servers
    # for the duration of server selection timeout, this method will raise
    # NoServerAvailable. This method is called from the operation execution flow,
    # and if it raises NoServerAvailable the entire operation will fail
    # with that exception, since the operation execution has waited for
    # the server selection timeout for any server to become available
    # (which would be a superset of the servers suitable for the operation being
    # attempted) and none materialized.
    #
    # @raise [ Error::SessionsNotSupported ] If the deployment that the driver
    #   is connected to does not support sessions.
    # @raise [ Error::NoServerAvailable ] If the client isn't connected to
    #   any servers and doesn't find any servers for the duration of
    #   server selection timeout.
    #
    # @param [ Float | nil ] :timeout Timeout for the validation.
Since the # validation process involves server selection, # # @api private def validate_session_support!(timeout: nil) if topology.is_a?(Topology::LoadBalanced) return end @state_change_lock.synchronize do @sdam_flow_lock.synchronize do if topology.data_bearing_servers? unless topology.logical_session_timeout raise_sessions_not_supported end end end end # No data bearing servers known - perform server selection to try to # get a response from at least one of them, to return an accurate # assessment of whether sessions are currently supported. ServerSelector.get(mode: :primary_preferred).select_server(self, timeout: timeout) @state_change_lock.synchronize do @sdam_flow_lock.synchronize do unless topology.logical_session_timeout raise_sessions_not_supported end end end end private # @api private def start_stop_srv_monitor # SRV URI is either always given or not for a given cluster, if one # wasn't given we shouldn't ever have an SRV monitor to manage. return unless options[:srv_uri] if topology.is_a?(Topology::Sharded) || topology.is_a?(Topology::Unknown) # Start SRV monitor @srv_monitor_lock.synchronize do unless @srv_monitor monitor_options = Utils.shallow_symbolize_keys(options.merge( timeout: options[:connect_timeout] || Server::CONNECT_TIMEOUT)) @srv_monitor = _srv_monitor = Srv::Monitor.new(self, **monitor_options) end @srv_monitor.run! end else # Stop SRV monitor if running. This path is taken when the client # is given an SRV URI to a standalone/replica set; when the topology # is discovered, since it's not a sharded cluster, the SRV monitor # needs to be stopped. @srv_monitor_lock.synchronize do if @srv_monitor @srv_monitor.stop! end end end end def raise_sessions_not_supported # Intentionally using @servers instead of +servers+ here because we # are supposed to be already holding the @update_lock and we cannot # recursively acquire it again. offending_servers = @servers.select do |server| server.description.data_bearing? && server.logical_session_timeout.nil? end reason = if offending_servers.empty? "There are no known data bearing servers (current seeds: #{@servers.map(&:address).map(&:seed).join(', ')})" else "The following servers have null logical session timeout: #{offending_servers.map(&:address).map(&:seed).join(', ')}" end msg = "The deployment that the driver is connected to does not support sessions: #{reason}" raise Error::SessionsNotSupported, msg end def fabricate_lb_sdam_events_and_set_server_type # Although there is no monitoring connection in load balanced mode, # we must emit the following series of SDAM events. server = @servers.first # We are guaranteed to have the server here. server.publish_opening_event server_desc = server.description # This is where a load balancer actually gets its correct server # description. 
server.update_description( Server::Description.new(server.address, {}, load_balancer: true, force_load_balancer: options[:connect] == :load_balanced, ) ) publish_sdam_event( Monitoring::SERVER_DESCRIPTION_CHANGED, Monitoring::Event::ServerDescriptionChanged.new( server.address, topology, server_desc, server.description ) ) recreate_topology(topology, topology) end def recreate_topology(new_topology_template, previous_topology) @topology = topology.class.new(new_topology_template.options, new_topology_template.monitoring, self) publish_sdam_event( Monitoring::TOPOLOGY_CHANGED, Monitoring::Event::TopologyChanged.new(previous_topology, @topology) ) end COSMOSDB_HOST_PATTERNS = %w[ .cosmos.azure.com ] COSMOSDB_LOG_MESSAGE = 'You appear to be connected to a CosmosDB cluster. ' \ 'For more information regarding feature compatibility and support please visit ' \ 'https://www.mongodb.com/supportability/cosmosdb' DOCUMENTDB_HOST_PATTERNS = %w[ .docdb.amazonaws.com .docdb-elastic.amazonaws.com ] DOCUMENTDB_LOG_MESSAGE = 'You appear to be connected to a DocumentDB cluster. ' \ 'For more information regarding feature compatibility and support please visit ' \ 'https://www.mongodb.com/supportability/documentdb' # Compares the server hosts with address suffixes of known services # that provide limited MongoDB API compatibility, and warns about them. def possibly_warn_about_compatibility! if topology.server_hosts_match_any?(COSMOSDB_HOST_PATTERNS) log_info COSMOSDB_LOG_MESSAGE return end if topology.server_hosts_match_any?(DOCUMENTDB_HOST_PATTERNS) log_info DOCUMENTDB_LOG_MESSAGE return end end end end require 'mongo/cluster/sdam_flow' mongo-ruby-driver-2.21.3/lib/mongo/cluster/000077500000000000000000000000001505113246500205365ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/lib/mongo/cluster/periodic_executor.rb000066400000000000000000000046141505113246500246040ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Cluster # A manager that calls #execute on its executors at a regular interval. # # @api private # # @since 2.5.0 class PeriodicExecutor include BackgroundThread # The default time interval for the periodic executor to execute. # # @since 2.5.0 FREQUENCY = 5 # Create a periodic executor. # # @example Create a PeriodicExecutor. # Mongo::Cluster::PeriodicExecutor.new([reaper, reaper2]) # # @param [ Array ] executors The executors. Each must respond # to #execute and #flush. # @param [ Hash ] options The options. # # @option options [ Logger ] :logger A custom logger to use. # # @api private def initialize(executors, options = {}) @thread = nil @executors = executors @stop_semaphore = Semaphore.new @options = options end attr_reader :options alias :restart! :run! def do_work execute @stop_semaphore.wait(FREQUENCY) end def pre_stop @stop_semaphore.signal end def stop(final = false) super begin flush rescue end true end # Trigger an execute call on each reaper. 
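    #
    # For context, a hedged sketch of how these executors are composed and
    # driven (the reaper classes are from this directory; the driver's
    # Cluster performs an equivalent wiring internally):
    #
    #   reapers = [
    #     Mongo::Cluster::CursorReaper.new(cluster),
    #     Mongo::Cluster::SocketReaper.new(cluster),
    #   ]
    #   executor = Mongo::Cluster::PeriodicExecutor.new(reapers)
    #   executor.run!  # calls #execute on each reaper every FREQUENCY seconds
    #   executor.stop  # flushes pending work, then stops the background thread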
# # @example Trigger all reapers. # periodic_executor.execute # # @api private # # @since 2.5.0 def execute @executors.each(&:execute) true end # Execute all pending operations. # # @example Execute all pending operations. # periodic_executor.flush # # @api private # # @since 2.5.0 def flush @executors.each(&:flush) true end end end end mongo-ruby-driver-2.21.3/lib/mongo/cluster/reapers/000077500000000000000000000000001505113246500221775ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/lib/mongo/cluster/reapers/cursor_reaper.rb000066400000000000000000000150601505113246500254010ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Cluster # A manager that sends kill cursors operations at regular intervals to close # cursors that have been garbage collected without being exhausted. # # @api private # # @since 2.3.0 class CursorReaper include Retryable # The default time interval for the cursor reaper to send pending # kill cursors operations. # # @since 2.3.0 FREQUENCY = 1.freeze # Create a cursor reaper. # # @param [ Cluster ] cluster The cluster. # # @api private def initialize(cluster) @cluster = cluster @to_kill = {} @active_cursor_ids = Set.new @mutex = Mutex.new @kill_spec_queue = Queue.new end attr_reader :cluster # Schedule a kill cursors operation to be eventually executed. # # @param [ Cursor::KillSpec ] kill_spec The kill specification. # # @api private def schedule_kill_cursor(kill_spec) @kill_spec_queue << kill_spec end # Register a cursor id as active. # # @example Register a cursor as active. # cursor_reaper.register_cursor(id) # # @param [ Integer ] id The id of the cursor to register as active. # # @api private # # @since 2.3.0 def register_cursor(id) if id.nil? raise ArgumentError, 'register_cursor called with nil cursor_id' end if id == 0 raise ArgumentError, 'register_cursor called with cursor_id=0' end @mutex.synchronize do @active_cursor_ids << id end end # Unregister a cursor id, indicating that it's no longer active. # # @example Unregister a cursor. # cursor_reaper.unregister_cursor(id) # # @param [ Integer ] id The id of the cursor to unregister. # # @api private # # @since 2.3.0 def unregister_cursor(id) if id.nil? raise ArgumentError, 'unregister_cursor called with nil cursor_id' end if id == 0 raise ArgumentError, 'unregister_cursor called with cursor_id=0' end @mutex.synchronize do @active_cursor_ids.delete(id) end end # Read and decode scheduled kill cursors operations. # # This method mutates instance variables without locking, so it is not # thread safe. Generally, it should not be called directly; it is a helper # for the `kill_cursors` method. # # @api private def read_scheduled_kill_specs while kill_spec = @kill_spec_queue.pop(true) if @active_cursor_ids.include?(kill_spec.cursor_id) @to_kill[kill_spec.server_address] ||= Set.new @to_kill[kill_spec.server_address] << kill_spec end end rescue ThreadError # Empty queue, nothing to do.
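      #
      # A hedged sketch of the producer side of this queue (in the driver
      # the scheduling call is made from cursor cleanup code, not by
      # applications):
      #
      #   reaper.register_cursor(cursor_id)       # cursor becomes active
      #   reaper.schedule_kill_cursor(kill_spec)  # enqueued, e.g. when a cursor is GC'd
      #   reaper.read_scheduled_kill_specs        # drains the queue into @to_kill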
end # Execute all pending kill cursors operations. # # @example Execute pending kill cursors operations. # cursor_reaper.kill_cursors # # @api private # # @since 2.3.0 def kill_cursors # TODO optimize this to batch kill cursor operations for the same # server/database/collection instead of killing each cursor # individually. loop do server_address = nil kill_spec = @mutex.synchronize do read_scheduled_kill_specs # Find a server that has any cursors scheduled for destruction. server_address, specs = @to_kill.detect { |_, specs| specs.any? } if specs.nil? # All servers have empty specs, nothing to do. return end # Note that this mutates the spec in the queue. # If the kill cursor operation fails, we don't attempt to # kill that cursor again. spec = specs.take(1).tap do |arr| specs.subtract(arr) end.first unless @active_cursor_ids.include?(spec.cursor_id) # The cursor was already killed, typically because it has # been iterated to completion. Remove the kill spec from # our records without doing any more work. spec = nil end spec end # If there was a spec to kill but its cursor was already killed, # look for another spec. next unless kill_spec # We could also pass kill_spec directly into the KillCursors # operation, though this would make that operation have a # different API from all of the other ones which accept hashes. spec = { cursor_ids: [kill_spec.cursor_id], coll_name: kill_spec.coll_name, db_name: kill_spec.db_name, } op = Operation::KillCursors.new(spec) server = cluster.servers.detect do |server| server.address == server_address end unless server # TODO We currently don't have a server for the address that the # cursor is associated with. We should leave the cursor in the # queue to be killed at a later time (when the server comes back). next end options = { server_api: server.options[:server_api], connection_global_id: kill_spec.connection_global_id, } if connection = kill_spec.connection op.execute_with_connection(connection, context: Operation::Context.new(options: options)) connection.connection_pool.check_in(connection) else op.execute(server, context: Operation::Context.new(options: options)) end if session = kill_spec.session if session.implicit? session.end_session end end end end alias :execute :kill_cursors alias :flush :kill_cursors end end end mongo-ruby-driver-2.21.3/lib/mongo/cluster/reapers/socket_reaper.rb000066400000000000000000000034201505113246500253510ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Cluster # A manager that calls a method on each of a cluster's pools to close idle # sockets. # # @api private # # @since 2.5.0 class SocketReaper # Initialize the SocketReaper object. # # @example Initialize the socket reaper. # SocketReaper.new(cluster) # # @param [ Mongo::Cluster ] cluster The cluster whose pools' idle sockets # need to be reaped at regular intervals. 
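      #
      # @example Drive the reaper manually (a sketch; the driver normally
      #   invokes it via the cluster's periodic executor):
      #   reaper = SocketReaper.new(cluster)
      #   reaper.execute # closes idle sockets in every server's pool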
# # @since 2.5.0 def initialize(cluster) @cluster = cluster end # Execute the operation to close the pool's idle sockets. # # @example Close the idle sockets in each of the cluster's pools. # socket_reaper.execute # # @since 2.5.0 def execute @cluster.servers.each do |server| server.pool_internal&.close_idle_sockets end true end # When the socket reaper is garbage-collected, there's no need to close # idle sockets; sockets will be closed anyway when the pools are # garbage collected. # # @since 2.5.0 def flush end end end end mongo-ruby-driver-2.21.3/lib/mongo/cluster/sdam_flow.rb000066400000000000000000000606611505113246500230470ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2018-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. class Mongo::Cluster # Handles SDAM flow for a server description changed event. # # Updates server descriptions, topology descriptions and publishes # SDAM events. # # SdamFlow is meant to be instantiated once for every server description # changed event that needs to be processed. # # @api private class SdamFlow extend Forwardable def initialize(cluster, previous_desc, updated_desc, awaited: false) @cluster = cluster @topology = cluster.topology @original_desc = @previous_desc = previous_desc @updated_desc = updated_desc @servers_to_disconnect = [] @awaited = !!awaited end attr_reader :cluster def_delegators :cluster, :servers_list, :seeds, :publish_sdam_event, :log_warn # The topology stored in this attribute can change multiple times throughout # a single sdam flow (e.g. unknown -> RS no primary -> RS with primary). # Events for topology change get sent at the end of flow processing, # such that the above example only publishes an unknown -> RS with primary # event to the application. # # @return Mongo::Cluster::Topology The current topology. attr_reader :topology attr_reader :previous_desc attr_reader :updated_desc attr_reader :original_desc def awaited? @awaited end def_delegators :topology, :replica_set_name # Updates descriptions on all servers whose address matches # updated_desc's address. def update_server_descriptions servers_list.each do |server| if server.address == updated_desc.address # SDAM flow must be run when topology version in the new description # is equal to the current topology version, per the example in # https://github.com/mongodb/specifications/blob/master/source/server-discovery-and-monitoring/server-discovery-and-monitoring.md#what-is-the-purpose-of-topologyversion unless updated_desc.topology_version_gte?(server.description) return false end @server_description_changed = server.description != updated_desc # Always update server description, so that fields that do not # affect description equality comparisons but are part of the # description are updated. server.update_description(updated_desc) server.update_last_scan # If there was no content difference between descriptions, we # still need to run sdam flow, but if the flow produces no change # in topology we will omit sending events. 
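          # (A hedged illustration of the gating above: a response carrying
          # an older topologyVersion than the server's current description
          # is considered stale --
          #
          #   updated_desc.topology_version_gte?(server.description) # => false
          #
          # -- and this method returns false without applying it.)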
return true end end false end def server_description_changed @previous_server_descriptions = servers_list.map do |server| [server.address.to_s, server.description] end unless update_server_descriptions # All of the transitions require that the server whose updated_desc we are # processing is still in the cluster (i.e., was not removed as a result # of processing another response, potentially concurrently). # If update_server_descriptions returned false we have no servers # in the topology for the description we are processing, stop. return end case topology when Topology::LoadBalanced @updated_desc = ::Mongo::Server::Description::LoadBalancer.new( updated_desc.address, ) update_server_descriptions when Topology::Single if topology.replica_set_name if updated_desc.replica_set_name != topology.replica_set_name log_warn( "Server #{updated_desc.address.to_s} has an incorrect replica set name '#{updated_desc.replica_set_name}'; expected '#{topology.replica_set_name}'" ) @updated_desc = ::Mongo::Server::Description.new( updated_desc.address, {}, average_round_trip_time: updated_desc.average_round_trip_time, minimum_round_trip_time: updated_desc.minimum_round_trip_time ) update_server_descriptions end end when Topology::Unknown if updated_desc.standalone? update_unknown_with_standalone elsif updated_desc.mongos? @topology = Topology::Sharded.new(topology.options, topology.monitoring, self) elsif updated_desc.primary? @topology = Topology::ReplicaSetWithPrimary.new( topology.options.merge(replica_set_name: updated_desc.replica_set_name), topology.monitoring, self) update_rs_from_primary elsif updated_desc.secondary? || updated_desc.arbiter? || updated_desc.other? @topology = Topology::ReplicaSetNoPrimary.new( topology.options.merge(replica_set_name: updated_desc.replica_set_name), topology.monitoring, self) update_rs_without_primary end when Topology::Sharded unless updated_desc.unknown? || updated_desc.mongos? log_warn( "Removing server #{updated_desc.address.to_s} because it is of the wrong type (#{updated_desc.server_type.to_s.upcase}) - expected SHARDED" ) remove end when Topology::ReplicaSetWithPrimary if updated_desc.standalone? || updated_desc.mongos? log_warn( "Removing server #{updated_desc.address.to_s} because it is of the wrong type (#{updated_desc.server_type.to_s.upcase}) - expected a replica set member" ) remove check_if_has_primary elsif updated_desc.primary? update_rs_from_primary elsif updated_desc.secondary? || updated_desc.arbiter? || updated_desc.other? update_rs_with_primary_from_member else check_if_has_primary end when Topology::ReplicaSetNoPrimary if updated_desc.standalone? || updated_desc.mongos? log_warn( "Removing server #{updated_desc.address.to_s} because it is of the wrong type (#{updated_desc.server_type.to_s.upcase}) - expected a replica set member" ) remove elsif updated_desc.primary? # Here we change topology type to RS with primary, however # while processing updated_desc we may find that its RS name # does not match our existing RS name. For this reason # it is imperative to NOT pass updated_desc's RS name to # topology constructor here. # During processing we may remove the server whose updated_desc # we are processing (e.g. the RS name mismatch case again), # in which case topology type will go back to RS without primary # in the check_if_has_primary step. @topology = Topology::ReplicaSetWithPrimary.new( # Do not pass updated_desc's RS name here topology.options, topology.monitoring, self) update_rs_from_primary elsif updated_desc.secondary?
|| updated_desc.arbiter? || updated_desc.other? update_rs_without_primary end else raise ArgumentError, "Unknown topology #{topology.class}" end verify_invariants commit_changes disconnect_servers end # Transitions from unknown to single topology type, when a standalone # server is discovered. def update_unknown_with_standalone if seeds.length == 1 @topology = Topology::Single.new( topology.options, topology.monitoring, self) else log_warn( "Removing server #{updated_desc.address.to_s} because it is a standalone and we have multiple seeds (#{seeds.length})" ) remove end end # Updates topology which must be a ReplicaSetWithPrimary with information # from the primary's server description. # # This method does not change topology type to ReplicaSetWithPrimary - # this needs to have been done prior to calling this method. # # If the primary whose description is being processed is determined to be # stale, this method will change the server description and topology # type to unknown. def update_rs_from_primary if topology.replica_set_name.nil? @topology = Topology::ReplicaSetWithPrimary.new( topology.options.merge(replica_set_name: updated_desc.replica_set_name), topology.monitoring, self) end if topology.replica_set_name != updated_desc.replica_set_name log_warn( "Removing server #{updated_desc.address.to_s} because it has an " + "incorrect replica set name '#{updated_desc.replica_set_name}'; " + "expected '#{topology.replica_set_name}'" ) remove check_if_has_primary return end if stale_primary? @updated_desc = ::Mongo::Server::Description.new( updated_desc.address, {}, average_round_trip_time: updated_desc.average_round_trip_time, minimum_round_trip_time: updated_desc.minimum_round_trip_time ) update_server_descriptions check_if_has_primary return end if updated_desc.max_wire_version >= 17 @topology = Topology::ReplicaSetWithPrimary.new( topology.options.merge( max_election_id: updated_desc.election_id, max_set_version: updated_desc.set_version ), topology.monitoring, self) else max_election_id = topology.new_max_election_id(updated_desc) max_set_version = topology.new_max_set_version(updated_desc) if max_election_id != topology.max_election_id || max_set_version != topology.max_set_version then @topology = Topology::ReplicaSetWithPrimary.new( topology.options.merge( max_election_id: max_election_id, max_set_version: max_set_version ), topology.monitoring, self) end end # At this point we have accepted the updated server description # and the topology (both are primary). Commit these changes so that # their respective SDAM events are published before SDAM events for # server additions/removals that follow publish_description_change_event servers_list.each do |server| if server.address != updated_desc.address if server.primary? server.update_description( ::Mongo::Server::Description.new( server.address, {}, average_round_trip_time: server.description.average_round_trip_time, minimum_round_trip_time: updated_desc.minimum_round_trip_time ) ) end end end servers = add_servers_from_desc(updated_desc) remove_servers_not_in_desc(updated_desc) check_if_has_primary servers.each do |server| server.start_monitoring end end # Updates a ReplicaSetWithPrimary topology from a non-primary member. 
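    #
    # For intuition, a hedged sketch of the name check performed below:
    #
    #   topology.replica_set_name     # => "rs0"
    #   updated_desc.replica_set_name # => "other-rs" -- mismatch, so the
    #   # member is removed and check_if_has_primary re-evaluates whether
    #   # the topology still has a primary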
def update_rs_with_primary_from_member if topology.replica_set_name != updated_desc.replica_set_name log_warn( "Removing server #{updated_desc.address.to_s} because it has an " + "incorrect replica set name (#{updated_desc.replica_set_name}); " + "current set name is #{topology.replica_set_name}" ) remove check_if_has_primary return end if updated_desc.me_mismatch? log_warn( "Removing server #{updated_desc.address.to_s} because it " + "reported itself as #{updated_desc.me}" ) remove check_if_has_primary return end have_primary = false servers_list.each do |server| if server.primary? have_primary = true break end end unless have_primary @topology = Topology::ReplicaSetNoPrimary.new( topology.options, topology.monitoring, self) end end # Updates a ReplicaSetNoPrimary topology from a non-primary member. def update_rs_without_primary if topology.replica_set_name.nil? @topology = Topology::ReplicaSetNoPrimary.new( topology.options.merge(replica_set_name: updated_desc.replica_set_name), topology.monitoring, self) end if topology.replica_set_name != updated_desc.replica_set_name log_warn( "Removing server #{updated_desc.address.to_s} because it has an " + "incorrect replica set name (#{updated_desc.replica_set_name}); " + "current set name is #{topology.replica_set_name}" ) remove return end publish_description_change_event servers = add_servers_from_desc(updated_desc) commit_changes servers.each do |server| server.start_monitoring end if updated_desc.me_mismatch? log_warn( "Removing server #{updated_desc.address.to_s} because it " + "reported itself as #{updated_desc.me}" ) remove return end end # Adds all servers referenced in the given description (which is # supposed to have come from a good primary) which are not # already in the cluster, to the cluster. # # @note Servers are added unmonitored. Monitoring must be started later # separately. # # @return [ Array ] Servers actually added to the cluster. # This is the set of servers on which monitoring should be started. def add_servers_from_desc(updated_desc) added_servers = [] %w(hosts passives arbiters).each do |m| updated_desc.send(m).each do |address_str| if server = cluster.add(address_str, monitor: false) added_servers << server end end end verify_invariants added_servers end # Removes servers from the topology which are not present in the # given server description (which is supposed to have come from a # good primary). def remove_servers_not_in_desc(updated_desc) updated_desc_address_strs = %w(hosts passives arbiters).map do |m| updated_desc.send(m) end.flatten servers_list.each do |server| unless updated_desc_address_strs.include?(address_str = server.address.to_s) updated_host = updated_desc.address.to_s if updated_desc.me && updated_desc.me != updated_host updated_host += " (self-identified as #{updated_desc.me})" end log_warn( "Removing server #{address_str} because it is not in hosts reported by primary " + "#{updated_host}. Reported hosts are: " + updated_desc.hosts.join(', ') ) do_remove(address_str) end end end # Removes the server whose description we are processing from the # topology. def remove publish_description_change_event do_remove(updated_desc.address.to_s) end # Removes specified server from topology and warns if the topology ends # up with an empty server list as a result def do_remove(address_str) servers = cluster.remove(address_str, disconnect: false) servers.each do |server| # Clear the description so that the server is marked as unknown. 
server.clear_description # We need to publish server closed event here, but we cannot close # the server because it could be the server owning the monitor in # whose thread this flow is presently executing, in which case closing # the server can terminate the thread and leave SDAM processing # incomplete. Thus we have to remove the server from the cluster, # publish the event, but do not call disconnect on the server until # the very end when all processing has completed. publish_sdam_event( Mongo::Monitoring::SERVER_CLOSED, Mongo::Monitoring::Event::ServerClosed.new(server.address, cluster.topology) ) end @servers_to_disconnect += servers if servers_list.empty? log_warn( "Topology now has no servers - this is likely a misconfiguration of the cluster and/or the application" ) end end def publish_description_change_event # This method may be invoked when server description definitely changed # but prior to the topology getting updated. Therefore we check both # server description changes and overall topology changes. When this # method is called at the end of SDAM flow as part of "commit changes" # step, server description change is incorporated into the topology # change. unless @server_description_changed || topology_effectively_changed? return end # updated_desc here may not be the description we received from # the server - in case of a stale primary, the server reported itself # as being a primary but updated_desc here will be unknown. # We used to not notify on Unknown -> Unknown server changes. # Technically these are valid state changes (or at least as valid as # other server description changes when the description has not # changed meaningfully but the events are still published). # The current version of the driver notifies on Unknown -> Unknown # transitions. # Avoid dispatching events when updated description is the same as # previous description. This allows this method to be called multiple # times in the flow when the events should be published, without # worrying about whether there are any unpublished changes. if updated_desc.object_id == previous_desc.object_id return end publish_sdam_event( ::Mongo::Monitoring::SERVER_DESCRIPTION_CHANGED, ::Mongo::Monitoring::Event::ServerDescriptionChanged.new( updated_desc.address, topology, previous_desc, updated_desc, awaited: awaited?, ) ) @previous_desc = updated_desc @need_topology_changed_event = true end # Publishes server description changed events, updates topology on # the cluster and publishes topology changed event, as needed # based on operations performed during SDAM flow processing. def commit_changes # The application-visible sequence of events should be as follows: # # 1. Description change for the server which we are processing; # 2. Topology change, if any; # 3. Description changes for other servers, if any. # # The tricky part here is that the server description changes are # not all processed together. publish_description_change_event start_pool_if_data_bearing topology_changed_event_published = false if !topology.equal?(cluster.topology) || @need_topology_changed_event # We are about to publish topology changed event. # Recreate the topology instance to get its server descriptions # up to date. 
@topology = topology.class.new(topology.options, topology.monitoring, cluster) # This sends the SDAM event cluster.update_topology(topology) topology_changed_event_published = true @need_topology_changed_event = false end # If a server description changed, topology description change event # must be published with the previous and next topologies being of # the same type, unless we already published topology change event if topology_changed_event_published return end if updated_desc.unknown? && previous_desc.unknown? return end if updated_desc.object_id == previous_desc.object_id return end unless topology_effectively_changed? return end # If we are here, there has been a change in the server descriptions # in our topology, but topology class has not changed. # Publish the topology changed event and recreate the topology to # get the new list of server descriptions into it. @topology = topology.class.new(topology.options, topology.monitoring, cluster) # This sends the SDAM event cluster.update_topology(topology) end def disconnect_servers while server = @servers_to_disconnect.shift if server.connected? # Do not publish server closed event, as this was already done server.disconnect! end end end # If the server being processed is identified as data bearing, creates the # server's connection pool so it can start populating def start_pool_if_data_bearing return if !updated_desc.data_bearing? servers_list.each do |server| if server.address == @updated_desc.address server.pool end end end # Checks if the cluster has a primary, and if not, transitions the topology # to ReplicaSetNoPrimary. Topology must be ReplicaSetWithPrimary when # invoking this method. def check_if_has_primary unless topology.replica_set? raise ArgumentError, "check_if_has_primary should only be called when topology is replica set, but it is #{topology.class.name.sub(/.*::/, '')}" end primary = servers_list.detect do |server| # A primary with the wrong set name is not a primary server.primary? && server.description.replica_set_name == topology.replica_set_name end unless primary @topology = Topology::ReplicaSetNoPrimary.new( topology.options, topology.monitoring, self) end end # Whether updated_desc is for a stale primary. def stale_primary? if updated_desc.max_wire_version >= 17 if updated_desc.election_id.nil? && !topology.max_election_id.nil? return true end if updated_desc.election_id && topology.max_election_id && updated_desc.election_id < topology.max_election_id return true end if updated_desc.election_id == topology.max_election_id if updated_desc.set_version.nil? && !topology.max_set_version.nil? return true end if updated_desc.set_version && topology.max_set_version && updated_desc.set_version < topology.max_set_version return true end end else if updated_desc.election_id && updated_desc.set_version if topology.max_set_version && topology.max_election_id && (updated_desc.set_version < topology.max_set_version || (updated_desc.set_version == topology.max_set_version && updated_desc.election_id < topology.max_election_id)) return true end end end false end # Returns whether the server whose description this flow processed # was not previously unknown, and is now. Used to decide, in particular, # whether to clear the server's connection pool. def became_unknown? updated_desc.unknown? && !original_desc.unknown? end # Returns whether topology meaningfully changed as a result of running # SDAM flow. 
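    #
    # A hedged sketch of the comparison this method performs (in addition
    # to an identity check on the topology object itself):
    #
    #   before = @previous_server_descriptions
    #   after  = servers_list.map { |s| [s.address.to_s, s.description] }
    #   before != after # => the topology effectively changed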
# # The spec defines topology equality through equality of topology types # and server descriptions in each topology; this definition is not usable # by us because our topology objects do not hold server descriptions and # are instead "live". Thus we have to store the full list of server # descriptions at the beginning of SDAM flow and compare them to the # current ones. def topology_effectively_changed? unless topology.equal?(cluster.topology) return true end server_descriptions = servers_list.map do |server| [server.address.to_s, server.description] end @previous_server_descriptions != server_descriptions end def verify_invariants if Mongo::Lint.enabled? if cluster.topology.single? if cluster.servers_list.length > 1 raise Mongo::Error::LintError, "Trying to create a single topology with multiple servers: #{cluster.servers_list}" end end end end end end mongo-ruby-driver-2.21.3/lib/mongo/cluster/topology.rb000066400000000000000000000125331505113246500227430ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Cluster # Defines behavior for getting servers. # # Topologies are associated with their clusters - for example, a # ReplicaSet topology contains the replica set name. A topology # object therefore cannot be used with multiple cluster objects. # # At the same time, topology objects do not know anything about # specific *servers* in a cluster, despite what their constructor # may suggest. Which means, in particular, that topology change events # require the application to maintain cluster references on its own # if it wants to track server changes within a replica set. # # @since 2.0.0 module Topology extend self end end end require 'mongo/cluster/topology/base' require 'mongo/cluster/topology/no_replica_set_options' require 'mongo/cluster/topology/load_balanced' require 'mongo/cluster/topology/replica_set_no_primary' require 'mongo/cluster/topology/replica_set_with_primary' require 'mongo/cluster/topology/sharded' require 'mongo/cluster/topology/single' require 'mongo/cluster/topology/unknown' module Mongo class Cluster module Topology # The various topologies for server selection. # # @since 2.0.0 # @api private OPTIONS = { direct: Single, load_balanced: LoadBalanced, replica_set: ReplicaSetNoPrimary, sharded: Sharded, }.freeze # Get the initial cluster topology for the provided options. # # @example Get the initial cluster topology. # Topology.initial(topology: :replica_set) # # @param [ Cluster ] cluster The cluster. # @param [ Monitoring ] monitoring The monitoring. # @param [ Hash ] options The cluster options. # # @option options [ true | false ] :direct_connection Whether to connect # directly to the specified seed, bypassing topology discovery. Exactly # one seed must be provided. # @option options [ Symbol ] :connect Deprecated - use :direct_connection # option instead of this option. The connection method to use. 
This # forces the cluster to behave in the specified way instead of # auto-discovering. One of :direct, :replica_set, :sharded, # :load_balanced. If :connect is set to :load_balanced, the driver # will behave as if the server is a load balancer even if it isn't # connected to a load balancer. # @option options [ true | false ] :load_balanced Whether to expect to # connect to a load balancer. # @option options [ Symbol ] :replica_set The name of the replica set to # connect to. Servers not in this replica set will be ignored. # # @return [ ReplicaSet, Sharded, Single, LoadBalanced ] The topology. # # @since 2.0.0 # @api private def initial(cluster, monitoring, options) connect = options[:connect]&.to_sym cls = if options[:direct_connection] if connect && connect != :direct raise ArgumentError, "Conflicting topology options: direct_connection=true and connect=#{connect}" end if options[:load_balanced] raise ArgumentError, "Conflicting topology options: direct_connection=true and load_balanced=true" end Single elsif options[:direct_connection] == false && connect && connect == :direct raise ArgumentError, "Conflicting topology options: direct_connection=false and connect=#{connect}" elsif connect && connect != :load_balanced if options[:load_balanced] raise ArgumentError, "Conflicting topology options: connect=#{options[:connect].inspect} and load_balanced=true" end OPTIONS.fetch(options[:connect].to_sym) elsif options.key?(:replica_set) || options.key?(:replica_set_name) if options[:load_balanced] raise ArgumentError, "Conflicting topology options: replica_set/replica_set_name and load_balanced=true" end ReplicaSetNoPrimary elsif options[:load_balanced] || connect == :load_balanced LoadBalanced else Unknown end # Options here are client/cluster/server options. # In particular the replica set name key is different for # topology. # If replica_set_name is given (as might be internally by driver), # use that key. # Otherwise (e.g. options passed down from client), # move replica_set to replica_set_name. if (cls <= ReplicaSetNoPrimary || cls == Single) && !options[:replica_set_name] options = options.dup options[:replica_set_name] = options.delete(:replica_set) end cls.new(options, monitoring, cluster) end end end end mongo-ruby-driver-2.21.3/lib/mongo/cluster/topology/000077500000000000000000000000001505113246500224125ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/lib/mongo/cluster/topology/base.rb000066400000000000000000000174141505113246500236600ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2018-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Cluster module Topology # Defines behavior common to all topologies. # # @since 2.7.0 class Base extend Forwardable include Loggable include Monitoring::Publishable # Initialize the topology with the options. # # @param [ Hash ] options The options. # @param [ Monitoring ] monitoring The monitoring. # @param [ Cluster ] cluster The cluster. 
# # @option options [ Symbol ] :replica_set Name of the replica set to # connect to. Can be left blank (either nil or the empty string are # accepted) to discover the name from the cluster. If the addresses # belong to different replica sets there is no guarantee which # replica set is selected - in particular, the driver may choose # the replica set name of a secondary if it returns its response # prior to a primary belonging to a different replica set. # This option can only be specified when instantiating a replica # set topology. # @option options [ BSON::ObjectId ] :max_election_id Max election id # per the SDAM specification. # This option can only be specified when instantiating a replica # set topology. # @option options [ Integer ] :max_set_version Max set version # per the SDAM specification. # This option can only be specified when instantiating a replica # set topology. # # @since 2.7.0 # @api private def initialize(options, monitoring, cluster) options = validate_options(options, cluster) @options = options @monitoring = monitoring @cluster = cluster # The list of server descriptions is simply fixed at the time of # topology creation. If server descriptions change later, a # new topology instance should be created. @server_descriptions = {} (servers = cluster.servers_list).each do |server| @server_descriptions[server.address.to_s] = server.description end if is_a?(LoadBalanced) @compatible = true else begin server_descriptions.each do |address_str, desc| unless desc.unknown? desc.features.check_driver_support! end end rescue Error::UnsupportedFeatures => e @compatible = false @compatibility_error = e else @compatible = true end end @have_data_bearing_servers = false @logical_session_timeout = server_descriptions.inject(nil) do |min, (address_str, desc)| # LST is only read from data-bearing servers if desc.data_bearing? @have_data_bearing_servers = true break unless timeout = desc.logical_session_timeout [timeout, (min || timeout)].min else min end end if Mongo::Lint.enabled? freeze end end # @return [ Hash ] options The options. attr_reader :options # @return [ Cluster ] The cluster. # @api private attr_reader :cluster private :cluster # @return [ Array ] addresses Server addresses. def addresses cluster.addresses.map(&:seed) end # @return [ Monitoring ] monitoring The monitoring. attr_reader :monitoring # Get the replica set name configured for this topology. # # @example Get the replica set name. # topology.replica_set_name # # @return [ String ] The name of the configured replica set. # # @since 2.0.0 def replica_set_name options[:replica_set_name] end # @return [ Hash ] server_descriptions The map of address strings to # server descriptions, one for each server in the cluster. # # @since 2.7.0 attr_reader :server_descriptions # @return [ true|false ] compatible Whether topology is compatible # with the driver. # # @since 2.7.0 def compatible? @compatible end # @return [ Exception ] compatibility_error If topology is incompatible # with the driver, an exception with information regarding the incompatibility. # If topology is compatible with the driver, nil. # # @since 2.7.0 attr_reader :compatibility_error # The logical session timeout value in minutes. # # @note The value is in minutes, unlike most other times in the # driver which are returned in seconds. # # @return [ Integer, nil ] The logical session timeout.
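        #
        # @example Reading the cached value (illustrative values; 30 minutes
        #   is a common server default):
        #   topology.logical_session_timeout # => 30, or nil if unknown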
# # @since 2.7.0 attr_reader :logical_session_timeout # @return [ true | false ] have_data_bearing_servers Whether the # topology has any data bearing servers, for the purposes of # logical session timeout calculation. # # @api private def data_bearing_servers? @have_data_bearing_servers end # The largest electionId ever reported by a primary. # May be nil. # # @return [ BSON::ObjectId ] The election id. # # @since 2.7.0 def max_election_id options[:max_election_id] end # The largest setVersion ever reported by a primary. # May be nil. # # @return [ Integer ] The set version. # # @since 2.7.0 def max_set_version options[:max_set_version] end # @api private def new_max_election_id(description) if description.election_id && (max_election_id.nil? || description.election_id > max_election_id) description.election_id else max_election_id end end # @api private def new_max_set_version(description) if description.set_version && (max_set_version.nil? || description.set_version > max_set_version) description.set_version else max_set_version end end # Compares each server address against the list of patterns. # # @param [ Array ] patterns the URL suffixes to compare # each server against. # # @return [ true | false ] whether any of the addresses match any of # the patterns or not. # # @api private def server_hosts_match_any?(patterns) server_descriptions.any? do |addr_spec, _desc| addr, _port = addr_spec.split(/:/) patterns.any? { |pattern| addr.end_with?(pattern) } end end private # Validates and/or transforms options as necessary for the topology. # # @return [ Hash ] New options def validate_options(options, cluster) options end end end end end mongo-ruby-driver-2.21.3/lib/mongo/cluster/topology/load_balanced.rb000066400000000000000000000062321505113246500254720ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2021 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Cluster module Topology # Defines behavior for when a cluster is in load-balanced topology. class LoadBalanced < Base # The display name for the topology. NAME = 'LoadBalanced'.freeze # Get the display name. # # @return [ String ] The display name. def display_name self.class.name.gsub(/.*::/, '') end # @note This method is experimental and subject to change. # # @api experimental def summary details = server_descriptions.keys.join(',') "#{display_name}[#{details}]" end # Determine if the topology would select a readable server for the # provided candidates and read preference. # # @param [ Cluster ] cluster The cluster. # @param [ ServerSelector ] server_selector The server # selector. # # @return [ true ] A load-balanced topology always has a readable server. def has_readable_server?(cluster, server_selector = nil); true; end # Determine if the topology would select a writable server for the # provided candidates. # # @param [ Cluster ] cluster The cluster. # # @return [ true ] A load-balanced topology always has a writable server.
def has_writable_server?(cluster); true; end # Returns whether this topology is one of the replica set ones. # # @return [ false ] Always false. def replica_set?; false; end # Select appropriate servers for this topology. # # @param [ Array ] servers The known servers. # # @return [ Array ] All of the known servers. def servers(servers, name = nil) servers end # Returns whether this topology is sharded. # # @return [ false ] Always false. def sharded?; false; end # Returns whether this topology is Single. # # @return [ false ] Always false. def single?; false; end # Returns whether this topology is Unknown. # # @return [ false ] Always false. def unknown?; false; end private def validate_options(options, cluster) if cluster.servers_list.length > 1 raise ArgumentError, "Cannot instantiate a load-balanced topology with more than one server in the cluster: #{cluster.servers_list.map(&:address).map(&:seed).join(', ')}" end super(options, cluster) end end end end end mongo-ruby-driver-2.21.3/lib/mongo/cluster/topology/no_replica_set_options.rb000066400000000000000000000023061505113246500275010ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2018-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Cluster module Topology module NoReplicaSetOptions private def validate_options(options, cluster) # These options can be set to nil for convenience, but not to # any other value, including an empty string. [:replica_set_name, :max_election_id, :max_set_version].each do |option| if options[option] raise ArgumentError, "Topology #{self.class.name} cannot have the :#{option} option set" end end super(options, cluster) end end end end end mongo-ruby-driver-2.21.3/lib/mongo/cluster/topology/replica_set_no_primary.rb000066400000000000000000000120251505113246500274700ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Cluster module Topology # Defines behavior when a cluster is in replica set topology, # and there is no primary or the primary has not yet been discovered # by the driver. # # @since 2.0.0 class ReplicaSetNoPrimary < Base # The display name for the topology. # # @since 2.0.0 # @deprecated NAME = 'Replica Set'.freeze # Get the display name. # # @example Get the display name. # ReplicaSet.display_name # # @return [ String ] The display name.
# # @since 2.0.0 def display_name self.class.name.gsub(/.*::/, '') end # @note This method is experimental and subject to change. # # @api experimental # @since 2.7.0 def summary details = server_descriptions.keys.join(',') if details != '' details << ',' end details << "name=#{replica_set_name}" if max_set_version details << ",v=#{max_set_version}" end if max_election_id details << ",e=#{max_election_id && max_election_id.to_s.sub(/^0+/, '')}" end "#{display_name}[#{details}]" end # Determine if the topology would select a readable server for the # provided candidates and read preference. # # @example Is a readable server present? # topology.has_readable_server?(cluster, server_selector) # # @param [ Cluster ] cluster The cluster. # @param [ ServerSelector ] server_selector The server # selector. # # @return [ true, false ] If a readable server is present. # # @since 2.4.0 # @deprecated def has_readable_server?(cluster, server_selector = nil) !(server_selector || ServerSelector.primary).try_select_server(cluster).nil? end # Determine if the topology would select a writable server for the # provided candidates. # # @example Is a writable server present? # topology.has_writable_server?(servers) # # @param [ Cluster ] cluster The cluster. # # @return [ true, false ] If a writable server is present. # # @since 2.4.0 def has_writable_server?(cluster) !ServerSelector.primary.try_select_server(cluster).nil? end # A replica set topology is a replica set. # # @example Is the topology a replica set? # topology.replica_set? # # @return [ true ] Always true. # # @since 2.0.0 def replica_set?; true; end # Select appropriate servers for this topology. # # @example Select the servers. # ReplicaSet.servers(servers) # # @param [ Array ] servers The known servers. # # @return [ Array ] The servers in the replica set. # # @since 2.0.0 def servers(servers) servers.select do |server| (replica_set_name.nil? || server.replica_set_name == replica_set_name) && server.primary? || server.secondary? end end # A replica set topology is not sharded. # # @example Is the topology sharded? # ReplicaSet.sharded? # # @return [ false ] Always false. # # @since 2.0.0 def sharded?; false; end # A replica set topology is not single. # # @example Is the topology single? # ReplicaSet.single? # # @return [ false ] Always false. # # @since 2.0.0 def single?; false; end # A replica set topology is not unknown. # # @example Is the topology unknown? # ReplicaSet.unknown? # # @return [ false ] Always false. # # @since 2.0.0 def unknown?; false; end private def validate_options(options, cluster) if options[:replica_set_name] == '' options = options.merge(replica_set_name: nil) end unless options[:replica_set_name] raise ArgumentError, 'Cannot instantiate a replica set topology without a replica set name' end super(options, cluster) end end end end end mongo-ruby-driver-2.21.3/lib/mongo/cluster/topology/replica_set_with_primary.rb000066400000000000000000000016461505113246500300360ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2018-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
# See the License for the specific language governing permissions and # limitations under the License. module Mongo class Cluster module Topology # Defines behavior when a cluster is in replica set topology, # and there is a primary which has been discovered by the driver. # # @since 2.7.0 class ReplicaSetWithPrimary < ReplicaSetNoPrimary end end end end mongo-ruby-driver-2.21.3/lib/mongo/cluster/topology/sharded.rb000066400000000000000000000076041505113246500243560ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Cluster module Topology # Defines behavior for when a cluster is in sharded topology. # # @since 2.0.0 class Sharded < Base include NoReplicaSetOptions # The display name for the topology. # # @since 2.0.0 NAME = 'Sharded'.freeze # Get the display name. # # @example Get the display name. # Sharded.display_name # # @return [ String ] The display name. # # @since 2.0.0 def display_name self.class.name.gsub(/.*::/, '') end # @note This method is experimental and subject to change. # # @api experimental # @since 2.7.0 def summary details = server_descriptions.keys.join(',') "#{display_name}[#{details}]" end # Determine if the topology would select a readable server for the # provided candidates and read preference. # # @example Is a readable server present? # topology.has_readable_server?(cluster, server_selector) # # @param [ Cluster ] cluster The cluster. # @param [ ServerSelector ] server_selector The server # selector. # # @return [ true ] A Sharded cluster always has a readable server. # # @since 2.4.0 def has_readable_server?(cluster, server_selector = nil); true; end # Determine if the topology would select a writable server for the # provided candidates. # # @example Is a writable server present? # topology.has_writable_server?(servers) # # @param [ Cluster ] cluster The cluster. # # @return [ true ] A Sharded cluster always has a writable server. # # @since 2.4.0 def has_writable_server?(cluster); true; end # A sharded topology is not a replica set. # # @example Is the topology a replica set? # Sharded.replica_set? # # @return [ false ] Always false. # # @since 2.0.0 def replica_set?; false; end # Select appropriate servers for this topology. # # @example Select the servers. # Sharded.servers(servers) # # @param [ Array ] servers The known servers. # # @return [ Array ] The mongos servers. # # @since 2.0.0 def servers(servers) servers.select { |server| server.mongos? } end # A sharded topology is sharded. # # @example Is the topology sharded? # Sharded.sharded? # # @return [ true ] Always true. # # @since 2.0.0 def sharded?; true; end # A sharded topology is not single. # # @example Is the topology single? # Sharded.single? # # @return [ false ] Always false. # # @since 2.0.0 def single?; false; end # A sharded topology is not unknown. # # @example Is the topology unknown? # Sharded.unknown? # # @return [ false ] Always false.
# # @since 2.0.0 def unknown?; false; end end end end end mongo-ruby-driver-2.21.3/lib/mongo/cluster/topology/single.rb000066400000000000000000000102721505113246500242220ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Cluster module Topology # Defines behavior for when a cluster is in single topology. # # @since 2.0.0 class Single < Base # The display name for the topology. # # @since 2.0.0 NAME = 'Single'.freeze # Get the display name. # # @example Get the display name. # Single.display_name # # @return [ String ] The display name. # # @since 2.0.0 def display_name self.class.name.gsub(/.*::/, '') end # @note This method is experimental and subject to change. # # @api experimental # @since 2.7.0 def summary details = server_descriptions.keys.join(',') "#{display_name}[#{details}]" end # Determine if the topology would select a readable server for the # provided candidates and read preference. # # @example Is a readable server present? # topology.has_readable_server?(cluster, server_selector) # # @param [ Cluster ] cluster The cluster. # @param [ ServerSelector ] server_selector The server # selector. # # @return [ true ] A standalone always has a readable server. # # @since 2.4.0 def has_readable_server?(cluster, server_selector = nil); true; end # Determine if the topology would select a writable server for the # provided candidates. # # @example Is a writable server present? # topology.has_writable_server?(servers) # # @param [ Cluster ] cluster The cluster. # # @return [ true ] A standalone always has a writable server. # # @since 2.4.0 def has_writable_server?(cluster); true; end # A single topology is not a replica set. # # @example Is the topology a replica set? # Single.replica_set? # # @return [ false ] Always false. # # @since 2.0.0 def replica_set?; false; end # Select appropriate servers for this topology. # # @example Select the servers. # Single.servers(servers, 'test') # # @param [ Array ] servers The known servers. # # @return [ Array ] The single servers. # # @since 2.0.0 def servers(servers, name = nil) servers.reject { |server| server.unknown? } end # A single topology is not sharded. # # @example Is the topology sharded? # Single.sharded? # # @return [ false ] Always false. # # @since 2.0.0 def sharded?; false; end # A single topology is single. # # @example Is the topology single? # Single.single? # # @return [ true ] Always true. # # @since 2.0.0 def single?; true; end # A single topology is not unknown. # # @example Is the topology unknown? # Single.unknown? # # @return [ false ] Always false.
# # @since 2.0.0 def unknown?; false; end private def validate_options(options, cluster) if cluster.servers_list.length > 1 raise ArgumentError, "Cannot instantiate a single topology with more than one server in the cluster: #{cluster.servers_list.map(&:address).map(&:seed).join(', ')}" end super(options, cluster) end end end end end mongo-ruby-driver-2.21.3/lib/mongo/cluster/topology/unknown.rb000066400000000000000000000076261505113246500244510ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2015-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Cluster module Topology # Defines behavior for when a cluster is in an unknown state. # # @since 2.0.0 class Unknown < Base include NoReplicaSetOptions # The display name for the topology. # # @since 2.0.0 NAME = 'Unknown'.freeze # Get the display name. # # @example Get the display name. # Unknown.display_name # # @return [ String ] The display name. # # @since 2.0.0 def display_name self.class.name.gsub(/.*::/, '') end # @note This method is experimental and subject to change. # # @api experimental # @since 2.7.0 def summary details = server_descriptions.keys.join(',') "#{display_name}[#{details}]" end # Determine if the topology would select a readable server for the # provided candidates and read preference. # # @example Is a readable server present? # topology.has_readable_server?(cluster, server_selector) # # @param [ Cluster ] cluster The cluster. # @param [ ServerSelector ] server_selector The server # selector. # # @return [ false ] An Unknown topology will never have a readable server. # # @since 2.4.0 def has_readable_server?(cluster, server_selector = nil); false; end # Determine if the topology would select a writable server for the # provided candidates. # # @example Is a writable server present? # topology.has_writable_server?(servers) # # @param [ Cluster ] cluster The cluster. # # @return [ false ] An Unknown topology will never have a writable server. # # @since 2.4.0 def has_writable_server?(cluster); false; end # An unknown topology is not a replica set. # # @example Is the topology a replica set? # Unknown.replica_set? # # @return [ false ] Always false. # # @since 2.0.0 def replica_set?; false; end # Select appropriate servers for this topology. # # @example Select the servers. # Unknown.servers(servers) # # @param [ Array<Server> ] servers The known servers. # # @raise [ Unknown ] Cannot select servers when the topology is # unknown. # # @since 2.0.0 def servers(servers) [] end # An unknown topology is not sharded. # # @example Is the topology sharded? # Unknown.sharded? # # @return [ false ] Always false. # # @since 2.0.0 def sharded?; false; end # An unknown topology is not single. # # @example Is the topology single? # Unknown.single? # # @return [ false ] Always false. # # @since 2.0.0 def single?; false; end # An unknown topology is unknown. # # @example Is the topology unknown? # Unknown.unknown? # # @return [ true ] Always true.
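      #
      # (Hedged usage sketch, not from the original source.) Per SDAM, a
      # multi-server cluster starts out in this state and leaves it once the
      # first hello response is processed:
      #
      #   client = Mongo::Client.new([ '127.0.0.1:27017' ])
      #   client.cluster.topology.unknown? # => true until a server is discovered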
# # @since 2.0.0 def unknown?; true; end end end end end mongo-ruby-driver-2.21.3/lib/mongo/cluster_time.rb000066400000000000000000000110431505113246500221000ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2019-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo # ClusterTime encapsulates cluster time storage and operations. # # The primary operation performed on the cluster time is advancing it: # given another cluster time, pick the newer of the two. # # This class provides comparison methods that are used to figure out which # cluster time is newer, and provides diagnostics in lint mode when # the actual time is missing from a cluster time document. # # @api private class ClusterTime < BSON::Document def initialize(elements = nil) super if Lint.enabled? && !self['clusterTime'] raise ArgumentError, 'Creating a cluster time without clusterTime field' end end # Advances the cluster time in the receiver to the cluster time in +other+. # # +other+ can be nil or be behind the cluster time in the receiver; in # these cases the receiver is returned unmodified. If receiver is advanced, # a new ClusterTime object is returned. # # Return value is nil or a ClusterTime instance. def advance(other) if self['clusterTime'] && other['clusterTime'] && other['clusterTime'] > self['clusterTime'] then ClusterTime[other] else self end end # Compares two ClusterTime instances by comparing their timestamps. def <=>(other) if self['clusterTime'] && other['clusterTime'] self['clusterTime'] <=> other['clusterTime'] elsif !self['clusterTime'] raise ArgumentError, "Cannot compare cluster times when receiver is missing clusterTime key: #{inspect}" else other['clusterTime'] raise ArgumentError, "Cannot compare cluster times when other is missing clusterTime key: #{other.inspect}" end end # Older Rubies do not implement other logical operators through <=>. # TODO revise whether these methods are needed when # https://jira.mongodb.org/browse/RUBY-1622 is implemented. def >=(other) (self <=> other) != -1 end def >(other) (self <=> other) == 1 end def <=(other) (self <=> other) != 1 end def <(other) (self <=> other) == -1 end # Compares two ClusterTime instances by comparing their timestamps. def ==(other) if self['clusterTime'] && other['clusterTime'] && self['clusterTime'] == other['clusterTime'] then true else false end end class << self # Converts a BSON::Document to a ClusterTime. # # +doc+ can be nil, in which case nil is returned. def [](doc) if doc.nil? || doc.is_a?(ClusterTime) doc else ClusterTime.new(doc) end end end # This module provides common cluster time tracking behavior. # # @note Although attributes and methods defined in this module are part of # the public API for the classes including this module, the fact that # the methods are defined on this module and not directly on the # including classes is not part of the public API. module Consumer # The cluster time tracked by the object including this module. 
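    #
    # (Hedged illustration, not from the original source; ClusterTime is a
    # private API and the timestamp values are made up.) The tracking below
    # always keeps the newest time seen, mirroring ClusterTime#advance:
    #
    #   older = Mongo::ClusterTime.new('clusterTime' => BSON::Timestamp.new(1, 1))
    #   newer = Mongo::ClusterTime.new('clusterTime' => BSON::Timestamp.new(2, 1))
    #   older.advance(newer) # => a new ClusterTime equal to newer
    #   newer.advance(older) # => newer itself, returned unmodified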
# # @return [ nil | ClusterTime ] The cluster time. # # Changed in version 2.9.0: This attribute became an instance of # ClusterTime, which is a subclass of BSON::Document. # Previously it was an instance of BSON::Document. # # @since 2.5.0 attr_reader :cluster_time # Advance the tracked cluster time document for the object including # this module. # # @param [ BSON::Document ] new_cluster_time The new cluster time document. # # @return [ ClusterTime ] The resulting cluster time. # # @since 2.5.0 def advance_cluster_time(new_cluster_time) if @cluster_time @cluster_time = @cluster_time.advance(new_cluster_time) else @cluster_time = ClusterTime[new_cluster_time] end end end end end mongo-ruby-driver-2.21.3/lib/mongo/collection.rb000066400000000000000000001610641505113246500215450ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'mongo/bulk_write' require 'mongo/collection/view' require 'mongo/collection/helpers' require 'mongo/collection/queryable_encryption' module Mongo # Represents a collection in the database and operations that can directly be # applied to one. # # @since 2.0.0 class Collection extend Forwardable include Retryable include QueryableEncryption include Helpers # The capped option. # # @since 2.1.0 CAPPED = 'capped'.freeze # The ns field constant. # # @since 2.1.0 NS = 'ns'.freeze # @return [ Mongo::Database ] The database the collection resides in. attr_reader :database # @return [ String ] The name of the collection. attr_reader :name # @return [ Hash ] The collection options. attr_reader :options # Get client, cluster, read preference, write concern, and encrypted_fields_map from client. def_delegators :database, :client, :cluster, :encrypted_fields_map # Delegate to the cluster for the next primary. def_delegators :cluster, :next_primary # Options that can be updated on a new Collection instance via the #with method. # # @since 2.1.0 CHANGEABLE_OPTIONS = [ :read, :read_concern, :write, :write_concern ].freeze # Options map to transform create collection options. # # @api private CREATE_COLLECTION_OPTIONS = { :time_series => :timeseries, :expire_after => :expireAfterSeconds, :clustered_index => :clusteredIndex, :change_stream_pre_and_post_images => :changeStreamPreAndPostImages, :encrypted_fields => :encryptedFields, :validator => :validator, :view_on => :viewOn } # Check if a collection is equal to another object. Will check the name and # the database for equality. # # @example Check collection equality. # collection == other # # @param [ Object ] other The object to check. # # @return [ true | false ] If the objects are equal. # # @since 2.0.0 def ==(other) return false unless other.is_a?(Collection) name == other.name && database == other.database && options == other.options end # Instantiate a new collection. # # @example Instantiate a new collection. # Mongo::Collection.new(database, 'test') # # @param [ Mongo::Database ] database The collection's database. 
# @param [ String, Symbol ] name The collection name. # @param [ Hash ] options The collection options. # # @option opts [ true | false ] :capped Create a fixed-sized collection. # @option opts [ Hash ] :change_stream_pre_and_post_images Used to enable # pre- and post-images on the created collection. # The hash may have the following items: # - *:enabled* -- true or false. # @option opts [ Hash ] :clustered_index Create a clustered index. # This option specifies how this collection should be clustered on _id. # The hash may have the following items: # - *:key* -- The clustered index key field. Must be set to { _id: 1 }. # - *:unique* -- Must be set to true. The collection will not accept # inserted or updated documents where the clustered index key value # matches an existing value in the index. # - *:name* -- Optional. A name that uniquely identifies the clustered index. # @option opts [ Hash ] :collation The collation to use. # @option opts [ Hash ] :encrypted_fields Hash describing encrypted fields # for queryable encryption. # @option opts [ Integer ] :expire_after Number indicating # after how many seconds old time-series data should be deleted. # @option opts [ Integer ] :max The maximum number of documents in a # capped collection. The size limit takes precedents over max. # @option opts [ Array ] :pipeline An array of pipeline stages. # A view will be created by applying this pipeline to the view_on # collection or view. # @option options [ Hash ] :read_concern The read concern options hash, # with the following optional keys: # - *:level* -- the read preference level as a symbol; valid values # are *:local*, *:majority*, and *:snapshot* # @option options [ Hash ] :read The read preference options. # The hash may have the following items: # - *:mode* -- read preference specified as a symbol; valid values are # *:primary*, *:primary_preferred*, *:secondary*, *:secondary_preferred* # and *:nearest*. # - *:tag_sets* -- an array of hashes. # - *:local_threshold*. # @option options [ Session ] :session The session to use for the operation. # @option options [ Integer ] :size The size of the capped collection. # @option options [ Integer ] :timeout_ms The operation timeout in milliseconds. # Must be a non-negative integer. An explicit value of 0 means infinite. # The default value is unset which means the value is inherited from # the database or the client. # @option opts [ Hash ] :time_series Create a time-series collection. # The hash may have the following items: # - *:timeField* -- The name of the field which contains the date in each # time series document. # - *:metaField* -- The name of the field which contains metadata in each # time series document. # - *:granularity* -- Set the granularity to the value that is the closest # match to the time span between consecutive incoming measurements. # Possible values are "seconds" (default), "minutes", and "hours". # @option opts [ Hash ] :validator Hash describing document validation # options for the collection. # @option opts [ String ] :view_on The name of the source collection or # view from which to create a view. # @option opts [ Hash ] :write Deprecated. Equivalent to :write_concern # option. # @option opts [ Hash ] :write_concern The write concern options. # Can be :w => Integer|String, :fsync => Boolean, :j => Boolean. 
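    #
    # @example (Hedged construction sketch; database and collection names
    #   are assumed, not from the original source.)
    #   db = Mongo::Client.new([ '127.0.0.1:27017' ], database: 'music').database
    #   artists = Mongo::Collection.new(db, 'artists', capped: true, size: 1024, max: 100)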
# # @since 2.0.0 def initialize(database, name, options = {}) raise Error::InvalidCollectionName.new unless name if options[:write] && options[:write_concern] && options[:write] != options[:write_concern] raise ArgumentError, "If :write and :write_concern are both given, they must be identical: #{options.inspect}" end @database = database @name = name.to_s.freeze @options = options.dup @timeout_ms = options.delete(:timeout_ms) =begin WriteConcern object support if @options[:write_concern].is_a?(WriteConcern::Base) # Cache the instance so that we do not needlessly reconstruct it. @write_concern = @options[:write_concern] @options[:write_concern] = @write_concern.options end =end @options.freeze end # Get the effective read concern for this collection instance. # # If a read concern was provided in collection options, that read concern # will be returned, otherwise the database's effective read concern will # be returned. # # @example Get the read concern. # collection.read_concern # # @return [ Hash ] The read concern. # # @since 2.2.0 def read_concern options[:read_concern] || database.read_concern end # Get the server selector for this collection. # # @example Get the server selector. # collection.server_selector # # @return [ Mongo::ServerSelector ] The server selector. # # @since 2.0.0 def server_selector @server_selector ||= ServerSelector.get(read_preference || database.server_selector) end # Get the effective read preference for this collection. # # If a read preference was provided in collection options, that read # preference will be returned, otherwise the database's effective read # preference will be returned. # # @example Get the read preference. # collection.read_preference # # @return [ Hash ] The read preference. # # @since 2.0.0 def read_preference @read_preference ||= options[:read] || database.read_preference end # Get the effective write concern on this collection. # # If a write concern was provided in collection options, that write # concern will be returned, otherwise the database's effective write # concern will be returned. # # @example Get the write concern. # collection.write_concern # # @return [ Mongo::WriteConcern ] The write concern. # # @since 2.0.0 def write_concern @write_concern ||= WriteConcern.get( options[:write_concern] || options[:write] || database.write_concern) end # Get the write concern to use for an operation on this collection, # given a session. # # If the session is in a transaction and the collection # has an unacknowledged write concern, remove the write # concern's :w option. Otherwise, return the unmodified # write concern. # # @return [ Mongo::WriteConcern ] The write concern. # # @api private def write_concern_with_session(session) wc = write_concern if session && session.in_transaction? if wc && !wc.acknowledged? opts = wc.options.dup opts.delete(:w) return WriteConcern.get(opts) end end wc end # Provides a new collection with either a new read preference, new read # concern or new write concern merged over the existing read preference / # read concern / write concern. # # @example Get a collection with a changed read preference. # collection.with(read: { mode: :primary_preferred }) # @example Get a collection with a changed read concern. # collection.with(read_concern: { level: :majority }) # # @example Get a collection with a changed write concern. # collection.with(write_concern: { w: 3 }) # # @param [ Hash ] new_options The new options to use. # # @option new_options [ Hash ] :read The read preference options. 
# The hash may have the following items: # - *:mode* -- read preference specified as a symbol; valid values are # *:primary*, *:primary_preferred*, *:secondary*, *:secondary_preferred* # and *:nearest*. # - *:tag_sets* -- an array of hashes. # - *:local_threshold*. # @option new_options [ Hash ] :read_concern The read concern options hash, # with the following optional keys: # - *:level* -- the read preference level as a symbol; valid values # are *:local*, *:majority*, and *:snapshot* # @option new_options [ Hash ] :write Deprecated. Equivalent to :write_concern # option. # @option new_options [ Hash ] :write_concern The write concern options. # Can be :w => Integer|String, :fsync => Boolean, :j => Boolean. # # @return [ Mongo::Collection ] A new collection instance. # # @since 2.1.0 def with(new_options) new_options.keys.each do |k| raise Error::UnchangeableCollectionOption.new(k) unless CHANGEABLE_OPTIONS.include?(k) end options = @options.dup if options[:write] && new_options[:write_concern] options.delete(:write) end if options[:write_concern] && new_options[:write] options.delete(:write_concern) end Collection.new(database, name, options.update(new_options)) end # Is the collection capped? # # @example Is the collection capped? # collection.capped? # # @return [ true | false ] If the collection is capped. # # @since 2.0.0 def capped? database.list_collections(filter: { name: name }) .first &.dig('options', CAPPED) || false end # Force the collection to be created in the database. # # @example Force the collection to be created. # collection.create # # @param [ Hash ] opts The options for the create operation. # # @option opts [ true | false ] :capped Create a fixed-sized collection. # @option opts [ Hash ] :change_stream_pre_and_post_images Used to enable # pre- and post-images on the created collection. # The hash may have the following items: # - *:enabled* -- true or false. # @option opts [ Hash ] :clustered_index Create a clustered index. # This option specifies how this collection should be clustered on _id. # The hash may have the following items: # - *:key* -- The clustered index key field. Must be set to { _id: 1 }. # - *:unique* -- Must be set to true. The collection will not accept # inserted or updated documents where the clustered index key value # matches an existing value in the index. # - *:name* -- Optional. A name that uniquely identifies the clustered index. # @option opts [ Hash ] :collation The collation to use when creating the # collection. This option will not be sent to the server when calling # collection methods. # @option opts [ Hash ] :encrypted_fields Hash describing encrypted fields # for queryable encryption. # @option opts [ Integer ] :expire_after Number indicating # after how many seconds old time-series data should be deleted. # @option opts [ Integer ] :max The maximum number of documents in a # capped collection. The size limit takes precedents over max. # @option opts [ Array ] :pipeline An array of pipeline stages. # A view will be created by applying this pipeline to the view_on # collection or view. # @option opts [ Session ] :session The session to use for the operation. # @option opts [ Integer ] :size The size of the capped collection. # @option opts [ Hash ] :time_series Create a time-series collection. # The hash may have the following items: # - *:timeField* -- The name of the field which contains the date in each # time series document. # - *:metaField* -- The name of the field which contains metadata in each # time series document. 
# - *:granularity* -- Set the granularity to the value that is the closest # match to the time span between consecutive incoming measurements. # Possible values are "seconds" (default), "minutes", and "hours". # @option opts [ Hash ] :validator Hash describing document validation # options for the collection. # @option opts [ String ] :view_on The name of the source collection or # view from which to create a view. # @option opts [ Hash ] :write Deprecated. Equivalent to :write_concern # option. # @option opts [ Hash ] :write_concern The write concern options. # Can be :w => Integer|String, :fsync => Boolean, :j => Boolean. # # @return [ Result ] The result of the command. # # @since 2.0.0 def create(opts = {}) # Passing read options to create command causes it to break. # Filter the read options out. Session is also excluded here as it gets # used by the call to with_session and should not be part of the # operation. If it gets passed to the operation it would fail BSON # serialization. # TODO put the list of read options in a class-level constant when # we figure out what the full set of them is. options = Hash[self.options.merge(opts).reject do |key, value| %w(read read_preference read_concern session).include?(key.to_s) end] # Converting Ruby options to server style. CREATE_COLLECTION_OPTIONS.each do |ruby_key, server_key| if options.key?(ruby_key) options[server_key] = options.delete(ruby_key) end end operation = { :create => name }.merge(options) operation.delete(:write) operation.delete(:write_concern) client.send(:with_session, opts) do |session| write_concern = if opts[:write_concern] WriteConcern.get(opts[:write_concern]) else self.write_concern end context = Operation::Context.new( client: client, session: session ) maybe_create_qe_collections(opts[:encrypted_fields], client, session) do |encrypted_fields| Operation::Create.new( selector: operation, db_name: database.name, write_concern: write_concern, session: session, # Note that these are collection options, collation isn't # taken from options passed to the create method. collation: options[:collation] || options['collation'], encrypted_fields: encrypted_fields, validator: options[:validator], ).execute( next_primary(nil, session), context: context ) end end end # Drop the collection. Will also drop all indexes associated with the # collection, as well as associated queryable encryption collections. # # @note An error returned if the collection doesn't exist is suppressed. # # @example Drop the collection. # collection.drop # # @param [ Hash ] opts The options for the drop operation. # # @option opts [ Session ] :session The session to use for the operation. # @option opts [ Hash ] :write_concern The write concern options. # @option opts [ Hash | nil ] :encrypted_fields Encrypted fields hash that # was provided to `create` collection helper. # @option options [ Integer ] :timeout_ms The operation timeout in milliseconds. # Must be a non-negative integer. An explicit value of 0 means infinite. # The default value is unset which means the value is inherited from # the collection or the database or the client. # # @return [ Result ] The result of the command. 
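    #
    # @example (Hedged usage sketch; the encrypted_fields hash shown is a
    #   placeholder.) Dropping with an explicit write concern, and dropping a
    #   queryable-encryption collection by passing the same hash given to #create:
    #   collection.drop(write_concern: { w: 'majority' })
    #   collection.drop(encrypted_fields: { 'fields' => [] })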
# # @since 2.0.0 def drop(opts = {}) client.with_session(opts) do |session| maybe_drop_emm_collections(opts[:encrypted_fields], client, session) do temp_write_concern = write_concern write_concern = if opts[:write_concern] WriteConcern.get(opts[:write_concern]) else temp_write_concern end context = Operation::Context.new( client: client, session: session, operation_timeouts: operation_timeouts(opts) ) operation = Operation::Drop.new({ selector: { :drop => name }, db_name: database.name, write_concern: write_concern, session: session, }) do_drop(operation, session, context) end end end # Find documents in the collection. # # @example Find documents in the collection by a selector. # collection.find(name: 1) # # @example Get all documents in a collection. # collection.find # # @param [ Hash ] filter The filter to use in the find. # @param [ Hash ] options The options for the find. # # @option options [ true | false ] :allow_disk_use When set to true, the # server can write temporary data to disk while executing the find # operation. This option is only available on MongoDB server versions # 4.4 and newer. # @option options [ true | false ] :allow_partial_results Allows the query to get partial # results if some shards are down. # @option options [ Integer ] :batch_size The number of documents returned in each batch # of results from MongoDB. # @option options [ Hash ] :collation The collation to use. # @option options [ Object ] :comment A user-provided comment to attach to # this command. # @option options [ :tailable, :tailable_await ] :cursor_type The type of cursor to use. # @option options [ Integer ] :limit The max number of docs to return from the query. # @option options [ Integer ] :max_time_ms The maximum amount of time to # allow the query to run, in milliseconds. This option is deprecated, use # :timeout_ms instead. # @option options [ Hash ] :modifiers A document containing meta-operators modifying the # output or behavior of a query. # @option options [ true | false ] :no_cursor_timeout The server normally times out idle # cursors after an inactivity period (10 minutes) to prevent excess memory use. # Set this option to prevent that. # @option options [ true | false ] :oplog_replay For internal replication # use only, applications should not set this option. # @option options [ Hash ] :projection The fields to include or exclude from each doc # in the result set. # @option options [ Session ] :session The session to use. # @option options [ Integer ] :skip The number of docs to skip before returning results. # @option options [ Hash ] :sort The key and direction pairs by which the result set # will be sorted. # @option options [ :cursor_lifetime | :iteration ] :timeout_mode How to interpret # :timeout_ms (whether it applies to the lifetime of the cursor, or per # iteration). # @option options [ Integer ] :timeout_ms The operation timeout in milliseconds. # Must be a non-negative integer. An explicit value of 0 means infinite. # The default value is unset which means the value is inherited from # the collection or the database or the client. # @option options [ Hash ] :let Mapping of variables to use in the command. # See the server documentation for details. # # @return [ CollectionView ] The collection view. # # @since 2.0.0 def find(filter = nil, options = {}) View.new(self, filter || {}, options) end # Perform an aggregation on the collection. # # @example Perform an aggregation. 
# collection.aggregate([ { "$group" => { "_id" => "$city", "tpop" => { "$sum" => "$pop" }}} ]) # # @param [ Array ] pipeline The aggregation pipeline. # @param [ Hash ] options The aggregation options. # # @option options [ true | false ] :allow_disk_use Set to true if disk # usage is allowed during the aggregation. # @option options [ Integer ] :batch_size The number of documents to return # per batch. # @option options [ true | false ] :bypass_document_validation Whether or # not to skip document level validation. # @option options [ Hash ] :collation The collation to use. # @option options [ Object ] :comment A user-provided # comment to attach to this command. # @option options [ String ] :hint The index to use for the aggregation. # @option options [ Hash ] :let Mapping of variables to use in the pipeline. # See the server documentation for details. # @option options [ Integer ] :max_time_ms The maximum amount of time to # allow the query to run, in milliseconds. This option is deprecated, use # :timeout_ms instead. # @option options [ Session ] :session The session to use. # @option options [ Integer ] :timeout_ms The operation timeout in milliseconds. # Must be a non-negative integer. An explicit value of 0 means infinite. # The default value is unset which means the value is inherited from # the collection or the database or the client. # # @return [ View::Aggregation ] The aggregation object. # # @since 2.1.0 def aggregate(pipeline, options = {}) View.new(self, {}, options).aggregate(pipeline, options) end # As of version 3.6 of the MongoDB server, a ``$changeStream`` pipeline # stage is supported in the aggregation framework. This stage allows users # to request that notifications are sent for all changes to a particular # collection. # # @example Get change notifications for a given collection. # collection.watch([{ '$match' => { operationType: { '$in' => ['insert', 'replace'] } } }]) # # @param [ Array ] pipeline Optional additional filter operators. # @param [ Hash ] options The change stream options. # # @option options [ String ] :full_document Allowed values: nil, 'default', # 'updateLookup', 'whenAvailable', 'required'. # # The default is to not send a value (i.e. nil), which is equivalent to # 'default'. By default, the change notification for partial updates will # include a delta describing the changes to the document. # # When set to 'updateLookup', the change notification for partial updates # will include both a delta describing the changes to the document as well # as a copy of the entire document that was changed from some time after # the change occurred. # # When set to 'whenAvailable', configures the change stream to return the # post-image of the modified document for replace and update change events # if the post-image for this event is available. # # When set to 'required', the same behavior as 'whenAvailable' except that # an error is raised if the post-image is not available. # @option options [ String ] :full_document_before_change Allowed values: nil, # 'whenAvailable', 'required', 'off'. # # The default is to not send a value (i.e. nil), which is equivalent to 'off'. # # When set to 'whenAvailable', configures the change stream to return the # pre-image of the modified document for replace, update, and delete change # events if it is available. # # When set to 'required', the same behavior as 'whenAvailable' except that # an error is raised if the pre-image is not available. 
# @option options [ BSON::Document, Hash ] :resume_after Specifies the # logical starting point for the new change stream. # @option options [ Integer ] :max_await_time_ms The maximum amount of time # for the server to wait on new documents to satisfy a change stream query. # @option options [ Integer ] :batch_size The number of documents to return # per batch. # @option options [ BSON::Document, Hash ] :collation The collation to use. # @option options [ Session ] :session The session to use. # @option options [ BSON::Timestamp ] :start_at_operation_time Only return # changes that occurred at or after the specified timestamp. Any command run # against the server will return a cluster time that can be used here. # Only recognized by server versions 4.0+. # @option options [ Object ] :comment A user-provided # comment to attach to this command. # @option options [ Boolean ] :show_expanded_events Enables the server to # send the 'expanded' list of change stream events. The list of additional # events included with this flag set are: createIndexes, dropIndexes, # modify, create, shardCollection, reshardCollection, # refineCollectionShardKey. # @option options [ :cursor_lifetime | :iteration ] :timeout_mode How to interpret # :timeout_ms (whether it applies to the lifetime of the cursor, or per # iteration). # @option options [ Integer ] :timeout_ms The operation timeout in milliseconds. # Must be a non-negative integer. An explicit value of 0 means infinite. # The default value is unset which means the value is inherited from # the collection or the database or the client. # # @note A change stream only allows 'majority' read concern. # @note This helper method is preferable to running a raw aggregation with # a $changeStream stage, for the purpose of supporting resumability. # # @return [ ChangeStream ] The change stream object. # # @since 2.5.0 def watch(pipeline = [], options = {}) view_options = options.dup view_options[:cursor_type] = :tailable_await if options[:max_await_time_ms] View::ChangeStream.new(View.new(self, {}, view_options), pipeline, nil, options) end # Gets an estimated number of matching documents in the collection. # # @example Get the count. # collection.count(name: 1) # # @param [ Hash ] filter A filter for matching documents. # @param [ Hash ] options The count options. # # @option options [ Hash ] :hint The index to use. # @option options [ Integer ] :limit The maximum number of documents to count. # @option options [ Integer ] :max_time_ms The maximum amount of time to # allow the query to run, in milliseconds. This option is deprecated, use # :timeout_ms instead. # @option options [ Integer ] :skip The number of documents to skip before counting. # @option options [ Hash ] :read The read preference options. # @option options [ Hash ] :collation The collation to use. # @option options [ Session ] :session The session to use. # @option options [ Object ] :comment A user-provided # comment to attach to this command. # @option options [ Integer ] :timeout_ms The operation timeout in milliseconds. # Must be a non-negative integer. An explicit value of 0 means infinite. # The default value is unset which means the value is inherited from # the collection or the database or the client. # # @return [ Integer ] The document count. # # @since 2.1.0 # # @deprecated Use #count_documents or estimated_document_count instead. 
However, note that the # following operators will need to be substituted when switching to #count_documents: # * $where should be replaced with $expr (only works on 3.6+) # * $near should be replaced with $geoWithin with $center # * $nearSphere should be replaced with $geoWithin with $centerSphere def count(filter = nil, options = {}) View.new(self, filter || {}, options).count(options) end # Gets the number of documents matching the query. Unlike the deprecated # #count method, this will return the exact number of documents matching # the filter (or exact number of documents in the collection, if no filter # is provided) rather than an estimate. # # Use #estimated_document_count to retrieve an estimate of the number # of documents in the collection using the collection metadata. # # @param [ Hash ] filter A filter for matching documents. # @param [ Hash ] options Options for the operation. # # @option options :skip [ Integer ] The number of documents to skip. # @option options :hint [ Hash ] Override default index selection and force # MongoDB to use a specific index for the query. Requires server version 3.6+. # @option options :limit [ Integer ] Max number of docs to count. # @option options :max_time_ms [ Integer ] The maximum amount of time to allow the # command to run. # @option options :read [ Hash ] The read preference options. # @option options :collation [ Hash ] The collation to use. # @option options [ Session ] :session The session to use. # @option options [ Object ] :comment A user-provided # comment to attach to this command. # @option options [ Integer ] :timeout_ms The operation timeout in milliseconds. # Must be a non-negative integer. An explicit value of 0 means infinite. # The default value is unset which means the value is inherited from # the collection or the database or the client. # # @return [ Integer ] The document count. # # @since 2.6.0 def count_documents(filter = {}, options = {}) View.new(self, filter, options).count_documents(options) end # Gets an estimate of the number of documents in the collection using the # collection metadata. # # Use #count_documents to retrieve the exact number of documents in the # collection, or to count documents matching a filter. # # @param [ Hash ] options Options for the operation. # # @option options :max_time_ms [ Integer ] The maximum amount of time to allow # the command to run for on the server. # @option options [ Hash ] :read The read preference options. # @option options [ Object ] :comment A user-provided # comment to attach to this command. # @option options [ Integer ] :timeout_ms The operation timeout in milliseconds. # Must be a non-negative integer. An explicit value of 0 means infinite. # The default value is unset which means the value is inherited from # the collection or the database or the client. # # @return [ Integer ] The document count. # # @since 2.6.0 def estimated_document_count(options = {}) View.new(self, {}, options).estimated_document_count(options) end # Get a list of distinct values for a specific field. # # @example Get the distinct values. # collection.distinct('name') # # @param [ Symbol, String ] field_name The name of the field. # @param [ Hash ] filter The documents from which to retrieve the distinct values. # @param [ Hash ] options The distinct command options. # # @option options [ Integer ] :max_time_ms The maximum amount of time to # allow the query to run, in milliseconds. This option is deprecated, use # :timeout_ms instead. 
# @option options [ Hash ] :read The read preference options. # @option options [ Hash ] :collation The collation to use. # @option options [ Session ] :session The session to use. # @option options [ Integer ] :timeout_ms The operation timeout in milliseconds. # Must be a non-negative integer. An explicit value of 0 means infinite. # The default value is unset which means the value is inherited from # the collection or the database or the client. # # @return [ Array ] The list of distinct values. # # @since 2.1.0 def distinct(field_name, filter = nil, options = {}) View.new(self, filter || {}, options).distinct(field_name, options) end # Get a view of all indexes for this collection. Can be iterated or has # more operations. # # @example Get the index view. # collection.indexes # # @param [ Hash ] options Options for getting a list of all indexes. # # @option options [ Session ] :session The session to use. # # @return [ Index::View ] The index view. # # @since 2.0.0 def indexes(options = {}) Index::View.new(self, options) end # Get a view of all search indexes for this collection. Can be iterated or # operated on directly. If id or name are given, the iterator will return # only the indicated index. For all other operations, id and name are # ignored. # # @note Only one of id or name may be given; it is an error to specify both, # although both may be omitted safely. # # @param [ Hash ] options The options to use to configure the view. # # @option options [ String ] :id The id of the specific index to query (optional) # @option options [ String ] :name The name of the specific index to query (optional) # @option options [ Hash ] :aggregate The options hash to pass to the # aggregate command (optional) # # @return [ SearchIndex::View ] The search index view. # # @since 2.0.0 def search_indexes(options = {}) SearchIndex::View.new(self, options) end # Get a pretty printed string inspection for the collection. # # @example Inspect the collection. # collection.inspect # # @return [ String ] The collection inspection. # # @since 2.0.0 def inspect "#<Mongo::Collection:0x#{object_id} namespace=#{namespace}>" end # Insert a single document into the collection. # # @example Insert a document into the collection. # collection.insert_one({ name: 'test' }) # # @param [ Hash ] document The document to insert. # @param [ Hash ] opts The insert options. # # @option opts [ true | false ] :bypass_document_validation Whether or # not to skip document level validation. # @option opts [ Object ] :comment A user-provided comment to attach to # this command. # @option opts [ Session ] :session The session to use for the operation. # @option options [ Integer ] :timeout_ms The operation timeout in milliseconds. # Must be a non-negative integer. An explicit value of 0 means infinite. # The default value is unset which means the value is inherited from # the collection or the database or the client. # @option opts [ Hash ] :write_concern The write concern options. # Can be :w => Integer, :fsync => Boolean, :j => Boolean. # # @return [ Result ] The database response wrapper. # # @since 2.0.0 def insert_one(document, opts = {}) QueryCache.clear_namespace(namespace) client.with_session(opts) do |session| write_concern = if opts[:write_concern] WriteConcern.get(opts[:write_concern]) else write_concern_with_session(session) end if document.nil?
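          # Guard clause: a nil document is a programmer error, so fail fast
          # before building the Insert operation below; an empty document ({})
          # is still a valid insert and proceeds to the server.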
raise ArgumentError, "Document to be inserted cannot be nil" end context = Operation::Context.new( client: client, session: session, operation_timeouts: operation_timeouts(opts) ) write_with_retry(write_concern, context: context) do |connection, txn_num, context| Operation::Insert.new( :documents => [ document ], :db_name => database.name, :coll_name => name, :write_concern => write_concern, :bypass_document_validation => !!opts[:bypass_document_validation], :options => opts, :id_generator => client.options[:id_generator], :session => session, :txn_num => txn_num, :comment => opts[:comment] ).execute_with_connection(connection, context: context) end end end # Insert the provided documents into the collection. # # @example Insert documents into the collection. # collection.insert_many([{ name: 'test' }]) # # @param [ Enumerable ] documents The documents to insert. # @param [ Hash ] options The insert options. # # @option options [ true | false ] :bypass_document_validation Whether or # not to skip document level validation. # @option options [ Object ] :comment A user-provided comment to attach to # this command. # @option options [ true | false ] :ordered Whether the operations # should be executed in order. # @option options [ Session ] :session The session to use for the operation. # @option options [ Integer ] :timeout_ms The operation timeout in milliseconds. # Must be a non-negative integer. An explicit value of 0 means infinite. # The default value is unset which means the value is inherited from # the collection or the database or the client. # @option options [ Hash ] :write_concern The write concern options. # Can be :w => Integer, :fsync => Boolean, :j => Boolean. # # @return [ Result ] The database response wrapper. # # @since 2.0.0 def insert_many(documents, options = {}) QueryCache.clear_namespace(namespace) inserts = documents.map{ |doc| { :insert_one => doc }} bulk_write(inserts, options) end # Execute a batch of bulk write operations. # # @example Execute a bulk write. # collection.bulk_write(operations, options) # # @param [ Enumerable ] requests The bulk write requests. # @param [ Hash ] options The options. # # @option options [ true | false ] :ordered Whether the operations # should be executed in order. # @option options [ Hash ] :write_concern The write concern options. # Can be :w => Integer, :fsync => Boolean, :j => Boolean. # @option options [ true | false ] :bypass_document_validation Whether or # not to skip document level validation. # @option options [ Session ] :session The session to use for the set of operations. # @option options [ Integer ] :timeout_ms The operation timeout in milliseconds. # Must be a non-negative integer. An explicit value of 0 means infinite. # The default value is unset which means the value is inherited from # the collection or the database or the client. # @option options [ Hash ] :let Mapping of variables to use in the command. # See the server documentation for details. # # @return [ BulkWrite::Result ] The result of the operation. # # @since 2.0.0 def bulk_write(requests, options = {}) BulkWrite.new(self, requests, options).execute end # Remove a document from the collection. # # @example Remove a single document from the collection. # collection.delete_one # # @param [ Hash ] filter The filter to use. # @param [ Hash ] options The options. # # @option options [ Hash ] :collation The collation to use. # @option options [ Session ] :session The session to use. 
# @option options [ Hash | String ] :hint The index to use for this operation. # May be specified as a Hash (e.g. { _id: 1 }) or a String (e.g. "_id_"). # @option options [ Integer ] :timeout_ms The operation timeout in milliseconds. # Must be a non-negative integer. An explicit value of 0 means infinite. # The default value is unset which means the value is inherited from # the collection or the database or the client. # @option options [ Hash ] :let Mapping of variables to use in the command. # See the server documentation for details. # # @return [ Result ] The response from the database. # # @since 2.1.0 def delete_one(filter = nil, options = {}) find(filter, options).delete_one(options) end # Remove documents from the collection. # # @example Remove multiple documents from the collection. # collection.delete_many # # @param [ Hash ] filter The filter to use. # @param [ Hash ] options The options. # # @option options [ Hash ] :collation The collation to use. # @option options [ Session ] :session The session to use. # @option options [ Hash | String ] :hint The index to use for this operation. # May be specified as a Hash (e.g. { _id: 1 }) or a String (e.g. "_id_"). # @option options [ Integer ] :timeout_ms The operation timeout in milliseconds. # Must be a non-negative integer. An explicit value of 0 means infinite. # The default value is unset which means the value is inherited from # the collection or the database or the client. # @option options [ Hash ] :let Mapping of variables to use in the command. # See the server documentation for details. # # @return [ Result ] The response from the database. # # @since 2.1.0 def delete_many(filter = nil, options = {}) find(filter, options).delete_many(options) end # Execute a parallel scan on the collection view. # # Returns a list of up to cursor_count cursors that can be iterated concurrently. # As long as the collection is not modified during scanning, each document appears once # in one of the cursors' result sets. # # @example Execute a parallel collection scan. # collection.parallel_scan(2) # # @param [ Integer ] cursor_count The max number of cursors to return. # @param [ Hash ] options The parallel scan command options. # # @option options [ Integer ] :max_time_ms The maximum amount of time to # allow the query to run, in milliseconds. This option is deprecated, use # :timeout_ms instead. # @option options [ Session ] :session The session to use. # @option options [ :cursor_lifetime | :iteration ] :timeout_mode How to interpret # :timeout_ms (whether it applies to the lifetime of the cursor, or per # iteration). # @option options [ Integer ] :timeout_ms The operation timeout in milliseconds. # Must be a non-negative integer. An explicit value of 0 means infinite. # The default value is unset which means the value is inherited from # the collection or the database or the client. # # @return [ Array ] An array of cursors. # # @since 2.1 def parallel_scan(cursor_count, options = {}) find({}, options).parallel_scan(cursor_count, options) end # Replaces a single document in the collection with the new document. # # @example Replace a single document. # collection.replace_one({ name: 'test' }, { name: 'test1' }) # # @param [ Hash ] filter The filter to use. # @param [ Hash ] replacement The replacement document.. # @param [ Hash ] options The options. # # @option options [ true | false ] :upsert Whether to upsert if the # document doesn't exist. 
# @option options [ true | false ] :bypass_document_validation Whether or # not to skip document level validation. # @option options [ Hash ] :collation The collation to use. # @option options [ Session ] :session The session to use. # @option options [ Integer ] :timeout_ms The operation timeout in milliseconds. # Must be a non-negative integer. An explicit value of 0 means infinite. # The default value is unset which means the value is inherited from # the collection or the database or the client. # @option options [ Hash | String ] :hint The index to use for this operation. # May be specified as a Hash (e.g. { _id: 1 }) or a String (e.g. "_id_"). # @option options [ Hash ] :let Mapping of variables to use in the command. # See the server documentation for details. # # @return [ Result ] The response from the database. # # @since 2.1.0 def replace_one(filter, replacement, options = {}) find(filter, options).replace_one(replacement, options) end # Update documents in the collection. # # @example Update multiple documents in the collection. # collection.update_many({ name: 'test'}, '$set' => { name: 'test1' }) # # @param [ Hash ] filter The filter to use. # @param [ Hash | Array ] update The update document or pipeline. # @param [ Hash ] options The options. # # @option options [ true | false ] :upsert Whether to upsert if the # document doesn't exist. # @option options [ true | false ] :bypass_document_validation Whether or # not to skip document level validation. # @option options [ Hash ] :collation The collation to use. # @option options [ Array ] :array_filters A set of filters specifying to which array elements # an update should apply. # @option options [ Session ] :session The session to use. # @option options [ Integer ] :timeout_ms The operation timeout in milliseconds. # Must be a non-negative integer. An explicit value of 0 means infinite. # The default value is unset which means the value is inherited from # the collection or the database or the client. # @option options [ Hash | String ] :hint The index to use for this operation. # May be specified as a Hash (e.g. { _id: 1 }) or a String (e.g. "_id_"). # @option options [ Hash ] :let Mapping of variables to use in the command. # See the server documentation for details. # # @return [ Result ] The response from the database. # # @since 2.1.0 def update_many(filter, update, options = {}) find(filter, options).update_many(update, options) end # Update a single document in the collection. # # @example Update a single document in the collection. # collection.update_one({ name: 'test'}, '$set' => { name: 'test1'}) # # @param [ Hash ] filter The filter to use. # @param [ Hash | Array ] update The update document or pipeline. # @param [ Hash ] options The options. # # @option options [ true | false ] :upsert Whether to upsert if the # document doesn't exist. # @option options [ true | false ] :bypass_document_validation Whether or # not to skip document level validation. # @option options [ Hash ] :collation The collation to use. # @option options [ Array ] :array_filters A set of filters specifying to which array elements # an update should apply. # @option options [ Session ] :session The session to use. # @option options [ Integer ] :timeout_ms The operation timeout in milliseconds. # Must be a non-negative integer. An explicit value of 0 means infinite. # The default value is unset which means the value is inherited from # the collection or the database or the client. 
# @option options [ Hash | String ] :hint The index to use for this operation. # May be specified as a Hash (e.g. { _id: 1 }) or a String (e.g. "_id_"). # @option options [ Hash ] :let Mapping of variables to use in the command. # See the server documentation for details. # # @return [ Result ] The response from the database. # # @since 2.1.0 def update_one(filter, update, options = {}) find(filter, options).update_one(update, options) end # Finds a single document in the database via findAndModify and deletes # it, returning the original document. # # @example Find one document and delete it. # collection.find_one_and_delete(name: 'test') # # @param [ Hash ] filter The filter to use. # @param [ Hash ] options The options. # # @option options [ Integer ] :max_time_ms The maximum amount of time to # allow the query to run, in milliseconds. This option is deprecated, use # :timeout_ms instead. # @option options [ Hash ] :projection The fields to include or exclude in the returned doc. # @option options [ Hash ] :sort The key and direction pairs by which the result set # will be sorted. # @option options [ Hash ] :write_concern The write concern options. # Defaults to the collection's write concern. # @option options [ Hash ] :collation The collation to use. # @option options [ Session ] :session The session to use. # @option options [ Integer ] :timeout_ms The operation timeout in milliseconds. # Must be a non-negative integer. An explicit value of 0 means infinite. # The default value is unset which means the value is inherited from # the collection or the database or the client. # @option options [ Hash | String ] :hint The index to use for this operation. # May be specified as a Hash (e.g. { _id: 1 }) or a String (e.g. "_id_"). # @option options [ Hash ] :let Mapping of variables to use in the command. # See the server documentation for details. # # @return [ BSON::Document, nil ] The document, if found. # # @since 2.1.0 def find_one_and_delete(filter, options = {}) find(filter, options).find_one_and_delete(options) end # Finds a single document via findAndModify and updates it, returning the original doc unless # otherwise specified. # # @example Find a document and update it, returning the original. # collection.find_one_and_update({ name: 'test' }, { "$set" => { name: 'test1' }}) # # @example Find a document and update it, returning the updated document. # collection.find_one_and_update({ name: 'test' }, { "$set" => { name: 'test1' }}, :return_document => :after) # # @param [ Hash ] filter The filter to use. # @param [ Hash | Array ] update The update document or pipeline. # @param [ Hash ] options The options. # # @option options [ Integer ] :max_time_ms The maximum amount of time to allow the command # to run in milliseconds. # @option options [ Hash ] :projection The fields to include or exclude in the returned doc. # @option options [ Hash ] :sort The key and direction pairs by which the result set # will be sorted. # @option options [ Symbol ] :return_document Either :before or :after. # @option options [ true | false ] :upsert Whether to upsert if the document doesn't exist. # @option options [ true | false ] :bypass_document_validation Whether or # not to skip document level validation. # @option options [ Hash ] :write_concern The write concern options. # Defaults to the collection's write concern. # @option options [ Hash ] :collation The collation to use. # @option options [ Array ] :array_filters A set of filters specifying to which array elements # an update should apply. 
# @option options [ Session ] :session The session to use. # @option options [ Hash | String ] :hint The index to use for this operation. # May be specified as a Hash (e.g. { _id: 1 }) or a String (e.g. "_id_"). # @option options [ Hash ] :let Mapping of variables to use in the command. # See the server documentation for details. # @option options [ Integer ] :timeout_ms The operation timeout in milliseconds. # Must be a non-negative integer. An explicit value of 0 means infinite. # The default value is unset which means the value is inherited from # the collection or the database or the client. # # @return [ BSON::Document ] The document. # # @since 2.1.0 def find_one_and_update(filter, update, options = {}) find(filter, options).find_one_and_update(update, options) end # Finds a single document and replaces it, returning the original doc unless # otherwise specified. # # @example Find a document and replace it, returning the original. # collection.find_one_and_replace({ name: 'test' }, { name: 'test1' }) # # @example Find a document and replace it, returning the new document. # collection.find_one_and_replace({ name: 'test' }, { name: 'test1' }, :return_document => :after) # # @param [ Hash ] filter The filter to use. # @param [ BSON::Document ] replacement The replacement document. # @param [ Hash ] options The options. # # @option options [ Integer ] :max_time_ms The maximum amount of time to allow the command # to run in milliseconds. # @option options [ Hash ] :projection The fields to include or exclude in the returned doc. # @option options [ Hash ] :sort The key and direction pairs by which the result set # will be sorted. # @option options [ Symbol ] :return_document Either :before or :after. # @option options [ true | false ] :upsert Whether to upsert if the document doesn't exist. # @option options [ true | false ] :bypass_document_validation Whether or # not to skip document level validation. # @option options [ Hash ] :write_concern The write concern options. # Defaults to the collection's write concern. # @option options [ Hash ] :collation The collation to use. # @option options [ Session ] :session The session to use. # @option options [ Hash | String ] :hint The index to use for this operation. # May be specified as a Hash (e.g. { _id: 1 }) or a String (e.g. "_id_"). # @option options [ Integer ] :timeout_ms The operation timeout in milliseconds. # Must be a non-negative integer. An explicit value of 0 means infinite. # The default value is unset which means the value is inherited from # the collection or the database or the client. # @option options [ Hash ] :let Mapping of variables to use in the command. # See the server documentation for details. # # @return [ BSON::Document ] The document. # # @since 2.1.0 def find_one_and_replace(filter, replacement, options = {}) find(filter, options).find_one_and_update(replacement, options) end # Get the fully qualified namespace of the collection. # # @example Get the fully qualified namespace. # collection.namespace # # @return [ String ] The collection namespace. # # @since 2.0.0 def namespace "#{database.name}.#{name}" end # Whether the collection is a system collection. # # @return [ Boolean ] Whether the system is a system collection. # # @api private def system_collection? name.start_with?('system.') end # @return [ Integer | nil ] Operation timeout that is for this database or # for the corresponding client. 
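    #
    # (Hedged note, not from the original source.) The effective timeout is
    # resolved as a fallback chain: an operation-level :timeout_ms (see
    # #operation_timeouts below) wins, then the collection's own value, then
    # the database's, then the client's. For example, a collection obtained
    # via client['events', timeout_ms: 500] (a hypothetical name) would
    # report 500 here regardless of the client-level default.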
    #
    # @api private
    def timeout_ms
      @timeout_ms || database.timeout_ms
    end

    # @return [ Hash ] timeout_ms value set on the operation level (if any),
    #   and/or timeout_ms that is set on collection/database/client level (if any).
    #
    # @api private
    def operation_timeouts(opts = {})
      # TODO: We should re-evaluate if we need two timeouts separately.
      {}.tap do |result|
        if opts[:timeout_ms].nil?
          result[:inherited_timeout_ms] = timeout_ms
        else
          result[:operation_timeout_ms] = opts.delete(:timeout_ms)
        end
      end
    end
  end
end
mongo-ruby-driver-2.21.3/lib/mongo/collection/helpers.rb
# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2014-2022 MongoDB Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

module Mongo
  class Collection
    # This module contains helper methods for the collection class.
    #
    # @api private
    module Helpers
      # Executes the drop operation and ignores the NamespaceNotFound error.
      #
      # @param [ Operation::Drop ] operation Drop operation to be executed.
      # @param [ Session ] session Session to be used for execution.
      # @param [ Operation::Context ] context Context to use for execution.
      #
      # @return [ Result ] The result of the execution.
      def do_drop(operation, session, context)
        operation.execute(next_primary(nil, session), context: context)
      rescue Error::OperationFailure::Family => ex
        # NamespaceNotFound
        if ex.code == 26 || ex.code.nil? && ex.message =~ /ns not found/
          false
        else
          raise
        end
      end
    end
  end
end
mongo-ruby-driver-2.21.3/lib/mongo/collection/queryable_encryption.rb
# frozen_string_literal: true

# Copyright (C) 2014-2022 MongoDB Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

module Mongo
  class Collection
    # This module contains methods for creating and dropping auxiliary collections
    # for queryable encryption.
    #
    # @api private
    module QueryableEncryption
      # The minimum wire version for QE2 support
      QE2_MIN_WIRE_VERSION = 21

      # Creates auxiliary collections and indices for queryable encryption if necessary.
      #
      # @param [ Hash | nil ] encrypted_fields Encrypted fields hash that was
      #   provided to `create` collection helper.
      # @param [ Client ] client Mongo client to be used to create auxiliary collections.
      # @param [ Session ] session Session to be used to create auxiliary collections.
      #
      # @return [ Result ] The result of the provided block.
def maybe_create_qe_collections(encrypted_fields, client, session) encrypted_fields = encrypted_fields_from(encrypted_fields) return yield if encrypted_fields.empty? server = next_primary(nil, session) context = Operation::Context.new(client: client, session: session) server.with_connection do |connection| check_wire_version!(connection) emm_collections(encrypted_fields).each do |coll| create_operation_for(coll) .execute_with_connection(connection, context: context) end end yield(encrypted_fields).tap do |result| indexes.create_one(__safeContent__: 1) if result end end # Drops auxiliary collections and indices for queryable encryption if necessary. # # @param [ Hash | nil ] encrypted_fields Encrypted fields hash that was # provided to `create` collection helper. # @param [ Client ] client Mongo client to be used to drop auxiliary collections. # @param [ Session ] session Session to be used to drop auxiliary collections. # # @return [ Result ] The result of provided block. def maybe_drop_emm_collections(encrypted_fields, client, session) encrypted_fields = if encrypted_fields encrypted_fields elsif encrypted_fields_map encrypted_fields_for_drop_from_map else {} end return yield if encrypted_fields.empty? emm_collections(encrypted_fields).each do |coll| context = Operation::Context.new(client: client, session: session) operation = Operation::Drop.new( selector: { drop: coll }, db_name: database.name, session: session ) do_drop(operation, session, context) end yield end private # Checks if names for auxiliary collections are set and returns them, # otherwise returns default names. # # @param [ Hash ] encrypted_fields Encrypted fields hash. # # @return [ Array ] Array of auxiliary collections names. def emm_collections(encrypted_fields) [ encrypted_fields['escCollection'] || "enxcol_.#{name}.esc", encrypted_fields['ecocCollection'] || "enxcol_.#{name}.ecoc", ] end # Creating encrypted collections is only supported on 7.0.0 and later # (wire version 21+). # # @param [ Mongo::Connection ] connection The connection to check # the wire version of. # # @raise [ Mongo::Error ] if the wire version is not # recent enough def check_wire_version!(connection) return unless connection.description.max_wire_version < QE2_MIN_WIRE_VERSION raise Mongo::Error, 'Driver support of Queryable Encryption is incompatible with server. ' \ 'Upgrade server to use Queryable Encryption.' end # Tries to return the encrypted fields from the argument. If the argument # is nil, tries to find the encrypted fields from the # encrypted_fields_map. # # @param [ Hash | nil ] fields the encrypted fields # # @return [ Hash ] the encrypted fields def encrypted_fields_from(fields) fields || (encrypted_fields_map && encrypted_fields_map[namespace]) || {} end # Tries to return the encrypted fields from the {{encrypted_fields_map}} # value, for the current namespace. # # @return [ Hash | nil ] the encrypted fields, if found def encrypted_fields_for_drop_from_map encrypted_fields_map[namespace] || database.list_collections(filter: { name: name }) .first &.fetch(:options, {}) &.fetch(:encryptedFields, {}) || {} end # Returns a new create operation for the given collection. # # @param [ String ] coll the name of the collection to create. # # @return [ Operation::Create ] the new create operation. 
      def create_operation_for(coll)
        Operation::Create.new(
          selector: {
            create: coll,
            clusteredIndex: {
              key: { _id: 1 },
              unique: true
            }
          },
          db_name: database.name
        )
      end
    end
  end
end
mongo-ruby-driver-2.21.3/lib/mongo/collection/view.rb
# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2014-2020 MongoDB Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

require 'mongo/collection/view/builder'
require 'mongo/collection/view/immutable'
require 'mongo/collection/view/iterable'
require 'mongo/collection/view/explainable'
require 'mongo/collection/view/aggregation'
require 'mongo/collection/view/change_stream'
require 'mongo/collection/view/map_reduce'
require 'mongo/collection/view/readable'
require 'mongo/collection/view/writable'

module Mongo
  class Collection

    # Representation of a query and options producing a result set of documents.
    #
    # A +View+ can be modified using helpers. Helpers can be chained,
    # as each one returns a +View+ if arguments are provided.
    #
    # The query message is sent to the server when a "terminator" is called.
    # For example, when #each is called on a +View+, a Cursor object is
    # created, which then sends the query to the server.
    #
    # A +View+ is not created directly by a user. Rather, +Collection+
    # creates a +View+ when a CRUD operation is called and returns it to
    # the user to interact with.
    #
    # @note The +View+ API is semipublic.
    # @api semipublic
    class View
      extend Forwardable
      include Enumerable
      include Immutable
      include Iterable
      include Readable
      include Explainable
      include Writable

      # @return [ Collection ] The +Collection+ to query.
      attr_reader :collection

      # @return [ Hash ] The query filter.
      attr_reader :filter

      # Delegate necessary operations to the collection.
      def_delegators :collection,
                     :client,
                     :cluster,
                     :database,
                     :nro_write_with_retry,
                     :read_with_retry,
                     :read_with_retry_cursor,
                     :write_with_retry,
                     :write_concern_with_session

      # Delegate to the cluster for the next primary.
      def_delegators :cluster, :next_primary

      alias :selector :filter

      # @return [ Integer | nil ] The timeout_ms value that was passed as an
      #   option to the view.
      #
      # @api private
      attr_reader :operation_timeout_ms

      # Compare two +View+ objects.
      #
      # @example Compare the view with another object.
      #   view == other
      #
      # @return [ true, false ] Equal if collection, filter, and options of two
      #   +View+ match.
      #
      # @since 2.0.0
      def ==(other)
        return false unless other.is_a?(View)
        collection == other.collection &&
            filter == other.filter &&
            options == other.options
      end
      alias_method :eql?, :==

      # A hash value for the +View+ composed of the collection namespace,
      # hash of the options and hash of the filter.
      #
      # @example Get the hash value.
      #   view.hash
      #
      # @return [ Integer ] A hash value of the +View+ object.
      #
      # @since 2.0.0
      def hash
        [ collection.namespace, options.hash, filter.hash ].hash
      end

      # Creates a new +View+.
      #
      # @example Find all users named Emily.
# View.new(collection, {:name => 'Emily'}) # # @example Find all users named Emily skipping 5 and returning 10. # View.new(collection, {:name => 'Emily'}, :skip => 5, :limit => 10) # # @example Find all users named Emily using a specific read preference. # View.new(collection, {:name => 'Emily'}, :read => :secondary_preferred) # # @param [ Collection ] collection The +Collection+ to query. # @param [ Hash ] filter The query filter. # @param [ Hash ] options The additional query options. # # @option options [ true, false ] :allow_disk_use When set to true, the # server can write temporary data to disk while executing the find # operation. This option is only available on MongoDB server versions # 4.4 and newer. # @option options [ Integer ] :batch_size The number of documents to # return in each response from MongoDB. # @option options [ Hash ] :collation The collation to use. # @option options [ String ] :comment Associate a comment with the query. # @option options [ :tailable, :tailable_await ] :cursor_type The type of cursor to use. # @option options [ Hash ] :explain Execute an explain with the provided # explain options (known options are :verbose and :verbosity) rather # than a find. # @option options [ Hash ] :hint Override the default index selection and # force MongoDB to use a specific index for the query. # @option options [ Integer ] :limit Max number of documents to return. # @option options [ Integer ] :max_scan Constrain the query to only scan # the specified number of documents. Use to prevent queries from # running for too long. Deprecated as of MongoDB server version 4.0. # @option options [ Hash ] :projection The fields to include or exclude # in the returned documents. # @option options [ Hash ] :read The read preference to use for the # query. If none is provided, the collection's default read preference # is used. # @option options [ Hash ] :read_concern The read concern to use for # the query. # @option options [ true | false ] :show_disk_loc Return disk location # info as a field in each doc. # @option options [ Integer ] :skip The number of documents to skip. # @option options [ true | false ] :snapshot Prevents returning a # document more than once. Deprecated as of MongoDB server version 4.0. # @option options [ Hash ] :sort The key and direction pairs used to sort # the results. # @option options [ :cursor_lifetime | :iteration ] :timeout_mode How to interpret # :timeout_ms (whether it applies to the lifetime of the cursor, or per # iteration). # @option options [ Integer ] :timeout_ms The operation timeout in milliseconds. # Must be a non-negative integer. An explicit value of 0 means infinite. # The default value is unset which means the value is inherited from # the collection or the database or the client. # # @since 2.0.0 def initialize(collection, filter = {}, options = {}) validate_doc!(filter) filter = BSON::Document.new(filter) options = BSON::Document.new(options) @collection = collection @operation_timeout_ms = options.delete(:timeout_ms) validate_timeout_mode!(options) # This is when users pass $query in filter and other modifiers # alongside? query = filter.delete(:$query) # This makes modifiers contain the filter if filter wasn't # given via $query but as top-level keys, presumably # downstream code ignores non-modifier keys in the modifiers? 
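        # For illustration (hypothetical legacy input): a filter such as
        #   { :$query => { name: 'Emily' }, :$orderby => { name: 1 } }
        # has its :$query extracted above, while the remaining top-level
        # keys ride along into the modifiers merged below.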
        modifiers = filter.merge(options.delete(:modifiers) || {})
        @filter = (query || filter).freeze
        @options = Operation::Find::Builder::Modifiers.map_driver_options(modifiers).merge!(options).freeze
      end

      # The timeout_ms value to use for this operation; either specified as an
      # option to the view, or inherited from the collection.
      #
      # @return [ Integer | nil ] the timeout_ms for this operation
      def timeout_ms
        operation_timeout_ms || collection.timeout_ms
      end

      # Get a human-readable string representation of +View+.
      #
      # @example Get the inspection.
      #   view.inspect
      #
      # @return [ String ] A string representation of a +View+ instance.
      #
      # @since 2.0.0
      def inspect
        "#<Mongo::Collection::View:0x#{object_id} namespace='#{collection.namespace}'" +
          " @filter=#{filter.to_s} @options=#{options.to_s}>"
      end

      # Get the write concern on this +View+.
      #
      # @example Get the write concern.
      #   view.write_concern
      #
      # @return [ Mongo::WriteConcern ] The write concern.
      #
      # @since 2.0.0
      def write_concern
        WriteConcern.get(options[:write_concern] || options[:write] || collection.write_concern)
      end

      # @return [ Hash ] timeout_ms value set on the operation level (if any),
      #   and/or timeout_ms that is set on collection/database/client level (if any).
      #
      # @api private
      def operation_timeouts(opts = {})
        {}.tap do |result|
          if opts[:timeout_ms] || operation_timeout_ms
            result[:operation_timeout_ms] = opts[:timeout_ms] || operation_timeout_ms
          else
            result[:inherited_timeout_ms] = collection.timeout_ms
          end
        end
      end

      private

      def initialize_copy(other)
        @collection = other.collection
        @options = other.options.dup
        @filter = other.filter.dup
      end

      def new(options)
        options = options.merge(timeout_ms: operation_timeout_ms) if operation_timeout_ms
        View.new(collection, filter, options)
      end

      def view; self; end

      def with_session(opts = {}, &block)
        client.with_session(@options.merge(opts), &block)
      end
    end
  end
end
mongo-ruby-driver-2.21.3/lib/mongo/collection/view/aggregation.rb
# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2014-2020 MongoDB Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

require 'mongo/collection/view/aggregation/behavior'

module Mongo
  class Collection
    class View

      # Provides behavior around an aggregation pipeline on a collection view.
      #
      # @since 2.0.0
      class Aggregation
        include Behavior

        # @return [ Array ] pipeline The aggregation pipeline.
        attr_reader :pipeline

        # Initialize the aggregation for the provided collection view, pipeline
        # and options.
        #
        # @example Create the new aggregation view.
        #   Aggregation.new(view, pipeline)
        #
        # @param [ Collection::View ] view The collection view.
        # @param [ Array ] pipeline The pipeline of operations.
        # @param [ Hash ] options The aggregation options.
        #
        # @option options [ true, false ] :allow_disk_use Set to true if disk
        #   usage is allowed during the aggregation.
        # @option options [ Integer ] :batch_size The number of documents to return
        #   per batch.
        # @option options [ true, false ] :bypass_document_validation Whether or
        #   not to skip document level validation.
        # @option options [ Hash ] :collation The collation to use.
        # @option options [ Object ] :comment A user-provided
        #   comment to attach to this command.
        # @option options [ String ] :hint The index to use for the aggregation.
        # @option options [ Hash ] :let Mapping of variables to use in the pipeline.
        #   See the server documentation for details.
        # @option options [ Integer ] :max_time_ms The maximum amount of time in
        #   milliseconds to allow the aggregation to run. This option is deprecated, use
        #   :timeout_ms instead.
        # @option options [ Session ] :session The session to use.
        # @option options [ :cursor_lifetime | :iteration ] :timeout_mode How to interpret
        #   :timeout_ms (whether it applies to the lifetime of the cursor, or per
        #   iteration).
        # @option options [ Integer ] :timeout_ms The operation timeout in milliseconds.
        #   Must be a non-negative integer. An explicit value of 0 means infinite.
        #   The default value is unset which means the value is inherited from
        #   the collection or the database or the client.
        #
        # @since 2.0.0
        def initialize(view, pipeline, options = {})
          perform_setup(view, options) do
            @pipeline = pipeline.dup
            unless Mongo.broken_view_aggregate || view.filter.empty?
              @pipeline.unshift(:$match => view.filter)
            end
          end
        end

        private

        def new(options)
          Aggregation.new(view, pipeline, options)
        end

        def initial_query_op(session, read_preference)
          Operation::Aggregate.new(aggregate_spec(session, read_preference))
        end

        # Return the effective read preference for the operation.
        #
        # If the pipeline contains $merge or $out, the read preference
        # specified by the user is secondary or secondary_preferred, and the
        # target server is below 5.0, then this method returns the primary
        # read preference, because the aggregation will be routed to the
        # primary. Otherwise the original read preference is returned.
        #
        # See https://github.com/mongodb/specifications/blob/master/source/crud/crud.md#read-preferences-and-server-selection
        #
        # @param [ Server::Connection ] connection The connection which
        #   will be used for the operation.
        # @return [ Hash | nil ] read preference hash that should be sent with
        #   this command.
        def effective_read_preference(connection)
          return unless view.read_preference
          return view.read_preference unless write?
          return view.read_preference unless [:secondary, :secondary_preferred].include?(view.read_preference[:mode])

          primary_read_preference = {mode: :primary}
          description = connection.description

          if description.primary?
            log_warn("Routing the Aggregation operation to the primary server")
            primary_read_preference
          elsif description.mongos? && !description.features.merge_out_on_secondary_enabled?
            log_warn("Routing the Aggregation operation to the primary server")
            primary_read_preference
          else
            view.read_preference
          end
        end

        def send_initial_query(server, context)
          if server.load_balancer?
            # Connection will be checked in when cursor is drained.
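            # (A load-balanced topology pins the connection checked out below
            # to the cursor, so it is only returned to the pool once the
            # cursor is fully iterated or closed.)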
connection = server.pool.check_out(context: context) initial_query_op( context.session, effective_read_preference(connection) ).execute_with_connection( connection, context: context ) else server.with_connection do |connection| initial_query_op( context.session, effective_read_preference(connection) ).execute_with_connection( connection, context: context ) end end end end end end end mongo-ruby-driver-2.21.3/lib/mongo/collection/view/aggregation/000077500000000000000000000000001505113246500244515ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/lib/mongo/collection/view/aggregation/behavior.rb000066400000000000000000000103501505113246500265740ustar00rootroot00000000000000# frozen_string_literal: true module Mongo class Collection class View class Aggregation # Distills the behavior common to aggregator classes, like # View::Aggregator and View::ChangeStream. module Behavior extend Forwardable include Enumerable include Immutable include Iterable include Explainable include Loggable include Retryable # @return [ View ] view The collection view. attr_reader :view # Delegate necessary operations to the view. def_delegators :view, :collection, :read, :cluster, :cursor_type, :limit, :batch_size # Delegate necessary operations to the collection. def_delegators :collection, :database, :client # Set to true if disk usage is allowed during the aggregation. # # @example Set disk usage flag. # aggregation.allow_disk_use(true) # # @param [ true, false ] value The flag value. # # @return [ true, false, Aggregation ] The aggregation if a value was # set or the value if used as a getter. # # @since 2.0.0 def allow_disk_use(value = nil) configure(:allow_disk_use, value) end # Get the explain plan for the aggregation. # # @example Get the explain plan for the aggregation. # aggregation.explain # # @return [ Hash ] The explain plan. # # @since 2.0.0 def explain self.class.new(view, pipeline, options.merge(explain: true)).first end # Whether this aggregation will write its result to a database collection. # # @return [ Boolean ] Whether the aggregation will write its result # to a collection. # # @api private def write? pipeline.any? { |op| op.key?('$out') || op.key?(:$out) || op.key?('$merge') || op.key?(:$merge) } end # @return [ Integer | nil ] the timeout_ms value that was passed as # an option to this object, or which was inherited from the view. # # @api private def timeout_ms @timeout_ms || view.timeout_ms end private # Common setup for all classes that include this behavior; the # constructor should invoke this method. def perform_setup(view, options, forbid: []) @view = view @timeout_ms = options.delete(:timeout_ms) @options = BSON::Document.new(options).freeze yield validate_timeout_mode!(options, forbid: forbid) end def server_selector @view.send(:server_selector) end def aggregate_spec(session, read_preference) Builder::Aggregation.new( pipeline, view, options.merge(session: session, read_preference: read_preference) ).specification end # Skip, sort, limit, projection are specified as pipeline stages # rather than as options. def cache_options { namespace: collection.namespace, selector: pipeline, read_concern: view.read_concern, read_preference: view.read_preference, collation: options[:collation], # Aggregations can read documents from more than one collection, # so they will be cleared on every write operation. multi_collection: true, } end # @return [ Hash ] timeout_ms value set on the operation level (if any), # and/or timeout_ms that is set on collection/database/client level (if any). 
# # @api private def operation_timeouts(opts = {}) {}.tap do |result| if opts[:timeout_ms] || @timeout_ms result[:operation_timeout_ms] = opts.delete(:timeout_ms) || @timeout_ms else result[:inherited_timeout_ms] = view.timeout_ms end end end end end end end end mongo-ruby-driver-2.21.3/lib/mongo/collection/view/builder.rb000066400000000000000000000013411505113246500241340ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2015-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'mongo/collection/view/builder/aggregation' require 'mongo/collection/view/builder/map_reduce' mongo-ruby-driver-2.21.3/lib/mongo/collection/view/builder/000077500000000000000000000000001505113246500236105ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/lib/mongo/collection/view/builder/aggregation.rb000066400000000000000000000104271505113246500264300ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2015-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Collection class View module Builder # Builds an aggregation command specification from the view and options. # # @since 2.2.0 class Aggregation extend Forwardable # The mappings from ruby options to the aggregation options. # # @since 2.2.0 MAPPINGS = BSON::Document.new( allow_disk_use: 'allowDiskUse', bypass_document_validation: 'bypassDocumentValidation', explain: 'explain', collation: 'collation', comment: 'comment', hint: 'hint', let: 'let', # This is intentional; max_await_time_ms is an alias for maxTimeMS # used on getMore commands for change streams. max_await_time_ms: 'maxTimeMS', max_time_ms: 'maxTimeMS', ).freeze def_delegators :@view, :collection, :database, :read, :write_concern # @return [ Array ] pipeline The pipeline. attr_reader :pipeline # @return [ Collection::View ] view The collection view. attr_reader :view # @return [ Hash ] options The map/reduce specific options. attr_reader :options # Initialize the builder. # # @param [ Array ] pipeline The aggregation pipeline. # @param [ Collection::View ] view The collection view. # @param [ Hash ] options The map/reduce and read preference options. # # @since 2.2.0 def initialize(pipeline, view, options) @pipeline = pipeline @view = view @options = options end # Get the specification to pass to the aggregation operation. # # @example Get the specification. # builder.specification # # @return [ Hash ] The specification. 
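          #
          #   For illustration, a typical return value has roughly this shape
          #   (collection name and values hypothetical):
          #
          #     {
          #       selector: { 'aggregate' => 'users', 'pipeline' => [ ... ], 'cursor' => {} },
          #       db_name: 'test', read: <read preference>, session: <session>, collation: nil
          #     }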
# # @since 2.2.0 def specification spec = { selector: aggregation_command, db_name: database.name, read: @options[:read_preference] || view.read_preference, session: @options[:session], collation: @options[:collation], } if write? spec.update(write_concern: write_concern) end spec end private def write? pipeline.any? do |operator| operator[:$out] || operator['$out'] || operator[:$merge] || operator['$merge'] end end def aggregation_command command = BSON::Document.new # aggregate must be the first key in the command document if view.is_a?(Collection::View) command[:aggregate] = collection.name elsif view.is_a?(Database::View) command[:aggregate] = 1 else raise ArgumentError, "Unknown view class: #{view}" end command[:pipeline] = pipeline if read_concern = view.read_concern command[:readConcern] = Options::Mapper.transform_values_to_strings( read_concern) end command[:cursor] = batch_size_doc command.merge!(Options::Mapper.transform_documents(options, MAPPINGS)) command end def batch_size_doc value = options[:batch_size] || view.batch_size if value == 0 && write? {} elsif value { :batchSize => value } else {} end end end end end end end mongo-ruby-driver-2.21.3/lib/mongo/collection/view/builder/map_reduce.rb000066400000000000000000000110231505113246500262360ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2015-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Collection class View module Builder # Builds a map/reduce specification from the view and options. # # @since 2.2.0 class MapReduce extend Forwardable # The mappings from ruby options to the map/reduce options. # # @since 2.2.0 MAPPINGS = BSON::Document.new( finalize: 'finalize', js_mode: 'jsMode', out: 'out', scope: 'scope', verbose: 'verbose', bypass_document_validation: 'bypassDocumentValidation', collation: 'collation', ).freeze def_delegators :@view, :collection, :database, :filter, :read, :write_concern # @return [ String ] map The map function. attr_reader :map # @return [ String ] reduce The reduce function. attr_reader :reduce # @return [ Collection::View ] view The collection view. attr_reader :view # @return [ Hash ] options The map/reduce specific options. attr_reader :options # Initialize the builder. # # @example Initialize the builder. # MapReduce.new(map, reduce, view, options) # # @param [ String ] map The map function. # @param [ String ] reduce The reduce function. # @param [ Collection::View ] view The collection view. # @param [ Hash ] options The map/reduce options. # # @since 2.2.0 def initialize(map, reduce, view, options) @map = map @reduce = reduce @view = view @options = options end # Get the specification to pass to the map/reduce operation. # # @example Get the specification. # builder.specification # # @return [ Hash ] The specification. 
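          #
          #   As a rough illustration (names hypothetical), the selector built
          #   by #map_reduce_command below has the shape:
          #
          #     { mapReduce: 'users', map: <js>, reduce: <js>, query: { ... }, out: { inline: 1 } }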
# # @since 2.2.0 def specification spec = { selector: map_reduce_command, db_name: database.name, # Note that selector just above may also have a read preference # specified, per the #map_reduce_command method below. read: read, session: options[:session] } write?(spec) ? spec.merge!(write_concern: write_concern) : spec end private def write?(spec) if out = spec[:selector][:out] out.is_a?(String) || (out.respond_to?(:keys) && out.keys.first.to_s.downcase != View::MapReduce::INLINE) end end def map_reduce_command command = BSON::Document.new( :mapReduce => collection.name, :map => map, :reduce => reduce, :query => filter, :out => { inline: 1 }, ) # Shouldn't this use self.read ? if collection.read_concern command[:readConcern] = Options::Mapper.transform_values_to_strings( collection.read_concern) end command.update(view_options) command.update(options.slice(:collation)) # Read preference isn't simply passed in the command payload # (it may need to be converted to wire protocol flags). # Ideally it should be removed here, however due to Mongoid 7 # using this method and requiring :read to be returned from it, # we cannot do this just yet - see RUBY-2932. #command.delete(:read) command.merge!(Options::Mapper.transform_documents(options, MAPPINGS)) command end def view_options @view_options ||= (opts = view.options.dup opts.delete(:session) opts) end end end end end end mongo-ruby-driver-2.21.3/lib/mongo/collection/view/change_stream.rb000066400000000000000000000432571505113246500253220ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2017-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'mongo/collection/view/aggregation/behavior' require 'mongo/collection/view/change_stream/retryable' module Mongo class Collection class View # Provides behavior around a `$changeStream` pipeline stage in the # aggregation framework. Specifying this stage allows users to request # that notifications are sent for all changes to a particular collection # or database. # # @note Only available in server versions 3.6 and higher. # @note ChangeStreams do not work properly with JRuby because of the # issue documented here: https://github.com/jruby/jruby/issues/4212. # Namely, JRuby eagerly evaluates #next on an Enumerator in a background # green thread, therefore calling #next on the change stream will cause # getMores to be called in a loop in the background. # # # @since 2.5.0 class ChangeStream include Aggregation::Behavior include Retryable # @return [ String ] The fullDocument option default value. # # @since 2.5.0 FULL_DOCUMENT_DEFAULT = 'default'.freeze # @return [ Symbol ] Used to indicate that the change stream should listen for changes on # the entire database rather than just the collection. # # @since 2.6.0 DATABASE = :database # @return [ Symbol ] Used to indicate that the change stream should listen for changes on # the entire cluster rather than just the collection. 
# # @since 2.6.0 CLUSTER = :cluster # @return [ BSON::Document ] The change stream options. # # @since 2.5.0 attr_reader :options # @return [ Cursor ] the underlying cursor for this operation # @api private attr_reader :cursor # Initialize the change stream for the provided collection view, pipeline # and options. # # @example Create the new change stream view. # ChangeStream.new(view, pipeline, options) # # @param [ Collection::View ] view The collection view. # @param [ Array ] pipeline The pipeline of operators to filter the change notifications. # @param [ Hash ] options The change stream options. # # @option options [ String ] :full_document Allowed values: nil, 'default', # 'updateLookup', 'whenAvailable', 'required'. # # The default is to not send a value (i.e. nil), which is equivalent to # 'default'. By default, the change notification for partial updates will # include a delta describing the changes to the document. # # When set to 'updateLookup', the change notification for partial updates # will include both a delta describing the changes to the document as well # as a copy of the entire document that was changed from some time after # the change occurred. # # When set to 'whenAvailable', configures the change stream to return the # post-image of the modified document for replace and update change events # if the post-image for this event is available. # # When set to 'required', the same behavior as 'whenAvailable' except that # an error is raised if the post-image is not available. # @option options [ String ] :full_document_before_change Allowed values: nil, # 'whenAvailable', 'required', 'off'. # # The default is to not send a value (i.e. nil), which is equivalent to 'off'. # # When set to 'whenAvailable', configures the change stream to return the # pre-image of the modified document for replace, update, and delete change # events if it is available. # # When set to 'required', the same behavior as 'whenAvailable' except that # an error is raised if the pre-image is not available. # @option options [ BSON::Document, Hash ] :resume_after Specifies the logical starting point for the # new change stream. # @option options [ Integer ] :max_await_time_ms The maximum amount of time for the server to wait # on new documents to satisfy a change stream query. # @option options [ Integer ] :batch_size The number of documents to return per batch. # @option options [ BSON::Document, Hash ] :collation The collation to use. # @option options [ BSON::Timestamp ] :start_at_operation_time Only # return changes that occurred at or after the specified timestamp. Any # command run against the server will return a cluster time that can # be used here. Only recognized by server versions 4.0+. # @option options [ Bson::Document, Hash ] :start_after Similar to :resume_after, this # option takes a resume token and starts a new change stream returning the first # notification after the token. This will allow users to watch collections that have been # dropped and recreated or newly renamed collections without missing any notifications. # @option options [ Object ] :comment A user-provided # comment to attach to this command. # @option options [ Boolean ] :show_expanded_events Enables the server to # send the 'expanded' list of change stream events. The list of additional # events included with this flag set are: createIndexes, dropIndexes, # modify, create, shardCollection, reshardCollection, # refineCollectionShardKey. 
# # The server will report an error if `startAfter` and `resumeAfter` are both specified. # # @since 2.5.0 def initialize(view, pipeline, changes_for, options = {}) # change stream cursors can only be :iterable, so we don't allow # timeout_mode to be specified. perform_setup(view, options, forbid: %i[ timeout_mode ]) do @changes_for = changes_for @change_stream_filters = pipeline && pipeline.dup @start_after = @options[:start_after] end # The resume token tracked by the change stream, used only # when there is no cursor, or no cursor resume token @resume_token = @start_after || @options[:resume_after] create_cursor! # We send different parameters when we resume a change stream # compared to when we send the first query @resuming = true end # Iterate through documents returned by the change stream. # # This method retries once per error on resumable errors # (two consecutive errors result in the second error being raised, # an error which is recovered from resets the error count to zero). # # @example Iterate through the stream of documents. # stream.each do |document| # p document # end # # @return [ Enumerator ] The enumerator. # # @since 2.5.0 # # @yieldparam [ BSON::Document ] Each change stream document. def each raise StopIteration.new if closed? loop do document = try_next yield document if document end rescue StopIteration return self end # Return one document from the change stream, if one is available. # # Retries once on a resumable error. # # Raises StopIteration if the change stream is closed. # # This method will wait up to max_await_time_ms milliseconds # for changes from the server, and if no changes are received # it will return nil. # # @return [ BSON::Document | nil ] A change stream document. # @since 2.6.0 def try_next recreate_cursor! if @timed_out raise StopIteration.new if closed? begin doc = @cursor.try_next rescue Mongo::Error => e # "If a next call fails with a timeout error, drivers MUST NOT # invalidate the change stream. The subsequent next call MUST # perform a resume attempt to establish a new change stream on the # server..." # # However, SocketTimeoutErrors are TimeoutErrors, but are also # change-stream-resumable. To preserve existing (specified) behavior, # We only count timeouts when the error is not also # change-stream-resumable. @timed_out = e.is_a?(Mongo::Error::TimeoutError) && !e.change_stream_resumable? raise unless @timed_out || e.change_stream_resumable? @resume_token = @cursor.resume_token raise e if @timed_out recreate_cursor!(@cursor.context) retry end # We need to verify each doc has an _id, so we # have a resume token to work with if doc && doc['_id'].nil? raise Error::MissingResumeToken end doc end def to_enum enum = super enum.send(:instance_variable_set, '@obj', self) class << enum def try_next @obj.try_next end end enum end # Close the change stream. # # @example Close the change stream. # stream.close # # @note This method attempts to close the cursor used by the change # stream, which in turn closes the server-side change stream cursor. # This method ignores any errors that occur when closing the # server-side cursor. # # @params [ Hash ] opts Options to be passed to the cursor close # command. # # @return [ nil ] Always nil. # # @since 2.5.0 def close(opts = {}) unless closed? begin @cursor.close(opts) rescue Error::OperationFailure::Family, Error::SocketError, Error::SocketTimeoutError, Error::MissingConnection # ignore end @cursor = nil end end # Is the change stream closed? 
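        # (It reports closed once #close has discarded the underlying cursor;
        # resumable errors during iteration do not by themselves close it.)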
        #
        # @example Determine whether the change stream is closed.
        #   stream.closed?
        #
        # @return [ true, false ] If the change stream is closed.
        #
        # @since 2.5.0
        def closed?
          @cursor.nil?
        end

        # Get a formatted string for use in inspection.
        #
        # @example Inspect the change stream object.
        #   stream.inspect
        #
        # @return [ String ] The change stream inspection.
        #
        # @since 2.5.0
        def inspect
          "#<Mongo::Collection::View::ChangeStream:0x#{object_id} filters=#{@change_stream_filters} " +
            "options=#{@options} resume_token=#{resume_token}>"
        end

        # Returns the resume token that the stream will
        # use to automatically resume, if one exists.
        #
        # @example Get the change stream resume token.
        #   stream.resume_token
        #
        # @return [ BSON::Document | nil ] The change stream resume token.
        #
        # @since 2.10.0
        def resume_token
          cursor_resume_token = @cursor.resume_token if @cursor
          cursor_resume_token || @resume_token
        end

        # "change streams are an abstraction around tailable-awaitData cursors..."
        #
        # @return :tailable_await
        def cursor_type
          :tailable_await
        end

        # "change streams...implicitly use ITERATION mode"
        #
        # @return :iteration
        def timeout_mode
          :iteration
        end

        # Returns the value of the max_await_time_ms option that was
        # passed to this change stream.
        #
        # @return [ Integer | nil ] the max_await_time_ms value
        def max_await_time_ms
          options[:max_await_time_ms]
        end

        private

        def for_cluster?
          @changes_for == CLUSTER
        end

        def for_database?
          @changes_for == DATABASE
        end

        def for_collection?
          !for_cluster? && !for_database?
        end

        def create_cursor!(timeout_ms = nil)
          # clear the cache because we may get a newer or an older server
          # (rolling upgrades)
          @start_at_operation_time_supported = nil
          session = client.get_session(@options)
          context = Operation::Context.new(
            client: client,
            session: session,
            view: self,
            operation_timeouts: timeout_ms ? { operation_timeout_ms: timeout_ms } : operation_timeouts
          )
          start_at_operation_time = nil
          start_at_operation_time_supported = nil
          @cursor = read_with_retry_cursor(session, server_selector, self, context: context) do |server|
            server.with_connection do |connection|
              start_at_operation_time_supported = connection.description.server_version_gte?('4.0')

              result = send_initial_query(connection, context)
              if doc = result.replies.first && result.replies.first.documents.first
                start_at_operation_time = doc['operationTime']
              else
                # The above may set @start_at_operation_time to nil
                # if it was not in the document for some reason,
                # for consistency set it to nil here as well.
                # NB: since this block may be executed more than once, each
                # execution must write to start_at_operation_time either way.
                start_at_operation_time = nil
              end
              result
            end
          end
          @start_at_operation_time = start_at_operation_time
          @start_at_operation_time_supported = start_at_operation_time_supported
        end

        def pipeline
          [{ '$changeStream' => change_doc }] + @change_stream_filters
        end

        def aggregate_spec(session, read_preference)
          super(session, read_preference).tap do |spec|
            spec[:selector][:aggregate] = 1 unless for_collection?
          end
        end

        def change_doc
          {}.tap do |doc|
            if @options[:full_document]
              doc[:fullDocument] = @options[:full_document]
            end

            if @options[:full_document_before_change]
              doc[:fullDocumentBeforeChange] = @options[:full_document_before_change]
            end

            if @options.key?(:show_expanded_events)
              doc[:showExpandedEvents] = @options[:show_expanded_events]
            end

            if resuming?
              # We have a resume token once we retrieved any documents.
              # However, if the first getMore fails and the user didn't pass
              # a resume token we won't have a resume token to use.
# Use start_at_operation time in this case if resume_token # Spec says we need to remove both startAtOperationTime and startAfter if # either was passed in by user, thus we won't forward them doc[:resumeAfter] = resume_token elsif @start_at_operation_time_supported && @start_at_operation_time # It is crucial to check @start_at_operation_time_supported # here - we may have switched to an older server that # does not support operation times and therefore shouldn't # try to send one to it! # # @start_at_operation_time is already a BSON::Timestamp doc[:startAtOperationTime] = @start_at_operation_time else # Can't resume if we don't have either raise Mongo::Error::MissingResumeToken end else if @start_after doc[:startAfter] = @start_after elsif resume_token doc[:resumeAfter] = resume_token end if options[:start_at_operation_time] doc[:startAtOperationTime] = time_to_bson_timestamp( options[:start_at_operation_time]) end end doc[:allChangesForCluster] = true if for_cluster? end end def send_initial_query(connection, context) initial_query_op(context.session, view.read_preference) .execute_with_connection( connection, context: context, ) end def time_to_bson_timestamp(time) if time.is_a?(Time) seconds = time.to_f BSON::Timestamp.new(seconds.to_i, ((seconds - seconds.to_i) * 1000000).to_i) elsif time.is_a?(BSON::Timestamp) time else raise ArgumentError, 'Time must be a Time or a BSON::Timestamp instance' end end def resuming? !!@resuming end # Recreates the current cursor (typically as a consequence of attempting # to resume the change stream) def recreate_cursor!(context = nil) @timed_out = false close create_cursor!(context&.remaining_timeout_ms) end end end end end mongo-ruby-driver-2.21.3/lib/mongo/collection/view/change_stream/000077500000000000000000000000001505113246500247625ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/lib/mongo/collection/view/change_stream/retryable.rb000066400000000000000000000021261505113246500273010ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2017-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Collection class View class ChangeStream < Aggregation # Behavior around resuming a change stream. # # @since 2.5.0 module Retryable private def read_with_one_retry yield rescue Mongo::Error => e if e.change_stream_resumable? yield else raise(e) end end end end end end end mongo-ruby-driver-2.21.3/lib/mongo/collection/view/explainable.rb000066400000000000000000000057201505113246500247770ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Collection class View # Defines explain related behavior for collection view. # # @since 2.0.0 module Explainable # The query planner verbosity constant. # # @since 2.2.0 QUERY_PLANNER = 'queryPlanner'.freeze # The execution stats verbosity constant. # # @since 2.2.0 EXECUTION_STATS = 'executionStats'.freeze # The all plans execution verbosity constant. # # @since 2.2.0 ALL_PLANS_EXECUTION = 'allPlansExecution'.freeze # Get the query plan for the query. # # @example Get the query plan for the query with execution statistics. # view.explain(verbosity: :execution_stats) # # @option opts [ true | false ] :verbose The level of detail # to return for MongoDB 2.6 servers. # @option opts [ String | Symbol ] :verbosity The type of information # to return for MongoDB 3.0 and newer servers. If the value is a # symbol, it will be stringified and converted from underscore # style to camel case style (e.g. :query_planner => "queryPlanner"). # # @return [ Hash ] A single document with the query plan. # # @see https://mongodb.com/docs/manual/reference/method/db.collection.explain/#db.collection.explain # # @since 2.0.0 def explain(**opts) self.class.new(collection, selector, options.merge(explain_options(**opts))).first end private def explained? !!options[:explain] end # @option opts [ true | false ] :verbose The level of detail # to return for MongoDB 2.6 servers. # @option opts [ String | Symbol ] :verbosity The type of information # to return for MongoDB 3.0 and newer servers. If the value is a # symbol, it will be stringified and converted from underscore # style to camel case style (e.g. :query_planner => "queryPlanner"). def explain_options(**opts) explain_limit = limit || 0 # Note: opts will never be nil here. if Symbol === opts[:verbosity] opts[:verbosity] = Utils.camelize(opts[:verbosity]) end { limit: -explain_limit.abs, explain: opts } end end end end end mongo-ruby-driver-2.21.3/lib/mongo/collection/view/immutable.rb000066400000000000000000000020761505113246500244730ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Collection class View # Defines behavior around views being configurable and immutable. # # @since 2.0.0 module Immutable # @return [ Hash ] options The additional query options. attr_reader :options private def configure(field, value) return options[field] if value.nil? 
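          # Setter form: views are immutable, so return a brand-new view with
          # the merged option rather than mutating this one.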
new(options.merge(field => value)) end end end end end mongo-ruby-driver-2.21.3/lib/mongo/collection/view/iterable.rb000066400000000000000000000174021505113246500243020ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'mongo/cursor_host' module Mongo class Collection class View # Defines iteration related behavior for collection views, including # cursor instantiation. # # @since 2.0.0 module Iterable include Mongo::CursorHost # Iterate through documents returned by a query with this +View+. # # @example Iterate through the result of the view. # view.each do |document| # p document # end # # @return [ Enumerator ] The enumerator. # # @since 2.0.0 # # @yieldparam [ Hash ] Each matching document. def each @cursor = prefer_cached_cursor? ? cached_cursor : new_cursor_for_iteration return @cursor.to_enum unless block_given? limit_for_cached_query = compute_limit_for_cached_query # Ruby versions 2.5 and older do not support arr[0..nil] syntax, so # this must be a separate conditional. cursor_to_iterate = if limit_for_cached_query @cursor.to_a[0...limit_for_cached_query] else @cursor end cursor_to_iterate.each do |doc| yield doc end end # Cleans up resources associated with this query. # # If there is a server cursor associated with this query, it is # closed by sending a KillCursors command to the server. # # @note This method propagates any errors that occur when closing the # server-side cursor. # # @return [ nil ] Always nil. # # @raise [ Error::OperationFailure::Family ] If the server cursor close fails. # # @since 2.1.0 def close_query if @cursor @cursor.close end end alias :kill_cursors :close_query private def select_cursor(session) context = Operation::Context.new( client: client, session: session, operation_timeouts: operation_timeouts, view: self ) if respond_to?(:write?, true) && write? server = server_selector.select_server(cluster, nil, session, write_aggregation: true) result = send_initial_query(server, context) if use_query_cache? CachingCursor.new(view, result, server, session: session, context: context) else Cursor.new(view, result, server, session: session, context: context) end else read_with_retry_cursor(session, server_selector, view, context: context) do |server| send_initial_query(server, context) end end end def cached_cursor QueryCache.get(**cache_options) end def cache_options # NB: this hash is passed as keyword argument and must have symbol # keys. 
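          # Together, the fields below identify a result set in the query
          # cache; two queries differing in any of them will not share an entry.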
          {
            namespace: collection.namespace,
            selector: selector,
            skip: skip,
            sort: sort,
            limit: limit,
            projection: projection,
            collation: collation,
            read_concern: read_concern,
            read_preference: read_preference,
          }
        end

        def initial_query_op(session)
          spec = {
            coll_name: collection.name,
            filter: filter,
            projection: projection,
            db_name: database.name,
            session: session,
            collation: collation,
            sort: sort,
            skip: skip,
            let: options[:let],
            limit: limit,
            allow_disk_use: options[:allow_disk_use],
            allow_partial_results: options[:allow_partial_results],
            read: read,
            read_concern: options[:read_concern] || read_concern,
            batch_size: batch_size,
            hint: options[:hint],
            max_scan: options[:max_scan],
            max_value: options[:max_value],
            min_value: options[:min_value],
            no_cursor_timeout: options[:no_cursor_timeout],
            return_key: options[:return_key],
            show_disk_loc: options[:show_disk_loc],
            comment: options[:comment],
            oplog_replay: oplog_replay
          }

          if spec[:oplog_replay]
            collection.client.log_warn("The :oplog_replay option is deprecated and ignored by MongoDB 4.4 and later")
          end

          maybe_set_tailable_options(spec)

          if explained?
            spec[:explain] = options[:explain]
            Operation::Explain.new(spec)
          else
            Operation::Find.new(spec)
          end
        end

        def send_initial_query(server, context)
          operation = initial_query_op(context.session)
          if server.load_balancer?
            # Connection will be checked in when cursor is drained.
            connection = server.pool.check_out(context: context)
            operation.execute_with_connection(connection, context: context)
          else
            operation.execute(server, context: context)
          end
        end

        def use_query_cache?
          QueryCache.enabled? && !collection.system_collection?
        end

        # If the caching cursor is closed and was not fully iterated,
        # the documents we have in it are not the complete result set and
        # we have no way of completing that iteration.
        # Therefore, discard that cursor and start iteration again.
        def prefer_cached_cursor?
          use_query_cache? &&
            cached_cursor &&
            (cached_cursor.fully_iterated? || !cached_cursor.closed?)
        end

        # Start a new cursor for use when iterating (via #each).
        def new_cursor_for_iteration
          session = client.get_session(@options)
          select_cursor(session).tap do |cursor|
            if use_query_cache?
              # No need to store the cursor in the query cache if there is
              # already a cached cursor stored at this key.
              QueryCache.set(cursor, **cache_options)
            end
          end
        end

        def compute_limit_for_cached_query
          return nil unless use_query_cache? && respond_to?(:limit)

          # If a query with a limit is performed, the query cache will
          # re-use results from an earlier query with the same or larger
          # limit, and then impose the lower limit during iteration.
          return QueryCache.normalized_limit(limit)
        end

        # Add tailable cursor options to the command specification if needed.
        #
        # @param [ Hash ] spec The command specification.
        def maybe_set_tailable_options(spec)
          case cursor_type
          when :tailable
            spec[:tailable] = true
          when :tailable_await
            spec[:tailable] = true
            spec[:await_data] = true
          end
        end

        # @return [ true | false | nil ] options[:oplog_replay], if
        #   set, otherwise the same option from the collection.
        def oplog_replay
          v = options[:oplog_replay]
          v.nil? ? collection.options[:oplog_replay] : v
        end
      end
    end
  end
end
mongo-ruby-driver-2.21.3/lib/mongo/collection/view/map_reduce.rb
# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2014-2020 MongoDB Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Collection class View # Provides behavior around a map/reduce operation on the collection # view. # # @since 2.0.0 class MapReduce extend Forwardable include Enumerable include Immutable include Loggable include Retryable # The inline option. # # @since 2.1.0 INLINE = 'inline'.freeze # Reroute message. # # @since 2.1.0 # @deprecated REROUTE = 'Rerouting the MapReduce operation to the primary server.'.freeze # @return [ View ] view The collection view. attr_reader :view # @return [ String ] map_function The map function. attr_reader :map_function # @return [ String ] reduce_function The reduce function. attr_reader :reduce_function # Delegate necessary operations to the view. def_delegators :view, :collection, :read, :cluster, :timeout_ms # Delegate necessary operations to the collection. def_delegators :collection, :database, :client # Iterate through documents returned by the map/reduce. # # @example Iterate through the result of the map/reduce. # map_reduce.each do |document| # p document # end # # @return [ Enumerator ] The enumerator. # # @since 2.0.0 # # @yieldparam [ Hash ] Each matching document. def each @cursor = nil session = client.get_session(@options) server = cluster.next_primary(nil, session) context = Operation::Context.new(client: client, session: session, operation_timeouts: view.operation_timeouts) if server.load_balancer? # Connection will be checked in when cursor is drained. connection = server.pool.check_out(context: context) result = send_initial_query_with_connection(connection, context.session, context: context) result = send_fetch_query_with_connection(connection, session) unless inline? else result = send_initial_query(server, context) result = send_fetch_query(server, session) unless inline? end @cursor = Cursor.new(view, result, server, session: session) if block_given? @cursor.each do |doc| yield doc end else @cursor.to_enum end end # Set or get the finalize function for the operation. # # @example Set the finalize function. # map_reduce.finalize(function) # # @param [ String ] function The finalize js function. # # @return [ MapReduce, String ] The new MapReduce operation or the # value of the function. # # @since 2.0.0 def finalize(function = nil) configure(:finalize, function) end # Initialize the map/reduce for the provided collection view, functions # and options. # # @example Create the new map/reduce view. # MapReduce.new(view, map, reduce) # # @param [ Collection::View ] view The collection view. # @param [ String ] map The map function. # @param [ String ] reduce The reduce function. # @param [ Hash ] options The map/reduce options. # # @since 2.0.0 def initialize(view, map, reduce, options = {}) @view = view @map_function = map.dup.freeze @reduce_function = reduce.dup.freeze @options = BSON::Document.new(options).freeze client.log_warn('The map_reduce operation is deprecated, please use the aggregation pipeline instead') end # Set or get the jsMode flag for the operation. # # @example Set js mode for the operation. # map_reduce.js_mode(true) # # @param [ true, false ] value The jsMode value.
# # @return [ MapReduce, true, false ] The new MapReduce operation or the # value of the jsMode flag. # # @since 2.0.0 def js_mode(value = nil) configure(:js_mode, value) end # Set or get the output location for the operation. # # @example Set the output to inline. # map_reduce.out(inline: 1) # # @example Set the output collection to merge. # map_reduce.out(merge: 'users') # # @example Set the output collection to replace. # map_reduce.out(replace: 'users') # # @example Set the output collection to reduce. # map_reduce.out(reduce: 'users') # # @param [ Hash ] location The output location details. # # @return [ MapReduce, Hash ] The new MapReduce operation or the value # of the output location. # # @since 2.0.0 def out(location = nil) configure(:out, location) end # Returns the collection name where the map-reduce result is written to. # If the result is returned inline, returns nil. def out_collection_name if options[:out].respond_to?(:keys) options[:out][OUT_ACTIONS.find do |action| options[:out][action] end] end || options[:out] end # Returns the database name where the map-reduce result is written to. # If the result is returned inline, returns nil. def out_database_name if options[:out] if options[:out].respond_to?(:keys) && (db = options[:out][:db]) db else database.name end end end # Set or get a scope on the operation. # # @example Set the scope value. # map_reduce.scope(value: 'test') # # @param [ Hash ] object The scope object. # # @return [ MapReduce, Hash ] The new MapReduce operation or the value # of the scope. # # @since 2.0.0 def scope(object = nil) configure(:scope, object) end # Whether to include the timing information in the result. # # @example Set the verbose value. # map_reduce.verbose(false) # # @param [ true, false ] value Whether to include timing information # in the result. # # @return [ MapReduce, Hash ] The new MapReduce operation or the value # of the verbose option. # # @since 2.0.5 def verbose(value = nil) configure(:verbose, value) end # Execute the map reduce, without doing a fetch query to retrieve the results # if outputted to a collection. # # @example Execute the map reduce and get the raw result. # map_reduce.execute # # @return [ Mongo::Operation::Result ] The raw map reduce result # # @since 2.5.0 def execute view.send(:with_session, @options) do |session| write_concern = view.write_concern_with_session(session) context = Operation::Context.new(client: client, session: session) nro_write_with_retry(write_concern, context: context) do |connection, txn_num, context| send_initial_query_with_connection(connection, session, context: context) end end end private OUT_ACTIONS = [ :replace, :merge, :reduce ].freeze def server_selector @view.send(:server_selector) end def inline? out.nil? || out == { inline: 1 } || out == { INLINE => 1 } end def map_reduce_spec(session = nil) Builder::MapReduce.new(map_function, reduce_function, view, options.merge(session: session)).specification end def new(options) MapReduce.new(view, map_function, reduce_function, options) end def initial_query_op(session) spec = map_reduce_spec(session) # Read preference isn't simply passed in the command payload # (it may need to be converted to wire protocol flags). # Passing it in command payload produces errors on at least # 5.0 mongoses. # In the future map_reduce_command should remove :read # from its return value, however we cannot do this right now # due to Mongoid 7 relying on :read being returned as part of # the command - see RUBY-2932. 
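# As a hedged illustration (field values hypothetical): given a spec whose # :selector is { mapReduce: 'coll', map: '...', reduce: '...', read: { mode: :secondary } }, # the dup-and-delete below dispatches only the mapReduce fields to the # server, while the caller's original spec hash still contains :read.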
# Delete :read here for now because it cannot be sent to mongos this way. spec = spec.dup spec[:selector] = spec[:selector].dup spec[:selector].delete(:read) Operation::MapReduce.new(spec) end def valid_server?(description) if secondary_ok? true else description.standalone? || description.mongos? || description.primary? || description.load_balancer? end end def secondary_ok? out.respond_to?(:keys) && out.keys.first.to_s.downcase == INLINE end def send_initial_query(server, context) server.with_connection do |connection| send_initial_query_with_connection(connection, context.session, context: context) end end def send_initial_query_with_connection(connection, session, context:) op = initial_query_op(session) if valid_server?(connection.description) op.execute_with_connection(connection, context: context) else msg = "Rerouting the MapReduce operation to the primary server - #{connection.address} is not suitable because it is not currently the primary" log_warn(msg) server = cluster.next_primary(nil, session) op.execute(server, context: context) end end def fetch_query_spec Builder::MapReduce.new(map_function, reduce_function, view, options).query_specification end def find_command_spec(session) Builder::MapReduce.new(map_function, reduce_function, view, options.merge(session: session)).command_specification end def fetch_query_op(session) spec = { coll_name: out_collection_name, db_name: out_database_name, filter: {}, session: session, read: read, read_concern: options[:read_concern] || collection.read_concern, collation: options[:collation] || view.options[:collation], } Operation::Find.new(spec) end def send_fetch_query(server, session) fetch_query_op(session).execute(server, context: Operation::Context.new(client: client, session: session)) end def send_fetch_query_with_connection(connection, session) fetch_query_op( session ).execute_with_connection( connection, context: Operation::Context.new(client: client, session: session) ) end end end end end mongo-ruby-driver-2.21.3/lib/mongo/collection/view/readable.rb000066400000000000000000000714271505113246500242550ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Collection class View # Defines read related behavior for collection view. # # @since 2.0.0 module Readable # Execute an aggregation on the collection view. # # @example Aggregate documents. # view.aggregate([ # { "$group" => { "_id" => "$city", "tpop" => { "$sum" => "$pop" }}} # ]) # # @param [ Array ] pipeline The aggregation pipeline. # @param [ Hash ] options The aggregation options. # # @option options [ true, false ] :allow_disk_use Set to true if disk # usage is allowed during the aggregation. # @option options [ Integer ] :batch_size The number of documents to return # per batch. # @option options [ true, false ] :bypass_document_validation Whether or # not to skip document level validation. # @option options [ Hash ] :collation The collation to use.
# @option options [ Object ] :comment A user-provided # comment to attach to this command. # @option options [ String ] :hint The index to use for the aggregation. # @option options [ Hash ] :let Mapping of variables to use in the pipeline. # See the server documentation for details. # @option options [ Integer ] :max_time_ms The maximum amount of time in # milliseconds to allow the aggregation to run. This option is deprecated, use # :timeout_ms instead. # @option options [ Session ] :session The session to use. # @option options [ Integer ] :timeout_ms The operation timeout in milliseconds. # Must be a non-negative integer. An explicit value of 0 means infinite. # The default value is unset which means the value is inherited from # the collection or the database or the client. # # @return [ Aggregation ] The aggregation object. # # @since 2.0.0 def aggregate(pipeline, options = {}) options = @options.merge(options) unless Mongo.broken_view_options aggregation = Aggregation.new(self, pipeline, options) # Because the $merge and $out pipeline stages write documents to the # collection, it is necessary to clear the cache when they are performed. # # Opt to clear the entire cache rather than one namespace because # the $out and $merge stages do not have to write to the same namespace # on which the aggregation is performed. QueryCache.clear if aggregation.write? aggregation end # Allows the server to write temporary data to disk while executing # a find operation. # # @return [ View ] The new view. def allow_disk_use configure(:allow_disk_use, true) end # Allows the query to get partial results if some shards are down. # # @example Allow partial results. # view.allow_partial_results # # @return [ View ] The new view. # # @since 2.0.0 def allow_partial_results configure(:allow_partial_results, true) end # Tell the query's cursor to stay open and wait for data. # # @example Await data on the cursor. # view.await_data # # @return [ View ] The new view. # # @since 2.0.0 def await_data configure(:await_data, true) end # The number of documents returned in each batch of results from MongoDB. # # @example Set the batch size. # view.batch_size(5) # # @note Specifying 1 or a negative number is analogous to setting a limit. # # @param [ Integer ] batch_size The size of each batch of results. # # @return [ Integer, View ] Either the batch_size value or a # new +View+. # # @since 2.0.0 def batch_size(batch_size = nil) configure(:batch_size, batch_size) end # Associate a comment with the query. # # @example Add a comment. # view.comment('slow query') # # @note Set profilingLevel to 2 and the comment will be logged in the profile # collection along with the query. # # @param [ Object ] comment The comment to be associated with the query. # # @return [ String, View ] Either the comment or a # new +View+. # # @since 2.0.0 def comment(comment = nil) configure(:comment, comment) end # Get a count of matching documents in the collection. # # @example Get the number of documents in the collection. # collection_view.count # # @param [ Hash ] opts Options for the operation. # # @option opts :skip [ Integer ] The number of documents to skip. # @option opts :hint [ Hash ] Override default index selection and force # MongoDB to use a specific index for the query. # @option opts :limit [ Integer ] Max number of docs to count. # @option opts :max_time_ms [ Integer ] The maximum amount of time to allow the # command to run. # @option opts [ Hash ] :read The read preference options. 
# @option opts [ Hash ] :collation The collation to use. # @option opts [ Mongo::Session ] :session The session to use for the operation. # @option opts [ Object ] :comment A user-provided # comment to attach to this command. # @option options [ Integer ] :timeout_ms The operation timeout in milliseconds. # Must be a non-negative integer. An explicit value of 0 means infinite. # The default value is unset which means the value is inherited from # the collection or the database or the client. # # @return [ Integer ] The document count. # # @since 2.0.0 # # @deprecated Use #count_documents or #estimated_document_count instead. However, note that # the following operators will need to be substituted when switching to #count_documents: # * $where should be replaced with $expr (only works on 3.6+) # * $near should be replaced with $geoWithin with $center # * $nearSphere should be replaced with $geoWithin with $centerSphere def count(opts = {}) opts = @options.merge(opts) unless Mongo.broken_view_options cmd = { :count => collection.name, :query => filter } cmd[:skip] = opts[:skip] if opts[:skip] cmd[:hint] = opts[:hint] if opts[:hint] cmd[:limit] = opts[:limit] if opts[:limit] if read_concern cmd[:readConcern] = Options::Mapper.transform_values_to_strings( read_concern) end cmd[:maxTimeMS] = opts[:max_time_ms] if opts[:max_time_ms] Mongo::Lint.validate_underscore_read_preference(opts[:read]) read_pref = opts[:read] || read_preference selector = ServerSelector.get(read_pref || server_selector) with_session(opts) do |session| context = Operation::Context.new( client: client, session: session, operation_timeouts: operation_timeouts(opts) ) read_with_retry(session, selector, context) do |server| Operation::Count.new( selector: cmd, db_name: database.name, options: {:limit => -1}, read: read_pref, session: session, # For some reason collation was historically accepted as a # string key. Note that this isn't documented as valid usage. collation: opts[:collation] || opts['collation'] || collation, comment: opts[:comment], ).execute( server, context: context ) end.n.to_i end end # Get a count of matching documents in the collection. # # @example Get the number of documents in the collection. # collection_view.count_documents # # @param [ Hash ] opts Options for the operation. # # @option opts :skip [ Integer ] The number of documents to skip. # @option opts :hint [ Hash ] Override default index selection and force # MongoDB to use a specific index for the query. Requires server version 3.6+. # @option opts :limit [ Integer ] Max number of docs to count. # @option opts :max_time_ms [ Integer ] The maximum amount of time to allow the # command to run. This option is deprecated, use # :timeout_ms instead. # @option opts [ Hash ] :read The read preference options. # @option opts [ Hash ] :collation The collation to use. # @option opts [ Mongo::Session ] :session The session to use for the operation. # @option opts [ Object ] :comment A user-provided # comment to attach to this command. # @option options [ Integer ] :timeout_ms The operation timeout in milliseconds. # Must be a non-negative integer. An explicit value of 0 means infinite. # The default value is unset which means the value is inherited from # the collection or the database or the client. # # @return [ Integer ] The document count.
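# # @note As shown in the method body below, this is implemented with an # aggregation equivalent to [{ '$match' => filter }, { '$skip' => skip }, # { '$limit' => limit }, { '$group' => { '_id' => 1, 'n' => { '$sum' => 1 } } }], # so the count honors the view filter exactly.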
# # @since 2.6.0 def count_documents(opts = {}) opts = @options.merge(opts) unless Mongo.broken_view_options pipeline = [:'$match' => filter] pipeline << { :'$skip' => opts[:skip] } if opts[:skip] pipeline << { :'$limit' => opts[:limit] } if opts[:limit] pipeline << { :'$group' => { _id: 1, n: { :'$sum' => 1 } } } opts = opts.slice(:hint, :max_time_ms, :read, :collation, :session, :comment, :timeout_ms) opts[:collation] ||= collation first = aggregate(pipeline, opts).first return 0 unless first first['n'].to_i end # Gets an estimate of the count of documents in a collection using collection metadata. # # @example Get the number of documents in the collection. # collection_view.estimated_document_count # # @param [ Hash ] opts Options for the operation. # # @option opts :max_time_ms [ Integer ] The maximum amount of time to allow the command to # run. # @option opts [ Hash ] :read The read preference options. # @option opts [ Object ] :comment A user-provided # comment to attach to this command. # @option options [ Integer ] :timeout_ms The operation timeout in milliseconds. # Must be a non-negative integer. An explicit value of 0 means infinite. # The default value is unset which means the value is inherited from # the collection or the database or the client. # # @return [ Integer ] The document count. # # @since 2.6.0 def estimated_document_count(opts = {}) unless view.filter.empty? raise ArgumentError, "Cannot call estimated_document_count when querying with a filter" end %i[limit skip].each do |opt| if options.key?(opt) || opts.key?(opt) raise ArgumentError, "Cannot call estimated_document_count when querying with #{opt}" end end opts = @options.merge(opts) unless Mongo.broken_view_options Mongo::Lint.validate_underscore_read_preference(opts[:read]) read_pref = opts[:read] || read_preference selector = ServerSelector.get(read_pref || server_selector) with_session(opts) do |session| context = Operation::Context.new( client: client, session: session, operation_timeouts: operation_timeouts(opts) ) read_with_retry(session, selector, context) do |server| cmd = { count: collection.name } cmd[:maxTimeMS] = opts[:max_time_ms] if opts[:max_time_ms] if read_concern cmd[:readConcern] = Options::Mapper.transform_values_to_strings(read_concern) end result = Operation::Count.new( selector: cmd, db_name: database.name, read: read_pref, session: session, comment: opts[:comment], ).execute(server, context: context) result.n.to_i end end rescue Error::OperationFailure::Family => exc if exc.code == 26 # NamespaceNotFound # This should only happen with the aggregation pipeline path # (server 4.9+). Previous servers should return 0 for nonexistent # collections. 0 else raise end end # Get a list of distinct values for a specific field. # # @example Get the distinct values. # collection_view.distinct('name') # # @param [ String, Symbol ] field_name The name of the field. # @param [ Hash ] opts Options for the distinct command. # # @option opts :max_time_ms [ Integer ] The maximum amount of time to allow the # command to run. # @option opts [ Hash ] :read The read preference options. # @option opts [ Hash ] :collation The collation to use. # @option options [ Object ] :comment A user-provided # comment to attach to this command. # # @return [ Array ] The list of distinct values. # # @since 2.0.0 def distinct(field_name, opts = {}) if field_name.nil? 
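# The distinct command requires a key, so fail fast on the client side # rather than send a command the server would reject.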
raise ArgumentError, 'Field name for distinct operation must not be nil' end opts = @options.merge(opts) unless Mongo.broken_view_options cmd = { :distinct => collection.name, :key => field_name.to_s, :query => filter, } cmd[:maxTimeMS] = opts[:max_time_ms] if opts[:max_time_ms] if read_concern cmd[:readConcern] = Options::Mapper.transform_values_to_strings( read_concern) end Mongo::Lint.validate_underscore_read_preference(opts[:read]) read_pref = opts[:read] || read_preference selector = ServerSelector.get(read_pref || server_selector) with_session(opts) do |session| context = Operation::Context.new( client: client, session: session, operation_timeouts: operation_timeouts(opts) ) read_with_retry(session, selector, context) do |server| Operation::Distinct.new( selector: cmd, db_name: database.name, options: {:limit => -1}, read: read_pref, session: session, comment: opts[:comment], # For some reason collation was historically accepted as a # string key. Note that this isn't documented as valid usage. collation: opts[:collation] || opts['collation'] || collation, ).execute( server, context: context ) end.first['values'] end end # The index that MongoDB will be forced to use for the query. # # @example Set the index hint. # view.hint(name: 1) # # @param [ Hash ] hint The index to use for the query. # # @return [ Hash, View ] Either the hint or a new +View+. # # @since 2.0.0 def hint(hint = nil) configure(:hint, hint) end # The max number of docs to return from the query. # # @example Set the limit. # view.limit(5) # # @param [ Integer ] limit The number of docs to return. # # @return [ Integer, View ] Either the limit or a new +View+. # # @since 2.0.0 def limit(limit = nil) configure(:limit, limit) end # Execute a map/reduce operation on the collection view. # # @example Execute a map/reduce. # view.map_reduce(map, reduce) # # @param [ String ] map The map js function. # @param [ String ] reduce The reduce js function. # @param [ Hash ] options The map/reduce options. # # @return [ MapReduce ] The map reduce wrapper. # # @since 2.0.0 def map_reduce(map, reduce, options = {}) MapReduce.new(self, map, reduce, @options.merge(options)) end # Set the max number of documents to scan. # # @example Set the max scan value. # view.max_scan(1000) # # @param [ Integer ] value The max number to scan. # # @return [ Integer, View ] The value or a new +View+. # # @since 2.0.0 # # @deprecated This option is deprecated as of MongoDB server # version 4.0. def max_scan(value = nil) configure(:max_scan, value) end # Set the maximum value to search. # # @example Set the max value. # view.max_value(_id: 1) # # @param [ Hash ] value The max field and value. # # @return [ Hash, View ] The value or a new +View+. # # @since 2.1.0 def max_value(value = nil) configure(:max_value, value) end # Set the minimum value to search. # # @example Set the min value. # view.min_value(_id: 1) # # @param [ Hash ] value The min field and value. # # @return [ Hash, View ] The value or a new +View+. # # @since 2.1.0 def min_value(value = nil) configure(:min_value, value) end # The server normally times out idle cursors after an inactivity period # (10 minutes) to prevent excess memory use. Set this option to prevent that. # # @example Set the cursor to not timeout. # view.no_cursor_timeout # # @return [ View ] The new view. # # @since 2.0.0 def no_cursor_timeout configure(:no_cursor_timeout, true) end # The fields to include or exclude from each doc in the result set. # # @example Set the fields to include or exclude.
# view.projection(name: 1) # # @note A value of 0 excludes a field from the doc. A value of 1 includes it. # Values must all be 0 or all be 1, with the exception of the _id value. # The _id field is included by default. It must be excluded explicitly. # # @param [ Hash ] document The field and 1 or 0, to include or exclude it. # # @return [ Hash, View ] Either the fields or a new +View+. # # @since 2.0.0 def projection(document = nil) validate_doc!(document) if document configure(:projection, document) end # The read preference to use for the query. # # @note If none is specified for the query, the read preference of the # collection will be used. # # @param [ Hash ] value The read preference mode to use for the query. # # @return [ Symbol, View ] Either the read preference or a # new +View+. # # @since 2.0.0 def read(value = nil) return read_preference if value.nil? configure(:read, value) end # Set whether to return only the indexed field or fields. # # @example Set the return key value. # view.return_key(true) # # @param [ true, false ] value The return key value. # # @return [ true, false, View ] The value or a new +View+. # # @since 2.1.0 def return_key(value = nil) configure(:return_key, value) end # Set whether the disk location should be shown for each document. # # @example Set show disk location option. # view.show_disk_loc(true) # # @param [ true, false ] value The value for the field. # # @return [ true, false, View ] Either the value or a new # +View+. # # @since 2.0.0 def show_disk_loc(value = nil) configure(:show_disk_loc, value) end alias :show_record_id :show_disk_loc # The number of docs to skip before returning results. # # @example Set the number to skip. # view.skip(10) # # @param [ Integer ] number Number of docs to skip. # # @return [ Integer, View ] Either the skip value or a # new +View+. # # @since 2.0.0 def skip(number = nil) configure(:skip, number) end # Set the snapshot value for the view. # # @note When set to true, prevents documents from returning more than # once. # # @example Set the snapshot value. # view.snapshot(true) # # @param [ true, false ] value The snapshot value. # # @since 2.0.0 # # @deprecated This option is deprecated as of MongoDB server # version 4.0. def snapshot(value = nil) configure(:snapshot, value) end # The key and direction pairs by which the result set will be sorted. # # @example Set the sort criteria # view.sort(name: -1) # # @param [ Hash ] spec The attributes and directions to sort by. # # @return [ Hash, View ] Either the sort setting or a # new +View+. # # @since 2.0.0 def sort(spec = nil) configure(:sort, spec) end # If called without arguments or with a nil argument, returns # the legacy (OP_QUERY) server modifiers for the current view. # If called with a non-nil argument, which must be a Hash or a # subclass, merges the provided modifiers into the current view. # Both string and symbol keys are allowed in the input hash. # # @example Set the modifiers document. # view.modifiers(:$orderby => Mongo::Index::ASCENDING) # # @param [ Hash ] doc The modifiers document. # # @return [ Hash, View ] Either the modifiers document or a new +View+. # # @since 2.1.0 def modifiers(doc = nil) if doc.nil? Operation::Find::Builder::Modifiers.map_server_modifiers(options) else new(options.merge(Operation::Find::Builder::Modifiers.map_driver_options(BSON::Document.new(doc)))) end end # A cumulative time limit in milliseconds for processing get more operations # on a cursor. # # @example Set the max await time ms value. 
# view.max_await_time_ms(500) # # @param [ Integer ] max The max time in milliseconds. # # @return [ Integer, View ] Either the max await time ms value or a new +View+. # # @since 2.1.0 def max_await_time_ms(max = nil) configure(:max_await_time_ms, max) end # A cumulative time limit in milliseconds for processing operations on a cursor. # # @example Set the max time ms value. # view.max_time_ms(500) # # @param [ Integer ] max The max time in milliseconds. # # @return [ Integer, View ] Either the max time ms value or a new +View+. # # @since 2.1.0 def max_time_ms(max = nil) configure(:max_time_ms, max) end # The type of cursor to use. Can be :tailable or :tailable_await. # # @example Set the cursor type. # view.cursor_type(:tailable) # # @param [ :tailable, :tailable_await ] type The cursor type. # # @return [ :tailable, :tailable_await, View ] Either the cursor type setting or a new +View+. # # @since 2.3.0 def cursor_type(type = nil) configure(:cursor_type, type) end # The per-operation timeout in milliseconds. Must be a positive integer. # # @param [ Integer ] timeout_ms Timeout value. # # @return [ Integer, View ] Either the timeout_ms value or a new +View+. def timeout_ms(timeout_ms = nil) configure(:timeout_ms, timeout_ms) end # @api private def read_concern if options[:session] && options[:session].in_transaction? options[:session].send(:txn_read_concern) || collection.client.read_concern else collection.read_concern end end # @api private def read_preference @read_preference ||= begin # Operation read preference is always respected, and has the # highest priority. If we are in a transaction, we look at # transaction read preference and default to client, ignoring # collection read preference. If we are not in transaction we # look at collection read preference which defaults to client. rp = if options[:read] options[:read] elsif options[:session] && options[:session].in_transaction? options[:session].txn_read_preference || collection.client.read_preference else collection.read_preference end Lint.validate_underscore_read_preference(rp) rp end end def parallel_scan(cursor_count, options = {}) if options[:session] # The session would be overwritten by the one in +options+ later. session = client.get_session(@options) else session = nil end server = server_selector.select_server(cluster, nil, session) spec = { coll_name: collection.name, db_name: database.name, cursor_count: cursor_count, read_concern: read_concern, session: session, }.update(options) session = spec[:session] op = Operation::ParallelScan.new(spec) # Note that the context object shouldn't be reused for subsequent # GetMore operations. context = Operation::Context.new(client: client, session: session) result = op.execute(server, context: context) result.cursor_ids.map do |cursor_id| spec = { cursor_id: cursor_id, coll_name: collection.name, db_name: database.name, session: session, batch_size: batch_size, to_return: 0, # max_time_ms is not being passed here, I assume intentionally? } op = Operation::GetMore.new(spec) context = Operation::Context.new( client: client, session: session, connection_global_id: result.connection_global_id, ) result = if server.load_balancer? # Connection will be checked in when cursor is drained.
connection = server.pool.check_out(context: context) op.execute_with_connection(connection, context: context) else op.execute(server, context: context) end Cursor.new(self, result, server, session: session) end end private def collation(doc = nil) configure(:collation, doc) end def server_selector @server_selector ||= if options[:session] && options[:session].in_transaction? ServerSelector.get(read_preference || client.server_selector) else ServerSelector.get(read_preference || collection.server_selector) end end def validate_doc!(doc) raise Error::InvalidDocument.new unless doc.respond_to?(:keys) end end end end end mongo-ruby-driver-2.21.3/lib/mongo/collection/view/writable.rb000066400000000000000000000714011505113246500243230ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Collection class View # Defines write related behavior for collection view. # # @since 2.0.0 module Writable # The array filters field constant. # # @since 2.5.0 ARRAY_FILTERS = 'array_filters'.freeze # Finds a single document in the database via findAndModify and deletes # it, returning the original document. # # @example Find one document and delete it. # view.find_one_and_delete # # @param [ Hash ] opts The options. # # @option opts [ Integer ] :max_time_ms The maximum amount of time to allow the command # to run in milliseconds. This option is deprecated, use # :timeout_ms instead. # @option opts [ Hash ] :projection The fields to include or exclude in the returned doc. # @option opts [ Hash ] :sort The key and direction pairs by which the result set # will be sorted. # @option opts [ Hash ] :collation The collation to use. # @option opts [ Session ] :session The session to use. # @option opts [ Hash | String ] :hint The index to use for this operation. # May be specified as a Hash (e.g. { _id: 1 }) or a String (e.g. "_id_"). # @option options [ Integer ] :timeout_ms The operation timeout in milliseconds. # Must be a non-negative integer. An explicit value of 0 means infinite. # The default value is unset which means the value is inherited from # the collection or the database or the client. # @option opts [ Hash ] :write_concern The write concern options. # Can be :w => Integer, :fsync => Boolean, :j => Boolean. # @option opts [ Hash ] :let Mapping of variables to use in the command. # See the server documentation for details. # @option opts [ Object ] :comment A user-provided # comment to attach to this command. # # @return [ BSON::Document, nil ] The document, if found. 
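# # @note The delete and the return of the original document happen # atomically on the server: as the body below shows, this is a single # findAndModify command issued with remove: true.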
# # @since 2.0.0 def find_one_and_delete(opts = {}) with_session(opts) do |session| write_concern = if opts[:write_concern] WriteConcern.get(opts[:write_concern]) else write_concern_with_session(session) end QueryCache.clear_namespace(collection.namespace) cmd = { findAndModify: collection.name, query: filter, remove: true, fields: projection, sort: sort, maxTimeMS: max_time_ms, bypassDocumentValidation: opts[:bypass_document_validation], hint: opts[:hint], collation: opts[:collation] || opts['collation'] || collation, let: opts[:let], comment: opts[:comment], }.compact context = Operation::Context.new( client: client, session: session, operation_timeouts: operation_timeouts(opts) ) write_with_retry(write_concern, context: context) do |connection, txn_num, context| gte_4_4 = connection.server.description.server_version_gte?('4.4') if !gte_4_4 && opts[:hint] && write_concern && !write_concern.acknowledged? raise Error::UnsupportedOption.hint_error(unacknowledged_write: true) end Operation::WriteCommand.new( selector: cmd, db_name: database.name, write_concern: write_concern, session: session, txn_num: txn_num, ).execute_with_connection(connection, context: context) end end.first&.fetch('value', nil) end # Finds a single document and replaces it. # # @example Find a document and replace it, returning the original. # view.find_one_and_replace({ name: 'test' }, :return_document => :before) # # @example Find a document and replace it, returning the new document. # view.find_one_and_replace({ name: 'test' }, :return_document => :after) # # @param [ BSON::Document ] replacement The replacement. # @param [ Hash ] opts The options. # # @option opts [ Symbol ] :return_document Either :before or :after. # @option opts [ true, false ] :upsert Whether to upsert if the document doesn't exist. # @option opts [ true, false ] :bypass_document_validation Whether or # not to skip document level validation. # @option opts [ Hash ] :collation The collation to use. # @option opts [ Hash | String ] :hint The index to use for this operation. # May be specified as a Hash (e.g. { _id: 1 }) or a String (e.g. "_id_"). # @option options [ Integer ] :timeout_ms The operation timeout in milliseconds. # Must be a non-negative integer. An explicit value of 0 means infinite. # The default value is unset which means the value is inherited from # the collection or the database or the client. # @option opts [ Hash ] :write_concern The write concern options. # Can be :w => Integer, :fsync => Boolean, :j => Boolean. # @option opts [ Hash ] :let Mapping of variables to use in the command. # See the server documentation for details. # # @return [ BSON::Document ] The document. # # @since 2.0.0 def find_one_and_replace(replacement, opts = {}) find_one_and_update(replacement, opts) end # Finds a single document and updates it. # # @example Find a document and update it, returning the original. # view.find_one_and_update({ "$set" => { name: 'test' }}, :return_document => :before) # # @param [ BSON::Document ] document The updates. # @param [ Hash ] opts The options. # # @option opts [ Integer ] :max_time_ms The maximum amount of time to allow the command # to run in milliseconds. This option is deprecated, use # :timeout_ms instead. # @option opts [ Hash ] :projection The fields to include or exclude in the returned doc. # @option opts [ Hash ] :sort The key and direction pairs by which the result set # will be sorted. # @option opts [ Symbol ] :return_document Either :before or :after. 
# @option opts [ true, false ] :upsert Whether to upsert if the document doesn't exist. # @option opts [ true, false ] :bypass_document_validation Whether or # not to skip document level validation. # @option opts [ Hash ] :collation The collation to use. # @option opts [ Array ] :array_filters A set of filters specifying to which array elements # an update should apply. # @option opts [ Session ] :session The session to use. # @option options [ Integer ] :timeout_ms The operation timeout in milliseconds. # Must be a non-negative integer. An explicit value of 0 means infinite. # The default value is unset which means the value is inherited from # the collection or the database or the client. # @option opts [ Hash | String ] :hint The index to use for this operation. # May be specified as a Hash (e.g. { _id: 1 }) or a String (e.g. "_id_"). # @option opts [ Hash ] :write_concern The write concern options. # Can be :w => Integer, :fsync => Boolean, :j => Boolean. # @option opts [ Hash ] :let Mapping of variables to use in the command. # See the server documentation for details. # @option opts [ Object ] :comment A user-provided # comment to attach to this command. # # @return [ BSON::Document | nil ] The document or nil if none is found. # # @since 2.0.0 def find_one_and_update(document, opts = {}) value = with_session(opts) do |session| write_concern = if opts[:write_concern] WriteConcern.get(opts[:write_concern]) else write_concern_with_session(session) end QueryCache.clear_namespace(collection.namespace) cmd = { findAndModify: collection.name, query: filter, arrayFilters: opts[:array_filters] || opts['array_filters'], update: document, fields: projection, sort: sort, new: !!(opts[:return_document] && opts[:return_document] == :after), upsert: opts[:upsert], maxTimeMS: max_time_ms, bypassDocumentValidation: opts[:bypass_document_validation], hint: opts[:hint], collation: opts[:collation] || opts['collation'] || collation, let: opts[:let], comment: opts[:comment] }.compact context = Operation::Context.new( client: client, session: session, operation_timeouts: operation_timeouts(opts) ) write_with_retry(write_concern, context: context) do |connection, txn_num, context| gte_4_4 = connection.server.description.server_version_gte?('4.4') if !gte_4_4 && opts[:hint] && write_concern && !write_concern.acknowledged? raise Error::UnsupportedOption.hint_error(unacknowledged_write: true) end Operation::WriteCommand.new( selector: cmd, db_name: database.name, write_concern: write_concern, session: session, txn_num: txn_num, ).execute_with_connection(connection, context: context) end end.first&.fetch('value', nil) value unless value.nil? || value.empty? end # Remove documents from the collection. # # @example Remove multiple documents from the collection. # collection_view.delete_many # # @param [ Hash ] opts The options. # # @option opts [ Hash ] :collation The collation to use. # @option opts [ Session ] :session The session to use. # @option options [ Integer ] :timeout_ms The operation timeout in milliseconds. # Must be a non-negative integer. An explicit value of 0 means infinite. # The default value is unset which means the value is inherited from # the collection or the database or the client. # @option opts [ Hash | String ] :hint The index to use for this operation. # May be specified as a Hash (e.g. { _id: 1 }) or a String (e.g. "_id_"). # @option opts [ Hash ] :write_concern The write concern options. # Can be :w => Integer, :fsync => Boolean, :j => Boolean. 
# @option opts [ Hash ] :let Mapping of variables to use in the command. # See the server documentation for details. # @option opts [ Object ] :comment A user-provided # comment to attach to this command. # # @return [ Result ] The response from the database. # # @since 2.0.0 def delete_many(opts = {}) with_session(opts) do |session| write_concern = if opts[:write_concern] WriteConcern.get(opts[:write_concern]) else write_concern_with_session(session) end QueryCache.clear_namespace(collection.namespace) delete_doc = { Operation::Q => filter, Operation::LIMIT => 0, hint: opts[:hint], collation: opts[:collation] || opts['collation'] || collation, }.compact context = Operation::Context.new( client: client, session: session, operation_timeouts: operation_timeouts(opts) ) nro_write_with_retry(write_concern, context: context) do |connection, txn_num, context| gte_4_4 = connection.server.description.server_version_gte?('4.4') if !gte_4_4 && opts[:hint] && write_concern && !write_concern.acknowledged? raise Error::UnsupportedOption.hint_error(unacknowledged_write: true) end Operation::Delete.new( deletes: [ delete_doc ], db_name: collection.database.name, coll_name: collection.name, write_concern: write_concern, bypass_document_validation: !!opts[:bypass_document_validation], session: session, let: opts[:let], comment: opts[:comment], ).execute_with_connection(connection, context: context) end end end # Remove a document from the collection. # # @example Remove a single document from the collection. # collection_view.delete_one # # @param [ Hash ] opts The options. # # @option opts [ Hash ] :collation The collation to use. # @option opts [ Object ] :comment A user-provided # comment to attach to this command. # @option opts [ Hash | String ] :hint The index to use for this operation. # May be specified as a Hash (e.g. { _id: 1 }) or a String (e.g. "_id_"). # @option opts [ Hash ] :let Mapping of variables to use in the command. # See the server documentation for details. # @option opts [ Session ] :session The session to use. # @option options [ Integer ] :timeout_ms The operation timeout in milliseconds. # Must be a non-negative integer. An explicit value of 0 means infinite. # The default value is unset which means the value is inherited from # the collection or the database or the client. # @option opts [ Hash ] :write_concern The write concern options. # Can be :w => Integer, :fsync => Boolean, :j => Boolean. # # @return [ Result ] The response from the database. # # @since 2.0.0 def delete_one(opts = {}) with_session(opts) do |session| write_concern = if opts[:write_concern] WriteConcern.get(opts[:write_concern]) else write_concern_with_session(session) end QueryCache.clear_namespace(collection.namespace) delete_doc = { Operation::Q => filter, Operation::LIMIT => 1, hint: opts[:hint], collation: opts[:collation] || opts['collation'] || collation, }.compact context = Operation::Context.new( client: client, session: session, operation_timeouts: operation_timeouts(opts) ) write_with_retry(write_concern, context: context) do |connection, txn_num, context| gte_4_4 = connection.server.description.server_version_gte?('4.4') if !gte_4_4 && opts[:hint] && write_concern && !write_concern.acknowledged? 
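# Servers older than 4.4 do not support the hint option for deletes, and # an unacknowledged write returns no server response that could surface # the problem, so raise client-side instead of silently dropping the option.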
raise Error::UnsupportedOption.hint_error(unacknowledged_write: true) end Operation::Delete.new( deletes: [ delete_doc ], db_name: collection.database.name, coll_name: collection.name, write_concern: write_concern, bypass_document_validation: !!opts[:bypass_document_validation], session: session, txn_num: txn_num, let: opts[:let], comment: opts[:comment], ).execute_with_connection(connection, context: context) end end end # Replaces a single document in the database with the new document. # # @example Replace a single document. # collection_view.replace_one({ name: 'test' }) # # @param [ Hash ] replacement The replacement document. # @param [ Hash ] opts The options. # # @option opts [ true, false ] :bypass_document_validation Whether or # not to skip document level validation. # @option opts [ Hash ] :collation The collation to use. # @option opts [ Object ] :comment A user-provided # comment to attach to this command. # @option opts [ Hash | String ] :hint The index to use for this operation. # May be specified as a Hash (e.g. { _id: 1 }) or a String (e.g. "_id_"). # @option opts [ Hash ] :let Mapping of variables to use in the command. # See the server documentation for details. # @option opts [ Session ] :session The session to use. # @option options [ Integer ] :timeout_ms The operation timeout in milliseconds. # Must be a non-negative integer. An explicit value of 0 means infinite. # The default value is unset which means the value is inherited from # the collection or the database or the client. # @option opts [ true, false ] :upsert Whether to upsert if the # document doesn't exist. # @option opts [ Hash ] :write_concern The write concern options. # Can be :w => Integer, :fsync => Boolean, :j => Boolean. # # @return [ Result ] The response from the database. # # @since 2.0.0 def replace_one(replacement, opts = {}) with_session(opts) do |session| write_concern = if opts[:write_concern] WriteConcern.get(opts[:write_concern]) else write_concern_with_session(session) end validate_replacement_documents!(replacement) QueryCache.clear_namespace(collection.namespace) update_doc = { Operation::Q => filter, arrayFilters: opts[:array_filters] || opts['array_filters'], Operation::U => replacement, hint: opts[:hint], collation: opts[:collation] || opts['collation'] || collation, }.compact if opts[:upsert] update_doc['upsert'] = true end context = Operation::Context.new( client: client, session: session, operation_timeouts: operation_timeouts(opts) ) write_with_retry(write_concern, context: context) do |connection, txn_num, context| gte_4_2 = connection.server.description.server_version_gte?('4.2') if !gte_4_2 && opts[:hint] && write_concern && !write_concern.acknowledged? raise Error::UnsupportedOption.hint_error(unacknowledged_write: true) end Operation::Update.new( updates: [ update_doc ], db_name: collection.database.name, coll_name: collection.name, write_concern: write_concern, bypass_document_validation: !!opts[:bypass_document_validation], session: session, txn_num: txn_num, let: opts[:let], comment: opts[:comment], ).execute_with_connection(connection, context: context) end end end # Update documents in the collection. # # @example Update multiple documents in the collection. # collection_view.update_many('$set' => { name: 'test' }) # # @param [ Hash | Array ] spec The update document or pipeline. # @param [ Hash ] opts The options. # # @option opts [ Array ] :array_filters A set of filters specifying to # which array elements an update should apply.
# @option opts [ true, false ] :bypass_document_validation Whether or # not to skip document level validation. # @option opts [ Hash ] :collation The collation to use. # @option opts [ Object ] :comment A user-provided # comment to attach to this command. # @option opts [ Hash | String ] :hint The index to use for this operation. # May be specified as a Hash (e.g. { _id: 1 }) or a String (e.g. "_id_"). # @option opts [ Hash ] :let Mapping of variables to use in the command. # See the server documentation for details. # @option opts [ Session ] :session The session to use. # @option options [ Integer ] :timeout_ms The operation timeout in milliseconds. # Must be a non-negative integer. An explicit value of 0 means infinite. # The default value is unset which means the value is inherited from # the collection or the database or the client. # @option opts [ true, false ] :upsert Whether to upsert if the # document doesn't exist. # @option opts [ Hash ] :write_concern The write concern options. # Can be :w => Integer, :fsync => Boolean, :j => Boolean. # # @return [ Result ] The response from the database. # # @since 2.0.0 def update_many(spec, opts = {}) with_session(opts) do |session| write_concern = if opts[:write_concern] WriteConcern.get(opts[:write_concern]) else write_concern_with_session(session) end validate_update_documents!(spec) QueryCache.clear_namespace(collection.namespace) update_doc = { Operation::Q => filter, arrayFilters: opts[:array_filters] || opts['array_filters'], Operation::U => spec, Operation::MULTI => true, hint: opts[:hint], collation: opts[:collation] || opts['collation'] || collation, }.compact if opts[:upsert] update_doc['upsert'] = true end context = Operation::Context.new( client: client, session: session, operation_timeouts: operation_timeouts(opts) ) nro_write_with_retry(write_concern, context: context) do |connection, txn_num, context| gte_4_2 = connection.server.description.server_version_gte?('4.2') if !gte_4_2 && opts[:hint] && write_concern && !write_concern.acknowledged? raise Error::UnsupportedOption.hint_error(unacknowledged_write: true) end Operation::Update.new( updates: [ update_doc ], db_name: collection.database.name, coll_name: collection.name, write_concern: write_concern, bypass_document_validation: !!opts[:bypass_document_validation], session: session, let: opts[:let], comment: opts[:comment], ).execute_with_connection(connection, context: context) end end end # Update a single document in the collection. # # @example Update a single document in the collection. # collection_view.update_one('$set' => { name: 'test' }) # # @param [ Hash | Array ] spec The update document or pipeline. # @param [ Hash ] opts The options. # # @option opts [ Array ] :array_filters A set of filters specifying to # which array elements an update should apply. # @option opts [ true, false ] :bypass_document_validation Whether or # not to skip document level validation. # @option opts [ Hash ] :collation The collation to use. # @option opts [ Object ] :comment A user-provided # comment to attach to this command. # @option opts [ Hash | String ] :hint The index to use for this operation. # May be specified as a Hash (e.g. { _id: 1 }) or a String (e.g. "_id_"). # @option opts [ Hash ] :let Mapping of variables to use in the command. # See the server documentation for details. # @option opts [ Session ] :session The session to use. # @option options [ Integer ] :timeout_ms The operation timeout in milliseconds. # Must be a non-negative integer. 
An explicit value of 0 means infinite. # The default value is unset which means the value is inherited from # the collection or the database or the client. # @option opts [ true, false ] :upsert Whether to upsert if the # document doesn't exist. # @option opts [ Hash ] :write_concern The write concern options. # Can be :w => Integer, :fsync => Boolean, :j => Boolean. # # @return [ Result ] The response from the database. # # @since 2.0.0 def update_one(spec, opts = {}) with_session(opts) do |session| write_concern = if opts[:write_concern] WriteConcern.get(opts[:write_concern]) else write_concern_with_session(session) end validate_update_documents!(spec) QueryCache.clear_namespace(collection.namespace) update_doc = { Operation::Q => filter, arrayFilters: opts[:array_filters] || opts['array_filters'], Operation::U => spec, hint: opts[:hint], collation: opts[:collation] || opts['collation'] || collation, }.compact if opts[:upsert] update_doc['upsert'] = true end context = Operation::Context.new( client: client, session: session, operation_timeouts: operation_timeouts(opts) ) write_with_retry(write_concern, context: context) do |connection, txn_num, context| gte_4_2 = connection.server.description.server_version_gte?('4.2') if !gte_4_2 && opts[:hint] && write_concern && !write_concern.acknowledged? raise Error::UnsupportedOption.hint_error(unacknowledged_write: true) end Operation::Update.new( updates: [ update_doc ], db_name: collection.database.name, coll_name: collection.name, write_concern: write_concern, bypass_document_validation: !!opts[:bypass_document_validation], session: session, txn_num: txn_num, let: opts[:let], comment: opts[:comment], ).execute_with_connection(connection, context: context) end end end private # Checks the update documents to make sure they only have atomic modifiers. # Note that as per the spec, we only have to examine the first element # in the update document. # # @param [ Hash | Array ] spec The update document or pipeline. # # @raise [ Error::InvalidUpdateDocument ] if the first key in the # document does not start with a $. def validate_update_documents!(spec) if update = spec.is_a?(Array) ? spec&.first : spec if key = update.keys&.first unless key.to_s.start_with?("$") if Mongo.validate_update_replace raise Error::InvalidUpdateDocument.new(key: key) else Error::InvalidUpdateDocument.warn(Logger.logger, key) end end end end end # Check the replacement documents to make sure they don't have atomic # modifiers. Note that as per the spec, we only have to examine the # first element in the replacement document. # # @param [ Hash | Array ] spec The replacement document or pipeline. # # @raise [ Error::InvalidReplacementDocument ] if the first key in the # document starts with a $. def validate_replacement_documents!(spec) if replace = spec.is_a?(Array) ? spec&.first : spec if key = replace.keys&.first if key.to_s.start_with?("$") if Mongo.validate_update_replace raise Error::InvalidReplacementDocument.new(key: key) else Error::InvalidReplacementDocument.warn(Logger.logger, key) end end end end end end end end end mongo-ruby-driver-2.21.3/lib/mongo/condition_variable.rb000066400000000000000000000027601505113246500232420ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License.
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo # This is an implementation of a condition variable. # # @api private class ConditionVariable extend Forwardable def initialize(lock = Mutex.new) @lock = lock @cv = ::ConditionVariable.new end # Waits for the condition variable to be signaled up to timeout seconds. # If condition variable is not signaled, returns after timeout seconds. def wait(timeout = nil) raise_unless_locked! return false if timeout && timeout < 0 @cv.wait(@lock, timeout) end def broadcast raise_unless_locked! @cv.broadcast end def signal raise_unless_locked! @cv.signal end def_delegators :@lock, :synchronize private def raise_unless_locked! unless @lock.owned? raise ArgumentError, "the lock must be owned when calling this method" end end end end mongo-ruby-driver-2.21.3/lib/mongo/config.rb000066400000000000000000000024131505113246500206470ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require "mongo/config/options" require "mongo/config/validators/option" module Mongo # This module defines configuration options for Mongo. # # @api private module Config extend Forwardable extend Options extend self # When this flag is off, an aggregation done on a view will be executed over # the documents included in that view, instead of all documents in the # collection. When this flag is on, the view filter is ignored. option :broken_view_aggregate, default: true # When this flag is set to false, the view options will be correctly # propagated to readable methods. option :broken_view_options, default: true # When this flag is set to true, the update and replace methods will # validate the parameters and raise an error if they are invalid. option :validate_update_replace, default: false # Set the configuration options. # # @example Set the options. # config.options = { validate_update_replace: true } # # @param [ Hash ] options The configuration options. def options=(options) options.each_pair do |option, value| Validators::Option.validate(option) send("#{option}=", value) end end end end mongo-ruby-driver-2.21.3/lib/mongo/config/000077500000000000000000000000001505113246500203225ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/lib/mongo/config/options.rb000066400000000000000000000027601505113246500223470ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all module Mongo module Config # Encapsulates logic for setting options. module Options # Get the defaults or initialize a new empty hash. # # @return [ Hash ] The default options. def defaults @defaults ||= {} end # Define a configuration option with a default. # # @param [ Symbol ] name The name of the configuration option. # @param [ Hash ] options Extras for the option. # # @option options [ Object ] :default The default value. def option(name, options = {}) defaults[name] = settings[name] = options[:default] class_eval do # log_level accessor is defined specially below define_method(name) do settings[name] end define_method("#{name}=") do |value| settings[name] = value end define_method("#{name}?") do !!send(name) end end end # Reset the configuration options to the defaults. 
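# Note that the settings hash is mutated in place (via Hash#replace, as # seen in the body below), so any existing references to it observe the # reset values.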
# # @example Reset the configuration options. # config.reset # # @return [ Hash ] The defaults. def reset settings.replace(defaults) end # Get the settings or initialize a new empty hash. # # @example Get the settings. # options.settings # # @return [ Hash ] The setting options. def settings @settings ||= {} end end end end mongo-ruby-driver-2.21.3/lib/mongo/config/validators/000077500000000000000000000000001505113246500224725ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/lib/mongo/config/validators/option.rb000066400000000000000000000011301505113246500243220ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all module Mongo module Config module Validators # Validator for configuration options. # # @api private module Option extend self # Validate a configuration option. # # @example Validate a configuration option. # Option.validate(:validate_update_replace) # # @param [ String | Symbol ] option The name of the option. def validate(option) unless Config.settings.keys.include?(option.to_sym) raise Mongo::Error::InvalidConfigOption.new(option) end end end end end end mongo-ruby-driver-2.21.3/lib/mongo/crypt.rb000066400000000000000000000043711505113246500205500ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2019-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Crypt autoload(:Binding, 'mongo/crypt/binding') autoload(:Binary, 'mongo/crypt/binary') autoload(:Status, 'mongo/crypt/status') autoload(:Hooks, 'mongo/crypt/hooks') autoload(:Handle, 'mongo/crypt/handle') autoload(:KmsContext, 'mongo/crypt/kms_context') autoload(:Context, 'mongo/crypt/context') autoload(:DataKeyContext, 'mongo/crypt/data_key_context') autoload(:ExplicitEncryptionContext, 'mongo/crypt/explicit_encryption_context') autoload(:ExplicitEncryptionExpressionContext, 'mongo/crypt/explicit_encryption_expression_context') autoload(:AutoEncryptionContext, 'mongo/crypt/auto_encryption_context') autoload(:ExplicitDecryptionContext, 'mongo/crypt/explicit_decryption_context') autoload(:AutoDecryptionContext, 'mongo/crypt/auto_decryption_context') autoload(:RewrapManyDataKeyContext, 'mongo/crypt/rewrap_many_data_key_context') autoload(:RewrapManyDataKeyResult, 'mongo/crypt/rewrap_many_data_key_result') autoload(:EncryptionIO, 'mongo/crypt/encryption_io') autoload(:ExplicitEncrypter, 'mongo/crypt/explicit_encrypter') autoload(:AutoEncrypter, 'mongo/crypt/auto_encrypter') autoload(:KMS, 'mongo/crypt/kms') def validate_ffi! return if defined?(FFI) require 'ffi' rescue LoadError => e raise Error::UnmetDependency, 'Cannot enable encryption because the ffi gem ' \ "has not been installed. Add \"gem 'ffi'\" to your Gemfile and run " \ "\"bundle install\" to install the gem. (#{e.class}: #{e})" end module_function :validate_ffi!
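# A hedged usage sketch (illustrative only, not a documented API contract): # validate_ffi! is meant to be called before any encryption machinery is # constructed, so that a missing ffi gem fails fast with a clear error: # # begin # Mongo::Crypt.validate_ffi! # rescue Mongo::Error::UnmetDependency => e # warn "client-side encryption requires the ffi gem: #{e.message}" # end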
end end mongo-ruby-driver-2.21.3/lib/mongo/crypt/000077500000000000000000000000001505113246500202165ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/lib/mongo/crypt/auto_decryption_context.rb000066400000000000000000000025551505113246500255260ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2019-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Crypt # A Context object initialized for auto decryption # # @api private class AutoDecryptionContext < Context # Create a new AutoDecryptionContext object # # @param [ Mongo::Crypt::Handle ] mongocrypt a Handle that # wraps a mongocrypt_t object used to create a new mongocrypt_ctx_t. # @param [ ClientEncryption::IO ] io An instance of the IO class # that implements driver I/O methods required to run the # state machine. # @param [ Hash ] command The command to be decrypted. def initialize(mongocrypt, io, command) super(mongocrypt, io) @command = command Binding.ctx_decrypt_init(self, @command) end end end end mongo-ruby-driver-2.21.3/lib/mongo/crypt/auto_encrypter.rb000066400000000000000000000301401505113246500236040ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2019-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Crypt # An AutoEncrypter is an object that encapsulates the behavior of # automatic encryption. It controls all resources associated with # auto-encryption, including the libmongocrypt handle, key vault client # object, mongocryptd client object, and encryption I/O. # # The AutoEncrypter is kept as an instance on a Mongo::Client. Client # objects with the same auto_encryption_options Hash may share # AutoEncrypters. # # @api private class AutoEncrypter attr_reader :mongocryptd_client attr_reader :key_vault_client attr_reader :metadata_client attr_reader :options # A Hash of default values for the :extra_options option DEFAULT_EXTRA_OPTIONS = Options::Redacted.new({ mongocryptd_uri: 'mongodb://localhost:27020', mongocryptd_bypass_spawn: false, mongocryptd_spawn_path: 'mongocryptd', mongocryptd_spawn_args: ['--idleShutdownTimeoutSecs=60'], }) # Create a new AutoEncrypter and set up its encryption-related options # and instance variables. # # @param [ Hash ] options # # @option options [ Mongo::Client ] :client A client connected to the # encrypted collection.
# @option options [ Mongo::Client | nil ] :key_vault_client A client connected # to the MongoDB instance containing the encryption key vault; optional. # If not provided, defaults to the :client option. # @option options [ String ] :key_vault_namespace The namespace of the key # vault in the format database.collection. # @option options [ Hash | nil ] :schema_map The JSONSchema of the collection(s) # with encrypted fields. This option is mutually exclusive with :schema_map_path. # @option options [ String | nil ] :schema_map_path A path to a file that contains the JSON schema # of the collection that stores auto encrypted documents. This option is # mutually exclusive with :schema_map. # @option options [ Boolean | nil ] :bypass_auto_encryption When true, disables # auto-encryption. Default is false. # @option options [ Hash | nil ] :extra_options Options related to spawning # mongocryptd. These are set to default values if no option is passed in. # @option options [ Hash ] :kms_providers A hash of key management service # configuration information. # @see Mongo::Crypt::KMS::Credentials for a list of options for every # supported provider. # @note There may be more than one KMS provider specified. # @option options [ Hash ] :kms_tls_options TLS options to connect to KMS # providers. Keys of the hash should be KMS provider names; values # should be hashes of TLS connection options. The options are equivalent # to TLS connection options of Mongo::Client. # @see Mongo::Client#initialize for list of TLS options. # @option options [ Hash | nil ] :encrypted_fields_map maps a collection # namespace to an encryptedFields. # - Note: If a collection is present on both the encryptedFieldsMap # and schemaMap, an error will be raised. # @option options [ Boolean | nil ] :bypass_query_analysis When true, # disables automatic analysis of outgoing commands. # @option options [ String | nil ] :crypt_shared_lib_path Path that should # be used to load the crypt shared library. Providing this option # overrides default crypt shared library load paths for libmongocrypt. # @option options [ Boolean | nil ] :crypt_shared_lib_required Whether # crypt shared library is required. If 'true', an error will be raised # if a crypt_shared library cannot be loaded by libmongocrypt. # # @raise [ ArgumentError ] If required options are missing or incorrectly # formatted. def initialize(options) Crypt.validate_ffi! # Note that this call may eventually, via other method invocations, # create additional clients which have to be cleaned up. @options = set_default_options(options).freeze @crypt_handle = Crypt::Handle.new( Crypt::KMS::Credentials.new(@options[:kms_providers]), Crypt::KMS::Validations.validate_tls_options(@options[:kms_tls_options]), schema_map: @options[:schema_map], schema_map_path: @options[:schema_map_path], encrypted_fields_map: @options[:encrypted_fields_map], bypass_query_analysis: @options[:bypass_query_analysis], crypt_shared_lib_path: @options[:extra_options][:crypt_shared_lib_path], crypt_shared_lib_required: @options[:extra_options][:crypt_shared_lib_required], ) @mongocryptd_options = @options[:extra_options].slice( :mongocryptd_uri, :mongocryptd_bypass_spawn, :mongocryptd_spawn_path, :mongocryptd_spawn_args ) @mongocryptd_options[:mongocryptd_bypass_spawn] = @options[:bypass_auto_encryption] || @options[:extra_options][:mongocryptd_bypass_spawn] || @crypt_handle.crypt_shared_lib_available?
|| @options[:extra_options][:crypt_shared_lib_required] unless @options[:extra_options][:crypt_shared_lib_required] || @crypt_handle.crypt_shared_lib_available? || @options[:bypass_query_analysis] @mongocryptd_client = Client.new( @options[:extra_options][:mongocryptd_uri], monitoring_io: @options[:client].options[:monitoring_io], populator_io: @options[:client].options[:populator_io], server_selection_timeout: 10, database: @options[:client].options[:database] ) end begin @encryption_io = EncryptionIO.new( client: @options[:client], mongocryptd_client: @mongocryptd_client, key_vault_namespace: @options[:key_vault_namespace], key_vault_client: @key_vault_client, metadata_client: @metadata_client, mongocryptd_options: @mongocryptd_options ) rescue begin @mongocryptd_client&.close rescue => e log_warn("Error closing mongocryptd client in auto encrypter's constructor: #{e.class}: #{e}") # Drop this exception so that the original exception is raised end raise end rescue if @key_vault_client && @key_vault_client != options[:client] && @key_vault_client.cluster != options[:client].cluster then begin @key_vault_client.close rescue => e log_warn("Error closing key vault client in auto encrypter's constructor: #{e.class}: #{e}") # Drop this exception so that the original exception is raised end end if @metadata_client && @metadata_client != options[:client] && @metadata_client.cluster != options[:client].cluster then begin @metadata_client.close rescue => e log_warn("Error closing metadata client in auto encrypter's constructor: #{e.class}: #{e}") # Drop this exception so that the original exception is raised end end raise end # Whether this encrypter should perform encryption (returns false if # the :bypass_auto_encryption option is set to true). # # @return [ Boolean ] Whether to perform encryption. def encrypt? !@options[:bypass_auto_encryption] end # Encrypt a database command. # # @param [ String ] database_name The name of the database on which the # command is being run. # @param [ Hash ] command The command to be encrypted. # @param timeout_holder The timeout holder that tracks the client-side # operation timeout for this command. # # @return [ BSON::Document ] The encrypted command. def encrypt(database_name, command, timeout_holder) AutoEncryptionContext.new( @crypt_handle, @encryption_io, database_name, command ).run_state_machine(timeout_holder) end # Decrypt a database command. # # @param [ Hash ] command The command with encrypted fields. # @param timeout_holder The timeout holder that tracks the client-side # operation timeout for this command. # # @return [ BSON::Document ] The decrypted command. def decrypt(command, timeout_holder) AutoDecryptionContext.new( @crypt_handle, @encryption_io, command ).run_state_machine(timeout_holder) end # Close the resources created by the AutoEncrypter. # # @return [ true ] Always true.
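# @example A hypothetical teardown sketch (the options hash is assumed): # encrypter = Mongo::Crypt::AutoEncrypter.new(options) # begin # # ... run encrypted operations ... # ensure # encrypter.close # releases mongocryptd, key vault, and metadata clients # end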
def close @mongocryptd_client.close if @mongocryptd_client if @key_vault_client && @key_vault_client != options[:client] && @key_vault_client.cluster != options[:client].cluster then @key_vault_client.close end if @metadata_client && @metadata_client != options[:client] && @metadata_client.cluster != options[:client].cluster then @metadata_client.close end true end private # Returns a new set of options with the following changes: # - sets default values for all extra_options # - adds --idleShutdownTimeoutSecs=60 to extra_options[:mongocryptd_spawn_args] # if not already present # - sets bypass_auto_encryption to false # - sets default key vault client def set_default_options(options) opts = options.dup extra_options = opts.delete(:extra_options) || Options::Redacted.new extra_options = DEFAULT_EXTRA_OPTIONS.merge(extra_options) has_timeout_string_arg = extra_options[:mongocryptd_spawn_args].any? do |elem| elem.is_a?(String) && elem.match(/\A--idleShutdownTimeoutSecs=\d+\z/) end timeout_int_arg_idx = extra_options[:mongocryptd_spawn_args].index('--idleShutdownTimeoutSecs') has_timeout_int_arg = timeout_int_arg_idx && extra_options[:mongocryptd_spawn_args][timeout_int_arg_idx + 1].is_a?(Integer) unless has_timeout_string_arg || has_timeout_int_arg extra_options[:mongocryptd_spawn_args] << '--idleShutdownTimeoutSecs=60' end opts[:bypass_auto_encryption] ||= false set_or_create_clients(opts) opts[:key_vault_client] = @key_vault_client Options::Redacted.new(opts).merge(extra_options: extra_options) end # Create additional clients for auto encryption, if necessary # # @param [ Hash ] options Auto encryption options. def set_or_create_clients(options) client = options[:client] @key_vault_client = if options[:key_vault_client] options[:key_vault_client] elsif client.options[:max_pool_size] == 0 client else internal_client(client) end @metadata_client = if options[:bypass_auto_encryption] nil elsif client.options[:max_pool_size] == 0 client else internal_client(client) end end # Creates or returns an already created internal client to be used for # auto encryption. # # @param [ Mongo::Client ] client A client connected to the # encrypted collection. # # @return [ Mongo::Client ] Client to be used as internal client for # auto encryption. def internal_client(client) @internal_client ||= client.with( auto_encryption_options: nil, min_pool_size: 0, monitoring: client.send(:monitoring), ) end end end end mongo-ruby-driver-2.21.3/lib/mongo/crypt/auto_encryption_context.rb000066400000000000000000000030751505113246500255360ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2019-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License.
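# A hedged sketch of how a context like the one below is typically driven # (handle, io, cmd, and timeout_holder are assumed placeholders; # run_state_machine is provided by the parent Context class): # # context = Mongo::Crypt::AutoEncryptionContext.new(handle, io, 'db', cmd) # encrypted_cmd = context.run_state_machine(timeout_holder)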
module Mongo module Crypt # A Context object initialized for auto encryption # # @api private class AutoEncryptionContext < Context # Create a new AutoEncryptionContext object # # @param [ Mongo::Crypt::Handle ] mongocrypt a Handle that # wraps a mongocrypt_t object used to create a new mongocrypt_ctx_t # @param [ ClientEncryption::IO ] io An instance of the IO class # that implements driver I/O methods required to run the # state machine # @param [ String ] db_name The name of the database against which # the command is being made # @param [ Hash ] command The command to be encrypted def initialize(mongocrypt, io, db_name, command) super(mongocrypt, io) @db_name = db_name @command = command # Initialize the ctx object for auto encryption Binding.ctx_encrypt_init(self, @db_name, @command) end end end end mongo-ruby-driver-2.21.3/lib/mongo/crypt/binary.rb000066400000000000000000000127441505113246500220370ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2019-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'ffi' module Mongo module Crypt # A wrapper around mongocrypt_binary_t, a non-owning buffer of # uint8 byte data. Each Binary instance keeps a copy of the data # passed to it in order to keep that data alive. # # @api private class Binary # Create a new Binary object that wraps a byte string # # @param [ String ] data The data string wrapped by the # byte buffer (optional) # @param [ FFI::Pointer ] pointer A pointer to an existing # mongocrypt_binary_t object # # @note When initializing a Binary object with a string or a pointer, # it is recommended that you use the #self.from_pointer or #self.from_data # methods def initialize(data: nil, pointer: nil) if data # Represent data string as array of uint8 bytes bytes = data.unpack('C*') # FFI::MemoryPointer automatically frees memory when it goes out of scope @data_p = FFI::MemoryPointer.new(bytes.length) .write_array_of_uint8(bytes) # FFI::AutoPointer uses a custom release strategy to automatically free # the pointer once this object goes out of scope @bin = FFI::AutoPointer.new( Binding.mongocrypt_binary_new_from_data(@data_p, bytes.length), Binding.method(:mongocrypt_binary_destroy) ) elsif pointer # If the Binary class is used this way, it means that the pointer # for the underlying mongocrypt_binary_t object is allocated somewhere # else. It is not the responsibility of this class to de-allocate data. @bin = pointer else # FFI::AutoPointer uses a custom release strategy to automatically free # the pointer once this object goes out of scope @bin = FFI::AutoPointer.new( Binding.mongocrypt_binary_new, Binding.method(:mongocrypt_binary_destroy) ) end end # Initialize a Binary object from an existing pointer to a mongocrypt_binary_t # object. # # @param [ FFI::Pointer ] pointer A pointer to an existing # mongocrypt_binary_t object # # @return [ Mongo::Crypt::Binary ] A new binary object def self.from_pointer(pointer) self.new(pointer: pointer) end # Initialize a Binary object with a string.
The Binary object will store a # copy of the specified string and destroy the allocated memory when # it goes out of scope. # # @param [ String ] data A string to be wrapped by the Binary object # # @return [ Mongo::Crypt::Binary ] A new binary object def self.from_data(data) self.new(data: data) end # Overwrite the existing data wrapped by this Binary object # # @note The data passed in must not take up more memory than the # original memory allocated to the underlying mongocrypt_binary_t # object. Do NOT use this method unless required to do so by libmongocrypt. # # @param [ String ] data The new string data to be wrapped by this binary object # # @return [ true ] Always true # # @raise [ ArgumentError ] Raises when trying to write more data # than was originally allocated or when writing to an object that # already owns data. def write(data) # A Binary created from data (via .from_data) owns its buffer and must # not be overwritten if @data_p raise ArgumentError, 'Cannot write to an owned Binary' end # Cannot write a string that's longer than the space currently allocated # by the mongocrypt_binary_t object str_p = Binding.get_binary_data_direct(ref) len = Binding.get_binary_len_direct(ref) if len < data.bytesize raise ArgumentError.new( "Cannot write #{data.bytesize} bytes of data to a Binary object " + "that was initialized with #{len} bytes." ) end str_p.put_bytes(0, data) true end # Returns the data stored as a string # # @return [ String ] Data stored in the mongocrypt_binary_t as a string def to_s str_p = Binding.get_binary_data_direct(ref) len = Binding.get_binary_len_direct(ref) str_p.read_string(len) end # Returns the reference to the underlying mongocrypt_binary_t # object # # @return [ FFI::Pointer ] The underlying mongocrypt_binary_t object def ref @bin end # Wraps a String with a mongocrypt_binary_t, yielding an FFI::Pointer # to the wrapped struct. def self.wrap_string(str) binary_p = Binding.mongocrypt_binary_new_from_data( FFI::MemoryPointer.from_string(str), str.bytesize, ) begin yield binary_p ensure Binding.mongocrypt_binary_destroy(binary_p) end end end end end mongo-ruby-driver-2.21.3/lib/mongo/crypt/binding.rb000066400000000000000000002235221505113246500221630ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2019-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. unless ENV['LIBMONGOCRYPT_PATH'] begin require 'libmongocrypt_helper' rescue LoadError => e # It seems that MRI maintains autoload configuration for a module until # that module is defined, but JRuby removes autoload configuration as soon # as the referenced file is attempted to be loaded, even if the module # never ends up being defined. if BSON::Environment.jruby? module Mongo module Crypt autoload :Binding, 'mongo/crypt/binding' end end end # JRuby 9.3.2.0 replaces a LoadError that carries our custom message with a # generic NameError when this load is attempted as part of the autoloading # process. JRuby 9.2.20.0 propagates the LoadError as expected.
raise LoadError, "Cannot load Mongo::Crypt::Binding because there is no path " + "to libmongocrypt specified in the LIBMONGOCRYPT_PATH environment variable " + "and libmongocrypt-helper is not installed: #{e.class}: #{e}" end end require 'ffi' module Mongo module Crypt # @api private def reset_autoload remove_const(:Binding) autoload(:Binding, 'mongo/crypt/binding') end module_function :reset_autoload # A Ruby binding for the libmongocrypt C library # # @api private class Binding extend FFI::Library if ENV['LIBMONGOCRYPT_PATH'] begin ffi_lib ENV['LIBMONGOCRYPT_PATH'] rescue LoadError => e Crypt.reset_autoload raise LoadError, "Cannot load Mongo::Crypt::Binding because the path to " + "libmongocrypt specified in the LIBMONGOCRYPT_PATH environment variable " + "is invalid: #{ENV['LIBMONGOCRYPT_PATH']}\n\n#{e.class}: #{e.message}" end else begin ffi_lib LibmongocryptHelper.libmongocrypt_path rescue LoadError => e Crypt.reset_autoload raise LoadError, "Cannot load Mongo::Crypt::Binding because the path to " + "libmongocrypt specified in libmongocrypt-helper " + "is invalid: #{LibmongocryptHelper.libmongocrypt_path}\n\n#{e.class}: #{e.message}" end end # Minimum version of libmongocrypt required by this version of the driver. # An attempt to use the driver with any previous version of libmongocrypt # will cause a `LoadError`. # # @api private MIN_LIBMONGOCRYPT_VERSION = Gem::Version.new("1.12.0") # @!method self.mongocrypt_version(len) # @api private # # Returns the version string of the libmongocrypt library. # @param [ FFI::Pointer | nil ] len (out param) An optional pointer to a # uint32 that will reference the length of the returned string. # @return [ String ] A version string for libmongocrypt. attach_function :mongocrypt_version, [:pointer], :string # Given a string representing a version number, parses it into a # Gem::Version object. This handles the case where the string is not # in a format supported by Gem::Version by doing some custom parsing. # # @param [ String ] version String representing a version number. # # @return [ Gem::Version ] the version number # # @raise [ ArgumentError ] if the string cannot be parsed. # # @api private def self.parse_version(version) Gem::Version.new(version) rescue ArgumentError match = version.match(/\A(?<major>\d+)\.(?<minor>\d+)\.(?<patch>\d+)?(-[A-Za-z\+\d]+)?\z/) raise ArgumentError.new("Malformed version number string #{version}") if match.nil? Gem::Version.new( [ match[:major], match[:minor], match[:patch] ].join('.') ) end # Validates if provided version of libmongocrypt is valid, i.e. equal to or # greater than the minimum required version. Raises a LoadError if not. # # @param [ String ] lmc_version String representing libmongocrypt version. # # @raise [ LoadError ] if the given version is less than the minimum required version. # # @api private def self.validate_version(lmc_version) if (actual_version = parse_version(lmc_version)) < MIN_LIBMONGOCRYPT_VERSION raise LoadError, "libmongocrypt version #{MIN_LIBMONGOCRYPT_VERSION} or above is required, " + "but version #{actual_version} was found." end end validate_version(mongocrypt_version(nil)) # @!method self.mongocrypt_binary_new # @api private # # Creates a new mongocrypt_binary_t object (a non-owning view of a byte # array). # @return [ FFI::Pointer ] A pointer to the newly-created # mongocrypt_binary_t object.
attach_function :mongocrypt_binary_new, [], :pointer # @!method self.mongocrypt_binary_new_from_data(data, len) # @api private # # Create a new mongocrypt_binary_t object that maintains a pointer to # the specified byte array. # @param [ FFI::Pointer ] data A pointer to an array of bytes; the data # is not copied and must outlive the mongocrypt_binary_t object. # @param [ Integer ] len The length of the array argument. # @return [ FFI::Pointer ] A pointer to the newly-created # mongocrypt_binary_t object. attach_function( :mongocrypt_binary_new_from_data, [:pointer, :int], :pointer ) # @!method self.mongocrypt_binary_data(binary) # @api private # # Get the pointer to the underlying data for the mongocrypt_binary_t. # @param [ FFI::Pointer ] binary A pointer to a mongocrypt_binary_t object. # @return [ FFI::Pointer ] A pointer to the data array. attach_function :mongocrypt_binary_data, [:pointer], :pointer # @!method self.mongocrypt_binary_len(binary) # @api private # # Get the length of the underlying data array. # @param [ FFI::Pointer ] binary A pointer to a mongocrypt_binary_t object. # @return [ Integer ] The length of the data array. attach_function :mongocrypt_binary_len, [:pointer], :int def self.get_binary_data_direct(mongocrypt_binary_t) mongocrypt_binary_t.get_pointer(0) end def self.get_binary_len_direct(mongocrypt_binary_t) mongocrypt_binary_t.get_uint32(FFI::NativeType::POINTER.size) end # @!method self.mongocrypt_binary_destroy(binary) # @api private # # Destroy the mongocrypt_binary_t object. # @param [ FFI::Pointer ] binary A pointer to a mongocrypt_binary_t object. # @return [ nil ] Always nil. attach_function :mongocrypt_binary_destroy, [:pointer], :void # Enum labeling different status types enum :status_type, [ :ok, 0, :error_client, 1, :error_kms, 2, ] # @!method self.mongocrypt_status_new # @api private # # Create a new mongocrypt_status_t object. # @return [ FFI::Pointer ] A pointer to the new mongocrypt_status_t. attach_function :mongocrypt_status_new, [], :pointer # @!method self.mongocrypt_status_set(status, type, code, message, len) # @api private # # Set a message, type, and code on an existing status. # @param [ FFI::Pointer ] status A pointer to a mongocrypt_status_t. # @param [ Symbol ] type The status type; possible values are defined # by the status_type enum. # @param [ Integer ] code The status code. # @param [ String ] message The status message. # @param [ Integer ] len The length of the message argument (or -1 for a # null-terminated string). # @return [ nil ] Always nil. attach_function( :mongocrypt_status_set, [:pointer, :status_type, :int, :string, :int], :void ) # @!method self.mongocrypt_status_type(status) # @api private # # Indicates the status type. # @param [ FFI::Pointer ] status A pointer to a mongocrypt_status_t. # @return [ Symbol ] The status type (as defined by the status_type enum). attach_function :mongocrypt_status_type, [:pointer], :status_type # @!method self.mongocrypt_status_code(status) # @api private # # Return the status error code. # @param [ FFI::Pointer ] status A pointer to a mongocrypt_status_t. # @return [ Integer ] The status code. attach_function :mongocrypt_status_code, [:pointer], :int # @!method self.mongocrypt_status_message(status, len=nil) # @api private # # Returns the status message. # @param [ FFI::Pointer ] status A pointer to a mongocrypt_status_t. # @param [ FFI::Pointer | nil ] len (out param) An optional pointer to a # uint32, where the length of the return string will be written.
# @return [ String ] The status message. attach_function :mongocrypt_status_message, [:pointer, :pointer], :string # @!method self.mongocrypt_status_ok(status) # @api private # # Returns whether the status is ok or an error. # @param [ FFI::Pointer ] status A pointer to a mongocrypt_status_t. # @return [ Boolean ] Whether the status is ok. attach_function :mongocrypt_status_ok, [:pointer], :bool # @!method self.mongocrypt_status_destroy(status) # @api private # # Destroys the reference to the mongocrypt_status_t object. # @param [ FFI::Pointer ] status A pointer to a mongocrypt_status_t. # @return [ nil ] Always nil. attach_function :mongocrypt_status_destroy, [:pointer], :void # Enum labeling the various log levels enum :log_level, [ :fatal, 0, :error, 1, :warn, 2, :info, 3, :debug, 4, ] # @!method mongocrypt_log_fn_t(level, message, len, ctx) # @api private # # A callback to the mongocrypt log function. Set a custom log callback # with the mongocrypt_setopt_log_handler method # @param [ Symbol ] level The log level; possible values defined by the # log_level enum # @param [ String ] message The log message # @param [ Integer ] len The length of the message param, or -1 if the # string is null terminated # @param [ FFI::Pointer | nil ] ctx An optional pointer to a context # object when this callback was set # @return [ nil ] Always nil. # # @note This defines a method signature for an FFI callback; it is not # an instance method on the Binding class. callback :mongocrypt_log_fn_t, [:log_level, :string, :int, :pointer], :void # @!method self.mongocrypt_new # @api private # # Creates a new mongocrypt_t object. # @return [ FFI::Pointer ] A pointer to a new mongocrypt_t object. attach_function :mongocrypt_new, [], :pointer # @!method self.mongocrypt_setopt_log_handler(crypt, log_fn, log_ctx=nil) # @api private # # Set the handler on the mongocrypt_t object to be called every time # libmongocrypt logs a message. # @param [ FFI::Pointer ] crypt A pointer to a mongocrypt_t object. # @param [ Method ] log_fn A logging callback method. # @param [ FFI::Pointer | nil ] log_ctx An optional pointer to a context # to be passed into the log callback on every invocation. # @return [ Boolean ] Whether setting the callback was successful. attach_function( :mongocrypt_setopt_log_handler, [:pointer, :mongocrypt_log_fn_t, :pointer], :bool ) # Set the logger callback function on the Mongo::Crypt::Handle object # # @param [ Mongo::Crypt::Handle ] handle # @param [ Method ] log_callback # # @raise [ Mongo::Error::CryptError ] If the callback is not set successfully def self.setopt_log_handler(handle, log_callback) check_status(handle) do mongocrypt_setopt_log_handler(handle, log_callback, nil) end end # @!method self.mongocrypt_setopt_kms_providers(crypt, kms_providers) # @api private # # Configure KMS providers with a BSON document. # @param [ FFI::Pointer ] crypt A pointer to a mongocrypt_t object. # @param [ FFI::Pointer ] kms_providers A pointer to a # mongocrypt_binary_t object that references a BSON document mapping # the KMS provider names to credentials. # @note Do not initialize ctx before calling this method. # # @return [ true | false ] Whether the option was set successfully. attach_function( :mongocrypt_setopt_kms_providers, [:pointer, :pointer], :bool ) # Set KMS providers options on the Mongo::Crypt::Handle object # # @param [ Mongo::Crypt::Handle ] handle # @param [ BSON::Document ] kms_providers BSON document mapping # the KMS provider names to credentials.
# # @raise [ Mongo::Error::CryptError ] If the option is not set successfully def self.setopt_kms_providers(handle, kms_providers) validate_document(kms_providers) data = kms_providers.to_bson.to_s Binary.wrap_string(data) do |data_p| check_status(handle) do mongocrypt_setopt_kms_providers(handle.ref, data_p) end end end # @!method self.mongocrypt_setopt_schema_map(crypt, schema_map) # @api private # # Sets a local schema map for encryption. # @param [ FFI::Pointer ] crypt A pointer to a mongocrypt_t object. # @param [ FFI::Pointer ] schema_map A pointer to a mongocrypt_binary_t # object that references the schema map as a BSON binary string. # @return [ Boolean ] Returns whether the option was set successfully. attach_function :mongocrypt_setopt_schema_map, [:pointer, :pointer], :bool # Set schema map on the Mongo::Crypt::Handle object # # @param [ Mongo::Crypt::Handle ] handle # @param [ BSON::Document ] schema_map_doc The schema map as a # BSON::Document object # # @raise [ Mongo::Error::CryptError ] If the schema map is not set successfully def self.setopt_schema_map(handle, schema_map_doc) validate_document(schema_map_doc) data = schema_map_doc.to_bson.to_s Binary.wrap_string(data) do |data_p| check_status(handle) do mongocrypt_setopt_schema_map(handle.ref, data_p) end end end # @!method self.mongocrypt_init(crypt) # @api private # # Initialize the mongocrypt_t object. # @param [ FFI::Pointer ] crypt A pointer to a mongocrypt_t object. # @return [ Boolean ] Returns whether the crypt was initialized successfully. attach_function :mongocrypt_init, [:pointer], :bool # Initialize the Mongo::Crypt::Handle object # # @param [ Mongo::Crypt::Handle ] handle # # @raise [ Mongo::Error::CryptError ] If initialization fails def self.init(handle) check_status(handle) do mongocrypt_init(handle.ref) end end # @!method self.mongocrypt_status(crypt, status) # @api private # # Set the status information from the mongocrypt_t object on the # mongocrypt_status_t object. # @param [ FFI::Pointer ] crypt A pointer to a mongocrypt_t object. # @param [ FFI::Pointer ] status A pointer to a mongocrypt_status_t object. # @return [ Boolean ] Whether the status was successfully set. attach_function :mongocrypt_status, [:pointer, :pointer], :bool # @!method self.mongocrypt_destroy(crypt) # @api private # # Destroy the reference to the mongocrypt_t object. # @param [ FFI::Pointer ] crypt A pointer to a mongocrypt_t object. # @return [ nil ] Always nil. attach_function :mongocrypt_destroy, [:pointer], :void # @!method self.mongocrypt_ctx_new(crypt) # @api private # # Create a new mongocrypt_ctx_t object (a wrapper for the libmongocrypt # state machine). # @param [ FFI::Pointer ] crypt A pointer to a mongocrypt_t object. # @return [ FFI::Pointer ] A new mongocrypt_ctx_t object. attach_function :mongocrypt_ctx_new, [:pointer], :pointer # @!method self.mongocrypt_ctx_status(ctx, status) # @api private # # Set the status information from the mongocrypt_ctx_t object on the # mongocrypt_status_t object. # @param [ FFI::Pointer ] ctx A pointer to a mongocrypt_ctx_t object. # @param [ FFI::Pointer ] status A pointer to a mongocrypt_status_t object. # @return [ Boolean ] Whether the status was successfully set. attach_function :mongocrypt_ctx_status, [:pointer, :pointer], :bool # @!method self.mongocrypt_ctx_setopt_key_id(ctx, key_id) # @api private # # Set the key id used for explicit encryption. # @param [ FFI::Pointer ] ctx A pointer to a mongocrypt_ctx_t object.
# @param [ FFI::Pointer ] key_id A pointer to a mongocrypt_binary_t object # that references the 16-byte key-id. # @note Do not initialize ctx before calling this method. # @return [ Boolean ] Whether the option was successfully set. attach_function :mongocrypt_ctx_setopt_key_id, [:pointer, :pointer], :bool # Sets the key id option on an explicit encryption context. # # @param [ Mongo::Crypt::Context ] context Explicit encryption context # @param [ String ] key_id The key id # # @raise [ Mongo::Error::CryptError ] If the operation failed def self.ctx_setopt_key_id(context, key_id) Binary.wrap_string(key_id) do |key_id_p| check_ctx_status(context) do mongocrypt_ctx_setopt_key_id(context.ctx_p, key_id_p) end end end # @!method self.mongocrypt_ctx_setopt_key_alt_name(ctx, binary) # @api private # # When creating a data key, set an alternate name on that key. When # performing explicit encryption, this option specifies which data key # to use for encryption based on its keyAltName field. # @param [ FFI::Pointer ] ctx A pointer to a mongocrypt_ctx_t object. # @param [ FFI::Pointer ] binary A pointer to a mongocrypt_binary_t # object that references a BSON document in the format # { "keyAltName": <BSON UTF8 value> }. # @return [ Boolean ] Whether the alternative name was successfully set. # @note Do not initialize ctx before calling this method. attach_function( :mongocrypt_ctx_setopt_key_alt_name, [:pointer, :pointer], :bool ) # Set multiple alternate key names on data key creation # # @param [ Mongo::Crypt::Context ] context A DataKeyContext # @param [ Array ] key_alt_names An array of alternate key names as strings # # @raise [ Mongo::Error::CryptError ] If any of the alternate names are # not valid UTF8 strings def self.ctx_setopt_key_alt_names(context, key_alt_names) key_alt_names.each do |key_alt_name| key_alt_name_bson = { :keyAltName => key_alt_name }.to_bson.to_s Binary.wrap_string(key_alt_name_bson) do |key_alt_name_p| check_ctx_status(context) do mongocrypt_ctx_setopt_key_alt_name(context.ctx_p, key_alt_name_p) end end end end # @!method self.mongocrypt_ctx_setopt_key_material(ctx, binary) # @api private # # When creating a data key, set a custom key material to use for # encrypting data. # @param [ FFI::Pointer ] ctx A pointer to a mongocrypt_ctx_t object. # @param [ FFI::Pointer ] binary A pointer to a mongocrypt_binary_t # object that references the data encryption key to use. # @return [ Boolean ] Whether the custom key material was successfully set. # @note Do not initialize ctx before calling this method. attach_function( :mongocrypt_ctx_setopt_key_material, [:pointer, :pointer], :bool ) # Set a custom key material to use for # encrypting data. # # @param [ Mongo::Crypt::Context ] context A DataKeyContext # @param [ BSON::Binary ] key_material 96 bytes of custom key material # # @raise [ Mongo::Error::CryptError ] If the key material is not 96 bytes. def self.ctx_setopt_key_material(context, key_material) data = {'keyMaterial' => key_material}.to_bson.to_s Binary.wrap_string(data) do |data_p| check_ctx_status(context) do mongocrypt_ctx_setopt_key_material(context.ctx_p, data_p) end end end # @!method self.mongocrypt_ctx_setopt_algorithm(ctx, algorithm, len) # @api private # # Set the algorithm used for explicit encryption. # @param [ FFI::Pointer ] ctx A pointer to a mongocrypt_ctx_t object. # @param [ String ] algorithm The algorithm name.
Valid values are: # - "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" # - "AEAD_AES_256_CBC_HMAC_SHA_512-Random" # @param [ Integer ] len The length of the algorithm string. # @note Do not initialize ctx before calling this method. # @return [ Boolean ] Whether the option was successfully set. attach_function( :mongocrypt_ctx_setopt_algorithm, [:pointer, :string, :int], :bool ) # Set the algorithm on the context # # @param [ Mongo::Crypt::Context ] context # @param [ String ] name The algorithm name. Valid values are: # - "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" # - "AEAD_AES_256_CBC_HMAC_SHA_512-Random" # # @raise [ Mongo::Error::CryptError ] If the operation failed def self.ctx_setopt_algorithm(context, name) check_ctx_status(context) do mongocrypt_ctx_setopt_algorithm(context.ctx_p, name, -1) end end # @!method self.mongocrypt_ctx_setopt_key_encryption_key(ctx) # @api private # # Set key encryption key document for creating a data key. # @param [ FFI::Pointer ] ctx A pointer to a mongocrypt_ctx_t object. # @param [ FFI::Pointer ] bin A pointer to a mongocrypt_binary_t # object that references a BSON document representing the key # encryption key document with an additional "provider" field. # @note Do not initialize ctx before calling this method. # @return [ Boolean ] Whether the option was successfully set. attach_function( :mongocrypt_ctx_setopt_key_encryption_key, [:pointer, :pointer], :bool ) # Set key encryption key document for creating a data key. # # @param [ Mongo::Crypt::Context ] context # @param [ BSON::Document ] key_document BSON document representing the key # encryption key document with an additional "provider" field. # # @raise [ Mongo::Error::CryptError ] If the operation failed def self.ctx_setopt_key_encryption_key(context, key_document) validate_document(key_document) data = key_document.to_bson.to_s Binary.wrap_string(data) do |data_p| check_ctx_status(context) do mongocrypt_ctx_setopt_key_encryption_key(context.ctx_p, data_p) end end end # @!method self.mongocrypt_ctx_datakey_init(ctx) # @api private # # Initializes the ctx to create a data key. # @param [ FFI::Pointer ] ctx A pointer to a mongocrypt_ctx_t object. # @note Before calling this method, master key options must be set. # Set AWS master key by calling mongocrypt_ctx_setopt_masterkey_aws # and mongocrypt_ctx_setopt_masterkey_aws_endpoint. Set local master # key by calling mongocrypt_ctx_setopt_masterkey_local. # @return [ Boolean ] Whether the initialization was successful. attach_function :mongocrypt_ctx_datakey_init, [:pointer], :bool # Initialize the Context to create a data key # # @param [ Mongo::Crypt::Context ] context # # @raise [ Mongo::Error::CryptError ] If initialization fails def self.ctx_datakey_init(context) check_ctx_status(context) do mongocrypt_ctx_datakey_init(context.ctx_p) end end # @!method self.mongocrypt_ctx_rewrap_many_datakey_init(ctx, filter) # @api private # # Initialize a context to rewrap datakeys. # # @param [ FFI::Pointer ] ctx A pointer to a mongocrypt_ctx_t object. # @param [ FFI::Pointer ] filter A pointer to a mongocrypt_binary_t object # that represents the filter to use for the find command on the key vault # collection to retrieve datakeys to rewrap. # # @return [ Boolean ] Whether the initialization was successful. attach_function( :mongocrypt_ctx_rewrap_many_datakey_init, [:pointer, :pointer], :bool ) # Initialize a context to rewrap datakeys.
# # @param [ Mongo::Crypt::Context ] context # @param [ BSON::Document ] filter BSON Document # that represents the filter to use for the find command on the key vault # collection to retrieve datakeys to rewrap. # # @return [ Boolean ] Whether the initialization was successful. def self.ctx_rewrap_many_datakey_init(context, filter) filter_data = filter.to_bson.to_s Binary.wrap_string(filter_data) do |data_p| check_ctx_status(context) do mongocrypt_ctx_rewrap_many_datakey_init(context.ctx_p, data_p) end end end # @!method self.mongocrypt_ctx_encrypt_init(ctx, db, db_len, cmd) # @api private # # Initializes the ctx for auto-encryption. # @param [ FFI::Pointer ] ctx A pointer to a mongocrypt_ctx_t object. # @param [ String ] db The database name. # @param [ Integer ] db_len The length of the database name argument # (or -1 for a null-terminated string). # @param [ FFI::Pointer ] cmd A pointer to a mongocrypt_binary_t object # that references the database command as a binary string. # @return [ Boolean ] Whether the initialization was successful. attach_function( :mongocrypt_ctx_encrypt_init, [:pointer, :string, :int, :pointer], :bool ) # Initialize the Context for auto-encryption # # @param [ Mongo::Crypt::Context ] context # @param [ String ] db_name The name of the database against which the # encrypted command is being performed # @param [ Hash ] command The command to be encrypted # # @raise [ Mongo::Error::CryptError ] If initialization fails def self.ctx_encrypt_init(context, db_name, command) validate_document(command) data = command.to_bson.to_s Binary.wrap_string(data) do |data_p| check_ctx_status(context) do mongocrypt_ctx_encrypt_init(context.ctx_p, db_name, -1, data_p) end end end # @!method self.mongocrypt_ctx_explicit_encrypt_init(ctx, msg) # @api private # # Initializes the ctx for explicit encryption. # @param [ FFI::Pointer ] ctx A pointer to a mongocrypt_ctx_t object. # @param [ FFI::Pointer ] msg A pointer to a mongocrypt_binary_t object # that references the message to be encrypted as a binary string. # @note Before calling this method, set a key_id, key_alt_name (optional), # and encryption algorithm using the following methods: # mongocrypt_ctx_setopt_key_id, mongocrypt_ctx_setopt_key_alt_name, # and mongocrypt_ctx_setopt_algorithm. # @return [ Boolean ] Whether the initialization was successful. attach_function( :mongocrypt_ctx_explicit_encrypt_init, [:pointer, :pointer], :bool ) # Initialize the Context for explicit encryption # # @param [ Mongo::Crypt::Context ] context # @param [ Hash ] doc A BSON document to encrypt # # @raise [ Mongo::Error::CryptError ] If initialization fails def self.ctx_explicit_encrypt_init(context, doc) validate_document(doc) data = doc.to_bson.to_s Binary.wrap_string(data) do |data_p| check_ctx_status(context) do mongocrypt_ctx_explicit_encrypt_init(context.ctx_p, data_p) end end end # @!method self.mongocrypt_ctx_explicit_encrypt_expression_init(ctx, msg) # @api private # # Initializes the ctx for explicit expression encryption. # @param [ FFI::Pointer ] ctx A pointer to a mongocrypt_ctx_t object. # @param [ FFI::Pointer ] msg A pointer to a mongocrypt_binary_t object # that references the message to be encrypted as a binary string.
# @note Before calling this method, set a key_id, key_alt_name (optional), # and encryption algorithm using the following methods: # mongocrypt_ctx_setopt_key_id, mongocrypt_ctx_setopt_key_alt_name, # and mongocrypt_ctx_setopt_algorithm. # @return [ Boolean ] Whether the initialization was successful. attach_function( :mongocrypt_ctx_explicit_encrypt_expression_init, [:pointer, :pointer], :bool ) # Initialize the Context for explicit expression encryption. # # @param [ Mongo::Crypt::Context ] context # @param [ Hash ] doc A BSON document to encrypt # # @raise [ Mongo::Error::CryptError ] If initialization fails def self.ctx_explicit_encrypt_expression_init(context, doc) validate_document(doc) data = doc.to_bson.to_s Binary.wrap_string(data) do |data_p| check_ctx_status(context) do mongocrypt_ctx_explicit_encrypt_expression_init(context.ctx_p, data_p) end end end # @!method self.mongocrypt_ctx_decrypt_init(ctx, doc) # @api private # # Initializes the ctx for auto-decryption. # @param [ FFI::Pointer ] ctx A pointer to a mongocrypt_ctx_t object. # @param [ FFI::Pointer ] doc A pointer to a mongocrypt_binary_t object # that references the document to be decrypted as a BSON binary string. # @return [ Boolean ] Whether the initialization was successful. attach_function :mongocrypt_ctx_decrypt_init, [:pointer, :pointer], :bool # Initialize the Context for auto-decryption # # @param [ Mongo::Crypt::Context ] context # @param [ BSON::Document ] command A BSON document to decrypt # # @raise [ Mongo::Error::CryptError ] If initialization fails def self.ctx_decrypt_init(context, command) validate_document(command) data = command.to_bson.to_s Binary.wrap_string(data) do |data_p| check_ctx_status(context) do mongocrypt_ctx_decrypt_init(context.ctx_p, data_p) end end end # @!method self.mongocrypt_ctx_explicit_decrypt_init(ctx, msg) # @api private # # Initializes the ctx for explicit decryption. # @param [ FFI::Pointer ] ctx A pointer to a mongocrypt_ctx_t object. # @param [ FFI::Pointer ] msg A pointer to a mongocrypt_binary_t object # that references the message to be decrypted as a BSON binary string. # @return [ Boolean ] Whether the initialization was successful. attach_function( :mongocrypt_ctx_explicit_decrypt_init, [:pointer, :pointer], :bool ) # Initialize the Context for explicit decryption # # @param [ Mongo::Crypt::Context ] context # @param [ Hash ] doc A BSON document to decrypt # # @raise [ Mongo::Error::CryptError ] If initialization fails def self.ctx_explicit_decrypt_init(context, doc) validate_document(doc) data = doc.to_bson.to_s Binary.wrap_string(data) do |data_p| check_ctx_status(context) do mongocrypt_ctx_explicit_decrypt_init(context.ctx_p, data_p) end end end # An enum labeling different libmongocrypt state machine states enum :mongocrypt_ctx_state, [ :error, 0, :need_mongo_collinfo, 1, :need_mongo_markings, 2, :need_mongo_keys, 3, :need_kms, 4, :ready, 5, :done, 6, :need_kms_credentials, 7, ] # @!method self.mongocrypt_ctx_state(ctx) # @api private # # Get the current state of the ctx. # @param [ FFI::Pointer ] ctx A pointer to a mongocrypt_ctx_t object. # @return [ Symbol ] The current state, will be one of the values defined # by the mongocrypt_ctx_state enum. attach_function :mongocrypt_ctx_state, [:pointer], :mongocrypt_ctx_state # @!method self.mongocrypt_ctx_mongo_op(ctx, op_bson) # @api private # # Get a BSON operation for the driver to run against the MongoDB # collection, the key vault database, or mongocryptd.
# @param [ FFI::Pointer ] ctx A pointer to a mongocrypt_ctx_t object. # @param [ FFI::Pointer ] op_bson (out param) A pointer to a # mongocrypt_binary_t object that will have a reference to the # BSON operation written to it by libmongocrypt. # @return [ Boolean ] A boolean indicating the success of the operation. attach_function :mongocrypt_ctx_mongo_op, [:pointer, :pointer], :bool # Returns a BSON::Document representing an operation that the # driver must perform on behalf of libmongocrypt to get the # information it needs in order to continue with # encryption/decryption (for example, a filter for a key vault query). # # @param [ Mongo::Crypt::Context ] context # # @raise [ Mongo::Error::CryptError ] If there is an error getting the operation # @return [ BSON::Document ] The operation that the driver must perform def self.ctx_mongo_op(context) binary = Binary.new check_ctx_status(context) do mongocrypt_ctx_mongo_op(context.ctx_p, binary.ref) end # TODO since the binary references a C pointer, and ByteBuffer is # written in C in MRI, we could omit a copy of the data by making # ByteBuffer reference the string that is owned by libmongocrypt. BSON::Document.from_bson(BSON::ByteBuffer.new(binary.to_s), mode: :bson) end # @!method self.mongocrypt_ctx_mongo_feed(ctx, reply) # @api private # # Feed a BSON reply to libmongocrypt. # @param [ FFI::Pointer ] ctx A pointer to a mongocrypt_ctx_t object. # @param [ FFI::Pointer ] reply A mongocrypt_binary_t object that # references the BSON reply to feed to libmongocrypt. # @return [ Boolean ] A boolean indicating the success of the operation. attach_function :mongocrypt_ctx_mongo_feed, [:pointer, :pointer], :bool # Feed a response from the driver back to libmongocrypt # # @param [ Mongo::Crypt::Context ] context # @param [ BSON::Document ] doc The document representing the response # # @raise [ Mongo::Error::CryptError ] If the response is not fed successfully def self.ctx_mongo_feed(context, doc) validate_document(doc) data = doc.to_bson.to_s Binary.wrap_string(data) do |data_p| check_ctx_status(context) do mongocrypt_ctx_mongo_feed(context.ctx_p, data_p) end end end # @!method self.mongocrypt_ctx_mongo_done(ctx) # @api private # # Indicate to libmongocrypt that the driver is done feeding replies. # @param [ FFI::Pointer ] ctx A pointer to a mongocrypt_ctx_t object. # @return [ Boolean ] A boolean indicating the success of the operation. attach_function :mongocrypt_ctx_mongo_done, [:pointer], :bool # @!method self.mongocrypt_ctx_next_kms_ctx(ctx) # @api private # # Return a pointer to a mongocrypt_kms_ctx_t object or NULL. # @param [ FFI::Pointer ] ctx A pointer to a mongocrypt_ctx_t object. # @return [ FFI::Pointer ] A pointer to a mongocrypt_kms_ctx_t object. attach_function :mongocrypt_ctx_next_kms_ctx, [:pointer], :pointer # Return a new KmsContext object needed by a Context object. # # @param [ Mongo::Crypt::Context ] context # # @return [ Mongo::Crypt::KmsContext | nil ] The KmsContext needed to # fetch an AWS master key or nil, if no KmsContext is needed def self.ctx_next_kms_ctx(context) kms_ctx_p = mongocrypt_ctx_next_kms_ctx(context.ctx_p) if kms_ctx_p.null? nil else KmsContext.new(kms_ctx_p) end end # @!method self.mongocrypt_kms_ctx_get_kms_provider(kms, len) # @api private # # Get the KMS provider identifier associated with this KMS request. # # This is used to conditionally configure TLS connections based on the KMS # request. It is useful for KMIP, which authenticates with a client # certificate.
# # @param [ FFI::Pointer ] kms Pointer to a mongocrypt_kms_ctx_t object. # @param [ FFI::Pointer ] len (out param) Receives the length of the # returned string. It may be NULL. If it is not NULL, it is set to # the length of the returned string without the NULL terminator. # # @return [ FFI::Pointer ] One of the NULL terminated static strings: "aws", "azure", "gcp", or # "kmip". attach_function( :mongocrypt_kms_ctx_get_kms_provider, [:pointer, :pointer], :pointer ) # Get the KMS provider identifier associated with this KMS request. # # This is used to conditionally configure TLS connections based on the KMS # request. It is useful for KMIP, which authenticates with a client # certificate. # # @param [ FFI::Pointer ] kms Pointer to a mongocrypt_kms_ctx_t object. # # @return [ Symbol | nil ] KMS provider identifier. def self.kms_ctx_get_kms_provider(kms_context) len_ptr = FFI::MemoryPointer.new(:uint32, 1) provider = mongocrypt_kms_ctx_get_kms_provider( kms_context.kms_ctx_p, len_ptr ) if provider.null? nil else len = if BSON::Environment.jruby? # JRuby FFI implementation does not have `read(type)` method, but it # has this `get_uint32`. len_ptr.get_uint32 else # For MRI we use a documented `read` method - https://www.rubydoc.info/github/ffi/ffi/FFI%2FPointer:read len_ptr.read(:uint32) end provider.read_string(len).to_sym end end # @!method self.mongocrypt_kms_ctx_message(kms, msg) # @api private # # Get the message needed to fetch the AWS KMS master key. # @param [ FFI::Pointer ] kms Pointer to the mongocrypt_kms_ctx_t object # @param [ FFI::Pointer ] msg (out param) Pointer to a mongocrypt_binary_t # object that will have the location of the message written to it by # libmongocrypt. # @return [ Boolean ] Whether the operation is successful. attach_function :mongocrypt_kms_ctx_message, [:pointer, :pointer], :bool # Get the HTTP message needed to fetch the AWS KMS master key from a # KmsContext object. # # @param [ Mongo::Crypt::KmsContext ] kms_context # # @raise [ Mongo::Error::CryptError ] If the response is not fed successfully # # @return [ String ] The HTTP message def self.kms_ctx_message(kms_context) binary = Binary.new check_kms_ctx_status(kms_context) do mongocrypt_kms_ctx_message(kms_context.kms_ctx_p, binary.ref) end return binary.to_s end # @!method self.mongocrypt_kms_ctx_endpoint(kms, endpoint) # @api private # # Get the hostname with which to connect over TLS to get information about # the AWS master key. # @param [ FFI::Pointer ] kms A pointer to a mongocrypt_kms_ctx_t object. # @param [ FFI::Pointer ] endpoint (out param) A pointer to which the # endpoint string will be written by libmongocrypt. # @return [ Boolean ] Whether the operation was successful. attach_function :mongocrypt_kms_ctx_endpoint, [:pointer, :pointer], :bool # Get the hostname with which to connect over TLS to get information # about the AWS master key. # # @param [ Mongo::Crypt::KmsContext ] kms_context # # @raise [ Mongo::Error::CryptError ] If the response is not fed successfully # # @return [ String | nil ] The hostname, or nil if none exists def self.kms_ctx_endpoint(kms_context) ptr = FFI::MemoryPointer.new(:pointer, 1) check_kms_ctx_status(kms_context) do mongocrypt_kms_ctx_endpoint(kms_context.kms_ctx_p, ptr) end str_ptr = ptr.read_pointer str_ptr.null? ? nil : str_ptr.read_string.force_encoding('UTF-8') end # @!method self.mongocrypt_kms_ctx_bytes_needed(kms) # @api private # # Get the number of bytes needed by the KMS context. # @param [ FFI::Pointer ] kms The mongocrypt_kms_ctx_t object.
      # @!method self.mongocrypt_kms_ctx_bytes_needed(kms)
      #   @api private
      #
      #   Get the number of bytes needed by the KMS context.
      #   @param [ FFI::Pointer ] kms A pointer to the mongocrypt_kms_ctx_t object.
      #   @return [ Integer ] The number of bytes needed.
      attach_function :mongocrypt_kms_ctx_bytes_needed, [:pointer], :int

      # Get the number of bytes needed by the KmsContext.
      #
      # @param [ Mongo::Crypt::KmsContext ] kms_context
      #
      # @return [ Integer ] The number of bytes needed
      def self.kms_ctx_bytes_needed(kms_context)
        mongocrypt_kms_ctx_bytes_needed(kms_context.kms_ctx_p)
      end

      # @!method self.mongocrypt_kms_ctx_feed(kms, bytes)
      #   @api private
      #
      #   Feed replies from the KMS back to libmongocrypt.
      #   @param [ FFI::Pointer ] kms A pointer to the mongocrypt_kms_ctx_t object.
      #   @param [ FFI::Pointer ] bytes A pointer to a mongocrypt_binary_t
      #     object that references the response from the KMS.
      #   @return [ Boolean ] Whether the operation was successful.
      attach_function :mongocrypt_kms_ctx_feed, [:pointer, :pointer], :bool

      # Feed replies from the KMS back to libmongocrypt.
      #
      # @param [ Mongo::Crypt::KmsContext ] kms_context
      # @param [ String ] bytes The data to feed to libmongocrypt
      #
      # @raise [ Mongo::Error::CryptError ] If the response is not fed successfully
      def self.kms_ctx_feed(kms_context, bytes)
        check_kms_ctx_status(kms_context) do
          Binary.wrap_string(bytes) do |bytes_p|
            mongocrypt_kms_ctx_feed(kms_context.kms_ctx_p, bytes_p)
          end
        end
      end

      # @!method self.mongocrypt_kms_ctx_status(kms, status)
      #   @api private
      #
      #   Write status information about the mongocrypt_kms_ctx_t object
      #   to the mongocrypt_status_t object.
      #   @param [ FFI::Pointer ] kms A pointer to the mongocrypt_kms_ctx_t object.
      #   @param [ FFI::Pointer ] status A pointer to a mongocrypt_status_t object.
      #   @return [ Boolean ] Whether the operation was successful.
      attach_function :mongocrypt_kms_ctx_status, [:pointer, :pointer], :bool

      # If the provided block returns false, raise a CryptError with the
      # status information from the provided KmsContext object.
      #
      # @param [ Mongo::Crypt::KmsContext ] kms_context
      #
      # @raise [ Mongo::Error::CryptError ] If the provided block returns false
      def self.check_kms_ctx_status(kms_context)
        unless yield
          status = Status.new

          mongocrypt_kms_ctx_status(kms_context.kms_ctx_p, status.ref)
          status.raise_crypt_error(kms: true)
        end
      end

      # @!method self.mongocrypt_kms_ctx_usleep(kms)
      #   @api private
      #
      #   Indicates how long to sleep before sending the KMS request.
      #
      #   @param [ FFI::Pointer ] kms A pointer to a mongocrypt_kms_ctx_t object.
      #   @return [ int64 ] A 64-bit encoded number of microseconds to sleep.
      attach_function :mongocrypt_kms_ctx_usleep, [:pointer], :int64

      # Returns the number of microseconds to sleep before sending the KMS
      # request for the given KMS context.
      #
      # @param [ Mongo::Crypt::KmsContext ] kms_context KMS context we are
      #   going to send the KMS request for.
      # @return [ Integer ] A 64-bit encoded number of microseconds to sleep.
      def self.kms_ctx_usleep(kms_context)
        mongocrypt_kms_ctx_usleep(kms_context.kms_ctx_p)
      end

      # @!method self.mongocrypt_kms_ctx_fail(kms)
      #   @api private
      #
      #   Indicate a network-level failure.
      #
      #   @param [ FFI::Pointer ] kms A pointer to a mongocrypt_kms_ctx_t object.
      #   @return [ Boolean ] Whether the failed request may be retried.
      attach_function :mongocrypt_kms_ctx_fail, [:pointer], :bool

      # Check whether the last failed request for the KMS context may be retried.
      #
      # @param [ Mongo::Crypt::KmsContext ] kms_context KMS context
      # @return [ true, false ] Whether the failed request may be retried.
      def self.kms_ctx_fail(kms_context)
        mongocrypt_kms_ctx_fail(kms_context.kms_ctx_p)
      end
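      # Taken together, the helpers above drive one KMS HTTP exchange. A
      # hedged sketch of the loop a caller runs per KMS context (assuming a
      # connected `socket` and a `kms_context` obtained from
      # Binding.ctx_next_kms_ctx; error handling omitted):
      #
      #   delay_us = Binding.kms_ctx_usleep(kms_context)
      #   sleep(delay_us / 1_000_000.0) if delay_us && delay_us > 0
      #
      #   socket.syswrite(Binding.kms_ctx_message(kms_context))
      #
      #   while (needed = Binding.kms_ctx_bytes_needed(kms_context)) > 0
      #     Binding.kms_ctx_feed(kms_context, socket.sysread(needed))
      #   end
      #
      # On a network error, Binding.kms_ctx_fail(kms_context) reports whether
      # the failed request may simply be retried.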
      # @!method self.mongocrypt_setopt_retry_kms(crypt, enable)
      #   @api private
      #
      #   Enable or disable KMS retry behavior.
      #
      #   @param [ FFI::Pointer ] crypt A pointer to a mongocrypt_t object
      #   @param [ Boolean ] enable A boolean indicating whether to retry operations.
      #   @return [ Boolean ] A boolean indicating success.
      attach_function :mongocrypt_setopt_retry_kms, [:pointer, :bool], :bool

      # Enable or disable KMS retry behavior.
      #
      # @param [ Mongo::Crypt::Handle ] handle
      # @param [ true, false ] value Whether to retry operations.
      # @return [ true, false ] true if the option was set, otherwise false.
      def self.kms_ctx_setopt_retry_kms(handle, value)
        mongocrypt_setopt_retry_kms(handle.ref, value)
      end

      # @!method self.mongocrypt_ctx_kms_done(ctx)
      #   @api private
      #
      #   Indicate to libmongocrypt that it will receive no more replies from
      #   mongocrypt_kms_ctx_t objects.
      #   @param [ FFI::Pointer ] ctx A pointer to a mongocrypt_ctx_t object.
      #   @return [ Boolean ] Whether the operation was successful.
      attach_function :mongocrypt_ctx_kms_done, [:pointer], :bool

      # Indicate to libmongocrypt that it will receive no more KMS replies.
      #
      # @param [ Mongo::Crypt::Context ] context
      #
      # @raise [ Mongo::Error::CryptError ] If the operation is unsuccessful
      def self.ctx_kms_done(context)
        check_ctx_status(context) do
          mongocrypt_ctx_kms_done(context.ctx_p)
        end
      end

      # @!method self.mongocrypt_ctx_finalize(ctx, op_bson)
      #   @api private
      #
      #   Perform the final encryption or decryption and return a BSON document.
      #   @param [ FFI::Pointer ] ctx A pointer to a mongocrypt_ctx_t object.
      #   @param [ FFI::Pointer ] op_bson (out param) A pointer to a
      #     mongocrypt_binary_t object that will have a reference to the
      #     final encrypted BSON document.
      #   @return [ Boolean ] A boolean indicating the success of the operation.
      attach_function :mongocrypt_ctx_finalize, [:pointer, :pointer], :bool

      # Finalize the state machine represented by the Context
      #
      # @param [ Mongo::Crypt::Context ] context
      #
      # @raise [ Mongo::Error::CryptError ] If the state machine is not successfully
      #   finalized
      def self.ctx_finalize(context)
        binary = Binary.new

        check_ctx_status(context) do
          mongocrypt_ctx_finalize(context.ctx_p, binary.ref)
        end

        # TODO since the binary references a C pointer, and ByteBuffer is
        # written in C in MRI, we could omit a copy of the data by making
        # ByteBuffer reference the string that is owned by libmongocrypt.
        BSON::Document.from_bson(BSON::ByteBuffer.new(binary.to_s), mode: :bson)
      end

      # @!method self.mongocrypt_ctx_destroy(ctx)
      #   @api private
      #
      #   Destroy the reference to the mongocrypt_ctx_t object.
      #   @param [ FFI::Pointer ] ctx A pointer to a mongocrypt_ctx_t object.
      #   @return [ nil ] Always nil.
      attach_function :mongocrypt_ctx_destroy, [:pointer], :void
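      # The callbacks declared below let the driver supply its own crypto
      # primitives instead of libmongocrypt's built-ins. As an illustration
      # of what an AES-256-CBC hook has to compute, here is a hedged,
      # standalone sketch using OpenSSL (the real hooks additionally deal
      # with mongocrypt_binary_t buffers and status out-params; `key`, `iv`
      # and `data` are plain strings here, and `data` is assumed to be
      # block-aligned because padding is handled outside the hook):
      #
      #   require 'openssl'
      #
      #   def aes_256_cbc_encrypt(key, iv, data)
      #     cipher = OpenSSL::Cipher.new('AES-256-CBC').encrypt
      #     cipher.key = key
      #     cipher.iv = iv
      #     cipher.padding = 0 # assumption: input is already padded
      #     cipher.update(data) + cipher.final
      #   end
      #
      #   aes_256_cbc_encrypt("\x00" * 32, "\x00" * 16, 'sixteen byte msg').bytesize
      #   # => 16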
      # @!method mongocrypt_crypto_fn(ctx, key, iv, input, output, bytes_written, status)
      #   @api private
      #
      #   A callback to a function that performs AES encryption or decryption.
      #   @param [ FFI::Pointer | nil ] ctx An optional pointer to a context object
      #     that may have been set when hooks were enabled.
      #   @param [ FFI::Pointer ] key A pointer to a mongocrypt_binary_t object
      #     that references the 32-byte AES encryption key.
      #   @param [ FFI::Pointer ] iv A pointer to a mongocrypt_binary_t object
      #     that references the 16-byte AES IV.
      #   @param [ FFI::Pointer ] input A pointer to a mongocrypt_binary_t object
      #     that references the value to be encrypted/decrypted.
      #   @param [ FFI::Pointer ] output (out param) A pointer to a
      #     mongocrypt_binary_t object that will have a reference to the encrypted/
      #     decrypted value written to it by the callback.
      #   @param [ FFI::Pointer ] bytes_written (out param) A pointer to a uint32
      #     to which the callback writes the number of bytes written to output.
      #   @param [ FFI::Pointer ] status A pointer to a mongocrypt_status_t
      #     object to which an error message will be written if encryption fails.
      #   @return [ Bool ] Whether encryption/decryption was successful.
      #
      #   @note This defines a method signature for an FFI callback; it is not
      #     an instance method on the Binding class.
      callback(
        :mongocrypt_crypto_fn,
        [:pointer, :pointer, :pointer, :pointer, :pointer, :pointer, :pointer],
        :bool
      )

      # @!method mongocrypt_hmac_fn(ctx, key, input, output, status)
      #   @api private
      #
      #   A callback to a function that performs HMAC SHA-512 or SHA-256.
      #   @param [ FFI::Pointer | nil ] ctx An optional pointer to a context object
      #     that may have been set when hooks were enabled.
      #   @param [ FFI::Pointer ] key A pointer to a mongocrypt_binary_t object
      #     that references the 32-byte HMAC SHA encryption key.
      #   @param [ FFI::Pointer ] input A pointer to a mongocrypt_binary_t object
      #     that references the input value.
      #   @param [ FFI::Pointer ] output (out param) A pointer to a
      #     mongocrypt_binary_t object that will have a reference to the output value
      #     written to it by the callback.
      #   @param [ FFI::Pointer ] status A pointer to a mongocrypt_status_t
      #     object to which an error message will be written if the operation fails.
      #   @return [ Bool ] Whether HMAC-SHA was successful.
      #
      #   @note This defines a method signature for an FFI callback; it is not
      #     an instance method on the Binding class.
      callback(
        :mongocrypt_hmac_fn,
        [:pointer, :pointer, :pointer, :pointer, :pointer],
        :bool
      )

      # @!method mongocrypt_hash_fn(ctx, input, output, status)
      #   @api private
      #
      #   A callback to a SHA-256 hash function.
      #   @param [ FFI::Pointer | nil ] ctx An optional pointer to a context object
      #     that may have been set when hooks were enabled.
      #   @param [ FFI::Pointer ] input A pointer to a mongocrypt_binary_t object
      #     that references the value to be hashed.
      #   @param [ FFI::Pointer ] output (out param) A pointer to a
      #     mongocrypt_binary_t object that will have a reference to the output value
      #     written to it by the callback.
      #   @param [ FFI::Pointer ] status A pointer to a mongocrypt_status_t
      #     object to which an error message will be written if hashing fails.
      #   @return [ Bool ] Whether hashing was successful.
      #
      #   @note This defines a method signature for an FFI callback; it is not
      #     an instance method on the Binding class.
      callback :mongocrypt_hash_fn, [:pointer, :pointer, :pointer, :pointer], :bool

      # @!method mongocrypt_random_fn(ctx, output, count, status)
      #   @api private
      #
      #   A callback to a crypto secure random function.
      #   @param [ FFI::Pointer | nil ] ctx An optional pointer to a context object
      #     that may have been set when hooks were enabled.
      #   @param [ FFI::Pointer ] output (out param) A pointer to a
      #     mongocrypt_binary_t object that will have a reference to the output value
      #     written to it by the callback.
      #   @param [ Integer ] count The number of random bytes to return.
      #   @param [ FFI::Pointer ] status A pointer to a mongocrypt_status_t
      #     object to which an error message will be written if random generation fails.
      #   @return [ Bool ] Whether random generation was successful.
      #
      #   @note This defines a method signature for an FFI callback; it is not
      #     an instance method on the Binding class.
      callback :mongocrypt_random_fn, [:pointer, :pointer, :int, :pointer], :bool
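      # For completeness, hedged one-liners showing what the HMAC, hash and
      # random hooks have to compute, again with plain strings standing in
      # for mongocrypt_binary_t buffers:
      #
      #   require 'openssl'
      #   require 'securerandom'
      #
      #   OpenSSL::HMAC.digest('SHA512', key, input)  # :mongocrypt_hmac_fn (SHA-512)
      #   OpenSSL::HMAC.digest('SHA256', key, input)  # :mongocrypt_hmac_fn (SHA-256)
      #   OpenSSL::Digest::SHA256.digest(input)       # :mongocrypt_hash_fn
      #   SecureRandom.bytes(count)                   # :mongocrypt_random_fn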
      # @!method self.mongocrypt_setopt_crypto_hooks(crypt, aes_enc_fn, aes_dec_fn, random_fn, sha_512_fn, sha_256_fn, hash_fn, ctx=nil)
      #   @api private
      #
      #   Set crypto hooks on the provided mongocrypt object.
      #   @param [ FFI::Pointer ] crypt A pointer to a mongocrypt_t object.
      #   @param [ Proc ] aes_enc_fn An AES encryption method.
      #   @param [ Proc ] aes_dec_fn An AES decryption method.
      #   @param [ Proc ] random_fn A random method.
      #   @param [ Proc ] sha_512_fn A HMAC SHA-512 method.
      #   @param [ Proc ] sha_256_fn A HMAC SHA-256 method.
      #   @param [ Proc ] hash_fn A SHA-256 hash method.
      #   @param [ FFI::Pointer | nil ] ctx An optional pointer to a context object
      #     that may have been set when hooks were enabled.
      #   @return [ Boolean ] Whether setting this option succeeded.
      attach_function(
        :mongocrypt_setopt_crypto_hooks,
        [
          :pointer,
          :mongocrypt_crypto_fn,
          :mongocrypt_crypto_fn,
          :mongocrypt_random_fn,
          :mongocrypt_hmac_fn,
          :mongocrypt_hmac_fn,
          :mongocrypt_hash_fn,
          :pointer
        ],
        :bool
      )

      # Set crypto callbacks on the Handle
      #
      # @param [ Mongo::Crypt::Handle ] handle
      # @param [ Method ] aes_encrypt_cb An AES encryption method
      # @param [ Method ] aes_decrypt_cb An AES decryption method
      # @param [ Method ] random_cb A method that returns a string of random bytes
      # @param [ Method ] hmac_sha_512_cb A HMAC SHA-512 method
      # @param [ Method ] hmac_sha_256_cb A HMAC SHA-256 method
      # @param [ Method ] hmac_hash_cb A SHA-256 hash method
      #
      # @raise [ Mongo::Error::CryptError ] If the callbacks aren't set successfully
      def self.setopt_crypto_hooks(handle,
        aes_encrypt_cb, aes_decrypt_cb, random_cb,
        hmac_sha_512_cb, hmac_sha_256_cb, hmac_hash_cb
      )
        check_status(handle) do
          mongocrypt_setopt_crypto_hooks(handle.ref,
            aes_encrypt_cb, aes_decrypt_cb, random_cb,
            hmac_sha_512_cb, hmac_sha_256_cb, hmac_hash_cb, nil
          )
        end
      end

      # @!method self.mongocrypt_setopt_crypto_hook_sign_rsaes_pkcs1_v1_5(crypt, sign_rsaes_pkcs1_v1_5, ctx=nil)
      #   @api private
      #
      #   Set a crypto hook for the RSASSA-PKCS1-v1_5 algorithm with a SHA-256 hash.
      #   @param [ FFI::Pointer ] crypt A pointer to a mongocrypt_t object.
      #   @param [ Proc ] sign_rsaes_pkcs1_v1_5 A RSASSA-PKCS1-v1_5 signing method.
      #   @param [ FFI::Pointer | nil ] ctx An optional pointer to a context object
      #     that may have been set when hooks were enabled.
      #   @return [ Boolean ] Whether setting this option succeeded.
      attach_function(
        :mongocrypt_setopt_crypto_hook_sign_rsaes_pkcs1_v1_5,
        [
          :pointer,
          :mongocrypt_hmac_fn,
          :pointer
        ],
        :bool
      )

      # Set a crypto hook for the RSASSA-PKCS1-v1_5 algorithm with
      # a SHA-256 hash on the Handle.
      #
      # @param [ Mongo::Crypt::Handle ] handle
      # @param [ Method ] rsaes_pkcs_signature_cb A RSASSA-PKCS1-v1_5 signing method.
      #
      # @raise [ Mongo::Error::CryptError ] If the callbacks aren't set successfully
      def self.setopt_crypto_hook_sign_rsaes_pkcs1_v1_5(
        handle,
        rsaes_pkcs_signature_cb
      )
        check_status(handle) do
          mongocrypt_setopt_crypto_hook_sign_rsaes_pkcs1_v1_5(
            handle.ref,
            rsaes_pkcs_signature_cb,
            nil
          )
        end
      end

      # @!method self.mongocrypt_setopt_encrypted_field_config_map(crypt, efc_map)
      #   @api private
      #
      #   Set a local EncryptedFieldConfigMap for encryption.
      #
      #   @param [ FFI::Pointer ] crypt A pointer to a mongocrypt_t object.
      #   @param [ FFI::Pointer ] efc_map A pointer to a mongocrypt_binary_t object that
      #     references a BSON document representing the EncryptedFieldConfigMap
      #     supplied by the user. The keys are collection namespaces and values are
      #     EncryptedFieldConfigMap documents.
      #
      #   @return [ Boolean ] Whether the operation succeeded.
      attach_function(
        :mongocrypt_setopt_encrypted_field_config_map,
        [
          :pointer,
          :pointer
        ],
        :bool
      )

      # Set a local EncryptedFieldConfigMap for encryption.
      #
      # @param [ Mongo::Crypt::Handle ] handle
      # @param [ BSON::Document ] efc_map A BSON document representing
      #   the EncryptedFieldConfigMap supplied by the user.
# The keys are collection namespaces and values are # EncryptedFieldConfigMap documents. # # @raise [ Mongo::Error::CryptError ] If the operation failed. def self.setopt_encrypted_field_config_map(handle, efc_map) validate_document(efc_map) data = efc_map.to_bson.to_s Binary.wrap_string(data) do |data_p| check_status(handle) do mongocrypt_setopt_encrypted_field_config_map( handle.ref, data_p ) end end end # @!method self.mongocrypt_setopt_bypass_query_analysis(crypt) # @api private # # Opt into skipping query analysis. # # If opted in: # - The csfle shared library will not attempt to be loaded. # - A mongocrypt_ctx_t will never enter the MONGOCRYPT_CTX_NEED_MARKINGS state. # # @param [ FFI::Pointer ] crypt A pointer to a mongocrypt_t object. attach_function(:mongocrypt_setopt_bypass_query_analysis, [:pointer], :void) # Opt-into skipping query analysis. # # If opted in: # - The csfle shared library will not attempt to be loaded. # - A mongocrypt_ctx_t will never enter the MONGOCRYPT_CTX_NEED_MARKINGS state. # # @param [ Mongo::Crypt::Handle ] handle def self.setopt_bypass_query_analysis(handle) mongocrypt_setopt_bypass_query_analysis(handle.ref) end # @!method self.mongocrypt_setopt_aes_256_ctr(crypt, aes_256_ctr_encrypt, aes_256_ctr_decrypt, ctx) # @api private # # Set a crypto hook for the AES256-CTR operations. # # @param [ FFI::Pointer ] crypt A pointer to a mongocrypt_t object. # @param [ Proc ] aes_enc_fn An AES-CTR encryption method. # @param [ Proc ] aes_dec_fn An AES-CTR decryption method. # @param [ FFI::Pointer | nil ] ctx An optional pointer to a context object # that may have been set when hooks were enabled. # @return [ Boolean ] Whether setting this option succeeded. attach_function( :mongocrypt_setopt_aes_256_ctr, [ :pointer, :mongocrypt_crypto_fn, :mongocrypt_crypto_fn, :pointer ], :bool ) # Set a crypto hook for the AES256-CTR operations. # # @param [ Mongo::Crypt::Handle ] handle # @param [ Method ] aes_encrypt_cb An AES-CTR encryption method # @param [ Method ] aes_decrypt_cb A AES-CTR decryption method # # @raise [ Mongo::Error::CryptError ] If the callbacks aren't set successfully def self.setopt_aes_256_ctr(handle, aes_ctr_encrypt_cb, aes_ctr_decrypt_cb) check_status(handle) do mongocrypt_setopt_aes_256_ctr(handle.ref, aes_ctr_encrypt_cb, aes_ctr_decrypt_cb, nil ) end end # @!method self.mongocrypt_setopt_append_crypt_shared_lib_search_path(crypt, path) # @api private # # Append an additional search directory to the search path for loading # the crypt_shared dynamic library. # # @param [ FFI::Pointer ] crypt A pointer to a mongocrypt_t object. # @param [ String ] path A path to search for the crypt shared library. If the leading element of # the path is the literal string "$ORIGIN", that substring will be replaced # with the directory path containing the executable libmongocrypt module. If # the path string is literal "$SYSTEM", then libmongocrypt will defer to the # system's library resolution mechanism to find the crypt_shared library. attach_function( :mongocrypt_setopt_append_crypt_shared_lib_search_path, [ :pointer, :string, ], :void ) # Append an additional search directory to the search path for loading # the crypt_shared dynamic library. # # @param [ Mongo::Crypt::Handle ] handle # @param [ String ] path A search path for the crypt shared library. 
      def self.setopt_append_crypt_shared_lib_search_path(handle, path)
        check_status(handle) do
          mongocrypt_setopt_append_crypt_shared_lib_search_path(handle.ref, path)
        end
      end

      # @!method self.mongocrypt_setopt_set_crypt_shared_lib_path_override(crypt, path)
      #   @api private
      #
      #   Set a single override path for loading the crypt shared library.
      #
      #   @param [ FFI::Pointer ] crypt A pointer to a mongocrypt_t object.
      #   @param [ String ] path A path to the crypt shared library file. If the leading
      #     element of the path is the literal string "$ORIGIN", that substring will
      #     be replaced with the directory path containing the executable libmongocrypt
      #     module.
      attach_function(
        :mongocrypt_setopt_set_crypt_shared_lib_path_override,
        [
          :pointer,
          :string,
        ],
        :void
      )

      # Set a single override path for loading the crypt shared library.
      #
      # @param [ Mongo::Crypt::Handle ] handle
      # @param [ String ] path A path to the crypt shared library file.
      def self.setopt_set_crypt_shared_lib_path_override(handle, path)
        check_status(handle) do
          mongocrypt_setopt_set_crypt_shared_lib_path_override(handle.ref, path)
        end
      end

      # @!method self.mongocrypt_crypt_shared_lib_version(crypt)
      #   @api private
      #
      #   Obtain a 64-bit constant encoding the version of the loaded
      #   crypt_shared library, if available.
      #
      #   The version is encoded as four 16-bit numbers, from high to low:
      #
      #   - Major version
      #   - Minor version
      #   - Revision
      #   - Reserved
      #
      #   For example, version 6.2.1 would be encoded as: 0x0006'0002'0001'0000
      #
      #   @param [ FFI::Pointer ] crypt A pointer to a mongocrypt_t object.
      #
      #   @return [ int64 ] A 64-bit encoded version number, with the version encoded
      #     as four sixteen-bit integers, or zero if no crypt_shared library was loaded.
      attach_function(
        :mongocrypt_crypt_shared_lib_version,
        [ :pointer ],
        :uint64
      )

      # Obtain a 64-bit constant encoding the version of the loaded
      # crypt_shared library, if available.
      #
      # The version is encoded as four 16-bit numbers, from high to low:
      #
      # - Major version
      # - Minor version
      # - Revision
      # - Reserved
      #
      # For example, version 6.2.1 would be encoded as: 0x0006'0002'0001'0000
      #
      # @param [ Mongo::Crypt::Handle ] handle
      #
      # @return [ Integer ] A 64-bit encoded version number, with the version encoded
      #   as four sixteen-bit integers, or zero if no crypt_shared library was loaded.
      def self.crypt_shared_lib_version(handle)
        mongocrypt_crypt_shared_lib_version(handle.ref)
      end
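      # The packed version can be unpacked with plain bit shifts. A small
      # illustrative helper (not part of the Binding API), checked against
      # the 6.2.1 example from the documentation above:
      #
      #   def decode_crypt_shared_version(encoded)
      #     [48, 32, 16]
      #       .map { |shift| (encoded >> shift) & 0xFFFF }
      #       .join('.')
      #   end
      #
      #   decode_crypt_shared_version(0x0006_0002_0001_0000)
      #   # => "6.2.1"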
      # @!method self.mongocrypt_setopt_use_need_kms_credentials_state(crypt)
      #   @api private
      #
      #   Opt into handling the MONGOCRYPT_CTX_NEED_KMS_CREDENTIALS state.
      #
      #   If set, before entering the MONGOCRYPT_CTX_NEED_KMS state,
      #   contexts may enter the MONGOCRYPT_CTX_NEED_KMS_CREDENTIALS state
      #   and then wait for credentials to be supplied through
      #   `mongocrypt_ctx_provide_kms_providers`.
      #
      #   A context will only enter MONGOCRYPT_CTX_NEED_KMS_CREDENTIALS
      #   if an empty document was set for a KMS provider in
      #   `mongocrypt_setopt_kms_providers`.
      #
      #   @param [ FFI::Pointer ] crypt A pointer to a mongocrypt_t object.
      attach_function(
        :mongocrypt_setopt_use_need_kms_credentials_state,
        [ :pointer ],
        :void
      )

      # Opt into handling the MONGOCRYPT_CTX_NEED_KMS_CREDENTIALS state.
      #
      # If set, before entering the MONGOCRYPT_CTX_NEED_KMS state,
      # contexts may enter the MONGOCRYPT_CTX_NEED_KMS_CREDENTIALS state
      # and then wait for credentials to be supplied through
      # `mongocrypt_ctx_provide_kms_providers`.
      #
      # A context will only enter MONGOCRYPT_CTX_NEED_KMS_CREDENTIALS
      # if an empty document was set for a KMS provider in
      # `mongocrypt_setopt_kms_providers`.
      #
      # @param [ Mongo::Crypt::Handle ] handle
      def self.setopt_use_need_kms_credentials_state(handle)
        mongocrypt_setopt_use_need_kms_credentials_state(handle.ref)
      end

      # @!method self.mongocrypt_ctx_provide_kms_providers(ctx, kms_providers)
      #   @api private
      #
      #   Call in response to the MONGOCRYPT_CTX_NEED_KMS_CREDENTIALS state
      #   to set per-context KMS provider settings. These follow the same format
      #   as `mongocrypt_setopt_kms_providers`. If no keys are present in the
      #   BSON input, the KMS provider settings configured for the mongocrypt_t
      #   at initialization are used.
      #
      #   @param [ FFI::Pointer ] ctx A pointer to a mongocrypt_ctx_t object.
      #   @param [ FFI::Pointer ] kms_providers A pointer to a
      #     mongocrypt_binary_t object that references a BSON document mapping
      #     the KMS provider names to credentials.
      #
      #   @return [ true | false ] Whether the option was set successfully.
      attach_function(
        :mongocrypt_ctx_provide_kms_providers,
        [ :pointer, :pointer ],
        :bool
      )

      # Call in response to the MONGOCRYPT_CTX_NEED_KMS_CREDENTIALS state
      # to set per-context KMS provider settings. These follow the same format
      # as `mongocrypt_setopt_kms_providers`. If no keys are present in the
      # BSON input, the KMS provider settings configured for the mongocrypt_t
      # at initialization are used.
      #
      # @param [ Mongo::Crypt::Context ] context Encryption context.
      # @param [ BSON::Document ] kms_providers BSON document mapping
      #   the KMS provider names to credentials.
      #
      # @raise [ Mongo::Error::CryptError ] If the option is not set successfully.
      def self.ctx_provide_kms_providers(context, kms_providers)
        validate_document(kms_providers)
        data = kms_providers.to_bson.to_s
        Binary.wrap_string(data) do |data_p|
          check_ctx_status(context) do
            mongocrypt_ctx_provide_kms_providers(context.ctx_p, data_p)
          end
        end
      end

      # @!method self.mongocrypt_ctx_setopt_query_type(ctx, query_type, len)
      #   @api private
      #
      #   Set the query type to use for FLE 2 explicit encryption.
      #   The query type is only used for indexed FLE 2 encryption.
      #
      #   @param [ FFI::Pointer ] ctx A pointer to a mongocrypt_ctx_t object.
      #   @param [ String ] query_type Type of the query.
      #   @param [ Integer ] len The length of the query type string.
      #
      #   @return [ Boolean ] Whether setting this option succeeded.
      attach_function(
        :mongocrypt_ctx_setopt_query_type,
        [ :pointer, :string, :int ],
        :bool
      )

      # Set the query type to use for FLE 2 explicit encryption.
      # The query type is only used for indexed FLE 2 encryption.
      #
      # @param [ Mongo::Crypt::Context ] context Explicit encryption context.
      # @param [ String ] query_type Type of the query.
      #
      # @raise [ Mongo::Error::CryptError ] If the operation failed.
      def self.ctx_setopt_query_type(context, query_type)
        check_ctx_status(context) do
          mongocrypt_ctx_setopt_query_type(context.ctx_p, query_type, -1)
        end
      end
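      # A hedged sketch of the per-context credentials document fed to
      # ctx_provide_kms_providers in the NEED_KMS_CREDENTIALS state. Field
      # names follow the kmsProviders format; the values shown are
      # placeholders, not real credentials:
      #
      #   kms_providers = BSON::Document.new(
      #     aws: {
      #       accessKeyId: '<aws access key id>',
      #       secretAccessKey: '<aws secret access key>'
      #     }
      #   )
      #   Binding.ctx_provide_kms_providers(context, kms_providers)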
      # @!method self.mongocrypt_ctx_setopt_contention_factor(ctx, contention_factor)
      #   @api private
      #
      #   Set the contention factor used for explicit encryption.
      #   The contention factor is only used for indexed FLE 2 encryption.
      #
      #   @param [ FFI::Pointer ] ctx A pointer to a mongocrypt_ctx_t object.
      #   @param [ int64 ] contention_factor
      #
      #   @return [ Boolean ] Whether setting this option succeeded.
      attach_function(
        :mongocrypt_ctx_setopt_contention_factor,
        [ :pointer, :int64 ],
        :bool
      )

      # Set the contention factor used for explicit encryption.
      # The contention factor is only used for indexed FLE 2 encryption.
      #
      # @param [ Mongo::Crypt::Context ] context Explicit encryption context.
      # @param [ Integer ] factor Contention factor used for explicit encryption.
      #
      # @raise [ Mongo::Error::CryptError ] If the operation failed.
      def self.ctx_setopt_contention_factor(context, factor)
        check_ctx_status(context) do
          mongocrypt_ctx_setopt_contention_factor(context.ctx_p, factor)
        end
      end

      # @!method self.mongocrypt_ctx_setopt_algorithm_range(ctx, opts)
      #   @api private
      #
      #   Set options for explicit encryption with the "range" algorithm.
      #
      #   @note The Range algorithm is experimental only. It is not intended for
      #     public use.
      #
      #   @param [ FFI::Pointer ] ctx A pointer to a mongocrypt_ctx_t object.
      #   @param [ FFI::Pointer ] opts A pointer to a range options document.
      #
      #   @return [ Boolean ] Whether setting this option succeeded.
      attach_function(
        :mongocrypt_ctx_setopt_algorithm_range,
        [ :pointer, :pointer ],
        :bool
      )

      # Set options for explicit encryption with the "range" algorithm.
      #
      # @note The Range algorithm is experimental only. It is not intended for
      #   public use.
      #
      # @param [ Mongo::Crypt::Context ] context
      # @param [ Hash ] opts options
      #
      # @raise [ Mongo::Error::CryptError ] If the operation failed
      def self.ctx_setopt_algorithm_range(context, opts)
        validate_document(opts)
        data = opts.to_bson.to_s
        Binary.wrap_string(data) do |data_p|
          check_ctx_status(context) do
            mongocrypt_ctx_setopt_algorithm_range(context.ctx_p, data_p)
          end
        end
      end

      # Raise a Mongo::Error::CryptError based on the status of the underlying
      # mongocrypt_t object.
      #
      # @return [ nil ] Always nil.
      def self.check_status(handle)
        unless yield
          status = Status.new

          mongocrypt_status(handle.ref, status.ref)
          status.raise_crypt_error
        end
      end

      # Raise a Mongo::Error::CryptError based on the status of the underlying
      # mongocrypt_ctx_t object.
      #
      # @return [ nil ] Always nil.
      def self.check_ctx_status(context)
        if block_given?
          do_raise = !yield
        else
          do_raise = true
        end

        if do_raise
          status = Status.new

          mongocrypt_ctx_status(context.ctx_p, status.ref)
          status.raise_crypt_error
        end
      end

      # Checks that the specified data is a Hash before serializing
      # it to BSON to prevent errors from libmongocrypt
      #
      # @note All BSON::Document instances are also Hash instances
      #
      # @param [ Object ] data The data to be passed to libmongocrypt
      #
      # @raise [ Mongo::Error::CryptError ] If the data is not a Hash
      def self.validate_document(data)
        return if data.is_a?(Hash)

        if data.nil?
          message = "Attempted to pass nil data to libmongocrypt. " +
            "Data must be a Hash"
        else
          message = "Attempted to pass invalid data to libmongocrypt: #{data} " +
            "Data must be a Hash"
        end

        raise Error::CryptError.new(message)
      end
    end
  end
end
mongo-ruby-driver-2.21.3/lib/mongo/crypt/context.rb000066400000000000000000000177611505113246500222410ustar00rootroot00000000000000# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2019-2020 MongoDB Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
module Mongo
  module Crypt

    # A wrapper around mongocrypt_ctx_t, which manages the
    # state machine for encryption and decryption.
    #
    # This class is a superclass that defines shared methods
    # amongst contexts that are initialized for different purposes
    # (e.g. data key creation, encryption, explicit encryption, etc.)
    #
    # @api private
    class Context
      extend Forwardable

      def_delegators :@mongocrypt_handle, :kms_providers

      # Create a new Context object
      #
      # @param [ Mongo::Crypt::Handle ] mongocrypt_handle A handle to libmongocrypt
      #   used to create a new context object.
      # @param [ ClientEncryption::IO ] io An instance of the IO class
      #   that implements driver I/O methods required to run the
      #   state machine.
      def initialize(mongocrypt_handle, io)
        @mongocrypt_handle = mongocrypt_handle
        # Ideally, this level of the API wouldn't be passing around pointer
        # references between objects, so this method signature is subject to change.

        # FFI::AutoPointer uses a custom release strategy to automatically free
        # the pointer once this object goes out of scope
        @ctx_p = FFI::AutoPointer.new(
          Binding.mongocrypt_ctx_new(@mongocrypt_handle.ref),
          Binding.method(:mongocrypt_ctx_destroy)
        )
        @encryption_io = io
        @cached_azure_token = nil
      end

      attr_reader :ctx_p

      # Returns the state of the mongocrypt_ctx_t
      #
      # @return [ Symbol ] The context state
      def state
        Binding.mongocrypt_ctx_state(@ctx_p)
      end

      # Runs the mongocrypt_ctx_t state machine and handles
      # all I/O on behalf of the state machine.
      #
      # @param [ CsotTimeoutHolder ] timeout_holder CSOT timeouts for the
      #   operation.
      #
      # @return [ BSON::Document ] A BSON document representing the outcome
      #   of the state machine. Contents can differ depending on how the
      #   context was initialized.
      #
      # @raise [ Error::CryptError ] If the state machine enters the
      #   :error state
      #
      # This method is not currently unit tested. It is integration tested
      # in spec/integration/explicit_encryption_spec.rb
      def run_state_machine(timeout_holder)
        while true
          timeout_ms = timeout_holder.remaining_timeout_ms!
          case state
          when :error
            Binding.check_ctx_status(self)
          when :ready
            # Finalize the state machine and return the result as a BSON::Document
            return Binding.ctx_finalize(self)
          when :done
            return nil
          when :need_mongo_keys
            provide_keys(timeout_ms)
          when :need_mongo_collinfo
            provide_collection_info(timeout_ms)
          when :need_mongo_markings
            provide_markings(timeout_ms)
          when :need_kms
            feed_kms
          when :need_kms_credentials
            Binding.ctx_provide_kms_providers(
              self,
              retrieve_kms_credentials(timeout_holder).to_document
            )
          else
            raise Error::CryptError.new(
              "State #{state} is not supported by Mongo::Crypt::Context"
            )
          end
        end
      end

      private

      def provide_markings(timeout_ms)
        cmd = Binding.ctx_mongo_op(self)

        result = @encryption_io.mark_command(cmd, timeout_ms: timeout_ms)
        mongocrypt_feed(result)

        mongocrypt_done
      end

      def provide_collection_info(timeout_ms)
        filter = Binding.ctx_mongo_op(self)

        result = @encryption_io.collection_info(@db_name, filter, timeout_ms: timeout_ms)
        mongocrypt_feed(result) if result

        mongocrypt_done
      end

      def provide_keys(timeout_ms)
        filter = Binding.ctx_mongo_op(self)

        @encryption_io.find_keys(filter, timeout_ms: timeout_ms).each do |key|
          mongocrypt_feed(key) if key
        end

        mongocrypt_done
      end
      def feed_kms
        while (kms_context = Binding.ctx_next_kms_ctx(self)) do
          begin
            delay = Binding.kms_ctx_usleep(kms_context)
            sleep(delay / 1_000_000.0) unless delay.nil?
            provider = Binding.kms_ctx_get_kms_provider(kms_context)
            tls_options = @mongocrypt_handle.kms_tls_options(provider)
            @encryption_io.feed_kms(kms_context, tls_options)
          rescue Error::KmsError => e
            if e.network_error?
              if Binding.kms_ctx_fail(kms_context)
                next
              else
                raise
              end
            else
              raise
            end
          end
        end
        Binding.ctx_kms_done(self)
      end

      # Indicate that the state machine is done feeding I/O responses back to libmongocrypt
      def mongocrypt_done
        Binding.mongocrypt_ctx_mongo_done(ctx_p)
      end

      # Feeds the result of a Mongo operation back to libmongocrypt.
      #
      # @param [ Hash ] doc BSON document to feed.
      #
      # @return [ BSON::Document ] BSON document containing the result.
      def mongocrypt_feed(doc)
        Binding.ctx_mongo_feed(self, doc)
      end

      # Retrieves KMS credentials for providers that are configured
      # for automatic credentials retrieval.
      #
      # @param [ CsotTimeoutHolder ] timeout_holder CSOT timeout.
      #
      # @return [ Crypt::KMS::Credentials ] Credentials for the configured
      #   KMS providers.
      def retrieve_kms_credentials(timeout_holder)
        providers = {}
        if kms_providers.aws&.empty?
          begin
            aws_credentials = Mongo::Auth::Aws::CredentialsRetriever.new.credentials(timeout_holder)
          rescue Auth::Aws::CredentialsNotFound
            raise Error::CryptError.new(
              "Could not locate AWS credentials (checked environment variables, ECS and EC2 metadata)"
            )
          end
          providers[:aws] = aws_credentials.to_h
        end
        if kms_providers.gcp&.empty?
          providers[:gcp] = { access_token: gcp_access_token(timeout_holder) }
        end
        if kms_providers.azure&.empty?
          providers[:azure] = { access_token: azure_access_token(timeout_holder) }
        end
        KMS::Credentials.new(providers)
      end

      # Retrieves a GCP access token.
      #
      # @return [ String ] A GCP access token.
      #
      # @raise [ Error::CryptError ] If the GCP access token could not be
      #   retrieved.
      def gcp_access_token(timeout_holder)
        KMS::GCP::CredentialsRetriever.fetch_access_token(timeout_holder)
      rescue KMS::CredentialsNotFound => e
        raise Error::CryptError.new(
          "Could not locate GCP credentials: #{e.class}: #{e.message}"
        )
      end

      # Returns an Azure access token, retrieving it if necessary.
      #
      # @return [ String ] An Azure access token.
      #
      # @raise [ Error::CryptError ] If the Azure access token could not be
      #   retrieved.
      def azure_access_token(timeout_holder)
        if @cached_azure_token.nil? || @cached_azure_token.expired?
          @cached_azure_token = KMS::Azure::CredentialsRetriever.fetch_access_token(timeout_holder: timeout_holder)
        end
        @cached_azure_token.access_token
      rescue KMS::CredentialsNotFound => e
        raise Error::CryptError.new(
          "Could not locate Azure credentials: #{e.class}: #{e.message}"
        )
      end
    end
  end
end
mongo-ruby-driver-2.21.3/lib/mongo/crypt/data_key_context.rb000066400000000000000000000054601505113246500240750ustar00rootroot00000000000000# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2019-2020 MongoDB Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

module Mongo
  module Crypt

    # A Context object initialized specifically for the purpose of creating
    # a data key in the key management system.
# # @api private class DataKeyContext < Context # Create a new DataKeyContext object # # @param [ Mongo::Crypt::Handle ] mongocrypt a Handle that # wraps a mongocrypt_t object used to create a new mongocrypt_ctx_t # @param [ Mongo::Crypt::EncryptionIO ] io An object that performs all # driver I/O on behalf of libmongocrypt # @param [ Mongo::Crypt::KMS::MasterKeyDocument ] master_key_document The master # key document that contains master encryption key parameters. # @param [ Array | nil ] key_alt_names An optional array of strings specifying # alternate names for the new data key. # @param [ String | nil ] :key_material Optional # 96 bytes to use as custom key material for the data key being created. # If :key_material option is given, the custom key material is used # for encrypting and decrypting data. def initialize(mongocrypt, io, master_key_document, key_alt_names, key_material) super(mongocrypt, io) Binding.ctx_setopt_key_encryption_key(self, master_key_document.to_document) set_key_alt_names(key_alt_names) if key_alt_names Binding.ctx_setopt_key_material(self, BSON::Binary.new(key_material)) if key_material initialize_ctx end private # Set the alt names option on the context def set_key_alt_names(key_alt_names) unless key_alt_names.is_a?(Array) raise ArgumentError.new, 'The :key_alt_names option must be an Array' end unless key_alt_names.all? { |key_alt_name| key_alt_name.is_a?(String) } raise ArgumentError.new( "#{key_alt_names} contains an invalid alternate key name. All " + "values of the :key_alt_names option Array must be Strings" ) end Binding.ctx_setopt_key_alt_names(self, key_alt_names) end # Initializes the underlying mongocrypt_ctx_t object def initialize_ctx Binding.ctx_datakey_init(self) end end end end mongo-ruby-driver-2.21.3/lib/mongo/crypt/encryption_io.rb000066400000000000000000000347471505113246500234430ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2019-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Crypt # A class that implements I/O methods between the driver and # the MongoDB server or mongocryptd. # # @api private class EncryptionIO # Timeout used for TLS socket connection, reading, and writing. # There is no specific timeout written in the spec. See SPEC-1394 # for a discussion and updates on what this timeout should be. SOCKET_TIMEOUT = 10 # Creates a new EncryptionIO object with information about how to connect # to the key vault. # # @param [ Mongo::Client ] client The client used to connect to the collection # that stores the encrypted documents, defaults to nil. # @param [ Mongo::Client ] mongocryptd_client The client connected to mongocryptd, # defaults to nil. # @param [ Mongo::Client ] key_vault_client The client connected to the # key vault collection. # @param [ Mongo::Client | nil ] metadata_client The client to be used to # obtain collection metadata. # @param [ String ] key_vault_namespace The key vault namespace in the format # db_name.collection_name. 
# @param [ Hash ] mongocryptd_options Options related to mongocryptd. # # @option mongocryptd_options [ Boolean ] :mongocryptd_bypass_spawn # @option mongocryptd_options [ String ] :mongocryptd_spawn_path # @option mongocryptd_options [ Array ] :mongocryptd_spawn_args # # @note When being used for auto encryption, all arguments are required. # When being used for explicit encryption, only the key_vault_namespace # and key_vault_client arguments are required. # # @note This class expects that the key_vault_client and key_vault_namespace # options are not nil and are in the correct format. def initialize( client: nil, mongocryptd_client: nil, key_vault_namespace:, key_vault_client:, metadata_client:, mongocryptd_options: {} ) validate_key_vault_client!(key_vault_client) validate_key_vault_namespace!(key_vault_namespace) @client = client @mongocryptd_client = mongocryptd_client @key_vault_db_name, @key_vault_collection_name = key_vault_namespace.split('.') @key_vault_client = key_vault_client @metadata_client = metadata_client @options = mongocryptd_options end # Query for keys in the key vault collection using the provided # filter # # @param [ Hash ] filter # @param [ Integer ] :timeout_ms The operation timeout in milliseconds. # Must be a non-negative integer. An explicit value of 0 means infinite. # The default value is unset which means the feature is not enabled. # # @return [ Array ] The query results def find_keys(filter, timeout_ms: nil) key_vault_collection.find(filter, timeout_ms: timeout_ms).to_a end # Insert a document into the key vault collection # # @param [ Hash ] document # @param [ Integer ] :timeout_ms The operation timeout in milliseconds. # Must be a non-negative integer. An explicit value of 0 means infinite. # The default value is unset which means the feature is not enabled. # # @return [ Mongo::Operation::Insert::Result ] The insertion result def insert_data_key(document, timeout_ms: nil) key_vault_collection.insert_one(document, timeout_ms: timeout_ms) end # Get collection info for a collection matching the provided filter # # @param [ Hash ] filter # @param [ Integer ] :timeout_ms The operation timeout in milliseconds. # Must be a non-negative integer. An explicit value of 0 means infinite. # The default value is unset which means the feature is not enabled. # # @return [ Hash ] The collection information def collection_info(db_name, filter, timeout_ms: nil) unless @metadata_client raise ArgumentError, 'collection_info requires metadata_client to have been passed to the constructor, but it was not' end @metadata_client .use(db_name) .database .list_collections(filter: filter, deserialize_as_bson: true, timeout_ms: timeout_ms) .first end # Send the command to mongocryptd to be marked with intent-to-encrypt markings # # @param [ Hash ] cmd # @param [ Integer ] :timeout_ms The operation timeout in milliseconds. # Must be a non-negative integer. An explicit value of 0 means infinite. # The default value is unset which means the feature is not enabled. 
# # @return [ Hash ] The marked command def mark_command(cmd, timeout_ms: nil) unless @mongocryptd_client raise ArgumentError, 'mark_command requires mongocryptd_client to have been passed to the constructor, but it was not' end # Ensure the response from mongocryptd is deserialized with { mode: :bson } # to prevent losing type information in commands options = { execution_options: { deserialize_as_bson: true }, timeout_ms: timeout_ms } begin response = @mongocryptd_client.database.command(cmd, options) rescue Error::NoServerAvailable => e raise e if @options[:mongocryptd_bypass_spawn] spawn_mongocryptd response = @mongocryptd_client.database.command(cmd, options) end return response.first end # Get information about the remote KMS encryption key and feed it to the the # KmsContext object # # @param [ Mongo::Crypt::KmsContext ] kms_context A KmsContext object # corresponding to one remote KMS data key. Contains information about # the endpoint at which to establish a TLS connection and the message # to send on that connection. # @param [ Hash ] tls_options. TLS options to connect to KMS provider. # The options are same as for Mongo::Client. # @param [ Integer ] :timeout_ms The operation timeout in milliseconds. # Must be a non-negative integer. An explicit value of 0 means infinite. # The default value is unset which means the feature is not enabled. def feed_kms(kms_context, tls_options, timeout_ms: nil) with_ssl_socket(kms_context.endpoint, tls_options) do |ssl_socket| Timeout.timeout(timeout_ms || SOCKET_TIMEOUT, Error::SocketTimeoutError, 'Socket write operation timed out' ) do ssl_socket.syswrite(kms_context.message) end bytes_needed = kms_context.bytes_needed while bytes_needed > 0 do bytes = Timeout.timeout(timeout_ms || SOCKET_TIMEOUT, Error::SocketTimeoutError, 'Socket read operation timed out' ) do ssl_socket.sysread(bytes_needed) end kms_context.feed(bytes) bytes_needed = kms_context.bytes_needed end end end # Adds a key_alt_name to the key_alt_names array of the key document # in the key vault collection with the given id. def add_key_alt_name(id, key_alt_name, timeout_ms: nil) key_vault_collection.find_one_and_update( { _id: id }, { '$addToSet' => { keyAltNames: key_alt_name } }, timeout_ms: timeout_ms ) end # Removes the key document with the given id # from the key vault collection. def delete_key(id, timeout_ms: nil) key_vault_collection.delete_one(_id: id, timeout_ms: timeout_ms) end # Finds a single key document with the given id. def get_key(id, timeout_ms: nil) key_vault_collection.find(_id: id, timeout_ms: timeout_ms).first end # Returns a key document in the key vault collection with # the given key_alt_name. def get_key_by_alt_name(key_alt_name, timeout_ms: nil) key_vault_collection.find(keyAltNames: key_alt_name, timeout_ms: timeout_ms).first end # Finds all documents in the key vault collection. def get_keys(timeout_ms: nil) key_vault_collection.find(nil, timeout_ms: timeout_ms) end # Removes a key_alt_name from the key_alt_names array of the key document # in the key vault collection with the given id. def remove_key_alt_name(id, key_alt_name, timeout_ms: nil) key_vault_collection.find_one_and_update( { _id: id }, [ { '$set' => { keyAltNames: { '$cond' => [ { '$eq' => [ '$keyAltNames', [ key_alt_name ] ] }, '$$REMOVE', { '$filter' => { input: '$keyAltNames', cond: { '$ne' => [ '$$this', key_alt_name ] } } } ] } } } ], timeout_ms: timeout_ms ) end # Apply given requests to the key vault collection using bulk write. 
# # @param [ Array ] requests The bulk write requests. # # @return [ BulkWrite::Result ] The result of the operation. def update_data_keys(updates, timeout_ms: nil) key_vault_collection.bulk_write(updates, timeout_ms: timeout_ms) end private def validate_key_vault_client!(key_vault_client) unless key_vault_client raise ArgumentError.new('The :key_vault_client option cannot be nil') end unless key_vault_client.is_a?(Client) raise ArgumentError.new( 'The :key_vault_client option must be an instance of Mongo::Client' ) end end def validate_key_vault_namespace!(key_vault_namespace) unless key_vault_namespace raise ArgumentError.new('The :key_vault_namespace option cannot be nil') end unless key_vault_namespace.split('.').length == 2 raise ArgumentError.new( "#{key_vault_namespace} is an invalid key vault namespace." + "The :key_vault_namespace option must be in the format database.collection" ) end end # Use the provided key vault client and namespace to construct a # Mongo::Collection object representing the key vault collection. def key_vault_collection @key_vault_collection ||= @key_vault_client.with( database: @key_vault_db_name, read_concern: { level: :majority }, write_concern: { w: :majority } )[@key_vault_collection_name] end # Spawn a new mongocryptd process using the mongocryptd_spawn_path # and mongocryptd_spawn_args passed in through the extra auto # encrypt options. Stdout and Stderr of this new process are written # to /dev/null. # # @note To capture the mongocryptd logs, add "--logpath=/path/to/logs" # to auto_encryption_options -> extra_options -> mongocrpytd_spawn_args # # @return [ Integer ] The process id of the spawned process # # @raise [ ArgumentError ] Raises an exception if no encryption options # have been provided def spawn_mongocryptd mongocryptd_spawn_args = @options[:mongocryptd_spawn_args] mongocryptd_spawn_path = @options[:mongocryptd_spawn_path] unless mongocryptd_spawn_path raise ArgumentError.new( 'Cannot spawn mongocryptd process when no ' + ':mongocryptd_spawn_path option is provided' ) end if mongocryptd_spawn_path.nil? || mongocryptd_spawn_args.nil? || mongocryptd_spawn_args.empty? then raise ArgumentError.new( 'Cannot spawn mongocryptd process when no :mongocryptd_spawn_args ' + 'option is provided. To start mongocryptd without arguments, pass ' + '"--" for :mongocryptd_spawn_args' ) end begin Process.spawn( mongocryptd_spawn_path, *mongocryptd_spawn_args, [:out, :err]=>'/dev/null' ) rescue Errno::ENOENT => e raise Error::MongocryptdSpawnError.new( "Failed to spawn mongocryptd at the path \"#{mongocryptd_spawn_path}\" " + "with arguments #{mongocryptd_spawn_args}. Received error " + "#{e.class}: \"#{e.message}\"" ) end end # Provide a TLS socket to be used for KMS calls in a block API # # @param [ String ] endpoint The URI at which to connect the TLS socket. # @param [ Hash ] tls_options. TLS options to connect to KMS provider. # The options are same as for Mongo::Client. # @yieldparam [ OpenSSL::SSL::SSLSocket ] ssl_socket Yields a TLS socket # connected to the specified endpoint. # # @raise [ Mongo::Error::KmsError ] If the socket times out or raises # an exception # # @note The socket is always closed when the provided block has finished # executing def with_ssl_socket(endpoint, tls_options, timeout_ms: nil) csot = !timeout_ms.nil? address = begin host, port = endpoint.split(':') port ||= 443 # All supported KMS APIs use this port by default. 
Address.new([host, port].join(':')) end socket_options = { ssl: true, csot: csot }.tap do |opts| if csot opts[:connect_timeout] = (timeout_ms / 1_000.0) end end mongo_socket = address.socket( SOCKET_TIMEOUT, tls_options.merge(socket_options) ) yield(mongo_socket.socket) rescue Error::KmsError raise rescue StandardError => e raise Error::KmsError.new("Error when connecting to KMS provider: #{e.class}: #{e.message}", network_error: true) ensure mongo_socket&.close end end end end mongo-ruby-driver-2.21.3/lib/mongo/crypt/explicit_decryption_context.rb000066400000000000000000000026771505113246500264040ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2019-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Crypt # A Context object initialized for explicit decryption # # @api private class ExplicitDecryptionContext < Context # Create a new ExplicitDecryptionContext object # # @param [ Mongo::Crypt::Handle ] mongocrypt a Handle that # wraps a mongocrypt_t object used to create a new mongocrypt_ctx_t # @param [ ClientEncryption::IO ] io A instance of the IO class # that implements driver I/O methods required to run the # state machine # @param [ BSON::Document ] doc A document to decrypt def initialize(mongocrypt, io, doc) super(mongocrypt, io) # Initialize the underlying mongocrypt_ctx_t object to perform # explicit decryption Binding.ctx_explicit_decrypt_init(self, doc) end end end end mongo-ruby-driver-2.21.3/lib/mongo/crypt/explicit_encrypter.rb000066400000000000000000000337601505113246500244700ustar00rootroot00000000000000# frozen_string_literal: true # Copyright (C) 2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Crypt # An ExplicitEncrypter is an object that performs explicit encryption # operations and handles all associated options and instance variables. # # @api private class ExplicitEncrypter extend Forwardable # Create a new ExplicitEncrypter object. # # @param [ Mongo::Client ] key_vault_client An instance of Mongo::Client # to connect to the key vault collection. # @param [ String ] key_vault_namespace The namespace of the key vault # collection in the format "db_name.collection_name". # @param [ Crypt::KMS::Credentials ] kms_providers A hash of key management service # configuration information. # @param [ Hash ] kms_tls_options TLS options to connect to KMS # providers. Keys of the hash should be KSM provider names; values # should be hashes of TLS connection options. 
The options are equivalent
      #   to TLS connection options of Mongo::Client.
      # @param [ Integer | nil ] timeout_ms Timeout for every operation executed
      #   on this object.
      def initialize(key_vault_client, key_vault_namespace, kms_providers, kms_tls_options, timeout_ms = nil)
        Crypt.validate_ffi!
        @crypt_handle = Handle.new(
          kms_providers,
          kms_tls_options,
          explicit_encryption_only: true
        )
        @encryption_io = EncryptionIO.new(
          key_vault_client: key_vault_client,
          metadata_client: nil,
          key_vault_namespace: key_vault_namespace
        )
        @timeout_ms = timeout_ms
      end

      # Generates a data key used for encryption/decryption and stores
      # that key in the KMS collection. The generated key is encrypted with
      # the KMS master key.
      #
      # @param [ Mongo::Crypt::KMS::MasterKeyDocument ] master_key_document The master
      #   key document that contains master encryption key parameters.
      # @param [ Array<String> | nil ] key_alt_names An optional array of strings specifying
      #   alternate names for the new data key.
      # @param [ String | nil ] key_material Optional 96 bytes to use as
      #   custom key material for the data key being created.
      #   If key_material option is given, the custom key material is used
      #   for encrypting and decrypting data.
      #
      # @return [ BSON::Binary ] The 16-byte UUID of the new data key as a
      #   BSON::Binary object with type :uuid.
      def create_and_insert_data_key(master_key_document, key_alt_names, key_material = nil)
        data_key_document = Crypt::DataKeyContext.new(
          @crypt_handle,
          @encryption_io,
          master_key_document,
          key_alt_names,
          key_material
        ).run_state_machine(timeout_holder)

        @encryption_io.insert_data_key(
          data_key_document, timeout_ms: timeout_holder.remaining_timeout_ms!
        ).inserted_id
      end

      # Encrypts a value using the specified encryption key and algorithm
      #
      # @param [ Object ] value The value to encrypt
      # @param [ Hash ] options
      #
      # @option options [ BSON::Binary ] :key_id A BSON::Binary object of type :uuid
      #   representing the UUID of the encryption key as it is stored in the key
      #   vault collection.
      # @option options [ String ] :key_alt_name The alternate name for the
      #   encryption key.
      # @option options [ String ] :algorithm The algorithm used to encrypt the value.
      #   Valid algorithms are "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic",
      #   "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "Indexed", "Unindexed".
      # @option options [ Integer | nil ] :contention_factor Contention factor
      #   to be applied if encryption algorithm is set to "Indexed". If not
      #   provided, it defaults to a value of 0. Contention factor should be set
      #   only if encryption algorithm is set to "Indexed".
      # @option options [ String | nil ] :query_type Query type to be applied
      #   if encryption algorithm is set to "Indexed". Query type should be set
      #   only if encryption algorithm is set to "Indexed". The only allowed
      #   value is "equality".
      #
      # @note The :key_id and :key_alt_name options are mutually exclusive. Only
      #   one is required to perform explicit encryption.
      #
      # @return [ BSON::Binary ] A BSON Binary object of subtype 6 (ciphertext)
      #   representing the encrypted value
      # @raise [ ArgumentError ] if either contention_factor or query_type
      #   is set, and algorithm is not "Indexed".
      def encrypt(value, options)
        Crypt::ExplicitEncryptionContext.new(
          @crypt_handle,
          @encryption_io,
          { v: value },
          options
        ).run_state_machine(timeout_holder)['v']
      end
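      # A hedged end-to-end sketch of this class (assuming a connected
      # `key_vault_client`, a configured `kms_providers` credentials object
      # and a `master_key_document`; all names outside this file are
      # illustrative placeholders):
      #
      #   encrypter = Mongo::Crypt::ExplicitEncrypter.new(
      #     key_vault_client, 'encryption.__keyVault',
      #     kms_providers, {}
      #   )
      #
      #   key_id = encrypter.create_and_insert_data_key(master_key_document, [])
      #   ciphertext = encrypter.encrypt(
      #     'secret value',
      #     key_id: key_id,
      #     algorithm: 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'
      #   )
      #   encrypter.decrypt(ciphertext) # => 'secret value'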
      # Encrypts a Match Expression or Aggregate Expression to query a range index.
      #
      # @example Encrypt Match Expression.
      #   encryption.encrypt_expression(
      #     {'$and' => [{'field' => {'$gt' => 10}}, {'field' => {'$lt' => 20}}]}
      #   )
      # @example Encrypt Aggregate Expression.
      #   encryption.encrypt_expression(
      #     {'$and' => [{'$gt' => ['$field', 10]}, {'$lt' => ['$field', 20]}]}
      #   )
      #   {$and: [{$gt: [<fieldpath>, <value>]}, {$lt: [<fieldpath>, <value>]}]}
      # Only supported when queryType is "range" and algorithm is "Range".
      # @note The Range algorithm is experimental only. It is not intended
      #   for public use. It is subject to breaking changes.
      #
      # @param [ Hash ] expression Expression to encrypt.
      #
      # @param [ Hash ] options
      # @option options [ BSON::Binary ] :key_id A BSON::Binary object of type :uuid
      #   representing the UUID of the encryption key as it is stored in the key
      #   vault collection.
      # @option options [ String ] :key_alt_name The alternate name for the
      #   encryption key.
      # @option options [ String ] :algorithm The algorithm used to encrypt the
      #   expression. The only allowed value is "Range"
      # @option options [ Integer | nil ] :contention_factor Contention factor
      #   to be applied. If not provided, it defaults to a value of 0.
      # @option options [ String | nil ] :query_type Query type to be applied.
      #   The only allowed value is "range".
      # @option options [ Hash | nil ] :range_opts Specifies index options for
      #   a Queryable Encryption field supporting "range" queries.
      #   Allowed options are:
      #   - :min
      #   - :max
      #   - :trim_factor
      #   - :sparsity
      #   - :precision
      #   min, max, trim_factor, sparsity, and precision must match the values set in
      #   the encryptedFields of the destination collection.
      #   For double and decimal128, min/max/precision must all be set,
      #   or all be unset.
      #
      # @note The Range algorithm is experimental only. It is not
      #   intended for public use.
      #
      # @note The :key_id and :key_alt_name options are mutually exclusive. Only
      #   one is required to perform explicit encryption.
      #
      # @return [ BSON::Binary ] A BSON Binary object of subtype 6 (ciphertext)
      #   representing the encrypted expression.
      #
      # @raise [ ArgumentError ] if disallowed values in options are set.
      def encrypt_expression(expression, options)
        Crypt::ExplicitEncryptionExpressionContext.new(
          @crypt_handle,
          @encryption_io,
          { v: expression },
          options
        ).run_state_machine(timeout_holder)['v']
      end

      # Decrypts a value that has already been encrypted
      #
      # @param [ BSON::Binary ] value A BSON Binary object of subtype 6 (ciphertext)
      #   that will be decrypted
      #
      # @return [ Object ] The decrypted value
      def decrypt(value)
        Crypt::ExplicitDecryptionContext.new(
          @crypt_handle,
          @encryption_io,
          { v: value }
        ).run_state_machine(timeout_holder)['v']
      end

      # Adds a key_alt_name for the key in the key vault collection with the given id.
      #
      # @param [ BSON::Binary ] id Id of the key to add new key alt name.
      # @param [ String ] key_alt_name New key alt name to add.
      #
      # @return [ BSON::Document | nil ] Document describing the identified key
      #   before adding the key alt name, or nil if no such key.
      def add_key_alt_name(id, key_alt_name)
        @encryption_io.add_key_alt_name(id, key_alt_name, timeout_ms: @timeout_ms)
      end

      # Removes the key with the given id from the key vault collection.
      #
      # @param [ BSON::Binary ] id Id of the key to delete.
      #
      # @return [ Operation::Result ] The response from the database for the delete_one
      #   operation that deletes the key.
      def delete_key(id)
        @encryption_io.delete_key(id, timeout_ms: @timeout_ms)
      end

      # Finds a single key with the given id.
      #
      # @param [ BSON::Binary ] id Id of the key to get.
      #
      # @return [ BSON::Document | nil ] The found key document or nil
      #   if not found.
def get_key(id) @encryption_io.get_key(id, timeout_ms: @timeout_ms) end # Returns a key in the key vault collection with the given key_alt_name. # # @param [ String ] key_alt_name Key alt name to find a key. # # @return [ BSON::Document | nil ] The found key document or nil # if not found. def get_key_by_alt_name(key_alt_name) @encryption_io.get_key_by_alt_name(key_alt_name, timeout_ms: @timeout_ms) end # Returns all keys in the key vault collection. # # @return [ Collection::View ] Keys in the key vault collection. # rubocop:disable Naming/AccessorMethodName # Name of this method is defined in the FLE spec def get_keys @encryption_io.get_keys(timeout_ms: @timeout_ms) end # rubocop:enable Naming/AccessorMethodName # Removes a key_alt_name from a key in the key vault collection with the given id. # # @param [ BSON::Binary ] id Id of the key to remove key alt name. # @param [ String ] key_alt_name Key alt name to remove. # # @return [ BSON::Document | nil ] Document describing the identified key # before removing the key alt name, or nil if no such key. def remove_key_alt_name(id, key_alt_name) @encryption_io.remove_key_alt_name(id, key_alt_name, timeout_ms: @timeout_ms) end # Decrypts multiple data keys and (re-)encrypts them with a new master_key, # or with their current master_key if a new one is not given. # # @param [ Hash ] filter Filter used to find keys to be updated. # @param [ Hash ] opts # # @option opts [ String ] :provider KMS provider to encrypt keys. # @option opts [ Hash | nil ] :master_key Document describing master key # to encrypt keys. # # @return [ Crypt::RewrapManyDataKeyResult ] Result of the operation. def rewrap_many_data_key(filter, opts = {}) validate_rewrap_options!(opts) master_key_document = master_key_for_provider(opts) rewrap_result = Crypt::RewrapManyDataKeyContext.new( @crypt_handle, @encryption_io, filter, master_key_document ).run_state_machine(timeout_holder) return RewrapManyDataKeyResult.new(nil) if rewrap_result.nil? updates = updates_from_data_key_documents(rewrap_result.fetch('v')) RewrapManyDataKeyResult.new( @encryption_io.update_data_keys(updates, timeout_ms: @timeout_ms) ) end private # Ensures the consistency of the options passed to #rewrap_many_data_key. # # @param [ Hash ] opts the options hash to validate # # @raise [ ArgumentError ] if the options are not consistent or # compatible. def validate_rewrap_options!(opts) return unless opts.key?(:master_key) && !opts.key?(:provider) raise ArgumentError, 'If :master_key is specified, :provider must also be given' end # If a :provider is given, construct a new master key document # with that provider. # # @param [ Hash ] opts the options hash # # @option opts [ String ] :provider KMS provider to encrypt keys. # # @return [ KMS::MasterKeyDocument | nil ] the new master key document, # or nil if no provider was given. def master_key_for_provider(opts) return nil unless opts[:provider] options = opts.dup provider = options.delete(:provider) KMS::MasterKeyDocument.new(provider, options) end # Returns the corresponding update document for each of the given # data key documents.
# # @param [ Array ] documents the data key documents # # @return [ Array ] the update documents def updates_from_data_key_documents(documents) documents.map do |doc| { update_one: { filter: { _id: doc[:_id] }, update: { '$set' => { masterKey: doc[:masterKey], keyMaterial: doc[:keyMaterial] }, '$currentDate' => { updateDate: true }, }, } } end end def timeout_holder CsotTimeoutHolder.new( operation_timeouts: { operation_timeout_ms: @timeout_ms } ) end end end end mongo-ruby-driver-2.21.3/lib/mongo/crypt/explicit_encryption_context.rb000066400000000000000000000136571505113246500264160ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2019-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Crypt # A Context object initialized for explicit encryption # # @api private class ExplicitEncryptionContext < Context # Create a new ExplicitEncryptionContext object # # @param [ Mongo::Crypt::Handle ] mongocrypt a Handle that # wraps a mongocrypt_t object used to create a new mongocrypt_ctx_t # @param [ ClientEncryption::IO ] io A instance of the IO class # that implements driver I/O methods required to run the # state machine # @param [ BSON::Document ] doc A document to encrypt # # @param [ Hash ] options # @option options [ BSON::Binary ] :key_id A BSON::Binary object of type # :uuid representing the UUID of the data key to use for encryption. # @option options [ String ] :key_alt_name The alternate name of the data key # that will be used to encrypt the value. # @option options [ String ] :algorithm The algorithm used to encrypt the # value. Valid algorithms are "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic", # "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "Indexed", "Unindexed", "Range". # @option options [ Integer | nil ] :contention_factor Contention factor # to be applied if encryption algorithm is set to "Indexed". If not # provided, it defaults to a value of 0. Contention factor should be set # only if encryption algorithm is set to "Indexed". # @option options [ String | nil ] query_type Query type to be applied # if encryption algorithm is set to "Indexed" or "Range". # Allowed values are "equality" and "range". # @option options [ Hash | nil ] :range_opts Specifies index options for # a Queryable Encryption field supporting "range" queries. # Allowed options are: # - :min # - :max # - :trim_factor # - :sparsity # - :precision # min, max, trim_factor, sparsity, and precision must match the values set in # the encryptedFields of the destination collection. # For double and decimal128, min/max/precision must all be set, # or all be unset. # # @note The Range algorithm is experimental only. It is not intended for # public use. 
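# # @example An options hash for "Indexed" explicit encryption (the values # shown are illustrative assumptions, not defaults enforced by this class): # { # key_id: BSON::Binary.new(uuid_bytes, :uuid), # algorithm: 'Indexed', # contention_factor: 4, # query_type: 'equality', # }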
# # @raise [ ArgumentError|Mongo::Error::CryptError ] If invalid options are provided def initialize(mongocrypt, io, doc, options = {}) super(mongocrypt, io) set_key_opts(options) set_algorithm_opts(options) init(doc) end def init(doc) Binding.ctx_explicit_encrypt_init(self, doc) end private def set_key_opts(options) if options[:key_id].nil? && options[:key_alt_name].nil? raise ArgumentError.new( 'The :key_id and :key_alt_name options cannot both be nil. ' + 'Specify a :key_id option or :key_alt_name option (but not both)' ) end if options[:key_id] && options[:key_alt_name] raise ArgumentError.new( 'The :key_id and :key_alt_name options cannot both be present. ' + 'Identify the data key by specifying its id with the :key_id ' + 'option or specifying its alternate name with the :key_alt_name option' ) end if options[:key_id] set_key_id(options[:key_id]) elsif options[:key_alt_name] set_key_alt_name(options[:key_alt_name]) end end def set_key_id(key_id) unless key_id.is_a?(BSON::Binary) && key_id.type == :uuid raise ArgumentError.new( "Expected the :key_id option to be a BSON::Binary object with " + "type :uuid. #{key_id} is an invalid :key_id option" ) end Binding.ctx_setopt_key_id(self, key_id.data) end def set_key_alt_name(key_alt_name) unless key_alt_name.is_a?(String) raise ArgumentError.new(':key_alt_name option must be a String') end Binding.ctx_setopt_key_alt_names(self, [key_alt_name]) end def set_algorithm_opts(options) Binding.ctx_setopt_algorithm(self, options[:algorithm]) if %w(Indexed Range).include?(options[:algorithm]) if options[:contention_factor] Binding.ctx_setopt_contention_factor(self, options[:contention_factor]) end if options[:query_type] Binding.ctx_setopt_query_type(self, options[:query_type]) end else if options[:contention_factor] raise ArgumentError.new(':contention_factor is allowed only for "Indexed" or "Range" algorithms') end if options[:query_type] raise ArgumentError.new(':query_type is allowed only for "Indexed" or "Range" algorithms') end end if options[:algorithm] == 'Range' Binding.ctx_setopt_algorithm_range(self, convert_range_opts(options[:range_opts])) end end def convert_range_opts(range_opts) range_opts.dup.tap do |opts| if opts[:sparsity] && !opts[:sparsity].is_a?(BSON::Int64) opts[:sparsity] = BSON::Int64.new(opts[:sparsity]) end if opts[:trim_factor] opts[:trimFactor] = opts.delete(:trim_factor) end end end end end end mongo-ruby-driver-2.21.3/lib/mongo/crypt/explicit_encryption_expression_context.rb000066400000000000000000000016611505113246500306650ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2019-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Crypt # A Context object initialized for explicit expression encryption. 
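# It reuses the key and algorithm option handling of # ExplicitEncryptionContext and differs only in the libmongocrypt # initializer it invokes (see #init below).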
# # @api private class ExplicitEncryptionExpressionContext < ExplicitEncryptionContext def init(doc) Binding.ctx_explicit_encrypt_expression_init(self, doc) end end end end mongo-ruby-driver-2.21.3/lib/mongo/crypt/handle.rb000066400000000000000000000351721505113246500220060ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2019-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'ffi' require 'base64' module Mongo module Crypt # A handle to the libmongocrypt library that wraps a mongocrypt_t object, # allowing clients to set options on that object or perform operations such # as encryption and decryption # # @api private class Handle # @returns [ Crypt::KMS::Credentials ] Credentials for KMS providers. attr_reader :kms_providers # Creates a new Handle object and initializes it with options # # @param [ Crypt::KMS::Credentials ] kms_providers Credentials for KMS providers. # # @param [ Hash ] kms_tls_options TLS options to connect to KMS # providers. Keys of the hash should be KSM provider names; values # should be hashes of TLS connection options. The options are equivalent # to TLS connection options of Mongo::Client. # # @param [ Hash ] options A hash of options. # @option options [ Hash | nil ] :schema_map A hash representing the JSON schema # of the collection that stores auto encrypted documents. This option is # mutually exclusive with :schema_map_path. # @option options [ String | nil ] :schema_map_path A path to a file contains the JSON schema # of the collection that stores auto encrypted documents. This option is # mutually exclusive with :schema_map. # @option options [ Hash | nil ] :encrypted_fields_map maps a collection # namespace to an encryptedFields. # - Note: If a collection is present on both the encryptedFieldsMap # and schemaMap, an error will be raised. # @option options [ Boolean | nil ] :bypass_query_analysis When true # disables automatic analysis of outgoing commands. # @option options [ String | nil ] :crypt_shared_lib_path Path that should # be the used to load the crypt shared library. Providing this option # overrides default crypt shared library load paths for libmongocrypt. # @option options [ Boolean | nil ] :crypt_shared_lib_required Whether # crypt_shared library is required. If 'true', an error will be raised # if a crypt_shared library cannot be loaded by libmongocrypt. # @option options [ Boolean | nil ] :explicit_encryption_only Whether this # handle is going to be used only for explicit encryption. If true, # libmongocrypt is instructed not to load crypt shared library. 
# @option options [ Logger ] :logger A Logger object to which libmongocrypt logs # will be sent def initialize(kms_providers, kms_tls_options, options={}) # FFI::AutoPointer uses a custom release strategy to automatically free # the pointer once this object goes out of scope @mongocrypt = FFI::AutoPointer.new( Binding.mongocrypt_new, Binding.method(:mongocrypt_destroy) ) Binding.setopt_retry_kms(self, true) @kms_providers = kms_providers @kms_tls_options = kms_tls_options maybe_set_schema_map(options) @encrypted_fields_map = options[:encrypted_fields_map] set_encrypted_fields_map if @encrypted_fields_map @bypass_query_analysis = options[:bypass_query_analysis] set_bypass_query_analysis if @bypass_query_analysis @crypt_shared_lib_path = options[:crypt_shared_lib_path] @explicit_encryption_only = options[:explicit_encryption_only] if @crypt_shared_lib_path Binding.setopt_set_crypt_shared_lib_path_override(self, @crypt_shared_lib_path) elsif !@bypass_query_analysis && !@explicit_encryption_only Binding.setopt_append_crypt_shared_lib_search_path(self, "$SYSTEM") end @logger = options[:logger] set_logger_callback if @logger set_crypto_hooks Binding.setopt_kms_providers(self, @kms_providers.to_document) if @kms_providers.aws&.empty? || @kms_providers.gcp&.empty? || @kms_providers.azure&.empty? Binding.setopt_use_need_kms_credentials_state(self) end initialize_mongocrypt @crypt_shared_lib_required = !!options[:crypt_shared_lib_required] if @crypt_shared_lib_required && crypt_shared_lib_version == 0 raise Mongo::Error::CryptError.new( "Crypt shared library is required, but cannot be loaded according to libmongocrypt" ) end end # Return the reference to the underlying @mongocrypt object # # @return [ FFI::Pointer ] def ref @mongocrypt end # Return TLS options for KMS provider. If there are no TLS options set, # an empty hash is returned. # # @param [ String ] provider KMS provider name. # # @return [ Hash ] TLS options to connect to KMS provider. def kms_tls_options(provider) @kms_tls_options.fetch(provider, {}) end def crypt_shared_lib_version Binding.crypt_shared_lib_version(self) end def crypt_shared_lib_available? crypt_shared_lib_version != 0 end private # Set the schema map option on the underlying mongocrypt_t object def maybe_set_schema_map(options) if !options[:schema_map] && !options[:schema_map_path] @schema_map = nil elsif options[:schema_map] && options[:schema_map_path] raise ArgumentError.new( "Cannot set both schema_map and schema_map_path options." ) elsif options[:schema_map] unless options[:schema_map].is_a?(Hash) raise ArgumentError.new( "#{options[:schema_map]} is an invalid schema_map; schema_map must be a Hash or nil." ) end @schema_map = options[:schema_map] Binding.setopt_schema_map(self, @schema_map) elsif options[:schema_map_path] @schema_map = BSON::ExtJSON.parse(File.read(options[:schema_map_path])) Binding.setopt_schema_map(self, @schema_map) end rescue Errno::ENOENT raise ArgumentError.new( "#{options[:schema_map_path]} is an invalid path to a file containing the schema_map."
) end def set_encrypted_fields_map unless @encrypted_fields_map.is_a?(Hash) raise ArgumentError.new( "#{@encrypted_fields_map} is an invalid encrypted_fields_map: must be a Hash or nil" ) end Binding.setopt_encrypted_field_config_map(self, @encrypted_fields_map) end def set_bypass_query_analysis unless [true, false].include?(@bypass_query_analysis) raise ArgumentError.new( "#{@bypass_query_analysis} is an invalid bypass_query_analysis value; must be a Boolean or nil" ) end Binding.setopt_bypass_query_analysis(self) if @bypass_query_analysis end # Send the logs from libmongocrypt to the Mongo::Logger def set_logger_callback @log_callback = Proc.new do |level, msg| @logger.send(level, msg) end Binding.setopt_log_handler(@mongocrypt, @log_callback) end # Yields to the provided block and rescues exceptions raised by # the block. If an exception was raised, sets the specified status # to the exception message and returns false. If no exceptions were # raised, does not modify the status and returns true. # # This method is meant to be used with libmongocrypt callbacks and # follows the API defined by libmongocrypt. # # @param [ FFI::Pointer ] status_p A pointer to libmongocrypt status object # # @return [ true | false ] Whether block executed without raising # exceptions. def handle_error(status_p) begin yield true rescue => e status = Status.from_pointer(status_p) status.update(:error_client, 1, "#{e.class}: #{e}") false end end # Yields to the provided block and writes the return value of block # to the specified mongocrypt_binary_t object. If an exception is # raised during execution of the block, writes the exception message # to the specified status object and returns false. If no exception is # raised, does not modify status and returns true. # # @param [ FFI::Pointer ] output_binary_p A pointer to libmongocrypt # Binary object to receive the result of block's execution # @param [ FFI::Pointer ] status_p A pointer to libmongocrypt status object # # @return [ true | false ] Whether block executed without raising # exceptions. def write_binary_string_and_set_status(output_binary_p, status_p) handle_error(status_p) do output = yield Binary.from_pointer(output_binary_p).write(output) end end # Perform AES encryption or decryption and write the output to the # provided mongocrypt_binary_t object. def do_aes(key_binary_p, iv_binary_p, input_binary_p, output_binary_p, response_length_p, status_p, decrypt: false, mode: :CBC) key = Binary.from_pointer(key_binary_p).to_s iv = Binary.from_pointer(iv_binary_p).to_s input = Binary.from_pointer(input_binary_p).to_s write_binary_string_and_set_status(output_binary_p, status_p) do output = Hooks.aes(key, iv, input, decrypt: decrypt, mode: mode) response_length_p.write_int(output.bytesize) output end end # Compute an HMAC SHA tag and write the output to the provided # mongocrypt_binary_t object. def do_hmac_sha(digest_name, key_binary_p, input_binary_p, output_binary_p, status_p) key = Binary.from_pointer(key_binary_p).to_s input = Binary.from_pointer(input_binary_p).to_s write_binary_string_and_set_status(output_binary_p, status_p) do Hooks.hmac_sha(digest_name, key, input) end end # Perform signing using RSASSA-PKCS1-v1_5 with SHA256 hash and write # the output to the provided mongocrypt_binary_t object.
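# Roughly equivalent to the following OpenSSL sketch (the actual work is # delegated to Hooks.rsaes_pkcs_signature; key is assumed to be a # base64-encoded DER private key): # OpenSSL::PKey.read(Base64.decode64(key)).sign(OpenSSL::Digest::SHA256.new, input)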
def do_rsaes_pkcs_signature(key_binary_p, input_binary_p, output_binary_p, status_p) key = Binary.from_pointer(key_binary_p).to_s input = Binary.from_pointer(input_binary_p).to_s write_binary_string_and_set_status(output_binary_p, status_p) do Hooks.rsaes_pkcs_signature(key, input) end end # We are building libmongocrypt without crypto functions to remove the # external dependency on OpenSSL. This method binds native Ruby crypto # methods to the underlying mongocrypt_t object so that libmongocrypt can # still perform cryptography. # # Every crypto binding ignores its first argument, which is an option # mongocrypt_ctx_t object and is not required to use crypto hooks. def set_crypto_hooks @aes_encrypt = Proc.new do |_, key_binary_p, iv_binary_p, input_binary_p, output_binary_p, response_length_p, status_p| do_aes( key_binary_p, iv_binary_p, input_binary_p, output_binary_p, response_length_p, status_p ) end @aes_decrypt = Proc.new do |_, key_binary_p, iv_binary_p, input_binary_p, output_binary_p, response_length_p, status_p| do_aes( key_binary_p, iv_binary_p, input_binary_p, output_binary_p, response_length_p, status_p, decrypt: true ) end @random = Proc.new do |_, output_binary_p, num_bytes, status_p| write_binary_string_and_set_status(output_binary_p, status_p) do Hooks.random(num_bytes) end end @hmac_sha_512 = Proc.new do |_, key_binary_p, input_binary_p, output_binary_p, status_p| do_hmac_sha('SHA512', key_binary_p, input_binary_p, output_binary_p, status_p) end @hmac_sha_256 = Proc.new do |_, key_binary_p, input_binary_p, output_binary_p, status_p| do_hmac_sha('SHA256', key_binary_p, input_binary_p, output_binary_p, status_p) end @hmac_hash = Proc.new do |_, input_binary_p, output_binary_p, status_p| input = Binary.from_pointer(input_binary_p).to_s write_binary_string_and_set_status(output_binary_p, status_p) do Hooks.hash_sha256(input) end end Binding.setopt_crypto_hooks( self, @aes_encrypt, @aes_decrypt, @random, @hmac_sha_512, @hmac_sha_256, @hmac_hash, ) @aes_ctr_encrypt = Proc.new do |_, key_binary_p, iv_binary_p, input_binary_p, output_binary_p, response_length_p, status_p| do_aes( key_binary_p, iv_binary_p, input_binary_p, output_binary_p, response_length_p, status_p, mode: :CTR, ) end @aes_ctr_decrypt = Proc.new do |_, key_binary_p, iv_binary_p, input_binary_p, output_binary_p, response_length_p, status_p| do_aes( key_binary_p, iv_binary_p, input_binary_p, output_binary_p, response_length_p, status_p, decrypt: true, mode: :CTR, ) end Binding.setopt_aes_256_ctr( self, @aes_ctr_encrypt, @aes_ctr_decrypt, ) @rsaes_pkcs_signature_cb = Proc.new do |_, key_binary_p, input_binary_p, output_binary_p, status_p| do_rsaes_pkcs_signature(key_binary_p, input_binary_p, output_binary_p, status_p) end Binding.setopt_crypto_hook_sign_rsaes_pkcs1_v1_5( self, @rsaes_pkcs_signature_cb ) end # Initialize the underlying mongocrypt_t object and raise an error if the operation fails def initialize_mongocrypt Binding.init(self) # There is currently no test for the error(?) code path end end end end mongo-ruby-driver-2.21.3/lib/mongo/crypt/hooks.rb000066400000000000000000000076471505113246500217040ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2019-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'securerandom' require 'digest' module Mongo module Crypt # A helper module that implements cryptography methods required # for native Ruby crypto hooks. These methods are passed into FFI # as C callbacks and called from the libmongocrypt library. # # @api private module Hooks # An AES encrypt or decrypt method. # # @param [ String ] key The 32-byte AES encryption key # @param [ String ] iv The 16-byte AES IV # @param [ String ] input The data to be encrypted/decrypted # @param [ true | false ] decrypt Whether this method is decrypting. Default is # false, which means the method will create an encryption cipher by default # @param [ Symbol ] mode AES mode of operation # # @return [ String ] Output # @raise [ Exception ] Exceptions raised during encryption are propagated # to caller. def aes(key, iv, input, decrypt: false, mode: :CBC) cipher = OpenSSL::Cipher::AES.new(256, mode) decrypt ? cipher.decrypt : cipher.encrypt cipher.key = key cipher.iv = iv cipher.padding = 0 encrypted = cipher.update(input) end module_function :aes # Crypto secure random function # # @param [ Integer ] num_bytes The number of random bytes requested # # @return [ String ] # @raise [ Exception ] Exceptions raised during encryption are propagated # to caller. def random(num_bytes) SecureRandom.random_bytes(num_bytes) end module_function :random # An HMAC SHA-512 or SHA-256 function # # @param [ String ] digest_name The name of the digest, either "SHA256" or "SHA512" # @param [ String ] key The 32-byte AES encryption key # @param [ String ] input The data to be tagged # # @return [ String ] # @raise [ Exception ] Exceptions raised during encryption are propagated # to caller. def hmac_sha(digest_name, key, input) OpenSSL::HMAC.digest(digest_name, key, input) end module_function :hmac_sha # A crypto hash (SHA-256) function # # @param [ String ] input The data to be hashed # # @return [ String ] # @raise [ Exception ] Exceptions raised during encryption are propagated # to caller. def hash_sha256(input) Digest::SHA2.new(256).digest(input) end module_function :hash_sha256 # An RSASSA-PKCS1-v1_5 with SHA-256 signature function. # # @param [ String ] key The PKCS#8 private key in DER format, base64 encoded. # @param [ String ] input The data to be signed. # # @return [ String ] The signature. def rsaes_pkcs_signature(key, input) private_key = if BSON::Environment.jruby? # JRuby cannot read DER format, we need to convert key into PEM first. key_pem = [ "-----BEGIN PRIVATE KEY-----", Base64.strict_encode64(Base64.decode64(key)).scan(/.{1,64}/), "-----END PRIVATE KEY-----", ].join("\n") OpenSSL::PKey::RSA.new(key_pem) else OpenSSL::PKey.read(Base64.decode64(key)) end private_key.sign(OpenSSL::Digest::SHA256.new, input) end module_function :rsaes_pkcs_signature end end end mongo-ruby-driver-2.21.3/lib/mongo/crypt/kms.rb000066400000000000000000000102301505113246500213310ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2019-2021 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Crypt module KMS # This error indicates that we could not obtain credential for # a KMS service. # # @api private class CredentialsNotFound < RuntimeError; end # This module contains helper methods for validating KMS parameters. # # @api private module Validations # Validate if a KMS parameter is valid. # # @param [ Symbol ] key The parameter name. # @param [ Hash ] opts Hash should contain the parameter under the key. # @param [ Boolean ] required Whether the parameter is required or not. # Non-required parameters can be nil. # # @return [ String | nil ] String parameter value or nil if a # non-required parameter is missing. # # @raise [ ArgumentError ] If required options are missing or incorrectly # formatted. def validate_param(key, opts, format_hint, required: true) value = opts.fetch(key) return nil if value.nil? && !required if value.nil? raise ArgumentError.new( "The #{key} option must be a String with at least one character; " \ "currently have nil" ) end unless value.is_a?(String) raise ArgumentError.new( "The #{key} option must be a String with at least one character; " \ "currently have #{value}" ) end if value.empty? raise ArgumentError.new( "The #{key} option must be a String with at least one character; " \ "it is currently an empty string" ) end value rescue KeyError if required raise ArgumentError.new( "The specified KMS provider options are invalid: #{opts}. " + format_hint ) else nil end end # Validate KMS TLS options. # # @param [ Hash | nil ] options TLS options to connect to KMS # providers. Keys of the hash should be KSM provider names; values # should be hashes of TLS connection options. The options are equivalent # to TLS connection options of Mongo::Client. # # @return [ Hash ] Provided TLS options if valid. # # @raise [ ArgumentError ] If required options are missing or incorrectly # formatted. def validate_tls_options(options) opts = options || {} opts.each do |provider, provider_opts| if provider_opts[:ssl] == false || opts[:tls] == false raise ArgumentError.new( "Incorrect TLS options for #{provider}: TLS is required" ) end %i( ssl_verify_certificate ssl_verify_hostname ).each do |opt| if provider_opts[opt] == false raise ArgumentError.new( "Incorrect TLS options for #{provider}: " + 'Insecure TLS options prohibited, ' + "#{opt} cannot be set to false for KMS" ) end end end opts end module_function :validate_tls_options end end end end require "mongo/crypt/kms/credentials" require "mongo/crypt/kms/master_key_document" require 'mongo/crypt/kms/aws' require 'mongo/crypt/kms/azure' require 'mongo/crypt/kms/gcp' require 'mongo/crypt/kms/kmip' require 'mongo/crypt/kms/local' mongo-ruby-driver-2.21.3/lib/mongo/crypt/kms/000077500000000000000000000000001505113246500210105ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/lib/mongo/crypt/kms/aws.rb000066400000000000000000000013231505113246500221260ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2019-2021 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'mongo/crypt/kms/aws/credentials' require 'mongo/crypt/kms/aws/master_document' mongo-ruby-driver-2.21.3/lib/mongo/crypt/kms/aws/000077500000000000000000000000001505113246500216025ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/lib/mongo/crypt/kms/aws/credentials.rb000066400000000000000000000055721505113246500244350ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2019-2021 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Crypt module KMS module AWS # AWS KMS Credentials object contains credentials for using AWS KMS provider. # # @api private class Credentials extend Forwardable include KMS::Validations # @return [ String ] AWS access key. attr_reader :access_key_id # @return [ String ] AWS secret access key. attr_reader :secret_access_key # @return [ String | nil ] AWS session token. attr_reader :session_token # @api private def_delegator :@opts, :empty? FORMAT_HINT = "AWS KMS provider options must be in the format: " + "{ access_key_id: 'YOUR-ACCESS-KEY-ID', secret_access_key: 'SECRET-ACCESS-KEY' }" # Creates an AWS KMS credentials object form a parameters hash. # # @param [ Hash ] opts A hash that contains credentials for # AWS KMS provider # @option opts [ String ] :access_key_id AWS access key id. # @option opts [ String ] :secret_access_key AWS secret access key. # @option opts [ String | nil ] :session_token AWS session token, optional. # # @raise [ ArgumentError ] If required options are missing or incorrectly # formatted. def initialize(opts) @opts = opts unless empty? @access_key_id = validate_param(:access_key_id, opts, FORMAT_HINT) @secret_access_key = validate_param(:secret_access_key, opts, FORMAT_HINT) @session_token = validate_param(:session_token, opts, FORMAT_HINT, required: false) end end # Convert credentials object to a BSON document in libmongocrypt format. # # @return [ BSON::Document ] AWS KMS credentials in libmongocrypt format. def to_document return BSON::Document.new if empty? BSON::Document.new({ accessKeyId: access_key_id, secretAccessKey: secret_access_key, }).tap do |bson| unless session_token.nil? bson.update({ sessionToken: session_token }) end end end end end end end end mongo-ruby-driver-2.21.3/lib/mongo/crypt/kms/aws/master_document.rb000066400000000000000000000052451505113246500253260ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2019-2021 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Crypt module KMS module AWS # AWS KMS master key document object contains KMS master key parameters. # # @api private class MasterKeyDocument include KMS::Validations # @return [ String ] AWS region. attr_reader :region # @return [ String ] AWS KMS key. attr_reader :key # @return [ String | nil ] AWS KMS endpoint. attr_reader :endpoint FORMAT_HINT = "AWS key document must be in the format: " + "{ region: 'REGION', key: 'KEY' }" # Creates a master key document object form a parameters hash. # # @param [ Hash ] opts A hash that contains master key options for # the AWS KMS provider. # @option opts [ String ] :region AWS region. # @option opts [ String ] :key AWS KMS key. # @option opts [ String | nil ] :endpoint AWS KMS endpoint, optional. # # @raise [ ArgumentError ] If required options are missing or incorrectly. def initialize(opts) unless opts.is_a?(Hash) raise ArgumentError.new( 'Key document options must contain a key named :master_key with a Hash value' ) end @region = validate_param(:region, opts, FORMAT_HINT) @key = validate_param(:key, opts, FORMAT_HINT) @endpoint = validate_param(:endpoint, opts, FORMAT_HINT, required: false) end # Convert master key document object to a BSON document in libmongocrypt format. # # @return [ BSON::Document ] AWS KMS master key document in libmongocrypt format. def to_document BSON::Document.new({ provider: 'aws', region: region, key: key, }).tap do |bson| unless endpoint.nil? bson.update({ endpoint: endpoint }) end end end end end end end end mongo-ruby-driver-2.21.3/lib/mongo/crypt/kms/azure.rb000066400000000000000000000014711505113246500224660ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2019-2021 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'mongo/crypt/kms/azure/access_token' require 'mongo/crypt/kms/azure/credentials' require 'mongo/crypt/kms/azure/credentials_retriever' require 'mongo/crypt/kms/azure/master_document' mongo-ruby-driver-2.21.3/lib/mongo/crypt/kms/azure/000077500000000000000000000000001505113246500221365ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/lib/mongo/crypt/kms/azure/access_token.rb000066400000000000000000000033521505113246500251270ustar00rootroot00000000000000# frozen_string_literal: true # Copyright (C) 2019-2021 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Crypt module KMS module Azure # Azure access token for temporary credentials. # # @api private class AccessToken # @return [ String ] Azure access token. attr_reader :access_token # @return [ Integer ] Azure access token expiration time. attr_reader :expires_in # Creates an Azure access token object. # # @param [ String ] access_token Azure access token. # @param [ Integer ] expires_in Azure access token expiration time. def initialize(access_token, expires_in) @access_token = access_token @expires_in = expires_in @expires_at = Time.now.to_i + @expires_in end # Checks if the access token is expired. # # The access token is considered expired if it is within 60 seconds # of its expiration time. # # @return [ true | false ] Whether the access token is expired. def expired? Time.now.to_i >= @expires_at - 60 end end end end end end mongo-ruby-driver-2.21.3/lib/mongo/crypt/kms/azure/credentials.rb000066400000000000000000000071461505113246500247670ustar00rootroot00000000000000# frozen_string_literal: true # Copyright (C) 2019-2021 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Crypt module KMS module Azure # Azure KMS Credentials object contains credentials for using Azure KMS provider. # # @api private class Credentials extend Forwardable include KMS::Validations # @return [ String ] Azure tenant id. attr_reader :tenant_id # @return [ String ] Azure client id. attr_reader :client_id # @return [ String ] Azure client secret. attr_reader :client_secret # @return [ String | nil ] Azure identity platform endpoint. attr_reader :identity_platform_endpoint # @return [ String | nil ] Azure access token. attr_reader :access_token # @api private def_delegator :@opts, :empty? FORMAT_HINT = 'Azure KMS provider options must be in the format: ' + '{ tenant_id: "TENANT-ID", client_id: "CLIENT-ID", client_secret: "CLIENT-SECRET" }' # Creates an Azure KMS credentials object from a parameters hash. # # @param [ Hash ] opts A hash that contains credentials for # Azure KMS provider # @option opts [ String ] :tenant_id Azure tenant id. # @option opts [ String ] :client_id Azure client id. # @option opts [ String ] :client_secret Azure client secret. # @option opts [ String | nil ] :identity_platform_endpoint Azure # identity platform endpoint, optional. # # @raise [ ArgumentError ] If required options are missing or incorrectly # formatted. def initialize(opts) @opts = opts return if empty?
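# A caller-supplied :access_token is used verbatim; the tenant/client # parameters below are validated only when no token is given.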
if opts[:access_token] @access_token = opts[:access_token] else @tenant_id = validate_param(:tenant_id, opts, FORMAT_HINT) @client_id = validate_param(:client_id, opts, FORMAT_HINT) @client_secret = validate_param(:client_secret, opts, FORMAT_HINT) @identity_platform_endpoint = validate_param( :identity_platform_endpoint, opts, FORMAT_HINT, required: false ) end end # Convert credentials object to a BSON document in libmongocrypt format. # # @return [ BSON::Document ] Azure KMS credentials in libmongocrypt format. def to_document return BSON::Document.new if empty? if access_token BSON::Document.new({ accessToken: access_token }) else BSON::Document.new( { tenantId: @tenant_id, clientId: @client_id, clientSecret: @client_secret } ).tap do |bson| unless identity_platform_endpoint.nil? bson.update({ identityPlatformEndpoint: identity_platform_endpoint }) end end end end end end end end end mongo-ruby-driver-2.21.3/lib/mongo/crypt/kms/azure/credentials_retriever.rb000066400000000000000000000133351505113246500270540ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2019-2021 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Crypt module KMS module Azure # This class retrieves Azure credentials using Azure # metadata host. This should be used when the driver is used on the # Azure environment. # # @api private class CredentialsRetriever # Default host to obtain Azure metadata. DEFAULT_HOST = '169.254.169.254' # Fetches Azure credentials from Azure metadata host. # # @param [Hash] extra_headers Extra headers to be passed to the # request. This is used for testing. # @param [String | nil] metadata_host Azure metadata host. This # is used for testing. # @param [ CsotTimeoutHolder | nil ] timeout_holder CSOT timeout. # # @return [ KMS::Azure::AccessToken ] Azure access token. # # @raise [KMS::CredentialsNotFound] If credentials could not be found. # @raise Error::TimeoutError if credentials cannot be retrieved within # the timeout. def self.fetch_access_token(extra_headers: {}, metadata_host: nil, timeout_holder: nil) uri, req = prepare_request(extra_headers, metadata_host) parsed_response = fetch_response(uri, req, timeout_holder) Azure::AccessToken.new( parsed_response.fetch('access_token'), Integer(parsed_response.fetch('expires_in')) ) rescue KeyError, ArgumentError => e raise KMS::CredentialsNotFound, "Azure metadata response is invalid: '#{parsed_response}'; #{e.class}: #{e.message}" end # Prepares a request to Azure metadata host. # # @param [Hash] extra_headers Extra headers to be passed to the # request. This is used for testing. # @param [String | nil] metadata_host Azure metadata host. This # is used for testing. # # @return [Array] URI and request object. def self.prepare_request(extra_headers, metadata_host) host = metadata_host || DEFAULT_HOST host = DEFAULT_HOST if host.empty? 
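# With the default host, the assembled request is effectively: # GET http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://vault.azure.net # with "Metadata: true" and "Accept: application/json" headers # (the resource value is form-encoded by encode_www_form below).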
uri = URI("http://#{host}/metadata/identity/oauth2/token") uri.query = ::URI.encode_www_form( 'api-version' => '2018-02-01', 'resource' => 'https://vault.azure.net' ) req = Net::HTTP::Get.new(uri) req['Metadata'] = 'true' req['Accept'] = 'application/json' extra_headers.each { |k, v| req[k] = v } [uri, req] end private_class_method :prepare_request # Fetches response from Azure metadata host. # # @param [URI] uri URI to Azure metadata host. # @param [Net::HTTP::Get] req Request object. # @param [ CsotTimeoutHolder | nil ] timeout_holder CSOT timeout. # # @return [Hash] Parsed response. # # @raise [KMS::CredentialsNotFound] If the response cannot be fetched # or is invalid. # @raise [Error::TimeoutError] if credentials cannot be retrieved within # the timeout. def self.fetch_response(uri, req, timeout_holder) resp = do_request(uri, req, timeout_holder) if resp.code != '200' raise KMS::CredentialsNotFound, "Azure metadata host responded with code #{resp.code}" end JSON.parse(resp.body) rescue JSON::ParserError => e raise KMS::CredentialsNotFound, "Azure metadata response is invalid: '#{resp.body}'; #{e.class}: #{e.message}" end private_class_method :fetch_response # Performs a request to Azure metadata host. # # @param [URI] uri URI to Azure metadata host. # @param [Net::HTTP::Get] req Request object. # @param [ CsotTimeoutHolder | nil ] timeout_holder CSOT timeout. # # @return [Net::HTTPResponse] Response object. # # @raise [KMS::CredentialsNotFound] If the request cannot be executed. # @raise [Error::TimeoutError] if credentials cannot be retrieved within # the timeout. def self.do_request(uri, req, timeout_holder) timeout_holder&.check_timeout! timeout = timeout_holder&.remaining_timeout_sec || 10 exception_class = if timeout_holder&.csot? Error::TimeoutError else nil end ::Timeout.timeout(timeout, exception_class) do Net::HTTP.start(uri.hostname, uri.port, use_ssl: false) do |http| http.request(req) end end rescue ::Timeout::Error, IOError, SystemCallError, SocketError => e raise KMS::CredentialsNotFound, "Could not receive Azure metadata response; #{e.class}: #{e.message}" end private_class_method :do_request end end end end end mongo-ruby-driver-2.21.3/lib/mongo/crypt/kms/azure/master_document.rb000066400000000000000000000055451505113246500256610ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2019-2021 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Crypt module KMS module Azure # Azure KMS master key document object contains KMS master key parameters. # # @api private class MasterKeyDocument include KMS::Validations # @return [ String ] Azure key vault endpoint. attr_reader :key_vault_endpoint # @return [ String ] Azure KMS key name. attr_reader :key_name # @return [ String | nil ] Azure KMS key version. attr_reader :key_version FORMAT_HINT = "Azure key document must be in the format: " + "{ key_vault_endpoint: 'KEY_VAULT_ENDPOINT', key_name: 'KEY_NAME' }" # Creates a master key document object from a parameters hash.
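# (For example, mirroring FORMAT_HINT above, a minimal options hash is # { key_vault_endpoint: 'example.vault.azure.net', key_name: 'my-key' }; # the endpoint value is a placeholder, not a real vault.)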
# # @param [ Hash ] opts A hash that contains master key options for # the Azure KMS provider. # @option opts [ String ] :key_vault_endpoint Azure key vault endpoint. # @option opts [ String ] :key_name Azure KMS key name. # @option opts [ String | nil ] :key_version Azure KMS key version, optional. # # @raise [ ArgumentError ] If required options are missing or incorrectly. def initialize(opts) unless opts.is_a?(Hash) raise ArgumentError.new( 'Key document options must contain a key named :master_key with a Hash value' ) end @key_vault_endpoint = validate_param(:key_vault_endpoint, opts, FORMAT_HINT) @key_name = validate_param(:key_name, opts, FORMAT_HINT) @key_version = validate_param(:key_version, opts, FORMAT_HINT, required: false) end # Convert master key document object to a BSON document in libmongocrypt format. # # @return [ BSON::Document ] Azure KMS credentials in libmongocrypt format. def to_document BSON::Document.new({ provider: 'azure', keyVaultEndpoint: key_vault_endpoint, keyName: key_name, }).tap do |bson| unless key_version.nil? bson.update({ keyVersion: key_version }) end end end end end end end end mongo-ruby-driver-2.21.3/lib/mongo/crypt/kms/credentials.rb000066400000000000000000000066311505113246500236400ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2019-2021 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Crypt module KMS # KMS Credentials object contains credentials for using KMS providers. # # @api private class Credentials # @return [ Credentials::AWS | nil ] AWS KMS credentials. attr_reader :aws # @return [ Credentials::Azure | nil ] Azure KMS credentials. attr_reader :azure # @return [ Credentials::GCP | nil ] GCP KMS credentials. attr_reader :gcp # @return [ Credentials::KMIP | nil ] KMIP KMS credentials. attr_reader :kmip # @return [ Credentials::Local | nil ] Local KMS credentials. attr_reader :local # Creates a KMS credentials object form a parameters hash. # # @param [ Hash ] kms_providers A hash that contains credential for # KMS providers. The hash should have KMS provider names as keys, # and required parameters for every provider as values. # Required parameters for KMS providers are described in corresponding # classes inside Mongo::Crypt::KMS module. # # @note There may be more than one KMS provider specified. # # @raise [ ArgumentError ] If required options are missing or incorrectly # formatted. def initialize(kms_providers) if kms_providers.nil? raise ArgumentError.new("KMS providers options must not be nil") end if kms_providers.key?(:aws) @aws = AWS::Credentials.new(kms_providers[:aws]) end if kms_providers.key?(:azure) @azure = Azure::Credentials.new(kms_providers[:azure]) end if kms_providers.key?(:gcp) @gcp = GCP::Credentials.new(kms_providers[:gcp]) end if kms_providers.key?(:kmip) @kmip = KMIP::Credentials.new(kms_providers[:kmip]) end if kms_providers.key?(:local) @local = Local::Credentials.new(kms_providers[:local]) end if @aws.nil? && @azure.nil? && @gcp.nil? && @kmip.nil? 
&& @local.nil? raise ArgumentError.new( "KMS providers options must have one of the following keys: " + ":aws, :azure, :gcp, :kmip, :local" ) end end # Convert credentials object to a BSON document in libmongocrypt format. # # @return [ BSON::Document ] Credentials as BSON document. def to_document BSON::Document.new.tap do |bson| bson[:aws] = @aws.to_document if @aws bson[:azure] = @azure.to_document if @azure bson[:gcp] = @gcp.to_document if @gcp bson[:kmip] = @kmip.to_document if @kmip bson[:local] = @local.to_document if @local end end end end end end mongo-ruby-driver-2.21.3/lib/mongo/crypt/kms/gcp.rb000066400000000000000000000014071505113246500221100ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2019-2021 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'mongo/crypt/kms/gcp/credentials' require 'mongo/crypt/kms/gcp/credentials_retriever' require 'mongo/crypt/kms/gcp/master_document' mongo-ruby-driver-2.21.3/lib/mongo/crypt/kms/gcp/000077500000000000000000000000001505113246500215615ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/lib/mongo/crypt/kms/gcp/credentials.rb000066400000000000000000000111701505113246500244030ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2019-2021 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Crypt module KMS module GCP # GCP Cloud Key Management Credentials object contains credentials for # using GCP KMS provider. # # @api private class Credentials extend Forwardable include KMS::Validations # @return [ String ] GCP email to authenticate with. attr_reader :email # @return [ String ] GCP private key, base64 encoded DER format. attr_reader :private_key # @return [ String | nil ] GCP KMS endpoint. attr_reader :endpoint # @return [ String | nil ] GCP access token. attr_reader :access_token # @api private def_delegator :@opts, :empty? FORMAT_HINT = "GCP KMS provider options must be in the format: " + "{ email: 'EMAIL', private_key: 'PRIVATE-KEY' }" # Creates an GCP KMS credentials object form a parameters hash. # # @param [ Hash ] opts A hash that contains credentials for # GCP KMS provider # @option opts [ String ] :email GCP email. # @option opts [ String ] :private_key GCP private key. This method accepts # private key in either base64 encoded DER format, or PEM format. # @option opts [ String | nil ] :endpoint GCP endpoint, optional. # @option opts [ String | nil ] :access_token GCP access token, optional. 
# If this option is not null, other options are ignored. # # @raise [ ArgumentError ] If required options are missing or incorrectly # formatted. def initialize(opts) @opts = opts return if empty? if opts[:access_token] @access_token = opts[:access_token] else @email = validate_param(:email, opts, FORMAT_HINT) @private_key = begin private_key_opt = validate_param(:private_key, opts, FORMAT_HINT) if BSON::Environment.jruby? # We cannot really validate private key on JRuby, so we assume # it is in base64 encoded DER format. private_key_opt else # Check if private key is in PEM format. pkey = OpenSSL::PKey::RSA.new(private_key_opt) # PEM it is, need to be converted to base64 encoded DER. der = if pkey.respond_to?(:private_to_der) pkey.private_to_der else pkey.to_der end Base64.encode64(der) end rescue OpenSSL::PKey::RSAError # Check if private key is in DER. begin OpenSSL::PKey.read(Base64.decode64(private_key_opt)) # Private key is fine, use it. private_key_opt rescue OpenSSL::PKey::PKeyError raise ArgumentError.new( "The private_key option must be either either base64 encoded DER format, or PEM format." ) end end @endpoint = validate_param( :endpoint, opts, FORMAT_HINT, required: false ) end end # Convert credentials object to a BSON document in libmongocrypt format. # # @return [ BSON::Document ] Azure KMS credentials in libmongocrypt format. def to_document return BSON::Document.new if empty? if access_token BSON::Document.new({ accessToken: access_token }) else BSON::Document.new({ email: email, privateKey: BSON::Binary.new(private_key, :generic), }).tap do |bson| unless endpoint.nil? bson.update({ endpoint: endpoint }) end end end end end end end end end mongo-ruby-driver-2.21.3/lib/mongo/crypt/kms/gcp/credentials_retriever.rb000066400000000000000000000057751505113246500265100ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2019-2021 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Crypt module KMS module GCP # This class retrieves GPC credentials using Google Compute Engine # metadata host. This should be used when the driver is used on the # Google Compute Engine instance. # # @api private class CredentialsRetriever METADATA_HOST_ENV = 'GCE_METADATA_HOST' DEFAULT_HOST = 'metadata.google.internal' # Fetch GCP access token. # # @param [ CsotTimeoutHolder | nil ] timeout_holder CSOT timeout. # # @return [ String ] GCP access token. 
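# @example (illustrative) fetching a token on a GCE instance: # token = Mongo::Crypt::KMS::GCP::CredentialsRetriever.fetch_access_token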
# # @raise [ KMS::CredentialsNotFound ] # @raise [ Error::TimeoutError ] def self.fetch_access_token(timeout_holder = nil) host = ENV.fetch(METADATA_HOST_ENV) { DEFAULT_HOST } uri = URI("http://#{host}/computeMetadata/v1/instance/service-accounts/default/token") req = Net::HTTP::Get.new(uri) req['Metadata-Flavor'] = 'Google' resp = fetch_response(uri, req, timeout_holder) if resp.code != '200' raise KMS::CredentialsNotFound, "GCE metadata host responded with code #{resp.code}" end parsed_resp = JSON.parse(resp.body) parsed_resp.fetch('access_token') rescue JSON::ParserError, KeyError => e raise KMS::CredentialsNotFound, "GCE metadata response is invalid: '#{resp.body}'; #{e.class}: #{e.message}" rescue ::Timeout::Error, IOError, SystemCallError, SocketError => e raise KMS::CredentialsNotFound, "Could not receive GCP metadata response; #{e.class}: #{e.message}" end def self.fetch_response(uri, req, timeout_holder) timeout_holder&.check_timeout! if timeout_holder&.timeout? ::Timeout.timeout(timeout_holder.remaining_timeout_sec, Error::TimeoutError) do do_fetch(uri, req) end else do_fetch(uri, req) end end private_class_method :fetch_response def self.do_fetch(uri, req) Net::HTTP.start(uri.hostname, uri.port, use_ssl: false) do |http| http.request(req) end end private_class_method :do_fetch end end end end end mongo-ruby-driver-2.21.3/lib/mongo/crypt/kms/gcp/master_document.rb000066400000000000000000000071331505113246500253030ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2019-2021 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Crypt module KMS module GCP # GCP KMS master key document object contains KMS master key parameters. # # @api private class MasterKeyDocument include KMS::Validations # @return [ String ] GCP project id. attr_reader :project_id # @return [ String ] GCP location. attr_reader :location # @return [ String ] GCP KMS key ring. attr_reader :key_ring # @return [ String ] GCP KMS key name. attr_reader :key_name # @return [ String | nil ] GCP KMS key version. attr_reader :key_version # @return [ String | nil ] GCP KMS endpoint. attr_reader :endpoint FORMAT_HINT = "GCP key document must be in the format: " + "{ project_id: 'PROJECT_ID', location: 'LOCATION', " + "key_ring: 'KEY-RING', key_name: 'KEY-NAME' }" # Creates a master key document object from a parameters hash. # # @param [ Hash ] opts A hash that contains master key options for # the GCP KMS provider. # @option opts [ String ] :project_id GCP project id. # @option opts [ String ] :location GCP location. # @option opts [ String ] :key_ring GCP KMS key ring. # @option opts [ String ] :key_name GCP KMS key name. # @option opts [ String | nil ] :key_version GCP KMS key version, optional. # @option opts [ String | nil ] :endpoint GCP KMS key endpoint, optional. # # @raise [ ArgumentError ] If required options are missing or incorrectly # formatted. def initialize(opts) if opts.empty?
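# An empty options hash is tolerated here: it marks the document as # empty, and #to_document below then returns an empty BSON document # instead of raising.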
              @empty = true
              return
            end
            @project_id = validate_param(:project_id, opts, FORMAT_HINT)
            @location = validate_param(:location, opts, FORMAT_HINT)
            @key_ring = validate_param(:key_ring, opts, FORMAT_HINT)
            @key_name = validate_param(:key_name, opts, FORMAT_HINT)
            @key_version = validate_param(:key_version, opts, FORMAT_HINT, required: false)
            @endpoint = validate_param(:endpoint, opts, FORMAT_HINT, required: false)
          end

          # Convert master key document object to a BSON document in libmongocrypt format.
          #
          # @return [ BSON::Document ] GCP KMS credentials in libmongocrypt format.
          def to_document
            return BSON::Document.new({}) if @empty
            BSON::Document.new({
              provider: 'gcp',
              projectId: project_id,
              location: location,
              keyRing: key_ring,
              keyName: key_name
            }).tap do |bson|
              unless key_version.nil?
                bson.update({ keyVersion: key_version })
              end
              unless endpoint.nil?
                bson.update({ endpoint: endpoint })
              end
            end
          end
        end
      end
    end
  end
end
mongo-ruby-driver-2.21.3/lib/mongo/crypt/kms/kmip.rb000066400000000000000000000013241505113246500222750ustar00rootroot00000000000000# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2019-2021 MongoDB Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

require 'mongo/crypt/kms/kmip/credentials'
require 'mongo/crypt/kms/kmip/master_document'
mongo-ruby-driver-2.21.3/lib/mongo/crypt/kms/kmip/000077500000000000000000000000001505113246500217505ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/lib/mongo/crypt/kms/kmip/credentials.rb000066400000000000000000000041711505113246500245750ustar00rootroot00000000000000# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2019-2021 MongoDB Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

module Mongo
  module Crypt
    module KMS
      module KMIP
        # KMIP KMS Credentials object contains credentials for a
        # remote KMIP KMS provider.
        #
        # @api private
        class Credentials
          extend Forwardable
          include KMS::Validations

          # @return [ String ] KMIP KMS endpoint with optional port.
          attr_reader :endpoint

          # @api private
          def_delegator :@opts, :empty?

          FORMAT_HINT = "KMIP KMS provider options must be in the format: " +
                        "{ endpoint: 'ENDPOINT' }"

          # Creates a KMIP KMS credentials object from a parameters hash.
          #
          # @param [ Hash ] opts A hash that contains credentials for
          #   KMIP KMS provider.
          # @option opts [ String ] :endpoint KMIP endpoint.
          #
          # @raise [ ArgumentError ] If required options are missing or incorrectly
          #   formatted.
          def initialize(opts)
            @opts = opts
            unless empty?
              @endpoint = validate_param(:endpoint, opts, FORMAT_HINT)
            end
          end

          # Convert credentials object to a BSON document in libmongocrypt format.
          #
          # @return [ BSON::Document ] KMIP KMS credentials in libmongocrypt format.
          def to_document
            return BSON::Document.new({}) if empty?
            BSON::Document.new({
              endpoint: endpoint,
            })
          end
        end
      end
    end
  end
end
mongo-ruby-driver-2.21.3/lib/mongo/crypt/kms/kmip/master_document.rb000066400000000000000000000052051505113246500254700ustar00rootroot00000000000000# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2019-2021 MongoDB Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

module Mongo
  module Crypt
    module KMS
      module KMIP
        # KMIP KMS master key document object contains KMS master key parameters.
        #
        # @api private
        class MasterKeyDocument
          include KMS::Validations

          # @return [ String | nil ] The KMIP Unique Identifier to a 96 byte
          #   KMIP Secret Data managed object.
          attr_reader :key_id

          # @return [ String | nil ] KMIP KMS endpoint with optional port.
          attr_reader :endpoint

          FORMAT_HINT = "KMIP KMS key document must be in the format: " +
                        "{ key_id: 'KEY-ID', endpoint: 'ENDPOINT' }"

          # Creates a master key document object from a parameters hash.
          #
          # @param [ Hash ] opts A hash that contains master key options for
          #   KMIP KMS provider
          # @option opts [ String | nil ] :key_id KMIP Unique Identifier to
          #   a 96 byte KMIP Secret Data managed object, optional. If key_id
          #   is omitted, the driver creates a random 96 byte identifier.
          # @option opts [ String | nil ] :endpoint KMIP endpoint, optional.
          #
          # @raise [ ArgumentError ] If required options are missing or incorrectly
          #   formatted.
          def initialize(opts = {})
            @key_id = validate_param(
              :key_id, opts, FORMAT_HINT, required: false
            )
            @endpoint = validate_param(
              :endpoint, opts, FORMAT_HINT, required: false
            )
          end

          # Convert master key document object to a BSON document in libmongocrypt format.
          #
          # @return [ BSON::Document ] KMIP KMS credentials in libmongocrypt format.
          def to_document
            BSON::Document.new({
              provider: 'kmip',
            }).tap do |bson|
              bson.update({ endpoint: endpoint }) unless endpoint.nil?
              bson.update({ keyId: key_id }) unless key_id.nil?
            end
          end
        end
      end
    end
  end
end
mongo-ruby-driver-2.21.3/lib/mongo/crypt/kms/local.rb000066400000000000000000000013261505113246500224310ustar00rootroot00000000000000# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2019-2021 MongoDB Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
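
# A hedged usage sketch (the client and key vault namespace below are
# illustrative): the local KMS provider is configured with a single
# 96-byte master key, e.g.:
#
#   require 'securerandom'
#   local_master_key = SecureRandom.bytes(96)
#   client_encryption = Mongo::ClientEncryption.new(
#     client,
#     key_vault_namespace: 'keyvault.datakeys',
#     kms_providers: { local: { key: local_master_key } }
#   )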
require 'mongo/crypt/kms/local/credentials'
require 'mongo/crypt/kms/local/master_document'
mongo-ruby-driver-2.21.3/lib/mongo/crypt/kms/local/000077500000000000000000000000001505113246500221025ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/lib/mongo/crypt/kms/local/credentials.rb000066400000000000000000000037611505113246500247330ustar00rootroot00000000000000# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2019-2021 MongoDB Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

module Mongo
  module Crypt
    module KMS
      module Local
        # Local KMS Credentials object contains credentials for using local KMS provider.
        #
        # @api private
        class Credentials
          extend Forwardable
          include KMS::Validations

          # @return [ String ] Master key.
          attr_reader :key

          # @api private
          def_delegator :@opts, :empty?

          FORMAT_HINT = "Local KMS provider options must be in the format: " +
                        "{ key: 'MASTER-KEY' }"

          # Creates a local KMS credentials object from a parameters hash.
          #
          # @param [ Hash ] opts A hash that contains credentials for
          #   local KMS provider
          # @option opts [ String ] :key Master key.
          #
          # @raise [ ArgumentError ] If required options are missing or incorrectly
          #   formatted.
          def initialize(opts)
            @opts = opts
            unless empty?
              @key = validate_param(:key, opts, FORMAT_HINT)
            end
          end

          # Convert credentials object to a BSON document in libmongocrypt format.
          #
          # @return [ BSON::Document ] Local KMS credentials in libmongocrypt format.
          def to_document
            return BSON::Document.new({}) if empty?
            BSON::Document.new({
              key: BSON::Binary.new(@key, :generic),
            })
          end
        end
      end
    end
  end
end
mongo-ruby-driver-2.21.3/lib/mongo/crypt/kms/local/master_document.rb000066400000000000000000000025351505113246500256250ustar00rootroot00000000000000# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2019-2021 MongoDB Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

module Mongo
  module Crypt
    module KMS
      module Local
        # Local KMS master key document object contains KMS master key parameters.
        #
        # @api private
        class MasterKeyDocument

          # Creates a master key document object from a parameters hash.
          # This empty method is to keep a uniform interface for all KMS providers.
          def initialize(_opts)
          end

          # Convert master key document object to a BSON document in libmongocrypt format.
          #
          # @return [ BSON::Document ] Local KMS credentials in libmongocrypt format.
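          #
          # @example A sketch of the (trivial) local master key document:
          #   Mongo::Crypt::KMS::Local::MasterKeyDocument.new({}).to_document
          #   # => { "provider" => "local" }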
          def to_document
            BSON::Document.new({ provider: "local" })
          end
        end
      end
    end
  end
end
mongo-ruby-driver-2.21.3/lib/mongo/crypt/kms/master_key_document.rb000066400000000000000000000046521505113246500254040ustar00rootroot00000000000000# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2019-2021 MongoDB Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

module Mongo
  module Crypt
    module KMS
      # KMS master key document object contains KMS master key parameters
      # that are used for creation of data keys.
      #
      # @api private
      class MasterKeyDocument

        # Known KMS provider names.
        KMS_PROVIDERS = %w(aws azure gcp kmip local).freeze

        # Creates a master key document object from a parameters hash.
        #
        # @param [ String ] kms_provider KMS provider name.
        # @param [ Hash ] options A hash that contains master key options for
        #   the KMS provider.
        #   Required parameters for KMS providers are described in corresponding
        #   classes inside Mongo::Crypt::KMS module.
        #
        # @raise [ ArgumentError ] If required options are missing or incorrectly
        #   formatted.
        def initialize(kms_provider, options)
          if options.nil?
            raise ArgumentError.new('Key document options must not be nil')
          end
          master_key = options.fetch(:master_key, {})
          @key_document = case kms_provider.to_s
            when 'aws' then KMS::AWS::MasterKeyDocument.new(master_key)
            when 'azure' then KMS::Azure::MasterKeyDocument.new(master_key)
            when 'gcp' then KMS::GCP::MasterKeyDocument.new(master_key)
            when 'kmip' then KMS::KMIP::MasterKeyDocument.new(master_key)
            when 'local' then KMS::Local::MasterKeyDocument.new(master_key)
            else raise ArgumentError.new("KMS provider must be one of #{KMS_PROVIDERS}")
          end
        end

        # Convert master key document object to a BSON document in libmongocrypt format.
        #
        # @return [ BSON::Document ] Master key document as BSON document.
        def to_document
          @key_document.to_document
        end
      end
    end
  end
end
mongo-ruby-driver-2.21.3/lib/mongo/crypt/kms_context.rb000066400000000000000000000043641505113246500231060ustar00rootroot00000000000000# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2019-2020 MongoDB Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

module Mongo
  module Crypt

    # Wraps a libmongocrypt mongocrypt_kms_ctx_t object. Contains information
    # about making an HTTP request to fetch information about a KMS
    # data key.
    class KmsContext
      # Create a new KmsContext object.
      #
      # @param [ FFI::Pointer ] kms_ctx A pointer to a mongocrypt_kms_ctx_t
      #   object. This object is managed by the mongocrypt_ctx_t object that
      #   created it; this class is not responsible for de-allocating resources.
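      #
      # @example An illustrative sketch of the feed loop a caller might run;
      #   fetch_kms_reply is a hypothetical helper, not part of this class:
      #     while kms_context.bytes_needed > 0
      #       reply = fetch_kms_reply(kms_context.endpoint, kms_context.message)
      #       kms_context.feed(reply)
      #     end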
      def initialize(kms_ctx)
        @kms_ctx_p = kms_ctx
      end

      # Return the pointer to the underlying mongocrypt_kms_ctx_t object.
      #
      # @return [ FFI::Pointer ] A pointer to a mongocrypt_kms_ctx_t object.
      attr_reader :kms_ctx_p

      # Return the endpoint at which to make the HTTP request.
      #
      # @return [ String ] The endpoint.
      def endpoint
        Binding.kms_ctx_endpoint(self)
      end

      # Return the HTTP message to send to fetch information about the relevant
      # KMS data key.
      #
      # @return [ String ] The HTTP message.
      def message
        Binding.kms_ctx_message(self)
      end

      # Return the number of bytes still needed by libmongocrypt to complete
      # the request for information about the AWS data key.
      #
      # @return [ Integer ] The number of bytes needed.
      def bytes_needed
        Binding.kms_ctx_bytes_needed(self)
      end

      # Feed a response from the HTTP request to libmongocrypt.
      #
      # @param [ String ] data Data to feed to libmongocrypt.
      def feed(data)
        Binding.kms_ctx_feed(self, data)
      end
    end
  end
end
mongo-ruby-driver-2.21.3/lib/mongo/crypt/rewrap_many_data_key_context.rb000066400000000000000000000034201505113246500264730ustar00rootroot00000000000000# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2019-2020 MongoDB Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

module Mongo
  module Crypt

    # A Context object initialized specifically for the purpose of rewrapping
    # data keys (decrypting and re-encrypting using a new KEK).
    #
    # @api private
    class RewrapManyDataKeyContext < Context

      # Create a new RewrapManyDataKeyContext object
      #
      # @param [ Mongo::Crypt::Handle ] mongocrypt a Handle that
      #   wraps a mongocrypt_t object used to create a new mongocrypt_ctx_t
      # @param [ Mongo::Crypt::EncryptionIO ] io An object that performs all
      #   driver I/O on behalf of libmongocrypt
      # @param [ Hash ] filter Filter used to find keys to be updated.
      # @param [ Mongo::Crypt::KMS::MasterKeyDocument | nil ] master_key_document The optional master
      #   key document that contains master encryption key parameters.
      def initialize(mongocrypt, io, filter, master_key_document)
        super(mongocrypt, io)
        if master_key_document
          Binding.ctx_setopt_key_encryption_key(self, master_key_document.to_document)
        end
        Binding.ctx_rewrap_many_datakey_init(self, filter)
      end
    end
  end
end
mongo-ruby-driver-2.21.3/lib/mongo/crypt/rewrap_many_data_key_result.rb000066400000000000000000000024001505113246500263220ustar00rootroot00000000000000# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2019-2022 MongoDB Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
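
# A hedged usage sketch: results of this type come back from
# ClientEncryption#rewrap_many_data_key; the empty filter and provider
# below are illustrative.
#
#   result = client_encryption.rewrap_many_data_key({}, provider: 'local')
#   result.bulk_write_result&.modified_count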
module Mongo
  module Crypt
    # Represents the result of the rewrap many data key operation.
    #
    # @api semiprivate
    class RewrapManyDataKeyResult
      # @return [ BulkWrite::Result ] the result of the bulk write operation
      #   used to update the key vault collection with rewrapped data keys.
      attr_reader :bulk_write_result

      # @param [ BulkWrite::Result | nil ] bulk_write_result The result of the
      #   bulk write operation used to update the key vault collection
      #   with rewrapped data keys.
      def initialize(bulk_write_result)
        @bulk_write_result = bulk_write_result
      end
    end
  end
end
mongo-ruby-driver-2.21.3/lib/mongo/crypt/status.rb000066400000000000000000000107421505113246500220720ustar00rootroot00000000000000# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2019-2020 MongoDB Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

require 'ffi'

module Mongo
  module Crypt

    # A wrapper around mongocrypt_status_t, representing the status of
    # a mongocrypt_t handle.
    #
    # @api private
    class Status
      # Create a new Status object
      #
      # @param [ FFI::Pointer | nil ] pointer A pointer to an existing
      #   mongocrypt_status_t object. Defaults to nil.
      #
      # @note When initializing a Status object with a pointer, it is
      #   recommended that you use the #self.from_pointer method
      def initialize(pointer: nil)
        # If a pointer is passed in, this class is not responsible for
        # destroying that pointer and deallocating data.
        #
        # FFI::AutoPointer uses a custom release strategy to automatically free
        # the pointer once this object goes out of scope
        @status = pointer || FFI::AutoPointer.new(
          Binding.mongocrypt_status_new,
          Binding.method(:mongocrypt_status_destroy)
        )
      end

      # Initialize a Status object from an existing pointer to a
      # mongocrypt_status_t object.
      #
      # @param [ FFI::Pointer ] pointer A pointer to an existing
      #   mongocrypt_status_t object
      #
      # @return [ Mongo::Crypt::Status ] A new Status object
      def self.from_pointer(pointer)
        self.new(pointer: pointer)
      end

      # Set a label, code, and message on the Status
      #
      # @param [ Symbol ] label One of :ok, :error_client, or :error_kms
      # @param [ Integer ] code
      # @param [ String ] message
      #
      # @return [ Mongo::Crypt::Status ] returns self
      def update(label, code, message)
        unless [:ok, :error_client, :error_kms].include?(label)
          raise ArgumentError.new(
            "#{label} is an invalid value for a Mongo::Crypt::Status label. " +
            "Label must have one of the following values: :ok, :error_client, :error_kms"
          )
        end

        message_length = message ?
          message.bytesize + 1 : 0
        Binding.mongocrypt_status_set(@status, label, code, message, message_length)
        self
      end

      # Return the label of the status
      #
      # @return [ Symbol ] The status label, either :ok, :error_kms, or :error_client,
      #   defaults to :ok
      def label
        Binding.mongocrypt_status_type(@status)
      end

      # Return the integer code associated with the status
      #
      # @return [ Integer ] The status code, defaults to 0
      def code
        Binding.mongocrypt_status_code(@status)
      end

      # Return the status message
      #
      # @return [ String ] The status message, defaults to empty string
      def message
        message = Binding.mongocrypt_status_message(@status, nil)
        message || ''
      end

      # Checks whether the status is labeled :ok
      #
      # @return [ Boolean ] Whether the status is :ok
      def ok?
        Binding.mongocrypt_status_ok(@status)
      end

      # Returns the reference to the underlying mongocrypt_status_t
      # object
      #
      # @return [ FFI::Pointer ] Pointer to the underlying mongocrypt_status_t object
      def ref
        @status
      end

      # Raises a Mongo::Error::CryptError corresponding to the
      # information stored in this status
      #
      # Does nothing if self.ok? is true
      #
      # @param kms [ true | false ] Whether the operation was against the KMS.
      #
      # @note If kms parameter is false, the error may still have come from a
      #   KMS. The kms parameter simply forces all errors to be treated as
      #   KMS errors.
      def raise_crypt_error(kms: false)
        return if ok?

        if kms || label == :error_kms
          error = Error::KmsError.new(message, code: code)
        else
          error = Error::CryptError.new(message, code: code)
        end

        raise error
      end
    end
  end
end
mongo-ruby-driver-2.21.3/lib/mongo/csot_timeout_holder.rb000066400000000000000000000074321505113246500234630ustar00rootroot00000000000000# frozen_string_literal: true

# Copyright (C) 2024 MongoDB Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

module Mongo
  # This class stores operation timeout and provides corresponding helper methods.
  #
  # @api private
  class CsotTimeoutHolder
    def initialize(session: nil, operation_timeouts: {})
      @deadline = calculate_deadline(operation_timeouts, session)
      @operation_timeouts = operation_timeouts
      @timeout_sec = (@deadline - Utils.monotonic_time if @deadline)
    end

    attr_reader :deadline, :timeout_sec, :operation_timeouts

    # @return [ true | false ] Whether CSOT is enabled for the operation
    def csot?
      !deadline.nil?
    end

    # @return [ true | false ] Returns false if CSOT is not enabled, or if
    #   CSOT is set to 0 (means unlimited), otherwise true.
    def timeout?
      ![ nil, 0 ].include?(@deadline)
    end

    # @return [ Float | nil ] Returns the remaining seconds of the timeout
    #   set for the operation; if no timeout is set, or the timeout is 0
    #   (means unlimited) returns nil.
    def remaining_timeout_sec
      return nil unless timeout?

      deadline - Utils.monotonic_time
    end

    def remaining_timeout_sec!
      check_timeout!
      remaining_timeout_sec
    end

    # @return [ Integer | nil ] Returns the remaining milliseconds of the timeout
    #   set for the operation; if no timeout is set, or the timeout is 0
    #   (means unlimited) returns nil.
    def remaining_timeout_ms
      seconds = remaining_timeout_sec
      return nil if seconds.nil?
(seconds * 1_000).to_i end def remaining_timeout_ms! check_timeout! remaining_timeout_ms end # @return [ true | false ] Whether the timeout for the operation expired. # If no timeout set, this method returns false. def timeout_expired? if timeout? Utils.monotonic_time >= deadline else false end end # Check whether the operation timeout expired, and raises an appropriate # error if yes. # # @raise [ Error::TimeoutError ] def check_timeout! return unless timeout_expired? raise Error::TimeoutError, "Operation took more than #{timeout_sec} seconds" end private def calculate_deadline(opts = {}, session = nil) check_no_override_inside_transaction!(opts, session) return session&.with_transaction_deadline if session&.with_transaction_deadline if (operation_timeout_ms = opts[:operation_timeout_ms]) calculate_deadline_from_timeout_ms(operation_timeout_ms) elsif (inherited_timeout_ms = opts[:inherited_timeout_ms]) calculate_deadline_from_timeout_ms(inherited_timeout_ms) end end def check_no_override_inside_transaction!(opts, session) return unless opts[:operation_timeout_ms] && session&.with_transaction_deadline raise ArgumentError, 'Cannot override timeout_ms inside with_transaction block' end def calculate_deadline_from_timeout_ms(operation_timeout_ms) if operation_timeout_ms.positive? Utils.monotonic_time + (operation_timeout_ms / 1_000.0) elsif operation_timeout_ms.zero? 0 elsif operation_timeout_ms.negative? raise ArgumentError, "timeout_ms must be a non-negative integer but #{operation_timeout_ms} given" end end end end mongo-ruby-driver-2.21.3/lib/mongo/cursor.rb000066400000000000000000000441471505113246500207310ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo # Client-side representation of an iterator over a query result set on # the server. # # +Cursor+ objects are not directly exposed to application code. Rather, # +Collection::View+ exposes the +Enumerable+ interface to the applications, # and the enumerator is backed by a +Cursor+ instance. # # @example Get an array of 5 users named Emily. # users.find({:name => 'Emily'}).limit(5).to_a # # @example Call a block on each user doc. # users.find.each { |doc| puts doc } # # @api private class Cursor extend Forwardable include Enumerable include Retryable def_delegators :@view, :collection def_delegators :collection, :client, :database def_delegators :@server, :cluster # @return [ Collection::View ] view The collection view. attr_reader :view # The resume token tracked by the cursor for change stream resuming # # @return [ BSON::Document | nil ] The cursor resume token. # @api private attr_reader :resume_token # @return [ Operation::Context ] context the context for this cursor attr_reader :context # Creates a +Cursor+ object. # # @example Instantiate the cursor. # Mongo::Cursor.new(view, response, server) # # @param [ CollectionView ] view The +CollectionView+ defining the query. 
# @param [ Operation::Result ] result The result of the first execution. # @param [ Server ] server The server this cursor is locked to. # @param [ Hash ] options The cursor options. # # @option options [ Operation::Context ] :context The operation context # for this cursor. # @option options [ true, false ] :disable_retry Whether to disable # retrying on error when sending getMore operations (deprecated, getMore # operations are no longer retried) # @option options [ true, false ] :retry_reads Retry reads (following # the modern mechanism), default is true # # @since 2.0.0 def initialize(view, result, server, options = {}) unless result.is_a?(Operation::Result) raise ArgumentError, "Second argument must be a Mongo::Operation::Result: #{result.inspect}" end @view = view @server = server @initial_result = result @namespace = result.namespace @remaining = limit if limited? set_cursor_id(result) if @cursor_id.nil? raise ArgumentError, 'Cursor id must be present in the result' end @options = options @session = @options[:session] @connection_global_id = result.connection_global_id @context = @options[:context]&.with(connection_global_id: connection_global_id_for_context) || fresh_context @explicitly_closed = false @lock = Mutex.new if server.load_balancer? # We need the connection in the cursor only in load balanced topology; # we do not need an additional reference to it otherwise. @connection = @initial_result.connection end if closed? check_in_connection else register ObjectSpace.define_finalizer( self, self.class.finalize(kill_spec(@connection_global_id), cluster) ) end end # @api private attr_reader :server # @api private attr_reader :initial_result # @api private attr_reader :connection # Finalize the cursor for garbage collection. Schedules this cursor to be included # in a killCursors operation executed by the Cluster's CursorReaper. # # @param [ Cursor::KillSpec ] kill_spec The KillCursor operation specification. # @param [ Mongo::Cluster ] cluster The cluster associated with this cursor and its server. # # @return [ Proc ] The Finalizer. # # @api private def self.finalize(kill_spec, cluster) unless KillSpec === kill_spec raise ArgumentError, "First argument must be a KillSpec: #{kill_spec.inspect}" end proc do cluster.schedule_kill_cursor(kill_spec) end end # Get a human-readable string representation of +Cursor+. # # @example Inspect the cursor. # cursor.inspect # # @return [ String ] A string representation of a +Cursor+ instance. # # @since 2.0.0 def inspect "#" end # Iterate through documents returned from the query. # # A cursor may be iterated at most once. Incomplete iteration is also # allowed. Attempting to iterate the cursor more than once raises # InvalidCursorOperation. # # @example Iterate over the documents in the cursor. # cursor.each do |doc| # ... # end # # @return [ Enumerator ] The enumerator. # # @since 2.0.0 def each # If we already iterated past the first batch (i.e., called get_more # at least once), the cursor on the server side has advanced past # the first batch and restarting iteration from the beginning by # returning initial result would miss documents in the second batch # and subsequent batches up to wherever the cursor is. Detect this # condition and abort the iteration. # # In a future driver version, each would either continue from the # end of previous iteration or would always restart from the # beginning. 
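      #
      # For example (an illustrative sketch; assumes a result set larger
      # than one batch):
      #
      #   cursor.each { |doc| process(doc) } # issues getMores, drains the cursor
      #   cursor.each { |doc| process(doc) } # raises Error::InvalidCursorOperation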
      if @get_more_called
        raise Error::InvalidCursorOperation, 'Cannot restart iteration of a cursor which issued a getMore'
      end

      # To maintain compatibility with pre-2.10 driver versions, reset
      # the documents array each time a new iteration is started.
      @documents = nil

      if block_given?
        # StopIteration raised by try_next ends this loop.
        loop do
          document = try_next
          if explicitly_closed?
            raise Error::InvalidCursorOperation, 'Cursor was explicitly closed'
          end
          yield document if document
        end
        self
      else
        documents = []
        # StopIteration raised by try_next ends this loop.
        loop do
          document = try_next
          if explicitly_closed?
            raise Error::InvalidCursorOperation, 'Cursor was explicitly closed'
          end
          documents << document if document
        end
        documents
      end
    end

    # Return one document from the query, if one is available.
    #
    # This method will wait up to max_await_time_ms milliseconds
    # for changes from the server, and if no changes are received
    # it will return nil. If there are no more documents to return
    # from the server, or if we have exhausted the cursor, it will
    # raise a StopIteration exception.
    #
    # @note This method is experimental and subject to change.
    #
    # @return [ BSON::Document | nil ] A document.
    #
    # @raise [ StopIteration ] Raised on the calls after the cursor had been
    #   completely iterated.
    #
    # @api private
    def try_next
      if @documents.nil?
        # Since published versions of Mongoid have a copy of old driver cursor
        # code, our dup call in #process isn't invoked when Mongoid query
        # cache is active. Work around that by also calling dup here on
        # the result of #process which might come out of Mongoid's code.
        @documents = process(@initial_result).dup
        # the documents here can be an empty array, hence
        # we may end up issuing a getMore in the first try_next call
      end

      if @documents.empty?
        # On empty batches, we cache the batch resume token
        cache_batch_resume_token

        unless closed?
          if exhausted?
            close
            @fully_iterated = true
            raise StopIteration
          end
          @documents = get_more
        else
          @fully_iterated = true
          raise StopIteration
        end
      else
        # cursor is closed here
        # keep documents as an empty array
      end

      # If there is at least one document, cache its _id
      if @documents[0]
        cache_resume_token(@documents[0])
      end

      # Cache the batch resume token if we are iterating
      # over the last document, or if the batch is empty
      if @documents.size <= 1
        cache_batch_resume_token
        if closed?
          @fully_iterated = true
        end
      end

      return @documents.shift
    end

    # Get the batch size.
    #
    # @example Get the batch size.
    #   cursor.batch_size
    #
    # @return [ Integer ] The batch size.
    #
    # @since 2.2.0
    def batch_size
      value = @view.batch_size && @view.batch_size > 0 ? @view.batch_size : limit
      if value == 0
        nil
      else
        value
      end
    end

    # Is the cursor closed?
    #
    # @example Is the cursor closed?
    #   cursor.closed?
    #
    # @return [ true, false ] If the cursor is closed.
    #
    # @since 2.2.0
    def closed?
      # @cursor_id should in principle never be nil
      @cursor_id.nil? || @cursor_id == 0
    end

    # Closes this cursor, freeing any associated resources on the client and
    # the server.
    #
    # @return [ nil ] Always nil.
    def close(opts = {})
      return if closed?

      ctx = context ? context.refresh(timeout_ms: opts[:timeout_ms]) : fresh_context(opts)

      unregister
      read_with_one_retry do
        spec = {
          coll_name: collection_name,
          db_name: database.name,
          cursor_ids: [id],
        }
        op = Operation::KillCursors.new(spec)
        execute_operation(op, context: ctx)
      end

      nil
    rescue Error::OperationFailure::Family, Error::SocketError, Error::SocketTimeoutError, Error::ServerNotUsable
      # Errors are swallowed since there is nothing that can be done by handling them.
ensure end_session @cursor_id = 0 @lock.synchronize do @explicitly_closed = true end check_in_connection end # Get the parsed collection name. # # @example Get the parsed collection name. # cursor.coll_name # # @return [ String ] The collection name. # # @since 2.2.0 def collection_name # In most cases, this will be equivalent to the name of the collection # object in the driver. However, in some cases (e.g. when connected # to an Atlas Data Lake), the namespace returned by the find command # may be different, which is why we want to use the collection name based # on the namespace in the command result. if @namespace # Often, the namespace will be in the format "database.collection". # However, sometimes the collection name will contain periods, which # is why this method joins all the namespace components after the first. ns_components = @namespace.split('.') ns_components[1...ns_components.length].join('.') else collection.name end end # Get the cursor id. # # @example Get the cursor id. # cursor.id # # @note A cursor id of 0 means the cursor was closed on the server. # # @return [ Integer ] The cursor id. # # @since 2.2.0 def id @cursor_id end # Get the number of documents to return. Used on 3.0 and lower server # versions. # # @example Get the number to return. # cursor.to_return # # @return [ Integer ] The number of documents to return. # # @since 2.2.0 def to_return use_limit? ? @remaining : (batch_size || 0) end # Execute a getMore command and return the batch of documents # obtained from the server. # # @return [ Array ] The batch of documents # # @api private def get_more @get_more_called = true # Modern retryable reads specification prohibits retrying getMores. # Legacy retryable read logic used to retry getMores, but since # doing so may result in silent data loss, the driver no longer retries # getMore operations in any circumstance. # https://github.com/mongodb/specifications/blob/master/source/retryable-reads/retryable-reads.md#qa process(execute_operation(get_more_operation)) end # @api private def kill_spec(connection_global_id) KillSpec.new( cursor_id: id, coll_name: collection_name, db_name: database.name, connection_global_id: connection_global_id, server_address: server.address, session: @session, connection: @connection ) end # @api private def fully_iterated? !!@fully_iterated end private def explicitly_closed? @lock.synchronize do @explicitly_closed end end def batch_size_for_get_more if batch_size && use_limit? [batch_size, @remaining].min else batch_size end end def exhausted? limited? ? @remaining <= 0 : false end def cache_resume_token(doc) if doc[:_id] && doc[:_id].is_a?(Hash) @resume_token = doc[:_id] && doc[:_id].dup.freeze end end def cache_batch_resume_token @resume_token = @post_batch_resume_token if @post_batch_resume_token end def get_more_operation spec = { session: @session, db_name: database.name, coll_name: collection_name, cursor_id: id, # 3.2+ servers use batch_size, 3.0- servers use to_return. # TODO should to_return be calculated in the operation layer? batch_size: batch_size_for_get_more, to_return: to_return } if view.respond_to?(:options) && view.options.is_a?(Hash) spec[:comment] = view.options[:comment] unless view.options[:comment].nil? end Operation::GetMore.new(spec) end def end_session @session.end_session if @session && @session.implicit? end def limited? limit ? limit > 0 : false end def process(result) @remaining -= result.returned_count if limited? # #process is called for the first batch of results. 
In this case # the @cursor_id may be zero (all results fit in the first batch). # Thus we need to check both @cursor_id and the cursor_id of the result # prior to calling unregister here. if !closed? && result.cursor_id == 0 unregister check_in_connection end @cursor_id = set_cursor_id(result) if result.respond_to?(:post_batch_resume_token) @post_batch_resume_token = result.post_batch_resume_token end end_session if closed? # Since our iteration code mutates the documents array by calling #shift # on it, duplicate the documents here to permit restarting iteration # from the beginning of the cursor as long as get_more was not called result.documents.dup end def use_limit? limited? && batch_size >= @remaining end def limit @view.send(:limit) end def register cluster.register_cursor(@cursor_id) end def unregister cluster.unregister_cursor(@cursor_id) end def execute_operation(op, context: nil) op_context = context || possibly_refreshed_context if @connection.nil? op.execute(@server, context: op_context) else op.execute_with_connection(@connection, context: op_context) end end # Considers the timeout mode and will either return the cursor's # context directly, or will return a new (refreshed) context. # # @return [ Operation::Context ] the (possibly-refreshed) context. def possibly_refreshed_context return context if view.timeout_mode == :cursor_lifetime context.refresh(view: view) end # Sets @cursor_id from the operation result. # # In the operation result cursor id can be represented either as Integer # value or as BSON::Int64. This method ensures that the instance variable # is always of type Integer. # # @param [ Operation::Result ] result The result of the operation. # # @api private def set_cursor_id(result) @cursor_id = if result.cursor_id.is_a?(BSON::Int64) result.cursor_id.value else result.cursor_id end end # Returns a newly instantiated operation context based on the # default values from the view. def fresh_context(opts = {}) Operation::Context.new(client: view.client, session: @session, connection_global_id: connection_global_id_for_context, operation_timeouts: view.operation_timeouts(opts), view: view) end # Because a context must not have a connection_global_id if the session # is already pinned to one, this method checks to see whether or not there's # pinned connection_global_id on the session and returns nil if so. def connection_global_id_for_context if @session&.pinned_connection_global_id nil else @connection_global_id end end # Returns the connection that was used to create the cursor back to the # corresponding connection pool. # # In a load balanced topology cursors must use the same connection for the # initial and all subsequent operations. Therefore, the connection is not # checked into the pool after the initial operation is completed, but # only when the cursor is drained. def check_in_connection # Connection nil means the connection has been already checked in. return if @connection.nil? return unless @connection.server.load_balancer? @connection.connection_pool.check_in(@connection) @connection = nil end end end require 'mongo/cursor/kill_spec' mongo-ruby-driver-2.21.3/lib/mongo/cursor/000077500000000000000000000000001505113246500203725ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/lib/mongo/cursor/kill_spec.rb000066400000000000000000000037461505113246500226760ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2021 MongoDB Inc. 
# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Cursor # This class contains the operation specification for KillCursors. # # Its purpose is to ensure we don't misspell attribute names accidentally. # # @api private class KillSpec def initialize( cursor_id:, coll_name:, db_name:, connection_global_id:, server_address:, session:, connection: nil ) @cursor_id = cursor_id @coll_name = coll_name @db_name = db_name @connection_global_id = connection_global_id @server_address = server_address @session = session @connection = connection end attr_reader :cursor_id, :coll_name, :db_name, :connection_global_id, :server_address, :session, :connection def ==(other) cursor_id == other.cursor_id && coll_name == other.coll_name && db_name == other.db_name && connection_global_id == other.connection_global_id && server_address == other.server_address && session == other.session end def eql?(other) self.==(other) end def hash [ cursor_id, coll_name, db_name, connection_global_id, server_address, session, ].compact.hash end end end end mongo-ruby-driver-2.21.3/lib/mongo/cursor/nontailable.rb000066400000000000000000000012751505113246500232140ustar00rootroot00000000000000# frozen_string_literal: true module Mongo class Cursor # This module is used by cursor-implementing classes to indicate that # the only cursors they generate are non-tailable, and iterable. # # @api private module NonTailable # These views are always non-tailable. # # @return [ nil ] indicating a non-tailable cursor. def cursor_type nil end # These views apply timeouts to each iteration of a cursor, as # opposed to the entire lifetime of the cursor. # # @return [ :iterable ] indicating a cursor with a timeout mode of # "iterable". def timeout_mode :iterable end end end end mongo-ruby-driver-2.21.3/lib/mongo/cursor_host.rb000066400000000000000000000061241505113246500217570ustar00rootroot00000000000000# frozen_string_literal: true module Mongo # A shared concern implementing settings and configuration for entities that # "host" (or spawn) cursors. # # The class or module that includes this concern must implement: # * timeout_ms -- this must return either the operation level timeout_ms # (if set) or an inherited timeout_ms from a hierarchically higher # level (if any). module CursorHost # Returns the cursor associated with this view, if any. # # @return [ nil | Cursor ] The cursor, if any. # # @api private attr_reader :cursor # @return [ :cursor_lifetime | :iteration ] The timeout mode to be # used by this object. attr_reader :timeout_mode # Ensure the timeout mode is appropriate for other options that # have been given. # # @param [ Hash ] options The options to inspect. # @param [ Array ] forbid The list of options to forbid for this # class. # # @raise [ ArgumentError ] if inconsistent or incompatible options are # detected. 
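    #
    # @example A sketch of the tailable-cursor restriction described above
    #   (the find options shown are illustrative):
    #     collection.find({}, timeout_ms: 1000, cursor_type: :tailable,
    #                     timeout_mode: :cursor_lifetime)
    #     # => raises ArgumentError: tailable cursors only support
    #     #    `timeout_mode: :iteration`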
# # @api private # rubocop:disable Metrics def validate_timeout_mode!(options, forbid: []) forbid.each do |key| raise ArgumentError, "#{key} is not allowed here" if options.key?(key) end cursor_type = options[:cursor_type] timeout_mode = options[:timeout_mode] if timeout_ms # "Tailable cursors only support the ITERATION value for the # timeoutMode option. This is the default value and drivers MUST # error if the option is set to CURSOR_LIFETIME." if cursor_type timeout_mode ||= :iteration if timeout_mode == :cursor_lifetime raise ArgumentError, 'tailable cursors only support `timeout_mode: :iteration`' end # "Drivers MUST error if [the maxAwaitTimeMS] option is set, # timeoutMS is set to a non-zero value, and maxAwaitTimeMS is # greater than or equal to timeoutMS." max_await_time_ms = options[:max_await_time_ms] || 0 if cursor_type == :tailable_await && max_await_time_ms >= timeout_ms raise ArgumentError, ':max_await_time_ms must not be >= :timeout_ms' end else # "For non-tailable cursors, the default value of timeoutMode # is CURSOR_LIFETIME." timeout_mode ||= :cursor_lifetime end elsif timeout_mode # "Drivers MUST error if timeoutMode is set and timeoutMS is not." raise ArgumentError, ':timeout_ms must be set if :timeout_mode is set' end if timeout_mode == :iteration && respond_to?(:write?) && write? raise ArgumentError, 'timeout_mode=:iteration is not supported for aggregation pipelines with $out or $merge' end # set it as an instance variable, rather than updating the options, # because if the cursor type changes (e.g. via #configure()), the new # View instance must be able to select a different default timeout_mode # if no timeout_mode was set initially. @timeout_mode = timeout_mode end # rubocop:enable Metrics end end mongo-ruby-driver-2.21.3/lib/mongo/database.rb000066400000000000000000000530471505113246500211570ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'mongo/database/view' module Mongo # Represents a database on the db server and operations that can execute on # it at this level. # # @since 2.0.0 class Database extend Forwardable include Retryable # The admin database name. # # @since 2.0.0 ADMIN = 'admin'.freeze # The "collection" that database commands operate against. # # @since 2.0.0 COMMAND = '$cmd'.freeze # The default database options. # # @since 2.0.0 DEFAULT_OPTIONS = Options::Redacted.new(:database => ADMIN).freeze # Database name field constant. # # @since 2.1.0 # @deprecated NAME = 'name'.freeze # Databases constant. # # @since 2.1.0 DATABASES = 'databases'.freeze # The name of the collection that holds all the collection names. # # @since 2.0.0 NAMESPACES = 'system.namespaces'.freeze # @return [ Client ] client The database client. attr_reader :client # @return [ String ] name The name of the database. attr_reader :name # @return [ Hash ] options The options. attr_reader :options # Get cluster, read preference, and write concern from client. 
def_delegators :@client, :cluster, :read_preference, :server_selector, :read_concern, :write_concern, :encrypted_fields_map # @return [ Mongo::Server ] Get the primary server from the cluster. def_delegators :cluster, :next_primary # Check equality of the database object against another. Will simply check # if the names are the same. # # @example Check database equality. # database == other # # @param [ Object ] other The object to check against. # # @return [ true, false ] If the objects are equal. # # @since 2.0.0 def ==(other) return false unless other.is_a?(Database) name == other.name end # Get a collection in this database by the provided name. # # @example Get a collection. # database[:users] # # @param [ String, Symbol ] collection_name The name of the collection. # @param [ Hash ] options The options to the collection. # # @return [ Mongo::Collection ] The collection object. # # @since 2.0.0 def [](collection_name, options = {}) if options[:server_api] raise ArgumentError, 'The :server_api option cannot be specified for collection objects. It can only be specified on Client level' end Collection.new(self, collection_name, options) end alias_method :collection, :[] # Get all the names of the non-system collections in the database. # # @note The set of returned collection names depends on the version of # MongoDB server that fulfills the request. # # @param [ Hash ] options # # @option options [ Hash ] :filter A filter on the collections returned. # @option options [ true, false ] :authorized_collections A flag, when # set to true and used with nameOnly: true, that allows a user without the # required privilege to run the command when access control is enforced # @option options [ Object ] :comment A user-provided # comment to attach to this command. # @option options [ Integer ] :timeout_ms The operation timeout in milliseconds. # Must be a non-negative integer. An explicit value of 0 means infinite. # The default value is unset which means the value is inherited from # the database or the client. # # See https://mongodb.com/docs/manual/reference/command/listCollections/ # for more information and usage. # # @return [ Array ] Names of the collections. # # @since 2.0.0 def collection_names(options = {}) View.new(self, options).collection_names(options) end # Get info on all the non-system collections in the database. # # @note The set of collections returned, and the schema of the # information hash per collection, depends on the MongoDB server # version that fulfills the request. # # @param [ Hash ] options # # @option options [ Hash ] :filter A filter on the collections returned. # @option options [ true, false ] :name_only Indicates whether command # should return just collection/view names and type or return both the # name and other information # @option options [ true, false ] :authorized_collections A flag, when # set to true and used with nameOnly: true, that allows a user without the # required privilege to run the command when access control is enforced. # @option options [ Object ] :comment A user-provided # comment to attach to this command. # @option options [ Integer ] :timeout_ms The operation timeout in milliseconds. # Must be a non-negative integer. An explicit value of 0 means infinite. # The default value is unset which means the value is inherited from # the database or the client. # # See https://mongodb.com/docs/manual/reference/command/listCollections/ # for more information and usage. 
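    # @example A usage sketch (the filter value is illustrative):
    #   database.list_collections(filter: { name: 'users' }, name_only: true)
    #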
    #
    # @return [ Array<Hash> ] Array of information hashes, one for each
    #   collection in the database.
    #
    # @since 2.0.5
    def list_collections(options = {})
      View.new(self, options).list_collections(options)
    end

    # Get all the non-system collections that belong to this database.
    #
    # @note The set of returned collections depends on the version of
    #   MongoDB server that fulfills the request.
    #
    # @param [ Hash ] options
    #
    # @option options [ Hash ] :filter A filter on the collections returned.
    # @option options [ true, false ] :authorized_collections A flag, when
    #   set to true and used with name_only: true, that allows a user without the
    #   required privilege to run the command when access control is enforced.
    # @option options [ Object ] :comment A user-provided
    #   comment to attach to this command.
    # @option options [ Integer ] :timeout_ms The operation timeout in milliseconds.
    #   Must be a non-negative integer. An explicit value of 0 means infinite.
    #   The default value is unset which means the value is inherited from
    #   the database or the client.
    #
    # See https://mongodb.com/docs/manual/reference/command/listCollections/
    # for more information and usage.
    #
    # @return [ Array<Collection> ] The collections.
    #
    # @since 2.0.0
    def collections(options = {})
      collection_names(options).map { |name| collection(name) }
    end

    # Execute a command on the database.
    #
    # @example Execute a command.
    #   database.command(:hello => 1)
    #
    # @param [ Hash ] operation The command to execute.
    # @param [ Hash ] opts The command options.
    #
    # @option opts :read [ Hash ] The read preference for this command.
    # @option opts :session [ Session ] The session to use for this command.
    # @option options [ Integer ] :timeout_ms The operation timeout in milliseconds.
    #   Must be a non-negative integer. An explicit value of 0 means infinite.
    #   The default value is unset which means the value is inherited from
    #   the database or the client.
    # @option opts :execution_options [ Hash ] Options to pass to the code that
    #   executes this command. This is an internal option and is subject to
    #   change.
    #   - :deserialize_as_bson [ Boolean ] Whether to deserialize the response
    #     to this command using BSON types instead of native Ruby types wherever
    #     possible.
    #
    # @return [ Mongo::Operation::Result ] The result of the command execution.
    def command(operation, opts = {})
      opts = opts.dup
      execution_opts = opts.delete(:execution_options) || {}

      txn_read_pref = if opts[:session] && opts[:session].in_transaction?
        opts[:session].txn_read_preference
      else
        nil
      end
      txn_read_pref ||= opts[:read] || ServerSelector::PRIMARY
      Lint.validate_underscore_read_preference(txn_read_pref)
      selector = ServerSelector.get(txn_read_pref)

      client.with_session(opts) do |session|
        server = selector.select_server(cluster, nil, session)
        op = Operation::Command.new(
          :selector => operation,
          :db_name => name,
          :read => selector,
          :session => session
        )

        op.execute(server,
          context: Operation::Context.new(
            client: client,
            session: session,
            operation_timeouts: operation_timeouts(opts)
          ),
          options: execution_opts)
      end
    end

    # Execute a read command on the database, retrying the read if necessary.
    #
    # @param [ Hash ] operation The command to execute.
    # @param [ Hash ] opts The command options.
    #
    # @option opts :read [ Hash ] The read preference for this command.
    # @option opts :session [ Session ] The session to use for this command.
    # @option opts [ Object ] :comment A user-provided
    #   comment to attach to this command.
    # @option options [ Integer ] :timeout_ms The operation timeout in milliseconds.
# Must be a non-negative integer. An explicit value of 0 means infinite. # The default value is unset which means the value is inherited from # the database or the client. # # @return [ Hash ] The result of the command execution. # @api private def read_command(operation, opts = {}) txn_read_pref = if opts[:session] && opts[:session].in_transaction? opts[:session].txn_read_preference else nil end txn_read_pref ||= opts[:read] || ServerSelector::PRIMARY Lint.validate_underscore_read_preference(txn_read_pref) preference = ServerSelector.get(txn_read_pref) client.with_session(opts) do |session| context = Operation::Context.new( client: client, session: session, operation_timeouts: operation_timeouts(opts) ) read_with_retry(session, preference, context) do |server| Operation::Command.new( selector: operation.dup, db_name: name, read: preference, session: session, comment: opts[:comment], ).execute(server, context: context) end end end # Drop the database and all its associated information. # # @example Drop the database. # database.drop # # @param [ Hash ] options The options for the operation. # # @option options [ Session ] :session The session to use for the operation. # @option options [ Hash ] :write_concern The write concern options. # @option options [ Integer ] :timeout_ms The operation timeout in milliseconds. # Must be a non-negative integer. An explicit value of 0 means infinite. # The default value is unset which means the value is inherited from # the database or the client. # # @return [ Result ] The result of the command. # # @since 2.0.0 def drop(options = {}) operation = { :dropDatabase => 1 } client.with_session(options) do |session| write_concern = if options[:write_concern] WriteConcern.get(options[:write_concern]) else self.write_concern end Operation::DropDatabase.new({ selector: operation, db_name: name, write_concern: write_concern, session: session }).execute( next_primary(nil, session), context: Operation::Context.new( client: client, session: session, operation_timeouts: operation_timeouts(options) ) ) end end # Instantiate a new database object. # # @example Instantiate the database. # Mongo::Database.new(client, :test) # # @param [ Mongo::Client ] client The driver client. # @param [ String, Symbol ] name The name of the database. # @param [ Hash ] options The options. # @option options [ Integer ] :timeout_ms The operation timeout in milliseconds. # Must be a non-negative integer. An explicit value of 0 means infinite. # The default value is unset which means the value is inherited from # the client. # # @raise [ Mongo::Database::InvalidName ] If the name is nil. # # @since 2.0.0 def initialize(client, name, options = {}) raise Error::InvalidDatabaseName.new unless name if Lint.enabled? && !(name.is_a?(String) || name.is_a?(Symbol)) raise "Database name must be a string or a symbol: #{name}" end @client = client @name = name.to_s.freeze @options = options.freeze end # Get a pretty printed string inspection for the database. # # @example Inspect the database. # database.inspect # # @return [ String ] The database inspection. # # @since 2.0.0 def inspect "#" end # Get the Grid "filesystem" for this database. # # @param [ Hash ] options The GridFS options. # # @option options [ String ] :bucket_name The prefix for the files and chunks # collections. # @option options [ Integer ] :chunk_size Override the default chunk # size. # @option options [ String ] :fs_name The prefix for the files and chunks # collections. # @option options [ String ] :read The read preference. 
# @option options [ Session ] :session The session to use. # @option options [ Hash ] :write Deprecated. Equivalent to :write_concern # option. # @option options [ Hash ] :write_concern The write concern options. # Can be :w => Integer|String, :fsync => Boolean, :j => Boolean. # # @return [ Grid::FSBucket ] The GridFS for the database. # # @since 2.0.0 def fs(options = {}) Grid::FSBucket.new(self, options) end # Get the user view for this database. # # @example Get the user view. # database.users # # @return [ View::User ] The user view. # # @since 2.0.0 def users Auth::User::View.new(self) end # Perform an aggregation on the database. # # @example Perform an aggregation. # database.aggregate([ { "$listLocalSessions" => {} } ]) # # @param [ Array ] pipeline The aggregation pipeline. # @param [ Hash ] options The aggregation options. # # @option options [ true, false ] :allow_disk_use Set to true if disk # usage is allowed during the aggregation. # @option options [ Integer ] :batch_size The number of documents to return # per batch. # @option options [ true, false ] :bypass_document_validation Whether or # not to skip document level validation. # @option options [ Hash ] :collation The collation to use. # @option options [ Object ] :comment A user-provided # comment to attach to this command. # @option options [ Integer ] :max_time_ms The maximum amount of time to # allow the query to run, in milliseconds. This option is deprecated, use # :timeout_ms instead. # @option options [ Integer ] :timeout_ms The operation timeout in milliseconds. # Must be a non-negative integer. An explicit value of 0 means infinite. # The default value is unset which means the value is inherited from # the database or the client. # @option options [ String ] :hint The index to use for the aggregation. # @option options [ Session ] :session The session to use. # # @return [ Collection::View::Aggregation ] The aggregation object. # # @since 2.10.0 def aggregate(pipeline, options = {}) View.new(self, options).aggregate(pipeline, options) end # As of version 3.6 of the MongoDB server, a ``$changeStream`` pipeline stage is supported # in the aggregation framework. As of version 4.0, this stage allows users to request that # notifications are sent for all changes that occur in the client's database. # # @example Get change notifications for a given database. # database.watch([{ '$match' => { operationType: { '$in' => ['insert', 'replace'] } } }]) # # @param [ Array ] pipeline Optional additional filter operators. # @param [ Hash ] options The change stream options. # # @option options [ String ] :full_document Allowed values: nil, 'default', # 'updateLookup', 'whenAvailable', 'required'. # # The default is to not send a value (i.e. nil), which is equivalent to # 'default'. By default, the change notification for partial updates will # include a delta describing the changes to the document. # # When set to 'updateLookup', the change notification for partial updates # will include both a delta describing the changes to the document as well # as a copy of the entire document that was changed from some time after # the change occurred. # # When set to 'whenAvailable', configures the change stream to return the # post-image of the modified document for replace and update change events # if the post-image for this event is available. # # When set to 'required', the same behavior as 'whenAvailable' except that # an error is raised if the post-image is not available.
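      # (Editor's illustrative aside, not part of the original docs: a hedged
      # sketch of requesting post-images via the option above; assumes a
      # server/collection configuration where post-images are available.)
      #
      #   database.watch([], full_document: 'whenAvailable').each do |change|
      #     puts change['fullDocument']
      #   end
      #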
# @option options [ String ] :full_document_before_change Allowed values: nil, # 'whenAvailable', 'required', 'off'. # # The default is to not send a value (i.e. nil), which is equivalent to 'off'. # # When set to 'whenAvailable', configures the change stream to return the # pre-image of the modified document for replace, update, and delete change # events if it is available. # # When set to 'required', the same behavior as 'whenAvailable' except that # an error is raised if the pre-image is not available. # @option options [ BSON::Document, Hash ] :resume_after Specifies the logical starting point # for the new change stream. # @option options [ Integer ] :max_await_time_ms The maximum amount of time for the server to # wait on new documents to satisfy a change stream query. # @option options [ Integer ] :batch_size The number of documents to return per batch. # @option options [ BSON::Document, Hash ] :collation The collation to use. # @option options [ Session ] :session The session to use. # @option options [ BSON::Timestamp ] :start_at_operation_time Only return # changes that occurred after the specified timestamp. Any command run # against the server will return a cluster time that can be used here. # Only recognized by server versions 4.0+. # @option options [ Object ] :comment A user-provided # comment to attach to this command. # @option options [ Boolean ] :show_expanded_events Enables the server to # send the 'expanded' list of change stream events. The list of additional # events included with this flag set are: createIndexes, dropIndexes, # modify, create, shardCollection, reshardCollection, # refineCollectionShardKey. # # @note A change stream only allows 'majority' read concern. # @note This helper method is preferable to running a raw aggregation with a $changeStream # stage, for the purpose of supporting resumability. # # @return [ ChangeStream ] The change stream object. # # @since 2.6.0 def watch(pipeline = [], options = {}) view_options = options.dup view_options[:cursor_type] = :tailable_await if options[:max_await_time_ms] Mongo::Collection::View::ChangeStream.new( Mongo::Collection::View.new(collection("#{COMMAND}.aggregate"), {}, view_options), pipeline, Mongo::Collection::View::ChangeStream::DATABASE, options) end # Create a database for the provided client, for use when we don't want the # client's original database instance to be the same. # # @api private # # @example Create a database for the client. # Database.create(client) # # @param [ Client ] client The client to create on. # # @return [ Database ] The database. # # @since 2.0.0 def self.create(client) database = Database.new(client, client.options[:database], client.options) client.instance_variable_set(:@database, database) end # @return [ Integer | nil ] Operation timeout that is for this database or # for the corresponding client. # # @api private def timeout_ms options[:timeout_ms] || client.timeout_ms end # @return [ Hash ] timeout_ms value set on the operation level (if any), # and/or timeout_ms that is set on collection/database/client level (if any). # # @api private def operation_timeouts(opts) # TODO: We should re-evaluate if we need two timeouts separately. {}.tap do |result| if opts[:timeout_ms].nil? 
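        # (Editorial note.) No per-operation :timeout_ms was supplied, so the
        # value inherited from the database (or, transitively, the client) is
        # reported under :inherited_timeout_ms; otherwise the explicit option
        # is consumed from opts and reported as :operation_timeout_ms.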
result[:inherited_timeout_ms] = timeout_ms else result[:operation_timeout_ms] = opts.delete(:timeout_ms) end end end end end mongo-ruby-driver-2.21.3/lib/mongo/database/000077500000000000000000000000001505113246500206215ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/lib/mongo/database/view.rb000066400000000000000000000301011505113246500221130ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'mongo/cursor/nontailable' module Mongo class Database # A class representing a view of a database. # # @since 2.0.0 class View extend Forwardable include Enumerable include Retryable include Mongo::CursorHost include Cursor::NonTailable def_delegators :@database, :cluster, :read_preference, :client # @api private def_delegators :@database, :server_selector, :read_concern, :write_concern def_delegators :cluster, :next_primary # @return [ Integer ] batch_size The size of the batch of results # when sending the listCollections command. attr_reader :batch_size # @return [ Integer ] limit The limit when sending a command. attr_reader :limit # @return [ Collection ] collection The command collection. attr_reader :collection # Get all the names of the non-system collections in the database. # # @note The set of returned collection names depends on the version of # MongoDB server that fulfills the request. # # @param [ Hash ] options Options for the listCollections command. # # @option options [ Integer ] :batch_size The batch size for results # returned from the listCollections command. # @option options [ Hash ] :filter A filter on the collections returned. # @option options [ true, false ] :authorized_collections A flag, when # set to true, that allows a user without the required privilege # to run the command when access control is enforced. # @option options [ Object ] :comment A user-provided # comment to attach to this command. # @option options [ Integer ] :timeout_ms The operation timeout in milliseconds. # Must be a non-negative integer. An explicit value of 0 means infinite. # The default value is unset which means the value is inherited from # the database or the client. # # See https://mongodb.com/docs/manual/reference/command/listCollections/ # for more information and usage. # @option options [ Session ] :session The session to use. # # @return [ Array ] The names of all non-system collections. # # @since 2.0.0 def collection_names(options = {}) @batch_size = options[:batch_size] session = client.get_session(options) context = Operation::Context.new( client: client, session: session, operation_timeouts: operation_timeouts(options) ) cursor = read_with_retry_cursor(session, ServerSelector.primary, self, context: context) do |server| send_initial_query(server, session, context, options.merge(name_only: true)) end cursor.map do |info| if cursor.initial_result.connection_description.features.list_collections_enabled? 
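            # (Editorial note.) Servers that support the listCollections
            # command return bare collection names here; older 2.6-style
            # servers return names prefixed with "<database>.", which the
            # else branch below strips off.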
info['name'] else (info['name'] && info['name'].sub("#{@database.name}.", '')) end end.reject do |name| name.start_with?('system.') || name.include?('$') end end # Get info on all the collections in the database. # # @note The set of collections returned, and the schema of the # information hash per collection, depends on the MongoDB server # version that fulfills the request. # # @example Get info on each collection. # database.list_collections # # @param [ Hash ] options # # @option options [ Hash ] :filter A filter on the collections returned. # @option options [ true, false ] :name_only Indicates whether command # should return just collection/view names and type or return both the # name and other information. # @option options [ true, false ] :authorized_collections A flag, when # set to true and used with nameOnly: true, that allows a user without the # required privilege to run the command when access control is enforced. # # See https://mongodb.com/docs/manual/reference/command/listCollections/ # for more information and usage. # @option options [ Session ] :session The session to use. # @option options [ Boolean ] :deserialize_as_bson Whether to deserialize # this message using BSON types instead of native Ruby types wherever # possible. # # @return [ Array ] Info for each collection in the database. # # @since 2.0.5 def list_collections(options = {}) session = client.get_session(options) collections_info(session, ServerSelector.primary, options) end # Create the new database view. # # @example Create the new database view. # Database::View.new(database) # # @param [ Database ] database The database. # @param [ Hash ] options The options to configure the view with. # # @option options [ :cursor_lifetime | :iteration ] :timeout_mode How to interpret # :timeout_ms (whether it applies to the lifetime of the cursor, or per # iteration). # @option options [ Integer ] :timeout_ms The operation timeout in milliseconds. # Must be a non-negative integer. An explicit value of 0 means infinite. # The default value is unset which means the value is inherited from # the database or the client. # # @since 2.0.0 def initialize(database, options = {}) @database = database @operation_timeout_ms = options.delete(:timeout_ms) validate_timeout_mode!(options) @batch_size = nil @limit = nil @collection = @database[Database::COMMAND] end # @api private attr_reader :database # @return [ Integer | nil ] The timeout_ms value that was passed as an # option to the view. # # @api private attr_reader :operation_timeout_ms # Execute an aggregation on the database view. # # @example Aggregate documents. # view.aggregate([ # { "$listLocalSessions" => {} } # ]) # # @param [ Array ] pipeline The aggregation pipeline. # @param [ Hash ] options The aggregation options. # # @return [ Collection::View::Aggregation ] The aggregation object. # # @since 2.10.0 # @api private def aggregate(pipeline, options = {}) Collection::View::Aggregation.new(self, pipeline, options) end # The timeout_ms value to use for this operation; either specified as an # option to the view, or inherited from the database. # # @return [ Integer | nil ] the timeout_ms for this operation def timeout_ms operation_timeout_ms || database.timeout_ms end # @return [ Hash ] timeout_ms value set on the operation level (if any).
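      # (Editor's illustrative aside, not part of the original docs: a hedged
      # sketch of the precedence implemented below; `database` is assumed to
      # be a Mongo::Database whose client sets timeout_ms: 10_000.)
      #
      #   view = Mongo::Database::View.new(database, timeout_ms: 5_000)
      #   view.timeout_ms            # => 5000 (view option wins over database/client)
      #   view.operation_timeouts({}) # => { operation_timeout_ms: 5000 }
      #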
# # @api private def operation_timeouts(opts = {}) {}.tap do |result| if opts[:timeout_ms] || operation_timeout_ms result[:operation_timeout_ms] = opts.delete(:timeout_ms) || operation_timeout_ms else result[:inherited_timeout_ms] = database.timeout_ms end end end private def collections_info(session, server_selector, options = {}, &block) description = nil context = Operation::Context.new( client: client, session: session, operation_timeouts: operation_timeouts(options) ) cursor = read_with_retry_cursor(session, server_selector, self, context: context) do |server| # TODO take description from the connection used to send the query # once https://jira.mongodb.org/browse/RUBY-1601 is fixed. description = server.description send_initial_query(server, session, context, options) end # On 3.0+ servers, we get just the collection names. # On 2.6 server, we get collection names prefixed with the database # name. We need to filter system collections out here because # in the caller we don't know which server version executed the # command and thus what the proper filtering logic should be # (it is valid for collection names to have dots, thus filtering out # collections named system.* here for 2.6 servers would actually # filter out collections in the system database). if description.server_version_gte?('3.0') cursor.reject do |doc| doc['name'].start_with?('system.') || doc['name'].include?('$') end else cursor.reject do |doc| doc['name'].start_with?("#{database.name}.system") || doc['name'].include?('$') end end end def collections_info_spec(session, options = {}) { selector: { listCollections: 1, cursor: batch_size ? { batchSize: batch_size } : {} }, db_name: @database.name, session: session }.tap do |spec| spec[:selector][:nameOnly] = true if options[:name_only] spec[:selector][:filter] = options[:filter] if options[:filter] spec[:selector][:authorizedCollections] = true if options[:authorized_collections] spec[:comment] = options[:comment] if options[:comment] end end def initial_query_op(session, options = {}) Operation::CollectionsInfo.new(collections_info_spec(session, options)) end # Sends command that obtains information about the database. # # This command returns a cursor, so there could be additional commands, # therefore this method is called send *initial* command. # # @param [ Server ] server Server to send the query to. # @param [ Session ] session Session that should be used to send the query. # @param [ Hash ] options # @option options [ Hash | nil ] :filter A query expression to filter # the list of collections. # @option options [ true | false | nil ] :name_only A flag to indicate # whether the command should return just the collection/view names # and type or return both the name and other information. # @option options [ true | false | nil ] :authorized_collections A flag, # when set to true and used with name_only: true, that allows a user # without the required privilege (i.e. listCollections # action on the database) to run the command when access control # is enforced. # @option options [ Object | nil ] :comment A user-provided comment to attach # to this command. # @option options [ true | false | nil ] :deserialize_as_bson Whether the # query results should be deserialized to BSON types, or to Ruby # types (where possible). # # @return [ Operation::Result ] Result of the query. 
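      # (Editorial note on the implementation that follows.) In load-balanced
      # topologies the operation is executed on an explicitly checked-out
      # connection, pinning the resulting cursor to one backend; otherwise the
      # operation is simply executed against the selected server.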
def send_initial_query(server, session, context, options = {}) opts = options.dup execution_opts = {} if opts.key?(:deserialize_as_bson) execution_opts[:deserialize_as_bson] = opts.delete(:deserialize_as_bson) end if server.load_balancer? connection = server.pool.check_out(context: context) initial_query_op(session, opts).execute_with_connection( connection, context: context, options: execution_opts ) else initial_query_op(session, opts).execute( server, context: context, options: execution_opts ) end end end end end mongo-ruby-driver-2.21.3/lib/mongo/dbref.rb000066400000000000000000000012411505113246500204620ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2015-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the 'License'); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an 'AS IS' BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo DBRef = BSON::DBRef end mongo-ruby-driver-2.21.3/lib/mongo/distinguishing_semaphore.rb000066400000000000000000000030701505113246500244750ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo # This is a semaphore that distinguishes waits ending due to the timeout # being reached from waits ending due to the semaphore being signaled. # # @api private class DistinguishingSemaphore def initialize @lock = Mutex.new @cv = ::ConditionVariable.new @queue = [] end # Waits for the semaphore to be signaled up to timeout seconds. # If semaphore is not signaled, returns after timeout seconds. # # @return [ true | false ] true if semaphore was signaled, false if # timeout was reached. def wait(timeout = nil) @lock.synchronize do @cv.wait(@lock, timeout) (!@queue.empty?).tap do @queue.clear end end end def broadcast @lock.synchronize do @queue.push(true) @cv.broadcast end end def signal @lock.synchronize do @queue.push(true) @cv.signal end end end end mongo-ruby-driver-2.21.3/lib/mongo/error.rb000066400000000000000000000166341505113246500205450ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
# See the License for the specific language governing permissions and # limitations under the License. require 'mongo/error/notable' require 'mongo/error/labelable' module Mongo # Base error class for all Mongo related errors. # # @since 2.0.0 class Error < StandardError include Notable include Labelable # The error code field. # # @since 2.0.0 CODE = 'code'.freeze # An error field, MongoDB < 2.6 # # @since 2.0.0 # @deprecated ERR = '$err'.freeze # An error field, MongoDB < 2.4 # # @since 2.0.0 # @deprecated ERROR = 'err'.freeze # The standard error message field, MongoDB 3.0+ # # @since 2.0.0 # @deprecated ERRMSG = 'errmsg'.freeze # The constant for the writeErrors array. # # @since 2.0.0 # @deprecated WRITE_ERRORS = 'writeErrors'.freeze # The constant for a write concern error. # # @since 2.0.0 # @deprecated WRITE_CONCERN_ERROR = 'writeConcernError'.freeze # The constant for write concern errors. # # @since 2.1.0 # @deprecated WRITE_CONCERN_ERRORS = 'writeConcernErrors'.freeze # Constant for an unknown error. # # @since 2.0.0 UNKNOWN_ERROR = 8.freeze # Constant for a bad value error. # # @since 2.0.0 BAD_VALUE = 2.freeze # Constant for a Cursor not found error. # # @since 2.2.3 CURSOR_NOT_FOUND = 'Cursor not found.' # Can the change stream on which this error occurred be resumed, # provided the operation that triggered this error was a getMore? # # @example Is the error resumable for the change stream? # error.change_stream_resumable? # # @return [ true, false ] Whether the error is resumable. # # @since 2.6.0 def change_stream_resumable? false end # Error label describing commitTransaction errors that may or may not occur again if a commit is # manually retried by the user. # # @since 2.6.0 # @deprecated UNKNOWN_TRANSACTION_COMMIT_RESULT_LABEL = 'UnknownTransactionCommitResult'.freeze # Error label describing errors that will likely not occur if a transaction is manually retried # from the start. # # @since 2.6.0 # @deprecated TRANSIENT_TRANSACTION_ERROR_LABEL = 'TransientTransactionError'.freeze def initialize(msg = nil) super @write_concern_error_labels = [] end # Does the write concern error have the given label? # # @param [ String ] label The label to check for the presence of. # # @return [ Boolean ] Whether the write concern error has the given label. def write_concern_error_label?(label) @write_concern_error_labels.include?(label) end # The set of error labels associated with the write concern error. # # @return [ Array ] The list of error labels. 
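    # (Editor's illustrative aside, not part of the original docs: a hedged
    # sketch of inspecting these labels; `collection` and `doc` are assumed
    # to exist in the caller's scope.)
    #
    # @example Check a write concern error label after a failed write.
    #   begin
    #     collection.insert_one(doc)
    #   rescue Mongo::Error => e
    #     retry if e.write_concern_error_label?('RetryableWriteError')
    #   end
    #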
def write_concern_error_labels @write_concern_error_labels.dup end end end require 'mongo/error/auth_error' require 'mongo/error/bad_load_balancer_target' require 'mongo/error/sdam_error_detection' require 'mongo/error/parser' require 'mongo/error/write_retryable' require 'mongo/error/change_stream_resumable' require 'mongo/error/bulk_write_error' require 'mongo/error/client_closed' require 'mongo/error/closed_stream' require 'mongo/error/connection_check_out_timeout' require 'mongo/error/connection_perished' require 'mongo/error/connection_unavailable' require 'mongo/error/credential_check_error' require 'mongo/error/crypt_error' require 'mongo/error/extra_file_chunk' require 'mongo/error/file_not_found' require 'mongo/error/handshake_error' require 'mongo/error/invalid_address' require 'mongo/error/invalid_bulk_operation' require 'mongo/error/invalid_bulk_operation_type' require 'mongo/error/invalid_collection_name' require 'mongo/error/invalid_config_option' require 'mongo/error/invalid_cursor_operation' require 'mongo/error/invalid_database_name' require 'mongo/error/invalid_document' require 'mongo/error/invalid_file' require 'mongo/error/invalid_file_revision' require 'mongo/error/invalid_max_connecting' require 'mongo/error/invalid_min_pool_size' require 'mongo/error/invalid_read_option' require 'mongo/error/invalid_application_name' require 'mongo/error/invalid_nonce' require 'mongo/error/invalid_read_concern' require 'mongo/error/invalid_replacement_document' require 'mongo/error/invalid_server_auth_response' # Subclass of InvalidServerAuthResponse require 'mongo/error/invalid_server_auth_host' require 'mongo/error/invalid_server_preference' require 'mongo/error/invalid_session' require 'mongo/error/invalid_signature' require 'mongo/error/invalid_transaction_operation' require 'mongo/error/invalid_txt_record' require 'mongo/error/invalid_update_document' require 'mongo/error/invalid_uri' require 'mongo/error/invalid_write_concern' require 'mongo/error/insufficient_iteration_count' require 'mongo/error/internal_driver_error' require 'mongo/error/kms_error' require 'mongo/error/lint_error' require 'mongo/error/max_bson_size' require 'mongo/error/max_message_size' require 'mongo/error/mismatched_domain' require 'mongo/error/mongocryptd_spawn_error' require 'mongo/error/multi_index_drop' require 'mongo/error/need_primary_server' require 'mongo/error/no_service_connection_available' require 'mongo/error/no_server_available' require 'mongo/error/no_srv_records' require 'mongo/error/session_ended' require 'mongo/error/sessions_not_supported' require 'mongo/error/session_not_materialized' require 'mongo/error/snapshot_session_invalid_server_version' require 'mongo/error/snapshot_session_transaction_prohibited' require 'mongo/error/operation_failure' require 'mongo/error/pool_error' require 'mongo/error/pool_closed_error' require 'mongo/error/pool_paused_error' require 'mongo/error/raise_original_error' require 'mongo/error/server_certificate_revoked' require 'mongo/error/socket_error' require 'mongo/error/pool_cleared_error' require 'mongo/error/socket_timeout_error' require 'mongo/error/failed_string_prep_validation' require 'mongo/error/unchangeable_collection_option' require 'mongo/error/unexpected_chunk_length' require 'mongo/error/unexpected_response' require 'mongo/error/missing_connection' require 'mongo/error/missing_file_chunk' require 'mongo/error/missing_password' require 'mongo/error/missing_resume_token' require 'mongo/error/missing_scram_server_signature' require 
'mongo/error/missing_service_id' require 'mongo/error/server_api_conflict' require 'mongo/error/server_api_not_supported' require 'mongo/error/server_not_usable' require 'mongo/error/server_timeout_error' require 'mongo/error/transactions_not_supported' require 'mongo/error/timeout_error' require 'mongo/error/unknown_payload_type' require 'mongo/error/unmet_dependency' require 'mongo/error/unsupported_option' require 'mongo/error/unsupported_array_filters' require 'mongo/error/unsupported_collation' require 'mongo/error/unsupported_features' require 'mongo/error/unsupported_message_type' mongo-ruby-driver-2.21.3/lib/mongo/error/000077500000000000000000000000001505113246500202065ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/lib/mongo/error/auth_error.rb000066400000000000000000000017161505113246500227120ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2018-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Error # Raised when authentication fails. # # Note: This class is derived from RuntimeError for # backwards compatibility reasons. It is subject to # change in future major versions of the driver. # # @since 2.11.0 class AuthError < RuntimeError include Notable end end end mongo-ruby-driver-2.21.3/lib/mongo/error/bad_load_balancer_target.rb000066400000000000000000000015121505113246500254540ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2021 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Error # Raised when the driver is in load-balancing mode but a connection # is established to something other than a mongos. class BadLoadBalancerTarget < Error end end end mongo-ruby-driver-2.21.3/lib/mongo/error/bulk_write_error.rb000066400000000000000000000067041505113246500241220ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
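# (Editor's illustrative aside, not part of the original file: a hedged sketch
# of surfacing the per-write details carried by the exception's +result+
# document; `collection` is assumed to be any Mongo::Collection. The duplicate
# _id below deliberately triggers a write error.)
#
#   begin
#     collection.insert_many([{ _id: 1 }, { _id: 1 }], ordered: false)
#   rescue Mongo::Error::BulkWriteError => e
#     e.result['writeErrors'].each do |err|
#       warn "[#{err['code']}] #{err['errmsg']}"
#     end
#   end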
module Mongo class Error # Exception raised if there are write errors upon executing a bulk # operation. # # Unlike OperationFailure, BulkWriteError does not currently expose # individual error components (such as the error code). The result document # (which can be obtained using the +result+ attribute) provides detailed # error information and can be examined by the application if desired. # # @note A bulk operation that resulted in a BulkWriteError may have # written some of the documents to the database. If the bulk write # was unordered, writes may have also continued past the write that # produced a BulkWriteError. # # @since 2.0.0 class BulkWriteError < Error # @return [ BSON::Document ] result The error result. attr_reader :result # Instantiate the new exception. # # @example Instantiate the exception. # Mongo::Error::BulkWriteError.new(response) # # @param [ Hash ] result A processed response from the server # reporting results of the operation. # # @since 2.0.0 def initialize(result) @result = result # Exception constructor behaves differently for a nil argument and # for no argument. Avoid passing nil explicitly. super(*[build_message]) end private # Generates an error message when there are multiple write errors. # # @example Multiple documents fail validation # # col has validation { 'validator' => { 'x' => { '$type' => 'string' } } } # col.insert_many([{_id: 1}, {_id: 2}], ordered: false) # # Multiple errors: # [121]: Document failed validation -- # {"failingDocumentId":1,"details":{"operatorName":"$type", # "specifiedAs":{"x":{"$type":"string"}},"reason":"field was # missing"}}; # [121]: Document failed validation -- # {"failingDocumentId":2, "details":{"operatorName":"$type", # "specifiedAs":{"x":{"$type":"string"}}, "reason":"field was # missing"}} # # @return [ String ] The error message def build_message errors = @result['writeErrors'] return nil unless errors fragment = "" cut_short = false errors.first(10).each_with_index do |error, i| fragment += "; " if fragment.length > 0 fragment += "[#{error['code']}]: #{error['errmsg']}" fragment += " -- #{error['errInfo'].to_json}" if error['errInfo'] if fragment.length > 3000 cut_short = i < [9, errors.length].min break end end fragment += '...' if errors.length > 10 || cut_short if errors.length > 1 fragment = "Multiple errors: #{fragment}" end fragment end end end end mongo-ruby-driver-2.21.3/lib/mongo/error/change_stream_resumable.rb000066400000000000000000000023321505113246500253720ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2018-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Error # A module signifying the error will always cause change stream to # resume once. # # @since 2.6.0 module ChangeStreamResumable # Can the change stream on which this error occurred be resumed, # provided the operation that triggered this error was a getMore? # # @example Is the error resumable for the change stream? # error.change_stream_resumable? 
# # @return [ true, false ] Whether the error is resumable. # # @since 2.6.0 def change_stream_resumable? true end end end end mongo-ruby-driver-2.21.3/lib/mongo/error/client_closed.rb000066400000000000000000000013021505113246500233360ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2022 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Error class ClientClosed < Error end end end mongo-ruby-driver-2.21.3/lib/mongo/error/closed_stream.rb000066400000000000000000000021031505113246500233530ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Error # Raised if the Grid::FSBucket::Stream object is closed and an operation is attempted. # # @since 2.1.0 class ClosedStream < Error # Create the new exception. # # @example Create the new exception. # Mongo::Error::ClosedStream.new # # @since 2.1.0 def initialize super("The stream is closed and cannot be written to or read from.") end end end end mongo-ruby-driver-2.21.3/lib/mongo/error/connection_check_out_timeout.rb000066400000000000000000000030501505113246500264620ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2019-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Error # Exception raised when trying to check out a connection from a connection # pool, the pool is at its max size and no connections become available # within the configured wait timeout. # # @note For backwards compatibility reasons this class derives from # Timeout::Error rather than Mongo::Error. # # @since 2.9.0 class ConnectionCheckOutTimeout < ::Timeout::Error # @return [ Mongo::Address ] address The address of the server the # pool's connections connect to. # # @since 2.9.0 attr_reader :address # Instantiate the new exception. 
# # @option options [ Address ] :address # # @api private def initialize(msg, options) super(msg) @address = options[:address] unless @address raise ArgumentError, 'Address argument is required' end end end end end mongo-ruby-driver-2.21.3/lib/mongo/error/connection_perished.rb000066400000000000000000000015721505113246500245620ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Error # Exception raised when trying to perform operations on a connection that # experienced a network error. class ConnectionPerished < Error include WriteRetryable include ChangeStreamResumable end end end mongo-ruby-driver-2.21.3/lib/mongo/error/connection_unavailable.rb000066400000000000000000000015631505113246500252420ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2022 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Error # Exception raised when trying to check out a connection with a specific # global id, and the connection for that global id no longer exists in the # pool. class ConnectionUnavailable < Error end end end mongo-ruby-driver-2.21.3/lib/mongo/error/credential_check_error.rb000066400000000000000000000017511505113246500252170ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Error # Credential check for MONGODB-AWS authentication mechanism failed. # # This exception is raised when the driver attempts to verify the # credentials via STS prior to sending them to the server, and the # verification fails due to an error response from the STS. class CredentialCheckError < AuthError end end end mongo-ruby-driver-2.21.3/lib/mongo/error/crypt_error.rb000066400000000000000000000020721505113246500231060ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2020 MongoDB Inc. 
# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Error # An error related to client-side encryption. class CryptError < Mongo::Error # Create a new CryptError. # # @param [ Integer | nil ] code The optional libmongocrypt error code. # @param [ String ] message The error message. def initialize(message, code: nil) msg = message msg += " (libmongocrypt error code #{code})" if code super(msg) end end end end mongo-ruby-driver-2.21.3/lib/mongo/error/extra_file_chunk.rb000066400000000000000000000017611505113246500240520ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Error # Raised if an extra chunk is found. # # @since 2.1.0 class ExtraFileChunk < Error # Create the new exception. # # @example Create the new exception. # Mongo::Error::ExtraFileChunk.new # # @since 2.1.0 def initialize super("Extra file chunk found.") end end end end mongo-ruby-driver-2.21.3/lib/mongo/error/failed_string_prep_validation.rb000066400000000000000000000024331505113246500266070ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all module Mongo class Error # This exception is raised when stringprep validation fails, such as due to # a prohibited character being present or bidirectional data being invalid. # # @since 2.6.0 class FailedStringPrepValidation < Error # The error message describing failed bidi validation. # # @since 2.6.0 INVALID_BIDIRECTIONAL = 'Data failed bidirectional validation'.freeze # The error message describing the discovery of a prohibited character. # # @since 2.6.0 PROHIBITED_CHARACTER = 'Data contains a prohibited character.'.freeze # The error message describing that stringprep normalization can't be done on Ruby versions # below 2.2.0. # # @since 2.6.0 UNABLE_TO_NORMALIZE = 'Unable to perform normalization with Ruby versions below 2.2.0'.freeze # Create the new exception. # # @example Create the new exception. # Mongo::Error::FailedStringPrepValidation.new( # Mongo::Error::FailedStringPrepValidation::PROHIBITED_CHARACTER) # # @param [ String ] msg The error message describing how the validation failed. # # @since 2.6.0 def initialize(msg) super(msg) end end end end mongo-ruby-driver-2.21.3/lib/mongo/error/file_not_found.rb000066400000000000000000000023441505113246500235300ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc.
# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Error # Raised if a file is deleted from a GridFS bucket but is not found. # # @since 2.1.0 class FileNotFound < Error # Create the new exception. # # @example Create the new exception. # Mongo::Error::FileNotFound.new(id, :id) # # @param [ Object ] value The property value used to find the file. # @param [ String, Symbol ] property The name of the property used to find the file. # # @since 2.1.0 def initialize(value, property) super("File with #{property} '#{value}' not found.") end end end end mongo-ruby-driver-2.21.3/lib/mongo/error/handshake_error.rb000066400000000000000000000014161505113246500236740ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2018-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Error # Raised when a server handshake fails. # # @since 2.7.0 class HandshakeError < Error end end end mongo-ruby-driver-2.21.3/lib/mongo/error/insufficient_iteration_count.rb000066400000000000000000000023301505113246500265060ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Error # Exception that is raised when the server requests an iteration count # that is below the minimum required by the auth mechanism. # # @since 2.6.0 class InsufficientIterationCount < Error # Instantiate the new exception. # # @example Instantiate the exception. # Mongo::Error::InsufficientIterationCount.new(msg) # # @since 2.6.0 def initialize(msg) super(msg) end def self.message(required_count, given_count) "This auth mechanism requires an iteration count of #{required_count}, but the server only requested #{given_count}" end end end end mongo-ruby-driver-2.21.3/lib/mongo/error/internal_driver_error.rb000066400000000000000000000014221505113246500251320ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2021 MongoDB Inc.
# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Error # Raised when the driver detects an internal implementation problem. class InternalDriverError < Error end end end mongo-ruby-driver-2.21.3/lib/mongo/error/invalid_address.rb000066400000000000000000000014541505113246500236720ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2019-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Error # Raised when validation of addresses in URIs and SRV records fails. # # @since 2.11.0 class InvalidAddress < Error end end end mongo-ruby-driver-2.21.3/lib/mongo/error/invalid_application_name.rb000066400000000000000000000025011505113246500255420ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2016-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Error # This exception is raised when the metadata document sent to the server # at the time of a connection handshake is invalid. # # @since 2.4.0 class InvalidApplicationName < Error # Instantiate the new exception. # # @example Create the exception. # InvalidApplicationName.new(app_name, 128) # # @param [ String ] app_name The application name option. # @param [ Integer ] max_size The max byte size of the application name. # # @since 2.4.0 def initialize(app_name, max_size) super("The provided application name '#{app_name}' cannot exceed #{max_size} bytes.") end end end end mongo-ruby-driver-2.21.3/lib/mongo/error/invalid_bulk_operation.rb000066400000000000000000000023231505113246500252560ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Error # Exception raised if a non-existent operation type is used. # # @since 2.0.0 class InvalidBulkOperation < Error # Instantiate the new exception. # # @example Instantiate the exception. # Mongo::Error::InvalidBulkOperation.new(type, operation) # # @param [ String ] type The bulk operation type. # @param [ Hash ] operation The bulk operation. # # @since 2.0.0 def initialize(type, operation) super("Invalid document format for bulk #{type} operation: #{operation}.") end end end end mongo-ruby-driver-2.21.3/lib/mongo/error/invalid_bulk_operation_type.rb000066400000000000000000000022031505113246500263140ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Error # Exception raised if a non-existent operation type is used. # # @since 2.0.0 class InvalidBulkOperationType < Error # Instantiate the new exception. # # @example Instantiate the exception. # Mongo::Error::InvalidBulkOperationType.new(type) # # @param [ String ] type The attempted operation type. # # @since 2.0.0 def initialize(type) super("Invalid bulk operation type: #{type}.") end end end end mongo-ruby-driver-2.21.3/lib/mongo/error/invalid_collection_name.rb000066400000000000000000000022631505113246500253770ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License.
# Mongo::Error::InvalidCollectionName.new # # @since 2.0.0 def initialize super(MESSAGE) end end end end mongo-ruby-driver-2.21.3/lib/mongo/error/invalid_config_option.rb000066400000000000000000000007011505113246500250740ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all module Mongo class Error # This error is raised when a bad configuration option is attempted to be # set. class InvalidConfigOption < Error # Create the new error. # # @param [ Symbol, String ] name The attempted config option name. # # @api private def initialize(name) super("Invalid config option #{name}.") end end end end mongo-ruby-driver-2.21.3/lib/mongo/error/invalid_cursor_operation.rb000066400000000000000000000017531505113246500256440ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2019-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Error # Exception raised when an unsupported operation is attempted on a cursor. # # Examples: # - Attempting to iterate a regular cursor more than once. # - Attempting to call try_next on a caching cursor after it had been # iterated completely the first time. class InvalidCursorOperation < Error end end end mongo-ruby-driver-2.21.3/lib/mongo/error/invalid_database_name.rb000066400000000000000000000022601505113246500250050ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Error # Exception that is raised when trying to create a database with no name. # # @since 2.0.0 class InvalidDatabaseName < Error # The message is constant. # # @since 2.0.0 MESSAGE = 'nil is an invalid database name. Please provide a string or symbol.'.freeze # Instantiate the new exception. # # @example Instantiate the exception. # Mongo::Error::InvalidDatabaseName.new # # @since 2.0.0 def initialize super(MESSAGE) end end end end mongo-ruby-driver-2.21.3/lib/mongo/error/invalid_document.rb000066400000000000000000000021511505113246500240560ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License.
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Error # Exception raised if the object is not a valid document. # # @since 2.0.0 class InvalidDocument < Error # The error message. # # @since 2.0.0 MESSAGE = 'Invalid document provided.'.freeze # Instantiate the new exception. # # @example Instantiate the exception. # Mongo::Error::InvalidDocument.new # # @since 2.0.0 def initialize super(MESSAGE) end end end end mongo-ruby-driver-2.21.3/lib/mongo/error/invalid_file.rb000066400000000000000000000024171505113246500231640ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Error # Raised if the file md5 and server md5 do not match when acknowledging # GridFS writes. # # @since 2.0.0 class InvalidFile < Error # Create the new exception. # # @example Create the new exception. # Mongo::Error::InvalidFile.new(file_md5, server_md5) # # @param [ String ] client_md5 The client side file md5. # @param [ String ] server_md5 The server side file md5. # # @since 2.0.0 def initialize(client_md5, server_md5) super("File MD5 on client side is #{client_md5} but the server reported #{server_md5}.") end end end end mongo-ruby-driver-2.21.3/lib/mongo/error/invalid_file_revision.rb000066400000000000000000000023171505113246500251010ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Error # Raised if the requested file revision is not found. # # @since 2.1.0 class InvalidFileRevision < Error # Create the new exception. # # @example Create the new exception. # Mongo::Error::InvalidFileRevision.new('some-file.txt', 3) # # @param [ String ] filename The name of the file. # @param [ Integer ] revision The requested revision. 
# # @since 2.1.0 def initialize(filename, revision) super("No revision #{revision} found for file '#{filename}'.") end end end end mongo-ruby-driver-2.21.3/lib/mongo/error/invalid_max_connecting.rb000066400000000000000000000017521505113246500252420ustar00rootroot00000000000000# frozen_string_literal: true # Copyright (C) 2014-present MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Error # Exception that is raised when trying to create a client with an invalid # max_connecting option. class InvalidMaxConnecting < Error # Instantiate the new exception. def initialize(max_connecting) super("Invalid max_connecting: #{max_connecting}. Please ensure that it is greater than zero. ") end end end end mongo-ruby-driver-2.21.3/lib/mongo/error/invalid_min_pool_size.rb000066400000000000000000000022241505113246500251070ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Error # Exception that is raised when trying to create a client with an invalid # min_pool_size option. # # @since 2.4.2 class InvalidMinPoolSize < Error # Instantiate the new exception. # # @example Instantiate the exception. # Mongo::Error::InvalidMinPoolSize.new(10, 5) # # @since 2.4.2 def initialize(min, max) super("Invalid min pool size: #{min}. Please ensure that it is less than the max size: #{max}. ") end end end end mongo-ruby-driver-2.21.3/lib/mongo/error/invalid_nonce.rb000066400000000000000000000026651505113246500233540ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2015-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Error # This exception is raised when the server nonce returned does not # start with the client nonce sent to it. # # @since 2.0.0 class InvalidNonce < Error # @return [ String ] nonce The client nonce. attr_reader :nonce # @return [ String ] rnonce The server nonce. 
attr_reader :rnonce # Instantiate the new exception. # # @example Create the exception. # InvalidNonce.new(nonce, rnonce) # # @param [ String ] nonce The client nonce. # @param [ String ] rnonce The server nonce. # # @since 2.0.0 def initialize(nonce, rnonce) @nonce = nonce @rnonce = rnonce super("Expected server rnonce '#{rnonce}' to start with client nonce '#{nonce}'.") end end end end mongo-ruby-driver-2.21.3/lib/mongo/error/invalid_read_concern.rb000066400000000000000000000020171505113246500246630ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Error # Raised when an invalid read concern is provided. class InvalidReadConcern < Error # Instantiate the new exception. def initialize(msg = nil) super(msg || 'Invalid read concern option provided. ' \ 'The only valid key is :level, for which accepted values are ' \ ':local, :majority, and :snapshot') end end end end mongo-ruby-driver-2.21.3/lib/mongo/error/invalid_read_option.rb000066400000000000000000000022031505113246500245410ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2018-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Error # Exception that is raised when trying to create a client with an invalid # read option. # # @since 2.6.0 class InvalidReadOption < Error # Instantiate the new exception. # # @example Instantiate the exception. # Mongo::Error::InvalidReadOption.new({:mode => 'bogus'}, "mode bogus is not a valid read mode") # # @param [ Object ] read_option The invalid read preference value. # @param [ String ] msg The details of why the value is invalid. # # @since 2.6.0 def initialize(read_option, msg) super("Invalid read preference value: #{read_option.inspect}: #{msg}") end end end end mongo-ruby-driver-2.21.3/lib/mongo/error/invalid_replacement_document.rb000066400000000000000000000032331505113246500264370ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
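# Illustrative sketch (assumes a `collection` obtained from a Mongo::Client;
# not part of the original file): the class below flags replacement documents
# that contain atomic modifiers.
#
#   collection.replace_one({ _id: 1 }, { 'name' => 'a' })               # valid replacement
#   collection.replace_one({ _id: 1 }, { '$set' => { 'name' => 'a' } }) # invalid: '$set' is an atomic modifier
#   collection.update_one({ _id: 1 }, { '$set' => { 'name' => 'a' } })  # atomic modifiers belong in updates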
module Mongo class Error # Exception raised if the object is not a valid replacement document. class InvalidReplacementDocument < Error # The error message. # # @deprecated MESSAGE = 'Invalid replacement document provided'.freeze # Construct the error message. # # @param [ String ] key The invalid key. # # @return [ String ] The error message. # # @api private def self.message(key) message = "Invalid replacement document provided. Replacement documents " message += "must not contain atomic modifiers. The \"#{key}\" key is invalid." message end # Send and cache the warning. # # @api private def self.warn(logger, key) @warned ||= begin logger.warn(message(key)) true end end # Instantiate the new exception. # # @param [ String ] :key The invalid key. def initialize(key: nil) super(self.class.message(key)) end end end end mongo-ruby-driver-2.21.3/lib/mongo/error/invalid_server_auth_host.rb000066400000000000000000000014501505113246500256250ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Error # Raised when the server returned an invalid Host value in AWS auth. class InvalidServerAuthHost < InvalidServerAuthResponse end end end mongo-ruby-driver-2.21.3/lib/mongo/error/invalid_server_auth_response.rb000066400000000000000000000015131505113246500265060ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Error # Raised when authentication is aborted on the client because the server # responded in an unacceptable manner. class InvalidServerAuthResponse < AuthError end end end mongo-ruby-driver-2.21.3/lib/mongo/error/invalid_server_preference.rb000066400000000000000000000046661505113246500257610ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
module Mongo class Error # Raised when an invalid server preference is provided. # # @since 2.0.0 class InvalidServerPreference < Error # Error message when tags are specified for a read preference that cannot support them. # # @since 2.4.0 NO_TAG_SUPPORT = 'This read preference cannot be combined with tags.'.freeze # Error message when a max staleness is specified for a read preference that cannot support it. # # @since 2.4.0 NO_MAX_STALENESS_SUPPORT = 'max_staleness cannot be set for this read preference.'.freeze # Error message when hedge is specified for a read preference that does not support it. # # @api private NO_HEDGE_SUPPORT = 'The hedge option cannot be set for this read preference'.freeze # Error message for when the max staleness is not at least twice the heartbeat frequency. # # @since 2.4.0 # @deprecated INVALID_MAX_STALENESS = "`max_staleness` value is too small. It must be at least " + "`ServerSelector::SMALLEST_MAX_STALENESS_SECONDS` and (the cluster's heartbeat_frequency " + "setting + `Cluster::IDLE_WRITE_PERIOD_SECONDS`).".freeze # Error message when max staleness cannot be used because one or more servers has version < 3.4. # # @since 2.4.0 NO_MAX_STALENESS_WITH_LEGACY_SERVER = 'max_staleness can only be set for a cluster in which ' + 'each server is at least version 3.4.'.freeze # Instantiate the new exception. # # @example Instantiate the exception. # Mongo::Error::InvalidServerPreference.new # # @param [ String ] message The error message. # # @since 2.0.0 def initialize(message) super(message) end end end end mongo-ruby-driver-2.21.3/lib/mongo/error/invalid_session.rb000066400000000000000000000021261505113246500237250ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2017-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Error # This exception is raised when a session is attempted to be used and it # is invalid. # # @since 2.5.0 class InvalidSession < Error # Create the new exception. # # @example Create the new exception. # InvalidSession.new(message) # # @param [ String ] message The error message. # # @since 2.5.0 def initialize(message) super(message) end end end end mongo-ruby-driver-2.21.3/lib/mongo/error/invalid_signature.rb000066400000000000000000000031321505113246500242410ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2015-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
module Mongo class Error # This exception is raised when the server verifier does not match the # expected signature on the client. # # @since 2.0.0 class InvalidSignature < Error # @return [ String ] verifier The server verifier string. attr_reader :verifier # @return [ String ] server_signature The expected server signature. attr_reader :server_signature # Create the new exception. # # @example Create the new exception. # InvalidSignature.new(verifier, server_signature) # # @param [ String ] verifier The verifier returned from the server. # @param [ String ] server_signature The expected value from the # server. # # @since 2.0.0 def initialize(verifier, server_signature) @verifier = verifier @server_signature = server_signature super("Expected server verifier '#{verifier}' to match '#{server_signature}'.") end end end end mongo-ruby-driver-2.21.3/lib/mongo/error/invalid_transaction_operation.rb000066400000000000000000000055441505113246500266550ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Error # Exception raised if an invalid operation is attempted as part of a transaction. # # @since 2.6.0 class InvalidTransactionOperation < Error # The error message for when a user attempts to commit or abort a transaction when none is in # progress. # # @since 2.6.0 NO_TRANSACTION_STARTED = 'no transaction started'.freeze # The error message for when a user attempts to start a transaction when one is already in # progress. # # @since 2.6.0 TRANSACTION_ALREADY_IN_PROGRESS = 'transaction already in progress'.freeze # The error message for when a transaction read operation uses a non-primary read preference. # # @since 2.6.0 INVALID_READ_PREFERENCE = 'read preference in a transaction must be primary'.freeze # The error message for when a transaction is started with an unacknowledged write concern. # # @since 2.6.0 UNACKNOWLEDGED_WRITE_CONCERN = 'transactions do not support unacknowledged write concern'.freeze # Instantiate the new exception. # # @example Instantiate the exception. # Mongo::Error::InvalidTransactionOperation.new(msg) # # @since 2.6.0 def initialize(msg) super(msg) end # Create an error message for incorrectly running a transaction operation twice. # # @example Create the error message. # InvalidTransactionOperation.cannot_call_twice_msg(op) # # @param [ Symbol ] op The operation which was run twice. # # @since 2.6.0 def self.cannot_call_twice_msg(op) "cannot call #{op} twice" end # Create an error message for incorrectly running a transaction operation that cannot be run # after the previous one. # # @example Create the error message. # InvalidTransactionOperation.cannot_call_after_msg(last_op, current_op) # # @param [ Symbol ] last_op The operation which was run before. # @param [ Symbol ] current_op The operation which cannot be run. 
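# @example Illustration of the generated message (arguments assumed).
#   InvalidTransactionOperation.cannot_call_after_msg(:commitTransaction, :abortTransaction)
#   # => "Cannot call abortTransaction after calling commitTransaction"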
# # @since 2.6.0 def self.cannot_call_after_msg(last_op, current_op) "Cannot call #{current_op} after calling #{last_op}" end end end end mongo-ruby-driver-2.21.3/lib/mongo/error/invalid_txt_record.rb000066400000000000000000000017161505113246500244230ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2017-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Error # This exception is raised when the URI Parser's query returns too many # TXT records or the record specifies invalid options. # # @example Instantiate the exception. # Mongo::Error::InvalidTXTRecord.new(message) # # @since 2.5.0 class InvalidTXTRecord < Error; end end end mongo-ruby-driver-2.21.3/lib/mongo/error/invalid_update_document.rb000066400000000000000000000032421505113246500254220ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Error # Exception raised if the object is not a valid update document. # # @since 2.0.0 class InvalidUpdateDocument < Error # The error message. # # @deprecated MESSAGE = 'Invalid update document provided'.freeze # Construct the error message. # # @param [ String ] key The invalid key. # # @return [ String ] The error message. # # @api private def self.message(key) message = "Invalid update document provided. Update documents must only " message += "contain atomic modifiers. The \"#{key}\" key is invalid." message end # Send and cache the warning. # # @api private def self.warn(logger, key) @warned ||= begin logger.warn(message(key)) true end end # Instantiate the new exception. # # @param [ String ] :key The invalid key. def initialize(key: nil) super(self.class.message(key)) end end end end mongo-ruby-driver-2.21.3/lib/mongo/error/invalid_uri.rb000066400000000000000000000025161505113246500230440ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
# See the License for the specific language governing permissions and # limitations under the License. module Mongo class Error # Exception that is raised when trying to parse a URI that does not match # the specification. # # @since 2.0.0 class InvalidURI < Error # Instantiate the new exception. # # @example Instantiate the exception. # Mongo::Error::InvalidURI.new(uri, details, format) # # @since 2.0.0 def initialize(uri, details, format = nil) message = "Bad URI: #{uri}\n" + "#{details}\n" message += "MongoDB URI must be in the following format: #{format}\n" if format message += "Please see the following URL for more information: #{Mongo::URI::HELP}\n" super(message) end end end end mongo-ruby-driver-2.21.3/lib/mongo/error/invalid_write_concern.rb000066400000000000000000000023041505113246500251010ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Error # Raised when an invalid write concern is provided. # # @since 2.2.0 class InvalidWriteConcern < Error # Instantiate the new exception. # # @example Instantiate the exception. # Mongo::Error::InvalidWriteConcern.new # # @since 2.2.0 def initialize(msg = nil) super(msg || 'Invalid write concern options. If w is an Integer, it must be greater than or equal to 0. ' + 'If w is 0, it cannot be combined with a true value for fsync or j (journal).') end end end end mongo-ruby-driver-2.21.3/lib/mongo/error/kms_error.rb000066400000000000000000000020321505113246500225330ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Error # A KMS-related error during client-side encryption. class KmsError < CryptError def initialize(message, code: nil, network_error: nil) @network_error = network_error super(message, code: code) end # @return [ true, false ] whether this error was caused by a network error. def network_error? @network_error == true end end end end mongo-ruby-driver-2.21.3/lib/mongo/error/labelable.rb000066400000000000000000000037211505113246500224410ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2019-2022 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Error # A module encapsulating functionality to manage labels added to errors. # # @note Although methods of this module are part of the public API, # the fact that these methods are defined on this module and not on # the classes which include this module is not part of the public API. # # @api semipublic module Labelable # Does the error have the given label? # # @example # error.label?(label) # # @param [ String ] label The label to check if the error has. # # @return [ true, false ] Whether the error has the given label. # # @since 2.6.0 def label?(label) @labels && @labels.include?(label) end # Gets the set of labels associated with the error. # # @example # error.labels # # @return [ Array ] The set of labels. # # @since 2.7.0 def labels if @labels @labels.dup else [] end end # Adds the specified label to the error instance, if the label is not # already in the set of labels. # # @param [ String ] label The label to add. # # @api private def add_label(label) @labels ||= [] @labels << label unless label?(label) end end end end mongo-ruby-driver-2.21.3/lib/mongo/error/lint_error.rb000066400000000000000000000025251505113246500227160ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2018-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Error # Raised when the driver is used incorrectly. # # Normally the driver passes certain data to the server and lets the # server return an error if the data is invalid. This makes it possible # for the server to add functionality in the future and for older # driver versions to support such functionality transparently, but # also complicates debugging. # # Setting the environment variable MONGO_RUBY_DRIVER_LINT to 1, true # or yes will make the driver perform additional checks on data it passes # to the server, to flag failures sooner. This exception is raised on # such failures. # # @since 2.6.1 class LintError < Error end end end mongo-ruby-driver-2.21.3/lib/mongo/error/max_bson_size.rb000066400000000000000000000031561505113246500234000ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Error # Exception that is raised when trying to serialize a document that # exceeds max BSON object size. # # @since 2.0.0 class MaxBSONSize < Error # The message is constant. # # @since 2.0.0 MESSAGE = "The document exceeds maximum allowed BSON size".freeze # Instantiate the new exception. # # @example Instantiate the exception. # Mongo::Error::MaxBSONSize.new(max) # # @param [ String | Numeric ] max_size_or_msg The message to use or # the maximum size to insert into the predefined message. The # Numeric argument type is deprecated. # # @since 2.0.0 def initialize(max_size_or_msg = nil) if max_size_or_msg.is_a?(Numeric) msg = "#{MESSAGE}. The maximum allowed size is #{max_size_or_msg}" elsif max_size_or_msg msg = max_size_or_msg else msg = MESSAGE end super(msg) end end end end mongo-ruby-driver-2.21.3/lib/mongo/error/max_message_size.rb000066400000000000000000000024561505113246500240650ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Error # Exception that is raised when trying to send a message that exceeds max # message size. # # @since 2.0.0 class MaxMessageSize < Error # The message is constant. # # @since 2.0.0 MESSAGE = "Message exceeds allowed max message size.".freeze # Instantiate the new exception. # # @example Instantiate the exception. # Mongo::Error::MaxMessageSize.new(max) # # @param [ Integer ] max_size The maximum message size. # # @since 2.0.0 def initialize(max_size = nil) super(max_size ? MESSAGE + " The max is #{max_size}." : MESSAGE) end end end end mongo-ruby-driver-2.21.3/lib/mongo/error/mismatched_domain.rb000066400000000000000000000017461505113246500242100ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2017-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Error # This exception is raised when the URI Parser's DNS query returns SRV record(s) # whose parent domain does not match the hostname used for the query. # # @example Instantiate the exception. 
# Mongo::Error::MismatchedDomain.new(message) # # @since 2.5.0 class MismatchedDomain < Error; end end end mongo-ruby-driver-2.21.3/lib/mongo/error/missing_connection.rb000066400000000000000000000015571505113246500244330ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2022 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Error # Exception raised when trying to check out a connection with a specific # global id, and the connection for that global id no longer exists in the # pool. class MissingConnection < Error end end end mongo-ruby-driver-2.21.3/lib/mongo/error/missing_file_chunk.rb000066400000000000000000000030001505113246500243640ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Error # Raised if the next chunk when reading from a GridFSBucket does not have the # expected sequence number (n). # # @since 2.1.0 class MissingFileChunk < Error # Create the new exception. # # @example Create the new exception. # Mongo::Error::MissingFileChunk.new(expected_n, chunk) # # @param [ Integer ] expected_n The expected index value. # @param [ Grid::File::Chunk | Integer ] chunk The chunk read from GridFS. # # @since 2.1.0 # # @api private def initialize(expected_n, chunk) if chunk.is_a?(Integer) super("Missing chunk(s). Expected #{expected_n} chunks but got #{chunk}.") else super("Unexpected chunk in sequence. Expected next chunk to have index #{expected_n} but it has index #{chunk.n}") end end end end end mongo-ruby-driver-2.21.3/lib/mongo/error/missing_password.rb000066400000000000000000000020011505113246500241170ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2019-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Error # Raised when the operations that require a password (e.g. 
retrieving # a salted or hashed password) are attempted on a User object that was # not created with a password. # # @since 2.8.0 class MissingPassword < Error def initialize(msg = nil) super(msg || 'User was created without a password') end end end end mongo-ruby-driver-2.21.3/lib/mongo/error/missing_resume_token.rb000066400000000000000000000022421505113246500247640ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2017-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Error # Raised if a change stream document is returned without a resume token. # # @since 2.5.0 class MissingResumeToken < Error # The error message. # # @since 2.5.0 MESSAGE = 'Cannot provide resume functionality when the resume token is missing'.freeze # Create the new exception. # # @example Create the new exception. # Mongo::Error::MissingResumeToken.new # # @since 2.5.0 def initialize super(MESSAGE) end end end end mongo-ruby-driver-2.21.3/lib/mongo/error/missing_scram_server_signature.rb000066400000000000000000000017771505113246500270540ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Error # This exception is raised when the server returned +{done: true}+ in a # SCRAM conversation but did not provide a ServerSignature. class MissingScramServerSignature < Error def initialize(msg = nil) msg ||= "Server signaled completion of SCRAM conversation without providing ServerSignature" super(msg) end end end end mongo-ruby-driver-2.21.3/lib/mongo/error/missing_service_id.rb000066400000000000000000000015271505113246500244050ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2021 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Error # Raised when the driver is in load-balancing mode via the URI option # but a connection does not report a value in the serviceId field. 
class MissingServiceId < Error end end end mongo-ruby-driver-2.21.3/lib/mongo/error/mongocryptd_spawn_error.rb000066400000000000000000000014331505113246500255220ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Error # An error related to spawning mongocryptd for client-side encryption. class MongocryptdSpawnError < CryptError end end end mongo-ruby-driver-2.21.3/lib/mongo/error/multi_index_drop.rb000066400000000000000000000021131505113246500240750ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2015-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Error # Exception raised if '*' is passed to drop_one on indexes. # # @since 2.0.0 class MultiIndexDrop < Error # Instantiate the new exception. # # @example Instantiate the exception. # Mongo::Error::MultiIndexDrop.new # # @since 2.0.0 def initialize super("Passing '*' to #drop_one would cause all indexes to be dropped. Please use #drop_all") end end end end mongo-ruby-driver-2.21.3/lib/mongo/error/need_primary_server.rb000066400000000000000000000014361505113246500246030ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Error # Raised when a primary server is needed but not found. # # @since 2.0.0 class NeedPrimaryServer < Error; end end end mongo-ruby-driver-2.21.3/lib/mongo/error/no_server_available.rb000066400000000000000000000031331505113246500245350ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Error # Raised if there are no servers available matching the preference. # # @since 2.0.0 class NoServerAvailable < Error # Instantiate the new exception. # # @example Instantiate the exception. # Mongo::Error::NoServerAvailable.new(server_selector) # # @param [ Hash ] server_selector The server preference that could not be # satisfied. # @param [ Cluster ] cluster The cluster that server selection was # performed on. (added in 2.7.0) # # @since 2.0.0 def initialize(server_selector, cluster=nil, msg=nil) unless msg msg = "No #{server_selector.name} server is available" if cluster msg += " in cluster: #{cluster.summary}" end msg += " with timeout=#{server_selector.server_selection_timeout}, " + "LT=#{server_selector.local_threshold}" end super(msg) end end end end mongo-ruby-driver-2.21.3/lib/mongo/error/no_service_connection_available.rb000066400000000000000000000027301505113246500271100ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2021 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Error # Raised when the driver requires a connection to a particular service # but no matching connections exist in the connection pool. class NoServiceConnectionAvailable < Error # @api private def initialize(message, address:, service_id:) super(message) @address = address @service_id = service_id end # @return [ Mongo::Address ] The address to which a connection was # requested. attr_reader :address # @return [ nil | Object ] The service id. attr_reader :service_id # @api private def self.generate(address:, service_id:) new( "The connection pool for #{address} does not have a connection for service #{service_id}", address: address, service_id: service_id, ) end end end end mongo-ruby-driver-2.21.3/lib/mongo/error/no_srv_records.rb000066400000000000000000000016241505113246500235650ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2017-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
module Mongo class Error # This exception is raised when the URI Parser's DNS query returns no SRV records. # # @example Instantiate the exception. # Mongo::Error::NoSRVRecords.new(message) # # @since 2.5.0 class NoSRVRecords < Error; end end end mongo-ruby-driver-2.21.3/lib/mongo/error/notable.rb000066400000000000000000000054551505113246500221700ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2019-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Error < StandardError # A module encapsulating functionality to manage data attached to # exceptions in the driver, since the driver does not currently have a # single exception hierarchy root. # # @since 2.11.0 # @api private module Notable # Returns an array of strings with additional information about the # exception. # # @return [ Array ] Additional information strings. # # @since 2.11.0 # @api public def notes if @notes @notes.dup else [] end end # @api private def add_note(note) unless @notes @notes = [] end if Lint.enabled? if @notes.include?(note) # The driver makes an effort to not add duplicated notes, by # keeping track of *when* a particular exception should have the # particular notes attached to it throughout the call stack. raise Error::LintError, "Adding a note which already exists in exception #{self}: #{note}" end end @notes << note end # Allows multiple notes to be added in a single call, for convenience. # # @api private def add_notes(*notes) notes.each { |note| add_note(note) } end # Returns connection pool generation for the connection on which the # error occurred. # # @return [ Integer | nil ] Connection pool generation. attr_accessor :generation # Returns service id for the connection on which the error occurred. # # @return [ Object | nil ] Service id. # # @api experimental attr_accessor :service_id # Returns global id of the connection on which the error occurred. # # @return [ Integer | nil ] Connection global id. # # @api private attr_accessor :connection_global_id # @api public def to_s super + notes_tail end private # @api private def notes_tail msg = '' unless notes.empty? msg += " (#{notes.join(', ')})" end msg end end end end mongo-ruby-driver-2.21.3/lib/mongo/error/operation_failure.rb000066400000000000000000000241571505113246500242530ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2015-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
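# Illustrative usage sketch (client, collection and data assumed; not part
# of the original file): the class defined below is what applications
# typically rescue around driver calls, inspecting code / code_name /
# labels to decide how to react.
#
#   begin
#     client[:users].insert_one(_id: 1)
#     client[:users].insert_one(_id: 1) # duplicate key
#   rescue Mongo::Error::OperationFailure => e
#     e.code                          # => 11000
#     e.code_name                     # => "DuplicateKey"
#     e.label?('RetryableWriteError') # => false for a duplicate key
#   end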
require 'mongo/error/read_write_retryable' module Mongo class Error # Raised when an operation fails for some reason. class OperationFailure < Error # Implements the behavior for an OperationFailure error. Other errors # (e.g. ServerTimeoutError) may also implement this, so that they may # be recognized and treated as OperationFailure errors. module OperationFailure::Family extend Forwardable include SdamErrorDetection include ReadWriteRetryable def_delegators :@result, :operation_time # @!method connection_description # # @return [ Server::Description ] Server description of the server that # the operation that this exception refers to was performed on. # # @api private def_delegator :@result, :connection_description # @return [ Integer ] The error code parsed from the document. # # @since 2.6.0 attr_reader :code # @return [ String ] The error code name parsed from the document. # # @since 2.6.0 attr_reader :code_name # @return [ String ] The server-returned error message # parsed from the response. # # @api experimental attr_reader :server_message # Error codes and code names that should result in a failing getMore # command on a change stream NOT being resumed. # # @api private CHANGE_STREAM_RESUME_ERRORS = [ {code_name: 'HostUnreachable', code: 6}, {code_name: 'HostNotFound', code: 7}, {code_name: 'NetworkTimeout', code: 89}, {code_name: 'ShutdownInProgress', code: 91}, {code_name: 'PrimarySteppedDown', code: 189}, {code_name: 'ExceededTimeLimit', code: 262}, {code_name: 'SocketException', code: 9001}, {code_name: 'NotMaster', code: 10107}, {code_name: 'InterruptedAtShutdown', code: 11600}, {code_name: 'InterruptedDueToReplStateChange', code: 11602}, {code_name: 'NotPrimaryNoSecondaryOk', code: 13435}, {code_name: 'NotMasterOrSecondary', code: 13436}, {code_name: 'StaleShardVersion', code: 63}, {code_name: 'FailedToSatisfyReadPreference', code: 133}, {code_name: 'StaleEpoch', code: 150}, {code_name: 'RetryChangeStream', code: 234}, {code_name: 'StaleConfig', code: 13388}, ].freeze # Change stream can be resumed when these error messages are encountered. # # @since 2.6.0 # @api private CHANGE_STREAM_RESUME_MESSAGES = ReadWriteRetryable::WRITE_RETRY_MESSAGES # Can the change stream on which this error occurred be resumed, # provided the operation that triggered this error was a getMore? # # @example Is the error resumable for the change stream? # error.change_stream_resumable? # # @return [ true, false ] Whether the error is resumable. # # @since 2.6.0 def change_stream_resumable? if @result && @result.is_a?(Mongo::Operation::GetMore::Result) # CursorNotFound exceptions are always resumable because the server # is not aware of the cursor id, and thus cannot determine if # the cursor is a change stream and cannot add the # ResumableChangeStreamError label. return true if code == 43 # Connection description is not populated for unacknowledged writes. if connection_description.max_wire_version >= 9 label?('ResumableChangeStreamError') else change_stream_resumable_code? end else false end end def change_stream_resumable_code? CHANGE_STREAM_RESUME_ERRORS.any? { |e| e[:code] == code } end private :change_stream_resumable_code? # @return [ true | false ] Whether the failure includes a write # concern error. A failure may have a top level error and a write # concern error or either one of the two. # # @since 2.10.0 def write_concern_error? !!@write_concern_error_document end # Returns the write concern error document as it was reported by the # server, if any. 
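# (Illustration mirroring the sample response shown in parser.rb below:
# { 'code' => 100, 'codeName' => 'CannotSatisfyWriteConcern',
# 'errmsg' => 'Not enough data-bearing nodes' }.)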
# # @return [ Hash | nil ] Write concern error as reported to the server. attr_reader :write_concern_error_document # @return [ Integer | nil ] The error code for the write concern error, # if a write concern error is present and has a code. # # @since 2.10.0 attr_reader :write_concern_error_code # @return [ String | nil ] The code name for the write concern error, # if a write concern error is present and has a code name. # # @since 2.10.0 attr_reader :write_concern_error_code_name # @return [ String | nil ] The details of the error. # For WriteConcernErrors this is `document['writeConcernError']['errInfo']`. # For WriteErrors this is `document['writeErrors'][0]['errInfo']`. # For all other errors this is nil. attr_reader :details # @return [ BSON::Document | nil ] The server-returned error document. # # @api experimental attr_reader :document # @return [ Operation::Result ] the result object for the operation. # # @api private attr_reader :result # Create the operation failure. # # @param [ String ] message The error message. # @param [ Operation::Result ] result The result object. # @param [ Hash ] options Additional parameters. # # @option options [ Integer ] :code Error code. # @option options [ String ] :code_name Error code name. # @option options [ BSON::Document ] :document The server-returned # error document. # @option options [ String ] server_message The server-returned # error message parsed from the response. # @option options [ Hash ] :write_concern_error_document The # server-supplied write concern error document, if any. # @option options [ Integer ] :write_concern_error_code Error code for # write concern error, if any. # @option options [ String ] :write_concern_error_code_name Error code # name for write concern error, if any. # @option options [ Array ] :write_concern_error_labels Error # labels for the write concern error, if any. # @option options [ Array ] :labels The set of labels associated # with the error. # @option options [ true | false ] :wtimeout Whether the error is a wtimeout. def initialize(message = nil, result = nil, options = {}) @details = retrieve_details(options[:document]) super(append_details(message, @details)) @result = result @code = options[:code] @code_name = options[:code_name] @write_concern_error_document = options[:write_concern_error_document] @write_concern_error_code = options[:write_concern_error_code] @write_concern_error_code_name = options[:write_concern_error_code_name] @write_concern_error_labels = options[:write_concern_error_labels] || [] @labels = options[:labels] || [] @wtimeout = !!options[:wtimeout] @document = options[:document] @server_message = options[:server_message] end # Whether the error is a write concern timeout. # # @return [ true | false ] Whether the error is a write concern timeout. # # @since 2.7.1 def wtimeout? @wtimeout end # Whether the error is MaxTimeMSExpired. # # @return [ true | false ] Whether the error is MaxTimeMSExpired. # # @since 2.10.0 def max_time_ms_expired? code == 50 # MaxTimeMSExpired end # Whether the error is caused by an attempted retryable write # on a storage engine that does not support retryable writes. # # @return [ true | false ] Whether the error is caused by an attempted # retryable write on a storage engine that does not support retryable writes. # # @since 2.10.0 def unsupported_retryable_write? # code 20 is IllegalOperation. # Note that the document is expected to be a BSON::Document, thus # either having string keys or providing indifferent access. 
          code == 20 && server_message&.start_with?("Transaction numbers") || false
        end

        private

        # Retrieve the details from a document
        #
        # @return [ Hash | nil ] the details extracted from the document
        def retrieve_details(document)
          return nil unless document
          if wce = document['writeConcernError']
            return wce['errInfo']
          elsif we = document['writeErrors']&.first
            return we['errInfo']
          end
        end

        # Append the details to the message
        #
        # @return [ String ] the message with the details appended to it
        def append_details(message, details)
          return message unless details && message
          message + " -- #{details.to_json}"
        end
      end

      # OperationFailure is the canonical implementor of the
      # OperationFailure::Family concern.
      include OperationFailure::Family
    end
  end
end
mongo-ruby-driver-2.21.3/lib/mongo/error/parser.rb000066400000000000000000000231321505113246500220300ustar00rootroot00000000000000# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2015-2020 MongoDB Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Sample error - mongo 3.4:
# {
#   "ok" : 0,
#   "errmsg" : "not master",
#   "code" : 10107,
#   "codeName" : "NotMaster"
# }
#
# Sample response with a write concern error - mongo 3.4:
# {
#   "n" : 1,
#   "opTime" : {
#     "ts" : Timestamp(1527728618, 1),
#     "t" : NumberLong(4)
#   },
#   "electionId" : ObjectId("7fffffff0000000000000004"),
#   "writeConcernError" : {
#     "code" : 100,
#     "codeName" : "CannotSatisfyWriteConcern",
#     "errmsg" : "Not enough data-bearing nodes"
#   },
#   "ok" : 1
# }

module Mongo
  class Error

    # Class for parsing the various forms that errors can come in from MongoDB
    # command responses.
    #
    # The errors can be reported by the server in a number of ways:
    # - {ok:0} response indicates failure. In newer servers, code, codeName
    #   and errmsg fields should be set. In older servers some may not be set.
    # - {ok:1} response with a write concern error (writeConcernError top-level
    #   field). This indicates that the node responding successfully executed
    #   the request, but not enough other nodes successfully executed the
    #   request to satisfy the write concern.
    # - {ok:1} response with writeErrors top-level field. This can be obtained
    #   in a bulk write but also in a non-bulk write. In a non-bulk write
    #   there should be exactly one error in the writeErrors list.
    #   The case of multiple errors is handled by BulkWrite::Result.
    # - {ok:1} response with writeConcernErrors top-level field. This can
    #   only be obtained in a bulk write and is handled by BulkWrite::Result,
    #   not by this class.
    #
    # Note that writeErrors do not have codeName fields - they just provide
    # codes and messages. writeConcernErrors may similarly not provide code
    # names.
    #
    # @since 2.0.0
    # @api private
    class Parser
      include SdamErrorDetection

      # @return [ BSON::Document ] The returned document.
      attr_reader :document

      # @return [ String ] The full error message to be used in the
      #   raised exception.
      attr_reader :message

      # @return [ String ] The server-returned error message
      #   parsed from the response.
      attr_reader :server_message

      # @return [ Array ] The message replies.
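      # A hedged illustration (not part of the original source) of how the
      # parser condenses an error document; the exact prefix format comes
      # from +build_message+ below:
      #
      #   parser = Mongo::Error::Parser.new({ 'ok' => 0, 'code' => 10107,
      #     'codeName' => 'NotMaster', 'errmsg' => 'not master' })
      #   parser.code      # => 10107
      #   parser.code_name # => "NotMaster"
      #   parser.message   # => "[10107:NotMaster]: not master"
      #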
      attr_reader :replies

      # @return [ Integer ] The error code parsed from the document.
      # @since 2.6.0
      attr_reader :code

      # @return [ String ] The error code name parsed from the document.
      # @since 2.6.0
      attr_reader :code_name

      # @return [ Array ] The set of labels associated with the error.
      # @since 2.7.0
      attr_reader :labels

      # @api private
      attr_reader :wtimeout

      # Create the new parser with the returned document.
      #
      # In legacy mode, the code and codeName fields of the document are not
      # examined because the status (ok: 1) is not part of the document and
      # there is no way to distinguish successful from failed responses using
      # the document itself, and a successful response may legitimately have
      # { code: 123, codeName: 'foo' } as the contents of a user-inserted
      # document. The legacy server versions do not fill out code nor codeName
      # thus not reading them does not lose information.
      #
      # @example Create the new parser.
      #   Parser.new({ 'errmsg' => 'failed' })
      #
      # @param [ BSON::Document ] document The returned document.
      # @param [ Array ] replies The message replies.
      # @param [ Hash ] options The options.
      #
      # @option options [ true | false ] :legacy Whether document and replies
      #   are from a legacy (pre-3.2) response
      #
      # @since 2.0.0
      def initialize(document, replies = nil, options = nil)
        @document = document || {}
        @replies = replies
        @options = if options
          options.dup
        else
          {}
        end.freeze
        parse!
      end

      # @return [ true | false ] Whether the document includes a write
      #   concern error. A failure may have a top level error and a write
      #   concern error or either one of the two.
      #
      # @since 2.10.0
      # @api experimental
      def write_concern_error?
        !!write_concern_error_document
      end

      # Returns the write concern error document as it was reported by the
      # server, if any.
      #
      # @return [ Hash | nil ] Write concern error as reported by the server.
      # @api experimental
      def write_concern_error_document
        document['writeConcernError']
      end

      # @return [ Integer | nil ] The error code for the write concern error,
      #   if a write concern error is present and has a code.
      #
      # @since 2.10.0
      # @api experimental
      def write_concern_error_code
        write_concern_error_document && write_concern_error_document['code']
      end

      # @return [ String | nil ] The code name for the write concern error,
      #   if a write concern error is present and has a code name.
      #
      # @since 2.10.0
      # @api experimental
      def write_concern_error_code_name
        write_concern_error_document && write_concern_error_document['codeName']
      end

      # @return [ Array | nil ] The error labels associated with this
      #   write concern error, if there is a write concern error present.
      def write_concern_error_labels
        write_concern_error_document && write_concern_error_document['errorLabels']
      end

      class << self
        def build_message(code: nil, code_name: nil, message: nil)
          if code_name && code
            "[#{code}:#{code_name}]: #{message}"
          elsif code_name
            # This surely should never happen, if there's a code name
            # there ought to also be the code provided.
            # Handle this case for completeness.
            "[#{code_name}]: #{message}"
          elsif code
            "[#{code}]: #{message}"
          else
            message
          end
        end
      end

      private
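      # The parsing steps below are order-sensitive: the human-readable
      # fragments ($err, err, errmsg, writeErrors, writeConcernError) are
      # concatenated first, then code/codeName and labels are extracted,
      # and finally the combined text is prefixed via build_message.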
      def parse!
        if document['ok'] != 1 && document['writeErrors']
          raise ArgumentError, "writeErrors should only be given in successful responses"
        end

        @message = +""
        parse_single(@message, '$err')
        parse_single(@message, 'err')
        parse_single(@message, 'errmsg')
        parse_multiple(@message, 'writeErrors')
        if write_concern_error_document
          parse_single(@message, 'errmsg', write_concern_error_document)
        end
        parse_flag(@message)
        parse_code
        parse_labels
        parse_wtimeout

        @server_message = @message
        @message = self.class.build_message(
          code: code,
          code_name: code_name,
          message: @message,
        )
      end

      def parse_single(message, key, doc = document)
        if error = doc[key]
          append(message, error)
        end
      end

      def parse_multiple(message, key)
        if errors = document[key]
          errors.each do |error|
            parse_single(message, 'errmsg', error)
          end
        end
      end

      def parse_flag(message)
        if replies && replies.first &&
          (replies.first.respond_to?(:cursor_not_found?)) && replies.first.cursor_not_found?
          append(message, CURSOR_NOT_FOUND)
        end
      end

      def append(message, error)
        if message.length > 1
          message.concat(", #{error}")
        else
          message.concat(error)
        end
      end

      def parse_code
        if document['ok'] == 1 || @options[:legacy]
          @code = @code_name = nil
        else
          @code = document['code']
          @code_name = document['codeName']
        end

        # Since there is only room for one code, do not replace
        # codes of the top level response with write concern error codes.
        # In practice this should never be an issue as a write concern
        # can only fail after the operation succeeds on the primary.
        if @code.nil? && @code_name.nil?
          if subdoc = write_concern_error_document
            @code = subdoc['code']
            @code_name = subdoc['codeName']
          end
        end

        if @code.nil? && @code_name.nil?
          # If we have writeErrors, and all of their codes are the same,
          # use that code. Otherwise don't set the code
          if write_errors = document['writeErrors']
            codes = write_errors.map { |e| e['code'] }.compact
            if codes.uniq.length == 1
              @code = codes.first
              # code name may not be returned by the server
              @code_name = write_errors.map { |e| e['codeName'] }.compact.first
            end
          end
        end
      end

      def parse_labels
        @labels = document['errorLabels'] || []
      end

      def parse_wtimeout
        @wtimeout = write_concern_error_document &&
          write_concern_error_document['errInfo'] &&
          write_concern_error_document['errInfo']['wtimeout']
      end
    end
  end
end
mongo-ruby-driver-2.21.3/lib/mongo/error/pool_cleared_error.rb000066400000000000000000000024731505113246500244020ustar00rootroot00000000000000# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2019-present MongoDB Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

module Mongo
  class Error

    # Exception raised if an operation is attempted on a connection that was
    # interrupted due to server monitor timeout.
    class PoolClearedError < PoolError
      include WriteRetryable
      include ChangeStreamResumable

      # Instantiate the new exception.
      #
      # @example Instantiate the exception.
      #   Mongo::Error::PoolClearedError.new(address, pool)
      #
      # @api private
      def initialize(address, pool)
        add_label('TransientTransactionError')
        super(address, pool,
          "Connection to #{address} interrupted due to server monitor timeout " +
          "(for pool 0x#{pool.object_id})")
      end
    end
  end
end
mongo-ruby-driver-2.21.3/lib/mongo/error/pool_closed_error.rb000066400000000000000000000023221505113246500242450ustar00rootroot00000000000000# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2019-2020 MongoDB Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

module Mongo
  class Error

    # Exception raised if an operation is attempted on a closed connection pool.
    #
    # @since 2.9.0
    class PoolClosedError < PoolError

      # Instantiate the new exception.
      #
      # @example Instantiate the exception.
      #   Mongo::Error::PoolClosedError.new(address, pool)
      #
      # @since 2.9.0
      # @api private
      def initialize(address, pool)
        super(address, pool,
          "Attempted to use a connection pool which has been closed (for #{address} " +
          "with pool 0x#{pool.object_id})")
      end
    end
  end
end
mongo-ruby-driver-2.21.3/lib/mongo/error/pool_error.rb000066400000000000000000000023651505113246500227230ustar00rootroot00000000000000# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2019-2020 MongoDB Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

module Mongo
  class Error

    # Abstract base class for connection pool-related exceptions.
    class PoolError < Error

      # @return [ Mongo::Address ] address The address of the server the
      #   pool's connections connect to.
      #
      # @since 2.9.0
      attr_reader :address

      # @return [ Mongo::Server::ConnectionPool ] pool The connection pool.
      #
      # @since 2.11.0
      attr_reader :pool

      # Instantiate the new exception.
      #
      # @api private
      def initialize(address, pool, message)
        @address = address
        @pool = pool
        super(message)
      end
    end
  end
end
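# A hedged usage sketch (not part of the original source): PoolClearedError
# (and PoolPausedError below) mark transient pool states and are write
# retryable, while PoolClosedError signals a permanently closed pool.
#
#   begin
#     collection.insert_one(doc)
#   rescue Mongo::Error::PoolClearedError, Mongo::Error::PoolPausedError => e
#     retry if e.write_retryable?
#   end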
mongo-ruby-driver-2.21.3/lib/mongo/error/pool_paused_error.rb000066400000000000000000000023641505113246500242630ustar00rootroot00000000000000# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2019-2020 MongoDB Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

module Mongo
  class Error

    # Exception raised if an operation is attempted on a paused connection pool.
    class PoolPausedError < PoolError
      include WriteRetryable
      include ChangeStreamResumable

      # Instantiate the new exception.
      #
      # @example Instantiate the exception.
      #   Mongo::Error::PoolPausedError.new(address, pool)
      #
      # @since 2.9.0
      # @api private
      def initialize(address, pool)
        super(address, pool,
          "Attempted to use a connection pool which is paused (for #{address} " +
          "with pool 0x#{pool.object_id})")
      end
    end
  end
end
mongo-ruby-driver-2.21.3/lib/mongo/error/raise_original_error.rb000066400000000000000000000020021505113246500247300ustar00rootroot00000000000000# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2020 MongoDB Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

module Mongo
  class Error

    # This is a special marker exception class used internally in the
    # retryable reads/writes implementation. Its purpose is to bypass
    # note addition when raising the exception from the first read/write
    # attempt.
    #
    # @note This class must not derive from Error.
    #
    # @api private
    class RaiseOriginalError < Exception
    end
  end
end
mongo-ruby-driver-2.21.3/lib/mongo/error/read_write_retryable.rb000066400000000000000000000070611505113246500247350ustar00rootroot00000000000000# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2019-2022 MongoDB Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

module Mongo
  class Error

    # A module encapsulating functionality to indicate whether errors are
    # retryable.
    #
    # @note Although methods of this module are part of the public API,
    #   the fact that these methods are defined on this module and not on
    #   the classes which include this module is not part of the public API.
    #
    # @api semipublic
    module ReadWriteRetryable

      # Error codes and code names that should result in a failing write
      # being retried.
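      #
      # As a hedged illustration (not part of the original source): an error
      # raised during a primary step-down, such as code 189
      # ("PrimarySteppedDown"), is in this list and thus satisfies
      # write_retryable?, while an ordinary duplicate key error (code 11000)
      # is not.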
# # @api private WRITE_RETRY_ERRORS = [ {:code_name => 'HostUnreachable', :code => 6}, {:code_name => 'HostNotFound', :code => 7}, {:code_name => 'NetworkTimeout', :code => 89}, {:code_name => 'ShutdownInProgress', :code => 91}, {:code_name => 'PrimarySteppedDown', :code => 189}, {:code_name => 'ExceededTimeLimit', :code => 262}, {:code_name => 'SocketException', :code => 9001}, {:code_name => 'NotMaster', :code => 10107}, {:code_name => 'InterruptedAtShutdown', :code => 11600}, {:code_name => 'InterruptedDueToReplStateChange', :code => 11602}, {:code_name => 'NotPrimaryNoSecondaryOk', :code => 13435}, {:code_name => 'NotMasterOrSecondary', :code => 13436}, ].freeze # These are magic error messages that could indicate a master change. # # @api private WRITE_RETRY_MESSAGES = [ 'not master', 'node is recovering', ].freeze # These are magic error messages that could indicate a cluster # reconfiguration behind a mongos. # # @api private RETRY_MESSAGES = WRITE_RETRY_MESSAGES + [ 'transport error', 'socket exception', "can't connect", 'connect failed', 'error querying', 'could not get last error', 'connection attempt failed', 'interrupted at shutdown', 'unknown replica set', 'dbclient error communicating with server' ].freeze # Whether the error is a retryable error according to the legacy # read retry logic. # # @return [ true, false ] # # @deprecated def retryable? write_retryable? || code.nil? && RETRY_MESSAGES.any?{ |m| message.include?(m) } end # Whether the error is a retryable error according to the modern retryable # reads and retryable writes specifications. # # This method is also used by the legacy retryable write logic to determine # whether an error is a retryable one. # # @return [ true, false ] def write_retryable? write_retryable_code? || code.nil? && WRITE_RETRY_MESSAGES.any? { |m| message.include?(m) } end private def write_retryable_code? if code WRITE_RETRY_ERRORS.any? { |e| e[:code] == code } else # return false rather than nil false end end end end end mongo-ruby-driver-2.21.3/lib/mongo/error/sdam_error_detection.rb000066400000000000000000000054641505113246500247370ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all module Mongo class Error # @note Although not_master? and node_recovering? methods of this module # are part of the public API, the fact that these methods are defined on # this module and not on the classes which include this module is not # part of the public API. # # @api semipublic module SdamErrorDetection # @api private NOT_MASTER_CODES = [10107, 13435].freeze # @api private NODE_RECOVERING_CODES = [11600, 11602, 13436, 189, 91, 10058].freeze # @api private NODE_SHUTTING_DOWN_CODES = [11600, 91].freeze # Whether the error is a "not master" error, or one of its variants. # # See https://github.com/mongodb/specifications/blob/master/source/server-discovery-and-monitoring/server-discovery-and-monitoring.md#not-writable-primary-and-node-is-recovering # # @return [ true | false ] Whether the error is a not master. # # @since 2.8.0 def not_master? # Require the error to be communicated at the top level of the response # for it to influence SDAM state. See DRIVERS-1376 / RUBY-2516. return false if document && document['ok'] == 1 if node_recovering? false elsif code NOT_MASTER_CODES.include?(code) elsif message message.include?('not master') else false end end # Whether the error is a "node is recovering" error, or one of its variants. 
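    #
    # A hedged sketch (not part of the original source) of how these SDAM
    # predicates are typically consumed by error handling code:
    #
    #   if error.not_master? || error.node_recovering?
    #     # mark the server Unknown and clear its pool as appropriate
    #   end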
# # See https://github.com/mongodb/specifications/blob/master/source/server-discovery-and-monitoring/server-discovery-and-monitoring.md#not-writable-primary-and-node-is-recovering # # @return [ true | false ] Whether the error is a node is recovering. # # @since 2.8.0 def node_recovering? # Require the error to be communicated at the top level of the response # for it to influence SDAM state. See DRIVERS-1376 / RUBY-2516. return false if document && document['ok'] == 1 if code NODE_RECOVERING_CODES.include?(code) elsif message message.include?('node is recovering') || message.include?('not master or secondary') else false end end # Whether the error is a "node is shutting down" type error. # # See https://github.com/mongodb/specifications/blob/master/source/server-discovery-and-monitoring/server-discovery-and-monitoring.md#not-writable-primary-and-node-is-recovering # # @return [ true | false ] Whether the error is a node is shutting down. # # @since 2.9.0 def node_shutting_down? if code && NODE_SHUTTING_DOWN_CODES.include?(code) true else false end end end end end mongo-ruby-driver-2.21.3/lib/mongo/error/server_api_conflict.rb000066400000000000000000000015301505113246500245520ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2021 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Error # Exception raised when a Client has :server_api configured and an # operation attempts to specify any of server API version parameters. class ServerApiConflict < Error end end end mongo-ruby-driver-2.21.3/lib/mongo/error/server_api_not_supported.rb000066400000000000000000000016471505113246500256670ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2021 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Error # Exception raised when a Client has :server_api configured and an # operation is executed against a pre-3.6 MongoDB server using a legacy # wire protocol message that does not permit sending API parameters. class ServerApiNotSupported < Error end end end mongo-ruby-driver-2.21.3/lib/mongo/error/server_certificate_revoked.rb000066400000000000000000000014171505113246500261250ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

module Mongo
  class Error

    # Server certificate has been revoked (determined via OCSP).
    class ServerCertificateRevoked < Error
    end
  end
end
mongo-ruby-driver-2.21.3/lib/mongo/error/server_not_usable.rb000066400000000000000000000020551505113246500242560ustar00rootroot00000000000000# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2019-2020 MongoDB Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

module Mongo
  class Error

    # Exception raised if an unknown server is attempted to be used for
    # an operation.
    class ServerNotUsable < Error
      include WriteRetryable
      include ChangeStreamResumable

      # Instantiate the new exception.
      #
      # @api private
      def initialize(address)
        @address = address
        super("Attempted to use an unknown server at #{address}")
      end
    end
  end
end
mongo-ruby-driver-2.21.3/lib/mongo/error/server_timeout_error.rb000066400000000000000000000003701505113246500250200ustar00rootroot00000000000000# frozen_string_literal: true

require 'mongo/error/timeout_error'

module Mongo
  class Error
    # Raised when the server returns error code 50.
    class ServerTimeoutError < TimeoutError
      include OperationFailure::Family
    end
  end
end
mongo-ruby-driver-2.21.3/lib/mongo/error/session_ended.rb000066400000000000000000000015351505113246500233610ustar00rootroot00000000000000# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2019-2020 MongoDB Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

module Mongo
  class Error

    # Session was previously ended.
    #
    # @since 2.7.0
    class SessionEnded < Error
      def initialize
        super("The session was ended and cannot be used")
      end
    end
  end
end
mongo-ruby-driver-2.21.3/lib/mongo/error/session_not_materialized.rb000066400000000000000000000017761505113246500256370ustar00rootroot00000000000000# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2022 MongoDB Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Error # This exception is raised when a session is attempted to be used but # it was never materialized. class SessionNotMaterialized < InvalidSession def initialize super("The session was not materialized and cannot be used. Use start_session or with_session in order to start a session that will be materialized.") end end end end mongo-ruby-driver-2.21.3/lib/mongo/error/sessions_not_supported.rb000066400000000000000000000022301505113246500253630ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Error # This exception is raised when a session is attempted to be used and the # deployment does not support sessions. # # @note The subclassing of InvalidSession only exists for backwards # compatibility and will be removed in driver version 3.0. class SessionsNotSupported < InvalidSession # Create the new exception. # # @param [ String ] message The error message. # # @api private def initialize(message) super(message) end end end end mongo-ruby-driver-2.21.3/lib/mongo/error/snapshot_session_invalid_server_version.rb000066400000000000000000000017041505113246500310000ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2021 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Error # Exception raised if an operation using a snapshot session is # directed to a pre-5.0 server. class SnapshotSessionInvalidServerVersion < Error # Instantiate the new exception. def initialize super("Snapshot reads require MongoDB 5.0 or later") end end end end mongo-ruby-driver-2.21.3/lib/mongo/error/snapshot_session_transaction_prohibited.rb000066400000000000000000000016621505113246500307600ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2021 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Error # Exception raised if a transaction is attempted on a snapshot session. class SnapshotSessionTransactionProhibited < Error # Instantiate the new exception. def initialize super("Transactions are not supported in snapshot sessions") end end end end mongo-ruby-driver-2.21.3/lib/mongo/error/socket_error.rb000066400000000000000000000015111505113246500232320ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2015-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Error # Raised when a socket has an error. # # @since 2.0.0 class SocketError < Error include WriteRetryable include ChangeStreamResumable end end end mongo-ruby-driver-2.21.3/lib/mongo/error/socket_timeout_error.rb000066400000000000000000000016041505113246500250030ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2015-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'mongo/error/timeout_error' module Mongo class Error # Raised when a socket connection times out. # # @since 2.0.0 class SocketTimeoutError < TimeoutError include WriteRetryable include ChangeStreamResumable end end end mongo-ruby-driver-2.21.3/lib/mongo/error/timeout_error.rb000066400000000000000000000013631505113246500234350ustar00rootroot00000000000000# frozen_string_literal: true # Copyright (C) 2015-present MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Error # Raised when a Client Side Operation Timeout times out. 
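    # As a hedged illustration (not part of the original source): with
    # client-side operation timeouts configured, e.g.
    #
    #   client = Mongo::Client.new(uri, timeout_ms: 5000)
    #
    # an operation that exceeds its time budget raises this error or one of
    # its subclasses (such as ServerTimeoutError or SocketTimeoutError).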
    class TimeoutError < Error
    end
  end
end
mongo-ruby-driver-2.21.3/lib/mongo/error/transactions_not_supported.rb000066400000000000000000000022241505113246500262300ustar00rootroot00000000000000# frozen_string_literal: true

# Copyright (C) 2019-2020 MongoDB Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

module Mongo
  class Error

    # Transactions are not supported by the cluster. Possible reasons:
    # - topology is standalone
    # - topology is replica set and server version is < 4.0
    # - topology is sharded and server version is < 4.2
    #
    # @param [ String ] reason The reason why transactions are not supported.
    #
    # @since 2.7.0
    class TransactionsNotSupported < Error
      def initialize(reason)
        super("Transactions are not supported for the cluster: #{reason}")
      end
    end
  end
end
mongo-ruby-driver-2.21.3/lib/mongo/error/unchangeable_collection_option.rb000066400000000000000000000025301505113246500267520ustar00rootroot00000000000000# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2014-2020 MongoDB Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

module Mongo
  class Error

    # Raised if a new collection is created from an existing one and options other than the
    # changeable ones are provided.
    #
    # @since 2.1.0
    class UnchangeableCollectionOption < Error

      # Create the new exception.
      #
      # @example Create the new exception.
      #   Mongo::Error::UnchangeableCollectionOption.new(option)
      #
      # @param [ String, Symbol ] option The option that was attempted to be changed.
      #
      # @since 2.1.0
      def initialize(option)
        super("The option #{option} cannot be set on a new collection instance." +
          " The options that can be updated are #{Collection::CHANGEABLE_OPTIONS}")
      end
    end
  end
end
mongo-ruby-driver-2.21.3/lib/mongo/error/unexpected_chunk_length.rb000066400000000000000000000026021505113246500254300ustar00rootroot00000000000000# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2014-2020 MongoDB Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
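# A hedged usage sketch (not part of the original source) for the
# TransactionsNotSupported error defined above: the reason string is
# interpolated into the final message.
#
#   raise Mongo::Error::TransactionsNotSupported, "topology is standalone"
#   # => Transactions are not supported for the cluster: topology is standalone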
module Mongo class Error # Raised if the next chunk when reading from a GridFSBucket does not have the # expected length. # # @since 2.1.0 class UnexpectedChunkLength < Error # Create the new exception. # # @example Create the new exception. # Mongo::Error::UnexpectedChunkLength.new(expected_len, chunk) # # @param [ Integer ] expected_len The expected length. # @param [ Grid::File::Chunk ] chunk The chunk read from GridFS. # # @since 2.1.0 def initialize(expected_len, chunk) super("Unexpected chunk length. Chunk has length #{chunk.data.data.size} but expected length " + "#{expected_len} or for it to be the last chunk in the sequence.") end end end end mongo-ruby-driver-2.21.3/lib/mongo/error/unexpected_response.rb000066400000000000000000000025771505113246500246300ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Error # Raised if the response read from the socket does not match the latest query. # # @since 2.2.6 class UnexpectedResponse < Error # Create the new exception. # # @example Create the new exception. # Mongo::Error::UnexpectedResponse.new(expected_response_to, response_to) # # @param [ Integer ] expected_response_to The last request id sent. # @param [ Integer ] response_to The actual response_to of the reply. # # @since 2.2.6 def initialize(expected_response_to, response_to) super("Unexpected response. Got response for request ID #{response_to} " + "but expected response for request ID #{expected_response_to}") end end end end mongo-ruby-driver-2.21.3/lib/mongo/error/unknown_payload_type.rb000066400000000000000000000024371505113246500250120ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2017-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Error # Raised if an unknown payload type is encountered when an OP_MSG is created or read. # # @since 2.5.0 class UnknownPayloadType < Error # The error message. # # @since 2.5.0 MESSAGE = 'Unknown payload type (%s) encountered when creating or reading an OP_MSG wire protocol message.' # Create the new exception. # # @example Create the new exception. # Mongo::Error::UnknownPayloadType.new(byte) # # @param [ String ] byte The unknown payload type. 
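      #
      # As a hedged illustration (not part of the original source): for an
      # unrecognized type byte "\x05" the raised message reads
      # 'Unknown payload type ("\x05") encountered when creating or reading
      # an OP_MSG wire protocol message.'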
# # @since 2.5.0 def initialize(byte) super(MESSAGE % byte.inspect) end end end end mongo-ruby-driver-2.21.3/lib/mongo/error/unmet_dependency.rb000066400000000000000000000014031505113246500240570ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Error # Raised if an optional dependency of the driver is not met. class UnmetDependency < Error; end end end mongo-ruby-driver-2.21.3/lib/mongo/error/unsupported_array_filters.rb000066400000000000000000000044441505113246500260570ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Error # Raised if the array filters option is specified for an operation but the server # selected does not support array filters. # # @since 2.5.0 # # @deprecated RUBY-2260 In driver version 3.0, this error class will be # replaced with UnsupportedOption. To handle this error, catch # Mongo::Error::UnsupportedOption, which will prevent any breaking changes # in your application when upgrading to version 3.0 of the driver. class UnsupportedArrayFilters < UnsupportedOption # The default error message describing that array filters are not supported. # # @return [ String ] A default message describing that array filters are not supported by the server. # # @since 2.5.0 DEFAULT_MESSAGE = "The array_filters option is not a supported feature of the server handling this operation. " + "Operation results may be unexpected.".freeze # The error message describing that array filters cannot be used when write concern is unacknowledged. # # @return [ String ] A message describing that array filters cannot be used when write concern is unacknowledged. # # @since 2.5.0 UNACKNOWLEDGED_WRITES_MESSAGE = "The array_filters option cannot be specified when using unacknowledged writes. " + "Either remove the array_filters option or use acknowledged writes (w >= 1).".freeze # Create the new exception. # # @example Create the new exception. # Mongo::Error::UnsupportedArrayFilters.new # # @since 2.5.0 def initialize(message = nil) super(message || DEFAULT_MESSAGE) end end end end mongo-ruby-driver-2.21.3/lib/mongo/error/unsupported_collation.rb000066400000000000000000000043231505113246500251710ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. 
# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Error # Raised if a collation is specified for an operation but the server selected does not # support collations. # # @since 2.4.0 # # @deprecated RUBY-2260 In driver version 3.0, this error class will be # replaced with UnsupportedOption. To handle this error, catch # Mongo::Error::UnsupportedOption, which will prevent any breaking changes # in your application when upgrading to version 3.0 of the driver. class UnsupportedCollation < UnsupportedOption # The default error message describing that collations is not supported. # # @return [ String ] A default message describing that collations is not supported by the server. # # @since 2.4.0 DEFAULT_MESSAGE = "Collations is not a supported feature of the server handling this operation. " + "Operation results may be unexpected." # The error message describing that collations cannot be used when write concern is unacknowledged. # # @return [ String ] A message describing that collations cannot be used when write concern is unacknowledged. # # @since 2.4.0 UNACKNOWLEDGED_WRITES_MESSAGE = "A collation cannot be specified when using unacknowledged writes. " + "Either remove the collation option or use acknowledged writes (w >= 1)." # Create the new exception. # # @example Create the new exception. # Mongo::Error::UnsupportedCollation.new # # @since 2.4.0 def initialize(message = nil) super(message || DEFAULT_MESSAGE) end end end end mongo-ruby-driver-2.21.3/lib/mongo/error/unsupported_features.rb000066400000000000000000000015001505113246500250150ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Error # Raised when the driver does not support the complete set of server # features. # # @since 2.0.0 class UnsupportedFeatures < Error end end end mongo-ruby-driver-2.21.3/lib/mongo/error/unsupported_message_type.rb000066400000000000000000000014741505113246500256760ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Error # Raised when trying to get a message type from the registry that doesn't exist. # # @since 2.5.0 class UnsupportedMessageType < Error; end end end mongo-ruby-driver-2.21.3/lib/mongo/error/unsupported_option.rb000066400000000000000000000075661505113246500245310ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Error # Raised if an unsupported option is specified for an operation. class UnsupportedOption < Error # The error message provided when the user passes the hint option to # a write operation against a server that does not support the hint # option and does not provide option validation. # # @api private HINT_MESSAGE = "The MongoDB server handling this request does not support " \ "the hint option on this command. The hint option is supported on update " \ "commands on MongoDB server versions 4.2 and later and on findAndModify " \ "and delete commands on MongoDB server versions 4.4 and later" # The error message provided when the user passes the hint option to # an unacknowledged write operation. # # @api private UNACKNOWLEDGED_HINT_MESSAGE = "The hint option cannot be specified on " \ "an unacknowledged write operation. Remove the hint option or perform " \ "this operation with a write concern of at least { w: 1 }" # The error message provided when the user passes the allow_disk_use # option to a find operation against a server that does not support the # allow_disk_use operation and does not provide option validation. # # @api private ALLOW_DISK_USE_MESSAGE = "The MongoDB server handling this request does " \ "not support the allow_disk_use option on this command. The " \ "allow_disk_use option is supported on find commands on MongoDB " \ "server versions 4.4 and later" # The error message provided when the user passes the commit_quorum option # to a createIndexes operation against a server that does not support # that option. # # @api private COMMIT_QUORUM_MESSAGE = "The MongoDB server handling this request does " \ "not support the commit_quorum option on this command. The commit_quorum " \ "option is supported on createIndexes commands on MongoDB server versions " \ "4.4 and later" # Raise an error about an unsupported hint option. # # @option options [ Boolean ] unacknowledged_write Whether this error # pertains to a hint option passed to an unacknowledged write. Defaults # to false. # # @return [ Mongo::Error::UnsupportedOption ] An error with a default # error message. 
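      #
      # A hedged example (not part of the original source):
      #
      #   raise Mongo::Error::UnsupportedOption.hint_error(
      #     unacknowledged_write: true)
      #   # => raises with the UNACKNOWLEDGED_HINT_MESSAGE text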
# # @api private def self.hint_error(**options) unacknowledged_write = options[:unacknowledged_write] || false error_message = if unacknowledged_write UNACKNOWLEDGED_HINT_MESSAGE else HINT_MESSAGE end new(error_message) end # Raise an error about an unsupported allow_disk_use option. # # @return [ Mongo::Error::UnsupportedOption ] An error with a default # error message. # # @api private def self.allow_disk_use_error new(ALLOW_DISK_USE_MESSAGE) end # Raise an error about an unsupported commit_quorum option. # # @return [ Mongo::Error::UnsupportedOption ] An error with a default # error message. # # @api private def self.commit_quorum_error new(COMMIT_QUORUM_MESSAGE) end end end end mongo-ruby-driver-2.21.3/lib/mongo/error/write_retryable.rb000066400000000000000000000015141505113246500237370ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2018-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Error # A module signifying the error is always write retryable. # # @since 2.6.0 module WriteRetryable def write_retryable? true end end end end mongo-ruby-driver-2.21.3/lib/mongo/event.rb000066400000000000000000000027011505113246500205230ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the 'License'); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an 'AS IS' BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Event # When a standalone is discovered. # # @since 2.0.6 # @deprecated Will be removed in 3.0 STANDALONE_DISCOVERED = 'standalone_discovered'.freeze # When a server is elected primary. # # @since 2.0.0 # @deprecated Will be removed in 3.0 PRIMARY_ELECTED = 'primary_elected'.freeze # When a server is discovered to be a member of a topology. # # @since 2.4.0 # @deprecated Will be removed in 3.0 MEMBER_DISCOVERED = 'member_discovered'.freeze # When a server is to be removed from a cluster. # # @since 2.0.6 # @deprecated Will be removed in 3.0 DESCRIPTION_CHANGED = 'description_changed'.freeze end end require 'mongo/event/base' require 'mongo/event/listeners' require 'mongo/event/publisher' require 'mongo/event/subscriber' mongo-ruby-driver-2.21.3/lib/mongo/event/000077500000000000000000000000001505113246500201765ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/lib/mongo/event/base.rb000066400000000000000000000023131505113246500214340ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2018-2020 MongoDB Inc. 
# # Licensed under the Apache License, Version 2.0 (the 'License'); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an 'AS IS' BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Event # Base class for all events. # # @since 2.6.0 class Base # Returns a concise yet useful summary of the event. # Meant to be overridden in derived classes. # # @return [ String ] String summary of the event. # # @note This method is experimental and subject to change. # # @since 2.7.0 # @api experimental def summary "#<#{self.class}>" end private def short_class_name self.class.name.sub(/^Mongo::Monitoring::Event::/, '') end end end end mongo-ruby-driver-2.21.3/lib/mongo/event/listeners.rb000066400000000000000000000034251505113246500225370ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2015-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the 'License'); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an 'AS IS' BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Event # The queue of events getting processed in the client. # # @since 2.0.0 class Listeners # Initialize the event listeners. # # @example Initialize the event listeners. # Listeners.new # # @since 2.0.0 def initialize @listeners = {} end # Add an event listener for the provided event. # # @example Add an event listener # publisher.add_listener("my_event", listener) # # @param [ String ] event The event to listen for. # @param [ Object ] listener The event listener. # # @return [ Array ] The listeners for the event. # # @since 2.0.0 def add_listener(event, listener) listeners_for(event).push(listener) end # Get the listeners for a specific event. # # @example Get the listeners. # publisher.listeners_for("test") # # @param [ String ] event The event name. # # @return [ Array ] The listeners. # # @since 2.0.0 def listeners_for(event) @listeners[event] ||= [] end end end end mongo-ruby-driver-2.21.3/lib/mongo/event/publisher.rb000066400000000000000000000025071505113246500225240ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Event # This module is included for objects that need to publish events. 
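    #
    # A hedged usage sketch (not part of the original source), assuming a
    # listener object that responds to #handle:
    #
    #   listeners = Mongo::Event::Listeners.new
    #   listeners.add_listener('my_event', listener)
    #   # An including class that sets @event_listeners = listeners can then
    #   # call publish('my_event', payload), invoking listener.handle(payload).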
# # @since 2.0.0 module Publisher # @return [ Event::Listeners ] event_listeners The listeners. attr_reader :event_listeners # Publish the provided event. # # @example Publish an event. # publisher.publish("my_event", "payload") # # @param [ String ] event The event to publish. # @param [ Array ] args The objects to pass to the listeners. # # @since 2.0.0 def publish(event, *args) event_listeners.listeners_for(event).each do |listener| listener.handle(*args) end end end end end mongo-ruby-driver-2.21.3/lib/mongo/event/subscriber.rb000066400000000000000000000024101505113246500226630ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Event # Adds convenience methods for adding listeners to event publishers. # # @since 2.0.0 module Subscriber # @return [ Event::Listeners ] event_listeners The listeners. attr_reader :event_listeners # Subscribe to the provided event. # # @example Subscribe to the event. # subscriber.subscribe_to('test', listener) # # @param [ String ] event The event. # @param [ Object ] listener The event listener. # # @since 2.0.0 def subscribe_to(event, listener) event_listeners.add_listener(event, listener) end end end end mongo-ruby-driver-2.21.3/lib/mongo/grid.rb000066400000000000000000000013171505113246500203310ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'mongo/grid/file' require 'mongo/grid/fs_bucket' require 'mongo/grid/stream' mongo-ruby-driver-2.21.3/lib/mongo/grid/000077500000000000000000000000001505113246500200025ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/lib/mongo/grid/file.rb000066400000000000000000000076321505113246500212560ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
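
# A minimal usage sketch (illustrative only): the class wraps raw data
# together with its GridFS metadata, so a simple round trip looks like
#
#   file = Mongo::Grid::File.new('hello world', :filename => 'hello.txt')
#   file.data # => "hello world"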
require 'mongo/grid/file/chunk'
require 'mongo/grid/file/info'

module Mongo
  module Grid

    # A representation of a file in the database.
    #
    # @since 2.0.0
    #
    # @deprecated Please use the 'stream' API on a FSBucket instead.
    #   Will be removed in driver version 3.0.
    class File
      extend Forwardable

      # Delegate to file info for convenience.
      def_delegators :info, :chunk_size, :content_type, :filename, :id, :md5, :upload_date

      # @return [ Array ] chunks The file chunks.
      attr_reader :chunks

      # @return [ File::Info ] info The file information.
      attr_reader :info

      # Check equality of files.
      #
      # @example Check the equality of files.
      #   file == other
      #
      # @param [ Object ] other The object to check against.
      #
      # @return [ true, false ] If the objects are equal.
      #
      # @since 2.0.0
      def ==(other)
        return false unless other.is_a?(File)
        chunks == other.chunks && info == other.info
      end

      # Initialize the file.
      #
      # @example Create the file.
      #   Grid::File.new(data, :filename => 'test.txt')
      #
      # @param [ IO, String, Array ] data The file object, file
      #   contents or chunks.
      # @param [ BSON::Document, Hash ] options The info options.
      #
      # @option options [ String ] :filename Required name of the file.
      # @option options [ String ] :content_type The content type of the file.
      #   Deprecated, please use the metadata document instead.
      # @option options [ String ] :metadata Optional file metadata.
      # @option options [ Integer ] :chunk_size Override the default chunk
      #   size.
      # @option options [ Array ] :aliases A list of aliases.
      #   Deprecated, please use the metadata document instead.
      #
      # @since 2.0.0
      def initialize(data, options = {})
        options = options.merge(:length => data.size) unless options[:length]
        @info = Info.new(options)
        initialize_chunks!(data)
      end

      # Joins chunks into a string.
      #
      # @return [ String ] The raw data for the file.
      #
      # @since 2.0.0
      def data
        @data ||= Chunk.assemble(chunks)
      end

      # Gets a pretty inspection of the file.
      #
      # @example Get the file inspection.
      #   file.inspect
      #
      # @return [ String ] The file inspection.
      #
      # @since 2.0.0
      def inspect
        "#<Mongo::Grid::File:0x#{object_id} filename=#{filename}>"
      end

      private

      # @note If we have provided an array of BSON::Documents to initialize
      #   with, we have an array of chunk documents and need to create the
      #   chunk objects and assemble the data. If we have an IO object, then
      #   it's the original file data and we must split it into chunks and set
      #   the original data itself.
      #
      # @param [ IO, String, Array ] value The file object,
      #   file contents or chunk documents.
      #
      # @return [ Array ] Array of chunks.
      def initialize_chunks!(value)
        if value.is_a?(Array)
          @chunks = value.map{ |doc| Chunk.new(doc) }
        else
          @chunks = Chunk.split(value, info)
        end
      end
    end
  end
end
mongo-ruby-driver-2.21.3/lib/mongo/grid/file/000077500000000000000000000000001505113246500207215ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/lib/mongo/grid/file/chunk.rb000066400000000000000000000123631505113246500223630ustar00rootroot00000000000000# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2014-2020 MongoDB Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
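
# Chunk.split and Chunk.assemble are intended to be inverses; a minimal
# sketch, assuming `data` is a String and `file_info` is a
# Grid::File::Info for the file being stored:
#
#   chunks = Mongo::Grid::File::Chunk.split(data, file_info)
#   Mongo::Grid::File::Chunk.assemble(chunks) # == data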
module Mongo
  module Grid
    class File

      # Encapsulates behavior around GridFS chunks of file data.
      #
      # @since 2.0.0
      class Chunk

        # Name of the chunks collection.
        #
        # @since 2.0.0
        COLLECTION = 'chunks'.freeze

        # Default size for chunks of data.
        #
        # @since 2.0.0
        DEFAULT_SIZE = (255 * 1024).freeze

        # @return [ BSON::Document ] document The document to store for the
        #   chunk.
        attr_reader :document

        # Check chunk equality.
        #
        # @example Check chunk equality.
        #   chunk == other
        #
        # @param [ Object ] other The object to compare to.
        #
        # @return [ true, false ] If the objects are equal.
        #
        # @since 2.0.0
        def ==(other)
          return false unless other.is_a?(Chunk)
          document == other.document
        end

        # Get the BSON type for a chunk document.
        #
        # @example Get the BSON type.
        #   chunk.bson_type
        #
        # @return [ Integer ] The BSON type.
        #
        # @since 2.0.0
        def bson_type
          BSON::Hash::BSON_TYPE
        end

        # Get the chunk data.
        #
        # @example Get the chunk data.
        #   chunk.data
        #
        # @return [ BSON::Binary ] The chunk data.
        #
        # @since 2.0.0
        def data
          document[:data]
        end

        # Get the chunk id.
        #
        # @example Get the chunk id.
        #   chunk.id
        #
        # @return [ BSON::ObjectId ] The chunk id.
        #
        # @since 2.0.0
        def id
          document[:_id]
        end

        # Get the files id.
        #
        # @example Get the files id.
        #   chunk.files_id
        #
        # @return [ BSON::ObjectId ] The files id.
        #
        # @since 2.0.0
        def files_id
          document[:files_id]
        end

        # Get the chunk position.
        #
        # @example Get the chunk position.
        #   chunk.n
        #
        # @return [ Integer ] The chunk position.
        #
        # @since 2.0.0
        def n
          document[:n]
        end

        # Create the new chunk.
        #
        # @example Create the chunk.
        #   Chunk.new(document)
        #
        # @param [ BSON::Document ] document The document to create the chunk
        #   from.
        #
        # @since 2.0.0
        def initialize(document)
          @document = BSON::Document.new(:_id => BSON::ObjectId.new).merge(document)
        end

        # Convert the chunk to BSON for storage.
        #
        # @example Convert the chunk to BSON.
        #   chunk.to_bson
        #
        # @param [ BSON::ByteBuffer ] buffer The encoded BSON buffer to append to.
        # @param [ true, false ] validating_keys Whether keys should be validated when serializing.
        #   This option is deprecated and will not be used. It will be removed in version 3.0.
        #
        # @return [ String ] The raw BSON data.
        #
        # @since 2.0.0
        def to_bson(buffer = BSON::ByteBuffer.new, validating_keys = nil)
          document.to_bson(buffer)
        end

        class << self

          # Takes an array of chunks and assembles them back into the full
          # piece of raw data.
          #
          # @example Assemble the chunks.
          #   Chunk.assemble(chunks)
          #
          # @param [ Array ] chunks The chunks.
          #
          # @return [ String ] The assembled data.
          #
          # @since 2.0.0
          # @api private
          def assemble(chunks)
            chunks.reduce(+''){ |data, chunk| data << chunk.data.data }
          end

          # Split the provided data into multiple chunks.
          #
          # @example Split the data into chunks.
          #   Chunk.split(data)
          #
          # @param [ String, IO ] io The raw bytes.
          # @param [ File::Info ] file_info The files collection file doc.
          # @param [ Integer ] offset The offset.
          #
          # @return [ Array ] The chunks of the data.
          #
          # @since 2.0.0
          # @api private
          def split(io, file_info, offset = 0)
            io = StringIO.new(io) if io.is_a?(String)
            parts = Enumerator.new { |y| y << io.read(file_info.chunk_size) until io.eof? }
            parts.map.with_index do |bytes, n|
              file_info.update_md5(bytes)
              Chunk.new(
                data: BSON::Binary.new(bytes),
                files_id: file_info.id,
                n: n + offset
              )
            end
          end
        end
      end
    end
  end
end
mongo-ruby-driver-2.21.3/lib/mongo/grid/file/info.rb000066400000000000000000000175201505113246500222060ustar00rootroot00000000000000# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2014-2020 MongoDB Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

module Mongo
  module Grid
    class File

      # Encapsulates behavior around GridFS files collection file document.
      #
      # @since 2.0.0
      #
      # @deprecated Please use the 'stream' API on a FSBucket instead.
      #   Will be removed in driver version 3.0.
      class Info

        # Name of the files collection.
        #
        # @since 2.0.0
        COLLECTION = 'files'.freeze

        # Mappings of user supplied fields to db specification.
        #
        # @since 2.0.0
        MAPPINGS = {
          :chunk_size => :chunkSize,
          :content_type => :contentType,
          :filename => :filename,
          :_id => :_id,
          :md5 => :md5,
          :length => :length,
          :metadata => :metadata,
          :upload_date => :uploadDate,
          :aliases => :aliases
        }.freeze

        # Default content type for stored files.
        #
        # @since 2.0.0
        DEFAULT_CONTENT_TYPE = 'binary/octet-stream'.freeze

        # @return [ BSON::Document ] document The files collection document.
        attr_reader :document

        # Is this file information document equal to another?
        #
        # @example Check file information document equality.
        #   file_info == other
        #
        # @param [ Object ] other The object to check against.
        #
        # @return [ true, false ] If the objects are equal.
        #
        # @since 2.0.0
        def ==(other)
          return false unless other.is_a?(Info)
          document == other.document
        end

        # Get the BSON type for a files information document.
        #
        # @example Get the BSON type.
        #   file_info.bson_type
        #
        # @return [ Integer ] The BSON type.
        #
        # @since 2.0.0
        def bson_type
          BSON::Hash::BSON_TYPE
        end

        # Get the file chunk size.
        #
        # @example Get the chunk size.
        #   file_info.chunk_size
        #
        # @return [ Integer ] The chunksize in bytes.
        #
        # @since 2.0.0
        def chunk_size
          document[:chunkSize]
        end

        # Get the file information content type.
        #
        # @example Get the content type.
        #   file_info.content_type
        #
        # @return [ String ] The content type.
        #
        # @since 2.0.0
        def content_type
          document[:contentType]
        end

        # Get the filename from the file information.
        #
        # @example Get the filename.
        #   file_info.filename
        #
        # @return [ String ] The filename.
        def filename
          document[:filename]
        end

        # Get the file id from the file information.
        #
        # @example Get the file id.
        #   file_info.id
        #
        # @return [ BSON::ObjectId ] The file id.
        #
        # @since 2.0.0
        def id
          document[:_id]
        end

        # Create the new file information document.
        #
        # @example Create the new file information document.
        #   Info.new(:filename => 'test.txt')
        #
        # @param [ BSON::Document ] document The document to create from.
        #
        # @since 2.0.0
        def initialize(document)
          @client_md5 = Digest::MD5.new unless document[:disable_md5] == true
          # document contains a mix of user options and keys added
          # internally by the driver, like session.
          # Remove the keys that driver adds but keep user options.
          document = document.reject do |key, value|
            key.to_s == 'session'
          end
          @document = default_document.merge(Options::Mapper.transform(document, MAPPINGS))
        end

        # Get a readable inspection for the object.
        #
        # @example Inspect the file information.
        #   file_info.inspect
        #
        # @return [ String ] The nice inspection.
        #
        # @since 2.0.0
        def inspect
          "#<Mongo::Grid::File::Info:0x#{object_id} chunk_size=#{chunk_size} " +
            "filename=#{filename} content_type=#{content_type} id=#{id} md5=#{md5}>"
        end

        # Get the length of the document in bytes.
        #
        # @example Get the file length from the file information document.
        #   file_info.length
        #
        # @return [ Integer ] The file length.
        #
        # @since 2.0.0
        def length
          document[:length]
        end
        alias :size :length

        # Get the additional metadata from the file information document.
        #
        # @example Get additional metadata.
        #   file_info.metadata
        #
        # @return [ String ] The additional metadata from the file information document.
        #
        # @since 2.0.0
        def metadata
          document[:metadata]
        end

        # Get the md5 hash.
        #
        # @example Get the md5 hash.
        #   file_info.md5
        #
        # @return [ String ] The md5 hash as a string.
        #
        # @since 2.0.0
        #
        # @deprecated as of 2.6.0
        def md5
          document[:md5] || @client_md5
        end

        # Update the md5 hash if there is one.
        #
        # @example Update the md5 hash.
        #   file_info.update_md5(bytes)
        #
        # @note This method is transitional and is provided for backwards compatibility.
        #   It will be removed when md5 support is deprecated entirely.
        #
        # @param [ String ] bytes The bytes to use to update the digest.
        #
        # @return [ Digest::MD5 ] The md5 hash object.
        #
        # @since 2.6.0
        #
        # @deprecated as of 2.6.0
        def update_md5(bytes)
          md5.update(bytes) if md5
        end

        # Convert the file information document to BSON for storage.
        #
        # @note If no md5 exists in the file information document (it was loaded
        #   from the server and is not a new file) then we digest the md5 and set it.
        #
        # @example Convert the file information document to BSON.
        #   file_info.to_bson
        #
        # @param [ BSON::ByteBuffer ] buffer The encoded BSON buffer to append to.
        # @param [ true, false ] validating_keys Whether keys should be validated when serializing.
        #   This option is deprecated and will not be used. It will be removed in version 3.0.
        #
        # @return [ String ] The raw BSON data.
        #
        # @since 2.0.0
        def to_bson(buffer = BSON::ByteBuffer.new, validating_keys = nil)
          if @client_md5 && !document[:md5]
            document[:md5] = @client_md5.hexdigest
          end
          document.to_bson(buffer)
        end

        # Get the upload date.
        #
        # @example Get the upload date.
        #   file_info.upload_date
        #
        # @return [ Time ] The upload date.
        #
        # @since 2.0.0
        def upload_date
          document[:uploadDate]
        end

        private

        def default_document
          BSON::Document.new(
            :_id => BSON::ObjectId.new,
            :chunkSize => Chunk::DEFAULT_SIZE,
            # MongoDB stores times with millisecond precision
            :uploadDate => Time.now.utc.round(3),
            :contentType => DEFAULT_CONTENT_TYPE
          )
        end
      end
    end
  end
end
mongo-ruby-driver-2.21.3/lib/mongo/grid/fs_bucket.rb000066400000000000000000000512501505113246500222770ustar00rootroot00000000000000# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2014-2020 MongoDB Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

module Mongo
  module Grid

    # Represents a view of the GridFS in the database.
    #
    # @since 2.0.0
    class FSBucket
      extend Forwardable

      # The default root prefix.
      #
      # @since 2.0.0
      DEFAULT_ROOT = 'fs'.freeze

      # The specification for the chunks collection index.
      #
      # @since 2.0.0
      CHUNKS_INDEX = { :files_id => 1, :n => 1 }.freeze

      # The specification for the files collection index.
      #
      # @since 2.1.0
      FILES_INDEX = { filename: 1, uploadDate: 1 }.freeze

      # Create the GridFS.
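      #
      # For instance, a bucket using a custom prefix and chunk size could
      # be built as below (an illustrative sketch; 'images' is a made-up
      # prefix):
      #
      #   Grid::FSBucket.new(database, bucket_name: 'images', chunk_size: 255 * 1024)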
# # @example Create the GridFS. # Grid::FSBucket.new(database) # # @param [ Database ] database The database the files reside in. # @param [ Hash ] options The GridFS options. # # @option options [ String ] :bucket_name The prefix for the files and chunks # collections. # @option options [ Integer ] :chunk_size Override the default chunk # size. # @option options [ String ] :fs_name The prefix for the files and chunks # collections. # @option options [ Hash ] :read The read preference options. The hash # may have the following items: # - *:mode* -- read preference specified as a symbol; valid values are # *:primary*, *:primary_preferred*, *:secondary*, *:secondary_preferred* # and *:nearest*. # - *:tag_sets* -- an array of hashes. # - *:local_threshold*. # @option options [ Session ] :session The session to use. # @option options [ Hash ] :write Deprecated. Equivalent to :write_concern # option. # @option options [ Hash ] :write_concern The write concern options. # Can be :w => Integer|String, :fsync => Boolean, :j => Boolean. # # @since 2.0.0 def initialize(database, options = {}) @database = database @options = options.dup =begin WriteConcern object support if @options[:write_concern].is_a?(WriteConcern::Base) # Cache the instance so that we do not needlessly reconstruct it. @write_concern = @options[:write_concern] @options[:write_concern] = @write_concern.options end =end @options.freeze @chunks_collection = database[chunks_name] @files_collection = database[files_name] end # @return [ Collection ] chunks_collection The chunks collection. # # @since 2.0.0 attr_reader :chunks_collection # @return [ Database ] database The database. # # @since 2.0.0 attr_reader :database # @return [ Collection ] files_collection The files collection. # # @since 2.0.0 attr_reader :files_collection # @return [ Hash ] options The FSBucket options. # # @since 2.1.0 attr_reader :options # Get client from the database. # # @since 2.1.0 def_delegators :database, :client # Find files collection documents matching a given selector. # # @example Find files collection documents by a filename. # fs.find(filename: 'file.txt') # # @param [ Hash ] selector The selector to use in the find. # @param [ Hash ] options The options for the find. # # @option options [ true, false ] :allow_disk_use Whether the server can # write temporary data to disk while executing the find operation. # @option options [ Integer ] :batch_size The number of documents returned in each batch # of results from MongoDB. # @option options [ Integer ] :limit The max number of docs to return from the query. # @option options [ true, false ] :no_cursor_timeout The server normally times out idle # cursors after an inactivity period (10 minutes) to prevent excess memory use. # Set this option to prevent that. # @option options [ Integer ] :skip The number of docs to skip before returning results. # @option options [ Hash ] :sort The key and direction pairs by which the result set # will be sorted. # # @return [ CollectionView ] The collection view. # # @since 2.1.0 def find(selector = nil, options = {}) opts = options.merge(read: read_preference) if read_preference files_collection.find(selector, opts || options) end # Find a file in the GridFS. # # @example Find a file by its id. # fs.find_one(_id: id) # # @example Find a file by its filename. # fs.find_one(filename: 'test.txt') # # @param [ Hash ] selector The selector. # # @return [ Grid::File ] The file. # # @since 2.0.0 # # @deprecated Please use #find instead with a limit of -1. 
# Will be removed in version 3.0. def find_one(selector = nil) file_info = files_collection.find(selector).first return nil unless file_info chunks = chunks_collection.find(:files_id => file_info[:_id]).sort(:n => 1) Grid::File.new(chunks.to_a, Options::Mapper.transform(file_info, Grid::File::Info::MAPPINGS.invert)) end # Insert a single file into the GridFS. # # @example Insert a single file. # fs.insert_one(file) # # @param [ Grid::File ] file The file to insert. # # @return [ BSON::ObjectId ] The file id. # # @since 2.0.0 # # @deprecated Please use #upload_from_stream or #open_upload_stream instead. # Will be removed in version 3.0. def insert_one(file) @indexes ||= ensure_indexes! chunks_collection.insert_many(file.chunks) files_collection.insert_one(file.info) file.id end # Get the prefix for the GridFS # # @example Get the prefix. # fs.prefix # # @return [ String ] The GridFS prefix. # # @since 2.0.0 def prefix @options[:fs_name] || @options[:bucket_name] || DEFAULT_ROOT end # Remove a single file from the GridFS. # # @example Remove a file from the GridFS. # fs.delete_one(file) # # @param [ Grid::File ] file The file to remove. # # @return [ Result ] The result of the remove. # # @since 2.0.0 def delete_one(file, opts = {}) delete(file.id, opts) end # Remove a single file, identified by its id from the GridFS. # # @example Remove a file from the GridFS. # fs.delete(id) # # @param [ BSON::ObjectId, Object ] id The id of the file to remove. # # @return [ Result ] The result of the remove. # # @raise [ Error::FileNotFound ] If the file is not found. # # @since 2.1.0 def delete(id, opts = {}) timeout_holder = CsotTimeoutHolder.new(operation_timeouts: operation_timeouts(opts)) result = files_collection .find({ :_id => id }, @options.merge(timeout_ms: timeout_holder.remaining_timeout_ms)) .delete_one(timeout_ms: timeout_holder.remaining_timeout_ms) chunks_collection .find({ :files_id => id }, @options.merge(timeout_ms: timeout_holder.remaining_timeout_ms)) .delete_many(timeout_ms: timeout_holder.remaining_timeout_ms) raise Error::FileNotFound.new(id, :id) if result.n == 0 result end # Opens a stream from which a file can be downloaded, specified by id. # # @example Open a stream from which a file can be downloaded. # fs.open_download_stream(id) # # @param [ BSON::ObjectId, Object ] id The id of the file to read. # @param [ Hash ] options The options. # # @option options [ BSON::Document ] :file_info_doc For internal # driver use only. A BSON document to use as file information. # # @return [ Stream::Read ] The stream to read from. # # @yieldparam [ Hash ] The read stream. # # @since 2.1.0 def open_download_stream(id, options = nil) options = Utils.shallow_symbolize_keys(options || {}) read_stream(id, **options).tap do |stream| if block_given? begin yield stream ensure stream.close end end end end # Downloads the contents of the file specified by id and writes them to # the destination io object. # # @example Download the file and write it to the io object. # fs.download_to_stream(id, io) # # @param [ BSON::ObjectId, Object ] id The id of the file to read. # @param [ IO ] io The io object to write to. # # @since 2.1.0 def download_to_stream(id, io) open_download_stream(id) do |stream| stream.each do |chunk| io << chunk end end end # Opens a stream from which the application can read the contents of the stored file # specified by filename and the revision in options. 
# # Revision numbers are defined as follows: # 0 = the original stored file # 1 = the first revision # 2 = the second revision # etc… # -2 = the second most recent revision # -1 = the most recent revision # # @example Open a stream to download the most recent revision. # fs.open_download_stream_by_name('some-file.txt') # # # @example Open a stream to download the original file. # fs.open_download_stream_by_name('some-file.txt', revision: 0) # # @example Open a stream to download the second revision of the stored file. # fs.open_download_stream_by_name('some-file.txt', revision: 2) # # @param [ String ] filename The file's name. # @param [ Hash ] opts Options for the download. # # @option opts [ Integer ] :revision The revision number of the file to download. # Defaults to -1, the most recent version. # # @return [ Stream::Read ] The stream to read from. # # @raise [ Error::FileNotFound ] If the file is not found. # @raise [ Error::InvalidFileRevision ] If the requested revision is not found for the file. # # @yieldparam [ Hash ] The read stream. # # @since 2.1.0 def open_download_stream_by_name(filename, opts = {}, &block) revision = opts.fetch(:revision, -1) if revision < 0 skip = revision.abs - 1 sort = { 'uploadDate' => Mongo::Index::DESCENDING } else skip = revision sort = { 'uploadDate' => Mongo::Index::ASCENDING } end file_info_doc = files_collection.find({ filename: filename} , sort: sort, skip: skip, limit: -1).first unless file_info_doc raise Error::FileNotFound.new(filename, :filename) unless opts[:revision] raise Error::InvalidFileRevision.new(filename, opts[:revision]) end open_download_stream(file_info_doc[:_id], file_info_doc: file_info_doc, &block) end # Downloads the contents of the stored file specified by filename and by the # revision in options and writes the contents to the destination io object. # # Revision numbers are defined as follows: # 0 = the original stored file # 1 = the first revision # 2 = the second revision # etc… # -2 = the second most recent revision # -1 = the most recent revision # # @example Download the most recent revision. # fs.download_to_stream_by_name('some-file.txt', io) # # # @example Download the original file. # fs.download_to_stream_by_name('some-file.txt', io, revision: 0) # # @example Download the second revision of the stored file. # fs.download_to_stream_by_name('some-file.txt', io, revision: 2) # # @param [ String ] filename The file's name. # @param [ IO ] io The io object to write to. # @param [ Hash ] opts Options for the download. # # @option opts [ Integer ] :revision The revision number of the file to download. # Defaults to -1, the most recent version. # # @raise [ Error::FileNotFound ] If the file is not found. # @raise [ Error::InvalidFileRevision ] If the requested revision is not found for the file. # # @since 2.1.0 def download_to_stream_by_name(filename, io, opts = {}) download_to_stream(open_download_stream_by_name(filename, opts).file_id, io) end # Opens an upload stream to GridFS to which the contents of a file or # blob can be written. # # @param [ String ] filename The name of the file in GridFS. # @param [ Hash ] opts The options for the write stream. # # @option opts [ Object ] :file_id An optional unique file id. # A BSON::ObjectId is automatically generated if a file id is not # provided. # @option opts [ Integer ] :chunk_size Override the default chunk size. # @option opts [ Hash ] :metadata User data for the 'metadata' field of the files # collection document. 
# @option opts [ String ] :content_type The content type of the file. # Deprecated, please use the metadata document instead. # @option opts [ Array ] :aliases A list of aliases. # Deprecated, please use the metadata document instead. # @option options [ Hash ] :write Deprecated. Equivalent to :write_concern # option. # @option options [ Hash ] :write_concern The write concern options. # Can be :w => Integer|String, :fsync => Boolean, :j => Boolean. # # @return [ Stream::Write ] The write stream. # # @yieldparam [ Hash ] The write stream. # # @since 2.1.0 def open_upload_stream(filename, opts = {}) opts = Utils.shallow_symbolize_keys(opts) write_stream(filename, **opts).tap do |stream| if block_given? begin yield stream ensure stream.close end end end end # Uploads a user file to a GridFS bucket. # Reads the contents of the user file from the source stream and uploads it as chunks in the # chunks collection. After all the chunks have been uploaded, it creates a files collection # document for the filename in the files collection. # # @example Upload a file to the GridFS bucket. # fs.upload_from_stream('a-file.txt', file) # # @param [ String ] filename The filename of the file to upload. # @param [ IO ] io The source io stream to upload from. # @param [ Hash ] opts The options for the write stream. # # @option opts [ Object ] :file_id An optional unique file id. An ObjectId is generated otherwise. # @option opts [ Integer ] :chunk_size Override the default chunk size. # @option opts [ Hash ] :metadata User data for the 'metadata' field of the files # collection document. # @option opts [ String ] :content_type The content type of the file. Deprecated, please # use the metadata document instead. # @option opts [ Array ] :aliases A list of aliases. Deprecated, please use the # metadata document instead. # @option options [ Hash ] :write Deprecated. Equivalent to :write_concern # option. # @option options [ Hash ] :write_concern The write concern options. # Can be :w => Integer|String, :fsync => Boolean, :j => Boolean. # # @return [ BSON::ObjectId ] The ObjectId file id. # # @since 2.1.0 def upload_from_stream(filename, io, opts = {}) open_upload_stream(filename, opts) do |stream| begin stream.write(io) # IOError and SystemCallError are for errors reading the io. # Error::SocketError and Error::SocketTimeoutError are for # writing to MongoDB. rescue IOError, SystemCallError, Error::SocketError, Error::SocketTimeoutError begin stream.abort rescue Error::OperationFailure end raise end end.file_id end # Get the read preference. # # @note This method always returns a BSON::Document instance, even though # the FSBucket constructor specifies the type of :read as a Hash, not # as a BSON::Document. # # @return [ BSON::Document ] The read preference. # The document may have the following fields: # - *:mode* -- read preference specified as a symbol; valid values are # *:primary*, *:primary_preferred*, *:secondary*, *:secondary_preferred* # and *:nearest*. # - *:tag_sets* -- an array of hashes. # - *:local_threshold*. def read_preference @read_preference ||= begin pref = options[:read] || database.read_preference if BSON::Document === pref pref else BSON::Document.new(pref) end end end # Get the write concern. # # @example Get the write concern. # stream.write_concern # # @return [ Mongo::WriteConcern ] The write concern. 
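      #   As implemented below, an explicit :write_concern (or legacy
      #   :write) option on the bucket takes precedence; otherwise the
      #   database's write concern is used. For example, a bucket created
      #   with write_concern: { w: 2 } would return the concern for w: 2.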
# # @since 2.1.0 def write_concern @write_concern ||= if wco = @options[:write_concern] || @options[:write] WriteConcern.get(wco) else database.write_concern end end # Drop the collections that implement this bucket. def drop(opts = {}) context = Operation::Context.new(operation_timeouts: operation_timeouts(opts)) files_collection.drop(timeout_ms: context.remaining_timeout_ms) chunks_collection.drop(timeout_ms: context.remaining_timeout_ms) end private # @param [ Hash ] opts The options. # # @option opts [ BSON::Document ] :file_info_doc For internal # driver use only. A BSON document to use as file information. def read_stream(id, **opts) Stream.get(self, Stream::READ_MODE, { file_id: id }.update(options).update(opts)) end def write_stream(filename, **opts) Stream.get(self, Stream::WRITE_MODE, { filename: filename }.update(options).update(opts)) end def chunks_name "#{prefix}.#{Grid::File::Chunk::COLLECTION}" end def files_name "#{prefix}.#{Grid::File::Info::COLLECTION}" end def ensure_indexes!(timeout_holder = nil) fc_idx = files_collection.find( {}, limit: 1, projection: { _id: 1 }, timeout_ms: timeout_holder&.remaining_timeout_ms ).first if fc_idx.nil? create_index_if_missing!(files_collection, FSBucket::FILES_INDEX) end cc_idx = chunks_collection.find( {}, limit: 1, projection: { _id: 1 }, timeout_ms: timeout_holder&.remaining_timeout_ms ).first if cc_idx.nil? create_index_if_missing!(chunks_collection, FSBucket::CHUNKS_INDEX, :unique => true) end end def create_index_if_missing!(collection, index_spec, options = {}) indexes_view = collection.indexes begin if indexes_view.get(index_spec).nil? indexes_view.create_one(index_spec, options) end rescue Mongo::Error::OperationFailure::Family => e # proceed with index creation if a NamespaceNotFound error is thrown if e.code == 26 indexes_view.create_one(index_spec, options) else raise end end end # @return [ Hash ] timeout_ms value set on the operation level (if any), # and/or timeout_ms that is set on collection/database/client level (if any). # # @api private def operation_timeouts(opts = {}) # TODO: We should re-evaluate if we need two timeouts separately. {}.tap do |result| if opts[:timeout_ms].nil? result[:inherited_timeout_ms] = database.timeout_ms else result[:operation_timeout_ms] = opts[:timeout_ms] end end end end end end mongo-ruby-driver-2.21.3/lib/mongo/grid/stream.rb000066400000000000000000000035551505113246500216320ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'mongo/grid/stream/read' require 'mongo/grid/stream/write' module Mongo module Grid class FSBucket # A stream that reads and writes files from/to the FSBucket. # # @since 2.1.0 module Stream extend self # The symbol for opening a read stream. # # @since 2.1.0 READ_MODE = :r # The symbol for opening a write stream. # # @since 2.1.0 WRITE_MODE = :w # Mapping from mode to stream class. 
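        #
        # In effect, MODE_MAP[:r] resolves to Stream::Read and MODE_MAP[:w]
        # to Stream::Write, which lets Stream.get dispatch on the mode
        # symbol alone.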
# # @since 2.1.0 MODE_MAP = { READ_MODE => Read, WRITE_MODE => Write }.freeze # Get a stream for reading/writing files from/to the FSBucket. # # @example Get a stream. # FSBucket::Stream.get(fs, FSBucket::READ_MODE, options) # # @param [ FSBucket ] fs The GridFS bucket object. # @param [ FSBucket::READ_MODE, FSBucket::WRITE_MODE ] mode The stream mode. # @param [ Hash ] options The stream options. # # @return [ Stream::Read, Stream::Write ] The stream object. # # @since 2.1.0 def get(fs, mode, options = {}) MODE_MAP[mode].new(fs, options) end end end end end mongo-ruby-driver-2.21.3/lib/mongo/grid/stream/000077500000000000000000000000001505113246500212755ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/lib/mongo/grid/stream/read.rb000066400000000000000000000204021505113246500225330ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Grid class FSBucket module Stream # A stream that reads files from the FSBucket. # # @since 2.1.0 class Read include Enumerable # @return [ FSBucket ] fs The fs bucket from which this stream reads. # # @since 2.1.0 attr_reader :fs # @return [ Hash ] options The stream options. # # @since 2.1.0 attr_reader :options # @return [ BSON::ObjectId, Object ] file_id The id of the file being read. # # @since 2.1.0 attr_reader :file_id # Create a stream for reading files from the FSBucket. # # @example Create the stream. # Stream::Read.new(fs, options) # # @param [ FSBucket ] fs The GridFS bucket object. # @param [ Hash ] options The read stream options. # # @option options [ BSON::Document ] :file_info_doc For internal # driver use only. A BSON document to use as file information. # # @since 2.1.0 def initialize(fs, options) @fs = fs @options = options.dup @file_id = @options.delete(:file_id) @options.freeze @open = true @timeout_holder = CsotTimeoutHolder.new( operation_timeouts: { operation_timeout_ms: options[:timeout_ms], inherited_timeout_ms: fs.database.timeout_ms } ) end # Iterate through chunk data streamed from the FSBucket. # # @example Iterate through the chunk data. # stream.each do |data| # buffer << data # end # # @return [ Enumerator ] The enumerator. # # @raise [ Error::MissingFileChunk ] If a chunk is found out of sequence. # # @yieldparam [ Hash ] Each chunk of file data. # # @since 2.1.0 def each ensure_readable! info = file_info num_chunks = (info.length + info.chunk_size - 1) / info.chunk_size num_read = 0 if block_given? view.each_with_index.reduce(0) do |length_read, (doc, index)| chunk = Grid::File::Chunk.new(doc) validate!(index, num_chunks, chunk, length_read) data = chunk.data.data yield data num_read += 1 length_read += data.size end.tap do if num_read < num_chunks raise Error::MissingFileChunk.new(num_chunks, num_read) end end else view.to_enum end end # Read all file data. # # @example Read the file data. # stream.read # # @return [ String ] The file data. 
          #
          # @raise [ Error::MissingFileChunk ] If a chunk is found out of sequence.
          #
          # @since 2.1.0
          def read
            to_a.join
          end

          # Close the read stream.
          #
          # If the stream is already closed, this method does nothing.
          #
          # @example Close the stream.
          #   stream.close
          #
          # @return [ BSON::ObjectId, Object ] The file id.
          #
          # @since 2.1.0
          def close
            if @open
              view.close_query
              @open = false
            end
            file_id
          end

          # Is the stream closed.
          #
          # @example Is the stream closed.
          #   stream.closed?
          #
          # @return [ true, false ] Whether the stream is closed.
          #
          # @since 2.1.0
          def closed?
            !@open
          end

          # Get the read preference.
          #
          # @note This method always returns a BSON::Document instance, even
          #   though the constructor specifies the type of :read as a Hash, not
          #   as a BSON::Document.
          #
          # @return [ BSON::Document ] The read preference.
          #   The document may have the following fields:
          #   - *:mode* -- read preference specified as a symbol; valid values are
          #     *:primary*, *:primary_preferred*, *:secondary*, *:secondary_preferred*
          #     and *:nearest*.
          #   - *:tag_sets* -- an array of hashes.
          #   - *:local_threshold*.
          def read_preference
            @read_preference ||= begin
              pref = options[:read] || fs.read_preference
              if BSON::Document === pref
                pref
              else
                BSON::Document.new(pref)
              end
            end
          end

          # Get the files collection file information document for the file
          # being read.
          #
          # @note The file information is cached in the stream. Subsequent
          #   calls to file_info will return the same information that the
          #   first call returned, and will not query the database again.
          #
          # @return [ File::Info ] The file information object.
          #
          # @since 2.1.0
          def file_info
            @file_info ||= begin
              doc = options[:file_info_doc] ||
                fs.files_collection.find(
                  { _id: file_id },
                  { timeout_ms: @timeout_holder.remaining_timeout_ms! }
                ).first
              if doc
                File::Info.new(Options::Mapper.transform(doc, File::Info::MAPPINGS.invert))
              else
                nil
              end
            end
          end

          private

          def ensure_open!
            raise Error::ClosedStream.new if closed?
          end

          def ensure_file_info!
            raise Error::FileNotFound.new(file_id, :id) unless file_info
          end

          def ensure_readable!
            ensure_open!
            ensure_file_info!
          end

          def view
            @view ||= begin
              opts = if read_preference
                options.merge(read: read_preference)
              else
                options
              end
              if @timeout_holder.csot?
                opts[:timeout_ms] = @timeout_holder.remaining_timeout_ms!
                opts[:timeout_mode] = :cursor_lifetime
              end
              fs.chunks_collection.find({ :files_id => file_id }, opts).sort(:n => 1)
            end
          end

          def validate!(index, num_chunks, chunk, length_read)
            validate_n!(index, chunk)
            validate_length!(index, num_chunks, chunk, length_read)
          end

          def raise_unexpected_chunk_length!(chunk)
            close
            raise Error::UnexpectedChunkLength.new(file_info.chunk_size, chunk)
          end

          def validate_length!(index, num_chunks, chunk, length_read)
            if num_chunks > 0 && chunk.data.data.size > 0
              raise Error::ExtraFileChunk.new unless index < num_chunks
              if index == num_chunks - 1
                unless chunk.data.data.size + length_read == file_info.length
                  raise_unexpected_chunk_length!(chunk)
                end
              elsif chunk.data.data.size != file_info.chunk_size
                raise_unexpected_chunk_length!(chunk)
              end
            end
          end

          def validate_n!(index, chunk)
            unless index == chunk.n
              close
              raise Error::MissingFileChunk.new(index, chunk)
            end
          end
        end
      end
    end
  end
end
mongo-ruby-driver-2.21.3/lib/mongo/grid/stream/write.rb000066400000000000000000000163651505113246500227660ustar00rootroot00000000000000# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2014-2020 MongoDB Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Grid class FSBucket module Stream # A stream that writes files to the FSBucket. # # @since 2.1.0 class Write # @return [ FSBucket ] fs The fs bucket to which this stream writes. # # @since 2.1.0 attr_reader :fs # @return [ Object ] file_id The id of the file being uploaded. # # @since 2.1.0 attr_reader :file_id # @return [ String ] filename The name of the file being uploaded. # # @since 2.1.0 attr_reader :filename # @return [ Hash ] options The write stream options. # # @since 2.1.0 attr_reader :options # Create a stream for writing files to the FSBucket. # # @example Create the stream. # Stream::Write.new(fs, options) # # @param [ FSBucket ] fs The GridFS bucket object. # @param [ Hash ] options The write stream options. # # @option options [ Object ] :file_id The file id. An ObjectId # is generated if the file id is not provided. # @option opts [ Integer ] :chunk_size Override the default chunk size. # @option opts [ Hash ] :metadata User data for the 'metadata' field of the files collection document. # @option opts [ String ] :content_type The content type of the file. # Deprecated, please use the metadata document instead. # @option opts [ Array ] :aliases A list of aliases. # Deprecated, please use the metadata document instead. # @option options [ Hash ] :write Deprecated. Equivalent to :write_concern # option. # @option options [ Hash ] :write_concern The write concern options. # Can be :w => Integer|String, :fsync => Boolean, :j => Boolean. # # @since 2.1.0 def initialize(fs, options) @fs = fs @length = 0 @n = 0 @file_id = options[:file_id] || BSON::ObjectId.new @options = options.dup =begin WriteConcern object support if @options[:write_concern].is_a?(WriteConcern::Base) # Cache the instance so that we do not needlessly reconstruct it. @write_concern = @options[:write_concern] @options[:write_concern] = @write_concern.options end =end @options.freeze @filename = @options[:filename] @open = true @timeout_holder = CsotTimeoutHolder.new( operation_timeouts: { operation_timeout_ms: options[:timeout_ms], inherited_timeout_ms: fs.database.timeout_ms } ) end # Write to the GridFS bucket from the source stream or a string. # # @example Write to GridFS. # stream.write(io) # # @param [ String | IO ] io The string or IO object to upload from. # # @return [ Stream::Write ] self The write stream itself. # # @since 2.1.0 def write(io) ensure_open! @indexes ||= ensure_indexes! @length += if io.respond_to?(:bytesize) # String objects io.bytesize else # IO objects io.size end chunks = File::Chunk.split(io, file_info, @n) @n += chunks.size unless chunks.empty? chunks_collection.insert_many( chunks, timeout_ms: @timeout_holder.remaining_timeout_ms! ) end self end # Close the write stream. # # @example Close the stream. # stream.close # # @return [ BSON::ObjectId, Object ] The file id. # # @raise [ Error::ClosedStream ] If the stream is already closed. # # @since 2.1.0 def close ensure_open! update_length files_collection.insert_one( file_info, @options.merge(timeout_ms: @timeout_holder.remaining_timeout_ms!) ) @open = false file_id end # Get the write concern used when uploading. 
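          #
          # As implemented below, a :write_concern (or legacy :write) option
          # passed to the stream takes precedence; otherwise the bucket's
          # write concern is used.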
# # @example Get the write concern. # stream.write_concern # # @return [ Mongo::WriteConcern ] The write concern. # # @since 2.1.0 def write_concern @write_concern ||= if wco = @options[:write_concern] || @options[:write] WriteConcern.get(wco) else fs.write_concern end end # Is the stream closed. # # @example Is the stream closed. # stream.closed? # # @return [ true, false ] Whether the stream is closed. # # @since 2.1.0 def closed? !@open end # Abort the upload by deleting all chunks already inserted. # # @example Abort the write operation. # stream.abort # # @return [ true ] True if the operation was aborted and the stream is closed. # # @since 2.1.0 def abort fs.chunks_collection.find( { :files_id => file_id }, @options.merge(timeout_ms: @timeout_holder.remaining_timeout_ms!) ).delete_many (@open = false) || true end private def chunks_collection with_write_concern(fs.chunks_collection) end def files_collection with_write_concern(fs.files_collection) end def with_write_concern(collection) if write_concern.nil? || (collection.write_concern && collection.write_concern.options == write_concern.options) then collection else collection.with(write: write_concern.options) end end def update_length file_info.document[:length] = @length end def file_info doc = { length: @length, _id: file_id, filename: filename } @file_info ||= File::Info.new(options.merge(doc)) end def ensure_indexes! fs.send(:ensure_indexes!, @timeout_holder) end def ensure_open! raise Error::ClosedStream.new if closed? end end end end end end mongo-ruby-driver-2.21.3/lib/mongo/id.rb000066400000000000000000000040031505113246500177730ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2016-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the 'License'); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an 'AS IS' BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo # This module abstracts the functionality for generating sequential # unique integer IDs for instances of the class. It defines the method # #next_id on the class that includes it. The implementation ensures that # the IDs will be unique even when called from multiple threads. # # @example Include the Id module. # class Foo # include Mongo::Id # end # # f = Foo.new # foo.next_id # => 1 # foo.next_id # => 2 # # Classes which include Id should _not_ access `@@id` or `@@id_lock` # directly; instead, they should call `#next_id` in `#initialize` and save # the result in the instance being created. # # @example Save the ID in the instance of the including class. 
  #   class Bar
  #     include Mongo::Id
  #
  #     attr_reader :id
  #
  #     def initialize
  #       @id = self.class.next_id
  #     end
  #   end
  #
  #   a = Bar.new
  #   a.id # => 1
  #   b = Bar.new
  #   b.id # => 2
  #
  # @since 2.7.0
  # @api private
  module Id
    def self.included(klass)
      klass.class_variable_set(:@@id, 0)
      klass.class_variable_set(:@@id_lock, Mutex.new)

      klass.define_singleton_method(:next_id) do
        klass.class_variable_get(:@@id_lock).synchronize do
          id = class_variable_get(:@@id)
          klass.class_variable_set(:@@id, id + 1)
          klass.class_variable_get(:@@id)
        end
      end
    end
  end
end
mongo-ruby-driver-2.21.3/lib/mongo/index.rb000066400000000000000000000030761505113246500205170ustar00rootroot00000000000000# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2014-2020 MongoDB Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

require 'mongo/index/view'

module Mongo

  # Contains constants for indexing purposes.
  #
  # @since 2.0.0
  module Index

    # Wildcard constant for all.
    #
    # @since 2.1.0
    ALL = '*'.freeze

    # Specify ascending order for an index.
    #
    # @since 2.0.0
    ASCENDING = 1

    # Specify descending order for an index.
    #
    # @since 2.0.0
    DESCENDING = -1

    # Specify a 2d Geo index.
    #
    # @since 2.0.0
    GEO2D = '2d'.freeze

    # Specify a 2d sphere Geo index.
    #
    # @since 2.0.0
    GEO2DSPHERE = '2dsphere'.freeze

    # Specify a geoHaystack index.
    #
    # @since 2.0.0
    # @deprecated
    GEOHAYSTACK = 'geoHaystack'.freeze

    # Encodes a text index.
    #
    # @since 2.0.0
    TEXT = 'text'.freeze

    # Specify a hashed index.
    #
    # @since 2.0.0
    HASHED = 'hashed'.freeze

    # Constant for the indexes collection.
    #
    # @since 2.0.0
    COLLECTION = 'system.indexes'.freeze
  end
end
mongo-ruby-driver-2.21.3/lib/mongo/index/000077500000000000000000000000001505113246500201645ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/lib/mongo/index/view.rb000066400000000000000000000346711505113246500214740ustar00rootroot00000000000000# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2014-2020 MongoDB Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

require 'mongo/cursor/nontailable'

module Mongo
  module Index

    # A class representing a view of indexes.
    #
    # @since 2.0.0
    class View
      extend Forwardable
      include Enumerable
      include Retryable
      include Mongo::CursorHost
      include Cursor::NonTailable

      # @return [ Collection ] collection The indexes collection.
      attr_reader :collection

      # @return [ Integer ] batch_size The size of the batch of results
      #   when sending the listIndexes command.
      attr_reader :batch_size

      # @return [ Integer | nil ] The timeout_ms value that was passed as an
      #   option to the view.
# # @api private attr_reader :operation_timeout_ms def_delegators :@collection, :cluster, :database, :read_preference, :write_concern, :client def_delegators :cluster, :next_primary # The index key field. # # @since 2.0.0 KEY = 'key'.freeze # The index name field. # # @since 2.0.0 NAME = 'name'.freeze # The mappings of Ruby index options to server options. # # @since 2.0.0 OPTIONS = { :background => :background, :bits => :bits, :bucket_size => :bucketSize, :default_language => :default_language, :expire_after => :expireAfterSeconds, :expire_after_seconds => :expireAfterSeconds, :key => :key, :language_override => :language_override, :max => :max, :min => :min, :name => :name, :partial_filter_expression => :partialFilterExpression, :sparse => :sparse, :sphere_version => :'2dsphereIndexVersion', :storage_engine => :storageEngine, :text_version => :textIndexVersion, :unique => :unique, :version => :v, :weights => :weights, :collation => :collation, :comment => :comment, :wildcard_projection => :wildcardProjection, }.freeze # Drop an index by its name. # # @example Drop an index by its name. # view.drop_one('name_1') # # @param [ String ] name The name of the index. # @param [ Hash ] options Options for this operation. # # @option options [ Object ] :comment A user-provided # comment to attach to this command. # # @return [ Result ] The response. # # @since 2.0.0 def drop_one(name, options = {}) raise Error::MultiIndexDrop.new if name == Index::ALL drop_by_name(name, options) end # Drop all indexes on the collection. # # @example Drop all indexes on the collection. # view.drop_all # # @param [ Hash ] options Options for this operation. # # @option options [ Object ] :comment A user-provided # comment to attach to this command. # # @return [ Result ] The response. # # @since 2.0.0 def drop_all(options = {}) drop_by_name(Index::ALL, options) end # Creates an index on the collection. # # @example Create a unique index on the collection. # view.create_one({ name: 1 }, { unique: true }) # # @param [ Hash ] keys A hash of field name/direction pairs. # @param [ Hash ] options Options for this index. # # @option options [ true, false ] :unique (false) If true, this index will enforce # a uniqueness constraint on that field. # @option options [ true, false ] :background (false) If true, the index will be built # in the background (only available for server versions >= 1.3.2 ) # @option options [ true, false ] :drop_dups (false) If creating a unique index on # this collection, this option will keep the first document the database indexes # and drop all subsequent documents with duplicate values on this field. # @option options [ Integer ] :bucket_size (nil) For use with geoHaystack indexes. # Number of documents to group together within a certain proximity to a given # longitude and latitude. # @option options [ Integer ] :max (nil) Specify the max latitude and longitude for # a geo index. # @option options [ Integer ] :min (nil) Specify the min latitude and longitude for # a geo index. # @option options [ Hash ] :partial_filter_expression Specify a filter for a partial # index. # @option options [ Boolean ] :hidden When :hidden is true, this index will # exist on the collection but not be used by the query planner when # executing operations. # @option options [ String | Integer ] :commit_quorum Specify how many # data-bearing members of a replica set, including the primary, must # complete the index builds successfully before the primary marks # the indexes as ready. 
Potential values are: # - an integer from 0 to the number of members of the replica set # - "majority" indicating that a majority of data bearing nodes must vote # - "votingMembers" which means that all voting data bearing nodes must vote # @option options [ Session ] :session The session to use for the operation. # @option options [ Object ] :comment A user-provided # comment to attach to this command. # # @note Note that the options listed may be subset of those available. # See the MongoDB documentation for a full list of supported options by server version. # # @return [ Result ] The response. # # @since 2.0.0 def create_one(keys, options = {}) options = options.dup create_options = {} if session = @options[:session] create_options[:session] = session end %i(commit_quorum session comment timeout_ms max_time_ms).each do |key| if value = options.delete(key) create_options[key] = value end end create_many({ key: keys }.merge(options), create_options) end # Creates multiple indexes on the collection. # # @example Create multiple indexes. # view.create_many([ # { key: { name: 1 }, unique: true }, # { key: { age: -1 }, background: true } # ]) # # @example Create multiple indexes with options. # view.create_many( # { key: { name: 1 }, unique: true }, # { key: { age: -1 }, background: true }, # { commit_quorum: 'majority' } # ) # # @note On MongoDB 3.0.0 and higher, the indexes will be created in # parallel on the server. # # @param [ Array ] models The index specifications. Each model MUST # include a :key option, except for the last item in the Array, which # may be a Hash specifying options relevant to the createIndexes operation. # The following options are accepted: # - commit_quorum: Specify how many data-bearing members of a replica set, # including the primary, must complete the index builds successfully # before the primary marks the indexes as ready. Potential values are: # - an integer from 0 to the number of members of the replica set # - "majority" indicating that a majority of data bearing nodes must vote # - "votingMembers" which means that all voting data bearing nodes must vote # - session: The session to use. # - comment: A user-provided comment to attach to this command. # # @return [ Result ] The result of the command. # # @since 2.0.0 def create_many(*models) models = models.flatten options = {} if models && !models.last.key?(:key) options = models.pop end client.with_session(@options.merge(options)) do |session| server = next_primary(nil, session) indexes = normalize_models(models, server) indexes.each do |index| if index[:bucketSize] || index['bucketSize'] client.log_warn("Haystack indexes (bucketSize index option) are deprecated as of MongoDB 4.4") end end spec = { indexes: indexes, db_name: database.name, coll_name: collection.name, session: session, commit_quorum: options[:commit_quorum], write_concern: write_concern, comment: options[:comment], } context = Operation::Context.new( client: client, session: session, operation_timeouts: operation_timeouts(options) ) Operation::CreateIndex.new(spec).execute(server, context: context) end end # Convenience method for getting index information by a specific name or # spec. # # @example Get index information by name. # view.get('name_1') # # @example Get index information by the keys. # view.get(name: 1) # # @param [ Hash, String ] keys_or_name The index name or spec. # # @return [ Hash ] The index information. 
# # @since 2.0.0 def get(keys_or_name) find do |index| (index[NAME] == keys_or_name) || (index[KEY] == normalize_keys(keys_or_name)) end end # Iterate over all indexes for the collection. # # @example Get all the indexes. # view.each do |index| # ... # end # # @since 2.0.0 def each(&block) session = client.get_session(@options) context = Operation::Context.new( client: client, session: session, operation_timeouts: operation_timeouts(@options) ) cursor = read_with_retry_cursor(session, ServerSelector.primary, self, context: context) do |server| send_initial_query(server, session, context) end if block_given? cursor.each do |doc| yield doc end else cursor.to_enum end end # Create the new index view. # # @example Create the new index view. # View::Index.new(collection) # # @param [ Collection ] collection The collection. # @param [ Hash ] options Options for getting a list of indexes. # # @option options [ Integer ] :batch_size The batch size for results # returned from the listIndexes command. # @option options [ :cursor_lifetime | :iteration ] :timeout_mode How to interpret # :timeout_ms (whether it applies to the lifetime of the cursor, or per # iteration). # @option options [ Integer ] :timeout_ms The operation timeout in milliseconds. # Must be a non-negative integer. An explicit value of 0 means infinite. # The default value is unset which means the value is inherited from # the collection or the database or the client. # # @since 2.0.0 def initialize(collection, options = {}) @collection = collection @operation_timeout_ms = options.delete(:timeout_ms) validate_timeout_mode!(options) @batch_size = options[:batch_size] @options = options end # The timeout_ms value to use for this operation; either specified as an # option to the view, or inherited from the collection. # # @return [ Integer | nil ] the timeout_ms for this operation def timeout_ms operation_timeout_ms || collection.timeout_ms end # @return [ Hash ] timeout_ms value set on the operation level (if any), # and/or timeout_ms that is set on collection/database/client level (if any). # # @api private def operation_timeouts(opts = {}) {}.tap do |result| if opts[:timeout_ms] || operation_timeout_ms result[:operation_timeout_ms] = opts.delete(:timeout_ms) || operation_timeout_ms else result[:inherited_timeout_ms] = collection.timeout_ms end end end private def drop_by_name(name, opts = {}) client.send(:with_session, @options) do |session| spec = { db_name: database.name, coll_name: collection.name, index_name: name, session: session, write_concern: write_concern, } spec[:comment] = opts[:comment] unless opts[:comment].nil? server = next_primary(nil, session) context = Operation::Context.new( client: client, session: session, operation_timeouts: operation_timeouts(opts) ) Operation::DropIndex.new(spec).execute(server, context: context) end end def index_name(spec) spec.to_a.join('_') end def indexes_spec(session) { selector: { listIndexes: collection.name, cursor: batch_size ? 
{ batchSize: batch_size } : {} }, coll_name: collection.name, db_name: database.name, session: session } end def initial_query_op(session) Operation::Indexes.new(indexes_spec(session)) end def limit; -1; end def normalize_keys(spec) return false if spec.is_a?(String) Options::Mapper.transform_keys_to_strings(spec) end def normalize_models(models, server) models.map do |model| # Transform options first which gives us a mutable hash Options::Mapper.transform(model, OPTIONS).tap do |model| model[:name] ||= index_name(model.fetch(:key)) end end end def send_initial_query(server, session, context) if server.load_balancer? connection = server.pool.check_out(context: context) initial_query_op(session).execute_with_connection(connection, context: context) else initial_query_op(session).execute(server, context: context) end end end end end mongo-ruby-driver-2.21.3/lib/mongo/lint.rb000066400000000000000000000075271505113246500203630ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all module Mongo # @api private module Lint # Raises LintError if +obj+ is not of type +cls+. def assert_type(obj, cls) return unless enabled? unless obj.is_a?(cls) raise Error::LintError, "Expected #{obj} to be a #{cls}" end end module_function :assert_type def validate_underscore_read_preference(read_pref) return unless enabled? return if read_pref.nil? unless read_pref.is_a?(Hash) raise Error::LintError, "Read preference is not a hash: #{read_pref}" end validate_underscore_read_preference_mode(read_pref[:mode] || read_pref['mode']) end module_function :validate_underscore_read_preference def validate_underscore_read_preference_mode(mode) return unless enabled? if mode unless %w(primary primary_preferred secondary secondary_preferred nearest).include?(mode.to_s) raise Error::LintError, "Invalid read preference mode: #{mode}" end end end module_function :validate_underscore_read_preference_mode def validate_camel_case_read_preference(read_pref) return unless enabled? return if read_pref.nil? unless read_pref.is_a?(Hash) raise Error::LintError, "Read preference is not a hash: #{read_pref}" end validate_camel_case_read_preference_mode(read_pref[:mode] || read_pref['mode']) end module_function :validate_camel_case_read_preference def validate_camel_case_read_preference_mode(mode) return unless enabled? if mode unless %w(primary primaryPreferred secondary secondaryPreferred nearest).include?(mode.to_s) raise Error::LintError, "Invalid read preference mode: #{mode}" end end end module_function :validate_camel_case_read_preference_mode # Validates the provided hash as a read concern object, per the # read/write concern specification # (https://github.com/mongodb/specifications/blob/master/source/read-write-concern/read-write-concern.md#read-concern). # # This method also accepts nil as input for convenience. # # The read concern document as sent to the server may include # additional fields, for example afterClusterTime. These fields # are generated internally by the driver and cannot be specified by # the user (and would potentially lead to incorrect behavior if they # were specified by the user), hence this method prohibits them. # # @param [ Hash ] read_concern The read concern options hash, # with the following optional keys: # - *:level* -- the read preference level as a symbol; valid values # are *:local*, *:majority*, and *:snapshot* # # @raise [ Error::LintError ] If the validation failed. def validate_read_concern_option(read_concern) return unless enabled? return if read_concern.nil? 
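# The checks below assume a Hash: reject any other type up front so the
# key and level validation that follows cannot raise NoMethodError.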
unless read_concern.is_a?(Hash) raise Error::LintError, "Read concern is not a hash: #{read_concern}" end return if read_concern.empty? keys = read_concern.keys if read_concern.is_a?(BSON::Document) # Permits indifferent access allowed_keys = ['level'] else # Does not permit indifferent access allowed_keys = [:level] end if keys != allowed_keys raise Error::LintError, "Read concern has invalid keys: #{keys.inspect}" end level = read_concern[:level] return if [:local, :available, :majority, :linearizable, :snapshot].include?(level) raise Error::LintError, "Read concern level is invalid: value must be a symbol: #{level.inspect}" end module_function :validate_read_concern_option def enabled? ENV['MONGO_RUBY_DRIVER_LINT'] && %w(1 yes true on).include?(ENV['MONGO_RUBY_DRIVER_LINT'].downcase) end module_function :enabled? end end mongo-ruby-driver-2.21.3/lib/mongo/loggable.rb000066400000000000000000000055751505113246500211720ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo # Allows objects to easily log operations. # # @since 2.0.0 module Loggable # The standard MongoDB log prefix. # # @since 2.0.0 PREFIX = 'MONGODB'.freeze # Convenience method to log debug messages with the standard prefix. # # @example Log a debug message. # log_debug('Message') # # @param [ String ] message The message to log. # # @since 2.0.0 def log_debug(message) logger.debug(format_message(message)) if logger.debug? end # Convenience method to log error messages with the standard prefix. # # @example Log a error message. # log_error('Message') # # @param [ String ] message The message to log. # # @since 2.0.0 def log_error(message) logger.error(format_message(message)) if logger.error? end # Convenience method to log fatal messages with the standard prefix. # # @example Log a fatal message. # log_fatal('Message') # # @param [ String ] message The message to log. # # @since 2.0.0 def log_fatal(message) logger.fatal(format_message(message)) if logger.fatal? end # Convenience method to log info messages with the standard prefix. # # @example Log a info message. # log_info('Message') # # @param [ String ] message The message to log. # # @since 2.0.0 def log_info(message) logger.info(format_message(message)) if logger.info? end # Convenience method to log warn messages with the standard prefix. # # @example Log a warn message. # log_warn('Message') # # @param [ String ] message The message to log. # # @since 2.0.0 def log_warn(message) logger.warn(format_message(message)) if logger.warn? end # Get the logger instance. # # @example Get the logger instance. # loggable.logger # # @return [ Logger ] The logger. 
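# @example Supply a custom logger (a sketch; per the implementation below,
#   the including object's options[:logger] takes precedence over the
#   global Mongo::Logger.logger).
#   client = Mongo::Client.new(['127.0.0.1:27017'], logger: Logger.new($stderr))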
# # @since 2.1.0 def logger ((respond_to?(:options) && options && options[:logger]) || Logger.logger) end private def format_message(message) format("%s | %s".freeze, _mongo_log_prefix, message) end def _mongo_log_prefix (respond_to?(:options) && options && options[:log_prefix]) || PREFIX end end end mongo-ruby-driver-2.21.3/lib/mongo/logger.rb000066400000000000000000000040401505113246500206570ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo # Provides ability to log messages. # # @since 2.0.0 class Logger class << self # Get the wrapped logger. If none was set will return a default info # level logger. # # @example Get the wrapped logger. # Mongo::Logger.logger # # @return [ ::Logger ] The wrapped logger. # # @since 2.0.0 def logger @logger ||= default_logger end # Set the logger. # # @example Set the wrapped logger. # Mongo::Logger.logger = logger # # @param [ ::Logger ] other The logger to set. # # @return [ ::Logger ] The wrapped logger. # # @since 2.0.0 def logger=(other) @logger = other end # Get the global logger level. # # @example Get the global logging level. # Mongo::Logger.level # # @return [ Integer ] The log level. # # @since 2.0.0 def level logger.level end # Set the global logger level. # # @example Set the global logging level. # Mongo::Logger.level == Logger::DEBUG # # @return [ Integer ] The log level. # # @since 2.0.0 def level=(level) logger.level = level end private def default_logger logger = ::Logger.new(STDOUT) logger.level = ::Logger::INFO logger end end end end mongo-ruby-driver-2.21.3/lib/mongo/monitoring.rb000066400000000000000000000304561505113246500215770ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2015-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the 'License'); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an 'AS IS' BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo # The class defines behavior for the performance monitoring API. # # @since 2.1.0 class Monitoring include Id # The command topic. # # @since 2.1.0 COMMAND = 'Command'.freeze # The connection pool topic. # # @since 2.9.0 CONNECTION_POOL = 'ConnectionPool'.freeze # Server closed topic. # # @since 2.4.0 SERVER_CLOSED = 'ServerClosed'.freeze # Server description changed topic. # # @since 2.4.0 SERVER_DESCRIPTION_CHANGED = 'ServerDescriptionChanged'.freeze # Server opening topic. # # @since 2.4.0 SERVER_OPENING = 'ServerOpening'.freeze # Topology changed topic. 
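# (A hedged usage sketch: single-event SDAM topics such as this one are
# consumed by subscribing an object that responds to #published, e.g.
#   Mongo::Monitoring::Global.subscribe(
#     Mongo::Monitoring::TOPOLOGY_CHANGED, my_subscriber)
# where my_subscriber is an assumed object implementing #published(event);
# global subscriptions must be registered before clients are created.)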
# # @since 2.4.0 TOPOLOGY_CHANGED = 'TopologyChanged'.freeze # Topology closed topic. # # @since 2.4.0 TOPOLOGY_CLOSED = 'TopologyClosed'.freeze # Topology opening topic. # # @since 2.4.0 TOPOLOGY_OPENING = 'TopologyOpening'.freeze # Server heartbeat started topic. # # @since 2.7.0 SERVER_HEARTBEAT = 'ServerHeartbeat'.freeze # Used for generating unique operation ids to link events together. # # @example Get the next operation id. # Monitoring.next_operation_id # # @return [ Integer ] The next operation id. # # @since 2.1.0 def self.next_operation_id self.next_id end # Contains subscription methods common between monitoring and # global event subscriptions. # # @since 2.6.0 module Subscribable # Subscribe a listener to an event topic. # # @note It is possible to subscribe the same listener to the same topic # multiple times, in which case the listener will be invoked as many # times as it is subscribed and to unsubscribe it the same number # of unsubscribe calls will be needed. # # @example Subscribe to the topic. # monitoring.subscribe(QUERY, subscriber) # # @example Subscribe to the topic globally. # Monitoring::Global.subscribe(QUERY, subscriber) # # @param [ String ] topic The event topic. # @param [ Object ] subscriber The subscriber to handle the event. # # @since 2.1.0 def subscribe(topic, subscriber) subscribers_for(topic).push(subscriber) end # Unsubscribe a listener from an event topic. # # If the listener was subscribed to the event topic multiple times, # this call removes a single subscription. # # If the listener was not subscribed to the topic, this operation # is a no-op and no exceptions are raised. # # @note Global subscriber registry is separate from per-client # subscriber registry. The same subscriber can be subscribed to # events from a particular client as well as globally; unsubscribing # globally will not unsubscribe that subscriber from the client # it was explicitly subscribed to. # # @note Currently the list of global subscribers is copied into # a client whenever the client is created. Thus unsubscribing a # subscriber globally has no effect for existing clients - they will # continue sending events to the unsubscribed subscriber. # # @example Unsubscribe from the topic. # monitoring.unsubscribe(QUERY, subscriber) # # @example Unsubscribe from the topic globally. # Mongo::Monitoring::Global.unsubscribe(QUERY, subscriber) # # @param [ String ] topic The event topic. # @param [ Object ] subscriber The subscriber to be unsubscribed. # # @since 2.6.0 def unsubscribe(topic, subscriber) subs = subscribers_for(topic) index = subs.index(subscriber) if index subs.delete_at(index) end end # Get all the subscribers. # # @example Get all the subscribers. # monitoring.subscribers # # @example Get all the global subscribers. # Mongo::Monitoring::Global.subscribers # # @return [ Hash ] The subscribers. # # @since 2.1.0 def subscribers @subscribers ||= {} end # Determine if there are any subscribers for a particular event. # # @example Are there subscribers? # monitoring.subscribers?(COMMAND) # # @example Are there global subscribers? # Mongo::Monitoring::Global.subscribers?(COMMAND) # # @param [ String ] topic The event topic. # # @return [ true, false ] If there are subscribers for the topic. # # @since 2.1.0 def subscribers?(topic) !subscribers_for(topic).empty? end private def subscribers_for(topic) subscribers[topic] ||= [] end end # Allows subscribing to events for all Mongo clients. # # @note Global subscriptions must be established prior to creating # clients. 
When a client is constructed it copies subscribers from # the Global module; subsequent subscriptions or unsubscriptions # on the Global module have no effect on already created clients. # # @since 2.1.0 module Global extend Subscribable end include Subscribable # Initialize the monitoring. # # @example Create the new monitoring. # Monitoring.new(:monitoring => true) # # @param [ Hash ] options Options. Client constructor forwards its # options to Monitoring constructor, although Monitoring recognizes # only a subset of the options recognized by Client. # @option options [ true, false ] :monitoring If false is given, the # Monitoring instance is initialized without global monitoring event # subscribers and will not publish SDAM events. Command monitoring events # will still be published, and the driver will still perform SDAM and # monitor its cluster in order to perform server selection. Built-in # driver logging of SDAM events will be disabled because it is # implemented through SDAM event subscription. Client#subscribe will # succeed for all event types, but subscribers to SDAM events will # not be invoked. Values other than false result in default behavior # which is to perform normal SDAM event publication. # # @since 2.1.0 # @api private def initialize(options = {}) @options = options if options[:monitoring] != false Global.subscribers.each do |topic, subscribers| subscribers.each do |subscriber| subscribe(topic, subscriber) end end subscribe(COMMAND, CommandLogSubscriber.new(options)) # CMAP events are not logged by default because this will create # log entries for every operation performed by the driver. #subscribe(CONNECTION_POOL, CmapLogSubscriber.new(options)) subscribe(SERVER_OPENING, ServerOpeningLogSubscriber.new(options)) subscribe(SERVER_CLOSED, ServerClosedLogSubscriber.new(options)) subscribe(SERVER_DESCRIPTION_CHANGED, ServerDescriptionChangedLogSubscriber.new(options)) subscribe(TOPOLOGY_OPENING, TopologyOpeningLogSubscriber.new(options)) subscribe(TOPOLOGY_CHANGED, TopologyChangedLogSubscriber.new(options)) subscribe(TOPOLOGY_CLOSED, TopologyClosedLogSubscriber.new(options)) end end # @api private attr_reader :options # @api private def monitoring? options[:monitoring] != false end # Publish an event. # # This method is used for event types which only have a single event # in them. # # @param [ String ] topic The event topic. # @param [ Event ] event The event to publish. # # @since 2.9.0 def published(topic, event) subscribers_for(topic).each{ |subscriber| subscriber.published(event) } end # Publish a started event. # # This method is used for event types which have the started/succeeded/failed # events in them, such as command and heartbeat events. # # @example Publish a started event. # monitoring.started(COMMAND, event) # # @param [ String ] topic The event topic. # @param [ Event ] event The event to publish. # # @since 2.1.0 def started(topic, event) subscribers_for(topic).each{ |subscriber| subscriber.started(event) } end # Publish a succeeded event. # # This method is used for event types which have the started/succeeded/failed # events in them, such as command and heartbeat events. # # @example Publish a succeeded event. # monitoring.succeeded(COMMAND, event) # # @param [ String ] topic The event topic. # @param [ Event ] event The event to publish. # # @since 2.1.0 def succeeded(topic, event) subscribers_for(topic).each{ |subscriber| subscriber.succeeded(event) } end # Publish a failed event. 
# # This method is used for event types which have the started/succeeded/failed # events in them, such as command and heartbeat events. # # @example Publish a failed event. # monitoring.failed(COMMAND, event) # # @param [ String ] topic The event topic. # @param [ Event ] event The event to publish. # # @since 2.1.0 def failed(topic, event) subscribers_for(topic).each{ |subscriber| subscriber.failed(event) } end # @api private def publish_heartbeat(server, awaited: false) if monitoring? started_event = Event::ServerHeartbeatStarted.new( server.address, awaited: awaited) started(SERVER_HEARTBEAT, started_event) end # The duration we publish in heartbeat succeeded/failed events is # the time spent on the entire heartbeat. This could include time # to connect the socket (including TLS handshake), not just time # spent on hello call itself. # The spec at https://github.com/mongodb/specifications/blob/master/source/server-discovery-and-monitoring/server-discovery-and-monitoring-logging-and-monitoring.md # requires that the duration exposed here start from "sending the # message" (hello). This requirement does not make sense if, # for example, we were never able to connect to the server at all # and thus hello was never sent. start_time = Utils.monotonic_time begin result = yield rescue => exc if monitoring? event = Event::ServerHeartbeatFailed.new( server.address, Utils.monotonic_time - start_time, exc, awaited: awaited, started_event: started_event, ) failed(SERVER_HEARTBEAT, event) end raise else if monitoring? event = Event::ServerHeartbeatSucceeded.new( server.address, Utils.monotonic_time - start_time, awaited: awaited, started_event: started_event, ) succeeded(SERVER_HEARTBEAT, event) end result end end private def initialize_copy(original) @subscribers = {} original.subscribers.each do |k, v| @subscribers[k] = v.dup end end end end require 'mongo/monitoring/event' require 'mongo/monitoring/publishable' require 'mongo/monitoring/command_log_subscriber' require 'mongo/monitoring/cmap_log_subscriber' require 'mongo/monitoring/sdam_log_subscriber' require 'mongo/monitoring/server_description_changed_log_subscriber' require 'mongo/monitoring/server_closed_log_subscriber' require 'mongo/monitoring/server_opening_log_subscriber' require 'mongo/monitoring/topology_changed_log_subscriber' require 'mongo/monitoring/topology_opening_log_subscriber' require 'mongo/monitoring/topology_closed_log_subscriber' require 'mongo/monitoring/unified_sdam_log_subscriber' mongo-ruby-driver-2.21.3/lib/mongo/monitoring/000077500000000000000000000000001505113246500212425ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/lib/mongo/monitoring/cmap_log_subscriber.rb000066400000000000000000000027031505113246500255750ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2019-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the 'License'); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an 'AS IS' BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Monitoring # Subscribes to CMAP events and logs them. 
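# A minimal sketch of a custom subscriber for the same topic (the class
# and output stream here are illustrative, not part of the driver):
#
#   class MyCmapSubscriber
#     # CMAP events arrive through a single #published callback.
#     def published(event)
#       $stderr.puts(event.summary)
#     end
#   end
#
#   Mongo::Monitoring::Global.subscribe(
#     Mongo::Monitoring::CONNECTION_POOL, MyCmapSubscriber.new)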
# # @since 2.9.0 class CmapLogSubscriber include Loggable # @return [ Hash ] options The options. # # @since 2.9.0 attr_reader :options # Create the new log subscriber. # # @example Create the log subscriber. # CmapLogSubscriber.new # # @param [ Hash ] options The options. # # @option options [ Logger ] :logger An optional custom logger. # # @since 2.9.0 def initialize(options = {}) @options = options end # Handle a CMAP event. # # @param [ Event ] event The event. # # @since 2.9.0 def published(event) log_debug("EVENT: #{event.summary}") if logger.debug? end end end end mongo-ruby-driver-2.21.3/lib/mongo/monitoring/command_log_subscriber.rb000066400000000000000000000071741505113246500263020ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2015-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the 'License'); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an 'AS IS' BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Monitoring # Subscribes to command events and logs them. # # @since 2.1.0 class CommandLogSubscriber include Loggable # @return [ Hash ] options The options. attr_reader :options # Constant for the max number of characters to print when inspecting # a query field. # # @since 2.1.0 LOG_STRING_LIMIT = 250 # Create the new log subscriber. # # @example Create the log subscriber. # CommandLogSubscriber.new # # @param [ Hash ] options The options. # # @option options [ Logger ] :logger An optional custom logger. # # @since 2.1.0 def initialize(options = {}) @options = options end # Handle the command started event. # # @example Handle the event. # subscriber.started(event) # # @param [ CommandStartedEvent ] event The event. # # @since 2.1.0 def started(event) if logger.debug? _prefix = prefix(event, connection_generation: event.connection_generation, connection_id: event.connection_id, server_connection_id: event.server_connection_id, ) log_debug("#{_prefix} | STARTED | #{format_command(event.command)}") end end # Handle the command succeeded event. # # @example Handle the event. # subscriber.succeeded(event) # # @param [ CommandSucceededEvent ] event The event. # # @since 2.1.0 def succeeded(event) if logger.debug? log_debug("#{prefix(event)} | SUCCEEDED | #{'%.3f' % event.duration}s") end end # Handle the command failed event. # # @example Handle the event. # subscriber.failed(event) # # @param [ CommandFailedEvent ] event The event. # # @since 2.1.0 def failed(event) if logger.debug? log_debug("#{prefix(event)} | FAILED | #{event.message} | #{event.duration}s") end end private def format_command(args) begin truncating? ? 
truncate(args) : args.inspect rescue Exception '' end end def prefix(event, connection_generation: nil, connection_id: nil, server_connection_id: nil ) extra = [connection_generation, connection_id].compact.join(':') if extra == '' extra = nil else extra = "conn:#{extra}" end if server_connection_id extra += " sconn:#{server_connection_id}" end "#{event.address.to_s} req:#{event.request_id}#{extra && " #{extra}"} | " + "#{event.database_name}.#{event.command_name}" end def truncate(command) ((s = command.inspect).length > LOG_STRING_LIMIT) ? "#{s[0..LOG_STRING_LIMIT]}..." : s end def truncating? @truncating ||= (options[:truncate_logs] != false) end end end end mongo-ruby-driver-2.21.3/lib/mongo/monitoring/event.rb000066400000000000000000000025211505113246500227100ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2015-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the 'License'); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an 'AS IS' BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'mongo/event' require 'mongo/monitoring/event/secure' require 'mongo/monitoring/event/command_started' require 'mongo/monitoring/event/command_succeeded' require 'mongo/monitoring/event/command_failed' require 'mongo/monitoring/event/cmap' require 'mongo/monitoring/event/server_closed' require 'mongo/monitoring/event/server_description_changed' require 'mongo/monitoring/event/server_opening' require 'mongo/monitoring/event/server_heartbeat_started' require 'mongo/monitoring/event/server_heartbeat_succeeded' require 'mongo/monitoring/event/server_heartbeat_failed' require 'mongo/monitoring/event/topology_changed' require 'mongo/monitoring/event/topology_closed' require 'mongo/monitoring/event/topology_opening' mongo-ruby-driver-2.21.3/lib/mongo/monitoring/event/000077500000000000000000000000001505113246500223635ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/lib/mongo/monitoring/event/cmap.rb000066400000000000000000000024241505113246500236320ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2019-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the 'License'); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an 'AS IS' BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
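# Base is required before the concrete event classes below because each
# of them subclasses Cmap::Base.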
require 'mongo/monitoring/event/cmap/base' require 'mongo/monitoring/event/cmap/connection_checked_in' require 'mongo/monitoring/event/cmap/connection_checked_out' require 'mongo/monitoring/event/cmap/connection_check_out_failed' require 'mongo/monitoring/event/cmap/connection_check_out_started' require 'mongo/monitoring/event/cmap/connection_closed' require 'mongo/monitoring/event/cmap/connection_created' require 'mongo/monitoring/event/cmap/connection_ready' require 'mongo/monitoring/event/cmap/pool_cleared' require 'mongo/monitoring/event/cmap/pool_closed' require 'mongo/monitoring/event/cmap/pool_created' require 'mongo/monitoring/event/cmap/pool_ready' mongo-ruby-driver-2.21.3/lib/mongo/monitoring/event/cmap/000077500000000000000000000000001505113246500233035ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/lib/mongo/monitoring/event/cmap/base.rb000066400000000000000000000015251505113246500245450ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2019-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the 'License'); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an 'AS IS' BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Monitoring module Event module Cmap # Base class for CMAP events. # # @since 2.9.0 class Base < Mongo::Event::Base end end end end end mongo-ruby-driver-2.21.3/lib/mongo/monitoring/event/cmap/connection_check_out_failed.rb000066400000000000000000000053171505113246500313250ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2019-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the 'License'); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an 'AS IS' BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Monitoring module Event module Cmap # Event published when a connection is unable to be checked out of a pool. # # @since 2.9.0 class ConnectionCheckOutFailed < Base # @return [ Symbol ] POOL_CLOSED Indicates that the connection check # out failed due to the pool already being closed. # # @since 2.9.0 POOL_CLOSED = :pool_closed # @return [ Symbol ] TIMEOUT Indicates that the connection check out # failed due to the timeout being reached before a connection # became available. # # @since 2.9.0 TIMEOUT = :timeout # @return [ Symbol ] CONNECTION_ERROR Indicates that the connection # check out failed due to an error encountered while setting up a # new connection. # # @since 2.10.0 CONNECTION_ERROR = :connection_error # @return [ Mongo::Address ] address The address of the server the # connection would have connected to. # # @since 2.9.0 attr_reader :address # @return [ Symbol ] reason The reason a connection was unable to be # acquired. # # @since 2.9.0 attr_reader :reason # Create the event. 
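# @example A sketch of how the event is constructed (the reason is one of
#   the constants defined above; applications normally only consume it).
#   ConnectionCheckOutFailed.new(address, ConnectionCheckOutFailed::TIMEOUT)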
# # @param [ Address ] address # @param [ Symbol ] reason # # @since 2.9.0 # @api private def initialize(address, reason) @reason = reason @address = address end # Returns a concise yet useful summary of the event. # # @return [ String ] String summary of the event. # # @note This method is experimental and subject to change. # # @since 2.9.0 # @api experimental def summary "#<#{self.class.name.sub(/^Mongo::Monitoring::Event::Cmap::/, '')} address=#{address} " + "reason=#{reason}>" end end end end end end mongo-ruby-driver-2.21.3/lib/mongo/monitoring/event/cmap/connection_check_out_started.rb000066400000000000000000000033311505113246500315410ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2019-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the 'License'); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an 'AS IS' BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Monitoring module Event module Cmap # Event published when a thread begins attempting to check a connection out of a pool. # # @since 2.9.0 class ConnectionCheckOutStarted < Base # @return [ Mongo::Address ] address The address of the server that the connection will # connect to. # # @since 2.9.0 attr_reader :address # Create the event. # # @param [ Address ] address # # @since 2.9.0 # @api private def initialize(address) @address = address end # Returns a concise yet useful summary of the event. # # @return [ String ] String summary of the event. # # @note This method is experimental and subject to change. # # @since 2.9.0 # @api experimental def summary "#<#{self.class.name.sub(/^Mongo::Monitoring::Event::Cmap::/, '')} address=#{address}>" end end end end end end mongo-ruby-driver-2.21.3/lib/mongo/monitoring/event/cmap/connection_checked_in.rb000066400000000000000000000043351505113246500301300ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2019-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the 'License'); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an 'AS IS' BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Monitoring module Event module Cmap # Event published when a connection is returned to a connection pool. # # @since 2.9.0 class ConnectionCheckedIn < Base # @return [ Address ] address The address of the server the connection was connected to. # # @since 2.9.0 attr_reader :address # @return [ Integer ] connection_id The ID of the connection. # # @since 2.9.0 attr_reader :connection_id # @return [ Mongo::Server::ConnectionPool ] pool The pool that the connection # was checked in to. # # @since 2.11.0 # @api experimental attr_reader :pool # Create the event. # # @example Create the event. 
# ConnectionCheckedIn.new(address, id, pool) # # @since 2.9.0 # @api private def initialize(address, id, pool) @address = address @connection_id = id @pool = pool end # Returns a concise yet useful summary of the event. # # @return [ String ] String summary of the event. # # @note This method is experimental and subject to change. # # @since 2.9.0 # @api experimental def summary "#<#{self.class.name.sub(/^Mongo::Monitoring::Event::Cmap::/, '')} " + "address=#{address} connection_id=#{connection_id} pool=0x#{pool.object_id}>" end end end end end end mongo-ruby-driver-2.21.3/lib/mongo/monitoring/event/cmap/connection_checked_out.rb000066400000000000000000000044041505113246500303260ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2019-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the 'License'); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an 'AS IS' BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Monitoring module Event module Cmap # Event published when a connection is successfully checked out out of a pool. # # @since 2.9.0 class ConnectionCheckedOut < Base # @return [ Mongo::Address ] address The address of the server that the connection will # connect to. # # @since 2.9.0 attr_reader :address # @return [ Integer ] connection_id The ID of the connection. # # @since 2.9.0 attr_reader :connection_id # @return [ Mongo::Server::ConnectionPool ] pool The pool that the connection # was checked out from. # # @since 2.11.0 # @api experimental attr_reader :pool # Create the event. # # @example Create the event. # ConnectionCheckedOut.new(address, id, pool) # # @since 2.9.0 # @api private def initialize(address, id, pool) @address = address @connection_id = id @pool = pool end # Returns a concise yet useful summary of the event. # # @return [ String ] String summary of the event. # # @note This method is experimental and subject to change. # # @since 2.9.0 # @api experimental def summary "#<#{self.class.name.sub(/^Mongo::Monitoring::Event::Cmap::/, '')} " + "address=#{address} connection_id=#{connection_id} pool=0x#{pool.object_id}>" end end end end end end mongo-ruby-driver-2.21.3/lib/mongo/monitoring/event/cmap/connection_closed.rb000066400000000000000000000063721505113246500273300ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2019-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the 'License'); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an 'AS IS' BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Monitoring module Event module Cmap # Event published when a connection is closed. 
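# (A hedged subscriber sketch: the reason attribute below lets a
# #published callback separate routine reaping from failures, e.g.
#   warn(event.summary) if event.reason == ConnectionClosed::ERROR
# inside a CONNECTION_POOL subscriber.)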
# # @since 2.9.0 class ConnectionClosed < Base # @return [ Symbol ] STALE Indicates that the connection was closed due to it being stale. # # @since 2.9.0 STALE = :stale # @return [ Symbol ] IDLE Indicates that the connection was closed due to it being idle. # # @since 2.9.0 IDLE = :idle # @return [ Symbol ] ERROR Indicates that the connection was closed due to it experiencing # an error. # # @since 2.9.0 ERROR = :error # @return [ Symbol ] POOL_CLOSED Indicates that the connection was closed due to the pool # already being closed. # # @since 2.9.0 POOL_CLOSED = :pool_closed # @return [ Symbol ] HANDSHAKE_FAILED Indicates that the connection was closed due to the # connection handshake failing. # # @since 2.9.0 HANDSHAKE_FAILED = :handshake_failed # @return [ Symbol ] UNKNOWN Indicates that the connection was closed for an unknown reason. # # @since 2.9.0 UNKNOWN = :unknown # @return [ Integer ] connection_id The ID of the connection. # # @since 2.9.0 attr_reader :connection_id # @return [ Symbol ] reason The reason why the connection was closed. # # @since 2.9.0 attr_reader :reason # @return [ Mongo::Address ] address The address of the server the pool's connections will # connect to. # # @since 2.9.0 attr_reader :address # Create the event. # # @example Create the event. # ConnectionClosed.new(address, id, reason) # # @since 2.9.0 # @api private def initialize(address, id, reason) @reason = reason @address = address @connection_id = id end # Returns a concise yet useful summary of the event. # # @return [ String ] String summary of the event. # # @note This method is experimental and subject to change. # # @since 2.9.0 # @api experimental def summary "#<#{self.class.name.sub(/^Mongo::Monitoring::Event::Cmap::/, '')} " + "address=#{address} connection_id=#{connection_id} reason=#{reason}>" end end end end end end mongo-ruby-driver-2.21.3/lib/mongo/monitoring/event/cmap/connection_created.rb000066400000000000000000000037011505113246500274570ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2019-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the 'License'); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an 'AS IS' BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Monitoring module Event module Cmap # Event published when a connection is created. # # @since 2.9.0 class ConnectionCreated < Base # @return [ Mongo::Address ] address The address of the server the connection will connect # to. # # @since 2.9.0 attr_reader :address # @return [ Integer ] connection_id The ID of the connection. # # @since 2.9.0 attr_reader :connection_id # Create the event. # # @example Create the event. # ConnectionCreated.new(address, id) # # @since 2.9.0 # @api private def initialize(address, id) @address = address @connection_id = id end # Returns a concise yet useful summary of the event. # # @return [ String ] String summary of the event. # # @note This method is experimental and subject to change. 
# # @since 2.9.0 # @api experimental def summary "#<#{self.class.name.sub(/^Mongo::Monitoring::Event::Cmap::/, '')} " + "address=#{address} connection_id=#{connection_id}>" end end end end end end mongo-ruby-driver-2.21.3/lib/mongo/monitoring/event/cmap/connection_ready.rb000066400000000000000000000037251505113246500271620ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2019-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the 'License'); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an 'AS IS' BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Monitoring module Event module Cmap # Event published when a connection is ready to be used for operations. # # @since 2.9.0 class ConnectionReady < Base # @return [ Mongo::Address ] address The address of the server the connection is connected # to. # # @since 2.9.0 attr_reader :address # @return [ Integer ] connection_id The ID of the connection. # # @since 2.9.0 attr_reader :connection_id # Create the event. # # @example Create the event. # ConnectionReady.new(address, id) # # @since 2.9.0 # @api private def initialize(address, id) @address = address @connection_id = id end # Returns a concise yet useful summary of the event. # # @return [ String ] String summary of the event. # # @note This method is experimental and subject to change. # # @since 2.9.0 # @api experimental def summary "#<#{self.class.name.sub(/^Mongo::Monitoring::Event::Cmap::/, '')} " + "address=#{address} connection_id=#{connection_id}>" end end end end end end mongo-ruby-driver-2.21.3/lib/mongo/monitoring/event/cmap/pool_cleared.rb000066400000000000000000000043211505113246500262600ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2019-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the 'License'); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an 'AS IS' BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Monitoring module Event module Cmap # Event published when a connection pool is cleared. # # @since 2.9.0 class PoolCleared < Base # @return [ Mongo::Address ] address The address of the server the pool's connections will # connect to. # # @since 2.9.0 attr_reader :address # @return [ nil | Object ] The service id, if any. attr_reader :service_id # @return [ Hash ] options The options attr_reader :options # Create the event. # # @param [ Address ] address # @param [ Object ] service_id The service id, if any. # @param [ true | false | nil ] interrupt_in_use_connections The # interrupt_in_use_connections flag, if given. 
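# (Note: per the initializer below, the flag is surfaced to subscribers
# through #options, i.e.
#   event.options[:interrupt_in_use_connections] #=> true | false | nil
# rather than as a dedicated reader.)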
# # @api private def initialize(address, service_id: nil, interrupt_in_use_connections: nil) @address = address @service_id = service_id @options = {} @options[:interrupt_in_use_connections] = interrupt_in_use_connections end # Returns a concise yet useful summary of the event. # # @return [ String ] String summary of the event. # # @note This method is experimental and subject to change. # # @since 2.9.0 # @api experimental def summary "#<#{self.class.name.sub(/^Mongo::Monitoring::Event::Cmap::/, '')} address=#{address}>" end end end end end end mongo-ruby-driver-2.21.3/lib/mongo/monitoring/event/cmap/pool_closed.rb000066400000000000000000000037311505113246500261360ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2019-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the 'License'); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an 'AS IS' BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Monitoring module Event module Cmap # Event published when a connection pool is closed. # # @since 2.9.0 class PoolClosed < Base # @return [ Mongo::Address ] address The address of the server the pool's connections will # connect to. # # @since 2.9.0 attr_reader :address # @return [ Mongo::Server::ConnectionPool ] pool The pool that was closed. # # @since 2.11.0 # @api experimental attr_reader :pool # Create the event. # # @example Create the event. # PoolClosed.new(address, pool) # # @since 2.9.0 # @api private def initialize(address, pool) @address = address @pool = pool end # Returns a concise yet useful summary of the event. # # @return [ String ] String summary of the event. # # @note This method is experimental and subject to change. # # @since 2.9.0 # @api experimental def summary "#<#{self.class.name.sub(/^Mongo::Monitoring::Event::Cmap::/, '')} " + "address=#{address} pool=0x#{pool.object_id}>" end end end end end end mongo-ruby-driver-2.21.3/lib/mongo/monitoring/event/cmap/pool_created.rb000066400000000000000000000043141505113246500262720ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2019-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the 'License'); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an 'AS IS' BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Monitoring module Event module Cmap # Event published when a connection pool is created. # # @since 2.9.0 class PoolCreated < Base # @return [ Mongo::Address ] address The address of the server the pool's connections will # connect to. # # @since 2.9.0 attr_reader :address # @return [ Hash ] options Options specified for pool creation. # # @since 2.9.0 attr_reader :options # @return [ Mongo::Server::ConnectionPool ] pool The pool that was just # created. 
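# (Usage sketch: subscribers can correlate later events with this pool by
# identity, e.g. keying a Hash on event.pool.object_id, the same id that
# #summary prints; an illustration, not driver API.)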
# # @since 2.11.0 # @api experimental attr_reader :pool # Create the event. # # @example Create the event. # PoolCreated.new(address, options, pool) # # @since 2.9.0 # @api private def initialize(address, options, pool) @address = address @options = options.dup.freeze @pool = pool end # Returns a concise yet useful summary of the event. # # @return [ String ] String summary of the event. # # @note This method is experimental and subject to change. # # @since 2.9.0 # @api experimental def summary "#<#{self.class.name.sub(/^Mongo::Monitoring::Event::Cmap::/, '')} " + "address=#{address} options=#{options} pool=0x#{pool.object_id}>" end end end end end end mongo-ruby-driver-2.21.3/lib/mongo/monitoring/event/cmap/pool_ready.rb000066400000000000000000000040641505113246500257710ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2019-present MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the 'License'); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an 'AS IS' BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Monitoring module Event module Cmap # Event published when a connection pool is marked ready. class PoolReady < Base # @return [ Mongo::Address ] address The address of the server the pool's connections will # connect to. attr_reader :address # @return [ Hash ] options Options specified for pool creation. attr_reader :options # @return [ Mongo::Server::ConnectionPool ] pool The pool that was just # created. # # @api experimental attr_reader :pool # Create the event. # # @example Create the event. # PoolCreated.new(address, options, pool) # # @since 2.9.0 # @api private def initialize(address, options, pool) @address = address @options = options.dup.freeze @pool = pool end # Returns a concise yet useful summary of the event. # # @return [ String ] String summary of the event. # # @note This method is experimental and subject to change. # # @api experimental def summary "#<#{self.class.name.sub(/^Mongo::Monitoring::Event::Cmap::/, '')} " + "address=#{address} options=#{options} pool=0x#{pool.object_id}>" end end end end end end mongo-ruby-driver-2.21.3/lib/mongo/monitoring/event/command_failed.rb000066400000000000000000000127411505113246500256370ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2015-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the 'License'); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an 'AS IS' BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Monitoring module Event # Event that is fired when a command operation fails. # # @since 2.1.0 class CommandFailed < Mongo::Event::Base include Secure # @return [ Server::Address ] address The server address. 
attr_reader :address # @return [ String ] command_name The name of the command. attr_reader :command_name # @return [ String ] database_name The name of the database_name. attr_reader :database_name # @return [ Float ] duration The duration of the command in seconds. attr_reader :duration # @return [ BSON::Document ] failure The error document, if present. # This will only be filled out for errors communicated by a # MongoDB server. In other situations, for example in case of # a network error, this attribute may be nil. attr_reader :failure # @return [ String ] message The error message. Unlike the error # document, the error message should always be present. attr_reader :message # @return [ Integer ] operation_id The operation id. attr_reader :operation_id # @return [ Integer ] request_id The request id. attr_reader :request_id # @return [ Integer ] server_connection_id The server connection id. attr_reader :server_connection_id # @return [ nil | Object ] The service id, if any. attr_reader :service_id # @return [ Monitoring::Event::CommandStarted ] started_event The corresponding # started event. # # @api private attr_reader :started_event # Create the new event. # # @example Create the event. # # @param [ String ] command_name The name of the command. # @param [ String ] database_name The database_name name. # @param [ Server::Address ] address The server address. # @param [ Integer ] request_id The request id. # @param [ Integer ] operation_id The operation id. # @param [ String ] message The error message. # @param [ BSON::Document ] failure The error document, if any. # @param [ Float ] duration The duration the command took in seconds. # @param [ Monitoring::Event::CommandStarted ] started_event The corresponding # started event. # @param [ Object ] service_id The service id, if any. # # @api private def initialize(command_name, database_name, address, request_id, operation_id, message, failure, duration, started_event:, server_connection_id: nil, service_id: nil ) @command_name = command_name.to_s @database_name = database_name @address = address @request_id = request_id @operation_id = operation_id @service_id = service_id @message = message @started_event = started_event @failure = redacted(command_name, failure) @duration = duration @server_connection_id = server_connection_id end # Returns a concise yet useful summary of the event. # # @return [ String ] String summary of the event. # # @note This method is experimental and subject to change. # # @api experimental def summary "#<#{short_class_name} address=#{address} #{database_name}.#{command_name}>" end # Create the event from a wire protocol message payload. # # @example Create the event. # CommandFailed.generate(address, 1, payload, duration) # # @param [ Server::Address ] address The server address. # @param [ Integer ] operation_id The operation id. # @param [ Hash ] payload The message payload. # @param [ String ] message The error message. # @param [ BSON::Document ] failure The error document, if any. # @param [ Float ] duration The duration of the command in seconds. # @param [ Monitoring::Event::CommandStarted ] started_event The corresponding # started event. # @param [ Object ] service_id The service id, if any. # # @return [ CommandFailed ] The event. 
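# (The payload argument is the wire protocol payload hash assembled by
# the driver; per the body below, only its :command_name, :database_name
# and :request_id entries are read.)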
# # @since 2.1.0 # @api private def self.generate(address, operation_id, payload, message, failure, duration, started_event:, server_connection_id: nil, service_id: nil ) new( payload[:command_name], payload[:database_name], address, payload[:request_id], operation_id, message, failure, duration, started_event: started_event, server_connection_id: server_connection_id, service_id: service_id, ) end end end end end mongo-ruby-driver-2.21.3/lib/mongo/monitoring/event/command_started.rb000066400000000000000000000146541505113246500260660ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2015-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the 'License'); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an 'AS IS' BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Monitoring module Event # Event that is fired when a command operation starts. # # @since 2.1.0 class CommandStarted < Mongo::Event::Base include Secure # @return [ Server::Address ] address The server address. attr_reader :address # @return [ BSON::Document ] command The command arguments. attr_reader :command # @return [ String ] command_name The name of the command. attr_reader :command_name # @return [ String ] database_name The name of the database_name. attr_reader :database_name # @return [ Integer ] operation_id The operation id. attr_reader :operation_id # @return [ Integer ] request_id The request id. attr_reader :request_id # @return [ nil | Object ] The service id, if any. attr_reader :service_id # object_id of the socket object used for this command. # # @api private attr_reader :socket_object_id # @api private attr_reader :connection_generation # @return [ Integer ] The ID for the connection over which the command # is sent. # # @api private attr_reader :connection_id # @return [ Integer ] server_connection_id The server connection id. attr_reader :server_connection_id # @return [ true | false ] Whether the event contains sensitive data. # # @api private attr_reader :sensitive # Create the new event. # # @example Create the event. # # @param [ String ] command_name The name of the command. # @param [ String ] database_name The database_name name. # @param [ Server::Address ] address The server address. # @param [ Integer ] request_id The request id. # @param [ Integer ] operation_id The operation id. # @param [ BSON::Document ] command The command arguments. # @param [ Object ] service_id The service id, if any. 
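        #
        # @example Inspecting started commands from a subscriber. A sketch
        #   only: the CommandLogger handler is hypothetical. Note that
        #   sensitive commands (e.g. saslStart) arrive already redacted via
        #   Secure#redacted, so #command may be an empty document:
        #
        #   class CommandLogger
        #     def started(event)
        #       puts "#{event.database_name}.#{event.command_name}: " \
        #         "#{event.command.inspect}"
        #     end
        #     def succeeded(event); end
        #     def failed(event); end
        #   end
        #   Mongo::Monitoring::Global.subscribe(
        #     Mongo::Monitoring::COMMAND, CommandLogger.new)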
        #
        # @since 2.1.0
        # @api private
        def initialize(command_name, database_name, address, request_id,
          operation_id, command, socket_object_id: nil, connection_id: nil,
          connection_generation: nil, server_connection_id: nil,
          service_id: nil
        )
          @command_name = command_name.to_s
          @database_name = database_name
          @address = address
          @request_id = request_id
          @operation_id = operation_id
          @service_id = service_id
          @sensitive = sensitive?(
            command_name: @command_name,
            document: command
          )
          @command = redacted(command_name, command)
          @socket_object_id = socket_object_id
          @connection_id = connection_id
          @connection_generation = connection_generation
          @server_connection_id = server_connection_id
        end

        # Returns a concise yet useful summary of the event.
        #
        # @return [ String ] String summary of the event.
        #
        # @note This method is experimental and subject to change.
        #
        # @api experimental
        def summary
          "#<#{short_class_name} address=#{address} #{database_name}.#{command_name} command=#{command_summary}>"
        end

        # Returns the command, formatted as a string, with automatically added
        # keys elided ($clusterTime, lsid, signature).
        #
        # @return [ String ] The command summary.
        private def command_summary
          command = self.command
          remove_keys = %w($clusterTime lsid signature)
          if remove_keys.any? { |k| command.key?(k) }
            command = Hash[command.reject { |k, v| remove_keys.include?(k) }]
            suffix = ' ...'
          else
            suffix = ''
          end
          command.map do |k, v|
            "#{k}=#{v.inspect}"
          end.join(' ') + suffix
        end

        # Create the event from a wire protocol message payload.
        #
        # @example Create the event.
        #   CommandStarted.generate(address, 1, payload)
        #
        # @param [ Server::Address ] address The server address.
        # @param [ Integer ] operation_id The operation id.
        # @param [ Hash ] payload The message payload.
        # @param [ Object ] service_id The service id, if any.
        #
        # @return [ CommandStarted ] The event.
        #
        # @since 2.1.0
        # @api private
        def self.generate(address, operation_id, payload,
          socket_object_id: nil, connection_id: nil, connection_generation: nil,
          server_connection_id: nil, service_id: nil
        )
          new(
            payload[:command_name],
            payload[:database_name],
            address,
            payload[:request_id],
            operation_id,
            # All op_msg payloads have a $db field. Legacy payloads do not
            # have a $db field. To emulate op_msg when publishing command
            # monitoring events for legacy servers, add $db to the payload,
            # copying the database name. Note that the database name is also
            # available as a top-level attribute on the command started event.
            payload[:command].merge('$db' => payload[:database_name]),
            socket_object_id: socket_object_id,
            connection_id: connection_id,
            connection_generation: connection_generation,
            server_connection_id: server_connection_id,
            service_id: service_id,
          )
        end

        # Returns a string representation of the event, including the full
        # (redacted) command document.
        #
        # @return [ String ] String representation of the event.
        #
        # @since 2.6.0
        def inspect
          "#<#{self.class} #{database_name}.#{command_name} command=#{command}>"
        end
      end
    end
  end
end
mongo-ruby-driver-2.21.3/lib/mongo/monitoring/event/command_succeeded.rb000066400000000000000000000132221505113246500263320ustar00rootroot00000000000000# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2015-2020 MongoDB Inc.
#
# Licensed under the Apache License, Version 2.0 (the 'License');
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an 'AS IS' BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

module Mongo
  class Monitoring
    module Event

      # Event that is fired when a command operation succeeds.
      #
      # @since 2.1.0
      class CommandSucceeded < Mongo::Event::Base
        include Secure

        # @return [ Server::Address ] address The server address.
        attr_reader :address

        # @return [ String ] command_name The name of the command.
        attr_reader :command_name

        # @return [ BSON::Document ] reply The command reply.
        attr_reader :reply

        # @return [ String ] database_name The name of the database.
        attr_reader :database_name

        # @return [ Float ] duration The duration of the command in seconds.
        attr_reader :duration

        # @return [ Integer ] operation_id The operation id.
        attr_reader :operation_id

        # @return [ Integer ] request_id The request id.
        attr_reader :request_id

        # @return [ Integer ] server_connection_id The server connection id.
        attr_reader :server_connection_id

        # @return [ nil | Object ] The service id, if any.
        attr_reader :service_id

        # @return [ Monitoring::Event::CommandStarted ] started_event The corresponding
        #   started event.
        #
        # @api private
        attr_reader :started_event

        # Create the new event.
        #
        # @example Create the event.
        #
        # @param [ String ] command_name The name of the command.
        # @param [ String ] database_name The database name.
        # @param [ Server::Address ] address The server address.
        # @param [ Integer ] request_id The request id.
        # @param [ Integer ] operation_id The operation id.
        # @param [ BSON::Document ] reply The command reply.
        # @param [ Float ] duration The duration the command took in seconds.
        # @param [ Monitoring::Event::CommandStarted ] started_event The corresponding
        #   started event.
        # @param [ Object ] service_id The service id, if any.
        #
        # @since 2.1.0
        # @api private
        def initialize(command_name, database_name, address, request_id,
          operation_id, reply, duration, started_event:,
          server_connection_id: nil, service_id: nil
        )
          @command_name = command_name.to_s
          @database_name = database_name
          @address = address
          @request_id = request_id
          @operation_id = operation_id
          @service_id = service_id
          @started_event = started_event
          @reply = redacted(command_name, reply)
          @duration = duration
          @server_connection_id = server_connection_id
        end

        # Returns a concise yet useful summary of the event.
        #
        # @return [ String ] String summary of the event.
        #
        # @note This method is experimental and subject to change.
        #
        # @api experimental
        def summary
          "#<#{short_class_name} address=#{address} #{database_name}.#{command_name}>"
        end

        # Create the event from a wire protocol message payload.
        #
        # @example Create the event.
        #   CommandSucceeded.generate(address, 1, command_payload, reply_payload, 0.5)
        #
        # @param [ Server::Address ] address The server address.
        # @param [ Integer ] operation_id The operation id.
        # @param [ Hash ] command_payload The command message payload.
        # @param [ Hash ] reply_payload The reply message payload.
        # @param [ Float ] duration The duration of the command in seconds.
        # @param [ Monitoring::Event::CommandStarted ] started_event The corresponding
        #   started event.
        # @param [ Object ] service_id The service id, if any.
        #
        # @return [ CommandSucceeded ] The event.
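        #
        # @example Timing successful commands from a subscriber. A sketch
        #   only: the SlowCommandLogger handler and the 0.1 second threshold
        #   are hypothetical; #duration and #reply are real attributes of
        #   this class:
        #
        #   class SlowCommandLogger
        #     def started(event); end
        #     def failed(event); end
        #     def succeeded(event)
        #       if event.duration > 0.1
        #         puts "slow #{event.command_name}: #{event.duration}s, " \
        #           "ok=#{event.reply['ok']}"
        #       end
        #     end
        #   end
        #   Mongo::Monitoring::Global.subscribe(
        #     Mongo::Monitoring::COMMAND, SlowCommandLogger.new)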
# # @since 2.1.0 # @api private def self.generate(address, operation_id, command_payload, reply_payload, duration, started_event:, server_connection_id: nil, service_id: nil ) new( command_payload[:command_name], command_payload[:database_name], address, command_payload[:request_id], operation_id, generate_reply(command_payload, reply_payload), duration, started_event: started_event, server_connection_id: server_connection_id, service_id: service_id, ) end private def self.generate_reply(command_payload, reply_payload) if reply_payload reply = reply_payload[:reply] if cursor = reply[:cursor] if !cursor.key?(Collection::NS) cursor.merge!(Collection::NS => namespace(command_payload)) end end reply else BSON::Document.new(Operation::Result::OK => 1) end end def self.namespace(payload) command = payload[:command] "#{payload[:database_name]}.#{command[:collection] || command.values.first}" end end end end end mongo-ruby-driver-2.21.3/lib/mongo/monitoring/event/secure.rb000066400000000000000000000076421505113246500242070ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2015-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the 'License'); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an 'AS IS' BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Monitoring module Event # Provides behavior to redact sensitive information from commands and # replies. # # @since 2.1.0 module Secure # The list of commands that has the data redacted for security. # # @since 2.1.0 REDACTED_COMMANDS = [ 'authenticate', 'saslStart', 'saslContinue', 'getnonce', 'createUser', 'updateUser', 'copydbgetnonce', 'copydbsaslstart', 'copydb' ].freeze # Check whether the command is sensitive in terms of command monitoring # spec. A command is detected as sensitive if it is in the # list or if it is a hello/legacy hello command, and # speculative authentication is enabled. # # @param [ String, Symbol ] command_name The command name. # @param [ BSON::Document ] document The document. # # @return [ true | false ] Whether the command is sensitive. def sensitive?(command_name:, document:) if REDACTED_COMMANDS.include?(command_name.to_s) true elsif %w(hello ismaster isMaster).include?(command_name.to_s) && document['speculativeAuthenticate'] then # According to Command Monitoring spec,for hello/legacy hello commands # when speculativeAuthenticate is present, their commands AND replies # MUST be redacted from the events. # See https://github.com/mongodb/specifications/blob/master/source/command-logging-and-monitoring/command-logging-and-monitoring.md#security true else false end end # Redact secure information from the document if: # - its command is in the sensitive commands; # - its command is a hello/legacy hello command, and # speculative authentication is enabled; # - corresponding started event is sensitive. # # @example Get the redacted document. # secure.redacted(command_name, document) # # @param [ String, Symbol ] command_name The command name. # @param [ BSON::Document ] document The document. # # @return [ BSON::Document ] The redacted document. 
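      #
      # @example Expected redaction behavior (illustrative values; assumes
      #   the MONGO_RUBY_DRIVER_UNREDACT_EVENTS environment variable is not
      #   set and no sensitive started event is attached):
      #   redacted(:saslStart, BSON::Document.new(saslStart: 1))
      #   # => {} (sensitive command, fully redacted)
      #   redacted(:ping, BSON::Document.new(ping: 1))
      #   # => {"ping"=>1} (returned unchanged)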
# # @since 2.1.0 def redacted(command_name, document) if %w(1 true yes).include?(ENV['MONGO_RUBY_DRIVER_UNREDACT_EVENTS']&.downcase) document elsif respond_to?(:started_event) && started_event.sensitive return BSON::Document.new elsif sensitive?(command_name: command_name, document: document) BSON::Document.new else document end end # Is compression allowed for a given command message. # # @example Determine if compression is allowed for a given command. # secure.compression_allowed?(selector) # # @param [ String, Symbol ] command_name The command name. # # @return [ true, false ] Whether compression can be used. # # @since 2.5.0 def compression_allowed?(command_name) @compression_allowed ||= !REDACTED_COMMANDS.include?(command_name.to_s) end end end end end mongo-ruby-driver-2.21.3/lib/mongo/monitoring/event/server_closed.rb000066400000000000000000000033711505113246500255530ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2016-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the 'License'); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an 'AS IS' BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Monitoring module Event # Event fired when the server is closed. # # @since 2.4.0 class ServerClosed < Mongo::Event::Base # @return [ Address ] address The server address. attr_reader :address # @return [ Topology ] topology The topology. attr_reader :topology # Create the event. # # @example Create the event. # ServerClosed.new(address) # # @param [ Address ] address The server address. # @param [ Integer ] topology The topology. # # @since 2.4.0 def initialize(address, topology) @address = address @topology = topology end # Returns a concise yet useful summary of the event. # # @return [ String ] String summary of the event. # # @note This method is experimental and subject to change. # # @since 2.7.0 # @api experimental def summary "#<#{short_class_name}" + " address=#{address} topology=#{topology.summary}>" end end end end end mongo-ruby-driver-2.21.3/lib/mongo/monitoring/event/server_description_changed.rb000066400000000000000000000061011505113246500302700ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2016-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the 'License'); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an 'AS IS' BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Monitoring module Event # Event fired when a server's description changes. # # @since 2.4.0 class ServerDescriptionChanged < Mongo::Event::Base # @return [ Address ] address The server address. attr_reader :address # @return [ Topology ] topology The topology. 
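      #
      # @example Watching description changes from a subscriber. A sketch
      #   only: the DescriptionWatcher class is hypothetical; the topic
      #   constant and the event attributes it reads are the driver's own:
      #
      #   class DescriptionWatcher
      #     def succeeded(event)
      #       puts "#{event.address}: #{event.previous_description.server_type} -> " \
      #         "#{event.new_description.server_type}"
      #     end
      #   end
      #   Mongo::Monitoring::Global.subscribe(
      #     Mongo::Monitoring::SERVER_DESCRIPTION_CHANGED, DescriptionWatcher.new)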
        attr_reader :topology

        # @return [ Server::Description ] previous_description The previous server
        #   description.
        attr_reader :previous_description

        # @return [ Server::Description ] new_description The new server
        #   description.
        attr_reader :new_description

        # @return [ true | false ] Whether the description change resulted
        #   from an awaited hello response.
        #
        # @api experimental
        def awaited?
          @awaited
        end

        # Create the event.
        #
        # @example Create the event.
        #   ServerDescriptionChanged.new(address, topology, previous, new)
        #
        # @param [ Address ] address The server address.
        # @param [ Topology ] topology The topology.
        # @param [ Server::Description ] previous_description The previous description.
        # @param [ Server::Description ] new_description The new description.
        # @param [ true | false ] awaited Whether the server description was
        #   a result of processing an awaited hello response.
        #
        # @since 2.4.0
        # @api private
        def initialize(address, topology, previous_description, new_description,
          awaited: false
        )
          @address = address
          @topology = topology
          @previous_description = previous_description
          @new_description = new_description
          @awaited = !!awaited
        end

        # Returns a concise yet useful summary of the event.
        #
        # @return [ String ] String summary of the event.
        #
        # @note This method is experimental and subject to change.
        #
        # @since 2.7.0
        # @api experimental
        def summary
          "#<#{short_class_name}" +
            " address=#{address}" +
            # TODO Add summaries to descriptions and use them here
            " prev=#{previous_description.server_type.upcase} new=#{new_description.server_type.upcase}#{awaited_indicator}>"
        end

        private

        def awaited_indicator
          if awaited?
            ' [awaited]'
          else
            ''
          end
        end
      end
    end
  end
end
mongo-ruby-driver-2.21.3/lib/mongo/monitoring/event/server_heartbeat_failed.rb000066400000000000000000000054641505113246500275500ustar00rootroot00000000000000# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2018-2020 MongoDB Inc.
#
# Licensed under the Apache License, Version 2.0 (the 'License');
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an 'AS IS' BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

module Mongo
  class Monitoring
    module Event

      # Event fired when a server heartbeat fails.
      #
      # @since 2.7.0
      class ServerHeartbeatFailed < Mongo::Event::Base

        # Create the event.
        #
        # @example Create the event.
        #   ServerHeartbeatFailed.new(address, round_trip_time, error)
        #
        # @param [ Address ] address The server address.
        # @param [ Float ] round_trip_time Duration of hello call in seconds.
        # @param [ Exception ] error The error raised by the hello call.
        # @param [ true | false ] awaited Whether the heartbeat was awaited.
        # @param [ Monitoring::Event::ServerHeartbeatStarted ] started_event
        #   The corresponding started event.
        #
        # @since 2.7.0
        # @api private
        def initialize(address, round_trip_time, error, awaited: false,
          started_event:
        )
          @address = address
          @round_trip_time = round_trip_time
          @error = error
          @awaited = !!awaited
          @started_event = started_event
        end

        # @return [ Address ] address The server address.
        attr_reader :address

        # @return [ Float ] round_trip_time Duration of hello call in seconds.
        attr_reader :round_trip_time

        # Alias of round_trip_time.
        alias :duration :round_trip_time

        # @return [ Exception ] error The exception that occurred in the hello call.
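        #
        # @example Tracking failed heartbeats from a subscriber. A sketch
        #   only: the HeartbeatWatcher class is hypothetical; the
        #   SERVER_HEARTBEAT topic and the started/succeeded/failed callback
        #   contract are the driver's own:
        #
        #   class HeartbeatWatcher
        #     def started(event); end
        #     def succeeded(event); end
        #     def failed(event)
        #       warn "heartbeat to #{event.address} failed: #{event.error.class}"
        #     end
        #   end
        #   Mongo::Monitoring::Global.subscribe(
        #     Mongo::Monitoring::SERVER_HEARTBEAT, HeartbeatWatcher.new)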
attr_reader :error # Alias of error for SDAM spec compliance. alias :failure :error # @return [ true | false ] Whether the heartbeat was awaited. def awaited? @awaited end # @return [ Monitoring::Event::ServerHeartbeatStarted ] # The corresponding started event. # # @api experimental attr_reader :started_event # Returns a concise yet useful summary of the event. # # @return [ String ] String summary of the event. # # @note This method is experimental and subject to change. # # @since 2.7.0 # @api experimental def summary "#<#{short_class_name}" + " address=#{address}" + " error=#{error.inspect}>" end end end end end mongo-ruby-driver-2.21.3/lib/mongo/monitoring/event/server_heartbeat_started.rb000066400000000000000000000035301505113246500277640ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2018-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the 'License'); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an 'AS IS' BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Monitoring module Event # Event fired when a server heartbeat is dispatched. # # @since 2.7.0 class ServerHeartbeatStarted < Mongo::Event::Base # @return [ Address ] address The server address. attr_reader :address # @return [ true | false ] Whether the heartbeat was awaited. def awaited? @awaited end # Create the event. # # @example Create the event. # ServerHeartbeatStarted.new(address) # # @param [ Address ] address The server address. # @param [ true | false ] awaited Whether the heartbeat was awaited. # # @since 2.7.0 # @api private def initialize(address, awaited: false) @address = address @awaited = !!awaited end # Returns a concise yet useful summary of the event. # # @return [ String ] String summary of the event. # # @note This method is experimental and subject to change. # # @since 2.7.0 # @api experimental def summary "#<#{short_class_name}" + " address=#{address}>" end end end end end mongo-ruby-driver-2.21.3/lib/mongo/monitoring/event/server_heartbeat_succeeded.rb000066400000000000000000000050621505113246500302440ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2018-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the 'License'); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an 'AS IS' BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Monitoring module Event # Event fired when a server heartbeat is dispatched. # # @since 2.7.0 class ServerHeartbeatSucceeded < Mongo::Event::Base # Create the event. # # @example Create the event. # ServerHeartbeatSucceeded.new(address, duration) # # @param [ Address ] address The server address. # @param [ Float ] round_trip_time Duration of hello call in seconds. 
# @param [ true | false ] awaited Whether the heartbeat was awaited. # @param [ Monitoring::Event::ServerHeartbeatStarted ] started_event # The corresponding started event. # # @since 2.7.0 # @api private def initialize(address, round_trip_time, awaited: false, started_event: ) @address = address @round_trip_time = round_trip_time @awaited = !!awaited @started_event = started_event end # @return [ Address ] address The server address. attr_reader :address # @return [ Float ] round_trip_time Duration of hello call in seconds. attr_reader :round_trip_time # Alias of round_trip_time. alias :duration :round_trip_time # @return [ true | false ] Whether the heartbeat was awaited. def awaited? @awaited end # @return [ Monitoring::Event::ServerHeartbeatStarted ] # The corresponding started event. # # @api experimental attr_reader :started_event # Returns a concise yet useful summary of the event. # # @return [ String ] String summary of the event. # # @note This method is experimental and subject to change. # # @since 2.7.0 # @api experimental def summary "#<#{short_class_name}" + " address=#{address}>" end end end end end mongo-ruby-driver-2.21.3/lib/mongo/monitoring/event/server_opening.rb000066400000000000000000000033741505113246500257440ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2016-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the 'License'); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an 'AS IS' BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Monitoring module Event # Event fired when the server is opening. # # @since 2.4.0 class ServerOpening < Mongo::Event::Base # @return [ Address ] address The server address. attr_reader :address # @return [ Topology ] topology The topology. attr_reader :topology # Create the event. # # @example Create the event. # ServerOpening.new(address) # # @param [ Address ] address The server address. # @param [ Integer ] topology The topology. # # @since 2.4.0 def initialize(address, topology) @address = address @topology = topology end # Returns a concise yet useful summary of the event. # # @return [ String ] String summary of the event. # # @note This method is experimental and subject to change. # # @since 2.7.0 # @api experimental def summary "#<#{short_class_name}" + " address=#{address} topology=#{topology.summary}>" end end end end end mongo-ruby-driver-2.21.3/lib/mongo/monitoring/event/topology_changed.rb000066400000000000000000000036531505113246500262440ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2016-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the 'License'); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an 'AS IS' BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
# See the License for the specific language governing permissions and # limitations under the License. module Mongo class Monitoring module Event # Event fired when the topology changes. # # @since 2.4.0 class TopologyChanged < Mongo::Event::Base # @return [ Cluster::Topology ] previous_topology The previous topology. attr_reader :previous_topology # @return [ Cluster::Topology ] new_topology The new topology. attr_reader :new_topology # Create the event. # # @example Create the event. # TopologyChanged.new(previous, new) # # @param [ Cluster::Topology ] previous_topology The previous topology. # @param [ Cluster::Topology ] new_topology The new topology. # # @since 2.4.0 def initialize(previous_topology, new_topology) @previous_topology = previous_topology @new_topology = new_topology end # Returns a concise yet useful summary of the event. # # @return [ String ] String summary of the event. # # @note This method is experimental and subject to change. # # @since 2.7.0 # @api experimental def summary "#<#{short_class_name}" + " prev=#{previous_topology.summary}" + " new=#{new_topology.summary}>" end end end end end mongo-ruby-driver-2.21.3/lib/mongo/monitoring/event/topology_closed.rb000066400000000000000000000030631505113246500261170ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2016-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the 'License'); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an 'AS IS' BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Monitoring module Event # Event fired when the topology closes. # # @since 2.4.0 class TopologyClosed < Mongo::Event::Base # @return [ Topology ] topology The topology. attr_reader :topology # Create the event. # # @example Create the event. # TopologyClosed.new(topology) # # @param [ Integer ] topology The topology. # # @since 2.4.0 def initialize(topology) @topology = topology end # Returns a concise yet useful summary of the event. # # @return [ String ] String summary of the event. # # @note This method is experimental and subject to change. # # @since 2.7.0 # @api experimental def summary "#<#{short_class_name}" + " topology=#{topology.summary}>" end end end end end mongo-ruby-driver-2.21.3/lib/mongo/monitoring/event/topology_opening.rb000066400000000000000000000030711505113246500263040ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2016-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the 'License'); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an 'AS IS' BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Monitoring module Event # Event fired when the topology is opening. 
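      #
      # @example Logging topology lifecycle events from a subscriber. A
      #   sketch only: the TopologyLogger class is hypothetical; the topic
      #   constant is the driver's own, and SDAM subscribers are invoked via
      #   their #succeeded method:
      #
      #   class TopologyLogger
      #     def succeeded(event)
      #       puts "SDAM: #{event.summary}"
      #     end
      #   end
      #   Mongo::Monitoring::Global.subscribe(
      #     Mongo::Monitoring::TOPOLOGY_OPENING, TopologyLogger.new)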
# # @since 2.4.0 class TopologyOpening < Mongo::Event::Base # @return [ Topology ] topology The topology. attr_reader :topology # Create the event. # # @example Create the event. # TopologyOpening.new(topology) # # @param [ Integer ] topology The topology. # # @since 2.4.0 def initialize(topology) @topology = topology end # Returns a concise yet useful summary of the event. # # @return [ String ] String summary of the event. # # @note This method is experimental and subject to change. # # @since 2.7.0 # @api experimental def summary "#<#{short_class_name}" + " topology=#{topology.summary}>" end end end end end mongo-ruby-driver-2.21.3/lib/mongo/monitoring/publishable.rb000066400000000000000000000076111505113246500240660ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2015-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the 'License'); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an 'AS IS' BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Monitoring # Defines behavior for an object that can publish monitoring events. # # @since 2.1.0 module Publishable include Loggable # @return [ Monitoring ] monitoring The monitoring. attr_reader :monitoring # @deprecated def publish_event(topic, event) monitoring.succeeded(topic, event) end def publish_sdam_event(topic, event) return unless monitoring? monitoring.succeeded(topic, event) end def publish_cmap_event(event) return unless monitoring? monitoring.published(Monitoring::CONNECTION_POOL, event) end private def command_started(address, operation_id, payload, socket_object_id: nil, connection_id: nil, connection_generation: nil, server_connection_id: nil, service_id: nil ) event = Event::CommandStarted.generate(address, operation_id, payload, socket_object_id: socket_object_id, connection_id: connection_id, connection_generation: connection_generation, server_connection_id: server_connection_id, service_id: service_id, ) monitoring.started( Monitoring::COMMAND, event ) event end def command_completed(result, address, operation_id, payload, duration, started_event:, server_connection_id: nil, service_id: nil ) document = result ? (result.documents || []).first : nil if document && (document['ok'] && document['ok'] != 1 || document.key?('$err')) parser = Error::Parser.new(document) command_failed(document, address, operation_id, payload, parser.message, duration, started_event: started_event, server_connection_id: server_connection_id, service_id: service_id, ) else command_succeeded(result, address, operation_id, payload, duration, started_event: started_event, server_connection_id: server_connection_id, service_id: service_id, ) end end def command_succeeded(result, address, operation_id, payload, duration, started_event:, server_connection_id: nil, service_id: nil ) monitoring.succeeded( Monitoring::COMMAND, Event::CommandSucceeded.generate( address, operation_id, payload, result ? 
result.payload : nil, duration, started_event: started_event, server_connection_id: server_connection_id, service_id: service_id, ) ) end def command_failed(failure, address, operation_id, payload, message, duration, started_event:, server_connection_id: nil, service_id: nil ) monitoring.failed( Monitoring::COMMAND, Event::CommandFailed.generate(address, operation_id, payload, message, failure, duration, started_event: started_event, server_connection_id: server_connection_id, service_id: service_id, ) ) end def duration(start) Time.now - start end def monitoring? options[:monitoring] != false end end end end mongo-ruby-driver-2.21.3/lib/mongo/monitoring/sdam_log_subscriber.rb000066400000000000000000000027571505113246500256120ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2016-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the 'License'); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an 'AS IS' BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Monitoring # Subscribes to SDAM events and logs them. # # @since 2.4.0 class SDAMLogSubscriber include Loggable # @return [ Hash ] options The options. attr_reader :options # Create the new log subscriber. # # @example Create the log subscriber. # SDAMLogSubscriber.new # # @param [ Hash ] options The options. # # @option options [ Logger ] :logger An optional custom logger. # # @since 2.4.0 def initialize(options = {}) @options = options end # Handle the SDAM succeeded event. # # @example Handle the event. # subscriber.succeeded(event) # # @param [ Event ] event The event. # # @since 2.4.0 def succeeded(event) log_event(event) if logger.debug? end end end end mongo-ruby-driver-2.21.3/lib/mongo/monitoring/server_closed_log_subscriber.rb000066400000000000000000000016531505113246500275170ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2016-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the 'License'); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an 'AS IS' BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Monitoring # Subscribes to Server Closed events and logs them. # # @since 2.4.0 class ServerClosedLogSubscriber < SDAMLogSubscriber private def log_event(event) log_debug("Server #{event.address} connection closed.") end end end end mongo-ruby-driver-2.21.3/lib/mongo/monitoring/server_description_changed_log_subscriber.rb000066400000000000000000000023501505113246500322350ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2016-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the 'License'); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an 'AS IS' BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Monitoring # Subscribes to Server Description Changed events and logs them. # # @since 2.4.0 class ServerDescriptionChangedLogSubscriber < SDAMLogSubscriber private def log_event(event) log_debug( "Server description for #{event.address} changed from " + "'#{event.previous_description.server_type}' to '#{event.new_description.server_type}'#{awaited_indicator(event)}." ) end def awaited_indicator(event) if event.awaited? ' [awaited]' else '' end end end end end mongo-ruby-driver-2.21.3/lib/mongo/monitoring/server_opening_log_subscriber.rb000066400000000000000000000016501505113246500277020ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2016-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the 'License'); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an 'AS IS' BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Monitoring # Subscribes to Server Opening events and logs them. # # @since 2.4.0 class ServerOpeningLogSubscriber < SDAMLogSubscriber private def log_event(event) log_debug("Server #{event.address} initializing.") end end end end mongo-ruby-driver-2.21.3/lib/mongo/monitoring/topology_changed_log_subscriber.rb000066400000000000000000000024271505113246500302050ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2016-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the 'License'); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an 'AS IS' BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Monitoring # Subscribes to Topology Changed events and logs them. # # @since 2.4.0 class TopologyChangedLogSubscriber < SDAMLogSubscriber private def log_event(event) if event.previous_topology.class != event.new_topology.class log_debug( "Topology type '#{event.previous_topology.display_name}' changed to " + "type '#{event.new_topology.display_name}'." ) else log_debug( "There was a change in the members of the '#{event.new_topology.display_name}' " + "topology." ) end end end end end mongo-ruby-driver-2.21.3/lib/mongo/monitoring/topology_closed_log_subscriber.rb000066400000000000000000000017041505113246500300620ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2018-2020 MongoDB Inc. 
# # Licensed under the Apache License, Version 2.0 (the 'License'); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an 'AS IS' BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Monitoring # Subscribes to Topology Closed events and logs them. # # @since 2.7.0 class TopologyClosedLogSubscriber < SDAMLogSubscriber private def log_event(event) log_debug("Topology type '#{event.topology.display_name.downcase}' closed.") end end end end mongo-ruby-driver-2.21.3/lib/mongo/monitoring/topology_opening_log_subscriber.rb000066400000000000000000000017141505113246500302510ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2016-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the 'License'); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an 'AS IS' BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Monitoring # Subscribes to Topology Opening events and logs them. # # @since 2.4.0 class TopologyOpeningLogSubscriber < SDAMLogSubscriber private def log_event(event) log_debug("Topology type '#{event.topology.display_name.downcase}' initializing.") end end end end mongo-ruby-driver-2.21.3/lib/mongo/monitoring/unified_sdam_log_subscriber.rb000066400000000000000000000035641505113246500273120ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2019-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the 'License'); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an 'AS IS' BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Monitoring # Subscribes to SDAM events and logs them. # # @since 2.11.0 # @api experimental class UnifiedSdamLogSubscriber include Loggable # @return [ Hash ] options The options. # # @since 2.11.0 attr_reader :options # Create the new log subscriber. # # @param [ Hash ] options The options. # # @option options [ Logger ] :logger An optional custom logger. # # @since 2.11.0 def initialize(options = {}) @options = options end # Handle an event. # # @param [ Event ] event The event. # # @since 2.11.0 def published(event) log_debug("EVENT: #{event.summary}") if logger.debug? 
end alias :succeeded :published def subscribe(client) client.subscribe(Mongo::Monitoring::TOPOLOGY_OPENING, self) client.subscribe(Mongo::Monitoring::SERVER_OPENING, self) client.subscribe(Mongo::Monitoring::SERVER_DESCRIPTION_CHANGED, self) client.subscribe(Mongo::Monitoring::TOPOLOGY_CHANGED, self) client.subscribe(Mongo::Monitoring::SERVER_CLOSED, self) client.subscribe(Mongo::Monitoring::TOPOLOGY_CLOSED, self) end end end end mongo-ruby-driver-2.21.3/lib/mongo/operation.rb000066400000000000000000000067401505113246500214110ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'mongo/operation/context' require 'mongo/operation/result' require 'mongo/operation/shared/response_handling' require 'mongo/operation/shared/executable' require 'mongo/operation/shared/executable_no_validate' require 'mongo/operation/shared/executable_transaction_label' require 'mongo/operation/shared/polymorphic_lookup' require 'mongo/operation/shared/polymorphic_result' require 'mongo/operation/shared/read_preference_supported' require 'mongo/operation/shared/bypass_document_validation' require 'mongo/operation/shared/write_concern_supported' require 'mongo/operation/shared/limited' require 'mongo/operation/shared/sessions_supported' require 'mongo/operation/shared/causal_consistency_supported' require 'mongo/operation/shared/write' require 'mongo/operation/shared/idable' require 'mongo/operation/shared/specifiable' require 'mongo/operation/shared/validatable' require 'mongo/operation/shared/object_id_generator' require 'mongo/operation/shared/op_msg_executable' require 'mongo/operation/shared/timed' require 'mongo/operation/op_msg_base' require 'mongo/operation/command' require 'mongo/operation/write_command' require 'mongo/operation/aggregate' require 'mongo/operation/result' require 'mongo/operation/collections_info' require 'mongo/operation/list_collections' require 'mongo/operation/update' require 'mongo/operation/insert' require 'mongo/operation/delete' require 'mongo/operation/count' require 'mongo/operation/distinct' require 'mongo/operation/create' require 'mongo/operation/drop' require 'mongo/operation/drop_database' require 'mongo/operation/get_more' require 'mongo/operation/find' require 'mongo/operation/explain' require 'mongo/operation/kill_cursors' require 'mongo/operation/indexes' require 'mongo/operation/map_reduce' require 'mongo/operation/users_info' require 'mongo/operation/parallel_scan' require 'mongo/operation/create_user' require 'mongo/operation/update_user' require 'mongo/operation/remove_user' require 'mongo/operation/create_index' require 'mongo/operation/drop_index' require 'mongo/operation/create_search_indexes' require 'mongo/operation/drop_search_index' require 'mongo/operation/update_search_index' module Mongo # This module encapsulates all of the operation classes defined by the driver. # # The operation classes take Ruby options as constructor parameters. # For example, :read contains read preference and :read_concern contains read # concern, whereas server commands use readConcern field for the read # concern and read preference is passed as $readPreference or secondaryOk # wire protocol flag bit. # # @api private module Operation # The q field constant. # # @since 2.1.0 Q = 'q'.freeze # The u field constant. # # @since 2.1.0 U = 'u'.freeze # The limit field constant. # # @since 2.1.0 LIMIT = 'limit'.freeze # The multi field constant. # # @since 2.1.0 MULTI = 'multi'.freeze # The upsert field constant. 
# # @since 2.1.0 UPSERT = 'upsert'.freeze # The collation field constant. # # @since 2.4.0 COLLATION = 'collation'.freeze # The array filters field constant. # # @since 2.5.0 ARRAY_FILTERS = 'arrayFilters'.freeze # The operation time field constant. # # @since 2.5.0 OPERATION_TIME = 'operationTime'.freeze # The cluster time field constant. # # @since 2.5.0 # @deprecated CLUSTER_TIME = '$clusterTime'.freeze end end mongo-ruby-driver-2.21.3/lib/mongo/operation/000077500000000000000000000000001505113246500210555ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/lib/mongo/operation/aggregate.rb000066400000000000000000000021501505113246500233260ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2015-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'mongo/operation/aggregate/op_msg' require 'mongo/operation/aggregate/result' module Mongo module Operation # A MongoDB aggregate operation. # # @note An aggregate operation can behave like a read and return a # result set, or can behave like a write operation and # output results to a user-specified collection. # # @api private # # @since 2.0.0 class Aggregate include Specifiable include OpMsgExecutable end end end mongo-ruby-driver-2.21.3/lib/mongo/operation/aggregate/000077500000000000000000000000001505113246500230035ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/lib/mongo/operation/aggregate/op_msg.rb000066400000000000000000000017301505113246500246150ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2018-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Operation class Aggregate # A MongoDB aggregate operation sent as an op message. # # @api private # # @since 2.5.2 class OpMsg < OpMsgBase include CausalConsistencySupported include ExecutableTransactionLabel include PolymorphicResult end end end end mongo-ruby-driver-2.21.3/lib/mongo/operation/aggregate/result.rb000066400000000000000000000063201505113246500246470ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Operation class Aggregate # Defines custom behavior of results in an aggregation context. # # @since 2.0.0 # @api semiprivate class Result < Operation::Result # The field name for the aggregation explain information. # # @since 2.0.5 # @api private EXPLAIN = 'stages'.freeze # The legacy field name for the aggregation explain information. # # @since 2.0.5 # @api private EXPLAIN_LEGACY = 'serverPipeline'.freeze # Get the cursor id for the result. # # @example Get the cursor id. # result.cursor_id # # @note Even though the wire protocol has a cursor_id field for all # messages of type reply, it is always zero when using the # aggregation framework and must be retrieved from the cursor # document itself. Wahnsinn! # # @return [ Integer ] The cursor id. # # @since 2.0.0 # @api private def cursor_id cursor_document ? cursor_document[CURSOR_ID] : 0 end # Get the post batch resume token for the result # # @return [ BSON::Document | nil ] The post batch resume token # # @api private def post_batch_resume_token cursor_document ? cursor_document['postBatchResumeToken'] : nil end # Get the documents for the aggregation result. This is either the # first document's 'result' field, or if a cursor option was selected, # it is the 'firstBatch' field in the 'cursor' field of the first # document returned. Otherwise, it is an explain document. # # @example Get the documents. # result.documents # # @return [ Array ] The documents. # # @since 2.0.0 # @api public def documents docs = reply.documents[0][RESULT] docs ||= cursor_document[FIRST_BATCH] if cursor_document docs ||= explain_document docs end private # This should only be called on explain responses; it will never # return a nil result and will only be meaningful on explain responses def explain_document first_document[EXPLAIN] || first_document[EXPLAIN_LEGACY] || [first_document] end def cursor_document @cursor_document ||= reply.documents[0][CURSOR] end def first_document @first_document ||= reply.documents[0] end end end end end mongo-ruby-driver-2.21.3/lib/mongo/operation/collections_info.rb000066400000000000000000000020171505113246500247330ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2015-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'mongo/operation/collections_info/result' module Mongo module Operation # A MongoDB operation to get info on all collections in a given database. 
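    #
    # @note Despite the historical name, executing this operation does not
    #   query system.namespaces: as #final_operation below shows, it builds
    #   a ListCollections::OpMsg from the same spec, so a listCollections
    #   command is issued instead.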
# # @api private # # @since 2.0.0 class CollectionsInfo include Specifiable include OpMsgExecutable private def final_operation ListCollections::OpMsg.new(spec) end end end end mongo-ruby-driver-2.21.3/lib/mongo/operation/collections_info/000077500000000000000000000000001505113246500244065ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/lib/mongo/operation/collections_info/result.rb000066400000000000000000000040231505113246500262500ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Operation class CollectionsInfo # Defines custom behavior of results when query the system.namespaces # collection. # # @since 2.1.0 # @api semiprivate class Result < Operation::Result # Initialize a new result. # # @param [ Array | nil ] replies The wire protocol replies, if any. # @param [ Server::Description ] connection_description # Server description of the server that performed the operation that # this result is for. # @param [ Integer ] connection_global_id # Global id of the connection on which the operation that # this result is for was performed. # @param [ String ] database_name The name of the database that the # query was sent to. # # @api private def initialize(replies, connection_description, connection_global_id, database_name) super(replies, connection_description, connection_global_id) @database_name = database_name end # Get the namespace for the cursor. # # @example Get the namespace. # result.namespace # # @return [ String ] The namespace. # # @since 2.1.0 # @api private def namespace "#{@database_name}.#{Database::NAMESPACES}" end end end end end mongo-ruby-driver-2.21.3/lib/mongo/operation/command.rb000066400000000000000000000015761505113246500230310ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2015-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'mongo/operation/command/op_msg' module Mongo module Operation # A MongoDB general command operation. # # @api private # # @since 2.0.0 class Command include Specifiable include OpMsgExecutable end end end mongo-ruby-driver-2.21.3/lib/mongo/operation/command/000077500000000000000000000000001505113246500224735ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/lib/mongo/operation/command/op_msg.rb000066400000000000000000000020111505113246500242760ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2018-2020 MongoDB Inc. 
# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Operation class Command # A MongoDB command operation sent as an op message. # # @api private # # @since 2.5.2 class OpMsg < OpMsgBase def selector(connection) spec[:selector].dup.tap do |sel| sel[:comment] = spec[:comment] unless spec[:comment].nil? end end end end end end mongo-ruby-driver-2.21.3/lib/mongo/operation/context.rb000066400000000000000000000124241505113246500230710ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2021 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Operation # Context for operations. # # Holds various objects needed to make decisions about operation execution # in a single container, and provides facade methods for the contained # objects. # # The context contains parameters for operations, and as such while an # operation is being prepared nothing in the context should change. # When the result of the operation is being processed, the data # returned by the context may change (for example, because a transaction # is aborted), but at that point the operation should no longer read # anything from the context. Because context data may change during # operation execution, context objects should not be reused for multiple # operations. # # @api private class Context < CsotTimeoutHolder def initialize( client: nil, session: nil, connection_global_id: nil, operation_timeouts: {}, view: nil, options: nil ) if options if client raise ArgumentError, 'Client and options cannot both be specified' end if session raise ArgumentError, 'Session and options cannot both be specified' end end if connection_global_id && session&.pinned_connection_global_id raise ArgumentError, 'Trying to pin context to a connection when the session is already pinned to a connection.' end @client = client @session = session @view = view @connection_global_id = connection_global_id @options = options super(session: session, operation_timeouts: operation_timeouts) end attr_reader :client attr_reader :session attr_reader :view attr_reader :options # Returns a new Operation::Context with the deadline refreshed # and relative to the current moment. 
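      #
      # @example Refresh the context before a retry attempt (a minimal
      #   sketch; the timeout value shown is an assumption for
      #   illustration, not a recommended setting).
      #   retry_context = context.refresh(timeout_ms: 5_000)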
# # @return [ Operation::Context ] the refreshed context def refresh(connection_global_id: @connection_global_id, timeout_ms: nil, view: nil) operation_timeouts = @operation_timeouts operation_timeouts = operation_timeouts.merge(operation_timeout_ms: timeout_ms) if timeout_ms self.class.new(client: client, session: session, connection_global_id: connection_global_id, operation_timeouts: operation_timeouts, view: view || self.view, options: options) end def connection_global_id @connection_global_id || session&.pinned_connection_global_id end def in_transaction? session&.in_transaction? || false end def starting_transaction? session&.starting_transaction? || false end def committing_transaction? in_transaction? && session.committing_transaction? end def aborting_transaction? in_transaction? && session.aborting_transaction? end def modern_retry_writes? client && client.options[:retry_writes] end def legacy_retry_writes? client && !client.options[:retry_writes] && client.max_write_retries > 0 end def any_retry_writes? modern_retry_writes? || legacy_retry_writes? end def server_api if client client.options[:server_api] elsif options options[:server_api] end end # Whether the operation is a retry (true) or an initial attempt (false). def retry? !!@is_retry end # Returns a new context with the parameters changed as per the # provided arguments. # # @option opts [ true|false ] :is_retry Whether the operation is a retry # or a first attempt. def with(**opts) dup.tap do |copy| opts.each do |k, v| copy.instance_variable_set("@#{k}", v) end end end def encrypt? client&.encrypter&.encrypt? || false end def encrypt(db_name, cmd) encrypter.encrypt(db_name, cmd, self) end def decrypt? !!client&.encrypter end def decrypt(cmd) encrypter.decrypt(cmd, self) end def encrypter if client&.encrypter client.encrypter else raise Error::InternalDriverError, 'Encrypter should only be accessed when encryption is to be performed' end end def inspect "#<#{self.class} connection_global_id=#{connection_global_id.inspect} deadline=#{deadline.inspect} options=#{options.inspect} operation_timeouts=#{operation_timeouts.inspect}>" end end end end mongo-ruby-driver-2.21.3/lib/mongo/operation/count.rb000066400000000000000000000015701505113246500225350ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2015-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'mongo/operation/count/op_msg' module Mongo module Operation # A MongoDB count command operation. # # @api private # # @since 2.0.0 class Count include Specifiable include OpMsgExecutable end end end mongo-ruby-driver-2.21.3/lib/mongo/operation/count/000077500000000000000000000000001505113246500222055ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/lib/mongo/operation/count/op_msg.rb000066400000000000000000000021051505113246500240140ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2018-2020 MongoDB Inc. 
# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Operation class Count # A MongoDB count operation sent as an op message. # # @api private # # @since 2.5.2 class OpMsg < OpMsgBase include CausalConsistencySupported private def selector(connection) spec[:selector].merge( collation: spec[:collation], comment: spec[:comment], ).compact end end end end end mongo-ruby-driver-2.21.3/lib/mongo/operation/create.rb000066400000000000000000000016061505113246500226500ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2015-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'mongo/operation/create/op_msg' module Mongo module Operation # A MongoDB create collection command operation. # # @api private # # @since 2.0.0 class Create include Specifiable include OpMsgExecutable end end end mongo-ruby-driver-2.21.3/lib/mongo/operation/create/000077500000000000000000000000001505113246500223205ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/lib/mongo/operation/create/op_msg.rb000066400000000000000000000022641505113246500241350ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2018-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Operation class Create # A MongoDB create collection operation sent as an op message. # # @api private # # @since 2.5.2 class OpMsg < OpMsgBase include ExecutableTransactionLabel private def selector(connection) # Collation is always supported on 3.6+ servers that would use OP_MSG. spec[:selector].merge( collation: spec[:collation], encryptedFields: spec[:encrypted_fields], ).compact end end end end end mongo-ruby-driver-2.21.3/lib/mongo/operation/create_index.rb000066400000000000000000000016141505113246500240360ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2015-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'mongo/operation/create_index/op_msg' module Mongo module Operation # A MongoDB create index command operation. # # @api private # # @since 2.0.0 class CreateIndex include Specifiable include OpMsgExecutable end end end mongo-ruby-driver-2.21.3/lib/mongo/operation/create_index/000077500000000000000000000000001505113246500235075ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/lib/mongo/operation/create_index/op_msg.rb000066400000000000000000000035521505113246500253250ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2018-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Operation class CreateIndex # A MongoDB createindex operation sent as an op message. # # @api private # # @since 2.5.2 class OpMsg < OpMsgBase include ExecutableTransactionLabel private def selector(connection) { createIndexes: coll_name, indexes: indexes, comment: spec[:comment], }.compact.tap do |selector| if commit_quorum = spec[:commit_quorum] # While server versions 3.4 and newer generally perform option # validation, there was a bug on server versions 4.2.0 - 4.2.5 where # the server would accept the commitQuorum option and use it internally # (see SERVER-47193). As a result, the drivers specifications require # drivers to perform validation and raise an error when the commitQuorum # option is passed to servers that don't support it. unless connection.features.commit_quorum_enabled? raise Error::UnsupportedOption.commit_quorum_error end selector[:commitQuorum] = commit_quorum end end end end end end end mongo-ruby-driver-2.21.3/lib/mongo/operation/create_search_indexes.rb000066400000000000000000000004571505113246500257170ustar00rootroot00000000000000# frozen_string_literal: true require 'mongo/operation/create_search_indexes/op_msg' module Mongo module Operation # A MongoDB createSearchIndexes command operation. # # @api private class CreateSearchIndexes include Specifiable include OpMsgExecutable end end end mongo-ruby-driver-2.21.3/lib/mongo/operation/create_search_indexes/000077500000000000000000000000001505113246500253645ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/lib/mongo/operation/create_search_indexes/op_msg.rb000066400000000000000000000013751505113246500272030ustar00rootroot00000000000000# frozen_string_literal: true module Mongo module Operation class CreateSearchIndexes # A MongoDB createSearchIndexes operation sent as an op message. # # @api private class OpMsg < OpMsgBase include ExecutableTransactionLabel private # Returns the command to send to the database, describing the # desired createSearchIndexes operation. 
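        #
        # @example Shape of the resulting command (a sketch with assumed
        #   values; the actual names come from the operation spec).
        #   # => { createSearchIndexes: 'bands', :$db => 'music',
        #   #      indexes: [{ name: 'idx', definition: { ... } }] }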
# # @param [ Connection ] _connection the connection that will receive the # command # # @return [ Hash ] the selector def selector(_connection) { createSearchIndexes: coll_name, :$db => db_name, indexes: indexes, } end end end end end mongo-ruby-driver-2.21.3/lib/mongo/operation/create_user.rb000066400000000000000000000016111505113246500237020ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2015-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'mongo/operation/create_user/op_msg' module Mongo module Operation # A MongoDB create user command operation. # # @api private # # @since 2.0.0 class CreateUser include Specifiable include OpMsgExecutable end end end mongo-ruby-driver-2.21.3/lib/mongo/operation/create_user/000077500000000000000000000000001505113246500233565ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/lib/mongo/operation/create_user/op_msg.rb000066400000000000000000000020041505113246500251630ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2018-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Operation class CreateUser # A MongoDB createuser operation sent as an op message. # # @api private # # @since 2.5.2 class OpMsg < OpMsgBase include ExecutableTransactionLabel private def selector(connection) { :createUser => user.name }.merge(user.spec) end end end end end mongo-ruby-driver-2.21.3/lib/mongo/operation/delete.rb000066400000000000000000000017621505113246500226520ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2015-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'mongo/operation/delete/op_msg' require 'mongo/operation/delete/result' require 'mongo/operation/delete/bulk_result' module Mongo module Operation # A MongoDB delete operation. 
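    #
    # @example Build a single-document delete (a sketch; the spec keys
    #   shown are assumptions based on how callers populate Specifiable).
    #   Mongo::Operation::Delete.new(
    #     deletes: [{ 'q' => { 'name' => 'test' }, 'limit' => 1 }],
    #     db_name: 'music',
    #     coll_name: 'bands'
    #   )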
# # @api private # # @since 2.0.0 class Delete include Specifiable include Write private IDENTIFIER = 'deletes'.freeze end end end mongo-ruby-driver-2.21.3/lib/mongo/operation/delete/000077500000000000000000000000001505113246500223175ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/lib/mongo/operation/delete/bulk_result.rb000066400000000000000000000026631505113246500252060ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Operation class Delete # Defines custom behavior of results for a delete when part of a bulk write. # # @since 2.0.0 # @api semiprivate class BulkResult < Operation::Result include Aggregatable # Gets the number of documents deleted. # # @example Get the deleted count. # result.n_removed # # @return [ Integer ] The number of documents deleted. # # @since 2.0.0 # @api public def n_removed return 0 unless acknowledged? @replies.reduce(0) do |n, reply| if reply.documents.first[Result::N] n += reply.documents.first[Result::N] else n end end end end end end end mongo-ruby-driver-2.21.3/lib/mongo/operation/delete/op_msg.rb000066400000000000000000000033161505113246500241330ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2018-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Operation class Delete # A MongoDB delete operation sent as an op message. # # @api private # # @since 2.5.2 class OpMsg < OpMsgBase include BypassDocumentValidation include ExecutableNoValidate include ExecutableTransactionLabel include PolymorphicResult include Validatable private def selector(connection) { delete: coll_name, Protocol::Msg::DATABASE_IDENTIFIER => db_name, ordered: ordered?, let: spec[:let], comment: spec[:comment], }.compact.tap do |selector| if hint = spec[:hint] validate_hint_on_update(connection, selector) selector[:hint] = hint end end end def message(connection) section = Protocol::Msg::Section1.new(IDENTIFIER, send(IDENTIFIER)) cmd = apply_relevant_timeouts_to(command(connection), connection) Protocol::Msg.new(flags, {}, cmd, section) end end end end end mongo-ruby-driver-2.21.3/lib/mongo/operation/delete/result.rb000066400000000000000000000023751505113246500241710ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. 
# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Operation class Delete # Defines custom behavior of results for a delete. # # @since 2.0.0 # @api semiprivate class Result < Operation::Result # Get the number of documents deleted. # # @example Get the deleted count. # result.deleted_count # # @return [ Integer ] The deleted count. # # @since 2.0.0 # @api public def deleted_count n end # @api public def bulk_result BulkResult.new(@replies, connection_description) end end end end end mongo-ruby-driver-2.21.3/lib/mongo/operation/distinct.rb000066400000000000000000000016011505113246500232210ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2017-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'mongo/operation/distinct/op_msg' module Mongo module Operation # A MongoDB distinct command operation. # # @api private # # @since 2.5.0 class Distinct include Specifiable include OpMsgExecutable end end end mongo-ruby-driver-2.21.3/lib/mongo/operation/distinct/000077500000000000000000000000001505113246500226765ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/lib/mongo/operation/distinct/op_msg.rb000066400000000000000000000023071505113246500245110ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2018-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Operation class Distinct # A MongoDB distinct operation sent as an op message. # # @api private # # @since 2.5.2 class OpMsg < OpMsgBase include CausalConsistencySupported include ExecutableTransactionLabel private def selector(connection) # Collation is always supported on 3.6+ servers that would use OP_MSG. spec[:selector].merge( collation: spec[:collation], comment: spec[:comment], ).compact end end end end end mongo-ruby-driver-2.21.3/lib/mongo/operation/drop.rb000066400000000000000000000015701505113246500223510ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2015-2020 MongoDB Inc. 
# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'mongo/operation/drop/op_msg' module Mongo module Operation # A MongoDB drop collection operation. # # @api private # # @since 2.4.0 class Drop include Specifiable include OpMsgExecutable end end end mongo-ruby-driver-2.21.3/lib/mongo/operation/drop/000077500000000000000000000000001505113246500220215ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/lib/mongo/operation/drop/op_msg.rb000066400000000000000000000016141505113246500236340ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2018-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Operation class Drop # A MongoDB drop collection operation sent as an op message. # # @api private # # @since 2.5.2 class OpMsg < OpMsgBase include ExecutableTransactionLabel end end end end mongo-ruby-driver-2.21.3/lib/mongo/operation/drop_database.rb000066400000000000000000000016071505113246500241760ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2015-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'mongo/operation/drop_database/op_msg' module Mongo module Operation # A MongoDB drop database operation. # # @api private # # @since 2.4.0 class DropDatabase include Specifiable include OpMsgExecutable end end end mongo-ruby-driver-2.21.3/lib/mongo/operation/drop_database/000077500000000000000000000000001505113246500236455ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/lib/mongo/operation/drop_database/op_msg.rb000066400000000000000000000016221505113246500254570ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2018-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Operation class DropDatabase # A MongoDB drop database operation sent as an op message. # # @api private # # @since 2.5.2 class OpMsg < OpMsgBase include ExecutableTransactionLabel end end end end mongo-ruby-driver-2.21.3/lib/mongo/operation/drop_index.rb000066400000000000000000000015761505113246500235460ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2015-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'mongo/operation/drop_index/op_msg' module Mongo module Operation # A MongoDB drop index operation. # # @api private # # @since 2.0.0 class DropIndex include Specifiable include OpMsgExecutable end end end mongo-ruby-driver-2.21.3/lib/mongo/operation/drop_index/000077500000000000000000000000001505113246500232105ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/lib/mongo/operation/drop_index/op_msg.rb000066400000000000000000000021331505113246500250200ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2018-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Operation class DropIndex # A MongoDB dropindex operation sent as an op message. # # @api private # # @since 2.5.2 class OpMsg < OpMsgBase include ExecutableTransactionLabel private def selector(connection) { :dropIndexes => coll_name, :index => index_name, :comment => spec[:comment], }.compact end end end end end mongo-ruby-driver-2.21.3/lib/mongo/operation/drop_search_index.rb000066400000000000000000000004431505113246500250630ustar00rootroot00000000000000# frozen_string_literal: true require 'mongo/operation/drop_search_index/op_msg' module Mongo module Operation # A MongoDB dropSearchIndex command operation. 
    #
    # @api private
    class DropSearchIndex
      include Specifiable
      include OpMsgExecutable
    end
  end
end
mongo-ruby-driver-2.21.3/lib/mongo/operation/drop_search_index/000077500000000000000000000000001505113246500245355ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/lib/mongo/operation/drop_search_index/op_msg.rb000066400000000000000000000015141505113246500263470ustar00rootroot00000000000000# frozen_string_literal: true

module Mongo
  module Operation
    class DropSearchIndex
      # A MongoDB dropSearchIndex operation sent as an op message.
      #
      # @api private
      class OpMsg < OpMsgBase
        include ExecutableTransactionLabel

        private

        # Returns the command to send to the database, describing the
        # desired dropSearchIndex operation.
        #
        # @param [ Connection ] _connection the connection that will receive the
        #   command
        #
        # @return [ Hash ] the selector
        def selector(_connection)
          {
            dropSearchIndex: coll_name,
            :$db => db_name,
          }.tap do |sel|
            sel[:id] = index_id if index_id
            sel[:name] = index_name if index_name
          end
        end
      end
    end
  end
end
mongo-ruby-driver-2.21.3/lib/mongo/operation/explain.rb000066400000000000000000000016371505113246500230500ustar00rootroot00000000000000# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2015-2020 MongoDB Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

require 'mongo/operation/explain/result'
require 'mongo/operation/explain/op_msg'

module Mongo
  module Operation

    # A MongoDB explain operation.
    #
    # @api private
    #
    # @since 2.5.0
    class Explain
      include Specifiable
      include OpMsgExecutable
    end
  end
end
mongo-ruby-driver-2.21.3/lib/mongo/operation/explain/000077500000000000000000000000001505113246500225155ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/lib/mongo/operation/explain/op_msg.rb000066400000000000000000000026541505113246500243330ustar00rootroot00000000000000# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2018-2020 MongoDB Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

module Mongo
  module Operation
    class Explain

      # A MongoDB explain operation sent as an op message.
      #
      # @api private
      #
      # @since 2.5.2
      class OpMsg < OpMsgBase
        include CausalConsistencySupported
        include ExecutableTransactionLabel
        include PolymorphicResult

        private

        def selector(connection)
          # The mappings are BSON::Documents and as such store keys as
          # strings, the spec here has symbol keys.
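          # As an illustration (names and values assumed, not taken from
          # a real deployment), the resulting selector for a simple find
          # could look like:
          #   { explain: { find: 'bands', filter: { 'name' => 'test' } },
          #     '$db' => 'music' }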
spec = BSON::Document.new(self.spec) { explain: { find: coll_name, }.update(Find::Builder::Command.selector(spec, connection)), Protocol::Msg::DATABASE_IDENTIFIER => db_name, }.update(spec[:explain] || {}) end end end end end mongo-ruby-driver-2.21.3/lib/mongo/operation/explain/result.rb000066400000000000000000000027051505113246500243640ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2018-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Operation class Explain # Defines custom behavior of results in find command with explain. # # @since 2.5.0 # @api semiprivate class Result < Operation::Result # Get the cursor id. # # @example Get the cursor id. # result.cursor_id # # @return [ 0 ] Always 0 because explain doesn't return a cursor. # # @since 2.5.0 # @api private def cursor_id 0 end # Get the documents in the result. # # @example Get the documents. # result.documents # # @return [ Array ] The documents. # # @since 2.5.0 # @api public def documents reply.documents end end end end end mongo-ruby-driver-2.21.3/lib/mongo/operation/find.rb000066400000000000000000000016721505113246500223300ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2015-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'mongo/operation/find/op_msg' require 'mongo/operation/find/result' require 'mongo/operation/find/builder' module Mongo module Operation # A MongoDB find operation. # # @api private # # @since 2.0.0 class Find include Specifiable include OpMsgExecutable end end end mongo-ruby-driver-2.21.3/lib/mongo/operation/find/000077500000000000000000000000001505113246500217755ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/lib/mongo/operation/find/builder.rb000066400000000000000000000014071505113246500237520ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2015-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
require 'mongo/operation/find/builder/command'
require 'mongo/operation/find/builder/flags'
require 'mongo/operation/find/builder/modifiers'
mongo-ruby-driver-2.21.3/lib/mongo/operation/find/builder/000077500000000000000000000000001505113246500234235ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/lib/mongo/operation/find/builder/command.rb000066400000000000000000000071731505113246500253740ustar00rootroot00000000000000# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2015-2020 MongoDB Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

module Mongo
  module Operation
    class Find
      module Builder

        # Builds a find command specification from options.
        #
        # @api private
        module Command

          # The mappings from ruby options to the find command.
          OPTION_MAPPINGS = BSON::Document.new(
            allow_disk_use: 'allowDiskUse',
            allow_partial_results: 'allowPartialResults',
            await_data: 'awaitData',
            batch_size: 'batchSize',
            collation: 'collation',
            comment: 'comment',
            filter: 'filter',
            hint: 'hint',
            let: 'let',
            limit: 'limit',
            max_scan: 'maxScan',
            max_time_ms: 'maxTimeMS',
            max_value: 'max',
            min_value: 'min',
            no_cursor_timeout: 'noCursorTimeout',
            oplog_replay: 'oplogReplay',
            projection: 'projection',
            read_concern: 'readConcern',
            return_key: 'returnKey',
            show_disk_loc: 'showRecordId',
            single_batch: 'singleBatch',
            skip: 'skip',
            snapshot: 'snapshot',
            sort: 'sort',
            tailable: 'tailable',
            tailable_cursor: 'tailable',
          ).freeze

          module_function def selector(spec, connection)
            if spec[:collation] && !connection.features.collation_enabled?
              raise Error::UnsupportedCollation
            end

            BSON::Document.new.tap do |selector|
              OPTION_MAPPINGS.each do |k, server_k|
                unless (value = spec[k]).nil?
                  selector[server_k] = value
                end
              end

              if rc = selector[:readConcern]
                selector[:readConcern] = Options::Mapper.transform_values_to_strings(rc)
              end

              convert_limit_and_batch_size!(selector)
            end
          end

          private

          # Converts negative limit and batchSize parameters in the
          # find command to positive ones. Removes the parameters if their
          # values are zero.
          #
          # This is only used for the find command, not for the OP_QUERY path.
          #
          # The +command+ parameter is mutated by this method.
          module_function def convert_limit_and_batch_size!(command)
            if command[:limit] && command[:limit] < 0 &&
              command[:batchSize] && command[:batchSize] < 0
            then
              command[:limit] = command[:limit].abs
              command[:batchSize] = command[:limit].abs
              command[:singleBatch] = true
            else
              [:limit, :batchSize].each do |opt|
                if command[opt]
                  if command[opt] < 0
                    command[opt] = command[opt].abs
                    command[:singleBatch] = true
                  elsif command[opt] == 0
                    command.delete(opt)
                  end
                end
              end
            end
          end
        end
      end
    end
  end
end
mongo-ruby-driver-2.21.3/lib/mongo/operation/find/builder/flags.rb000066400000000000000000000040001505113246500250400ustar00rootroot00000000000000# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2015-2020 MongoDB Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Operation class Find module Builder # Provides behavior for converting Ruby options to wire protocol flags # when sending find and related commands (e.g. explain). # # @api private module Flags # Options to cursor flags mapping. MAPPINGS = { :allow_partial_results => [ :partial ], :oplog_replay => [ :oplog_replay ], :no_cursor_timeout => [ :no_cursor_timeout ], :tailable => [ :tailable_cursor ], :tailable_await => [ :await_data, :tailable_cursor], :await_data => [ :await_data ], :exhaust => [ :exhaust ], }.freeze # Converts Ruby find options to an array of flags. # # Any keys in the input hash that are not options that map to flags # are ignored. # # @param [ Hash, BSON::Document ] options The options. # # @return [ Array ] The flags. module_function def map_flags(options) MAPPINGS.each.reduce(options[:flags] || []) do |flags, (key, value)| cursor_type = options[:cursor_type] if options[key] || (cursor_type && cursor_type == key) flags.push(*value) end flags end end end end end end end mongo-ruby-driver-2.21.3/lib/mongo/operation/find/builder/modifiers.rb000066400000000000000000000062531505113246500257370ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2015-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Operation class Find module Builder # Provides behavior for mapping Ruby options to legacy OP_QUERY # find modifiers. # # This module is used in two ways: # 1. When Collection#find is invoked with the legacy OP_QUERY # syntax (:$query argument etc.), this module is used to map # the legacy parameters into the Ruby options that normally # are used by applications. # 2. When sending a find operation using the OP_QUERY protocol, # this module is used to map the Ruby find options to the # modifiers in the wire protocol message. # # @api private module Modifiers # Mappings from Ruby options to OP_QUERY modifiers. DRIVER_MAPPINGS = BSON::Document.new( comment: '$comment', explain: '$explain', hint: '$hint', max_scan: '$maxScan', max_time_ms: '$maxTimeMS', max_value: '$max', min_value: '$min', return_key: '$returnKey', show_disk_loc: '$showDiskLoc', snapshot: '$snapshot', sort: '$orderby', ).freeze # Mappings from OP_QUERY modifiers to Ruby options. SERVER_MAPPINGS = BSON::Document.new(DRIVER_MAPPINGS.invert).freeze # Transform the provided OP_QUERY modifiers to Ruby options. # # @example Transform to driver options. # Modifiers.map_driver_options(modifiers) # # @param [ Hash ] modifiers The modifiers. # # @return [ BSON::Document ] The Ruby options. 
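          # For instance (an illustrative sketch), under SERVER_MAPPINGS a
          # modifier document such as { '$orderby' => { 'year' => 1 } }
          # maps to the driver option { 'sort' => { 'year' => 1 } }.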
module_function def map_driver_options(modifiers) Options::Mapper.transform_documents(modifiers, SERVER_MAPPINGS) end # Transform the provided Ruby options into a document of OP_QUERY # modifiers. # # Accepts both string and symbol keys. # # The input mapping may contain additional keys that do not map to # OP_QUERY modifiers, in which case the extra keys are ignored. # # @example Map the server modifiers. # Modifiers.map_server_modifiers(options) # # @param [ Hash, BSON::Document ] options The options. # # @return [ BSON::Document ] The modifiers. module_function def map_server_modifiers(options) Options::Mapper.transform_documents(options, DRIVER_MAPPINGS) end end end end end end mongo-ruby-driver-2.21.3/lib/mongo/operation/find/op_msg.rb000066400000000000000000000062061505113246500236120ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2018-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Operation class Find # A MongoDB find operation sent as an op message. # # @api private # # @since 2.5.2 class OpMsg < OpMsgBase include CausalConsistencySupported include ExecutableTransactionLabel include PolymorphicResult private # Applies the relevant CSOT timeouts for a find command. # Considers the cursor type and timeout mode and will add (or omit) a # maxTimeMS field accordingly. def apply_relevant_timeouts_to(spec, connection) with_max_time(connection) do |max_time_sec| timeout_ms = max_time_sec ? (max_time_sec * 1_000).to_i : nil apply_find_timeouts_to(spec, timeout_ms) unless connection.description.mongocryptd? end end def apply_find_timeouts_to(spec, timeout_ms) view = context&.view return spec unless view case view.cursor_type when nil # non-tailable if view.timeout_mode == :cursor_lifetime spec[:maxTimeMS] = timeout_ms || view.options[:max_time_ms] else # timeout_mode == :iterable # drivers MUST honor the timeoutMS option for the initial command # but MUST NOT append a maxTimeMS field to the command sent to the # server if !timeout_ms && view.options[:max_time_ms] spec[:maxTimeMS] = view.options[:max_time_ms] end end when :tailable # If timeoutMS is set, drivers...MUST NOT append a maxTimeMS field to any commands. if !timeout_ms && view.options[:max_time_ms] spec[:maxTimeMS] = view.options[:max_time_ms] end when :tailable_await # The server supports the maxTimeMS option for the original command. if timeout_ms || view.options[:max_time_ms] spec[:maxTimeMS] = timeout_ms || view.options[:max_time_ms] end end spec.tap do |spc| spc.delete(:maxTimeMS) if spc[:maxTimeMS].nil? end end def selector(connection) # The mappings are BSON::Documents and as such store keys as # strings, the spec here has symbol keys. 
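          # For example (values illustrative only), a simple query could
          # produce:
          #   { find: 'bands', '$db' => 'music',
          #     filter: { 'name' => 'test' }, batchSize: 100 }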
spec = BSON::Document.new(self.spec) { find: coll_name, Protocol::Msg::DATABASE_IDENTIFIER => db_name, }.update(Find::Builder::Command.selector(spec, connection)) end end end end end mongo-ruby-driver-2.21.3/lib/mongo/operation/find/result.rb000066400000000000000000000036721505113246500236500ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2015-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Operation class Find # Defines custom behavior of results in find command. # # @since 2.2.0 # @api semiprivate class Result < Operation::Result # Get the cursor id. # # @example Get the cursor id. # result.cursor_id # # @return [ Integer ] The cursor id. # # @since 2.2.0 # @api private def cursor_id cursor_document ? cursor_document[CURSOR_ID] : super end # Get the documents in the result. # # @example Get the documents. # result.documents # # @return [ Array ] The documents. # # @since 2.2.0 # @api public def documents cursor_document[FIRST_BATCH] end # The namespace in which this find command was performed. # # @return [ String ] The namespace, usually in the format # "database.collection". # # @api private def namespace cursor_document['ns'] end private def cursor_document @cursor_document ||= reply.documents[0][CURSOR] end def first_document @first_document ||= reply.documents[0] end end end end end mongo-ruby-driver-2.21.3/lib/mongo/operation/get_more.rb000066400000000000000000000017241505113246500232070ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2015-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'mongo/operation/get_more/command_builder' require 'mongo/operation/get_more/op_msg' require 'mongo/operation/get_more/result' module Mongo module Operation # A MongoDB getMore operation. # # @api private # # @since 2.5.0 class GetMore include Specifiable include OpMsgExecutable end end end mongo-ruby-driver-2.21.3/lib/mongo/operation/get_more/000077500000000000000000000000001505113246500226565ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/lib/mongo/operation/get_more/command_builder.rb000066400000000000000000000022761505113246500263360ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2021 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Operation class GetMore # @api private module CommandBuilder private def selector(connection) { getMore: BSON::Int64.new(spec.fetch(:cursor_id)), collection: spec.fetch(:coll_name), batchSize: spec[:batch_size], maxTimeMS: spec[:max_time_ms], }.compact.tap do |sel| if spec[:comment] && connection.features.get_more_comment_enabled? sel[:comment] = spec[:comment] end end end end end end end mongo-ruby-driver-2.21.3/lib/mongo/operation/get_more/op_msg.rb000066400000000000000000000047251505113246500244770ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2018-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Operation class GetMore # A MongoDB getMore operation sent as an op message. # # @api private # # @since 2.5.2 class OpMsg < OpMsgBase include ExecutableTransactionLabel include PolymorphicResult include CommandBuilder private # Applies the relevant CSOT timeouts for a getMore command. # Considers the cursor type and timeout mode and will add (or omit) a # maxTimeMS field accordingly. def apply_relevant_timeouts_to(spec, connection) with_max_time(connection) do |max_time_sec| timeout_ms = max_time_sec ? (max_time_sec * 1_000).to_i : nil apply_get_more_timeouts_to(spec, timeout_ms) end end def apply_get_more_timeouts_to(spec, timeout_ms) view = context&.view return spec unless view if view.cursor_type == :tailable_await # If timeoutMS is set, drivers MUST apply it to the original operation. # Drivers MUST also apply the original timeoutMS value to each next # call on the resulting cursor but MUST NOT use it to derive a # maxTimeMS value for getMore commands. Helpers for operations that # create tailable awaitData cursors MUST also support the # maxAwaitTimeMS option. Drivers MUST error if this option is set, # timeoutMS is set to a non-zero value, and maxAwaitTimeMS is greater # than or equal to timeoutMS. If this option is set, drivers MUST use # it as the maxTimeMS field on getMore commands. max_await_time_ms = view.respond_to?(:max_await_time_ms) ? view.max_await_time_ms : nil spec[:maxTimeMS] = max_await_time_ms if max_await_time_ms end spec end end end end end mongo-ruby-driver-2.21.3/lib/mongo/operation/get_more/result.rb000066400000000000000000000037261505113246500245310ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2015-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Operation class GetMore # Defines custom behavior of results for the get more command. # # @since 2.2.0 # @api semiprivate class Result < Operation::Result # Get the cursor id. # # @example Get the cursor id. # result.cursor_id # # @return [ Integer ] The cursor id. # # @since 2.2.0 # @api private def cursor_id cursor_document ? cursor_document[CURSOR_ID] : super end # Get the post batch resume token for the result # # @return [ BSON::Document | nil ] The post batch resume token # # @api private def post_batch_resume_token cursor_document ? cursor_document['postBatchResumeToken'] : nil end # Get the documents in the result. # # @example Get the documents. # result.documents # # @return [ Array ] The documents. # # @since 2.2.0 # @api public def documents cursor_document[NEXT_BATCH] end private def cursor_document @cursor_document ||= reply.documents[0][CURSOR] end def first_document @first_document ||= reply.documents[0] end end end end end mongo-ruby-driver-2.21.3/lib/mongo/operation/indexes.rb000066400000000000000000000016371505113246500230500ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2015-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'mongo/operation/indexes/op_msg' require 'mongo/operation/indexes/result' module Mongo module Operation # A MongoDB indexes operation. # # @api private # # @since 2.0.0 class Indexes include Specifiable include OpMsgExecutable end end end mongo-ruby-driver-2.21.3/lib/mongo/operation/indexes/000077500000000000000000000000001505113246500225145ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/lib/mongo/operation/indexes/op_msg.rb000066400000000000000000000017011505113246500243240ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2018-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Operation class Indexes # A MongoDB indexes operation sent as an op message. 
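      #
      # @example The listIndexes command this op corresponds to (an
      #   illustrative sketch; the exact document is assembled from the
      #   operation spec elsewhere).
      #   # { listIndexes: 'bands', '$db' => 'music' }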
mongo-ruby-driver-2.21.3/lib/mongo/operation/indexes.rb

# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2015-2020 MongoDB Inc.
# Licensed under the Apache License, Version 2.0.

require 'mongo/operation/indexes/op_msg'
require 'mongo/operation/indexes/result'

module Mongo
  module Operation

    # A MongoDB indexes operation.
    #
    # @api private
    #
    # @since 2.0.0
    class Indexes
      include Specifiable
      include OpMsgExecutable
    end
  end
end


mongo-ruby-driver-2.21.3/lib/mongo/operation/indexes/op_msg.rb

# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2018-2020 MongoDB Inc.
# Licensed under the Apache License, Version 2.0.

module Mongo
  module Operation
    class Indexes

      # A MongoDB indexes operation sent as an op message.
      #
      # @api private
      #
      # @since 2.5.2
      class OpMsg < OpMsgBase
        include Limited
        include ExecutableTransactionLabel
        include PolymorphicResult
      end
    end
  end
end


mongo-ruby-driver-2.21.3/lib/mongo/operation/indexes/result.rb

# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2014-2020 MongoDB Inc.
# Licensed under the Apache License, Version 2.0.

module Mongo
  module Operation
    class Indexes

      # Defines custom behavior of results when using the
      # listIndexes command.
      #
      # @since 2.0.0
      # @api semiprivate
      class Result < Operation::Result

        # Get the cursor id for the result.
        #
        # @example Get the cursor id.
        #   result.cursor_id
        #
        # @note Even though the wire protocol has a cursor_id field for all
        #   messages of type reply, it is always zero when using the
        #   listIndexes command and must be retrieved from the cursor
        #   document itself.
        #
        # @return [ Integer ] The cursor id.
        #
        # @since 2.0.0
        # @api private
        def cursor_id
          cursor_document ? cursor_document[CURSOR_ID] : super
        end

        # Get the namespace for the cursor.
        #
        # @example Get the namespace.
        #   result.namespace
        #
        # @return [ String ] The namespace.
        #
        # @since 2.0.0
        # @api private
        def namespace
          cursor_document ? cursor_document[NAMESPACE] : super
        end

        # Get the documents for the listIndexes result. This is the 'firstBatch'
        # field in the 'cursor' field of the first document returned.
        #
        # @example Get the documents.
        #   result.documents
        #
        # @return [ Array<BSON::Document> ] The documents.
        #
        # @since 2.0.0
        # @api public
        def documents
          cursor_document[FIRST_BATCH]
        end

        # Validate the result. In the case where the database or collection
        # does not exist on the server we will get an error, and it's better
        # to raise a meaningful exception here than the ambiguous one when
        # the error occurs.
        #
        # @example Validate the result.
        #   result.validate!
        #
        # @raise [ NoNamespace ] If the ns doesn't exist.
        #
        # @return [ Result ] Self if successful.
        #
        # @since 2.0.0
        # @api private
        def validate!
          !successful? ? raise_operation_failure : self
        end

        private

        def cursor_document
          @cursor_document ||= first_document[CURSOR]
        end

        def first_document
          @first_document ||= reply.documents[0]
        end
      end
    end
  end
end
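Again purely illustrative and hand-written: a listIndexes reply puts the
initial batch under cursor.firstBatch (contrast nextBatch on getMore above),
which is what #documents reads:

    reply_doc = {
      'cursor' => {
        'id' => 0,
        'ns' => 'app.users',
        'firstBatch' => [{ 'v' => 2, 'key' => { '_id' => 1 }, 'name' => '_id_' }],
      },
      'ok' => 1.0,
    }

    reply_doc['cursor']['firstBatch'].first['name'] # => "_id_"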
mongo-ruby-driver-2.21.3/lib/mongo/operation/insert.rb

# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2015-2020 MongoDB Inc.
# Licensed under the Apache License, Version 2.0.

require 'mongo/operation/insert/op_msg'
require 'mongo/operation/insert/result'
require 'mongo/operation/insert/bulk_result'

module Mongo
  module Operation

    # A MongoDB insert operation.
    #
    # @api private
    #
    # @since 2.0.0
    class Insert
      include Specifiable
      include Write

      private

      IDENTIFIER = 'documents'.freeze

      def validate!(connection)
      end
    end
  end
end


mongo-ruby-driver-2.21.3/lib/mongo/operation/insert/bulk_result.rb

# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2014-2020 MongoDB Inc.
# Licensed under the Apache License, Version 2.0.

module Mongo
  module Operation
    class Insert

      # Defines custom behavior of results for an insert when sent as part of a bulk write.
      #
      # @since 2.0.0
      # @api semiprivate
      class BulkResult < Operation::Result
        include Aggregatable

        # Get the ids of the inserted documents.
        #
        # @since 2.0.0
        # @api public
        attr_reader :inserted_ids

        # Initialize a new result.
        #
        # @example Instantiate the result.
        #   Result.new(replies, inserted_ids)
        #
        # @param [ Array<Protocol::Message> | nil ] replies The wire protocol replies, if any.
        # @param [ Server::Description ] connection_description
        #   Server description of the server that performed the operation that
        #   this result is for.
        # @param [ Integer ] connection_global_id
        #   Global id of the connection on which the operation that
        #   this result is for was performed.
        # @param [ Array<Object> ] ids The ids of the inserted documents.
        #
        # @since 2.0.0
        # @api private
        def initialize(replies, connection_description, connection_global_id, ids)
          @replies = [*replies] if replies
          @connection_description = connection_description
          @connection_global_id = connection_global_id
          if replies && replies.first && (doc = replies.first.documents.first)
            if errors = doc['writeErrors']
              # some documents were potentially inserted
              bad_indices = {}
              errors.map do |error|
                bad_indices[error['index']] = true
              end
              @inserted_ids = []
              ids.each_with_index do |id, index|
                if bad_indices[index].nil?
                  @inserted_ids << id
                end
              end
            # I don't know if acknowledged? check here is necessary,
            # as best as I can tell it doesn't hurt
            elsif acknowledged? && successful?
              # We have a reply and the reply is successful and the
              # reply has no writeErrors - everything got inserted
              @inserted_ids = ids
            else
              # We have a reply and the reply is not successful and
              # it has no writeErrors - nothing got inserted.
              # If something got inserted the reply will be not successful
              # but will have writeErrors
              @inserted_ids = []
            end
          else
            # I don't think we should ever get here but who knows,
            # make this behave as old drivers did
            @inserted_ids = ids
          end
        end

        # Gets the number of documents inserted.
        #
        # @example Get the number of documents inserted.
        #   result.n_inserted
        #
        # @return [ Integer ] The number of documents inserted.
        #
        # @since 2.0.0
        # @api public
        def n_inserted
          written_count
        end

        # Gets the id of the document inserted.
        #
        # @example Get id of the document inserted.
        #   result.inserted_id
        #
        # @return [ Object ] The id of the document inserted.
        #
        # @since 2.0.0
        # @api public
        def inserted_id
          inserted_ids.first
        end
      end
    end
  end
end
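A standalone re-enactment of the writeErrors filtering above, outside the
driver: ids whose positions appear in writeErrors are dropped, so
inserted_ids keeps only the documents the server actually accepted.

    ids = [1, 2, 3, 4]
    write_errors = [{ 'index' => 1, 'code' => 11000 }, { 'index' => 3, 'code' => 11000 }]

    bad_indices = {}
    write_errors.each { |error| bad_indices[error['index']] = true }

    inserted_ids = []
    ids.each_with_index { |id, index| inserted_ids << id if bad_indices[index].nil? }
    inserted_ids # => [1, 3]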
mongo-ruby-driver-2.21.3/lib/mongo/operation/insert/op_msg.rb

# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2018-2020 MongoDB Inc.
# Licensed under the Apache License, Version 2.0.

module Mongo
  module Operation
    class Insert

      # A MongoDB insert operation sent as an op message.
      #
      # @api private
      #
      # @since 2.5.2
      class OpMsg < OpMsgBase
        include Idable
        include BypassDocumentValidation
        include ExecutableNoValidate
        include ExecutableTransactionLabel
        include PolymorphicResult

        private

        def get_result(connection, context, options = {})
          # This is a Mongo::Operation::Insert::Result
          Result.new(*dispatch_message(connection, context), @ids, context: context)
        end

        def selector(connection)
          {
            insert: coll_name,
            Protocol::Msg::DATABASE_IDENTIFIER => db_name,
            ordered: ordered?,
            comment: spec[:comment],
          }.compact
        end

        def message(connection)
          section = Protocol::Msg::Section1.new(IDENTIFIER, send(IDENTIFIER))
          cmd = apply_relevant_timeouts_to(command(connection), connection)
          Protocol::Msg.new(flags, {}, cmd, section)
        end
      end
    end
  end
end


mongo-ruby-driver-2.21.3/lib/mongo/operation/insert/result.rb

# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2014-2020 MongoDB Inc.
# Licensed under the Apache License, Version 2.0.

module Mongo
  module Operation
    class Insert

      # Defines custom behavior of results for an insert.
      #
      # According to the CRUD spec, reporting the inserted ids
      # is optional. It can be added to this class later, if needed.
      #
      # @since 2.0.0
      # @api semiprivate
      class Result < Operation::Result

        # Get the ids of the inserted documents.
        #
        # @since 2.0.0
        # @api public
        attr_reader :inserted_ids

        # Initialize a new result.
        #
        # @example Instantiate the result.
        #   Result.new(replies, inserted_ids)
        #
        # @param [ Array<Protocol::Message> | nil ] replies The wire protocol replies, if any.
        # @param [ Server::Description ] connection_description
        #   Server description of the server that performed the operation that
        #   this result is for.
        # @param [ Integer ] connection_global_id
        #   Global id of the connection on which the operation that
        #   this result is for was performed.
        # @param [ Array<Object> ] ids The ids of the inserted documents.
        # @param [ Operation::Context | nil ] context the operation context that
        #   was active when this result was produced.
        #
        # @since 2.0.0
        # @api private
        def initialize(replies, connection_description, connection_global_id, ids, context: nil)
          super(replies, connection_description, connection_global_id, context: context)
          @inserted_ids = ids
        end

        # Gets the id of the document inserted.
        #
        # @example Get id of the document inserted.
        #   result.inserted_id
        #
        # @return [ Object ] The id of the document inserted.
        #
        # @since 2.0.0
        # @api public
        def inserted_id
          inserted_ids.first
        end

        # @api public
        def bulk_result
          BulkResult.new(@replies, connection_description, connection_global_id, @inserted_ids)
        end
      end
    end
  end
end
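A hedged usage sketch of the public surface of this Result class, which
insert_one surfaces to applications. It assumes a deployment listening on
127.0.0.1:27017; names and values are illustrative.

    require 'mongo'

    client = Mongo::Client.new(['127.0.0.1:27017'], database: 'test')
    result = client[:people].insert_one(name: 'Ada')
    result.inserted_id  # => BSON::ObjectId('...')
    result.inserted_ids # => [BSON::ObjectId('...')]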
mongo-ruby-driver-2.21.3/lib/mongo/operation/kill_cursors.rb

# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2015-2020 MongoDB Inc.
# Licensed under the Apache License, Version 2.0.

require 'mongo/operation/kill_cursors/command_builder'
require 'mongo/operation/kill_cursors/op_msg'

module Mongo
  module Operation

    # A MongoDB killcursors operation.
    #
    # @api private
    #
    # @since 2.0.0
    class KillCursors
      include Specifiable
      include OpMsgExecutable
    end
  end
end


mongo-ruby-driver-2.21.3/lib/mongo/operation/kill_cursors/command_builder.rb

# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2021 MongoDB Inc.
# Licensed under the Apache License, Version 2.0.

module Mongo
  module Operation
    class KillCursors

      # @api private
      module CommandBuilder

        private

        def int64_cursor_ids
          spec.fetch(:cursor_ids).map do |id|
            BSON::Int64.new(id)
          end
        end
      end
    end
  end
end


mongo-ruby-driver-2.21.3/lib/mongo/operation/kill_cursors/op_msg.rb

# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2018-2020 MongoDB Inc.
# Licensed under the Apache License, Version 2.0.

module Mongo
  module Operation
    class KillCursors

      # A MongoDB killcursors operation sent as an op message.
      #
      # @api private
      #
      # @since 2.5.2
      class OpMsg < OpMsgBase
        include ExecutableTransactionLabel
        include CommandBuilder

        private

        def selector(connection)
          {
            killCursors: coll_name,
            cursors: int64_cursor_ids,
          }
        end
      end
    end
  end
end
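Illustration only: the killCursors selector assembled from the pieces above,
for two hypothetical cursor ids. Wrapping in BSON::Int64 keeps small ids
serializing as 64-bit integers, matching what the server returned.

    require 'bson'

    cursor_ids = [8_734_001, 8_734_002]
    selector = {
      killCursors: 'users',
      cursors: cursor_ids.map { |id| BSON::Int64.new(id) },
    }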
mongo-ruby-driver-2.21.3/lib/mongo/operation/list_collections.rb

# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2015-2020 MongoDB Inc.
# Licensed under the Apache License, Version 2.0.

require 'mongo/operation/list_collections/op_msg'
require 'mongo/operation/list_collections/result'

module Mongo
  module Operation

    # A MongoDB listcollections operation.
    #
    # @api private
    #
    # @since 2.0.0
    class ListCollections
      include Specifiable
      include OpMsgExecutable
    end
  end
end


mongo-ruby-driver-2.21.3/lib/mongo/operation/list_collections/op_msg.rb

# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2018-2020 MongoDB Inc.
# Licensed under the Apache License, Version 2.0.

module Mongo
  module Operation
    class ListCollections

      # A MongoDB listcollections operation sent as an op message.
      #
      # @api private
      #
      # @since 2.5.2
      class OpMsg < OpMsgBase
        include ExecutableTransactionLabel
        include PolymorphicResult

        private

        def selector(connection)
          (spec[SELECTOR] || {}).merge({
            listCollections: 1,
            comment: spec[:comment]
          }).compact
        end
      end
    end
  end
end


mongo-ruby-driver-2.21.3/lib/mongo/operation/list_collections/result.rb

# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2014-2020 MongoDB Inc.
# Licensed under the Apache License, Version 2.0.

module Mongo
  module Operation
    class ListCollections

      # Defines custom behavior of results when using the
      # listCollections command.
      #
      # @since 2.0.0
      # @api semiprivate
      class Result < Operation::Result

        # Get the cursor id for the result.
        #
        # @example Get the cursor id.
        #   result.cursor_id
        #
        # @note Even though the wire protocol has a cursor_id field for all
        #   messages of type reply, it is always zero when using the
        #   listCollections command and must be retrieved from the cursor
        #   document itself.
        #
        # @return [ Integer ] The cursor id.
        #
        # @since 2.0.0
        # @api private
        def cursor_id
          cursor_document ? cursor_document[CURSOR_ID] : super
        end

        # Get the namespace for the cursor.
        #
        # @example Get the namespace.
        #   result.namespace
        #
        # @return [ String ] The namespace.
        #
        # @since 2.0.0
        # @api private
        def namespace
          cursor_document ? cursor_document[NAMESPACE] : super
        end

        # Get the documents for the listCollections result. It is the 'firstBatch'
        # field in the 'cursor' field of the first document returned.
        #
        # @example Get the documents.
        #   result.documents
        #
        # @return [ Array<BSON::Document> ] The documents.
        #
        # @since 2.0.0
        # @api public
        def documents
          cursor_document[FIRST_BATCH]
        end

        # Validate the result. In the case where an unauthorized client tries
        # to run the command we need to generate the proper error.
        #
        # @example Validate the result.
        #   result.validate!
        #
        # @return [ Result ] Self if successful.
        #
        # @since 2.0.0
        # @api private
        def validate!
          if successful?
            self
          else
            raise operation_failure_class.new(
              parser.message,
              self,
              code: parser.code,
              code_name: parser.code_name,
              labels: parser.labels,
              wtimeout: parser.wtimeout,
              document: parser.document,
              server_message: parser.server_message,
            )
          end
        end

        private

        def cursor_document
          @cursor_document ||= first_document[CURSOR]
        end

        def first_document
          @first_document ||= reply.documents[0]
        end
      end
    end
  end
end
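A small standalone sketch of the selector merge in ListCollections::OpMsg
above: a caller-supplied filter survives the merge, and .compact drops the
comment field when no comment was given.

    spec_selector = { filter: { name: 'users' } }
    selector = (spec_selector || {}).merge(listCollections: 1, comment: nil).compact
    # => {:filter=>{:name=>"users"}, :listCollections=>1}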
mongo-ruby-driver-2.21.3/lib/mongo/operation/map_reduce.rb

# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2015-2020 MongoDB Inc.
# Licensed under the Apache License, Version 2.0.

require 'mongo/operation/map_reduce/op_msg'
require 'mongo/operation/map_reduce/result'

module Mongo
  module Operation

    # A MongoDB mapreduce operation.
    #
    # @api private
    #
    # @since 2.5.0
    class MapReduce
      include Specifiable
      include OpMsgExecutable
    end
  end
end


mongo-ruby-driver-2.21.3/lib/mongo/operation/map_reduce/op_msg.rb

# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2018-2020 MongoDB Inc.
# Licensed under the Apache License, Version 2.0.

module Mongo
  module Operation
    class MapReduce

      # A MongoDB map-reduce operation sent as an op message.
      #
      # @api private
      #
      # @since 2.5.2
      class OpMsg < OpMsgBase
        include CausalConsistencySupported
        include ExecutableTransactionLabel
        include PolymorphicResult
      end
    end
  end
end
mongo-ruby-driver-2.21.3/lib/mongo/operation/map_reduce/result.rb

# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2014-2020 MongoDB Inc.
# Licensed under the Apache License, Version 2.0.

module Mongo
  module Operation
    class MapReduce

      # Defines custom behavior of results for a map reduce operation.
      #
      # @since 2.0.0
      # @api semiprivate
      class Result < Operation::Result

        # The counts field for the map/reduce.
        #
        # @since 2.0.0
        # @api private
        COUNTS = 'counts'.freeze

        # The field name for a result without a cursor.
        #
        # @since 2.0.0
        # @api private
        RESULTS = 'results'.freeze

        # The time the operation took constant.
        #
        # @since 2.0.0
        # @api private
        TIME = 'timeMillis'.freeze

        # Gets the map/reduce counts from the reply.
        #
        # @example Get the counts.
        #   result.counts
        #
        # @return [ Hash ] A hash of the result counts.
        #
        # @since 2.0.0
        # @api public
        def counts
          reply.documents[0][COUNTS]
        end

        # Get the documents from the map/reduce.
        #
        # @example Get the documents.
        #   result.documents
        #
        # @return [ Array<BSON::Document> ] The documents.
        #
        # @since 2.0.0
        # @api public
        def documents
          reply.documents[0][RESULTS] || reply.documents[0][RESULT]
        end

        # If the result was a command then determine if it was considered a
        # success.
        #
        # @note If the write was unacknowledged, then this will always return
        #   true.
        #
        # @example Was the command successful?
        #   result.successful?
        #
        # @return [ true, false ] If the command was successful.
        #
        # @since 2.0.0
        # @api public
        def successful?
          !documents.nil?
        end

        # Get the execution time of the map/reduce.
        #
        # @example Get the execution time.
        #   result.time
        #
        # @return [ Integer ] The executing time in milliseconds.
        #
        # @since 2.0.0
        # @api public
        def time
          reply.documents[0][TIME]
        end

        # Validate the result by checking for any errors.
        #
        # @note This only checks for errors with writes since authentication is
        #   handled at the connection level and any authentication errors would
        #   be raised there, before a Result is ever created.
        #
        # @example Validate the result.
        #   result.validate!
        #
        # @raise [ Error::OperationFailure::Family ] If an error is in the result.
        #
        # @return [ Result ] The result if verification passed.
        #
        # @since 2.0.0
        # @api private
        def validate!
          documents.nil? ? raise_operation_failure : self
        end

        # Get the cursor id.
        #
        # @example Get the cursor id.
        #   result.cursor_id
        #
        # @return [ Integer ] Always 0 because map reduce doesn't return a cursor.
        #
        # @since 2.5.0
        # @api private
        def cursor_id
          0
        end

        # Get the number of documents returned by the server in this batch.
        #
        # Map/Reduce operation returns documents inline without using
        # cursors; as such, the standard Mongo::Reply#returned_count does
        # not work correctly for Map/Reduce.
        #
        # Note that the Map/Reduce operation is limited to max BSON document
        # size (16 MB) in its inline result set.
        #
        # @return [ Integer ] The number of documents returned.
        #
        # @api public
        def returned_count
          reply.documents.length
        end

        private

        def first_document
          @first_document ||= reply.documents[0]
        end
      end
    end
  end
end
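For orientation, a hand-written inline mapReduce reply showing the fields the
Result methods above read; none of these values come from a real server.

    reply_doc = {
      'results' => [{ '_id' => 'ruby', 'value' => 2.0 }],
      'counts' => { 'input' => 10, 'emit' => 10, 'reduce' => 3, 'output' => 1 },
      'timeMillis' => 5,
      'ok' => 1.0,
    }

    reply_doc['results']    # what #documents returns
    reply_doc['counts']     # what #counts returns
    reply_doc['timeMillis'] # what #time returns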
mongo-ruby-driver-2.21.3/lib/mongo/operation/op_msg_base.rb

# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2018-2020 MongoDB Inc.
# Licensed under the Apache License, Version 2.0.

module Mongo
  module Operation

    # @api private
    class OpMsgBase
      include Specifiable
      include Executable
      include SessionsSupported
      include Timed

      private

      def message(connection)
        cmd = apply_relevant_timeouts_to(command(connection), connection)
        Protocol::Msg.new(flags, options(connection), cmd)
      end
    end
  end
end


mongo-ruby-driver-2.21.3/lib/mongo/operation/parallel_scan.rb

# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2015-2020 MongoDB Inc.
# Licensed under the Apache License, Version 2.0.

require 'mongo/operation/parallel_scan/op_msg'
require 'mongo/operation/parallel_scan/result'

module Mongo
  module Operation

    # A MongoDB parallelscan operation.
    #
    # @api private
    #
    # @since 2.0.0
    class ParallelScan
      include Specifiable
      include OpMsgExecutable
    end
  end
end


mongo-ruby-driver-2.21.3/lib/mongo/operation/parallel_scan/op_msg.rb

# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2018-2020 MongoDB Inc.
# Licensed under the Apache License, Version 2.0.

module Mongo
  module Operation
    class ParallelScan

      # A MongoDB parallelscan operation sent as an op message.
      #
      # @api private
      #
      # @since 2.5.2
      class OpMsg < OpMsgBase
        include CausalConsistencySupported
        include ExecutableTransactionLabel
        include PolymorphicResult

        private

        def selector(connection)
          sel = { :parallelCollectionScan => coll_name, :numCursors => cursor_count }
          sel[:maxTimeMS] = max_time_ms if max_time_ms
          if read_concern
            sel[:readConcern] = Options::Mapper.transform_values_to_strings(
              read_concern)
          end
          sel
        end
      end
    end
  end
end


mongo-ruby-driver-2.21.3/lib/mongo/operation/parallel_scan/result.rb

# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2015-2020 MongoDB Inc.
# Licensed under the Apache License, Version 2.0.

module Mongo
  module Operation
    class ParallelScan

      # Defines custom behavior of results in a parallel scan.
      #
      # @since 2.0.0
      # @api semiprivate
      class Result < Operation::Result

        # The name of the cursors field in the result.
        #
        # @since 2.0.0
        # @api private
        CURSORS = 'cursors'.freeze

        # Get all the cursor ids from the result.
        #
        # @example Get the cursor ids.
        #   result.cursor_ids
        #
        # @return [ Array<Integer> ] The cursor ids.
        #
        # @since 2.0.0
        # @api private
        def cursor_ids
          documents.map {|doc| doc[CURSOR][CURSOR_ID]}
        end

        # Get the documents from parallel scan.
        #
        # @example Get the documents.
        #   result.documents
        #
        # @return [ Array<BSON::Document> ] The documents.
        #
        # @since 2.0.0
        # @api public
        def documents
          reply.documents[0][CURSORS]
        end

        private

        def first
          @first ||= reply.documents[0] || {}
        end
      end
    end
  end
end
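Illustrative only: a parallelCollectionScan reply contains one cursor
document per requested cursor, and cursor_ids in the Result just above maps
out the 'id' of each. This hand-written reply shows the shape:

    reply_doc = {
      'cursors' => [
        { 'cursor' => { 'firstBatch' => [], 'ns' => 'app.users', 'id' => 101 }, 'ok' => 1.0 },
        { 'cursor' => { 'firstBatch' => [], 'ns' => 'app.users', 'id' => 102 }, 'ok' => 1.0 },
      ],
      'ok' => 1.0,
    }

    reply_doc['cursors'].map { |doc| doc['cursor']['id'] } # => [101, 102]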
mongo-ruby-driver-2.21.3/lib/mongo/operation/remove_user.rb

# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2015-2020 MongoDB Inc.
# Licensed under the Apache License, Version 2.0.

require 'mongo/operation/remove_user/op_msg'

module Mongo
  module Operation

    # A MongoDB removeuser operation.
    #
    # @api private
    #
    # @since 2.0.0
    class RemoveUser
      include Specifiable
      include OpMsgExecutable
    end
  end
end


mongo-ruby-driver-2.21.3/lib/mongo/operation/remove_user/op_msg.rb

# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2018-2020 MongoDB Inc.
# Licensed under the Apache License, Version 2.0.

module Mongo
  module Operation
    class RemoveUser

      # A MongoDB removeuser operation sent as an op message.
      #
      # @api private
      #
      # @since 2.5.2
      class OpMsg < OpMsgBase
        include ExecutableTransactionLabel

        private

        def selector(connection)
          { :dropUser => user_name }
        end
      end
    end
  end
end
mongo-ruby-driver-2.21.3/lib/mongo/operation/result.rb

# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2014-2020 MongoDB Inc.
# Licensed under the Apache License, Version 2.0.

require 'mongo/operation/shared/result/aggregatable'
require 'mongo/operation/shared/result/use_legacy_error_parser'

module Mongo
  module Operation

    # Result wrapper for wire protocol replies.
    #
    # An operation has zero or one replies. The only operations producing zero
    # replies are unacknowledged writes; all other operations produce one reply.
    # This class provides an object that can be operated on (for example, to
    # check whether an operation succeeded) even when the operation did not
    # produce a reply (in which case it is assumed to have succeeded).
    #
    # @since 2.0.0
    # @api semiprivate
    class Result
      extend Forwardable
      include Enumerable

      # The field name for the cursor document in an aggregation.
      #
      # @since 2.2.0
      # @api private
      CURSOR = 'cursor'.freeze

      # The cursor id field in the cursor document.
      #
      # @since 2.2.0
      # @api private
      CURSOR_ID = 'id'.freeze

      # The field name for the first batch of a cursor.
      #
      # @since 2.2.0
      # @api private
      FIRST_BATCH = 'firstBatch'.freeze

      # The field name for the next batch of a cursor.
      #
      # @since 2.2.0
      # @api private
      NEXT_BATCH = 'nextBatch'.freeze

      # The namespace field in the cursor document.
      #
      # @since 2.2.0
      # @api private
      NAMESPACE = 'ns'.freeze

      # The number of documents updated in the write.
      #
      # @since 2.0.0
      # @api private
      N = 'n'.freeze

      # The ok status field in the result.
      #
      # @since 2.0.0
      # @api private
      OK = 'ok'.freeze

      # The result field constant.
      #
      # @since 2.2.0
      # @api private
      RESULT = 'result'.freeze
      # Initialize a new result.
      #
      # For an unacknowledged write, pass nil in replies.
      #
      # For all other operations, replies must be a Protocol::Message instance
      # or an array containing a single Protocol::Message instance.
      #
      # @param [ Protocol::Message | Array<Protocol::Message> | nil ] replies
      #   The wire protocol replies.
      # @param [ Server::Description | nil ] connection_description
      #   Server description of the server that performed the operation that
      #   this result is for. This parameter is allowed to be nil for
      #   compatibility with existing mongo_kerberos library, but should
      #   always be not nil in the driver proper.
      # @param [ Integer ] connection_global_id
      #   Global id of the connection on which the operation that
      #   this result is for was performed.
      # @param [ Operation::Context | nil ] context the context that was active
      #   when this result was produced.
      #
      # @api private
      def initialize(replies, connection_description = nil, connection_global_id = nil, context: nil, connection: nil)
        @context = context

        if replies
          if replies.is_a?(Array)
            if replies.length != 1
              raise ArgumentError, "Only one (or zero) reply is supported, given #{replies.length}"
            end
            reply = replies.first
          else
            reply = replies
          end
          unless reply.is_a?(Protocol::Message)
            raise ArgumentError, "Argument must be a Message instance, but is a #{reply.class}: #{reply.inspect}"
          end
          @replies = [ reply ]
          @connection_description = connection_description
          @connection_global_id = connection_global_id
          @connection = connection
        end
      end

      # @return [ Array<Protocol::Message> ] replies The wrapped wire protocol replies.
      #
      # @api private
      attr_reader :replies

      # @return [ Server::Description ] Server description of the server that
      #   the operation was performed on that this result is for.
      #
      # @api private
      attr_reader :connection_description

      # @return [ Object ] Global id of the connection that
      #   the operation was performed on that this result is for.
      #
      # @api private
      attr_reader :connection_global_id

      # @return [ Operation::Context | nil ] the operation context (if any)
      #   that was active when this result was produced.
      #
      # @api private
      attr_reader :context

      attr_reader :connection

      # @api private
      def_delegators :parser,
        :not_master?, :node_recovering?, :node_shutting_down?

      # Is the result acknowledged?
      #
      # @note On MongoDB 2.6 and higher all writes are acknowledged since the
      #   driver uses write commands for all write operations. On 2.4 and
      #   lower, the result is acknowledged if the GLE has been executed after
      #   the command. If not, no replies will be specified. Reads will always
      #   return true here since a replies is always provided.
      #
      # @return [ true, false ] If the result is acknowledged.
      #
      # @since 2.0.0
      # @api public
      def acknowledged?
        !!@replies
      end

      # Whether the result contains cursor_id
      #
      # @return [ true, false ] If the result contains cursor_id.
      #
      # @api private
      def has_cursor_id?
        acknowledged? && replies.last.respond_to?(:cursor_id)
      end

      # Get the cursor id if the response is acknowledged.
      #
      # @note Cursor ids of 0 indicate there is no cursor on the server.
      #
      # @example Get the cursor id.
      #   result.cursor_id
      #
      # @return [ Integer ] The cursor id.
      #
      # @since 2.0.0
      # @api private
      def cursor_id
        acknowledged? ? replies.last.cursor_id : 0
      end

      # Get the namespace of the cursor. The method should be defined in
      # result classes where 'ns' is in the server response.
      #
      # @return [ Nil ]
      #
      # @since 2.0.0
      # @api private
      def namespace
        nil
      end

      # Get the documents in the result.
      #
      # @example Get the documents.
      #   result.documents
      #
      # @return [ Array<BSON::Document> ] The documents.
      #
      # @since 2.0.0
      # @api public
      def documents
        if acknowledged?
          replies.flat_map(&:documents)
        else
          []
        end
      end
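      # A small runnable sketch (assuming the mongo gem is loaded) of the
      # unacknowledged-write path described above: with nil replies, the
      # result reports itself unacknowledged and the accessors fall back to
      # empty values.
      #
      #   result = Mongo::Operation::Result.new(nil)
      #   result.acknowledged? # => false
      #   result.documents     # => []
      #   result.cursor_id     # => 0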
      # Iterate over the documents in the replies.
      #
      # @example Iterate over the documents.
      #   result.each do |doc|
      #     p doc
      #   end
      #
      # @return [ Enumerator ] The enumerator.
      #
      # @yieldparam [ BSON::Document ] Each document in the result.
      #
      # @since 2.0.0
      # @api public
      def each(&block)
        documents.each(&block)
      end

      # Get the pretty formatted inspection of the result.
      #
      # @example Inspect the result.
      #   result.inspect
      #
      # @return [ String ] The inspection.
      #
      # @since 2.0.0
      # @api public
      def inspect
        "#<#{self.class.name}:0x#{object_id} documents=#{documents}>"
      end

      # Get the reply from the result.
      #
      # Returns nil if there is no reply (i.e. the operation was an
      # unacknowledged write).
      #
      # @return [ Protocol::Message ] The first reply.
      #
      # @since 2.0.0
      # @api private
      def reply
        if acknowledged?
          replies.first
        else
          nil
        end
      end

      # Get the number of documents returned by the server in this batch.
      #
      # @return [ Integer ] The number of documents returned.
      #
      # @since 2.0.0
      # @api public
      def returned_count
        if acknowledged?
          reply.number_returned
        else
          0
        end
      end

      # If the result was a command then determine if it was considered a
      # success.
      #
      # @note If the write was unacknowledged, then this will always return
      #   true.
      #
      # @example Was the command successful?
      #   result.successful?
      #
      # @return [ true, false ] If the command was successful.
      #
      # @since 2.0.0
      # @api public
      def successful?
        return true if !acknowledged?
        if first_document.has_key?(OK)
          ok? && parser.message.empty?
        else
          !query_failure? && parser.message.empty?
        end
      end

      # Check the first document's ok field.
      #
      # @example Check the ok field.
      #   result.ok?
      #
      # @return [ true, false ] If the command returned ok.
      #
      # @since 2.1.0
      # @api public
      def ok?
        # first_document[OK] is a float, and the server can return
        # ok as a BSON int32, BSON int64 or a BSON double.
        # The number 1 is exactly representable in a float, hence
        # 1.0 == 1 is going to perform correctly all of the time
        # (until the server returns something other than 1 for success, that is)
        first_document[OK] == 1
      end

      # Validate the result by checking for any errors.
      #
      # @note This only checks for errors with writes since authentication is
      #   handled at the connection level and any authentication errors would
      #   be raised there, before a Result is ever created.
      #
      # @example Validate the result.
      #   result.validate!
      #
      # @raise [ Error::OperationFailure::Family ] If an error is in the result.
      #
      # @return [ Result ] The result if verification passed.
      #
      # @since 2.0.0
      # @api private
      def validate!
        !successful? ? raise_operation_failure : self
      end

      # The exception instance (of Error::OperationFailure::Family)
      # that would be raised during processing of this result.
      #
      # This method should only be called when result is not successful.
      #
      # @return [ Error::OperationFailure::Family ] The exception.
      #
      # @api private
      def error
        @error ||= operation_failure_class.new(
          parser.message,
          self,
          code: parser.code,
          code_name: parser.code_name,
          write_concern_error_document: parser.write_concern_error_document,
          write_concern_error_code: parser.write_concern_error_code,
          write_concern_error_code_name: parser.write_concern_error_code_name,
          write_concern_error_labels: parser.write_concern_error_labels,
          labels: parser.labels,
          wtimeout: parser.wtimeout,
          connection_description: connection_description,
          document: parser.document,
          server_message: parser.server_message,
        )
      end
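      # The typical consumption pattern for validate!/error, sketched here
      # in application terms (the result variable is hypothetical; this is
      # not a complete program):
      #
      #   begin
      #     result.validate!
      #   rescue Mongo::Error::OperationFailure => e
      #     e.code   # server error code, e.g. 11000
      #     e.labels # error labels, e.g. ["TransientTransactionError"]
      #   end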
      # Raises a Mongo::OperationFailure exception corresponding to the
      # error information in this result.
      #
      # @raise Error::OperationFailure
      private def raise_operation_failure
        raise error
      end

      # @return [ TopologyVersion | nil ] The topology version.
      #
      # @api private
      def topology_version
        unless defined?(@topology_version)
          @topology_version = first_document['topologyVersion'] &&
            TopologyVersion.new(first_document['topologyVersion'])
        end
        @topology_version
      end

      # Get the number of documents written by the server.
      #
      # @example Get the number of documents written.
      #   result.written_count
      #
      # @return [ Integer ] The number of documents written.
      #
      # @since 2.0.0
      # @api public
      def written_count
        if acknowledged?
          first_document[N] || 0
        else
          0
        end
      end

      # @api public
      alias :n :written_count

      # Get the operation time reported in the server response.
      #
      # @example Get the operation time.
      #   result.operation_time
      #
      # @return [ Object | nil ] The operation time value.
      #
      # @since 2.5.0
      # @api public
      def operation_time
        first_document && first_document[OPERATION_TIME]
      end

      # Get the cluster time reported in the server response.
      #
      # @example Get the cluster time.
      #   result.cluster_time
      #
      # @return [ ClusterTime | nil ] The cluster time document.
      #
      # Changed in version 2.9.0: This attribute became an instance of
      # ClusterTime, which is a subclass of BSON::Document.
      # Previously it was an instance of BSON::Document.
      #
      # @since 2.5.0
      # @api public
      def cluster_time
        first_document && ClusterTime[first_document['$clusterTime']]
      end

      # Gets the set of error labels associated with the result.
      #
      # @example Get the labels.
      #   result.labels
      #
      # @return [ Array<String> ] labels The set of labels.
      #
      # @since 2.7.0
      # @api private
      def labels
        @labels ||= parser.labels
      end

      # Whether the operation failed with a write concern error.
      #
      # @api private
      def write_concern_error?
        !!(first_document && first_document['writeConcernError'])
      end

      def snapshot_timestamp
        if doc = reply.documents.first
          doc['cursor']&.[]('atClusterTime') || doc['atClusterTime']
        end
      end

      private

      def operation_failure_class
        if context&.csot? && parser.code == 50
          Error::ServerTimeoutError
        else
          Error::OperationFailure
        end
      end

      def aggregate_returned_count
        replies.reduce(0) do |n, reply|
          n += reply.number_returned
          n
        end
      end

      def aggregate_written_count
        documents.reduce(0) do |n, document|
          n += (document[N] || 0)
          n
        end
      end

      def parser
        @parser ||= Error::Parser.new(first_document, replies)
      end

      def first_document
        @first_document ||= first || BSON::Document.new
      end

      def query_failure?
        replies.first && (replies.first.query_failure? || replies.first.cursor_not_found?)
      end
    end
  end
end


mongo-ruby-driver-2.21.3/lib/mongo/operation/shared/bypass_document_validation.rb

# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2018-2020 MongoDB Inc.
# Licensed under the Apache License, Version 2.0.

module Mongo
  module Operation

    # Custom behavior for operations that support the bypassdocumentvalidation option.
    #
    # @since 2.5.2
    # @api private
    module BypassDocumentValidation

      private

      def command(connection)
        if Lint.enabled?
          unless connection.is_a?(Server::Connection)
            raise Error::LintError, "Connection is not a Connection instance: #{connection}"
          end
        end

        sel = super
        add_bypass_document_validation(sel)
      end

      def add_bypass_document_validation(sel)
        return sel unless bypass_document_validation
        sel.merge(bypassDocumentValidation: true)
      end
    end
  end
end
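A standalone sketch of the bypassDocumentValidation merge above, with the
module's logic inlined into a plain method for illustration:

    def add_bypass_document_validation(sel, bypass)
      return sel unless bypass
      sel.merge(bypassDocumentValidation: true)
    end

    add_bypass_document_validation({ insert: 'users' }, true)
    # => {:insert=>"users", :bypassDocumentValidation=>true}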
mongo-ruby-driver-2.21.3/lib/mongo/operation/shared/causal_consistency_supported.rb

# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2018-2020 MongoDB Inc.
# Licensed under the Apache License, Version 2.0.

module Mongo
  module Operation

    # Custom behavior for operations that support causal consistency.
    #
    # @since 2.5.2
    # @api private
    module CausalConsistencySupported

      private

      # Adds causal consistency document to the selector, if one can be
      # constructed.
      #
      # This method overrides the causal consistency addition logic of
      # SessionsSupported and is meant to be used with operations classified
      # as "read operations accepting a read concern", as these are defined
      # in the causal consistency spec.
      #
      # In order for the override to work correctly the
      # CausalConsistencySupported module must be included after
      # SessionsSupported module in target classes.
      def apply_causal_consistency!(selector, connection)
        apply_causal_consistency_if_possible(selector, connection)
      end
    end
  end
end
mongo-ruby-driver-2.21.3/lib/mongo/operation/shared/executable.rb

# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2015-2020 MongoDB Inc.
# Licensed under the Apache License, Version 2.0.

require 'mongo/error'

module Mongo
  module Operation

    # Shared executable behavior of operations.
    #
    # @since 2.5.2
    # @api private
    module Executable
      include ResponseHandling

      # @return [ Operation::Context | nil ] the operation context used to
      #   execute this operation.
      attr_accessor :context

      def do_execute(connection, context, options = {})
        # Save the context on the instance, to avoid having to pass it as a
        # parameter to every single method. There are many legacy methods that
        # still accept it as a parameter, which are left as-is for now to
        # minimize the impact of this change. Moving forward, it may be
        # reasonable to refactor things so this saved reference is used instead.
        @context = context

        session&.materialize_if_needed
        unpin_maybe(session, connection) do
          add_error_labels(connection, context) do
            check_for_network_error do
              add_server_diagnostics(connection) do
                get_result(connection, context, options).tap do |result|
                  if session
                    if session.in_transaction? && connection.description.load_balancer?
                      if session.pinned_connection_global_id
                        unless session.pinned_connection_global_id == connection.global_id
                          raise(
                            Error::InternalDriverError,
                            "Expected operation to use connection #{session.pinned_connection_global_id} but it used #{connection.global_id}"
                          )
                        end
                      else
                        session.pin_to_connection(connection.global_id)
                        connection.pin
                      end
                    end

                    if session.snapshot? && !session.snapshot_timestamp
                      session.snapshot_timestamp = result.snapshot_timestamp
                    end
                  end

                  if result.has_cursor_id? && connection.description.load_balancer?
                    if result.cursor_id == 0
                      connection.unpin
                    else
                      connection.pin
                    end
                  end

                  process_result(result, connection)
                end
              end
            end
          end
        end
      end

      def execute(connection, context:, options: {})
        if Lint.enabled?
          unless connection.is_a?(Mongo::Server::Connection)
            raise Error::LintError, "Connection argument is of wrong type: #{connection}"
          end
        end

        do_execute(connection, context, options).tap do |result|
          validate_result(result, connection, context)
        end
      end

      private

      def result_class
        Result
      end

      def get_result(connection, context, options = {})
        result_class.new(*dispatch_message(connection, context, options), context: context, connection: connection)
      end

      # Returns a Protocol::Message or nil as reply.
      def dispatch_message(connection, context, options = {})
        message = build_message(connection, context)
        message = message.maybe_encrypt(connection, context)
        reply = connection.dispatch([ message ], context, options)
        [reply, connection.description, connection.global_id]
      end

      # @param [ Mongo::Server::Connection ] connection The connection on which
      #   the operation is performed.
      # @param [ Mongo::Operation::Context ] context The operation context.
      def build_message(connection, context)
        msg = message(connection)
        if server_api = context.server_api
          msg = msg.maybe_add_server_api(server_api)
        end
        msg
      end

      def process_result(result, connection)
        connection.server.update_cluster_time(result)

        process_result_for_sdam(result, connection)

        if session
          session.process(result)
        end

        result
      end

      def process_result_for_sdam(result, connection)
        if (result.not_master? || result.node_recovering?) &&
            connection.generation >= connection.server.pool.generation(service_id: connection.service_id)
          if result.node_shutting_down?
            keep_pool = false
          else
            # Max wire version needs to be examined while the server is known
            keep_pool = connection.description.server_version_gte?('4.2')
          end

          connection.server.unknown!(
            keep_connection_pool: keep_pool,
            generation: connection.generation,
            service_id: connection.service_id,
            topology_version: result.topology_version,
          )

          connection.server.scan_semaphore.signal
        end
      end

      NETWORK_ERRORS = [
        Error::SocketError,
        Error::SocketTimeoutError
      ].freeze

      def check_for_network_error
        yield
      rescue *NETWORK_ERRORS
        session&.dirty!
        raise
      end
    end
  end
end
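The network-error handling at the end of Executable, restated as a
self-contained sketch: any socket-level error marks the session dirty before
propagating, so it will not be reused from the session pool (the explicit
session argument here is an assumption made for illustration).

    def check_for_network_error(session)
      yield
    rescue Mongo::Error::SocketError, Mongo::Error::SocketTimeoutError
      session&.dirty!
      raise
    end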
mongo-ruby-driver-2.21.3/lib/mongo/operation/shared/executable_no_validate.rb

# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2018-2020 MongoDB Inc.
# Licensed under the Apache License, Version 2.0.

module Mongo
  module Operation

    # Shared executable behavior of operations for operations
    # whose result should not be validated.
    #
    # @api private
    module ExecutableNoValidate

      def execute(connection, context:)
        do_execute(connection, context)
      end
    end
  end
end


mongo-ruby-driver-2.21.3/lib/mongo/operation/shared/executable_transaction_label.rb

# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2018-2020 MongoDB Inc.
# Licensed under the Apache License, Version 2.0.

module Mongo
  module Operation

    # Shared behavior of applying transaction error label to execution result.
    #
    # @note This module should be included after ExecutableNoValidate,
    #   if both are included in a class.
    #
    # @api private
    module ExecutableTransactionLabel
    end
  end
end


mongo-ruby-driver-2.21.3/lib/mongo/operation/shared/idable.rb

# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2018-2020 MongoDB Inc.
# Licensed under the Apache License, Version 2.0.

module Mongo
  module Operation

    # Shared behavior of operations that require its documents to each have an id.
    #
    # @since 2.5.2
    # @api private
    module Idable

      def documents
        @documents ||= ensure_ids(super)
      end

      private

      # The option for a custom id generator.
      #
      # @since 2.2.0
      ID_GENERATOR = :id_generator.freeze

      # Get the id generator.
      #
      # @example Get the id generator.
      #   idable.id_generator
      #
      # @return [ IdGenerator ] The default or custom id generator.
      #
      # @since 2.2.0
      def id_generator
        @id_generator ||= (spec[ID_GENERATOR] || Operation::ObjectIdGenerator.new)
      end

      def id(doc)
        doc.respond_to?(:id) ? doc.id : (doc['_id'] || doc[:_id])
      end

      def has_id?(doc)
        !!id(doc)
      end

      def ensure_ids(documents)
        @ids = []
        documents.collect do |doc|
          doc_with_id = has_id?(doc) ? doc : doc.merge(_id: id_generator.generate)
          @ids << id(doc_with_id)
          doc_with_id
        end
      end
    end
  end
end
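A plain-Ruby re-enactment of ensure_ids above: documents lacking _id get a
generated one, and the ids are collected in document order (BSON::ObjectId
stands in for the default generator here).

    require 'bson'

    docs = [{ name: 'a' }, { _id: 42, name: 'b' }]
    ids = []
    docs = docs.map do |doc|
      with_id = (doc[:_id] || doc['_id']) ? doc : doc.merge(_id: BSON::ObjectId.new)
      ids << (with_id[:_id] || with_id['_id'])
      with_id
    end
    ids # => [BSON::ObjectId('...'), 42]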
mongo-ruby-driver-2.21.3/lib/mongo/operation/shared/limited.rb

# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2015-2020 MongoDB Inc.
# Licensed under the Apache License, Version 2.0.

module Mongo
  module Operation

    # Shared behavior of operations whose commands are executed with a
    # limit of -1, i.e. returning a single batch and closing the cursor.
    #
    # @since 2.5.2
    # @api private
    module Limited

      private

      # Get the options for executing the operation on a particular connection.
      #
      # @param [ Server::Connection ] connection The connection that the
      #   operation will be executed on.
      #
      # @return [ Hash ] The options.
      #
      # @since 2.0.0
      def options(connection)
        super.merge(limit: -1)
      end
    end
  end
end


mongo-ruby-driver-2.21.3/lib/mongo/operation/shared/object_id_generator.rb

# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2015-2020 MongoDB Inc.
# Licensed under the Apache License, Version 2.0.

module Mongo
  module Operation

    # The default generator of ids for documents.
    #
    # @since 2.2.0
    # @api private
    class ObjectIdGenerator

      # Generate a new id.
      #
      # @example Generate the id.
      #   object_id_generator.generate
      #
      # @return [ BSON::ObjectId ] The new id.
      #
      # @since 2.2.0
      def generate
        BSON::ObjectId.new
      end
    end
  end
end


mongo-ruby-driver-2.21.3/lib/mongo/operation/shared/op_msg_executable.rb

# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2018-2020 MongoDB Inc.
# Licensed under the Apache License, Version 2.0.

module Mongo
  module Operation

    # Shared behavior of executing the operation as an OpMsg.
    #
    # @api private
    module OpMsgExecutable
      include PolymorphicLookup

      # Execute the operation.
      #
      # @param [ Mongo::Server ] server The server to send the operation to.
      # @param [ Operation::Context ] context The operation context.
      # @param [ Hash ] options Operation execution options.
      #
      # @return [ Mongo::Operation::Result ] The operation result.
      def execute(server, context:, options: {})
        server.with_connection(
          connection_global_id: context.connection_global_id,
          context: context
        ) do |connection|
          execute_with_connection(connection, context: context, options: options)
        end
      end

      # Execute the operation.
      #
      # @param [ Mongo::Server::Connection ] connection The connection to send
      #   the operation through.
      # @param [ Operation::Context ] context The operation context.
      # @param [ Hash ] options Operation execution options.
      #
      # @return [ Mongo::Operation::Result ] The operation result.
      def execute_with_connection(connection, context:, options: {})
        final_operation.execute(connection, context: context, options: options)
      end

      private

      def final_operation
        polymorphic_class(self.class.name, :OpMsg).new(spec)
      end
    end
  end
end
# @param [ Hash ] options Operation execution options. # # @return [ Mongo::Operation::Result ] The operation result. def execute(server, context:, options: {}) server.with_connection( connection_global_id: context.connection_global_id, context: context ) do |connection| execute_with_connection(connection, context: context, options: options) end end # Execute the operation. # # @param [ Mongo::Server::Connection ] connection The connection to send # the operation through. # @param [ Operation::Context ] context The operation context. # @param [ Hash ] options Operation execution options. # # @return [ Mongo::Operation::Result ] The operation result. def execute_with_connection(connection, context:, options: {}) final_operation.execute(connection, context: context, options: options) end private def final_operation polymorphic_class(self.class.name, :OpMsg).new(spec) end end end end mongo-ruby-driver-2.21.3/lib/mongo/operation/shared/polymorphic_lookup.rb000066400000000000000000000020151505113246500266040ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2018-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Operation # Shared behavior of looking up a class based on the name of # the receiver's class. # # @api private module PolymorphicLookup private def polymorphic_class(base, name) bits = (base + "::#{name}").split('::') bits.reduce(Object) do |cls, name| cls.const_get(name, false) end end end end end mongo-ruby-driver-2.21.3/lib/mongo/operation/shared/polymorphic_result.rb000066400000000000000000000025401505113246500266140ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2018-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Operation # Shared behavior of instantiating a result class matching the # operation class. # # This module must be included after Executable module because result_class # is defined in both. 
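# # As an illustration (lookup spelled out for example purposes only): for an operation class Mongo::Operation::Update::OpMsg, result_class first # attempts Mongo::Operation::Update::OpMsg::Result and, when that constant lookup raises NameError, falls back to Mongo::Operation::Update::Result.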
# # @api private module PolymorphicResult include PolymorphicLookup private def self.included(base) base.extend ClassMethods end module ClassMethods attr_accessor :result_class end def result_class self.class.result_class ||= begin polymorphic_class(self.class.name, :Result) rescue NameError polymorphic_class(self.class.name.sub(/::[^:]*$/, ''), :Result) end end end end end mongo-ruby-driver-2.21.3/lib/mongo/operation/shared/read_preference_supported.rb000066400000000000000000000106351505113246500300730ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2015-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Operation # Read preference handling for pre-OP_MSG operation implementations. # # This module is not used by OP_MSG operation classes (those deriving # from OpMsgBase). Instead, read preference for those classes is handled # in SessionsSupported module. # # @since 2.5.2 # @api private module ReadPreferenceSupported private # Get the options for executing the operation on a particular connection. # # @param [ Server::Connection ] connection The connection that the # operation will be executed on. # # @return [ Hash ] The options. # # @since 2.0.0 def options(connection) options = super if add_secondary_ok_flag?(connection) flags = options[:flags]&.dup || [] flags << :secondary_ok options = options.merge(flags: flags) end options end # Whether to add the :secondary_ok flag to the request based on the # read preference specified in the operation or implied by the topology # that the connection's server is a part of. # # @param [ Server::Connection ] connection The connection that the # operation will be executed on. # # @return [ true | false ] Whether the :secondary_ok flag should be added. def add_secondary_ok_flag?(connection) # https://github.com/mongodb/specifications/blob/master/source/server-selection/server-selection.md#topology-type-single if connection.description.standalone? # Read preference is never sent to standalones. false elsif connection.server.cluster.single? # In Single topology the driver forces primaryPreferred read # preference mode (via the secondary_ok flag, in case of old servers) # so that the query is satisfied. true else # In replica sets and sharded clusters, read preference is passed # to the server if one is specified by the application, and there # is no default. read && read.secondary_ok? || false end end def command(connection) sel = super add_read_preference_legacy(sel, connection) end # Adds $readPreference field to the command document. # # $readPreference is only sent when the server is a mongos, # following the rules described in # https://github.com/mongodb/specifications/blob/master/source/server-selection/server-selection.md#passing-read-preference-to-mongos. # The topology does not matter for figuring out whether to send # $readPreference since the decision is always made based on # server type. # # $readPreference is not sent to pre-OP_MSG replica set members. 
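# # As a hedged illustration (selector and mode chosen for the example only), a selector such as { count: 'users' } sent to a mongos with a # secondary read preference is rewritten roughly as: # #   { :$query => { count: 'users' }, :$readPreference => { :mode => 'secondary' } }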
# # @param [ Hash ] sel Existing command document. # @param [ Server::Connection ] connection The connection that the # operation will be executed on. # # @return [ Hash ] New command document to send to the server. def add_read_preference_legacy(sel, connection) if read && ( connection.description.mongos? || connection.description.load_balancer? ) && read_pref = read.to_mongos # If the read preference contains only mode and mode is secondary # preferred and we are sending to a pre-OP_MSG server, this read # preference is indicated by the :secondary_ok wire protocol flag # and $readPreference command parameter isn't sent. if read_pref != {mode: 'secondaryPreferred'} Mongo::Lint.validate_camel_case_read_preference(read_pref) sel = sel[:$query] ? sel : {:$query => sel} sel = sel.merge(:$readPreference => read_pref) end end sel end end end end mongo-ruby-driver-2.21.3/lib/mongo/operation/shared/response_handling.rb000066400000000000000000000156771505113246500263720ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2019-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Operation # Shared behavior of response handling for operations. # # @api private module ResponseHandling private # @param [ Mongo::Operation::Result ] result The operation result. # @param [ Mongo::Server::Connection ] connection The connection on which # the operation is performed. # @param [ Mongo::Operation::Context ] context The operation context. def validate_result(result, connection, context) unpin_maybe(context.session, connection) do add_error_labels(connection, context) do add_server_diagnostics(connection) do result.validate! end end end end # Adds error labels to exceptions raised in the yielded to block, # which should perform MongoDB operations and raise Mongo::Errors on # failure. This method handles network errors (Error::SocketError) # and server-side errors (Error::OperationFailure::Family); it does not # handle server selection errors (Error::NoServerAvailable), for which # labels are added in the server selection code. # # @param [ Mongo::Server::Connection ] connection The connection on which # the operation is performed. # @param [ Mongo::Operation::Context ] context The operation context. def add_error_labels(connection, context) yield rescue Mongo::Error::SocketError => e if context.in_transaction? && !context.committing_transaction? e.add_label('TransientTransactionError') end if context.committing_transaction? e.add_label('UnknownTransactionCommitResult') end maybe_add_retryable_write_error_label!(e, connection, context) raise e rescue Mongo::Error::SocketTimeoutError => e maybe_add_retryable_write_error_label!(e, connection, context) raise e rescue Mongo::Error::OperationFailure::Family => e if context.committing_transaction? if e.write_retryable? || e.wtimeout? || (e.write_concern_error? && !Session::UNLABELED_WRITE_CONCERN_CODES.include?(e.write_concern_error_code) ) || e.max_time_ms_expired? 
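# A commit may have succeeded on the server even though the client saw an # error, so such failures are surfaced with a label the application can act on.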
e.add_label('UnknownTransactionCommitResult') end end maybe_add_retryable_write_error_label!(e, connection, context) raise e end # Unpins the session and/or the connection if the yielded to block # raises errors that are required to unpin the session and the connection. # # @note This method takes the session as an argument because this module # is included in BulkWrite which does not store the session in the # receiver (despite Specifiable doing so). # # @param [ Session | nil ] session Session to consider. # @param [ Connection | nil ] connection Connection to unpin. def unpin_maybe(session, connection) yield rescue Mongo::Error => e if session session.unpin_maybe(e, connection) end raise end # Yields to the block and, if the block raises an exception, adds a note # to the exception with the address of the specified server. # # This method is intended to add server address information to exceptions # raised during execution of operations on servers. def add_server_diagnostics(connection) yield rescue Error::SocketError, Error::SocketTimeoutError, Error::TimeoutError # Diagnostics should have already been added by the connection code, # do not add them again. raise rescue Error, Error::AuthError => e e.add_note("on #{connection.address.seed}") e.generation = connection.generation e.service_id = connection.service_id raise e end private # A method that will add the RetryableWriteError label to an error if # any of the following conditions are true: # # The error meets the criteria for a retryable error (i.e. has one # of the retryable error codes or error messages) # # AND the server does not support adding the RetryableWriteError label OR # the error is a network error (i.e. the driver must add the label) # # AND the error occurred during a commitTransaction or abortTransaction # OR the error occurred during a write outside of a transaction on a # client that has retry writes enabled. # # If these conditions are met, the original error will be mutated. # If they're not met, the error will not be changed. # # @param [ Mongo::Error ] error The error to which to add the label. # @param [ Mongo::Server::Connection ] connection The connection on which # the operation is performed. # @param [ Mongo::Operation::Context ] context The operation context. # # @note The client argument is optional because some operations, such as # end_session, do not pass the client as an argument to the execute # method. def maybe_add_retryable_write_error_label!(error, connection, context) # An operation is retryable if it meets one of the following criteria: # - It is a commitTransaction or abortTransaction # - It does not occur during a transaction and the client has enabled # modern or legacy writes # # Note: any write operation within a transaction (excepting commit and # abort) is NOT a retryable operation retryable_operation = context.committing_transaction? || context.aborting_transaction? || !context.in_transaction? && context.any_retry_writes? # An operation should add the RetryableWriteError label if one of the # following conditions is met: # - The server does not support adding the RetryableWriteError label # - The error is a network error should_add_error_label = !connection.description.features.retryable_write_error_label_enabled? || error.write_concern_error_label?('RetryableWriteError') || error.is_a?(Mongo::Error::SocketError) || error.is_a?(Mongo::Error::SocketTimeoutError) if retryable_operation && should_add_error_label && error.write_retryable?
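# All three conditions hold, so mutate the error in place; upstream retry # logic recognizes retryable errors by this label.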
error.add_label('RetryableWriteError') end end end end end mongo-ruby-driver-2.21.3/lib/mongo/operation/shared/result/000077500000000000000000000000001505113246500236415ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/lib/mongo/operation/shared/result/aggregatable.rb000066400000000000000000000046501505113246500266000ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Operation class Result # Defines custom behavior of bulk write results. # # @since 2.0.0 # @api private module Aggregatable # Aggregate the write errors returned from this result. # # @example Aggregate the write errors. # result.aggregate_write_errors(0) # # @param [ Integer ] count The number of documents already executed. # # @return [ Array ] The aggregate write errors. # # @since 2.0.0 def aggregate_write_errors(count) return unless @replies @replies.reduce(nil) do |errors, reply| if write_errors = reply.documents.first['writeErrors'] wes = write_errors.collect do |we| we.merge!('index' => count + we['index']) end (errors || []) << wes if wes end end end # Aggregate the write concern errors returned from this result. # # @example Aggregate the write concern errors. # result.aggregate_write_concern_errors(100) # # @param [ Integer ] count The number of documents already executed. # # @return [ Array ] The aggregate write concern errors. # # @since 2.0.0 def aggregate_write_concern_errors(count) return unless @replies @replies.each_with_index.reduce(nil) do |errors, (reply, _)| if write_concern_errors = reply.documents.first['writeConcernErrors'] (errors || []) << write_concern_errors.reduce(nil) do |errs, wce| wce.merge!('index' => count + wce['index']) (errs || []) << wce end end end end end end end end mongo-ruby-driver-2.21.3/lib/mongo/operation/shared/result/use_legacy_error_parser.rb000066400000000000000000000016651505113246500311030ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2019-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Operation class Result # This module creates the Parser instance in legacy mode.
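# # A minimal usage sketch (hypothetical result class, for illustration only): # #   class ExampleResult < Mongo::Operation::Result #     include UseLegacyErrorParser #   end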
# # @api private module UseLegacyErrorParser def parser @parser ||= Error::Parser.new(first_document, replies, legacy: true) end end end end end mongo-ruby-driver-2.21.3/lib/mongo/operation/shared/sessions_supported.rb000066400000000000000000000224171505113246500266310ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2015-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Operation # Shared behavior of operations that support a session. # # @since 2.5.2 # @api private module SessionsSupported private ZERO_TIMESTAMP = BSON::Timestamp.new(0, 0) READ_COMMANDS = [ :aggregate, :count, :dbStats, :distinct, :find, :geoNear, :geoSearch, :group, :mapReduce, :parallelCollectionScan ].freeze # Adds causal consistency document to the selector, if one can be # constructed and the selector is for a startTransaction command. # # When operations are performed in a transaction, only the first # operation (the one which starts the transaction via startTransaction) # is allowed to have a read concern, and with it the causal consistency # document, specified. def apply_causal_consistency!(selector, connection) return unless selector[:startTransaction] apply_causal_consistency_if_possible(selector, connection) end # Adds causal consistency document to the selector, if one can be # constructed. # # In order for the causal consistency document to be constructed, # causal consistency must be enabled for the session and the session # must have the current operation time. Also, topology must be # replica set or sharded cluster. def apply_causal_consistency_if_possible(selector, connection) if !connection.description.standalone? cc_doc = session.send(:causal_consistency_doc) if cc_doc rc_doc = (selector[:readConcern] || read_concern || {}).merge(cc_doc) selector[:readConcern] = Options::Mapper.transform_values_to_strings( rc_doc) end end end def flags acknowledged_write? ? [] : [:more_to_come] end def apply_cluster_time!(selector, connection) if !connection.description.standalone? cluster_time = [ connection.cluster_time, session&.cluster_time, ].compact.max if cluster_time selector['$clusterTime'] = cluster_time end end end def read_command?(sel) READ_COMMANDS.any? { |c| sel[c] } end def add_write_concern!(sel) sel[:writeConcern] = write_concern.options if write_concern end def apply_autocommit!(selector) session.add_autocommit!(selector) end def apply_start_transaction!(selector) session.add_start_transaction!(selector) end def apply_txn_num!(selector) session.add_txn_num!(selector) end def apply_read_pref!(selector) session.apply_read_pref!(selector) if read_command?(selector) end def apply_txn_opts!(selector) session.add_txn_opts!(selector, read_command?(selector), context) end def suppress_read_write_concern!(selector) session.suppress_read_write_concern!(selector) end def validate_read_preference!(selector) session.validate_read_preference!(selector) if read_command?(selector) end def command(connection) if Lint.enabled? 
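# In lint mode, surface driver-internal type errors immediately rather than # failing later in a less obvious way.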
unless connection.is_a?(Server::Connection) raise Error::LintError, "Connection is not a Connection instance: #{connection}" end end sel = BSON::Document.new(selector(connection)) add_write_concern!(sel) sel[Protocol::Msg::DATABASE_IDENTIFIER] = db_name add_read_preference(sel, connection) if connection.features.sessions_enabled? apply_cluster_time!(sel, connection) if session && (acknowledged_write? || session.in_transaction?) apply_session_options(sel, connection) end elsif session && session.explicit? apply_session_options(sel, connection) end sel end # Adds $readPreference field to the command document. # # $readPreference is only sent when the server is a mongos, # following the rules described in # https://github.com/mongodb/specifications/blob/master/source/server-selection/server-selection.md#passing-read-preference-to-mongos. # The topology does not matter for figuring out whether to send # $readPreference since the decision is always made based on # server type. # # $readPreference is sent to OP_MSG-grokking replica set members. # # @param [ Hash ] sel Existing command document which will be mutated. # @param [ Server::Connection ] connection The connection that the # operation will be executed on. def add_read_preference(sel, connection) Lint.assert_type(connection, Server::Connection) # https://github.com/mongodb/specifications/blob/master/source/server-selection/server-selection.md#topology-type-single read_doc = if connection.description.standalone? # Read preference is never sent to standalones. nil elsif connection.server.load_balancer? read&.to_mongos elsif connection.description.mongos? # When server is a mongos: # - $readPreference is never sent when mode is 'primary' # - Otherwise $readPreference is sent # When mode is 'secondaryPreferred' $readPreference is currently # required to only be sent when a non-mode field (i.e. tag_sets) # is present, but this causes wrong behavior (DRIVERS-1642). read&.to_mongos elsif connection.server.cluster.single? # In Single topology: # - If no read preference is specified by the application, the driver # adds mode: primaryPreferred. # - If a read preference is specified by the application, the driver # replaces the mode with primaryPreferred. read_doc = if read BSON::Document.new(read.to_doc) else BSON::Document.new end if [nil, 'primary'].include?(read_doc['mode']) read_doc['mode'] = 'primaryPreferred' end read_doc else # In replica sets, read preference is passed to the server if one # is specified by the application, except for primary read preferences. read_doc = BSON::Document.new(read&.to_doc || {}) if [nil, 'primary'].include?(read_doc['mode']) nil else read_doc end end if read_doc sel['$readPreference'] = read_doc end end def apply_session_options(sel, connection) apply_cluster_time!(sel, connection) sel[:txnNumber] = BSON::Int64.new(txn_num) if txn_num sel.merge!(lsid: session.session_id) apply_start_transaction!(sel) apply_causal_consistency!(sel, connection) apply_autocommit!(sel) apply_txn_opts!(sel) suppress_read_write_concern!(sel) validate_read_preference!(sel) apply_txn_num!(sel) if session.recovery_token && (sel[:commitTransaction] || sel[:abortTransaction]) then sel[:recoveryToken] = session.recovery_token end if session.snapshot? 
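# Snapshot sessions are only supported by MongoDB 5.0 and newer, so enforce # the server version before adding the snapshot read concern.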
unless connection.description.server_version_gte?('5.0') raise Error::SnapshotSessionInvalidServerVersion end sel[:readConcern] = {level: 'snapshot'} if session.snapshot_timestamp sel[:readConcern][:atClusterTime] = session.snapshot_timestamp end end end def build_message(connection, context) if self.session != context.session if self.session raise Error::InternalDriverError, "Operation session #{self.session.inspect} does not match context session #{context.session.inspect}" else # Some operations are not constructed with sessions but are # executed in a context where a session is available. # This could be OK or a driver issue. # TODO investigate. end end super.tap do |message| if session = context.session # Serialize the message to detect client-side problems, # such as invalid BSON keys or too large messages. # The message will be serialized again # later prior to being sent to the connection. buf = BSON::ByteBuffer.new message.serialize(buf) if buf.length > connection.max_message_size raise Error::MaxMessageSize.new(connection.max_message_size) end session.update_state! end end end end end end mongo-ruby-driver-2.21.3/lib/mongo/operation/shared/specifiable.rb000066400000000000000000000326231505113246500251240ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Operation # This module contains common functionality for convenience methods getting # various values from the spec. # # @since 2.0.0 # @api private module Specifiable # The field for database name. # # @since 2.0.0 DB_NAME = :db_name.freeze # The field for deletes. # # @since 2.0.0 DELETES = :deletes.freeze # The field for delete. # # @since 2.0.0 DELETE = :delete.freeze # The field for documents. # # @since 2.0.0 DOCUMENTS = :documents.freeze # The field for collection name. # # @since 2.0.0 COLL_NAME = :coll_name.freeze # The field for cursor count. # # @since 2.0.0 CURSOR_COUNT = :cursor_count.freeze # The field for cursor id. # # @since 2.0.0 CURSOR_ID = :cursor_id.freeze # The field for an index. # # @since 2.0.0 INDEX = :index.freeze # The field for multiple indexes. # # @since 2.0.0 INDEXES = :indexes.freeze # The field for index names. # # @since 2.0.0 INDEX_NAME = :index_name.freeze # The operation id constant. # # @since 2.1.0 OPERATION_ID = :operation_id.freeze # The field for options. # # @since 2.0.0 OPTIONS = :options.freeze # The read concern option. # # @since 2.2.0 READ_CONCERN = :read_concern.freeze # The max time ms option. # # @since 2.2.5 MAX_TIME_MS = :max_time_ms.freeze # The field for a selector. # # @since 2.0.0 SELECTOR = :selector.freeze # The field for number to return. # # @since 2.0.0 TO_RETURN = :to_return.freeze # The field for updates. # # @since 2.0.0 UPDATES = :updates.freeze # The field for update. # # @since 2.0.0 UPDATE = :update.freeze # The field name for a user. # # @since 2.0.0 USER = :user.freeze # The field name for user name. 
# # @since 2.0.0 USER_NAME = :user_name.freeze # The field name for a write concern. # # @since 2.0.0 WRITE_CONCERN = :write_concern.freeze # The field name for the read preference. # # @since 2.0.0 READ = :read.freeze # Whether to bypass document level validation. # # @since 2.2.0 BYPASS_DOC_VALIDATION = :bypass_document_validation.freeze # A collation to apply to the operation. # # @since 2.4.0 COLLATION = :collation.freeze # @return [ Hash ] spec The specification for the operation. attr_reader :spec # Check equality of two specifiable operations. # # @example Are the operations equal? # operation == other # # @param [ Object ] other The other operation. # # @return [ true, false ] Whether the objects are equal. # # @since 2.0.0 def ==(other) return false unless other.is_a?(Specifiable) spec == other.spec end alias_method :eql?, :== # Get the cursor count from the spec. # # @example Get the cursor count. # specifiable.cursor_count # # @return [ Integer ] The cursor count. # # @since 2.0.0 def cursor_count spec[CURSOR_COUNT] end # The name of the database to which the operation should be sent. # # @example Get the database name. # specifiable.db_name # # @return [ String ] Database name. # # @since 2.0.0 def db_name spec[DB_NAME] end # Get the deletes from the specification. # # @example Get the deletes. # specifiable.deletes # # @return [ Array ] The deletes. # # @since 2.0.0 def deletes spec[DELETES] end # Get the delete document from the specification. # # @example Get the delete document. # specifiable.delete # # @return [ Hash ] The delete document. # # @since 2.0.0 def delete spec[DELETE] end # The documents in the specification. # # @example Get the documents. # specifiable.documents # # @return [ Array ] The documents. # # @since 2.0.0 def documents spec[DOCUMENTS] end # The name of the collection to which the operation should be sent. # # @example Get the collection name. # specifiable.coll_name # # @return [ String ] Collection name. # # @since 2.0.0 def coll_name spec.fetch(COLL_NAME) end # The id of the cursor created on the server. # # @example Get the cursor id. # specifiable.cursor_id # # @return [ Integer ] The cursor id. # # @since 2.0.0 def cursor_id spec[CURSOR_ID] end # Get the index from the specification. # # @example Get the index specification. # specifiable.index # # @return [ Hash ] The index specification. # # @since 2.0.0 def index spec[INDEX] end # Get the index id from the spec. # # @return [ String ] The index id. def index_id spec[:index_id] end # Get the index name from the spec. # # @example Get the index name. # specifiable.index_name # # @return [ String ] The index name. # # @since 2.0.0 def index_name spec[INDEX_NAME] end # Get the indexes from the specification. # # @example Get the index specifications. # specifiable.indexes # # @return [ Hash ] The index specifications. # # @since 2.0.0 def indexes spec[INDEXES] end # Create the new specifiable operation. # # @example Create the new specifiable operation. # Specifiable.new(spec) # # @param [ Hash ] spec The operation specification. # # @see The individual operations for the values they require in their # specs. # # @since 2.0.0 def initialize(spec) @spec = spec end # Get the operation id for the operation. Used for linking operations in # monitoring. # # @example Get the operation id. # specifiable.operation_id # # @return [ Integer ] The operation id. # # @since 2.1.0 def operation_id spec[OPERATION_ID] end # Get the options for executing the operation on a particular connection.
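# # Returns the spec's +:options+ entry, or an empty hash if none was provided.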
# # @param [ Server::Connection ] connection The connection that the # operation will be executed on. # # @return [ Hash ] The options. # # @since 2.0.0 def options(connection) spec[OPTIONS] || {} end # Get the read concern document from the spec. # # @note The document may include afterClusterTime. # # @example Get the read concern. # specifiable.read_concern # # @return [ Hash ] The read concern document. # # @since 2.2.0 def read_concern spec[READ_CONCERN] end # Get the max time ms value from the spec. # # @example Get the max time ms. # specifiable.max_time_ms # # @return [ Hash ] The max time ms value. # # @since 2.2.5 def max_time_ms spec[MAX_TIME_MS] end # Whether or not to bypass document level validation. # # @example Get the bypass_document_validation option. # specifiable.bypass_document_validation # # @return [ true, false ] Whether to bypass document level validation. # # @since 2.2.0 def bypass_document_validation spec[BYPASS_DOC_VALIDATION] end # The collation to apply to the operation. # # @example Get the collation option. # specifiable.collation # # @return [ Hash ] The collation document. # # @since 2.4.0 def collation send(self.class::IDENTIFIER).first[COLLATION] end # The selector from the specification for execution on a particular # connection. # # @param [ Server::Connection ] connection The connection that the # operation will be executed on. # # @return [ Hash ] The selector spec. # # @since 2.0.0 def selector(connection) spec[SELECTOR] end # The number of documents to request from the server. # # @example Get the to return value from the spec. # specifiable.to_return # # @return [ Integer ] The number of documents to return. # # @since 2.0.0 def to_return spec[TO_RETURN] end # The update documents from the spec. # # @example Get the update documents. # specifiable.updates # # @return [ Array ] The update documents. # # @since 2.0.0 def updates spec[UPDATES] end # The update document from the spec. # # @example Get the update document. # specifiable.update # # @return [ Hash ] The update document. # # @since 2.0.0 def update spec[UPDATE] end # The user for user related operations. # # @example Get the user. # specifiable.user # # @return [ Auth::User ] The user. # # @since 2.0.0 def user spec[USER] end # The user name from the specification. # # @example Get the user name. # specifiable.user_name # # @return [ String ] The user name. # # @since 2.0.0 def user_name spec[USER_NAME] end # The write concern to use for this operation. # # @example Get the write concern. # specifiable.write_concern # # @return [ Mongo::WriteConcern ] The write concern. # # @since 2.0.0 def write_concern @spec[WRITE_CONCERN] end # The read preference for this operation. # # @example Get the read preference. # specifiable.read # # @return [ Mongo::ServerSelector ] The read preference. # # @since 2.0.0 def read @read ||= begin ServerSelector.get(spec[READ]) if spec[READ] end end # Whether the operation is ordered. # # @example Get the ordered value, true is the default. # specifiable.ordered? # # @return [ true, false ] Whether the operation is ordered. # # @since 2.1.0 def ordered? !!(@spec.fetch(:ordered, true)) end # The namespace, consisting of the db name and collection name. # # @example Get the namespace. # specifiable.namespace # # @return [ String ] The namespace. # # @since 2.1.0 def namespace "#{db_name}.#{coll_name}" end # The session to use for the operation. # # @example Get the session. # specifiable.session # # @return [ Session ] The session.
# # @since 2.5.0 def session @spec[:session] end # The transaction number for the operation. # # @example Get the transaction number. # specifiable.txn_num # # @return [ Integer ] The transaction number. # # @since 2.5.0 def txn_num @spec[:txn_num] end # The command. # # @return [ Hash ] The command. # # @since 2.5.2 def command(connection) selector(connection) end # The array filters. # # @param [ Server::Connection ] connection The connection that the # operation will be executed on. # # @return [ Hash | nil ] The array filters. # # @since 2.5.2 def array_filters(connection) sel = selector(connection) sel[Operation::ARRAY_FILTERS] if sel end # Does the operation have an acknowledged write concern. # # @example Determine whether the operation has an acknowledged write. # specifiable.acknowledged_write? # # @return [ Boolean ] Whether or not the operation has an acknowledged write concern. # # @since 2.5.2 def acknowledged_write? write_concern.nil? || write_concern.acknowledged? end def apply_collation(selector, connection, collation) if collation unless connection.features.collation_enabled? raise Error::UnsupportedCollation end selector = selector.merge(collation: collation) end selector end end end end mongo-ruby-driver-2.21.3/lib/mongo/operation/shared/timed.rb000066400000000000000000000033661505113246500237600ustar00rootroot00000000000000# frozen_string_literal: true module Mongo module Operation # Defines the behavior of operations that have the default timeout # behavior described by the client-side operation timeouts (CSOT) # spec. # # @api private module Timed # If a timeout is active (as defined by the current context), and it has # not yet expired, add :maxTimeMS to the spec. # # @param [ Hash ] spec The spec to modify # @param [ Connection ] connection The connection that will be used to # execute the operation # # @return [ Hash ] the spec # # @raise [ Mongo::Error::TimeoutError ] if the current timeout has # expired. def apply_relevant_timeouts_to(spec, connection) with_max_time(connection) do |max_time_sec| return spec if max_time_sec.nil? return spec if connection.description.mongocryptd? spec.tap { spec[:maxTimeMS] = (max_time_sec * 1_000).to_i } end end # A helper method that computes the remaining timeout (in seconds) and # yields it to the associated block. If no timeout is present, yields # nil. If the timeout has expired, raises Mongo::Error::TimeoutError. # # @param [ Connection ] connection The connection that will be used to # execute the operation # # @return [ Hash ] the result of yielding to the block (which must be # a Hash) def with_max_time(connection) if context&.timeout? max_time_sec = context.remaining_timeout_sec - connection.server.minimum_round_trip_time raise Mongo::Error::TimeoutError if max_time_sec <= 0 yield max_time_sec else yield nil end end end end end mongo-ruby-driver-2.21.3/lib/mongo/operation/shared/validatable.rb000066400000000000000000000055001505113246500251200ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2021 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and # limitations under the License. module Mongo module Operation # @api private module Validatable def validate_find_options(connection, selector) if selector.key?(:hint) && !connection.features.find_and_modify_option_validation_enabled? then raise Error::UnsupportedOption.hint_error end if selector.key?(:arrayFilters) && !connection.features.array_filters_enabled? then raise Error::UnsupportedArrayFilters end if selector.key?(:collation) && !connection.features.collation_enabled? then raise Error::UnsupportedCollation end end # selector_or_item here is either: # - The selector as used in a findAndModify command, or # - One of the array elements in the updates array in an update command. def validate_hint_on_update(connection, selector_or_item) if selector_or_item.key?(:hint) && !connection.features.update_delete_option_validation_enabled? then raise Error::UnsupportedOption.hint_error end end # selector_or_item here is either: # - The selector as used in a findAndModify command, or # - One of the array elements in the updates array in an update command. def validate_array_filters(connection, selector_or_item) if selector_or_item.key?(:arrayFilters) && !connection.features.array_filters_enabled? then raise Error::UnsupportedArrayFilters end end # selector_or_item here is either: # - The selector as used in a findAndModify command, or # - One of the array elements in the updates array in an update command. def validate_collation(connection, selector_or_item) if selector_or_item.key?(:collation) && !connection.features.collation_enabled? then raise Error::UnsupportedCollation end end def validate_updates(connection, updates) updates.each do |update| validate_array_filters(connection, update) validate_collation(connection, update) validate_hint_on_update(connection, update) end updates end end end end mongo-ruby-driver-2.21.3/lib/mongo/operation/shared/write.rb000066400000000000000000000063311505113246500240050ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2018-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Operation # Shared behavior of operations that write (update, insert, delete). # # @since 2.5.2 # @api private module Write include ResponseHandling # Execute the operation. # # @param [ Mongo::Server ] server The server to send the operation to. # @param [ Operation::Context ] context The operation context. # # @return [ Mongo::Operation::Result ] The operation result. # # @since 2.5.2 def execute(server, context:) server.with_connection( connection_global_id: context.connection_global_id, context: context ) do |connection| execute_with_connection(connection, context: context) end end # Execute the operation. # # @param [ Mongo::Server::Connection ] connection The connection to send # the operation through. # @param [ Operation::Context ] context The operation context.
# # @return [ Mongo::Operation::Result ] The operation result. def execute_with_connection(connection, context:) validate!(connection) op = self.class::OpMsg.new(spec) result = op.execute(connection, context: context) validate_result(result, connection, context) end # Execute the bulk write operation. # # @param [ Mongo::Server::Connection ] connection The connection over # which to send the operation. # @param [ Operation::Context ] context The operation context. # # @return [ Mongo::Operation::Delete::BulkResult, # Mongo::Operation::Insert::BulkResult, # Mongo::Operation::Update::BulkResult ] The bulk result. # # @since 2.5.2 def bulk_execute(connection, context:) Lint.assert_type(connection, Server::Connection) if connection.features.op_msg_enabled? self.class::OpMsg.new(spec).execute(connection, context: context).bulk_result else self.class::Command.new(spec).execute(connection, context: context).bulk_result end end private def validate!(connection) if !acknowledged_write? if collation raise Error::UnsupportedCollation.new( Error::UnsupportedCollation::UNACKNOWLEDGED_WRITES_MESSAGE) end if array_filters(connection) raise Error::UnsupportedArrayFilters.new( Error::UnsupportedArrayFilters::UNACKNOWLEDGED_WRITES_MESSAGE) end end end end end end mongo-ruby-driver-2.21.3/lib/mongo/operation/shared/write_concern_supported.rb000066400000000000000000000022431505113246500276170ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2015-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Operation # Custom behavior for operations that support write concern. # # @since 2.5.2 # @api private module WriteConcernSupported private def write_concern_supported?(connection); true; end def command(connection) add_write_concern!(super, connection) end def add_write_concern!(sel, connection) if write_concern && write_concern_supported?(connection) sel[:writeConcern] = write_concern.options end sel end end end end mongo-ruby-driver-2.21.3/lib/mongo/operation/update.rb000066400000000000000000000017621505113246500226720ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2015-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'mongo/operation/update/op_msg' require 'mongo/operation/update/result' require 'mongo/operation/update/bulk_result' module Mongo module Operation # A MongoDB update operation. 
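# # A construction sketch (spec values are hypothetical; the keys come from # Specifiable and 'updates' is this operation's IDENTIFIER): # #   op = Mongo::Operation::Update.new( #     db_name: 'test', #     coll_name: 'users', #     ordered: true, #     updates: [{ 'q' => { 'name' => 'old' }, 'u' => { '$set' => { 'name' => 'new' } } }] #   ) #   op.execute(server, context: context)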
# # @api private # # @since 2.0.0 class Update include Specifiable include Write private IDENTIFIER = 'updates'.freeze end end end mongo-ruby-driver-2.21.3/lib/mongo/operation/update/000077500000000000000000000000001505113246500223375ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/lib/mongo/operation/update/bulk_result.rb000066400000000000000000000070561505113246500252270ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Operation class Update # Defines custom behavior of results for an update when sent as part of a bulk write. # # @since 2.0.0 class BulkResult < Operation::Result include Aggregatable # The number of modified docs field in the result. # # @since 2.0.0 MODIFIED = 'nModified'.freeze # The upserted docs field in the result. # # @since 2.0.0 UPSERTED = 'upserted'.freeze # Gets the number of documents upserted. # # @example Get the upserted count. # result.n_upserted # # @return [ Integer ] The number of documents upserted. # # @since 2.0.0 def n_upserted return 0 unless acknowledged? @replies.reduce(0) do |n, reply| if upsert?(reply) n += reply.documents.first[UPSERTED].size else n end end end # Gets the number of documents matched. # # @example Get the matched count. # result.n_matched # # @return [ Integer ] The number of documents matched. # # @since 2.0.0 def n_matched return 0 unless acknowledged? @replies.reduce(0) do |n, reply| if upsert?(reply) reply.documents.first[N] - n_upserted else if reply.documents.first[N] n += reply.documents.first[N] else n end end end end # Gets the number of documents modified. # Note that in a mixed sharded cluster a call to # update could return nModified (>= 2.6) or not (<= 2.4). # If any call does not return nModified we can't report # a valid final count so set the field to nil. # # @example Get the modified count. # result.n_modified # # @return [ Integer ] The number of documents modified. # # @since 2.0.0 def n_modified return 0 unless acknowledged? @replies.reduce(0) do |n, reply| if n && reply.documents.first[MODIFIED] n += reply.documents.first[MODIFIED] else 0 end end end # Get the upserted documents. # # @example Get upserted documents. # result.upserted # # @return [ Array ] The upserted document info # # @since 2.1.0 def upserted return [] unless acknowledged? @replies.reduce([]) do |ids, reply| if upserted_ids = reply.documents.first[UPSERTED] ids += upserted_ids end ids end end private def upsert?(reply) upserted.any? end end end end end mongo-ruby-driver-2.21.3/lib/mongo/operation/update/op_msg.rb000066400000000000000000000030451505113246500241520ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2018-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License.
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Operation class Update # A MongoDB update operation sent as an op message. # # @api private # # @since 2.5.2 class OpMsg < OpMsgBase include BypassDocumentValidation include ExecutableNoValidate include ExecutableTransactionLabel include PolymorphicResult include Validatable private def selector(connection) { update: coll_name, ordered: ordered?, let: spec[:let], comment: spec[:comment] }.compact end def message(connection) updates = validate_updates(connection, send(IDENTIFIER)) section = Protocol::Msg::Section1.new(IDENTIFIER, updates) cmd = apply_relevant_timeouts_to(command(connection), connection) Protocol::Msg.new(flags, {}, cmd, section) end end end end end mongo-ruby-driver-2.21.3/lib/mongo/operation/update/result.rb000066400000000000000000000054421505113246500242070ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Operation class Update # Defines custom behavior of results for an update. # # @since 2.0.0 # @api semiprivate class Result < Operation::Result # The number of modified docs field in the result. # # @since 2.0.0 # @api private MODIFIED = 'nModified'.freeze # The upserted docs field in the result. # # @since 2.0.0 # @api private UPSERTED = 'upserted'.freeze # Get the number of documents matched. # # @example Get the matched count. # result.matched_count # # @return [ Integer ] The matched count. # # @since 2.0.0 # @api public def matched_count return 0 unless acknowledged? if upsert? 0 else n end end # Get the number of documents modified. # # @example Get the modified count. # result.modified_count # # @return [ Integer ] The modified count. # # @since 2.0.0 # @api public def modified_count return 0 unless acknowledged? first[MODIFIED] end # The identifier of the inserted document if an upsert # took place. # # @example Get the upserted document's identifier. # result.upserted_id # # @return [ Object ] The upserted id. # # @since 2.0.0 # @api public def upserted_id return nil unless upsert? upsert?.first['_id'] end # Returns the number of documents upserted. # # @example Get the number of upserted documents. # result.upserted_count # # @return [ Integer ] The number upserted. # # @since 2.4.2 # @api public def upserted_count upsert? ? n : 0 end # @api public def bulk_result BulkResult.new(@replies, connection_description) end private def upsert? 
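# The server reply contains an 'upserted' field only when an upsert actually # took place, so its presence doubles as the predicate.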
first[UPSERTED] end end end end end mongo-ruby-driver-2.21.3/lib/mongo/operation/update_search_index.rb000066400000000000000000000004511505113246500254000ustar00rootroot00000000000000# frozen_string_literal: true require 'mongo/operation/update_search_index/op_msg' module Mongo module Operation # A MongoDB updateSearchIndex command operation. # # @api private class UpdateSearchIndex include Specifiable include OpMsgExecutable end end end mongo-ruby-driver-2.21.3/lib/mongo/operation/update_search_index/000077500000000000000000000000001505113246500250535ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/lib/mongo/operation/update_search_index/op_msg.rb000066400000000000000000000015571505113246500266740ustar00rootroot00000000000000# frozen_string_literal: true module Mongo module Operation class UpdateSearchIndex # A MongoDB updateSearchIndex operation sent as an op message. # # @api private class OpMsg < OpMsgBase include ExecutableTransactionLabel private # Returns the command to send to the database, describing the # desired updateSearchIndex operation. # # @param [ Connection ] _connection the connection that will receive the # command # # @return [ Hash ] the selector def selector(_connection) { updateSearchIndex: coll_name, :$db => db_name, definition: index, }.tap do |sel| sel[:id] = index_id if index_id sel[:name] = index_name if index_name end end end end end end mongo-ruby-driver-2.21.3/lib/mongo/operation/update_user.rb000066400000000000000000000016001505113246500237210ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2015-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'mongo/operation/update_user/op_msg' module Mongo module Operation # A MongoDB updateUser operation. # # @api private # # @since 2.0.0 class UpdateUser include Specifiable include OpMsgExecutable end end end mongo-ruby-driver-2.21.3/lib/mongo/operation/update_user/000077500000000000000000000000001505113246500233755ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/lib/mongo/operation/update_user/op_msg.rb000066400000000000000000000020041505113246500252020ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2018-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Operation class UpdateUser # A MongoDB updateUser operation sent as an op message.
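# # The selector built below merges the command name and target user name with # the user's spec document, e.g. (values illustrative only): # #   { :updateUser => 'alice', roles: [ 'readWrite' ] }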
# # @api private # # @since 2.5.2 class OpMsg < OpMsgBase include ExecutableTransactionLabel private def selector(connection) { :updateUser => user.name }.merge(user.spec) end end end end end mongo-ruby-driver-2.21.3/lib/mongo/operation/users_info.rb000066400000000000000000000016511505113246500235610ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2015-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'mongo/operation/users_info/op_msg' require 'mongo/operation/users_info/result' module Mongo module Operation # A MongoDB usersInfo operation. # # @api private # # @since 2.0.0 class UsersInfo include Specifiable include OpMsgExecutable end end end mongo-ruby-driver-2.21.3/lib/mongo/operation/users_info/000077500000000000000000000000001505113246500232315ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/lib/mongo/operation/users_info/op_msg.rb000066400000000000000000000020221505113246500250360ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2018-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Operation class UsersInfo # A MongoDB usersInfo operation sent as an op message. # # @api private # # @since 2.5.2 class OpMsg < OpMsgBase include ExecutableTransactionLabel include PolymorphicResult private def selector(connection) { :usersInfo => user_name } end end end end end mongo-ruby-driver-2.21.3/lib/mongo/operation/users_info/result.rb000066400000000000000000000023461505113246500251010ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Operation class UsersInfo # Defines custom behavior of results when using the # usersInfo command. # # @since 2.1.0 # @api semiprivate class Result < Operation::Result # The field name for the users document in a usersInfo result.
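# # For example, a successful usersInfo reply has the (abridged) shape # #   { 'users' => [{ 'user' => 'alice', 'db' => 'admin' }], 'ok' => 1.0 } # # and #documents below returns the array stored under this field.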
mongo-ruby-driver-2.21.3/lib/mongo/operation/write_command.rb

# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2021 MongoDB Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

require 'mongo/operation/write_command/op_msg'

module Mongo
  module Operation
    # A MongoDB general command operation.
    #
    # @api private
    class WriteCommand
      include Specifiable
      include OpMsgExecutable
    end
  end
end

mongo-ruby-driver-2.21.3/lib/mongo/operation/write_command/op_msg.rb

# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2021 MongoDB Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

module Mongo
  module Operation
    class WriteCommand
      # A MongoDB write command operation sent as an op message.
      #
      # @api private
      class OpMsg < OpMsgBase
        include Validatable

        private

        def selector(connection)
          super.tap do |selector|
            if selector.key?(:findAndModify)
              validate_find_options(connection, selector)
            end
            if wc = spec[:write_concern]
              selector[:writeConcern] = wc.options
            end
          end
        end
      end
    end
  end
end
mongo-ruby-driver-2.21.3/lib/mongo/options.rb

# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2015-2020 MongoDB Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

require 'mongo/options/mapper'
require 'mongo/options/redacted'

mongo-ruby-driver-2.21.3/lib/mongo/options/mapper.rb

# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2015-2020 MongoDB Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

module Mongo
  module Options

    # Utility class for various options mapping behavior.
    #
    # @since 2.0.0
    module Mapper
      extend self

      # Transforms the provided options to a new set of options given the
      # provided mapping.
      #
      # Options which are not present in the provided mapping
      # are returned unmodified.
      #
      # @example Transform the options.
      #   Mapper.transform({ name: 1 }, { :name => :nombre })
      #
      # @param [ Hash ] options The options to transform
      # @param [ Hash ] mappings The key mappings.
      #
      # @return [ Hash ] The transformed options.
      #
      # @since 2.0.0
      def transform(options, mappings)
        map = transform_keys_to_strings(mappings)
        opts = transform_keys_to_strings(options)
        opts.reduce({}) do |transformed, (key, value)|
          if map[key]
            transformed[map[key]] = value
          else
            transformed[key] = value
          end
          transformed
        end
      end

      # Transforms the provided options to a new set of options given the
      # provided mapping. Expects BSON::Documents in and out so no explicit
      # string conversion needs to happen.
      #
      # @example Transform the options.
      #   Mapper.transform_documents({ name: 1 }, { :name => :nombre })
      #
      # @param [ BSON::Document ] options The options to transform
      # @param [ BSON::Document ] mappings The key mappings.
      # @param [ BSON::Document ] document The output document.
      #
      # @return [ BSON::Document ] The transformed options.
      #
      # @since 2.0.0
      def transform_documents(options, mappings, document = BSON::Document.new)
        options.reduce(document) do |transformed, (key, value)|
          name = mappings[key]
          transformed[name] = value if name && !value.nil?
          transformed
        end
      end

      # Converts all the keys of the options to strings.
      #
      # @example Convert all option keys to strings.
      #   Mapper.transform({ :name => 1 })
      #
      # @param [ Hash ] options The options to transform.
      #
      # @return [ Hash ] The transformed options.
      #
      # @since 2.0.0
      def transform_keys_to_strings(options)
        options.reduce({}) do |transformed, (key, value)|
          transformed[key.to_s] = value
          transformed
        end
      end

      # Converts all the keys of the options to symbols.
      #
      # @example Convert all option keys to symbols.
      #   Mapper.transform({ 'name' => 1 })
      #
      # @param [ Hash ] options The options to transform.
      #
      # @return [ Hash ] The transformed options.
      #
      # @since 2.2.2
      def transform_keys_to_symbols(options)
        options.reduce({}) do |transformed, (key, value)|
          transformed[key.to_sym] = value
          transformed
        end
      end

      # Converts all the symbol values to strings.
      #
      # @example Convert all option symbol values to strings.
      #   Mapper.transform({ :name => 1 })
      #
      # @param [ Hash ] options The options to transform.
      #
      # @return [ Hash ] The transformed options.
      #
      # @since 2.0.0
      def transform_values_to_strings(options)
        options.reduce({}) do |transformed, (key, value)|
          transformed[key] = value.is_a?(Symbol) ? value.to_s : value
          transformed
        end
      end
    end
  end
end
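A quick sketch of how Options::Mapper behaves (illustrative only, not part of the driver source; assumes the mongo gem is loaded):

require 'mongo'

# Keys named in the mapping are renamed; everything else passes through.
# Both hashes have their keys stringified first, so the unmapped key
# comes back as a string.
Mongo::Options::Mapper.transform({ name: 1, other: 2 }, { name: :nombre })
# => { :nombre => 1, 'other' => 2 }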
mongo-ruby-driver-2.21.3/lib/mongo/options/redacted.rb

# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2015-2020 MongoDB Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

module Mongo
  module Options

    # Class for wrapping options that could be sensitive.
    # When printed, the sensitive values will be redacted.
    #
    # @since 2.1.0
    class Redacted < BSON::Document

      # The options whose values will be redacted.
      #
      # @since 2.1.0
      SENSITIVE_OPTIONS = [ :password, :pwd ].freeze

      # The replacement string used in place of the value for sensitive keys.
      #
      # @since 2.1.0
      STRING_REPLACEMENT = '<REDACTED>'.freeze

      # Get a string representation of the options.
      #
      # @return [ String ] The string representation of the options.
      #
      # @since 2.1.0
      def inspect
        redacted_string(:inspect)
      end

      # Get a string representation of the options.
      #
      # @return [ String ] The string representation of the options.
      #
      # @since 2.1.0
      def to_s
        redacted_string(:to_s)
      end

      # Whether these options contain a given key.
      #
      # @example Determine if the options contain a given key.
      #   options.has_key?(:name)
      #
      # @param [ String, Symbol ] key The key to check for existence.
      #
      # @return [ true, false ] If the options contain the given key.
      #
      # @since 2.1.0
      def has_key?(key)
        super(convert_key(key))
      end
      alias_method :key?, :has_key?

      # Returns a new options object consisting of pairs for which the block returns false.
      #
      # @example Get a new options object with pairs for which the block returns false.
      #   new_options = options.reject { |k, v| k == 'database' }
      #
      # @yieldparam [ String, Object ] The key as a string and its value.
      #
      # @return [ Options::Redacted ] A new options object.
      #
      # @since 2.1.0
      def reject(&block)
        new_options = dup
        new_options.reject!(&block) || new_options
      end

      # Only keeps pairs for which the block returns false.
      #
      # @example Remove pairs from this object for which the block returns true.
      #   options.reject! { |k, v| k == 'database' }
      #
      # @yieldparam [ String, Object ] The key as a string and its value.
      #
      # @return [ Options::Redacted, nil ] This object or nil if no changes were made.
      #
      # @since 2.1.0
      def reject!
        if block_given?
          n_keys = keys.size
          keys.each do |key|
            delete(key) if yield(key, self[key])
          end
          n_keys == keys.size ? nil : self
        else
          to_enum
        end
      end

      # Returns a new options object consisting of pairs for which the block returns true.
      #
      # @example Get a new options object with pairs for which the block returns true.
      #   ssl_options = options.select { |k, v| k =~ /ssl/ }
      #
      # @yieldparam [ String, Object ] The key as a string and its value.
      #
      # @return [ Options::Redacted ] A new options object.
      #
      # @since 2.1.0
      def select(&block)
        new_options = dup
        new_options.select!(&block) || new_options
      end

      # Only keeps pairs for which the block returns true.
      #
      # @example Remove pairs from this object for which the block does not return true.
      #   options.select! { |k, v| k =~ /ssl/ }
      #
      # @yieldparam [ String, Object ] The key as a string and its value.
      #
      # @return [ Options::Redacted, nil ] This object or nil if no changes were made.
      #
      # @since 2.1.0
      def select!
        if block_given?
          n_keys = keys.size
          keys.each do |key|
            delete(key) unless yield(key, self[key])
          end
          n_keys == keys.size ? nil : self
        else
          to_enum
        end
      end

      private

      def redacted_string(method)
        '{' + reduce([]) do |list, (k, v)|
          list << "#{k.send(method)}=>#{redact(k, v, method)}"
        end.join(', ') + '}'
      end

      def redact(k, v, method)
        return STRING_REPLACEMENT if SENSITIVE_OPTIONS.include?(k.to_sym)
        v.send(method)
      end
    end
  end
end
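A short sketch of the redaction behavior (illustrative only; assumes the mongo gem is loaded). Only the printed representation is redacted; lookups still return the real value:

require 'mongo'

opts = Mongo::Options::Redacted.new(user: 'alice', password: 'hunter2')
opts.inspect    # the password value prints as <REDACTED>
opts[:password] # => "hunter2"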
mongo-ruby-driver-2.21.3/lib/mongo/protocol.rb

# frozen_string_literal: true
# rubocop:todo all

# Wire Protocol Base
require 'mongo/protocol/serializers'
require 'mongo/protocol/registry'
require 'mongo/protocol/bit_vector'
require 'mongo/protocol/message'
require 'mongo/protocol/caching_hash'

# Client Requests
require 'mongo/protocol/compressed'
require 'mongo/protocol/get_more'
require 'mongo/protocol/kill_cursors'
require 'mongo/protocol/query'
require 'mongo/protocol/msg'

# Server Responses
require 'mongo/protocol/reply'

mongo-ruby-driver-2.21.3/lib/mongo/protocol/bit_vector.rb

# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2014-2020 MongoDB Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

module Mongo
  module Protocol
    module Serializers

      # Class used to define a bitvector for a MongoDB wire protocol message.
      #
      # Defines serialization strategy upon initialization.
      #
      # @api private
      class BitVector

        # Initializes a BitVector with a layout
        #
        # @param layout [ Array ] the array of fields in the bit vector
        def initialize(layout)
          @masks = {}
          layout.each_with_index do |field, index|
            @masks[field] = 2**index if field
          end
        end

        # Serializes vector by encoding each symbol according to its mask
        #
        # @param buffer [ String ] Buffer to receive the serialized vector
        # @param value [ Array ] Array of flags to encode
        # @param [ true, false ] validating_keys Whether keys should be validated when serializing.
        #   This option is deprecated and will not be used. It will be removed in version 3.0.
        #
        # @return [ String ] Buffer that received the serialized vector
        def serialize(buffer, value, validating_keys = nil)
          bits = 0
          value.each { |flag| bits |= (@masks[flag] || 0) }
          buffer.put_int32(bits)
        end

        # Deserializes vector by decoding the symbol according to its mask
        #
        # @param [ String ] buffer Buffer containing the vector to be deserialized.
        # @param [ Hash ] options This method does not currently accept any options.
        #
        # @return [ Array ] Flags contained in the vector
        def deserialize(buffer, options = {})
          vector = buffer.get_int32
          flags = []
          @masks.each do |flag, mask|
            flags << flag if mask & vector != 0
          end
          flags
        end
      end
    end
  end
end
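A round-trip sketch for BitVector (illustrative only; BitVector is @api private, so this is not a supported public API):

require 'mongo'

vector = Mongo::Protocol::Serializers::BitVector.new([:tailable_cursor, :secondary_ok])
buffer = BSON::ByteBuffer.new
vector.serialize(buffer, [:secondary_ok])             # writes Int32 0b10
vector.deserialize(BSON::ByteBuffer.new(buffer.to_s)) # => [:secondary_ok]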
mongo-ruby-driver-2.21.3/lib/mongo/protocol/caching_hash.rb

# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2014-2022 MongoDB Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

module Mongo
  module Protocol

    # A Hash that caches the results of #to_bson.
    #
    # @api private
    class CachingHash

      def initialize(hash)
        @hash = hash
      end

      def bson_type
        Hash::BSON_TYPE
      end

      # Caches the result of to_bson and writes it to the given buffer on subsequent
      # calls to this method. If this method is originally called without validation,
      # and then is subsequently called with validation, we will want to recalculate
      # the to_bson to trigger the validations.
      #
      # @param [ BSON::ByteBuffer ] buffer The encoded BSON buffer to append to.
      # @param [ true, false ] validating_keys Whether keys should be validated when serializing.
      #   This option is deprecated and will not be used. It will be removed in version 3.0.
      #
      # @return [ BSON::ByteBuffer ] The buffer with the encoded object.
      def to_bson(buffer = BSON::ByteBuffer.new, validating_keys = nil)
        if !@bytes
          @bytes = @hash.to_bson(BSON::ByteBuffer.new).to_s
        end
        buffer.put_bytes(@bytes)
      end
    end
  end
end
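A sketch of the caching behavior (illustrative only; CachingHash is @api private):

require 'mongo'

hash = Mongo::Protocol::CachingHash.new('hello' => 'world')
first  = hash.to_bson.to_s # serializes the wrapped hash
second = hash.to_bson.to_s # reuses the cached bytes
first == second # => true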
mongo-ruby-driver-2.21.3/lib/mongo/protocol/compressed.rb

# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2017-2020 MongoDB Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

module Mongo
  module Protocol

    # MongoDB Wire protocol Compressed message.
    #
    # This is a bi-directional message that compresses another opcode.
    # See https://github.com/mongodb/specifications/blob/master/source/compression/OP_COMPRESSED.md
    #
    # @api semipublic
    #
    # @since 2.5.0
    class Compressed < Message

      # The noop compressor identifier.
      NOOP = 'noop'.freeze

      # The byte signaling that the message has not been compressed (test mode).
      NOOP_BYTE = 0.chr.force_encoding(BSON::BINARY).freeze

      # The snappy compressor identifier.
      SNAPPY = 'snappy'.freeze

      # The byte signaling that the message has been compressed with snappy.
      SNAPPY_BYTE = 1.chr.force_encoding(BSON::BINARY).freeze

      # The byte signaling that the message has been compressed with Zlib.
      #
      # @since 2.5.0
      ZLIB_BYTE = 2.chr.force_encoding(BSON::BINARY).freeze

      # The Zlib compressor identifier.
      #
      # @since 2.5.0
      ZLIB = 'zlib'.freeze

      # The zstd compressor identifier.
      ZSTD = 'zstd'.freeze

      # The byte signaling that the message has been compressed with zstd.
      ZSTD_BYTE = 3.chr.force_encoding(BSON::BINARY).freeze

      # The compressor identifier to byte map.
      #
      # @since 2.5.0
      COMPRESSOR_ID_MAP = {
        SNAPPY => SNAPPY_BYTE,
        ZSTD => ZSTD_BYTE,
        ZLIB => ZLIB_BYTE
      }.freeze

      # Creates a new OP_COMPRESSED message.
      #
      # @example Create an OP_COMPRESSED message.
      #   Compressed.new(original_message, 'zlib')
      #
      # @param [ Mongo::Protocol::Message ] message The original message.
      # @param [ String, Symbol ] compressor The compression algorithm to use.
      # @param [ Integer ] zlib_compression_level The zlib compression level to use.
      #   -1 and nil imply default.
      #
      # @since 2.5.0
      def initialize(message, compressor, zlib_compression_level = nil)
        @original_message = message
        @original_op_code = message.op_code
        @uncompressed_size = 0
        @compressor_id = COMPRESSOR_ID_MAP[compressor]
        @compressed_message = ''
        @zlib_compression_level = zlib_compression_level if zlib_compression_level && zlib_compression_level != -1
        @request_id = message.request_id
      end

      # Inflates an OP_COMPRESSED message and returns the original message.
      #
      # @return [ Protocol::Message ] The inflated message.
      #
      # @since 2.5.0
      # @api private
      def maybe_inflate
        message = Registry.get(@original_op_code).allocate
        buf = decompress(@compressed_message)

        message.send(:fields).each do |field|
          if field[:multi]
            Message.deserialize_array(message, buf, field)
          else
            Message.deserialize_field(message, buf, field)
          end
        end
        if message.is_a?(Msg)
          message.fix_after_deserialization
        end
        message
      end

      # Whether the message expects a reply from the database.
      #
      # @example Does the message require a reply?
      #   message.replyable?
      #
      # @return [ true, false ] If the message expects a reply.
      #
      # @since 2.5.0
      def replyable?
        @original_message.replyable?
      end

      private

      # The operation code for a +Compressed+ message.
      # @return [ Fixnum ] the operation code.
      #
      # @since 2.5.0
      OP_CODE = 2012

      # @!attribute
      # Field representing the original message's op code as an Int32.
      field :original_op_code, Int32

      # @!attribute
      # @return [ Fixnum ] The size of the original message, excluding header as an Int32.
      field :uncompressed_size, Int32

      # @!attribute
      # @return [ String ] The id of the compressor as a single byte.
      field :compressor_id, Byte

      # @!attribute
      # @return [ String ] The actual compressed message bytes.
      field :compressed_message, Bytes

      def serialize_fields(buffer, max_bson_size)
        buf = BSON::ByteBuffer.new
        @original_message.send(:serialize_fields, buf, max_bson_size)
        @uncompressed_size = buf.length
        @compressed_message = compress(buf)
        super
      end

      def compress(buffer)
        if @compressor_id == NOOP_BYTE
          buffer.to_s.force_encoding(BSON::BINARY)
        elsif @compressor_id == ZLIB_BYTE
          Zlib::Deflate.deflate(buffer.to_s, @zlib_compression_level).force_encoding(BSON::BINARY)
        elsif @compressor_id == SNAPPY_BYTE
          Snappy.deflate(buffer.to_s).force_encoding(BSON::BINARY)
        elsif @compressor_id == ZSTD_BYTE
          # DRIVERS-600 will allow this to be configurable in the future
          Zstd.compress(buffer.to_s).force_encoding(BSON::BINARY)
        end
      end

      def decompress(compressed_message)
        if @compressor_id == NOOP_BYTE
          BSON::ByteBuffer.new(compressed_message)
        elsif @compressor_id == ZLIB_BYTE
          BSON::ByteBuffer.new(Zlib::Inflate.inflate(compressed_message))
        elsif @compressor_id == SNAPPY_BYTE
          BSON::ByteBuffer.new(Snappy.inflate(compressed_message))
        elsif @compressor_id == ZSTD_BYTE
          BSON::ByteBuffer.new(Zstd.decompress(compressed_message))
        end
      end

      Registry.register(OP_CODE, self)
    end
  end
end
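The zlib branch above reduces to a plain deflate/inflate round trip; a minimal sketch of that building block (illustrative only):

require 'zlib'

# An OP_COMPRESSED body is [original opcode][uncompressed size]
# [compressor id byte][compressed bytes]; the compressed bytes for the
# zlib compressor are produced exactly like this:
payload = 'a' * 1024
deflated = Zlib::Deflate.deflate(payload, Zlib::DEFAULT_COMPRESSION)
Zlib::Inflate.inflate(deflated) == payload # => true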
mongo-ruby-driver-2.21.3/lib/mongo/protocol/get_more.rb

# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2014-2020 MongoDB Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

module Mongo
  module Protocol

    # MongoDB Wire protocol getMore message.
    #
    # This is a client request message that is sent to the server in order
    # to retrieve additional documents from a cursor that has already been
    # instantiated.
    #
    # The operation requires that you specify the database and collection
    # name as well as the cursor id because cursors are scoped to a namespace.
    #
    # @api semipublic
    class GetMore < Message

      # Creates a new getMore message
      #
      # @example Get 15 additional documents from cursor 123 in 'xgen.users'.
      #   GetMore.new('xgen', 'users', 15, 123)
      #
      # @param database [String, Symbol] The database to query.
      # @param collection [String, Symbol] The collection to query.
      # @param number_to_return [Integer] The number of documents to return.
      # @param cursor_id [Integer] The cursor id returned in a reply.
      def initialize(database, collection, number_to_return, cursor_id)
        @database = database
        @namespace = "#{database}.#{collection}"
        @number_to_return = number_to_return
        @cursor_id = cursor_id
        @upconverter = Upconverter.new(collection, cursor_id, number_to_return)
        super
      end

      # Return the event payload for monitoring.
      #
      # @example Return the event payload.
      #   message.payload
      #
      # @return [ BSON::Document ] The event payload.
      #
      # @since 2.1.0
      def payload
        BSON::Document.new(
          command_name: 'getMore',
          database_name: @database,
          command: upconverter.command,
          request_id: request_id
        )
      end

      # Get more messages require replies from the database.
      #
      # @example Does the message require a reply?
      #   message.replyable?
      #
      # @return [ true ] Always true for get more.
      #
      # @since 2.0.0
      def replyable?
        true
      end

      protected

      attr_reader :upconverter

      private

      # The operation code required to specify a getMore message.
      # @return [Fixnum] the operation code.
      #
      # @since 2.5.0
      OP_CODE = 2005

      # Field representing Zero encoded as an Int32
      field :zero, Zero

      # @!attribute
      # @return [String] The namespace for this getMore message.
      field :namespace, CString

      # @!attribute
      # @return [Fixnum] The number to return for this getMore message.
      field :number_to_return, Int32

      # @!attribute
      # @return [Fixnum] The cursor id to get more documents from.
      field :cursor_id, Int64

      # Converts legacy getMore messages to the appropriate OP_COMMAND style
      # message.
      #
      # @since 2.1.0
      class Upconverter

        # The get more constant.
        #
        # @since 2.2.0
        # @deprecated
        GET_MORE = 'getMore'.freeze

        # @return [ String ] collection The name of the collection.
        attr_reader :collection

        # @return [ Integer ] cursor_id The cursor id.
        attr_reader :cursor_id

        # @return [ Integer ] number_to_return The number of docs to return.
        attr_reader :number_to_return

        # Instantiate the upconverter.
        #
        # @example Instantiate the upconverter.
        #   Upconverter.new('users', 1, 1)
        #
        # @param [ String ] collection The name of the collection.
        # @param [ Integer ] cursor_id The cursor id.
        # @param [ Integer ] number_to_return The number of documents to
        #   return.
        #
        # @since 2.1.0
        def initialize(collection, cursor_id, number_to_return)
          @collection = collection
          @cursor_id = cursor_id
          @number_to_return = number_to_return
        end

        # Get the upconverted command.
        #
        # @example Get the command.
        #   upconverter.command
        #
        # @return [ BSON::Document ] The upconverted command.
        #
        # @since 2.1.0
        def command
          document = BSON::Document.new
          document.store('getMore', BSON::Int64.new(cursor_id))
          document.store(Message::BATCH_SIZE, number_to_return)
          document.store(Message::COLLECTION, collection)
          document
        end
      end

      Registry.register(OP_CODE, self)
    end
  end
end
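A sketch of the monitoring payload produced by the upconverter (illustrative only; the namespace and cursor id are made up):

require 'mongo'

message = Mongo::Protocol::GetMore.new('xgen', 'users', 15, 123)
message.payload[:command]
# => { 'getMore' => BSON::Int64(123), 'batchSize' => 15, 'collection' => 'users' }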
mongo-ruby-driver-2.21.3/lib/mongo/protocol/kill_cursors.rb

# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2014-2020 MongoDB Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

module Mongo
  module Protocol

    # MongoDB Wire protocol KillCursors message.
    #
    # This is a client request message that is sent to the server in order
    # to kill a number of cursors.
    #
    # @api semipublic
    class KillCursors < Message

      # Creates a new KillCursors message
      #
      # @example Kill the cursor on the server with id 1.
      #   KillCursors.new('users', 'xgen', [1])
      #
      # @param [ String, Symbol ] collection The collection.
      # @param [ String, Symbol ] database The database.
      # @param [ Array ] cursor_ids The cursor ids to kill.
      def initialize(collection, database, cursor_ids)
        @database = database
        @cursor_ids = cursor_ids
        @id_count = @cursor_ids.size
        @upconverter = Upconverter.new(collection, cursor_ids)
        super
      end

      # Return the event payload for monitoring.
      #
      # @example Return the event payload.
      #   message.payload
      #
      # @return [ BSON::Document ] The event payload.
      #
      # @since 2.1.0
      def payload
        BSON::Document.new(
          command_name: 'killCursors',
          database_name: @database,
          command: upconverter.command,
          request_id: request_id,
        )
      end

      protected

      attr_reader :upconverter

      private

      # The operation code required to specify +KillCursors+ message.
      # @return [Fixnum] the operation code.
      #
      # @since 2.5.0
      OP_CODE = 2007

      # Field representing Zero encoded as an Int32.
      field :zero, Zero

      # @!attribute
      # @return [Fixnum] Count of the number of cursor ids.
      field :id_count, Int32

      # @!attribute
      # @return [Array] Cursors to kill.
      field :cursor_ids, Int64, true

      # Converts legacy kill cursors messages to the appropriate OP_COMMAND
      # style message.
      #
      # @since 2.1.0
      class Upconverter

        # @return [ String ] collection The name of the collection.
        attr_reader :collection

        # @return [ Array ] cursor_ids The cursor ids.
        attr_reader :cursor_ids

        # Instantiate the upconverter.
        #
        # @example Instantiate the upconverter.
        #   Upconverter.new('users', [ 1, 2, 3 ])
        #
        # @param [ String ] collection The name of the collection.
        # @param [ Array ] cursor_ids The cursor ids.
        #
        # @since 2.1.0
        def initialize(collection, cursor_ids)
          @collection = collection
          @cursor_ids = cursor_ids
        end

        # Get the upconverted command.
        #
        # @example Get the command.
        #   upconverter.command
        #
        # @return [ BSON::Document ] The upconverted command.
        #
        # @since 2.1.0
        def command
          document = BSON::Document.new
          document.store('killCursors', collection)
          store_ids = cursor_ids.map do |cursor_id|
            BSON::Int64.new(cursor_id)
          end
          document.store('cursors', store_ids)
          document
        end
      end

      Registry.register(OP_CODE, self)
    end
  end
end
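A similar sketch for KillCursors (illustrative only; the names and ids are made up):

require 'mongo'

message = Mongo::Protocol::KillCursors.new('users', 'xgen', [1, 2])
message.payload[:command]
# => { 'killCursors' => 'users', 'cursors' => [BSON::Int64(1), BSON::Int64(2)] }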
mongo-ruby-driver-2.21.3/lib/mongo/protocol/message.rb

# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2014-2020 MongoDB Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

module Mongo
  module Protocol

    # A base class providing functionality required by all messages in
    # the MongoDB wire protocol. It provides a minimal DSL for defining typed
    # fields to enable serialization and deserialization over the wire.
    #
    # @example
    #   class WireProtocolMessage < Message
    #
    #     private
    #
    #     def op_code
    #       1234
    #     end
    #
    #     FLAGS = [:first_bit, :bit_two]
    #
    #     # payload
    #     field :flags, BitVector.new(FLAGS)
    #     field :namespace, CString
    #     field :document, Document
    #     field :documents, Document, true
    #   end
    #
    # @abstract
    # @api semiprivate
    class Message
      include Id
      include Serializers

      # The batch size constant.
      #
      # @since 2.2.0
      BATCH_SIZE = 'batchSize'.freeze

      # The collection constant.
      #
      # @since 2.2.0
      COLLECTION = 'collection'.freeze

      # The limit constant.
      #
      # @since 2.2.0
      LIMIT = 'limit'.freeze

      # The ordered constant.
      #
      # @since 2.2.0
      ORDERED = 'ordered'.freeze

      # The q constant.
      #
      # @since 2.2.0
      Q = 'q'.freeze

      # Default max message size of 48MB.
      #
      # @since 2.2.1
      MAX_MESSAGE_SIZE = 50331648.freeze

      def initialize(*args) # :nodoc:
        set_request_id
      end

      # Returns the request id for the message
      #
      # @return [Fixnum] The request id for this message
      attr_reader :request_id

      # The default for messages is not to require a reply after sending a
      # message to the server.
      #
      # @example Does the message require a reply?
      #   message.replyable?
      #
      # @return [ false ] The default is to not require a reply.
      #
      # @since 2.0.0
      def replyable?
        false
      end

      # Compress the message, if supported by the wire protocol used and if
      # the command being sent permits compression. Otherwise returns self.
      #
      # @param [ String, Symbol ] compressor The compressor to use.
      # @param [ Integer ] zlib_compression_level The zlib compression level to use.
      #
      # @return [ self ] Always returns self. Other message types should
      #   override this method.
      #
      # @since 2.5.0
      # @api private
      def maybe_compress(compressor, zlib_compression_level = nil)
        self
      end

      # Compress the message, if the command being sent permits compression.
      # Otherwise returns self.
      #
      # @param [ String ] command_name Command name extracted from the message.
      # @param [ String | Symbol ] compressor The compressor to use.
      # @param [ Integer ] zlib_compression_level Zlib compression level to use.
      #
      # @return [ Message ] A Protocol::Compressed message or self,
      #   depending on whether this message can be compressed.
      #
      # @since 2.5.0
      private def compress_if_possible(command_name, compressor, zlib_compression_level)
        if compressor && compression_allowed?(command_name)
          Compressed.new(self, compressor, zlib_compression_level)
        else
          self
        end
      end

      # Inflate a message if it is compressed.
      #
      # @return [ Protocol::Message ] Always returns self. Subclasses should
      #   override this method as necessary.
      #
      # @since 2.5.0
      # @api private
      def maybe_inflate
        self
      end

      # Possibly decrypt this message with libmongocrypt.
      #
      # @param [ Mongo::Operation::Context ] context The operation context.
      #
      # @return [ Mongo::Protocol::Msg ] The decrypted message, or the original
      #   message if decryption was not possible or necessary.
      def maybe_decrypt(context)
        # TODO determine if we should be decrypting data coming from pre-4.2
        # servers, potentially using legacy wire protocols. If so we need
        # to implement decryption for those wire protocols as our current
        # encryption/decryption code is OP_MSG-specific.
        self
      end

      # Possibly encrypt this message with libmongocrypt.
      #
      # @param [ Mongo::Server::Connection ] connection The connection on which
      #   the operation is performed.
      # @param [ Mongo::Operation::Context ] context The operation context.
      #
      # @return [ Mongo::Protocol::Msg ] The encrypted message, or the original
      #   message if encryption was not possible or necessary.
      def maybe_encrypt(connection, context)
        # Do nothing if the Message subclass has not implemented this method
        self
      end

      def maybe_add_server_api(server_api)
        raise Error::ServerApiNotSupported, "Server API parameters cannot be sent to pre-3.6 MongoDB servers. Please remove the :server_api parameter from Client options or use MongoDB 3.6 or newer"
      end

      private def merge_sections
        cmd = if @sections.length > 1
          cmd = @sections.detect { |section| section[:type] == 0 }[:payload]
          identifier = @sections.detect { |section| section[:type] == 1}[:payload][:identifier]
          cmd.merge(identifier.to_sym =>
            @sections.select { |section| section[:type] == 1 }.
              map { |section| section[:payload][:sequence] }.
              inject([]) { |arr, documents| arr + documents }
          )
        elsif @sections.first[:payload]
          @sections.first[:payload]
        else
          @sections.first
        end
        if cmd.nil?
          raise "The command should never be nil here"
        end
        cmd
      end

      # Serializes message into bytes that can be sent on the wire
      #
      # @param buffer [String] buffer where the message should be inserted
      # @return [String] buffer containing the serialized message
      def serialize(buffer = BSON::ByteBuffer.new, max_bson_size = nil, bson_overhead = nil)
        max_size =
          if max_bson_size && bson_overhead
            max_bson_size + bson_overhead
          elsif max_bson_size
            max_bson_size
          else
            nil
          end

        start = buffer.length
        serialize_header(buffer)
        serialize_fields(buffer, max_size)
        buffer.replace_int32(start, buffer.length - start)
      end

      alias_method :to_s, :serialize

      # Deserializes messages from an IO stream.
      #
      # This method returns decompressed messages (i.e. if the message on the
      # wire was OP_COMPRESSED, this method would typically return the OP_MSG
      # message that is the result of decompression).
      #
      # @param [ Integer ] max_message_size The max message size.
      # @param [ IO ] io Stream containing a message
      # @param [ Hash ] options
      #
      # @option options [ Boolean ] :deserialize_as_bson Whether to deserialize
      #   this message using BSON types instead of native Ruby types wherever
      #   possible.
      # @option options [ Numeric ] :socket_timeout The timeout to use for
      #   each read operation.
      #
      # @return [ Message ] Instance of a Message class
      #
      # @api private
      def self.deserialize(io,
        max_message_size = MAX_MESSAGE_SIZE,
        expected_response_to = nil,
        options = {}
      )
        # io is usually a Mongo::Socket instance, which supports the
        # timeout option. For compatibility with whoever might call this
        # method with some other IO-like object, pass options only when they
        # are not empty.
        read_options = options.slice(:timeout, :socket_timeout)
        if read_options.empty?
          chunk = io.read(16)
        else
          chunk = io.read(16, **read_options)
        end
        buf = BSON::ByteBuffer.new(chunk)
        length, _request_id, response_to, _op_code = deserialize_header(buf)

        # Protection from potential DOS man-in-the-middle attacks. See
        # DRIVERS-276.
        if length > (max_message_size || MAX_MESSAGE_SIZE)
          raise Error::MaxMessageSize.new(max_message_size)
        end

        # Protection against returning the response to a previous request. See
        # RUBY-1117
        if expected_response_to && response_to != expected_response_to
          raise Error::UnexpectedResponse.new(expected_response_to, response_to)
        end

        if read_options.empty?
          chunk = io.read(length - 16)
        else
          chunk = io.read(length - 16, **read_options)
        end
        buf = BSON::ByteBuffer.new(chunk)

        message = Registry.get(_op_code).allocate
        message.send(:fields).each do |field|
          if field[:multi]
            deserialize_array(message, buf, field, options)
          else
            deserialize_field(message, buf, field, options)
          end
        end
        if message.is_a?(Msg)
          message.fix_after_deserialization
        end
        message.maybe_inflate
      end

      # Tests for equality between two wire protocol messages
      # by comparing class and field values.
      #
      # @param other [Mongo::Protocol::Message] The wire protocol message.
      # @return [true, false] The equality of the messages.
      def ==(other)
        return false if self.class != other.class
        fields.all? do |field|
          name = field[:name]
          instance_variable_get(name) == other.instance_variable_get(name)
        end
      end
      alias_method :eql?, :==

      # Creates a hash from the values of the fields of a message.
      #
      # @return [ Fixnum ] The hash code for the message.
      def hash
        fields.map { |field| instance_variable_get(field[:name]) }.hash
      end

      # Generates a request id for a message
      #
      # @return [Fixnum] a request id used for sending a message to the
      #   server. The server will put this id in the response_to field of
      #   a reply.
      def set_request_id
        @request_id = self.class.next_id
      end

      # Default number returned value for protocol messages.
      #
      # @return [ 0 ] This method must be overridden, otherwise, always returns 0.
      #
      # @since 2.5.0
      def number_returned; 0; end

      private

      # A method for getting the fields for a message class
      #
      # @return [ Array ] the fields for the message class
      def fields
        self.class.fields
      end

      # A class method for getting the fields for a message class
      #
      # @return [ Array ] the fields for the message class
      def self.fields
        @fields ||= []
      end

      # Serializes message fields into a buffer
      #
      # @param buffer [String] buffer to receive the field
      # @return [String] buffer with serialized field
      def serialize_fields(buffer, max_bson_size = nil)
        fields.each do |field|
          value = instance_variable_get(field[:name])
          if field[:multi]
            value.each do |item|
              if field[:type].respond_to?(:size_limited?)
                field[:type].serialize(buffer, item, max_bson_size)
              else
                field[:type].serialize(buffer, item)
              end
            end
          else
            if field[:type].respond_to?(:size_limited?)
              field[:type].serialize(buffer, value, max_bson_size)
            else
              field[:type].serialize(buffer, value)
            end
          end
        end
      end

      # Serializes the header of the message consisting of 4 32bit integers
      #
      # The integers represent a message length placeholder (calculation of
      # the actual length is deferred) the request id, the response to id,
      # and the op code for the message
      #
      # Currently uses hardcoded 0 for request id and response to as their
      # values are irrelevant to the server
      #
      # @param buffer [String] Buffer to receive the header
      # @return [String] Serialized header
      def serialize_header(buffer)
        Header.serialize(buffer, [0, request_id, 0, op_code])
      end

      # Deserializes the header of the message
      #
      # @param io [IO] Stream containing the header.
      # @return [Array] Deserialized header.
      def self.deserialize_header(io)
        Header.deserialize(io)
      end

      # A method for declaring a message field
      #
      # @param name [String] Name of the field
      # @param type [Module] Type specific serialization strategies
      # @param multi [true, false, Symbol] Specify as +true+ to
      #   serialize the field's value as an array of type +:type+ or as a
      #   symbol describing the field having the number of items in the
      #   array (used upon deserialization)
      #
      #   Note: In fields where multi is a symbol representing the field
      #   containing number items in the repetition, the field containing
      #   that information *must* be deserialized prior to deserializing
      #   fields that use the number.
      #
      # @return [NilClass]
      def self.field(name, type, multi = false)
        fields << {
          :name => "@#{name}".intern,
          :type => type,
          :multi => multi
        }
        attr_reader name
      end

      # Deserializes an array of fields in a message
      #
      # The number of items in the array must be described by a previously
      # deserialized field specified in the class by the field dsl under
      # the key +:multi+
      #
      # @param message [Message] Message to contain the deserialized array.
      # @param io [IO] Stream containing the array to deserialize.
      # @param field [Hash] Hash representing a field.
      # @param options [ Hash ]
      #
      # @option options [ Boolean ] :deserialize_as_bson Whether to deserialize
      #   each of the elements in this array using BSON types wherever possible.
      #
      # @return [Message] Message with deserialized array.
      def self.deserialize_array(message, io, field, options = {})
        elements = []
        count = message.instance_variable_get(field[:multi])
        count.times { elements << field[:type].deserialize(io, options) }
        message.instance_variable_set(field[:name], elements)
      end

      # Deserializes a single field in a message
      #
      # @param message [Message] Message to contain the deserialized field.
      # @param io [IO] Stream containing the field to deserialize.
      # @param field [Hash] Hash representing a field.
      # @param options [ Hash ]
      #
      # @option options [ Boolean ] :deserialize_as_bson Whether to deserialize
      #   this field using BSON types wherever possible.
      #
      # @return [Message] Message with deserialized field.
      def self.deserialize_field(message, io, field, options = {})
        message.instance_variable_set(
          field[:name],
          field[:type].deserialize(io, options)
        )
      end
    end
  end
end
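A toy subclass showing the field DSL described above (illustrative only; the opcode 9999 and the class name are made up and not registered with any server):

require 'mongo'

class ExampleMessage < Mongo::Protocol::Message
  def initialize(namespace)
    @namespace = namespace
    super
  end

  private

  # op_code is consulted by serialize_header.
  def op_code
    9999
  end

  # A 16-byte header, then an Int32 zero, then a C string.
  field :zero, Zero
  field :namespace, CString
end

ExampleMessage.new('db.coll').serialize # => BSON::ByteBuffer with the encoded message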
mongo-ruby-driver-2.21.3/lib/mongo/protocol/msg.rb

# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2017-2020 MongoDB Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

module Mongo
  module Protocol

    # MongoDB Wire protocol Msg message (OP_MSG), a bi-directional wire
    # protocol opcode.
    #
    # OP_MSG is only available in MongoDB 3.6 (maxWireVersion >= 6) and later.
    #
    # @api private
    #
    # @since 2.5.0
    class Msg < Message
      include Monitoring::Event::Secure

      # The identifier for the database name to execute the command on.
      #
      # @since 2.5.0
      DATABASE_IDENTIFIER = '$db'.freeze

      # Keys that the driver adds to commands. These are going to be
      # moved to the end of the hash for better logging.
      #
      # @api private
      INTERNAL_KEYS = Set.new(%w($clusterTime $db lsid signature txnNumber)).freeze

      # Creates a new OP_MSG protocol message
      #
      # @example Create a OP_MSG wire protocol message
      #   Msg.new([:more_to_come], {}, { hello: 1 },
      #     { type: 1, payload: { identifier: 'documents', sequence: [..] } })
      #
      # @param [ Array ] flags The flag bits. Currently supported
      #   values are :more_to_come and :checksum_present.
      # @param [ Hash ] options The options.
      # @param [ BSON::Document, Hash ] main_document The document that will
      #   become the payload type 0 section. Can contain global args as they
      #   are defined in the OP_MSG specification.
      # @param [ Protocol::Msg::Section1 ] sequences Zero or more payload type 1
      #   sections.
      #
      # @option options [ true, false ] validating_keys Whether keys should be
      #   validated for being valid document keys (i.e. not begin with $ and
      #   not contain dots).
      #   This option is deprecated and will not be used. It will be removed
      #   in version 3.0.
      #
      # @api private
      #
      # @since 2.5.0
      def initialize(flags, options, main_document, *sequences)
        if flags
          flags.each do |flag|
            unless KNOWN_FLAGS.key?(flag)
              raise ArgumentError, "Unknown flag: #{flag.inspect}"
            end
          end
        end
        @flags = flags || []
        @options = options
        unless main_document.is_a?(Hash)
          raise ArgumentError, "Main document must be a Hash, given: #{main_document.class}"
        end
        @main_document = main_document
        sequences.each_with_index do |section, index|
          unless section.is_a?(Section1)
            raise ArgumentError, "All sequences must be Section1 instances, got: #{section} at index #{index}"
          end
        end
        @sequences = sequences
        @sections = [
          {type: 0, payload: @main_document}
        ] + @sequences.map do |section|
          {type: 1, payload: {
            identifier: section.identifier,
            sequence: section.documents.map do |doc|
              CachingHash.new(doc)
            end,
          }}
        end
        @request_id = nil
        super
      end

      # Whether the message expects a reply from the database.
      #
      # @example Does the message require a reply?
      #   message.replyable?
      #
      # @return [ true, false ] If the message expects a reply.
      #
      # @since 2.5.0
      def replyable?
        @replyable ||= !flags.include?(:more_to_come)
      end

      # Return the event payload for monitoring.
      #
      # @example Return the event payload.
      #   message.payload
      #
      # @return [ BSON::Document ] The event payload.
      #
      # @since 2.5.0
      def payload
        # Reorder keys in main_document for better logging - see
        # https://jira.mongodb.org/browse/RUBY-1591.
        # Note that even without the reordering, the payload is not an exact
        # match to what is sent over the wire because the command as used in
        # the published event combines keys from multiple sections of the
        # payload sent over the wire.
        ordered_command = {}
        skipped_command = {}
        command.each do |k, v|
          if INTERNAL_KEYS.member?(k.to_s)
            skipped_command[k] = v
          else
            ordered_command[k] = v
          end
        end
        ordered_command.update(skipped_command)

        BSON::Document.new(
          command_name: ordered_command.keys.first.to_s,
          database_name: @main_document[DATABASE_IDENTIFIER],
          command: ordered_command,
          request_id: request_id,
          reply: @main_document,
        )
      end

      # Serializes message into bytes that can be sent on the wire.
      #
      # @param [ BSON::ByteBuffer ] buffer where the message should be inserted.
      # @param [ Integer ] max_bson_size The maximum bson object size.
      #
      # @return [ BSON::ByteBuffer ] buffer containing the serialized message.
      #
      # @since 2.5.0
      def serialize(buffer = BSON::ByteBuffer.new, max_bson_size = nil, bson_overhead = nil)
        validate_document_size!(max_bson_size)

        super
        add_check_sum(buffer)
        buffer
      end

      # Compress the message, if the command being sent permits compression.
      # Otherwise returns self.
      #
      # @param [ String, Symbol ] compressor The compressor to use.
      # @param [ Integer ] zlib_compression_level The zlib compression level to use.
      #
      # @return [ Message ] A Protocol::Compressed message or self,
      #   depending on whether this message can be compressed.
      #
      # @since 2.5.0
      # @api private
      def maybe_compress(compressor, zlib_compression_level = nil)
        compress_if_possible(command.keys.first, compressor, zlib_compression_level)
      end

      # Reverse-populates the instance variables after deserialization sets
      # the @sections instance variable to the list of documents.
      #
      # TODO fix deserialization so that this method is not needed.
      #
      # @api private
      def fix_after_deserialization
        if @sections.nil?
          raise NotImplementedError, "After deserialization @sections should have been initialized"
        end
        if @sections.length != 1
          raise NotImplementedError, "Deserialization must have produced exactly one section, but it produced #{sections.length} sections"
        end
        @main_document = @sections.first
        @sequences = []
        @sections = [{type: 0, payload: @main_document}]
      end

      def documents
        [@main_document]
      end

      # Possibly encrypt this message with libmongocrypt. Message will only be
      # encrypted if the specified client exists, that client has been given
      # auto-encryption options, the client has not been instructed to bypass
      # auto-encryption, and mongocryptd determines that this message is
      # eligible for encryption. A message is eligible for encryption if it
      # represents one of the command types allow-listed by libmongocrypt and it
      # contains data that is required to be encrypted by a local or remote json schema.
      #
      # @param [ Mongo::Server::Connection ] connection The connection on which
      #   the operation is performed.
      # @param [ Mongo::Operation::Context ] context The operation context.
      #
      # @return [ Mongo::Protocol::Msg ] The encrypted message, or the original
      #   message if encryption was not possible or necessary.
      def maybe_encrypt(connection, context)
        # TODO verify compression happens later, i.e. when this method runs
        # the message is not compressed.
        if context.encrypt?
          if connection.description.max_wire_version < 8
            raise Error::CryptError.new(
              "Cannot perform encryption against a MongoDB server older than " +
              "4.2 (wire version less than 8). Currently connected to server " +
              "with max wire version #{connection.description.max_wire_version} " +
              "(Auto-encryption requires a minimum MongoDB version of 4.2)"
            )
          end

          db_name = @main_document[DATABASE_IDENTIFIER]
          cmd = merge_sections
          enc_cmd = context.encrypt(db_name, cmd)
          if cmd.key?('$db') && !enc_cmd.key?('$db')
            enc_cmd['$db'] = cmd['$db']
          end

          Msg.new(@flags, @options, enc_cmd)
        else
          self
        end
      end

      # Possibly decrypt this message with libmongocrypt. Message will only be
      # decrypted if the specified client exists, that client has been given
      # auto-encryption options, and this message is eligible for decryption.
      # A message is eligible for decryption if it represents one of the command
      # types allow-listed by libmongocrypt and it contains data that is required
      # to be encrypted by a local or remote json schema.
      #
      # @param [ Mongo::Operation::Context ] context The operation context.
      #
      # @return [ Mongo::Protocol::Msg ] The decrypted message, or the original
      #   message if decryption was not possible or necessary.
      def maybe_decrypt(context)
        if context.decrypt?
          cmd = merge_sections
          enc_cmd = context.decrypt(cmd)
          Msg.new(@flags, @options, enc_cmd)
        else
          self
        end
      end

      # Whether this message represents a bulk write. A bulk write is an insert,
      # update, or delete operation that encompasses multiple operations of
      # the same type.
      #
      # @return [ Boolean ] Whether this message represents a bulk write.
      #
      # @note This method was written to support client-side encryption
      #   functionality. It is not recommended that this method be used in
      #   service of any other feature or behavior.
      #
      # @api private
      def bulk_write?
        inserts = @main_document['documents']
        updates = @main_document['updates']
        deletes = @main_document['deletes']

        num_inserts = inserts && inserts.length || 0
        num_updates = updates && updates.length || 0
        num_deletes = deletes && deletes.length || 0

        num_inserts > 1 || num_updates > 1 || num_deletes > 1
      end

      def maybe_add_server_api(server_api)
        conflicts = {}
        %i(apiVersion apiStrict apiDeprecationErrors).each do |key|
          if @main_document.key?(key)
            conflicts[key] = @main_document[key]
          end
          if @main_document.key?(key.to_s)
            conflicts[key] = @main_document[key.to_s]
          end
        end
        unless conflicts.empty?
          raise Error::ServerApiConflict, "The Client is configured with :server_api option but the operation provided the following conflicting parameters: #{conflicts.inspect}"
        end

        main_document = @main_document.merge(
          Utils.transform_server_api(server_api)
        )
        Msg.new(@flags, @options, main_document, *@sequences)
      end

      # Returns the number of documents returned from the server.
      #
      # The Msg instance must be for a server reply and the reply must return
      # an active cursor (either a newly created one or one whose iteration is
      # continuing via getMore).
      #
      # @return [ Integer ] Number of returned documents.
      def number_returned
        if doc = documents.first
          if cursor = doc['cursor']
            if batch = cursor['firstBatch'] || cursor['nextBatch']
              return batch.length
            end
          end
        end
        raise NotImplementedError, "number_returned is only defined for cursor replies"
      end

      private

      # Validate that the documents in this message are all smaller than the
      # maxBsonObjectSize. If not, raise an exception.
      def validate_document_size!(max_bson_size)
        max_bson_size ||= Mongo::Server::ConnectionBase::DEFAULT_MAX_BSON_OBJECT_SIZE

        contains_too_large_document = @sections.any? do |section|
          section[:type] == 1 &&
            section[:payload][:sequence].any? do |document|
              document.to_bson.length > max_bson_size
            end
        end

        if contains_too_large_document
          raise Error::MaxBSONSize.new('The document exceeds maximum allowed BSON object size after serialization')
        end
      end

      def command
        @command ||= if @main_document
          @main_document.dup.tap do |cmd|
            @sequences.each do |section|
              cmd[section.identifier] ||= []
              cmd[section.identifier] += section.documents
            end
          end
        else
          documents.first
        end
      end

      def add_check_sum(buffer)
        if flags.include?(:checksum_present)
          #buffer.put_int32(checksum)
        end
      end

      # Encapsulates a type 1 OP_MSG section.
      #
      # @see https://github.com/mongodb/specifications/blob/master/source/message/OP_MSG.md#sections
      #
      # @api private
      class Section1
        def initialize(identifier, documents)
          @identifier, @documents = identifier, documents
        end

        attr_reader :identifier, :documents

        def ==(other)
          other.is_a?(Section1) &&
            identifier == other.identifier && documents == other.documents
        end

        alias :eql? :==
      end

      # The operation code required to specify a OP_MSG message.
      # @return [ Fixnum ] the operation code.
      #
      # @since 2.5.0
      OP_CODE = 2013

      KNOWN_FLAGS = {
        checksum_present: true,
        more_to_come: true,
        exhaust_allowed: true,
      }

      # Available flags for a OP_MSG message.
      FLAGS = Array.new(16).tap do |arr|
        arr[0] = :checksum_present
        arr[1] = :more_to_come
        arr[16] = :exhaust_allowed
      end.freeze

      # @!attribute
      # @return [Array] The flags for this message.
      field :flags, BitVector.new(FLAGS)

      # The sections that will be serialized, or the documents that have been
      # deserialized.
      #
      # Usually the sections contain OP_MSG-compliant sections derived
      # from @main_document and @sequences. The information in @main_document
      # and @sequences is duplicated in the sections.
      #
      # When deserializing Msg instances, sections temporarily is an array
      # of documents returned in the type 0 section of the OP_MSG wire
      # protocol message. #fix_after_deserialization method mutates this
      # object to have sections, @main_document and @sequences be what
      # they would have been had the Msg instance been constructed using
      # the constructor (rather than having been deserialized).
      #
      # @return [ Array | Array ] The sections of
      #   payload type 1 or 0.
      # @api private
      field :sections, Sections

      Registry.register(OP_CODE, self)
    end
  end
end
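A construction sketch for Msg (illustrative only; the command and names are made up). The payload type 1 section is merged back into the command for monitoring, and internal keys such as '$db' are reordered to the end of the logged command:

require 'mongo'

section = Mongo::Protocol::Msg::Section1.new('documents', [ { 'x' => 1 } ])
msg = Mongo::Protocol::Msg.new([], {}, { 'insert' => 'users', '$db' => 'test' }, section)
msg.payload[:command_name]         # => "insert"
msg.payload[:command]['documents'] # => [{ 'x' => 1 }]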
# message.payload # # @return [ BSON::Document ] The event payload. # # @since 2.1.0 def payload BSON::Document.new( command_name: upconverter.command_name, database_name: @database, command: upconverter.command, request_id: request_id ) end # Query messages require replies from the database. # # @example Does the message require a reply? # message.replyable? # # @return [ true ] Always true for queries. # # @since 2.0.0 def replyable? true end # Compress the message, if the command being sent permits compression. # Otherwise returns self. # # @param [ String, Symbol ] compressor The compressor to use. # @param [ Integer ] zlib_compression_level The zlib compression level to use. # # @return [ Message ] A Protocol::Compressed message or self, # depending on whether this message can be compressed. # # @since 2.5.0 # @api private def maybe_compress(compressor, zlib_compression_level = nil) compress_if_possible(selector.keys.first, compressor, zlib_compression_level) end # Serializes message into bytes that can be sent on the wire. # # @param [ BSON::ByteBuffer ] buffer where the message should be inserted. # @param [ Integer ] max_bson_size The maximum bson object size. # # @return [ BSON::ByteBuffer ] buffer containing the serialized message. def serialize(buffer = BSON::ByteBuffer.new, max_bson_size = nil, bson_overhead = nil) validate_document_size!(max_bson_size) super end protected attr_reader :upconverter private # Validate that the documents in this message are all smaller than the # maxBsonObjectSize. If not, raise an exception. def validate_document_size!(max_bson_size) max_bson_size ||= Mongo::Server::ConnectionBase::DEFAULT_MAX_BSON_OBJECT_SIZE documents = if @selector.key?(:documents) @selector[:documents] elsif @selector.key?(:deletes) @selector[:deletes] elsif @selector.key?(:updates) @selector[:updates] else [] end contains_too_large_document = documents.any? do |doc| doc.to_bson.length > max_bson_size end if contains_too_large_document raise Error::MaxBSONSize.new('The document exceeds maximum allowed BSON object size after serialization') end end # The operation code required to specify a Query message. # @return [Fixnum] the operation code. # # @since 2.5.0 OP_CODE = 2004 def determine_limit [ @options[:limit] || @options[:batch_size], @options[:batch_size] || @options[:limit] ].min || 0 end # Available flags for a Query message. # @api private FLAGS = [ :reserved, :tailable_cursor, :secondary_ok, :oplog_replay, :no_cursor_timeout, :await_data, :exhaust, :partial ] # @!attribute # @return [Array] The flags for this query message. field :flags, BitVector.new(FLAGS) # @!attribute # @return [String] The namespace for this query message. field :namespace, CString # @!attribute # @return [Integer] The number of documents to skip. field :skip, Int32 # @!attribute # @return [Integer] The number of documents to return. field :limit, Int32 # @!attribute # @return [Hash] The query selector. field :selector, Document # @!attribute # @return [Hash] The projection. field :project, Document # Converts legacy query messages to the appropriare OP_COMMAND style # message. # # @since 2.1.0 class Upconverter # Mappings of the options to the find command options. 
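        # @example End-to-end upconversion using these mappings (hypothetical
        #   values; a sketch only, assuming BSON::Document filter/options as
        #   the constructor requires):
        #   up = Upconverter.new('users', BSON::Document.new('name' => 'Tyler'),
        #     BSON::Document.new('limit' => 10), [])
        #   up.command
        #   # => { "find" => "users", "filter" => { "name" => "Tyler" }, "limit" => 10 }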
# # @since 2.1.0 OPTION_MAPPINGS = { :project => 'projection', :skip => 'skip', :limit => 'limit', :batch_size => 'batchSize' }.freeze SPECIAL_FIELD_MAPPINGS = { :$readPreference => '$readPreference', :$orderby => 'sort', :$hint => 'hint', :$comment => 'comment', :$returnKey => 'returnKey', :$snapshot => 'snapshot', :$maxScan => 'maxScan', :$max => 'max', :$min => 'min', :$maxTimeMS => 'maxTimeMS', :$showDiskLoc => 'showRecordId', :$explain => 'explain' }.freeze # Mapping of flags to find command options. # # @since 2.1.0 FLAG_MAPPINGS = { :tailable_cursor => 'tailable', :oplog_replay => 'oplogReplay', :no_cursor_timeout => 'noCursorTimeout', :await_data => 'awaitData', :partial => 'allowPartialResults' }.freeze # @return [ String ] collection The name of the collection. attr_reader :collection # @return [ BSON::Document, Hash ] filter The query filter or command. attr_reader :filter # @return [ BSON::Document, Hash ] options The options. attr_reader :options # @return [ Array ] flags The flags. attr_reader :flags # Instantiate the upconverter. # # @example Instantiate the upconverter. # Upconverter.new('users', { name: 'test' }, { skip: 10 }) # # @param [ String ] collection The name of the collection. # @param [ BSON::Document, Hash ] filter The filter or command. # @param [ BSON::Document, Hash ] options The options. # @param [ Array ] flags The flags. # # @since 2.1.0 def initialize(collection, filter, options, flags) # Although the docstring claims both hashes and BSON::Documents # are acceptable, this class expects the filter and options to # contain symbol keys which isn't what the operation layer produces. unless BSON::Document === filter raise ArgumentError, 'Filter must provide indifferent access' end unless BSON::Document === options raise ArgumentError, 'Options must provide indifferent access' end @collection = collection @filter = filter @options = options @flags = flags end # Get the upconverted command. # # @example Get the command. # upconverter.command # # @return [ BSON::Document ] The upconverted command. # # @since 2.1.0 def command command? ? op_command : find_command end # Get the name of the command. If the collection is $cmd then it's the # first key in the filter, otherwise it's a find. # # @example Get the command name. # upconverter.command_name # # @return [ String ] The command name. # # @since 2.1.0 def command_name ((filter[:$query] || !command?) ? :find : filter.keys.first).to_s end private def command? collection == Database::COMMAND end def query_filter filter[:$query] || filter end def op_command document = BSON::Document.new query_filter.each do |field, value| document.store(field.to_s, value) end document end def find_command document = BSON::Document.new( find: collection, filter: query_filter, ) OPTION_MAPPINGS.each do |legacy, option| document.store(option, options[legacy]) unless options[legacy].nil? end if Lint.enabled? filter.each do |k, v| unless String === k raise Error::LintError, "All keys in filter must be strings: #{filter.inspect}" end end end Lint.validate_camel_case_read_preference(filter['readPreference']) SPECIAL_FIELD_MAPPINGS.each do |special, normal| unless (v = filter[special]).nil? 
document.store(normal, v) end end FLAG_MAPPINGS.each do |legacy, flag| document.store(flag, true) if flags.include?(legacy) end document end end Registry.register(OP_CODE, self) end end end mongo-ruby-driver-2.21.3/lib/mongo/protocol/registry.rb000066400000000000000000000042501505113246500231140ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2009-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Protocol # Provides a registry for looking up a message class based on op code. # # @since 2.5.0 module Registry extend self # A Mapping of all the op codes to their corresponding Ruby classes. # # @since 2.5.0 MAPPINGS = {} # Get the class for the given op code and raise an error if it's not found. # # @example Get the type for the op code. # Mongo::Protocol::Registry.get(1) # # @return [ Class ] The corresponding Ruby class for the message type. # # @since 2.5.0 def get(op_code, message = nil) if type = MAPPINGS[op_code] type else handle_unsupported_op_code!(op_code) end end # Register the Ruby type for the corresponding op code. # # @example Register the op code. # Mongo::Protocol::Registry.register(1, Reply) # # @param [ Fixnum ] op_code The op code. # @param [ Class ] type The class the op code maps to. # # @return [ Class ] The class. # # @since 2.5.0 def register(op_code, type) MAPPINGS.store(op_code, type) define_type_reader(type) end private def define_type_reader(type) type.module_eval <<-MOD def op_code; OP_CODE; end MOD end def handle_unsupported_op_code!(op_code) message = "Detected unknown message type with op code: #{op_code}." raise Error::UnsupportedMessageType.new(message) end end end end mongo-ruby-driver-2.21.3/lib/mongo/protocol/reply.rb000066400000000000000000000130051505113246500223750ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Protocol # The MongoDB wire protocol message representing a reply # # @example # socket = TCPSocket.new('localhost', 27017) # query = Protocol::Query.new('xgen', 'users', {:name => 'Tyler'}) # socket.write(query) # reply = Protocol::Reply::deserialize(socket) # # @api semipublic class Reply < Message # Determine if the reply had a query failure flag. # # @example Did the reply have a query failure. # reply.query_failure? # # @return [ true, false ] If the query failed. # # @since 2.0.5 def query_failure? 
flags.include?(:query_failure) end # Determine if the reply had a cursor not found flag. # # @example Did the reply have a cursor not found flag. # reply.cursor_not_found? # # @return [ true, false ] If the query cursor was not found. # # @since 2.2.3 def cursor_not_found? flags.include?(:cursor_not_found) end # Return the event payload for monitoring. # # @example Return the event payload. # message.payload # # @return [ BSON::Document ] The event payload. # # @since 2.1.0 def payload BSON::Document.new( reply: upconverter.command, request_id: request_id ) end private def upconverter @upconverter ||= Upconverter.new(documents, cursor_id, starting_from) end # The operation code required to specify a Reply message. # @return [Fixnum] the operation code. # # @since 2.5.0 OP_CODE = 1 # Available flags for a Reply message. FLAGS = [ :cursor_not_found, :query_failure, :shard_config_stale, :await_capable ] public # @!attribute # @return [Array] The flags for this reply. # # Supported flags: +:cursor_not_found+, +:query_failure+, # +:shard_config_stale+, +:await_capable+ field :flags, BitVector.new(FLAGS) # @!attribute # @return [Fixnum] The cursor id for this response. Will be zero # if there are no additional results. field :cursor_id, Int64 # @!attribute # @return [Fixnum] The starting position of the cursor for this Reply. field :starting_from, Int32 # @!attribute # @return [Fixnum] Number of documents in this Reply. field :number_returned, Int32 # @!attribute # @return [Array] The documents in this Reply. field :documents, Document, :@number_returned # Upconverts legacy replies to new op command replies. # # @since 2.1.0 class Upconverter # Next batch constant. # # @since 2.1.0 NEXT_BATCH = 'nextBatch'.freeze # First batch constant. # # @since 2.1.0 FIRST_BATCH = 'firstBatch'.freeze # Cursor field constant. # # @since 2.1.0 CURSOR = 'cursor'.freeze # Id field constant. # # @since 2.1.0 ID = 'id'.freeze # Initialize the new upconverter. # # @example Create the upconverter. # Upconverter.new(docs, 1, 3) # # @param [ Array ] documents The documents. # @param [ Integer ] cursor_id The cursor id. # @param [ Integer ] starting_from The starting position. # # @since 2.1.0 def initialize(documents, cursor_id, starting_from) @documents = documents @cursor_id = cursor_id @starting_from = starting_from end # @return [ Array ] documents The documents. attr_reader :documents # @return [ Integer ] cursor_id The cursor id. attr_reader :cursor_id # @return [ Integer ] starting_from The starting point in the cursor. attr_reader :starting_from # Get the upconverted command. # # @example Get the command. # upconverter.command # # @return [ BSON::Document ] The command. # # @since 2.1.0 def command command? ? op_command : find_command end private def batch_field starting_from > 0 ? NEXT_BATCH : FIRST_BATCH end def command? !documents.empty? && documents.first.key?(Operation::Result::OK) end def find_command document = BSON::Document.new cursor_document = BSON::Document.new cursor_document.store(ID, cursor_id) cursor_document.store(batch_field, documents) document.store(Operation::Result::OK, 1) document.store(CURSOR, cursor_document) document end def op_command documents.first end end Registry.register(OP_CODE, self) end end end mongo-ruby-driver-2.21.3/lib/mongo/protocol/serializers.rb000066400000000000000000000421621505113246500236040ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. 
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

module Mongo
  module Protocol

    # Container for various serialization strategies
    #
    # Each strategy must have a serialization method named +serialize+
    # and a deserialization method named +deserialize+
    #
    # Serialize methods must take buffer and value arguments and
    # serialize the value into the buffer
    #
    # Deserialize methods must take an IO stream argument and
    # deserialize the value from the stream of bytes
    #
    # @api private
    module Serializers

      private

      ZERO = 0.freeze
      NULL = 0.chr.freeze
      INT32_PACK = 'l<'.freeze
      INT64_PACK = 'q<'.freeze
      HEADER_PACK = 'l<l<l<l<'.freeze

      # MongoDB wire protocol serialization strategy for message headers.
      #
      # Serializes and de-serializes four 32-bit integers consisting of
      # the length of the message, the request id, the response id, and
      # the op code for the operation.
      module Header

        # Serializes the header value into the buffer
        #
        # @param buffer [ String ] Buffer to receive the serialized value.
        # @param value [ Array<Fixnum> ] The four header integers.
        #
        # @return [ String ] Buffer with serialized value.
        def self.serialize(buffer, value, validating_keys = nil)
          buffer.put_bytes(value.pack(HEADER_PACK))
        end

        # Deserializes the header value from the IO stream
        #
        # @param [ String ] buffer Buffer containing the message header.
        # @param [ Hash ] options This method currently accepts no options.
        #
        # @return [ Array<Fixnum> ] Array consisting of the deserialized
        #   length, request id, response id, and op code.
        def self.deserialize(buffer, options = {})
          buffer.get_bytes(16).unpack(HEADER_PACK)
        end
      end

      # MongoDB wire protocol serialization strategy for C style strings.
      #
      # Serializes and de-serializes C style strings (null terminated).
      module CString

        # Serializes a C style string into the buffer
        #
        # @param buffer [ String ] Buffer to receive the serialized CString.
        # @param value [ String ] The string to be serialized.
        #
        # @return [ String ] Buffer with serialized value.
        def self.serialize(buffer, value, validating_keys = nil)
          buffer.put_cstring(value)
        end
      end

      # MongoDB wire protocol serialization strategy for 32-bit Zero.
      #
      # Serializes and de-serializes one 32-bit Zero.
      module Zero

        # Serializes a 32-bit Zero into the buffer
        #
        # @param buffer [ String ] Buffer to receive the serialized Zero.
        # @param value [ Fixnum ] Ignored value.
        #
        # @return [ String ] Buffer with serialized value.
        def self.serialize(buffer, value, validating_keys = nil)
          buffer.put_int32(ZERO)
        end
      end

      # MongoDB wire protocol serialization strategy for 32-bit integers.
      #
      # Serializes and de-serializes one 32-bit integer.
      module Int32

        # Serializes a number to a 32-bit integer
        #
        # @param buffer [ String ] Buffer to receive the serialized Int32.
        # @param value [ Integer | BSON::Int32 ] 32-bit integer to be serialized.
        #
        # @return [ String ] Buffer with serialized value.
        def self.serialize(buffer, value, validating_keys = nil)
          if value.is_a?(BSON::Int32)
            if value.respond_to?(:value)
              # bson-ruby >= 4.6.0
              value = value.value
            else
              value = value.instance_variable_get('@integer')
            end
          end
          buffer.put_int32(value)
        end

        # Deserializes a 32-bit Fixnum from the IO stream
        #
        # @param [ String ] buffer Buffer containing the 32-bit integer
        # @param [ Hash ] options This method currently accepts no options.
        #
        # @return [ Fixnum ] Deserialized Int32
        def self.deserialize(buffer, options = {})
          buffer.get_int32
        end
      end

      # MongoDB wire protocol serialization strategy for 64-bit integers.
      #
      # Serializes and de-serializes one 64-bit integer.
      module Int64

        # Serializes a number to a 64-bit integer
        #
        # @param buffer [ String ] Buffer to receive the serialized Int64.
        # @param value [ Integer | BSON::Int64 ] 64-bit integer to be serialized.
        #
        # @return [ String ] Buffer with serialized value.
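        # @example Hypothetical round-trip (sketch; assumes the written bytes
        #   are re-read through a fresh read-mode buffer):
        #   buffer = BSON::ByteBuffer.new
        #   Mongo::Protocol::Serializers::Int64.serialize(buffer, 2**40)
        #   read = BSON::ByteBuffer.new(buffer.to_s)
        #   Mongo::Protocol::Serializers::Int64.deserialize(read) # => 1099511627776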
def self.serialize(buffer, value, validating_keys = nil) if value.is_a?(BSON::Int64) if value.respond_to?(:value) # bson-ruby >= 4.6.0 value = value.value else value = value.instance_variable_get('@integer') end end buffer.put_int64(value) end # Deserializes a 64-bit Fixnum from the IO stream # # @param [ String ] buffer Buffer containing the 64-bit integer. # @param [ Hash ] options This method currently accepts no options. # # @return [Fixnum] Deserialized Int64. def self.deserialize(buffer, options = {}) buffer.get_int64 end end # MongoDB wire protocol serialization strategy for a Section of OP_MSG. # # Serializes and de-serializes a list of Sections. # # @since 2.5.0 module Sections # Serializes the sections of an OP_MSG, payload type 0 or 1. # # @param [ BSON::ByteBuffer ] buffer Buffer to receive the serialized Sections. # @param [ Array ] value The sections to be serialized. # @param [ Fixnum ] max_bson_size The max bson size of documents in the sections. # @param [ true, false ] validating_keys Whether to validate document keys. # This option is deprecated and will not be used. It will removed in version 3.0. # # @return [ BSON::ByteBuffer ] Buffer with serialized value. # # @since 2.5.0 def self.serialize(buffer, value, max_bson_size = nil, validating_keys = nil) value.each do |section| case section[:type] when PayloadZero::TYPE PayloadZero.serialize(buffer, section[:payload], max_bson_size) when nil PayloadZero.serialize(buffer, section[:payload], max_bson_size) when PayloadOne::TYPE PayloadOne.serialize(buffer, section[:payload], max_bson_size) else raise Error::UnknownPayloadType.new(section[:type]) end end end # Deserializes a section of an OP_MSG from the IO stream. # # @param [ BSON::ByteBuffer ] buffer Buffer containing the sections. # @param [ Hash ] options # # @option options [ Boolean ] :deserialize_as_bson Whether to perform # section deserialization using BSON types instead of native Ruby types # wherever possible. # # @return [ Array ] Deserialized sections. # # @since 2.5.0 def self.deserialize(buffer, options = {}) end_length = (@flag_bits & Msg::FLAGS.index(:checksum_present)) == 1 ? 32 : 0 sections = [] until buffer.length == end_length case byte = buffer.get_byte when PayloadZero::TYPE_BYTE sections << PayloadZero.deserialize(buffer, options) when PayloadOne::TYPE_BYTE sections += PayloadOne.deserialize(buffer, options) else raise Error::UnknownPayloadType.new(byte) end end sections end # Whether there can be a size limit on this type after serialization. # # @return [ true ] Documents can be size limited upon serialization. # # @since 2.5.0 def self.size_limited? true end # MongoDB wire protocol serialization strategy for a payload 0 type Section of OP_MSG. # # @since 2.5.0 module PayloadZero # The byte identifier for this payload type. # # @since 2.5.0 TYPE = 0x0 # The byte corresponding to this payload type. # # @since 2.5.0 TYPE_BYTE = TYPE.chr.force_encoding(BSON::BINARY).freeze # Serializes a section of an OP_MSG, payload type 0. # # @param [ BSON::ByteBuffer ] buffer Buffer to receive the serialized Sections. # @param [ BSON::Document, Hash ] value The object to serialize. # @param [ Fixnum ] max_bson_size The max bson size of documents in the section. # @param [ true, false ] validating_keys Whether to validate document keys. # This option is deprecated and will not be used. It will removed in version 3.0. # # @return [ BSON::ByteBuffer ] Buffer with serialized value. 
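          # @example Hypothetical usage (sketch):
          #   buffer = BSON::ByteBuffer.new
          #   Sections::PayloadZero.serialize(buffer, { ping: 1, '$db' => 'admin' })
          #   # buffer now holds the type byte 0x0 followed by one BSON document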
# # @since 2.5.0 def self.serialize(buffer, value, max_bson_size = nil, validating_keys = nil) buffer.put_byte(TYPE_BYTE) Serializers::Document.serialize(buffer, value, max_bson_size) end # Deserializes a section of payload type 0 of an OP_MSG from the IO stream. # # @param [ BSON::ByteBuffer ] buffer Buffer containing the sections. # @param [ Hash ] options # # @option options [ Boolean ] :deserialize_as_bson Whether to perform # section deserialization using BSON types instead of native Ruby types # wherever possible. # # @return [ Array ] Deserialized section. # # @since 2.5.0 def self.deserialize(buffer, options = {}) mode = options[:deserialize_as_bson] ? :bson : nil BSON::Document.from_bson(buffer, **{ mode: mode }) end end # MongoDB wire protocol serialization strategy for a payload 1 type Section of OP_MSG. # # @since 2.5.0 module PayloadOne # The byte identifier for this payload type. # # @since 2.5.0 TYPE = 0x1 # The byte corresponding to this payload type. # # @since 2.5.0 TYPE_BYTE = TYPE.chr.force_encoding(BSON::BINARY).freeze # Serializes a section of an OP_MSG, payload type 1. # # @param [ BSON::ByteBuffer ] buffer Buffer to receive the serialized Sections. # @param [ BSON::Document, Hash ] value The object to serialize. # @param [ Fixnum ] max_bson_size The max bson size of documents in the section. # @param [ true, false ] validating_keys Whether to validate document keys. # This option is deprecated and will not be used. It will removed in version 3.0. # # @return [ BSON::ByteBuffer ] Buffer with serialized value. # # @since 2.5.0 def self.serialize(buffer, value, max_bson_size = nil, validating_keys = nil) buffer.put_byte(TYPE_BYTE) start = buffer.length buffer.put_int32(0) # hold for size buffer.put_cstring(value[:identifier]) value[:sequence].each do |document| Document.serialize(buffer, document, max_bson_size) end buffer.replace_int32(start, buffer.length - start) end # Deserializes a section of payload type 1 of an OP_MSG from the IO stream. # # @param [ BSON::ByteBuffer ] buffer Buffer containing the sections. # # @return [ Array ] Deserialized section. # # @since 2.5.0 def self.deserialize(buffer) raise NotImplementedError start_size = buffer.length section_size = buffer.get_int32 # get the size end_size = start_size - section_size buffer.get_cstring # get the identifier documents = [] until buffer.length == end_size documents << BSON::Document.from_bson(buffer) end documents end end end # MongoDB wire protocol serialization strategy for a BSON Document. # # Serializes and de-serializes a single document. module Document # Serializes a document into the buffer # # @param buffer [ String ] Buffer to receive the BSON encoded document. # @param value [ Hash ] Document to serialize as BSON. # # @return [ String ] Buffer with serialized value. def self.serialize(buffer, value, max_bson_size = nil, validating_keys = nil) start_size = buffer.length value.to_bson(buffer) serialized_size = buffer.length - start_size if max_bson_size && serialized_size > max_bson_size raise Error::MaxBSONSize, "The document exceeds maximum allowed BSON object size after serialization. Serialized size: #{serialized_size} bytes, maximum allowed size: #{max_bson_size} bytes" end end # Deserializes a document from the IO stream # # @param [ String ] buffer Buffer containing the BSON encoded document. # @param [ Hash ] options # # @option options [ Boolean ] :deserialize_as_bson Whether to perform # section deserialization using BSON types instead of native Ruby types # wherever possible. 
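          # @example Shape of the +value+ argument (illustrative sketch with
          #   hypothetical documents):
          #   Sections::PayloadOne.serialize(buffer,
          #     identifier: 'documents', sequence: [{ 'x' => 1 }, { 'x' => 2 }])
          #   # writes the type byte 0x1, the section size, the identifier
          #   # cstring, and then each document in the sequence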
# # @return [ Hash ] The decoded BSON document. def self.deserialize(buffer, options = {}) mode = options[:deserialize_as_bson] ? :bson : nil BSON::Document.from_bson(buffer, **{ mode: mode }) end # Whether there can be a size limit on this type after serialization. # # @return [ true ] Documents can be size limited upon serialization. # # @since 2.0.0 def self.size_limited? true end end # MongoDB wire protocol serialization strategy for a single byte. # # Writes and fetches a single byte from the byte buffer. module Byte # Writes a byte into the buffer. # # @param [ BSON::ByteBuffer ] buffer Buffer to receive the single byte. # @param [ String ] value The byte to write to the buffer. # @param [ true, false ] validating_keys Whether to validate keys. # This option is deprecated and will not be used. It will removed in version 3.0. # # @return [ BSON::ByteBuffer ] Buffer with serialized value. # # @since 2.5.0 def self.serialize(buffer, value, validating_keys = nil) buffer.put_byte(value) end # Deserializes a byte from the byte buffer. # # @param [ BSON::ByteBuffer ] buffer Buffer containing the value to read. # @param [ Hash ] options This method currently accepts no options. # # @return [ String ] The byte. # # @since 2.5.0 def self.deserialize(buffer, options = {}) buffer.get_byte end end # MongoDB wire protocol serialization strategy for n bytes. # # Writes and fetches bytes from the byte buffer. module Bytes # Writes bytes into the buffer. # # @param [ BSON::ByteBuffer ] buffer Buffer to receive the bytes. # @param [ String ] value The bytes to write to the buffer. # @param [ true, false ] validating_keys Whether to validate keys. # This option is deprecated and will not be used. It will removed in version 3.0. # # @return [ BSON::ByteBuffer ] Buffer with serialized value. # # @since 2.5.0 def self.serialize(buffer, value, validating_keys = nil) buffer.put_bytes(value) end # Deserializes bytes from the byte buffer. # # @param [ BSON::ByteBuffer ] buffer Buffer containing the value to read. # @param [ Hash ] options The method options. # # @option options [ Integer ] num_bytes Number of bytes to read. # # @return [ String ] The bytes. # # @since 2.5.0 def self.deserialize(buffer, options = {}) num_bytes = options[:num_bytes] buffer.get_bytes(num_bytes || buffer.length) end end end end end mongo-ruby-driver-2.21.3/lib/mongo/query_cache.rb000066400000000000000000000236731505113246500217050ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2020 MongoDB, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module QueryCache class << self # Set whether the cache is enabled. # # @example Set if the cache is enabled. # QueryCache.enabled = true # # @param [ true, false ] value The enabled value. def enabled=(value) Thread.current["[mongo]:query_cache:enabled"] = value end # Is the query cache enabled on the current thread? # # @example Is the query cache enabled? # QueryCache.enabled? # # @return [ true, false ] If the cache is enabled. def enabled? 
!!Thread.current["[mongo]:query_cache:enabled"] end # Execute the block while using the query cache. # # @example Execute with the cache. # QueryCache.cache { collection.find } # # @return [ Object ] The result of the block. def cache enabled = enabled? self.enabled = true begin yield ensure self.enabled = enabled end end # Execute the block with the query cache disabled. # # @example Execute without the cache. # QueryCache.uncached { collection.find } # # @return [ Object ] The result of the block. def uncached enabled = enabled? self.enabled = false begin yield ensure self.enabled = enabled end end # Get the cached queries. # # @example Get the cached queries from the current thread. # QueryCache.cache_table # # @return [ Hash ] The hash of cached queries. private def cache_table Thread.current["[mongo]:query_cache"] ||= {} end # Clear the query cache. # # @example Clear the cache. # QueryCache.clear # # @return [ nil ] Always nil. def clear Thread.current["[mongo]:query_cache"] = nil end # Clear the section of the query cache storing cursors with results # from this namespace. # # @param [ String ] namespace The namespace to be cleared, in the format # "database.collection". # # @return [ nil ] Always nil. # # @api private def clear_namespace(namespace) cache_table.delete(namespace) # The nil key is where cursors are stored that could potentially read from # multiple collections. This key should be cleared on every write operation # to prevent returning stale data. cache_table.delete(nil) nil end # Store a CachingCursor instance in the query cache associated with the # specified query options. # # @param [ Mongo::CachingCursor ] cursor The CachingCursor instance to store. # # @option opts [ String | nil ] :namespace The namespace of the query, # in the format "database_name.collection_name". # @option opts [ Array, Hash ] :selector The selector passed to the query. # For most queries, this will be a Hash, but for aggregations, this # will be an Array representing the aggregation pipeline. May not be nil. # @option opts [ Integer | nil ] :skip The skip value of the query. # @option opts [ Hash | nil ] :sort The order of the query results # (e.g. { name: -1 }). # @option opts [ Integer | nil ] :limit The limit value of the query. # @option opts [ Hash | nil ] :projection The projection of the query # results (e.g. { name: 1 }). # @option opts [ Hash | nil ] :collation The collation of the query # (e.g. { "locale" => "fr_CA" }). # @option opts [ Hash | nil ] :read_concern The read concern of the query # (e.g. { level: :majority }). # @option opts [ Hash | nil ] :read_preference The read preference of # the query (e.g. { mode: :secondary }). # @option opts [ Boolean | nil ] :multi_collection Whether the query # results could potentially come from multiple collections. When true, # these results will be stored under the nil namespace key and cleared # on every write command. # # @return [ true ] Always true. # # @api private def set(cursor, **opts) _cache_key = cache_key(**opts) _namespace_key = namespace_key(**opts) cache_table[_namespace_key] ||= {} cache_table[_namespace_key][_cache_key] = cursor true end # For the given query options, retrieve a cached cursor that can be used # to obtain the correct query results, if one exists in the cache. # # @option opts [ String | nil ] :namespace The namespace of the query, # in the format "database_name.collection_name". # @option opts [ Array, Hash ] :selector The selector passed to the query. 
# For most queries, this will be a Hash, but for aggregations, this # will be an Array representing the aggregation pipeline. May not be nil. # @option opts [ Integer | nil ] :skip The skip value of the query. # @option opts [ Hash | nil ] :sort The order of the query results # (e.g. { name: -1 }). # @option opts [ Integer | nil ] :limit The limit value of the query. # @option opts [ Hash | nil ] :projection The projection of the query # results (e.g. { name: 1 }). # @option opts [ Hash | nil ] :collation The collation of the query # (e.g. { "locale" => "fr_CA" }). # @option opts [ Hash | nil ] :read_concern The read concern of the query # (e.g. { level: :majority }). # @option opts [ Hash | nil ] :read_preference The read preference of # the query (e.g. { mode: :secondary }). # @option opts [ Boolean | nil ] :multi_collection Whether the query # results could potentially come from multiple collections. When true, # these results will be stored under the nil namespace key and cleared # on every write command. # # @return [ Mongo::CachingCursor | nil ] Returns a CachingCursor if one # exists in the query cache, otherwise returns nil. # # @api private def get(**opts) limit = normalized_limit(opts[:limit]) _namespace_key = namespace_key(**opts) _cache_key = cache_key(**opts) namespace_hash = cache_table[_namespace_key] return nil unless namespace_hash caching_cursor = namespace_hash[_cache_key] return nil unless caching_cursor caching_cursor_limit = normalized_limit(caching_cursor.view.limit) # There are two scenarios in which a caching cursor could fulfill the # query: # 1. The query has a limit, and the stored cursor has no limit or # a larger limit. # 2. The query has no limit and the stored cursor has no limit. # # Otherwise, return nil because the stored cursor will not satisfy # the query. if limit && (caching_cursor_limit.nil? || caching_cursor_limit >= limit) caching_cursor elsif limit.nil? && caching_cursor_limit.nil? caching_cursor else nil end end def normalized_limit(limit) return nil unless limit # For the purposes of caching, a limit of 0 means no limit, as mongo treats it as such. return nil if limit == 0 # For the purposes of caching, a negative limit is the same as as a positive limit. limit.abs end private def cache_key(**opts) unless opts[:namespace] raise ArgumentError.new("Cannot generate cache key without namespace") end unless opts[:selector] raise ArgumentError.new("Cannot generate cache key without selector") end [ opts[:namespace], opts[:selector], opts[:skip], opts[:sort], opts[:projection], opts[:collation], opts[:read_concern], opts[:read_preference] ] end # If the cached results can come from multiple collections, store this # cursor under the nil namespace to be cleared on every write operation. # Otherwise, store it under the specified namespace. def namespace_key(**opts) if opts[:multi_collection] nil else opts[:namespace] end end end # Rack middleware that activates the query cache for each request. class Middleware # Instantiate the middleware. # # @example Create the new middleware. # Middleware.new(app) # # @param [ Object ] app The rack application stack. def initialize(app) @app = app end # Enable query cache and execute the request. # # @example Execute the request. # middleware.call(env) # # @param [ Object ] env The environment. # # @return [ Object ] The result of the call. def call(env) QueryCache.cache do @app.call(env) end ensure QueryCache.clear end # ActiveJob middleware that activates the query cache for each job. 
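      # @example Hypothetical usage in a job class (sketch; +ApplicationJob+
      #   stands in for any ActiveJob base class):
      #   class SyncJob < ApplicationJob
      #     include Mongo::QueryCache::Middleware::ActiveJob
      #   end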
module ActiveJob def self.included(base) base.class_eval do around_perform do |_job, block| QueryCache.cache do block.call end ensure QueryCache.clear end end end end end end end mongo-ruby-driver-2.21.3/lib/mongo/retryable.rb000066400000000000000000000044711505113246500214010ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2015-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'mongo/retryable/read_worker' require 'mongo/retryable/write_worker' module Mongo # Defines basic behavior around retrying operations. # # @since 2.1.0 module Retryable extend Forwardable # Delegate the public read_with_retry methods to the read_worker def_delegators :read_worker, :read_with_retry_cursor, :read_with_retry, :read_with_one_retry # Delegate the public write_with_retry methods to the write_worker def_delegators :write_worker, :write_with_retry, :nro_write_with_retry # This is a separate method to make it possible for the test suite to # assert that server selection is performed during retry attempts. # # This is a public method so that it can be accessed via the read and # write worker delegates, as needed. # # @api private # # @return [ Mongo::Server ] A server matching the server preference. def select_server(cluster, server_selector, session, failed_server = nil, timeout: nil) server_selector.select_server( cluster, nil, session, deprioritized: [failed_server].compact, timeout: timeout ) end # Returns the read worker for handling retryable reads. # # @api private # # @note this is only a public method so that tests can add expectations # based on it. def read_worker @read_worker ||= ReadWorker.new(self) end # Returns the write worker for handling retryable writes. # # @api private # # @note this is only a public method so that tests can add expectations # based on it. def write_worker @write_worker ||= WriteWorker.new(self) end end end mongo-ruby-driver-2.21.3/lib/mongo/retryable/000077500000000000000000000000001505113246500210465ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/lib/mongo/retryable/base_worker.rb000066400000000000000000000073141505113246500237030ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2015-2023 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Retryable # The abstract superclass for workers employed by Mongo::Retryable. 
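    # @example How a retryable read is typically driven through these workers
    #   (illustrative sketch; +self+ is an object including Mongo::Retryable):
    #   read_with_retry(session, ServerSelector.get(mode: :primary_preferred)) do |server|
    #     # execute the read against the selected server and return its Result
    #   end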
# # @api private class BaseWorker extend Forwardable def_delegators :retryable, :client, :cluster, :select_server # @return [ Mongo::Retryable ] retryable A reference to the client object # that instatiated this worker. attr_reader :retryable # Constructs a new worker. # # @example Instantiating a new read worker # worker = Mongo::Retryable::ReadWorker.new(self) # # @example Instantiating a new write worker # worker = Mongo::Retryable::WriteWorker.new(self) # # @param [ Mongo::Retryable ] retryable The client object that is using # this worker to perform a retryable operation def initialize(retryable) @retryable = retryable end private # Indicate which exception classes that are generally retryable # when using modern retries mechanism. # # @return [ Array ] Array of exception classes that are # considered retryable. def retryable_exceptions [ Error::ConnectionPerished, Error::ServerNotUsable, Error::SocketError, Error::SocketTimeoutError, ].freeze end # Indicate which exception classes that are generally retryable # when using legacy retries mechanism. # # @return [ Array ] Array of exception classes that are # considered retryable. def legacy_retryable_exceptions [ Error::ConnectionPerished, Error::ServerNotUsable, Error::SocketError, Error::SocketTimeoutError, Error::PoolClearedError, Error::PoolPausedError, ].freeze end # Tests to see if the given exception instance is of a type that can # be retried with modern retry mechanism. # # @return [ true | false ] true if the exception is retryable. def is_retryable_exception?(e) retryable_exceptions.any? { |klass| klass === e } end # Tests to see if the given exception instance is of a type that can # be retried with legacy retry mechanism. # # @return [ true | false ] true if the exception is retryable. def is_legacy_retryable_exception?(e) legacy_retryable_exceptions.any? { |klass| klass === e } end # Logs the given deprecation warning the first time it is called for a # given key; after that, it does nothing when given the same key. def deprecation_warning(key, warning) $_deprecation_warnings ||= {} unless $_deprecation_warnings[key] $_deprecation_warnings[key] = true Logger.logger.warn(warning) end end # Log a warning so that any application slow down is immediately obvious. def log_retry(e, options = nil) message = (options || {}).fetch(:message, "Retry") Logger.logger.warn "#{message} due to: #{e.class.name}: #{e.message}" end end end end mongo-ruby-driver-2.21.3/lib/mongo/retryable/read_worker.rb000066400000000000000000000330211505113246500236760ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2015-2023 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'mongo/retryable/base_worker' module Mongo module Retryable # Implements the logic around retrying read operations. # # @api private # # @since 2.19.0 class ReadWorker < BaseWorker # Execute a read operation returning a cursor with retrying. 
# # This method performs server selection for the specified server selector # and yields to the provided block, which should execute the initial # query operation and return its result. The block will be passed the # server selected for the operation. If the block raises an exception, # and this exception corresponds to a read retryable error, and read # retries are enabled for the client, this method will perform server # selection again and yield to the block again (with potentially a # different server). If the block returns successfully, the result # of the block (which should be a Mongo::Operation::Result) is used to # construct a Mongo::Cursor object for the result set. The cursor # is then returned. # # If modern retry reads are on (which is the default), the initial read # operation will be retried once. If legacy retry reads are on, the # initial read operation will be retried zero or more times depending # on the :max_read_retries client setting, the default for which is 1. # To disable read retries, turn off modern read retries by setting # retry_reads: false and set :max_read_retries to 0 on the client. # # @api private # # @example Execute a read returning a cursor. # cursor = read_with_retry_cursor(session, server_selector, view) do |server| # # return a Mongo::Operation::Result # ... # end # # @param [ Mongo::Session ] session The session that the operation is being # run on. # @param [ Mongo::ServerSelector::Selectable ] server_selector Server # selector for the operation. # @param [ CollectionView ] view The +CollectionView+ defining the query. # @param [ Operation::Context | nil ] context the operation context to use # with the cursor. # @param [ Proc ] block The block to execute. # # @return [ Cursor ] The cursor for the result set. def read_with_retry_cursor(session, server_selector, view, context: nil, &block) read_with_retry(session, server_selector, context) do |server| result = yield server # RUBY-2367: This will be updated to allow the query cache to # cache cursors with multi-batch results. if QueryCache.enabled? && !view.collection.system_collection? CachingCursor.new(view, result, server, session: session, context: context) else Cursor.new(view, result, server, session: session, context: context) end end end # Execute a read operation with retrying. # # This method performs server selection for the specified server selector # and yields to the provided block, which should execute the initial # query operation and return its result. The block will be passed the # server selected for the operation. If the block raises an exception, # and this exception corresponds to a read retryable error, and read # retries are enabled for the client, this method will perform server # selection again and yield to the block again (with potentially a # different server). If the block returns successfully, the result # of the block is returned. # # If modern retry reads are on (which is the default), the initial read # operation will be retried once. If legacy retry reads are on, the # initial read operation will be retried zero or more times depending # on the :max_read_retries client setting, the default for which is 1. # To disable read retries, turn off modern read retries by setting # retry_reads: false and set :max_read_retries to 0 on the client. # # @api private # # @example Execute the read. # read_with_retry(session, server_selector) do |server| # ... # end # # @param [ Mongo::Session | nil ] session The session that the operation # is being run on. 
# @param [ Mongo::ServerSelector::Selectable | nil ] server_selector # Server selector for the operation. # @param [ Mongo::Operation::Context | nil ] context Context for the # read operation. # @param [ Proc ] block The block to execute. # # @return [ Result ] The result of the operation. def read_with_retry(session = nil, server_selector = nil, context = nil, &block) if session.nil? && server_selector.nil? deprecated_legacy_read_with_retry(&block) elsif session&.retry_reads? modern_read_with_retry(session, server_selector, context, &block) elsif client.max_read_retries > 0 legacy_read_with_retry(session, server_selector, context, &block) else read_without_retry(session, server_selector, &block) end end # Execute a read operation with a single retry on network errors. # # This method is used by the driver for some of the internal housekeeping # operations. Application-requested reads should use read_with_retry # rather than this method. # # @api private # # @example Execute the read. # read_with_one_retry do # ... # end # # @note This only retries read operations on socket errors. # # @param [ Hash | nil ] options Options. # # @option options [ String ] :retry_message Message to log when retrying. # # @yield Calls the provided block with no arguments # # @return [ Result ] The result of the operation. # # @since 2.2.6 def read_with_one_retry(options = nil) yield rescue *retryable_exceptions, Error::PoolError => e raise e unless e.write_retryable? retry_message = options && options[:retry_message] log_retry(e, message: retry_message) yield end private # Attempts to do a legacy read_with_retry, without either a session or # server_selector. This is a deprecated use-case, and a warning will be # issued the first time this is invoked. # # @param [ Proc ] block The block to execute. # # @return [ Result ] The result of the operation. def deprecated_legacy_read_with_retry(&block) deprecation_warning :read_with_retry, 'Legacy read_with_retry invocation - ' \ 'please update the application and/or its dependencies' # Since we don't have a session, we cannot use the modern read retries. # And we need to select a server but we don't have a server selector. # Use PrimaryPreferred which will work as long as there is a data # bearing node in the cluster; the block may select a different server # which is fine. server_selector = ServerSelector.get(mode: :primary_preferred) legacy_read_with_retry(nil, server_selector, &block) end # Attempts to do a "modern" read with retry. Only a single retry will # be attempted. # # @param [ Mongo::Session ] session The session that the operation is # being run on. # @param [ Mongo::ServerSelector::Selectable ] server_selector Server # selector for the operation. # @param [ Mongo::Operation::Context ] context Context for the # read operation. # @param [ Proc ] block The block to execute. # # @return [ Result ] The result of the operation. def modern_read_with_retry(session, server_selector, context, &block) server = select_server( cluster, server_selector, session, timeout: context&.remaining_timeout_sec ) yield server rescue *retryable_exceptions, Error::OperationFailure::Family, Auth::Unauthorized, Error::PoolError => e e.add_notes('modern retry', 'attempt 1') raise e if session.in_transaction? raise e if !is_retryable_exception?(e) && !e.write_retryable? retry_read(e, session, server_selector, context: context, failed_server: server, &block) end # Attempts to do a "legacy" read with retry. 
The operation will be # attempted multiple times, up to the client's `max_read_retries` # setting. # # @param [ Mongo::Session ] session The session that the operation is # being run on. # @param [ Mongo::ServerSelector::Selectable ] server_selector Server # selector for the operation. # @param [ Mongo::Operation::Context | nil ] context Context for the # read operation. # @param [ Proc ] block The block to execute. # # @return [ Result ] The result of the operation. def legacy_read_with_retry(session, server_selector, context = nil, &block) context&.check_timeout! attempt = attempt ? attempt + 1 : 1 yield select_server(cluster, server_selector, session) rescue *legacy_retryable_exceptions, Error::OperationFailure::Family => e e.add_notes('legacy retry', "attempt #{attempt}") if is_legacy_retryable_exception?(e) raise e if attempt > client.max_read_retries || session&.in_transaction? elsif e.retryable? && !session&.in_transaction? raise e if attempt > client.max_read_retries else raise e end log_retry(e, message: 'Legacy read retry') sleep(client.read_retry_interval) unless is_retryable_exception?(e) retry end # Attempts to do a read *without* a retry; for example, when retries have # been explicitly disabled. # # @param [ Mongo::Session ] session The session that the operation is # being run on. # @param [ Mongo::ServerSelector::Selectable ] server_selector Server # selector for the operation. # @param [ Proc ] block The block to execute. # # @return [ Result ] The result of the operation. def read_without_retry(session, server_selector, &block) server = select_server(cluster, server_selector, session) begin yield server rescue *retryable_exceptions, Error::PoolError, Error::OperationFailure::Family => e e.add_note('retries disabled') raise e end end # The retry logic of the "modern" read_with_retry implementation. # # @param [ Exception ] original_error The original error that triggered # the retry. # @param [ Mongo::Session ] session The session that the operation is # being run on. # @param [ Mongo::ServerSelector::Selectable ] server_selector Server # selector for the operation. # @param [ Mongo::Operation::Context | nil ] :context Context for the # read operation. # @param [ Mongo::Server | nil ] :failed_server The server on which the original # operation failed. # @param [ Proc ] block The block to execute. # # @return [ Result ] The result of the operation. def retry_read(original_error, session, server_selector, context: nil, failed_server: nil, &block) server = select_server_for_retry( original_error, session, server_selector, context, failed_server ) log_retry(original_error, message: 'Read retry') begin context&.check_timeout! attempt = attempt ? attempt + 1 : 2 yield server, true rescue Error::TimeoutError raise rescue *retryable_exceptions => e e.add_notes('modern retry', "attempt #{attempt}") if context&.csot? failed_server = server retry else raise e end rescue Error::OperationFailure::Family, Error::PoolError => e e.add_note('modern retry') if e.write_retryable? e.add_note("attempt #{attempt}") if context&.csot? 
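              # Descriptive aside: when CSOT (client-side operation timeout)
              # is in effect, the loop keeps retrying until the remaining
              # timeout budget is exhausted; without CSOT the failed retry is
              # raised to the caller in the else branch below.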
failed_server = server retry else raise e end else original_error.add_note("later retry failed: #{e.class}: #{e}") raise original_error end rescue Error, Error::AuthError => e e.add_note('modern retry') original_error.add_note("later retry failed: #{e.class}: #{e}") raise original_error end end def select_server_for_retry(original_error, session, server_selector, context, failed_server) select_server( cluster, server_selector, session, failed_server, timeout: context&.remaining_timeout_sec ) rescue Error, Error::AuthError => e original_error.add_note("later retry failed: #{e.class}: #{e}") raise original_error end end end end mongo-ruby-driver-2.21.3/lib/mongo/retryable/write_worker.rb000066400000000000000000000371061505113246500241250ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2015-2023 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'mongo/retryable/base_worker' module Mongo module Retryable # Implements the logic around retrying write operations. # # @api private # # @since 2.19.0 class WriteWorker < BaseWorker # Implements write retrying functionality by yielding to the passed # block one or more times. # # If the session is provided (hence, the deployment supports sessions), # and modern retry writes are enabled on the client, the modern retry # logic is invoked. Otherwise the legacy retry logic is invoked. # # If ending_transaction parameter is true, indicating that a transaction # is being committed or aborted, the operation is executed exactly once. # Note that, since transactions require sessions, this method will raise # ArgumentError if ending_transaction is true and session is nil. # # @api private # # @example Execute the write. # write_with_retry do # ... # end # # @note This only retries operations on not master failures, since it is # the only case we can be sure a partial write did not already occur. # # @param [ nil | Hash | WriteConcern::Base ] write_concern The write concern. # @param [ true | false ] ending_transaction True if the write operation is # abortTransaction or commitTransaction, false otherwise. # @param [ Context ] context The context for the operation. # @param [ Proc ] block The block to execute. # # @yieldparam [ Connection ] connection The connection through which the # write should be sent. # @yieldparam [ Integer ] txn_num Transaction number (NOT the ACID kind). # @yieldparam [ Operation::Context ] context The operation context. # # @return [ Result ] The result of the operation. # # @since 2.1.0 def write_with_retry(write_concern, ending_transaction: false, context:, &block) session = context.session ensure_valid_state!(ending_transaction, session) unless ending_transaction || retry_write_allowed?(session, write_concern) return legacy_write_with_retry(nil, context: context, &block) end # If we are here, session is not nil. A session being nil would have # failed retry_write_allowed? check. 
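        # Illustrative aside, not driver code: a caller typically invokes
        #   write_with_retry(write_concern, context: context) do |connection, txn_num, ctx|
        #     # dispatch the write over +connection+, passing txn_num when present
        #   end
        # mirroring the yield parameters documented above.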
server = select_server( cluster, ServerSelector.primary, session, timeout: context.remaining_timeout_sec ) unless ending_transaction || server.retry_writes? return legacy_write_with_retry(server, context: context, &block) end modern_write_with_retry(session, server, context, &block) end # Retryable writes wrapper for operations not supporting modern retryable # writes. # # If the driver is configured to use modern retryable writes, this method # yields to the passed block exactly once, thus not retrying any writes. # # If the driver is configured to use legacy retryable writes, this method # delegates to legacy_write_with_retry which performs write retries using # legacy logic. # # @param [ nil | Hash | WriteConcern::Base ] write_concern The write concern. # @param [ Context ] context The context for the operation. # # @yieldparam [ Connection ] connection The connection through which the # write should be sent. # @yieldparam [ nil ] txn_num nil as transaction number. # @yieldparam [ Operation::Context ] context The operation context. def nro_write_with_retry(write_concern, context:, &block) session = context.session server = select_server(cluster, ServerSelector.primary, session) options = session&.client&.options || {} if options[:retry_writes] begin server.with_connection(connection_global_id: context.connection_global_id) do |connection| yield connection, nil, context end rescue *retryable_exceptions, Error::PoolError, Error::OperationFailure::Family => e e.add_note('retries disabled') raise e end else legacy_write_with_retry(server, context: context, &block) end end # Queries whether the session and write concern support retrying writes. # # @param [ Mongo::Session ] session The session that the operation is # being run on. # @param [ nil | Hash | WriteConcern::Base ] write_concern The write # concern. # # @return [ true | false ] Whether write retries are allowed or not. def retry_write_allowed?(session, write_concern) return false unless session&.retry_writes? if write_concern.nil? true else WriteConcern.get(write_concern).acknowledged? end end private # Makes sure the state of the arguments is consistent and valid. # # @param [ true | false ] ending_transaction True if the write operation # is abortTransaction or commitTransaction, false otherwise. # @param [ nil | Mongo::Session ] session The session that the operation # is being run on (if any). def ensure_valid_state!(ending_transaction, session) if ending_transaction && !session raise ArgumentError, 'Cannot end a transaction without a session' end end # Implements legacy write retrying functionality by yielding to the passed # block one or more times. # # This method is used for operations which are not supported by modern # retryable writes, such as delete_many and update_many. # # @param [ Server ] server The server which should be used for the # operation. If not provided, the current primary will be retrieved from # the cluster. # @param [ Context ] context The context for the operation. # # @yieldparam [ Connection ] connection The connection through which the # write should be sent. # @yieldparam [ nil ] txn_num nil as transaction number. # @yieldparam [ Operation::Context ] context The operation context. # # @api private def legacy_write_with_retry(server = nil, context:) session = context.session context.check_timeout! # This is the pre-session retry logic, and is not subject to # current retryable write specifications. # In particular it does not retry on SocketError and SocketTimeoutError. 
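        # Illustrative aside: the rescue clause below gates retries on the
        # server-supplied error label, e.g.
        #   e.label?('RetryableWriteError') # => true for a retryable failure
        # rather than on the exception class alone.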
attempt = 0 begin attempt += 1 server ||= select_server( cluster, ServerSelector.primary, session, timeout: context.remaining_timeout_sec ) server.with_connection( connection_global_id: context.connection_global_id, context: context ) do |connection| # Legacy retries do not use txn_num yield connection, nil, context.dup end rescue Error::OperationFailure::Family => e e.add_note('legacy retry') e.add_note("attempt #{attempt}") server = nil if attempt > client.max_write_retries raise e end if e.label?('RetryableWriteError') log_retry(e, message: 'Legacy write retry') cluster.scan!(false) retry else raise e end end end # Implements modern write retrying functionality by yielding to the passed # block no more than twice. # # @param [ Mongo::Session ] session The session that the operation is # being run on. # @param [ Server ] server The server which should be used for the # operation. # @param [ Operation::Context ] context The context for the operation. # # @yieldparam [ Connection ] connection The connection through which the # write should be sent. # @yieldparam [ Integer ] txn_num Transaction number (NOT the ACID kind). # @yieldparam [ Operation::Context ] context The operation context. # # @return [ Result ] The result of the operation. # # @api private def modern_write_with_retry(session, server, context, &block) txn_num = nil connection_succeeded = false server.with_connection( connection_global_id: context.connection_global_id, context: context ) do |connection| connection_succeeded = true session.materialize_if_needed txn_num = session.in_transaction? ? session.txn_num : session.next_txn_num # The context needs to be duplicated here because we will be using # it later for the retry as well. yield connection, txn_num, context.dup end rescue *retryable_exceptions, Error::PoolError, Auth::Unauthorized, Error::OperationFailure::Family => e e.add_notes('modern retry', 'attempt 1') if e.is_a?(Error::OperationFailure::Family) ensure_retryable!(e) else ensure_labeled_retryable!(e, connection_succeeded, session) end # Context#with creates a new context, which is not necessary here # but the API is less prone to misuse this way. retry_write(e, txn_num, context: context.with(is_retry: true), failed_server: server, &block) end # Called after a failed write, this will retry the write no more than # once. # # @param [ Exception ] original_error The exception that triggered the # retry. # @param [ Number ] txn_num The transaction number. # @param [ Operation::Context ] context The context for the operation. # @param [ Mongo::Server ] failed_server The server on which the original # operation failed. # # @return [ Result ] The result of the operation. def retry_write(original_error, txn_num, context:, failed_server: nil, &block) context&.check_timeout! session = context.session # We do not request a scan of the cluster here, because error handling # for the error which triggered the retry should have updated the # server description and/or topology as necessary (specifically, # a socket error or a not master error should have marked the respective # server unknown). Here we just need to wait for server selection. server = select_server( cluster, ServerSelector.primary, session, failed_server, timeout: context.remaining_timeout_sec ) unless server.retry_writes? # Do not need to add "modern retry" here, it should already be on # the first exception. 
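# Sketch of the annotated error the user sees in this case (the exact
# rendering of notes is produced by Error#add_note and is shown here
# for illustration only):
#
#   Mongo::Error::OperationFailure: ... (modern retry, attempt 1,
#   did not retry because server selected for retry does not
#   support retryable writes)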
original_error.add_note('did not retry because server selected for retry does not support retryable writes') # When we want to raise the original error, we must not run the # rescue blocks below that add diagnostics because the diagnostics # added would either be redundant (e.g. modern retry note) or wrong # (e.g. "attempt 2", we are raising the exception produced in the # first attempt and haven't attempted the second time). Use the # special marker class to bypass the ordinarily applicable rescues. raise Error::RaiseOriginalError end attempt = attempt ? attempt + 1 : 2 log_retry(original_error, message: 'Write retry') server.with_connection(connection_global_id: context.connection_global_id) do |connection| yield(connection, txn_num, context) end rescue *retryable_exceptions, Error::PoolError => e maybe_fail_on_retryable(e, original_error, context, attempt) failed_server = server retry rescue Error::OperationFailure::Family => e maybe_fail_on_operation_failure(e, original_error, context, attempt) failed_server = server retry rescue Mongo::Error::TimeoutError raise rescue Error, Error::AuthError => e fail_on_other_error!(e, original_error) rescue Error::RaiseOriginalError raise original_error end # Retry writes on MMAPv1 should raise an actionable error; append actionable # information to the error message and preserve the backtrace. def raise_unsupported_error(e) new_error = Error::OperationFailure.new("#{e.class}: #{e} "\ "This MongoDB deployment does not support retryable writes. Please add "\ "retryWrites=false to your connection string or use the retry_writes: false Ruby client option") new_error.set_backtrace(e.backtrace) raise new_error end # Make sure the exception object is labeled 'RetryableWriteError'. If it # isn't, and should not be, re-raise the exception. def ensure_labeled_retryable!(e, connection_succeeded, session) if !e.label?('RetryableWriteError') # If there was an error before the connection was successfully # checked out and connected, there was no connection present to use # for adding labels. Therefore, we should check if it is retryable, # and if it is, add the label and retry it. if !connection_succeeded && !session.in_transaction? && e.write_retryable? e.add_label('RetryableWriteError') else raise e end end end # Make sure the exception object supports retryable writes. If it does, # make sure it has been appropriately labeled. If either condition fails, # raise an exception. def ensure_retryable!(e) if e.unsupported_retryable_write? raise_unsupported_error(e) elsif !e.label?('RetryableWriteError') raise e end end # Raise either e, or original_error, depending on whether e is # write_retryable. def maybe_fail_on_retryable(e, original_error, context, attempt) if e.write_retryable? e.add_notes('modern retry', "attempt #{attempt}") raise e unless context&.deadline else original_error.add_note("later retry failed: #{e.class}: #{e}") raise original_error end end # Raise either e, or original_error, depending on whether e is # appropriately labeled. def maybe_fail_on_operation_failure(e, original_error, context, attempt) e.add_note('modern retry') if e.label?('RetryableWriteError') && !e.label?('NoWritesPerformed') e.add_note("attempt #{attempt}") raise e unless context&.deadline else original_error.add_note("later retry failed: #{e.class}: #{e}") raise original_error end end # Raise the original error (after annotating). def fail_on_other_error!(e, original_error) # Do not need to add "modern retry" here, it should already be on # the first exception.
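# For example (illustrative): if the retry attempt raises
# Mongo::Error::AuthError, the caller sees the first attempt's
# exception annotated with a note of the form
# "later retry failed: Mongo::Auth::Unauthorized: ...".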
original_error.add_note("later retry failed: #{e.class}: #{e}") raise original_error end end end end mongo-ruby-driver-2.21.3/lib/mongo/search_index/000077500000000000000000000000001505113246500215115ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/lib/mongo/search_index/view.rb000066400000000000000000000207431505113246500230160ustar00rootroot00000000000000# frozen_string_literal: true module Mongo module SearchIndex # A class representing a view of search indexes. class View include Enumerable include Retryable include Collection::Helpers # @return [ Mongo::Collection ] the collection this view belongs to attr_reader :collection # @return [ nil | String ] the index id to query attr_reader :requested_index_id # @return [ nil | String ] the index name to query attr_reader :requested_index_name # @return [ Hash ] the options hash to use for the aggregate command # when querying the available indexes. attr_reader :aggregate_options # Create the new search index view. # # @param [ Collection ] collection The collection. # @param [ Hash ] options The options that configure the behavior of the view. # # @option options [ String ] :id The specific index id to query (optional) # @option options [ String ] :name The name of the specific index to query (optional) # @option options [ Hash ] :aggregate The options hash to send to the # aggregate command when querying the available indexes. def initialize(collection, options = {}) @collection = collection @requested_index_id = options[:id] @requested_index_name = options[:name] @aggregate_options = options[:aggregate] || {} return if @aggregate_options.is_a?(Hash) raise ArgumentError, "The :aggregate option must be a Hash (got a #{@aggregate_options.class})" end # Create a single search index with the given definition. If the name is # provided, the new index will be given that name. # # @param [ Hash ] definition The definition of the search index. # @param [ nil | String ] name The name to give the new search index. # # @return [ String ] the name of the new search index. def create_one(definition, name: nil, type: 'search') create_many([ { name: name, definition: definition, type: type } ]).first end # Create multiple search indexes with a single command. # # @param [ Array ] indexes The description of the indexes to # create. Each element of the list must be a hash with a definition # key, and an optional name key. # # @return [ Array ] the names of the new search indexes. def create_many(indexes) spec = spec_with(indexes: indexes.map { |v| validate_search_index!(v) }) result = Operation::CreateSearchIndexes.new(spec).execute(next_primary, context: execution_context) result.first['indexesCreated'].map { |idx| idx['name'] } end # Drop the search index with the given id, or name. One or the other must # be specified, but not both. # # @param [ String ] id the id of the index to drop # @param [ String ] name the name of the index to drop # # @return [ Mongo::Operation::Result | false ] the result of the # operation, or false if the given index does not exist. def drop_one(id: nil, name: nil) validate_id_or_name!(id, name) spec = spec_with(index_id: id, index_name: name) op = Operation::DropSearchIndex.new(spec) # per the spec: # Drivers MUST suppress NamespaceNotFound errors for the # ``dropSearchIndex`` helper. Drop operations should be idempotent. do_drop(op, nil, execution_context) end # Iterate over the search indexes. # # @param [ Proc ] block if given, each search index will be yieleded to # the block. 
# # @return [ self | Enumerator ] if a block is given, self is returned. # Otherwise, an enumerator will be returned. def each(&block) @result ||= begin spec = {}.tap do |s| s[:id] = requested_index_id if requested_index_id s[:name] = requested_index_name if requested_index_name end collection.with(read_concern: {}).aggregate( [ { '$listSearchIndexes' => spec } ], aggregate_options ) end return @result.to_enum unless block @result.each(&block) self end # Update the search index with the given id or name. One or the other # must be provided, but not both. # # @param [ Hash ] definition the definition to replace the given search # index with. # @param [ nil | String ] id the id of the search index to update # @param [ nil | String ] name the name of the search index to update # # @return [ Mongo::Operation::Result ] the result of the operation def update_one(definition, id: nil, name: nil) validate_id_or_name!(id, name) spec = spec_with(index_id: id, index_name: name, index: definition) Operation::UpdateSearchIndex.new(spec).execute(next_primary, context: execution_context) end # The following methods are to make the view act more like an array, # without having to explicitly make it an array... # Queries whether the search index enumerable is empty. # # @return [ true | false ] whether the enumerable is empty or not. def empty? count.zero? end private # A helper method for building the specification document with certain # values pre-populated. # # @param [ Hash ] extras the values to put into the specification # # @return [ Hash ] the specification document def spec_with(extras) { coll_name: collection.name, db_name: collection.database.name, }.merge(extras) end # A helper method for retrieving the primary server from the cluster. # # @return [ Mongo::Server ] the server to use def next_primary(ping = nil, session = nil) collection.cluster.next_primary(ping, session) end # A helper method for constructing a new operation context for executing # an operation. # # @return [ Mongo::Operation::Context ] the operation context def execution_context Operation::Context.new(client: collection.client) end # Validates the given id and name, ensuring that exactly one of them # is non-nil. # # @param [ nil | String ] id the id to validate # @param [ nil | String ] name the name to validate # # @raise [ ArgumentError ] if neither or both arguments are nil def validate_id_or_name!(id, name) return unless (id.nil? && name.nil?) || (!id.nil? && !name.nil?) raise ArgumentError, 'exactly one of id or name must be specified' end # Validates the given search index document, ensuring that it has no # extra keys, and that the name and definition are valid. # # @param [ Hash ] doc the document to validate # # @raise [ ArgumentError ] if the document is invalid. def validate_search_index!(doc) validate_search_index_keys!(doc.keys) validate_search_index_name!(doc[:name] || doc['name']) validate_search_index_definition!(doc[:definition] || doc['definition']) doc end # Validates the keys of a search index document, ensuring that # they are all valid. # # @param [ Array ] keys the keys of a search index document # # @raise [ ArgumentError ] if the list contains any invalid keys def validate_search_index_keys!(keys) extras = keys - [ 'name', 'definition', 'type', :name, :definition, :type ] raise ArgumentError, "invalid keys in search index creation: #{extras.inspect}" if extras.any? end # Validates the name of a search index, ensuring that it is either a # String or nil. 
# # @param [ nil | String ] name the name of a search index # # @raise [ ArgumentError ] if the name is not valid def validate_search_index_name!(name) return if name.nil? || name.is_a?(String) raise ArgumentError, "search index name must be nil or a string (got #{name.inspect})" end # Validates the definition of a search index. # # @param [ Hash ] definition the definition of a search index # # @raise [ ArgumentError ] if the definition is not valid def validate_search_index_definition!(definition) return if definition.is_a?(Hash) raise ArgumentError, "search index definition must be a Hash (got #{definition.inspect})" end end end end mongo-ruby-driver-2.21.3/lib/mongo/semaphore.rb000066400000000000000000000025141505113246500213670ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2018-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo # This is a semaphore implementation essentially encapsulating the # sample code at https://ruby-doc.org/stdlib-2.0.0/libdoc/thread/rdoc/ConditionVariable.html. # # @api private class Semaphore def initialize @lock = Mutex.new @cv = ::ConditionVariable.new end # Waits for the semaphore to be signaled up to timeout seconds. # If semaphore is not signaled, returns after timeout seconds. def wait(timeout = nil) @lock.synchronize do @cv.wait(@lock, timeout) end end def broadcast @lock.synchronize do @cv.broadcast end end def signal @lock.synchronize do @cv.signal end end end end mongo-ruby-driver-2.21.3/lib/mongo/server.rb000066400000000000000000000533041505113246500207150ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo # Represents a single server on the server side that can be standalone, part of # a replica set, or a mongos. # # @since 2.0.0 class Server extend Forwardable include Monitoring::Publishable include Event::Publisher # The default time in seconds to timeout a connection attempt. # # @since 2.4.3 CONNECT_TIMEOUT = 10.freeze # Instantiate a new server object. Will start the background refresh and # subscribe to the appropriate events. # # @api private # # @example Initialize the server. # Mongo::Server.new('127.0.0.1:27017', cluster, monitoring, listeners) # # @note Server must never be directly instantiated outside of a Cluster. # # @param [ Address ] address The host:port address to connect to. # @param [ Cluster ] cluster The cluster the server belongs to. 
# @param [ Monitoring ] monitoring The monitoring. # @param [ Event::Listeners ] event_listeners The event listeners. # @param [ Hash ] options The server options. # # @option options [ Boolean ] :monitor For internal driver use only: # whether to monitor the server after instantiating it. # @option options [ true, false ] :monitoring_io For internal driver # use only. Set to false to prevent SDAM-related I/O from being # done by this server. Note: setting this option to false will make # the server non-functional. It is intended for use in tests which # manually invoke SDAM state transitions. # @option options [ true, false ] :populator_io For internal driver # use only. Set to false to prevent the populator threads from being # created and started in the server's connection pool. It is intended # for use in tests that also turn off monitoring_io, unless the populator # is explicitly needed. If monitoring_io is off, but the populator_io # is on, the populator needs to be manually closed at the end of the # test, since a cluster without monitoring is considered not connected, # and thus will not clean up the connection pool populator threads on # close. # @option options [ true | false ] :load_balancer Whether this server # is a load balancer. # @option options [ String ] :connect The client connection mode. # # @since 2.0.0 def initialize(address, cluster, monitoring, event_listeners, options = {}) @address = address @cluster = cluster @monitoring = monitoring options = options.dup _monitor = options.delete(:monitor) @options = options.freeze @event_listeners = event_listeners @connection_id_gen = Class.new do include Id end @scan_semaphore = DistinguishingSemaphore.new @round_trip_time_calculator = RoundTripTimeCalculator.new @description = Description.new(address, {}, load_balancer: !!@options[:load_balancer], force_load_balancer: force_load_balancer?, ) @last_scan = nil @last_scan_monotime = nil unless options[:monitoring_io] == false @monitor = Monitor.new(self, event_listeners, monitoring, options.merge( app_metadata: cluster.monitor_app_metadata, push_monitor_app_metadata: cluster.push_monitor_app_metadata, heartbeat_interval: cluster.heartbeat_interval, )) unless _monitor == false start_monitoring end end @connected = true @pool_lock = Mutex.new end # @return [ String ] The configured address for the server. attr_reader :address # @return [ Cluster ] cluster The server cluster. attr_reader :cluster # @return [ nil | Monitor ] monitor The server monitor. nil if the server # was created with monitoring_io: false option. attr_reader :monitor # @return [ Hash ] The options hash. attr_reader :options # @return [ Monitoring ] monitoring The monitoring. attr_reader :monitoring # @return [ Server::Description ] description The server # description the monitor refreshes. attr_reader :description # Returns whether this server is forced to be a load balancer. # # @return [ true | false ] Whether this server is forced to be a load balancer. # # @api private def force_load_balancer? options[:connect] == :load_balanced end # @return [ Time | nil ] last_scan The time when the last server scan # completed, or nil if the server has not been scanned yet. # # @since 2.4.0 def last_scan if description && !description.config.empty? description.last_update_time else @last_scan end end # @return [ Float | nil ] last_scan_monotime The monotonic time when the last server scan # completed, or nil if the server has not been scanned yet.
# @api private def last_scan_monotime if description && !description.config.empty? description.last_update_monotime else @last_scan_monotime end end # @deprecated def heartbeat_frequency cluster.heartbeat_interval end # @deprecated alias :heartbeat_frequency_seconds :heartbeat_frequency # Performs an immediate, synchronous check of the server. # # @deprecated def_delegators :monitor, :scan! # The compressor negotiated by the server monitor, if any. # # This attribute is nil if no server check has yet completed, or if # no compression was negotiated. # # @note Compression is negotiated for each connection separately. # # @return [ String | nil ] The negotiated compressor. # # @deprecated def compressor if monitor monitor.compressor else nil end end # Delegate convenience methods to the monitor description. def_delegators :description, :arbiter?, :features, :ghost?, :max_wire_version, :max_write_batch_size, :max_bson_object_size, :max_message_size, :tags, :average_round_trip_time, :minimum_round_trip_time, :mongos?, :other?, :primary?, :replica_set_name, :secondary?, :standalone?, :unknown?, :load_balancer?, :last_write_date, :logical_session_timeout # Get the app metadata from the cluster. def_delegators :cluster, :app_metadata, :cluster_time, :update_cluster_time # @api private def_delegators :cluster, :monitor_app_metadata, :push_monitor_app_metadata def_delegators :features, :check_driver_support! # @return [ Semaphore ] Semaphore to signal to request an immediate scan # of this server by its monitor, if one is running. # # @api private attr_reader :scan_semaphore # @return [ RoundTripTimeCalculator ] Round trip time calculator object. # @api private attr_reader :round_trip_time_calculator # Is this server equal to another? # # @example Is the server equal to the other? # server == other # # @param [ Object ] other The object to compare to. # # @return [ true, false ] If the servers are equal. # # @since 2.0.0 def ==(other) return false unless other.is_a?(Server) address == other.address end # Determine if a connection to the server is able to be established and # messages can be sent to it. # # @example Is the server connectable? # server.connectable? # # @return [ true, false ] If the server is connectable. # # @since 2.1.0 # # @deprecated No longer necessary with Server Selection specification. def connectable?; end # Disconnect the driver from this server. # # Disconnects all idle connections to this server in its connection pool, # if any exist. Stops the populator of the connection pool, if it is # running. Does not immediately close connections which are presently # checked out (i.e. in use) - such connections will be closed when they # are returned to their respective connection pools. Stop the server's # background monitor. # # @return [ true ] Always true. # # @since 2.0.0 def disconnect! if monitor monitor.stop! end @connected = false # The current CMAP spec requires a pool to be mostly unusable # if its server is unknown (or, therefore, disconnected). # However any outstanding operations should continue to completion, # and their connections need to be checked into the pool to be # torn down. Because of this cleanup requirement we cannot just # close the pool and set it to nil here, to be recreated the next # time the server is discovered. pool_internal&.clear true end def close if monitor monitor.stop! end @connected = false _pool = nil @pool_lock.synchronize do _pool, @pool = @pool, nil end # TODO: change this to _pool.close in RUBY-3174. # Clear the pool.
If the server is not unknown then the # pool will stay ready. Stop the background populator thread. _pool&.close(stay_ready: true) nil end # Whether the server is connected. # # @return [ true|false ] Whether the server is connected. # # @api private # @since 2.7.0 def connected? @connected end # Start monitoring the server. # # Used internally by the driver to add a server to a cluster # while delaying monitoring until the server is in the cluster. # # @api private def start_monitoring publish_opening_event if options[:monitoring_io] != false monitor.run! end end # Publishes the server opening event. # # @api private def publish_opening_event publish_sdam_event( Monitoring::SERVER_OPENING, Monitoring::Event::ServerOpening.new(address, cluster.topology) ) end # Get a pretty printed server inspection. # # @example Get the server inspection. # server.inspect # # @return [ String ] The nice inspection string. # # @since 2.0.0 def inspect "#<Mongo::Server:0x#{object_id} address=#{address.host}:#{address.port} #{status}>" end # @return [ String ] String representing server status (e.g. PRIMARY). # # @api private def status case when load_balancer? 'LB' when primary? 'PRIMARY' when secondary? 'SECONDARY' when standalone? 'STANDALONE' when arbiter? 'ARBITER' when ghost? 'GHOST' when other? 'OTHER' when mongos? 'MONGOS' when unknown? 'UNKNOWN' else # Since the summary method is often used for debugging, do not raise # an exception in case none of the expected types matched nil end end # @note This method is experimental and subject to change. # # @api experimental # @since 2.7.0 def summary status = self.status || '' if replica_set_name status += " replica_set=#{replica_set_name}" end unless monitor&.running? status += " NO-MONITORING" end if @pool status += " pool=#{@pool.summary}" end address_bit = if address "#{address.host}:#{address.port}" else 'nil' end "#<Server address=#{address_bit} #{status}>" end # Get the connection pool for this server. # # @example Get the connection pool for the server. # server.pool # # @return [ Mongo::Server::ConnectionPool ] The connection pool. # # @since 2.0.0 def pool if unknown? raise Error::ServerNotUsable, address end @pool_lock.synchronize do opts = connected? ? options : options.merge(populator_io: false) @pool ||= ConnectionPool.new(self, opts).tap do |pool| pool.ready end end end # Internal driver method to retrieve the connection pool for this server. # # Unlike +pool+, +pool_internal+ will not create a pool if one does not # already exist. # # @return [ Server::ConnectionPool | nil ] The connection pool, if one exists. # # @api private def pool_internal @pool_lock.synchronize do @pool end end # Determine if the provided tags are a subset of the server's tags. # # @example Are the provided tags a subset of the server's tags. # server.matches_tag_set?({ 'rack' => 'a', 'dc' => 'nyc' }) # # @param [ Hash ] tag_set The tag set to compare to the server's tags. # # @return [ true, false ] If the provided tags are a subset of the server's tags. # # @since 2.0.0 def matches_tag_set?(tag_set) tag_set.keys.all? do |k| tags[k] && tags[k] == tag_set[k] end end # Restart the server monitor. # # @example Restart the server monitor. # server.reconnect! # # @return [ true ] Always true. # # @since 2.1.0 def reconnect! if options[:monitoring_io] != false monitor.restart! end @connected = true end # Execute a block of code with a connection, that is checked out of the # server's pool and then checked back in. # # @example Send a message with the connection.
# server.with_connection do |connection| # connection.dispatch([ command ]) # end # # @return [ Object ] The result of the block execution. # # @since 2.3.0 def with_connection(connection_global_id: nil, context: nil, &block) pool.with_connection( connection_global_id: connection_global_id, context: context, &block ) end # Handle handshake failure. # # @since 2.7.0 # @api private def handle_handshake_failure! yield rescue Mongo::Error::SocketError, Mongo::Error::SocketTimeoutError => e unknown!( generation: e.generation, service_id: e.service_id, stop_push_monitor: true, ) raise end # Handle authentication failure. # # @example Handle possible authentication failure. # server.handle_auth_failure! do # Auth.get(user).login(self) # end # # @raise [ Auth::Unauthorized ] If the authentication failed. # # @return [ Object ] The result of the block execution. # # @since 2.3.0 def handle_auth_failure! yield rescue Mongo::Error::SocketTimeoutError # possibly cluster is slow, do not give up on it raise rescue Mongo::Error::SocketError, Auth::Unauthorized => e # non-timeout network error or auth error, clear the pool and mark the # topology as unknown unknown!( generation: e.generation, service_id: e.service_id, stop_push_monitor: true, ) raise end # Whether the server supports modern read retries. # # @api private def retry_reads? !!(features.sessions_enabled? && logical_session_timeout) end # Will writes sent to this server be retried. # # @example Will writes be retried. # server.retry_writes? # # @return [ true, false ] If writes will be retried. # # @note Retryable writes are only available on server versions 3.6+ and with # sharded clusters or replica sets. # # @note Some of the conditions in this method automatically return false # for load balanced topologies. The conditions in this method should # always be true, since load-balanced topologies are only available on # MongoDB 5.0+, and not for standalone topologies. Therefore, we can # assume that retry writes are enabled. # # @since 2.5.0 def retry_writes? !!(features.sessions_enabled? && logical_session_timeout && !standalone?) || load_balancer? end # Marks server unknown and publishes the associated SDAM event # (server description changed). # # If the generation is passed in options, the server will only be marked # unknown if the passed generation is no older than the current generation # of the server's connection pool. # # @param [ Hash ] options Options. # # @option options [ Integer ] :generation Connection pool generation of # the connection that was used for the operation that produced the error. # @option options [ true | false ] :keep_connection_pool Usually when the # new server description is unknown, the connection pool on the # respective server is cleared. Set this option to true to keep the # existing connection pool (required when handling not master errors # on 4.2+ servers). # @option options [ TopologyVersion ] :topology_version Topology version # of the error response that is causing the server to be marked unknown. # @option options [ true | false ] :stop_push_monitor Whether to stop # the PushMonitor associated with the server, if any. # @option options [ Object ] :service_id Discard state for the specified # service id only. # # @since 2.4.0, SDAM events are sent as of version 2.7.0 def unknown!(options = {}) pool = pool_internal if load_balancer? # When the client is in load-balanced topology, the server (the one and # only that can be) starts out as a load balancer and stays as a # load balancer indefinitely.
As such it is not marked unknown. # # However, this method also clears the connection pool for the server # when the latter is marked unknown, and this part needs to happen # when the server is a load balancer. # # It is possible for a load balancer server to not have a service id, # for example if there haven't been any successful connections yet to # this server, but the server can still be marked unknown if one # of such connections failed midway through its establishment. if service_id = options[:service_id] pool&.disconnect!(service_id: service_id) end return end if options[:generation] && options[:generation] < pool&.generation return end if options[:topology_version] && description.topology_version && !options[:topology_version].gt?(description.topology_version) then return end if options[:stop_push_monitor] monitor&.stop_push_monitor! end # SDAM flow will update description on the server without in-place # mutations and invoke SDAM transitions as needed. config = {} if options[:service_id] config['serviceId'] = options[:service_id] end if options[:topology_version] config['topologyVersion'] = options[:topology_version] end new_description = Description.new(address, config, load_balancer: load_balancer?, force_load_balancer: options[:connect] == :load_balanced, ) cluster.run_sdam_flow(description, new_description, options) end # @api private def update_description(description) pool = pool_internal if pool && !description.unknown? pool.ready end @description = description end # Clear the server's description so that it is considered unknown and can be # safely disconnected. # # @api private def clear_description @description = Mongo::Server::Description.new(address, {}) end # @param [ Object ] :service_id Close connections with the specified # service id only. # @param [ true | false ] :interrupt_in_use_connections Whether or not the # cleared connections should be interrupted as well. # # @api private def clear_connection_pool(service_id: nil, interrupt_in_use_connections: false) @pool_lock.synchronize do # A server being marked unknown after it is closed is technically # incorrect but it does not meaningfully alter any state. # Because historically the driver permitted servers to be marked # unknown at any time, continue doing so even if the pool is closed. if @pool && !@pool.closed? @pool.disconnect!(service_id: service_id, interrupt_in_use_connections: interrupt_in_use_connections) end end end # @api private def next_connection_id @connection_id_gen.next_id end # @api private def update_last_scan @last_scan = Time.now @last_scan_monotime = Utils.monotonic_time end end end require 'mongo/server/app_metadata' require 'mongo/server/connection_common' require 'mongo/server/connection_base' require 'mongo/server/pending_connection' require 'mongo/server/connection' require 'mongo/server/connection_pool' require 'mongo/server/description' require 'mongo/server/monitor' require 'mongo/server/round_trip_time_calculator' require 'mongo/server/push_monitor' mongo-ruby-driver-2.21.3/lib/mongo/server/000077500000000000000000000000001505113246500203635ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/lib/mongo/server/app_metadata.rb000066400000000000000000000176011505113246500233350ustar00rootroot00000000000000# frozen_string_literal: true # Copyright (C) 2016-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License.
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'mongo/server/app_metadata/environment' require 'mongo/server/app_metadata/platform' require 'mongo/server/app_metadata/truncator' module Mongo class Server # Application metadata that is sent to the server during a handshake, # when a new connection is established. # # @api private class AppMetadata extend Forwardable # The max application name byte size. MAX_APP_NAME_SIZE = 128 # The driver name. DRIVER_NAME = 'mongo-ruby-driver' # Option keys that affect auth mechanism negotiation. AUTH_OPTION_KEYS = %i[ user auth_source auth_mech ].freeze # Possible connection purposes. PURPOSES = %i[ application monitor push_monitor ].freeze # Instantiate the new AppMetadata object. # # @example Instantiate the app metadata. # Mongo::Server::AppMetadata.new(options) # # @param [ Hash ] options Metadata options. # @option options [ String, Symbol ] :app_name Application name that is # printed to the mongod logs upon establishing a connection in server # versions >= 3.4. # @option options [ Symbol ] :auth_mech The authentication mechanism to # use. One of :mongodb_cr, :mongodb_x509, :plain, :scram, :scram256 # @option options [ String ] :auth_source The source to authenticate from. # @option options [ Array<String> ] :compressors A list of potential # compressors to use, in order of preference. The driver chooses the # first compressor that is also supported by the server. Currently the # driver only supports 'zstd', 'snappy' and 'zlib'. # @option options [ String ] :platform Platform information to include in # the metadata printed to the mongod logs upon establishing a connection # in server versions >= 3.4. # @option options [ Symbol ] :purpose The purpose of this connection. # @option options [ Hash ] :server_api The requested server API version. # This hash can have the following items: # - *:version* -- string # - *:strict* -- boolean # - *:deprecation_errors* -- boolean # @option options [ String ] :user The user name. # @option options [ Array<Hash> ] :wrapping_libraries Information about # libraries such as ODMs that are wrapping the driver. Specify the # lower level libraries first. Allowed hash keys: :name, :version, # :platform. # # @since 2.4.0 def initialize(options = {}) @app_name = options[:app_name].to_s if options[:app_name] @platform = options[:platform] @purpose = check_purpose!(options[:purpose]) @compressors = options[:compressors] || [] @wrapping_libraries = options[:wrapping_libraries] @server_api = options[:server_api] return unless options[:user] && !options[:auth_mech] auth_db = options[:auth_source] || 'admin' @request_auth_mech = "#{auth_db}.#{options[:user]}" end # @return [ Symbol ] The purpose of the connection for which this # app metadata is created. attr_reader :purpose # @return [ String ] The platform information given when the object was # instantiated. attr_reader :platform # @return [ Hash | nil ] The requested server API version. # # This hash can have the following items: # - *:version* -- string # - *:strict* -- boolean # - *:deprecation_errors* -- boolean attr_reader :server_api # @return [ Array<Hash> | nil ] Information about libraries wrapping # the driver.
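# For example, a driver wrapped by an ODM might report (values are
# illustrative):
#
#   [ { name: 'Mongoid', version: '9.0.0' } ]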
attr_reader :wrapping_libraries # Get the metadata as BSON::Document to be sent to the server # as part of the handshake. The document should # be appended to a suitable handshake command. # # This method ensures that the metadata are valid. # # @return [BSON::Document] Valid document for connection's handshake. # # @raise [ Error::InvalidApplicationName ] When the metadata are invalid. def validated_document validate! document end # Get BSON::Document to be used as value for `client` key in # handshake document. # # @return [BSON::Document] Document describing client for handshake. def client_document @client_document ||= BSON::Document.new.tap do |doc| doc[:application] = { name: @app_name } if @app_name doc[:driver] = driver_doc doc[:os] = os_doc doc[:platform] = platform_string env_doc.tap { |env| doc[:env] = env if env } end end private # Check whether it is possible to build a valid app metadata document # with params provided on initialization. # # @raise [ Error::InvalidApplicationName ] When the metadata are invalid. def validate! if @app_name && @app_name.bytesize > MAX_APP_NAME_SIZE raise Error::InvalidApplicationName.new(@app_name, MAX_APP_NAME_SIZE) end true end # Get the metadata as BSON::Document to be sent to the server # as part of the handshake. The document should # be appended to a suitable handshake command. # # @return [BSON::Document] Document for connection's handshake. def document @document ||= begin client = Truncator.new(client_document).document BSON::Document.new(compression: @compressors, client: client).tap do |doc| doc[:saslSupportedMechs] = @request_auth_mech if @request_auth_mech doc.update(Utils.transform_server_api(@server_api)) if @server_api end end end def driver_doc names = [ DRIVER_NAME ] versions = [ Mongo::VERSION ] wrapping_libraries&.each do |library| names << (library[:name] || '') versions << (library[:version] || '') end { name: names.join('|'), version: versions.join('|'), } end def os_doc { type: type, name: name, architecture: architecture, } end # Returns the environment doc describing the current execution # environment. # # @return [ Hash | nil ] the environment doc (or nil if no relevant # environment info was detected) def env_doc env = Environment.new env.present? ? env.to_h : nil end def type if RbConfig::CONFIG && RbConfig::CONFIG['host_os'] RbConfig::CONFIG['host_os'].split('_').first[/[a-z]+/i].downcase else 'unknown' end end def name RbConfig::CONFIG['host_os'] end def architecture RbConfig::CONFIG['target_cpu'] end def platform_string Platform.new(self).to_s end # Verifies that the given purpose is either nil, or is one of the # allowed purposes. # # @param [ String | nil ] purpose The purpose to validate # # @return [ String | nil ] the {{purpose}} argument # # @raise [ ArgumentError ] if the purpose is invalid def check_purpose!(purpose) return purpose unless purpose && !PURPOSES.include?(purpose) raise ArgumentError, "Invalid purpose: #{purpose}" end end end end mongo-ruby-driver-2.21.3/lib/mongo/server/app_metadata/000077500000000000000000000000001505113246500230035ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/lib/mongo/server/app_metadata/environment.rb000066400000000000000000000265201505113246500257010ustar00rootroot00000000000000# frozen_string_literal: true # Copyright (C) 2016-2023 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License.
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Server class AppMetadata # Implements the logic from the handshake spec, for deducing and # reporting the current environment in which the program is # executing. # # This includes FaaS environment checks, as well as checks for the # presence of a container (Docker) and/or orchestrator (Kubernetes). # # @api private class Environment # Error class for reporting that too many discriminators were found # in the environment. (E.g. if the environment reports that it is # running under both AWS and Azure.) class TooManyEnvironments < Mongo::Error; end # Error class for reporting that a required environment variable is # missing. class MissingVariable < Mongo::Error; end # Error class for reporting that the wrong type was given for a # field. class TypeMismatch < Mongo::Error; end # Error class for reporting that the value for a field is too long. class ValueTooLong < Mongo::Error; end # The name and location of the .dockerenv file that will signal the # presence of Docker. DOCKERENV_PATH = '/.dockerenv' # This value is not explicitly specified in the spec, only implied to be # less than 512. MAXIMUM_VALUE_LENGTH = 500 # The mapping that determines which FaaS environment is active, based # on which environment variable(s) are present. DISCRIMINATORS = { 'AWS_EXECUTION_ENV' => { pattern: /^AWS_Lambda_/, name: 'aws.lambda' }, 'AWS_LAMBDA_RUNTIME_API' => { name: 'aws.lambda' }, 'FUNCTIONS_WORKER_RUNTIME' => { name: 'azure.func' }, 'K_SERVICE' => { name: 'gcp.func' }, 'FUNCTION_NAME' => { name: 'gcp.func' }, 'VERCEL' => { name: 'vercel' }, }.freeze # Describes how to coerce values of the specified type. COERCIONS = { string: ->(v) { String(v) }, integer: ->(v) { Integer(v) } }.freeze # Describes which fields are required for each FaaS environment, # along with their expected types, and how they should be named in # the handshake document. FIELDS = { 'aws.lambda' => { 'AWS_REGION' => { field: :region, type: :string }, 'AWS_LAMBDA_FUNCTION_MEMORY_SIZE' => { field: :memory_mb, type: :integer }, }, 'azure.func' => {}, 'gcp.func' => { 'FUNCTION_MEMORY_MB' => { field: :memory_mb, type: :integer }, 'FUNCTION_TIMEOUT_SEC' => { field: :timeout_sec, type: :integer }, 'FUNCTION_REGION' => { field: :region, type: :string }, }, 'vercel' => { 'VERCEL_REGION' => { field: :region, type: :string }, }, }.freeze # @return [ String | nil ] the name of the FaaS environment that was # detected, or nil if no valid FaaS environment was detected. attr_reader :name # @return [ Hash | nil ] the fields describing the detected FaaS # environment. attr_reader :fields # @return [ String | nil ] the error message explaining why a valid # FaaS environment was not detected, or nil if no error occurred. # # @note These error messages are not to be propagated to the # user; they are intended only for troubleshooting and debugging. attr_reader :error # Create a new AppMetadata::Environment object, initializing it from # the current ENV variables. If no FaaS environment is detected, or # if the environment contains invalid or contradictory state, it will # be initialized with {{name}} set to {{nil}}.
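# @example Inspecting a detected environment (illustrative values;
#   actual results depend on the variables set in ENV).
#   env = Mongo::Server::AppMetadata::Environment.new
#   env.faas? # => true, e.g. when AWS_LAMBDA_RUNTIME_API is set
#   env.name  # => "aws.lambda"
#   env.to_h  # => { region: "us-east-1", memory_mb: 512, name: "aws.lambda" }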
def initialize @fields = {} @error = nil @name = detect_environment populate_faas_fields detect_container rescue TooManyEnvironments => e self.error = "too many environments detected: #{e.message}" rescue MissingVariable => e self.error = "missing environment variable: #{e.message}" rescue TypeMismatch => e self.error = e.message rescue ValueTooLong => e self.error = "value for #{e.message} is too long" end # Queries the detected container information. # # @return [ Hash | nil ] the detected container information, or # nil if no container was detected. def container fields[:container] end # Queries whether any environment information was able to be # detected. # # @return [ true | false ] if any environment information was # detected. def present? @name || fields.any? end # Queries whether the current environment is a valid FaaS environment. # # @return [ true | false ] whether the environment is a FaaS # environment or not. def faas? @name != nil end # Queries whether the current environment is a valid AWS Lambda # environment. # # @return [ true | false ] whether the environment is an AWS Lambda # environment or not. def aws? @name == 'aws.lambda' end # Queries whether the current environment is a valid Azure # environment. # # @return [ true | false ] whether the environment is an Azure # environment or not. def azure? @name == 'azure.func' end # Queries whether the current environment is a valid GCP # environment. # # @return [ true | false ] whether the environment is a GCP # environment or not. def gcp? @name == 'gcp.func' end # Queries whether the current environment is a valid Vercel # environment. # # @return [ true | false ] whether the environment is a Vercel # environment or not. def vercel? @name == 'vercel' end # Compiles the detected environment information into a Hash. # # @return [ Hash ] the detected environment information. def to_h name ? fields.merge(name: name) : fields end private # Searches the DISCRIMINATORS list to see which (if any) apply to # the current environment. # # @return [ String | nil ] the name of the detected FaaS provider. # # @raise [ TooManyEnvironments ] if the environment contains # discriminating variables for more than one FaaS provider. def detect_environment matches = DISCRIMINATORS.keys.select { |k| discriminator_matches?(k) } names = matches.map { |m| DISCRIMINATORS[m][:name] }.uniq # From the spec: # When variables for multiple ``client.env.name`` values are present, # ``vercel`` takes precedence over ``aws.lambda``; any other # combination MUST cause ``client.env`` to be entirely omitted. return 'vercel' if names.sort == %w[ aws.lambda vercel ] raise TooManyEnvironments, names.join(', ') if names.length > 1 names.first end # Looks for the presence of a container. Currently can detect # Docker (by the existence of a .dockerenv file in the root # directory) and Kubernetes (by the existence of the KUBERNETES_SERVICE_HOST # environment variable). def detect_container runtime = docker_present? && 'docker' orchestrator = kubernetes_present? && 'kubernetes' return unless runtime || orchestrator fields[:container] = {} fields[:container][:runtime] = runtime if runtime fields[:container][:orchestrator] = orchestrator if orchestrator end # Checks for the existence of a .dockerenv in the root directory. def docker_present? File.exist?(dockerenv_path) end # Implementing this as a method so that it can be mocked in tests, to # test the presence or absence of Docker.
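# A test might therefore stub it, e.g. (RSpec shown purely for
# illustration):
#
#   allow(environment).to receive(:dockerenv_path).and_return(fake_path)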
def dockerenv_path DOCKERENV_PATH end # Checks for the presence of a non-empty KUBERNETES_SERVICE_HOST # environment variable. def kubernetes_present? !ENV['KUBERNETES_SERVICE_HOST'].to_s.empty? end # Determines whether the named environment variable exists, and (if # a pattern has been declared for that discriminator) whether the # pattern matches the value of the variable. # # @param [ String ] var the name of the environment variable # # @return [ true | false ] if the variable describes the current # environment or not. def discriminator_matches?(var) return false unless ENV[var] disc = DISCRIMINATORS[var] return true unless disc[:pattern] disc[:pattern].match?(ENV[var]) end # Extracts environment information from the current environment # variables, based on the detected FaaS environment. Populates the # {{@fields}} instance variable. def populate_faas_fields return unless name FIELDS[name].each_with_object(@fields) do |(var, defn), fields| fields[defn[:field]] = extract_field(var, defn) end end # Extracts the named variable from the environment and validates it # against its declared definition. # # @param [ String ] var The name of the environment variable to look # for. # @param [ Hash ] definition The definition of the field that applies # to the named variable. # # @return [ Integer | String ] the validated and coerced value of the # given environment variable. # # @raise [ MissingVariable ] if the environment does not include a # variable required by the current FaaS provider. # @raise [ ValueTooLong ] if a required variable is too long. # @raise [ TypeMismatch ] if a required variable cannot be coerced to # the expected type. def extract_field(var, definition) raise MissingVariable, var unless ENV[var] raise ValueTooLong, var if ENV[var].length > MAXIMUM_VALUE_LENGTH COERCIONS[definition[:type]].call(ENV[var]) rescue ArgumentError raise TypeMismatch, "#{var} must be #{definition[:type]} (got #{ENV[var].inspect})" end # Sets the error message to the given value and sets the name to nil. # # @param [ String ] msg The error message to store. def error=(msg) @name = nil @error = msg end end end end end mongo-ruby-driver-2.21.3/lib/mongo/server/app_metadata/platform.rb000066400000000000000000000070321505113246500251560ustar00rootroot00000000000000# frozen_string_literal: true # Copyright (C) 2016-2023 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Server class AppMetadata # Implements the logic for building the platform string for the # handshake. # # @api private class Platform # @return [ Mongo::Server::AppMetadata ] the metadata object to # reference when building the platform string. attr_reader :metadata # Create a new Platform object, referencing the given metadata object. # # @param [ Mongo::Server::AppMetadata ] metadata the metadata object # to reference when building the platform string. def initialize(metadata) @metadata = metadata end # Queries whether the current runtime is JRuby or not. # # @return [ true | false ] whether the runtime is JRuby or not.
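# @example (the value depends on the interpreter running the code)
#   platform.jruby? # => true under JRuby, false under MRI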
def jruby? BSON::Environment.jruby? end # Returns the list of Ruby versions that identify this runtime. # # @return [ Array ] the list of ruby versions def ruby_versions if jruby? [ "JRuby #{JRUBY_VERSION}", "like Ruby #{RUBY_VERSION}" ] else [ "Ruby #{RUBY_VERSION}" ] end end # Returns the list of platform identifiers that identify this runtime. # # @return [ Array ] the list of platform identifiers. def platforms [ RUBY_PLATFORM ].tap do |list| list.push "JVM #{java_version}" if jruby? end end # Returns the version of the current Java environment, or nil if not # invoked with JRuby. # # @return [ String | nil ] the current Java version def java_version return nil unless jruby? java.lang.System.get_property('java.version') end # Builds and returns the default platform list, for use when building # the platform string. # # @return [ Array ] the list of platform identifiers def default_platform_list [ metadata.platform, *ruby_versions, *platforms, RbConfig::CONFIG['build'] ] end # Returns a single letter representing the purpose reported to the # metadata, or nil if no purpose was specified. # # @return [ String | nil ] the code representing the purpose def purpose return nil unless metadata.purpose metadata.purpose.to_s[0].upcase end # Builds and returns the platform string by concatenating relevant # values together. # # @return [ String ] the platform string def to_s primary = [ *default_platform_list, purpose ].compact.join(', ') list = [ primary ] metadata.wrapping_libraries&.each do |library| list << (library[:platform] || '') end list.join('|') end end end end end mongo-ruby-driver-2.21.3/lib/mongo/server/app_metadata/truncator.rb000066400000000000000000000110141505113246500253460ustar00rootroot00000000000000# frozen_string_literal: true # Copyright (C) 2016-2023 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Server class AppMetadata # Implements the metadata truncation logic described in the handshake # spec. # # @api private class Truncator # @return [ BSON::Document ] the document being truncated. attr_reader :document # The max application metadata document byte size. MAX_DOCUMENT_SIZE = 512 # Creates a new Truncator instance and tries enforcing the maximum # document size on the given document. # # @param [ BSON::Document] document The document to (potentially) # truncate. # # @note The document is modified in-place; if you wish to keep the # original unchanged, you must deep-clone it prior to sending it to # a truncator. def initialize(document) @document = document try_truncate! end # The current size of the document, in bytes, as a serialized BSON # document. # # @return [ Integer ] the size of the document def size @document.to_bson.to_s.length end # Whether the document fits within the required maximum document size. # # @return [ true | false ] if the document is okay or not. def ok? size <= MAX_DOCUMENT_SIZE end private # How many extra bytes must be trimmed before the document may be # considered #ok?. 
# # @return [ Integer ] how many bytes larger the document is than the # maximum document size. def excess size - MAX_DOCUMENT_SIZE end # Attempt to truncate the document using the documented metadata # priorities (see the handshake specification). def try_truncate! %i[ env_fields os_fields env platform ].each do |target| break if ok? send(:"try_truncate_#{target}!") end end # Attempt to truncate or remove the {{:platform}} key from the # document. def try_truncate_platform! @document.delete(:platform) unless try_truncate_string(@document[:platform]) end # Attempt to truncate the keys in the {{:env}} subdocument. def try_truncate_env_fields! try_truncate_hash(@document[:env], reserved: %w[ name ]) end # Attempt to truncate the keys in the {{:os}} subdocument. def try_truncate_os_fields! try_truncate_hash(@document[:os], reserved: %w[ type ]) end # Remove the {{:env}} key from the document. def try_truncate_env! @document.delete(:env) end # A helper method for truncating a string (in-place) by whatever # {{#excess}} is required. # # @param [ String ] string the string value to truncate. # # @note the parameter is modified in-place. def try_truncate_string(string) length = string&.length || 0 return false if excess > length string[(length - excess)..-1] = '' end # A helper method for removing the keys of a Hash (in-place) until # the document is the necessary size. The keys are considered in order # (using the Hash's native key ordering), and each will be removed from # the hash in turn, until the document is the necessary size. # # Any keys in the {{reserved}} list will be ignored. # # @param [ Hash | nil ] hash the Hash instance to consider. # @param [ Array ] reserved the list of keys to ignore in the hash. # # @note the hash parameter is modified in-place. def try_truncate_hash(hash, reserved: []) return false unless hash keys = hash.keys - reserved keys.each do |key| hash.delete(key) return true if ok? end false end end end end end mongo-ruby-driver-2.21.3/lib/mongo/server/connection.rb000066400000000000000000000326671505113246500230650ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Server # This class models the socket connections for servers and their behavior. # # @since 2.0.0 class Connection < ConnectionBase include Monitoring::Publishable include Retryable include Id extend Forwardable # The ping command. # # @since 2.1.0 # # @deprecated No longer necessary with Server Selection specification. PING = { :ping => 1 }.freeze # The ping command for an OP_MSG (server versions >= 3.6). # # @since 2.5.0 # # @deprecated No longer necessary with Server Selection specification. PING_OP_MSG = { :ping => 1, '$db' => Database::ADMIN }.freeze # Ping message. # # @since 2.1.0 # # @deprecated No longer necessary with Server Selection specification. 
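# (These prebuilt ping artifacts predate the Server Selection spec;
# the driver now relies on SDAM monitoring instead of pinging servers
# before use, so the constants below remain only for backward
# compatibility.)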
      PING_MESSAGE = Protocol::Query.new(Database::ADMIN, Database::COMMAND, PING, :limit => -1)

      # Ping message as an OP_MSG (server versions >= 3.6).
      #
      # @since 2.5.0
      #
      # @deprecated No longer necessary with Server Selection specification.
      PING_OP_MSG_MESSAGE = Protocol::Msg.new([], {}, PING_OP_MSG)

      # The ping message as raw bytes.
      #
      # @since 2.1.0
      #
      # @deprecated No longer necessary with Server Selection specification.
      PING_BYTES = PING_MESSAGE.serialize.to_s.freeze

      # The ping OP_MSG message as raw bytes (server versions >= 3.6).
      #
      # @since 2.5.0
      #
      # @deprecated No longer necessary with Server Selection specification.
      PING_OP_MSG_BYTES = PING_OP_MSG_MESSAGE.serialize.to_s.freeze

      # Creates a new connection object to the specified target address
      # with the specified options.
      #
      # The constructor does not perform any I/O (and thus does not create
      # a socket, perform the handshake, or authenticate); call the connect!
      # method on the connection object to create the network connection.
      #
      # @api private
      #
      # @example Create the connection.
      #   Connection.new(server)
      #
      # @note Connection must never be directly instantiated outside of a
      #   Server.
      #
      # @param [ Mongo::Server ] server The server the connection is for.
      # @param [ Hash ] options The connection options.
      #
      # @option options :pipe [ IO ] The file descriptor for the read end of the
      #   pipe to listen on during the select system call when reading from the
      #   socket.
      # @option options [ Integer ] :generation The generation of this
      #   connection. The generation should only be specified in this option
      #   when not in load-balancing mode, and it should be the generation
      #   of the connection pool when the connection is created. In
      #   load-balancing mode, the generation is set on the connection
      #   after the handshake completes.
      # @option options [ Hash ] :server_api The requested server API version.
      #   This hash can have the following items:
      #   - *:version* -- string
      #   - *:strict* -- boolean
      #   - *:deprecation_errors* -- boolean
      #
      # @since 2.0.0
      def initialize(server, options = {})
        if server.load_balancer? && options[:generation]
          raise ArgumentError, "Generation cannot be set when server is a load balancer"
        end

        @id = server.next_connection_id
        @global_id = self.class.next_id
        @monitoring = server.monitoring
        @options = options.freeze
        @server = server
        @socket = nil
        @last_checkin = nil
        @auth_mechanism = nil
        @pid = Process.pid
        @pinned = false

        publish_cmap_event(
          Monitoring::Event::Cmap::ConnectionCreated.new(address, id)
        )
      end

      # @return [ Time ] The last time the connection was checked back into a pool.
      #
      # @since 2.5.0
      attr_reader :last_checkin

      # @return [ Integer ] The ID for the connection. This will be unique
      #   across connections to the same server object.
      #
      # @since 2.9.0
      attr_reader :id

      # @return [ Integer ] The global ID for the connection. This will be unique
      #   across all connections.
      attr_reader :global_id

      # The connection pool from which this connection was created.
      # May be nil.
      #
      # @api private
      def connection_pool
        options[:connection_pool]
      end

      # Whether the connection was connected and was not interrupted, closed,
      # or had an error raised.
      #
      # @return [ true | false ] if the connection was connected.
      def connected?
        !closed? && !error? && !interrupted? && !!@socket
      end

      # Whether the connection was closed.
      #
      # Closed connections should no longer be used. Instead obtain a new
      # connection from the connection pool.
      #
      # @return [ true | false ] Whether connection was closed.
      #
      # @since 2.9.0
      def closed?
        !!@closed
      end

      # Whether the connection was interrupted.
# # Interrupted connections were already removed from the pool and should # not be checked back into the pool. # # @return [ true | false ] Whether connection was interrupted. def interrupted? !!@interrupted end # Mark the connection as interrupted. def interrupted! @interrupted = true end # @api private def error? !!@error end # Whether the connection is used by a transaction or cursor operations. # # Pinned connections should not be disconnected and removed from a # connection pool if they are idle or stale. # # # @return [ true | false ] Whether connection is pinned. # # @api private def pinned? @pinned end # Mark the connection as pinned. # # @api private def pin @pinned = true end # Mark the connection as not pinned. # # @api private def unpin @pinned = false end # Establishes a network connection to the target address. # # If the connection is already established, this method does nothing. # # @example Connect to the host. # connection.connect! # # @note This method mutates the connection object by setting a socket if # one previously did not exist. # # @return [ true ] If the connection succeeded. # # @since 2.0.0 def connect!(context = nil) raise_if_closed! unless @socket @socket = create_socket(context) @description, @compressor = do_connect if server.load_balancer? if Lint.enabled? unless service_id raise Error::InternalDriverError, "The connection is to a load balancer and it must have service_id set here, but does not" end end @generation = connection_pool.generation_manager.generation(service_id: service_id) end publish_cmap_event( Monitoring::Event::Cmap::ConnectionReady.new(address, id) ) @close_event_published = false end true end # Creates the socket. The method is separate from do_connect, so that # pending connections can be closed if they are interrupted during hello. # # # @return [ Socket ] The created socket. private def create_socket(context = nil) add_server_diagnostics do opts = ssl_options.merge( connection_address: address, connection_generation: generation, pipe: options[:pipe], connect_timeout: context&.remaining_timeout_sec, csot: !!context&.csot? ) address.socket(socket_timeout, opts) end end # Separate method to permit easier mocking in the test suite. # # @return [ Array ] A server # description instance from the hello response of the returned socket # and the compressor to use. private def do_connect raise_if_closed! begin pending_connection = PendingConnection.new( socket, @server, monitoring, options.merge(id: id)) pending_connection.handshake_and_authenticate! rescue Exception socket&.close @socket = nil raise end [pending_connection.description, pending_connection.compressor] end # Disconnect the connection. # # @note Once a connection is disconnected, it should no longer be used. # A new connection should be obtained from the connection pool which # will either return a ready connection or create a new connection. # If linting is enabled, reusing a disconnected connection will raise # Error::LintError. If linting is not enabled, a warning will be logged. # # @note This method mutates the connection object by setting the socket # to nil if the closing succeeded. # # @option options [ Symbol ] :reason The reason why the connection is # being closed. # @option options [ true | false ] :interrupted Whether or not the # connection was interrupted. # # @return [ true ] If the disconnect succeeded. # # @since 2.0.0 def disconnect!(options = nil) # Note: @closed may be true here but we also may have a socket. # Check the socket and not @closed flag. 
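        # Reset cached auth state and the last check-in time before closing
        # the socket below, so a disconnected connection is never mistaken
        # for a usable one.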
@auth_mechanism = nil @last_checkin = nil if socket socket.close rescue nil @socket = nil end @closed = true interrupted! if options && options[:interrupted] # To satisfy CMAP spec tests, publish close events even if the # socket was never connected (and thus the ready event was never # published). But track whether we published close event and do not # publish it multiple times, unless the socket was reconnected - # in that case publish the close event once per socket close. unless @close_event_published reason = options && options[:reason] publish_cmap_event( Monitoring::Event::Cmap::ConnectionClosed.new( address, id, reason, ), ) @close_event_published = true end true end # Ping the connection to see if the server is responding to commands. # This is non-blocking on the server side. # # @example Ping the connection. # connection.ping # # @note This uses a pre-serialized ping message for optimization. # # @return [ true, false ] If the server is accepting connections. # # @since 2.1.0 # # @deprecated No longer necessary with Server Selection specification. def ping bytes = features.op_msg_enabled? ? PING_OP_MSG_BYTES : PING_BYTES ensure_connected do |socket| reply = add_server_diagnostics do socket.write(bytes) Protocol::Message.deserialize(socket, max_message_size) end reply.documents[0][Operation::Result::OK] == 1 end end # Get the timeout to execute an operation on a socket. # # @return [ Float ] The operation timeout in seconds. # # @since 2.0.0 def socket_timeout @timeout ||= options[:socket_timeout] end # @deprecated Please use :socket_timeout instead. Will be removed in 3.0.0 alias :timeout :socket_timeout # Record the last checkin time. # # @example Record the checkin time on this connection. # connection.record_checkin! # # @return [ self ] # # @since 2.5.0 def record_checkin! @last_checkin = Time.now self end private def deliver(message, client, options = {}) handle_errors do super end end def handle_errors begin yield rescue Error::SocketError => e @error = e @server.unknown!( generation: e.generation, # or description.service_id? service_id: e.service_id, stop_push_monitor: true, ) raise rescue Error::SocketTimeoutError => e @error = e raise end end def raise_if_closed! if error? raise Error::ConnectionPerished, "Connection #{generation}:#{id} for #{address.seed} is perished. Reconnecting closed or errored connections is no longer supported" end if closed? raise Error::ConnectionPerished, "Connection #{generation}:#{id} for #{address.seed} is closed. Reconnecting closed or errored connections is no longer supported" end end end end end mongo-ruby-driver-2.21.3/lib/mongo/server/connection_base.rb000066400000000000000000000300221505113246500240360ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2019-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Server # This class encapsulates common connection functionality. 
# # @note Although methods of this module are part of the public API, # the fact that these methods are defined on this module and not on # the classes which include this module is not part of the public API. # # @api semipublic class ConnectionBase < ConnectionCommon extend Forwardable include Monitoring::Publishable # The maximum allowed size in bytes that a user-supplied document may # take up when serialized, if the server's hello response does not # include maxBsonObjectSize field. # # The commands that are sent to the server may exceed this size by # MAX_BSON_COMMAND_OVERHEAD. # # @api private DEFAULT_MAX_BSON_OBJECT_SIZE = 16777216 # The additional overhead allowed for command data (i.e. fields added # to the command document by the driver, as opposed to documents # provided by the user) when serializing a complete command to BSON. # # @api private MAX_BSON_COMMAND_OVERHEAD = 16384 # @api private REDUCED_MAX_BSON_SIZE = 2097152 # @return [ Hash ] options The passed in options. attr_reader :options # @return [ Server ] The server that this connection is for. # # @api private attr_reader :server # @return [ Mongo::Address ] address The address to connect to. def_delegators :server, :address # @deprecated def_delegators :server, :cluster_time, :update_cluster_time # Returns the server description for this connection, derived from # the hello response for the handshake performed on this connection. # # @note A connection object that hasn't yet connected (handshaken and # authenticated, if authentication is required) does not have a # description. While handshaking and authenticating the driver must # be using global defaults, in particular not assuming that the # properties of a particular connection are the same as properties # of other connections made to the same address (since the server # on the other end could have been shut down and a different server # version could have been launched). # # @return [ Server::Description ] Server description for this connection. # @api private attr_reader :description # @deprecated def_delegators :description, :features, :max_bson_object_size, :max_message_size, :mongos? # @return [ nil | Object ] The service id, if any. def service_id description&.service_id end # Connection pool generation from which this connection was created. # May be nil. # # @return [ Integer | nil ] Connection pool generation. def generation # If the connection is to a load balancer, @generation is set # after handshake completes. If the connection is to another server # type, generation is specified during connection creation. @generation || options[:generation] end def app_metadata @app_metadata ||= begin same = true AppMetadata::AUTH_OPTION_KEYS.each do |key| if @server.options[key] != options[key] same = false break end end if same @server.app_metadata else AppMetadata.new(options.merge(purpose: @server.app_metadata.purpose)) end end end # Dispatch a single message to the connection. If the message # requires a response, a reply will be returned. # # @example Dispatch the message. # connection.dispatch([ insert ]) # # @note This method is named dispatch since 'send' is a core Ruby method on # all objects. # # @note For backwards compatibility, this method accepts the messages # as an array. However, exactly one message must be given per invocation. # # @param [ Array ] messages A one-element array containing # the message to dispatch. # @param [ Operation::Context ] context The operation context. 
# @param [ Hash ] options # # @option options [ Boolean ] :deserialize_as_bson Whether to deserialize # the response to this message using BSON objects in place of native # Ruby types wherever possible. # # @return [ Protocol::Message | nil ] The reply if needed. # # @raise [ Error::SocketError | Error::SocketTimeoutError ] When there is a network error. # # @since 2.0.0 def dispatch(messages, context, options = {}) # The monitoring code does not correctly handle multiple messages, # and the driver internally does not send more than one message at # a time ever. Thus prohibit multiple message use for now. if messages.length != 1 raise ArgumentError, 'Can only dispatch one message at a time' end if description.unknown? raise Error::InternalDriverError, "Cannot dispatch a message on a connection with unknown description: #{description.inspect}" end message = messages.first deliver(message, context, options) end private # @raise [ Error::SocketError | Error::SocketTimeoutError ] When there is a network error. def deliver(message, context, options = {}) if Lint.enabled? && !@socket raise Error::LintError, "Trying to deliver a message over a disconnected connection (to #{address})" end buffer = serialize(message, context) check_timeout!(context) ensure_connected do |socket| operation_id = Monitoring.next_operation_id started_event = command_started(address, operation_id, message.payload, socket_object_id: socket.object_id, connection_id: id, connection_generation: generation, server_connection_id: description.server_connection_id, service_id: description.service_id, ) start = Utils.monotonic_time result = nil begin result = add_server_diagnostics do socket.write(buffer.to_s, timeout: context.remaining_timeout_sec) if message.replyable? check_timeout!(context) Protocol::Message.deserialize(socket, max_message_size, message.request_id, options.merge(timeout: context.remaining_timeout_sec)) else nil end end rescue Exception => e total_duration = Utils.monotonic_time - start command_failed(nil, address, operation_id, message.payload, e.message, total_duration, started_event: started_event, server_connection_id: description.server_connection_id, service_id: description.service_id, ) raise else total_duration = Utils.monotonic_time - start command_completed(result, address, operation_id, message.payload, total_duration, started_event: started_event, server_connection_id: description.server_connection_id, service_id: description.service_id, ) end if result && context.decrypt? result = result.maybe_decrypt(context) end result end end def serialize(message, context, buffer = BSON::ByteBuffer.new) # Driver specifications only mandate the fixed 16MiB limit for # serialized BSON documents. However, the server returns its # active serialized BSON document size limit in the hello response, # which is +max_bson_object_size+ below. The +DEFAULT_MAX_BSON_OBJECT_SIZE+ # is the 16MiB value mandated by the specifications which we use # only as the default if the server's hello did not contain # maxBsonObjectSize. max_bson_size = max_bson_object_size || DEFAULT_MAX_BSON_OBJECT_SIZE if context.encrypt? # The client-side encryption specification requires bulk writes to # be split at a reduced maxBsonObjectSize. If this message is a bulk # write and its size exceeds the reduced size limit, the serializer # will raise an exception, which is caught by BulkWrite. BulkWrite # will split the operation into individual writes, which will # not be subject to the reduced maxBsonObjectSize. if message.bulk_write? 
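            # (Illustrative: REDUCED_MAX_BSON_SIZE is 2_097_152 bytes, i.e.
            # 2 MiB, versus the 16 MiB DEFAULT_MAX_BSON_OBJECT_SIZE normally
            # allowed for user-supplied documents.)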
# Make the new maximum size equal to the specified reduced size # limit plus the 16KiB overhead allowance. max_bson_size = REDUCED_MAX_BSON_SIZE end end # RUBY-2234: It is necessary to check that the message size does not # exceed the maximum bson object size before compressing and serializing # the final message. # # This is to avoid the case where the user performs a bulk write # larger than 16MiB which, when compressed, becomes smaller than 16MiB. # If the driver does not split the bulk writes prior to compression, # the entire operation will be sent to the server, which will raise an # error because the uncompressed operation exceeds the maximum bson size. # # To address this problem, we serialize the message prior to compression # and raise an exception if the serialized message exceeds the maximum # bson size. if max_message_size # Create a separate buffer that contains the un-compressed message # for the purpose of checking its size. Write any pre-existing contents # from the original buffer into the temporary one. temp_buffer = BSON::ByteBuffer.new # TODO: address the fact that this line mutates the buffer. temp_buffer.put_bytes(buffer.get_bytes(buffer.length)) message.serialize(temp_buffer, max_bson_size, MAX_BSON_COMMAND_OVERHEAD) if temp_buffer.length > max_message_size raise Error::MaxMessageSize.new(max_message_size) end end # RUBY-2335: When the un-compressed message is smaller than the maximum # bson size limit, the message will be serialized twice. The operations # layer should be refactored to allow compression on an already- # serialized message. final_message = message.maybe_compress(compressor, options[:zlib_compression_level]) final_message.serialize(buffer, max_bson_size, MAX_BSON_COMMAND_OVERHEAD) buffer end # If timeoutMS is set for the operation context, checks whether there is # enough time left to send the corresponding message to the server # (remaining timeout is bigger than minimum round trip time for # the server) # # @param [ Mongo::Operation::Context ] context Context of the operation. # # @raise [ Mongo::Error::TimeoutError ] if timeout expired or there is # not enough time to send the message to the server. def check_timeout!(context) return if [nil, 0].include?(context.deadline) time_to_execute = context.remaining_timeout_sec - server.minimum_round_trip_time if time_to_execute <= 0 raise Mongo::Error::TimeoutError end end end end end mongo-ruby-driver-2.21.3/lib/mongo/server/connection_common.rb000066400000000000000000000162111505113246500244200ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Server # Common methods used by both monitoring and non-monitoring connections. # # @note Although methods of this module are part of the public API, # the fact that these methods are defined on this module and not on # the classes which include this module is not part of the public API. 
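    # An illustrative sketch (not from the original docs; names assumed):
    # given a connection and an app_metadata instance, the handshake body
    # can be built and wrapped into a wire protocol message:
    #
    #   doc = connection.handshake_document(app_metadata)
    #   msg = connection.handshake_command(doc)
    #
    # Both helpers are marked @api private; this is for exposition only.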
    #
    # @api semipublic
    class ConnectionCommon
      # The compressor negotiated during the handshake for this connection,
      # if any.
      #
      # This attribute is nil for connections that haven't completed the
      # handshake yet, and for connections that negotiated no compression.
      #
      # @return [ String | nil ] The compressor.
      attr_reader :compressor

      # Determine if the connection is currently connected.
      #
      # @example Is the connection connected?
      #   connection.connected?
      #
      # @return [ true, false ] If connected.
      #
      # @deprecated
      def connected?
        !!socket
      end

      # @return [ Integer ] pid The process id when the connection was created.
      # @api private
      attr_reader :pid

      # Build a document that should be used for connection handshake.
      #
      # @param [ Server::AppMetadata ] app_metadata Application metadata
      # @param [ BSON::Document ] speculative_auth_doc The speculative
      #   authentication document, if any.
      # @param [ true | false ] load_balancer Whether the connection is to
      #   a load balancer.
      # @param [ Hash | nil ] server_api Server API version.
      #
      # @return [BSON::Document] Document that should be sent to a server
      #   for handshake purposes.
      #
      # @api private
      def handshake_document(app_metadata, speculative_auth_doc: nil, load_balancer: false, server_api: nil)
        serv_api = app_metadata.server_api || server_api
        document = if serv_api
          HELLO_DOC.merge(Utils.transform_server_api(serv_api))
        else
          LEGACY_HELLO_DOC
        end
        document.merge(app_metadata.validated_document).tap do |doc|
          if speculative_auth_doc
            doc.update(speculativeAuthenticate: speculative_auth_doc)
          end
          if load_balancer
            doc.update(loadBalanced: true)
          end
        end
      end

      # Build a command that should be used for connection handshake.
      #
      # @param [ BSON::Document ] handshake_document Document that should be
      #   sent to a server for handshake purpose.
      #
      # @return [ Protocol::Message ] Command that should be sent to a server
      #   for handshake purposes.
      #
      # @api private
      def handshake_command(handshake_document)
        if handshake_document['apiVersion'] || handshake_document['loadBalanced']
          Protocol::Msg.new(
            [], {}, handshake_document.merge({'$db' => Database::ADMIN})
          )
        else
          Protocol::Query.new(
            Database::ADMIN,
            Database::COMMAND,
            handshake_document,
            :limit => -1
          )
        end
      end

      private

      HELLO_DOC = BSON::Document.new({ hello: 1 }).freeze

      LEGACY_HELLO_DOC = BSON::Document.new({ isMaster: 1, helloOk: true }).freeze

      attr_reader :socket

      def set_compressor!(reply)
        server_compressors = reply['compression']

        if options[:compressors]
          if server_compressors && (intersection = server_compressors & options[:compressors]).any?
            @compressor = intersection.first
          else
            msg = if server_compressors
              "The server at #{address} has no compression algorithms in common with those requested. " +
                "Server algorithms: #{server_compressors.join(', ')}; " +
                "Requested algorithms: #{options[:compressors].join(', ')}. " +
                "Compression will not be used"
            else
              "The server at #{address} did not advertise compression support. " +
                "Requested algorithms: #{options[:compressors].join(', ')}. " +
                "Compression will not be used"
            end
            log_warn(msg)
          end
        end
      end

      # Yields to the block and, if the block raises an exception, adds a note
      # to the exception with the address of the specified server.
      #
      # This method is intended to add server address information to exceptions
      # raised during execution of operations on servers.
      def add_server_diagnostics
        yield
      # Note that the exception should already have been mapped to a
      # Mongo::Error subclass when it gets to this method.
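      # The rescue below annotates the error with the failing server's address
      # and, where available, the connection generation, id, and service id,
      # so that operation errors identify the exact connection involved.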
rescue Error::SocketError, Error::SocketTimeoutError => e # Server::Monitor::Connection does not reference its server, but # knows its address. Server::Connection delegates the address to its # server. note = +"on #{address.seed}" if respond_to?(:id) note << ", connection #{generation}:#{id}" end # Non-monitoring connections have service id. # Monitoring connections do not. if respond_to?(:service_id) && service_id note << ", service id #{service_id}" end e.add_note(note) if respond_to?(:generation) # Non-monitoring connections e.generation = generation if respond_to?(:global_id) e.connection_global_id = global_id end if respond_to?(:description) e.service_id = service_id end end raise e end def ssl_options @ssl_options ||= if options[:ssl] options.select { |k, v| k.to_s.start_with?('ssl') } else # Due to the way options are propagated from the client, if we # decide that we don't want to use TLS we need to have the :ssl # option explicitly set to false or the value provided to the # connection might be overwritten by the default inherited from # the client. {ssl: false} end.freeze end def ensure_connected begin unless socket raise ArgumentError, "Connection #{generation}:#{id} for #{address.seed} is not connected" end if @error raise Error::ConnectionPerished, "Connection #{generation}:#{id} for #{address.seed} is perished" end result = yield socket success = true result ensure unless success @error = true end end end end end end mongo-ruby-driver-2.21.3/lib/mongo/server/connection_pool.rb000066400000000000000000001437711505113246500241150ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Server # Represents a connection pool for server connections. # # @since 2.0.0, largely rewritten in 2.9.0 class ConnectionPool include Loggable include Monitoring::Publishable extend Forwardable # The default max size for the connection pool. # # @since 2.9.0 DEFAULT_MAX_SIZE = 20 # The default min size for the connection pool. # # @since 2.9.0 DEFAULT_MIN_SIZE = 0 # The default maximum number of connections that can be connecting at # any given time. DEFAULT_MAX_CONNECTING = 2 # The default timeout, in seconds, to wait for a connection. # # This timeout applies while in flow threads are waiting for background # threads to establish connections (and hence they must connect, handshake # and auth in the allotted time). # # It is currently set to 10 seconds. The default connect timeout is # 10 seconds by itself, but setting large timeouts can get applications # in trouble if their requests get timed out by the reverse proxy, # thus anything over 15 seconds is potentially dangerous. # # @since 2.9.0 DEFAULT_WAIT_TIMEOUT = 10.freeze # Condition variable broadcast when the size of the pool changes # to wake up the populator attr_reader :populate_semaphore # Create the new connection pool. # # @param [ Server ] server The server which this connection pool is for. 
      # @param [ Hash ] options The connection pool options.
      #
      # @option options [ Integer ] :max_size The maximum pool size. Setting
      #   this option to zero creates an unlimited connection pool.
      # @option options [ Integer ] :max_connecting The maximum number of
      #   connections that can be connecting simultaneously. The default is 2.
      #   This option should be increased if there are many threads that share
      #   the same connection pool and the application is experiencing timeouts
      #   while waiting for connections to be established.
      # @option options [ Integer ] :max_pool_size Deprecated.
      #   The maximum pool size. If max_size is also given, max_size and
      #   max_pool_size must be identical.
      # @option options [ Integer ] :min_size The minimum pool size.
      # @option options [ Integer ] :min_pool_size Deprecated.
      #   The minimum pool size. If min_size is also given, min_size and
      #   min_pool_size must be identical.
      # @option options [ Float ] :wait_timeout The time to wait, in
      #   seconds, for a free connection.
      # @option options [ Float ] :wait_queue_timeout Deprecated.
      #   Alias for :wait_timeout. If both wait_timeout and wait_queue_timeout
      #   are given, their values must be identical.
      # @option options [ Float ] :max_idle_time The time, in seconds,
      #   after which idle connections should be closed by the pool.
      # @option options [ true, false ] :populator_io For internal driver
      #   use only. Set to false to prevent the populator threads from being
      #   created and started in the server's connection pool. It is intended
      #   for use in tests that also turn off monitoring_io, unless the populator
      #   is explicitly needed. If monitoring_io is off, but the populator_io
      #   is on, the populator needs to be manually closed at the end of the
      #   test, since a cluster without monitoring is considered not connected,
      #   and thus will not clean up the connection pool populator threads on
      #   close.
      # Note: Additionally, options for connections created by this pool should
      #   be included in the options passed here, and they will be forwarded to
      #   any connections created by the pool.
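      #
      # @example Create a pool with explicit sizing (illustrative only;
      #   pools are normally created internally by the driver):
      #   pool = Mongo::Server::ConnectionPool.new(server,
      #     min_size: 1, max_size: 5, wait_timeout: 5)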
      #
      # @since 2.0.0, API changed in 2.9.0
      def initialize(server, options = {})
        unless server.is_a?(Server)
          raise ArgumentError, 'First argument must be a Server instance'
        end
        options = options.dup
        if options[:min_size] && options[:min_pool_size] && options[:min_size] != options[:min_pool_size]
          raise ArgumentError, "Min size #{options[:min_size]} is not identical to min pool size #{options[:min_pool_size]}"
        end
        if options[:max_size] && options[:max_pool_size] && options[:max_size] != options[:max_pool_size]
          raise ArgumentError, "Max size #{options[:max_size]} is not identical to max pool size #{options[:max_pool_size]}"
        end
        if options[:wait_timeout] && options[:wait_queue_timeout] && options[:wait_timeout] != options[:wait_queue_timeout]
          raise ArgumentError, "Wait timeout #{options[:wait_timeout]} is not identical to wait queue timeout #{options[:wait_queue_timeout]}"
        end
        options[:min_size] ||= options[:min_pool_size]
        options.delete(:min_pool_size)
        options[:max_size] ||= options[:max_pool_size]
        options.delete(:max_pool_size)
        if options[:min_size] && options[:max_size] &&
          (options[:max_size] != 0 && options[:min_size] > options[:max_size])
        then
          raise ArgumentError, "Cannot have min size #{options[:min_size]} exceed max size #{options[:max_size]}"
        end
        if options[:wait_queue_timeout]
          options[:wait_timeout] ||= options[:wait_queue_timeout]
        end
        options.delete(:wait_queue_timeout)

        @server = server
        @options = options.freeze

        @generation_manager = GenerationManager.new(server: server)
        @ready = false
        @closed = false

        # A connection owned by this pool should be either in the
        # available connections array (which is used as a stack)
        # or in the checked out connections set.
        @available_connections = available_connections = []
        @checked_out_connections = Set.new
        @pending_connections = Set.new
        @interrupt_connections = []

        # Mutex used for synchronizing access to @available_connections and
        # @checked_out_connections. The pool object is thread-safe, thus
        # all methods that retrieve or modify instance variables generally
        # must do so under this lock.
        @lock = Mutex.new

        # Background thread responsible for maintaining the size of
        # the pool to at least min_size
        @populator = Populator.new(self, options)
        @populate_semaphore = Semaphore.new

        # Condition variable to enforce the first check in check_out: max_pool_size.
        # This condition variable should be signaled when the number of
        # unavailable connections decreases (pending + pending_connections +
        # checked_out_connections).
        @size_cv = Mongo::ConditionVariable.new(@lock)
        # This represents the number of threads that have made it past the size_cv
        # gate but have not acquired a connection to add to the pending_connections
        # set.
        @connection_requests = 0

        # Condition variable to enforce the second check in check_out: max_connecting.
        # This condition variable should be signaled when the number of pending
        # connections decreases.
        @max_connecting_cv = Mongo::ConditionVariable.new(@lock)
        @max_connecting = options.fetch(:max_connecting, DEFAULT_MAX_CONNECTING)

        ObjectSpace.define_finalizer(self, self.class.finalize(@available_connections, @pending_connections, @populator))

        publish_cmap_event(
          Monitoring::Event::Cmap::PoolCreated.new(@server.address, options, self)
        )
      end

      # @return [ Hash ] options The pool options.
      attr_reader :options

      # @api private
      attr_reader :server

      # @api private
      def_delegators :server, :address

      # Get the maximum size of the connection pool.
      #
      # @return [ Integer ] The maximum size of the connection pool.
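      #   (Illustrative: with no sizing options this is DEFAULT_MAX_SIZE,
      #   i.e. 20; with min_size: 30 and no max_size it is 30, because the
      #   default is [DEFAULT_MAX_SIZE, min_size].max.)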
      #
      # @since 2.9.0
      def max_size
        @max_size ||= options[:max_size] || [DEFAULT_MAX_SIZE, min_size].max
      end

      # Get the minimum size of the connection pool.
      #
      # @return [ Integer ] The minimum size of the connection pool.
      #
      # @since 2.9.0
      def min_size
        @min_size ||= options[:min_size] || DEFAULT_MIN_SIZE
      end

      # The time to wait, in seconds, for a connection to become available.
      #
      # @param [ Mongo::Operation:Context | nil ] context Context of the operation
      #   the connection is requested for, if any.
      #
      # @return [ Float ] The queue wait timeout.
      #
      # @since 2.9.0
      def wait_timeout(context = nil)
        if context&.remaining_timeout_sec.nil?
          options[:wait_timeout] || DEFAULT_WAIT_TIMEOUT
        else
          context&.remaining_timeout_sec
        end
      end

      # The maximum seconds a socket can remain idle since it has been
      # checked in to the pool, if set.
      #
      # @return [ Float | nil ] The max socket idle time in seconds.
      #
      # @since 2.9.0
      def max_idle_time
        @max_idle_time ||= options[:max_idle_time]
      end

      # @api private
      attr_reader :generation_manager

      # @return [ Integer ] generation Generation of connections currently
      #   being used by the queue.
      #
      # @api private
      def_delegators :generation_manager, :generation, :generation_unlocked

      # A connection pool is paused if it is not closed and it is not ready.
      #
      # @return [ true | false ] whether the connection pool is paused.
      #
      # @raise [ Error::PoolClosedError ] If the pool has been closed.
      def paused?
        raise_if_closed!

        @lock.synchronize do
          !@ready
        end
      end

      # Size of the connection pool.
      #
      # Includes available and checked out connections.
      #
      # @return [ Integer ] Size of the connection pool.
      #
      # @since 2.9.0
      def size
        raise_if_closed!

        @lock.synchronize do
          unsynchronized_size
        end
      end

      # Returns the size of the connection pool without acquiring the lock.
      # This method should only be used by other pool methods when they are
      # already holding the lock as Ruby does not allow a thread holding a
      # lock to acquire this lock again.
      def unsynchronized_size
        @available_connections.length + @checked_out_connections.length + @pending_connections.length
      end
      private :unsynchronized_size

      # @return [ Integer ] The number of unavailable connections in the pool.
      #   Used to calculate whether we have hit max_pool_size.
      #
      # @api private
      def unavailable_connections
        @checked_out_connections.length + @pending_connections.length + @connection_requests
      end

      # Number of available connections in the pool.
      #
      # @return [ Integer ] Number of available connections.
      #
      # @since 2.9.0
      def available_count
        raise_if_closed!

        @lock.synchronize do
          @available_connections.length
        end
      end

      # Whether the pool has been closed.
      #
      # @return [ true | false ] Whether the pool is closed.
      #
      # @since 2.9.0
      def closed?
        !!@closed
      end

      # Whether the pool is ready.
      #
      # @return [ true | false ] Whether the pool is ready.
      def ready?
        @lock.synchronize do
          @ready
        end
      end

      # @note This method is experimental and subject to change.
      #
      # @api experimental
      # @since 2.11.0
      def summary
        @lock.synchronize do
          state = if closed?
            'closed'
          elsif !@ready
            'paused'
          else
            'ready'
          end
          "#<ConnectionPool size=#{unsynchronized_size} (#{min_size}-#{max_size}) " +
            "used=#{@checked_out_connections.length} avail=#{@available_connections.length} " +
            "pending=#{@pending_connections.length} #{state}>"
        end
      end

      # @since 2.9.0
      def_delegators :@server, :monitoring

      # @api private
      attr_reader :populator

      # @api private
      attr_reader :max_connecting

      # Checks a connection out of the pool.
      #
      # If there are active connections in the pool, the most recently used
      # connection is returned. Otherwise if the connection pool size is less
      # than the max size, creates a new connection and returns it.
      # Otherwise waits up to the wait timeout and raises Timeout::Error if
      # there are still no active connections and the pool is at max size.
      #
      # The returned connection counts toward the pool's max size. When the
      # caller is finished using the connection, the connection should be
      # checked back in via the check_in method.
      #
      # @param [ Integer | nil ] :connection_global_id The global id for the
      #   connection to check out.
      # @param [ Mongo::Operation:Context | nil ] :context Context of the operation
      #   the connection is requested for, if any.
      #
      # @return [ Mongo::Server::Connection ] The checked out connection.
      # @raise [ Error::PoolClosedError ] If the pool has been closed.
      # @raise [ Timeout::Error ] If the connection pool is at maximum size
      #   and remains so for longer than the wait timeout.
      #
      # @since 2.9.0
      def check_out(connection_global_id: nil, context: nil)
        check_invariants

        publish_cmap_event(
          Monitoring::Event::Cmap::ConnectionCheckOutStarted.new(@server.address)
        )

        raise_if_pool_closed!
        raise_if_pool_paused_locked!

        connection = retrieve_and_connect_connection(
          connection_global_id, context
        )

        publish_cmap_event(
          Monitoring::Event::Cmap::ConnectionCheckedOut.new(@server.address, connection.id, self),
        )

        if Lint.enabled?
          unless connection.connected?
            raise Error::LintError, "Connection pool for #{address} checked out a disconnected connection #{connection.generation}:#{connection.id}"
          end
        end

        connection
      ensure
        check_invariants
      end

      # Check a connection back into the pool.
      #
      # The connection must have been previously created by this pool.
      #
      # @param [ Mongo::Server::Connection ] connection The connection.
      #
      # @since 2.9.0
      def check_in(connection)
        check_invariants

        @lock.synchronize do
          do_check_in(connection)
        end
      ensure
        check_invariants
      end

      # Executes the check in after having already acquired the lock.
      #
      # @param [ Mongo::Server::Connection ] connection The connection.
      def do_check_in(connection)
        # When a connection is interrupted it is checked back into the pool
        # and closed. The operation that was using the connection before it was
        # interrupted will attempt to check it back into the pool, and we
        # should ignore it since it has already been closed and removed from the pool.
        return if connection.closed? && connection.interrupted?

        unless connection.connection_pool == self
          raise ArgumentError, "Trying to check in a connection which was not checked out by this pool: #{connection} checked out from pool #{connection.connection_pool} (for #{self})"
        end

        unless @checked_out_connections.include?(connection)
          raise ArgumentError, "Trying to check in a connection which is not currently checked out by this pool: #{connection} (for #{self})"
        end

        # Note: if an event handler raises, resource will not be signaled.
        # This means threads waiting for a connection to free up when
        # the pool is at max size may time out.
        # Threads that begin waiting after this method completes (with
        # the exception) should be fine.

        @checked_out_connections.delete(connection)
        @size_cv.signal

        publish_cmap_event(
          Monitoring::Event::Cmap::ConnectionCheckedIn.new(@server.address, connection.id, self)
        )

        if connection.interrupted?
          connection.disconnect!(reason: :stale)
          return
        end

        if connection.error?
          connection.disconnect!(reason: :error)
          return
        end

        if closed?
          connection.disconnect!(reason: :pool_closed)
          return
        end

        if connection.closed?
          # Connection was closed - for example, because it experienced
          # a network error. Nothing else needs to be done here.
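          # Waking the populator lets the background thread replace the
          # closed connection if the pool has fallen below min_size.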
          @populate_semaphore.signal
        elsif connection.generation != generation(service_id: connection.service_id) && !connection.pinned?
          # If connection is marked as pinned, it is used by a transaction
          # or a series of cursor operations in a load balanced setup.
          # In this case connection should not be disconnected until
          # unpinned.
          connection.disconnect!(reason: :stale)
          @populate_semaphore.signal
        else
          connection.record_checkin!
          @available_connections << connection

          @max_connecting_cv.signal
        end
      end

      # Mark the connection pool as paused.
      def pause
        raise_if_closed!

        check_invariants

        @lock.synchronize do
          do_pause
        end
      ensure
        check_invariants
      end

      # Mark the connection pool as paused without acquiring the lock.
      #
      # @api private
      def do_pause
        if Lint.enabled? && !@server.unknown?
          raise Error::LintError, "Attempting to pause pool for server #{@server.summary} which is known"
        end

        return if !@ready

        @ready = false
      end

      # Closes all idle connections in the pool and schedules currently checked
      # out connections to be closed when they are checked back into the pool.
      # The pool is paused; it will not create new connections in the background
      # and it will fail checkout requests until marked ready.
      #
      # @option options [ true | false ] :lazy If true, do not close any of
      #   the idle connections and instead let them be closed during a
      #   subsequent check out operation. Defaults to false.
      # @option options [ true | false ] :interrupt_in_use_connections If true,
      #   close all checked out connections immediately. If it is false, do not
      #   close any of the checked out connections. Defaults to true.
      # @option options [ Object ] :service_id Clear connections with
      #   the specified service id only.
      #
      # @return [ true ] true.
      #
      # @since 2.1.0
      def clear(options = nil)
        raise_if_closed!

        if Lint.enabled? && !@server.unknown?
          raise Error::LintError, "Attempting to clear pool for server #{@server.summary} which is known"
        end

        do_clear(options)
      end

      # Disconnects the pool.
      #
      # Does everything that +clear+ does, except if the pool is closed
      # this method does nothing but +clear+ would raise PoolClosedError.
      #
      # @since 2.1.0
      # @api private
      def disconnect!(options = nil)
        do_clear(options)
      rescue Error::PoolClosedError
        # The "disconnected" state is between closed and paused.
        # When we are trying to disconnect the pool, permit the pool to be
        # already closed.
      end

      def do_clear(options = nil)
        check_invariants

        service_id = options && options[:service_id]

        @lock.synchronize do
          # Generation must be bumped before emitting pool cleared event.
          @generation_manager.bump(service_id: service_id)

          unless options && options[:lazy]
            close_available_connections(service_id)
          end

          if options && options[:interrupt_in_use_connections]
            schedule_for_interruption(@checked_out_connections, service_id)
            schedule_for_interruption(@pending_connections, service_id)
          end

          if @ready
            publish_cmap_event(
              Monitoring::Event::Cmap::PoolCleared.new(
                @server.address,
                service_id: service_id,
                interrupt_in_use_connections: options&.[](:interrupt_in_use_connections)
              )
            )
            # Only pause the connection pool if the server was marked unknown,
            # otherwise, allow the retry to be attempted with a ready pool.
            do_pause if !@server.load_balancer? && @server.unknown?
          end

          # Broadcast here to cause all of the threads waiting on the max
          # connecting to break out of the wait loop and error.
          @max_connecting_cv.broadcast
          # Broadcast here to cause all of the threads waiting on the pool size
          # to break out of the wait loop and error.
          @size_cv.broadcast
        end

        # "Schedule the background thread" after clearing.
        # This is responsible for cleaning up stale threads and interrupting
        # in-use connections.
        @populate_semaphore.signal
        true
      ensure
        check_invariants
      end

      # Instructs the pool to create and return connections.
      def ready
        raise_if_closed!

        # TODO: Add this back in RUBY-3174.
        # if Lint.enabled?
        #   unless @server.connected?
        #     raise Error::LintError, "Attempting to ready a pool for server #{@server.summary} which is disconnected"
        #   end
        # end

        @lock.synchronize do
          return if @ready

          @ready = true
        end

        # Note that the CMAP spec demands serialization of CMAP events for a
        # pool. In order to implement this, event publication must be done into
        # a queue which is synchronized, instead of subscribers being invoked
        # from the trigger method like this one here inline. On MRI, assuming
        # the threads yield to others when they stop having work to do, it is
        # likely that the events would in practice always be published in the
        # required order. JRuby, being truly concurrent with OS threads,
        # would not offer such a guarantee.
        publish_cmap_event(
          Monitoring::Event::Cmap::PoolReady.new(@server.address, options, self)
        )

        if options.fetch(:populator_io, true)
          if @populator.running?
            @populate_semaphore.signal
          else
            @populator.run!
          end
        end
      end

      # Marks the pool closed, closes all idle connections in the pool and
      # schedules currently checked out connections to be closed when they are
      # checked back into the pool. If force option is true, checked out
      # connections are also closed. Attempts to use the pool after it is closed
      # will raise Error::PoolClosedError.
      #
      # @option options [ true | false ] :force Also close all checked out
      #   connections.
      # @option options [ true | false ] :stay_ready For internal driver use
      #   only. Whether or not to mark the pool as closed.
      #
      # @return [ true ] Always true.
      #
      # @since 2.9.0
      def close(options = nil)
        return if closed?

        options ||= {}

        stop_populator

        @lock.synchronize do
          until @available_connections.empty?
            connection = @available_connections.pop
            connection.disconnect!(reason: :pool_closed)
          end

          if options[:force]
            until @checked_out_connections.empty?
              connection = @checked_out_connections.take(1).first
              connection.disconnect!(reason: :pool_closed)
              @checked_out_connections.delete(connection)
            end
          end

          unless options && options[:stay_ready]
            # mark pool as closed before releasing lock so
            # no connections can be created, checked in, or checked out
            @closed = true
            @ready = false
          end

          @max_connecting_cv.broadcast
          @size_cv.broadcast

          @generation_manager.close_all_pipes
        end

        publish_cmap_event(
          Monitoring::Event::Cmap::PoolClosed.new(@server.address, self)
        )

        true
      end

      # Get a pretty printed string inspection for the pool.
      #
      # @example Inspect the pool.
      #   pool.inspect
      #
      # @return [ String ] The pool inspection.
      #
      # @since 2.0.0
      def inspect
        if closed?
          "#<Mongo::Server::ConnectionPool:0x#{object_id} min_size=#{min_size} max_size=#{max_size} " +
            "wait_timeout=#{wait_timeout} closed>"
        elsif !ready?
          "#<Mongo::Server::ConnectionPool:0x#{object_id} min_size=#{min_size} max_size=#{max_size} " +
            "wait_timeout=#{wait_timeout} paused>"
        else
          "#<Mongo::Server::ConnectionPool:0x#{object_id} min_size=#{min_size} max_size=#{max_size} " +
            "wait_timeout=#{wait_timeout} current_size=#{size} available=#{available_count}>"
        end
      end

      # Yield the block to a connection, while handling check in/check out logic.
      #
      # @example Execute with a connection.
      #   pool.with_connection do |connection|
      #     connection.read
      #   end
      #
      # @return [ Object ] The result of the block.
      #
      # @since 2.0.0
      def with_connection(connection_global_id: nil, context: nil)
        raise_if_closed!

        connection = check_out(
          connection_global_id: connection_global_id,
          context: context
        )
        yield(connection)
      rescue Error::SocketError, Error::SocketTimeoutError, Error::ConnectionPerished => e
        maybe_raise_pool_cleared!(connection, e)
      ensure
        if connection
          check_in(connection)
        end
      end

      # Close sockets that have been open for longer than the max idle time,
      # if the option is set.
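      # (Illustrative: in a pool created with max_idle_time: 60, an available
      # connection whose last check-in was more than 60 seconds ago is
      # disconnected with reason :idle the next time this method runs.)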
      #
      # @since 2.5.0
      def close_idle_sockets
        return if closed?
        return unless max_idle_time

        @lock.synchronize do
          i = 0
          while i < @available_connections.length
            connection = @available_connections[i]
            if last_checkin = connection.last_checkin
              if (Time.now - last_checkin) > max_idle_time
                connection.disconnect!(reason: :idle)
                @available_connections.delete_at(i)
                @populate_semaphore.signal
                next
              end
            end
            i += 1
          end
        end
      end

      # Stop the background populator thread and clean up any connections created
      # which have not been connected yet.
      #
      # Used when closing the pool or when terminating the bg thread for testing
      # purposes. In the latter case, this method must be called before the pool
      # is used, to ensure no connections in pending_connections were created in-flow
      # by the check_out method.
      #
      # @api private
      def stop_populator
        @populator.stop!

        @lock.synchronize do
          # If stop_populator is called while populate is running, there may be
          # connections waiting to be connected, connections which have not yet
          # been moved to available_connections, or connections moved to available_connections
          # but not deleted from pending_connections. These should be cleaned up.
          clear_pending_connections
        end
      end

      # This method does three things:
      # 1. Creates and adds a connection to the pool, if the pool's size is
      #    below min_size. Retries once if a socket-related error is
      #    encountered during this process and raises if a second error or a
      #    non socket-related error occurs.
      # 2. Removes stale connections from the connection pool.
      # 3. Interrupts connections marked for interruption.
      #
      # Used by the pool populator background thread.
      #
      # @return [ true | false ] Whether this method should be called again
      #   to create more connections.
      # @raise [ Error::AuthError, Error ] The second socket-related error raised if a retry
      #   occurred, or the non socket-related error.
      #
      # @api private
      def populate
        return false if closed?

        begin
          return create_and_add_connection
        rescue Error::SocketError, Error::SocketTimeoutError => e
          # an error was encountered while connecting the connection,
          # ignore this first error and try again.
          log_warn("Populator failed to connect a connection for #{address}: #{e.class}: #{e}. It will retry.")
        end

        return create_and_add_connection
      end

      # Finalize the connection pool for garbage collection.
      #
      # @param [ List ] available_connections The available connections.
      # @param [ List ] pending_connections The pending connections.
      # @param [ Populator ] populator The populator.
      #
      # @return [ Proc ] The Finalizer.
      def self.finalize(available_connections, pending_connections, populator)
        proc do
          available_connections.each do |connection|
            connection.disconnect!(reason: :pool_closed)
          end
          available_connections.clear

          pending_connections.each do |connection|
            connection.disconnect!(reason: :pool_closed)
          end
          pending_connections.clear

          # Finalizer does not close checked out connections.
          # Those would have to be garbage collected on their own
          # and that should close them.
        end
      end

      private

      # Returns the next available connection, optionally with given
      # global id. If no suitable connections are available,
      # returns nil.
      def next_available_connection(connection_global_id)
        raise_unless_locked!

        if @server.load_balancer?
&& connection_global_id conn = @available_connections.detect do |conn| conn.global_id == connection_global_id end if conn @available_connections.delete(conn) end conn else @available_connections.pop end end def create_connection r, _ = @generation_manager.pipe_fds(service_id: server.description.service_id) opts = options.merge( connection_pool: self, pipe: r # Do not pass app metadata - this will be retrieved by the connection # based on the auth needs. ) unless @server.load_balancer? opts[:generation] = generation end Connection.new(@server, opts) end # Create a connection, connect it, and add it to the pool. Also # check for stale and interruptable connections and deal with them. # # @return [ true | false ] True if a connection was created and # added to the pool, false otherwise # @raise [ Mongo::Error ] An error encountered during connection connect def create_and_add_connection connection = nil @lock.synchronize do if !closed? && @ready && (unsynchronized_size + @connection_requests) < min_size && @pending_connections.length < @max_connecting then connection = create_connection @pending_connections << connection else return true if remove_interrupted_connections return true if remove_stale_connection return false end end begin connect_connection(connection) rescue Exception @lock.synchronize do @pending_connections.delete(connection) @max_connecting_cv.signal @size_cv.signal end raise end @lock.synchronize do @available_connections << connection @pending_connections.delete(connection) @max_connecting_cv.signal @size_cv.signal end true end # Removes and disconnects all stale available connections. def remove_stale_connection if conn = @available_connections.detect(&method(:connection_stale_unlocked?)) conn.disconnect!(reason: :stale) @available_connections.delete(conn) return true end end # Interrupt connections scheduled for interruption. def remove_interrupted_connections return false if @interrupt_connections.empty? gens = Set.new while conn = @interrupt_connections.pop if @checked_out_connections.include?(conn) # If the connection has been checked out, mark it as interrupted and it will # be disconnected on check in. conn.interrupted! do_check_in(conn) elsif @pending_connections.include?(conn) # If the connection is pending, disconnect with the interrupted flag. conn.disconnect!(reason: :stale, interrupted: true) @pending_connections.delete(conn) end gens << [ conn.generation, conn.service_id ] end # Close the write side of the pipe. Pending connections might be # hanging on the Kernel#select call, so in order to interrupt that, # we also listen for the read side of the pipe in Kernel#select and # close the write side of the pipe here, which will cause select to # wake up and raise an IOError now that the socket is closed. # The read side of the pipe will be scheduled for closing on the next # generation bump. gens.each do |gen, service_id| @generation_manager.remove_pipe_fds(gen, service_id: service_id) end true end # Checks whether a connection is stale. # # @param [ Mongo::Server::Connection ] connection The connection to check. # # @return [ true | false ] Whether the connection is stale. def connection_stale_unlocked?(connection) connection.generation != generation_unlocked(service_id: connection.service_id) && !connection.pinned? end # Asserts that the pool has not been closed. # # @raise [ Error::PoolClosedError ] If the pool has been closed. # # @since 2.9.0 def raise_if_closed! if closed? 
          raise Error::PoolClosedError.new(@server.address, self)
        end
      end

      # If the connection was interrupted, raise a pool cleared error. If it
      # wasn't interrupted, raise the original error.
      #
      # @param [ Connection ] connection The connection.
      # @param [ Mongo::Error ] e The original error.
      #
      # @raise [ Mongo::Error | Mongo::Error::PoolClearedError ] A PoolClearedError
      #   if the connection was interrupted, the original error if not.
      def maybe_raise_pool_cleared!(connection, e)
        if connection&.interrupted?
          err = Error::PoolClearedError.new(connection.server.address, connection.server.pool_internal).tap do |err|
            e.labels.each { |l| err.add_label(l) }
          end
          raise err
        else
          raise e
        end
      end

      # Attempts to connect (handshake and auth) the connection. If an error is
      # encountered, closes the connection and raises the error.
      def connect_connection(connection, context = nil)
        begin
          connection.connect!(context)
        rescue Exception
          connection.disconnect!(reason: :error)
          raise
        end
      rescue Error::SocketError, Error::SocketTimeoutError => exc
        @server.unknown!(
          generation: exc.generation,
          service_id: exc.service_id,
          stop_push_monitor: true,
        )
        raise
      end

      def check_invariants
        return unless Lint.enabled?

        # Server summary calls pool summary which requires pool lock -> deadlock.
        # Obtain the server summary ahead of time.
        server_summary = @server.summary

        @lock.synchronize do
          @available_connections.each do |connection|
            if connection.closed?
              raise Error::LintError, "Available connection is closed: #{connection} for #{server_summary}"
            end
          end

          @pending_connections.each do |connection|
            if connection.closed?
              raise Error::LintError, "Pending connection is closed: #{connection} for #{server_summary}"
            end
          end
        end
      end

      # Close the available connections.
      #
      # @param [ Object ] service_id The service id.
      def close_available_connections(service_id)
        if @server.load_balancer? && service_id
          loop do
            conn = @available_connections.detect do |conn|
              conn.service_id == service_id &&
                conn.generation < @generation_manager.generation(service_id: service_id)
            end
            if conn
              @available_connections.delete(conn)
              conn.disconnect!(reason: :stale, interrupted: true)
              @populate_semaphore.signal
            else
              break
            end
          end
        else
          @available_connections.delete_if do |conn|
            if conn.generation < @generation_manager.generation(service_id: service_id)
              conn.disconnect!(reason: :stale, interrupted: true)
              @populate_semaphore.signal
              true
            end
          end
        end
      end

      # Schedule connections of previous generations for interruption.
      #
      # @param [ Array ] connections A list of connections.
      # @param [ Object ] service_id The service id.
      def schedule_for_interruption(connections, service_id)
        @interrupt_connections += connections.select do |conn|
          (!server.load_balancer? || conn.service_id == service_id) &&
            conn.generation < @generation_manager.generation(service_id: service_id)
        end
      end

      # Clear and disconnect the pending connections.
      def clear_pending_connections
        until @pending_connections.empty?
          connection = @pending_connections.take(1).first
          connection.disconnect!
          @pending_connections.delete(connection)
        end
      end

      # The lock should be acquired when calling this method.
      def raise_check_out_timeout!(connection_global_id)
        raise_unless_locked!
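        # Publish the CMAP check-out-failed event (reason: timeout) before
        # raising, so monitoring subscribers observe the failure as well as
        # the caller.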
      # The lock should be acquired when calling this method.
      def raise_check_out_timeout!(connection_global_id)
        raise_unless_locked!

        publish_cmap_event(
          Monitoring::Event::Cmap::ConnectionCheckOutFailed.new(
            @server.address,
            Monitoring::Event::Cmap::ConnectionCheckOutFailed::TIMEOUT,
          ),
        )

        connection_global_id_msg = if connection_global_id
          " for connection #{connection_global_id}"
        else
          ''
        end

        msg = "Timed out attempting to check out a connection " +
          "from pool for #{@server.address}#{connection_global_id_msg} after #{wait_timeout} sec. " +
          "Connections in pool: #{@available_connections.length} available, " +
          "#{@checked_out_connections.length} checked out, " +
          "#{@pending_connections.length} pending, " +
          "#{@connection_requests} connection requests " +
          "(max size: #{max_size})"
        raise Error::ConnectionCheckOutTimeout.new(msg, address: @server.address)
      end

      def raise_check_out_timeout_locked!(connection_global_id)
        @lock.synchronize do
          raise_check_out_timeout!(connection_global_id)
        end
      end

      def raise_if_pool_closed!
        if closed?
          publish_cmap_event(
            Monitoring::Event::Cmap::ConnectionCheckOutFailed.new(
              @server.address,
              Monitoring::Event::Cmap::ConnectionCheckOutFailed::POOL_CLOSED
            ),
          )
          raise Error::PoolClosedError.new(@server.address, self)
        end
      end

      def raise_if_pool_paused!
        raise_unless_locked!

        if !@ready
          publish_cmap_event(
            Monitoring::Event::Cmap::ConnectionCheckOutFailed.new(
              @server.address,
              # CMAP spec decided to conflate pool paused with all the other
              # possible non-timeout errors.
              Monitoring::Event::Cmap::ConnectionCheckOutFailed::CONNECTION_ERROR,
            ),
          )
          raise Error::PoolPausedError.new(@server.address, self)
        end
      end

      def raise_if_pool_paused_locked!
        @lock.synchronize do
          raise_if_pool_paused!
        end
      end

      # The lock should be acquired when calling this method.
      def raise_if_not_ready!
        raise_unless_locked!
        raise_if_pool_closed!
        raise_if_pool_paused!
      end

      def raise_unless_locked!
        unless @lock.owned?
          raise ArgumentError, "the lock must be owned when calling this method"
        end
      end

      def valid_available_connection?(connection, pid, connection_global_id)
        if connection.pid != pid
          log_warn("Detected PID change - Mongo client should have been reconnected (old pid #{connection.pid}, new pid #{pid})")
          connection.disconnect!(reason: :stale)
          @populate_semaphore.signal
          return false
        end

        if !connection.pinned?
          # If connection is marked as pinned, it is used by a transaction
          # or a series of cursor operations in a load balanced setup.
          # In this case connection should not be disconnected until
          # unpinned.
          if connection.generation != generation(
            service_id: connection.service_id
          )
            # Stale connections should be disconnected in the clear
            # method, but if any don't, check again here
            connection.disconnect!(reason: :stale)
            @populate_semaphore.signal
            return false
          end

          if max_idle_time && connection.last_checkin &&
            Time.now - connection.last_checkin > max_idle_time
          then
            connection.disconnect!(reason: :idle)
            @populate_semaphore.signal
            return false
          end
        end

        true
      end
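      # Illustration of the idle check above; max_idle_time and the timestamps
      # here are assumed example values, not driver defaults:
      #
      #   max_idle_time                       # => 60 (seconds)
      #   connection.last_checkin             # => Time.now - 120
      #   Time.now - connection.last_checkin  # => ~120.0
      #   # 120 > 60, so the connection is disconnected with reason: :idle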
      # Retrieves a connection if one is available, otherwise we create a new
      # one. If no connection exists and the pool is at max size, wait until
      # a connection is checked back into the pool.
      #
      # @param [ Integer ] pid The current process id.
      # @param [ Integer ] connection_global_id The global id for the
      #   connection to check out.
      #
      # @return [ Mongo::Server::Connection ] The checked out connection.
      #
      # @raise [ Error::PoolClosedError ] If the pool has been closed.
      # @raise [ Timeout::Error ] If the connection pool is at maximum size
      #   and remains so for longer than the wait timeout.
      def get_connection(pid, connection_global_id)
        if connection = next_available_connection(connection_global_id)
          unless valid_available_connection?(connection, pid, connection_global_id)
            return nil
          end

          # We've got a connection, so we decrement the number of connection
          # requests.
          # We do not need to signal the condition variable here, because
          # the execution will continue, and we signal later.
          @connection_requests -= 1

          # If the connection is connected, it's not considered a
          # "pending connection". The pending_connections list represents
          # the set of connections that are awaiting connection.
          unless connection.connected?
            @pending_connections << connection
          end
          return connection
        elsif connection_global_id && @server.load_balancer?
          # A particular connection is requested, but it is not available.
          # If it is neither available nor checked out, we should stop here.
          @checked_out_connections.detect do |conn|
            conn.global_id == connection_global_id
          end.tap do |conn|
            if conn.nil?
              publish_cmap_event(
                Monitoring::Event::Cmap::ConnectionCheckOutFailed.new(
                  @server.address,
                  Monitoring::Event::Cmap::ConnectionCheckOutFailed::CONNECTION_ERROR
                ),
              )
              # We're going to raise, so we need to decrement the number of
              # connection requests.
              decrement_connection_requests_and_signal
              raise Error::MissingConnection.new
            end
          end
          # We need a particular connection, and if it is not available
          # we can wait for an in-progress operation to return
          # such a connection to the pool.
          nil
        else
          connection = create_connection
          @connection_requests -= 1
          @pending_connections << connection
          return connection
        end
      end

      # Retrieves a connection and connects it.
      #
      # @param [ Integer | nil ] connection_global_id The global id for the
      #   connection to check out.
      # @param [ Mongo::Operation::Context | nil ] context Context of the operation
      #   the connection is requested for, if any.
      #
      # @return [ Mongo::Server::Connection ] The checked out connection.
      #
      # @raise [ Error::PoolClosedError ] If the pool has been closed.
      # @raise [ Timeout::Error ] If the connection pool is at maximum size
      #   and remains so for longer than the wait timeout.
      def retrieve_and_connect_connection(connection_global_id, context = nil)
        deadline = Utils.monotonic_time + wait_timeout(context)
        connection = nil

        @lock.synchronize do
          # The first gate to checking out a connection. Make sure the number of
          # unavailable connections is less than the max pool size.
          until max_size == 0 || unavailable_connections < max_size
            wait = deadline - Utils.monotonic_time
            raise_check_out_timeout!(connection_global_id) if wait <= 0
            @size_cv.wait(wait)
            raise_if_not_ready!
          end
          @connection_requests += 1
          connection = wait_for_connection(connection_global_id, deadline)
        end

        connect_or_raise(connection, context) unless connection.connected?

        @lock.synchronize do
          @checked_out_connections << connection
          if @pending_connections.include?(connection)
            @pending_connections.delete(connection)
          end
          @max_connecting_cv.signal
          # no need to signal size_cv here since the number of unavailable
          # connections is unchanged.
        end

        connection
      end
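      # Typical use of the pool by the driver, as a sketch (check_out and
      # check_in are the public API defined earlier in this file; the dispatch
      # call is illustrative):
      #
      #   connection = pool.check_out
      #   begin
      #     connection.dispatch([ message ])
      #   ensure
      #     pool.check_in(connection)
      #   end
      #
      # check_out funnels into retrieve_and_connect_connection above: first the
      # size gate (@size_cv), then the max_connecting gate (@max_connecting_cv).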
      # Waits for a connection to become available, or raises if no connection
      # becomes available before the timeout.
      #
      # @param [ Integer ] connection_global_id The global id for the
      #   connection to check out.
      # @param [ Float ] deadline The time at which to stop waiting.
      #
      # @return [ Mongo::Server::Connection ] The checked out connection.
      def wait_for_connection(connection_global_id, deadline)
        connection = nil
        while connection.nil?
          # The second gate to checking out a connection. Make sure 1) there
          # exists an available connection and 2) we are under max_connecting.
          until @available_connections.any? || @pending_connections.length < @max_connecting
            wait = deadline - Utils.monotonic_time
            if wait <= 0
              # We are going to raise a timeout error, so the connection
              # request is not going to be fulfilled. Decrement the counter
              # here.
              decrement_connection_requests_and_signal
              raise_check_out_timeout!(connection_global_id)
            end
            @max_connecting_cv.wait(wait)
            # We do not need to decrement the connection_requests counter
            # or signal here because the pool is not ready yet.
            raise_if_not_ready!
          end

          connection = get_connection(Process.pid, connection_global_id)
          wait = deadline - Utils.monotonic_time
          if connection.nil? && wait <= 0
            # connection is nil here, it means that get_connection method
            # did not create a new connection; therefore, it did not decrease
            # the connection_requests counter. We need to do it here.
            decrement_connection_requests_and_signal
            raise_check_out_timeout!(connection_global_id)
          end
        end

        connection
      end

      # Connects a connection and raises an exception if the connection
      # cannot be connected.
      # This method also publishes the corresponding event and ensures that
      # counters and condition variables are updated.
      def connect_or_raise(connection, context)
        connect_connection(connection, context)
      rescue Exception
        # Handshake or authentication failed
        @lock.synchronize do
          if @pending_connections.include?(connection)
            @pending_connections.delete(connection)
          end
          @max_connecting_cv.signal
          @size_cv.signal
        end
        @populate_semaphore.signal

        publish_cmap_event(
          Monitoring::Event::Cmap::ConnectionCheckOutFailed.new(
            @server.address,
            Monitoring::Event::Cmap::ConnectionCheckOutFailed::CONNECTION_ERROR
          ),
        )
        raise
      end

      # Decrement connection requests counter and signal the condition
      # variables that the number of unavailable connections has decreased.
      def decrement_connection_requests_and_signal
        @connection_requests -= 1
        @max_connecting_cv.signal
        @size_cv.signal
      end
    end
  end
end

require 'mongo/server/connection_pool/generation_manager'
require 'mongo/server/connection_pool/populator'
# ===== mongo-ruby-driver-2.21.3/lib/mongo/server/connection_pool/generation_manager.rb =====

# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2021 MongoDB Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

module Mongo
  class Server
    class ConnectionPool

      # @api private
      class GenerationManager

        def initialize(server:)
          @map = Hash.new { |hash, key| hash[key] = 1 }
          @pipe_fds = Hash.new { |hash, key| hash[key] = { 1 => IO.pipe } }
          @server = server
          @lock = Mutex.new
          @scheduled_for_close = []
        end

        attr_reader :server

        def generation(service_id: nil)
          validate_service_id!(service_id)
          @lock.synchronize do
            @map[service_id]
          end
        end

        def generation_unlocked(service_id: nil)
          validate_service_id!(service_id)
          @map[service_id]
        end

        def pipe_fds(service_id: nil)
          @pipe_fds.dig(service_id, @map[service_id])
        end

        def remove_pipe_fds(generation, service_id: nil)
          validate_service_id!(service_id)

          r, w = @pipe_fds[service_id].delete(generation)
          return unless r && w

          w.close
          # Schedule the read end of the pipe to be closed. We cannot close it
          # immediately since we need to wait for any Kernel#select calls to
          # notice that part of the pipe is closed, and check the socket. This
          # all happens when attempting to read from the socket and waiting for
          # it to become ready again.
          @scheduled_for_close << r
        end

        def bump(service_id: nil)
          @lock.synchronize do
            close_all_scheduled

            if service_id
              gen = @map[service_id] += 1
              @pipe_fds[service_id] ||= {}
              @pipe_fds[service_id][gen] = IO.pipe
            else
              # When service id is not supplied, one of two things may be
              # happening:
              #
              # 1. The pool is not connected to a load balancer, in which case
              #    we only need to increment the generation for the nil service_id.
              # 2. The pool is connected to a load balancer, in which case we
              #    need to increment the generation for each service.
              #
              # Incrementing everything in the map accomplishes both tasks.
              @map.each do |k, v|
                gen = @map[k] += 1
                @pipe_fds[k] ||= {}
                @pipe_fds[k][gen] = IO.pipe
              end
            end
          end
        end

        # Close all pipes in the generation manager.
        #
        # This method should be called only when the +ConnectionPool+ that
        # owns this +GenerationManager+ is closed, to ensure that all
        # pipes are closed properly.
        def close_all_pipes
          @lock.synchronize do
            close_all_scheduled
            @pipe_fds.keys.each do |service_id|
              generations = @pipe_fds.delete(service_id)
              generations.values.each do |r, w|
                begin
                  r.close
                  w.close
                rescue IOError
                  # Ignore any IOError that occurs when closing the
                  # pipe, as there is nothing we can do about it.
                end
              end
            end
          end
        end

        private

        def validate_service_id!(service_id)
          if service_id
            unless server.load_balancer?
              raise ArgumentError, "Generation scoping to services is only available in load-balanced mode, but the server at #{server.address} is not a load balancer"
            end
          else
            if server.load_balancer?
              raise ArgumentError, "The server at #{server.address} is a load balancer and therefore does not have a single global generation"
            end
          end
        end

        # Close all fds scheduled for closing.
        def close_all_scheduled
          while pipe = @scheduled_for_close.pop
            pipe.close
          end
        end
      end
    end
  end
end
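# Behavior sketch for GenerationManager (non-load-balanced pool, where the
# only key is the nil service_id; `server` is assumed to be an existing
# non-load-balancer Server):
#
#   gm = Mongo::Server::ConnectionPool::GenerationManager.new(server: server)
#   gm.generation      # => 1
#   r, w = gm.pipe_fds # pipe pair owned by generation 1
#   gm.bump            # increments to generation 2, opens a new pipe pair,
#                      # and closes any fds previously scheduled for close
#   gm.generation      # => 2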
# ===== mongo-ruby-driver-2.21.3/lib/mongo/server/connection_pool/populator.rb =====

# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2019-2020 MongoDB Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

module Mongo
  class Server
    class ConnectionPool

      # A manager that maintains the invariant that the
      # size of a connection pool is at least minPoolSize.
      #
      # @api private
      class Populator
        include BackgroundThread

        # @param [ Server::ConnectionPool ] pool The connection pool.
        # @param [ Hash ] options The options.
        #
        # @option options [ Logger ] :logger A custom logger to use.
        def initialize(pool, options = {})
          @pool = pool
          @thread = nil
          @options = options
        end

        attr_reader :options

        def pre_stop
          @pool.populate_semaphore.signal
        end

        private

        def do_work
          throw(:done) if @pool.closed?

          begin
            unless @pool.populate
              @pool.populate_semaphore.wait
            end
          rescue Error::AuthError, Error => e
            # Errors encountered when trying to add connections to
            # pool; try again later
            log_warn("Populator failed to connect a connection for #{@pool.address}: #{e.class}: #{e}.")
            @pool.populate_semaphore.wait(5)
          end
        end
      end
    end
  end
end
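# Lifecycle sketch (run!/stop! come from BackgroundThread, which Populator
# includes; `pool` is assumed to be an existing ConnectionPool):
#
#   populator = Mongo::Server::ConnectionPool::Populator.new(pool)
#   populator.run!   # background thread repeatedly calls pool.populate
#   # ... the pool is kept at or above minPoolSize ...
#   populator.stop!  # pre_stop signals the semaphore so do_work can exit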
# ===== mongo-ruby-driver-2.21.3/lib/mongo/server/description.rb =====

# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2014-2020 MongoDB Inc.
#
# Licensed under the Apache License, Version 2.0 (the 'License');
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an 'AS IS' BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

module Mongo
  class Server

    # Represents a description of the server, populated by the result of the
    # hello command.
    #
    # Note: Unknown servers do not have wire versions, but for legacy reasons
    # we return 0 for min_wire_version and max_wire_version of any server that does
    # not have them. Presently the driver sometimes constructs commands when the
    # server is unknown, so references to min_wire_version and max_wire_version
    # should not be nil. When driver behavior is changed
    # (https://jira.mongodb.org/browse/RUBY-1805), this may no longer be necessary.
    #
    # @since 2.0.0
    class Description

      # Constant for reading arbiter info from config.
      #
      # @since 2.0.0
      # @deprecated
      ARBITER = 'arbiterOnly'.freeze

      # Constant for reading arbiters info from config.
      #
      # @since 2.0.0
      ARBITERS = 'arbiters'.freeze

      # Constant for reading hidden info from config.
      #
      # @since 2.0.0
      HIDDEN = 'hidden'.freeze

      # Constant for reading hosts info from config.
      #
      # @since 2.0.0
      HOSTS = 'hosts'.freeze

      # Constant for the key for the message value.
      #
      # @since 2.0.0
      # @deprecated
      MESSAGE = 'msg'.freeze

      # Constant for the message that indicates a sharded cluster.
      #
      # @since 2.0.0
      # @deprecated
      MONGOS_MESSAGE = 'isdbgrid'.freeze

      # Constant for determining ghost servers.
      #
      # @since 2.0.0
      # @deprecated
      REPLICA_SET = 'isreplicaset'.freeze

      # Constant for reading max bson size info from config.
      #
      # @since 2.0.0
      MAX_BSON_OBJECT_SIZE = 'maxBsonObjectSize'.freeze

      # Constant for reading max message size info from config.
      #
      # @since 2.0.0
      MAX_MESSAGE_BYTES = 'maxMessageSizeBytes'.freeze

      # Constant for the max wire version.
      #
      # @since 2.0.0
      MAX_WIRE_VERSION = 'maxWireVersion'.freeze

      # Constant for min wire version.
      #
      # @since 2.0.0
      MIN_WIRE_VERSION = 'minWireVersion'.freeze

      # Constant for reading max write batch size.
      #
      # @since 2.0.0
      MAX_WRITE_BATCH_SIZE = 'maxWriteBatchSize'.freeze

      # Constant for the lastWrite subdocument.
      #
      # @since 2.4.0
      LAST_WRITE = 'lastWrite'.freeze

      # Constant for the lastWriteDate field in the lastWrite subdocument.
      #
      # @since 2.4.0
      LAST_WRITE_DATE = 'lastWriteDate'.freeze

      # Constant for reading the me field.
      #
      # @since 2.1.0
      ME = 'me'.freeze

      # Default max write batch size.
      #
      # @since 2.0.0
      DEFAULT_MAX_WRITE_BATCH_SIZE = 1000.freeze

      # The legacy wire protocol version.
      #
      # @since 2.0.0
      # @deprecated Will be removed in 3.0.
      LEGACY_WIRE_VERSION = 0.freeze

      # Constant for reading passive info from config.
      #
      # @since 2.0.0
      PASSIVE = 'passive'.freeze

      # Constant for reading the passive server list.
      #
      # @since 2.0.0
      PASSIVES = 'passives'.freeze

      # Constant for reading primary info from config.
      #
      # @since 2.0.0
      # @deprecated
      PRIMARY = 'ismaster'.freeze

      # Constant for reading primary host field from config.
      #
      # @since 2.5.0
      PRIMARY_HOST = 'primary'.freeze

      # Constant for reading secondary info from config.
      #
      # @since 2.0.0
      # @deprecated
      SECONDARY = 'secondary'.freeze

      # Constant for reading replica set name info from config.
      #
      # @since 2.0.0
      SET_NAME = 'setName'.freeze

      # Constant for reading tags info from config.
      #
      # @since 2.0.0
      TAGS = 'tags'.freeze

      # Constant for reading electionId info from config.
      #
      # @since 2.1.0
      ELECTION_ID = 'electionId'.freeze

      # Constant for reading setVersion info from config.
      #
      # @since 2.2.2
      SET_VERSION = 'setVersion'.freeze

      # Constant for reading localTime info from config.
      #
      # @since 2.1.0
      LOCAL_TIME = 'localTime'.freeze

      # Constant for reading operationTime info from config.
      #
      # @since 2.5.0
      OPERATION_TIME = 'operationTime'.freeze

      # Constant for reading logicalSessionTimeoutMinutes info from config.
      #
      # @since 2.5.0
      LOGICAL_SESSION_TIMEOUT_MINUTES = 'logicalSessionTimeoutMinutes'.freeze

      # Constant for reading connectionId info from config.
      #
      # @api private
      CONNECTION_ID = 'connectionId'.freeze

      # Fields to exclude when comparing two descriptions.
      #
      # @since 2.0.6
      EXCLUDE_FOR_COMPARISON = [
        LOCAL_TIME,
        LAST_WRITE,
        OPERATION_TIME,
        Operation::CLUSTER_TIME,
        CONNECTION_ID,
      ].freeze
      # Instantiate the new server description from the result of the hello
      # command or fabricate a placeholder description for Unknown and
      # LoadBalancer servers.
      #
      # @example Instantiate the new description.
      #   Description.new(address, { 'isWritablePrimary' => true }, 0.5)
      #
      # @param [ Address ] address The server address.
      # @param [ Hash ] config The result of the hello command.
      # @param [ Float ] average_round_trip_time The moving average time (sec) the hello
      #   command took to complete.
      # @param [ Float ] minimum_round_trip_time The minimum round trip time
      #   of ten last hello commands.
      # @param [ true | false ] load_balancer Whether the server is treated as
      #   a load balancer.
      # @param [ true | false ] force_load_balancer Whether the server is
      #   forced to be a load balancer.
      #
      # @api private
      def initialize(address, config = {}, average_round_trip_time: nil,
        minimum_round_trip_time: 0, load_balancer: false,
        force_load_balancer: false
      )
        @address = address
        @config = config
        @load_balancer = !!load_balancer
        @force_load_balancer = !!force_load_balancer
        @features = Features.new(wire_versions, me || @address.to_s)
        @average_round_trip_time = average_round_trip_time
        @minimum_round_trip_time = minimum_round_trip_time
        @last_update_time = Time.now.freeze
        @last_update_monotime = Utils.monotonic_time

        if load_balancer
          # When loadBalanced=true URI option is set, the driver will refuse
          # to work if the server it communicates with does not set serviceId
          # in ismaster/hello response.
          #
          # At the moment we cannot run a proper load balancer setup on evergreen
          #
          # Therefore, when connect=:load_balanced Ruby option is used instead
          # of the loadBalanced=true URI option, if serviceId is not set in
          # ismaster/hello response, the driver fabricates a serviceId and
          # proceeds to treat a server that does not report itself as being
          # behind a load balancer as a server that is behind a load balancer.
          #
          # 5.0+ servers should provide topologyVersion.processId which
          # is specific to the particular process instance. We can use that
          # field as a proxy for serviceId.
          #
          # If the topologyVersion isn't provided for whatever reason, we
          # fabricate a serviceId locally.
          #
          # In either case, a serviceId provided by an actual server behind
          # a load balancer is supposed to be a BSON::ObjectId. The fabricated
          # service ids are strings, to distinguish them from the real ones.
          # In particular processId is also a BSON::ObjectId, but will be
          # mapped to a string for clarity that this is a fake service id.
          #
          # TODO: Remove this when https://jira.mongodb.org/browse/RUBY-2881 is done.
          if ok? && !service_id
            unless force_load_balancer
              raise Error::MissingServiceId, "The server at #{address.seed} did not provide a service id in handshake response"
            end

            fake_service_id = if process_id = topology_version && topology_version['processId']
              "process:#{process_id}"
            else
              "fake:#{rand(2**32-1)+1}"
            end
            @config = @config.merge('serviceId' => fake_service_id)
          end
        end

        if Mongo::Lint.enabled?
          # prepopulate cache instance variables
          hosts
          arbiters
          passives
          topology_version

          freeze
        end
      end

      # @return [ Address ] address The server's address.
      attr_reader :address

      # @return [ Hash ] The actual result from the hello command.
      attr_reader :config

      # Returns whether this server is a load balancer.
      #
      # @return [ true | false ] Whether this server is a load balancer.
      def load_balancer?
        @load_balancer
      end

      # @return [ Features ] features The features for the server.
      def features
        @features
      end

      # @return [ Float ] The moving average time the hello call took to complete.
      attr_reader :average_round_trip_time

      # @return [ Float ] The minimum time from the ten last hello calls took
      #   to complete.
      attr_reader :minimum_round_trip_time

      # Returns whether this server is an arbiter, per the SDAM spec.
      #
      # @example Is the server an arbiter?
      #   description.arbiter?
      #
      # @return [ true, false ] If the server is an arbiter.
      #
      # @since 2.0.0
      def arbiter?
        ok? &&
          config['arbiterOnly'] == true &&
          !!config['setName']
      end

      # Get a list of all arbiters in the replica set.
      #
      # @example Get the arbiters in the replica set.
      #   description.arbiters
      #
      # @return [ Array ] The arbiters in the set.
      #
      # @since 2.0.0
      def arbiters
        @arbiters ||= (config[ARBITERS] || []).map { |s| s.downcase }
      end
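      # @example Building a description from a hello reply (sketch; `address`
      #   is assumed to be an existing Mongo::Address):
      #   desc = Mongo::Server::Description.new(address,
      #     { 'ok' => 1, 'isWritablePrimary' => true, 'setName' => 'rs0' })
      #   desc.primary?          # => true
      #   desc.replica_set_name  # => 'rs0'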
      # Whether this server is a ghost, per the SDAM spec.
      #
      # @example Is the server a ghost?
      #   description.ghost?
      #
      # @return [ true, false ] If the server is a ghost.
      #
      # @since 2.0.0
      def ghost?
        ok? &&
          config['isreplicaset'] == true
      end

      # Will return true if the server is hidden.
      #
      # @example Is the server hidden?
      #   description.hidden?
      #
      # @return [ true, false ] If the server is hidden.
      #
      # @since 2.0.0
      def hidden?
        ok? && !!config[HIDDEN]
      end

      # Get a list of all servers in the replica set.
      #
      # @example Get the servers in the replica set.
      #   description.hosts
      #
      # @return [ Array ] The servers in the set.
      #
      # @since 2.0.0
      def hosts
        @hosts ||= (config[HOSTS] || []).map { |s| s.downcase }
      end

      # Inspect the server description.
      #
      # @example Inspect the server description
      #   description.inspect
      #
      # @return [ String ] The inspection.
      #
      # @since 2.0.0
      def inspect
        "#<Mongo::Server::Description:0x#{object_id} config=#{config} average_round_trip_time=#{average_round_trip_time}>"
      end

      # Get the max BSON object size for this server version.
      #
      # @example Get the max BSON object size.
      #   description.max_bson_object_size
      #
      # @return [ Integer ] The maximum object size in bytes.
      #
      # @since 2.0.0
      def max_bson_object_size
        config[MAX_BSON_OBJECT_SIZE]
      end

      # Get the max message size for this server version.
      #
      # @example Get the max message size.
      #   description.max_message_size
      #
      # @return [ Integer ] The maximum message size in bytes.
      #
      # @since 2.0.0
      def max_message_size
        config[MAX_MESSAGE_BYTES]
      end

      # Get the maximum batch size for writes.
      #
      # @example Get the max batch size.
      #   description.max_write_batch_size
      #
      # @return [ Integer ] The max batch size.
      #
      # @since 2.0.0
      def max_write_batch_size
        config[MAX_WRITE_BATCH_SIZE] || DEFAULT_MAX_WRITE_BATCH_SIZE
      end

      # Get the maximum wire version. Defaults to zero.
      #
      # @example Get the max wire version.
      #   description.max_wire_version
      #
      # @return [ Integer ] The max wire version supported.
      #
      # @since 2.0.0
      def max_wire_version
        config[MAX_WIRE_VERSION] || 0
      end

      # Get the minimum wire version. Defaults to zero.
      #
      # @example Get the min wire version.
      #   description.min_wire_version
      #
      # @return [ Integer ] The min wire version supported.
      #
      # @since 2.0.0
      def min_wire_version
        config[MIN_WIRE_VERSION] || 0
      end

      # Get the me field value.
      #
      # @note The value in me field may differ from the server description's
      #   address. This can happen, for example, in split horizon configurations.
      #   The SDAM spec only requires removing servers whose me does not match
      #   their address in some of the situations (e.g. when the server in
      #   question is an RS member but not a primary).
      #
      # @return [ String ] The me field.
      #
      # @since 2.1.0
      def me
        config[ME]
      end

      # Get the tags configured for the server.
      #
      # @example Get the tags.
      #   description.tags
      #
      # @return [ Hash ] The tags of the server.
      #
      # @since 2.0.0
      def tags
        config[TAGS] || {}
      end

      # Get the electionId from the config.
      #
      # @example Get the electionId.
      #   description.election_id
      #
      # @return [ BSON::ObjectId ] The election id.
      #
      # @since 2.1.0
      def election_id
        config[ELECTION_ID]
      end

      # Get the setVersion from the config.
      #
      # @example Get the setVersion.
      #   description.set_version
      #
      # @return [ Integer ] The set version.
      #
      # @since 2.2.2
      def set_version
        config[SET_VERSION]
      end

      # @return [ TopologyVersion | nil ] The topology version.
      def topology_version
        unless defined?(@topology_version)
          @topology_version = config['topologyVersion'] &&
            TopologyVersion.new(config['topologyVersion'])
        end
        @topology_version
      end
      # Returns whether topology version in this description is potentially
      # newer than or equal to topology version in another description.
      #
      # @param [ Server::Description ] other_desc The other server description.
      #
      # @return [ true | false ] Whether topology version in this description
      #   is potentially newer or equal.
      # @api private
      def topology_version_gt?(other_desc)
        if topology_version.nil? || other_desc.topology_version.nil?
          true
        else
          topology_version.gt?(other_desc.topology_version)
        end
      end

      # Returns whether topology version in this description is potentially
      # newer than topology version in another description.
      #
      # @param [ Server::Description ] other_desc The other server description.
      #
      # @return [ true | false ] Whether topology version in this description
      #   is potentially newer.
      # @api private
      def topology_version_gte?(other_desc)
        if topology_version.nil? || other_desc.topology_version.nil?
          true
        else
          topology_version.gte?(other_desc.topology_version)
        end
      end

      # Get the lastWriteDate from the lastWrite subdocument in the config.
      #
      # @example Get the lastWriteDate value.
      #   description.last_write_date
      #
      # @return [ Time ] The last write date.
      #
      # @since 2.4.0
      def last_write_date
        config[LAST_WRITE][LAST_WRITE_DATE] if config[LAST_WRITE]
      end

      # Get the logicalSessionTimeoutMinutes from the config.
      #
      # @example Get the logicalSessionTimeoutMinutes value in minutes.
      #   description.logical_session_timeout
      #
      # @return [ Integer, nil ] The logical session timeout in minutes.
      #
      # @since 2.5.0
      def logical_session_timeout
        config[LOGICAL_SESSION_TIMEOUT_MINUTES] if config[LOGICAL_SESSION_TIMEOUT_MINUTES]
      end

      # Returns whether this server is a mongos, per the SDAM spec.
      #
      # @example Is the server a mongos?
      #   description.mongos?
      #
      # @return [ true, false ] If the server is a mongos.
      #
      # @since 2.0.0
      def mongos?
        ok? && config['msg'] == 'isdbgrid'
      end

      # Returns whether the server is an other, per the SDAM spec.
      #
      # @example Is the description of type other.
      #   description.other?
      #
      # @return [ true, false ] If the description is other.
      #
      # @since 2.0.0
      def other?
        # The SDAM spec is slightly confusing on what "other" means,
        # but it's referred to it as "RSOther" which means a non-RS member
        # cannot be "other".
        ok? &&
          !!config['setName'] && (
            config['hidden'] == true ||
            !primary? && !secondary? && !arbiter?
          )
      end

      # Will return true if the server is passive.
      #
      # @example Is the server passive?
      #   description.passive?
      #
      # @return [ true, false ] If the server is passive.
      #
      # @since 2.0.0
      def passive?
        ok? && !!config[PASSIVE]
      end

      # Get a list of the passive servers in the cluster.
      #
      # @example Get the passives.
      #   description.passives
      #
      # @return [ Array ] The list of passives.
      #
      # @since 2.0.0
      def passives
        @passives ||= (config[PASSIVES] || []).map { |s| s.downcase }
      end

      # Get the address of the primary host.
      #
      # @example Get the address of the primary.
      #   description.primary_host
      #
      # @return [ String | nil ] The address of the primary.
      #
      # @since 2.6.0
      def primary_host
        config[PRIMARY_HOST] && config[PRIMARY_HOST].downcase
      end

      # Returns whether this server is a primary, per the SDAM spec.
      #
      # @example Is the server a primary?
      #   description.primary?
      #
      # @return [ true, false ] If the server is a primary.
      #
      # @since 2.0.0
      def primary?
        ok? &&
          (config['ismaster'] == true || config['isWritablePrimary'] == true) &&
          !!config['setName']
      end

      # Get the name of the replica set the server belongs to, returns nil if
      # none.
      #
      # @example Get the replica set name.
      #   description.replica_set_name
      #
      # @return [ String, nil ] The name of the replica set.
      #
      # @since 2.0.0
      def replica_set_name
        config[SET_NAME]
      end
      # Get a list of all servers known to the cluster.
      #
      # @example Get all servers.
      #   description.servers
      #
      # @return [ Array ] The list of all servers.
      #
      # @since 2.0.0
      def servers
        hosts + arbiters + passives
      end

      # Returns whether this server is a secondary, per the SDAM spec.
      #
      # @example Is the server a secondary?
      #   description.secondary?
      #
      # @return [ true, false ] If the server is a secondary.
      #
      # @since 2.0.0
      def secondary?
        ok? &&
          config['secondary'] == true &&
          !!config['setName']
      end

      # Returns the server type as a symbol.
      #
      # @example Get the server type.
      #   description.server_type
      #
      # @return [ Symbol ] The server type.
      #
      # @since 2.4.0
      def server_type
        return :load_balancer if load_balancer?
        return :arbiter if arbiter?
        return :ghost if ghost?
        return :sharded if mongos?
        return :primary if primary?
        return :secondary if secondary?
        return :standalone if standalone?
        return :other if other?
        :unknown
      end

      # Returns whether this server is a standalone, per the SDAM spec.
      #
      # @example Is the server standalone?
      #   description.standalone?
      #
      # @return [ true, false ] If the server is standalone.
      #
      # @since 2.0.0
      def standalone?
        ok? &&
          config['msg'] != 'isdbgrid' &&
          config['setName'].nil? &&
          config['isreplicaset'] != true
      end

      # Returns whether this server is an unknown, per the SDAM spec.
      #
      # @example Is the server description unknown?
      #   description.unknown?
      #
      # @return [ true, false ] If the server description is unknown.
      #
      # @since 2.0.0
      def unknown?
        return false if load_balancer?
        config.empty? || config.keys == %w(topologyVersion) || !ok?
      end

      # @api private
      def ok?
        config[Operation::Result::OK] == 1
      end

      # Get the range of supported wire versions for the server.
      #
      # @example Get the wire version range.
      #   description.wire_versions
      #
      # @return [ Range ] The wire version range.
      #
      # @since 2.0.0
      def wire_versions
        min_wire_version..max_wire_version
      end

      # Is this description from the given server.
      #
      # @example Check if the description is from a given server.
      #   description.is_server?(server)
      #
      # @return [ true, false ] If the description is from the server.
      #
      # @since 2.0.6
      # @deprecated
      def is_server?(server)
        address == server.address
      end

      # Is a server included in this description's list of servers.
      #
      # @example Check if a server is in the description list of servers.
      #   description.lists_server?(server)
      #
      # @return [ true, false ] If a server is in the description's list
      #   of servers.
      #
      # @since 2.0.6
      # @deprecated
      def lists_server?(server)
        servers.include?(server.address.to_s)
      end

      # Does this description correspond to a replica set member.
      #
      # @example Check if the description is from a replica set member.
      #   description.replica_set_member?
      #
      # @return [ true, false ] If the description is from a replica set
      #   member.
      #
      # @since 2.0.6
      def replica_set_member?
        ok? && !(standalone? || mongos?)
      end

      # Whether this description is from a data-bearing server
      # (standalone, mongos, primary or secondary).
      #
      # @return [ true, false ] Whether the description is from a data-bearing
      #   server.
      #
      # @since 2.7.0
      def data_bearing?
        mongos? || primary? || secondary? || standalone?
      end

      # Check if there is a mismatch between the address host and the me field.
      #
      # @example Check if there is a mismatch.
      #   description.me_mismatch?
      #
      # @return [ true, false ] If there is a mismatch between the me field and the address host.
      #
      # @since 2.0.6
      def me_mismatch?
        !!(address.to_s.downcase != me.downcase if me)
      end
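      # @example Deriving the server type from a hello reply (sketch):
      #   # { 'ok' => 1, 'msg' => 'isdbgrid' }                      -> :sharded
      #   # { 'ok' => 1, 'secondary' => true, 'setName' => 'rs0' }  -> :secondary
      #   # {}                                                      -> :unknown
      #   description.server_type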
      # Whether this description is from a mongocryptd server.
      #
      # @return [ true, false ] Whether this description is from a mongocryptd
      #   server.
      def mongocryptd?
        ok? && config['iscryptd'] == true
      end

      # opTime in lastWrite subdocument of the hello response.
      #
      # @return [ BSON::Timestamp ] The timestamp.
      #
      # @since 2.7.0
      def op_time
        if config['lastWrite'] && config['lastWrite']['opTime']
          config['lastWrite']['opTime']['ts']
        end
      end

      # Time when this server description was created.
      #
      # @note This time does not indicate when a successful server check
      #   completed, because marking a server unknown updates its description
      #   and last_update_time. Use Server#last_scan to find out when the server
      #   was last successfully checked by its Monitor.
      #
      # @return [ Time ] Server description creation time.
      #
      # @since 2.7.0
      attr_reader :last_update_time

      # Time when this server description was created according to monotonic clock.
      #
      # @see Description::last_update_time for more detail
      #
      # @return [ Float ] Server description creation monotonic time.
      #
      # @api private
      attr_reader :last_update_monotime

      # @api experimental
      def server_connection_id
        config['connectionId']
      end

      # @return [ nil | Object ] The service id, if any.
      #
      # @api experimental
      def service_id
        config['serviceId']
      end

      # Check equality of two descriptions.
      #
      # @example Check description equality.
      #   description == other
      #
      # @param [ Object ] other The other description.
      #
      # @return [ true, false ] Whether the objects are equal.
      #
      # @since 2.0.6
      def ==(other)
        return false if self.class != other.class
        return false if unknown? || other.unknown?

        (config.keys + other.config.keys).uniq.all? do |k|
          config[k] == other.config[k] || EXCLUDE_FOR_COMPARISON.include?(k)
        end
      end
      alias_method :eql?, :==

      # @api private
      def server_version_gte?(version)
        required_wv = case version
          when '7.0'
            21
          when '6.0'
            17
          when '5.2'
            15
          when '5.1'
            14
          when '5.0'
            12
          when '4.4'
            9
          when '4.2'
            8
          when '4.0'
            7
          when '3.6'
            6
          when '3.4'
            5
          when '3.2'
            4
          when '3.0'
            3
          when '2.6'
            2
          else
            raise ArgumentError, "Bogus required version #{version}"
          end

        if load_balancer?
          # If we are talking to a load balancer, there is no monitoring
          # and we don't know what server is behind the load balancer.
          # Assume everything is supported.
          # TODO remove this when RUBY-2220 is implemented.
          return true
        end

        required_wv >= min_wire_version && required_wv <= max_wire_version
      end
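      # Worked example of the wire-version check above (sketch; a 6.0 server
      # typically advertises minWireVersion 0 and maxWireVersion 17):
      #
      #   description.server_version_gte?('5.0')  # 12 is within 0..17  => true
      #   description.server_version_gte?('7.0')  # 21 exceeds 17       => false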
    end
  end
end

require 'mongo/server/description/features'
require 'mongo/server/description/load_balancer'

# ===== mongo-ruby-driver-2.21.3/lib/mongo/server/description/features.rb =====

# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2014-2020 MongoDB Inc.
#
# Licensed under the Apache License, Version 2.0 (the 'License');
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an 'AS IS' BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

module Mongo
  class Server
    class Description

      # Defines behavior around what features a specific server supports.
      #
      # @since 2.0.0
      class Features

        # List of features and the wire protocol version they appear in.
        #
        # Wire protocol versions map to server releases as follows:
        # - 2 => 2.6
        # - 3 => 3.0
        # - 4 => 3.2
        # - 5 => 3.4
        # - 6 => 3.6
        # - 7 => 4.0
        # - 8 => 4.2
        # - 9 => 4.4
        # - 13 => 5.0
        # - 14 => 5.1
        # - 17 => 6.0
        #
        # @since 2.0.0
        MAPPINGS = {
          merge_out_on_secondary: 13,
          get_more_comment: 9,
          retryable_write_error_label: 9,
          commit_quorum: 9,
          # Server versions older than 4.2 do not reliably validate options
          # provided by the client during findAndModify operations, requiring the
          # driver to raise client-side errors when those options are provided.
          find_and_modify_option_validation: 8,
          sharded_transactions: 8,
          transactions: 7,
          scram_sha_256: 7,
          array_filters: 6,
          op_msg: 6,
          sessions: 6,
          collation: 5,
          max_staleness: 5,
          # Server versions older than 3.4 do not reliably validate options
          # provided by the client during update/delete operations, requiring the
          # driver to raise client-side errors when those options are provided.
          update_delete_option_validation: 5,
          find_command: 4,
          list_collections: 3,
          list_indexes: 3,
          scram_sha_1: 3,
          write_command: 2,
          users_info: 2,
        }.freeze

        # Error message if the server is too old for this version of the driver.
        #
        # @since 2.5.0
        SERVER_TOO_OLD = "Server at (%s) reports wire version (%s), but this version of the Ruby driver " +
          "requires at least (%s)."

        # Error message if the driver is too old for the version of the server.
        #
        # @since 2.5.0
        DRIVER_TOO_OLD = "Server at (%s) requires wire version (%s), but this version of the Ruby driver " +
          "only supports up to (%s)."

        # The wire protocol versions that this version of the driver supports.
        #
        # @since 2.0.0
        DRIVER_WIRE_VERSIONS = (6..25).freeze

        # Create the methods for each mapping to tell if they are supported.
        #
        # @since 2.0.0
        MAPPINGS.each do |name, version|
          # Determine whether or not the feature is supported.
          #
          # @example Is a feature enabled?
          #   features.list_collections_enabled?
          #
          # @return [ true, false ] Whether the feature is supported.
          #
          # @since 2.0.0
          define_method("#{name}_enabled?") do
            server_wire_versions.include?(MAPPINGS[name])
          end
        end
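        # @example Using a generated predicate (sketch):
        #   features = Mongo::Server::Description::Features.new(6..17)
        #   features.sessions_enabled?                # => true (sessions => 6)
        #   features.transactions_enabled?            # => true (transactions => 7)
        #   features.merge_out_on_secondary_enabled?  # => true (13 is in 6..17)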
        # @return [ Range ] server_wire_versions The server's supported wire
        #   versions.
        attr_reader :server_wire_versions

        # Initialize the features.
        #
        # @example Initialize the features.
        #   Features.new(0..3)
        #
        # @param [ Range ] server_wire_versions The server supported wire
        #   versions.
        #
        # @since 2.0.0
        def initialize(server_wire_versions, address = nil)
          if server_wire_versions.min.nil?
            raise ArgumentError, "server_wire_versions's min is nil"
          end
          if server_wire_versions.max.nil?
            raise ArgumentError, "server_wire_versions's max is nil"
          end

          @server_wire_versions = server_wire_versions
          @address = address

          if Mongo::Lint.enabled?
            freeze
          end
        end

        # Check that there is an overlap between the driver supported wire
        # version range and the server wire version range.
        #
        # @example Verify the wire version overlap.
        #   features.check_driver_support!
        #
        # @raise [ Error::UnsupportedFeatures ] If the wire version range is
        #   not covered by the driver.
        #
        # @since 2.5.1
        def check_driver_support!
          if DRIVER_WIRE_VERSIONS.min > @server_wire_versions.max
            raise Error::UnsupportedFeatures.new(SERVER_TOO_OLD % [@address,
                                                                   @server_wire_versions.max,
                                                                   DRIVER_WIRE_VERSIONS.min])
          elsif DRIVER_WIRE_VERSIONS.max < @server_wire_versions.min
            raise Error::UnsupportedFeatures.new(DRIVER_TOO_OLD % [@address,
                                                                   @server_wire_versions.min,
                                                                   DRIVER_WIRE_VERSIONS.max])
          end
        end
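        # @example Overlap checks (sketch; DRIVER_WIRE_VERSIONS is 6..25):
        #   Features.new(0..5).check_driver_support!
        #   # => raises Error::UnsupportedFeatures (server too old)
        #   Features.new(26..30).check_driver_support!
        #   # => raises Error::UnsupportedFeatures (driver too old)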
      end
    end
  end
end

# ===== mongo-ruby-driver-2.21.3/lib/mongo/server/description/load_balancer.rb =====

# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2021 MongoDB Inc.
#
# Licensed under the Apache License, Version 2.0 (the 'License');
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an 'AS IS' BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

module Mongo
  class Server
    class Description

      # Represents an assumed description of servers behind load balancers.
      class LoadBalancer
        def initialize(address)
          @address = address
        end

        # @return [ Address ] address The server's address.
        attr_reader :address
      end
    end
  end
end

# ===== mongo-ruby-driver-2.21.3/lib/mongo/server/monitor.rb =====

# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2014-2020 MongoDB Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

module Mongo
  class Server

    # Responsible for periodically polling a server via hello commands to
    # keep the server's status up to date.
    #
    # Does all work in a background thread so as to not interfere with other
    # operations performed by the driver.
    #
    # @since 2.0.0
    # @api private
    class Monitor
      include Loggable
      extend Forwardable
      include Event::Publisher
      include BackgroundThread

      # The default interval between server status refreshes is 10 seconds.
      #
      # @since 2.0.0
      DEFAULT_HEARTBEAT_INTERVAL = 10.freeze

      # The minimum time between forced server scans. Is
      # minHeartbeatFrequencyMS in the SDAM spec.
      #
      # @since 2.0.0
      MIN_SCAN_INTERVAL = 0.5.freeze

      # The weighting factor (alpha) for calculating the average moving round trip time.
      #
      # @since 2.0.0
      # @deprecated Will be removed in version 3.0.
      RTT_WEIGHT_FACTOR = 0.2.freeze

      # Create the new server monitor.
      #
      # @example Create the server monitor.
      #   Mongo::Server::Monitor.new(address, listeners, monitoring)
      #
      # @note Monitor must never be directly instantiated outside of a Server.
      #
      # @param [ Server ] server The server to monitor.
      # @param [ Event::Listeners ] event_listeners The event listeners.
      # @param [ Monitoring ] monitoring The monitoring.
      # @param [ Hash ] options The options.
      #
      # @option options [ Float ] :connect_timeout The timeout, in seconds, to
      #   use when establishing the monitoring connection.
      # @option options [ Float ] :heartbeat_interval The interval between
      #   regular server checks.
      # @option options [ Logger ] :logger A custom logger to use.
      # @option options [ Mongo::Server::Monitor::AppMetadata ] :monitor_app_metadata
      #   The metadata to use for regular monitoring connection.
      # @option options [ Mongo::Server::Monitor::AppMetadata ] :push_monitor_app_metadata
      #   The metadata to use for push monitor's connection.
      # @option options [ Float ] :socket_timeout The timeout, in seconds, to
      #   execute operations on the monitoring connection.
      #
      # @since 2.0.0
      # @api private
      def initialize(server, event_listeners, monitoring, options = {})
        unless monitoring.is_a?(Monitoring)
          raise ArgumentError, "Wrong monitoring type: #{monitoring.inspect}"
        end
        unless options[:app_metadata]
          raise ArgumentError, 'App metadata is required'
        end
        unless options[:push_monitor_app_metadata]
          raise ArgumentError, 'Push monitor app metadata is required'
        end
        @server = server
        @event_listeners = event_listeners
        @monitoring = monitoring
        @options = options.freeze
        @mutex = Mutex.new
        @sdam_mutex = Mutex.new
        @next_earliest_scan = @next_wanted_scan = Time.now
        @update_mutex = Mutex.new
      end

      # @return [ Server ] server The server that this monitor is monitoring.
      # @api private
      attr_reader :server

      # @return [ Mongo::Server::Monitor::Connection ] connection The connection to use.
      attr_reader :connection

      # @return [ Hash ] options The server options.
      attr_reader :options

      # The interval between regular server checks.
      #
      # @return [ Float ] The heartbeat interval, in seconds.
      def heartbeat_interval
        options[:heartbeat_interval] || DEFAULT_HEARTBEAT_INTERVAL
      end

      # @deprecated
      def_delegators :server, :last_scan

      # The compressor is determined during the handshake, so it must be an
      # attribute of the connection.
      #
      # @deprecated
      def_delegators :connection, :compressor

      # @return [ Monitoring ] monitoring The monitoring.
      attr_reader :monitoring

      # @return [ Server::PushMonitor | nil ] The push monitor, if one is being
      #   used.
      def push_monitor
        @update_mutex.synchronize do
          @push_monitor
        end
      end

      # Perform a check of the server.
      #
      # @since 2.0.0
      def do_work
        scan!
        # @next_wanted_scan may be updated by the push monitor.
        # However we need to check for termination flag so that the monitor
        # thread exits when requested.
        loop do
          delta = @next_wanted_scan - Time.now
          if delta > 0
            signaled = server.scan_semaphore.wait(delta)
            if signaled || @stop_requested
              break
            end
          else
            break
          end
        end
      end

      # Stop the background thread and wait for it to terminate for a
      # reasonable amount of time.
      #
      # @return [ true | false ] Whether the thread was terminated.
      #
      # @api public for backwards compatibility only
      def stop!
        stop_push_monitor!

        # Forward super's return value
        super.tap do
          # Important: disconnect should happen after the background thread
          # terminates.
          connection&.disconnect!
        end
      end

      def create_push_monitor!(topology_version)
        @update_mutex.synchronize do
          if @push_monitor && !@push_monitor.running?
            @push_monitor = nil
          end

          @push_monitor ||= PushMonitor.new(
            self,
            topology_version,
            monitoring,
            **Utils.shallow_symbolize_keys(options.merge(
              socket_timeout: heartbeat_interval + connection.socket_timeout,
              app_metadata: options[:push_monitor_app_metadata],
              check_document: @connection.check_document
            )),
          )
        end
      end

      def stop_push_monitor!
        @update_mutex.synchronize do
          if @push_monitor
            @push_monitor.stop!
            @push_monitor = nil
          end
        end
      end
      # Perform a check of the server with throttling, and update
      # the server's description and average round trip time.
      #
      # If the server was checked less than MIN_SCAN_INTERVAL seconds
      # ago, sleep until MIN_SCAN_INTERVAL seconds have passed since the last
      # check. Then perform the check which involves running hello
      # on the server being monitored and updating the server description
      # as a result.
      #
      # @note If the system clock moves backwards, this method can sleep
      #   for a very long time.
      #
      # @note The return value of this method is deprecated. In version 3.0.0
      #   this method will not have a return value.
      #
      # @return [ Description ] The updated description.
      #
      # @since 2.0.0
      def scan!
        # Ordinarily the background thread would invoke this method.
        # But it is also possible to invoke scan! directly on a monitor.
        # Allow only one scan to be performed at a time.
        @mutex.synchronize do
          throttle_scan_frequency!

          begin
            result = do_scan
          rescue => e
            run_sdam_flow({}, scan_error: e)
          else
            run_sdam_flow(result)
          end
        end
      end

      def run_sdam_flow(result, awaited: false, scan_error: nil)
        @sdam_mutex.synchronize do
          old_description = server.description

          new_description = Description.new(
            server.address,
            result,
            average_round_trip_time: server.round_trip_time_calculator.average_round_trip_time,
            minimum_round_trip_time: server.round_trip_time_calculator.minimum_round_trip_time
          )

          server.cluster.run_sdam_flow(server.description, new_description,
            awaited: awaited, scan_error: scan_error)

          server.description.tap do |new_description|
            unless awaited
              if new_description.unknown? && !old_description.unknown?
                @next_earliest_scan = @next_wanted_scan = Time.now
              else
                @next_earliest_scan = Time.now + MIN_SCAN_INTERVAL
                @next_wanted_scan = Time.now + heartbeat_interval
              end
            end
          end
        end
      end

      # Restarts the server monitor unless the current thread is alive.
      #
      # @example Restart the monitor.
      #   monitor.restart!
      #
      # @return [ Thread ] The thread the monitor runs on.
      #
      # @since 2.1.0
      def restart!
        if @thread && @thread.alive?
          @thread
        else
          run!
        end
      end

      def to_s
        "#<#{self.class.name}:#{object_id} #{server.address}>"
      end

      private

      def pre_stop
        server.scan_semaphore.signal
      end

      def do_scan
        begin
          monitoring.publish_heartbeat(server) do
            check
          end
        rescue => exc
          msg = "Error checking #{server.address}"
          Utils.warn_bg_exception(msg, exc,
            logger: options[:logger],
            log_prefix: options[:log_prefix],
            bg_error_backtrace: options[:bg_error_backtrace],
          )
          raise exc
        end
      end

      def check
        if @connection && @connection.pid != Process.pid
          log_warn("Detected PID change - Mongo client should have been reconnected (old pid #{@connection.pid}, new pid #{Process.pid})")
          @connection.disconnect!
          @connection = nil
        end

        if @connection
          result = server.round_trip_time_calculator.measure do
            begin
              doc = @connection.check_document
              cmd = Protocol::Query.new(
                Database::ADMIN,
                Database::COMMAND,
                doc,
                :limit => -1
              )
              message = @connection.dispatch_bytes(cmd.serialize.to_s)
              message.documents.first
            rescue Mongo::Error
              @connection.disconnect!
              @connection = nil
              raise
            end
          end
        else
          connection = Connection.new(server.address, options)
          connection.connect!
          result = server.round_trip_time_calculator.measure do
            connection.handshake!
          end
          @connection = connection
          if tv_doc = result['topologyVersion']
            # Successful response, server 4.4+
            create_push_monitor!(TopologyVersion.new(tv_doc))
            push_monitor.run!
          else
            # Failed response or pre-4.4 server
            stop_push_monitor!
          end
          result
        end
        result
      end
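      # Scheduling sketch for the scan loop above (values come from the
      # constants defined in this class): after a successful check at time t,
      #
      #   @next_earliest_scan = t + MIN_SCAN_INTERVAL   # t + 0.5
      #   @next_wanted_scan   = t + heartbeat_interval  # t + 10 by default
      #
      # server.scan_semaphore.signal can wake do_work early, but
      # throttle_scan_frequency! below still enforces the 0.5 second floor.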
      # @note If the system clock is set to a time in the past, this method
      #   can sleep for a very long time.
      def throttle_scan_frequency!
        delta = @next_earliest_scan - Time.now
        if delta > 0
          sleep(delta)
        end
      end
    end
  end
end

require 'mongo/server/monitor/connection'
require 'mongo/server/monitor/app_metadata'

# ===== mongo-ruby-driver-2.21.3/lib/mongo/server/monitor/app_metadata.rb =====

# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2018-2020 MongoDB Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

module Mongo
  class Server
    class Monitor

      # App metadata for monitoring sockets.
      #
      # It is easiest to start with the normal app metadata and remove
      # authentication-related bits.
      #
      # @api private
      class AppMetadata < Server::AppMetadata
        def initialize(options = {})
          super
          if instance_variable_defined?(:@request_auth_mech)
            remove_instance_variable(:@request_auth_mech)
          end
        end
      end
    end
  end
end

# ===== mongo-ruby-driver-2.21.3/lib/mongo/server/monitor/connection.rb =====

# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2015-2020 MongoDB Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
module Mongo
  class Server
    class Monitor

      # This class models the monitor connections and their behavior.
      #
      # @since 2.0.0
      # @api private
      class Connection < Server::ConnectionCommon
        include Loggable

        # Creates a new connection object to the specified target address
        # with the specified options.
        #
        # The constructor does not perform any I/O (and thus does not create
        # sockets nor handshakes); call connect! method on the connection
        # object to create the network connection.
        #
        # @note Monitoring connections do not authenticate.
        #
        # @param [ Mongo::Address ] address The address the connection is for.
        # @param [ Hash ] options The connection options.
        #
        # @option options [ Mongo::Server::Monitor::AppMetadata ] :app_metadata
        #   Metadata to use for handshake. If missing or nil, handshake will
        #   not be performed. Although a Mongo::Server::AppMetadata instance
        #   will also work, monitoring connections are meant to use
        #   Mongo::Server::Monitor::AppMetadata instances in order to omit
        #   performing SCRAM negotiation with the server, as monitoring
        #   sockets do not authenticate.
        # @option options [ Array ] :compressors A list of potential
        #   compressors to use, in order of preference. The driver chooses the
        #   first compressor that is also supported by the server. Currently the
        #   driver only supports 'zstd', 'snappy' and 'zlib'.
        # @option options [ Float ] :connect_timeout The timeout, in seconds,
        #   to use for network operations. This timeout is used for all
        #   socket operations rather than connect calls only, contrary to
        #   what the name implies.
        #
        # @since 2.0.0
        def initialize(address, options = {})
          @address = address
          @options = options.dup.freeze
          unless @app_metadata = options[:app_metadata]
            raise ArgumentError, 'App metadata is required'
          end
          @socket = nil
          @pid = Process.pid
          @compressor = nil
          @hello_ok = false
        end

        # @return [ Hash ] options The passed in options.
        attr_reader :options

        # @return [ Mongo::Address ] address The address to connect to.
        attr_reader :address

        # Returns the monitoring socket timeout.
        #
        # Note that monitoring connections use the connect timeout value as
        # the socket timeout value. See the Server Discovery and Monitoring
        # specification for details.
        #
        # @return [ Float ] The socket timeout in seconds.
        #
        # @since 2.4.3
        def socket_timeout
          options[:connect_timeout] || Server::CONNECT_TIMEOUT
        end

        # @return [ Integer ] server_connection_id The server connection id.
        attr_reader :server_connection_id

        # Sends a message and returns the result.
        #
        # @param [ Protocol::Message ] message The message to send.
        #
        # @return [ Protocol::Message ] The result.
        def dispatch(message)
          dispatch_bytes(message.serialize.to_s)
        end

        # Sends a preserialized message and returns the result.
        #
        # @param [ String ] bytes The serialized message to send.
        #
        # @option opts [ Numeric ] :read_socket_timeout The timeout to use for
        #   each read operation.
        #
        # @return [ Protocol::Message ] The result.
        def dispatch_bytes(bytes, **opts)
          write_bytes(bytes)
          read_response(
            socket_timeout: opts[:read_socket_timeout],
          )
        end

        def write_bytes(bytes)
          unless connected?
            raise ArgumentError, "Trying to dispatch on an unconnected connection #{self}"
          end

          add_server_connection_id do
            add_server_diagnostics do
              socket.write(bytes)
            end
          end
        end

        # @option opts [ Numeric ] :socket_timeout The timeout to use for
        #   each read operation.
        def read_response(**opts)
          unless connected?
            raise ArgumentError, "Trying to read on an unconnected connection #{self}"
          end

          add_server_connection_id do
            add_server_diagnostics do
              Protocol::Message.deserialize(socket,
                Protocol::Message::MAX_MESSAGE_SIZE,
                nil,
                **opts)
            end
          end
        end

        # Establishes a network connection to the target address.
        #
        # If the connection is already established, this method does nothing.
        #
        # @example Connect to the host.
        #   connection.connect!
        #
        # @note This method mutates the connection class by setting a socket if
        #   one previously did not exist.
        #
        # @return [ true ] If the connection succeeded.
        #
        # @since 2.0.0
        def connect!
          if @socket
            raise ArgumentError, 'Monitoring connection already connected'
          end

          @socket = add_server_diagnostics do
            address.socket(socket_timeout, ssl_options.merge(
              connection_address: address, monitor: true))
          end
          true
        end

        # Disconnect the connection.
        #
        # @example Disconnect from the host.
        #   connection.disconnect!
        #
        # @note This method mutates the connection by setting the socket to nil
        #   if the closing succeeded.
        #
        # @note This method accepts an options argument for compatibility with
        #   Server::Connections. However, all options are ignored.
        #
        # @return [ true ] If the disconnect succeeded.
        #
        # @since 2.0.0
        def disconnect!(options = nil)
          if socket
            socket.close rescue nil
            @socket = nil
          end
          true
        end
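        # Usage sketch (monitoring connections are driver-internal; `metadata`
        # is assumed to be a Monitor::AppMetadata instance):
        #
        #   conn = Mongo::Server::Monitor::Connection.new(address,
        #     app_metadata: metadata)
        #   conn.connect!
        #   reply = conn.handshake!   # => BSON::Document hello/legacy hello reply
        #   conn.disconnect!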
# # @return [BSON::Document] Handshake response from server # # @raise [Mongo::Error] If handshake failed. def handshake! command = handshake_command( handshake_document( @app_metadata, server_api: options[:server_api] ) ) payload = command.serialize.to_s message = dispatch_bytes(payload) result = Operation::Result.new(message) result.validate! reply = result.documents.first set_compressor!(reply) set_hello_ok!(reply) @server_connection_id = reply['connectionId'] reply rescue => exc msg = "Failed to handshake with #{address}" Utils.warn_bg_exception(msg, exc, logger: options[:logger], log_prefix: options[:log_prefix], bg_error_backtrace: options[:bg_error_backtrace], ) raise end # Build a document that should be used for connection check. # # @return [BSON::Document] Document that should be sent to a server # for connection check. # # @api private def check_document server_api = @app_metadata.server_api || options[:server_api] doc = if hello_ok? || server_api _doc = HELLO_DOC if server_api _doc = _doc.merge(Utils.transform_server_api(server_api)) end _doc else LEGACY_HELLO_DOC end # compressors must be set to maintain correct compression status # in the server description. See RUBY-2427 if compressors = options[:compressors] doc = doc.merge(compression: compressors) end doc end private def add_server_connection_id yield rescue Mongo::Error => e if server_connection_id note = "sconn:#{server_connection_id}" e.add_note(note) end raise e end # Update @hello_ok flag according to server reply to legacy hello # command. The flag will be set to true if connected server supports # hello command, otherwise the flag will be set to false. # # @param [ BSON::Document ] reply Server reply to legacy hello command. def set_hello_ok!(reply) @hello_ok = !!reply[:helloOk] end def hello_ok? @hello_ok end end end end end mongo-ruby-driver-2.21.3/lib/mongo/server/pending_connection.rb000066400000000000000000000306341505113246500245610ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2019-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Server # This class encapsulates connections during handshake and authentication. # # @api private class PendingConnection < ConnectionBase extend Forwardable def initialize(socket, server, monitoring, options = {}) @socket = socket @options = options @server = server @monitoring = monitoring @id = options[:id] end # @return [ Integer ] The ID for the connection. This is the same ID # as that of the regular Connection object for which this # PendingConnection instance was created. attr_reader :id def handshake_and_authenticate! speculative_auth_doc = nil if options[:user] || options[:auth_mech] # To create an Auth instance, we need to specify the mechanism, # but at this point we don't know the mechanism that ultimately # will be used (since this depends on the data returned by # the handshake, specifically server version). 
# However, we know that only 4.4+ servers support speculative
# authentication, and those servers also generally support
# SCRAM-SHA-256. We expect that user accounts created for 4.4+
# servers would generally allow SCRAM-SHA-256 authentication;
# user accounts migrated from pre-4.4 servers may only allow
# SCRAM-SHA-1. The use of SCRAM-SHA-256 by default is thus
# sensible, and it is also mandated by the speculative auth spec.
# If no mechanism was specified and we are talking to a 3.0+
# server, we'll send a speculative auth document, the server will
# ignore it and we'll perform authentication using an explicit
# command after having defaulted the mechanism later to CR.
# If no mechanism was specified and we are talking to a 4.4+
# server and the user account doesn't allow SCRAM-SHA-256, we will
# authenticate in a separate command with SCRAM-SHA-1 after
# going through SCRAM mechanism negotiation.
default_options = Options::Redacted.new(:auth_mech => :scram256) speculative_auth_user = Auth::User.new(default_options.merge(options)) speculative_auth = Auth.get(speculative_auth_user, self) speculative_auth_doc = speculative_auth.conversation.speculative_auth_document end
result = handshake!(speculative_auth_doc: speculative_auth_doc)
if description.unknown? raise Error::InternalDriverError, "Connection description cannot be unknown after successful handshake: #{description.inspect}" end
begin if speculative_auth_doc && (speculative_auth_result = result['speculativeAuthenticate']) unless description.features.scram_sha_1_enabled? raise Error::InvalidServerAuthResponse, "Speculative auth succeeded on a pre-3.0 server" end
case speculative_auth_user.mechanism when :mongodb_x509
# Done
# We default auth mechanism to scram256, but if user specified
# scram explicitly we may be able to authenticate speculatively
# with scram.
when :scram, :scram256 authenticate!( speculative_auth_client_nonce: speculative_auth.conversation.client_nonce, speculative_auth_mech: speculative_auth_user.mechanism, speculative_auth_result: speculative_auth_result, ) else raise Error::InternalDriverError, "Speculative auth unexpectedly succeeded for mechanism #{speculative_auth_user.mechanism.inspect}" end
elsif !description.arbiter? authenticate! end
rescue Mongo::Error, Mongo::Error::AuthError => exc exc.service_id = service_id raise end
if description.unknown? raise Error::InternalDriverError, "Connection description cannot be unknown after successful authentication: #{description.inspect}" end
if server.load_balancer? && !description.mongos? raise Error::BadLoadBalancerTarget, "Load-balanced operation requires being connected to a mongos, but the server at #{address.seed} reported itself as #{description.server_type.to_s.gsub('_', ' ')}" end
end
private
# Sends the hello command to the server, then receives and deserializes
# the response.
# # This method is extracted to be mocked in the tests.
# # @param [ Protocol::Message ] hello_command Command that should be sent to a server
# for handshake purposes.
# # @return [ Mongo::Protocol::Reply ] Deserialized server response.
def get_handshake_response(hello_command) @server.round_trip_time_calculator.measure do add_server_diagnostics do socket.write(hello_command.serialize.to_s) Protocol::Message.deserialize(socket, Protocol::Message::MAX_MESSAGE_SIZE) end end end
# @param [ BSON::Document | nil ] speculative_auth_doc The document to
# provide in speculativeAuthenticate field of handshake command.
# # @return [ BSON::Document ] The document of the handshake response for
# this particular connection.
def handshake!(speculative_auth_doc: nil) unless socket raise Error::InternalDriverError, "Cannot handshake because there is no usable socket (for #{address})" end
hello_command = handshake_command( handshake_document( app_metadata, speculative_auth_doc: speculative_auth_doc, load_balancer: server.load_balancer?, server_api: options[:server_api] ) )
doc = nil @server.handle_handshake_failure! do begin response = get_handshake_response(hello_command) result = Operation::Result.new([response]) result.validate! doc = result.documents.first rescue => exc msg = "Failed to handshake with #{address}" Utils.warn_bg_exception(msg, exc, logger: options[:logger], log_prefix: options[:log_prefix], bg_error_backtrace: options[:bg_error_backtrace], ) raise end end
if @server.force_load_balancer? doc['serviceId'] ||= "fake:#{rand(2**32-1)+1}" end
post_handshake( doc, @server.round_trip_time_calculator.average_round_trip_time, @server.round_trip_time_calculator.minimum_round_trip_time )
doc end
# @param [ String | nil ] speculative_auth_client_nonce The client
# nonce used in speculative auth on this connection that
# produced the specified speculative auth result.
# @param [ Symbol | nil ] speculative_auth_mech Auth mechanism used
# for speculative auth, if speculative auth succeeded. If speculative
# auth was not performed or it failed, this must be nil.
# @param [ BSON::Document | nil ] speculative_auth_result The
# value of speculativeAuthenticate field of hello response of
# the handshake on this connection.
def authenticate!( speculative_auth_client_nonce: nil, speculative_auth_mech: nil, speculative_auth_result: nil ) if options[:user] || options[:auth_mech] @server.handle_auth_failure! do begin auth = Auth.get( resolved_user(speculative_auth_mech: speculative_auth_mech), self, speculative_auth_client_nonce: speculative_auth_client_nonce, speculative_auth_result: speculative_auth_result, ) auth.login rescue => exc msg = "Failed to authenticate to #{address}" Utils.warn_bg_exception(msg, exc, logger: options[:logger], log_prefix: options[:log_prefix], bg_error_backtrace: options[:bg_error_backtrace], ) raise end end end end
def ensure_connected yield @socket end
# This is a separate method to keep the nesting level down.
# # @return [ Server::Description ] The server description calculated from
# the handshake response for this particular connection.
def post_handshake(response, average_rtt, minimum_rtt) if response["ok"] == 1
# Auth mechanism is entirely dependent on the contents of
# hello response *for this connection*.
# Hello received by the monitoring connection should advertise
# the same wire protocol, but if it doesn't, we use whatever
# the monitoring connection advertised for filling out the
# server description and whatever the non-monitoring connection
# (that's this one) advertised for performing auth on that
# connection.
@sasl_supported_mechanisms = response['saslSupportedMechs'] set_compressor!(response) else @sasl_supported_mechanisms = nil end
@description = Description.new( address, response, average_round_trip_time: average_rtt, load_balancer: server.load_balancer?, force_load_balancer: options[:connect] == :load_balanced, ).tap do |new_description| @server.cluster.run_sdam_flow(@server.description, new_description) end end
# The user that is going to be used for authentication. This user has the
# auth mechanism set and, if necessary, auth source.
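#
# For illustration (hypothetical values): given client options
# { user: 'alice' } with no :auth_mech, and a speculative_auth_mech of
# :scram256, the resolved user carries auth_mech: :scram256; with
# auth_mech: :mongodb_x509, the auth source is forced to '$external'.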
# # @param [ Symbol | nil ] speculative_auth_mech Auth mechanism used # for speculative auth, if speculative auth succeeded. If speculative # auth was not performed or it failed, this must be nil. # # @return [ Auth::User ] The resolved user. def resolved_user(speculative_auth_mech: nil) @resolved_user ||= begin unless options[:user] || options[:auth_mech] raise Mongo::Error, 'No authentication information specified in the client' end user_options = Options::Redacted.new( # When speculative auth is performed, we always use SCRAM-SHA-256. # At the same time we perform SCRAM mechanism negotiation in the # hello request. # If the credentials we are trying to authenticate with do not # map to an existing user, SCRAM mechanism negotiation will not # return anything which would cause the driver to use # SCRAM-SHA-1. However, on 4.4+ servers speculative auth would # succeed (technically just the first round-trip, not the entire # authentication flow) and we would be continuing it here; # in this case, we must use SCRAM-SHA-256 as the mechanism since # that is what the conversation was started with, even though # SCRAM mechanism negotiation did not return SCRAM-SHA-256 as a # valid mechanism to use for these credentials. :auth_mech => speculative_auth_mech || default_mechanism, ).merge(options) if user_options[:auth_mech] == :mongodb_x509 user_options[:auth_source] = '$external' end Auth::User.new(user_options) end end def default_mechanism if description.nil? raise Mongo::Error, 'Trying to query default mechanism when handshake has not completed' end if description.features.scram_sha_1_enabled? if @sasl_supported_mechanisms&.include?('SCRAM-SHA-256') :scram256 else :scram end else :mongodb_cr end end end end end mongo-ruby-driver-2.21.3/lib/mongo/server/push_monitor.rb000066400000000000000000000160351505113246500234430ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Server # A monitor utilizing server-pushed hello requests. # # When a Monitor handshakes with a 4.4+ server, it creates an instance # of PushMonitor. PushMonitor subsequently executes server-pushed hello # (i.e. awaited & exhausted hello) to receive topology changes from the # server as quickly as possible. The Monitor still monitors the server # for round-trip time calculations and to perform immediate checks as # requested by the application. # # @api private class PushMonitor extend Forwardable include BackgroundThread def initialize(monitor, topology_version, monitoring, **options) if topology_version.nil? 
raise ArgumentError, 'Topology version must be provided but it was nil' end
unless options[:app_metadata] raise ArgumentError, 'App metadata is required' end
unless options[:check_document] raise ArgumentError, 'Check document is required' end
@app_metadata = options[:app_metadata] @check_document = options[:check_document] @monitor = monitor @topology_version = topology_version @monitoring = monitoring @options = options @lock = Mutex.new end
# @return [ Monitor ] The monitor to which this push monitor is attached. attr_reader :monitor
# @return [ TopologyVersion ] Most recently received topology version. attr_reader :topology_version
# @return [ Monitoring ] monitoring The monitoring. attr_reader :monitoring
# @return [ Hash ] Push monitor options. attr_reader :options
# @return [ Server ] The server that is being monitored. def_delegator :monitor, :server
def start! @lock.synchronize do super end end
def stop! @lock.synchronize do @stop_requested = true if @connection
# Interrupt any in-progress exhausted hello reads by
# disconnecting the connection.
@connection.send(:socket).close rescue nil end end
super.tap do @lock.synchronize do if @connection @connection.disconnect! @connection = nil end end end end
def do_work @lock.synchronize do return if @stop_requested end
result = monitoring.publish_heartbeat(server, awaited: true) do check end new_description = monitor.run_sdam_flow(result, awaited: true)
# When hello fails due to a fail point, the response does not
# include topology version. In this case we need to keep our existing
# topology version so that we can resume monitoring.
# The spec does not appear to directly address this case but
# https://github.com/mongodb/specifications/blob/master/source/server-discovery-and-monitoring/server-monitoring.md#streamable-hello-or-legacy-hello-command
# says that topologyVersion should only be updated from successful
# hello responses.
if new_description.topology_version @topology_version = new_description.topology_version end
rescue IOError, SocketError, SystemCallError, Mongo::Error => exc stop_requested = @lock.synchronize { @stop_requested } if stop_requested
# Ignore the exception, see RUBY-2771.
return end
msg = "Error running awaited hello on #{server.address}" Utils.warn_bg_exception(msg, exc, logger: options[:logger], log_prefix: options[:log_prefix], bg_error_backtrace: options[:bg_error_backtrace], )
# If a request failed on a connection, stop push monitoring.
# In case the server is dead we don't want to have two connections
# trying to connect unsuccessfully at the same time.
stop!
# Request an immediate check on the monitor to get reinstated as
# soon as possible in case the server is actually alive.
server.scan_semaphore.signal end
def check @lock.synchronize do if @connection && @connection.pid != Process.pid log_warn("Detected PID change - Mongo client should have been reconnected (old pid #{@connection.pid}, new pid #{Process.pid})") @connection.disconnect! @connection = nil end end
@lock.synchronize do unless @connection @server_pushing = false connection = PushMonitor::Connection.new(server.address, options) connection.connect! @connection = connection end end
resp_msg = begin unless @server_pushing write_check_command end read_response rescue Mongo::Error @lock.synchronize do @connection.disconnect! @connection = nil end raise end
@server_pushing = resp_msg.flags.include?(:more_to_come) result = Operation::Result.new(resp_msg) result.validate!
result.documents.first end
def write_check_command document = @check_document.merge( topologyVersion: topology_version.to_doc, maxAwaitTimeMS: monitor.heartbeat_interval * 1000, ) command = Protocol::Msg.new( [:exhaust_allowed], {}, document.merge({'$db' => Database::ADMIN}) ) @lock.synchronize { @connection }.write_bytes(command.serialize.to_s) end
def read_response if timeout = options[:connect_timeout] if timeout < 0 raise Error::SocketTimeoutError, "Requested to read with a negative timeout: #{timeout}" elsif timeout > 0 timeout += options[:heartbeat_frequency] || Monitor::DEFAULT_HEARTBEAT_INTERVAL end end
# We set the timeout twice: once passed into read_socket which applies
# to each individual read operation, and again around the entire read.
Timeout.timeout(timeout, Error::SocketTimeoutError, "Failed to read an awaited hello response in #{timeout} seconds") do @lock.synchronize { @connection }.read_response(socket_timeout: timeout) end end
def to_s "#<#{self.class.name}:#{object_id} #{server.address}>" end
end end end
require 'mongo/server/push_monitor/connection' mongo-ruby-driver-2.21.3/lib/mongo/server/push_monitor/000077500000000000000000000000001505113246500231115ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/lib/mongo/server/push_monitor/connection.rb000066400000000000000000000015311505113246500255750ustar00rootroot00000000000000# frozen_string_literal: true
# rubocop:todo all
# Copyright (C) 2020 MongoDB Inc.
# # Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# # http://www.apache.org/licenses/LICENSE-2.0
# # Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
module Mongo class Server class PushMonitor
# @api private
class Connection < Server::Monitor::Connection def socket_timeout options[:socket_timeout] end end
end end end mongo-ruby-driver-2.21.3/lib/mongo/server/round_trip_time_calculator.rb000066400000000000000000000055301505113246500263270ustar00rootroot00000000000000# frozen_string_literal: true
# rubocop:todo all
# Copyright (C) 2018-2020 MongoDB Inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
module Mongo class Server
# @api private
class RoundTripTimeCalculator
# The weighting factor (alpha) for calculating the moving average
# round trip time.
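#
# As a worked illustration (hypothetical numbers): with a previous
# average of 0.090 seconds and a new sample of 0.050 seconds, the
# update below yields 0.2 * 0.050 + 0.8 * 0.090 = 0.082 seconds.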
RTT_WEIGHT_FACTOR = 0.2.freeze private_constant :RTT_WEIGHT_FACTOR RTT_SAMPLES_FOR_MINIMUM = 10 private_constant :RTT_SAMPLES_FOR_MINIMUM MIN_SAMPLES = 3 private_constant :MIN_SAMPLES def initialize @last_round_trip_time = nil @average_round_trip_time = nil @minimum_round_trip_time = 0 @lock = Mutex.new @rtts = [] end attr_reader :last_round_trip_time attr_reader :average_round_trip_time attr_reader :minimum_round_trip_time def measure start = Utils.monotonic_time begin rv = yield rescue Error::SocketError, Error::SocketTimeoutError # If we encountered a network error, the round-trip is not # complete and thus RTT for it does not make sense. raise rescue Error, Error::AuthError => exc # For other errors, RTT is valid. end last_rtt = Utils.monotonic_time - start # If hello fails, we need to return the last round trip time # because it is used in the heartbeat failed SDAM event, # but we must not update the round trip time recorded in the server. unless exc @last_round_trip_time = last_rtt @lock.synchronize do update_average_round_trip_time update_minimum_round_trip_time end end if exc raise exc else rv end end def update_average_round_trip_time @average_round_trip_time = if average_round_trip_time RTT_WEIGHT_FACTOR * last_round_trip_time + (1 - RTT_WEIGHT_FACTOR) * average_round_trip_time else last_round_trip_time end end def update_minimum_round_trip_time @rtts.push(last_round_trip_time) unless last_round_trip_time.nil? @minimum_round_trip_time = 0 and return if @rtts.size < MIN_SAMPLES @rtts.shift if @rtts.size > RTT_SAMPLES_FOR_MINIMUM @minimum_round_trip_time = @rtts.compact.min end end end end mongo-ruby-driver-2.21.3/lib/mongo/server_selector.rb000066400000000000000000000054021505113246500226110ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'mongo/server_selector/base' require 'mongo/server_selector/nearest' require 'mongo/server_selector/primary' require 'mongo/server_selector/primary_preferred' require 'mongo/server_selector/secondary' require 'mongo/server_selector/secondary_preferred' module Mongo # Functionality for getting an object able to select a server, given a preference. # # @since 2.0.0 module ServerSelector extend self # The max latency in seconds between the closest server and other servers # considered for selection. # # @since 2.0.0 LOCAL_THRESHOLD = 0.015.freeze # How long to block for server selection before throwing an exception. # # @since 2.0.0 SERVER_SELECTION_TIMEOUT = 30.freeze # The smallest allowed max staleness value, in seconds. # # @since 2.4.0 SMALLEST_MAX_STALENESS_SECONDS = 90 # Primary read preference. # # @since 2.1.0 PRIMARY = Options::Redacted.new(mode: :primary).freeze # Hash lookup for the selector classes based off the symbols # provided in configuration. 
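# For example (illustrative), a read preference of
# { mode: :secondary_preferred } is resolved to SecondaryPreferred
# through this table by ServerSelector.get.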
# # @since 2.0.0 PREFERENCES = { nearest: Nearest, primary: Primary, primary_preferred: PrimaryPreferred, secondary: Secondary, secondary_preferred: SecondaryPreferred }.freeze # Create a server selector object. # # @example Get a server selector object for selecting a secondary with # specific tag sets. # Mongo::ServerSelector.get(:mode => :secondary, :tag_sets => [{'dc' => 'nyc'}]) # # @param [ Hash ] preference The server preference. # # @since 2.0.0 def get(preference = {}) return preference if PREFERENCES.values.include?(preference.class) Mongo::Lint.validate_underscore_read_preference(preference) PREFERENCES.fetch((preference[:mode] || :primary).to_sym).new(preference) end # Returns the primary server selector. # # A call to this method is equivalent to `get(mode: :primary)`, except the # resulting server selector object is cached and not recreated each time. # # @api private def primary @primary ||= get(mode: :primary) end end end mongo-ruby-driver-2.21.3/lib/mongo/server_selector/000077500000000000000000000000001505113246500222635ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/lib/mongo/server_selector/base.rb000066400000000000000000000660461505113246500235360ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module ServerSelector class Base # Initialize the server selector. # # @example Initialize the selector. # Mongo::ServerSelector::Secondary.new(:tag_sets => [{'dc' => 'nyc'}]) # # @example Initialize the preference with no options. # Mongo::ServerSelector::Secondary.new # # @param [ Hash ] options The server preference options. # # @option options [ Integer ] :local_threshold The local threshold boundary for # nearest selection in seconds. # @option options [ Integer ] :max_staleness The maximum replication lag, # in seconds, that a secondary can suffer and still be eligible for a read. # A value of -1 is treated identically to nil, which is to not # have a maximum staleness. # @option options [ Hash | nil ] :hedge A Hash specifying whether to enable hedged # reads on the server. Hedged reads are not enabled by default. When # specifying this option, it must be in the format: { enabled: true }, # where the value of the :enabled key is a boolean value. # # @raise [ Error::InvalidServerPreference ] If tag sets are specified # but not allowed. # # @api private def initialize(options = nil) options = options ? options.dup : {} if options[:max_staleness] == -1 options.delete(:max_staleness) end @options = options @tag_sets = options[:tag_sets] || [] @max_staleness = options[:max_staleness] @hedge = options[:hedge] validate! end # @return [ Hash ] options The options. attr_reader :options # @return [ Array ] tag_sets The tag sets used to select servers. attr_reader :tag_sets # @return [ Integer ] max_staleness The maximum replication lag, in # seconds, that a secondary can suffer and still be eligible for a read. 
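#
# @example Hypothetical selector allowing up to 120 seconds of staleness.
#   Mongo::ServerSelector.get(mode: :secondary, max_staleness: 120)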
# # @since 2.4.0 attr_reader :max_staleness # @return [ Hash | nil ] hedge The document specifying whether to enable # hedged reads. attr_reader :hedge # Get the timeout for server selection. # # @example Get the server selection timeout, in seconds. # selector.server_selection_timeout # # @return [ Float ] The timeout. # # @since 2.0.0 # # @deprecated This setting is now taken from the cluster options when # a server is selected. Will be removed in version 3.0. def server_selection_timeout @server_selection_timeout ||= (options[:server_selection_timeout] || ServerSelector::SERVER_SELECTION_TIMEOUT) end # Get the local threshold boundary for nearest selection in seconds. # # @example Get the local threshold. # selector.local_threshold # # @return [ Float ] The local threshold. # # @since 2.0.0 # # @deprecated This setting is now taken from the cluster options when # a server is selected. Will be removed in version 3.0. def local_threshold @local_threshold ||= (options[:local_threshold] || ServerSelector::LOCAL_THRESHOLD) end # @api private def local_threshold_with_cluster(cluster) options[:local_threshold] || cluster.options[:local_threshold] || LOCAL_THRESHOLD end # Inspect the server selector. # # @example Inspect the server selector. # selector.inspect # # @return [ String ] The inspection. # # @since 2.2.0 def inspect "#<#{self.class.name}:0x#{object_id} tag_sets=#{tag_sets.inspect} max_staleness=#{max_staleness.inspect} hedge=#{hedge}>" end # Check equality of two server selectors. # # @example Check server selector equality. # preference == other # # @param [ Object ] other The other preference. # # @return [ true, false ] Whether the objects are equal. # # @since 2.0.0 def ==(other) name == other.name && hedge == other.hedge && max_staleness == other.max_staleness && tag_sets == other.tag_sets end # Select a server from the specified cluster, taking into account # mongos pinning for the specified session. # # If the session is given and has a pinned server, this server is the # only server considered for selection. If the server is of type mongos, # it is returned immediately; otherwise monitoring checks on this # server are initiated to update its status, and if the server becomes # a mongos within the server selection timeout, it is returned. # # If no session is given or the session does not have a pinned server, # normal server selection process is performed among all servers in the # specified cluster matching the preference of this server selector # object. Monitoring checks are initiated on servers in the cluster until # a suitable server is found, up to the server selection timeout. # # If a suitable server is not found within the server selection timeout, # this method raises Error::NoServerAvailable. # # @param [ Mongo::Cluster ] cluster The cluster from which to select # an eligible server. # @param [ true, false ] ping Whether to ping the server before selection. # Deprecated and ignored. # @param [ Session | nil ] session Optional session to take into account # for mongos pinning. Added in version 2.10.0. # @param [ true | false ] write_aggregation Whether we need a server that # supports writing aggregations (e.g. with $merge/$out) on secondaries. # @param [ Array ] deprioritized A list of servers that should # be selected from only if no other servers are available. This is # used to avoid selecting the same server twice in a row when # retrying a command. # @param [ Float | nil ] :timeout Timeout in seconds for the operation, # if any. 
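#
# @example Hypothetical selection of the nearest server (`cluster` is
#   assumed to be an existing Mongo::Cluster):
#   selector = Mongo::ServerSelector.get(mode: :nearest)
#   server = selector.select_server(cluster)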
# # @return [ Mongo::Server ] A server matching the server preference. # # @raise [ Error::NoServerAvailable ] No server was found matching the # specified preference / pinning requirement in the server selection # timeout. # @raise [ Error::LintError ] An unexpected condition was detected, and # lint mode is enabled. # # @since 2.0.0 def select_server( cluster, ping = nil, session = nil, write_aggregation: false, deprioritized: [], timeout: nil ) select_server_impl(cluster, ping, session, write_aggregation, deprioritized, timeout).tap do |server| if Lint.enabled? && !server.pool.ready? raise Error::LintError, 'Server selector returning a server with a pool which is not ready' end end end # Parameters and return values are the same as for select_server, only # the +timeout+ param is renamed to +csot_timeout+. private def select_server_impl(cluster, ping, session, write_aggregation, deprioritized, csot_timeout) if cluster.topology.is_a?(Cluster::Topology::LoadBalanced) return cluster.servers.first end timeout = cluster.options[:server_selection_timeout] || SERVER_SELECTION_TIMEOUT server_selection_timeout = if csot_timeout && csot_timeout > 0 [timeout, csot_timeout].min else timeout end # Special handling for zero timeout: if we have to select a server, # and the timeout is zero, fail immediately (since server selection # will take some non-zero amount of time in any case). if server_selection_timeout == 0 msg = "Failing server selection due to zero timeout. " + " Requested #{name} in cluster: #{cluster.summary}" raise Error::NoServerAvailable.new(self, cluster, msg) end deadline = Utils.monotonic_time + server_selection_timeout if session && session.pinned_server if Mongo::Lint.enabled? unless cluster.sharded? raise Error::LintError, "Session has a pinned server in a non-sharded topology: #{topology}" end end if !session.in_transaction? session.unpin end if server = session.pinned_server # Here we assume that a mongos stays in the topology indefinitely. # This will no longer be the case once SRV polling is implemented. unless server.mongos? while (time_remaining = deadline - Utils.monotonic_time) > 0 wait_for_server_selection(cluster, time_remaining) end unless server.mongos? msg = "The session being used is pinned to the server which is not a mongos: #{server.summary} " + "(after #{server_selection_timeout} seconds)" raise Error::NoServerAvailable.new(self, cluster, msg) end end return server end end if cluster.replica_set? validate_max_staleness_value_early! end if cluster.addresses.empty? if Lint.enabled? unless cluster.servers.empty? raise Error::LintError, "Cluster has no addresses but has servers: #{cluster.servers.map(&:inspect).join(', ')}" end end msg = "Cluster has no addresses, and therefore will never have a server" raise Error::NoServerAvailable.new(self, cluster, msg) end =begin Add this check in version 3.0.0 unless cluster.connected? msg = 'Cluster is disconnected' raise Error::NoServerAvailable.new(self, cluster, msg) end =end loop do if Lint.enabled? cluster.servers.each do |server| # TODO: Add this back in RUBY-3174. # if !server.unknown? && !server.connected? # raise Error::LintError, "Server #{server.summary} is known but is not connected" # end if !server.unknown? && !server.pool.ready? raise Error::LintError, "Server #{server.summary} is known but has non-ready pool" end end end server = try_select_server(cluster, write_aggregation: write_aggregation, deprioritized: deprioritized) if server unless cluster.topology.compatible? 
raise Error::UnsupportedFeatures, cluster.topology.compatibility_error.to_s end
if session && session.starting_transaction? && cluster.sharded? session.pin_to_server(server) end
return server end
cluster.scan!(false)
time_remaining = deadline - Utils.monotonic_time if time_remaining > 0 wait_for_server_selection(cluster, time_remaining)
# If we wait for server selection, perform another round of
# attempting to locate a suitable server. Otherwise server selection
# can raise NoServerAvailable even though the diagnostics report
# an available server of the requested type.
else break end end
msg = "No #{name} server" if is_a?(ServerSelector::Secondary) && !tag_sets.empty? msg += " with tag sets: #{tag_sets}" end
msg += " is available in cluster: #{cluster.summary} " + "with timeout=#{server_selection_timeout}, " + "LT=#{local_threshold_with_cluster(cluster)}" msg += server_selection_diagnostic_message(cluster) raise Error::NoServerAvailable.new(self, cluster, msg)
rescue Error::NoServerAvailable => e if session && session.in_transaction? && !session.committing_transaction? e.add_label('TransientTransactionError') end
if session && session.committing_transaction? e.add_label('UnknownTransactionCommitResult') end
raise e end
# Tries to find a suitable server, returns the server if one is available
# or nil if there isn't a suitable server.
# # @param [ Mongo::Cluster ] cluster The cluster from which to select
# an eligible server.
# @param [ true | false ] write_aggregation Whether we need a server that
# supports writing aggregations (e.g. with $merge/$out) on secondaries.
# @param [ Array ] deprioritized A list of servers that should
# be selected from only if no other servers are available. This is
# used to avoid selecting the same server twice in a row when
# retrying a command.
# # @return [ Server | nil ] A suitable server, if one exists.
# # @api private
def try_select_server(cluster, write_aggregation: false, deprioritized: []) servers = if write_aggregation && cluster.replica_set?
# 1. Check if ALL servers in cluster support secondary writes.
is_write_supported = cluster.servers.reduce(true) do |res, server| res && server.features.merge_out_on_secondary_enabled? end
if is_write_supported
# 2. If all servers support secondary writes, we respect read preference.
suitable_servers(cluster) else
# 3. Otherwise we fall back to the primary for the replica set.
[cluster.servers.detect(&:primary?)] end
else suitable_servers(cluster) end
# This list of servers may be ordered in a specific way
# by the selector (e.g. for secondary preferred, the first
# server may be a secondary and the second server may be primary)
# and we should take the first server here respecting the order.
server = suitable_server(servers, deprioritized)
if server if Lint.enabled?
# It is possible for a server to have a nil average RTT here
# because the ARTT comes from description which may be updated
# by a background thread while server selection is running.
# Currently lint mode is not a public feature, if/when this
# changes (https://jira.mongodb.org/browse/RUBY-1576) the
# requirement for ARTT to be not nil would need to be removed.
if server.average_round_trip_time.nil? raise Error::LintError, "Server #{server.address} has nil average rtt" end end end
server end
# Returns servers of acceptable types from the cluster.
# # Does not perform staleness validation, staleness filtering or
# latency filtering.
# # @param [ Cluster ] cluster The cluster.
# # @return [ Array ] The candidate servers.
# # @api private def candidates(cluster) servers = cluster.servers servers.each do |server| validate_max_staleness_support!(server) end if cluster.single? servers elsif cluster.sharded? servers elsif cluster.replica_set? select_in_replica_set(servers) else # Unknown cluster - no servers [] end end # Returns servers satisfying the server selector from the cluster. # # @param [ Cluster ] cluster The cluster. # # @return [ Array ] The suitable servers. # # @api private def suitable_servers(cluster) if cluster.single? candidates(cluster) elsif cluster.sharded? local_threshold = local_threshold_with_cluster(cluster) servers = candidates(cluster) near_servers(servers, local_threshold) elsif cluster.replica_set? validate_max_staleness_value!(cluster) candidates(cluster) else # Unknown cluster - no servers [] end end private # Returns a server from the list of servers that is suitable for # executing the operation. # # @param [ Array ] servers The candidate servers. # @param [ Array ] deprioritized A list of servers that should # be selected from only if no other servers are available. # # @return [ Server | nil ] The suitable server or nil if no suitable # server is available. def suitable_server(servers, deprioritized) preferred = servers - deprioritized if preferred.empty? servers.first else preferred.first end end # Convert this server preference definition into a format appropriate # for sending to a MongoDB server (i.e., as a command field). # # @return [ Hash ] The server preference formatted as a command field value. # # @since 2.0.0 def full_doc @full_doc ||= begin preference = { :mode => self.class.const_get(:SERVER_FORMATTED_NAME) } preference.update(tags: tag_sets) unless tag_sets.empty? preference.update(maxStalenessSeconds: max_staleness) if max_staleness preference.update(hedge: hedge) if hedge preference end end # Select the primary from a list of provided candidates. # # @param [ Array ] candidates List of candidate servers to select the # primary from. # # @return [ Array ] The primary. # # @since 2.0.0 def primary(candidates) candidates.select do |server| server.primary? end end # Select the secondaries from a list of provided candidates. # # @param [ Array ] candidates List of candidate servers to select the # secondaries from. # # @return [ Array ] The secondary servers. # # @since 2.0.0 def secondaries(candidates) matching_servers = candidates.select(&:secondary?) matching_servers = filter_stale_servers(matching_servers, primary(candidates).first) matching_servers = match_tag_sets(matching_servers) unless tag_sets.empty? # Per server selection spec the server selected MUST be a random # one matching staleness and latency requirements. # Selectors always pass the output of #secondaries to #nearest # which shuffles the server list, fulfilling this requirement. matching_servers end # Select the near servers from a list of provided candidates, taking the # local threshold into account. # # @param [ Array ] candidates List of candidate servers to select the # near servers from. # @param [ Integer ] local_threshold Local threshold. This parameter # will be required in driver version 3.0. # # @return [ Array ] The near servers. # # @since 2.0.0 def near_servers(candidates = [], local_threshold = nil) return candidates if candidates.empty? # Average RTT on any server may change at any time by the server # monitor's background thread. ARTT may also become nil if the # server is marked unknown. Take a snapshot of ARTTs for the duration # of this method. 
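# Illustrative arithmetic (assumed values): with snapshotted ARTTs of
# 0.010, 0.020 and 0.040 seconds and a local threshold of 0.015, the
# window below is 0.010 + 0.015 = 0.025, so only the first two servers
# survive the filter.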
candidates = candidates.map do |server| {server: server, artt: server.average_round_trip_time} end.reject do |candidate| candidate[:artt].nil? end
return candidates if candidates.empty?
nearest_candidate = candidates.min_by do |candidate| candidate[:artt] end
# Default for the legacy signature
local_threshold ||= self.local_threshold
threshold = nearest_candidate[:artt] + local_threshold
candidates.select do |candidate| candidate[:artt] <= threshold end.map do |candidate| candidate[:server] end.shuffle! end
# Select the servers matching the defined tag sets.
# # @param [ Array ] candidates List of candidate servers from which those
# matching the defined tag sets should be selected.
# # @return [ Array ] The servers matching the defined tag sets.
# # @since 2.0.0
def match_tag_sets(candidates) matches = [] tag_sets.find do |tag_set| matches = candidates.select { |server| server.matches_tag_set?(tag_set) } !matches.empty? end matches || [] end
def filter_stale_servers(candidates, primary = nil) return candidates unless @max_staleness
# last_scan is filled out by the Monitor, and can be nil if a server
# had its description manually set rather than being normally updated
# via the SDAM flow. We don't handle the possibility of a nil
# last_scan here.
if primary candidates.select do |server| validate_max_staleness_support!(server) staleness = (server.last_scan - server.last_write_date) - (primary.last_scan - primary.last_write_date) + server.cluster.heartbeat_interval staleness <= @max_staleness end else max_write_date = candidates.collect(&:last_write_date).max candidates.select do |server| validate_max_staleness_support!(server) staleness = max_write_date - server.last_write_date + server.cluster.heartbeat_interval staleness <= @max_staleness end end end
def validate! if !@tag_sets.all? { |set| set.empty? } && !tags_allowed? raise Error::InvalidServerPreference.new(Error::InvalidServerPreference::NO_TAG_SUPPORT) elsif @max_staleness && !max_staleness_allowed? raise Error::InvalidServerPreference.new(Error::InvalidServerPreference::NO_MAX_STALENESS_SUPPORT) end
if @hedge unless hedge_allowed? raise Error::InvalidServerPreference.new(Error::InvalidServerPreference::NO_HEDGE_SUPPORT) end
unless @hedge.is_a?(Hash) && @hedge.key?(:enabled) && [true, false].include?(@hedge[:enabled]) raise Error::InvalidServerPreference.new( "`hedge` value (#{hedge}) is invalid - hedge must be a Hash in the " \ "format { enabled: true }" ) end end end
def validate_max_staleness_support!(server) if @max_staleness && !server.features.max_staleness_enabled? raise Error::InvalidServerPreference.new(Error::InvalidServerPreference::NO_MAX_STALENESS_WITH_LEGACY_SERVER) end end
def validate_max_staleness_value_early!
if @max_staleness unless @max_staleness >= SMALLEST_MAX_STALENESS_SECONDS msg = "`max_staleness` value (#{@max_staleness}) is too small - it must be at least " + "`Mongo::ServerSelector::SMALLEST_MAX_STALENESS_SECONDS` (#{ServerSelector::SMALLEST_MAX_STALENESS_SECONDS})" raise Error::InvalidServerPreference.new(msg) end end end def validate_max_staleness_value!(cluster) if @max_staleness heartbeat_interval = cluster.heartbeat_interval unless @max_staleness >= [ SMALLEST_MAX_STALENESS_SECONDS, min_cluster_staleness = heartbeat_interval + Cluster::IDLE_WRITE_PERIOD_SECONDS, ].max msg = "`max_staleness` value (#{@max_staleness}) is too small - it must be at least " + "`Mongo::ServerSelector::SMALLEST_MAX_STALENESS_SECONDS` (#{ServerSelector::SMALLEST_MAX_STALENESS_SECONDS}) and (the cluster's heartbeat_frequency " + "setting + `Mongo::Cluster::IDLE_WRITE_PERIOD_SECONDS`) (#{min_cluster_staleness})" raise Error::InvalidServerPreference.new(msg) end end end # Waits for server state changes in the specified cluster. # # If the cluster has a server selection semaphore, waits on that # semaphore up to the specified remaining time. Any change in server # state resulting from SDAM will immediately wake up this method and # cause it to return. # # If the cluster does not have a server selection semaphore, waits # the smaller of 0.25 seconds and the specified remaining time. # This functionality is provided for backwards compatibility only for # applications directly invoking the server selection process. # If lint mode is enabled and the cluster does not have a server # selection semaphore, Error::LintError will be raised. # # @param [ Cluster ] cluster The cluster to wait for. # @param [ Numeric ] time_remaining Maximum time to wait, in seconds. def wait_for_server_selection(cluster, time_remaining) if cluster.server_selection_semaphore # Since the semaphore may have been signaled between us checking # the servers list earlier and the wait call below, we should not # wait for the full remaining time - wait for up to 0.5 second, then # recheck the state. cluster.server_selection_semaphore.wait([time_remaining, 0.5].min) else if Lint.enabled? raise Error::LintError, 'Waiting for server selection without having a server selection semaphore' end sleep [time_remaining, 0.25].min end end # Creates a diagnostic message when server selection fails. # # The diagnostic message includes the following information, as applicable: # # - Servers having dead monitor threads # - Cluster is disconnected # # If none of the conditions for diagnostic messages apply, an empty string # is returned. # # @param [ Cluster ] cluster The cluster on which server selection was # performed. # # @return [ String ] The diagnostic message. def server_selection_diagnostic_message(cluster) msg = '' dead_monitors = [] cluster.servers_list.each do |server| thread = server.monitor.instance_variable_get('@thread') if thread.nil? || !thread.alive? dead_monitors << server end end if dead_monitors.any? msg += ". The following servers have dead monitor threads: #{dead_monitors.map(&:summary).join(', ')}" end unless cluster.connected? msg += ". The cluster is disconnected (client may have been closed)" end msg end end end end mongo-ruby-driver-2.21.3/lib/mongo/server_selector/nearest.rb000066400000000000000000000061441505113246500242560ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. 
# # Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# # http://www.apache.org/licenses/LICENSE-2.0
# # Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
module Mongo module ServerSelector
# Encapsulates specifications for selecting near servers given a list
# of candidates.
# # @since 2.0.0
class Nearest < Base
# Name of this read preference in the server's format.
# # @since 2.5.0
SERVER_FORMATTED_NAME = 'nearest'.freeze
# Get the name of the server mode type.
# # @example Get the name of the server mode for this preference.
# preference.name
# # @return [ Symbol ] :nearest
# # @since 2.0.0
def name :nearest end
# Whether the secondaryOk bit should be set on wire protocol messages.
# I.e. whether the operation can be performed on a secondary server.
# # @return [ true ] true
# @api private
def secondary_ok? true end
# Whether tag sets are allowed to be defined for this server preference.
# # @return [ true ] true
# # @since 2.0.0
def tags_allowed? true end
# Whether the hedge option is allowed to be defined for this server preference.
# # @return [ true ] true
def hedge_allowed? true end
# Convert this server preference definition into a format appropriate
# for sending to a MongoDB server (i.e., as a command field).
# # @return [ Hash ] The server preference formatted as a command field value.
# # @since 2.0.0
def to_doc full_doc end
# Convert this server preference definition into a value appropriate
# for sending to a mongos.
# # This method may return nil if the read preference should not be sent
# to a mongos.
# # @return [ Hash | nil ] The server preference converted to a mongos
# command field value.
# # @since 2.0.0
alias :to_mongos :to_doc
private
# Select the near servers taking into account any defined tag sets and
# local threshold between the nearest server and other servers.
# # @return [ Array ] The nearest servers from the list of candidates.
# # @since 2.0.0
def select_in_replica_set(candidates) matching_servers = filter_stale_servers(candidates, primary(candidates).first) matching_servers = match_tag_sets(matching_servers) unless tag_sets.empty? near_servers(matching_servers) end
def max_staleness_allowed? true end
end end end mongo-ruby-driver-2.21.3/lib/mongo/server_selector/primary.rb000066400000000000000000000056111505113246500242760ustar00rootroot00000000000000# frozen_string_literal: true
# rubocop:todo all
# Copyright (C) 2014-2020 MongoDB Inc.
# # Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# # http://www.apache.org/licenses/LICENSE-2.0
# # Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
module Mongo module ServerSelector
# Encapsulates specifications for selecting the primary server given a list
# of candidates.
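# (For example, ServerSelector.get(mode: :primary) builds an instance of
# this class; as an illustration, note that to_mongos below returns nil
# because mongos defaults to primary reads when no preference is
# attached.)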
# # @since 2.0.0
class Primary < Base
# Name of this read preference in the server's format.
# # @since 2.5.0
SERVER_FORMATTED_NAME = 'primary'.freeze
# Get the name of the server mode type.
# # @example Get the name of the server mode for this preference.
# preference.name
# # @return [ Symbol ] :primary
# # @since 2.0.0
def name :primary end
# Whether the secondaryOk bit should be set on wire protocol messages.
# I.e. whether the operation can be performed on a secondary server.
# # @return [ false ] false
# @api private
def secondary_ok? false end
# Whether tag sets are allowed to be defined for this server preference.
# # @return [ false ] false
# # @since 2.0.0
def tags_allowed? false end
# Whether the hedge option is allowed to be defined for this server preference.
# # @return [ false ] false
def hedge_allowed? false end
# Convert this server preference definition into a format appropriate
# for sending to a MongoDB server (i.e., as a command field).
# # @return [ Hash ] The server preference formatted as a command field value.
# # @since 2.5.0
def to_doc { mode: SERVER_FORMATTED_NAME } end
# Convert this server preference definition into a value appropriate
# for sending to a mongos.
# # This method may return nil if the read preference should not be sent
# to a mongos.
# # @return [ Hash | nil ] The server preference converted to a mongos
# command field value.
# # @since 2.0.0
def to_mongos nil end
private
# Select the primary server from a list of candidates.
# # @return [ Array ] The primary server from the list of candidates.
# # @since 2.0.0
def select_in_replica_set(candidates) primary(candidates) end
def max_staleness_allowed? false end
end end end mongo-ruby-driver-2.21.3/lib/mongo/server_selector/primary_preferred.rb000066400000000000000000000062101505113246500263300ustar00rootroot00000000000000# frozen_string_literal: true
# rubocop:todo all
# Copyright (C) 2014-2020 MongoDB Inc.
# # Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# # http://www.apache.org/licenses/LICENSE-2.0
# # Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
module Mongo module ServerSelector
# Encapsulates specifications for selecting servers, with the
# primary preferred, given a list of candidates.
# # @since 2.0.0
class PrimaryPreferred < Base
# Name of this read preference in the server's format.
# # @since 2.5.0
SERVER_FORMATTED_NAME = 'primaryPreferred'.freeze
# Get the name of the server mode type.
# # @example Get the name of the server mode for this preference.
# preference.name
# # @return [ Symbol ] :primary_preferred
# # @since 2.0.0
def name :primary_preferred end
# Whether the secondaryOk bit should be set on wire protocol messages.
# I.e. whether the operation can be performed on a secondary server.
# # @return [ true ] true
# @api private
def secondary_ok? true end
# Whether tag sets are allowed to be defined for this server preference.
# # @return [ true ] true
# # @since 2.0.0
def tags_allowed? true end
# Whether the hedge option is allowed to be defined for this server preference.
# # @return [ true ] true
def hedge_allowed?
true end
# Convert this server preference definition into a format appropriate
# for sending to a MongoDB server (i.e., as a command field).
# # @return [ Hash ] The server preference formatted as a command field value.
# # @since 2.0.0
def to_doc full_doc end
# Convert this server preference definition into a value appropriate
# for sending to a mongos.
# # This method may return nil if the read preference should not be sent
# to a mongos.
# # @return [ Hash | nil ] The server preference converted to a mongos
# command field value.
# # @since 2.0.0
alias :to_mongos :to_doc
private
# Select servers taking into account any defined tag sets and
# local threshold, with the primary preferred.
# # @return [ Array ] A list of servers matching tag sets and acceptable
# latency with the primary preferred.
# # @since 2.0.0
def select_in_replica_set(candidates) primaries = primary(candidates) if primaries.first primaries else near_servers(secondaries(candidates)) end end
def max_staleness_allowed? true end
end end end mongo-ruby-driver-2.21.3/lib/mongo/server_selector/secondary.rb000066400000000000000000000057351505113246500246070ustar00rootroot00000000000000# frozen_string_literal: true
# rubocop:todo all
# Copyright (C) 2014-2020 MongoDB Inc.
# # Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# # http://www.apache.org/licenses/LICENSE-2.0
# # Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
module Mongo module ServerSelector
# Encapsulates specifications for selecting secondary servers given a list
# of candidates.
# # @since 2.0.0
class Secondary < Base
# Name of this read preference in the server's format.
# # @since 2.5.0
SERVER_FORMATTED_NAME = 'secondary'.freeze
# Get the name of the server mode type.
# # @example Get the name of the server mode for this preference.
# preference.name
# # @return [ Symbol ] :secondary
# # @since 2.0.0
def name :secondary end
# Whether the secondaryOk bit should be set on wire protocol messages.
# I.e. whether the operation can be performed on a secondary server.
# # @return [ true ] true
# @api private
def secondary_ok? true end
# Whether tag sets are allowed to be defined for this server preference.
# # @return [ true ] true
# # @since 2.0.0
def tags_allowed? true end
# Whether the hedge option is allowed to be defined for this server preference.
# # @return [ true ] true
def hedge_allowed? true end
# Convert this server preference definition into a format appropriate
# for sending to a MongoDB server (i.e., as a command field).
# # @return [ Hash ] The server preference formatted as a command field value.
# # @since 2.0.0
def to_doc full_doc end
# Convert this server preference definition into a value appropriate
# for sending to a mongos.
# # This method may return nil if the read preference should not be sent
# to a mongos.
# # @return [ Hash | nil ] The server preference converted to a mongos
# command field value.
# # @since 2.0.0
alias :to_mongos :to_doc
private
# Select the secondary servers taking into account any defined tag sets and
# local threshold between the nearest secondary and other secondaries.
      #
      # @return [ Array ] The secondary servers from the list of candidates.
      #
      # @since 2.0.0
      def select_in_replica_set(candidates)
        near_servers(secondaries(candidates))
      end

      def max_staleness_allowed?
        true
      end
    end
  end
end

mongo-ruby-driver-2.21.3/lib/mongo/server_selector/secondary_preferred.rb

# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2014-2020 MongoDB Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

module Mongo
  module ServerSelector

    # Encapsulates specifications for selecting servers, with
    # secondaries preferred, given a list of candidates.
    #
    # @since 2.0.0
    class SecondaryPreferred < Base

      # Name of this read preference in the server's format.
      #
      # @since 2.5.0
      SERVER_FORMATTED_NAME = 'secondaryPreferred'.freeze

      # Get the name of the server mode type.
      #
      # @example Get the name of the server mode for this preference.
      #   preference.name
      #
      # @return [ Symbol ] :secondary_preferred
      #
      # @since 2.0.0
      def name
        :secondary_preferred
      end

      # Whether the secondaryOk bit should be set on wire protocol messages.
      # I.e. whether the operation can be performed on a secondary server.
      #
      # @return [ true ] true
      # @api private
      def secondary_ok?
        true
      end

      # Whether tag sets are allowed to be defined for this server preference.
      #
      # @return [ true ] true
      #
      # @since 2.0.0
      def tags_allowed?
        true
      end

      # Whether the hedge option is allowed to be defined for this server preference.
      #
      # @return [ true ] true
      def hedge_allowed?
        true
      end

      # Convert this server preference definition into a format appropriate
      # for sending to a MongoDB server (i.e., as a command field).
      #
      # @return [ Hash ] The server preference formatted as a command field value.
      #
      # @since 2.0.0
      def to_doc
        full_doc
      end

      # Convert this server preference definition into a value appropriate
      # for sending to a mongos.
      #
      # This method may return nil if the read preference should not be sent
      # to a mongos.
      #
      # @return [ Hash | nil ] The server preference converted to a mongos
      #   command field value.
      #
      # @since 2.0.0
      def to_mongos
        # Always send the read preference to mongos: DRIVERS-1642.
        to_doc
      end

      private

      # Select servers taking into account any defined tag sets and
      # local threshold, with secondaries.
      #
      # @return [ Array ] A list of servers matching tag sets and acceptable
      #   latency with secondaries preferred.
      #
      # @since 2.0.0
      def select_in_replica_set(candidates)
        near_servers(secondaries(candidates)) + primary(candidates)
      end

      def max_staleness_allowed?
        true
      end
    end
  end
end
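
# Contrast sketch (illustrative, not part of the original file): unlike
# Primary#to_mongos, which returns nil, SecondaryPreferred always forwards
# its read preference to mongos (see DRIVERS-1642 above):
#
#   selector = Mongo::ServerSelector::SecondaryPreferred.new
#   selector.to_mongos # => { mode: 'secondaryPreferred' }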

mongo-ruby-driver-2.21.3/lib/mongo/session.rb

# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2017-2020 MongoDB Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

require 'mongo/session/session_pool'
require 'mongo/session/server_session'

module Mongo

  # A logical session representing a set of sequential operations executed
  # by an application that are related in some way.
  #
  # @note Session objects are not thread-safe. An application may use a session
  #   from only one thread or process at a time.
  #
  # @since 2.5.0
  class Session
    extend Forwardable
    include Retryable
    include Loggable
    include ClusterTime::Consumer

    # Initialize a Session.
    #
    # A session can be explicit or implicit. Lifetime of explicit sessions is
    # managed by the application - applications explicitly create such sessions
    # and explicitly end them. Implicit sessions are created automatically by
    # the driver when sending operations to servers that support sessions
    # (3.6+), and their lifetime is managed by the driver.
    #
    # When an implicit session is created, it cannot have a server session
    # associated with it. The server session will be checked out of the
    # session pool when an operation using this session is actually executed.
    # When an explicit session is created, it must reference a server session
    # that is already allocated.
    #
    # @note Applications should use Client#start_session to begin a session.
    #   This constructor is for internal driver use only.
    #
    # @param [ ServerSession | nil ] server_session The server session this session is associated with.
    #   If the :implicit option is true, this must be nil.
    # @param [ Client ] client The client through which this session is created.
    # @param [ Hash ] options The options for this session.
    #
    # @option options [ true|false ] :causal_consistency Whether to enable
    #   causal consistency for this session.
    # @option options [ Integer ] :default_timeout_ms The timeoutMS value for
    #   the following operations executed on the session:
    #   - commitTransaction
    #   - abortTransaction
    #   - withTransaction
    #   - endSession
    # @option options [ Hash ] :default_transaction_options Options to pass
    #   to start_transaction by default, can contain any of the options that
    #   start_transaction accepts.
    # @option options [ true|false ] :implicit For internal driver use only -
    #   specifies whether the session is implicit. If this is true, the server_session
    #   will be nil. This is done so that the server session is only checked
    #   out after the connection is checked out.
    # @option options [ Hash ] :read_preference The read preference options hash,
    #   with the following optional keys:
    #   - *:mode* -- the read preference as a string or symbol; valid values are
    #     *:primary*, *:primary_preferred*, *:secondary*, *:secondary_preferred*
    #     and *:nearest*.
    # @option options [ true | false ] :snapshot Set up the session for
    #   snapshot reads.
    #
    # @since 2.5.0
    # @api private
    def initialize(server_session, client, options = {})
      if options[:causal_consistency] && options[:snapshot]
        raise ArgumentError,
          ':causal_consistency and :snapshot options cannot be both set on a session'
      end

      if options[:implicit]
        unless server_session.nil?
          raise ArgumentError,
            'Implicit session cannot reference server session during construction'
        end
      else
        if server_session.nil?
          raise ArgumentError,
            'Explicit session must reference server session during construction'
        end
      end

      @server_session = server_session
      options = options.dup

      @client = client.use(:admin)
      @options = options.dup.freeze

      @cluster_time = nil
      @state = NO_TRANSACTION_STATE
      @with_transaction_deadline = nil
    end

    # @return [ Hash ] The options for this session.
    #
    # @since 2.5.0
    attr_reader :options

    # @return [ Client ] The client through which this session was created.
    #
    # @since 2.5.1
    attr_reader :client

    def cluster
      @client.cluster
    end

    # @return [ true | false ] Whether the session is configured for snapshot
    #   reads.
    def snapshot?
      !!options[:snapshot]
    end

    # @return [ BSON::Timestamp ] The latest seen operation time for this session.
    #
    # @since 2.5.0
    attr_reader :operation_time

    # Sets the dirty state to the given value for the underlying server
    # session. If there is no server session, this does nothing.
    #
    # @param [ true | false ] mark whether to mark the server session as
    #   dirty, or not.
    def dirty!(mark = true)
      @server_session&.dirty!(mark)
    end

    # @return [ true | false | nil ] whether the underlying server session is
    #   dirty. If no server session exists for this session, returns nil.
    #
    # @api private
    def dirty?
      @server_session&.dirty?
    end

    # @return [ Hash ] The options for the transaction currently being executed
    #   on this session.
    #
    # @since 2.6.0
    def txn_options
      @txn_options or raise ArgumentError, "There is no active transaction"
    end

    # Is this session an implicit one (not user-created).
    #
    # @example Is the session implicit?
    #   session.implicit?
    #
    # @return [ true, false ] Whether this session is implicit.
    #
    # @since 2.5.1
    def implicit?
      @implicit ||= !!(@options.key?(:implicit) && @options[:implicit] == true)
    end

    # Is this session an explicit one (i.e. user-created).
    #
    # @example Is the session explicit?
    #   session.explicit?
    #
    # @return [ true, false ] Whether this session is explicit.
    #
    # @since 2.5.2
    def explicit?
      !implicit?
    end

    # Whether reads executed with this session can be retried according to
    # the modern retryable reads specification.
    #
    # If this method returns true, the modern retryable reads have been
    # requested by the application. If the server selected for a read operation
    # supports modern retryable reads, they will be used for that particular
    # operation. If the server selected for a read operation does not support
    # modern retryable reads, the read will not be retried.
    #
    # If this method returns false, legacy retryable reads have been requested
    # by the application. Legacy retryable read logic will be used regardless
    # of server version of the server(s) that the client is connected to.
    # The number of read retries is given by :max_read_retries client option,
    # which is 1 by default and can be set to 0 to disable legacy read retries.
    #
    # @api private
    def retry_reads?
      client.options[:retry_reads] != false
    end

    # Will writes executed with this session be retried.
    #
    # @example Will writes be retried.
    #   session.retry_writes?
    #
    # @return [ true, false ] If writes will be retried.
    #
    # @note Retryable writes are only available on server versions at least 3.6
    #   and with sharded clusters, replica sets, or load-balanced topologies.
    #
    # @since 2.5.0
    def retry_writes?
      !!client.options[:retry_writes] && (cluster.replica_set? ||
        cluster.sharded? || cluster.load_balanced?)
    end
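
    # Configuration sketch (illustrative, not part of the original file;
    # the URI and database name are placeholders):
    #
    #   client = Mongo::Client.new('mongodb://localhost:27017/test',
    #     retry_writes: true)
    #   session = client.start_session
    #   session.retry_writes? # => true on replica set, sharded cluster, or
    #                         #    load-balanced topologies; false on standalone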

    # Get the read preference the session will use in the currently
    # active transaction.
    #
    # This is a driver style hash with underscore keys.
    #
    # @example Get the transaction's read preference
    #   session.txn_read_preference
    #
    # @return [ Hash ] The read preference of the transaction.
    #
    # @since 2.6.0
    def txn_read_preference
      rp = txn_options[:read] || @client.read_preference
      Mongo::Lint.validate_underscore_read_preference(rp)
      rp
    end

    # Whether this session has ended.
    #
    # @example
    #   session.ended?
    #
    # @return [ true, false ] Whether the session has ended.
    #
    # @since 2.5.0
    def ended?
      !!@ended
    end

    # Get the server session id of this session, if the session has not been
    # ended. If the session had been ended, raises Error::SessionEnded.
    #
    # @return [ BSON::Document ] The server session id.
    #
    # @raise [ Error::SessionEnded ] If the session had been ended.
    #
    # @since 2.5.0
    def session_id
      if ended?
        raise Error::SessionEnded
      end

      # An explicit session will always have a session_id, because during
      # construction a server session must be provided. An implicit session
      # will not have a session_id until materialized, thus calls to
      # session_id might fail. An application should not have an opportunity
      # to experience this failure because an implicit session shouldn't be
      # accessible to applications due to its lifetime being constrained to
      # operation execution, which is done entirely by the driver.
      unless materialized?
        raise Error::SessionNotMaterialized
      end

      @server_session.session_id
    end

    # @return [ Server | nil ] The server (which should be a mongos) that this
    #   session is pinned to, if any.
    #
    # @api private
    attr_reader :pinned_server

    # @return [ Integer | nil ] The connection global id that this session is pinned to,
    #   if any.
    #
    # @api private
    attr_reader :pinned_connection_global_id

    # @return [ BSON::Document | nil ] Recovery token for the sharded
    #   transaction being executed on this session, if any.
    #
    # @api private
    attr_accessor :recovery_token

    # Error message indicating that the session was retrieved from a client with a different cluster than that of the
    # client through which it is currently being used.
    #
    # @since 2.5.0
    MISMATCHED_CLUSTER_ERROR_MSG = 'The configuration of the client used to create this session does not match that ' +
      'of the client owning this operation. Please only use this session for operations through its parent ' +
      'client.'.freeze

    # Error message describing that the session cannot be used because it has already been ended.
    #
    # @since 2.5.0
    SESSION_ENDED_ERROR_MSG = 'This session has ended and cannot be used. Please create a new one.'.freeze

    # Error message describing that sessions are not supported by the server version.
    #
    # @since 2.5.0
    # @deprecated
    SESSIONS_NOT_SUPPORTED = 'Sessions are not supported by the connected servers.'.freeze

    # Note: SESSIONS_NOT_SUPPORTED is used by Mongoid - do not remove from driver.

    # The state of a session in which the last operation was not related to
    # any transaction or no operations have yet occurred.
    #
    # @since 2.6.0
    NO_TRANSACTION_STATE = :no_transaction

    # The state of a session in which a user has initiated a transaction but
    # no operations within the transaction have occurred yet.
    #
    # @since 2.6.0
    STARTING_TRANSACTION_STATE = :starting_transaction

    # The state of a session in which a transaction has been started and at
    # least one operation has occurred, but the transaction has not yet been
    # committed or aborted.
    #
    # @since 2.6.0
    TRANSACTION_IN_PROGRESS_STATE = :transaction_in_progress

    # The state of a session in which the last operation executed was a transaction commit.
    #
    # @since 2.6.0
    TRANSACTION_COMMITTED_STATE = :transaction_committed

    # The state of a session in which the last operation executed was a transaction abort.
    #
    # @since 2.6.0
    TRANSACTION_ABORTED_STATE = :transaction_aborted

    # @api private
    UNLABELED_WRITE_CONCERN_CODES = [
      79,  # UnknownReplWriteConcern
      100, # CannotSatisfyWriteConcern,
    ].freeze

    # Get a formatted string for use in inspection.
    #
    # @example Inspect the session object.
    #   session.inspect
    #
    # @return [ String ] The session inspection.
    #
    # @since 2.5.0
    def inspect
      "#<Mongo::Session:0x#{object_id} session_id=#{session_id} options=#{@options}>"
    end

    # End this session.
    #
    # If there is an in-progress transaction on this session, the transaction
    # is aborted. The server session associated with this session is returned
    # to the server session pool. Finally, this session is marked ended and
    # is no longer usable.
    #
    # If this session is already ended, this method does nothing.
    #
    # Note that this method does not directly issue an endSessions command
    # to this server, contrary to what its name might suggest.
    #
    # @example
    #   session.end_session
    #
    # @return [ nil ] Always nil.
    #
    # @since 2.5.0
    def end_session
      if !ended? && @client
        if within_states?(TRANSACTION_IN_PROGRESS_STATE)
          begin
            abort_transaction
          rescue Mongo::Error, Error::AuthError
          end
        end

        if @server_session
          @client.cluster.session_pool.checkin(@server_session)
        end
      end
    ensure
      @server_session = nil
      @ended = true
    end

    # Executes the provided block in a transaction, retrying as necessary.
    #
    # Returns the return value of the block.
    #
    # Exact number of retries and when they are performed are implementation
    # details of the driver; the provided block should be idempotent, and
    # should be prepared to be called more than once. The driver may retry
    # the commit command within an active transaction or it may repeat the
    # transaction and invoke the block again, depending on the error
    # encountered if any. Note also that the retries may be executed against
    # different servers.
    #
    # Transactions cannot be nested - InvalidTransactionOperation will be raised
    # if this method is called when the session already has an active transaction.
    #
    # Exceptions raised by the block which are not derived from Mongo::Error
    # stop processing, abort the transaction and are propagated out of
    # with_transaction. Exceptions derived from Mongo::Error may be
    # handled by with_transaction, resulting in retries of the process.
    #
    # Currently, with_transaction will retry commits and block invocations
    # until at least 120 seconds have passed since with_transaction started
    # executing. This timeout is not configurable and may change in a future
    # driver version.
    #
    # @note with_transaction contains a loop, therefore if with_transaction
    #   itself is placed in a loop, its block should not call next or break to
    #   control the outer loop because this will instead affect the loop in
    #   with_transaction. The driver will warn and abort the transaction
    #   if it detects this situation.
    #
    # @example Execute a statement in a transaction
    #   session.with_transaction(write_concern: {w: :majority}) do
    #     collection.update_one({ id: 3 }, { '$set' => { status: 'Inactive'} },
    #       session: session)
    #
    #   end
    #
    # @example Execute a statement in a transaction, limiting total time consumed
    #   Timeout.timeout(5) do
    #     session.with_transaction(write_concern: {w: :majority}) do
    #       collection.update_one({ id: 3 }, { '$set' => { status: 'Inactive'} },
    #         session: session)
    #
    #     end
    #   end
    #
    # @param [ Hash ] options The options for the transaction being started.
    #   These are the same options that start_transaction accepts.
    #
    # @raise [ Error::InvalidTransactionOperation ] If a transaction is already in
    #   progress or if the write concern is unacknowledged.
    #
    # @since 2.7.0
    def with_transaction(options = nil)
      if timeout_ms = (options || {})[:timeout_ms]
        timeout_sec = timeout_ms / 1_000.0
        deadline = Utils.monotonic_time + timeout_sec
        @with_transaction_deadline = deadline
      elsif default_timeout_ms = @options[:default_timeout_ms]
        timeout_sec = default_timeout_ms / 1_000.0
        deadline = Utils.monotonic_time + timeout_sec
        @with_transaction_deadline = deadline
      elsif @client.timeout_sec
        deadline = Utils.monotonic_time + @client.timeout_sec
        @with_transaction_deadline = deadline
      else
        deadline = Utils.monotonic_time + 120
      end
      transaction_in_progress = false
      loop do
        commit_options = {}
        if options
          commit_options[:write_concern] = options[:write_concern]
        end
        start_transaction(options)
        transaction_in_progress = true
        begin
          rv = yield self
        rescue Exception => e
          if within_states?(STARTING_TRANSACTION_STATE, TRANSACTION_IN_PROGRESS_STATE)
            log_warn("Aborting transaction due to #{e.class}: #{e}")
            @with_transaction_deadline = nil
            abort_transaction
            transaction_in_progress = false
          end

          if Utils.monotonic_time >= deadline
            transaction_in_progress = false
            raise
          end

          if e.is_a?(Mongo::Error) && e.label?('TransientTransactionError')
            next
          end

          raise
        else
          if within_states?(TRANSACTION_ABORTED_STATE, NO_TRANSACTION_STATE, TRANSACTION_COMMITTED_STATE)
            transaction_in_progress = false
            return rv
          end

          begin
            commit_transaction(commit_options)
            transaction_in_progress = false
            return rv
          rescue Mongo::Error => e
            if e.label?('UnknownTransactionCommitResult')
              if Utils.monotonic_time >= deadline ||
                e.is_a?(Error::OperationFailure::Family) && e.max_time_ms_expired?
              then
                transaction_in_progress = false
                raise
              end
              wc_options = case v = commit_options[:write_concern]
                when WriteConcern::Base
                  v.options
                when nil
                  {}
                else
                  v
                end
              commit_options[:write_concern] = wc_options.merge(w: :majority)
              retry
            elsif e.label?('TransientTransactionError')
              if Utils.monotonic_time >= deadline
                transaction_in_progress = false
                raise
              end
              @state = NO_TRANSACTION_STATE
              next
            else
              transaction_in_progress = false
              raise
            end
          rescue Error::AuthError
            transaction_in_progress = false
            raise
          end
        end
      end

      # No official return value, but return true so that in interactive
      # use the method hints that it succeeded.
      true
    ensure
      if transaction_in_progress
        log_warn('with_transaction callback broke out of with_transaction loop, aborting transaction')

        begin
          abort_transaction
        rescue Error::OperationFailure::Family, Error::InvalidTransactionOperation
        end
      end
      @with_transaction_deadline = nil
    end
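
    # Commit-retry sketch (illustrative, not part of the original file): when
    # a commit fails with the 'UnknownTransactionCommitResult' label,
    # with_transaction retries the commit with the write concern upgraded to
    # w: :majority:
    #
    #   session.with_transaction(write_concern: { w: 1 }) do
    #     collection.insert_one({ x: 1 }, session: session)
    #   end
    #   # first commit attempt uses w: 1; retried commits use w: :majority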

    # Places subsequent operations in this session into a new transaction.
    #
    # Note that the transaction will not be started on the server until an
    # operation is performed after start_transaction is called.
    #
    # @example Start a new transaction
    #   session.start_transaction(options)
    #
    # @param [ Hash ] options The options for the transaction being started.
    #
    # @option options [ Integer ] :max_commit_time_ms The maximum amount of
    #   time to allow a single commitTransaction command to run, in milliseconds.
    #   This option is deprecated, use :timeout_ms instead.
    # @option options [ Hash ] :read_concern The read concern options hash,
    #   with the following optional keys:
    #   - *:level* -- the read preference level as a symbol; valid values
    #     are *:local*, *:majority*, and *:snapshot*
    # @option options [ Hash ] :write_concern The write concern options. Can be :w =>
    #   Integer|String, :fsync => Boolean, :j => Boolean.
    # @option options [ Hash ] :read The read preference options. The hash may have the following
    #   items:
    #   - *:mode* -- read preference specified as a symbol; the only valid value is
    #     *:primary*.
    # @option options [ Integer ] :timeout_ms The operation timeout in milliseconds.
    #   Must be a non-negative integer. An explicit value of 0 means infinite.
    #   The default value is unset which means the value is inherited from
    #   the client.
    #
    # @raise [ Error::InvalidTransactionOperation ] If a transaction is already in
    #   progress or if the write concern is unacknowledged.
    #
    # @since 2.6.0
    def start_transaction(options = nil)
      check_transactions_supported!

      if options
        Lint.validate_read_concern_option(options[:read_concern])

=begin
        # It would be handy to detect invalid read preferences here, but
        # some of the spec tests require later detection of invalid read prefs.
        # Maybe we can do this when lint mode is on.
        mode = options[:read] && options[:read][:mode].to_s
        if mode && mode != 'primary'
          raise Mongo::Error::InvalidTransactionOperation.new(
            "read preference in a transaction must be primary (requested: #{mode})"
          )
        end
=end
      end

      if snapshot?
        raise Mongo::Error::SnapshotSessionTransactionProhibited
      end

      check_if_ended!

      if within_states?(STARTING_TRANSACTION_STATE, TRANSACTION_IN_PROGRESS_STATE)
        raise Mongo::Error::InvalidTransactionOperation.new(
          Mongo::Error::InvalidTransactionOperation::TRANSACTION_ALREADY_IN_PROGRESS)
      end

      unpin

      next_txn_num
      @txn_options = (@options[:default_transaction_options] || {}).merge(options || {})

      if txn_write_concern && !WriteConcern.get(txn_write_concern).acknowledged?
        raise Mongo::Error::InvalidTransactionOperation.new(
          Mongo::Error::InvalidTransactionOperation::UNACKNOWLEDGED_WRITE_CONCERN)
      end

      @state = STARTING_TRANSACTION_STATE
      @already_committed = false

      # This method has no explicit return value.
      # We could return nil here but true indicates to the user that the
      # operation succeeded. This is intended for interactive use.
      # Note that the return value is not documented.
      true
    end

    # Commit the currently active transaction on the session.
    #
    # @example Commits the transaction.
    #   session.commit_transaction
    #
    # @option options :write_concern [ nil | WriteConcern::Base ] The write
    #   concern to use for this operation.
    # @option options [ Integer ] :timeout_ms The operation timeout in milliseconds.
    #   Must be a non-negative integer. An explicit value of 0 means infinite.
    #   The default value is unset which means the value is inherited from
    #   the client.
    #
    # @raise [ Error::InvalidTransactionOperation ] If there is no active transaction.
    #
    # @since 2.6.0
    def commit_transaction(options=nil)
      QueryCache.clear
      check_if_ended!
      check_if_no_transaction!

      if within_states?(TRANSACTION_ABORTED_STATE)
        raise Mongo::Error::InvalidTransactionOperation.new(
          Mongo::Error::InvalidTransactionOperation.cannot_call_after_msg(
            :abortTransaction, :commitTransaction))
      end

      options ||= {}

      begin
        # If commitTransaction is called twice, we need to run the same commit
        # operation again, so we revert the session to the previous state.
        if within_states?(TRANSACTION_COMMITTED_STATE)
          @state = @last_commit_skipped ? STARTING_TRANSACTION_STATE : TRANSACTION_IN_PROGRESS_STATE
          @already_committed = true
        end

        if starting_transaction?
          @last_commit_skipped = true
        else
          @last_commit_skipped = false
          @committing_transaction = true

          write_concern = options[:write_concern] || txn_options[:write_concern]
          if write_concern && !write_concern.is_a?(WriteConcern::Base)
            write_concern = WriteConcern.get(write_concern)
          end

          context = Operation::Context.new(
            client: @client,
            session: self,
            operation_timeouts: operation_timeouts(options)
          )
          write_with_retry(write_concern,
            ending_transaction: true,
            context: context,
          ) do |connection, txn_num, context|
            if context.retry?
              if write_concern
                wco = write_concern.options.merge(w: :majority)
                wco[:wtimeout] ||= 10000
                write_concern = WriteConcern.get(wco)
              else
                write_concern = WriteConcern.get(w: :majority, wtimeout: 10000)
              end
            end
            spec = {
              selector: { commitTransaction: 1 },
              db_name: 'admin',
              session: self,
              txn_num: txn_num,
              write_concern: write_concern,
            }
            Operation::Command.new(spec).execute_with_connection(connection, context: context)
          end
        end
      ensure
        @state = TRANSACTION_COMMITTED_STATE
        @committing_transaction = false
      end

      # No official return value, but return true so that in interactive
      # use the method hints that it succeeded.
      true
    end

    # Abort the currently active transaction without making any changes to the database.
    #
    # @example Abort the transaction.
    #   session.abort_transaction
    #
    # @option options [ Integer ] :timeout_ms The operation timeout in milliseconds.
    #   Must be a non-negative integer. An explicit value of 0 means infinite.
    #   The default value is unset which means the value is inherited from
    #   the client.
    #
    # @raise [ Error::InvalidTransactionOperation ] If there is no active transaction.
    #
    # @since 2.6.0
    def abort_transaction(options = nil)
      QueryCache.clear
      check_if_ended!
      check_if_no_transaction!

      if within_states?(TRANSACTION_COMMITTED_STATE)
        raise Mongo::Error::InvalidTransactionOperation.new(
          Mongo::Error::InvalidTransactionOperation.cannot_call_after_msg(
            :commitTransaction, :abortTransaction))
      end

      if within_states?(TRANSACTION_ABORTED_STATE)
        raise Mongo::Error::InvalidTransactionOperation.new(
          Mongo::Error::InvalidTransactionOperation.cannot_call_twice_msg(:abortTransaction))
      end

      options ||= {}

      begin
        unless starting_transaction?
          @aborting_transaction = true
          context = Operation::Context.new(
            client: @client,
            session: self,
            operation_timeouts: operation_timeouts(options)
          )
          write_with_retry(txn_options[:write_concern],
            ending_transaction: true,
            context: context,
          ) do |connection, txn_num, context|
            begin
              Operation::Command.new(
                selector: { abortTransaction: 1 },
                db_name: 'admin',
                session: self,
                txn_num: txn_num
              ).execute_with_connection(connection, context: context)
            ensure
              unpin
            end
          end
        end

        @state = TRANSACTION_ABORTED_STATE
      rescue Mongo::Error::InvalidTransactionOperation
        raise
      rescue Mongo::Error
        @state = TRANSACTION_ABORTED_STATE
      rescue Exception
        @state = TRANSACTION_ABORTED_STATE
        raise
      ensure
        @aborting_transaction = false
      end

      # No official return value, but return true so that in interactive
      # use the method hints that it succeeded.
      true
    end

    # @api private
    def starting_transaction?
      within_states?(STARTING_TRANSACTION_STATE)
    end

    # Whether or not the session is currently in a transaction.
    #
    # @example Is the session in a transaction?
    #   session.in_transaction?
    #
    # @return [ true | false ] Whether or not the session is in a transaction.
    #
    # @since 2.6.0
    def in_transaction?
      within_states?(STARTING_TRANSACTION_STATE, TRANSACTION_IN_PROGRESS_STATE)
    end
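
    # State-transition sketch (illustrative, not part of the original file):
    #
    #   session.start_transaction  # no_transaction -> starting_transaction
    #   collection.insert_one({ x: 1 }, session: session)
    #                              # starting_transaction -> transaction_in_progress
    #   session.commit_transaction # -> transaction_committed
    #   # or session.abort_transaction, which leads to transaction_aborted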

    # @return [ true | false ] Whether the session is currently committing a
    #   transaction.
    #
    # @api private
    def committing_transaction?
      !!@committing_transaction
    end

    # @return [ true | false ] Whether the session is currently aborting a
    #   transaction.
    #
    # @api private
    def aborting_transaction?
      !!@aborting_transaction
    end

    # Pins this session to the specified server, which should be a mongos.
    #
    # @param [ Server ] server The server to pin this session to.
    #
    # @api private
    def pin_to_server(server)
      if server.nil?
        raise ArgumentError, 'Cannot pin to a nil server'
      end
      if Lint.enabled?
        unless server.mongos?
          raise Error::LintError, "Attempted to pin the session to server #{server.summary} which is not a mongos"
        end
      end
      @pinned_server = server
    end

    # Pins this session to the specified connection.
    #
    # @param [ Integer ] connection_global_id The global id of connection to pin
    #   this session to.
    #
    # @api private
    def pin_to_connection(connection_global_id)
      if connection_global_id.nil?
        raise ArgumentError, 'Cannot pin to a nil connection id'
      end
      @pinned_connection_global_id = connection_global_id
    end

    # Unpins this session from the pinned server or connection,
    # if the session was pinned.
    #
    # @param [ Connection | nil ] connection Connection to unpin from.
    #
    # @api private
    def unpin(connection = nil)
      @pinned_server = nil
      @pinned_connection_global_id = nil
      connection.unpin unless connection.nil?
    end

    # Unpins this session from the pinned server or connection, if the session was pinned
    # and the specified exception instance and the session's transaction state
    # require it to be unpinned.
    #
    # The exception instance should already have all of the labels set on it
    # (both client- and server-side generated ones).
    #
    # @param [ Error ] error The exception instance to process.
    # @param [ Connection | nil ] connection Connection to unpin from.
    #
    # @api private
    def unpin_maybe(error, connection = nil)
      if !within_states?(Session::NO_TRANSACTION_STATE) &&
        error.label?('TransientTransactionError')
      then
        unpin(connection)
      end

      if committing_transaction? &&
        error.label?('UnknownTransactionCommitResult')
      then
        unpin(connection)
      end
    end

    # Add the autocommit field to a command document if applicable.
    #
    # @example
    #   session.add_autocommit!(cmd)
    #
    # @return [ Hash, BSON::Document ] The command document.
    #
    # @since 2.6.0
    # @api private
    def add_autocommit!(command)
      command.tap do |c|
        c[:autocommit] = false if in_transaction?
      end
    end

    # Add the startTransaction field to a command document if applicable.
    #
    # @example
    #   session.add_start_transaction!(cmd)
    #
    # @return [ Hash, BSON::Document ] The command document.
    #
    # @since 2.6.0
    # @api private
    def add_start_transaction!(command)
      command.tap do |c|
        if starting_transaction?
          c[:startTransaction] = true
        end
      end
    end

    # Add the transaction number to a command document if applicable.
    #
    # @example
    #   session.add_txn_num!(cmd)
    #
    # @return [ Hash, BSON::Document ] The command document.
    #
    # @since 2.6.0
    # @api private
    def add_txn_num!(command)
      command.tap do |c|
        c[:txnNumber] = BSON::Int64.new(@server_session.txn_num) if in_transaction?
      end
    end

    # Add the transactions options if applicable.
    #
    # @example
    #   session.add_txn_opts!(cmd)
    #
    # @return [ Hash, BSON::Document ] The command document.
    #
    # @since 2.6.0
    # @api private
    def add_txn_opts!(command, read, context)
      command.tap do |c|
        # The read concern should be added to any command that starts a transaction.
        if starting_transaction?
          # https://jira.mongodb.org/browse/SPEC-1161: transaction's
          # read concern overrides collection/database/client read concerns,
          # even if transaction's read concern is not set.
          # Read concern here is the one sent to the server and may
          # include afterClusterTime.
          if rc = c[:readConcern]
            rc = rc.dup
            rc.delete(:level)
          end
          if txn_read_concern
            if rc
              rc.update(txn_read_concern)
            else
              rc = txn_read_concern.dup
            end
          end
          if rc.nil? || rc.empty?
            c.delete(:readConcern)
          else
            c[:readConcern] = Options::Mapper.transform_values_to_strings(rc)
          end
        end

        # We need to send the read concern level as a string rather than a symbol.
        if c[:readConcern]
          c[:readConcern] = Options::Mapper.transform_values_to_strings(c[:readConcern])
        end

        if c[:commitTransaction]
          if max_time_ms = txn_options[:max_commit_time_ms]
            c[:maxTimeMS] = max_time_ms
          end
        end

        # The write concern should be added to any abortTransaction or commitTransaction command.
        if (c[:abortTransaction] || c[:commitTransaction])
          if @already_committed
            wc = BSON::Document.new(c[:writeConcern] || txn_write_concern || {})
            wc.merge!(w: :majority)
            wc[:wtimeout] ||= 10000
            c[:writeConcern] = wc
          elsif txn_write_concern
            c[:writeConcern] ||= txn_write_concern
          end
        end

        # A non-numeric write concern w value needs to be sent as a string rather than a symbol.
        if c[:writeConcern] && c[:writeConcern][:w] && c[:writeConcern][:w].is_a?(Symbol)
          c[:writeConcern][:w] = c[:writeConcern][:w].to_s
        end

        # Ignore wtimeout if csot
        if context&.csot?
          c[:writeConcern]&.delete(:wtimeout)
        end

        # We must not send an empty (server default) write concern.
        c.delete(:writeConcern) if c[:writeConcern]&.empty?
      end
    end

    # Remove the read concern and/or write concern from the command if not applicable.
    #
    # @example
    #   session.suppress_read_write_concern!(cmd)
    #
    # @return [ Hash, BSON::Document ] The command document.
    #
    # @since 2.6.0
    # @api private
    def suppress_read_write_concern!(command)
      command.tap do |c|
        next unless in_transaction?

        c.delete(:readConcern) unless starting_transaction?
        c.delete(:writeConcern) unless c[:commitTransaction] || c[:abortTransaction]
      end
    end

    # Ensure that the read preference of a command is primary.
    #
    # @example
    #   session.validate_read_preference!(command)
    #
    # @raise [ Mongo::Error::InvalidTransactionOperation ] If the read preference of the command is
    #   not primary.
    #
    # @since 2.6.0
    # @api private
    def validate_read_preference!(command)
      return unless in_transaction?
      return unless command['$readPreference']

      mode = command['$readPreference']['mode'] || command['$readPreference'][:mode]

      if mode && mode != 'primary'
        raise Mongo::Error::InvalidTransactionOperation.new(
          "read preference in a transaction must be primary (requested: #{mode})"
        )
      end
    end

    # Update the state of the session due to a (non-commit and non-abort) operation being run.
    #
    # @since 2.6.0
    # @api private
    def update_state!
      case @state
      when STARTING_TRANSACTION_STATE
        @state = TRANSACTION_IN_PROGRESS_STATE
      when TRANSACTION_COMMITTED_STATE, TRANSACTION_ABORTED_STATE
        @state = NO_TRANSACTION_STATE
      end
    end

    # Validate the session for use by the specified client.
    #
    # The session must not be ended and must have been created by a client
    # with the same cluster as the client that the session is to be used with.
    #
    # @param [ Client ] client The client the session is to be used with.
    #
    # @return [ Session ] self, if the session is valid.
    #
    # @raise [ Mongo::Error::InvalidSession ] Exception raised if the session is not valid.
    #
    # @since 2.5.0
    # @api private
    def validate!(client)
      check_matching_cluster!(client)
      check_if_ended!
      self
    end

    # Process a response from the server that used this session.
    #
    # @example Process a response from the server.
    #   session.process(result)
    #
    # @param [ Operation::Result ] result The result from the operation.
    #
    # @return [ Operation::Result ] The result.
    #
    # @since 2.5.0
    # @api private
    def process(result)
      unless implicit?
        set_operation_time(result)
        if cluster_time_doc = result.cluster_time
          advance_cluster_time(cluster_time_doc)
        end
      end
      @server_session.set_last_use!

      if doc = result.reply && result.reply.documents.first
        if doc[:recoveryToken]
          self.recovery_token = doc[:recoveryToken]
        end
      end

      result
    end

    # Advance the cached operation time for this session.
    #
    # @example Advance the operation time.
    #   session.advance_operation_time(timestamp)
    #
    # @param [ BSON::Timestamp ] new_operation_time The new operation time.
    #
    # @return [ BSON::Timestamp ] The max operation time, considering the current and new times.
    #
    # @since 2.5.0
    def advance_operation_time(new_operation_time)
      if @operation_time
        @operation_time = [ @operation_time, new_operation_time ].max
      else
        @operation_time = new_operation_time
      end
    end

    # If not already set, populate a session object's server_session by
    # checking out a session from the session pool.
    #
    # @return [ Session ] Self.
    #
    # @api private
    def materialize_if_needed
      if ended?
        raise Error::SessionEnded
      end

      return unless implicit? && !@server_session

      @server_session = cluster.session_pool.checkout

      self
    end

    # @api private
    def materialized?
      if ended?
        raise Error::SessionEnded
      end

      !@server_session.nil?
    end

    # Increment and return the next transaction number.
    #
    # @example Get the next transaction number.
    #   session.next_txn_num
    #
    # @return [ Integer ] The next transaction number.
    #
    # @since 2.5.0
    # @api private
    def next_txn_num
      if ended?
        raise Error::SessionEnded
      end

      @server_session.next_txn_num
    end

    # Get the current transaction number.
    #
    # @example Get the current transaction number.
    #   session.txn_num
    #
    # @return [ Integer ] The current transaction number.
    #
    # @since 2.6.0
    def txn_num
      if ended?
        raise Error::SessionEnded
      end

      @server_session.txn_num
    end

    # @api private
    attr_accessor :snapshot_timestamp

    attr_reader :with_transaction_deadline

    private

    # Get the read concern the session will use when starting a transaction.
    #
    # This is a driver style hash with underscore keys.
    #
    # @example Get the session's transaction read concern.
    #   session.txn_read_concern
    #
    # @return [ Hash ] The read concern used for starting transactions.
    #
    # @since 2.9.0
    def txn_read_concern
      # Read concern is inherited from client but not db or collection.
      txn_options[:read_concern] || @client.read_concern
    end

    def within_states?(*states)
      states.include?(@state)
    end

    def check_if_no_transaction!
      return unless within_states?(NO_TRANSACTION_STATE)

      raise Mongo::Error::InvalidTransactionOperation.new(
        Mongo::Error::InvalidTransactionOperation::NO_TRANSACTION_STARTED)
    end

    def txn_write_concern
      txn_options[:write_concern] ||
        (@client.write_concern && @client.write_concern.options)
    end

    # Returns causal consistency document if the last operation time is
    # known and causal consistency is enabled, otherwise returns nil.
    def causal_consistency_doc
      if operation_time && causal_consistency?
        {:afterClusterTime => operation_time}
      else
        nil
      end
    end

    def causal_consistency?
      @causal_consistency ||= (if @options.key?(:causal_consistency)
        !!@options[:causal_consistency]
      else
        true
      end)
    end

    def set_operation_time(result)
      if result && result.operation_time
        @operation_time = result.operation_time
      end
    end

    def check_if_ended!
      raise Mongo::Error::InvalidSession.new(SESSION_ENDED_ERROR_MSG) if ended?
    end

    def check_matching_cluster!(client)
      if @client.cluster != client.cluster
        raise Mongo::Error::InvalidSession.new(MISMATCHED_CLUSTER_ERROR_MSG)
      end
    end

    def check_transactions_supported!
      raise Mongo::Error::TransactionsNotSupported, "standalone topology" if cluster.single?

      cluster.next_primary.with_connection do |conn|
        if cluster.replica_set? && !conn.features.transactions_enabled?
          raise Mongo::Error::TransactionsNotSupported, "server version is < 4.0"
        end
        if cluster.sharded? && !conn.features.sharded_transactions_enabled?
          raise Mongo::Error::TransactionsNotSupported, "sharded transactions require server version >= 4.2"
        end
      end
    end

    def operation_timeouts(opts)
      {
        inherited_timeout_ms: @client.timeout_ms
      }.tap do |result|
        if @with_transaction_deadline.nil?
          if timeout_ms = opts[:timeout_ms]
            result[:operation_timeout_ms] = timeout_ms
          elsif default_timeout_ms = options[:default_timeout_ms]
            result[:operation_timeout_ms] = default_timeout_ms
          end
        end
      end
    end
  end
end

mongo-ruby-driver-2.21.3/lib/mongo/session/server_session.rb

# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2017-2020 MongoDB Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

require 'mongo/session/server_session/dirtyable'

module Mongo
  class Session

    # An object representing the server-side session.
    #
    # @api private
    #
    # @since 2.5.0
    class ServerSession
      include Dirtyable

      # Regex for removing dashes from the UUID string.
      #
      # @since 2.5.0
      DASH_REGEX = /\-/.freeze

      # Pack directive for the UUID.
      #
      # @since 2.5.0
      UUID_PACK = 'H*'.freeze

      # The last time the server session was used.
      #
      # @since 2.5.0
      attr_reader :last_use

      # The current transaction number.
      #
      # When a transaction is active, all operations in that transaction
      # use the same transaction number. If the entire transaction is restarted
      # (for example, by Session#with_transaction, in which case it would
      # also invoke the block provided to it again), each transaction attempt
      # has its own transaction number.
      #
      # Transaction number is also used outside of transactions for
      # retryable writes. In this case, each write operation has its own
      # transaction number, but retries of a write operation use the same
      # transaction number as the first write (which is how the server
      # knows that subsequent writes are retries and should be ignored if
      # the first write succeeded on the server but was not read by the
      # client, for example).
      #
      # @since 2.5.0
      attr_reader :txn_num

      # Initialize a ServerSession.
      #
      # @example
      #   ServerSession.new
      #
      # @since 2.5.0
      def initialize
        set_last_use!
        session_id
        @txn_num = 0
      end
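
      # Transaction-number sketch (illustrative, not part of the original file):
      #
      #   ss = Mongo::Session::ServerSession.new
      #   ss.txn_num      # => 0
      #   ss.next_txn_num # => 1 (used by the first retryable write or transaction)
      #   ss.next_txn_num # => 2 (a retry of the first write would reuse 1 instead)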

      # Update the last_use attribute of the server session to now.
      #
      # @example Set the last use field to now.
      #   server_session.set_last_use!
      #
      # @return [ Time ] The last time the session was used.
      #
      # @since 2.5.0
      def set_last_use!
        @last_use = Time.now
      end

      # The session id of this server session.
      #
      # @example Get the session id.
      #   server_session.session_id
      #
      # @return [ BSON::Document ] The session id.
      #
      # @since 2.5.0
      def session_id
        @session_id ||= (bytes = [SecureRandom.uuid.gsub(DASH_REGEX, '')].pack(UUID_PACK)
          BSON::Document.new(id: BSON::Binary.new(bytes, :uuid)))
      end

      # Increment the current transaction number and return the new value.
      #
      # @return [ Integer ] The updated transaction number.
      #
      # @since 2.5.0
      def next_txn_num
        @txn_num += 1
      end

      # Get a formatted string for use in inspection.
      #
      # @example Inspect the session object.
      #   session.inspect
      #
      # @return [ String ] The session inspection.
      #
      # @since 2.5.0
      def inspect
        "#<Mongo::Session::ServerSession:0x#{object_id} session_id=#{session_id} last_use=#{@last_use}>"
      end
    end
  end
end

mongo-ruby-driver-2.21.3/lib/mongo/session/server_session/dirtyable.rb

# frozen_string_literal: true

# Copyright (C) 2024 MongoDB Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

module Mongo
  class Session
    class ServerSession
      # Functionality for manipulating and querying a session's
      # "dirty" state, per the last paragraph at
      # https://github.com/mongodb/specifications/blob/master/source/sessions/driver-sessions.md#server-session-pool
      #
      # If a driver has a server session pool and a network error is
      # encountered when executing any command with a ClientSession, the
      # driver MUST mark the associated ServerSession as dirty. Dirty server
      # sessions are discarded when returned to the server session pool. It is
      # valid for a dirty session to be used for subsequent commands (e.g. an
      # implicit retry attempt, a later command in a bulk write, or a later
      # operation on an explicit session), however, it MUST remain dirty for
      # the remainder of its lifetime regardless if later commands succeed.
      #
      # @api private
      module Dirtyable
        # Query whether the server session has been marked dirty or not.
        #
        # @return [ true | false ] the server session's dirty state
        def dirty?
          @dirty
        end

        # Mark the server session as dirty (the default) or clean.
        #
        # @param [ true | false ] mark whether to mark the server session
        #   dirty or not.
        def dirty!(mark = true)
          @dirty = mark
        end
      end
    end
  end
end
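
# Usage sketch (illustrative, not part of the original file; Dirtyable is
# mixed into ServerSession):
#
#   ss = Mongo::Session::ServerSession.new
#   ss.dirty? # => nil (never marked)
#   ss.dirty! # a network error was encountered; mark the session dirty
#   ss.dirty? # => true; the pool will discard it on checkin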

mongo-ruby-driver-2.21.3/lib/mongo/session/session_pool.rb

# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2017-2020 MongoDB Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

module Mongo
  class Session

    # A pool of server sessions.
    #
    # @api private
    #
    # @since 2.5.0
    class SessionPool
      # Initialize a SessionPool.
      #
      # @example
      #   SessionPool.new(cluster)
      #
      # @param [ Mongo::Cluster ] cluster The cluster that will be associated with this
      #   session pool.
      #
      # @since 2.5.0
      def initialize(cluster)
        @queue = []
        @mutex = Mutex.new
        @cluster = cluster
      end

      # Get a formatted string for use in inspection.
      #
      # @example Inspect the session pool object.
      #   session_pool.inspect
      #
      # @return [ String ] The session pool inspection.
      #
      # @since 2.5.0
      def inspect
        "#<Mongo::Session::SessionPool:0x#{object_id} current_size=#{@queue.size}>"
      end

      # Check out a server session from the pool.
      #
      # @example Check out a session.
      #   pool.checkout
      #
      # @return [ ServerSession ] The server session.
      #
      # @since 2.5.0
      def checkout
        @mutex.synchronize do
          loop do
            if @queue.empty?
              return ServerSession.new
            else
              session = @queue.shift
              unless about_to_expire?(session)
                return session
              end
            end
          end
        end
      end

      # Checkin a server session to the pool.
      #
      # @example Checkin a session.
      #   pool.checkin(session)
      #
      # @param [ Session::ServerSession ] session The session to checkin.
      #
      # @since 2.5.0
      def checkin(session)
        if session.nil?
          raise ArgumentError, 'session cannot be nil'
        end

        @mutex.synchronize do
          prune!
          @queue.unshift(session) if return_to_queue?(session)
        end
      end

      # End all sessions in the pool by sending the endSessions command to the server.
      #
      # @example End all sessions.
      #   pool.end_sessions
      #
      # @since 2.5.0
      def end_sessions
        while !@queue.empty?
          server = ServerSelector.get(mode: :primary_preferred).select_server(@cluster)
          op = Operation::Command.new(
            selector: {
              endSessions: @queue.shift(10_000).map(&:session_id),
            },
            db_name: Database::ADMIN,
          )
          context = Operation::Context.new(options: {
            server_api: server.options[:server_api],
          })
          op.execute(server, context: context)
        end
      rescue Mongo::Error, Error::AuthError
      end

      private

      # Query whether the given session is okay to return to the
      # pool's queue.
      #
      # @param [ Session::ServerSession ] session the session to query
      #
      # @return [ true | false ] whether to return the session to the
      #   queue.
      def return_to_queue?(session)
        !session.dirty? && !about_to_expire?(session)
      end

      def about_to_expire?(session)
        if session.nil?
          raise ArgumentError, 'session cannot be nil'
        end

        # Load balancers spec explicitly requires to ignore the logical session
        # timeout value.
        # No rationale is provided as of the time of this writing.
        if @cluster.load_balanced?
          return false
        end

        logical_session_timeout = @cluster.logical_session_timeout

        if logical_session_timeout
          idle_time_minutes = (Time.now - session.last_use) / 60
          (idle_time_minutes + 1) >= logical_session_timeout
        end
      end

      def prune!
        # Load balancers spec explicitly requires not to prune sessions.
        # No rationale is provided as of the time of this writing.
        return if @cluster.load_balanced?

        while !@queue.empty?
          if about_to_expire?(@queue[-1])
            @queue.pop
          else
            break
          end
        end
      end
    end
  end
end
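
# Pool-behavior sketch (illustrative, not part of the original file;
# `cluster` is assumed to be a connected Mongo::Cluster):
#
#   pool = Mongo::Session::SessionPool.new(cluster)
#   s1 = pool.checkout # empty pool: a fresh ServerSession is created
#   pool.checkin(s1)   # returned to the front of the queue (unless dirty
#                      # or about to expire, in which case it is discarded)
#   s2 = pool.checkout # s1 again, reused from the queue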

mongo-ruby-driver-2.21.3/lib/mongo/socket.rb

# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2014-2020 MongoDB Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

require 'mongo/socket/ssl'
require 'mongo/socket/tcp'
require 'mongo/socket/unix'
require 'mongo/socket/ocsp_verifier'
require 'mongo/socket/ocsp_cache'

module Mongo

  # Provides additional data around sockets for the driver's use.
  #
  # @since 2.0.0
  # @api private
  class Socket
    include ::Socket::Constants

    # Error message for TLS related exceptions.
    #
    # @since 2.0.0
    # @deprecated
    SSL_ERROR = 'MongoDB may not be configured with TLS support'.freeze

    # Error message for timeouts on socket calls.
    #
    # @since 2.0.0
    # @deprecated
    TIMEOUT_ERROR = 'Socket request timed out'.freeze

    # The pack directive for timeouts.
    #
    # @since 2.0.0
    TIMEOUT_PACK = 'l_2'.freeze

    # Write data to the socket in chunks of this size.
    #
    # @api private
    WRITE_CHUNK_SIZE = 65536

    # Initializes common socket attributes.
    #
    # @param [ Float ] timeout The socket timeout value.
    # @param [ Hash ] options The options.
    #
    # @option options [ Float ] :connect_timeout Connect timeout.
    # @option options [ Address ] :connection_address Address of the
    #   connection that created this socket.
    # @option options [ Integer ] :connection_generation Generation of the
    #   connection (for non-monitoring connections) that created this socket.
    # @option options [ true | false ] :monitor Whether this socket was
    #   created by a monitoring connection.
    # @option options :pipe [ IO ] The file descriptor for the read end of the
    #   pipe to listen on during the select system call when reading from the
    #   socket.
    #
    # @api private
    def initialize(timeout, options)
      @timeout = timeout
      @options = options
    end

    # @return [ Integer ] family The type of host family.
    attr_reader :family

    # @return [ Socket ] socket The wrapped socket.
    attr_reader :socket

    # @return [ Hash ] The options.
    attr_reader :options

    # @return [ Float ] timeout The socket timeout.
    attr_reader :timeout

    # @return [ Address ] Address of the connection that created this socket.
    #
    # @api private
    def connection_address
      options[:connection_address]
    end

    # @return [ Integer ] Generation of the connection (for non-monitoring
    #   connections) that created this socket.
    #
    # @api private
    def connection_generation
      options[:connection_generation]
    end

    # @return [ true | false ] Whether this socket was created by a monitoring
    #   connection.
    #
    # @api private
    def monitor?
      !!options[:monitor]
    end

    # @return [ IO ] The file descriptor for the read end of the pipe to
    #   listen on during the select system call when reading from the
    #   socket.
    def pipe
      options[:pipe]
    end

    # @return [ String ] Human-readable summary of the socket for debugging.
    #
    # @api private
    def summary
      fileno = @socket&.fileno rescue '<none>' || '<none>'
      if monitor?
        indicator = if options[:push]
          'pm'
        else
          'm'
        end
        "#{connection_address};#{indicator};fd=#{fileno}"
      else
        "#{connection_address};c:#{connection_generation};fd=#{fileno}"
      end
    end
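
    # Example summary strings (illustrative, not part of the original file):
    #
    #   "127.0.0.1:27017;m;fd=12"  # monitoring socket
    #   "127.0.0.1:27017;pm;fd=14" # push-monitoring socket
    #   "127.0.0.1:27017;c:1;fd=9" # application socket, connection generation 1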

    # Is the socket connection alive?
    #
    # @example Is the socket alive?
    #   socket.alive?
    #
    # @return [ true, false ] If the socket is alive.
    #
    # @deprecated Use #connectable? on the connection instead.
    def alive?
      sock_arr = [ @socket ]
      if Kernel::select(sock_arr, nil, sock_arr, 0)
        # The eof? call is supposed to return immediately since select
        # indicated the socket is readable. However, if @socket is a TLS
        # socket, eof? can block anyway - see RUBY-2140.
        begin
          Timeout.timeout(0.1) do
            eof?
          end
        rescue ::Timeout::Error
          true
        end
      else
        true
      end
    end

    # Close the socket.
    #
    # @example Close the socket.
    #   socket.close
    #
    # @return [ true ] Always true.
    #
    # @since 2.0.0
    def close
      begin
        # Sometimes it seems the close call can hang for a long time
        ::Timeout.timeout(5) do
          @socket&.close
        end
      rescue
        # Silence all errors
      end
      true
    end

    # Delegates gets to the underlying socket.
    #
    # @example Get the next line.
    #   socket.gets(10)
    #
    # @param [ Array ] args The arguments to pass through.
    #
    # @return [ Object ] The returned bytes.
    #
    # @since 2.0.0
    def gets(*args)
      map_exceptions do
        @socket.gets(*args)
      end
    end

    # Will read all data from the socket for the provided number of bytes.
    # If no data is returned, an exception will be raised.
    #
    # @example Read all the requested data from the socket.
    #   socket.read(4096)
    #
    # @param [ Integer ] length The number of bytes to read.
    # @param [ Numeric ] socket_timeout The timeout to use for each chunk read,
    #   mutually exclusive to +timeout+.
    # @param [ Numeric ] timeout The total timeout to the whole read operation,
    #   mutually exclusive to +socket_timeout+.
    #
    # @raise [ Mongo::SocketError ] If not all data is returned.
    #
    # @return [ Object ] The data from the socket.
    #
    # @since 2.0.0
    def read(length, socket_timeout: nil, timeout: nil)
      if !socket_timeout.nil? && !timeout.nil?
        raise ArgumentError, 'Both timeout and socket_timeout cannot be set'
      end
      if !socket_timeout.nil? || timeout.nil?
        read_without_timeout(length, socket_timeout)
      else
        read_with_timeout(length, timeout)
      end
    end

    # Read a single byte from the socket.
    #
    # @example Read a single byte.
    #   socket.readbyte
    #
    # @return [ Object ] The read byte.
    #
    # @since 2.0.0
    def readbyte
      map_exceptions do
        @socket.readbyte
      end
    end

    # Writes data to the socket instance.
    #
    # @param [ Array ] args The data to be written.
    # @param [ Numeric ] timeout The total timeout to the whole write operation.
    #
    # @return [ Integer ] The length of bytes written to the socket.
    #
    # @raise [ Error::SocketError | Error::SocketTimeoutError ] When there is a network error during the write.
    #
    # @since 2.0.0
    def write(*args, timeout: nil)
      map_exceptions do
        do_write(*args, timeout: timeout)
      end
    end

    # Tests if this socket has reached EOF. Primarily used for liveness checks.
    #
    # @since 2.0.5
    def eof?
      @socket.eof?
    rescue IOError, SystemCallError
      true
    end

    # For backwards compatibility only, do not use.
    #
    # @return [ true ] Always true.
    #
    # @deprecated
    def connectable?
      true
    end

    private

    # Reads the +length+ bytes from the socket, the read operation duration is
    # limited to +timeout+ second.
    #
    # @param [ Integer ] length The number of bytes to read.
    # @param [ Numeric ] timeout The total timeout to the whole read operation.
    #
    # @return [ Object ] The data from the socket.
    def read_with_timeout(length, timeout)
      deadline = Utils.monotonic_time + timeout
      map_exceptions do
        String.new.tap do |data|
          while data.length < length
            socket_timeout = deadline - Utils.monotonic_time
            if socket_timeout <= 0
              raise Mongo::Error::TimeoutError
            end
            chunk = read_from_socket(length - data.length, socket_timeout: socket_timeout, csot: true)
            unless chunk.length > 0
              raise IOError, "Expected to read > 0 bytes but read 0 bytes"
            end
            data << chunk
          end
        end
      end
    end

    # Reads the +length+ bytes from the socket. The read operation may involve
    # multiple socket reads, each read is limited to +timeout+ second,
    # if the parameter is provided.
    #
    # @param [ Integer ] length The number of bytes to read.
    # @param [ Numeric ] socket_timeout The timeout to use for each chunk read.
    #
    # @return [ Object ] The data from the socket.
    def read_without_timeout(length, socket_timeout = nil)
      map_exceptions do
        String.new.tap do |data|
          while data.length < length
            chunk = read_from_socket(length - data.length, socket_timeout: socket_timeout)
            unless chunk.length > 0
              raise IOError, "Expected to read > 0 bytes but read 0 bytes"
            end
            data << chunk
          end
        end
      end
    end

    # Reads the +length+ bytes from the socket. The read operation may involve
    # multiple socket reads, each read is limited to +timeout+ second,
    # if the parameter is provided.
    #
    # @param [ Integer ] length The number of bytes to read.
    # @param [ Numeric ] :socket_timeout The timeout to use for each chunk read.
    # @param [ true | false ] :csot Whether the CSOT timeout is set for the operation.
    #
    # @return [ Object ] The data from the socket.
    def read_from_socket(length, socket_timeout: nil, csot: false)
      # Just in case
      if length == 0
        return ''.force_encoding('BINARY')
      end

      _timeout = socket_timeout || self.timeout
      if _timeout
        if _timeout > 0
          deadline = Utils.monotonic_time + _timeout
        elsif _timeout < 0
          raise_timeout_error!("Negative timeout #{_timeout} given to socket", csot)
        end
      end

      # We want to have a fixed and reasonably small size buffer for reads
      # because, for example, OpenSSL reads in 16 kb chunks max.
      # Having a 16 mb buffer means there will be 1000 reads each allocating
      # 16 mb of memory and using 16 kb of it.
      buf_size = read_buffer_size
      data = nil

      # If we want to read less than the buffer size, just allocate the
      # memory that is necessary
      if length < buf_size
        buf_size = length
      end

      # The binary encoding is important, otherwise Ruby performs encoding
      # conversions of some sort during the write into the buffer which
      # kills performance
      buf = allocate_string(buf_size)
      retrieved = 0
      begin
        while retrieved < length
          retrieve = length - retrieved
          if retrieve > buf_size
            retrieve = buf_size
          end
          chunk = @socket.read_nonblock(retrieve, buf)

          # If we read the entire wanted length in one operation,
          # return the data as is which saves one memory allocation and
          # one copy per read
          if retrieved == 0 && chunk.length == length
            return chunk
          end

          # If we are here, we are reading the wanted length in
          # multiple operations. Allocate the total buffer here rather
          # than up front so that the special case above won't be
          # allocating twice
          if data.nil?
            data = allocate_string(length)
          end

          # ... and we need to copy the chunks at this point
          data[retrieved, chunk.length] = chunk
          retrieved += chunk.length
        end
      # As explained in https://ruby-doc.com/core-trunk/IO.html#method-c-select,
      # reading from a TLS socket may require writing which may raise WaitWritable
      rescue IO::WaitReadable, IO::WaitWritable => exc
        if deadline
          select_timeout = deadline - Utils.monotonic_time
          if select_timeout <= 0
            raise_timeout_error!("Took more than #{_timeout} seconds to receive data", csot)
          end
        end
        if exc.is_a?(IO::WaitReadable)
          if pipe
            select_args = [[@socket, pipe], nil, [@socket, pipe], select_timeout]
          else
            select_args = [[@socket], nil, [@socket], select_timeout]
          end
        else
          select_args = [nil, [@socket], [@socket], select_timeout]
        end

        rv = Kernel.select(*select_args)
        if Lint.enabled?
          if pipe && rv&.include?(pipe)
            # If the return value of select is the read end of the pipe, and
            # an IOError is not raised, then that means the socket is still
            # open. Select is interrupted by closing the write end of the
            # pipe, which either returns the pipe if the socket is open, or
            # raises an IOError if it isn't. Select is interrupted after all
            # of the pending and checked out connections have been interrupted
            # and closed, and this only happens once the pool is cleared with
            # the interrupt_in_use_connections flag. This means that for the
            # socket to still be open when the select is interrupted, and for
            # that socket to be read from, a connection from the previous
            # generation must have been checked out of the pool after clear
            # was called, for reading on its socket. This should be impossible.
            raise Mongo::LintError, "Select interrupted for live socket. This should be impossible."
          end
        end

        if BSON::Environment.jruby?
          # Ignore the return value of Kernel.select.
          # On JRuby, select appears to return nil prior to timeout expiration
          # (apparently due to an EAGAIN) which then causes us to fail the read
          # even though we could have retried it.
          # Check the deadline ourselves.
          if deadline
            select_timeout = deadline - Utils.monotonic_time
            if select_timeout <= 0
              raise_timeout_error!("Took more than #{_timeout} seconds to receive data", csot)
            end
          end
        elsif rv.nil?
          raise_timeout_error!("Took more than #{_timeout} seconds to receive data (select call timed out)", csot)
        end
        retry
      end

      data
    end

    def allocate_string(capacity)
      String.new('', :capacity => capacity, :encoding => 'BINARY')
    end

    def read_buffer_size
      # Buffer size for non-TLS reads
      # 64kb
      65536
    end

    # Writes data to the socket instance.
    #
    # This is a separate method from +write+ for ease of mocking in the tests.
    # This method should not perform any exception mapping, upstream code
    # should map exceptions.
    #
    # @param [ Array ] args The data to be written.
    # @param [ Numeric ] :timeout The total timeout for the whole write operation.
    #
    # @return [ Integer ] The length of bytes written to the socket.
    def do_write(*args, timeout: nil)
      if timeout.nil?
        write_without_timeout(*args)
      else
        write_with_timeout(*args, timeout: timeout)
      end
    end

    # Writes data to the socket.
    #
    # @param [ Array ] args The data to be written.
    #
    # @return [ Integer ] The length of bytes written to the socket.
    def write_without_timeout(*args)
      # This method used to forward arguments to @socket.write in a
      # single call like so:
      #
      # @socket.write(*args)
      #
      # Turns out, when each buffer to be written is large (e.g. 32 MiB),
      # this write call would take an extremely long time (20+ seconds)
      # while using 100% CPU. Splitting the writes into chunks produced
      # massively better performance (0.05 seconds to write the 32 MiB of
      # data on the same hardware). Unfortunately splitting the data,
      # one would assume, results in it being copied, but this seems to be
      # a much more minor issue compared to the CPU cost of writing large
      # buffers.
      args.each do |buf|
        buf = buf.to_s
        i = 0
        while i < buf.length
          chunk = buf[i, WRITE_CHUNK_SIZE]
          i += @socket.write(chunk)
        end
      end
    end

    # Writes data to the socket, the write duration is limited to +timeout+.
    #
    # @param [ Array ] args The data to be written.
    # @param [ Numeric ] :timeout The total timeout for the whole write operation.
    #
    # @return [ Integer ] The length of bytes written to the socket.
    def write_with_timeout(*args, timeout:)
      raise ArgumentError, 'timeout cannot be nil' if timeout.nil?
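      # The data is written in WRITE_CHUNK_SIZE slices below. Note that
      # +write_chunk+ computes a fresh deadline for every slice, so +timeout+
      # effectively bounds each chunk write rather than the operation as a
      # whole. For illustration, with a hypothetical 100-byte buffer and a
      # 40-byte chunk size the slices would be:
      #
      #   buf[0...40], buf[40...80], buf[80...120] #=> the last slice is 20 bytes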
raise_timeout_error!("Negative timeout #{timeout} given to socket", true) if timeout < 0 written = 0 args.each do |buf| buf = buf.to_s i = 0 while i < buf.length chunk = buf[i...(i + WRITE_CHUNK_SIZE)] written += write_chunk(chunk, timeout) i += WRITE_CHUNK_SIZE end end written end def write_chunk(chunk, timeout) deadline = Utils.monotonic_time + timeout written = 0 while written < chunk.length begin written += @socket.write_nonblock(chunk[written..-1]) rescue IO::WaitWritable, Errno::EINTR if !wait_for_socket_to_be_writable(deadline) raise_timeout_error!("Took more than #{timeout} seconds to receive data", true) end retry end end written end def wait_for_socket_to_be_writable(deadline) select_timeout = deadline - Utils.monotonic_time rv = Kernel.select(nil, [@socket], nil, select_timeout) if BSON::Environment.jruby? # Ignore the return value of Kernel.select. # On JRuby, select appears to return nil prior to timeout expiration # (apparently due to a EAGAIN) which then causes us to fail the read # even though we could have retried it. # Check the deadline ourselves. select_timeout = deadline - Utils.monotonic_time return select_timeout > 0 end !rv.nil? end def unix_socket?(sock) defined?(UNIXSocket) && sock.is_a?(UNIXSocket) end DEFAULT_TCP_KEEPINTVL = 10 DEFAULT_TCP_KEEPCNT = 9 DEFAULT_TCP_KEEPIDLE = 120 DEFAULT_TCP_USER_TIMEOUT = 210 def set_keepalive_opts(sock) sock.setsockopt(SOL_SOCKET, SO_KEEPALIVE, true) set_option(sock, :TCP_KEEPINTVL, DEFAULT_TCP_KEEPINTVL) set_option(sock, :TCP_KEEPCNT, DEFAULT_TCP_KEEPCNT) set_option(sock, :TCP_KEEPIDLE, DEFAULT_TCP_KEEPIDLE) set_option(sock, :TCP_USER_TIMEOUT, DEFAULT_TCP_USER_TIMEOUT) rescue # JRuby 9.2.13.0 and lower do not define TCP_KEEPINTVL etc. constants. # JRuby 9.2.14.0 defines the constants but does not allow to get or # set them with this error: # Errno::ENOPROTOOPT: Protocol not available - Protocol not available end def set_option(sock, option, default) if Socket.const_defined?(option) system_default = sock.getsockopt(IPPROTO_TCP, option).int if system_default > default sock.setsockopt(IPPROTO_TCP, option, default) end end end def set_socket_options(sock) sock.set_encoding(BSON::BINARY) set_keepalive_opts(sock) end def map_exceptions begin yield rescue Errno::ETIMEDOUT => e raise Error::SocketTimeoutError, "#{e.class}: #{e} (for #{human_address})" rescue IOError, SystemCallError => e raise Error::SocketError, "#{e.class}: #{e} (for #{human_address})" rescue OpenSSL::SSL::SSLError => e raise Error::SocketError, "#{e.class}: #{e} (for #{human_address})" end end def human_address raise NotImplementedError end def raise_timeout_error!(message = nil, csot = false) if csot raise Mongo::Error::TimeoutError else raise Errno::ETIMEDOUT, message end end end end mongo-ruby-driver-2.21.3/lib/mongo/socket/000077500000000000000000000000001505113246500203455ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/lib/mongo/socket/ocsp_cache.rb000066400000000000000000000061031505113246500227610ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
# See the License for the specific language governing permissions and # limitations under the License. module Mongo class Socket # This module caches OCSP responses for their indicated validity time. # # The key is the CertificateId used for the OCSP request. # The value is the SingleResponse. # # @api private module OcspCache module_function def set(cert_id, response) delete(cert_id) responses << response end # Retrieves a cached SingleResponse for the specified CertificateId. # # This method may return expired responses if they are revoked. # Such responses were valid when they were first received. # # This method may also return responses that are valid but that may # expire by the time caller uses them. The caller should not perform # update time checks on the returned response. # # @return [ OpenSSL::OCSP::SingleResponse ] The previously # retrieved response. module_function def get(cert_id) resp = responses.detect do |resp| resp.certid.cmp(cert_id) end if resp # Only expire responses with good status. # Once a certificate is revoked, it should stay revoked forever, # hence we should be able to cache revoked responses indefinitely. if resp.cert_status == OpenSSL::OCSP::V_CERTSTATUS_GOOD && resp.next_update < Time.now then responses.delete(resp) resp = nil end end # If we have connected to a server and cached the OCSP response for it, # and then never connect to that server again, the cached OCSP response # is going to remain in memory indefinitely. Periodically remove all # expired OCSP responses, not just the ones matching the certificate id # we are querying by. if rand < 0.01 responses.delete_if do |resp| resp.next_update < Time.now end end resp end module_function def delete(cert_id) responses.delete_if do |resp| resp.certid.cmp(cert_id) end end # Clears the driver's OCSP response cache. # # @note Use Mongo.clear_ocsp_cache from applications instead of invoking # this method directly. module_function def clear responses.replace([]) end private LOCK = Mutex.new module_function def responses LOCK.synchronize do @responses ||= [] end end end end end mongo-ruby-driver-2.21.3/lib/mongo/socket/ocsp_verifier.rb000066400000000000000000000262341505113246500235400ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Net autoload :HTTP, 'net/http' end module Mongo class Socket # OCSP endpoint verifier. # # After a TLS connection is established, this verifier inspects the # certificate presented by the server, and if the certificate contains # an OCSP URI, performs the OCSP status request to the specified URI # (following up to 5 redirects) to verify the certificate status. # # @see https://ruby-doc.org/stdlib/libdoc/openssl/rdoc/OpenSSL/OCSP.html # # @api private class OcspVerifier include Loggable # @param [ String ] host_name The host name being verified, for # diagnostic output. # @param [ OpenSSL::X509::Certificate ] cert The certificate presented by # the server at host_name. 
      # @param [ OpenSSL::X509::Certificate ] ca_cert The CA certificate
      #   presented by the server or resolved locally from the server
      #   certificate.
      # @param [ OpenSSL::X509::Store ] cert_store The certificate store to
      #   use for verifying OCSP response. This should be the same store as
      #   used in SSLContext used with the SSLSocket that we are verifying the
      #   certificate for. This must NOT be the CA certificate provided by
      #   the server (i.e. anything taken out of peer_cert) - otherwise the
      #   server would dictate which CA authorities the client trusts.
      def initialize(host_name, cert, ca_cert, cert_store, **opts)
        @host_name = host_name
        @cert = cert
        @ca_cert = ca_cert
        @cert_store = cert_store
        @options = opts
      end

      attr_reader :host_name
      attr_reader :cert
      attr_reader :ca_cert
      attr_reader :cert_store
      attr_reader :options

      def timeout
        options[:timeout] || 5
      end

      # @return [ Array ] OCSP URIs in the specified server certificate.
      def ocsp_uris
        @ocsp_uris ||= begin
          # https://tools.ietf.org/html/rfc3546#section-2.3
          # prohibits multiple extensions with the same oid.
          ext = cert.extensions.detect do |ext|
            ext.oid == 'authorityInfoAccess'
          end

          if ext
            # Our test certificates have multiple OCSP URIs.
            ext.value.split("\n").select do |line|
              line.start_with?('OCSP - URI:')
            end.map do |line|
              line.split(':', 2).last
            end
          else
            []
          end
        end
      end

      def cert_id
        @cert_id ||= OpenSSL::OCSP::CertificateId.new(
          cert,
          ca_cert,
          OpenSSL::Digest::SHA1.new,
        )
      end

      def verify_with_cache
        handle_exceptions do
          return false if ocsp_uris.empty?

          resp = OcspCache.get(cert_id)
          if resp
            return return_ocsp_response(resp)
          end

          resp, errors = do_verify

          if resp
            OcspCache.set(cert_id, resp)
          end

          return_ocsp_response(resp, errors)
        end
      end

      # @return [ true | false ] Whether the certificate was verified.
      #
      # @raise [ Error::ServerCertificateRevoked ] If the certificate was
      #   definitively revoked.
      def verify
        handle_exceptions do
          return false if ocsp_uris.empty?

          resp, errors = do_verify
          return_ocsp_response(resp, errors)
        end
      end

      private

      def do_verify
        # This synchronized queue contains definitive pass/fail responses
        # obtained from the responders. We'll take the first one but due to
        # concurrency multiple responses may be produced and queued.
        @resp_queue = Queue.new

        # This synchronized queue contains strings, one per responder, that
        # explain why each responder hasn't produced a definitive response.
        # These are concatenated and logged if none of the responders produced
        # a definitive response, or if the main thread times out waiting for
        # a definitive response (in which case some of the worker threads'
        # diagnostics may be logged and some may not).
        @resp_errors = Queue.new

        @req = OpenSSL::OCSP::Request.new
        @req.add_certid(cert_id)
        @req.add_nonce
        @serialized_req = @req.to_der

        @outstanding_requests = ocsp_uris.count
        @outstanding_requests_lock = Mutex.new

        threads = ocsp_uris.map do |uri|
          Thread.new do
            verify_one_responder(uri)
          end
        end

        resp = begin
          ::Timeout.timeout(timeout) do
            @resp_queue.shift
          end
        rescue ::Timeout::Error
          nil
        end

        threads.map(&:kill)
        threads.map(&:join)

        [resp, @resp_errors]
      end

      def verify_one_responder(uri)
        original_uri = uri
        redirect_count = 0
        http_response = nil
        loop do
          http_response = begin
            uri = URI(uri)
            Net::HTTP.start(uri.hostname, uri.port) do |http|
              path = uri.path
              if path.empty?
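                # An empty request path is not a usable HTTP request target;
                # fall back to '/' so that Net::HTTP#post has a valid path.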
path = '/' end http.post(path, @serialized_req, 'content-type' => 'application/ocsp-request') end rescue IOError, SystemCallError => e @resp_errors << "OCSP request to #{report_uri(original_uri, uri)} failed: #{e.class}: #{e}" return false end code = http_response.code.to_i if (300..399).include?(code) redirected_uri = http_response.header['location'] uri = ::URI.join(uri, redirected_uri) redirect_count += 1 if redirect_count > 5 @resp_errors << "OCSP request to #{report_uri(original_uri, uri)} failed: too many redirects (6)" return false end next end if code >= 400 @resp_errors << "OCSP request to #{report_uri(original_uri, uri)} failed with HTTP status code #{http_response.code}" + report_response_body(http_response.body) return false end if code != 200 # There must be a body provided with the response, if one isn't # provided the response cannot be verified. @resp_errors << "OCSP request to #{report_uri(original_uri, uri)} failed with unexpected HTTP status code #{http_response.code}" + report_response_body(http_response.body) return false end break end resp = OpenSSL::OCSP::Response.new(http_response.body) unless resp.basic @resp_errors << "OCSP response from #{report_uri(original_uri, uri)} is #{resp.status}: #{resp.status_string}" return false end resp = resp.basic unless resp.verify([ca_cert], cert_store) # Ruby's OpenSSL binding discards error information - see # https://github.com/ruby/openssl/issues/395 @resp_errors << "OCSP response from #{report_uri(original_uri, uri)} failed signature verification; set `OpenSSL.debug = true` to see why" return false end if @req.check_nonce(resp) == 0 @resp_errors << "OCSP response from #{report_uri(original_uri, uri)} included invalid nonce" return false end resp = resp.find_response(cert_id) unless resp @resp_errors << "OCSP response from #{report_uri(original_uri, uri)} did not include information about the requested certificate" return false end # TODO make a new class instead of patching the stdlib one? resp.instance_variable_set('@uri', uri) resp.instance_variable_set('@original_uri', original_uri) class << resp attr_reader :uri, :original_uri end unless resp.check_validity @resp_errors << "OCSP response from #{report_uri(original_uri, uri)} was invalid: this_update was in the future or next_update time has passed" return false end unless [ OpenSSL::OCSP::V_CERTSTATUS_GOOD, OpenSSL::OCSP::V_CERTSTATUS_REVOKED, ].include?(resp.cert_status) @resp_errors << "OCSP response from #{report_uri(original_uri, uri)} had a non-definitive status: #{resp.cert_status}" return false end # Note this returns the redirected URI @resp_queue << resp rescue => exc Utils.warn_bg_exception("Error performing OCSP verification for '#{host_name}' via '#{uri}'", exc, logger: options[:logger], log_prefix: options[:log_prefix], bg_error_backtrace: options[:bg_error_backtrace], ) false ensure @outstanding_requests_lock.synchronize do @outstanding_requests -= 1 if @outstanding_requests == 0 @resp_queue << nil end end end def return_ocsp_response(resp, errors = nil) if resp if resp.cert_status == OpenSSL::OCSP::V_CERTSTATUS_REVOKED raise_revoked_error(resp) end true else reasons = [] errors.length.times do reasons << errors.shift end if reasons.empty? 
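          # An empty error queue means no responder produced a diagnostic
          # before the overall wait timed out, so the only useful message
          # is the responder list and the timeout itself.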
msg = "No responses from responders: #{ocsp_uris.join(', ')} within #{timeout} seconds" else msg = "For responders #{ocsp_uris.join(', ')} with a timeout of #{timeout} seconds: #{reasons.join(', ')}" end log_warn("TLS certificate of '#{host_name}' could not be definitively verified via OCSP: #{msg}") false end end def handle_exceptions begin yield rescue Error::ServerCertificateRevoked raise rescue => exc Utils.warn_bg_exception( "Error performing OCSP verification for '#{host_name}'", exc, **options) false end end def raise_revoked_error(resp) if resp.uri == resp.original_uri redirect = '' else redirect = " (redirected from #{resp.original_uri})" end raise Error::ServerCertificateRevoked, "TLS certificate of '#{host_name}' has been revoked according to '#{resp.uri}'#{redirect} for reason '#{resp.revocation_reason}' at '#{resp.revocation_time}'" end def report_uri(original_uri, uri) if URI(uri) == URI(original_uri) uri else "#{original_uri} (redirected to #{uri})" end end def report_response_body(body) if body ": #{body}" else '' end end end end end mongo-ruby-driver-2.21.3/lib/mongo/socket/ssl.rb000066400000000000000000000515311505113246500215000ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Socket # Wrapper for TLS sockets. # # @since 2.0.0 class SSL < Socket include OpenSSL include Loggable # Initializes a new TLS socket. # # @example Create the TLS socket. # SSL.new('::1', 27017, 30) # # @param [ String ] host The hostname or IP address. # @param [ Integer ] port The port number. # @param [ Float ] timeout The socket timeout value. # @param [ Integer ] family The socket family. # @param [ Hash ] options The options. # # @option options [ Float ] :connect_timeout Connect timeout. # @option options [ Address ] :connection_address Address of the # connection that created this socket. # @option options [ Integer ] :connection_generation Generation of the # connection (for non-monitoring connections) that created this socket. # @option options [ true | false ] :monitor Whether this socket was # created by a monitoring connection. # @option options [ String ] :ssl_ca_cert The file containing concatenated # certificate authority certificates used to validate certs passed from the # other end of the connection. Intermediate certificates should NOT be # specified in files referenced by this option. One of :ssl_ca_cert, # :ssl_ca_cert_string or :ssl_ca_cert_object (in order of priority) is # required when using :ssl_verify. # @option options [ Array ] :ssl_ca_cert_object # An array of OpenSSL::X509::Certificate objects representing the # certificate authority certificates used to validate certs passed from # the other end of the connection. Intermediate certificates should NOT # be specified in files referenced by this option. One of :ssl_ca_cert, # :ssl_ca_cert_string or :ssl_ca_cert_object (in order of priority) # is required when using :ssl_verify. 
      # @option options [ String ] :ssl_ca_cert_string A string containing
      #   certificate authority certificate used to validate certs passed from the
      #   other end of the connection. This option allows passing only one CA
      #   certificate to the driver. Intermediate certificates should NOT
      #   be specified in files referenced by this option. One of :ssl_ca_cert,
      #   :ssl_ca_cert_string or :ssl_ca_cert_object (in order of priority) is
      #   required when using :ssl_verify.
      # @option options [ String ] :ssl_cert The certificate file used to identify
      #   the connection against MongoDB. A certificate chain may be passed by
      #   specifying the client certificate first followed by any intermediate
      #   certificates up to the CA certificate. The file may also contain the
      #   certificate's private key, which will be ignored. This option, if present,
      #   takes precedence over the values of :ssl_cert_string and :ssl_cert_object.
      # @option options [ OpenSSL::X509::Certificate ] :ssl_cert_object The OpenSSL::X509::Certificate
      #   used to identify the connection against MongoDB. Only one certificate
      #   may be passed through this option.
      # @option options [ String ] :ssl_cert_string A string containing the PEM-encoded
      #   certificate used to identify the connection against MongoDB. A certificate
      #   chain may be passed by specifying the client certificate first followed
      #   by any intermediate certificates up to the CA certificate. The string
      #   may also contain the certificate's private key, which will be ignored.
      #   This option, if present, takes precedence over the value of :ssl_cert_object.
      # @option options [ String ] :ssl_key The private keyfile used to identify the
      #   connection against MongoDB. Note that even if the key is stored in the same
      #   file as the certificate, both need to be explicitly specified. This option,
      #   if present, takes precedence over the values of :ssl_key_string and :ssl_key_object.
      # @option options [ OpenSSL::PKey ] :ssl_key_object The private key used to identify the
      #   connection against MongoDB.
      # @option options [ String ] :ssl_key_pass_phrase A passphrase for the private key.
      # @option options [ String ] :ssl_key_string A string containing the PEM-encoded private key
      #   used to identify the connection against MongoDB. This parameter, if present,
      #   takes precedence over the value of option :ssl_key_object.
      # @option options [ true, false ] :ssl_verify Whether to perform peer certificate validation and
      #   hostname verification. Note that the decision of whether to validate certificates will be
      #   overridden if :ssl_verify_certificate is set, and the decision of whether to validate
      #   hostnames will be overridden if :ssl_verify_hostname is set.
      # @option options [ true, false ] :ssl_verify_certificate Whether to perform peer certificate
      #   validation. This setting overrides :ssl_verify with respect to whether certificate
      #   validation is performed.
      # @option options [ true, false ] :ssl_verify_hostname Whether to perform peer hostname
      #   validation. This setting overrides :ssl_verify with respect to whether hostname validation
      #   is performed.
      #
      # @since 2.0.0
      # @api private
      def initialize(host, port, host_name, timeout, family, options = {})
        super(timeout, options)
        @host, @port, @host_name = host, port, host_name
        @context = create_context(options)
        @family = family
        @tcp_socket = ::Socket.new(family, SOCK_STREAM, 0)
        begin
          @tcp_socket.setsockopt(IPPROTO_TCP, TCP_NODELAY, 1)
          set_socket_options(@tcp_socket)
          run_tls_context_hooks
          connect!
        rescue
          @tcp_socket.close
          raise
        end
      end

      # @return [ SSLContext ] context The TLS context.
      attr_reader :context

      # @return [ String ] host The host to connect to.
      attr_reader :host

      # @return [ String ] host_name The original host name.
      attr_reader :host_name

      # @return [ Integer ] port The port to connect to.
      attr_reader :port

      # Establishes a socket connection.
      #
      # @example Connect the socket.
      #   sock.connect!
      #
      # @note This method mutates the object by setting the socket
      #   internally.
      #
      # @return [ SSL ] The connected socket instance.
      #
      # @since 2.0.0
      def connect!
        sockaddr = ::Socket.pack_sockaddr_in(port, host)
        connect_timeout = options[:connect_timeout]
        map_exceptions do
          if connect_timeout && connect_timeout != 0
            deadline = Utils.monotonic_time + connect_timeout
            if BSON::Environment.jruby?
              # We encounter some strange problems with connect_nonblock for
              # ssl sockets on JRuby. Therefore, we use the old +Timeout.timeout+
              # solution, even though it is known to be not very reliable.
              raise Error::SocketTimeoutError, 'connect_timeout expired' if connect_timeout < 0

              Timeout.timeout(connect_timeout, Error::SocketTimeoutError, "The socket took over #{options[:connect_timeout]} seconds to connect") do
                connect_without_timeout(sockaddr)
              end
            else
              connect_with_timeout(sockaddr, connect_timeout)
            end
            remaining_timeout = deadline - Utils.monotonic_time
            verify_certificate!(@socket)
            verify_ocsp_endpoint!(@socket, remaining_timeout)
          else
            connect_without_timeout(sockaddr)
            verify_certificate!(@socket)
            verify_ocsp_endpoint!(@socket)
          end
        end
        self
      rescue
        @socket&.close
        @socket = nil
        raise
      end
      private :connect!

      # Read a single byte from the socket.
      #
      # @example Read a single byte.
      #   socket.readbyte
      #
      # @return [ Object ] The read byte.
      #
      # @since 2.0.0
      def readbyte
        map_exceptions do
          byte = socket.read(1).bytes.to_a[0]
          byte.nil? ? raise(EOFError) : byte
        end
      end

      private

      # Connects the socket without a timeout provided.
      #
      # @param [ String ] sockaddr Address to connect to.
      def connect_without_timeout(sockaddr)
        @tcp_socket.connect(sockaddr)
        @socket = OpenSSL::SSL::SSLSocket.new(@tcp_socket, context)
        @socket.hostname = @host_name
        @socket.sync_close = true
        @socket.connect
      end

      # Connects the socket with the connect timeout. The timeout applies to
      # connecting both the ssl socket and the underlying tcp socket.
      #
      # @param [ String ] sockaddr Address to connect to.
      def connect_with_timeout(sockaddr, connect_timeout)
        if connect_timeout <= 0
          raise Error::SocketTimeoutError, "The socket took over #{connect_timeout} seconds to connect"
        end

        deadline = Utils.monotonic_time + connect_timeout
        connect_tcp_socket_with_timeout(sockaddr, deadline, connect_timeout)
        connect_ssl_socket_with_timeout(deadline, connect_timeout)
      end

      def connect_tcp_socket_with_timeout(sockaddr, deadline, connect_timeout)
        if deadline <= Utils.monotonic_time
          raise Error::SocketTimeoutError, "The socket took over #{connect_timeout} seconds to connect"
        end
        begin
          @tcp_socket.connect_nonblock(sockaddr)
        rescue IO::WaitWritable
          with_select_timeout(deadline, connect_timeout) do |select_timeout|
            IO.select(nil, [@tcp_socket], nil, select_timeout)
          end
          retry
        rescue Errno::EISCONN
          # Socket is connected, nothing to do.
        end
      end

      def connect_ssl_socket_with_timeout(deadline, connect_timeout)
        if deadline <= Utils.monotonic_time
          raise Error::SocketTimeoutError, "The socket took over #{connect_timeout} seconds to connect"
        end

        @socket = OpenSSL::SSL::SSLSocket.new(@tcp_socket, context)
        @socket.hostname = @host_name
        @socket.sync_close = true

        # We still have time; connect the TLS socket.
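        # The handshake loop below mirrors the TCP connect above:
        # connect_nonblock raises WaitReadable/WaitWritable when OpenSSL needs
        # the underlying descriptor to become readable or writable, we select
        # with whatever time remains before the deadline, and retry until the
        # handshake completes or the deadline passes.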
begin @socket.connect_nonblock rescue IO::WaitReadable, OpenSSL::SSL::SSLErrorWaitReadable with_select_timeout(deadline, connect_timeout) do |select_timeout| IO.select([@socket], nil, nil, select_timeout) end retry rescue IO::WaitWritable, OpenSSL::SSL::SSLErrorWaitWritable with_select_timeout(deadline, connect_timeout) do |select_timeout| IO.select(nil, [@socket], nil, select_timeout) end retry rescue Errno::EISCONN # Socket is connected, nothing to do end end # Raises +Error::SocketTimeoutError+ exception if deadline reached or the # block returns nil. The block should call +IO.select+ with the # +connect_timeout+ value. It returns nil if the +connect_timeout+ expires. def with_select_timeout(deadline, connect_timeout, &block) select_timeout = deadline - Utils.monotonic_time if select_timeout <= 0 raise Error::SocketTimeoutError, "The socket took over #{connect_timeout} seconds to connect" end rv = block.call(select_timeout) if rv.nil? raise Error::SocketTimeoutError, "The socket took over #{connect_timeout} seconds to connect" end end def verify_certificate? # If ssl_verify_certificate is not present, disable only if # ssl_verify is explicitly set to false. if options[:ssl_verify_certificate].nil? options[:ssl_verify] != false # If ssl_verify_certificate is present, enable or disable based on its value. else !!options[:ssl_verify_certificate] end end def verify_hostname? # If ssl_verify_hostname is not present, disable only if ssl_verify is # explicitly set to false. if options[:ssl_verify_hostname].nil? options[:ssl_verify] != false # If ssl_verify_hostname is present, enable or disable based on its value. else !!options[:ssl_verify_hostname] end end def verify_ocsp_endpoint? if !options[:ssl_verify_ocsp_endpoint].nil? options[:ssl_verify_ocsp_endpoint] != false elsif !options[:ssl_verify_certificate].nil? options[:ssl_verify_certificate] != false else options[:ssl_verify] != false end end def create_context(options) OpenSSL::SSL::SSLContext.new.tap do |context| if OpenSSL::SSL.const_defined?(:OP_NO_RENEGOTIATION) context.options = context.options | OpenSSL::SSL::OP_NO_RENEGOTIATION end if context.respond_to?(:renegotiation_cb=) # Disable renegotiation for older Ruby versions per the sample code at # https://rubydocs.org/d/ruby-2-6-0/classes/OpenSSL/SSL/SSLContext.html # In JRuby we must allow one call as this callback is invoked for # the initial connection also, not just for renegotiations - # https://github.com/jruby/jruby-openssl/issues/180 if BSON::Environment.jruby? allowed_calls = 1 else allowed_calls = 0 end context.renegotiation_cb = lambda do |ssl| if allowed_calls <= 0 raise RuntimeError, 'Client renegotiation disabled' end allowed_calls -= 1 end end set_cert(context, options) set_key(context, options) if verify_certificate? context.verify_mode = OpenSSL::SSL::VERIFY_PEER set_cert_verification(context, options) else context.verify_mode = OpenSSL::SSL::VERIFY_NONE end if context.respond_to?(:verify_hostname=) # We manually check the hostname after the connection is established if necessary, so # we disable it here in order to give consistent errors across Ruby versions which # don't support hostname verification at the time of the handshake. context.verify_hostname = OpenSSL::SSL::VERIFY_NONE end end end def set_cert(context, options) # Since we clear cert_text during processing, we need to examine # ssl_cert_object here to avoid considering it if we have also # processed the text. 
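      # In effect the precedence is :ssl_cert (a file path), then
      # :ssl_cert_string, then :ssl_cert_object. A sketch of equivalent
      # inputs (the path and PEM text below are illustrative only):
      #
      #   { ssl_cert: '/path/to/client.pem' }
      #   { ssl_cert_string: File.read('/path/to/client.pem') }
      #   { ssl_cert_object: OpenSSL::X509::Certificate.new(pem_text) }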
        if options[:ssl_cert]
          cert_text = File.read(options[:ssl_cert])
          cert_object = nil
        elsif cert_text = options[:ssl_cert_string]
          cert_object = nil
        else
          cert_object = options[:ssl_cert_object]
        end

        # The client certificate may be a single certificate or a bundle
        # (client certificate followed by intermediate certificates).
        # The text may also include private keys for the certificates.
        # OpenSSL supports passing the entire bundle as a certificate chain
        # to the context via SSL_CTX_use_certificate_chain_file, but the
        # Ruby openssl extension does not currently expose this functionality
        # per https://github.com/ruby/openssl/issues/254.
        # Therefore, extract the individual certificates from the certificate
        # text, and if there is more than one certificate provided, use
        # extra_chain_cert option to add the intermediate ones. This
        # implementation is modeled after
        # https://github.com/venuenext/ruby-kafka/commit/9495f5daf254b43bc88062acad9359c5f32cb8b5.
        # Note that the parsing here is not identical to what OpenSSL employs -
        # for instance, if there is no newline between two certificates
        # this code will extract them both but OpenSSL fails in this situation.
        if cert_text
          certs = extract_certs(cert_text)
          if certs.length > 1
            context.cert = OpenSSL::X509::Certificate.new(certs.shift)
            context.extra_chain_cert = certs.map do |cert|
              OpenSSL::X509::Certificate.new(cert)
            end
            # All certificates are already added to the context, skip adding
            # them again below.
            cert_text = nil
          end
        end

        if cert_text
          context.cert = OpenSSL::X509::Certificate.new(cert_text)
        elsif cert_object
          context.cert = cert_object
        end
      end

      def set_key(context, options)
        passphrase = options[:ssl_key_pass_phrase]
        if options[:ssl_key]
          context.key = load_private_key(File.read(options[:ssl_key]), passphrase)
        elsif options[:ssl_key_string]
          context.key = load_private_key(options[:ssl_key_string], passphrase)
        elsif options[:ssl_key_object]
          context.key = options[:ssl_key_object]
        end
      end

      def load_private_key(text, passphrase)
        args = if passphrase
          [text, passphrase]
        else
          [text]
        end
        # On JRuby, PKey.read does not grok cert+key bundles.
        # https://github.com/jruby/jruby-openssl/issues/176
        if BSON::Environment.jruby?
          [OpenSSL::PKey::RSA, OpenSSL::PKey::DSA].each do |cls|
            begin
              return cls.send(:new, *args)
            rescue OpenSSL::PKey::PKeyError
              # ignore
            end
          end
          # Neither RSA nor DSA worked, fall through to trying PKey
        end
        OpenSSL::PKey.send(:read, *args)
      end

      def set_cert_verification(context, options)
        context.verify_mode = OpenSSL::SSL::VERIFY_PEER
        cert_store = OpenSSL::X509::Store.new
        if options[:ssl_ca_cert]
          cert_store.add_file(options[:ssl_ca_cert])
        elsif options[:ssl_ca_cert_string]
          cert_store.add_cert(OpenSSL::X509::Certificate.new(options[:ssl_ca_cert_string]))
        elsif options[:ssl_ca_cert_object]
          unless options[:ssl_ca_cert_object].is_a?(Array)
            raise TypeError, "Option :ssl_ca_cert_object should be an array of OpenSSL::X509::Certificate objects"
          end
          options[:ssl_ca_cert_object].each { |cert| cert_store.add_cert(cert) }
        else
          cert_store.set_default_paths
        end
        context.cert_store = cert_store
      end

      def verify_certificate!(socket)
        if verify_hostname?
          unless OpenSSL::SSL.verify_certificate_identity(socket.peer_cert, host_name)
            raise Error::SocketError, 'TLS handshake failed due to a hostname mismatch.'
          end
        end
      end

      def verify_ocsp_endpoint!(socket, timeout = nil)
        return unless verify_ocsp_endpoint?
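        # Locate the issuer certificate in the chain presented by the peer
        # (without it the OCSP CertificateId cannot be constructed), then
        # delegate to OcspVerifier, passing along whatever time remains of
        # the connect timeout so the OCSP check cannot extend the handshake
        # budget.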
cert = socket.peer_cert ca_cert = find_issuer(cert, socket.peer_cert_chain) unless ca_cert log_warn("TLS certificate of '#{host_name}' could not be definitively verified via OCSP: issuer certificate not found in the chain.") return end verifier = OcspVerifier.new(@host_name, cert, ca_cert, context.cert_store, **Utils.shallow_symbolize_keys(options).merge(timeout: timeout)) verifier.verify_with_cache end def read_buffer_size # Buffer size for TLS reads. # Capped at 16k due to https://linux.die.net/man/3/ssl_read 16384 end def human_address "#{host}:#{port} (#{host_name}:#{port}, TLS)" end def run_tls_context_hooks Mongo.tls_context_hooks.each do |hook| hook.call(@context) end end BEGIN_CERT = "-----BEGIN CERTIFICATE-----" END_CERT = "-----END CERTIFICATE-----" # This was originally a scan + regex, but the regex was particularly # inefficient and was flagged as a concern by static analysis. def extract_certs(text) [].tap do |list| pos = 0 while (begin_idx = text.index(BEGIN_CERT, pos)) end_idx = text.index(END_CERT, begin_idx) break unless end_idx end_idx += END_CERT.length list.push(text[begin_idx...end_idx]) pos = end_idx end end end # Find the issuer certificate in the chain. def find_issuer(cert, cert_chain) cert_chain.find { |c| c.subject == cert.issuer } end end end end mongo-ruby-driver-2.21.3/lib/mongo/socket/tcp.rb000066400000000000000000000102241505113246500214570ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Socket # Wrapper for TCP sockets. # # @since 2.0.0 class TCP < Socket # Initializes a new TCP socket. # # @example Create the TCP socket. # TCP.new('::1', 27017, 30, Socket::PF_INET) # TCP.new('127.0.0.1', 27017, 30, Socket::PF_INET) # # @param [ String ] host The hostname or IP address. # @param [ Integer ] port The port number. # @param [ Float ] timeout The socket timeout value. # @param [ Integer ] family The socket family. # @param [ Hash ] options The options. # # @option options [ Float ] :connect_timeout Connect timeout. # @option options [ Address ] :connection_address Address of the # connection that created this socket. # @option options [ Integer ] :connection_generation Generation of the # connection (for non-monitoring connections) that created this socket. # @option options [ true | false ] :monitor Whether this socket was # created by a monitoring connection. # # @since 2.0.0 # @api private def initialize(host, port, timeout, family, options = {}) if family.nil? raise ArgumentError, 'family must be specified' end super(timeout, options) @host, @port = host, port @family = family @socket = ::Socket.new(family, SOCK_STREAM, 0) begin set_socket_options(@socket) connect! rescue @socket.close raise end end # @return [ String ] host The host to connect to. attr_reader :host # @return [ Integer ] port The port to connect to. attr_reader :port # Establishes a socket connection. # # @example Connect the socket. # sock.connect! 
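      #
      # @example Eager connection at construction time (illustrative values).
      #   sock = TCP.new('127.0.0.1', 27017, 30, ::Socket::AF_INET,
      #     connect_timeout: 10) # initialize invokes connect! internally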
# # @note This method mutates the object by setting the socket # internally. # # @return [ TCP ] The connected socket instance. # # @since 2.0.0 # @api private def connect! socket.setsockopt(IPPROTO_TCP, TCP_NODELAY, 1) sockaddr = ::Socket.pack_sockaddr_in(port, host) connect_timeout = options[:connect_timeout] map_exceptions do if connect_timeout && connect_timeout != 0 connect_with_timeout(sockaddr, connect_timeout) else connect_without_timeout(sockaddr) end end self end # @api private def connect_without_timeout(sockaddr) socket.connect(sockaddr) end # @api private def connect_with_timeout(sockaddr, connect_timeout) if connect_timeout <= 0 raise Error::SocketTimeoutError, "The socket took over #{connect_timeout} seconds to connect" end deadline = Utils.monotonic_time + connect_timeout begin socket.connect_nonblock(sockaddr) rescue IO::WaitWritable select_timeout = deadline - Utils.monotonic_time if select_timeout <= 0 raise Error::SocketTimeoutError, "The socket took over #{connect_timeout} seconds to connect" end if IO.select(nil, [socket], nil, select_timeout) retry else socket.close raise Error::SocketTimeoutError, "The socket took over #{connect_timeout} seconds to connect" end rescue Errno::EISCONN # Socket is connected, nothing more to do end end private def human_address "#{host}:#{port} (no TLS)" end end end end mongo-ruby-driver-2.21.3/lib/mongo/socket/unix.rb000066400000000000000000000035641505113246500216650ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class Socket # Wrapper for Unix sockets. # # @since 2.0.0 class Unix < Socket # Initializes a new Unix socket. # # @example Create the Unix socket. # Unix.new('/path/to.sock', 5) # # @param [ String ] path The path. # @param [ Float ] timeout The socket timeout value. # @param [ Hash ] options The options. # # @option options [ Float ] :connect_timeout Connect timeout (unused). # @option options [ Address ] :connection_address Address of the # connection that created this socket. # @option options [ Integer ] :connection_generation Generation of the # connection (for non-monitoring connections) that created this socket. # @option options [ true | false ] :monitor Whether this socket was # created by a monitoring connection. # # @since 2.0.0 # @api private def initialize(path, timeout, options = {}) super(timeout, options) @path = path @socket = ::UNIXSocket.new(path) set_socket_options(@socket) end # @return [ String ] path The path to connect to. attr_reader :path private def human_address path end end end end mongo-ruby-driver-2.21.3/lib/mongo/srv.rb000066400000000000000000000013161505113246500202150ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2019-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the 'License'); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an 'AS IS' BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'mongo/srv/result' require 'mongo/srv/resolver' require 'mongo/srv/monitor' mongo-ruby-driver-2.21.3/lib/mongo/srv/000077500000000000000000000000001505113246500176675ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/lib/mongo/srv/monitor.rb000066400000000000000000000066771505113246500217230ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Srv # Periodically retrieves SRV records for the cluster's SRV URI, and # sets the cluster's server list to the SRV lookup result. # # If an error is encountered during SRV lookup or an SRV record is invalid # or disallowed for security reasons, a warning is logged and monitoring # continues. # # @api private class Monitor include Loggable include BackgroundThread MIN_SCAN_INTERVAL = 60 DEFAULT_TIMEOUT = 10 # Creates the SRV monitor. # # @param [ Cluster ] cluster The cluster. # # @option opts [ Float ] :timeout The timeout to use for DNS lookups. # @option opts [ URI::SRVProtocol ] :srv_uri The SRV URI to monitor. # @option opts [ Hash ] :resolv_options For internal driver use only. # Options to pass through to Resolv::DNS constructor for SRV lookups. def initialize(cluster, **opts) @cluster = cluster unless @srv_uri = opts.delete(:srv_uri) raise ArgumentError, 'SRV URI is required' end @options = opts.freeze @resolver = Srv::Resolver.new(**opts) @last_result = @srv_uri.srv_result @stop_semaphore = Semaphore.new end attr_reader :options attr_reader :cluster # @return [ Srv::Result ] Last known SRV lookup result. Used for # determining intervals between SRV lookups, which depend on SRV DNS # records' TTL values. attr_reader :last_result private def do_work scan! @stop_semaphore.wait(scan_interval) end def scan! begin last_result = Timeout.timeout(timeout) do @resolver.get_records(@srv_uri.query_hostname) end rescue Resolv::ResolvTimeout => e log_warn("SRV monitor: timed out trying to resolve hostname #{@srv_uri.query_hostname}: #{e.class}: #{e}") return rescue ::Timeout::Error log_warn("SRV monitor: timed out trying to resolve hostname #{@srv_uri.query_hostname} (timeout=#{timeout})") return rescue Resolv::ResolvError => e log_warn("SRV monitor: unable to resolve hostname #{@srv_uri.query_hostname}: #{e.class}: #{e}") return end if last_result.empty? log_warn("SRV monitor: hostname #{@srv_uri.query_hostname} resolved to zero records") return end @cluster.set_server_list(last_result.address_strs) end def scan_interval if last_result.empty? [cluster.heartbeat_interval, MIN_SCAN_INTERVAL].min elsif last_result.min_ttl.nil? 
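          # A non-empty result that carries no TTL information gives us
          # nothing to derive an interval from, so fall back to the minimum
          # scan interval.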
          MIN_SCAN_INTERVAL
        else
          [last_result.min_ttl, MIN_SCAN_INTERVAL].max
        end
      end

      def timeout
        options[:timeout] || DEFAULT_TIMEOUT
      end
    end
  end
end
mongo-ruby-driver-2.21.3/lib/mongo/srv/resolver.rb000066400000000000000000000137141505113246500220630ustar00rootroot00000000000000
# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2017-2020 MongoDB Inc.
#
# Licensed under the Apache License, Version 2.0 (the 'License');
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an 'AS IS' BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

module Mongo
  module Srv

    # Encapsulates the necessary behavior for querying SRV records as
    # required by the driver.
    #
    # @api private
    class Resolver
      include Loggable

      # @return [ String ] RECORD_PREFIX The prefix prepended to each hostname
      #   before querying SRV records.
      RECORD_PREFIX = '_mongodb._tcp.'.freeze

      # Generates the record prefix with a custom SRV service name if it is
      # provided.
      #
      # @param srv_service_name [ String | nil ] The SRV service name to use
      #   in the record prefix.
      # @return [ String ] The generated record prefix.
      def record_prefix(srv_service_name=nil)
        return srv_service_name ? "_#{srv_service_name}._tcp." : RECORD_PREFIX
      end

      # Creates a new Resolver.
      #
      # @option opts [ Float ] :timeout The timeout, in seconds, to use for
      #   each DNS record resolution.
      # @option opts [ Boolean ] :raise_on_invalid Whether or not to raise
      #   an exception if either a record with a mismatched domain is found
      #   or if no records are found. Defaults to true.
      # @option opts [ Hash ] :resolv_options For internal driver use only.
      #   Options to pass through to Resolv::DNS constructor for SRV lookups.
      def initialize(**opts)
        @options = opts.freeze
        @resolver = Resolv::DNS.new(@options[:resolv_options])
        @resolver.timeouts = timeout
      end

      # @return [ Hash ] Resolver options.
      attr_reader :options

      def timeout
        options[:timeout] || Monitor::DEFAULT_TIMEOUT
      end

      # Obtains all of the SRV records for a given hostname. If srv_max_hosts
      # is specified and it is greater than 0, return at most srv_max_hosts
      # records.
      #
      # In the event that a record with a mismatched domain is found or no
      # records are found, if the :raise_on_invalid option is true,
      # an exception will be raised, otherwise a warning will be logged.
      #
      # @param [ String ] hostname The hostname whose records should be obtained.
      # @param [ String | nil ] srv_service_name The SRV service name for the DNS query.
      #   If nil, 'mongodb' is used.
      # @param [ Integer | nil ] srv_max_hosts The maximum number of records to return.
      #   If this value is nil, return all of the records.
      #
      # @raise [ Mongo::Error::MismatchedDomain ] If the :raise_on_invalid
      #   Resolver option is true and a record with a domain name that does
      #   not match the hostname's is found.
      # @raise [ Mongo::Error::NoSRVRecords ] If the :raise_on_invalid Resolver
      #   option is true and no records are found.
      #
      # @return [ Mongo::Srv::Result ] SRV lookup result.
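      #
      # @example Hypothetical lookup (hostname and addresses are illustrative only).
      #   resolver = Srv::Resolver.new(timeout: 5)
      #   result = resolver.get_records('cluster0.example.com')
      #   result.address_strs #=> ["db1.example.com:27017", "db2.example.com:27017"]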
      def get_records(hostname, srv_service_name=nil, srv_max_hosts=nil)
        query_name = record_prefix(srv_service_name) + hostname
        resources = @resolver.getresources(query_name, Resolv::DNS::Resource::IN::SRV)

        # Collect all of the records into a Result object, raising an error
        # or logging a warning if a record with a mismatched domain is found.
        # Note that in the case where a warning is logged, the record is _not_
        # added to the Result object.
        result = Srv::Result.new(hostname)
        resources.each do |record|
          begin
            result.add_record(record)
          rescue Error::MismatchedDomain => e
            if raise_on_invalid?
              raise
            else
              log_warn(e.message)
            end
          end
        end

        # If no records are found, either raise an error or log a warning
        # based on the Resolver's :raise_on_invalid option.
        if result.empty?
          if raise_on_invalid?
            raise Error::NoSRVRecords.new(URI::SRVProtocol::NO_SRV_RECORDS % hostname)
          else
            log_warn(URI::SRVProtocol::NO_SRV_RECORDS % hostname)
          end
        end

        # if srv_max_hosts is in [1, #addresses)
        if (1...result.address_strs.length).include? srv_max_hosts
          sampled_records = resources.shuffle.first(srv_max_hosts)
          result = Srv::Result.new(hostname)
          sampled_records.each { |record| result.add_record(record) }
        end

        result
      end

      # Obtains the TXT records of a host.
      #
      # @param [ String ] hostname The host whose TXT records should be obtained.
      #
      # @return [ nil | String ] URI options string from TXT record
      #   associated with the hostname, or nil if there is no such record.
      #
      # @raise [ Mongo::Error::InvalidTXTRecord ] If more than one TXT record is found.
      def get_txt_options_string(hostname)
        records = @resolver.getresources(hostname, Resolv::DNS::Resource::IN::TXT)
        if records.empty?
          return nil
        end

        if records.length > 1
          msg = "Only one TXT record is allowed: querying hostname #{hostname} returned #{records.length} records"
          raise Error::InvalidTXTRecord, msg
        end

        records[0].strings.join
      end

      private

      # Checks whether an error should be raised due to either a record with
      # a mismatched domain being found or no records being found.
      #
      # @return [ Boolean ] Whether an error should be raised.
      def raise_on_invalid?
        # Use Hash#fetch with a default so that an explicitly provided false
        # value is respected; `@options[:raise_on_invalid] || true` could
        # never evaluate to false.
        if @raise_on_invalid.nil?
          @raise_on_invalid = @options.fetch(:raise_on_invalid, true)
        end
        @raise_on_invalid
      end
    end
  end
end
mongo-ruby-driver-2.21.3/lib/mongo/srv/result.rb000066400000000000000000000104061505113246500215330ustar00rootroot00000000000000
# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2017-2020 MongoDB Inc.
#
# Licensed under the Apache License, Version 2.0 (the 'License');
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an 'AS IS' BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

module Mongo
  module Srv

    # SRV record lookup result.
    #
    # Contains server addresses that the query resolved to, and minimum TTL
    # of the DNS records.
    #
    # @api private
    class Result
      include Address::Validator

      # @return [ String ] MISMATCHED_DOMAINNAME Error message format string indicating that an SRV
      #   record found does not match the domain of a hostname.
      MISMATCHED_DOMAINNAME = "Parent domain name in SRV record result (%s) does not match " +
        "that of the hostname (%s)".freeze

      # @return [ String ] query_hostname The hostname pointing to the DNS records.
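      #
      # @example Relationship to the looked-up records (illustrative values).
      #   result.query_hostname #=> "cluster0.example.com"
      #   result.address_strs   #=> ["db1.example.com:27017", "[::1]:27017"]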
attr_reader :query_hostname # @return [ Array ] address_strs The host strings of the SRV records # for the query hostname. attr_reader :address_strs # @return [ Integer | nil ] min_ttl The smallest TTL found among the # records (or nil if no records have been added). attr_accessor :min_ttl # Create a new object to keep track of the SRV records of the hostname. # # @param [ String ] hostname The hostname pointing to the DNS records. def initialize(hostname) @query_hostname = hostname @address_strs = [] @min_ttl = nil end # Checks whether there are any records. # # @return [ Boolean ] Whether or not there are any records. def empty? @address_strs.empty? end # Adds a new record. # # @param [ Resolv::DNS::Resource ] record An SRV record found for the hostname. def add_record(record) record_host = normalize_hostname(record.target.to_s) port = record.port validate_hostname!(record_host) validate_same_origin!(record_host) address_str = if record_host.index(':') # IPV6 address "[#{record_host}]:#{port}" else "#{record_host}:#{port}" end @address_strs << address_str if @min_ttl.nil? @min_ttl = record.ttl else @min_ttl = [@min_ttl, record.ttl].min end nil end private # Transforms the provided hostname to simplify its validation later on. # # This method is safe to call during both initial DNS seed list discovery # and during SRV monitoring, in that it does not convert invalid hostnames # into valid ones. # # - Converts the hostname to lower case. # - Removes one trailing dot, if there is exactly one. If the hostname # has multiple trailing dots, it is unchanged. # # @param [ String ] host Hostname to transform. def normalize_hostname(host) host = host.downcase unless host.end_with?('..') host = host.sub(/\.\z/, '') end host end # Ensures that a record's domain name matches that of the hostname. # # A hostname's domain name consists of each of the '.' delineated # parts after the first. For example, the hostname 'foo.bar.baz' # has the domain name 'bar.baz'. # # @param [ String ] record_host The host of the SRV record. # # @raise [ Mongo::Error::MismatchedDomain ] If the record's domain name doesn't match that of # the hostname. def validate_same_origin!(record_host) domain_name ||= query_hostname.split('.')[1..-1] host_parts = record_host.split('.') unless (host_parts.size > domain_name.size) && (domain_name == host_parts[-domain_name.length..-1]) raise Error::MismatchedDomain.new(MISMATCHED_DOMAINNAME % [record_host, domain_name]) end end end end end mongo-ruby-driver-2.21.3/lib/mongo/timeout.rb000066400000000000000000000036271505113246500211000ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo # @api private module Timeout # A wrapper around Ruby core's Timeout::timeout method that provides # a standardized API for Ruby versions older and newer than 2.4.0, # which is when the third argument was introduced. # # @param [ Numeric ] sec The number of seconds before timeout. 
    # @param [ Class ] klass The exception class to raise on timeout, optional.
    #   When no error exception is provided, Timeout::Error is raised.
    # @param [ String ] message The error message passed to the exception raised
    #   on timeout, optional. When no error message is provided, the default
    #   error message for the exception class is used.
    def timeout(sec, klass=nil, message=nil)
      if message && RUBY_VERSION < '2.4.0'
        begin
          ::Timeout.timeout(sec) do
            yield
          end
        rescue ::Timeout::Error
          raise klass, message
        end
      else
        # JRuby's Timeout::timeout method does not support passing nil arguments.
        # Remove the nil arguments before passing them along to the core
        # Timeout::timeout method.
        optional_args = [klass, message].compact
        ::Timeout.timeout(sec, *optional_args) do
          yield
        end
      end
    end
    module_function :timeout
  end
end
mongo-ruby-driver-2.21.3/lib/mongo/topology_version.rb000066400000000000000000000053511505113246500230270ustar00rootroot00000000000000
# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2020 MongoDB Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

module Mongo
  # TopologyVersion encapsulates the topologyVersion document obtained from
  # hello responses and "not master"-like OperationFailure errors.
  #
  # @api private
  class TopologyVersion < BSON::Document
    def initialize(doc)
      if Lint.enabled?
        unless doc['processId']
          raise ArgumentError, 'Creating a topology version without processId field'
        end
        unless doc['counter']
          raise ArgumentError, 'Creating a topology version without counter field'
        end
      end

      super
    end

    # @return [ BSON::ObjectId ] The process id.
    def process_id
      self['processId']
    end

    # @return [ Integer ] The counter.
    def counter
      self['counter']
    end

    # Returns whether this topology version is potentially newer than another
    # topology version.
    #
    # Note that there is no total ordering of topology versions - given
    # two topology versions, each may be "potentially newer" than the other one.
    #
    # @param [ TopologyVersion ] other The other topology version.
    #
    # @return [ true | false ] Whether this topology version is potentially newer.
    # @api private
    def gt?(other)
      if process_id != other.process_id
        true
      else
        counter > other.counter
      end
    end

    # Returns whether this topology version is potentially newer than or equal
    # to another topology version.
    #
    # Note that there is no total ordering of topology versions - given
    # two topology versions, each may be "potentially newer" than the other one.
    #
    # @param [ TopologyVersion ] other The other topology version.
    #
    # @return [ true | false ] Whether this topology version is potentially
    #   newer than or equal to the other.
    # @api private
    def gte?(other)
      if process_id != other.process_id
        true
      else
        counter >= other.counter
      end
    end

    # Converts the object to a document suitable for being sent to the server.
    #
    # @return [ BSON::Document ] The document.
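    #
    # @example Illustrative shape of the returned document (values are placeholders).
    #   topology_version.to_doc
    #   #=> {"processId" => BSON::ObjectId('...'), "counter" => BSON::Int64.new(1)}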
# # @api private def to_doc BSON::Document.new(self).merge(counter: BSON::Int64.new(counter)) end end end mongo-ruby-driver-2.21.3/lib/mongo/uri.rb000066400000000000000000000430761505113246500202130ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the 'License'); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an 'AS IS' BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo # The URI class provides a way for users to parse the MongoDB uri as # defined in the connection string format spec. # # https://www.mongodb.com/docs/manual/reference/connection-string/ # # @example Use the uri string to make a client connection. # uri = Mongo::URI.new('mongodb://localhost:27017') # client = Mongo::Client.new(uri.servers, uri.options) # client.login(uri.credentials) # client[uri.database] # # @since 2.0.0 class URI include Loggable include Address::Validator # The uri parser object options. # # @since 2.0.0 attr_reader :options # Mongo::Options::Redacted of the options specified in the uri. # # @since 2.1.0 attr_reader :uri_options # The servers specified in the uri. # # @since 2.0.0 attr_reader :servers # The mongodb connection string scheme. # # @deprecated Will be removed in 3.0. # # @since 2.0.0 SCHEME = 'mongodb://'.freeze # The mongodb connection string scheme root. # # @since 2.5.0 MONGODB_SCHEME = 'mongodb'.freeze # The mongodb srv protocol connection string scheme root. # # @since 2.5.0 MONGODB_SRV_SCHEME = 'mongodb+srv'.freeze # Error details for an invalid scheme. # # @since 2.1.0 # @deprecated INVALID_SCHEME = "Invalid scheme. Scheme must be '#{MONGODB_SCHEME}' or '#{MONGODB_SRV_SCHEME}'".freeze # MongoDB URI format specification. # # @since 2.0.0 FORMAT = 'mongodb://[username:password@]host1[:port1][,host2[:port2]' + ',...[,hostN[:portN]]][/[database][?options]]'.freeze # MongoDB URI (connection string) documentation url # # @since 2.0.0 HELP = 'https://www.mongodb.com/docs/manual/reference/connection-string/'.freeze # Unsafe characters that must be urlencoded. # # @since 2.1.0 UNSAFE = /[\:\/\@]/ # Percent sign that must be encoded in user creds. # # @since 2.5.1 PERCENT_CHAR = /\%/ # Unix socket suffix. # # @since 2.1.0 UNIX_SOCKET = /.sock/ # The character delimiting hosts. # # @since 2.1.0 HOST_DELIM = ','.freeze # The character separating a host and port. # # @since 2.1.0 HOST_PORT_DELIM = ':'.freeze # The character delimiting a database. # # @since 2.1.0 DATABASE_DELIM = '/'.freeze # The character delimiting options. # # @since 2.1.0 URI_OPTS_DELIM = '?'.freeze # The character delimiting multiple options. # # @since 2.1.0 # @deprecated INDIV_URI_OPTS_DELIM = '&'.freeze # The character delimiting an option and its value. # # @since 2.1.0 URI_OPTS_VALUE_DELIM = '='.freeze # The character separating a username from the password. # # @since 2.1.0 AUTH_USER_PWD_DELIM = ':'.freeze # The character delimiting auth credentials. # # @since 2.1.0 AUTH_DELIM = '@'.freeze # Scheme delimiter. # # @since 2.5.0 SCHEME_DELIM = '://'.freeze # Error details for an invalid options format. 
# # @since 2.1.0 INVALID_OPTS_VALUE_DELIM = "Options and their values must be delimited" + " by '#{URI_OPTS_VALUE_DELIM}'".freeze # Error details for an non-urlencoded user name or password. # # @since 2.1.0 UNESCAPED_USER_PWD = "User name and password must be urlencoded.".freeze # Error details for a non-urlencoded unix socket path. # # @since 2.1.0 UNESCAPED_UNIX_SOCKET = "UNIX domain sockets must be urlencoded.".freeze # Error details for a non-urlencoded auth database name. # # @since 2.1.0 UNESCAPED_DATABASE = "Auth database must be urlencoded.".freeze # Error details for providing options without a database delimiter. # # @since 2.1.0 INVALID_OPTS_DELIM = "Database delimiter '#{DATABASE_DELIM}' must be present if options are specified.".freeze # Error details for a missing host. # # @since 2.1.0 INVALID_HOST = "Missing host; at least one must be provided.".freeze # Error details for an invalid port. # # @since 2.1.0 INVALID_PORT = "Invalid port. Port must be an integer greater than 0 and less than 65536".freeze # Map of URI read preference modes to Ruby driver read preference modes # # @since 2.0.0 READ_MODE_MAP = { 'primary' => :primary, 'primarypreferred' => :primary_preferred, 'secondary' => :secondary, 'secondarypreferred' => :secondary_preferred, 'nearest' => :nearest }.freeze # Map of URI authentication mechanisms to Ruby driver mechanisms # # @since 2.0.0 AUTH_MECH_MAP = { 'GSSAPI' => :gssapi, 'MONGODB-AWS' => :aws, # MONGODB-CR is deprecated and will be removed in driver version 3.0 'MONGODB-CR' => :mongodb_cr, 'MONGODB-X509' => :mongodb_x509, 'PLAIN' => :plain, 'SCRAM-SHA-1' => :scram, 'SCRAM-SHA-256' => :scram256, }.freeze # Options that are allowed to appear more than once in the uri. # # In order to follow the URI options spec requirement that all instances # of 'tls' and 'ssl' have the same value, we need to keep track of all # of the values passed in for those options. Assuming they don't conflict, # they will be condensed to a single value immediately after parsing the URI. # # @since 2.1.0 REPEATABLE_OPTIONS = [ :tag_sets, :ssl ] # Get either a URI object or a SRVProtocol URI object. # # @example Get the uri object. # URI.get(string) # # @param [ String ] string The URI to parse. # @param [ Hash ] opts The options. # # @option options [ Logger ] :logger A custom logger to use. # # @return [URI, URI::SRVProtocol] The uri object. # # @since 2.5.0 def self.get(string, opts = {}) unless string raise Error::InvalidURI.new(string, 'URI must be a string, not nil.') end if string.empty? raise Error::InvalidURI.new(string, 'Cannot parse an empty URI.') end scheme, _, _ = string.partition(SCHEME_DELIM) case scheme when MONGODB_SCHEME URI.new(string, opts) when MONGODB_SRV_SCHEME SRVProtocol.new(string, opts) else raise Error::InvalidURI.new(string, "Invalid scheme '#{scheme}'. Scheme must be '#{MONGODB_SCHEME}' or '#{MONGODB_SRV_SCHEME}'") end end # Gets the options hash that needs to be passed to a Mongo::Client on # instantiation, so we don't have to merge the credentials and database in # at that point - we only have a single point here. # # @example Get the client options. # uri.client_options # # @return [ Mongo::Options::Redacted ] The options passed to the Mongo::Client # # @since 2.0.0 def client_options opts = uri_options.tap do |opts| opts[:database] = @database if @database end @user ? opts.merge(credentials) : opts end def srv_records nil end # Create the new uri from the provided string. # # @example Create the new URI. 
# URI.new('mongodb://localhost:27017') # # @param [ String ] string The URI to parse. # @param [ Hash ] options The options. # # @option options [ Logger ] :logger A custom logger to use. # # @raise [ Error::InvalidURI ] If the uri does not match the spec. # # @since 2.0.0 def initialize(string, options = {}) unless string raise Error::InvalidURI.new(string, 'URI must be a string, not nil.') end if string.empty? raise Error::InvalidURI.new(string, 'Cannot parse an empty URI.') end @string = string @options = options parsed_scheme, _, remaining = string.partition(SCHEME_DELIM) unless parsed_scheme == scheme raise_invalid_error!("Invalid scheme '#{parsed_scheme}'. Scheme must be '#{MONGODB_SCHEME}'. Use URI#get to parse SRV URIs.") end if remaining.empty? raise_invalid_error!('No hosts in the URI') end parse!(remaining) validate_uri_options! end # Get the credentials provided in the URI. # # @example Get the credentials. # uri.credentials # # @return [ Hash ] The credentials. # * :user [ String ] The user. # * :password [ String ] The provided password. # # @since 2.0.0 def credentials { :user => @user, :password => @password } end # Get the database provided in the URI. # # @example Get the database. # uri.database # # @return [String] The database. # # @since 2.0.0 def database @database ? @database : Database::ADMIN end # Get the uri as a string. # # @example Get the uri as a string. # uri.to_s # # @return [ String ] The uri string. def to_s reconstruct_uri end private # Reconstruct the URI from its parts. Invalid options are dropped and options # are converted to camelCase. # # @return [ String ] the uri. def reconstruct_uri servers = @servers.join(',') options = options_mapper.ruby_to_string(@uri_options).map do |k, vs| unless vs.nil? if vs.is_a?(Array) vs.map { |v| "#{k}=#{v}" }.join('&') else "#{k}=#{vs}" end end end.compact.join('&') uri = "#{scheme}#{SCHEME_DELIM}" uri += @user.to_s if @user uri += "#{AUTH_USER_PWD_DELIM}#{@password}" if @password uri += "@" if @user || @password uri += @query_hostname || servers uri += "/" if @database || !options.empty? uri += @database.to_s if @database uri += "?#{options}" unless options.empty? uri end def scheme MONGODB_SCHEME end def parse!(remaining) hosts_and_db, options = remaining.split('?', 2) if options && options.index('?') raise_invalid_error!("Options contain an unescaped question mark (?), or the database name contains a question mark and was not escaped") end hosts, db = hosts_and_db.split('/', 2) if db && db.index('/') raise_invalid_error!("Database name contains an unescaped slash (/): #{db}") end if hosts.index('@') creds, hosts = hosts.split('@', 2) if hosts.empty? raise_invalid_error!("Empty hosts list") end if hosts.index('@') raise_invalid_error!("Unescaped @ in auth info") end end unless hosts.length > 0 raise_invalid_error!("Missing host; at least one must be provided") end @servers = hosts.split(',').map do |host| if host.empty? raise_invalid_error!('Empty host given in the host list') end decode(host).tap do |host| validate_address_str!(host) end end @user = parse_user!(creds) @password = parse_password!(creds) @uri_options = Options::Redacted.new(parse_uri_options!(options)) if db @database = parse_database!(db) end rescue Error::InvalidAddress => e raise_invalid_error!(e.message) end def options_mapper @options_mapper ||= OptionsMapper.new( logger: @options[:logger], ) end def parse_uri_options!(string) uri_options = {} unless string return uri_options end string.split('&').each do |option_str| if option_str.empty? 
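# Splitting on '&' can yield empty fragments when the URI contains a
# stray '&' (e.g. '?w=1&&journal=true'); skip them rather than treating
# them as malformed options.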
next end key, value = option_str.split('=', 2) if value.nil? raise_invalid_error!("Option #{key} has no value") end key = decode(key) value = decode(value) options_mapper.add_uri_option(key, value, uri_options) end uri_options end def parse_user!(string) if (string && user = string.partition(AUTH_USER_PWD_DELIM)[0]) raise_invalid_error!(UNESCAPED_USER_PWD) if user =~ UNSAFE user_decoded = decode(user) if user_decoded =~ PERCENT_CHAR && encode(user_decoded) != user raise_invalid_error!(UNESCAPED_USER_PWD) end user_decoded end end def parse_password!(string) if (string && pwd = string.partition(AUTH_USER_PWD_DELIM)[2]) if pwd.length > 0 raise_invalid_error!(UNESCAPED_USER_PWD) if pwd =~ UNSAFE pwd_decoded = decode(pwd) if pwd_decoded =~ PERCENT_CHAR && encode(pwd_decoded) != pwd raise_invalid_error!(UNESCAPED_USER_PWD) end pwd_decoded end end end def parse_database!(string) raise_invalid_error!(UNESCAPED_DATABASE) if string =~ UNSAFE decode(string) if string.length > 0 end def raise_invalid_error!(details) raise Error::InvalidURI.new(@string, details, FORMAT) end def raise_invalid_error_no_fmt!(details) raise Error::InvalidURI.new(@string, details) end def decode(value) ::URI::DEFAULT_PARSER.unescape(value) end def encode(value) CGI.escape(value).gsub('+', '%20') end def validate_uri_options! # The URI options spec requires that we raise an error if there are conflicting values of # 'tls' and 'ssl'. In order to fulfill this, we parse the values of each instance into an # array; assuming all values in the array are the same, we replace the array with that value. unless uri_options[:ssl].nil? || uri_options[:ssl].empty? unless uri_options[:ssl].uniq.length == 1 raise_invalid_error_no_fmt!("all instances of 'tls' and 'ssl' must have the same value") end uri_options[:ssl] = uri_options[:ssl].first end # Check for conflicting TLS insecure options. unless uri_options[:ssl_verify].nil? unless uri_options[:ssl_verify_certificate].nil? raise_invalid_error_no_fmt!("'tlsInsecure' and 'tlsAllowInvalidCertificates' cannot both be specified") end unless uri_options[:ssl_verify_hostname].nil? raise_invalid_error_no_fmt!("'tlsInsecure' and 'tlsAllowInvalidHostnames' cannot both be specified") end unless uri_options[:ssl_verify_ocsp_endpoint].nil? raise_invalid_error_no_fmt!("'tlsInsecure' and 'tlsDisableOCSPEndpointCheck' cannot both be specified") end end unless uri_options[:ssl_verify_certificate].nil? unless uri_options[:ssl_verify_ocsp_endpoint].nil? raise_invalid_error_no_fmt!("'tlsAllowInvalidCertificates' and 'tlsDisableOCSPEndpointCheck' cannot both be specified") end end # Since we know that the only URI option that sets :ssl_cert is # "tlsCertificateKeyFile", any value set for :ssl_cert must also be set # for :ssl_key. if uri_options[:ssl_cert] uri_options[:ssl_key] = uri_options[:ssl_cert] end if uri_options[:write_concern] && !uri_options[:write_concern].empty?
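# Delegate validation of the assembled write concern document to the
# WriteConcern factory: for example, a URI specifying w=0 together with
# journal=true yields Error::InvalidWriteConcern below, which is then
# re-raised as Error::InvalidURI.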
begin WriteConcern.get(uri_options[:write_concern]) rescue Error::InvalidWriteConcern => e raise_invalid_error_no_fmt!("#{e.class}: #{e}") end end if uri_options[:direct_connection] if uri_options[:connect] && uri_options[:connect].to_s != 'direct' raise_invalid_error_no_fmt!("directConnection=true cannot be used with connect=#{uri_options[:connect]}") end if servers.length > 1 raise_invalid_error_no_fmt!("directConnection=true cannot be used with multiple seeds") end elsif uri_options[:direct_connection] == false && uri_options[:connect].to_s == 'direct' raise_invalid_error_no_fmt!("directConnection=false cannot be used with connect=direct") end if uri_options[:load_balanced] if servers.length > 1 raise_invalid_error_no_fmt!("loadBalanced=true cannot be used with multiple seeds") end if uri_options[:direct_connection] raise_invalid_error_no_fmt!("directConnection=true cannot be used with loadBalanced=true") end if uri_options[:connect] && uri_options[:connect].to_sym == :direct raise_invalid_error_no_fmt!("connect=direct cannot be used with loadBalanced=true") end if uri_options[:replica_set] raise_invalid_error_no_fmt!("loadBalanced=true cannot be used with replicaSet option") end end unless self.is_a?(URI::SRVProtocol) if uri_options[:srv_max_hosts] raise_invalid_error_no_fmt!("srvMaxHosts cannot be used on non-SRV URI") end if uri_options[:srv_service_name] raise_invalid_error_no_fmt!("srvServiceName cannot be used on non-SRV URI") end end if uri_options[:srv_max_hosts] && uri_options[:srv_max_hosts] > 0 if uri_options[:replica_set] raise_invalid_error_no_fmt!("srvMaxHosts > 0 cannot be used with replicaSet option") end if options[:load_balanced] raise_invalid_error_no_fmt!("srvMaxHosts > 0 cannot be used with loadBalanced=true") end end end end end require 'mongo/uri/options_mapper' require 'mongo/uri/srv_protocol' mongo-ruby-driver-2.21.3/lib/mongo/uri/000077500000000000000000000000001505113246500176545ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/lib/mongo/uri/options_mapper.rb000066400000000000000000000670211505113246500232460ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the 'License'); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an 'AS IS' BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class URI # Performs mapping between URI options and Ruby options. # # This class contains: # # - The mapping defining how URI options are converted to Ruby options. # - The mapping from downcased URI option names to canonical-cased URI # option names. # - Methods to perform conversion of URI option values to Ruby option # values (the convert_* methods). These generally warn and return nil # when input given is invalid. # - Methods to perform conversion of Ruby option values to standardized # MongoClient options (revert_* methods). These assume the input is valid # and generally do not perform validation. # # URI option names are case insensitive. Ruby options are specified as # symbols (though in Client options use indifferent access). 
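#
# @example Mapping a single URI option into Ruby options (an
#   illustrative sketch; the output follows the mapping defined below).
#   mapper = Mongo::URI::OptionsMapper.new
#   ruby_opts = {}
#   mapper.add_uri_option('appName', 'myApp', ruby_opts)
#   ruby_opts # => { app_name: "myApp" }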
# # @api private class OptionsMapper include Loggable # Instantiates the options mapper. # # @option opts [ Logger ] :logger A custom logger to use. def initialize(**opts) @options = opts end # @return [ Hash ] The options. attr_reader :options # Adds an option to the uri options hash. # # Acquires a target for the option based on group. # Transforms the value. # Merges the option into the target. # # @param [ String ] key URI option name. # @param [ String ] value The value of the option. # @param [ Hash ] uri_options The base option target. def add_uri_option(key, value, uri_options) strategy = URI_OPTION_MAP[key.downcase] if strategy.nil? log_warn("Unsupported URI option '#{key}' on URI '#{@string}'. It will be ignored.") return end group = strategy[:group] target = if group uri_options[group] || {} else uri_options end value = apply_transform(key, value, strategy[:type]) # Sometimes the value here would be nil, for example if we are processing # read preference tags or auth mechanism properties and all of the # data within is invalid. Ignore such options. unless value.nil? merge_uri_option(target, value, strategy[:name]) end if group && !target.empty? && !uri_options.key?(group) uri_options[group] = target end end def smc_to_ruby(opts) uri_options = {} opts.each do |key, value| strategy = URI_OPTION_MAP[key.downcase] if strategy.nil? log_warn("Unsupported URI option '#{key}' on URI '#{@string}'. It will be ignored.") return end group = strategy[:group] target = if group uri_options[group] || {} else uri_options end value = apply_transform(key, value, strategy[:type]) # Sometimes the value here would be nil, for example if we are processing # read preference tags or auth mechanism properties and all of the # data within is invalid. Ignore such options. unless value.nil? merge_uri_option(target, value, strategy[:name]) end if group && !target.empty? && !uri_options.key?(group) uri_options[group] = target end end uri_options end # Converts Ruby options provided to "standardized MongoClient options". # # @param [ Hash ] opts Ruby options to convert. # # @return [ Hash ] Standardized MongoClient options. def ruby_to_smc(opts) rv = {} URI_OPTION_MAP.each do |uri_key, spec| if spec[:group] v = opts[spec[:group]] v = v && v[spec[:name]] else v = opts[spec[:name]] end unless v.nil? if type = spec[:type] v = send("revert_#{type}", v) end canonical_key = URI_OPTION_CANONICAL_NAMES[uri_key] unless canonical_key raise ArgumentError, "Option #{uri_key} is not known" end rv[canonical_key] = v end end # For options that default to true, remove the value if it is true. %w(retryReads retryWrites).each do |k| if rv[k] rv.delete(k) end end # Remove auth source when it is $external for mechanisms that default # (or require) that auth source. if %w(MONGODB-AWS).include?(rv['authMechanism']) && rv['authSource'] == '$external' rv.delete('authSource') end # ssl and tls are aliases, remove ssl ones rv.delete('ssl') # TODO remove authSource if it is the same as the database, # requires this method to know the database specified in the client. rv end # Converts Ruby options provided to their representation in a URI string. # # @param [ Hash ] opts Ruby options to convert. # # @return [ Hash ] URI string hash. def ruby_to_string(opts) rv = {} URI_OPTION_MAP.each do |uri_key, spec| if spec[:group] v = opts[spec[:group]] v = v && v[spec[:name]] else v = opts[spec[:name]] end unless v.nil?
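# Typed values go through their stringify_* helper so that they
# round-trip back into connection-string form (e.g. Float seconds
# become integer milliseconds again).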
if type = spec[:type] v = send("stringify_#{type}", v) end canonical_key = URI_OPTION_CANONICAL_NAMES[uri_key] unless canonical_key raise ArgumentError, "Option #{uri_key} is not known" end rv[canonical_key] = v end end # For options that default to true, remove the value if it is true. %w(retryReads retryWrites).each do |k| if rv[k] rv.delete(k) end end # Remove auth source when it is $external for mechanisms that default # (or require) that auth source. if %w(MONGODB-AWS).include?(rv['authMechanism']) && rv['authSource'] == '$external' rv.delete('authSource') end # ssl and tls are aliases, remove ssl ones rv.delete('ssl') # TODO remove authSource if it is the same as the database, # requires this method to know the database specified in the client. rv end private # Applies URI value transformation by either using the default cast # or a transformation appropriate for the given type. # # @param [ String ] key URI option name. # @param [ String ] value The value to be transformed. # @param [ Symbol ] type The transform method. def apply_transform(key, value, type) if type send("convert_#{type}", key, value) else value end end # Merges a new option into the target. # # If the option exists at the target destination the merge will # be an addition. # # Specifically required to append an additional tag set # to the array of tag sets without overwriting the original. # # @param [ Hash ] target The destination. # @param [ Object ] value The value to be merged. # @param [ Symbol ] name The name of the option. def merge_uri_option(target, value, name) if target.key?(name) if REPEATABLE_OPTIONS.include?(name) target[name] += value else log_warn("Repeated option key: #{name}.") end else target.merge!(name => value) end end # Hash for storing map of URI option parameters to conversion strategies URI_OPTION_MAP = {} # @return [ Hash ] Map from lowercased to canonical URI # option names. URI_OPTION_CANONICAL_NAMES = {} # Simple internal dsl to register a MongoDB URI option in the URI_OPTION_MAP. # # @param [ String ] uri_key The MongoDB URI option to register. # @param [ Symbol ] name The name of the option in the driver. # @param [ Hash ] extra Extra options. # * :group [ Symbol ] Nested hash where option will go. # * :type [ Symbol ] Name of function to transform value. 
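#
# @example Registering an option (mirrors the registrations below).
#   uri_option 'wTimeoutMS', :wtimeout, group: :write_concern, type: :integer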
def self.uri_option(uri_key, name, **extra) URI_OPTION_MAP[uri_key.downcase] = { name: name }.update(extra) URI_OPTION_CANONICAL_NAMES[uri_key.downcase] = uri_key end # Replica Set Options uri_option 'replicaSet', :replica_set # Timeout Options uri_option 'connectTimeoutMS', :connect_timeout, type: :ms uri_option 'socketTimeoutMS', :socket_timeout, type: :ms uri_option 'serverSelectionTimeoutMS', :server_selection_timeout, type: :ms uri_option 'localThresholdMS', :local_threshold, type: :ms uri_option 'heartbeatFrequencyMS', :heartbeat_frequency, type: :ms uri_option 'maxIdleTimeMS', :max_idle_time, type: :ms uri_option 'timeoutMS', :timeout_ms, type: :integer # Write Options uri_option 'w', :w, group: :write_concern, type: :w uri_option 'journal', :j, group: :write_concern, type: :bool uri_option 'fsync', :fsync, group: :write_concern, type: :bool uri_option 'wTimeoutMS', :wtimeout, group: :write_concern, type: :integer # Read Options uri_option 'readPreference', :mode, group: :read, type: :read_mode uri_option 'readPreferenceTags', :tag_sets, group: :read, type: :read_tags uri_option 'maxStalenessSeconds', :max_staleness, group: :read, type: :max_staleness # Pool options uri_option 'maxConnecting', :max_connecting, type: :integer uri_option 'minPoolSize', :min_pool_size, type: :integer uri_option 'maxPoolSize', :max_pool_size, type: :integer uri_option 'waitQueueTimeoutMS', :wait_queue_timeout, type: :ms # Security Options uri_option 'ssl', :ssl, type: :repeated_bool uri_option 'tls', :ssl, type: :repeated_bool uri_option 'tlsAllowInvalidCertificates', :ssl_verify_certificate, type: :inverse_bool uri_option 'tlsAllowInvalidHostnames', :ssl_verify_hostname, type: :inverse_bool uri_option 'tlsCAFile', :ssl_ca_cert uri_option 'tlsCertificateKeyFile', :ssl_cert uri_option 'tlsCertificateKeyFilePassword', :ssl_key_pass_phrase uri_option 'tlsInsecure', :ssl_verify, type: :inverse_bool uri_option 'tlsDisableOCSPEndpointCheck', :ssl_verify_ocsp_endpoint, type: :inverse_bool # Topology options uri_option 'directConnection', :direct_connection, type: :bool uri_option 'connect', :connect, type: :symbol uri_option 'loadBalanced', :load_balanced, type: :bool uri_option 'srvMaxHosts', :srv_max_hosts, type: :integer uri_option 'srvServiceName', :srv_service_name # Auth Options uri_option 'authSource', :auth_source uri_option 'authMechanism', :auth_mech, type: :auth_mech uri_option 'authMechanismProperties', :auth_mech_properties, type: :auth_mech_props # Client Options uri_option 'appName', :app_name uri_option 'compressors', :compressors, type: :array uri_option 'readConcernLevel', :level, group: :read_concern, type: :symbol uri_option 'retryReads', :retry_reads, type: :bool uri_option 'retryWrites', :retry_writes, type: :bool uri_option 'zlibCompressionLevel', :zlib_compression_level, type: :zlib_compression_level # Converts +value+ to a boolean. # # Returns true for 'true', false for 'false', otherwise nil. # # @param [ String ] name Name of the URI option being processed. # @param [ String | true | false ] value URI option value. # # @return [ true | false | nil ] Converted value. def convert_bool(name, value) case value when true, "true", 'TRUE' true when false, "false", 'FALSE' false else log_warn("invalid boolean option for #{name}: #{value}") nil end end # Reverts a boolean type. # # @param [ true | false | nil ] value The boolean to revert. # # @return [ true | false | nil ] The passed value. def revert_bool(value) value end # Stringifies a boolean type. 
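# For example (illustrative): stringify_bool(true) # => "true"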
# # @param [ true | false | nil ] value The boolean. # # @return [ String | nil ] The string. def stringify_bool(value) revert_bool(value)&.to_s end # Converts the value into a boolean and returns it wrapped in an array. # # @param [ String ] name Name of the URI option being processed. # @param [ String ] value URI option value. # # @return [ Array | nil ] The boolean value parsed and wrapped # in an array. def convert_repeated_bool(name, value) [convert_bool(name, value)] end # Reverts a repeated boolean type. # # @param [ Array | true | false | nil ] value The repeated boolean to revert. # # @return [ Array | true | false | nil ] The passed value. def revert_repeated_bool(value) value end # Stringifies a repeated boolean type. # # @param [ Array | nil ] value The repeated boolean. # # @return [ Array | nil ] The string. def stringify_repeated_bool(value) rep = revert_repeated_bool(value) if rep&.is_a?(Array) rep.join(",") else rep end end # Parses a boolean value and returns its inverse. # # @param [ String ] name Name of the URI option being processed. # @param [ String | true | false ] value The URI option value. # # @return [ true | false | nil ] The inverse of the boolean value parsed out, otherwise nil # (and a warning will be logged). def convert_inverse_bool(name, value) b = convert_bool(name, value) if b.nil? nil else !b end end # Reverts and inverts a boolean type. # # @param [ true | false | nil ] value The boolean to revert and invert. # # @return [ true | false | nil ] The inverted boolean. def revert_inverse_bool(value) value.nil? ? nil : !value end # Inverts and stringifies a boolean. # # @param [ true | false | nil ] value The boolean. # # @return [ String | nil ] The string. def stringify_inverse_bool(value) revert_inverse_bool(value)&.to_s end # Converts +value+ into an integer. Only converts non-negative integers. # # If the value is not a valid integer, warns and returns nil. # # @param [ String ] name Name of the URI option being processed. # @param [ String | Integer ] value URI option value. # # @return [ nil | Integer ] Converted value. def convert_integer(name, value) if value.is_a?(String) && /\A\d+\z/ !~ value log_warn("#{value} is not a valid integer for #{name}") return nil end value.to_i end # Reverts an integer. # # @param [ Integer | nil ] value The integer. # # @return [ Integer | nil ] The passed value. def revert_integer(value) value end # Stringifies an integer. # # @param [ Integer | nil ] value The integer. # # @return [ String | nil ] The string. def stringify_integer(value) revert_integer(value)&.to_s end # Ruby's convention is to provide timeouts in seconds, not milliseconds, and # to use fractions where more precision is necessary. The connection string # options are always in ms, so we provide an easy conversion type. # # @param [ String ] name Name of the URI option being processed. # @param [ String | Integer | Float ] value The millisecond value. # # @return [ Float ] The seconds value. # # @since 2.0.0 def convert_ms(name, value) case value when String if /\A-?\d+(\.\d+)?\z/ !~ value log_warn("Invalid ms value for #{name}: #{value}") return nil end if value.to_s[0] == '-' log_warn("#{name} cannot be a negative number") return nil end when Integer, Float if value < 0 log_warn("#{name} cannot be a negative number") return nil end else raise ArgumentError, "Can only convert Strings, Integers, or Floats to ms. Given: #{value.class}" end value.to_f / 1000 end # Reverts an ms. # # @param [ Float ] value The float.
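# For example (illustrative): revert_ms(0.25) # => 250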
# # @return [ Integer ] The number multiplied by 1000 as an integer. def revert_ms(value) (value * 1000).round end # Stringifies an ms. # # @param [ Float ] value The float. # # @return [ String ] The string. def stringify_ms(value) revert_ms(value).to_s end # Converts +value+ into a symbol. # # @param [ String ] name Name of the URI option being processed. # @param [ String | Symbol ] value URI option value. # # @return [ Symbol ] Converted value. def convert_symbol(name, value) value.to_sym end # Reverts a symbol. # # @param [ Symbol ] value The symbol. # # @return [ String ] The passed value as a string. def revert_symbol(value) value.to_s end alias :stringify_symbol :revert_symbol # Extract values from the string and put them into an array. # # @param [ String ] name Name of the URI option being processed. # @param [ String ] value The string to build an array from. # # @return [ Array ] The array built from the string. def convert_array(name, value) value.split(',') end # Reverts an array. # # @param [ Array ] value An array of strings. # # @return [ Array ] The passed value. def revert_array(value) value end # Stringifies an array. # # @param [ Array ] value An array of strings. # # @return [ String ] The array joined by commas. def stringify_array(value) value.join(',') end # Authentication mechanism transformation. # # @param [ String ] name Name of the URI option being processed. # @param [ String ] value The authentication mechanism. # # @return [ Symbol ] The transformed authentication mechanism. def convert_auth_mech(name, value) auth_mech = AUTH_MECH_MAP[value.upcase] (auth_mech || value).tap do |mech| log_warn("#{value} is not a valid auth mechanism") unless auth_mech end end # Reverts auth mechanism. # # @param [ Symbol ] value The auth mechanism. # # @return [ String ] The auth mechanism as a string. # # @raise [ ArgumentError ] if its an invalid auth mechanism. def revert_auth_mech(value) found = AUTH_MECH_MAP.detect do |k, v| v == value end if found found.first else raise ArgumentError, "Unknown auth mechanism #{value}" end end # Stringifies auth mechanism. # # @param [ Symbol ] value The auth mechanism. # # @return [ String | nil ] The auth mechanism as a string. def stringify_auth_mech(value) revert_auth_mech(value) rescue nil end # Auth mechanism properties extractor. # # @param [ String ] name Name of the URI option being processed. # @param [ String ] value The auth mechanism properties string. # # @return [ Hash | nil ] The auth mechanism properties hash. def convert_auth_mech_props(name, value) properties = hash_extractor('authMechanismProperties', value) if properties properties.each do |k, v| if k.to_s.downcase == 'canonicalize_host_name' && v properties[k] = (v.downcase == 'true') end end end properties end # Reverts auth mechanism properties. # # @param [ Hash | nil ] value The auth mech properties. # # @return [ Hash | nil ] The passed value. def revert_auth_mech_props(value) value end # Stringifies auth mechanism properties. # # @param [ Hash | nil ] value The auth mech properties. # # @return [ String | nil ] The string. def stringify_auth_mech_props(value) return if value.nil? value.map { |k, v| "#{k}:#{v}" }.join(',') end # Parses the max staleness value, which must be either "0" or an integer # greater or equal to 90. # # @param [ String ] name Name of the URI option being processed. # @param [ String | Integer ] value The max staleness string. 
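# @example Illustrative values, per the max staleness rules below.
#   convert_max_staleness('maxStalenessSeconds', '120') # => 120
#   convert_max_staleness('maxStalenessSeconds', '45')  # => nil (warns: must be 0 or >= 90)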
# # @return [ Integer | nil ] The max staleness integer parsed out if it is valid, otherwise nil # (and a warning will be logged). def convert_max_staleness(name, value) int = if value.is_a?(String) && /\A-?\d+\z/ =~ value value.to_i elsif value.is_a?(Integer) value end if int.nil? log_warn("Invalid max staleness value: #{value}") return nil end if int == -1 int = nil end if int && (int > 0 && int < 90 || int < 0) log_warn("max staleness should be either 0 or greater than 90: #{value}") int = nil end int end # Reverts max staleness. # # @param [ Integer | nil ] value The max staleness. # # @return [ Integer | nil ] The passed value. def revert_max_staleness(value) value end # Stringifies max staleness. # # @param [ Integer | nil ] value The max staleness. # # @return [ String | nil ] The string. def stringify_max_staleness(value) revert_max_staleness(value)&.to_s end # Read preference mode transformation. # # @param [ String ] name Name of the URI option being processed. # @param [ String ] value The read mode string value. # # @return [ Symbol | String ] The read mode. def convert_read_mode(name, value) READ_MODE_MAP[value.downcase] || value end # Reverts read mode. # # @param [ Symbol | String ] value The read mode. # # @return [ String ] The read mode as a string. def revert_read_mode(value) value.to_s.gsub(/_(\w)/) { $1.upcase } end alias :stringify_read_mode :revert_read_mode # Read preference tags transformation. # # @param [ String ] name Name of the URI option being processed. # @param [ String ] value The string representing tag set. # # @return [ Array | nil ] Array with tag set. def convert_read_tags(name, value) converted = convert_read_set(name, value) if converted [converted] else nil end end # Reverts read tags. # # @param [ Array | nil ] value The read tags. # # @return [ Array | nil ] The passed value. def revert_read_tags(value) value end # Stringifies read tags. # # @param [ Array | nil ] value The read tags. # # @return [ String | nil ] The joined string of read tags. def stringify_read_tags(value) value&.map { |ar| ar.map { |k, v| "#{k}:#{v}" }.join(',') } end # Read preference tag set extractor. # # @param [ String ] name Name of the URI option being processed. # @param [ String ] value The tag set string. # # @return [ Hash ] The tag set hash. def convert_read_set(name, value) hash_extractor('readPreferenceTags', value) end # Converts +value+ as a write concern. # # If +value+ is the word "majority", returns the symbol :majority. # If +value+ is a number, returns the number as an integer. # Otherwise returns the string +value+ unchanged. # # @param [ String ] name Name of the URI option being processed. # @param [ String | Integer ] value URI option value. # # @return [ Integer | Symbol | String ] Converted value. def convert_w(name, value) case value when 'majority' :majority when /\A[0-9]+\z/ value.to_i else value end end # Reverts write concern. # # @param [ Integer | Symbol | String ] value The write concern. # # @return [ Integer | String ] The write concern as a string. def revert_w(value) case value when Symbol value.to_s else value end end # Stringifies write concern. # # @param [ Integer | Symbol | String ] value The write concern. # # @return [ String ] The write concern as a string. def stringify_w(value) revert_w(value)&.to_s end # Parses the zlib compression level. # # @param [ String ] name Name of the URI option being processed. # @param [ String | Integer ] value The zlib compression level string. 
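# @example Illustrative values; only levels -1 through 9 are accepted.
#   convert_zlib_compression_level('zlibCompressionLevel', '6')  # => 6
#   convert_zlib_compression_level('zlibCompressionLevel', '10') # => nil (warns)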
# # @return [ Integer | nil ] The compression level value if it is between -1 and 9 (inclusive), # otherwise nil (and a warning will be logged). def convert_zlib_compression_level(name, value) i = if value.is_a?(String) && /\A-?\d+\z/ =~ value value.to_i elsif value.is_a?(Integer) value end if i && (i >= -1 && i <= 9) i else log_warn("#{value} is not a valid zlibCompressionLevel") nil end end # Reverts zlib compression level. # # @param [ Integer | nil ] value The zlib compression level. # # @return [ Integer | nil ] The passed value. def revert_zlib_compression_level(value) value end # Stringifies zlib compression level. # # @param [ Integer | nil ] value The zlib compression level. # # @return [ String | nil ] The string. def stringify_zlib_compression_level(value) revert_zlib_compression_level(value)&.to_s end # Extract values from the string and put them into a nested hash. # # @param [ String ] name Name of the URI option being processed. # @param [ String ] value The string to build a hash from. # # @return [ Hash ] The hash built from the string. def hash_extractor(name, value) h = {} value.split(',').each do |tag| k, v = tag.split(':') if v.nil? log_warn("Invalid hash value for #{name}: key `#{k}` does not have a value: #{value}") next end h[k.to_sym] = v end if h.empty? nil else h end end end end end mongo-ruby-driver-2.21.3/lib/mongo/uri/srv_protocol.rb000066400000000000000000000214571505113246500227430ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2017-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the 'License'); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an 'AS IS' BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo class URI # Parser for a URI using the mongodb+srv protocol, which specifies a DNS to query for SRV records. # The driver will query the DNS server for SRV records on <hostname>.<domainname>, # prefixed with _mongodb._tcp # The SRV records can then be used as the seedlist for a Mongo::Client. # The driver also queries for a TXT record providing default connection string options. # Only one TXT record is allowed, and only a subset of Mongo::Client options is allowed. # # Please refer to the Initial DNS Seedlist Discovery spec for details. # # https://github.com/mongodb/specifications/blob/master/source/initial-dns-seedlist-discovery/initial-dns-seedlist-discovery.md # # @example Use the uri string to make a client connection. # client = Mongo::Client.new('mongodb+srv://test6.test.build.10gen.cc/') # # @since 2.5.0 class SRVProtocol < URI attr_reader :srv_records # Gets the options hash that needs to be passed to a Mongo::Client on instantiation, so we # don't have to merge the txt record options, credentials, and database in at that point - # we only have a single point here. # # @example Get the client options. # uri.client_options # # @return [ Hash ] The options passed to the Mongo::Client # # @since 2.5.0 def client_options opts = @txt_options.merge(ssl: true) opts = opts.merge(uri_options).merge(:database => database) @user ? opts.merge(credentials) : opts end # @return [ Srv::Result ] SRV lookup result.
# # @api private attr_reader :srv_result # The hostname that is specified in the URI and used to look up # SRV records. # # This attribute needs to be defined because SRVProtocol changes # #servers to be the result of the lookup rather than the hostname # specified in the URI. # # @return [ String ] The hostname used in SRV lookup. # # @api private attr_reader :query_hostname private # @return [ String ] DOT_PARTITION The '.' character used to delineate the parts of a # hostname. # # @deprecated DOT_PARTITION = '.'.freeze # @return [ Array ] VALID_TXT_OPTIONS The valid options for a TXT record to specify. VALID_TXT_OPTIONS = %w(replicaset authsource loadbalanced).freeze # @return [ String ] INVALID_HOST Error message format string indicating that the hostname in # in the URI does not fit the expected form. INVALID_HOST = "One and only one host is required in a connection string with the " + "'#{MONGODB_SRV_SCHEME}' protocol.".freeze # @return [ String ] INVALID_PORT Error message format string indicating that a port was # included with an SRV hostname. INVALID_PORT = "It is not allowed to specify a port in a connection string with the " + "'#{MONGODB_SRV_SCHEME}' protocol.".freeze # @return [ String ] INVALID_DOMAIN Error message format string indicating that the domain name # of the hostname does not fit the expected form. # @deprecated INVALID_DOMAIN = "The domain name must consist of at least two parts: the domain name, " + "and a TLD.".freeze # @return [ String ] NO_SRV_RECORDS Error message format string indicating that no SRV records # were found. NO_SRV_RECORDS = "The DNS query returned no SRV records for '%s'".freeze # @return [ String ] FORMAT The expected SRV URI format. FORMAT = 'mongodb+srv://[username:password@]host[/[database][?options]]'.freeze # Gets the MongoDB SRV URI scheme. # # @return [ String ] The MongoDB SRV URI scheme. def scheme MONGODB_SRV_SCHEME end # Raises an InvalidURI error. # # @param [ String ] details A detailed error message. # # @raise [ Mongo::Error::InvalidURI ] def raise_invalid_error!(details) raise Error::InvalidURI.new(@string, details, FORMAT) end # Gets the SRV resolver. # # @return [ Mongo::Srv::Resolver ] def resolver @resolver ||= Srv::Resolver.new( raise_on_invalid: true, resolv_options: options[:resolv_options], timeout: options[:connect_timeout], ) end # Parses the credentials from the URI and performs DNS queries to obtain # the hosts and TXT options. # # @param [ String ] remaining The portion of the URI pertaining to the # authentication credentials and the hosts. def parse!(remaining) super if @servers.length != 1 raise_invalid_error!(INVALID_HOST) end hostname = @servers.first validate_srv_hostname(hostname) @query_hostname = hostname log_debug "attempting to resolve #{hostname}" @srv_result = resolver.get_records(hostname, uri_options[:srv_service_name], uri_options[:srv_max_hosts]) if srv_result.empty? raise Error::NoSRVRecords.new(NO_SRV_RECORDS % hostname) end @txt_options = get_txt_options(hostname) || {} records = srv_result.address_strs records.each do |record| validate_address_str!(record) end @servers = records rescue Error::InvalidAddress => e raise_invalid_error!(e.message) end # Validates the hostname used in an SRV URI. # # The hostname cannot include a port. # # The hostname must not begin with a dot, end with a dot, or have # consecutive dots. The hostname must have a minimum of 3 total # components (foo.bar.tld). # # Raises Error::InvalidURI if validation fails. 
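#
# @example Illustrative hostnames.
#   validate_srv_hostname('test6.test.build.10gen.cc') # passes
#   validate_srv_hostname('localhost') # raises Error::InvalidURI (fewer than 3 components)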
def validate_srv_hostname(hostname) raise_invalid_error!(INVALID_PORT) if hostname.include?(HOST_PORT_DELIM) if hostname.start_with?('.') raise_invalid_error!("Hostname cannot start with a dot: #{hostname}") end if hostname.end_with?('.') raise_invalid_error!("Hostname cannot end with a dot: #{hostname}") end parts = hostname.split('.') if parts.any?(&:empty?) raise_invalid_error!("Hostname cannot have consecutive dots: #{hostname}") end if parts.length < 3 raise_invalid_error!("Hostname must have a minimum of 3 components (foo.bar.tld): #{hostname}") end end # Obtains the TXT options of a host. # # @param [ String ] hostname The hostname whose records should be obtained. # # @return [ Hash ] The TXT record options (an empty hash if no TXT # records are found). # # @raise [ Mongo::Error::InvalidTXTRecord ] If more than one TXT record is found. def get_txt_options(hostname) options_string = resolver.get_txt_options_string(hostname) if options_string parse_txt_options!(options_string) else {} end end # Parses the TXT record options into a hash and adds the options to the set of all URI options # parsed. # # @param [ String ] string The concatenated TXT options. # # @return [ Hash ] The parsed TXT options. # # @raise [ Mongo::Error::InvalidTXTRecord ] If the TXT record does not fit the expected form # or the option specified is not a valid TXT option. def parse_txt_options!(string) string.split(INDIV_URI_OPTS_DELIM).reduce({}) do |txt_options, opt| raise Error::InvalidTXTRecord.new(INVALID_OPTS_VALUE_DELIM) unless opt.index(URI_OPTS_VALUE_DELIM) key, value = opt.split('=') unless VALID_TXT_OPTIONS.include?(key.downcase) msg = "TXT records can only specify the options [#{VALID_TXT_OPTIONS.join(', ')}]: #{string}" raise Error::InvalidTXTRecord.new(msg) end options_mapper.add_uri_option(key, value, txt_options) txt_options end end def validate_uri_options! if uri_options[:direct_connection] raise_invalid_error_no_fmt!("directConnection=true is incompatible with SRV URIs") end super end end end end mongo-ruby-driver-2.21.3/lib/mongo/utils.rb000066400000000000000000000067631505113246500205520ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the 'License'); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an 'AS IS' BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo # @api private module Utils class LocalLogger include Loggable def initialize(**opts) @options = opts end attr_reader :options end # @option opts [ true | false | nil | Integer ] :bg_error_backtrace # Experimental. Set to true to log complete backtraces for errors in # background threads. Set to false or nil to not log backtraces. Provide # a positive integer to log up to that many backtrace lines. # @option opts [ Logger ] :logger A custom logger to use. # @option opts [ String ] :log_prefix A custom log prefix to use when # logging.
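#
# @example Log a rescued background exception with a bounded backtrace
#   excerpt (illustrative; +error+ is a previously rescued exception).
#   Utils.warn_bg_exception('SDAM thread error', error, bg_error_backtrace: 5)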
module_function def warn_bg_exception(msg, exc, **opts) bt_excerpt = excerpt_backtrace(exc, **opts) logger = LocalLogger.new(**opts) logger.log_warn("#{msg}: #{exc.class}: #{exc}#{bt_excerpt}") end # @option opts [ true | false | nil | Integer ] :bg_error_backtrace # Experimental. Set to true to log complete backtraces for errors in # background threads. Set to false or nil to not log backtraces. Provide # a positive integer to log up to that many backtrace lines. module_function def excerpt_backtrace(exc, **opts) case lines = opts[:bg_error_backtrace] when Integer ":\n#{exc.backtrace[0..lines].join("\n")}" when false, nil nil else ":\n#{exc.backtrace.join("\n")}" end end # Symbolizes the keys in the provided hash. module_function def shallow_symbolize_keys(hash) Hash[hash.map { |k, v| [k.to_sym, v] }] end # Stringifies the keys in the provided hash and converts underscore # style keys to camel case style keys. module_function def shallow_camelize_keys(hash) Hash[hash.map { |k, v| [camelize(k), v] }] end module_function def camelize(sym) sym.to_s.gsub(/_(\w)/) { $1.upcase } end # @note server_api must have symbol keys or be a BSON::Document. module_function def transform_server_api(server_api) {}.tap do |doc| if version = server_api[:version] doc['apiVersion'] = version end unless server_api[:strict].nil? doc['apiStrict'] = server_api[:strict] end unless server_api[:deprecation_errors].nil? doc['apiDeprecationErrors'] = server_api[:deprecation_errors] end end end # This function should be used if you need to measure time. # @example Calculate elapsed time. # starting = Utils.monotonic_time # # do something time consuming # ending = Utils.monotonic_time # puts "It took #{(ending - starting).to_i} seconds" # # @see https://blog.dnsimple.com/2018/03/elapsed-time-with-ruby-the-right-way/ # # @return [Float] seconds according to monotonic clock module_function def monotonic_time Process.clock_gettime(Process::CLOCK_MONOTONIC) end end end mongo-ruby-driver-2.21.3/lib/mongo/version.rb000066400000000000000000000004051505113246500210660ustar00rootroot00000000000000# frozen_string_literal: true module Mongo # The current version of the driver. # # Note that this file is automatically updated via `rake candidate:create`. # Manual changes to this file will be overwritten by that rake task. VERSION = '2.21.3' end mongo-ruby-driver-2.21.3/lib/mongo/write_concern.rb000066400000000000000000000051621505113246500222470ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2015-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'mongo/write_concern/base' require 'mongo/write_concern/acknowledged' require 'mongo/write_concern/unacknowledged' module Mongo # Base module for all write concern specific behavior. # # @since 2.0.0 module WriteConcern extend self # The number of servers write concern. # # @since 2.0.0 # @deprecated W = :w.freeze # The journal write concern. # # @since 2.0.0 # @deprecated J = :j.freeze # The file sync write concern. 
# # @since 2.0.0 # @deprecated FSYNC = :fsync.freeze # The wtimeout write concern. # # @since 2.0.0 # @deprecated WTIMEOUT = :wtimeout.freeze # The GLE command name. # # @since 2.0.0 # @deprecated GET_LAST_ERROR = :getlasterror.freeze # The default write concern is to acknowledge on a single server. # # @since 2.0.0 DEFAULT = { }.freeze # Create a write concern object for the provided options. # # If options are nil, returns nil. # # @example Get a write concern. # Mongo::WriteConcern.get(:w => 1) # # @param [ Hash ] options The options to instantiate with. # # @option options :w [ Integer, String ] The number of servers or the # custom mode to acknowledge. # @option options :j [ true, false ] Whether to acknowledge a write to # the journal. # @option options :fsync [ true, false ] Should the write be synced to # disc. # @option options :wtimeout [ Integer ] The number of milliseconds to # wait for acknowledgement before raising an error. # # @return [ nil | Unacknowledged | Acknowledged ] The appropriate concern. # # @raise [ Error::InvalidWriteConcern ] If the options are invalid. # # @since 2.0.0 def get(options) return options if options.is_a?(Base) if options if (options[:w] || options['w']) == 0 Unacknowledged.new(options) else Acknowledged.new(options) end end end end end mongo-ruby-driver-2.21.3/lib/mongo/write_concern/000077500000000000000000000000001505113246500217165ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/lib/mongo/write_concern/acknowledged.rb000066400000000000000000000036731505113246500247020ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the 'License'); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an 'AS IS' BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module WriteConcern # An acknowledged write concern provides a get last error command with the # appropriate options on each write operation. # # @since 2.0.0 class Acknowledged < Base # Get the get last error command for the concern. # # @example Get the gle command. # acknowledged.get_last_error # # @return [ Hash ] The gle command. # # @since 2.0.0 def get_last_error @get_last_error ||= { GET_LAST_ERROR => 1 }.merge( Options::Mapper.transform_values_to_strings(options) ) end # Is this write concern acknowledged. # # @example Whether this write concern object is acknowledged. # write_concern.acknowledged? # # @return [ true, false ] Whether this write concern is acknowledged. # # @since 2.5.0 def acknowledged? true end # Get a human-readable string representation of an acknowledged write concern. # # @example Inspect the write concern. # write_concern.inspect # # @return [ String ] A string representation of an acknowledged write concern. # # @since 2.0.0 def inspect "#<Mongo::WriteConcern::Acknowledged:0x#{object_id} options=#{options}>" end end end end mongo-ruby-driver-2.21.3/lib/mongo/write_concern/base.rb000066400000000000000000000050541505113246500231610ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc.
# # Licensed under the Apache License, Version 2.0 (the 'License'); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an 'AS IS' BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module WriteConcern # Defines common behavior for write concerns. # # @since 2.7.0 class Base # @return [ Hash ] The write concern options. attr_reader :options # Instantiate a new write concern given the options. # # @api private # # @example Instantiate a new write concern mode. # Mongo::WriteConcern::Acknowledged.new(:w => 1) # # @param [ Hash ] options The options to instantiate with. # # @option options :w [ Integer, String ] The number of servers or the # custom mode to acknowledge. # @option options :j [ true, false ] Whether to acknowledge a write to # the journal. # @option options :fsync [ true, false ] Should the write be synced to # disc. # @option options :wtimeout [ Integer ] The number of milliseconds to # wait for acknowledgement before raising an error. # # @since 2.0.0 def initialize(options) options = Options::Mapper.transform_keys_to_symbols(options) options = Options::Mapper.transform_values_to_strings(options).freeze if options[:w] if options[:w] == 0 && options[:j] raise Error::InvalidWriteConcern, "Invalid write concern options: :j cannot be true when :w is 0: #{options.inspect}" elsif options[:w] == 0 && options[:fsync] raise Error::InvalidWriteConcern, "Invalid write concern options: :fsync cannot be true when :w is 0: #{options.inspect}" elsif options[:w].is_a?(Integer) && options[:w] < 0 raise Error::InvalidWriteConcern, "Invalid write concern options: :w cannot be negative (#{options[:w]}): #{options.inspect}" end end if options[:journal] raise Error::InvalidWriteConcern, "Invalid write concern options: use :j for journal: #{options.inspect}" end @options = options end end end end mongo-ruby-driver-2.21.3/lib/mongo/write_concern/unacknowledged.rb000066400000000000000000000036251505113246500252430ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the 'License'); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an 'AS IS' BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module WriteConcern # An unacknowledged write concern will provide no error on write outside of # network and connection exceptions. # # @since 2.0.0 class Unacknowledged < Base # The noop constant for the gle. # # @since 2.0.0 NOOP = nil # Get the gle command for an unacknowledged write. # # @example Get the gle command. # unacknowledged.get_last_error # # @return [ nil ] The noop. # # @since 2.0.0 def get_last_error NOOP end # Is this write concern acknowledged. # # @example Whether this write concern object is acknowledged. # write_concern.acknowledged? 
# # @return [ true, false ] Whether this write concern is acknowledged. # # @since 2.5.0 def acknowledged? false end # Get a human-readable string representation of an unacknowledged write concern. # # @example Inspect the write concern. # write_concern.inspect # # @return [ String ] A string representation of an unacknowledged write concern. # # @since 2.0.0 def inspect "#" end end end end mongo-ruby-driver-2.21.3/mongo.gemspec000066400000000000000000000030051505113246500176520ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all lib = File.expand_path('../lib', __FILE__) $LOAD_PATH.unshift(lib) unless $LOAD_PATH.include?(lib) require 'mongo/version' Gem::Specification.new do |s| s.name = 'mongo' s.version = Mongo::VERSION s.platform = Gem::Platform::RUBY s.authors = [ 'The MongoDB Ruby Team' ] s.email = 'dbx-ruby@mongodb.com' s.homepage = 'https://mongodb.com/docs/ruby-driver/' s.summary = 'Ruby driver for MongoDB' s.license = 'Apache-2.0' s.description = <<~DESC A pure-Ruby driver for connecting to, querying, and manipulating MongoDB databases. Officially developed and supported by MongoDB, with love for the Ruby community. DESC s.metadata = { 'bug_tracker_uri' => 'https://jira.mongodb.org/projects/RUBY', 'changelog_uri' => 'https://github.com/mongodb/mongo-ruby-driver/releases', 'homepage_uri' => 'https://mongodb.com/docs/ruby-driver/', 'documentation_uri' => 'https://mongodb.com/docs/ruby-driver/current/tutorials/quick-start/', 'source_code_uri' => 'https://github.com/mongodb/mongo-ruby-driver', } s.files = Dir.glob('{bin,lib}/**/*') s.files += %w[mongo.gemspec LICENSE README.md CONTRIBUTING.md] s.executables = ['mongo_console'] s.require_paths = ['lib'] s.bindir = 'bin' s.required_ruby_version = ">= 2.7" s.add_dependency 'base64' s.add_dependency 'bson', '>=4.14.1', '<6.0.0' end mongo-ruby-driver-2.21.3/product.yml000066400000000000000000000003631505113246500173750ustar00rootroot00000000000000--- name: MongoDB Ruby Driver description: a pure-Ruby driver for connecting to, querying, and manipulating MongoDB databases package: mongo jira: https://jira.mongodb.org/projects/RUBY version: number: 2.21.3 file: lib/mongo/version.rb mongo-ruby-driver-2.21.3/profile/000077500000000000000000000000001505113246500166305ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/profile/driver_bench.rb000066400000000000000000000000741505113246500216100ustar00rootroot00000000000000# frozen_string_literal: true require 'driver_bench/suite' mongo-ruby-driver-2.21.3/profile/driver_bench/000077500000000000000000000000001505113246500212625ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/profile/driver_bench/base.rb000066400000000000000000000127111505113246500225230ustar00rootroot00000000000000# frozen_string_literal: true require 'benchmark' require 'mongo' require_relative 'percentiles' module Mongo module DriverBench # Base class for DriverBench profile benchmarking classes. # # @api private class Base # A convenience for setting and querying the benchmark's name def self.bench_name(benchmark_name = nil) @bench_name = benchmark_name if benchmark_name @bench_name end # Where to look for the data files DATA_PATH = File.expand_path('../data/driver_bench', __dir__) # The maximum number of iterations to perform when executing the # micro-benchmark. attr_reader :max_iterations # The minimum number of seconds that the micro-benchmark must run, # regardless of how many iterations it takes. 
attr_reader :min_time # The maximum number of seconds that the micro-benchmark must run, # regardless of how many iterations it takes. attr_reader :max_time # The dataset to be used by the micro-benchmark. attr_reader :dataset # The size of the dataset, computed per the spec, to be # used for scoring the results. attr_reader :dataset_size # Instantiate a new micro-benchmark class. def initialize @max_iterations = debug_mode? ? 10 : 100 @min_time = debug_mode? ? 1 : 60 @max_time = 300 # 5 minutes end def debug_mode? ENV['PERF_DEBUG'] end # Runs the benchmark and returns the score. # # @return [ Hash ] the score and other # attributes of the benchmark. def run timings = run_benchmark percentiles = Percentiles.new(timings) score = dataset_size / percentiles[50] / 1_000_000.0 { name: self.class.bench_name, score: score, percentiles: percentiles } end private # Runs the micro-benchmark, and returns an array of timings, with one # entry for each iteration of the benchmark. It may have fewer than # max_iterations entries if it takes longer than max_time seconds, or # more than max_iterations entries if it would take less than min_time # seconds to run. # # @return [ Array ] the array of timings (in seconds) for # each iteration. # # rubocop:disable Metrics/AbcSize def run_benchmark [].tap do |timings| iteration_count = 0 cumulative_time = 0 setup loop do before_task timing = consider_gc { Benchmark.realtime { debug_mode? ? sleep(0.1) : do_task } } after_task iteration_count += 1 cumulative_time += timing timings.push timing # always stop after the maximum time has elapsed, regardless of # iteration count. break if cumulative_time > max_time # otherwise, break if the minimum time has elapsed, and the maximum # number of iterations have been reached. break if cumulative_time >= min_time && iteration_count >= max_iterations end teardown end end # rubocop:enable Metrics/AbcSize # Instantiate a new client. def new_client(uri = ENV['MONGODB_URI']) Mongo::Client.new(uri) end # Takes care of garbage collection considerations before # running the block. # # Set BENCHMARK_NO_GC environment variable to suppress GC during # the core benchmark tasks; note that this may result in obscure issues # due to memory pressures on larger benchmarks. def consider_gc GC.start GC.disable if ENV['BENCHMARK_NO_GC'] yield ensure GC.enable if ENV['BENCHMARK_NO_GC'] end # By default, the file name is assumed to be relative to the # DATA_PATH, unless the file name is an absolute path. def path_to_file(file_name) return file_name if file_name.start_with?('/') File.join(DATA_PATH, file_name) end # Load a json file and represent each document as a Hash. # # @param [ String ] file_name The file name. # # @return [ Array ] A list of extended-json documents. def load_file(file_name) File.readlines(path_to_file(file_name)).map { |line| ::BSON::Document.new(parse_line(line)) } end # Returns the size (in bytes) of the given file. def size_of_file(file_name) File.size(path_to_file(file_name)) end # Load a json document as a Hash and convert BSON-specific types. # Replace the _id field as an BSON::ObjectId if it's represented as '$oid'. # # @param [ String ] document The json document. # # @return [ Hash ] An extended-json document. def parse_line(document) JSON.parse(document).tap do |doc| doc['_id'] = ::BSON::ObjectId.from_string(doc['_id']['$oid']) if doc['_id'] && doc['_id']['$oid'] end end # Executed at the start of the micro-benchmark. def setup; end # Executed before each iteration of the benchmark. 
def before_task; end # Smallest amount of code necessary to do the task, # invoked once per iteration. def do_task raise NotImplementedError end # Executed after each iteration of the benchmark. def after_task; end # Executed at the end of the micro-benchmark. def teardown; end end end end mongo-ruby-driver-2.21.3/profile/driver_bench/bson.rb000066400000000000000000000005001505113246500225430ustar00rootroot00000000000000# frozen_string_literal: true require_relative 'bson/deep' require_relative 'bson/flat' require_relative 'bson/full' module Mongo module DriverBench module BSON ALL = [ *Deep::ALL, *Flat::ALL, *Full::ALL ].freeze # BSONBench consists of all BSON micro-benchmarks BENCH = ALL end end end mongo-ruby-driver-2.21.3/profile/driver_bench/bson/000077500000000000000000000000001505113246500222235ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/profile/driver_bench/bson/base.rb000066400000000000000000000013601505113246500234620ustar00rootroot00000000000000# frozen_string_literal: true require_relative '../base' module Mongo module DriverBench module BSON # Abstract superclass for all BSON benchmarks. # # @api private class Base < Mongo::DriverBench::Base private # Common setup for these benchmarks. def setup # rubocop:disable Naming/MemoizedInstanceVariableName @dataset ||= load_file(file_name).first @dataset_size ||= size_of_file(file_name) * 10_000 # rubocop:enable Naming/MemoizedInstanceVariableName end # Returns the name of the file name that contains # the dataset to use. def file_name raise NotImplementedError end end end end end mongo-ruby-driver-2.21.3/profile/driver_bench/bson/decodable.rb000066400000000000000000000012321505113246500244500ustar00rootroot00000000000000# frozen_string_literal: true require_relative '../base' module Mongo module DriverBench module BSON # Common behavior for "decode" benchmarks. # # @api private module Decodable private # The buffer to decode for the test attr_reader :buffer # Before executing the task itself. def before_task @buffer = ::BSON::Document.new(dataset).to_bson end # The decode operation, performed 10k times. def do_task 10_000.times do ::BSON::Document.from_bson(buffer) buffer.rewind! end end end end end end mongo-ruby-driver-2.21.3/profile/driver_bench/bson/deep.rb000066400000000000000000000003561505113246500234710ustar00rootroot00000000000000# frozen_string_literal: true require_relative 'deep/encoding' require_relative 'deep/decoding' module Mongo module DriverBench module BSON module Deep ALL = [ Encoding, Decoding ].freeze end end end end mongo-ruby-driver-2.21.3/profile/driver_bench/bson/deep/000077500000000000000000000000001505113246500231405ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/profile/driver_bench/bson/deep/base.rb000066400000000000000000000010061505113246500243740ustar00rootroot00000000000000# frozen_string_literal: true require_relative '../base' module Mongo module DriverBench module BSON module Deep # Abstract superclass for deep BSON benchmarks. # # @api private class Base < Mongo::DriverBench::BSON::Base private # @return [ String ] the name of the file to use as the # dataset for these benchmarks. 
def file_name 'extended_bson/deep_bson.json' end end end end end end mongo-ruby-driver-2.21.3/profile/driver_bench/bson/deep/decoding.rb000066400000000000000000000010511505113246500252360ustar00rootroot00000000000000# frozen_string_literal: true require_relative 'base' require_relative '../decodable' module Mongo module DriverBench module BSON module Deep # "This benchmark tests driver performance decoding documents with # deeply nested key/value pairs involving subdocuments, strings, # integers, doubles and booleans." # # @api private class Decoding < Mongo::DriverBench::BSON::Deep::Base include Decodable bench_name 'Deep BSON Decoding' end end end end end mongo-ruby-driver-2.21.3/profile/driver_bench/bson/deep/encoding.rb000066400000000000000000000010511505113246500252500ustar00rootroot00000000000000# frozen_string_literal: true require_relative 'base' require_relative '../encodable' module Mongo module DriverBench module BSON module Deep # "This benchmark tests driver performance encoding documents with # deeply nested key/value pairs involving subdocuments, strings, # integers, doubles and booleans." # # @api private class Encoding < Mongo::DriverBench::BSON::Deep::Base include Encodable bench_name 'Deep BSON Encoding' end end end end end mongo-ruby-driver-2.21.3/profile/driver_bench/bson/encodable.rb000066400000000000000000000011141505113246500244610ustar00rootroot00000000000000# frozen_string_literal: true require_relative '../base' module Mongo module DriverBench module BSON # Common behavior for the "encode" benchmarks. # # @api private module Encodable private # The document to encode for the test attr_reader :document # Before each task. def before_task @document = ::BSON::Document.new(dataset) end # The encode operation itself, executed 10k times. def do_task 10_000.times { document.to_bson } end end end end end mongo-ruby-driver-2.21.3/profile/driver_bench/bson/flat.rb000066400000000000000000000003561505113246500235020ustar00rootroot00000000000000# frozen_string_literal: true require_relative 'flat/encoding' require_relative 'flat/decoding' module Mongo module DriverBench module BSON module Flat ALL = [ Encoding, Decoding ].freeze end end end end mongo-ruby-driver-2.21.3/profile/driver_bench/bson/flat/000077500000000000000000000000001505113246500231515ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/profile/driver_bench/bson/flat/base.rb000066400000000000000000000010051505113246500244040ustar00rootroot00000000000000# frozen_string_literal: true require_relative '../base' module Mongo module DriverBench module BSON module Flat # Abstract superclass of flat BSON benchmarks. # # @api private class Base < Mongo::DriverBench::BSON::Base private # @return [ String ] the name of the file to use as the # dataset for these benchmarks. def file_name 'extended_bson/flat_bson.json' end end end end end end mongo-ruby-driver-2.21.3/profile/driver_bench/bson/flat/decoding.rb000066400000000000000000000010071505113246500252500ustar00rootroot00000000000000# frozen_string_literal: true require_relative 'base' require_relative '../decodable' module Mongo module DriverBench module BSON module Flat # "This benchmark tests driver performance decoding documents with top # level key/value pairs involving the most commonly-used BSON types." 
# # @api private class Decoding < Mongo::DriverBench::BSON::Flat::Base include Decodable bench_name 'Flat BSON Decoding' end end end end end mongo-ruby-driver-2.21.3/profile/driver_bench/bson/flat/encoding.rb000066400000000000000000000010071505113246500252620ustar00rootroot00000000000000# frozen_string_literal: true require_relative 'base' require_relative '../encodable' module Mongo module DriverBench module BSON module Flat # "This benchmark tests driver performance encoding documents with top # level key/value pairs involving the most commonly-used BSON types." # # @api private class Encoding < Mongo::DriverBench::BSON::Flat::Base include Encodable bench_name 'Flat BSON Encoding' end end end end end mongo-ruby-driver-2.21.3/profile/driver_bench/bson/full.rb000066400000000000000000000003561505113246500235160ustar00rootroot00000000000000# frozen_string_literal: true require_relative 'full/encoding' require_relative 'full/decoding' module Mongo module DriverBench module BSON module Full ALL = [ Encoding, Decoding ].freeze end end end end mongo-ruby-driver-2.21.3/profile/driver_bench/bson/full/000077500000000000000000000000001505113246500231655ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/profile/driver_bench/bson/full/base.rb000066400000000000000000000010061505113246500244210ustar00rootroot00000000000000# frozen_string_literal: true require_relative '../base' module Mongo module DriverBench module BSON module Full # Abstract superclass for full BSON benchmarks. # # @api private class Base < Mongo::DriverBench::BSON::Base private # @return [ String ] the name of the file to use as the # dataset for these benchmarks. def file_name 'extended_bson/full_bson.json' end end end end end end mongo-ruby-driver-2.21.3/profile/driver_bench/bson/full/decoding.rb000066400000000000000000000010021505113246500252570ustar00rootroot00000000000000# frozen_string_literal: true require_relative 'base' require_relative '../decodable' module Mongo module DriverBench module BSON module Full # "This benchmark tests driver performance decoding documents with top # level key/value pairs involving the full range of BSON types." # # @api private class Decoding < Mongo::DriverBench::BSON::Full::Base include Decodable bench_name 'Full BSON Decoding' end end end end end mongo-ruby-driver-2.21.3/profile/driver_bench/bson/full/encoding.rb000066400000000000000000000010021505113246500252710ustar00rootroot00000000000000# frozen_string_literal: true require_relative 'base' require_relative '../encodable' module Mongo module DriverBench module BSON module Full # "This benchmark tests driver performance encoding documents with top # level key/value pairs involving the full range of BSON types." # # @api private class Encoding < Mongo::DriverBench::BSON::Full::Base include Encodable bench_name 'Full BSON Encoding' end end end end end mongo-ruby-driver-2.21.3/profile/driver_bench/crypto/000077500000000000000000000000001505113246500226025ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/profile/driver_bench/crypto/decrypt.rb000066400000000000000000000053621505113246500246070ustar00rootroot00000000000000# frozen_string_literal: true require 'mongo' require_relative '../base' module Mongo module DriverBench module Crypto # Benchmark for reporting the performance of decrypting a document with # a large number of encrypted fields. 
class Decrypt < Mongo::DriverBench::Base ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic' KEY_VAULT_NAMESPACE = 'encryption.__keyVault' N = 10 def run doc = build_encrypted_doc # warm up run_test(doc, 1) [ 1, 2, 8, 64 ].each do |thread_count| run_test_with_thread_count(doc, thread_count) end end private def run_test_with_thread_count(doc, thread_count) results = [] N.times do threads = Array.new(thread_count) do Thread.new { Thread.current[:ops_sec] = run_test(doc, 1) } end results << threads.each(&:join).sum { |t| t[:ops_sec] } end median = results.sort[N / 2] puts "thread_count=#{thread_count}; median ops/sec=#{median}" end def build_encrypted_doc data_key_id = client_encryption.create_data_key('local') pairs = Array.new(1500) do |i| n = format('%04d', i + 1) key = "key#{n}" value = "value #{n}" encrypted = client_encryption.encrypt(value, key_id: data_key_id, algorithm: ALGORITHM) [ key, encrypted ] end BSON::Document[pairs] end def timeout_holder @timeout_holder ||= Mongo::CsotTimeoutHolder.new end def encrypter @encrypter ||= Crypt::AutoEncrypter.new( client: new_client, key_vault_client: key_vault_client, key_vault_namespace: KEY_VAULT_NAMESPACE, kms_providers: kms_providers ) end def run_test(doc, duration) finish_at = Mongo::Utils.monotonic_time + duration count = 0 while Mongo::Utils.monotonic_time < finish_at encrypter.decrypt(doc, timeout_holder) count += 1 end count end def key_vault_client @key_vault_client ||= new_client end def kms_providers @kms_providers ||= { local: { key: SecureRandom.random_bytes(96) } } end def client_encryption @client_encryption ||= Mongo::ClientEncryption.new( key_vault_client, key_vault_namespace: KEY_VAULT_NAMESPACE, kms_providers: kms_providers ) end end end end end mongo-ruby-driver-2.21.3/profile/driver_bench/multi_doc.rb000066400000000000000000000005561505113246500235740ustar00rootroot00000000000000# frozen_string_literal: true require_relative 'multi_doc/bulk_insert' require_relative 'multi_doc/find_many' require_relative 'multi_doc/grid_fs' module Mongo module DriverBench module MultiDoc ALL = [ *BulkInsert::ALL, FindMany, *GridFS::ALL ].freeze # MultiBench consists of all Multi-doc micro-benchmarks BENCH = ALL end end end mongo-ruby-driver-2.21.3/profile/driver_bench/multi_doc/000077500000000000000000000000001505113246500232415ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/profile/driver_bench/multi_doc/base.rb000066400000000000000000000017051505113246500245030ustar00rootroot00000000000000# frozen_string_literal: true require_relative '../base' module Mongo module DriverBench module MultiDoc # Abstract base class for multi-doc benchmarks. # # @api private class Base < Mongo::DriverBench::Base private attr_reader :client, :collection def setup if file_name @dataset ||= load_file(file_name) @dataset_size ||= size_of_file(file_name) * scale end prepare_client end # The amount to scale the dataset size by (for scoring purposes). 
def scale 10_000 end def teardown cleanup_client end def prepare_client @client = new_client.use('perftest') @client.database.drop @collection = @client.database[:corpus].tap(&:create) end def cleanup_client @client.database.drop end end end end end mongo-ruby-driver-2.21.3/profile/driver_bench/multi_doc/bulk_insert.rb000066400000000000000000000004101505113246500261020ustar00rootroot00000000000000# frozen_string_literal: true require_relative 'bulk_insert/large_doc' require_relative 'bulk_insert/small_doc' module Mongo module DriverBench module MultiDoc module BulkInsert ALL = [ LargeDoc, SmallDoc ].freeze end end end end mongo-ruby-driver-2.21.3/profile/driver_bench/multi_doc/bulk_insert/000077500000000000000000000000001505113246500255625ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/profile/driver_bench/multi_doc/bulk_insert/base.rb000066400000000000000000000015161505113246500270240ustar00rootroot00000000000000# frozen_string_literal: true require_relative '../base' module Mongo module DriverBench module MultiDoc module BulkInsert # Abstract superclass for all bulk insert benchmarks. # # @api private class Base < Mongo::DriverBench::MultiDoc::Base attr_reader :repetitions, :bulk_dataset def setup super @bulk_dataset = dataset * repetitions end # How much the benchmark's dataset size ought to be scaled (for # scoring purposes). def scale @repetitions end def before_task collection.drop collection.create end def do_task collection.insert_many(bulk_dataset, ordered: true) end end end end end end mongo-ruby-driver-2.21.3/profile/driver_bench/multi_doc/bulk_insert/large_doc.rb000066400000000000000000000011551505113246500300300ustar00rootroot00000000000000# frozen_string_literal: true require_relative 'base' module Mongo module DriverBench module MultiDoc module BulkInsert # "This benchmark tests driver performance inserting multiple, large # documents to the database." # # @api private class LargeDoc < Mongo::DriverBench::MultiDoc::BulkInsert::Base bench_name 'Large doc bulk insert' def initialize super @repetitions = 10 end def file_name 'single_and_multi_document/large_doc.json' end end end end end end mongo-ruby-driver-2.21.3/profile/driver_bench/multi_doc/bulk_insert/small_doc.rb000066400000000000000000000011611505113246500300430ustar00rootroot00000000000000# frozen_string_literal: true require_relative 'base' module Mongo module DriverBench module MultiDoc module BulkInsert # "This benchmark tests driver performance inserting multiple, small # documents to the database." # # @api private class SmallDoc < Mongo::DriverBench::MultiDoc::BulkInsert::Base bench_name 'Small doc bulk insert' def initialize super @repetitions = 10_000 end def file_name 'single_and_multi_document/small_doc.json' end end end end end end mongo-ruby-driver-2.21.3/profile/driver_bench/multi_doc/find_many.rb000066400000000000000000000014531505113246500255350ustar00rootroot00000000000000# frozen_string_literal: true require_relative 'base' module Mongo module DriverBench module MultiDoc # "This benchmark tests driver performance retrieving multiple documents # from a query." 
# # @api private class FindMany < Mongo::DriverBench::MultiDoc::Base bench_name 'Find many and empty the cursor' private def file_name 'single_and_multi_document/tweet.json' end def setup super prototype = dataset.first 10.times do docs = Array.new(1000, prototype) @collection.insert_many(docs) end end def do_task collection.find.each do |result| # discard the result end end end end end end mongo-ruby-driver-2.21.3/profile/driver_bench/multi_doc/grid_fs.rb000066400000000000000000000003661505113246500252100ustar00rootroot00000000000000# frozen_string_literal: true require_relative 'grid_fs/download' require_relative 'grid_fs/upload' module Mongo module DriverBench module MultiDoc module GridFS ALL = [ Download, Upload ].freeze end end end end mongo-ruby-driver-2.21.3/profile/driver_bench/multi_doc/grid_fs/000077500000000000000000000000001505113246500246565ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/profile/driver_bench/multi_doc/grid_fs/base.rb000066400000000000000000000012721505113246500261170ustar00rootroot00000000000000# frozen_string_literal: true require_relative '../base' module Mongo module DriverBench module MultiDoc module GridFS # Abstract base class for multi-doc GridFS benchmarks. # # @api private class Base < Mongo::DriverBench::MultiDoc::Base private # how much the dataset size ought to be scaled (for scoring # purposes). def scale 1 end def file_name 'single_and_multi_document/gridfs_large.bin' end def load_file(file_name) File.read(path_to_file(file_name), encoding: 'BINARY') end end end end end end mongo-ruby-driver-2.21.3/profile/driver_bench/multi_doc/grid_fs/download.rb000066400000000000000000000015371505113246500270200ustar00rootroot00000000000000# frozen_string_literal: true require 'stringio' require_relative 'base' module Mongo module DriverBench module MultiDoc module GridFS # "This benchmark tests driver performance downloading a GridFS file # to memory." # # @api private class Download < Mongo::DriverBench::MultiDoc::GridFS::Base bench_name 'GridFS Download' private attr_reader :fs_bucket, :file_id def setup super @file_id = client.database.fs .upload_from_stream 'gridfstest', dataset end def before_task super @fs_bucket = client.database.fs end def do_task fs_bucket.download_to_stream(file_id, StringIO.new) end end end end end end mongo-ruby-driver-2.21.3/profile/driver_bench/multi_doc/grid_fs/upload.rb000066400000000000000000000013601505113246500264670ustar00rootroot00000000000000# frozen_string_literal: true require_relative 'base' module Mongo module DriverBench module MultiDoc module GridFS # "This benchmark tests driver performance uploading a GridFS file # from memory." 
# # @api private class Upload < Mongo::DriverBench::MultiDoc::GridFS::Base bench_name 'GridFS Upload' private attr_reader :fs_bucket def before_task super @fs_bucket = client.database.fs @fs_bucket.drop @fs_bucket.upload_from_stream 'one-byte-file', "\n" end def do_task fs_bucket.upload_from_stream file_name, dataset end end end end end end mongo-ruby-driver-2.21.3/profile/driver_bench/parallel.rb000066400000000000000000000004631505113246500234060ustar00rootroot00000000000000# frozen_string_literal: true require_relative 'parallel/gridfs' require_relative 'parallel/ldjson' module Mongo module DriverBench module Parallel ALL = [ *GridFS::ALL, *LDJSON::ALL ].freeze # ParallelBench consists of all Parallel micro-benchmarks BENCH = ALL end end end mongo-ruby-driver-2.21.3/profile/driver_bench/parallel/000077500000000000000000000000001505113246500230565ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/profile/driver_bench/parallel/base.rb000066400000000000000000000011571505113246500243210ustar00rootroot00000000000000# frozen_string_literal: true require_relative '../base' module Mongo module DriverBench module Parallel # Abstract base class for parallel micro-benchmarks. # # @api private class Base < Mongo::DriverBench::Base private attr_reader :client def setup prepare_client end def teardown cleanup_client end def prepare_client @client = new_client.use('perftest') @client.database.drop end def cleanup_client client.database.drop end end end end end mongo-ruby-driver-2.21.3/profile/driver_bench/parallel/counter.rb000066400000000000000000000027371505113246500250730ustar00rootroot00000000000000# frozen_string_literal: true module Mongo module DriverBench module Parallel # An implementation of a counter variable that can be waited on, which # will signal when the variable reaches zero. # # @api private class Counter # Create a new Counter object with the given initial value. # # @param [ Integer ] value the starting value of the counter (defaults # to zero). def initialize(value = 0) @mutex = Thread::Mutex.new @condition = Thread::ConditionVariable.new @counter = value end # Describes a block where the counter is incremented before executing # it, and decremented afterward. # # @yield Calls the provided block with no arguments. def enter inc yield ensure dec end # Waits for the counter to be zero. def wait @mutex.synchronize do return if @counter.zero? @condition.wait(@mutex) end end # Increments the counter. def inc @mutex.synchronize { @counter += 1 } end # Decrements the counter. If the counter reaches zero, # a signal is sent to any waiting process. def dec @mutex.synchronize do @counter -= 1 if @counter.positive? @condition.signal if @counter.zero? end end end end end end mongo-ruby-driver-2.21.3/profile/driver_bench/parallel/dispatcher.rb000066400000000000000000000044771505113246500255450ustar00rootroot00000000000000# frozen_string_literal: true require 'etc' require_relative 'counter' module Mongo module DriverBench module Parallel # Implements a dispatcher for executing multiple workers in parallel. # # @api private class Dispatcher attr_reader :source # Creates a new dispatcher with the given source. The source may be any # object that responds to ``#next``. It may be assumed that ``#next`` # will be called in a thread-safe manner, so the source does not need # to worry about thread-safety in that regard. Each call to ``#next`` # on the source object should return the next batch of work to be done. # When the source is empty, ``#next`` must return ``nil``. 
# # @param [ Object ] source an object responding to ``#next``. # @param [ Integer ] workers the number of workers to employ in # performing the task. # # @yield The associated block is executed in each worker and must # describe the worker's task to be accomplished. # # @yieldparam [ Object ] batch the next batch to be worked on. def initialize(source, workers: (ENV['WORKERS'] || (Etc.nprocessors * 0.4)).to_i, &block) @source = source @counter = Counter.new @source_mutex = Thread::Mutex.new @threads = Array.new(workers).map do Thread.new do @counter.enter do Thread.stop worker_loop(&block) end end end sleep 0.1 until @threads.all? { |t| t.status == 'sleep' } end # Runs the workers and waits for them to finish. def run @threads.each(&:wakeup) @counter.wait end private # @return [ Object ] returns the next batch of work to be done (from # the source object given when the dispatcher was created). def next_batch @source_mutex.synchronize do @source.next end end # Fetches the next batch and passes it to the block, in a loop. # Terminates when the next batch is ``nil``. def worker_loop loop do batch = next_batch or return yield batch end end end end end end mongo-ruby-driver-2.21.3/profile/driver_bench/parallel/gridfs.rb000066400000000000000000000003641505113246500246640ustar00rootroot00000000000000# frozen_string_literal: true require_relative 'gridfs/download' require_relative 'gridfs/upload' module Mongo module DriverBench module Parallel module GridFS ALL = [ Download, Upload ].freeze end end end end mongo-ruby-driver-2.21.3/profile/driver_bench/parallel/gridfs/000077500000000000000000000000001505113246500243345ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/profile/driver_bench/parallel/gridfs/base.rb000066400000000000000000000017771505113246500256070ustar00rootroot00000000000000# frozen_string_literal: true require_relative '../base' module Mongo module DriverBench module Parallel module GridFS # Abstract base class of parallel GridFS micro-benchmarks. # # @api private class Base < Mongo::DriverBench::Parallel::Base def file_name_at(index) format('parallel/gridfs_multi/file%02d.txt', index) end private attr_reader :bucket def setup super @dataset_size = 50.times.sum { |i| File.size(path_to_file(file_name_at(i))) } end def prepare_bucket(initialize: true) @bucket = client.database.fs @bucket.drop @bucket.upload_from_stream 'one-byte-file', "\n" if initialize end def upload_file(file_name) File.open(path_to_file(file_name), 'r') do |file| bucket.upload_from_stream file_name, file end end end end end end end mongo-ruby-driver-2.21.3/profile/driver_bench/parallel/gridfs/download.rb000066400000000000000000000036741505113246500265020ustar00rootroot00000000000000# frozen_string_literal: true require_relative 'base' require_relative 'upload' require_relative '../dispatcher' module Mongo module DriverBench module Parallel module GridFS # This benchmark tests driver performance downloading files from # GridFS to disk. # # @api private class Download < Mongo::DriverBench::Parallel::GridFS::Base bench_name 'GridFS multi-file download' private # The source object to use for this benchmark. Each batch is a tuple # consisting of the list position, and the element in the list at # that position. 
# # @api private class Source def initialize(list) @list = list @n = 0 end def next id = @list.pop or return nil [ @n, id ].tap { @n += 1 } end end def setup super prepare_bucket(initialize: false) dispatcher = Dispatcher.new(Upload::Source.new(self)) do |file_name| upload_file(file_name) end dispatcher.run @destination = File.join(Dir.tmpdir, 'parallel') end def before_task super FileUtils.rm_rf(@destination) FileUtils.mkdir_p(@destination) ids = bucket.files_collection.find.map { |doc| doc['_id'] } @dispatcher = Dispatcher.new(Source.new(ids)) do |(n, id)| download_file(n, id) end end def do_task @dispatcher.run end def download_file(index, id) path = File.join(@destination, file_name_at(index)) FileUtils.mkdir_p(File.dirname(path)) File.open(path, 'w') do |file| bucket.download_to_stream(id, file) end end end end end end end mongo-ruby-driver-2.21.3/profile/driver_bench/parallel/gridfs/upload.rb000066400000000000000000000022271505113246500261500ustar00rootroot00000000000000# frozen_string_literal: true require_relative 'base' require_relative '../dispatcher' module Mongo module DriverBench module Parallel module GridFS # "This benchmark tests driver performance uploading files from disk # to GridFS." # # @api private class Upload < Mongo::DriverBench::Parallel::GridFS::Base bench_name 'GridFS multi-file upload' # The source object to use for this benchmark. Each batch consists # of the name of the file to upload. # # @api private class Source def initialize(bench) @n = 0 @bench = bench end def next return nil if @n >= 50 @bench.file_name_at(@n).tap { @n += 1 } end end private def before_task super prepare_bucket @dispatcher = Dispatcher.new(Source.new(self)) do |file_name| upload_file(file_name) end end def do_task @dispatcher.run end end end end end end mongo-ruby-driver-2.21.3/profile/driver_bench/parallel/ldjson.rb000066400000000000000000000003601505113246500246730ustar00rootroot00000000000000# frozen_string_literal: true require_relative 'ldjson/export' require_relative 'ldjson/import' module Mongo module DriverBench module Parallel module LDJSON ALL = [ Export, Import ].freeze end end end end mongo-ruby-driver-2.21.3/profile/driver_bench/parallel/ldjson/000077500000000000000000000000001505113246500243475ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/profile/driver_bench/parallel/ldjson/base.rb000066400000000000000000000023611505113246500256100ustar00rootroot00000000000000# frozen_string_literal: true require_relative '../base' module Mongo module DriverBench module Parallel module LDJSON # The abstract base class for parallel LDSON benchmarks. 
# # @api private class Base < Mongo::DriverBench::Parallel::Base def file_name_at(index) format('parallel/ldjson_multi/ldjson%03d.txt', index) end private attr_reader :collection def insert_docs_from_file(file_name, ids_relative_to: nil) next_id = ids_relative_to docs = File.readlines(path_to_file(file_name)).map do |line| JSON.parse(line).tap do |doc| if ids_relative_to doc['_id'] = next_id next_id += 1 end end end collection.insert_many(docs) end def setup super @dataset_size = 100.times.sum { |i| File.size(path_to_file(file_name_at(i))) } end def prepare_collection @collection = @client.database[:corpus].tap do |corpus| corpus.drop corpus.create end end end end end end end mongo-ruby-driver-2.21.3/profile/driver_bench/parallel/ldjson/export.rb000066400000000000000000000035521505113246500262220ustar00rootroot00000000000000# frozen_string_literal: true require 'tmpdir' require 'fileutils' require_relative 'base' require_relative '../dispatcher' module Mongo module DriverBench module Parallel module LDJSON # "This benchmark tests driver performance exporting documents to a # set of LDJSON files." # # @api private class Export < Mongo::DriverBench::Parallel::LDJSON::Base bench_name 'LDJSON multi-file export' private # The data source for this benchmark; each batch is a set of 5000 # documents. class DataSource def initialize(collection) @n = 0 @collection = collection end def next return nil if @n >= 100 batch = @collection.find(_id: { '$gte' => @n * 5000, '$lt' => (@n + 1) * 5000 }).to_a [ @n, batch ].tap { @n += 1 } end end def setup super @destination = File.join(Dir.tmpdir, 'parallel') FileUtils.mkdir_p(@destination) prepare_collection 100.times do |n| insert_docs_from_file(file_name_at(n), ids_relative_to: n * 5000) end end def before_task super @dispatcher = Dispatcher.new(DataSource.new(collection)) do |(n, batch)| worker_task(n, batch) end end def do_task @dispatcher.run end def teardown super FileUtils.rm_rf(@destination) end def worker_task(index, batch) path = File.join(@destination, file_name_at(index)) FileUtils.mkdir_p(File.dirname(path)) File.write(path, batch.map(&:to_json).join("\n")) end end end end end end mongo-ruby-driver-2.21.3/profile/driver_bench/parallel/ldjson/import.rb000066400000000000000000000022151505113246500262060ustar00rootroot00000000000000# frozen_string_literal: true require_relative 'base' require_relative '../dispatcher' module Mongo module DriverBench module Parallel module LDJSON # "This benchmark tests driver performance importing documents from a # set of LDJSON files." # # @api private class Import < Mongo::DriverBench::Parallel::LDJSON::Base bench_name 'LDJSON multi-file import' private # The data source for this benchmark. Each batch is the name of a # file to read documents file. class DataSource def initialize(bench) @n = 0 @bench = bench end def next return nil if @n >= 100 @bench.file_name_at(@n).tap { @n += 1 } end end def before_task super prepare_collection @dispatcher = Dispatcher.new(DataSource.new(self)) do |file_name| insert_docs_from_file(file_name) end end def do_task @dispatcher.run end end end end end end mongo-ruby-driver-2.21.3/profile/driver_bench/percentiles.rb000066400000000000000000000015731505113246500241320ustar00rootroot00000000000000# frozen_string_literal: true module Mongo module DriverBench # A utility class for returning the list item at a given percentile # value. 
class Percentiles # @return [ Array ] the sorted list of numbers to consider attr_reader :list # Create a new Percentiles object that encapsulates the given list of # numbers. # # @param [ Array ] list the list of numbers to considier def initialize(list) @list = list.sort end # Finds and returns the element in the list that represents the given # percentile value. # # @param [ Number ] percentile a number in the range [1,100] # # @return [ Number ] the element of the list for the given percentile. def [](percentile) i = (list.size * percentile / 100.0).ceil - 1 list[i] end end end end mongo-ruby-driver-2.21.3/profile/driver_bench/rake/000077500000000000000000000000001505113246500222045ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/profile/driver_bench/rake/tasks.rake000066400000000000000000000024421505113246500241770ustar00rootroot00000000000000# frozen_string_literal: true $LOAD_PATH.unshift File.expand_path('../../../lib', __dir__) task driver_bench: %i[ driver_bench:data driver_bench:run ] SPECS_REPO_URI = 'https://github.com/mongodb/specifications' SPECS_PATH = File.expand_path('../../../specifications', __dir__) DRIVER_BENCH_DATA = File.expand_path('../../data/driver_bench', __dir__) # rubocop:disable Metrics/BlockLength namespace :driver_bench do desc 'Downloads the DriverBench data files, if necessary' task :data do if File.directory?('./profile/data/driver_bench') puts 'DriverBench data files are already downloaded' next end if File.directory?(SPECS_PATH) puts 'specifications repo is already checked out' else sh 'git', 'clone', SPECS_REPO_URI end mkdir_p DRIVER_BENCH_DATA Dir.glob(File.join(SPECS_PATH, 'source/benchmarking/data/*.tgz')) do |archive| Dir.chdir(DRIVER_BENCH_DATA) do sh 'tar', 'xzf', archive end end end desc 'Runs the DriverBench benchmark suite' task :run do require_relative '../suite' Mongo::DriverBench::Suite.run! end desc 'Runs the crypto benchmark' task :crypto do require_relative '../crypto/decrypt' Mongo::DriverBench::Crypto::Decrypt.new.run end end # rubocop:enable Metrics/BlockLength mongo-ruby-driver-2.21.3/profile/driver_bench/single_doc.rb000066400000000000000000000006631505113246500237220ustar00rootroot00000000000000# frozen_string_literal: true require_relative 'single_doc/find_one_by_id' require_relative 'single_doc/insert_one' require_relative 'single_doc/run_command' module Mongo module DriverBench module SingleDoc ALL = [ FindOneByID, *InsertOne::ALL, RunCommand ].freeze # SingleBench consists of all Single-doc micro-benchmarks # except "Run Command" BENCH = (ALL - [ RunCommand ]).freeze end end end mongo-ruby-driver-2.21.3/profile/driver_bench/single_doc/000077500000000000000000000000001505113246500233705ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/profile/driver_bench/single_doc/base.rb000066400000000000000000000021551505113246500246320ustar00rootroot00000000000000# frozen_string_literal: true require_relative '../base' module Mongo module DriverBench module SingleDoc # Abstract base class for all single-doc benchmarks. # # @api private class Base < Mongo::DriverBench::Base private attr_reader :client, :collection def setup if file_name @dataset ||= load_file(file_name).first @dataset_size ||= size_of_file(file_name) * scale end prepare_client end # The amount by which the dataset size should be scaled (for scoring # purposes). 
def scale 10_000 end def teardown cleanup_client end def prepare_client @client = new_client.use('perftest') @client.database.drop @collection = @client.database[:corpus].tap(&:create) end def cleanup_client @client.database.drop end # Returns the name of the file that contains # the dataset to use. def file_name nil end end end end end mongo-ruby-driver-2.21.3/profile/driver_bench/single_doc/find_one_by_id.rb000066400000000000000000000014721505113246500266500ustar00rootroot00000000000000# frozen_string_literal: true require_relative 'base' module Mongo module DriverBench module SingleDoc # "This benchmark tests driver performance sending an indexed query to # the database and reading a single document in response." # # @api private class FindOneByID < Mongo::DriverBench::SingleDoc::Base bench_name 'Find one by ID' def file_name 'single_and_multi_document/tweet.json' end def setup super 10.times do |i| docs = Array.new(1000) { |j| dataset.merge(_id: (i * 1000) + j + 1) } @collection.insert_many(docs) end end def do_task 10_000.times do |i| collection.find(_id: i + 1).to_a end end end end end end mongo-ruby-driver-2.21.3/profile/driver_bench/single_doc/insert_one.rb000066400000000000000000000004061505113246500260620ustar00rootroot00000000000000# frozen_string_literal: true require_relative 'insert_one/large_doc' require_relative 'insert_one/small_doc' module Mongo module DriverBench module SingleDoc module InsertOne ALL = [ LargeDoc, SmallDoc ].freeze end end end end mongo-ruby-driver-2.21.3/profile/driver_bench/single_doc/insert_one/000077500000000000000000000000001505113246500255355ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/profile/driver_bench/single_doc/insert_one/base.rb000066400000000000000000000011611505113246500267730ustar00rootroot00000000000000# frozen_string_literal: true require_relative '../base' module Mongo module DriverBench module SingleDoc module InsertOne # Abstract base class for "insert one" benchmarks. # # @api private class Base < Mongo::DriverBench::SingleDoc::Base attr_reader :repetitions alias scale repetitions def before_task collection.drop collection.create end def do_task repetitions.times do collection.insert_one(dataset) end end end end end end end mongo-ruby-driver-2.21.3/profile/driver_bench/single_doc/insert_one/large_doc.rb000066400000000000000000000011521505113246500300000ustar00rootroot00000000000000# frozen_string_literal: true require_relative 'base' module Mongo module DriverBench module SingleDoc module InsertOne # "This benchmark tests driver performance inserting a single, large # document to the database." # # @api private class LargeDoc < Mongo::DriverBench::SingleDoc::InsertOne::Base bench_name 'Large doc insertOne' def initialize super @repetitions = 10 end def file_name 'single_and_multi_document/large_doc.json' end end end end end end mongo-ruby-driver-2.21.3/profile/driver_bench/single_doc/insert_one/small_doc.rb000066400000000000000000000011561505113246500300220ustar00rootroot00000000000000# frozen_string_literal: true require_relative 'base' module Mongo module DriverBench module SingleDoc module InsertOne # "This benchmark tests driver performance inserting a single, small # document to the database." 
# # @api private class SmallDoc < Mongo::DriverBench::SingleDoc::InsertOne::Base bench_name 'Small doc insertOne' def initialize super @repetitions = 10_000 end def file_name 'single_and_multi_document/small_doc.json' end end end end end end mongo-ruby-driver-2.21.3/profile/driver_bench/single_doc/run_command.rb000066400000000000000000000013521505113246500262200ustar00rootroot00000000000000# frozen_string_literal: true require_relative 'base' module Mongo module DriverBench module SingleDoc # "This benchmark tests driver performance sending a command to the # database and reading a response." # # @api private class RunCommand < Mongo::DriverBench::SingleDoc::Base bench_name 'Run command' def setup super @dataset_size = { hello: true }.to_bson.length * scale end def prepare_client @client = new_client end def cleanup_client # do nothing end def do_task 10_000.times do client.database.command(hello: true) end end end end end end mongo-ruby-driver-2.21.3/profile/driver_bench/suite.rb000066400000000000000000000072211505113246500227420ustar00rootroot00000000000000# frozen_string_literal: true require_relative 'bson' require_relative 'multi_doc' require_relative 'parallel' require_relative 'single_doc' module Mongo module DriverBench ALL = [ *BSON::ALL, *SingleDoc::ALL, *MultiDoc::ALL, *Parallel::ALL ].freeze BENCHES = { 'BSONBench' => BSON::BENCH, 'SingleBench' => SingleDoc::BENCH, 'MultiBench' => MultiDoc::BENCH, 'ParallelBench' => Parallel::BENCH, 'ReadBench' => [ SingleDoc::FindOneByID, MultiDoc::FindMany, MultiDoc::GridFS::Download, Parallel::LDJSON::Export, Parallel::GridFS::Download ].freeze, 'WriteBench' => [ SingleDoc::InsertOne::SmallDoc, SingleDoc::InsertOne::LargeDoc, MultiDoc::BulkInsert::SmallDoc, MultiDoc::BulkInsert::LargeDoc, MultiDoc::GridFS::Upload, Parallel::LDJSON::Import, Parallel::GridFS::Upload ].freeze }.freeze # A benchmark suite for running all benchmarks and aggregating (and # reporting) the results. # # @api private class Suite PERCENTILES = [ 10, 25, 50, 75, 90, 95, 98, 99 ].freeze def self.run! 
new.run end def run perf_data = [] benches = Hash.new { |h, k| h[k] = [] } ALL.each do |klass| result = run_benchmark(klass) perf_data << compile_perf_data(result) append_to_benchmarks(klass, result, benches) end perf_data += compile_benchmarks(benches) save_perf_data(perf_data) summarize_perf_data(perf_data) end private def run_benchmark(klass) print klass.bench_name, ': ' $stdout.flush klass.new.run.tap do |result| puts format('%4.4g', result[:score]) end end def compile_perf_data(result) percentile_data = PERCENTILES.map do |percentile| { 'name' => "time-#{percentile}%", 'value' => result[:percentiles][percentile] } end { 'info' => { 'test_name' => result[:name], 'args' => {}, }, 'metrics' => [ { 'name' => 'score', 'value' => result[:score] }, *percentile_data ] } end def append_to_benchmarks(klass, result, benches) BENCHES.each do |benchmark, list| benches[benchmark] << result[:score] if list.include?(klass) end end def compile_benchmarks(benches) benches.each_key do |key| benches[key] = benches[key].sum / benches[key].length end benches['DriverBench'] = (benches['ReadBench'] + benches['WriteBench']) / 2 benches.map do |bench, score| { 'info' => { 'test_name' => bench, 'args' => {} }, 'metrics' => [ { 'name' => 'score', 'value' => score } ] } end end # rubocop:disable Metrics/AbcSize def summarize_perf_data(data) puts '===== Performance Results =====' data.each do |item| puts format('%s : %4.4g', item['info']['test_name'], item['metrics'][0]['value']) next unless item['metrics'].length > 1 item['metrics'].each do |metric| next if metric['name'] == 'score' puts format(' %s : %4.4g', metric['name'], metric['value']) end end end # rubocop:enable Metrics/AbcSize def save_perf_data(data, file_name: ENV['PERFORMANCE_RESULTS_FILE'] || 'results.json') File.write(file_name, data.to_json) end end end end mongo-ruby-driver-2.21.3/sbom.json000066400000000000000000000030211505113246500170170ustar00rootroot00000000000000{ "metadata": { "timestamp": "2024-06-10T11:52:41.052882+00:00", "tools": [ { "externalReferences": [ { "type": "build-system", "url": "https://github.com/CycloneDX/cyclonedx-python-lib/actions" }, { "type": "distribution", "url": "https://pypi.org/project/cyclonedx-python-lib/" }, { "type": "documentation", "url": "https://cyclonedx-python-library.readthedocs.io/" }, { "type": "issue-tracker", "url": "https://github.com/CycloneDX/cyclonedx-python-lib/issues" }, { "type": "license", "url": "https://github.com/CycloneDX/cyclonedx-python-lib/blob/main/LICENSE" }, { "type": "release-notes", "url": "https://github.com/CycloneDX/cyclonedx-python-lib/blob/main/CHANGELOG.md" }, { "type": "vcs", "url": "https://github.com/CycloneDX/cyclonedx-python-lib" }, { "type": "website", "url": "https://github.com/CycloneDX/cyclonedx-python-lib/#readme" } ], "name": "cyclonedx-python-lib", "vendor": "CycloneDX", "version": "6.4.4" } ] }, "serialNumber": "urn:uuid:397e5109-c899-4562-a23b-d5bb1988f069", "version": 1, "$schema": "http://cyclonedx.org/schema/bom-1.5.schema.json", "bomFormat": "CycloneDX", "specVersion": "1.5" } mongo-ruby-driver-2.21.3/spec/000077500000000000000000000000001505113246500161225ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/NOTES.aws-auth.md000066400000000000000000000362631505113246500210760ustar00rootroot00000000000000# AWS Authentication Implementation Notes ## AWS Account Per [its documentation](https://docs.aws.amazon.com/STS/latest/APIReference/API_GetCallerIdentity.html, the GetCallerIdentity API call that the server makes to STS to authenticate the user 
using the MONGODB-AWS auth mechanism requires no privileges. This means that
in order to test authentication using non-temporary credentials (i.e., AWS
access key id and secret access key only) it is sufficient to create an IAM
user that has no permissions but does have programmatic access enabled
(i.e. has an access key id and secret access key).

## AWS Signature V4

The driver implements the AWS signature v4 internally rather than relying on
a third-party library (such as the
[AWS SDK for Ruby](https://docs.aws.amazon.com/sdk-for-ruby/v3/api/index.html))
to provide the signature implementation. The implementation is quite compact,
but getting it working took some effort due to:

1. [The server not logging AWS responses when authentication fails](https://jira.mongodb.org/browse/SERVER-46909).
2. Some of the messages from STS being quite cryptic (I could not figure out
   what the problem was for either "Request is missing Authentication Token"
   or "Request must contain a signature that conforms to AWS standards", and
   ultimately resolved these problems by comparing my requests to those
   produced by the AWS SDK).
3. Amazon's own documentation not providing an example signature calculation
   that could be followed to verify correctness, especially since this is a
   multi-step process and all kinds of subtle errors are possible in many of
   the steps, like using a date instead of a time, hex-encoding a MAC in an
   intermediate step, or not separating header values from the list of
   signed headers by two newlines.

### Reference Implementation - AWS SDK

To see actual working STS requests I used Amazon's
[AWS SDK for Ruby](https://docs.aws.amazon.com/sdk-for-ruby/v3/api/index.html)
([API docs for the STS client](https://docs.aws.amazon.com/sdk-for-ruby/v3/api/Aws/STS/Client.html),
[configuration documentation](https://docs.aws.amazon.com/sdk-for-ruby/v3/developer-guide/setup-config.html))
as follows:

1. Set the credentials in the environment (note that the region must be
   explicitly provided):

        export AWS_ACCESS_KEY_ID=AKIAREALKEY
        export AWS_SECRET_ACCESS_KEY=Sweee/realsecret
        export AWS_REGION=us-east-1

2. Install the correct gem and launch IRb:

        gem install aws-sdk-core
        irb -raws-sdk-core -Iaws/sts

3. Send a GetCallerIdentity request, as used by MongoDB server:

        Aws::STS::Client.new(
          logger: Logger.new(STDERR, level: :debug),
          http_wire_trace: true,
        ).get_caller_identity

This call enables HTTP request and response logging and produces output
similar to the following:

    opening connection to sts.amazonaws.com:443...
    opened
    starting SSL for sts.amazonaws.com:443...
    SSL established, protocol: TLSv1.2, cipher: ECDHE-RSA-AES128-SHA
    <- "POST / HTTP/1.1\r\nContent-Type: application/x-www-form-urlencoded; charset=utf-8\r\nAccept-Encoding: \r\nUser-Agent: aws-sdk-ruby3/3.91.1 ruby/2.7.0 x86_64-linux aws-sdk-core/3.91.1\r\nHost: sts.amazonaws.com\r\nX-Amz-Date: 20200317T194745Z\r\nX-Amz-Content-Sha256: ab821ae955788b0e33ebd34c208442ccfc2d406e2edc5e7a39bd6458fbb4f843\r\nAuthorization: AWS4-HMAC-SHA256 Credential=AKIAREALKEY/20200317/us-east-1/sts/aws4_request, SignedHeaders=content-type;host;x-amz-content-sha256;x-amz-date, Signature=6cd3a60a2d7dfba0dcd17f9c4c42d0186de5830cf99545332253a327bba14131\r\nContent-Length: 43\r\nAccept: */*\r\n\r\n"
    -> "HTTP/1.1 200 OK\r\n"
    -> "x-amzn-RequestId: c56f5d68-8763-4032-a835-fd95efd83fa6\r\n"
    -> "Content-Type: text/xml\r\n"
    -> "Content-Length: 401\r\n"
    -> "Date: Tue, 17 Mar 2020 19:47:44 GMT\r\n"
    -> "\r\n"
    reading 401 bytes...
    -> "<GetCallerIdentityResponse xmlns=\"https://sts.amazonaws.com/doc/2011-06-15/\">"
    -> "\n  <GetCallerIdentityResult>\n  <Arn>arn:aws:iam::5851234356:user/test</Arn>\n  <UserId>AIDAREALUSERID</UserId>\n  <Account>5851234356</Account>\n  </GetCallerIdentityResult>\n  <ResponseMetadata>\n  <RequestId>c56f5d68-8763-4032-a835-fd95efd83fa6</RequestId>\n  </ResponseMetadata>\n</GetCallerIdentityResponse>\n"
    read 401 bytes
    Conn keep-alive
    I, [2020-03-17T15:47:45.275421 #9815]  INFO -- : [Aws::STS::Client 200 0.091573 0 retries] get_caller_identity() => #<struct Aws::STS::Types::GetCallerIdentityResponse user_id="AIDAREALUSERID", account="5851234356", arn="arn:aws:iam::5851234356:user/test">

Note that:

1. The set of headers sent by the AWS SDK differs from the set of headers
   that the MONGODB-AWS auth mechanism specification mentions. I used the
   AWS SDK implementation as a guide to determine the correct shape of the
   request to STS, and in particular the `Authorization` header. The source
   code of Amazon's implementation is
   [here](https://github.com/aws/aws-sdk-ruby/blob/master/gems/aws-sigv4/lib/aws-sigv4/signer.rb)
   and it generates, in particular, the `x-amz-content-sha256` header, which
   the MONGODB-AWS auth mechanism specification does not mention.

2. This is a working request which can be replayed: the request created by
   the AWS SDK can be sent repeatedly, with minor alterations, to study STS
   error reporting behavior. As of this writing, STS allows a 15-minute
   window during which a request may be replayed.

3. The printed request only shows the headers and not the request body. In
   the case of GetCallerIdentity, the payload is fixed and is the same as
   what the MONGODB-AWS auth mechanism specification requires
   (`Action=GetCallerIdentity&Version=2011-06-15`).

Because the AWS SDK includes a different set of headers in its requests, it
is not feasible to compare the canonical requests generated by the AWS SDK
verbatim to the canonical requests generated by the driver.

### Manual Requests

It is possible to manually send requests to STS using the OpenSSL `s_client`
tool in combination with the [printf](https://linux.die.net/man/3/printf)
utility to transform the newline escapes. A sample command replaying the
request printed above is as follows:

    (printf "POST / HTTP/1.1\r\nContent-Type: application/x-www-form-urlencoded; charset=utf-8\r\nAccept-Encoding: \r\nUser-Agent: aws-sdk-ruby3/3.91.1 ruby/2.7.0 x86_64-linux aws-sdk-core/3.91.1\r\nHost: sts.amazonaws.com\r\nX-Amz-Date: 20200317T194745Z\r\nX-Amz-Content-Sha256: ab821ae955788b0e33ebd34c208442ccfc2d406e2edc5e7a39bd6458fbb4f843\r\nAuthorization: AWS4-HMAC-SHA256 Credential=AKIAREALKEY/20200317/us-east-1/sts/aws4_request, SignedHeaders=content-type;host;x-amz-content-sha256;x-amz-date, Signature=6cd3a60a2d7dfba0dcd17f9c4c42d0186de5830cf99545332253a327bba14131\r\nContent-Length: 43\r\nAccept: */*\r\n\r\n" && echo "Action=GetCallerIdentity&Version=2011-06-15" && sleep 5) |openssl s_client -connect sts.amazonaws.com:443

Note the sleep call - `s_client` does not wait for the remote end to provide
a response before exiting, thus the sleep on the input side allows 5 seconds
for STS to process the request and respond.

For reference, Amazon provides
[GetCallerIdentity API documentation](https://docs.aws.amazon.com/STS/latest/APIReference/API_GetCallerIdentity.html).

### Integration Test - Signature Generation

The Ruby driver includes an integration test for signature generation, where
the driver makes the call to the `GetCallerIdentity` STS endpoint using the
provided AWS credentials. This test is in
`spec/integration/aws_auth_request_spec.rb`.

### STS Error Responses

The error responses produced by STS sometimes do not clearly indicate the
problem. Below are some of the puzzling responses I encountered:

- *Request is missing Authentication Token*: the request is missing the
  `Authorization` header, or the value of the header does not begin with
  `AWS4-`.
  For example, this error is produced if the signature algorithm is
  erroneously given as `AWS-HMAC-SHA256` instead of `AWS4-HMAC-SHA256` with
  the remainder of the header value being correctly constructed. This error
  is also produced if the value of the header erroneously includes the name
  of the header (i.e. the header name is specified twice in the header line)
  but the value is otherwise completely valid. This error has no relation to
  the "session token" or "security token" as used with temporary AWS
  credentials.

- *The security token included in the request is invalid*: this error can be
  produced in several circumstances:

  - When the AWS access key id, as specified in the scope part of the
    `Authorization` header, is not a valid access key id. In the case of
    non-temporary credentials being used for authentication, the error
    refers to a "security token" but the authentication process does not
    actually use a security token as this term is used in the AWS
    documentation describing temporary credentials.
  - When using temporary credentials and the security token is not provided
    in the STS request at all (the `x-amz-security-token` header).

- *Signature expired: 20200317T000000Z is now earlier than 20200317T222541Z
  (20200317T224041Z - 15 min.)*: this error happens when the `x-amz-date`
  header value is the formatted date (`YYYYMMDD`) rather than the ISO8601
  formatted time (`YYYYMMDDTHHMMSSZ`). Note that the string
  `20200317T000000Z` is never explicitly provided in the request - it is
  derived by AWS from the provided header `x-amz-date: 20200317`.

- *The request signature we calculated does not match the signature you
  provided. Check your AWS Secret Access Key and signing method. Consult the
  service documentation for details*: this is the error produced when the
  signature is not calculated correctly but everything else in the request
  is valid. If a different error is produced, most likely the problem is in
  something other than signature calculation.

- *The security token included in the request is expired*: this error is
  produced when temporary credentials are used and the credentials have
  expired.

See also [AWS documentation for STS error messages](https://docs.aws.amazon.com/STS/latest/APIReference/CommonErrors.html).

### Resources

Generally I found Amazon's own documentation to be the best for implementing
the signature calculation. The following documents should be read in order:

- [Signing AWS requests overview](https://docs.aws.amazon.com/general/latest/gr/sigv4_signing.html)
- [Creating canonical request](https://docs.aws.amazon.com/general/latest/gr/sigv4-create-canonical-request.html)
- [Creating string to sign](https://docs.aws.amazon.com/general/latest/gr/sigv4-create-string-to-sign.html)
- [Calculating signature](https://docs.aws.amazon.com/general/latest/gr/sigv4-calculate-signature.html)
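To tie these documents together, the signing key derivation and the final
signature computation can be expressed in a few lines of Ruby. This is an
illustrative sketch, not the driver's implementation; `string_to_sign` is a
placeholder for the output of the "Creating string to sign" step:

    require 'openssl'

    # HMAC-SHA256 returning the raw MAC. Intermediate results must stay
    # binary - hex-encoding a MAC in an intermediate step is one of the
    # subtle errors mentioned earlier in this document.
    def hmac(key, data)
      OpenSSL::HMAC.digest('sha256', key, data)
    end

    date = '20200317'           # the YYYYMMDD portion of x-amz-date
    region = 'us-east-1'
    service = 'sts'
    string_to_sign = '...'      # built per "Creating string to sign"

    k_date = hmac('AWS4' + ENV['AWS_SECRET_ACCESS_KEY'], date)
    k_region = hmac(k_date, region)
    k_service = hmac(k_region, service)
    k_signing = hmac(k_service, 'aws4_request')

    # Only the final signature is hex-encoded.
    signature = OpenSSL::HMAC.hexdigest('sha256', k_signing, string_to_sign)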
### Signature Debugger

The most excellent [awssignature.com](http://www.awssignature.com/) was
indispensable in debugging the actual signature calculation process.

### MongoDB Server

MongoDB server internally defines the set of headers that it is prepared to
handle when it is processing AWS authentication. Headers that are not part of
that set cause the server to reject the driver's payloads. The error
reporting when additional headers are provided, and when the correct set of
headers is provided but the headers are not ordered lexicographically,
[can be misleading](https://jira.mongodb.org/browse/SERVER-47488).

## Direct AWS Requests

[STS GetCallerIdentity API docs](https://docs.aws.amazon.com/STS/latest/APIReference/API_GetCallerIdentity.html)

When making direct requests to AWS, adding the `Accept: application/json`
header will return the results in the JSON format, including the errors.

## AWS CLI

[Configuration reference](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-envvars.html)

Note that the AWS CLI uses the `AWS_DEFAULT_REGION` environment variable to
configure the region used for operations.

## AWS Ruby SDK

[Configuration reference](https://docs.aws.amazon.com/sdk-for-ruby/v3/developer-guide/setup-config.html)

Note that the AWS Ruby SDK uses the `AWS_REGION` environment variable to
configure the region used for operations.

[STS::Client#assume_role documentation](https://docs.aws.amazon.com/sdk-for-ruby/v3/api/Aws/STS/Client.html#assume_role-instance_method)

## IMDSv2

`X-aws-ec2-metadata-token-ttl-seconds` is a required header when using
IMDSv2 EC2 instance metadata requests. This header is used in the examples on
[Amazon's page describing IMDSv2](https://aws.amazon.com/blogs/security/defense-in-depth-open-firewalls-reverse-proxies-ssrf-vulnerabilities-ec2-instance-metadata-service/),
but is not explicitly stated as being required. Not providing this header
fails the PUT requests with HTTP code 400.
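For illustration, the IMDSv2 token handshake can be performed from Ruby as
follows. This is a sketch using the metadata endpoints documented by Amazon,
not driver code:

    require 'net/http'

    http = Net::HTTP.new('169.254.169.254', 80)

    # Obtain a session token; the TTL header is mandatory.
    token_request = Net::HTTP::Put.new('/latest/api/token')
    token_request['X-aws-ec2-metadata-token-ttl-seconds'] = '21600'
    token = http.request(token_request).body

    # Use the token to read the name of the instance role.
    role_request = Net::HTTP::Get.new('/latest/meta-data/iam/security-credentials/')
    role_request['X-aws-ec2-metadata-token'] = token
    role = http.request(role_request).body.strip

    # Retrieve the temporary credentials for that role.
    creds_request = Net::HTTP::Get.new("/latest/meta-data/iam/security-credentials/#{role}")
    creds_request['X-aws-ec2-metadata-token'] = token
    puts http.request(creds_request).body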
## IAM Roles For EC2 Instances

### Metadata Rate Limit

[Amazon documentation](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-data-retrieval.html#instancedata-throttling)
states that the EC2 instance metadata endpoint is rate limited. Since the
driver accesses it to obtain credentials whenever a connection is
established, rate limits may adversely affect the driver's ability to
establish connections.

### Instance Profile Assignment

It can take over 5 seconds for an instance to see its instance profile
change reflected in the instance metadata. Evergreen test runs seem to
experience this delay to a significantly larger extent than testing in a
standalone AWS account.

## IAM Roles For ECS Tasks

### ECS Task Roles

When an ECS task (or more precisely, the task definition) is created, it is
possible to specify an *execution role* and a *task role*. The two are
completely separate; an execution role is required to, for example, be able
to send container logs to CloudWatch if the container is running in Fargate,
and a task role is required for AWS authentication purposes. The ECS task
role is also separate from the EC2 instance role and from the IAM role that a
user assumes via an AssumeRole request - these roles all require different
configuration.

### `AWS_CONTAINER_CREDENTIALS_RELATIVE_URI` Scope

As stated in
[this Amazon support document](https://aws.amazon.com/premiumsupport/knowledge-center/ecs-iam-task-roles-config-errors/),
the `AWS_CONTAINER_CREDENTIALS_RELATIVE_URI` environment variable is only
available to the PID 1 process in the container. Other processes need to
extract it from PID 1's environment:

    strings /proc/1/environ

### Other ECS Metadata

`strings /proc/1/environ` also shows a number of other environment variables
available in the container with metadata. For example, a test container
yields:

    HOSTNAME=f893c90ec4bd
    ECS_CONTAINER_METADATA_URI=http://169.254.170.2/v3/5fb0b11b-c4c8-4cdb-b68b-edf70b3f4937
    AWS_DEFAULT_REGION=us-east-2
    AWS_EXECUTION_ENV=AWS_ECS_FARGATE
    AWS_REGION=us-east-2
    AWS_CONTAINER_CREDENTIALS_RELATIVE_URI=/v2/credentials/f17b5770-9a0d-498c-8d26-eea69f8d0924

### Metadata Rate Limit

[Amazon documentation](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/troubleshoot-task-iam-roles.html)
states that the ECS task metadata endpoint is subject to rate limiting, which
is configured via the
[ECS_TASK_METADATA_RPS_LIMIT container agent parameter](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-agent-config.html).
When the rate limit is reached, requests fail with the
`429 Too Many Requests` HTTP status code. Since the driver accesses this
endpoint to obtain credentials whenever a connection is established, rate
limits may adversely affect the driver's ability to establish connections.
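For reference, retrieving the task credentials from Ruby looks roughly as
follows. This is a sketch using the fixed ECS credentials endpoint documented
by Amazon; the printed key names follow Amazon's documented response shape:

    require 'net/http'
    require 'json'

    relative_uri = ENV.fetch('AWS_CONTAINER_CREDENTIALS_RELATIVE_URI')
    body = Net::HTTP.get(URI("http://169.254.170.2#{relative_uri}"))

    # The response contains AccessKeyId, SecretAccessKey, Token, Expiration
    # and RoleArn.
    credentials = JSON.parse(body)
    puts credentials['Expiration']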
mongo-ruby-driver-2.21.3/spec/README.aws-auth.md000066400000000000000000000337241505113246500211420ustar00rootroot00000000000000
# Testing AWS Authentication

## Server Configuration

AWS authentication requires the following to be done on the server side:

1. The AWS authentication mechanism must be enabled on the server. This is
   done by adding `MONGODB-AWS` to the values in the
   `authenticationMechanisms` server parameter.
2. A user must be created in the `$external` database with the ARN matching
   the IAM user or role that the client will authenticate as.

Note that the server does not need to have AWS keys provided to it - it uses
the keys that the client provides during authentication.

An easy way to configure the deployment in the required fashion is to
configure the deployment to accept both password authentication and AWS
authentication, and add a bootstrap user:

    mlaunch init --single --auth --username root --password toor \
      --setParameter authenticationMechanisms=MONGODB-AWS,SCRAM-SHA-1,SCRAM-SHA-256 \
      --dir /tmp/db

Then connect as the bootstrap user and create AWS-mapped users:

    mongosh mongodb://root:toor@localhost:27017

    # In the mongo shell:
    use $external
    db.createUser({
      user: 'arn:aws:iam::1234567890:user/test',
      roles: [{role:'root', db:'admin'}]})

The ARN can be retrieved from the AWS management console. Alternatively, if
the IAM user's access and secret keys are known, trying to authenticate as
the user will log the user's ARN into the server log when authentication
fails; this ARN can then be used to create the server user.

With the server user created, it is possible to authenticate using AWS. The
following example uses regular user credentials for an IAM user created as
described in the next section:

    mongosh 'mongodb://AKIAAAAAAAAAAAA:t9t2mawssecretkey@localhost:27017/?authMechanism=MONGODB-AWS&authsource=$external'

To authenticate, provide the IAM user's access key id as the username and
secret access key as the password. Note that the username and the password
must be percent-escaped when they are passed in the URI as the examples here
show. Also note that the user's ARN is not explicitly specified by the
client during authentication - the server determines the ARN from the access
key id and the secret access key provided by the client.

## Provisioning Tools

The Ruby driver includes tools that set up the resources needed to test AWS
authentication. These are exposed by the `.evergreen/aws` script. To use
this script, it must be provided AWS credentials and the region to operate
in. The credentials and region can be given as command-line arguments or set
in the environment, as follows:

    export AWS_ACCESS_KEY_ID=AKIAYOURACCESSKEY
    export AWS_SECRET_ACCESS_KEY=YOURSECRETACCESSKEY
    export AWS_REGION=us-east-1

If you also perform manual testing (for example by following some of the
instructions in this file), ensure AWS_SESSION_TOKEN is not set unless you
are intending to invoke the `.evergreen/aws` script with temporary
credentials:

    unset AWS_SESSION_TOKEN

Note that the [AWS CLI](https://aws.amazon.com/cli/) uses a different
environment variable for the region - `AWS_DEFAULT_REGION` rather than
`AWS_REGION`. If you also intend to use the AWS CLI, execute:

    export AWS_DEFAULT_REGION=$AWS_REGION

To verify that credentials are correctly set in the environment, you can
perform the following operations:

    # Test driver tooling
    ./.evergreen/aws key-pairs

    # Test AWS CLI
    aws sts get-caller-identity

Alternatively, to provide the credentials on each call to the driver's `aws`
script, use the `-a` and `-s` arguments as follows:

    ./.evergreen/aws -a KEY-ID -s SECRET-KEY key-pairs

## Common Setup

In order to test all AWS authentication scenarios, a large number of AWS
objects needs to be configured. This configuration is split into two parts:
common setup and scenario-specific setup.

The common setup is performed by running:

    ./.evergreen/aws setup-resources

This creates resources like security groups, IAM users and CloudWatch log
groups that do not cost money. It is possible to test authentication with
regular credentials and temporary credentials obtained via an AssumeRole
request using these resources.

In order to test authentication from an EC2 instance or an ECS task, the
instance and/or the task need to be started, which costs money and is
performed as separate steps as detailed below.

## Regular Credentials - IAM User

AWS authentication as a regular IAM user requires having an IAM user to
authenticate as. This user can be created using the AWS management console.
The IAM user requires no permissions, but it must have programmatic access
enabled (i.e. have an access key ID and the secret access key).

An IAM user is created as part of the common setup described earlier. To
reset and retrieve the access key ID and secret access key for the created
user, run:

    ./.evergreen/aws reset-keys

Note that if the user already had an access key, the old credentials are
removed and replaced with new credentials.

Given the credentials for the test user, the URI for running the driver test
suite can be formed as follows:

    export "MONGODB_URI=mongodb://$AWS_ACCESS_KEY_ID:$AWS_SECRET_ACCESS_KEY@localhost:27017/?authMechanism=MONGODB-AWS&authsource=$external"
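The same credentials can also be exercised from Ruby directly rather than
through the test suite. A minimal sketch, assuming a local server configured
as described above (`auth_mech: :aws` selects the driver's MONGODB-AWS
mechanism):

    require 'mongo'

    client = Mongo::Client.new(['localhost:27017'],
      auth_mech: :aws,
      user: ENV['AWS_ACCESS_KEY_ID'],
      password: ENV['AWS_SECRET_ACCESS_KEY'],
      auth_source: '$external')

    # Forces a connection, and thus authentication, to happen.
    client.database.command(ping: 1)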
## Temporary Credentials - AssumeRole Request

To test a user authenticating with an assumed role, you can follow
[the example provided in Amazon documentation](https://aws.amazon.com/premiumsupport/knowledge-center/iam-assume-role-cli/)
to set up the assumed role and related objects and obtain temporary
credentials, or use the driver's tooling with the commands given below.
Since the temporary credentials expire, the role needs to be re-assumed
periodically during testing and the new credentials and session token
retrieved.

If following the example in Amazon's documentation,
[jq](https://stedolan.github.io/jq/) can be used to efficiently place the
credentials from the AssumeRole request into the environment, as follows:

    # Call given in the example guide
    aws sts assume-role --role-arn arn:aws:iam::YOUR-ACCOUNT-ID:role/example-role --role-session-name AWSCLI-Session >~/.aws-assumed-role.json

    # Extract the credentials
    export AWS_ACCESS_KEY_ID=`jq .Credentials.AccessKeyId ~/.aws-assumed-role.json -r`
    export AWS_SECRET_ACCESS_KEY=`jq .Credentials.SecretAccessKey ~/.aws-assumed-role.json -r`
    export AWS_SESSION_TOKEN=`jq .Credentials.SessionToken ~/.aws-assumed-role.json -r`

Alternatively, the `./.evergreen/aws` script can be used to assume the role.
By default, it will assume the role that the `setup-resources` action
configured.

Note: the ability to assume this role is granted to the
[IAM user](#regular-credentials-iam-user) that the provisioning tool
creates. Therefore the shell must be configured with credentials of the test
user, not with credentials of the master user that performed the
provisioning.

To assume the role created by the common setup, run:

    ./.evergreen/aws assume-role

It is also possible to specify the ARN of the role to assume manually, if
you created the role using other means:

    ./.evergreen/aws assume-role ASSUME-ROLE-ARN

To place the credentials into the environment:

    eval $(./.evergreen/aws assume-role)
    export AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN

With the credentials in the environment, to verify that the role was assumed
and the credentials are complete and correct, perform a `GetCallerIdentity`
call:

    aws sts get-caller-identity

Given the credentials for the test user, the URI for running the driver test
suite can be formed as follows:

    export "MONGODB_URI=mongodb://$AWS_ACCESS_KEY_ID:$AWS_SECRET_ACCESS_KEY@localhost:27017/?authMechanism=MONGODB-AWS&authsource=$external&authMechanismProperties=AWS_SESSION_TOKEN:$AWS_SESSION_TOKEN"
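The role can also be assumed from Ruby using the AWS SDK referenced earlier
in this document. A rough sketch, assuming the test user's credentials and
`AWS_REGION` are already in the environment; the role ARN placeholder
mirrors the Amazon example above:

    require 'aws-sdk-core'

    resp = Aws::STS::Client.new.assume_role(
      role_arn: 'arn:aws:iam::YOUR-ACCOUNT-ID:role/example-role',
      role_session_name: 'ruby-driver-test',
    )

    # Export the temporary credentials for subsequent tooling to use.
    creds = resp.credentials
    ENV['AWS_ACCESS_KEY_ID'] = creds.access_key_id
    ENV['AWS_SECRET_ACCESS_KEY'] = creds.secret_access_key
    ENV['AWS_SESSION_TOKEN'] = creds.session_token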
## Temporary Credentials - EC2 Instance Role

To test authentication
[using temporary credentials for an EC2 instance role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2.html),
an EC2 instance launched with an IAM role or an EC2 instance configured with
an instance profile is required. No permissions are needed for the IAM role
used with the EC2 instance.

To create an EC2 instance with an attached role using the AWS console:

1. Create an IAM role that the instance will use. It is not necessary to
   specify any permissions.
2. Launch an instance, choosing the IAM role created in the launch wizard.

To define an instance profile which allows adding and removing an IAM role
to/from an instance at runtime, follow the Amazon documentation
[here](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html#attach-iam-role).
To test temporary credentials obtained via an EC2 instance role in
Evergreen, an instance profile must be associated with the running instance
as per this guide.

The driver provides tooling to configure a suitable instance profile and
launch an EC2 instance that can have this instance profile attached to it.
The instance profile and associated IAM role are created by the common setup
described above. To launch an EC2 instance suitable for testing
authentication via an EC2 role, run:

    ./.evergreen/aws launch-ec2 path/to/ssh.key.pub

The `launch-ec2` command takes one argument which is the path to the public
key for the key pair to use for SSH access to the instance. This script will
output the instance ID of the launched instance.

The instance initially does not have an instance profile assigned; to assign
the instance profile created in the common setup to the instance, run:

    ./.evergreen/aws set-instance-profile i-instanceid

To remove the instance profile from the instance, run:

    ./.evergreen/aws clear-instance-profile i-instanceid

To provision the instance for running the driver's test suite via Docker,
run:

    ip=12.34.56.78
    ./.evergreen/provision-remote ubuntu@$ip docker

To run the AWS auth tests using the EC2 instance role credentials, run:

    ./.evergreen/test-docker-remote ubuntu@$ip \
      MONGODB_VERSION=4.4 AUTH=aws-ec2 \
      -s .evergreen/run-tests-aws-auth.sh \
      -a .env.private

Note that if you are not using the MongoDB AWS account for testing, you
would need to specify MONGO_RUBY_DRIVER_AWS_AUTH_USER_ARN in your
`.env.private` file with the ARN of the user to add to MongoDB. The easiest
way to find out this value is to run the tests and note which username the
test suite is trying to authenticate as.

To terminate the instance, run:

    ./.evergreen/aws stop-ec2

## Temporary Credentials - ECS Task Role

The basic procedure for setting up an ECS cluster is described in
[this guide](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ECS_AWSCLI_Fargate.html).
For testing AWS auth, the ECS task must have a role assigned to it, which is
covered in
[this guide](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html)
and additionally
[here](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_execution_IAM_role.html).

Although not required for testing AWS auth specifically, it is very helpful
for general troubleshooting of ECS provisioning to have log output from the
tasks. Logging to CloudWatch is covered by
[this Amazon guide](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/QuickStartEC2Instance.html)
with these potentially helpful
[additional](https://stackoverflow.com/questions/50397217/how-to-determine-the-cloudwatch-log-stream-for-a-fargate-service#50704804)
[resources](https://help.sumologic.com/03Send-Data/Collect-from-Other-Data-Sources/AWS_Fargate_log_collection).
A log group must be manually created, the steps for which are described
[here](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Working-with-log-groups-and-streams.html).

Additional references:

- [Task definition CPU and memory values](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-task-definition.html)

The common setup creates all of the necessary prerequisites to test
authentication using ECS task credentials, which includes an empty ECS
cluster. To test authentication, a service needs to be created in the ECS
cluster that runs the SSH daemon, which can be done by running:

    ./.evergreen/aws launch-ecs path/to/ssh.key.pub

The `launch-ecs` command takes one argument which is the path to the public
key for the key pair to use for SSH access to the instance. This script
generally produces no output if it succeeds.

As the service takes some time to start, run the following command to check
its status:

    ./.evergreen/aws ecs-status

The status output shows the tasks running in the ECS cluster ordered by
their generation, with the newest ones first. The event log for the cluster
is displayed, as well as the event stream for the running task of the latest
available generation, which includes the Docker execution output collected
via CloudWatch.
The status output includes the public IP of the running task once it is
available, which can be used to SSH into the container and run the tests.
Note that when AWS auth from an ECS task is tested in Evergreen, the task is
accessed via its private IP; when the test is performed using the
provisioning tooling described in this document, the task is accessed via
its public IP.

If the public IP address is in the `IP` shell variable, provision the task:

    ./.evergreen/provision-remote root@$IP local

To run the credentials retrieval test on the ECS task, execute:

    ./.evergreen/test-remote root@$IP env AUTH=aws-ecs RVM_RUBY=ruby-2.7 MONGODB_VERSION=4.4 TEST_CMD='rspec spec/integration/aws*spec.rb' .evergreen/run-tests.sh

To run the test again without rebuilding the remote environment, execute:

    ./.evergreen/test-remote -e root@$IP \
      env AUTH=aws-ecs RVM_RUBY=ruby-2.7 sh -c '\
        export PATH=`pwd`/rubies/ruby-2.7/bin:$PATH && \
        eval export `strings /proc/1/environ |grep ^AWS_CONTAINER_CREDENTIALS_RELATIVE_URI` && \
        bundle exec rspec spec/integration/aws*spec.rb'

Note that this command retrieves the value of
`AWS_CONTAINER_CREDENTIALS_RELATIVE_URI` from the PID 1 environment and
places it into the current environment prior to running the tests.

To terminate the AWS auth-related ECS tasks, run:

    ./.evergreen/aws stop-ecs

mongo-ruby-driver-2.21.3/spec/README.md000066400000000000000000000736651505113246500174140ustar00rootroot00000000000000
# Running Ruby Driver Tests

## Quick Start

The test suite requires shared tooling that is stored in a separate
repository and is referenced as a submodule. After checking out the desired
driver branch, check out the matching submodules:

    git submodule init
    git submodule update

To run the test suite against a local MongoDB deployment listening on port
27017, run:

    rake

When run without options, the test suite will automatically detect the
deployment topology and configure itself appropriately. Standalone, replica
set and sharded cluster topologies are supported (though the test suite will
presently use only the first listed shard in a sharded cluster if given a
seed list, or the one running on port 27017 if not given a seed list).

TLS, authentication and other options can be configured via URI options by
setting the `MONGODB_URI` environment variable appropriately. Examples of
such configuration are given later in this document.

## MongoDB Server Deployment

The tests require a running MongoDB deployment, configured and started
externally to the test suite.

Tests that are not appropriate for the running deployment will be skipped,
with one exception: the test suite assumes that fail points are enabled in
the deployment (see the Fail Points section below). Not every test uses fail
points, therefore it is possible to launch the server without fail points
being enabled and still pass many of the tests in the test suite.

## Starting MongoDB Deployment

There are many ways in which MongoDB can be started. The instructions below
are for manually launching `mongod` instances and using
[mlaunch](http://blog.rueckstiess.com/mtools/mlaunch.html) (part of
[mtools](https://github.com/rueckstiess/mtools)) for more complex
deployments, but other tools like
[mongodb-runner](https://github.com/mongodb-js/runner) and
[Mongo Orchestration](https://github.com/10gen/mongo-orchestration) can in
principle also work.
### Standalone

The simplest possible deployment is a standalone `mongod`, which can be
launched as follows:

    # Launch mongod in one terminal
    mkdir /tmp/mdb
    mongod --dbpath /tmp/mdb --setParameter enableTestCommands=1

    # Run tests in another terminal
    rake

A standalone deployment is a good starting point, however a great many tests
require a replica set deployment and will be skipped on a standalone
deployment.

### Replica Set

While a replica set can be started and configured by hand, doing so is
cumbersome. The examples below use
[mlaunch](http://blog.rueckstiess.com/mtools/mlaunch.html) to start a
replica set.

First, install [mtools](https://github.com/rueckstiess/mtools):

    pip install 'mtools[mlaunch]' --user -U --upgrade-strategy eager

    # On Linux:
    export PATH=~/.local/bin:$PATH

    # On MacOS:
    export PATH=$PATH:~/Library/Python/2.7/bin

Then, launch a replica set:

    mlaunch init --replicaset --name ruby-driver-rs \
      --dir /tmp/mdb-rs --setParameter enableTestCommands=1

The test suite will automatically detect the topology, no explicit
configuration is needed:

    rake

### Replica Set With Arbiter

Some tests require an arbiter to be present in the replica set. Such a
deployment can be obtained by providing the `--arbiter` argument to mlaunch:

    mlaunch init --replicaset --arbiter --name ruby-driver-rs \
      --dir /tmp/mdb-rs --setParameter enableTestCommands=1

To indicate to the test suite that the deployment contains an arbiter, set
the HAVE_ARBITER environment variable as follows:

    HAVE_ARBITER=1 rake

### Sharded Cluster

A sharded cluster can be configured with mlaunch:

    mlaunch init --replicaset --name ruby-driver-rs --sharded 1 --mongos 2 \
      --dir /tmp/mdb-sc --setParameter enableTestCommands=1

As with the replica set, the test suite will automatically detect sharded
cluster topology.

Note that some tests require a sharded cluster with exactly one shard and
other tests require a sharded cluster with more than one shard. Tests
requiring a single shard can be run against a deployment with multiple
shards by specifying only one mongos address in MONGODB_URI.

## Note Regarding TLS/SSL Arguments

MongoDB 4.2 (server and shell) added new command line options for setting
TLS parameters. These options follow the naming of URI options used by both
the shell and MongoDB drivers starting with MongoDB 4.2. The new options
start with the `--tls` prefix.

Old options, starting with the `--ssl` prefix, are still supported for
backwards compatibility, but their use is deprecated. As of this writing,
mlaunch only supports the old `--ssl` prefix options.

In the rest of this document, when TLS options are given for `mongo` or
`mongod` they use the new `--tls` prefixed arguments, and when the same
options are given to `mlaunch` they use the old `--ssl` prefixed forms. The
conversion table of the options used herein is as follows:

| --tls prefixed option   | --ssl prefixed option |
| ----------------------- | --------------------- |
| --tls                   | --ssl                 |
| --tlsCAFile             | --sslCAFile           |
| --tlsCertificateKeyFile | --sslPEMKeyFile       |
## TLS With Verification

The test suite includes a set of TLS certificates for configuring a server
and a client to perform full TLS verification in the
`spec/support/certificates` directory. The server can be started as follows,
if the current directory is the top of the driver source tree:

    mlaunch init --single --dir /tmp/mdb-ssl --sslMode requireSSL \
      --sslPEMKeyFile `pwd`/spec/support/certificates/server.pem \
      --sslCAFile `pwd`/spec/support/certificates/ca.crt \
      --sslClientCertificate `pwd`/spec/support/certificates/client.pem

To test that the driver works when the server's certificate is signed by an
intermediate certificate (i.e. uses certificate chaining), use the chained
server certificate bundle:

    mlaunch init --single --dir /tmp/mdb-ssl --sslMode requireSSL \
      --sslPEMKeyFile `pwd`/spec/support/certificates/server-second-level-bundle.pem \
      --sslCAFile `pwd`/spec/support/certificates/ca.crt \
      --sslClientCertificate `pwd`/spec/support/certificates/client.pem

The driver's test suite is configured to verify certificates by default. If
the server is launched with the certificates from the driver's test suite,
the test suite can be run simply by specifying the `tls=true` URI option:

    MONGODB_URI='mongodb://localhost:27017/?tls=true' rake

The driver's test suite can also be executed against a server launched with
any other certificates. In this case the certificates need to be explicitly
specified in the URI, for example as follows:

    MONGODB_URI='mongodb://localhost:27017/?tls=true&tlsCAFile=path/to/ca.crt&tlsCertificateKeyFile=path/to/client.pem' rake

Note that some tests (specifically testing TLS verification) expect the
server to be launched using the certificates in the driver's test suite, and
will fail when run against a server using other certificates.

## TLS Without Verification

It is also possible to enable TLS but omit certificate verification. In this
case a standalone server can be started as follows:

    mlaunch init --single --dir /tmp/mdb-ssl --sslMode requireSSL \
      --sslPEMKeyFile `pwd`/spec/support/certificates/server.pem \
      --sslCAFile `pwd`/spec/support/certificates/ca.crt \
      --sslAllowConnectionsWithoutCertificates \
      --sslAllowInvalidCertificates

To run the test suite against such a server, also omitting certificate
verification, run:

    MONGODB_URI='mongodb://localhost:27017/?tls=true&tlsInsecure=true' rake

Note that there are tests in the test suite that cover TLS verification, and
they may fail if the test suite is run in this way.
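For reference, the same URI options can be used when constructing a client
in Ruby directly, outside of the test suite. A minimal sketch using the
driver's bundled test certificates with verification enabled (certificate
paths are relative to the top of the driver source tree):

    require 'mongo'

    client = Mongo::Client.new(
      'mongodb://localhost:27017/?tls=true' \
      '&tlsCAFile=spec/support/certificates/ca.crt' \
      '&tlsCertificateKeyFile=spec/support/certificates/client.pem'
    )

    # Forces a connection to verify that the TLS handshake succeeds.
    client.database.command(ping: 1)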
## OCSP

There are several types of OCSP tests implemented in the test suite.

OCSP unit tests are in `spec/integration/ocsp_verifier_spec.rb`. To run
these, set `OCSP_VERIFIER=1` in the environment. There must NOT be a process
running on the host port 8100 as that port will be used by the OCSP
responder launched by the tests.

For the remaining OCSP tests, the following environment variables must be
set to the possible values indicated below:

    OCSP_ALGORITHM=rsa|ecdsa
    OCSP_STATUS=valid|revoked|unknown
    OCSP_DELEGATE=0|1
    OCSP_MUST_STAPLE=0|1

These tests also require the mock OCSP responder running on the host machine
on port 8100 with the configuration that matches the environment variables
just described. Please refer to the Docker and Evergreen scripts in the
driver repository for further details.

Additionally, the server must be configured to use the appropriate server
certificate and CA certificate from the respective subdirectory of
`spec/support/ocsp`. This is easiest to achieve by using the Docker tooling
described in `.evergreen/README.md`.

OCSP connectivity tests are in `spec/integration/ocsp_connectivity.rb`.
These test the combinations described
[here](https://github.com/mongodb/specifications/blob/master/source/ocsp-support/tests/README.md#integration-tests-permutations-to-be-tested).
To run these tests, set the `OCSP_CONNECTIVITY=pass` environment variable if
the tests are expected to connect successfully or `OCSP_CONNECTIVITY=fail`
if the tests are expected to not connect.

Note that some of these configurations require the OCSP responder to return
the failure response; in such configurations, ONLY the OCSP connectivity
tests may pass (since the driver may reject connections to servers when the
OCSP responder returns the failure response, or OCSP verification otherwise
definitively fails).

When not running either OCSP verifier tests or OCSP connectivity tests but
when an OCSP algorithm is configured, the test suite will execute normally
using the provided `MONGODB_URI`. This configuration may be used to exercise
OCSP while running the full test suite. In this case, setting `OCSP_STATUS`
to `revoked` will generally cause the test suite to fail.

## Authentication

mlaunch can configure authentication on the server:

    mlaunch init --single --dir /tmp/mdb-auth --auth --username dev --password dev

To run the test suite against such a server, run:

    MONGODB_URI='mongodb://dev:dev@localhost:27017/' rake

## X.509 Authentication

Note: Testing X.509 authentication requires an enterprise build of the
MongoDB server.

To set up a server configured for authentication with an X.509 certificate,
first launch a TLS-enabled server with a regular credentialed user. The
credentialed user is required because mlaunch configures the `--keyFile`
option for cluster member authentication, which in turn enables
authentication.

With authentication enabled, `mongod` allows creating the first user in the
`admin` database, but the X.509 user must be created in the `$external`
database - as a result, the X.509 user cannot be the only user in the
deployment.

Run the following command to set up a standalone `mongod` with a bootstrap
user:

    mlaunch init --single --dir /tmp/mdb-x509 --sslMode requireSSL \
      --sslPEMKeyFile `pwd`/spec/support/certificates/server.pem \
      --sslCAFile `pwd`/spec/support/certificates/ca.crt \
      --sslClientCertificate `pwd`/spec/support/certificates/client.pem \
      --auth --username bootstrap --password bootstrap

Next, create the X.509 user.
The command to create the user is the same across all supported MongoDB
versions, and for convenience we assign its text to a variable as follows:

    create_user_cmd="`cat <<'EOT'
    db.getSiblingDB("$external").runCommand(
      {
        createUser: "C=US,ST=New York,L=New York City,O=MongoDB,OU=x509,CN=localhost",
        roles: [
          { role: "dbAdminAnyDatabase", db: "admin" },
          { role: "readWriteAnyDatabase", db: "admin" },
          { role: "userAdminAnyDatabase", db: "admin" },
          { role: "clusterAdmin", db: "admin" },
        ],
        writeConcern: { w: "majority", wtimeout: 5000 },
      }
    )
    EOT
    `"

Use the MongoDB shell to execute this command:

    mongosh --tls \
      --tlsCAFile `pwd`/spec/support/certificates/ca.crt \
      --tlsCertificateKeyFile `pwd`/spec/support/certificates/client-x509.pem \
      -u bootstrap -p bootstrap \
      --eval "$create_user_cmd"

Verify that authentication is required by running the following command,
which should fail:

    mongosh --tls \
      --tlsCAFile `pwd`/spec/support/certificates/ca.crt \
      --tlsCertificateKeyFile `pwd`/spec/support/certificates/client-x509.pem \
      --eval 'db.serverStatus()'

Verify that X.509 authentication works by running the following command:

    mongosh --tls \
      --tlsCAFile `pwd`/spec/support/certificates/ca.crt \
      --tlsCertificateKeyFile `pwd`/spec/support/certificates/client-x509.pem \
      --authenticationDatabase '$external' \
      --authenticationMechanism MONGODB-X509 \
      --eval 'db.serverStatus()'

The test suite includes a set of integration tests for X.509 client
authentication. To run the test suite against such a server, run:

    MONGODB_URI="mongodb://localhost:27017/?authMechanism=MONGODB-X509&tls=true&tlsCAFile=spec/support/certificates/ca.crt&tlsCertificateKeyFile=spec/support/certificates/client-x509.pem" rake

## Kerberos

The Kerberos-related functionality is packaged in a separate gem,
`mongo_kerberos`. To run any of the Kerberos tests, a special gemfile must
be used that references `mongo_kerberos`:

    export BUNDLE_GEMFILE=gemfiles/mongo_kerberos.gemfile
    bundle install

Ensure that BUNDLE_GEMFILE is set in the environment for both the
`bundle install` invocation and the `rake` / `rspec` invocation.

### Unit Tests

The driver test suite includes a number of Kerberos-related unit tests that
are skipped by default. To run them as part of the test suite, set the
`MONGO_RUBY_DRIVER_KERBEROS` environment variable to `1`, `yes` or `true`
as follows:

    export MONGO_RUBY_DRIVER_KERBEROS=1
    rake

Note that running the full test suite requires a MongoDB deployment. It is
possible to run just the Kerberos-related unit tests without provisioning a
MongoDB deployment; consult the `.evergreen/run-tests-kerberos-unit.sh` file
for the full list of relevant test files.

### Integration Tests

The driver test suite includes a number of Kerberos-related integration
tests in the `spec/kerberos` directory. These require a provisioned Kerberos
deployment and an appropriately configured MongoDB deployment. One such
deployment is provided internally by MongoDB and is used in the driver's
Evergreen configuration; it is also possible to provision a test deployment
locally, either via the Docker tooling provided by the driver test suite or
manually.

#### Via Docker

Run:

    ./.evergreen/test-on-docker -s .evergreen/run-tests-kerberos-integration.sh -pd rhel70

When the `SASL_HOST` environment variable is not set, the Kerberos
integration test script `.evergreen/run-tests-kerberos-integration.sh`
provisions a local Kerberos deployment in the Docker container and
configures the test suite to use it.
Note: the tooling is currently set up to provision a working `rhel70`
container. Ubuntu distros are not presently supported.

#### Locally

The following additional environment variables must be set to run the
Kerberos integration tests:

- `MONGO_RUBY_DRIVER_KERBEROS_INTEGRATION=1`
- `SASL_HOST`: the FQDN host name of the MongoDB server that is configured
  to use Kerberos. Note that this is NOT the Kerberos domain controller
  (KDC).
- `SASL_REALM`: the Kerberos realm. Depending on how Kerberos is configured,
  this can be the same as or different from `SASL_HOST`. The Evergreen
  configuration uses the same host and realm; the Docker configuration
  provided by the Ruby driver uses different host and realm.
- `SASL_PORT`: the port number that the Kerberized MongoDB server is
  listening on.
- `SASL_USER`: the username to provide to MongoDB for authentication. This
  must match the username of the principal.
- `SASL_DB`: the database that stores the user used for authentication.
  This is the "auth source" in MongoDB parlance. Normally this should be
  `$external`.
- `PRINCIPAL`: the Kerberos principal to use for authentication, in the
  form of `username@realm`. Note that the realm is commonly uppercased.
- `KERBEROS_DB`: the database that the user has access to.

Note that the driver does not directly provide a password to the MongoDB
server when using Kerberos authentication, and because of this there is no
user password provided to the test suite either when Kerberos authentication
is used. Instead, there must be a local session established via e.g.
`kinit`. Consult the `.evergreen/run-tests-kerberos-integration.sh` file for
details.

## Client-Side Encryption

NOTE: Client-side encryption tests require an enterprise build of MongoDB
server version 4.2 or higher. These builds of the MongoDB server come
packaged with mongocryptd, a daemon that is spawned by the driver during
automatic encryption. The client-side encryption tests require the
mongocryptd binary to be in the system path.

Download enterprise versions of MongoDB here:
https://www.mongodb.com/download-center/enterprise

Download the Automatic Encryption Shared Library:
https://www.mongodb.com/docs/manual/core/queryable-encryption/reference/shared-library/#std-label-qe-reference-shared-library-download

Install and configure mongocryptd:
https://www.mongodb.com/docs/manual/core/queryable-encryption/reference/mongocryptd/

Install libmongocrypt on your machine:

Option 1: Download a pre-built binary

- Download a tarball of all libmongocrypt variations from this link:
  https://s3.amazonaws.com/mciuploads/libmongocrypt/all/master/latest/libmongocrypt-all.tar.gz
- Unzip the file you downloaded. You will see a list of folders, each
  corresponding to an operating system. Find the folder that matches your
  operating system and open it.
- Inside that folder, open the folder called "nocrypto." In either the lib
  or lib64 folder, you will find the libmongocrypt.so or
  libmongocrypt.dylib or libmongocrypt.dll file, depending on your OS.
- Move that file to wherever you want to keep it on your machine.

Option 2: Build from source

- To build libmongocrypt from source, follow the instructions in the README
  on the libmongocrypt GitHub repo:
  https://github.com/mongodb/libmongocrypt

Option 3: Use the libmongocrypt-helper gem (Linux only)

- Run the command `FLE=helper bundle install`

Create AWS KMS keys

Many of the client-side encryption tests require that you have an encryption
master key hosted on AWS's Key Management Service. Set up a master key by
following these steps:
1. Sign up for an AWS account at this link if you don't already have one:
https://aws.amazon.com/resources/create-account/

2. Create a new IAM user that you want to have permissions to access your
new master key by following the "Creating an Administrator IAM User and
Group (Console)" section of this guide:
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user.html

3. Create an access key for your new IAM user and store the access key
credentials in environment variables on your local machine. Create an
access key by following the "Managing Access Keys (Console)" instructions
in this guide:
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html#Using_CreateAccessKey

Once an access key has been created, store the access key id and the access
key secret in environment variables. If you plan to frequently run
client-side encryption tests, it may be a good idea to put these lines in
your .bash_profile or .bashrc file. Otherwise, you can run them in the
terminal window where you plan to run your tests.

```
export MONGO_RUBY_DRIVER_AWS_KEY="YOUR-ACCESS-KEY-ID"
export MONGO_RUBY_DRIVER_AWS_SECRET="YOUR-ACCESS-KEY-SECRET"
```

4. Create a new symmetric Customer Master Key (CMK) by following the
"Creating Symmetric CMKs (Console)" section of this guide:
https://docs.aws.amazon.com/kms/latest/developerguide/create-keys.html

5. Store information about your CMK in the following environment variables:

a. **Region:** Find your AWS region by following this guide:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#using-regions-availability-zones-describe
(for example, your region might be "us-east-1" or "ap-south-2").

b. **Amazon Resource Name (ARN):** Read the following guide to learn more
about ARNs and how to view your key's ARN:
https://docs.aws.amazon.com/kms/latest/developerguide/viewing-keys-console.html

Store these two pieces of information in environment variables. If you plan
to frequently run client-side encryption tests, it may be a good idea to put
these lines in your .bash_profile or .bashrc file. Otherwise, you can run
them in the terminal window where you plan to run your tests.

```
export MONGO_RUBY_DRIVER_AWS_REGION="YOUR-AWS-REGION"
export MONGO_RUBY_DRIVER_AWS_ARN="YOUR-AWS-ARN"
```

6. Give your IAM user "Key administrator" and "Key user" privileges on your
new CMK by following the "Using the AWS Management Console Default View"
section of this guide:
https://docs.aws.amazon.com/kms/latest/developerguide/key-policy-modifying.html

In one terminal, launch MongoDB:

```
mkdir /tmp/mdb
mongod --dbpath /tmp/mdb --setParameter enableTestCommands=1
```

In another terminal run the tests, making sure to set the
`LIBMONGOCRYPT_PATH` environment variable to the full path to the
.so/.dll/.dylib:

```
LIBMONGOCRYPT_PATH=/path/to/your/libmongocrypt/nocrypto/libmongocrypt.so bundle exec rake
```
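With the environment variables above set, a client with automatic encryption
enabled can be constructed along these lines. This is a minimal sketch
rather than a test-suite fixture: the `keyvault.datakeys` key vault
namespace is an assumption, and a data key must already exist there (see the
driver's client-side encryption documentation for creating data keys):

```
require 'mongo'

client = Mongo::Client.new(
  ['localhost:27017'],
  auto_encryption_options: {
    key_vault_namespace: 'keyvault.datakeys',  # assumed to exist already
    kms_providers: {
      aws: {
        access_key_id: ENV['MONGO_RUBY_DRIVER_AWS_KEY'],
        secret_access_key: ENV['MONGO_RUBY_DRIVER_AWS_SECRET'],
      },
    },
  }
)
```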
If you would like to run the client-side encryption tests on a replica set
or sharded cluster, be aware that the driver will try to spawn the
mongocryptd daemon on port 27020 by default. If port 27020 is already in use
by a mongod or mongos process, spawning mongocryptd will fail, causing the
tests to fail as well. To avoid this problem, set the
MONGO_RUBY_DRIVER_MONGOCRYPTD_PORT environment variable to the port at which
you would like the driver to spawn mongocryptd. For example, to always have
the mongocryptd process listen on port 27090:

```
export MONGO_RUBY_DRIVER_MONGOCRYPTD_PORT=27090
```

Keep in mind that this will only impact the behavior of the Ruby Driver test
suite, not the behavior of the driver itself.

## Compression

To test compression, set the `compressors` URI option:

    MONGODB_URI="mongodb://localhost:27017/?compressors=zlib" rake

Note that as of this writing, the driver supports
[zstd](https://mongodb.com/docs/manual/reference/glossary/#term-zstd),
[snappy](https://mongodb.com/docs/manual/reference/glossary/#term-snappy)
and [zlib](https://mongodb.com/docs/manual/reference/glossary/#term-zlib)
compression.

Servers 4.2+ enable zlib by default; to test older servers, explicitly
enable zlib compression when launching the server:

    mongod --dbpath /tmp/mdb --setParameter enableTestCommands=1 \
      --networkMessageCompressors snappy,zlib
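The compressors may also be specified as a Ruby option when constructing a
client directly; a minimal sketch (zstd and snappy additionally require the
corresponding compression gems to be installed):

    require 'mongo'

    client = Mongo::Client.new(['localhost:27017'], compressors: %w[zlib])
    client.database.command(ping: 1)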
## Server API

To specify server API parameters, use the `SERVER_API` environment variable.
The server API parameters cannot be specified via URI options.

Both YAML and JSON syntaxes are accepted:

    SERVER_API='{version: "1", strict: true}' rake

    SERVER_API='{"version":"1","strict":true}' rake

Note that the input must be valid YAML or JSON and the version number must
be a string, therefore all of the following specifications are invalid:

    SERVER_API='{version:"1",strict:true}' rake
    SERVER_API='{version: 1}' rake
    SERVER_API='{"version":1,"strict":true}' rake

## Other Options

Generally, all URI options recognized by the driver may be set for a test
run, and will cause the clients created by the test suite to have those
options by default. For example, retryable writes may be turned on and off
as follows:

    MONGODB_URI='mongodb://localhost:27017/?retryWrites=true' rake

    MONGODB_URI='mongodb://localhost:27017/?retryWrites=false' rake

Individual tests may override options that the test suite uses as defaults.
For example, retryable writes tests may create clients with the retry writes
option set to true or false as needed regardless of what the default is for
the entire test run.

It is also possible to, for example, reference non-default hosts and replica
set names:

    MONGODB_URI='mongodb://test.host:27017,test.host:27018/?replicaSet=fooset' rake

However, as noted in the caveats section, changing the database name used by
the test suite is not supported.

## Special Tests

Some tests require internet connectivity, for example to test DNS seed lists
and SRV URIs. These tests can be skipped by setting the following
environment variable:

    EXTERNAL_DISABLED=1

Some tests are designed to validate the driver's behavior under load, or
otherwise execute a large number of operations which may take a sizable
amount of time. Such tests are skipped by default and can be run by setting
the following environment variable:

    STRESS=1

Some tests fork the process to validate the driver's behavior when forking
is involved. These tests are skipped by default and can be run by setting
the following environment variable:

    FORK=1

OCSP tests require Python 3 with the asn1crypto, oscrypto and flask packages
installed, and they require the drivers-evergreen-tools submodule to be
checked out. To run these tests, set the following environment variable:

    OCSP=1

To check out the submodule, run:

    git submodule update --init --recursive

## Debug Logging

The test suite is run with the driver log level set to `WARN` by default.
This produces a fair amount of output as many tests trigger various
conditions resulting in the driver outputting warnings. This is expected
behavior.

To increase the driver log level to `DEBUG`, set the
`MONGO_RUBY_DRIVER_CLIENT_DEBUG` environment variable to `1`, `true` or
`yes`. This will produce additional log output pertaining to, for example,
SDAM events and transitions performed by the driver, as well as log all
commands sent to and responses received from the database.

To debug authentication and user management commands, set the
`MONGO_RUBY_DRIVER_UNREDACT_EVENTS` environment variable to `1`, `true` or
`yes`. This will disable redaction of command monitoring payloads for
sensitive commands. Normally this environment variable should be used with
`MONGO_RUBY_DRIVER_CLIENT_DEBUG` to see the command payloads.

## Caveats

### Socket Permission Errors

If you get permission errors connecting to `mongod`'s socket, adjust its
permissions:

    sudo chmod 0666 /tmp/mongodb-27017.sock

Alternatively, specify the following argument to `mlaunch` or `mongod`:

    --filePermissions 0666

### Non-Identical Hostnames

The test suite should be configured to connect to exactly the hostnames
configured in the cluster. If, for example, the test suite is configured to
use IP addresses but the cluster is configured with hostnames, most tests
would still work (by using SDAM to discover the correct cluster
configuration) but will spend a significant amount of extra time on server
discovery.

Some tests perform address assertions and will fail if the hostnames
configured in the test suite do not match the hostnames configured in the
cluster. For the same reason, each node in the server configuration should
have its port specified.

### Database Name

The test suite currently does not allow changing the database name that it
uses, which is `ruby-driver`. Attempts to specify a different database name
in the URI, for example, will lead to some of the tests failing.

### Fail Points

In order to run some of the tests, the mongo cluster needs to have fail
points enabled. This is accomplished by starting `mongod` with the following
option:

    --setParameter enableTestCommands=1

## Running Individual Examples

Individual examples can be run by invoking `rspec` instead of `rake`. Prior
to running `rspec`, ensure the test suite has created users for itself -
this is done by the `rake` command automatically, or you can manually invoke
the Rake task which configures the deployment for testing:

    rake spec:prepare

Then, any of the standard RSpec invocations will work:

    rspec path/to/file_spec.rb

## Configuration Reporting

To have the test suite report its current configuration, run:

    rake spec:config

## Color Output

The test suite uses color output by default. To view the output in `less`
with color, use the `-R` option:

    rake 2>&1 | tee rake.log
    less -R rake.log

## Debugging

The test suite is configured to use
[Byebug](https://github.com/deivid-rodriguez/byebug) for debugging on MRI
and
[ruby-debug](https://github.com/jruby/jruby/wiki/UsingTheJRubyDebugger) on
JRuby.

### MRI

Call `byebug` anywhere in the test suite to break into Byebug.

### JRuby

To debug on JRuby, the test suite must be started with the `--debug`
argument to `jruby`. This can be achieved by starting the test suite as
follows:

    jruby --debug -S rspec [rspec args...]

Call `debugger` anywhere in the test suite to break into the debugger.

### Docker

By default, when the test suite is running in a CI environment the debuggers
are not loaded. The Docker runner emulates the CI environment, therefore to
debug in Docker the debugger must be explicitly loaded first.
To break into the debugger on MRI, call:

    require 'byebug'
    byebug

To break into the debugger on JRuby, call:

    require 'ruby-debug'
    debugger

## Testing against load balancer locally

1. Install MongoDB server v5.2+.
2. Install haproxy.
3. Install mongo-orchestration - https://github.com/10gen/mongo-orchestration/
4. Install drivers-evergreen-tools - https://github.com/mongodb-labs/drivers-evergreen-tools. In the Ruby driver it is installed as a git submodule under `.mod/drivers-evergreen-tools/`.
5. Start mongo-orchestration: `mongo-orchestration start`.
6. Start the cluster: `http PUT http://localhost:8889/v1/sharded_clusters/myCluster @.mod/drivers-evergreen-tools/.evergreen/orchestration/configs/sharded_clusters/basic-load-balancer.json` (this example uses the httpie client, but it can be done with curl).
7. Start the load balancer: `MONGODB_URI="mongodb://localhost:27017,localhost:27018/" .mod/drivers-evergreen-tools/.evergreen/run-load-balancer.sh start`.
8. Run tests: `TOPOLOGY=load-balanced MONGODB_URI='mongodb://127.0.0.1:8000/?loadBalanced=true' be rspec spec/`.
9. Stop the load balancer: `MONGODB_URI="mongodb://localhost:27017,localhost:27018/" .mod/drivers-evergreen-tools/.evergreen/run-load-balancer.sh stop`.
10. Stop the cluster: `http DELETE http://localhost:8889/v1/sharded_clusters/myCluster`.
11. Stop mongo-orchestration: `mongo-orchestration stop`.

mongo-ruby-driver-2.21.3/spec/USERS.md000066400000000000000000000115471505113246500173550ustar00rootroot00000000000000
# Test Users

The Mongo Ruby Driver tests assume the presence of two `Mongo::Auth::User`
objects: `root_user` and `test_user`. This document details the roles and
privileges granted to those users as well as how they are created and used
in the tests.

Both users are defined in the [spec_config](support/spec_config.rb#L376)
file.

## root_user

`root_user` is the test user with the most privileges. It is created with
the following roles:

- userAdminAnyDatabase
- dbAdminAnyDatabase
- readWriteAnyDatabase
- clusterAdmin

By default, `root_user` is given a username of `root-user` and a password of
`password`. However, you may override these defaults by specifying a
username and password in the `MONGODB_URI` environment variable while
running your tests. For example, if you set `MONGODB_URI` to
`mongodb://alanturing:enigma@localhost:27017/`, the username of `root_user`
would be set to `alanturing`, and the password would be set to `enigma`.

## test_user

`test_user` is the user created with a more limited set of privileges. It is
created with the following roles:

- readWrite on the ruby-driver database
- dbAdmin on the ruby-driver database

It is also granted the following roles against a database called
"invalid_database." These permissions are used for the purpose of running
tests against a database that doesn't exist.

- readWrite on the invalid_database database
- dbAdmin on the invalid_database database

`test_user` also has the following roles, which are exclusively used to test
transactions:

- readWrite on the hr database
- dbAdmin on the hr database
- readWrite on the reporting database
- dbAdmin on the reporting database

The `test_user` has the username `test-user` and the password `password`;
these values are not customizable without changing the source code.
## User Creation

Both users are typically created in the [spec_setup](support/spec_setup.rb)
script, which can be run in two ways: either by running
`bundle exec rake spec:prepare`, which only runs spec setup without running
any actual tests, or by running `rake`, which runs spec setup and the entire
test suite.

First, the `spec_setup` script attempts to create the `root_user`. If this
user already exists (for example, if you have already created this user in
your test instance), `spec_setup` will skip this step. Once the script has
verified the existence of `root_user`, it will create a client
authenticated with the `root_user` and use that client to create a second
user, `test_user`. Because `root_user` has the `userAdminAnyDatabase` role,
it has the permissions necessary to create and destroy users on your
MongoDB instance. If you have already created a user with the same
credentials as `test_user` prior to running the `spec_setup` script, the
script will delete this user and re-create it.

The `root_user` is created in the `admin` database, while the `test_user` is
created in the `ruby-driver` database.

The authentication mechanism used to store the user credentials is going to
change depending on the version of MongoDB running on your deployment. If
you are running tests against a MongoDB instance with a server version older
than 3.0, the users will be created using the `MONGODB-CR` authentication
mechanism. If your server version is between 3.0 and 3.6 (inclusive), the
test users will be created using the `SCRAM-SHA-1` mechanism, which was
introduced as the new default starting in MongoDB version 3.0. If you are
running a version of MongoDB newer than 4.0, test users will be
authenticated using either `SCRAM-SHA-1` or `SCRAM-SHA-256`.

**Note:** [mlaunch](http://blog.rueckstiess.com/mtools/mlaunch.html), the
client tool we use to spin up MongoDB instances for our tests, creates users
EXCLUSIVELY with the `SCRAM-SHA-1` mechanism, even when `SCRAM-SHA-256` is
enabled on the test server. This should not impact your ability to run the
Mongo Ruby Driver test suite.

## Test Usage

`root_user` is used in the Mongo Ruby Driver tests to perform functionality
that requires its high-level roles and privileges (if your client is set up
with authentication), such as creating and destroying users and database
administration. To easily set up a `Mongo::Client` object authenticated with
the roles and privileges of `root_user`, you can initialize a client using
the `ClientRegistry` module as follows:

```
client = ClientRegistry.instance.global_client('root_authorized')
```

Of course, not every test will require you to create a client with so many
privileges. Often, it is enough to have a user who is only authorized to
read and write to a specific test database. In this case, it is preferable
to use `test_user`. To initialize a `Mongo::Client` object authenticated
with the `test_user` object, use the `ClientRegistry` module as follows:

```
client = ClientRegistry.instance.global_client('authorized')
```

Once you have initialized these client objects, you may use them to perform
functionality required by your tests.
mongo-ruby-driver-2.21.3/spec/atlas/000077500000000000000000000000001505113246500172265ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/atlas/atlas_connectivity_spec.rb000066400000000000000000000040341505113246500244700ustar00rootroot00000000000000
# frozen_string_literal: true

require 'lite_spec_helper'
require 'base64'
require 'tempfile'

RSpec.shared_examples 'atlas connectivity test' do
  after do
    client.close
  rescue StandardError
    # no-op
  end

  it 'runs hello successfully' do
    expect { client.database.command(ping: 1) }
      .not_to raise_error
  end
end

describe 'Atlas connectivity' do
  before do
    skip 'These tests must be run against a live Atlas cluster' unless ENV['ATLAS_TESTING']
  end

  context 'with regular authentication' do
    regular_auth_env_vars = %w[
      ATLAS_REPLICA_SET_URI
      ATLAS_SHARDED_URI
      ATLAS_FREE_TIER_URI
      ATLAS_TLS11_URI
      ATLAS_TLS12_URI
    ]

    regular_auth_env_vars.each do |uri_var|
      describe "Connecting to #{uri_var}" do
        before do
          raise "Environment variable #{uri_var} is not set" unless ENV[uri_var]
        end

        let(:uri) { ENV[uri_var] }
        let(:client) { Mongo::Client.new(uri) }

        include_examples 'atlas connectivity test'
      end
    end
  end

  context 'with X.509 authentication' do
    x509_auth_env_vars = [
      %w[ATLAS_X509_URI ATLAS_X509_CERT_BASE64],
      %w[ATLAS_X509_DEV_URI ATLAS_X509_DEV_CERT_BASE64]
    ]

    x509_auth_env_vars.each do |uri_var, cert_var|
      describe "Connecting to #{uri_var} with certificate" do
        before do
          raise "Environment variable #{uri_var} is not set" unless ENV[uri_var]
        end

        let(:client_cert) do
          # Decode the base64-encoded certificate into a temporary PEM file
          # that the client URI can reference.
          decoded = Base64.strict_decode64(ENV[cert_var])
          cert_file = Tempfile.new([ 'x509-cert', '.pem' ])
          cert_file.write(decoded)
          File.chmod(0o600, cert_file.path)
          cert_file.close
          cert_file
        end

        let(:uri) do
          "#{ENV[uri_var]}&tlsCertificateKeyFile=#{URI::DEFAULT_PARSER.escape(client_cert.path)}"
        end

        let(:client) do
          Mongo::Client.new(uri)
        end

        after do
          client_cert&.unlink
        end

        include_examples 'atlas connectivity test'
      end
    end
  end
end
mongo-ruby-driver-2.21.3/spec/faas/000077500000000000000000000000001505113246500170345ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/faas/ruby-sam-app/000077500000000000000000000000001505113246500213515ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/faas/ruby-sam-app/.gitignore000066400000000000000000000141451505113246500233460ustar00rootroot00000000000000
# Created by https://www.toptal.com/developers/gitignore/api/osx,linux,python,windows,pycharm,visualstudiocode,sam
# Edit at https://www.toptal.com/developers/gitignore?templates=osx,linux,python,windows,pycharm,visualstudiocode,sam

### Linux ###
*~

# temporary files which can be created if a process still has a handle open of a deleted file
.fuse_hidden*

# KDE directory preferences
.directory

# Linux trash folder which might appear on any partition or disk
.Trash-*

# .nfs files are created when an open file is removed but is still being accessed
.nfs*

### OSX ###
# General
.DS_Store
.AppleDouble
.LSOverride

# Icon must end with two \r
Icon

# Thumbnails
._*

# Files that might appear in the root of a volume
.DocumentRevisions-V100
.fseventsd
.Spotlight-V100
.TemporaryItems
.Trashes
.VolumeIcon.icns
.com.apple.timemachine.donotpresent

# Directories potentially created on remote AFP share
.AppleDB
.AppleDesktop
Network Trash Folder
Temporary Items
.apdisk

### PyCharm ###
# Covers JetBrains IDEs: IntelliJ, RubyMine, PhpStorm, AppCode, PyCharm, CLion, Android Studio, WebStorm and Rider
# Reference: https://intellij-support.jetbrains.com/hc/en-us/articles/206544839

# User-specific stuff
.idea/**/workspace.xml
.idea/**/tasks.xml .idea/**/usage.statistics.xml .idea/**/dictionaries .idea/**/shelf # Generated files .idea/**/contentModel.xml # Sensitive or high-churn files .idea/**/dataSources/ .idea/**/dataSources.ids .idea/**/dataSources.local.xml .idea/**/sqlDataSources.xml .idea/**/dynamic.xml .idea/**/uiDesigner.xml .idea/**/dbnavigator.xml # Gradle .idea/**/gradle.xml .idea/**/libraries # Gradle and Maven with auto-import # When using Gradle or Maven with auto-import, you should exclude module files, # since they will be recreated, and may cause churn. Uncomment if using # auto-import. # .idea/artifacts # .idea/compiler.xml # .idea/jarRepositories.xml # .idea/modules.xml # .idea/*.iml # .idea/modules # *.iml # *.ipr # CMake cmake-build-*/ # Mongo Explorer plugin .idea/**/mongoSettings.xml # File-based project format *.iws # IntelliJ out/ # mpeltonen/sbt-idea plugin .idea_modules/ # JIRA plugin atlassian-ide-plugin.xml # Cursive Clojure plugin .idea/replstate.xml # Crashlytics plugin (for Android Studio and IntelliJ) com_crashlytics_export_strings.xml crashlytics.properties crashlytics-build.properties fabric.properties # Editor-based Rest Client .idea/httpRequests # Android studio 3.1+ serialized cache file .idea/caches/build_file_checksums.ser ### PyCharm Patch ### # Comment Reason: https://github.com/joeblau/gitignore.io/issues/186#issuecomment-215987721 # *.iml # modules.xml # .idea/misc.xml # *.ipr # Sonarlint plugin # https://plugins.jetbrains.com/plugin/7973-sonarlint .idea/**/sonarlint/ # SonarQube Plugin # https://plugins.jetbrains.com/plugin/7238-sonarqube-community-plugin .idea/**/sonarIssues.xml # Markdown Navigator plugin # https://plugins.jetbrains.com/plugin/7896-markdown-navigator-enhanced .idea/**/markdown-navigator.xml .idea/**/markdown-navigator-enh.xml .idea/**/markdown-navigator/ # Cache file creation bug # See https://youtrack.jetbrains.com/issue/JBR-2257 .idea/$CACHE_FILE$ # CodeStream plugin # https://plugins.jetbrains.com/plugin/12206-codestream .idea/codestream.xml ### Python ### # Byte-compiled / optimized / DLL files __pycache__/ *.py[cod] *$py.class # C extensions *.so # Distribution / packaging .Python build/ develop-eggs/ dist/ downloads/ eggs/ .eggs/ parts/ sdist/ var/ wheels/ pip-wheel-metadata/ share/python-wheels/ *.egg-info/ .installed.cfg *.egg MANIFEST # PyInstaller # Usually these files are written by a python script from a template # before PyInstaller builds the exe, so as to inject date/other infos into it. *.manifest *.spec # Installer logs pip-log.txt pip-delete-this-directory.txt # Unit test / coverage reports htmlcov/ .tox/ .nox/ .coverage .coverage.* .cache nosetests.xml coverage.xml *.cover *.py,cover .hypothesis/ .pytest_cache/ pytestdebug.log # Translations *.mo *.pot # Django stuff: *.log local_settings.py db.sqlite3 db.sqlite3-journal # Flask stuff: instance/ .webassets-cache # Scrapy stuff: .scrapy # Sphinx documentation docs/_build/ doc/_build/ # PyBuilder target/ # Jupyter Notebook .ipynb_checkpoints # IPython profile_default/ ipython_config.py # pyenv .python-version # pipenv # According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control. # However, in case of collaboration, if having platform-specific dependencies or dependencies # having no cross-platform support, pipenv may install dependencies that don't work, or not # install all needed dependencies. #Pipfile.lock # poetry #poetry.lock # PEP 582; used by e.g. 
github.com/David-OConnor/pyflow __pypackages__/ # Celery stuff celerybeat-schedule celerybeat.pid # SageMath parsed files *.sage.py # Environments # .env .env/ .venv/ env/ venv/ ENV/ env.bak/ venv.bak/ pythonenv* # Spyder project settings .spyderproject .spyproject # Rope project settings .ropeproject # mkdocs documentation /site # mypy .mypy_cache/ .dmypy.json dmypy.json # Pyre type checker .pyre/ # pytype static type analyzer .pytype/ # operating system-related files # file properties cache/storage on macOS *.DS_Store # thumbnail cache on Windows Thumbs.db # profiling data .prof ### SAM ### # Ignore build directories for the AWS Serverless Application Model (SAM) # Info: https://aws.amazon.com/serverless/sam/ # Docs: https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-reference.html **/.aws-sam ### VisualStudioCode ### .vscode/* !.vscode/settings.json !.vscode/tasks.json !.vscode/launch.json !.vscode/extensions.json *.code-workspace ### VisualStudioCode Patch ### # Ignore all local history of files .history .ionide ### Windows ### # Windows thumbnail cache files Thumbs.db:encryptable ehthumbs.db ehthumbs_vista.db # Dump file *.stackdump # Folder config file [Dd]esktop.ini # Recycle Bin used on file shares $RECYCLE.BIN/ # Windows Installer files *.cab *.msi *.msix *.msm *.msp # Windows shortcuts *.lnk # End of https://www.toptal.com/developers/gitignore/api/osx,linux,python,windows,pycharm,visualstudiocode,sam mongo-ruby-driver-2.21.3/spec/faas/ruby-sam-app/Gemfile000066400000000000000000000001561505113246500226460ustar00rootroot00000000000000source "https://rubygems.org" gem "httparty" gem "mongo" group :test do gem "test-unit" gem "mocha" end mongo-ruby-driver-2.21.3/spec/faas/ruby-sam-app/mongodb/000077500000000000000000000000001505113246500227765ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/faas/ruby-sam-app/mongodb/Gemfile000066400000000000000000000000721505113246500242700ustar00rootroot00000000000000source "https://rubygems.org" gem "httparty" gem "mongo" mongo-ruby-driver-2.21.3/spec/faas/ruby-sam-app/mongodb/app.rb000066400000000000000000000061301505113246500241030ustar00rootroot00000000000000# frozen_string_literal: true require 'mongo' require 'json' class StatsAggregator def initialize @open_connections = 0 @heartbeats_count = 0 @total_heartbeat_time = 0 @commands_count = 0 @total_command_time = 0 end def add_command(duration) @commands_count += 1 @total_command_time += duration end def add_heartbeat(duration) @heartbeats_count += 1 @total_heartbeat_time += duration end def add_connection @open_connections += 1 end def remove_connection @open_connections -= 1 end def average_heartbeat_time if @heartbeats_count == 0 0 else @total_heartbeat_time / @heartbeats_count end end def average_command_time if @commands_count == 0 0 else @total_command_time / @commands_count end end def reset @open_connections = 0 @heartbeats_count = 0 @total_heartbeat_time = 0 @commands_count = 0 @total_command_time = 0 end def result { average_heartbeat_time: average_heartbeat_time, average_command_time: average_command_time, heartbeats_count: @heartbeats_count, open_connections: @open_connections, } end end class CommandMonitor def initialize(stats_aggregator) @stats_aggregator = stats_aggregator end def started(event); end def failed(event) @stats_aggregator.add_command(event.duration) end def succeeded(event) @stats_aggregator.add_command(event.duration) end end class HeartbeatMonitor def initialize(stats_aggregator) @stats_aggregator = 
stats_aggregator end def started(event); end def succeeded(event) @stats_aggregator.add_heartbeat(event.duration) end def failed(event) @stats_aggregator.add_heartbeat(event.duration) end end class PoolMonitor def initialize(stats_aggregator) @stats_aggregator = stats_aggregator end def published(event) case event when Mongo::Monitoring::Event::Cmap::ConnectionCreated @stats_aggregator.add_connection when Mongo::Monitoring::Event::Cmap::ConnectionClosed @stats_aggregator.remove_connection end end end $stats_aggregator = StatsAggregator.new command_monitor = CommandMonitor.new($stats_aggregator) heartbeat_monitor = HeartbeatMonitor.new($stats_aggregator) pool_monitor = PoolMonitor.new($stats_aggregator) sdam_proc = proc do |client| client.subscribe(Mongo::Monitoring::COMMAND, command_monitor) client.subscribe(Mongo::Monitoring::SERVER_HEARTBEAT, heartbeat_monitor) client.subscribe(Mongo::Monitoring::CONNECTION_POOL, pool_monitor) end puts 'Connecting' $client = Mongo::Client.new(ENV['MONGODB_URI'], sdam_proc: sdam_proc) # Populate the connection pool $client.use('lambda_test').database.list_collections puts 'Connected' def lambda_handler(event:, context:) db = $client.use('lambda_test') collection = db[:test_collection] result = collection.insert_one({ name: 'test' }) collection.delete_one({ _id: result.inserted_id }) response = $stats_aggregator.result.to_json $stats_aggregator.reset puts "Response: #{response}" { statusCode: 200, body: response } end mongo-ruby-driver-2.21.3/spec/faas/ruby-sam-app/template.yaml000066400000000000000000000034221505113246500240510ustar00rootroot00000000000000AWSTemplateFormatVersion: '2010-09-09' Transform: AWS::Serverless-2016-10-31 Description: > Sample SAM Template for ruby-sam-app # More info about Globals: https://github.com/awslabs/serverless-application-model/blob/master/docs/globals.rst Globals: Function: Timeout: 30 MemorySize: 128 Parameters: MongoDbUri: Type: String Description: The MongoDB connection string. 
Resources: MongoDBFunction: Type: AWS::Serverless::Function # More info about Function Resource: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessfunction Properties: CodeUri: mongodb/ Environment: Variables: MONGODB_URI: !Ref MongoDbUri Handler: app.lambda_handler Runtime: ruby3.2 Architectures: - x86_64 Events: MongoDB: Type: Api # More info about API Event Source: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#api Properties: Path: /mongodb Method: get Outputs: # ServerlessRestApi is an implicit API created out of Events key under Serverless::Function # Find out more about other implicit resources you can reference within SAM # https://github.com/awslabs/serverless-application-model/blob/master/docs/internals/generated_resources.rst#api MongoDBApi: Description: "API Gateway endpoint URL for Prod stage for MongoDB function" Value: !Sub "https://${ServerlessRestApi}.execute-api.${AWS::Region}.amazonaws.com/Prod/mongodb/" MongoDBFunction: Description: "MongoDB Lambda Function ARN" Value: !GetAtt MongoDBFunction.Arn MongoDBFunctionIamRole: Description: "Implicit IAM Role created for MongoDB function" Value: !GetAtt MongoDBFunctionRole.Arn mongo-ruby-driver-2.21.3/spec/integration/000077500000000000000000000000001505113246500204455ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/integration/auth_spec.rb000066400000000000000000000230031505113246500227430ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe 'Auth' do # User creation with a password fails on the server if, for example, # only MONGODB-AWS auth mechanism is allowed in server configuration. require_no_external_user describe 'Unauthorized exception message' do let(:server) do authorized_client.cluster.next_primary end let(:base_options) do SpecConfig.instance.monitoring_options.merge(connect: SpecConfig.instance.test_options[:connect]) end let(:connection) do Mongo::Server::Connection.new(server, base_options.merge(options)) end before(:all) do # If auth is configured, the test suite uses the configured user # and does not create its own users. However, the configured user may # not have the auth mechanisms we need. Therefore we create a user # for this test without specifying auth mechanisms, which gets us # server default (scram for 4.0, scram & scram256 for 4.2). users = ClientRegistry.instance.global_client('root_authorized').use(:admin).database.users unless users.info('existing_user').empty? users.remove('existing_user') end users.create('existing_user', password: 'password') end context 'user mechanism not provided' do context 'user does not exist' do let(:options) do {user: 'nonexistent_user' } end before do expect(connection.app_metadata.send(:document)[:saslSupportedMechs]).to eq('admin.nonexistent_user') end context 'scram-sha-1 only server' do min_server_fcv '3.0' max_server_version '3.6' it 'indicates scram-sha-1 was used' do expect do connection.connect! end.to raise_error(Mongo::Auth::Unauthorized, /User nonexistent_user \(mechanism: scram\) is not authorized to access admin.*used mechanism: SCRAM-SHA-1/) end end context 'scram-sha-256 server' do min_server_fcv '4.0' # An existing user on 4.0+ will negotiate scram-sha-256. # A non-existing user on 4.0+ will negotiate scram-sha-1. it 'indicates scram-sha-1 was used' do expect do connection.connect! 
end.to raise_error(Mongo::Auth::Unauthorized, /User nonexistent_user \(mechanism: scram\) is not authorized to access admin.*used mechanism: SCRAM-SHA-1/) end end end context 'user exists' do let(:options) do {user: 'existing_user', password: 'bogus'} end before do expect(connection.app_metadata.send(:document)[:saslSupportedMechs]).to eq("admin.existing_user") end context 'scram-sha-1 only server' do min_server_fcv '3.0' max_server_version '3.6' it 'indicates scram-sha-1 was used' do expect do connection.connect! end.to raise_error(Mongo::Auth::Unauthorized, /User existing_user \(mechanism: scram\) is not authorized to access admin.*used mechanism: SCRAM-SHA-1/) end end context 'scram-sha-256 server' do min_server_fcv '4.0' # An existing user on 4.0+ will negotiate scram-sha-256. # A non-existing user on 4.0+ will negotiate scram-sha-1. it 'indicates scram-sha-256 was used' do expect do connection.connect! end.to raise_error(Mongo::Auth::Unauthorized, /User existing_user \(mechanism: scram256\) is not authorized to access admin.*used mechanism: SCRAM-SHA-256/) end end end end context 'user mechanism is provided' do min_server_fcv '3.0' context 'scram-sha-1 requested' do let(:options) do {user: 'nonexistent_user', auth_mech: :scram} end it 'indicates scram-sha-1 was requested and used' do expect do connection.connect! end.to raise_error(Mongo::Auth::Unauthorized, /User nonexistent_user \(mechanism: scram\) is not authorized to access admin.*used mechanism: SCRAM-SHA-1/) end end context 'scram-sha-256 requested' do min_server_fcv '4.0' let(:options) do {user: 'nonexistent_user', auth_mech: :scram256} end it 'indicates scram-sha-256 was requested and used' do expect do connection.connect! end.to raise_error(Mongo::Auth::Unauthorized, /User nonexistent_user \(mechanism: scram256\) is not authorized to access admin.*used mechanism: SCRAM-SHA-256/) end end end context 'when authentication fails' do let(:options) do {user: 'nonexistent_user', password: 'foo'} end it 'reports which server authentication was attempted against' do expect do connection.connect! end.to raise_error(Mongo::Auth::Unauthorized, /used server: #{connection.address.to_s}/) end context 'with default auth source' do it 'reports auth source used' do expect do connection.connect! end.to raise_error(Mongo::Auth::Unauthorized, /auth source: admin/) end end context 'with custom auth source' do let(:options) do {user: 'nonexistent_user', password: 'foo', auth_source: 'authdb'} end it 'reports auth source used' do expect do connection.connect! end.to raise_error(Mongo::Auth::Unauthorized, /auth source: authdb/) end end end context 'attempting to connect to a non-tls server with tls' do require_no_tls # The exception raised is SocketTimeout on 3.6 server for whatever reason, # run the test on 4.0+ only. min_server_fcv '4.0' let(:options) { {ssl: true} } it 'reports host, port and tls status' do begin connection.connect! rescue Mongo::Error::SocketError => exc end expect(exc).not_to be nil expect(exc.message).to include('OpenSSL::SSL::SSLError') expect(exc.message).to include(server.address.to_s) expect(exc.message).to include('TLS') expect(exc.message).not_to include('no TLS') end end context 'attempting to connect to a tls server without tls' do require_tls let(:options) { {ssl: false} } it 'reports host, port and tls status' do begin connection.connect! 
rescue Mongo::Error::SocketError => exc end expect(exc).not_to be nil expect(exc.message).not_to include('OpenSSL::SSL::SSLError') addresses = Socket.getaddrinfo(server.address.host, nil) expect(addresses.any? do |address| exc.message.include?("#{address[2]}:#{server.address.port}") end).to be true expect(exc.message).to include('no TLS') end end end shared_examples_for 'caches client key' do it 'caches' do client.close Mongo::Auth::CredentialCache.clear RSpec::Mocks.with_temporary_scope do expect_any_instance_of(conversation_class).to receive(:hi).exactly(:once).and_call_original client.reconnect server = client.cluster.next_primary server.with_connection do server.with_connection do # nothing end end end end end describe 'scram-sha-1 client key caching' do clean_slate min_server_version '3.0' require_no_external_user let(:client) { authorized_client.with(max_pool_size: 2, auth_mech: :scram) } let(:conversation_class) { Mongo::Auth::Scram::Conversation } it_behaves_like 'caches client key' end describe 'scram-sha-256 client key caching' do clean_slate min_server_version '4.0' require_no_external_user let(:client) { authorized_client.with(max_pool_size: 2, auth_mech: :scram256) } let(:conversation_class) { Mongo::Auth::Scram256::Conversation } it_behaves_like 'caches client key' end context 'when only auth source is specified' do require_no_auth let(:client) do new_local_client(SpecConfig.instance.addresses, SpecConfig.instance.monitoring_options.merge( auth_source: 'foo')) end it 'does not authenticate' do expect(Mongo::Auth::User).not_to receive(:new) client.database.command(ping: 1) end end context 'when only auth mechanism is specified' do require_x509_auth let(:client) do new_local_client(SpecConfig.instance.addresses, base_options.merge( auth_mech: :mongodb_x509)) end it 'authenticates' do expect(Mongo::Auth::User).to receive(:new).and_call_original client.database.command(ping: 1) end end context 'in lb topology' do require_topology :load_balanced context 'when authentication fails with network error' do let(:server) do authorized_client.cluster.next_primary end let(:base_options) do SpecConfig.instance.monitoring_options.merge(connect: SpecConfig.instance.test_options[:connect]) end let(:connection) do Mongo::Server::Connection.new(server, base_options) end it 'includes service id in exception' do expect_any_instance_of(Mongo::Server::PendingConnection).to receive(:authenticate!).and_raise(Mongo::Error::SocketError) begin connection.connect! rescue Mongo::Error::SocketError => exc exc.service_id.should_not be nil else fail 'Expected the SocketError to be raised' end end end end end mongo-ruby-driver-2.21.3/spec/integration/awaited_ismaster_spec.rb000066400000000000000000000015141505113246500253320ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe 'awaited hello' do min_server_fcv '4.4' # If we send the consecutive hello commands to different mongoses, # they have different process ids, and so the awaited one would return # immediately. 
require_no_multi_mongos

  let(:client) { authorized_client }

  it 'waits' do
    # Perform a regular hello to get topology version
    resp = client.database.command(hello: 1)
    doc = resp.replies.first.documents.first
    tv = Mongo::TopologyVersion.new(doc['topologyVersion'])
    tv.should be_a(BSON::Document)

    elapsed_time = Benchmark.realtime do
      resp = client.database.command(hello: 1,
        topologyVersion: tv.to_doc, maxAwaitTimeMS: 500)
    end
    doc = resp.replies.first.documents.first

    elapsed_time.should > 0.5
  end
end
mongo-ruby-driver-2.21.3/spec/integration/aws_auth_credentials_cache_spec.rb000066400000000000000000000031031505113246500273140ustar00rootroot00000000000000
# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe Mongo::Auth::Aws::CredentialsCache do
  require_auth 'aws-ec2', 'aws-ecs', 'aws-web-identity'

  def new_client
    ClientRegistry.instance.new_authorized_client.tap do |client|
      @clients << client
    end
  end

  before do
    @clients = []
    described_class.instance.clear
  end

  after do
    @clients.each(&:close)
  end

  it 'caches the credentials' do
    client1 = new_client
    client1['test-collection'].find.to_a
    expect(described_class.instance.credentials).not_to be_nil

    described_class.instance.credentials = Mongo::Auth::Aws::Credentials.new(
      described_class.instance.credentials.access_key_id,
      described_class.instance.credentials.secret_access_key,
      described_class.instance.credentials.session_token,
      Time.now + 60
    )
    client2 = new_client
    client2['test-collection'].find.to_a
    expect(described_class.instance.credentials).not_to be_expired

    described_class.instance.credentials = Mongo::Auth::Aws::Credentials.new(
      'bad_access_key_id',
      described_class.instance.credentials.secret_access_key,
      described_class.instance.credentials.session_token,
      described_class.instance.credentials.expiration
    )
    client3 = new_client
    expect { client3['test-collection'].find.to_a }.to raise_error(Mongo::Auth::Unauthorized)
    expect(described_class.instance.credentials).to be_nil
    expect { client3['test-collection'].find.to_a }.not_to raise_error
    expect(described_class.instance.credentials).not_to be_nil
  end
end
mongo-ruby-driver-2.21.3/spec/integration/aws_auth_request_spec.rb000066400000000000000000000042521505113246500253720ustar00rootroot00000000000000
# frozen_string_literal: true
# rubocop:todo all

require 'lite_spec_helper'

require 'net/http'

describe Mongo::Auth::Aws::Request do
  require_aws_auth

  before(:all) do
    if ENV['AUTH'] =~ /aws-(ec2|ecs|web)/
      skip "This test requires explicit credentials to be provided"
    end
  end

  let(:access_key_id) { ENV.fetch('MONGO_RUBY_DRIVER_AWS_AUTH_ACCESS_KEY_ID') }
  let(:secret_access_key) { ENV.fetch('MONGO_RUBY_DRIVER_AWS_AUTH_SECRET_ACCESS_KEY') }
  let(:session_token) { ENV['MONGO_RUBY_DRIVER_AWS_AUTH_SESSION_TOKEN'] }

  describe '#authorization' do
    let(:request) do
      described_class.new(
        access_key_id: access_key_id,
        secret_access_key: secret_access_key,
        session_token: session_token,
        host: 'sts.amazonaws.com',
        server_nonce: 'aaaaaaaaaaafake',
      )
    end

    let(:sts_request) do
      Net::HTTP::Post.new("https://sts.amazonaws.com").tap do |req|
        request.headers.each do |k, v|
          req[k] = v
        end

        req['authorization'] = request.authorization
        req['accept'] = 'application/json'
        req.body = described_class::STS_REQUEST_BODY
      end
    end

    let(:sts_response) do
      http = Net::HTTP.new('sts.amazonaws.com', 443)
      http.use_ssl = true

      # Uncomment to log complete request headers and the response.
      # WARNING: do not enable this in Evergreen as this can expose real
      # AWS credentials.
#http.set_debug_output(STDERR) http.start do resp = http.request(sts_request) end end let(:sts_response_payload) do JSON.parse(sts_response.body) end let(:result) do sts_response_payload['GetCallerIdentityResponse']['GetCallerIdentityResult'] end it 'is usable' do # This assertion intentionally does not use payload so that if it fails, # the entire response is printed for diagnostic purposes. sts_response.body.should_not =~ /"Error"/ sts_response.code.should == '200' result['Arn'].should =~ /^arn:aws:(iam|sts)::/ result['Account'].should be_a(String) result['UserId'].should =~ /^A/ puts "STS request successful with ARN #{result['Arn']}" end end end mongo-ruby-driver-2.21.3/spec/integration/aws_credentials_retriever_spec.rb000066400000000000000000000076541505113246500272560ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'lite_spec_helper' require 'support/aws_utils' describe Mongo::Auth::Aws::CredentialsRetriever do require_aws_auth let(:retriever) do described_class.new(user) end let(:credentials) do retriever.credentials end context 'when user is not given' do let(:user) do Mongo::Auth::User.new(auth_mech: :aws) end before do Mongo::Auth::Aws::CredentialsCache.instance.clear end shared_examples_for 'retrieves the credentials' do it 'retrieves' do credentials.should be_a(Mongo::Auth::Aws::Credentials) # When user is not given, credentials retrieved are always temporary. retriever.credentials.access_key_id.should =~ /^ASIA/ retriever.credentials.secret_access_key.should =~ /./ retriever.credentials.session_token.should =~ /./ end let(:request) do Mongo::Auth::Aws::Request.new( access_key_id: credentials.access_key_id, secret_access_key: credentials.secret_access_key, session_token: credentials.session_token, host: 'sts.amazonaws.com', server_nonce: 'test', ) end it 'produces valid credentials' do result = request.validate! 
puts "STS request successful with ARN #{result['Arn']}" end end context 'ec2 instance role' do require_ec2_host before(:all) do unless ENV['AUTH'] == 'aws-ec2' skip "Set AUTH=aws-ec2 in environment to run EC2 instance role tests" end end context 'when instance profile is not assigned' do before(:all) do orchestrator = AwsUtils::Orchestrator.new( region: ENV.fetch('MONGO_RUBY_DRIVER_AWS_AUTH_REGION'), access_key_id: ENV.fetch('MONGO_RUBY_DRIVER_AWS_AUTH_ACCESS_KEY_ID'), secret_access_key: ENV.fetch('MONGO_RUBY_DRIVER_AWS_AUTH_SECRET_ACCESS_KEY'), ) orchestrator.clear_instance_profile(Utils.ec2_instance_id) Utils.wait_for_no_instance_profile end it 'raises an error' do lambda do credentials end.should raise_error(Mongo::Auth::Aws::CredentialsNotFound, /Could not locate AWS credentials/) end end context 'when instance profile is assigned' do before(:all) do orchestrator = AwsUtils::Orchestrator.new( region: ENV.fetch('MONGO_RUBY_DRIVER_AWS_AUTH_REGION'), access_key_id: ENV.fetch('MONGO_RUBY_DRIVER_AWS_AUTH_ACCESS_KEY_ID'), secret_access_key: ENV.fetch('MONGO_RUBY_DRIVER_AWS_AUTH_SECRET_ACCESS_KEY'), ) orchestrator.set_instance_profile(Utils.ec2_instance_id, instance_profile_name: nil, instance_profile_arn: ENV.fetch('MONGO_RUBY_DRIVER_AWS_AUTH_INSTANCE_PROFILE_ARN'), ) Utils.wait_for_instance_profile end it_behaves_like 'retrieves the credentials' end end context 'ecs task role' do before(:all) do unless ENV['AUTH'] == 'aws-ecs' skip "Set AUTH=aws-ecs in environment to run ECS task role tests" end end it_behaves_like 'retrieves the credentials' end context 'web identity' do before(:all) do unless ENV['AUTH'] == 'aws-web-identity' skip "Set AUTH=aws-web-identity in environment to run Wed identity tests" end end context 'with AWS_ROLE_SESSION_NAME' do before do stub_const('ENV', ENV.to_hash.merge('AWS_ROLE_SESSION_NAME' => 'mongo-ruby-driver-test-app')) end it_behaves_like 'retrieves the credentials' end context 'without AWS_ROLE_SESSION_NAME' do before do env = ENV.to_hash.dup env.delete('AWS_ROLE_SESSION_NAME') stub_const('ENV', env) end it_behaves_like 'retrieves the credentials' end end end end mongo-ruby-driver-2.21.3/spec/integration/aws_lambda_examples_spec.rb000066400000000000000000000046611505113246500260030ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require "spec_helper" describe "AWS Lambda examples in Ruby" do it "shares the client" do # Start AWS Lambda Example 1 # Require the driver library. require "mongo" # Create a Mongo::Client instance. # CRITICAL: You must create the client instance outside the handler # so that the client can be reused across function invocations. client = Mongo::Client.new(ENV.fetch("MONGODB_URI")) def lambda_handler(event:, context:) # Use the client to return the name of the configured database. 
client.database.name end # End AWS Lambda Example 1 client.close end context "when using AWS IAM authentication" do require_auth 'aws-assume-role' it "connects to the deployment" do allow(ENV).to receive(:fetch).and_call_original allow(ENV).to receive(:fetch).with("MONGODB_HOST").and_return(SpecConfig.instance.addresses.first) allow(ENV).to receive(:fetch).with("AWS_ACCESS_KEY_ID").and_return(ENV.fetch("MONGO_RUBY_DRIVER_AWS_AUTH_ACCESS_KEY_ID")) allow(ENV).to receive(:fetch).with("AWS_SECRET_ACCESS_KEY").and_return(ENV.fetch("MONGO_RUBY_DRIVER_AWS_AUTH_SECRET_ACCESS_KEY")) allow(ENV).to receive(:fetch).with("AWS_SESSION_TOKEN").and_return(ENV.fetch("MONGO_RUBY_DRIVER_AWS_AUTH_SESSION_TOKEN")) allow(ENV).to receive(:fetch).with("MONGODB_DATABASE").and_return("test") # Start AWS Lambda Example 2 # Require the driver library. require "mongo" # Create a Mongo::Client instance using AWS IAM authentication. # CRITICAL: You must create the client instance outside the handler # so that the client can be reused across function invocations. client = Mongo::Client.new([ENV.fetch("MONGODB_HOST")], auth_mech: :aws, user: ENV.fetch("AWS_ACCESS_KEY_ID"), password: ENV.fetch("AWS_SECRET_ACCESS_KEY"), auth_mech_properties: { aws_session_token: ENV.fetch("AWS_SESSION_TOKEN"), }, database: ENV.fetch("MONGODB_DATABASE")) def lambda_handler(event:, context:) # Use the client to return the name of the configured database. client.database.name end # End AWS Lambda Example 2 client.close end end end mongo-ruby-driver-2.21.3/spec/integration/bson_symbol_spec.rb000066400000000000000000000020201505113246500243240ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe 'Symbol encoding to BSON' do let(:value) { :foo } let(:hash) do {'foo' => value} end let(:serialized) do hash.to_bson.to_s end let(:expected) do (+"\x12\x00\x00\x00\x0Efoo\x00\x04\x00\x00\x00foo\x00\x00").force_encoding('binary') end it 'encodes symbol to BSON symbol' do serialized.should == expected end it 'round-trips symbol values' do buffer = BSON::ByteBuffer.new(serialized) Hash.from_bson(buffer).should == hash end it 'round-trips symbol values using the same byte buffer' do if BSON::Environment.jruby? && (BSON::VERSION.split('.').map(&:to_i) <=> [4, 11, 0]) < 0 skip 'This test is only relevant to bson versions that increment ByteBuffer '\ 'read and write positions separately in JRuby, as implemented in ' \ 'bson version 4.11.0. 
For more information, see https://jira.mongodb.org/browse/RUBY-2128' end Hash.from_bson(hash.to_bson).should == hash end end mongo-ruby-driver-2.21.3/spec/integration/bulk_insert_spec.rb000066400000000000000000000042241505113246500243270ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe 'Bulk insert' do include PrimarySocket let(:fail_point_base_command) do { 'configureFailPoint' => "failCommand" } end let(:collection_name) { 'bulk_insert_spec' } let(:collection) { authorized_client[collection_name] } describe 'inserted_ids' do before do collection.delete_many end context 'success' do it 'returns one insert_id as array' do result = collection.insert_many([ {:_id => 9}, ]) expect(result.inserted_ids).to eql([9]) end end context 'error on first insert' do it 'is an empty array' do collection.insert_one(:_id => 9) begin result = collection.insert_many([ {:_id => 9}, ]) fail 'Should have raised' rescue Mongo::Error::BulkWriteError => e expect(e.result['inserted_ids']).to eql([]) end end end context 'error on third insert' do it 'is an array of the first two ids' do collection.insert_one(:_id => 9) begin result = collection.insert_many([ {:_id => 7}, {:_id => 8}, {:_id => 9}, ]) fail 'Should have raised' rescue Mongo::Error::BulkWriteError => e expect(e.result['inserted_ids']).to eql([7, 8]) end end end context 'entire operation fails' do min_server_fcv '4.0' require_topology :single, :replica_set it 'is an empty array' do collection.client.use(:admin).command(fail_point_base_command.merge( :mode => {:times => 1}, :data => {:failCommands => ['insert'], errorCode: 100})) begin result = collection.insert_many([ {:_id => 7}, {:_id => 8}, {:_id => 9}, ]) fail 'Should have raised' rescue Mongo::Error => e result = e.send(:instance_variable_get, '@result') expect(result).to be_a(Mongo::Operation::Insert::BulkResult) expect(result.inserted_ids).to eql([]) end end end end end mongo-ruby-driver-2.21.3/spec/integration/bulk_write_error_message_spec.rb000066400000000000000000000045201505113246500270710ustar00rootroot00000000000000# rubocop:todo all require 'spec_helper' describe 'BulkWriteError message' do let(:client) { authorized_client } let(:collection_name) { 'bulk_write_error_message_spec' } let(:collection) { client[collection_name] } before do collection.delete_many end context 'a bulk write with one error' do it 'reports code name, code and message' do begin collection.insert_many([ {_id: 1}, {_id: 1}, {_id: 1}, ], ordered: true) fail('Should have raised') rescue Mongo::Error::BulkWriteError => e e.message.should =~ %r,\A\[11000\]: (insertDocument :: caused by :: 11000 )?E11000 duplicate key error (collection|index):, end end end context 'a bulk write with multiple errors' do it 'reports code name, code and message' do begin collection.insert_many([ {_id: 1}, {_id: 1}, {_id: 1}, ], ordered: false) fail('Should have raised') rescue Mongo::Error::BulkWriteError => e e.message.should =~ %r,\AMultiple errors: \[11000\]: (insertDocument :: caused by :: 11000 )?E11000 duplicate key error (collection|index):.*\[11000\]: (insertDocument :: caused by :: 11000 )?E11000 duplicate key error (collection|index):, end end end context 'a bulk write with validation errors' do let(:collection_name) { 'bulk_write_error_validation_message_spec' } let(:collection) do client[:collection_name].drop client[:collection_name, { 'validator' => { 'x' => { '$type' => 'string' }, } }].create client[:collection_name] end it 'reports code name, code, message, and 
details' do begin collection.insert_one({_id:1, x:"1"}) collection.insert_many([ {_id: 1, x:"1"}, {_id: 2, x:1}, ], ordered: false) fail('Should have raised') rescue Mongo::Error::BulkWriteError => e e.message.should =~ %r,\AMultiple errors: \[11000\]: (insertDocument :: caused by :: 11000 )?E11000 duplicate key error (collection|index):.*\; \[121\]: Document failed validation( -- .*)?, # The duplicate key error should not print details because it's not a # WriteError or a WriteConcernError e.message.scan(/ -- /).length.should be <= 1 end end end end mongo-ruby-driver-2.21.3/spec/integration/bulk_write_spec.rb000066400000000000000000000042331505113246500241550ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe 'Bulk writes' do before do authorized_collection.drop end context 'when bulk write is larger than 48MB' do let(:operations) do [ { insert_one: { text: 'a' * 1000 * 1000 } } ] * 48 end it 'succeeds' do expect do authorized_collection.bulk_write(operations) end.not_to raise_error end context 'in transaction' do require_transaction_support min_server_version "4.4" it 'succeeds' do authorized_collection.create expect do authorized_collection.client.start_session do |session| session.with_transaction do authorized_collection.bulk_write(operations, { session: session }) end end end.not_to raise_error end end end context 'when bulk write needs to be split' do let(:subscriber) { Mrss::EventSubscriber.new } let(:max_bson_size) { Mongo::Server::ConnectionBase::DEFAULT_MAX_BSON_OBJECT_SIZE } let(:insert_events) do subscriber.command_started_events('insert') end let(:failed_events) do subscriber.failed_events end let(:operations) do [{ insert_one: { text: 'a' * (max_bson_size/2) } }] * 6 end before do authorized_client.subscribe(Mongo::Monitoring::COMMAND, subscriber) authorized_collection.bulk_write(operations) end context '3.6+ server' do min_server_fcv '3.6' it 'splits the operations' do # 3.6+ servers can send multiple bulk operations in one message, # with the whole message being limited to 48m. expect(insert_events.length).to eq(2) end end context 'pre-3.6 server' do max_server_version '3.4' it 'splits the operations' do # Pre-3.6 servers limit the entire message payload to the size of # a single document which is 16m. Given our test data this means # twice as many messages are sent. expect(insert_events.length).to eq(4) end end it 'does not have a command failed event' do expect(failed_events).to be_empty end end end mongo-ruby-driver-2.21.3/spec/integration/change_stream_examples_spec.rb000066400000000000000000000153541505113246500265120ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe 'change streams examples in Ruby' do min_server_fcv '3.6' require_topology :replica_set require_wired_tiger # On JRuby, change streams should be accessed using try_next on the # change stream objects rather than using the Enumerable interface. 
# https://jira.mongodb.org/browse/RUBY-1877
  fails_on_jruby

  let!(:inventory) do
    client[:inventory]
  end

  let(:client) do
    authorized_client.with(max_pool_size: 5, wait_queue_timeout: 3)
  end

  before do
    inventory.drop
  end

  context 'example 1 - basic watching' do
    it 'returns a change after an insertion' do
      insert_thread = Thread.new do
        sleep 2
        inventory.insert_one(x: 1)
      end

      stream_thread = Thread.new do
        # Start Changestream Example 1
        cursor = inventory.watch.to_enum
        next_change = cursor.next
        # End Changestream Example 1
      end

      insert_thread.value
      change = stream_thread.value

      expect(change['_id']).not_to be_nil
      expect(change['_id']['_data']).not_to be_nil
      expect(change['operationType']).to eq('insert')
      expect(change['fullDocument']).not_to be_nil
      expect(change['fullDocument']['_id']).not_to be_nil
      expect(change['fullDocument']['x']).to eq(1)
      expect(change['ns']).not_to be_nil
      expect(change['ns']['db']).to eq(SpecConfig.instance.test_db)
      expect(change['ns']['coll']).to eq(inventory.name)
      expect(change['documentKey']).not_to be_nil
      expect(change['documentKey']['_id']).to eq(change['fullDocument']['_id'])
    end
  end

  context 'example 2 - full document update lookup specified' do
    it 'returns a change and the delta after an insertion' do
      inventory.insert_one(_id: 1, x: 2)

      update_thread = Thread.new do
        sleep 2
        inventory.update_one({ _id: 1}, { '$set' => { x: 5 }})
      end

      stream_thread = Thread.new do
        # Start Changestream Example 2
        cursor = inventory.watch([], full_document: 'updateLookup').to_enum
        next_change = cursor.next
        # End Changestream Example 2
      end

      update_thread.value
      change = stream_thread.value

      expect(change['_id']).not_to be_nil
      expect(change['_id']['_data']).not_to be_nil
      expect(change['operationType']).to eq('update')
      expect(change['fullDocument']).not_to be_nil
      expect(change['fullDocument']['_id']).to eq(1)
      expect(change['fullDocument']['x']).to eq(5)
      expect(change['ns']).not_to be_nil
      expect(change['ns']['db']).to eq(SpecConfig.instance.test_db)
      expect(change['ns']['coll']).to eq(inventory.name)
      expect(change['documentKey']).not_to be_nil
      expect(change['documentKey']['_id']).to eq(1)
      expect(change['updateDescription']).not_to be_nil
      expect(change['updateDescription']['updatedFields']).not_to be_nil
      expect(change['updateDescription']['updatedFields']['x']).to eq(5)
      expect(change['updateDescription']['removedFields']).to eq([])
    end
  end

  context 'example 3 - resuming from a previous change' do
    it 'returns the correct change when resuming' do
      insert_thread = Thread.new do
        sleep 2
        inventory.insert_one(x: 1)
        inventory.insert_one(x: 2)
      end

      next_change = nil
      resume_stream_thread = Thread.new do
        # Start Changestream Example 3
        change_stream = inventory.watch
        cursor = change_stream.to_enum
        next_change = cursor.next

        resume_token = change_stream.resume_token

        new_cursor = inventory.watch([], resume_after: resume_token).to_enum
        resumed_change = new_cursor.next
        # End Changestream Example 3
      end

      insert_thread.value
      resumed_change = resume_stream_thread.value

      expect(next_change['_id']).not_to be_nil
      expect(next_change['_id']['_data']).not_to be_nil
      expect(next_change['operationType']).to eq('insert')
      expect(next_change['fullDocument']).not_to be_nil
      expect(next_change['fullDocument']['_id']).not_to be_nil
      expect(next_change['fullDocument']['x']).to eq(1)
      expect(next_change['ns']).not_to be_nil
      expect(next_change['ns']['db']).to eq(SpecConfig.instance.test_db)
      expect(next_change['ns']['coll']).to eq(inventory.name)
      expect(next_change['documentKey']).not_to be_nil
      expect(next_change['documentKey']['_id']).to
eq(next_change['fullDocument']['_id']) expect(resumed_change['_id']).not_to be_nil expect(resumed_change['_id']['_data']).not_to be_nil expect(resumed_change['operationType']).to eq('insert') expect(resumed_change['fullDocument']).not_to be_nil expect(resumed_change['fullDocument']['_id']).not_to be_nil expect(resumed_change['fullDocument']['x']).to eq(2) expect(resumed_change['ns']).not_to be_nil expect(resumed_change['ns']['db']).to eq(SpecConfig.instance.test_db) expect(resumed_change['ns']['coll']).to eq(inventory.name) expect(resumed_change['documentKey']).not_to be_nil expect(resumed_change['documentKey']['_id']).to eq(resumed_change['fullDocument']['_id']) expect(resumed_change.length).to eq(resumed_change.length) resumed_change.each { |key| expect(resumed_change[key]).to eq(resumed_change[key]) } end end context 'example 4 - using a pipeline to filter changes' do it 'returns the filtered changes' do ops_thread = Thread.new do sleep 2 inventory.insert_one(username: 'wallace') inventory.insert_one(username: 'alice') inventory.delete_one(username: 'wallace') end stream_thread = Thread.new do # Start Changestream Example 4 pipeline = [ { "$match" => { 'fullDocument.username' => 'alice' } }, { "$addFields" => { 'newField' => 'this is an added field!' } } ]; cursor = inventory.watch(pipeline).to_enum cursor.next # End Changestream Example 4 end ops_thread.value change = stream_thread.value expect(change['_id']).not_to be_nil expect(change['_id']['_data']).not_to be_nil expect(change['operationType']).to eq('insert') expect(change['fullDocument']).not_to be_nil expect(change['fullDocument']['_id']).not_to be_nil expect(change['fullDocument']['username']).to eq('alice') expect(change['newField']).not_to be_nil expect(change['newField']).to eq('this is an added field!') expect(change['ns']).not_to be_nil expect(change['ns']['db']).to eq(SpecConfig.instance.test_db) expect(change['ns']['coll']).to eq(inventory.name) expect(change['documentKey']).not_to be_nil expect(change['documentKey']['_id']).to eq(change['fullDocument']['_id']) end end end mongo-ruby-driver-2.21.3/spec/integration/change_stream_spec.rb000066400000000000000000000574061505113246500246200ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe 'Change stream integration' do retry_test tries: 4 require_mri max_example_run_time 7 min_server_fcv '3.6' require_topology :replica_set require_wired_tiger let(:fail_point_base_command) do { 'configureFailPoint' => "failCommand" } end # There is value in not clearing fail points between tests because # their triggering will distinguish fail points not being set vs # them not being triggered def clear_fail_point(collection) collection.client.use(:admin).command(fail_point_base_command.merge(mode: "off")) end class << self def clear_fail_point_before before do clear_fail_point(authorized_collection) end end end describe 'watch+next' do let(:change_stream) { authorized_collection.watch } shared_context 'returns a change document' do it 'returns a change document' do change_stream authorized_collection.insert_one(:a => 1) sleep 0.5 change = change_stream.to_enum.next expect(change).to be_a(BSON::Document) expect(change['operationType']).to eql('insert') doc = change['fullDocument'] expect(doc['_id']).to be_a(BSON::ObjectId) doc.delete('_id') expect(doc).to eql('a' => 1) end end shared_examples_for 'raises an exception' do it 'raises an exception and does not attempt to resume' do change_stream subscriber = Mrss::EventSubscriber.new 
authorized_client.subscribe(Mongo::Monitoring::COMMAND, subscriber) expect do change_stream.to_enum.next end.to raise_error(Mongo::Error::OperationFailure) aggregate_commands = subscriber.started_events.select { |e| e.command_name == 'aggregate' } expect(aggregate_commands.length).to be 0 get_more_commands = subscriber.started_events.select { |e| e.command_name == 'getMore' } expect(get_more_commands.length).to be 1 end end context 'no errors' do it 'next returns changes' do change_stream authorized_collection.insert_one(:a => 1) change = change_stream.to_enum.next expect(change).to be_a(BSON::Document) expect(change['operationType']).to eql('insert') doc = change['fullDocument'] expect(doc['_id']).to be_a(BSON::ObjectId) doc.delete('_id') expect(doc).to eql('a' => 1) end end context 'error on initial aggregation' do min_server_fcv '4.0' clear_fail_point_before let(:client) do authorized_client_without_any_retries end before do client.use(:admin).command(fail_point_base_command.merge( :mode => {:times => 1}, :data => {:failCommands => ['aggregate'], errorCode: 10107})) end it 'watch raises error' do expect do client['change-stream'].watch end.to raise_error(Mongo::Error::OperationFailure, /10107\b.*Failing command (due to|via) 'failCommand' failpoint/) end end context 'one error on getMore' do min_server_fcv '4.0' clear_fail_point_before context 'error on first getMore' do before do authorized_collection.client.use(:admin).command(fail_point_base_command.merge( mode: {times: 1}, data: { failCommands: ['getMore'], errorCode: error_code, errorLabels: error_labels, })) end context 'when the error is resumable' do let(:error_code) { 10107 } let(:error_labels) { ["ResumableChangeStreamError"] } it_behaves_like 'returns a change document' end context 'when the error is Interrupted' do let(:error_code) { 11601 } let(:error_labels) { [] } it_behaves_like 'raises an exception' end context 'when the error is CappedPositionLost' do let(:error_code) { 136 } let(:error_labels) { [] } it_behaves_like 'raises an exception' end context 'when the error is CursorKilled' do let(:error_code) { 237 } let(:error_labels) { [] } it_behaves_like 'raises an exception' end context 'when the error is ElectionInProgress' do let(:error_code) { 216 } let(:error_labels) { [] } it_behaves_like 'raises an exception' end end context 'error on a getMore other than first' do before do # Need to retrieve a change stream document successfully prior to # failing to have the resume token, otherwise the change stream # ignores documents inserted after the first aggregation # and the test gets stuck change_stream authorized_collection.insert_one(:a => 1) change_stream.to_enum.next authorized_collection.insert_one(:a => 1) authorized_collection.client.use(:admin).command(fail_point_base_command.merge( mode: {times: 1}, data: { failCommands: ['getMore'], errorCode: error_code, errorLabels: error_labels, })) end context 'when the error is resumable' do let(:error_code) { 10107 } let(:error_labels) { ["ResumableChangeStreamError"] } it_behaves_like 'returns a change document' end context 'when the error is Interrupted' do let(:error_code) { 11601 } let(:error_labels) { [] } it_behaves_like 'raises an exception' end context 'when the error is CappedPositionLost' do let(:error_code) { 136 } let(:error_labels) { [] } it_behaves_like 'raises an exception' end context 'when the error is CursorKilled' do let(:error_code) { 237 } let(:error_labels) { [] } it_behaves_like 'raises an exception' end end end context 'two errors on getMore' do 
min_server_fcv '4.0' clear_fail_point_before let(:error_code) { 10107 } let(:error_labels) { ["ResumableChangeStreamError"] } context 'error on first getMore' do before do authorized_collection.client.use(:admin).command(fail_point_base_command.merge( mode: {times: 2}, data: { failCommands: ['getMore'], errorCode: error_code, errorLabels: error_labels, })) end # this retries twice because aggregation resets retry count, # and ultimately succeeds and returns data it_behaves_like 'returns a change document' end context 'error on a getMore other than first' do before do # Need to retrieve a change stream document successfully prior to # failing to have the resume token, otherwise the change stream # ignores documents inserted after the first aggregation # and the test gets stuck change_stream authorized_collection.insert_one(:a => 1) change_stream.to_enum.next authorized_collection.insert_one(:a => 1) authorized_collection.client.use(:admin).command(fail_point_base_command.merge( mode: {times: 2}, data: { failCommands: ['getMore'], errorCode: error_code, errorLabels: error_labels, })) end # this retries twice because aggregation resets retry count, # and ultimately succeeds and returns data it_behaves_like 'returns a change document' end end context 'two errors on getMore followed by an error on aggregation' do min_server_fcv '4.0' clear_fail_point_before it 'next raises error' do change_stream sleep 0.5 authorized_collection.insert_one(:a => 1) sleep 0.5 enum = change_stream.to_enum authorized_collection.client.use(:admin).command(fail_point_base_command.merge( :mode => {:times => 2}, :data => {:failCommands => ['getMore', 'aggregate'], errorCode: 101})) sleep 0.5 expect do enum.next end.to raise_error(Mongo::Error::OperationFailure, /101\b.*Failing command (due to|via) 'failCommand' failpoint/) end after do # TODO see RUBY-3135. 
clear_fail_point(authorized_collection) end end end describe 'try_next' do let(:change_stream) { authorized_collection.watch } shared_context 'returns a change document' do it 'returns a change document' do change_stream sleep 0.5 authorized_collection.insert_one(:a => 1) sleep 0.5 change = change_stream.to_enum.try_next expect(change).to be_a(BSON::Document) expect(change['operationType']).to eql('insert') doc = change['fullDocument'] expect(doc['_id']).to be_a(BSON::ObjectId) doc.delete('_id') expect(doc).to eql('a' => 1) end end context 'there are changes' do it_behaves_like 'returns a change document' end context 'there are no changes' do it 'returns nil' do change_stream change = change_stream.to_enum.try_next expect(change).to be nil end end let(:error_code) { 10107 } let(:error_labels) { ["ResumableChangeStreamError"] } context 'one error on getMore' do min_server_fcv '4.0' clear_fail_point_before context 'error on first getMore' do before do authorized_collection.client.use(:admin).command(fail_point_base_command.merge( mode: {times: 1}, data: { failCommands: ['getMore'], errorCode: error_code, errorLabels: error_labels, })) end it_behaves_like 'returns a change document' end context 'error on a getMore other than first' do before do change_stream authorized_collection.insert_one(:a => 1) change_stream.to_enum.next authorized_collection.insert_one(:a => 1) authorized_collection.client.use(:admin).command(fail_point_base_command.merge( mode: {times: 1}, data: { failCommands: ['getMore'], errorCode: error_code, errorLabels: error_labels, })) end it_behaves_like 'returns a change document' end end context 'two errors on getMore' do min_server_fcv '4.0' clear_fail_point_before before do # Note: this fail point seems to be broken in 4.0 < 4.0.5 # (command to set it returns success but the fail point is not set). # The test succeeds in this case but doesn't test two errors on # getMore as no errors actually happen. # 4.0.5-dev server appears to correctly set the fail point. 
authorized_collection.client.use(:admin).command(fail_point_base_command.merge( mode: {times: 2}, data: { failCommands: ['getMore'], errorCode: error_code, errorLabels: error_labels, })) end # this retries twice because aggregation resets retry count, # and ultimately succeeds and returns data it_behaves_like 'returns a change document' end context 'two errors on getMore followed by an error on aggregation' do min_server_fcv '4.0' clear_fail_point_before context 'error on first getMore' do it 'next raises error' do change_stream sleep 0.5 authorized_collection.insert_one(:a => 1) sleep 0.5 enum = change_stream.to_enum authorized_collection.client.use(:admin).command(fail_point_base_command.merge( mode: {times: 3}, data: { failCommands: ['getMore', 'aggregate'], errorCode: error_code, errorLabels: error_labels, })) sleep 0.5 expect do enum.try_next end.to raise_error(Mongo::Error::OperationFailure, /10107\b.*Failing command (due to|via) 'failCommand' failpoint/) end end context 'error on a getMore other than first' do it 'next raises error' do change_stream authorized_collection.insert_one(:a => 1) change_stream.to_enum.next authorized_collection.insert_one(:a => 1) sleep 0.5 enum = change_stream.to_enum authorized_collection.client.use(:admin).command(fail_point_base_command.merge( mode: {times: 3}, data: { failCommands: ['getMore', 'aggregate'], errorCode: error_code, errorLabels: error_labels, })) sleep 0.5 expect do enum.try_next end.to raise_error(Mongo::Error::OperationFailure, /10107\b.*Failing command (due to|via) 'failCommand' failpoint/) end end end end describe ':start_at_operation_time option' do min_server_fcv '4.0' before do authorized_collection.delete_many end it 'respects start time prior to beginning of aggregation' do time = Time.now - 1 authorized_collection.insert_one(:a => 1) sleep 0.5 cs = authorized_collection.watch([], start_at_operation_time: time) document = cs.to_enum.next expect(document).to be_a(BSON::Document) end it 'respects start time after beginning of aggregation' do time = Time.now + 10 cs = authorized_collection.watch([], start_at_operation_time: time) sleep 0.5 authorized_collection.insert_one(:a => 1) sleep 0.5 document = cs.to_enum.try_next expect(document).to be_nil end it 'accepts a Time' do time = Time.now cs = authorized_collection.watch([], start_at_operation_time: time) end it 'accepts a BSON::Timestamp' do time = BSON::Timestamp.new(Time.now.to_i, 1) cs = authorized_collection.watch([], start_at_operation_time: time) end it 'rejects a Date' do time = Date.today expect do authorized_collection.watch([], start_at_operation_time: time) end.to raise_error(ArgumentError, 'Time must be a Time or a BSON::Timestamp instance') end it 'rejects an integer' do time = 1 expect do authorized_collection.watch([], start_at_operation_time: time) end.to raise_error(ArgumentError, 'Time must be a Time or a BSON::Timestamp instance') end end describe ':start_after option' do require_topology :replica_set min_server_fcv '4.2' let(:start_after) do stream = authorized_collection.watch([]) authorized_collection.insert_one(x: 1) start_after = stream.to_enum.next['_id'] end let(:stream) do authorized_collection.watch([], { start_after: start_after }) end let(:events) do start_after subscriber = Mrss::EventSubscriber.new authorized_client.subscribe(Mongo::Monitoring::COMMAND, subscriber) use_stream subscriber.started_events.select { |e| e.command_name == 'aggregate' } end context 'when an initial aggregation is run' do let(:use_stream) do stream end it 'sends 
startAfter' do expect(events.size >= 1).to eq(true) command = events.first.command expect(command['pipeline'].size == 1).to eq(true) expect(command['pipeline'].first.key?('$changeStream')).to eq(true) expect(command['pipeline'].first['$changeStream'].key?('startAfter')).to eq(true) end end context 'when resuming' do let(:use_stream) do stream authorized_collection.insert_one(x: 1) stream.to_enum.next authorized_collection.insert_one(x: 1) authorized_collection.client.use(:admin).command(fail_point_base_command.merge( mode: {times: 1}, data: { failCommands: ['getMore'], errorCode: error_code, errorLabels: error_labels, })) stream.to_enum.next end let(:error_code) { 10107 } let(:error_labels) { ["ResumableChangeStreamError"] } it 'does not startAfter even when passed in' do expect(events.size == 2).to eq(true) command = events.last.command expect(command['pipeline'].size == 1).to eq(true) expect(command['pipeline'].first.key?('$changeStream')).to eq(true) expect(command['pipeline'].first['$changeStream'].key?('startAfter')).to eq(false) end end end describe 'resume_token' do let(:stream) { authorized_collection.watch } let(:events) do subscriber = Mrss::EventSubscriber.new authorized_client.subscribe(Mongo::Monitoring::COMMAND, subscriber) use_stream subscriber.succeeded_events.select { |e| e.command_name == 'aggregate' || e.command_name === 'getMore' } end let!(:sample_resume_token) do cs = authorized_collection.watch authorized_collection.insert_one(a: 1) doc = cs.to_enum.next cs.close doc[:_id] end let(:use_stream) do stream authorized_collection.insert_one(x: 1) stream.to_enum.next end context 'when batch has been emptied' do context '4.2+' do min_server_fcv '4.2' it 'returns post batch resume token from current command response' do expect(events.size).to eq(2) aggregate_response = events.first.reply get_more_response = events.last.reply expect(aggregate_response['cursor'].key?('postBatchResumeToken')).to eq(true) expect(get_more_response['cursor'].key?('postBatchResumeToken')).to eq(true) res_tok = stream.resume_token expect(res_tok).to eq(get_more_response['cursor']['postBatchResumeToken']) expect(res_tok).to_not eq(aggregate_response['cursor']['postBatchResumeToken']) end end context '4.0-' do max_server_version '4.0' it 'returns _id of previous document returned if one exists' do doc = use_stream expect(stream.resume_token).to eq(doc['_id']) end context 'when start_after is specified' do min_server_fcv '4.2' it 'must return startAfter from the initial aggregate if the option was specified' do start_after = sample_resume_token authorized_collection.insert_one(:a => 1) stream = authorized_collection.watch([], { start_after: start_after }) expect(stream.resume_token).to eq(start_after) end end it 'must return resumeAfter from the initial aggregate if the option was specified' do resume_after = sample_resume_token authorized_collection.insert_one(:a => 1) stream = authorized_collection.watch([], { resume_after: resume_after }) expect(stream.resume_token).to eq(resume_after) end it 'must be empty if neither the startAfter nor resumeAfter options were specified' do authorized_collection.insert_one(:a => 1) stream = authorized_collection.watch expect(stream.resume_token).to be(nil) end end end context 'before batch has been emptied' do it 'returns _id of previous document returned' do stream authorized_collection.insert_one(:a => 1) authorized_collection.insert_one(:a => 1) authorized_collection.insert_one(:a => 1) stream.to_enum.next change = stream.to_enum.next 
expect(stream.resume_token).to eq(change['_id']) end end # Note that the watch method executes the initial aggregate command context 'for non-empty, non-iterated batch, only the initial aggregate command executed' do let (:use_stream) do authorized_collection.insert_one(:a => 1) stream end context 'if startAfter was specified' do min_server_fcv '4.2' let (:stream) do authorized_collection.watch([], { start_after: sample_resume_token }) end it 'must return startAfter from the initial aggregate' do # Need to sample a doc id from the stream before we use the stream, so # the events subscriber does not record these commands as part of the example. sample_resume_token # Verify that only the initial aggregate command was executed expect(events.size).to eq(1) expect(events.first.command_name).to eq('aggregate') expect(stream.resume_token).to eq(sample_resume_token) end end context 'if resumeAfter was specified' do let (:stream) do authorized_collection.watch([], { resume_after: sample_resume_token }) end it 'must return resumeAfter from the initial aggregate' do sample_resume_token expect(events.size).to eq(1) expect(events.first.command_name).to eq('aggregate') expect(stream.resume_token).to eq(sample_resume_token) end end context 'if neither the startAfter nor resumeAfter options were specified' do it 'must be empty' do expect(events.size).to eq(1) expect(events.first.command_name).to eq('aggregate') expect(stream.resume_token).to be(nil) end end end context 'for non-empty, non-iterated batch directly after get_more' do let(:next_doc) do authorized_collection.insert_one(:a => 1) stream.to_enum.next end let(:do_get_more) do authorized_collection.insert_one(:a => 1) stream.instance_variable_get('@cursor').get_more end context '4.2+' do min_server_fcv '4.2' let(:use_stream) do stream next_doc do_get_more end it 'returns post batch resume token from previous command response' do expect(events.size).to eq(3) expect(events.last.command_name).to eq('getMore') first_get_more = events[1].reply second_get_more = events[2].reply expect(first_get_more['cursor'].key?('postBatchResumeToken')).to eq(true) expect(second_get_more['cursor'].key?('postBatchResumeToken')).to eq(true) res_tok = stream.resume_token expect(res_tok).to eq(first_get_more['cursor']['postBatchResumeToken']) expect(res_tok).not_to eq(second_get_more['cursor']['postBatchResumeToken']) end end context '4.0-' do max_server_version '4.0' context 'if a document was returned' do let(:use_stream) do stream next_doc do_get_more end it 'returns _id of previous document' do expect(events.last.command_name).to eq('getMore') expect(stream.resume_token).to eq(next_doc['_id']) end end context 'if a document was not returned' do let(:use_stream) do stream do_get_more end context 'when resumeAfter is specified' do let (:stream) do authorized_collection.watch([], { resume_after: sample_resume_token }) end it 'must return resumeAfter from the initial aggregate if the option was specified' do sample_resume_token expect(events.last.command_name).to eq('getMore') expect(stream.resume_token).to eq(sample_resume_token) end end context 'if neither the startAfter nor resumeAfter options were specified' do it 'must be empty' do expect(events.last.command_name).to eq('getMore') expect(stream.resume_token).to be(nil) end end end end end end end mongo-ruby-driver-2.21.3/spec/integration/check_clean_slate_spec.rb000066400000000000000000000007361505113246500254210ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'lite_spec_helper' # 
This test can be used to manually verify that there are no leaked # background threads - execute it after executing another test (in the same # rspec run) that is suspected to leak background threads, such as by # running: # # rspec your_spec.rb spec/integration/check_clean_slate_spec.rb describe 'Check clean slate' do clean_slate_for_all_if_possible it 'checks' do # Nothing end end mongo-ruby-driver-2.21.3/spec/integration/client_authentication_options_spec.rb000066400000000000000000000366571505113246500301550ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'lite_spec_helper' describe 'Client authentication options' do let(:uri) { "mongodb://#{credentials}127.0.0.1:27017/#{options}" } let(:credentials) { nil } let(:options) { nil } let(:client_opts) { {} } let(:client) { new_local_client_nmio(uri, client_opts) } let(:auth_source_in_options) { client.options[:auth_source] } let(:final_auth_source) { Mongo::Auth::User.new(client.options).auth_source } let(:user) { 'username' } let(:pwd) { 'password' } shared_examples_for 'a supported auth mechanism' do context 'with URI options' do let(:credentials) { "#{user}:#{pwd}@" } let(:options) { "?authMechanism=#{auth_mech_string}" } it 'creates a client with the correct auth mechanism' do expect(client.options[:auth_mech]).to eq(auth_mech_sym) end end context 'with client options' do let(:client_opts) do { auth_mech: auth_mech_sym, user: user, password: pwd, } end it 'creates a client with the correct auth mechanism' do expect(client.options[:auth_mech]).to eq(auth_mech_sym) end end end shared_examples_for 'auth mechanism that uses database or default auth source' do |default_auth_source| context 'where no database is provided' do context 'with URI options' do let(:credentials) { "#{user}:#{pwd}@" } let(:options) { "?authMechanism=#{auth_mech_string}" } it 'creates a client with default auth source' do expect(auth_source_in_options).to eq(default_auth_source) expect(final_auth_source).to eq(default_auth_source) end end context 'with client options' do let(:client_opts) do { auth_mech: auth_mech_sym, user: user, password: pwd, } end it 'creates a client with default auth source' do expect(auth_source_in_options).to eq(default_auth_source) expect(final_auth_source).to eq(default_auth_source) end end end context 'where database is provided' do let(:database) { 'test-db' } context 'with URI options' do let(:credentials) { "#{user}:#{pwd}@" } let(:options) { "#{database}?authMechanism=#{auth_mech_string}" } it 'creates a client with database as auth source' do expect(auth_source_in_options).to eq(database) expect(final_auth_source).to eq(database) end end context 'with client options' do let(:client_opts) do { auth_mech: auth_mech_sym, user: user, password: pwd, database: database } end it 'creates a client with database as auth source' do expect(auth_source_in_options).to eq(database) expect(final_auth_source).to eq(database) end end end end shared_examples_for 'an auth mechanism with ssl' do let(:ca_file_path) { '/path/to/ca.pem' } let(:cert_path) { '/path/to/client.pem' } context 'with URI options' do let(:credentials) { "#{user}:#{pwd}@" } let(:options) { "?authMechanism=#{auth_mech_string}&tls=true&tlsCAFile=#{ca_file_path}&tlsCertificateKeyFile=#{cert_path}" } it 'creates a client with ssl properties' do expect(client.options[:ssl]).to be true expect(client.options[:ssl_cert]).to eq(cert_path) expect(client.options[:ssl_ca_cert]).to eq(ca_file_path) expect(client.options[:ssl_key]).to eq(cert_path) end end 
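# Illustrative sketch (not executed by these examples): with the PLAIN
# mechanism, the URI assembled in the context above expands to roughly the
# following; the host, credentials, and file paths are placeholders.
#
#   Mongo::Client.new(
#     'mongodb://username:password@127.0.0.1:27017/' \
#     '?authMechanism=PLAIN&tls=true' \
#     '&tlsCAFile=/path/to/ca.pem&tlsCertificateKeyFile=/path/to/client.pem'
#   )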
context 'with client options' do let(:client_opts) do { auth_mech: auth_mech_sym, ssl: true, ssl_cert: cert_path, ssl_key: cert_path, ssl_ca_cert: ca_file_path, user: user, password: pwd } end it 'creates a client with ssl properties' do expect(client.options[:ssl]).to be true expect(client.options[:ssl_cert]).to eq(cert_path) expect(client.options[:ssl_ca_cert]).to eq(ca_file_path) expect(client.options[:ssl_key]).to eq(cert_path) end end end shared_examples_for 'an auth mechanism that does not support auth_mech_properties' do context 'with URI options' do let(:credentials) { "#{user}:#{pwd}@" } let(:options) { "?authMechanism=#{auth_mech_string}&authMechanismProperties=CANONICALIZE_HOST_NAME:true" } it 'raises an exception on client creation' do expect { client }.to raise_error(Mongo::Auth::InvalidConfiguration, /mechanism_properties are not supported/) end end context 'with client options' do let(:client_opts) do { auth_mech: auth_mech_sym, user: user, password: pwd, auth_mech_properties: { canonicalize_host_name: true } } end it 'raises an exception on client creation' do expect { client }.to raise_error(Mongo::Auth::InvalidConfiguration, /mechanism_properties are not supported/) end end end shared_examples_for 'an auth mechanism that does not support invalid auth sources' do context 'with URI options' do let(:credentials) { "#{user}:#{pwd}@" } let(:options) { "?authMechanism=#{auth_mech_string}&authSource=foo" } it 'raises an exception on client creation' do expect { client }.to raise_error(Mongo::Auth::InvalidConfiguration, /invalid auth source/) end end context 'with client options' do let(:client_opts) do { auth_mech: auth_mech_sym, user: user, password: pwd, auth_source: 'foo' } end it 'raises an exception on client creation' do expect { client }.to raise_error(Mongo::Auth::InvalidConfiguration, /invalid auth source/) end end end context 'with MONGODB-CR auth mechanism' do let(:auth_mech_string) { 'MONGODB-CR' } let(:auth_mech_sym) { :mongodb_cr } it_behaves_like 'a supported auth mechanism' it_behaves_like 'auth mechanism that uses database or default auth source', 'admin' it_behaves_like 'an auth mechanism that does not support auth_mech_properties' end context 'with SCRAM-SHA-1 auth mechanism' do let(:auth_mech_string) { 'SCRAM-SHA-1' } let(:auth_mech_sym) { :scram } it_behaves_like 'a supported auth mechanism' it_behaves_like 'auth mechanism that uses database or default auth source', 'admin' it_behaves_like 'an auth mechanism that does not support auth_mech_properties' end context 'with SCRAM-SHA-256 auth mechanism' do let(:auth_mech_string) { 'SCRAM-SHA-256' } let(:auth_mech_sym) { :scram256 } it_behaves_like 'a supported auth mechanism' it_behaves_like 'auth mechanism that uses database or default auth source', 'admin' it_behaves_like 'an auth mechanism that does not support auth_mech_properties' end context 'with GSSAPI auth mechanism' do require_mongo_kerberos let(:auth_mech_string) { 'GSSAPI' } let(:auth_mech_sym) { :gssapi } it_behaves_like 'a supported auth mechanism' it_behaves_like 'an auth mechanism that does not support invalid auth sources' let(:auth_mech_properties) { { canonicalize_host_name: true, service_name: 'other'} } context 'with URI options' do let(:credentials) { "#{user}:#{pwd}@" } context 'with default auth mech properties' do let(:options) { '?authMechanism=GSSAPI' } it 'correctly sets client options' do expect(client.options[:auth_mech_properties]).to eq({ 'service_name' => 'mongodb' }) end end end context 'with client options' do let(:client_opts) 
do { auth_mech: :gssapi, user: user, password: pwd } end it 'sets default auth mech properties' do expect(client.options[:auth_mech_properties]).to eq({ 'service_name' => 'mongodb' }) end end context 'when properties are given but not service name' do context 'with URI options' do let(:credentials) { "#{user}:#{pwd}@" } context 'with default auth mech properties' do let(:options) { '?authMechanism=GSSAPI&authMechanismProperties=service_realm:foo' } it 'sets service name to mongodb' do expect(client.options[:auth_mech_properties]).to eq( 'service_name' => 'mongodb', 'service_realm' => 'foo', ) end end end context 'with client options' do let(:client_opts) do { auth_mech: :gssapi, user: user, password: pwd, auth_mech_properties: { service_realm: 'foo', }.freeze, }.freeze end it 'sets default auth mech properties' do expect(client.options[:auth_mech_properties]).to eq( 'service_name' => 'mongodb', 'service_realm' => 'foo', ) end end end end context 'with PLAIN auth mechanism' do let(:auth_mech_string) { 'PLAIN' } let(:auth_mech_sym) { :plain } it_behaves_like 'a supported auth mechanism' it_behaves_like 'auth mechanism that uses database or default auth source', '$external' it_behaves_like 'an auth mechanism with ssl' it_behaves_like 'an auth mechanism that does not support auth_mech_properties' end context 'with MONGODB-X509 auth mechanism' do let(:auth_mech_string) { 'MONGODB-X509' } let(:auth_mech_sym) { :mongodb_x509 } let(:pwd) { nil } it_behaves_like 'a supported auth mechanism' it_behaves_like 'an auth mechanism with ssl' it_behaves_like 'an auth mechanism that does not support auth_mech_properties' it_behaves_like 'an auth mechanism that does not support invalid auth sources' context 'with URI options' do let(:credentials) { "#{user}@" } let(:options) { '?authMechanism=MONGODB-X509' } it 'sets default auth source' do expect(auth_source_in_options).to eq('$external') expect(final_auth_source).to eq('$external') end context 'when username is not provided' do let(:credentials) { '' } it 'recognizes the mechanism with no username' do expect(client.options[:user]).to be_nil end end context 'when a password is provided' do let(:credentials) { "#{user}:password@" } it 'raises an exception on client creation' do expect do client end.to raise_error(Mongo::Auth::InvalidConfiguration, /Password is not supported/) end end end context 'with client options' do let(:client_opts) { { auth_mech: :mongodb_x509, user: user } } it 'sets default auth source' do expect(auth_source_in_options).to eq('$external') expect(final_auth_source).to eq('$external') end context 'when username is not provided' do let(:client_opts) { { auth_mech: :mongodb_x509} } it 'recognizes the mechanism with no username' do expect(client.options[:user]).to be_nil end end context 'when a password is provided' do let(:client_opts) { { auth_mech: :mongodb_x509, user: user, password: 'password' } } it 'raises an exception on client creation' do expect do client end.to raise_error(Mongo::Auth::InvalidConfiguration, /Password is not supported/) end end end end context 'with no auth mechanism provided' do context 'with URI options' do context 'with no credentials' do it 'creates a client without credentials' do expect(client.options[:user]).to be_nil expect(client.options[:password]).to be_nil end end context 'with empty username' do let(:credentials) { '@' } it 'raises an exception' do expect do client end.to raise_error(Mongo::Auth::InvalidConfiguration, /Empty username is not supported/) end end end context 'with client options' do 
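# Options-hash equivalents of the URI cases above (a sketch; the address is
# a placeholder): omitting user/password yields an unauthenticated client,
# while an empty username is rejected, as asserted below.
#
#   Mongo::Client.new(['127.0.0.1:27017'])                         # no credentials
#   Mongo::Client.new(['127.0.0.1:27017'], user: '', password: '') # raises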
context 'with no credentials' do it 'creates a client without credentials' do expect(client.options[:user]).to be_nil expect(client.options[:password]).to be_nil end end context 'with empty username' do let(:client_opts) { { user: '', password: '' } } it 'raises an exception' do expect do client end.to raise_error(Mongo::Auth::InvalidConfiguration, /Empty username is not supported/) end end end end context 'with auth source provided' do let(:auth_source) { 'foo' } context 'with URI options' do let(:options) { "?authSource=#{auth_source}" } it 'correctly sets auth source on the client' do expect(auth_source_in_options).to eq(auth_source) expect(final_auth_source).to eq(auth_source) end end context 'with client options' do let(:client_opts) { { auth_source: auth_source } } it 'correctly sets auth source on the client' do expect(auth_source_in_options).to eq(auth_source) expect(final_auth_source).to eq(auth_source) end end end context 'with auth mechanism properties' do let(:service_name) { 'service name' } let(:canonicalize_host_name) { true } let(:service_realm) { 'service_realm' } let(:auth_mechanism_properties) do { service_name: service_name, canonicalize_host_name: canonicalize_host_name, service_realm: service_realm, }.freeze end shared_examples 'correctly sets auth mechanism properties on the client' do it 'correctly sets auth mechanism properties on the client' do expect(client.options[:auth_mech_properties]).to eq( 'service_name' => service_name, 'canonicalize_host_name' => canonicalize_host_name, 'service_realm' => service_realm, ) end end context 'with URI options' do let(:options) do "?authMechanismProperties=SERVICE_name:#{service_name}," + "CANONICALIZE_HOST_name:#{canonicalize_host_name}," + "SERVICE_realm:#{service_realm}" end include_examples 'correctly sets auth mechanism properties on the client' end context 'with client options' do [:auth_mech_properties, 'auth_mech_properties'].each do |key| context "using #{key.class} keys" do let(:client_opts) { { key => auth_mechanism_properties } } include_examples 'correctly sets auth mechanism properties on the client' context 'when options are given in mixed case' do let(:auth_mechanism_properties) do { service_NAME: service_name, canonicalize_host_NAME: canonicalize_host_name, service_REALM: service_realm, }.freeze end context 'using URI and options' do let(:client) { new_local_client_nmio(uri, client_opts) } include_examples 'correctly sets auth mechanism properties on the client' end context 'using host and options' do let(:client) { new_local_client_nmio(['localhost'], client_opts) } include_examples 'correctly sets auth mechanism properties on the client' end end end end end end end mongo-ruby-driver-2.21.3/spec/integration/client_connectivity_spec.rb000066400000000000000000000023001505113246500260530ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' # This test is for checking connectivity of the test client to the # test cluster. In other words, it is a test that the test suite is # configured correctly. 
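# A minimal standalone version of the same check (a sketch assuming a
# locally running deployment; not part of the spec run):
#
#   require 'mongo'
#   client = Mongo::Client.new(['127.0.0.1:27017'], database: 'test')
#   client.database.command(ping: 1) # raises if the deployment is unreachable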
describe 'Client connectivity' do shared_examples_for 'is correctly configured' do it 'is configured with the correct database' do expect(client.options[:database]).to eq(SpecConfig.instance.test_db) end it 'has correct database in the cluster' do expect(client.cluster.options[:database]).to eq(SpecConfig.instance.test_db) end end context 'no auth' do let(:client) { ClientRegistry.instance.global_client('basic') } it_behaves_like 'is correctly configured' it 'connects and is usable' do resp = client.database.command(ping: 1) expect(resp).to be_a(Mongo::Operation::Result) end end context 'with auth' do let(:client) { ClientRegistry.instance.global_client('authorized') } it_behaves_like 'is correctly configured' it 'connects and is usable' do client['connectivity_spec'].insert_one(foo: 1) expect(client['connectivity_spec'].find(foo: 1).first['foo']).to eq(1) end end end mongo-ruby-driver-2.21.3/spec/integration/client_construction_aws_auth_spec.rb000066400000000000000000000146701505113246500277770ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe 'Client construction with AWS auth' do require_aws_auth let(:client) do new_local_client(SpecConfig.instance.addresses, SpecConfig.instance.ssl_options.merge( auth_mech: :aws, connect_timeout: 3.44, socket_timeout: 3.45, server_selection_timeout: 3.46)) end let(:authenticated_user_info) do # https://stackoverflow.com/questions/21414608/mongodb-show-current-user info = client.database.command(connectionStatus: 1).documents.first info[:authInfo][:authenticatedUsers].first end let(:authenticated_user_name) { authenticated_user_info[:user] } shared_examples_for 'connects successfully' do it 'connects successfully' do client['foo'].insert_one(test: true) end end context 'credentials specified explicitly' do let(:username) { ENV.fetch('MONGO_RUBY_DRIVER_AWS_AUTH_ACCESS_KEY_ID') } let(:password) { ENV.fetch('MONGO_RUBY_DRIVER_AWS_AUTH_SECRET_ACCESS_KEY') } let(:session_token) { ENV.fetch('MONGO_RUBY_DRIVER_AWS_AUTH_SESSION_TOKEN') } let(:address_strs) { SpecConfig.instance.addresses.join(',') } context 'regular credentials' do require_auth 'aws-regular' context 'via Ruby options' do let(:client) do new_local_client(SpecConfig.instance.addresses, SpecConfig.instance.ssl_options.merge( auth_mech: :aws, user: username, password: password, connect_timeout: 3.34, socket_timeout: 3.35, server_selection_timeout: 3.36)) end it_behaves_like 'connects successfully' it 'uses the expected user' do puts "Authenticated as #{authenticated_user_name}" authenticated_user_name.should =~ /^arn:aws:iam:/ end end context 'via URI' do let(:client) do new_local_client("mongodb://#{CGI.escape(username)}:#{CGI.escape(password)}@#{address_strs}/test?authMechanism=MONGODB-AWS&serverSelectionTimeoutMS=3.26") end it_behaves_like 'connects successfully' it 'uses the expected user' do puts "Authenticated as #{authenticated_user_name}" authenticated_user_name.should =~ /^arn:aws:iam:/ end end end context 'temporary credentials' do require_auth 'aws-assume-role' context 'via Ruby options' do let(:client) do new_local_client(SpecConfig.instance.addresses, SpecConfig.instance.ssl_options.merge( auth_mech: :aws, user: username, password: password, auth_mech_properties: { aws_session_token: session_token, }, connect_timeout: 3.34, socket_timeout: 3.35, server_selection_timeout: 3.36)) end it_behaves_like 'connects successfully' it 'uses the expected user' do puts "Authenticated as #{authenticated_user_name}" 
authenticated_user_name.should =~ /^arn:aws:sts:/ end end context 'via URI' do let(:client) do new_local_client("mongodb://#{CGI.escape(username)}:#{CGI.escape(password)}@#{address_strs}/test?authMechanism=MONGODB-AWS&serverSelectionTimeoutMS=3.26&authMechanismProperties=AWS_SESSION_TOKEN:#{CGI.escape(session_token)}") end it_behaves_like 'connects successfully' it 'uses the expected user' do puts "Authenticated as #{authenticated_user_name}" authenticated_user_name.should =~ /^arn:aws:sts:/ end end end end context 'credentials specified via environment' do require_auth 'aws-regular', 'aws-assume-role' context 'no credentials given explicitly to Client constructor' do context 'credentials not provided in environment' do local_env( 'AWS_ACCESS_KEY_ID' => nil, 'AWS_SECRET_ACCESS_KEY' => nil, 'AWS_SESSION_TOKEN' => nil, ) it 'does not connect' do lambda do client['foo'].insert_one(test: true) end.should raise_error(Mongo::Auth::Aws::CredentialsNotFound, /Could not locate AWS credentials/) end end context 'credentials provided in environment' do local_env do { 'AWS_ACCESS_KEY_ID' => ENV.fetch('MONGO_RUBY_DRIVER_AWS_AUTH_ACCESS_KEY_ID'), 'AWS_SECRET_ACCESS_KEY' => ENV.fetch('MONGO_RUBY_DRIVER_AWS_AUTH_SECRET_ACCESS_KEY'), 'AWS_SESSION_TOKEN' => ENV['MONGO_RUBY_DRIVER_AWS_AUTH_SESSION_TOKEN'], } end it_behaves_like 'connects successfully' context 'when using regular credentials' do require_auth 'aws-regular' it 'uses the expected user' do puts "Authenticated as #{authenticated_user_name}" authenticated_user_name.should =~ /^arn:/ authenticated_user_name.should_not =~ /^arn:.*assumed-role/ end end context 'when using assume role credentials' do require_auth 'aws-assume-role' it 'uses the expected user' do puts "Authenticated as #{authenticated_user_name}" authenticated_user_name.should =~ /^arn:.*assumed-role/ end end end end end context 'credentials specified via instance/task metadata' do require_auth 'aws-ec2', 'aws-ecs', 'aws-web-identity' before(:all) do # No explicit credentials are expected in the tested configurations ENV['AWS_ACCESS_KEY_ID'].should be_nil end it_behaves_like 'connects successfully' context 'when using ec2 instance role' do require_auth 'aws-ec2' it 'uses the expected user' do puts "Authenticated as #{authenticated_user_name}" authenticated_user_name.should =~ /^arn:aws:sts:.*assumed-role.*instance_profile_role/ end end context 'when using ecs task role' do require_auth 'aws-ecs' it 'uses the expected user' do puts "Authenticated as #{authenticated_user_name}" authenticated_user_name.should =~ /^arn:aws:sts:.*assumed-role.*ecstaskexecutionrole/i end end context 'when using web identity' do require_auth 'aws-web-identity' it 'uses the expected user' do puts "Authenticated as #{authenticated_user_name}" authenticated_user_name.should =~ /^arn:aws:sts:.*assumed-role.*webIdentityTestRole/i end end end end mongo-ruby-driver-2.21.3/spec/integration/client_construction_spec.rb000066400000000000000000000305531505113246500261020ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' # Create a client with all possible configurations (forcing/discovering each # topology type) and ensure the resulting client is usable. 
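# For reference, the :connect option exercised below accepts :direct,
# :replica_set, :sharded and :load_balanced; omitting it lets the driver
# discover the topology. A sketch (address and set name are placeholders):
#
#   Mongo::Client.new(['127.0.0.1:27017'], connect: :direct)
#   Mongo::Client.new(['127.0.0.1:27017'],
#     connect: :replica_set, replica_set: 'rs0')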
describe 'Client construction' do let(:base_options) do SpecConfig.instance.test_options.merge( server_selection_timeout: 5, database: SpecConfig.instance.test_db, ).merge(SpecConfig.instance.credentials_or_external_user( user: SpecConfig.instance.test_user.name, password: SpecConfig.instance.test_user.password, auth_source: 'admin', )) end context 'in single topology' do require_topology :single it 'discovers standalone' do options = base_options.dup options.delete(:connect) client = ClientRegistry.instance.new_local_client([SpecConfig.instance.addresses.first], options) client['client_construction'].insert_one(test: 1) expect(client.cluster.topology).to be_a(Mongo::Cluster::Topology::Single) expect(client.options[:connect]).to be nil end it 'connects directly' do client = ClientRegistry.instance.new_local_client([SpecConfig.instance.addresses.first], base_options.merge(connect: :direct)) client['client_construction'].insert_one(test: 1) expect(client.cluster.topology).to be_a(Mongo::Cluster::Topology::Single) expect(client.options[:connect]).to eq :direct end it 'creates connection pool and keeps it populated' do client = ClientRegistry.instance.new_local_client([SpecConfig.instance.addresses.first], base_options.merge(min_pool_size: 1, max_pool_size: 1)) # allow connection pool to populate sleep 0.1 server = client.cluster.next_primary expect(server.pool.size).to eq(1) client['client_construction'].insert_one(test: 1) expect(server.pool.size).to eq(1) end end context 'in replica set topology' do require_topology :replica_set it 'discovers replica set' do options = base_options.dup options.delete(:connect) options.delete(:replica_set) client = ClientRegistry.instance.new_local_client([SpecConfig.instance.addresses.first], options) client['client_construction'].insert_one(test: 1) expect(client.cluster.topology).to be_a(Mongo::Cluster::Topology::ReplicaSetWithPrimary) expect(client.options[:connect]).to be nil expect(client.options[:replica_set]).to be nil end it 'forces replica set' do replica_set_name = ClusterConfig.instance.replica_set_name expect(replica_set_name).not_to be nil client = ClientRegistry.instance.new_local_client([SpecConfig.instance.addresses.first], base_options.merge(connect: :replica_set, replica_set: replica_set_name)) client['client_construction'].insert_one(test: 1) expect(client.cluster.topology).to be_a(Mongo::Cluster::Topology::ReplicaSetWithPrimary) expect(client.options[:connect]).to be :replica_set expect(client.options[:replica_set]).to eq(replica_set_name) end it 'connects directly' do primary_address = ClusterConfig.instance.primary_address_str client = ClientRegistry.instance.new_local_client([primary_address], base_options.merge(connect: :direct)) client['client_construction'].insert_one(test: 1) expect(client.cluster.topology).to be_a(Mongo::Cluster::Topology::Single) expect(client.options[:connect]).to eq :direct end context 'direct connection with mismached me' do let(:address) { ClusterConfig.instance.alternate_address.to_s } let(:client) do new_local_client([address], SpecConfig.instance.test_options) end let(:server) { client.cluster.next_primary } it 'sets server type to primary' do expect(server.description).to be_primary end end # This test requires a PSA deployment. The port number is fixed for our # Evergreen/Docker setups. 
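# Such a deployment can be stood up locally with mlaunch, e.g. (a sketch;
# with two data-bearing nodes starting at port 27017, the arbiter lands on
# port 27019, matching the address used below):
#
#   mlaunch init --replicaset --nodes 2 --arbiter --port 27017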
context 'when directly connecting to arbiters' do let(:options) do SpecConfig.instance.test_options.tap do |opt| opt.delete(:connect) opt.delete(:replica_set) opt.update(direct_connection: true) end end let(:client) do new_local_client(['localhost:27019'], options) end let(:response) { client.command(ismaster: 1).documents.first } it 'connects' do response.fetch('arbiterOnly').should be true end end end context 'in sharded topology' do require_topology :sharded it 'connects to sharded cluster' do options = base_options.dup options.delete(:connect) client = ClientRegistry.instance.new_local_client([SpecConfig.instance.addresses.first], base_options.merge(connect: :sharded)) client['client_construction'].insert_one(test: 1) expect(client.cluster.topology).to be_a(Mongo::Cluster::Topology::Sharded) expect(client.options[:connect]).to be :sharded end it 'connects directly' do primary_address = ClusterConfig.instance.primary_address_str client = ClientRegistry.instance.new_local_client([SpecConfig.instance.addresses.first], base_options.merge(connect: :direct)) client['client_construction'].insert_one(test: 1) expect(client.cluster.topology).to be_a(Mongo::Cluster::Topology::Single) expect(client.options[:connect]).to eq :direct end end context 'when time is frozen' do let(:now) { Time.now } before do allow(Time).to receive(:now).and_return(now) end it 'connects' do client = ClientRegistry.instance.new_local_client([SpecConfig.instance.addresses.first], SpecConfig.instance.test_options) expect(client.cluster.topology).not_to be_a(Mongo::Cluster::Topology::Unknown) end end context 'with auto encryption options' do require_libmongocrypt require_enterprise min_server_fcv '4.2' # Diagnostics of leaked background threads only, these tests do not # actually require a clean slate.
https://jira.mongodb.org/browse/RUBY-2138 clean_slate include_context 'define shared FLE helpers' include_context 'with local kms_providers' let(:options) { { auto_encryption_options: auto_encryption_options } } let(:auto_encryption_options) do { key_vault_client: key_vault_client, key_vault_namespace: key_vault_namespace, kms_providers: kms_providers, # Spawn mongocryptd on non-default port for sharded cluster tests extra_options: extra_options, } end let(:client) do ClientRegistry.instance.new_local_client([SpecConfig.instance.addresses.first], options) end context 'with AWS kms providers with empty string credentials' do let(:auto_encryption_options) do { key_vault_namespace: key_vault_namespace, kms_providers: { aws: { access_key_id: '', secret_access_key: '', } }, # Spawn mongocryptd on non-default port for sharded cluster tests extra_options: extra_options, } end it 'raises an exception' do expect do client end.to raise_error(ArgumentError, /The access_key_id option must be a String with at least one character; it is currently an empty string/) end end context 'with default key vault client' do let(:key_vault_client) { nil } shared_examples 'creates a working key vault client' do it 'creates a working key vault client' do key_vault_client = client.encrypter.key_vault_client result = key_vault_client[:test].insert_one(test: 1) expect(result).to be_ok end end context 'when top-level max pool size is not 0' do include_examples 'creates a working key vault client' shared_examples 'limited connection pool' do it 'creates a key vault client with a different cluster than the existing client' do key_vault_client = client.encrypter.key_vault_client expect(key_vault_client.cluster).not_to eq(client.cluster) end # min pool size for the key vault client can be greater than 0 # when the key vault client is the same as the top-level client. # This is OK because we aren't making any more connections for FLE, # the minimum was requested by application for its own needs. 
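# Sketch of the relationship asserted by these examples (the namespace and
# pool size are placeholders):
#
#   client = Mongo::Client.new(addresses, max_pool_size: 42,
#     auto_encryption_options: { key_vault_namespace: 'admin.datakeys',
#                                kms_providers: kms_providers })
#   client.encrypter.key_vault_client.options[:min_pool_size] # => 0
#   client.encrypter.key_vault_client.options[:max_pool_size] # => 42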
it 'uses min pool size 0 for key vault client' do key_vault_client = client.encrypter.key_vault_client key_vault_client.options[:min_pool_size].should be 0 end end context 'when top-level max pool size is not specified' do before do client.options[:max_pool_size].should be nil end include_examples 'limited connection pool' it 'uses unspecified max pool size for key vault client' do key_vault_client = client.encrypter.key_vault_client key_vault_client.options[:max_pool_size].should be nil end end context 'when top-level max pool size is specified' do let(:options) do { auto_encryption_options: auto_encryption_options, max_pool_size: 42, } end include_examples 'limited connection pool' it 'uses the same max pool size for key vault client' do key_vault_client = client.encrypter.key_vault_client key_vault_client.options[:max_pool_size].should be 42 end end end context 'when top-level max pool size is 0' do let(:options) do { auto_encryption_options: auto_encryption_options, max_pool_size: 0, } end before do client.options[:max_pool_size].should be 0 end include_examples 'creates a working key vault client' it 'creates a key vault client with the same cluster as the existing client' do key_vault_client = client.encrypter.key_vault_client expect(key_vault_client.cluster).to eq(client.cluster) end end end end context 'when seed addresses are repeated in host list' do require_topology :single let(:primary_address) do ClusterConfig.instance.primary_address_host end let(:client) do new_local_client([primary_address, primary_address], SpecConfig.instance.test_options) end it 'deduplicates the addresses' do expect(client.cluster.addresses).to eq([Mongo::Address.new(primary_address)]) end end context 'when seed addresses are repeated in URI' do require_topology :single let(:primary_address) do ClusterConfig.instance.primary_address_host end let(:client) do new_local_client("mongodb://#{primary_address},#{primary_address}", SpecConfig.instance.test_options) end it 'deduplicates the addresses' do expect(client.cluster.addresses).to eq([Mongo::Address.new(primary_address)]) end end context 'when deployment is not a sharded cluster' do require_topology :single, :replica_set let(:client) do ClientRegistry.instance.new_local_client( [SpecConfig.instance.addresses.first], SpecConfig.instance.test_options.merge(options), ) end context 'when load-balanced topology is requested' do let(:options) do {connect: :load_balanced, replica_set: nil} end it 'creates the client successfully' do client.should be_a(Mongo::Client) end it 'fails all operations' do lambda do client.command(ping: true) end.should raise_error(Mongo::Error::BadLoadBalancerTarget) end end end context 'when in load-balanced mode' do require_topology :load_balanced let(:client) do ClientRegistry.instance.new_local_client( [SpecConfig.instance.addresses.first], SpecConfig.instance.test_options.merge(options), ) end context 'when load-balanced topology is requested via the URI option' do let(:options) do {connect: nil, load_balanced: true} end it 'creates the client successfully' do client.should be_a(Mongo::Client) end it 'fails all operations' do lambda do client.command(ping: true) end.should raise_error(Mongo::Error::MissingServiceId) end end end end 
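# For reference, load-balanced mode can also be requested via the URI rather
# than the Ruby option used above (a sketch; the host is a placeholder):
#
#   Mongo::Client.new('mongodb://lb.example.com:27017/?loadBalanced=true')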
mongo-ruby-driver-2.21.3/spec/integration/client_side_encryption/000077500000000000000000000000001505113246500252015ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/integration/client_side_encryption/auto_encryption_bulk_writes_spec.rb000066400000000000000000000264541505113246500344070ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe 'Bulk writes with auto-encryption enabled' do require_libmongocrypt require_enterprise min_server_fcv '4.2' include_context 'define shared FLE helpers' include_context 'with local kms_providers' let(:subscriber) { Mrss::EventSubscriber.new } let(:client) do new_local_client( SpecConfig.instance.addresses, SpecConfig.instance.test_options.merge( auto_encryption_options: { kms_providers: kms_providers, key_vault_namespace: key_vault_namespace, schema_map: { "auto_encryption.users" => schema_map }, # Spawn mongocryptd on non-default port for sharded cluster tests extra_options: extra_options, }, database: 'auto_encryption' ), ).tap do |client| client.subscribe(Mongo::Monitoring::COMMAND, subscriber) end end let(:size_limit) { Mongo::Server::ConnectionBase::REDUCED_MAX_BSON_SIZE } before do authorized_client.use('auto_encryption')['users'].drop key_vault_collection.drop key_vault_collection.insert_one(data_key) end let(:command_succeeded_events) do subscriber.succeeded_events.select do |event| event.command_name == command_name end end shared_examples 'a functioning encrypted bulk write' do |options={}| num_writes = options[:num_writes] before do perform_bulk_write end it 'executes an encrypted bulk write' do documents = authorized_client.use('auto_encryption')['users'].find ssns = documents.map { |doc| doc['ssn'] } expect(ssns).to all(be_ciphertext) end it 'executes the correct number of writes' do expect(command_succeeded_events.length).to eq(num_writes) end end context 'using BulkWrite' do let(:collection) { client['users'] } let(:bulk_write) { Mongo::BulkWrite.new(collection, requests, {}) } let(:perform_bulk_write) { bulk_write.execute } context 'with insert operations' do let(:command_name) { 'insert' } context 'when total request size does not exceed 2MiB' do let(:requests) do [ { insert_one: { ssn: 'a' * (size_limit/2) } }, { insert_one: { ssn: 'a' * (size_limit/2) } } ] end it_behaves_like 'a functioning encrypted bulk write', num_writes: 1 end context 'when each operation is smaller than 2MiB, but the total request size is greater than 2MiB' do let(:requests) do [ { insert_one: { ssn: 'a' * (size_limit - 2000) } }, { insert_one: { ssn: 'a' * (size_limit - 2000) } } ] end it_behaves_like 'a functioning encrypted bulk write', num_writes: 2 end context 'when each operation is larger than 2MiB' do let(:requests) do [ { insert_one: { ssn: 'a' * (size_limit * 2) } }, { insert_one: { ssn: 'a' * (size_limit * 2) } } ] end it_behaves_like 'a functioning encrypted bulk write', num_writes: 2 end context 'when one operation is larger than 16MiB' do let(:requests) do [ { insert_one: { ssn: 'a' * (Mongo::Server::ConnectionBase::DEFAULT_MAX_BSON_OBJECT_SIZE + 1000) } }, { insert_one: { ssn: 'a' * size_limit } } ] end it 'raises an exception' do expect do bulk_write.execute end.to raise_error(Mongo::Error::MaxBSONSize, /The document exceeds maximum allowed BSON object size after serialization/) end end end context 'with update operations' do let(:command_name) { 'update' } before do client['users'].insert_one(_id: 1) client['users'].insert_one(_id: 2) end context 'when total request size does not 
exceed 2MiB' do let(:requests) do [ { replace_one: { filter: { _id: 1 }, replacement: { ssn: 'a' * (size_limit/2) } } }, { replace_one: { filter: { _id: 2 }, replacement: { ssn: 'a' * (size_limit/2) } } }, ] end it_behaves_like 'a functioning encrypted bulk write', num_writes: 1 end context 'when each operation is smaller than 2MiB, but the total request size is greater than 2MiB' do let(:requests) do [ { replace_one: { filter: { _id: 1 }, replacement: { ssn: 'a' * (size_limit - 2000) } } }, { replace_one: { filter: { _id: 2 }, replacement: { ssn: 'a' * (size_limit - 2000) } } }, ] end it_behaves_like 'a functioning encrypted bulk write', num_writes: 2 end context 'when each operation is larger than 2MiB' do let(:requests) do [ { replace_one: { filter: { _id: 1 }, replacement: { ssn: 'a' * (size_limit * 2) } } }, { replace_one: { filter: { _id: 2 }, replacement: { ssn: 'a' * (size_limit * 2) } } }, ] end it_behaves_like 'a functioning encrypted bulk write', num_writes: 2 end context 'when one operation is larger than 16MiB' do let(:requests) do [ { replace_one: { filter: { _id: 1 }, replacement: { ssn: 'a' * (Mongo::Server::ConnectionBase::DEFAULT_MAX_BSON_OBJECT_SIZE) } } }, { replace_one: { filter: { _id: 2 }, replacement: { ssn: 'a' * size_limit } } }, ] end before do expect(requests.first.to_bson.length).to be > Mongo::Server::ConnectionBase::DEFAULT_MAX_BSON_OBJECT_SIZE end it 'raises an exception' do expect do bulk_write.execute end.to raise_error(Mongo::Error::MaxBSONSize, /The document exceeds maximum allowed BSON object size after serialization/) end end end context 'with delete operations' do let(:command_name) { 'delete' } context 'when total request size does not exceed 2MiB' do before do client['users'].insert_one(ssn: 'a' * (size_limit/2)) client['users'].insert_one(ssn: 'b' * (size_limit/2)) end let(:requests) do [ { delete_one: { filter: { ssn: 'a' * (size_limit/2) } } }, { delete_one: { filter: { ssn: 'b' * (size_limit/2) } } } ] end it 'performs one delete' do bulk_write.execute documents = authorized_client.use('auto_encryption')['users'].find.to_a expect(documents.length).to eq(0) expect(command_succeeded_events.length).to eq(1) end end context 'when each operation is smaller than 2MiB, but the total request size is greater than 2MiB' do before do client['users'].insert_one(ssn: 'a' * (size_limit - 2000)) client['users'].insert_one(ssn: 'b' * (size_limit - 2000)) end let(:requests) do [ { delete_one: { filter: { ssn: 'a' * (size_limit - 2000) } } }, { delete_one: { filter: { ssn: 'b' * (size_limit - 2000) } } } ] end it 'performs two deletes' do bulk_write.execute documents = authorized_client.use('auto_encryption')['users'].find.to_a expect(documents.length).to eq(0) expect(command_succeeded_events.length).to eq(2) end end context 'when each operation is larger than 2MiB' do before do client['users'].insert_one(ssn: 'a' * (size_limit * 2)) client['users'].insert_one(ssn: 'b' * (size_limit * 2)) end let(:requests) do [ { delete_one: { filter: { ssn: 'a' * (size_limit * 2) } } }, { delete_one: { filter: { ssn: 'b' * (size_limit * 2) } } } ] end it 'performs two deletes' do bulk_write.execute documents = authorized_client.use('auto_encryption')['users'].find.to_a expect(documents.length).to eq(0) expect(command_succeeded_events.length).to eq(2) end end context 'when one operation is larger than 16MiB' do let(:requests) do [ { delete_one: { filter: { ssn: 'a' * (Mongo::Server::ConnectionBase::DEFAULT_MAX_BSON_OBJECT_SIZE + 1000) } } }, { delete_one: { filter: { ssn: 'b' * 
(size_limit * 2) } } } ] end it 'raises an exception' do expect do bulk_write.execute end.to raise_error(Mongo::Error::MaxBSONSize, /The document exceeds maximum allowed BSON object size after serialization/) end end end context 'with insert, update, and delete operations' do context 'when total request size does not exceed 2MiB' do let(:requests) do [ { insert_one: { _id: 1, ssn: 'a' * (size_limit/3) } }, { replace_one: { filter: { _id: 1 }, replacement: { ssn: 'b' * (size_limit/3) } } }, { delete_one: { filter: { ssn: 'b' * (size_limit/3) } } } ] end it 'successfully performs the bulk write' do bulk_write.execute documents = authorized_client.use('auto_encryption')['users'].find.to_a expect(documents.length).to eq(0) end # Bulk writes with different types of operations should issue a separate command for each operation type it 'performs 1 insert, 1 update, and 1 delete' do bulk_write.execute command_succeeded_events = subscriber.succeeded_events inserts = command_succeeded_events.select { |event| event.command_name == 'insert' } updates = command_succeeded_events.select { |event| event.command_name == 'update' } deletes = command_succeeded_events.select { |event| event.command_name == 'delete' } expect(inserts.length).to eq(1) expect(updates.length).to eq(1) expect(deletes.length).to eq(1) end end end context '#insert_many' do let(:perform_bulk_write) do client['users'].insert_many(documents) end let(:command_name) { 'insert' } context 'when total request size does not exceed 2MiB' do let(:documents) do [ { ssn: 'a' * (size_limit/2) }, { ssn: 'a' * (size_limit/2) }, ] end it_behaves_like 'a functioning encrypted bulk write', num_writes: 1 end context 'when each operation is smaller than 2MiB, but the total request size is greater than 2MiB' do let(:documents) do [ { ssn: 'a' * (size_limit - 2000) }, { ssn: 'a' * (size_limit - 2000) }, ] end it_behaves_like 'a functioning encrypted bulk write', num_writes: 2 end context 'when each operation is larger than 2MiB' do let(:documents) do [ { ssn: 'a' * (size_limit * 2) }, { ssn: 'a' * (size_limit * 2) }, ] end it_behaves_like 'a functioning encrypted bulk write', num_writes: 2 end context 'when one operation is larger than 16MiB' do let(:documents) do [ { ssn: 'a' * (Mongo::Server::ConnectionBase::DEFAULT_MAX_BSON_OBJECT_SIZE + 1000) }, { ssn: 'a' * size_limit }, ] end it 'raises an exception' do expect do perform_bulk_write end.to raise_error(Mongo::Error::MaxBSONSize, /The document exceeds maximum allowed BSON object size after serialization/) end end end end auto_encryption_command_monitoring_spec.rb000066400000000000000000000230711505113246500356510ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/integration/client_side_encryption# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe 'Auto Encryption' do require_libmongocrypt require_enterprise min_server_fcv '4.2' # Diagnostics of leaked background threads only, these tests do not # actually require a clean slate.
https://jira.mongodb.org/browse/RUBY-2138 clean_slate include_context 'define shared FLE helpers' include_context 'with local kms_providers' let(:subscriber) { Mrss::EventSubscriber.new } let(:db_name) { 'auto_encryption' } let(:encryption_client) do new_local_client( SpecConfig.instance.addresses, SpecConfig.instance.test_options.merge( auto_encryption_options: { kms_providers: kms_providers, key_vault_namespace: key_vault_namespace, schema_map: { "auto_encryption.users" => schema_map }, # Spawn mongocryptd on non-default port for sharded cluster tests extra_options: extra_options, }, database: db_name ), ).tap do |client| client.subscribe(Mongo::Monitoring::COMMAND, subscriber) end end before(:each) do key_vault_collection.drop key_vault_collection.insert_one(data_key) encryption_client['users'].drop end let(:started_event) do subscriber.single_command_started_event(command_name, database_name: db_name) end let(:succeeded_event) do subscriber.single_command_succeeded_event(command_name, database_name: db_name) end let(:key_vault_list_collections_event) do subscriber.started_events.find do |event| event.command_name == 'listCollections' && event.database_name == key_vault_db end end shared_examples 'it has a non-encrypted key_vault_client' do it 'does not register a listCollections event on the key vault client' do expect(key_vault_list_collections_event).to be_nil end end context 'when performing operations that need a document in the database' do before do result = encryption_client['users'].insert_one(ssn: ssn, age: 23) end describe '#aggregate' do let(:command_name) { 'aggregate' } before do encryption_client['users'].aggregate([{ '$match' => { 'ssn' => ssn } }]).first end it 'has encrypted data in command monitoring' do # Command started event occurs after ssn is encrypted expect( started_event.command["pipeline"].first["$match"]["ssn"]["$eq"] ).to be_ciphertext # Command succeeded event occurs before ssn is decrypted expect(succeeded_event.reply["cursor"]["firstBatch"].first["ssn"]).to be_ciphertext end it_behaves_like 'it has a non-encrypted key_vault_client' end describe '#count' do let(:command_name) { 'count' } before do encryption_client['users'].count(ssn: ssn) end it 'has encrypted data in command monitoring' do # Command started event occurs after ssn is encrypted # Command succeeded event does not contain any data to be decrypted expect(started_event.command["query"]["ssn"]["$eq"]).to be_ciphertext end it_behaves_like 'it has a non-encrypted key_vault_client' end describe '#distinct' do let(:command_name) { 'distinct' } before do encryption_client['users'].distinct(:ssn) end it 'has encrypted data in command monitoring' do # Command started event does not contain any data to be encrypted # Command succeeded event occurs before ssn is decrypted expect(succeeded_event.reply["values"].first).to be_ciphertext end it_behaves_like 'it has a non-encrypted key_vault_client' end describe '#delete_one' do let(:command_name) { 'delete' } before do encryption_client['users'].delete_one(ssn: ssn) end it 'has encrypted data in command monitoring' do # Command started event occurs after ssn is encrypted # Command succeeded event does not contain any data to be decrypted expect(started_event.command["deletes"].first["q"]["ssn"]["$eq"]).to be_ciphertext end it_behaves_like 'it has a non-encrypted key_vault_client' end describe '#delete_many' do let(:command_name) { 'delete' } before do encryption_client['users'].delete_many(ssn: ssn) end it 'has encrypted data in command monitoring' do # 
Command started event occurs after ssn is encrypted # Command succeeded event does not contain any data to be decrypted expect(started_event.command["deletes"].first["q"]["ssn"]["$eq"]).to be_ciphertext end it_behaves_like 'it has a non-encrypted key_vault_client' end describe '#find' do let(:command_name) { 'find' } before do encryption_client['users'].find(ssn: ssn).first end it 'has encrypted data in command monitoring' do # Command started event occurs after ssn is encrypted expect(started_event.command["filter"]["ssn"]["$eq"]).to be_ciphertext # Command succeeded event occurs before ssn is decrypted expect(succeeded_event.reply["cursor"]["firstBatch"].first["ssn"]).to be_ciphertext end it_behaves_like 'it has a non-encrypted key_vault_client' end describe '#find_one_and_delete' do let(:command_name) { 'findAndModify' } before do encryption_client['users'].find_one_and_delete(ssn: ssn) end it 'has encrypted data in command monitoring' do # Command started event occurs after ssn is encrypted expect(started_event.command["query"]["ssn"]["$eq"]).to be_ciphertext # Command succeeded event occurs before ssn is decrypted expect(succeeded_event.reply["value"]["ssn"]).to be_ciphertext end it_behaves_like 'it has a non-encrypted key_vault_client' end describe '#find_one_and_replace' do let(:command_name) { 'findAndModify' } before do encryption_client['users'].find_one_and_replace( { ssn: ssn }, { ssn: '555-555-5555' } ) end it 'has encrypted data in command monitoring' do # Command started event occurs after ssn is encrypted expect(started_event.command["query"]["ssn"]["$eq"]).to be_ciphertext expect(started_event.command["update"]["ssn"]).to be_ciphertext # Command succeeded event occurs before ssn is decrypted expect(succeeded_event.reply["value"]["ssn"]).to be_ciphertext end it_behaves_like 'it has a non-encrypted key_vault_client' end describe '#find_one_and_update' do let(:command_name) { 'findAndModify' } before do encryption_client['users'].find_one_and_update( { ssn: ssn }, { ssn: '555-555-5555' } ) end it 'has encrypted data in command monitoring' do # Command started event occurs after ssn is encrypted expect(started_event.command["query"]["ssn"]["$eq"]).to be_ciphertext expect(started_event.command["update"]["ssn"]).to be_ciphertext # Command succeeded event occurs before ssn is decrypted expect(succeeded_event.reply["value"]["ssn"]).to be_ciphertext end it_behaves_like 'it has a non-encrypted key_vault_client' end describe '#replace_one' do let(:command_name) { 'update' } before do encryption_client['users'].replace_one( { ssn: ssn }, { ssn: '555-555-5555' } ) end it 'has encrypted data in command monitoring' do # Command started event occurs after ssn is encrypted # Command succeeded event does not contain any data to be decrypted expect(started_event.command["updates"].first["q"]["ssn"]["$eq"]).to be_ciphertext expect(started_event.command["updates"].first["u"]["ssn"]).to be_ciphertext end it_behaves_like 'it has a non-encrypted key_vault_client' end describe '#update_one' do let(:command_name) { 'update' } before do encryption_client['users'].replace_one({ ssn: ssn }, { ssn: '555-555-5555' }) end it 'has encrypted data in command monitoring' do # Command started event occurs after ssn is encrypted # Command succeeded event does not contain any data to be decrypted expect(started_event.command["updates"].first["q"]["ssn"]["$eq"]).to be_ciphertext expect(started_event.command["updates"].first["u"]["ssn"]).to be_ciphertext end it_behaves_like 'it has a non-encrypted 
key_vault_client' end describe '#update_many' do let(:command_name) { 'update' } before do # update_many does not support replacement-style updates encryption_client['users'].update_many({ ssn: ssn }, { "$inc" => { :age => 1 } }) end it 'has encrypted data in command monitoring' do # Command started event occurs after ssn is encrypted # Command succeeded event does not contain any data to be decrypted expect(started_event.command["updates"].first["q"]["ssn"]["$eq"]).to be_ciphertext end it_behaves_like 'it has a non-encrypted key_vault_client' end end describe '#insert_one' do let(:command_name) { 'insert' } before do encryption_client['users'].insert_one(ssn: ssn) end it 'has encrypted data in command monitoring' do # Command started event occurs after ssn is encrypted # Command succeeded event does not contain any data to be decrypted expect(started_event.command["documents"].first["ssn"]).to be_ciphertext end it_behaves_like 'it has a non-encrypted key_vault_client' end end auto_encryption_mongocryptd_spawn_spec.rb000066400000000000000000000046011505113246500355410ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/integration/client_side_encryption# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe 'Auto Encryption' do require_libmongocrypt min_server_fcv '4.2' require_enterprise include_context 'define shared FLE helpers' include_context 'with local kms_providers' context 'with an invalid mongocryptd spawn path' do let(:client) do new_local_client( SpecConfig.instance.addresses, SpecConfig.instance.test_options.merge( auto_encryption_options: { kms_providers: kms_providers, key_vault_namespace: key_vault_namespace, schema_map: { 'auto_encryption.users' => schema_map }, extra_options: { mongocryptd_spawn_path: 'echo hello world', mongocryptd_spawn_args: [] } }, database: 'auto_encryption' ), ) end let(:server_selector) { double("ServerSelector") } let(:cluster) { double("Cluster") } before do key_vault_collection.drop key_vault_collection.insert_one(data_key) allow(server_selector).to receive(:name) allow(server_selector).to receive(:server_selection_timeout) allow(server_selector).to receive(:local_threshold) allow(cluster).to receive(:summary) # Raise a server selection error on intent-to-encrypt commands to mock # what would happen if mongocryptd hadn't already been spawned. It is # necessary to mock this behavior because it is likely that another test # will have already spawned mongocryptd, causing this test to fail. 
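# For reference, a non-mocked client spawns mongocryptd using options of
# this shape (a sketch; the path and arguments are placeholders):
#
#   extra_options: {
#     mongocryptd_spawn_path: '/usr/local/bin/mongocryptd',
#     mongocryptd_spawn_args: ['--idleShutdownTimeoutSecs=60'],
#   }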
allow_any_instance_of(Mongo::Database) .to receive(:command) .with( hash_including( 'insert' => 'users', 'ordered' => true, 'lsid' => kind_of(Hash), 'documents' => kind_of(Array), 'jsonSchema' => kind_of(Hash), 'isRemoteSchema' => false, ), { execution_options: { deserialize_as_bson: true }, timeout_ms: nil }, ) .and_raise(Mongo::Error::NoServerAvailable.new(server_selector, cluster)) end it 'raises an exception when trying to perform auto encryption' do expect do client['users'].insert_one(ssn: ssn) end.to raise_error( Mongo::Error::MongocryptdSpawnError, /Failed to spawn mongocryptd at the path "echo hello world" with arguments/ ) end end end auto_encryption_old_wire_version_spec.rb000066400000000000000000000050731505113246500353410ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/integration/client_side_encryption# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe 'Auto Encryption' do require_libmongocrypt max_server_version '4.0' # Diagnostics of leaked background threads only, these tests do not # actually require a clean slate. https://jira.mongodb.org/browse/RUBY-2138 clean_slate include_context 'define shared FLE helpers' let(:encryption_client) do new_local_client( SpecConfig.instance.addresses, SpecConfig.instance.test_options.merge( auto_encryption_options: { kms_providers: kms_providers, key_vault_namespace: key_vault_namespace, # Must use local schema map because server versions older than 4.2 # do not support jsonSchema collection validator. schema_map: { 'auto_encryption.users' => schema_map }, bypass_auto_encryption: bypass_auto_encryption, # Spawn mongocryptd on non-default port for sharded cluster tests extra_options: extra_options, }, database: 'auto_encryption' ), ) end let(:bypass_auto_encryption) { false } let(:client) { authorized_client.use('auto_encryption') } let(:encrypted_ssn_binary) do BSON::Binary.new(Base64.decode64(encrypted_ssn), :ciphertext) end shared_examples 'it decrypts but does not encrypt on wire version < 8' do before do client['users'].drop client['users'].insert_one(ssn: encrypted_ssn_binary) key_vault_collection.drop key_vault_collection.insert_one(data_key) end it 'raises an exception when trying to encrypt' do expect do encryption_client['users'].find(ssn: ssn).first end.to raise_error(Mongo::Error::CryptError, /Auto-encryption requires a minimum MongoDB version of 4.2/) end context 'with bypass_auto_encryption=true' do let(:bypass_auto_encryption) { true } it 'does not raise an exception but doesn\'t encrypt' do document = encryption_client['users'].find(ssn: ssn).first expect(document).to be_nil end it 'still decrypts' do document = encryption_client['users'].find(ssn: encrypted_ssn_binary).first # ssn field is still decrypted expect(document['ssn']).to eq(ssn) end end end context 'with AWS kms provider' do include_context 'with AWS kms_providers' it_behaves_like 'it decrypts but does not encrypt on wire version < 8' end context 'with local kms provider' do include_context 'with local kms_providers' it_behaves_like 'it decrypts but does not encrypt on wire version < 8' end end mongo-ruby-driver-2.21.3/spec/integration/client_side_encryption/auto_encryption_reconnect_spec.rb000066400000000000000000000200111505113246500340140ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe 'Client with auto encryption #reconnect' do require_libmongocrypt min_server_fcv '4.2' require_enterprise # Diagnostics of leaked background threads only, these tests do not # 
# ---- mongo-ruby-driver-2.21.3/spec/integration/client_side_encryption/auto_encryption_reconnect_spec.rb ----
# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe 'Client with auto encryption #reconnect' do
  require_libmongocrypt
  min_server_fcv '4.2'
  require_enterprise

  # Diagnostics of leaked background threads only, these tests do not
  # actually require a clean slate. https://jira.mongodb.org/browse/RUBY-2138
  clean_slate

  include_context 'define shared FLE helpers'

  let(:client) do
    new_local_client(
      SpecConfig.instance.addresses,
      SpecConfig.instance.test_options.merge(
        {
          auto_encryption_options: {
            kms_providers: kms_providers,
            kms_tls_options: kms_tls_options,
            key_vault_namespace: key_vault_namespace,
            key_vault_client: key_vault_client_option,
            schema_map: { 'auto_encryption.users': schema_map },
            # Spawn mongocryptd on non-default port for sharded cluster tests
            extra_options: extra_options,
          },
          database: 'auto_encryption',
          populator_io: false
        }
      )
    )
  end

  let(:unencrypted_client) { authorized_client.use('auto_encryption') }

  let(:mongocryptd_client) { client.encrypter.mongocryptd_client }
  let(:key_vault_client) { client.encrypter.key_vault_client }
  let(:data_key_id) { data_key['_id'] }

  shared_examples 'a functioning client' do
    it 'can perform an encrypted find command' do
      doc = client['users'].find(ssn: ssn).first
      expect(doc).not_to be_nil
      expect(doc['ssn']).to eq(ssn)
    end
  end

  shared_examples 'a functioning mongocryptd client' do
    it 'can perform a schemaRequiresEncryption command' do
      # A schemaRequiresEncryption command; mongocryptd should respond that
      # this command requires encryption.
      response = mongocryptd_client.database.command(
        insert: 'users',
        ordered: true,
        lsid: { id: BSON::Binary.new("\x00" * 16, :uuid) },
        documents: [{
          ssn: '123-456-7890',
          _id: BSON::ObjectId.new,
        }],
        jsonSchema: schema_map,
        isRemoteSchema: false
      )

      expect(response).to be_ok
      expect(response.documents.first['schemaRequiresEncryption']).to be true
    end
  end

  shared_examples 'a functioning key vault client' do
    it 'can perform a find command' do
      doc = key_vault_client.use(key_vault_db)[key_vault_coll, read_concern: { level: :majority }].find(_id: data_key_id).first
      expect(doc).not_to be_nil
      expect(doc['_id']).to eq(data_key_id)
    end
  end
  shared_examples 'an auto-encryption client that reconnects properly' do
    before do
      key_vault_collection.drop
      key_vault_collection.insert_one(data_key)

      unencrypted_client['users'].drop
      # Use a client without auto_encryption_options to insert an
      # encrypted document into the collection; this ensures that the
      # client with auto_encryption_options must perform decryption
      # to properly read the document.
      unencrypted_client['users'].insert_one(
        ssn: BSON::Binary.new(Base64.decode64(encrypted_ssn), :ciphertext)
      )
    end

    context 'after reconnecting without closing main client' do
      before do
        client.reconnect
      end

      it_behaves_like 'a functioning client'
      it_behaves_like 'a functioning mongocryptd client'
      it_behaves_like 'a functioning key vault client'
    end

    context 'after closing and reconnecting main client' do
      before do
        client.close
        client.reconnect
      end

      it_behaves_like 'a functioning client'
      it_behaves_like 'a functioning mongocryptd client'
      it_behaves_like 'a functioning key vault client'
    end

    context 'after killing client monitor thread' do
      before do
        thread = client.cluster.servers.first.monitor.instance_variable_get('@thread')
        expect(thread).to be_alive

        thread.kill
        sleep 0.1
        expect(thread).not_to be_alive

        client.reconnect
      end

      it_behaves_like 'a functioning client'
      it_behaves_like 'a functioning mongocryptd client'
      it_behaves_like 'a functioning key vault client'
    end

    context 'after closing mongocryptd client and reconnecting' do
      before do
        # don't use the mongocryptd_client variable yet so that it will be computed
        # after the client reconnects
        client.encrypter.mongocryptd_client.close
        client.reconnect
      end

      it_behaves_like 'a functioning client'
      it_behaves_like 'a functioning mongocryptd client'
      it_behaves_like 'a functioning key vault client'
    end

    context 'after killing mongocryptd client monitor thread and reconnecting' do
      before do
        # don't use the mongocryptd_client variable yet so that it will be computed
        # after the client reconnects
        thread = client.encrypter.mongocryptd_client.cluster.servers.first.monitor.instance_variable_get('@thread')
        expect(thread).to be_alive

        thread.kill
        sleep 0.1
        expect(thread).not_to be_alive

        client.reconnect
      end

      it_behaves_like 'a functioning client'
      it_behaves_like 'a functioning mongocryptd client'
      it_behaves_like 'a functioning key vault client'
    end

    context 'after closing key_vault_client and reconnecting' do
      before do
        key_vault_client.close
        client.reconnect
      end

      it_behaves_like 'a functioning client'
      it_behaves_like 'a functioning mongocryptd client'
      it_behaves_like 'a functioning key vault client'
    end

    context 'after killing key_vault_client monitor thread and reconnecting' do
      before do
        thread = key_vault_client.cluster.servers.first.monitor.instance_variable_get('@thread')
        expect(thread).to be_alive

        thread.kill
        sleep 0.1
        expect(thread).not_to be_alive

        client.reconnect
      end

      it_behaves_like 'a functioning client'
      it_behaves_like 'a functioning mongocryptd client'
      it_behaves_like 'a functioning key vault client'
    end
  end

  context 'with default key vault client option' do
    let(:key_vault_client_option) { nil }

    context 'with AWS KMS providers' do
      include_context 'with AWS kms_providers'
      it_behaves_like 'an auto-encryption client that reconnects properly'
    end

    context 'with Azure KMS providers' do
      include_context 'with Azure kms_providers'
      it_behaves_like 'an auto-encryption client that reconnects properly'
    end

    context 'with GCP KMS providers' do
      include_context 'with GCP kms_providers'
      it_behaves_like 'an auto-encryption client that reconnects properly'
    end

    context 'with KMIP KMS providers' do
      include_context 'with KMIP kms_providers'
      it_behaves_like 'an auto-encryption client that reconnects properly'
    end

    context 'with local KMS providers' do
      include_context 'with local kms_providers'
      it_behaves_like 'an auto-encryption client that reconnects properly'
    end
  end
  context 'with custom key vault client option' do
    let(:key_vault_client_option) do
      new_local_client(
        SpecConfig.instance.addresses,
        SpecConfig.instance.test_options.merge(populator_io: false)
      )
    end

    context 'with AWS KMS providers' do
      include_context 'with AWS kms_providers'
      it_behaves_like 'an auto-encryption client that reconnects properly'
    end

    context 'with Azure KMS providers' do
      include_context 'with Azure kms_providers'
      it_behaves_like 'an auto-encryption client that reconnects properly'
    end

    context 'with GCP KMS providers' do
      include_context 'with GCP kms_providers'
      it_behaves_like 'an auto-encryption client that reconnects properly'
    end

    context 'with KMIP KMS providers' do
      include_context 'with KMIP kms_providers'
      it_behaves_like 'an auto-encryption client that reconnects properly'
    end

    context 'with local KMS providers' do
      include_context 'with local kms_providers'
      it_behaves_like 'an auto-encryption client that reconnects properly'
    end
  end
end
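# ---- Illustrative sketch (separate from the archived spec files above) ----
# The reconnect specs repeatedly close or damage the main client and its
# helper clients, then call Mongo::Client#reconnect; afterwards the main,
# mongocryptd, and key vault clients all function again. The basic pattern,
# with assumed address, database, and key material (requires libmongocrypt):
require 'mongo'
require 'securerandom'

client = Mongo::Client.new(
  ['localhost:27017'],
  database: 'auto_encryption',
  auto_encryption_options: {
    kms_providers: { local: { key: SecureRandom.bytes(96) } },
    key_vault_namespace: 'keyvault.datakeys',
  }
)

client.close
client.reconnect
client['users'].find(ssn: '123-45-6789').first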
# ---- mongo-ruby-driver-2.21.3/spec/integration/client_side_encryption/auto_encryption_spec.rb ----
# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'
require 'bson'
require 'json'

describe 'Auto Encryption' do
  require_libmongocrypt
  min_server_fcv '4.2'
  require_enterprise

  # Diagnostics of leaked background threads only, these tests do not
  # actually require a clean slate. https://jira.mongodb.org/browse/RUBY-2138
  clean_slate

  include_context 'define shared FLE helpers'

  let(:encryption_client) do
    new_local_client(
      SpecConfig.instance.addresses,
      SpecConfig.instance.test_options.merge(
        auto_encryption_options: {
          kms_providers: kms_providers,
          kms_tls_options: kms_tls_options,
          key_vault_namespace: key_vault_namespace,
          schema_map: local_schema,
          bypass_auto_encryption: bypass_auto_encryption,
          # Spawn mongocryptd on non-default port for sharded cluster tests
          extra_options: extra_options,
        },
        database: 'auto_encryption',
        max_pool_size: max_pool_size,
        timeout_ms: timeout_ms
      ),
    )
  end

  let(:client) { authorized_client.use('auto_encryption') }

  let(:bypass_auto_encryption) { false }

  let(:max_pool_size) do
    Mongo::Server::ConnectionPool::DEFAULT_MAX_SIZE
  end

  let(:encrypted_ssn_binary) do
    BSON::Binary.new(Base64.decode64(encrypted_ssn), :ciphertext)
  end

  shared_context 'bypass auto encryption' do
    let(:bypass_auto_encryption) { true }
  end

  shared_context 'jsonSchema validator on collection' do
    let(:local_schema) { nil }

    before do
      client['users',
        {
          'validator' => { '$jsonSchema' => schema_map }
        }
      ].create
    end
  end

  shared_context 'schema map in client options' do
    let(:local_schema) { { "auto_encryption.users" => schema_map } }

    before do
      client['users'].create
    end
  end

  shared_context 'encrypted document in collection' do
    before do
      client['users'].insert_one(ssn: encrypted_ssn_binary)
    end
  end

  shared_context 'multiple encrypted documents in collection' do
    before do
      client['users'].insert_one(ssn: encrypted_ssn_binary)
      client['users'].insert_one(ssn: encrypted_ssn_binary)
    end
  end

  shared_context 'limited connection pool' do
    let(:max_pool_size) do
      1
    end
  end

  before(:each) do
    client['users'].drop
    key_vault_collection.drop
    key_vault_collection.insert_one(data_key)
  end

  shared_examples 'an encrypted command' do
    # context 'with AWS KMS provider' do
    #   include_context 'with AWS kms_providers'
    #   context 'with validator' do
    #     include_context 'jsonSchema validator on collection'
    #     it_behaves_like 'it performs an encrypted command'
    #   end
    #   context 'with schema map' do
    #     include_context 'schema map in client options'
    #     it_behaves_like 'it performs an encrypted command'
    #     context 'with limited connection pool' do
    #       include_context 'limited connection pool'
    #       it_behaves_like 'it performs an encrypted command'
    #     end
    #   end
    # end

    # context 'with Azure KMS provider' do
    #   include_context 'with Azure kms_providers'
    #   context 'with validator' do
    #     include_context 'jsonSchema validator on collection'
    #     it_behaves_like 'it performs an encrypted command'
    #   end
    #   context 'with schema map' do
    #     include_context 'schema map in client options'
    #     it_behaves_like 'it performs an encrypted command'
    #     context 'with limited connection pool' do
    #       include_context 'limited connection pool'
    #       it_behaves_like 'it performs an encrypted command'
    #     end
    #   end
    # end

    # context 'with GCP KMS provider' do
    #   include_context 'with GCP kms_providers'
    #   context 'with validator' do
    #     include_context 'jsonSchema validator on collection'
    #     it_behaves_like 'it performs an encrypted command'
    #   end
    #   context 'with schema map' do
    #     include_context 'schema map in client options'
    #     it_behaves_like 'it performs an encrypted command'
    #     context 'with limited connection pool' do
    #       include_context 'limited connection pool'
    #       it_behaves_like 'it performs an encrypted command'
    #     end
    #   end
    # end

    # context 'with KMIP KMS provider' do
    #   include_context 'with KMIP kms_providers'
    #   context 'with validator' do
    #     include_context 'jsonSchema validator on collection'
    #     it_behaves_like 'it performs an encrypted command'
    #   end
    #   context 'with schema map' do
    #     include_context 'schema map in client options'
    #     it_behaves_like 'it performs an encrypted command'
    #     context 'with limited connection pool' do
    #       include_context 'limited connection pool'
    #       it_behaves_like 'it performs an encrypted command'
    #     end
    #   end
    # end

    context 'with local KMS provider' do
      include_context 'with local kms_providers'

      context 'with validator' do
        include_context 'jsonSchema validator on collection'
        it_behaves_like 'it performs an encrypted command'
      end

      context 'with schema map' do
        include_context 'schema map in client options'
        it_behaves_like 'it performs an encrypted command'

        context 'with limited connection pool' do
          include_context 'limited connection pool'
          it_behaves_like 'it performs an encrypted command'
        end
      end
    end
  end

  [nil, 0].each do |timeout_ms|
    context "with timeout_ms #{timeout_ms}" do
      let(:timeout_ms) { timeout_ms }

      describe '#aggregate' do
        shared_examples 'it performs an encrypted command' do
          include_context 'encrypted document in collection'

          let(:result) do
            encryption_client['users'].aggregate([
              { '$match' => { 'ssn' => ssn } }
            ]).first
          end

          it 'encrypts the command and decrypts the response' do
            result.should_not be_nil
            result['ssn'].should == ssn
          end

          context 'when bypass_auto_encryption=true' do
            include_context 'bypass auto encryption'

            it 'does not encrypt the command' do
              result.should be_nil
            end

            it 'does auto decrypt the response' do
              result = encryption_client['users'].aggregate([
                { '$match' => { 'ssn' => encrypted_ssn_binary } }
              ]).first

              result.should_not be_nil
              result['ssn'].should == ssn
            end
          end
        end

        it_behaves_like 'an encrypted command'
      end

      describe '#count' do
        shared_examples 'it performs an encrypted command' do
          include_context 'multiple encrypted documents in collection'

          let(:result) { encryption_client['users'].count(ssn: ssn) }

          it 'encrypts the command and finds the documents' do
            expect(result).to eq(2)
          end

          context 'with bypass_auto_encryption=true' do
            include_context 'bypass auto encryption'

            it 'does not encrypt the command' do
              expect(result).to eq(0)
            end
          end
        end

        it_behaves_like 'an encrypted command'
      end
      describe '#distinct' do
        shared_examples 'it performs an encrypted command' do
          include_context 'encrypted document in collection'

          let(:result) { encryption_client['users'].distinct(:ssn) }

          it 'decrypts the SSN field' do
            expect(result.length).to eq(1)
            expect(result).to include(ssn)
          end

          context 'with bypass_auto_encryption=true' do
            include_context 'bypass auto encryption'

            it 'still decrypts the SSN field' do
              expect(result.length).to eq(1)
              expect(result).to include(ssn)
            end
          end
        end

        it_behaves_like 'an encrypted command'
      end

      describe '#delete_one' do
        shared_examples 'it performs an encrypted command' do
          include_context 'encrypted document in collection'

          let(:result) { encryption_client['users'].delete_one(ssn: ssn) }

          it 'encrypts the SSN field' do
            expect(result.deleted_count).to eq(1)
          end

          context 'with bypass_auto_encryption=true' do
            include_context 'bypass auto encryption'

            it 'does not encrypt the SSN field' do
              expect(result.deleted_count).to eq(0)
            end
          end
        end

        it_behaves_like 'an encrypted command'
      end

      describe '#delete_many' do
        shared_examples 'it performs an encrypted command' do
          include_context 'multiple encrypted documents in collection'

          let(:result) { encryption_client['users'].delete_many(ssn: ssn) }

          it 'decrypts the SSN field' do
            expect(result.deleted_count).to eq(2)
          end

          context 'with bypass_auto_encryption=true' do
            include_context 'bypass auto encryption'

            it 'does not encrypt the SSN field' do
              expect(result.deleted_count).to eq(0)
            end
          end
        end

        it_behaves_like 'an encrypted command'
      end

      describe '#find' do
        shared_examples 'it performs an encrypted command' do
          include_context 'encrypted document in collection'

          let(:result) { encryption_client['users'].find(ssn: ssn).first }

          it 'encrypts the command and decrypts the response' do
            result.should_not be_nil
            expect(result['ssn']).to eq(ssn)
          end

          context 'when bypass_auto_encryption=true' do
            include_context 'bypass auto encryption'

            it 'does not encrypt the command' do
              expect(result).to be_nil
            end
          end
        end

        it_behaves_like 'an encrypted command'
      end

      describe '#find_one_and_delete' do
        shared_examples 'it performs an encrypted command' do
          include_context 'encrypted document in collection'

          let(:result) { encryption_client['users'].find_one_and_delete(ssn: ssn) }

          it 'encrypts the command and decrypts the response' do
            expect(result['ssn']).to eq(ssn)
          end

          context 'when bypass_auto_encryption=true' do
            include_context 'bypass auto encryption'

            it 'does not encrypt the command' do
              expect(result).to be_nil
            end

            it 'still decrypts the command' do
              result = encryption_client['users'].find_one_and_delete(ssn: encrypted_ssn_binary)
              expect(result['ssn']).to eq(ssn)
            end
          end
        end

        it_behaves_like 'an encrypted command'
      end
      describe '#find_one_and_replace' do
        shared_examples 'it performs an encrypted command' do
          let(:name) { 'Alan Turing' }

          context 'with :return_document => :before' do
            include_context 'encrypted document in collection'

            let(:result) do
              encryption_client['users'].find_one_and_replace(
                { ssn: ssn },
                { name: name },
                return_document: :before
              )
            end

            it 'encrypts the command and decrypts the response, returning original document' do
              expect(result['ssn']).to eq(ssn)

              documents = client['users'].find
              expect(documents.count).to eq(1)
              expect(documents.first['ssn']).to be_nil
            end
          end

          context 'with :return_document => :after' do
            before do
              client['users'].insert_one(name: name)
            end

            let(:result) do
              encryption_client['users'].find_one_and_replace(
                { name: name },
                { ssn: ssn },
                return_document: :after
              )
            end

            it 'encrypts the command and decrypts the response, returning new document' do
              expect(result['ssn']).to eq(ssn)

              documents = client['users'].find
              expect(documents.count).to eq(1)
              expect(documents.first['ssn']).to eq(encrypted_ssn_binary)
            end
          end

          context 'when bypass_auto_encryption=true' do
            include_context 'bypass auto encryption'
            include_context 'encrypted document in collection'

            let(:result) do
              encryption_client['users'].find_one_and_replace(
                { ssn: encrypted_ssn_binary },
                { name: name },
                :return_document => :before
              )
            end

            it 'does not encrypt the command but still decrypts the response, returning original document' do
              expect(result['ssn']).to eq(ssn)

              documents = client['users'].find
              expect(documents.count).to eq(1)
              expect(documents.first['ssn']).to be_nil
            end
          end
        end

        it_behaves_like 'an encrypted command'
      end

      describe '#find_one_and_update' do
        shared_examples 'it performs an encrypted command' do
          include_context 'encrypted document in collection'

          let(:name) { 'Alan Turing' }

          let(:result) do
            encryption_client['users'].find_one_and_update(
              { ssn: ssn },
              { name: name }
            )
          end

          it 'encrypts the command and decrypts the response' do
            expect(result['ssn']).to eq(ssn)

            documents = client['users'].find
            expect(documents.count).to eq(1)
            expect(documents.first['ssn']).to be_nil
          end

          context 'with bypass_auto_encryption=true' do
            include_context 'bypass auto encryption'

            it 'does not encrypt the command' do
              expect(result).to be_nil
            end

            it 'still decrypts the response' do
              # Query using the encrypted ssn value so the find will succeed
              result = encryption_client['users'].find_one_and_update(
                { ssn: encrypted_ssn_binary },
                { name: name }
              )

              expect(result['ssn']).to eq(ssn)
            end
          end
        end

        it_behaves_like 'an encrypted command'
      end

      describe '#insert_one' do
        let(:query) { { ssn: ssn } }
        let(:result) { encryption_client['users'].insert_one(query) }

        shared_examples 'it performs an encrypted command' do
          it 'encrypts the ssn field' do
            expect(result).to be_ok
            expect(result.inserted_ids.length).to eq(1)

            id = result.inserted_ids.first

            document = client['users'].find(_id: id).first
            document.should_not be_nil
            expect(document['ssn']).to eq(encrypted_ssn_binary)
          end
        end

        shared_examples 'it obeys bypass_auto_encryption option' do
          include_context 'bypass auto encryption'

          it 'does not encrypt the command' do
            result = encryption_client['users'].insert_one(ssn: ssn)
            expect(result).to be_ok
            expect(result.inserted_ids.length).to eq(1)

            id = result.inserted_ids.first

            document = client['users'].find(_id: id).first
            expect(document['ssn']).to eq(ssn)
          end
        end

        it_behaves_like 'an encrypted command'

        context 'with jsonSchema in schema_map option' do
          include_context 'schema map in client options'

          context 'with AWS KMS provider' do
            include_context 'with AWS kms_providers'
            it_behaves_like 'it obeys bypass_auto_encryption option'
          end

          context 'with Azure KMS provider' do
            include_context 'with Azure kms_providers'
            it_behaves_like 'it obeys bypass_auto_encryption option'
          end

          context 'with GCP KMS provider' do
            include_context 'with GCP kms_providers'
            it_behaves_like 'it obeys bypass_auto_encryption option'
          end

          context 'with KMIP KMS provider' do
            include_context 'with KMIP kms_providers'
            it_behaves_like 'it obeys bypass_auto_encryption option'
          end

          context 'with local KMS provider' do
            include_context 'with local kms_providers'
            it_behaves_like 'it obeys bypass_auto_encryption option'
          end
        end
        context 'with schema_map client option pointing to wrong collection' do
          let(:local_schema) { { 'wrong_db.wrong_coll' => schema_map } }

          include_context 'with local kms_providers'

          it 'does not raise an exception but doesn\'t encrypt either' do
            expect do
              result
            end.not_to raise_error

            expect(result).to be_ok
            id = result.inserted_ids.first

            document = client['users'].find(_id: id).first
            document.should_not be_nil
            # Document was not encrypted
            expect(document['ssn']).to eq(ssn)
          end
        end

        context 'encrypting using key alt name' do
          include_context 'schema map in client options'

          let(:query) { { ssn: ssn, altname: key_alt_name } }

          context 'with AWS KMS provider' do
            include_context 'with AWS kms_providers and key alt names'
            it 'encrypts the ssn field' do
              expect(result).to be_ok
              expect(result.inserted_ids.length).to eq(1)

              id = result.inserted_ids.first

              document = client['users'].find(_id: id).first
              document.should_not be_nil
              # Auto-encryption with key alt names only works with random encryption,
              # so it will not generate the same result on every test run.
              expect(document['ssn']).to be_ciphertext
            end
          end

          context 'with Azure KMS provider' do
            include_context 'with Azure kms_providers and key alt names'
            it 'encrypts the ssn field' do
              expect(result).to be_ok
              expect(result.inserted_ids.length).to eq(1)

              id = result.inserted_ids.first

              document = client['users'].find(_id: id).first
              document.should_not be_nil
              # Auto-encryption with key alt names only works with random encryption,
              # so it will not generate the same result on every test run.
              expect(document['ssn']).to be_ciphertext
            end

            context 'with GCP KMS provider' do
              include_context 'with GCP kms_providers and key alt names'
              it 'encrypts the ssn field' do
                expect(result).to be_ok
                expect(result.inserted_ids.length).to eq(1)

                id = result.inserted_ids.first

                document = client['users'].find(_id: id).first
                document.should_not be_nil
                # Auto-encryption with key alt names only works with random encryption,
                # so it will not generate the same result on every test run.
                expect(document['ssn']).to be_ciphertext
              end
            end

            context 'with KMIP KMS provider' do
              include_context 'with KMIP kms_providers and key alt names'
              it 'encrypts the ssn field' do
                expect(result).to be_ok
                expect(result.inserted_ids.length).to eq(1)

                id = result.inserted_ids.first

                document = client['users'].find(_id: id).first
                document.should_not be_nil
                # Auto-encryption with key alt names only works with random encryption,
                # so it will not generate the same result on every test run.
                expect(document['ssn']).to be_ciphertext
              end
            end
          end

          context 'with local KMS provider' do
            include_context 'with local kms_providers and key alt names'
            it 'encrypts the ssn field' do
              expect(result).to be_ok
              expect(result.inserted_ids.length).to eq(1)

              id = result.inserted_ids.first

              document = client['users'].find(_id: id).first
              document.should_not be_nil
              # Auto-encryption with key alt names only works with random encryption,
              # so it will not generate the same result on every test run.
              expect(document['ssn']).to be_a_kind_of(BSON::Binary)
            end
          end
        end
      end

      describe '#replace_one' do
        shared_examples 'it performs an encrypted command' do
          include_context 'encrypted document in collection'

          let(:replacement_ssn) { '098-765-4321' }

          let(:result) do
            encryption_client['users'].replace_one(
              { ssn: ssn },
              { ssn: replacement_ssn }
            )
          end

          it 'encrypts the ssn field' do
            expect(result.modified_count).to eq(1)

            find_result = encryption_client['users'].find(ssn: '098-765-4321')
            expect(find_result.count).to eq(1)
          end

          context 'with bypass_auto_encryption=true' do
            include_context 'bypass auto encryption'

            it 'does not encrypt the command' do
              expect(result.modified_count).to eq(0)
            end
          end
        end

        it_behaves_like 'an encrypted command'
      end

      describe '#update_one' do
        shared_examples 'it performs an encrypted command' do
          include_context 'encrypted document in collection'

          let(:result) do
            encryption_client['users'].replace_one({ ssn: ssn }, { ssn: '098-765-4321' })
          end

          it 'encrypts the ssn field' do
            expect(result.n).to eq(1)

            find_result = encryption_client['users'].find(ssn: '098-765-4321')
            expect(find_result.count).to eq(1)
          end

          context 'with bypass_auto_encryption=true' do
            include_context 'bypass auto encryption'

            it 'does not encrypt the command' do
              expect(result.n).to eq(0)
            end
          end
        end

        it_behaves_like 'an encrypted command'
      end

      describe '#update_many' do
        shared_examples 'it performs an encrypted command' do
          before do
            client['users'].insert_one(ssn: encrypted_ssn_binary, age: 25)
            client['users'].insert_one(ssn: encrypted_ssn_binary, age: 43)
          end

          let(:result) do
            encryption_client['users'].update_many({ ssn: ssn }, { "$inc" => { :age => 1 } })
          end

          it 'encrypts the ssn field' do
            expect(result.n).to eq(2)

            updated_documents = encryption_client['users'].find(ssn: ssn)
            ages = updated_documents.map { |doc| doc['age'] }
            expect(ages).to include(26)
            expect(ages).to include(44)
          end

          context 'with bypass_auto_encryption=true' do
            include_context 'bypass auto encryption'

            it 'does not encrypt the command' do
              expect(result.n).to eq(0)
            end
          end
        end

        it_behaves_like 'an encrypted command'
      end
    end
  end
end
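# ---- Illustrative sketch (separate from the archived spec files above) ----
# The CRUD specs above toggle bypass_auto_encryption. With it set, the client
# never encrypts outgoing commands (and does not spawn mongocryptd) but still
# decrypts any ciphertext it reads back. Assumed address/database/key material:
require 'mongo'
require 'securerandom'

client = Mongo::Client.new(
  ['localhost:27017'],
  database: 'auto_encryption',
  auto_encryption_options: {
    kms_providers: { local: { key: SecureRandom.bytes(96) } },
    key_vault_namespace: 'keyvault.datakeys',
    bypass_auto_encryption: true,
  }
)

# Query is sent unencrypted; any encrypted fields in results are decrypted.
client['users'].find(ssn: '123-45-6789').first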
# ---- mongo-ruby-driver-2.21.3/spec/integration/client_side_encryption/automatic_data_encryption_keys_prose_spec.rb ----
# frozen_string_literal: true

require 'spec_helper'

describe 'Client-Side Encryption' do
  describe 'Automatic Data Encryption Keys' do
    require_libmongocrypt
    require_enterprise
    require_topology :replica_set, :sharded, :load_balanced
    min_server_version '7.0.0-rc0'

    include_context 'define shared FLE helpers'

    let(:test_database_name) do
      'automatic_data_encryption_keys'
    end

    let(:key_vault_client) do
      ClientRegistry.instance.new_local_client(SpecConfig.instance.addresses)
    end

    let(:client_encryption) do
      Mongo::ClientEncryption.new(
        key_vault_client,
        kms_tls_options: kms_tls_options,
        key_vault_namespace: key_vault_namespace,
        kms_providers: {
          local: { key: local_master_key },
          aws: {
            access_key_id: SpecConfig.instance.fle_aws_key,
            secret_access_key: SpecConfig.instance.fle_aws_secret,
          }
        }
      )
    end

    let(:database) do
      authorized_client.use(test_database_name).database
    end

    before do
      authorized_client.use(key_vault_db)[key_vault_coll].drop
      authorized_client.use(test_database_name).database.drop
    end

    shared_examples 'creates data keys automatically' do
      let(:opts) do
        { encrypted_fields: { fields: [ field ] } }
      end

      context 'when insert unencrypted value' do
        let(:field) do
          { path: 'ssn', bsonType: 'string', keyId: nil }
        end

        it 'fails document validation' do
          client_encryption.create_encrypted_collection(
            database, 'testing1', opts, kms_provider, master_key
          )
          expect { database['testing1'].insert_one(ssn: '123-45-6789') }
            .to raise_error(Mongo::Error::OperationFailure, /Document failed validation/)
        end
      end

      it 'fails when missing encrypted field' do
        expect do
          client_encryption.create_encrypted_collection(
            database, 'testing1', {}, kms_provider, master_key
          )
        end.to raise_error(ArgumentError, /coll_opts must contain :encrypted_fields/)
      end

      context 'when invalid keyId provided' do
        let(:field) do
          { path: 'ssn', bsonType: 'string', keyId: false }
        end

        it 'fails' do
          expect do
            client_encryption.create_encrypted_collection(
              database, 'testing1', opts, kms_provider, master_key
            )
          end.to raise_error(Mongo::Error::CryptError, /keyId' is the wrong type/)
        end
      end

      context 'when configured correctly' do
        let(:field) do
          { path: 'ssn', bsonType: 'string', keyId: nil }
        end

        let(:new_encrypted_fields) do
          _, new_encrypted_fields = client_encryption.create_encrypted_collection(
            database, 'testing1', opts, kms_provider, master_key
          )
          new_encrypted_fields
        end

        let(:key_id) do
          new_encrypted_fields[:fields].first[:keyId]
        end

        let(:encrypted_payload) do
          client_encryption.encrypt(
            '123-45-6789',
            key_id: key_id,
            algorithm: 'Unindexed'
          )
        end

        it 'successfully inserts encrypted value' do
          expect do
            database['testing1'].insert_one(ssn: encrypted_payload)
          end.not_to raise_error
        end
      end
    end

    context 'with aws' do
      let(:kms_provider) { 'aws' }
      let(:master_key) do
        {
          region: 'us-east-1',
          key: 'arn:aws:kms:us-east-1:579766882180:key/89fcc2c4-08b0-4bd9-9f25-e30687b580d0'
        }
      end

      it_behaves_like 'creates data keys automatically'
    end

    context 'with local' do
      let(:kms_provider) { 'local' }
      let(:master_key) { { key: local_master_key } }

      it_behaves_like 'creates data keys automatically'
    end
  end
end
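# ---- Illustrative sketch (separate from the archived spec files above) ----
# create_encrypted_collection, as exercised above, provisions a data key for
# each nil keyId in the encrypted_fields map, creates the collection, and
# returns the filled-in map. Assumed address, names, and local key material:
require 'mongo'
require 'securerandom'

local_key = SecureRandom.bytes(96)
client_encryption = Mongo::ClientEncryption.new(
  Mongo::Client.new(['localhost:27017']),
  key_vault_namespace: 'keyvault.datakeys',
  kms_providers: { local: { key: local_key } }
)

database = Mongo::Client.new(['localhost:27017'], database: 'app').database
coll_opts = {
  encrypted_fields: { fields: [{ path: 'ssn', bsonType: 'string', keyId: nil }] }
}
_result, encrypted_fields = client_encryption.create_encrypted_collection(
  database, 'people', coll_opts, 'local', { key: local_key }
)
encrypted_fields[:fields].first[:keyId] # BSON::Binary UUID of the new data key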
"over_2mib_under_16mib", unencrypted: 'a' * _2mib } result = client_encrypted['coll'].insert_one(document) expect(result).to be_ok end end context 'when a single encrypted document is larger than 2MiB' do it 'can perform insert_one using the encrypted client' do result = client_encrypted['coll'].insert_one( limits_doc.merge( _id: "encryption_exceeds_2mi", unencrypted: 'a' * (_2mib - 2000) ) ) expect(result).to be_ok end end context 'when bulk inserting two unencrypted documents under 2MiB' do it 'can perform bulk insert using the encrypted client' do bulk_write = Mongo::BulkWrite.new( client_encrypted['coll'], [ { insert_one: { _id: 'over_2mib_1', unencrypted: 'a' * _2mib } }, { insert_one: { _id: 'over_2mib_2', unencrypted: 'a' * _2mib } }, ] ) result = bulk_write.execute expect(result.inserted_count).to eq(2) command_succeeded_events = subscriber.succeeded_events.select do |event| event.command_name == 'insert' end expect(command_succeeded_events.length).to eq(2) end end context 'when bulk deletes two unencrypted documents under 2MiB' do it 'can perform bulk delete using the encrypted client' do # Insert documents that we can match and delete later bulk_write = Mongo::BulkWrite.new( client_encrypted['coll'], [ { insert_one: { _id: 'over_2mib_1', unencrypted: 'a' * _2mib } }, { insert_one: { _id: 'over_2mib_2', unencrypted: 'a' * _2mib } }, ] ) result = bulk_write.execute expect(result.inserted_count).to eq(2) command_succeeded_events = subscriber.succeeded_events.select do |event| event.command_name == 'insert' end expect(command_succeeded_events.length).to eq(2) end end context 'when bulk inserting two encrypted documents under 2MiB' do it 'can perform bulk_insert using the encrypted client' do bulk_write = Mongo::BulkWrite.new( client_encrypted['coll'], [ { insert_one: limits_doc.merge( _id: "encryption_exceeds_2mib_1", unencrypted: 'a' * (_2mib - 2000) ) }, { insert_one: limits_doc.merge( _id: 'encryption_exceeds_2mib_2', unencrypted: 'a' * (_2mib - 2000) ) }, ] ) result = bulk_write.execute expect(result.inserted_count).to eq(2) command_succeeded_events = subscriber.succeeded_events.select do |event| event.command_name == 'insert' end expect(command_succeeded_events.length).to eq(2) end end context 'when a single document is just smaller than 16MiB' do it 'can perform insert_one using the encrypted client' do result = client_encrypted['coll'].insert_one( _id: "under_16mib", unencrypted: "a" * (_16mib - 2000) ) expect(result).to be_ok end end context 'when an encrypted document is greater than the 16MiB limit' do it 'raises an exception when attempting to insert the document' do expect do client_encrypted['coll'].insert_one( limits_doc.merge( _id: "encryption_exceeds_16mib", unencrypted: "a" * (16*1024*1024 + 500*1024), ) ) end.to raise_error(Mongo::Error::MaxBSONSize, /The document exceeds maximum allowed BSON object size after serialization/) end end end end mongo-ruby-driver-2.21.3/spec/integration/client_side_encryption/bypass_mongocryptd_spawn_spec.rb000066400000000000000000000052361505113246500337040ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe 'Client-Side Encryption' do describe 'Prose tests: Bypass mongocryptd spawn' do require_libmongocrypt require_enterprise min_server_fcv '4.2' include_context 'define shared FLE helpers' # Choose a different port for mongocryptd than the one used by all the other # tests to avoid failures caused by other tests spawning mongocryptd. 
# ---- mongo-ruby-driver-2.21.3/spec/integration/client_side_encryption/bypass_mongocryptd_spawn_spec.rb ----
# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe 'Client-Side Encryption' do
  describe 'Prose tests: Bypass mongocryptd spawn' do
    require_libmongocrypt
    require_enterprise
    min_server_fcv '4.2'

    include_context 'define shared FLE helpers'

    # Choose a different port for mongocryptd than the one used by all the other
    # tests to avoid failures caused by other tests spawning mongocryptd.
    let(:mongocryptd_port) { 27091 }

    context 'via mongocryptdBypassSpawn' do
      let(:test_schema_map) do
        BSON::ExtJSON.parse(File.read('spec/support/crypt/external/external-schema.json'))
      end

      let(:client) do
        new_local_client(
          SpecConfig.instance.addresses,
          SpecConfig.instance.test_options.merge(
            auto_encryption_options: {
              kms_providers: local_kms_providers,
              key_vault_namespace: 'keyvault.datakeys',
              schema_map: { 'db.coll' => test_schema_map },
              extra_options: {
                mongocryptd_bypass_spawn: true,
                mongocryptd_uri: "mongodb://localhost:#{mongocryptd_port}/db?serverSelectionTimeoutMS=1000",
                mongocryptd_spawn_args: [
                  "--pidfilepath=bypass-spawning-mongocryptd.pid",
                  "--port=#{mongocryptd_port}"
                ],
              },
            },
            database: 'db'
          ),
        )
      end

      it 'does not spawn' do
        lambda do
          client['coll'].insert_one(encrypted: 'test')
        end.should raise_error(Mongo::Error::NoServerAvailable, /Server address=localhost:#{Regexp.quote(mongocryptd_port.to_s)} UNKNOWN/)
      end
    end

    context 'via bypassAutoEncryption' do
      let(:client) do
        new_local_client(
          SpecConfig.instance.addresses,
          SpecConfig.instance.test_options.merge(
            auto_encryption_options: {
              kms_providers: local_kms_providers,
              key_vault_namespace: 'keyvault.datakeys',
              bypass_auto_encryption: true,
              extra_options: {
                mongocryptd_spawn_args: [
                  "--pidfilepath=bypass-spawning-mongocryptd.pid",
                  "--port=#{mongocryptd_port}"
                ],
              },
            },
            database: 'db'
          ),
        )
      end

      let(:mongocryptd_client) do
        new_local_client(["localhost:#{mongocryptd_port}"], server_selection_timeout: 1)
      end

      it 'does not spawn' do
        lambda do
          client['coll'].insert_one(encrypted: 'test')
        end.should_not raise_error

        lambda do
          mongocryptd_client.database.command(hello: 1)
        end.should raise_error(Mongo::Error::NoServerAvailable)
      end
    end
  end
end

# ---- mongo-ruby-driver-2.21.3/spec/integration/client_side_encryption/client_close_spec.rb ----
# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe 'Auto encryption client' do
  require_libmongocrypt
  require_enterprise
  min_server_fcv '4.2'

  context 'after client is disconnected' do
    include_context 'define shared FLE helpers'
    include_context 'with local kms_providers'

    let(:client) do
      new_local_client(
        SpecConfig.instance.addresses,
        SpecConfig.instance.test_options.merge(
          auto_encryption_options: {
            kms_providers: kms_providers,
            key_vault_namespace: 'keyvault.datakeys',
            schema_map: { 'auto_encryption.users' => schema_map },
            # Spawn mongocryptd on non-default port for sharded cluster tests
            extra_options: extra_options,
          },
          database: 'auto_encryption',
        )
      )
    end

    shared_examples 'a functioning auto-encrypter' do
      it 'can still perform encryption' do
        result = client['users'].insert_one(ssn: '000-000-0000')
        expect(result).to be_ok

        encrypted_document = authorized_client
          .use('auto_encryption')['users']
          .find(_id: result.inserted_ids.first)
          .first

        expect(encrypted_document['ssn']).to be_ciphertext
      end
    end

    context 'after performing operation with auto encryption' do
      before do
        key_vault_collection.drop
        key_vault_collection.insert_one(data_key)

        client['users'].insert_one(ssn: ssn)

        client.close
      end

      it_behaves_like 'a functioning auto-encrypter'
    end

    context 'after performing operation without auto encryption' do
      before do
        client['users'].insert_one(age: 23)
        client.close
      end

      it_behaves_like 'a functioning auto-encrypter'
    end
  end
end
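# ---- Illustrative sketch (separate from the archived spec files above) ----
# The bypass-spawn spec relies on these extra_options to point the client at
# an already-running mongocryptd instead of spawning one. Assumed address,
# database, mongocryptd port, and key material:
require 'mongo'
require 'securerandom'

client = Mongo::Client.new(
  ['localhost:27017'],
  database: 'db',
  auto_encryption_options: {
    kms_providers: { local: { key: SecureRandom.bytes(96) } },
    key_vault_namespace: 'keyvault.datakeys',
    extra_options: {
      mongocryptd_bypass_spawn: true,
      mongocryptd_uri: 'mongodb://localhost:27091/?serverSelectionTimeoutMS=1000',
    },
  }
)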
# ---- mongo-ruby-driver-2.21.3/spec/integration/client_side_encryption/corpus_spec.rb ----
# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe 'Client-Side Encryption' do
  describe 'Prose tests: Corpus Test' do
    require_libmongocrypt
    require_enterprise
    min_server_fcv '4.2'

    include_context 'define shared FLE helpers'

    let(:client) { authorized_client }

    let(:key_vault_client) do
      client.with(
        database: 'keyvault',
        write_concern: { w: :majority }
      )['datakeys']
    end

    let(:test_schema_map) { BSON::ExtJSON.parse(File.read('spec/support/crypt/corpus/corpus-schema.json')) }
    let(:local_data_key) { BSON::ExtJSON.parse(File.read('spec/support/crypt/corpus/corpus-key-local.json')) }
    let(:aws_data_key) { BSON::ExtJSON.parse(File.read('spec/support/crypt/corpus/corpus-key-aws.json')) }
    let(:azure_data_key) { BSON::ExtJSON.parse(File.read('spec/support/crypt/corpus/corpus-key-azure.json')) }
    let(:gcp_data_key) { BSON::ExtJSON.parse(File.read('spec/support/crypt/corpus/corpus-key-gcp.json')) }
    let(:kmip_data_key) { BSON::ExtJSON.parse(File.read('spec/support/crypt/corpus/corpus-key-kmip.json')) }

    let(:client_encrypted) do
      new_local_client(
        SpecConfig.instance.addresses,
        SpecConfig.instance.test_options.merge(
          auto_encryption_options: {
            kms_providers: {
              local: { key: local_master_key },
              aws: {
                access_key_id: SpecConfig.instance.fle_aws_key,
                secret_access_key: SpecConfig.instance.fle_aws_secret,
              },
              azure: {
                tenant_id: SpecConfig.instance.fle_azure_tenant_id,
                client_id: SpecConfig.instance.fle_azure_client_id,
                client_secret: SpecConfig.instance.fle_azure_client_secret,
              },
              gcp: {
                email: SpecConfig.instance.fle_gcp_email,
                private_key: SpecConfig.instance.fle_gcp_private_key,
              },
              kmip: {
                endpoint: SpecConfig.instance.fle_kmip_endpoint,
              }
            },
            kms_tls_options: {
              kmip: {
                ssl_ca_cert: SpecConfig.instance.fle_kmip_tls_ca_file,
                ssl_cert: SpecConfig.instance.fle_kmip_tls_certificate_key_file,
                ssl_key: SpecConfig.instance.fle_kmip_tls_certificate_key_file,
              }
            },
            key_vault_namespace: 'keyvault.datakeys',
            schema_map: local_schema_map,
            # Spawn mongocryptd on non-default port for sharded cluster tests
            extra_options: extra_options,
          },
          database: 'db',
        )
      )
    end

    let(:client_encryption) do
      Mongo::ClientEncryption.new(
        client,
        {
          kms_providers: {
            local: { key: local_master_key },
            aws: {
              access_key_id: SpecConfig.instance.fle_aws_key,
              secret_access_key: SpecConfig.instance.fle_aws_secret,
            },
            azure: {
              tenant_id: SpecConfig.instance.fle_azure_tenant_id,
              client_id: SpecConfig.instance.fle_azure_client_id,
              client_secret: SpecConfig.instance.fle_azure_client_secret,
            },
            gcp: {
              email: SpecConfig.instance.fle_gcp_email,
              private_key: SpecConfig.instance.fle_gcp_private_key,
            },
            kmip: {
              endpoint: SpecConfig.instance.fle_kmip_endpoint,
            }
          },
          kms_tls_options: {
            kmip: {
              ssl_ca_cert: SpecConfig.instance.fle_kmip_tls_ca_file,
              ssl_cert: SpecConfig.instance.fle_kmip_tls_certificate_key_file,
              ssl_key: SpecConfig.instance.fle_kmip_tls_certificate_key_file,
            }
          },
          key_vault_namespace: 'keyvault.datakeys',
        },
      )
    end

    let(:corpus) do
      BSON::ExtJSON.parse(File.read('spec/support/crypt/corpus/corpus.json'), mode: :bson)
    end

    let(:corpus_encrypted_expected) do
      BSON::ExtJSON.parse(File.read('spec/support/crypt/corpus/corpus-encrypted.json'))
    end

    let(:corpus_copied) do
      # As per the instructions of the prose spec, corpus_copied is a copy of
      # the corpus BSON::Document that encrypts all fields that are meant to
      # be explicitly encrypted. corpus is a document containing many
      # sub-documents, each with a value to encrypt and information about how
      # to encrypt that value.
      corpus_copied = BSON::Document.new
      corpus.each do |key, doc|
        if ['_id', 'altname_aws', 'altname_azure', 'altname_gcp', 'altname_kmip', 'altname_local'].include?(key)
          corpus_copied[key] = doc
          next
        end

        if doc['method'] == 'auto'
          corpus_copied[key] = doc
        elsif doc['method'] == 'explicit'
          options = if doc['identifier'] == 'id'
            key_id = if doc['kms'] == 'local'
              'LOCALAAAAAAAAAAAAAAAAA=='
            elsif doc['kms'] == 'azure'
              'AZUREAAAAAAAAAAAAAAAAA=='
            elsif doc['kms'] == 'gcp'
              'GCPAAAAAAAAAAAAAAAAAAA=='
            elsif doc['kms'] == 'aws'
              'AWSAAAAAAAAAAAAAAAAAAA=='
            elsif doc['kms'] == 'kmip'
              'KMIPAAAAAAAAAAAAAAAAAA=='
            end

            { key_id: BSON::Binary.new(Base64.decode64(key_id), :uuid) }
          elsif doc['identifier'] == 'altname'
            { key_alt_name: doc['kms'] }
          end

          algorithm = if doc['algo'] == 'rand'
            'AEAD_AES_256_CBC_HMAC_SHA_512-Random'
          else
            'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'
          end

          begin
            encrypted_value = client_encryption.encrypt(
              doc['value'],
              options.merge({ algorithm: algorithm })
            )

            corpus_copied[key] = doc.merge('value' => encrypted_value)
          rescue => e
            # If doc['allowed'] is true, it means that this field should have
            # been encrypted without error, and thus that this error is unexpected.
            # If doc['allowed'] is false, this error was expected and the value
            # should be copied over without being encrypted.
            if doc['allowed']
              raise "Unexpected error occurred in client-side encryption " +
                "corpus tests: #{e.class}: #{e.message}"
            end

            corpus_copied[key] = doc
          end
        end
      end

      corpus_copied
    end

    before do
      client.use('db')['coll'].drop

      key_vault_collection = client.use('keyvault')['datakeys', write_concern: { w: :majority }]
      key_vault_collection.drop
      key_vault_collection.insert_one(local_data_key)
      key_vault_collection.insert_one(aws_data_key)
      key_vault_collection.insert_one(azure_data_key)
      key_vault_collection.insert_one(gcp_data_key)
      key_vault_collection.insert_one(kmip_data_key)
    end

    # This method compensates for an API change between BSON 4 and
    # BSON 5.
    def normalize_cse_value(a)
      case a
      when BSON::Decimal128 then a.to_d
      else a
      end
    end
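    # BSON::Decimal128#to_d converts the value to a BigDecimal; BigDecimal
    # comparison behaves the same under BSON 4 and BSON 5, which is why the
    # helper above normalizes Decimal128 values before comparing corpus
    # entries. All other value types are compared as-is.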
    shared_context 'with jsonSchema collection validator' do
      let(:local_schema_map) { nil }

      before do
        client.use('db')['coll',
          {
            'validator' => { '$jsonSchema' => test_schema_map }
          }
        ].create
      end
    end

    shared_context 'with local schema map' do
      let(:local_schema_map) { { 'db.coll' => test_schema_map } }
    end

    shared_examples 'a functioning encrypter' do
      it 'properly encrypts and decrypts a document' do
        corpus_encrypted_id = client_encrypted['coll']
          .insert_one(corpus_copied)
          .inserted_id

        corpus_decrypted = client_encrypted['coll']
          .find(_id: corpus_encrypted_id)
          .first

        # Ensure that corpus_decrypted is the same as the original corpus
        # document by checking that they have the same set of keys, and that
        # they have the same values at those keys (improved diagnostics).
        expect(corpus_decrypted.keys).to eq(corpus.keys)

        corpus_decrypted.each do |key, doc|
          expect(key => doc).to eq(key => corpus[key])
        end

        corpus_encrypted_actual = client
          .use('db')['coll']
          .find(_id: corpus_encrypted_id)
          .first

        corpus_encrypted_actual.each do |key, value|
          # If it was deterministically encrypted, test the encrypted values
          # for equality.
          if value['algo'] == 'det'
            expect(normalize_cse_value(value['value'])).to eq(normalize_cse_value(corpus_encrypted_expected[key]['value']))
          else
            # If the document was randomly encrypted, the two encrypted values
            # will not be equal. Ensure that they are equal when decrypted.
            if value['allowed']
              actual_decrypted_value = client_encryption.decrypt(value['value'])
              expected_decrypted_value = client_encryption.decrypt(corpus_encrypted_expected[key]['value'])

              expect(actual_decrypted_value).to eq(expected_decrypted_value)
            else
              # If 'allowed' was false, the value was never encrypted; ensure
              # that it is equal to the original, unencrypted value.
              expect(value['value']).to eq(corpus[key]['value'])
            end
          end
        end
      end
    end

    context 'with collection validator' do
      include_context 'with jsonSchema collection validator'
      it_behaves_like 'a functioning encrypter'
    end

    context 'with schema map' do
      include_context 'with local schema map'
      it_behaves_like 'a functioning encrypter'
    end
  end
end

# ---- mongo-ruby-driver-2.21.3/spec/integration/client_side_encryption/custom_endpoint_spec.rb ----
# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe 'Client-Side Encryption' do
  describe 'Prose tests: Data key and double encryption' do
    require_libmongocrypt
    require_enterprise
    min_server_fcv '4.2'

    include_context 'define shared FLE helpers'

    let(:client) do
      new_local_client(
        SpecConfig.instance.addresses,
        SpecConfig.instance.test_options
      )
    end

    let(:client_encryption) do
      Mongo::ClientEncryption.new(
        client,
        {
          kms_providers: aws_kms_providers,
          key_vault_namespace: 'keyvault.datakeys',
        },
      )
    end

    let(:master_key_template) do
      {
        region: "us-east-1",
        key: "arn:aws:kms:us-east-1:579766882180:key/89fcc2c4-08b0-4bd9-9f25-e30687b580d0"
      }
    end

    let(:data_key_id) do
      client_encryption.create_data_key('aws', master_key: master_key)
    end

    shared_examples 'a functioning data key' do
      it 'can encrypt and decrypt a string' do
        encrypted = client_encryption.encrypt(
          'test',
          {
            key_id: data_key_id,
            algorithm: 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'
          }
        )
        expect(encrypted).to be_ciphertext

        decrypted = client_encryption.decrypt(encrypted)
        expect(decrypted).to eq('test')
      end
    end

    context 'with region and key options' do
      let(:master_key) do
        master_key_template
      end

      it_behaves_like 'a functioning data key'
    end

    context 'with region, key, and endpoint options' do
      let(:master_key) do
        master_key_template.merge({ endpoint: "kms.us-east-1.amazonaws.com" })
      end

      it_behaves_like 'a functioning data key'
    end

    context 'with region, key, and endpoint with valid port' do
      let(:master_key) do
        master_key_template.merge({ endpoint: "kms.us-east-1.amazonaws.com:443" })
      end

      it_behaves_like 'a functioning data key'
    end

    shared_examples 'raising a KMS error' do
      it 'throws an exception' do
        expect do
          data_key_id
        end.to raise_error(Mongo::Error::KmsError, error_regex)
      end
    end

    context 'with region, key, and endpoint with invalid port' do
      let(:master_key) do
        master_key_template.merge({ endpoint: "kms.us-east-1.amazonaws.com:12345" })
      end

      let(:error_regex) do
        /Connection refused|SocketError|SocketTimeoutError/
      end

      it_behaves_like 'raising a KMS error'
    end

    context 'with region, key, and endpoint with invalid region' do
      let(:master_key) do
        master_key_template.merge({ endpoint: "kms.us-east-2.amazonaws.com" })
      end

      let(:error_regex) do
        //
      end

      it_behaves_like 'raising a KMS error'
    end

    context 'with region, key, and endpoint at incorrect domain' do
      let(:master_key) do
        master_key_template.merge({ endpoint: "doesnotexist.invalid" })
      end

      let(:error_regex) do
        /(SocketError|ResolutionError): getaddrinfo:/
      end

      it_behaves_like 'raising a KMS error'
    end
  end
end
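# ---- Illustrative sketch (separate from the archived spec files above) ----
# The custom-endpoint spec passes an explicit KMS endpoint inside the AWS
# master key document. The shape of that option is shown below; the region,
# ARN, and endpoint values are placeholders, and the surrounding AWS
# credentials are assumed to be configured elsewhere.
master_key = {
  region: 'us-east-1',
  key: 'arn:aws:kms:us-east-1:000000000000:key/00000000-0000-0000-0000-000000000000',
  endpoint: 'kms.us-east-1.amazonaws.com:443',
}
# data_key_id = client_encryption.create_data_key('aws', master_key: master_key)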
# ---- mongo-ruby-driver-2.21.3/spec/integration/client_side_encryption/data_key_spec.rb ----
# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe 'Client-Side Encryption' do
  describe 'Prose tests: Data key and double encryption' do
    require_libmongocrypt
    require_enterprise
    min_server_fcv '4.2'

    include_context 'define shared FLE helpers'

    let(:subscriber) { Mrss::EventSubscriber.new }

    let(:client) do
      new_local_client(
        SpecConfig.instance.addresses,
        SpecConfig.instance.test_options
      ).tap do |client|
        client.subscribe(Mongo::Monitoring::COMMAND, subscriber)
      end
    end

    let(:test_schema_map) do
      {
        "db.coll": {
          "bsonType": "object",
          "properties": {
            "encrypted_placeholder": {
              "encrypt": {
                "keyId": "/placeholder",
                "bsonType": "string",
                "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random"
              }
            }
          }
        }
      }
    end

    let(:client_encrypted) do
      new_local_client(
        SpecConfig.instance.addresses,
        SpecConfig.instance.test_options.merge(
          auto_encryption_options: {
            kms_providers: {
              local: { key: local_master_key },
              aws: {
                access_key_id: SpecConfig.instance.fle_aws_key,
                secret_access_key: SpecConfig.instance.fle_aws_secret,
              },
              azure: {
                tenant_id: SpecConfig.instance.fle_azure_tenant_id,
                client_id: SpecConfig.instance.fle_azure_client_id,
                client_secret: SpecConfig.instance.fle_azure_client_secret,
              },
              gcp: {
                email: SpecConfig.instance.fle_gcp_email,
                private_key: SpecConfig.instance.fle_gcp_private_key,
              },
              kmip: {
                endpoint: SpecConfig.instance.fle_kmip_endpoint
              }
            },
            kms_tls_options: {
              kmip: {
                ssl_ca_cert: SpecConfig.instance.fle_kmip_tls_ca_file,
                ssl_cert: SpecConfig.instance.fle_kmip_tls_certificate_key_file,
                ssl_key: SpecConfig.instance.fle_kmip_tls_certificate_key_file,
              }
            },
            key_vault_namespace: 'keyvault.datakeys',
            schema_map: test_schema_map,
            # Spawn mongocryptd on non-default port for sharded cluster tests
            extra_options: extra_options,
          },
          database: 'db',
        )
      )
    end

    let(:client_encryption) do
      Mongo::ClientEncryption.new(
        client,
        {
          kms_providers: {
            local: { key: local_master_key },
            aws: {
              access_key_id: SpecConfig.instance.fle_aws_key,
              secret_access_key: SpecConfig.instance.fle_aws_secret,
            },
            azure: {
              tenant_id: SpecConfig.instance.fle_azure_tenant_id,
              client_id: SpecConfig.instance.fle_azure_client_id,
              client_secret: SpecConfig.instance.fle_azure_client_secret,
            },
            gcp: {
              email: SpecConfig.instance.fle_gcp_email,
              private_key: SpecConfig.instance.fle_gcp_private_key,
            },
            kmip: {
              endpoint: SpecConfig.instance.fle_kmip_endpoint
            }
          },
          kms_tls_options: {
            kmip: {
              ssl_ca_cert: SpecConfig.instance.fle_kmip_tls_ca_file,
              ssl_cert: SpecConfig.instance.fle_kmip_tls_certificate_key_file,
              ssl_key: SpecConfig.instance.fle_kmip_tls_certificate_key_file,
            }
          },
          key_vault_namespace: 'keyvault.datakeys',
        },
      )
    end

    before do
      client.use('keyvault')['datakeys'].drop
      client.use('db')['coll'].drop
    end
    shared_examples 'can create and use a data key' do
      it 'creates a data key and uses it for encryption' do
        data_key_id = client_encryption.create_data_key(
          kms_provider_name,
          data_key_options.merge(key_alt_names: [key_alt_name])
        )

        expect(data_key_id).to be_uuid

        keys = client.use('keyvault')['datakeys'].find(_id: data_key_id)

        expect(keys.count).to eq(1)
        expect(keys.first['masterKey']['provider']).to eq(kms_provider_name)

        command_started_event = subscriber.started_events.find do |event|
          event.command_name == 'find'
        end

        expect(command_started_event).not_to be_nil

        encrypted = client_encryption.encrypt(
          value_to_encrypt,
          {
            key_id: data_key_id,
            algorithm: 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'
          }
        )

        expect(encrypted).to be_ciphertext

        client_encrypted['coll'].insert_one(
          _id: kms_provider_name,
          value: encrypted,
        )

        document = client_encrypted['coll'].find(_id: kms_provider_name).first

        expect(document['value']).to eq(value_to_encrypt)

        encrypted_with_alt_name = client_encryption.encrypt(
          value_to_encrypt,
          {
            key_alt_name: key_alt_name,
            algorithm: 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'
          }
        )

        expect(encrypted_with_alt_name).to be_ciphertext
        expect(encrypted_with_alt_name).to eq(encrypted)

        expect do
          client_encrypted['coll'].insert_one(encrypted_placeholder: encrypted)
        end.to raise_error(Mongo::Error::OperationFailure, /Cannot encrypt element of type(: encrypted binary data| binData)/)
      end
    end

    context 'with local KMS options' do
      include_context 'with local kms_providers'

      let(:key_alt_name) { 'local_altname' }
      let(:data_key_options) { {} }
      let(:value_to_encrypt) { 'hello local' }

      it_behaves_like 'can create and use a data key'
    end

    context 'with AWS KMS options' do
      include_context 'with AWS kms_providers'

      let(:key_alt_name) { 'aws_altname' }
      let(:value_to_encrypt) { 'hello aws' }

      let(:data_key_options) do
        {
          master_key: {
            region: SpecConfig.instance.fle_aws_region,
            key: SpecConfig.instance.fle_aws_arn,
          }
        }
      end

      it_behaves_like 'can create and use a data key'
    end

    context 'with Azure KMS options' do
      include_context 'with Azure kms_providers'

      let(:key_alt_name) { 'azure_altname' }
      let(:value_to_encrypt) { 'hello azure' }

      let(:data_key_options) do
        {
          master_key: {
            key_vault_endpoint: SpecConfig.instance.fle_azure_key_vault_endpoint,
            key_name: SpecConfig.instance.fle_azure_key_name,
          }
        }
      end

      it_behaves_like 'can create and use a data key'
    end

    context 'with GCP KMS options' do
      include_context 'with GCP kms_providers'

      let(:key_alt_name) { 'gcp_altname' }
      let(:value_to_encrypt) { 'hello gcp' }

      let(:data_key_options) do
        {
          master_key: {
            project_id: SpecConfig.instance.fle_gcp_project_id,
            location: SpecConfig.instance.fle_gcp_location,
            key_ring: SpecConfig.instance.fle_gcp_key_ring,
            key_name: SpecConfig.instance.fle_gcp_key_name,
          }
        }
      end

      it_behaves_like 'can create and use a data key'
    end

    context 'with KMIP KMS options' do
      include_context 'with KMIP kms_providers'

      let(:key_alt_name) { 'kmip_altname' }
      let(:value_to_encrypt) { 'hello kmip' }

      let(:data_key_options) do
        {
          master_key: {
            key_id: "1"
          }
        }
      end

      it_behaves_like 'can create and use a data key'
    end
  end
end
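# ---- Illustrative sketch (separate from the archived spec files above) ----
# The data-key prose test above boils down to this flow: create a data key,
# encrypt deterministically with it, and decrypt. Assumed address and local
# key material; requires libmongocrypt.
require 'mongo'
require 'securerandom'

client_encryption = Mongo::ClientEncryption.new(
  Mongo::Client.new(['localhost:27017']),
  key_vault_namespace: 'keyvault.datakeys',
  kms_providers: { local: { key: SecureRandom.bytes(96) } }
)

data_key_id = client_encryption.create_data_key('local', key_alt_names: ['example_altname'])
ciphertext = client_encryption.encrypt(
  'hello local',
  key_id: data_key_id,
  algorithm: 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'
)
client_encryption.decrypt(ciphertext) # => "hello local"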
# ---- mongo-ruby-driver-2.21.3/spec/integration/client_side_encryption/decryption_events_prose_spec.rb ----
# frozen_string_literal: true

require 'spec_helper'

describe 'Decryption events' do
  require_enterprise
  min_server_fcv '4.2'
  require_libmongocrypt
  include_context 'define shared FLE helpers'
  require_topology :replica_set
  min_server_version '7.0.0-rc0'

  let(:setup_client) do
    ClientRegistry.instance.new_local_client(
      SpecConfig.instance.addresses,
      SpecConfig.instance.test_options.merge(
        database: SpecConfig.instance.test_db
      )
    )
  end

  let(:collection_name) do
    'decryption_event'
  end

  let(:client_encryption) do
    Mongo::ClientEncryption.new(
      setup_client,
      key_vault_namespace: "#{key_vault_db}.#{key_vault_coll}",
      kms_providers: local_kms_providers
    )
  end

  let(:key_id) do
    client_encryption.create_data_key('local')
  end

  let(:unencrypted_value) do
    'hello'
  end

  let(:ciphertext) do
    client_encryption.encrypt(
      unencrypted_value,
      key_id: key_id,
      algorithm: 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'
    )
  end

  let(:malformed_ciphertext) do
    ciphertext.dup.tap do |obj|
      obj.data[-1] = 0.chr
    end
  end

  let(:encrypted_client) do
    ClientRegistry.instance.new_local_client(
      SpecConfig.instance.addresses,
      SpecConfig.instance.test_options.merge(
        auto_encryption_options: {
          key_vault_namespace: "#{key_vault_db}.#{key_vault_coll}",
          kms_providers: local_kms_providers,
          extra_options: extra_options,
        },
        database: SpecConfig.instance.test_db,
        retry_reads: false,
        max_read_retries: 0
      )
    )
  end

  let(:collection) do
    encrypted_client[collection_name]
  end

  let(:subscriber) { Mrss::EventSubscriber.new }

  let(:command_error) do
    {
      'configureFailPoint' => 'failCommand',
      'mode' => { 'times' => 1 },
      'data' => {
        'errorCode' => 123,
        'failCommands' => [ 'aggregate' ]
      }
    }
  end

  let(:network_error) do
    {
      'configureFailPoint' => 'failCommand',
      'mode' => { 'times' => 1 },
      'data' => {
        'errorCode' => 123,
        'closeConnection' => true,
        'failCommands' => [ 'aggregate' ]
      }
    }
  end

  let(:aggregate_event) do
    subscriber.succeeded_events.detect do |evt|
      evt.command_name == 'aggregate'
    end
  end

  before do
    setup_client[collection_name].drop
    setup_client[collection_name].create
    encrypted_client.subscribe(Mongo::Monitoring::COMMAND, subscriber)
  end

  it 'tests command error' do
    setup_client.use(:admin).command(command_error)
    expect do
      collection.aggregate([]).to_a
    end.to raise_error(Mongo::Error::OperationFailure, /Failing command (?:via|due to) 'failCommand' failpoint/)
    expect(subscriber.failed_events.length).to be 1
  end

  it 'tests network error' do
    setup_client.use(:admin).command(network_error)
    expect do
      collection.aggregate([]).to_a
    end.to raise_error(Mongo::Error::SocketError)
    expect(subscriber.failed_events.length).to be 1
  end

  context 'when decrypt error' do
    before do
      collection.insert_one(encrypted: malformed_ciphertext)
    end

    it 'fails' do
      expect { collection.aggregate([]).to_a }.to raise_error(Mongo::Error::CryptError)
      expect(aggregate_event).not_to be_nil
      expect(
        aggregate_event.reply.dig('cursor', 'firstBatch')&.first&.dig('encrypted')
      ).to be_a(BSON::Binary)
    end
  end

  context 'when decrypt success' do
    before do
      collection.insert_one(encrypted: ciphertext)
    end

    it 'succeeds' do
      expect { collection.aggregate([]).to_a }.not_to raise_error
      expect(aggregate_event).not_to be_nil
      expect(
        aggregate_event.reply.dig('cursor', 'firstBatch')&.first&.dig('encrypted')
      ).to be_a(BSON::Binary)
    end
  end
end
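# ---- Illustrative sketch (separate from the archived spec files above) ----
# The decryption-event specs arm the failCommand fail point before running an
# aggregate. Arming it by hand looks like this; the address is an assumption
# and the target server must be started with test commands enabled.
require 'mongo'

admin = Mongo::Client.new(['localhost:27017'], database: 'admin')
admin.database.command(
  configureFailPoint: 'failCommand',
  mode: { times: 1 },
  data: { errorCode: 123, failCommands: ['aggregate'] }
)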
key_alt_name' do
      data_key_id = client_encryption.create_data_key(
        kms_provider_name,
        data_key_options.merge(key_alt_names: [key_alt_name])
      )

      encrypted = client_encryption.encrypt(
        value,
        {
          key_alt_name: key_alt_name,
          algorithm: 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic',
        }
      )

      decrypted = client_encryption.decrypt(encrypted)
      expect(decrypted).to eq(value)
      expect(decrypted).to be_a_kind_of(value.class)
    end
  end

  context 'value is a string' do
    let(:value) { 'Hello, world!' }

    context 'with AWS KMS provider' do
      include_context 'with AWS kms_providers'
      retry_test
      it_behaves_like 'an explicit encrypter'
    end

    context 'with Azure KMS provider' do
      include_context 'with Azure kms_providers'
      retry_test
      it_behaves_like 'an explicit encrypter'
    end

    context 'with GCP KMS provider' do
      include_context 'with GCP kms_providers'
      retry_test
      it_behaves_like 'an explicit encrypter'
    end

    context 'with KMIP KMS provider' do
      include_context 'with KMIP kms_providers'
      retry_test
      it_behaves_like 'an explicit encrypter'
    end

    context 'with local KMS provider' do
      include_context 'with local kms_providers'
      it_behaves_like 'an explicit encrypter'
    end
  end

  context 'value is an integer' do
    let(:value) { 42 }

    context 'with AWS KMS provider' do
      include_context 'with AWS kms_providers'
      it_behaves_like 'an explicit encrypter'
    end

    context 'with Azure KMS provider' do
      include_context 'with Azure kms_providers'
      it_behaves_like 'an explicit encrypter'
    end

    context 'with GCP KMS provider' do
      include_context 'with GCP kms_providers'
      it_behaves_like 'an explicit encrypter'
    end

    context 'with KMIP KMS provider' do
      include_context 'with KMIP kms_providers'
      it_behaves_like 'an explicit encrypter'
    end

    context 'with local KMS provider' do
      include_context 'with local kms_providers'
      it_behaves_like 'an explicit encrypter'
    end
  end

  context 'value is a symbol' do
    let(:value) { BSON::Symbol::Raw.new(:hello_world) }

    context 'with AWS KMS provider' do
      include_context 'with AWS kms_providers'
      it_behaves_like 'an explicit encrypter'
    end

    context 'with Azure KMS provider' do
      include_context 'with Azure kms_providers'
      it_behaves_like 'an explicit encrypter'
    end

    context 'with GCP KMS provider' do
      include_context 'with GCP kms_providers'
      it_behaves_like 'an explicit encrypter'
    end

    context 'with KMIP KMS provider' do
      include_context 'with KMIP kms_providers'
      it_behaves_like 'an explicit encrypter'
    end

    context 'with local KMS provider' do
      include_context 'with local kms_providers'
      it_behaves_like 'an explicit encrypter'
    end
  end
end
mongo-ruby-driver-2.21.3/spec/integration/client_side_encryption/explicit_queryable_encryption_spec.rb
# frozen_string_literal: true

require 'spec_helper'

# No need to rewrite existing specs to make the examples shorter, until/unless
# we revisit these specs and need to make substantial changes.
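# For reference, the explicit-encryption round trip exercised by the shared
# example in explicit_encryption_spec.rb above reduces to a few
# ClientEncryption calls. This is an illustrative sketch rather than part of
# the spec run; key_vault_client and local_master_key stand in for the
# surrounding spec helpers.
client_encryption = Mongo::ClientEncryption.new(
  key_vault_client,
  key_vault_namespace: 'keyvault.datakeys',
  kms_providers: { local: { key: local_master_key } }
)
key_id = client_encryption.create_data_key('local', key_alt_names: ['my_key'])
ciphertext = client_encryption.encrypt(
  'secret value',
  key_alt_name: 'my_key', # passing key_id: key_id works equally well
  algorithm: 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'
)
client_encryption.decrypt(ciphertext) # => 'secret value'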
# rubocop:disable RSpec/ExampleLength describe 'Explicit Queryable Encryption' do require_libmongocrypt min_server_version '7.0.0-rc0' require_topology :replica_set, :sharded, :load_balanced include_context 'define shared FLE helpers' let(:key1_id) do key1_document['_id'] end let(:encrypted_coll) do 'explicit_encryption' end let(:value) do 'encrypted indexed value' end let(:unindexed_value) do 'encrypted unindexed value' end let(:key_vault_client) do ClientRegistry.instance.new_local_client(SpecConfig.instance.addresses) end let(:client_encryption_opts) do { kms_providers: local_kms_providers, kms_tls_options: kms_tls_options, key_vault_namespace: key_vault_namespace } end let(:client_encryption) do Mongo::ClientEncryption.new( key_vault_client, client_encryption_opts ) end let(:encrypted_client) do ClientRegistry.instance.new_local_client( SpecConfig.instance.addresses, auto_encryption_options: { key_vault_namespace: "#{key_vault_db}.#{key_vault_coll}", kms_providers: local_kms_providers, bypass_query_analysis: true }, database: SpecConfig.instance.test_db ) end before do authorized_client[encrypted_coll].drop(encrypted_fields: encrypted_fields) authorized_client[encrypted_coll].create(encrypted_fields: encrypted_fields) authorized_client.use(key_vault_db)[key_vault_coll].drop authorized_client.use(key_vault_db)[key_vault_coll, write_concern: { w: :majority }].insert_one(key1_document) end after do authorized_client[encrypted_coll].drop(encrypted_fields: encrypted_fields) authorized_client.use(key_vault_db)[key_vault_coll].drop end it 'can insert encrypted indexed and find' do insert_payload = client_encryption.encrypt( value, key_id: key1_id, algorithm: 'Indexed', contention_factor: 0 ) encrypted_client[encrypted_coll].insert_one( 'encryptedIndexed' => insert_payload ) find_payload = client_encryption.encrypt( value, key_id: key1_id, algorithm: 'Indexed', query_type: 'equality', contention_factor: 0 ) find_results = encrypted_client[encrypted_coll] .find('encryptedIndexed' => find_payload) .to_a expect(find_results.size).to eq(1) expect(find_results.first['encryptedIndexed']).to eq(value) end it 'can insert encrypted indexed and find with non-zero contention' do 10.times do insert_payload = client_encryption.encrypt( value, key_id: key1_id, algorithm: 'Indexed', contention_factor: 10 ) encrypted_client[encrypted_coll].insert_one( 'encryptedIndexed' => insert_payload ) end find_payload = client_encryption.encrypt( value, key_id: key1_id, algorithm: 'Indexed', query_type: 'equality', contention_factor: 0 ) find_results = encrypted_client[encrypted_coll] .find('encryptedIndexed' => find_payload) .to_a expect(find_results.size).to be < 10 find_results.each do |doc| expect(doc['encryptedIndexed']).to eq(value) end find_payload2 = client_encryption.encrypt( value, key_id: key1_id, algorithm: 'Indexed', query_type: 'equality', contention_factor: 10 ) find_results2 = encrypted_client[encrypted_coll] .find('encryptedIndexed' => find_payload2) .to_a expect(find_results2.size).to eq(10) find_results2.each do |doc| expect(doc['encryptedIndexed']).to eq(value) end end it 'can insert encrypted unindexed' do insert_payload = client_encryption.encrypt( unindexed_value, key_id: key1_id, algorithm: 'Unindexed' ) encrypted_client[encrypted_coll].insert_one( '_id' => 1, 'encryptedUnindexed' => insert_payload ) find_results = encrypted_client[encrypted_coll].find('_id' => 1).to_a expect(find_results.size).to eq(1) expect(find_results.first['encryptedUnindexed']).to eq(unindexed_value) end it 'can roundtrip 
encrypted indexed' do payload = client_encryption.encrypt( value, key_id: key1_id, algorithm: 'Indexed', contention_factor: 0 ) decrypted_value = client_encryption.decrypt(payload) expect(decrypted_value).to eq(value) end it 'can roundtrip encrypted unindexed' do payload = client_encryption.encrypt( unindexed_value, key_id: key1_id, algorithm: 'Unindexed' ) decrypted_value = client_encryption.decrypt(payload) expect(decrypted_value).to eq(unindexed_value) end end # rubocop:enable RSpec/ExampleLength mongo-ruby-driver-2.21.3/spec/integration/client_side_encryption/external_key_vault_spec.rb000066400000000000000000000101151505113246500324430ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe 'Client-Side Encryption' do describe 'Prose tests: External Key Vault Test' do require_libmongocrypt require_enterprise min_server_fcv '4.2' include_context 'define shared FLE helpers' let(:client) do new_local_client( SpecConfig.instance.addresses, SpecConfig.instance.test_options ) end let(:test_schema_map) do { 'db.coll' => BSON::ExtJSON.parse(File.read('spec/support/crypt/external/external-schema.json')) } end let(:external_key_vault_client) do new_local_client( SpecConfig.instance.addresses, SpecConfig.instance.test_options.merge( user: 'fake-user', password: 'fake-pwd' ) ) end let(:data_key_id) do BSON::Binary.new(Base64.decode64('LOCALAAAAAAAAAAAAAAAAA=='), :uuid) end before do client.use('keyvault')['datakeys'].drop client.use('db')['coll'].drop data_key = BSON::ExtJSON.parse(File.read('spec/support/crypt/external/external-key.json')) client.use('keyvault')['datakeys', write_concern: { w: :majority }].insert_one(data_key) end context 'with default key vault client' do let(:client_encrypted) do new_local_client( SpecConfig.instance.addresses, SpecConfig.instance.test_options.merge( auto_encryption_options: { kms_providers: local_kms_providers, key_vault_namespace: 'keyvault.datakeys', schema_map: test_schema_map, # Spawn mongocryptd on non-default port for sharded cluster tests extra_options: extra_options, }, database: 'db', ) ) end let(:client_encryption) do Mongo::ClientEncryption.new( client, { kms_providers: local_kms_providers, key_vault_namespace: 'keyvault.datakeys', } ) end it 'inserts an encrypted document with client' do result = client_encrypted['coll'].insert_one(encrypted: 'test') expect(result).to be_ok encrypted = client.use('db')['coll'].find.first['encrypted'] expect(encrypted).to be_ciphertext end it 'encrypts a value with client encryption' do encrypted = client_encryption.encrypt( 'test', { key_id: data_key_id, algorithm: 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic', } ) expect(encrypted).to be_ciphertext end end context 'with external key vault client' do let(:client_encrypted) do new_local_client( SpecConfig.instance.addresses, SpecConfig.instance.test_options.merge( auto_encryption_options: { kms_providers: local_kms_providers, key_vault_namespace: 'keyvault.datakeys', schema_map: test_schema_map, key_vault_client: external_key_vault_client, # Spawn mongocryptd on non-default port for sharded cluster tests extra_options: extra_options, }, database: 'db', ) ) end let(:client_encryption) do Mongo::ClientEncryption.new( external_key_vault_client, { kms_providers: local_kms_providers, key_vault_namespace: 'keyvault.datakeys', } ) end it 'raises an authentication exception when auto encrypting' do expect do client_encrypted['coll'].insert_one(encrypted: 'test') end.to raise_error(Mongo::Auth::Unauthorized, /fake-user/) 
end

      it 'raises an authentication exception when explicit encrypting' do
        expect do
          client_encryption.encrypt(
            'test',
            {
              key_id: data_key_id,
              algorithm: 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic',
            }
          )
        end.to raise_error(Mongo::Auth::Unauthorized, /fake-user/)
      end
    end
  end
end
mongo-ruby-driver-2.21.3/spec/integration/client_side_encryption/kms_retry_prose_spec.rb
# frozen_string_literal: true

require 'spec_helper'

def simulate_failure(type, times = 1)
  url = URI.parse("https://localhost:9003/set_failpoint/#{type}")
  data = { count: times }.to_json
  http = Net::HTTP.new(url.host, url.port)
  http.use_ssl = true
  http.verify_mode = OpenSSL::SSL::VERIFY_NONE
  http.ca_file = '.evergreen/x509gen/ca.pem'
  request = Net::HTTP::Post.new(url.path, { 'Content-Type' => 'application/json' })
  request.body = data
  http.request(request)
end

describe 'KMS Retry Prose Spec' do
  require_libmongocrypt
  require_enterprise
  min_server_version '4.2'

  include_context 'define shared FLE helpers'

  let(:key_vault_client) do
    ClientRegistry.instance.new_local_client(SpecConfig.instance.addresses)
  end

  let(:client_encryption) do
    Mongo::ClientEncryption.new(
      key_vault_client,
      kms_tls_options: {
        aws: default_kms_tls_options_for_provider,
        gcp: default_kms_tls_options_for_provider,
        azure: default_kms_tls_options_for_provider,
      },
      key_vault_namespace: key_vault_namespace,
      # For some reason libmongocrypt ignores custom endpoints for Azure and GCP
      # kms_providers: aws_kms_providers.merge(azure_kms_providers).merge(gcp_kms_providers)
      kms_providers: aws_kms_providers
    )
  end

  shared_examples 'kms_retry prose spec' do
    it 'createDataKey and encrypt with TCP retry' do
      simulate_failure('network')
      data_key_id = client_encryption.create_data_key(kms_provider, master_key: master_key)
      simulate_failure('network')
      expect do
        client_encryption.encrypt(123, key_id: data_key_id, algorithm: 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic')
      end.not_to raise_error
    end

    it 'createDataKey and encrypt with HTTP retry' do
      simulate_failure('http')
      data_key_id = client_encryption.create_data_key(kms_provider, master_key: master_key)
      simulate_failure('http')
      expect do
        client_encryption.encrypt(123, key_id: data_key_id, algorithm: 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic')
      end.not_to raise_error
    end

    it 'createDataKey fails after too many retries' do
      simulate_failure('network', 4)
      expect do
        client_encryption.create_data_key(kms_provider, master_key: master_key)
      end.to raise_error(Mongo::Error::KmsError)
    end
  end

  context 'with AWS KMS provider' do
    let(:kms_provider) { 'aws' }
    let(:master_key) do
      {
        region: 'foo',
        key: 'bar',
        endpoint: '127.0.0.1:9003',
      }
    end

    include_examples 'kms_retry prose spec'
  end

  context 'with GCP KMS provider', skip: 'For some reason libmongocrypt ignores custom endpoints for Azure and GCP' do
    let(:kms_provider) { 'gcp' }
    let(:master_key) do
      {
        project_id: 'foo',
        location: 'bar',
        key_ring: 'baz',
        key_name: 'qux',
        endpoint: '127.0.0.1:9003'
      }
    end

    include_examples 'kms_retry prose spec'
  end

  context 'with Azure KMS provider', skip: 'For some reason libmongocrypt ignores custom endpoints for Azure and GCP' do
    let(:kms_provider) { 'azure' }
    let(:master_key) do
      {
        key_vault_endpoint: '127.0.0.1:9003',
        key_name: 'foo',
      }
    end

    include_examples 'kms_retry prose spec'
  end
end
mongo-ruby-driver-2.21.3/spec/integration/client_side_encryption/kms_tls_options_spec.rb
# frozen_string_literal: true
# rubocop:todo
all require 'spec_helper' describe 'Client-Side Encryption' do describe 'Prose tests: KMS TLS Options Tests' do require_libmongocrypt require_enterprise min_server_fcv '4.2' include_context 'define shared FLE helpers' let(:client) do new_local_client( SpecConfig.instance.addresses, SpecConfig.instance.test_options ) end let(:client_encryption_no_client_cert) do Mongo::ClientEncryption.new( client, { kms_providers: { aws: { access_key_id: SpecConfig.instance.fle_aws_key, secret_access_key: SpecConfig.instance.fle_aws_secret }, azure: { tenant_id: SpecConfig.instance.fle_azure_tenant_id, client_id: SpecConfig.instance.fle_azure_client_id, client_secret: SpecConfig.instance.fle_azure_client_secret, identity_platform_endpoint: "127.0.0.1:8002" }, gcp: { email: SpecConfig.instance.fle_gcp_email, private_key: SpecConfig.instance.fle_gcp_private_key, endpoint: "127.0.0.1:8002" }, kmip: { endpoint: "127.0.0.1:5698" } }, kms_tls_options: { aws: { ssl_ca_cert: SpecConfig.instance.fle_kmip_tls_ca_file }, azure: { ssl_ca_cert: SpecConfig.instance.fle_kmip_tls_ca_file }, gcp: { ssl_ca_cert: SpecConfig.instance.fle_kmip_tls_ca_file }, kmip: { ssl_ca_cert: SpecConfig.instance.fle_kmip_tls_ca_file } }, key_vault_namespace: 'keyvault.datakeys', }, ) end let(:client_encryption_with_tls) do Mongo::ClientEncryption.new( client, { kms_providers: { aws: { access_key_id: SpecConfig.instance.fle_aws_key, secret_access_key: SpecConfig.instance.fle_aws_secret }, azure: { tenant_id: SpecConfig.instance.fle_azure_tenant_id, client_id: SpecConfig.instance.fle_azure_client_id, client_secret: SpecConfig.instance.fle_azure_client_secret, identity_platform_endpoint: "127.0.0.1:8002" }, gcp: { email: SpecConfig.instance.fle_gcp_email, private_key: SpecConfig.instance.fle_gcp_private_key, endpoint: "127.0.0.1:8002" }, kmip: { endpoint: "127.0.0.1:5698" } }, kms_tls_options: { aws: { ssl_ca_cert: SpecConfig.instance.fle_kmip_tls_ca_file, ssl_cert: SpecConfig.instance.fle_kmip_tls_certificate_key_file, ssl_key: SpecConfig.instance.fle_kmip_tls_certificate_key_file, }, azure: { ssl_ca_cert: SpecConfig.instance.fle_kmip_tls_ca_file, ssl_cert: SpecConfig.instance.fle_kmip_tls_certificate_key_file, ssl_key: SpecConfig.instance.fle_kmip_tls_certificate_key_file, }, gcp: { ssl_ca_cert: SpecConfig.instance.fle_kmip_tls_ca_file, ssl_cert: SpecConfig.instance.fle_kmip_tls_certificate_key_file, ssl_key: SpecConfig.instance.fle_kmip_tls_certificate_key_file, }, kmip: { ssl_ca_cert: SpecConfig.instance.fle_kmip_tls_ca_file, ssl_cert: SpecConfig.instance.fle_kmip_tls_certificate_key_file, ssl_key: SpecConfig.instance.fle_kmip_tls_certificate_key_file, } }, key_vault_namespace: 'keyvault.datakeys', }, ) end let(:client_encryption_expired) do Mongo::ClientEncryption.new( client, { kms_providers: { aws: { access_key_id: SpecConfig.instance.fle_aws_key, secret_access_key: SpecConfig.instance.fle_aws_secret }, azure: { tenant_id: SpecConfig.instance.fle_azure_tenant_id, client_id: SpecConfig.instance.fle_azure_client_id, client_secret: SpecConfig.instance.fle_azure_client_secret, identity_platform_endpoint: "127.0.0.1:8000" }, gcp: { email: SpecConfig.instance.fle_gcp_email, private_key: SpecConfig.instance.fle_gcp_private_key, endpoint: "127.0.0.1:8000" }, kmip: { endpoint: "127.0.0.1:8000" } }, kms_tls_options: { aws: { ssl_ca_cert: SpecConfig.instance.fle_kmip_tls_ca_file }, azure: { ssl_ca_cert: SpecConfig.instance.fle_kmip_tls_ca_file }, gcp: { ssl_ca_cert: SpecConfig.instance.fle_kmip_tls_ca_file }, kmip: { ssl_ca_cert: 
SpecConfig.instance.fle_kmip_tls_ca_file
            }
          },
          key_vault_namespace: 'keyvault.datakeys',
        },
      )
    end

    let(:client_encryption_invalid_hostname) do
      Mongo::ClientEncryption.new(
        client,
        {
          kms_providers: {
            aws: {
              access_key_id: SpecConfig.instance.fle_aws_key,
              secret_access_key: SpecConfig.instance.fle_aws_secret
            },
            azure: {
              tenant_id: SpecConfig.instance.fle_azure_tenant_id,
              client_id: SpecConfig.instance.fle_azure_client_id,
              client_secret: SpecConfig.instance.fle_azure_client_secret,
              identity_platform_endpoint: "127.0.0.1:8001"
            },
            gcp: {
              email: SpecConfig.instance.fle_gcp_email,
              private_key: SpecConfig.instance.fle_gcp_private_key,
              endpoint: "127.0.0.1:8001"
            },
            kmip: {
              endpoint: "127.0.0.1:8001"
            }
          },
          kms_tls_options: {
            aws: { ssl_ca_cert: SpecConfig.instance.fle_kmip_tls_ca_file },
            azure: { ssl_ca_cert: SpecConfig.instance.fle_kmip_tls_ca_file },
            gcp: { ssl_ca_cert: SpecConfig.instance.fle_kmip_tls_ca_file },
            kmip: { ssl_ca_cert: SpecConfig.instance.fle_kmip_tls_ca_file }
          },
          key_vault_namespace: 'keyvault.datakeys',
        },
      )
    end

    # We do not use shared examples for AWS because of the way we pass the endpoint.
    context 'AWS' do
      let(:master_key_template) do
        {
          region: "us-east-1",
          key: "arn:aws:kms:us-east-1:579766882180:key/89fcc2c4-08b0-4bd9-9f25-e30687b580d0",
        }
      end

      context 'with no client certificate' do
        it 'TLS handshake failed' do
          expect do
            client_encryption_no_client_cert.create_data_key(
              'aws',
              {
                master_key: master_key_template.merge({endpoint: "127.0.0.1:8002"})
              }
            )
          end.to raise_error(Mongo::Error::KmsError, /(certificate_required|SocketError|ECONNRESET)/)
        end
      end

      context 'with valid certificate' do
        it 'TLS handshake passes' do
          expect do
            client_encryption_with_tls.create_data_key(
              'aws',
              {
                master_key: master_key_template.merge({endpoint: "127.0.0.1:8002"})
              }
            )
          end.to raise_error(Mongo::Error::KmsError, /libmongocrypt error code/)
        end
      end

      context 'with expired server certificate' do
        let(:error_regex) do
          if BSON::Environment.jruby?
            /certificate verify failed/
          else
            /certificate has expired/
          end
        end

        it 'TLS handshake failed' do
          expect do
            client_encryption_expired.create_data_key(
              'aws',
              {
                master_key: master_key_template.merge({endpoint: "127.0.0.1:8000"})
              }
            )
          end.to raise_error(Mongo::Error::KmsError, error_regex)
        end
      end

      context 'with server certificate with invalid hostname' do
        let(:error_regex) do
          if BSON::Environment.jruby?
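            # JRuby performs TLS through the JVM's SSL engine rather than
            # OpenSSL, so the error text it surfaces differs from MRI's;
            # the expected message is therefore selected per interpreter
            # here and in the analogous contexts that follow.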
/TLS handshake failed due to a hostname mismatch/ else /certificate verify failed/ end end it 'TLS handshake failed' do expect do client_encryption_invalid_hostname.create_data_key( 'aws', { master_key: master_key_template.merge({endpoint: "127.0.0.1:8001"}) } ) end.to raise_error(Mongo::Error::KmsError, error_regex) end end end shared_examples 'it respect KMS TLS options' do context 'with no client certificate' do it 'TLS handshake failed' do expect do client_encryption_no_client_cert.create_data_key( kms_provider, { master_key: master_key } ) end.to raise_error(Mongo::Error::KmsError, /(certificate_required|SocketError|ECONNRESET)/) end end context 'with valid certificate' do it 'TLS handshake passes' do if should_raise_with_tls expect do client_encryption_with_tls.create_data_key( kms_provider, { master_key: master_key } ) end.to raise_error(Mongo::Error::KmsError, /libmongocrypt error code/) else expect do client_encryption_with_tls.create_data_key( kms_provider, { master_key: master_key } ) end.not_to raise_error end end it 'raises KmsError directly without wrapping CryptError' do if should_raise_with_tls begin client_encryption_with_tls.create_data_key( kms_provider, { master_key: master_key } ) rescue Mongo::Error::KmsError => exc exc.message.should =~ /Error when connecting to KMS provider|Empty KMS response/ exc.message.should =~ /libmongocrypt error code/ exc.message.should_not =~ /CryptError/ else fail 'Expected to raise KmsError' end end end end context 'with expired server certificate' do let(:error_regex) do if BSON::Environment.jruby? /certificate verify failed/ else /certificate has expired/ end end it 'TLS handshake failed' do expect do client_encryption_expired.create_data_key( kms_provider, { master_key: master_key } ) end.to raise_error(Mongo::Error::KmsError, error_regex) end end context 'with server certificate with invalid hostname' do let(:error_regex) do if BSON::Environment.jruby? 
/TLS handshake failed due to a hostname mismatch/ else /certificate verify failed/ end end it 'TLS handshake failed' do expect do client_encryption_invalid_hostname.create_data_key( kms_provider, { master_key: master_key } ) end.to raise_error(Mongo::Error::KmsError, error_regex) end end end context 'Azure' do let(:kms_provider) do 'azure' end let(:master_key) do { key_vault_endpoint: 'doesnotexist.local', key_name: 'foo' } end let(:should_raise_with_tls) do true end it_behaves_like 'it respect KMS TLS options' end context 'GCP' do let(:kms_provider) do 'gcp' end let(:master_key) do { project_id: 'foo', location: 'bar', key_ring: 'baz', key_name: 'foo' } end let(:should_raise_with_tls) do true end it_behaves_like 'it respect KMS TLS options' end context 'KMIP' do let(:kms_provider) do 'kmip' end let(:master_key) do {} end let(:should_raise_with_tls) do false end it_behaves_like 'it respect KMS TLS options' end end end mongo-ruby-driver-2.21.3/spec/integration/client_side_encryption/kms_tls_spec.rb000066400000000000000000000046561505113246500302270ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe 'Client-Side Encryption' do describe 'Prose tests: KMS TLS Tests' do require_libmongocrypt require_enterprise min_server_fcv '4.2' include_context 'define shared FLE helpers' let(:client) do new_local_client( SpecConfig.instance.addresses, SpecConfig.instance.test_options ) end let(:client_encryption) do Mongo::ClientEncryption.new( client, { kms_providers: aws_kms_providers, kms_tls_options: { aws: default_kms_tls_options_for_provider }, key_vault_namespace: 'keyvault.datakeys', }, ) end context 'invalid KMS certificate' do it 'raises an error when creating data key' do expect do client_encryption.create_data_key( 'aws', { master_key: { region: "us-east-1", key: "arn:aws:kms:us-east-1:579766882180:key/89fcc2c4-08b0-4bd9-9f25-e30687b580d0", endpoint: "127.0.0.1:8000", } } ) end.to raise_error(Mongo::Error::KmsError, /certificate verify failed/) end end context 'Invalid Hostname in KMS Certificate' do context 'MRI' do require_mri it 'raises an error when creating data key' do expect do client_encryption.create_data_key( 'aws', { master_key: { region: "us-east-1", key: "arn:aws:kms:us-east-1:579766882180:key/89fcc2c4-08b0-4bd9-9f25-e30687b580d0", endpoint: "127.0.0.1:8001", } } ) end.to raise_error(Mongo::Error::KmsError, /certificate verify failed/) end end context 'JRuby' do require_jruby it 'raises an error when creating data key' do expect do client_encryption.create_data_key( 'aws', { master_key: { region: "us-east-1", key: "arn:aws:kms:us-east-1:579766882180:key/89fcc2c4-08b0-4bd9-9f25-e30687b580d0", endpoint: "127.0.0.1:8001", } } ) end.to raise_error(Mongo::Error::KmsError, /hostname mismatch/) end end end end end mongo-ruby-driver-2.21.3/spec/integration/client_side_encryption/mongocryptd_prose_spec.rb000066400000000000000000000050511505113246500323160ustar00rootroot00000000000000# frozen_string_literal: true require 'spec_helper' describe 'mongocryptd prose tests' do require_libmongocrypt require_enterprise min_server_version '7.0.0-rc0' include_context 'define shared FLE helpers' include_context 'with local kms_providers' let(:mongocryptd_uri) { 'mongodb://localhost:27777' } let(:encryption_client) do new_local_client( SpecConfig.instance.addresses, SpecConfig.instance.test_options.merge( auto_encryption_options: { kms_providers: kms_providers, kms_tls_options: kms_tls_options, key_vault_namespace: key_vault_namespace, schema_map: 
{ 'auto_encryption.users' => schema_map }, extra_options: extra_options, }, database: 'auto_encryption' ) ) end before do skip 'This test requires crypt shared library' unless SpecConfig.instance.crypt_shared_lib_path key_vault_collection.drop key_vault_collection.insert_one(data_key) encryption_client['users'].drop end context 'when shared library is loaded' do let(:extra_options) do { crypt_shared_lib_path: SpecConfig.instance.crypt_shared_lib_path, mongocryptd_uri: mongocryptd_uri } end let!(:connect_attempt) do Class.new do def lock @lock ||= Mutex.new end def done? lock.synchronize do !!@done end end def done! lock.synchronize do @done = true end end end.new end let!(:listener) do Thread.new do TCPServer.new(27_777).accept connect_attempt.done! end end after do listener.exit end it 'does not try to connect to mongocryptd' do encryption_client[:users].insert_one(ssn: ssn) expect(connect_attempt.done?).to be false end end context 'when shared library is required' do let(:extra_options) do { crypt_shared_lib_path: SpecConfig.instance.crypt_shared_lib_path, crypt_shared_lib_required: true, mongocryptd_uri: mongocryptd_uri, mongocryptd_spawn_args: [ '--pidfilepath=bypass-spawning-mongocryptd.pid', '--port=27777' ] } end let(:mongocryptd_client) { new_local_client(mongocryptd_uri) } it 'does not spawn mongocryptd' do expect { encryption_client[:users].insert_one(ssn: ssn) } .not_to raise_error expect { mongocryptd_client.database.command(hello: 1) } .to raise_error(Mongo::Error::NoServerAvailable) end end end mongo-ruby-driver-2.21.3/spec/integration/client_side_encryption/on_demand_aws_credentials_spec.rb000066400000000000000000000024361505113246500337200ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe 'On-demand AWS Credentials' do require_libmongocrypt include_context 'define shared FLE helpers' include_context 'with AWS kms_providers' let(:client) { ClientRegistry.instance.new_local_client(SpecConfig.instance.addresses) } let(:client_encryption_opts) do { kms_providers: { aws: {} }, kms_tls_options: kms_tls_options, key_vault_namespace: key_vault_namespace } end let(:client_encryption) do Mongo::ClientEncryption.new( client, client_encryption_opts ) end context 'when credentials are available' do it 'authenticates successfully' do expect do client_encryption.create_data_key('aws', data_key_options) end.not_to raise_error end end context 'when credentials are not available' do it 'raises an error' do expect_any_instance_of( Mongo::Auth::Aws::CredentialsRetriever ).to receive(:credentials).with(kind_of(Mongo::CsotTimeoutHolder)).once.and_raise( Mongo::Auth::Aws::CredentialsNotFound ) expect do client_encryption.create_data_key('aws', data_key_options) end.to raise_error(Mongo::Error::CryptError, /Could not locate AWS credentials/) end end end mongo-ruby-driver-2.21.3/spec/integration/client_side_encryption/on_demand_azure_credentials_spec.rb000066400000000000000000000023531505113246500342520ustar00rootroot00000000000000# frozen_string_literal: true require 'spec_helper' describe 'On-demand Azure Credentials' do require_libmongocrypt include_context 'define shared FLE helpers' include_context 'with Azure kms_providers' let(:client) { ClientRegistry.instance.new_local_client(SpecConfig.instance.addresses) } let(:client_encryption_opts) do { kms_providers: { azure: {} }, kms_tls_options: kms_tls_options, key_vault_namespace: key_vault_namespace } end let(:client_encryption) do Mongo::ClientEncryption.new( client, 
client_encryption_opts
    )
  end

  context 'when credentials are available' do
    it 'authenticates successfully' do
      skip 'This test should only be run inside Azure Cloud' unless ENV['TEST_FLE_AZURE_AUTO']
      expect do
        client_encryption.create_data_key('azure', data_key_options)
      end.not_to raise_error
    end
  end

  context 'when credentials are not available' do
    it 'raises an error' do
      skip 'This test should NOT be run inside Azure Cloud' if ENV['TEST_FLE_AZURE_AUTO']
      expect do
        client_encryption.create_data_key('azure', data_key_options)
      end.to raise_error(Mongo::Error::CryptError, /Azure credentials/)
    end
  end
end
mongo-ruby-driver-2.21.3/spec/integration/client_side_encryption/on_demand_gcp_credentials_spec.rb
# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe 'On-demand GCP Credentials' do
  require_libmongocrypt
  include_context 'define shared FLE helpers'
  include_context 'with GCP kms_providers'

  let(:client) { ClientRegistry.instance.new_local_client(SpecConfig.instance.addresses) }

  let(:client_encryption_opts) do
    {
      kms_providers: { gcp: {} },
      kms_tls_options: kms_tls_options,
      key_vault_namespace: key_vault_namespace
    }
  end

  let(:client_encryption) do
    Mongo::ClientEncryption.new(
      client,
      client_encryption_opts
    )
  end

  context 'when credentials are available' do
    it 'authenticates successfully' do
      skip 'This test should only be run inside Google Cloud' unless ENV['TEST_FLE_GCP_AUTO']
      expect do
        client_encryption.create_data_key('gcp', data_key_options)
      end.not_to raise_error
    end
  end

  context 'when credentials are not available' do
    it 'raises an error' do
      skip 'This test should NOT be run inside Google Cloud' if ENV['TEST_FLE_GCP_AUTO']
      expect do
        client_encryption.create_data_key('gcp', data_key_options)
      end.to raise_error(Mongo::Error::CryptError, /GCP credentials/)
    end
  end
end
mongo-ruby-driver-2.21.3/spec/integration/client_side_encryption/queryable_encryption_examples_spec.rb
# frozen_string_literal: true

require 'spec_helper'

# No need to rewrite existing specs to make the examples shorter, until/unless
# we revisit these specs and need to make substantial changes.

# rubocop:disable RSpec/ExampleLength
describe 'Queryable encryption examples' do
  require_libmongocrypt
  min_server_version '7.0.0-rc0'
  require_topology :replica_set, :sharded, :load_balanced
  require_enterprise

  include_context 'define shared FLE helpers'

  it 'uses queryable encryption' do
    # Drop data from prior test runs.
    authorized_client.use('docs_examples').database.drop
    authorized_client.use('keyvault')['datakeys'].drop

    # Create two data keys.
    # Note for docs team: remove the test_options argument when copying
    # this example into public documentation.
    key_vault_client = ClientRegistry.instance.new_local_client(
      SpecConfig.instance.addresses,
      SpecConfig.instance.test_options
    )
    client_encryption = Mongo::ClientEncryption.new(
      key_vault_client,
      key_vault_namespace: 'keyvault.datakeys',
      kms_providers: { local: { key: local_master_key } }
    )
    data_key_1_id = client_encryption.create_data_key('local')
    data_key_2_id = client_encryption.create_data_key('local')

    # Create an encryptedFieldsMap.
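    # Each entry is keyed by "<database>.<collection>" and lists the fields
    # to encrypt: `path` names the document field, `bsonType` its type, and
    # `keyId` the data key protecting it. A `queries` clause makes the field
    # queryable (equality-searchable here); fields without one are stored as
    # unindexed encrypted values.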
encrypted_fields_map = { 'docs_examples.encrypted' => { fields: [ { path: 'encrypted_indexed', bsonType: 'string', keyId: data_key_1_id, queries: { queryType: 'equality' } }, { path: 'encrypted_unindexed', bsonType: 'string', keyId: data_key_2_id, } ] } } # Create client with automatic queryable encryption enabled. # Note for docs team: remove the test_options argument when copying # this example into public documentation. encrypted_client = ClientRegistry.instance.new_local_client( SpecConfig.instance.addresses, SpecConfig.instance.test_options.merge( auto_encryption_options: { key_vault_namespace: 'keyvault.datakeys', kms_providers: { local: { key: local_master_key } }, encrypted_fields_map: encrypted_fields_map, # Spawn mongocryptd on non-default port for sharded cluster tests # Note for docs team: remove the extra_options argument when copying # this example into public documentation. extra_options: extra_options, }, database: 'docs_examples' ) ) # Create collection with queryable encryption enabled. encrypted_client['encrypted'].create # Auto encrypt an insert and find. encrypted_client['encrypted'].insert_one( _id: 1, encrypted_indexed: 'indexed_value', encrypted_unindexed: 'unindexed_value' ) find_results = encrypted_client['encrypted'].find( encrypted_indexed: 'indexed_value' ).to_a expect(find_results.size).to eq(1) expect(find_results.first[:encrypted_indexed]).to eq('indexed_value') expect(find_results.first[:encrypted_unindexed]).to eq('unindexed_value') # Find documents without decryption. find_results = authorized_client .use('docs_examples')['encrypted'] .find(_id: 1) .to_a expect(find_results.size).to eq(1) expect(find_results.first[:encrypted_indexed]).to be_a(BSON::Binary) expect(find_results.first[:encrypted_unindexed]).to be_a(BSON::Binary) # Cleanup authorized_client.use('docs_examples').database.drop authorized_client.use('keyvault')['datakeys'].drop end end # rubocop:enable RSpec/ExampleLength range_explicit_encryption_prose_spec.rb000066400000000000000000000341621505113246500351460ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/integration/client_side_encryption# frozen_string_literal: true require 'spec_helper' # Unnecessary to rewrite a legacy test to use shorter examples; this can # be revisited if these tests ever need to be significantly modified. 
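# The contexts below use helpers such as range_encrypted_fields_int that are
# defined in the shared FLE spec support files rather than in this file. As a
# rough, hypothetical illustration only (field name, key id, and bounds are
# stand-ins), a range-queryable encryptedFields document is shaped like this:
sketch_key_id = BSON::Binary.new("\x00" * 16, :uuid) # placeholder data key id
range_encrypted_fields_sketch = {
  fields: [
    {
      path: 'encryptedInt',
      bsonType: 'int',
      keyId: sketch_key_id,
      queries: { queryType: 'range', min: 0, max: 200, sparsity: 1 }
    }
  ]
}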
# rubocop:disable RSpec/ExampleLength describe 'Range Explicit Encryption' do min_server_version '8.0.0-rc18' require_libmongocrypt include_context 'define shared FLE helpers' let(:key1_id) do key1_document['_id'] end let(:key_vault_client) do ClientRegistry.instance.new_local_client(SpecConfig.instance.addresses) end let(:client_encryption) do Mongo::ClientEncryption.new( key_vault_client, kms_tls_options: kms_tls_options, key_vault_namespace: key_vault_namespace, kms_providers: local_kms_providers ) end let(:encrypted_client) do ClientRegistry.instance.new_local_client( SpecConfig.instance.addresses, auto_encryption_options: { key_vault_namespace: key_vault_namespace, kms_providers: local_kms_providers, bypass_query_analysis: true }, database: SpecConfig.instance.test_db ) end before do authorized_client['explicit_encryption'].drop(encrypted_fields: encrypted_fields) authorized_client['explicit_encryption'].create(encrypted_fields: encrypted_fields) authorized_client.use(key_vault_db)[key_vault_coll].drop authorized_client.use(key_vault_db)[key_vault_coll, write_concern: { w: :majority }].insert_one(key1_document) end shared_examples 'common cases' do it 'can decrypt a payload' do value = value_converter.call(6) insert_payload = client_encryption.encrypt( value, { key_id: key1_id, algorithm: 'Range', contention_factor: 0, range_opts: range_opts } ) decrypted_value = client_encryption.decrypt(insert_payload) expect(value).to eq(decrypted_value) end it 'can find encrypted range and return the maximum' do expr = { '$and': [ { "encrypted#{type}" => { '$gte': value_converter.call(6) } }, { "encrypted#{type}" => { '$lte': value_converter.call(200) } } ] } find_payload = client_encryption.encrypt_expression( expr, { key_id: key1_id, algorithm: 'Range', query_type: 'range', contention_factor: 0, range_opts: range_opts } ) results = encrypted_client['explicit_encryption'].find(find_payload, sort: { _id: 1 }).to_a expect(results.size).to eq(3) value_converter.call([ 6, 30, 200 ]).each_with_index do |value, idx| expect(results[idx]["encrypted#{type}"]).to eq(value) end end it 'can find encrypted range and return the minimum' do expr = { '$and': [ { "encrypted#{type}" => { '$gte': value_converter.call(0) } }, { "encrypted#{type}" => { '$lte': value_converter.call(6) } } ] } find_payload = client_encryption.encrypt_expression( expr, { key_id: key1_id, algorithm: 'Range', query_type: 'range', contention_factor: 0, range_opts: range_opts } ) results = encrypted_client['explicit_encryption'].find(find_payload, sort: { _id: 1 }).to_a expect(results.size).to eq(2) value_converter.call([ 0, 6 ]).each_with_index do |value, idx| expect(results[idx]["encrypted#{type}"]).to eq(value) end end it 'can find encrypted range with an open range query' do expr = { '$and': [ { "encrypted#{type}" => { '$gt': value_converter.call(30) } } ] } find_payload = client_encryption.encrypt_expression( expr, { key_id: key1_id, algorithm: 'Range', query_type: 'range', contention_factor: 0, range_opts: range_opts } ) results = encrypted_client['explicit_encryption'].find(find_payload, sort: { _id: 1 }).to_a expect(results.size).to eq(1) expect(results.first["encrypted#{type}"]).to eq(value_converter.call(200)) end it 'can run an aggregation expression inside $expr' do expr = { '$and': [ { '$lt': [ "$encrypted#{type}", value_converter.call(30) ] } ] } find_payload = client_encryption.encrypt_expression( expr, { key_id: key1_id, algorithm: 'Range', query_type: 'range', contention_factor: 0, range_opts: range_opts } ) results = 
encrypted_client['explicit_encryption'].find( { '$expr' => find_payload }, sort: { _id: 1 } ).to_a expect(results.size).to eq(2) value_converter.call([ 0, 6 ]).each_with_index do |value, idx| expect(results[idx]["encrypted#{type}"]).to eq(value) end end it 'encrypting a document greater than the maximum errors' do skip if %w[ DoubleNoPrecision DecimalNoPrecision ].include?(type) expect do client_encryption.encrypt( value_converter.call(201), { key_id: key1_id, algorithm: 'Range', contention_factor: 0, range_opts: range_opts } ) end.to raise_error(Mongo::Error::CryptError, /less than or equal to the maximum value/) end it 'encrypting a document of a different type errors' do skip if %w[ DoubleNoPrecision DecimalNoPrecision ].include?(type) value = if type == 'Int' 6.0 else 6 end expect do client_encryption.encrypt( value, { key_id: key1_id, algorithm: 'Range', contention_factor: 0, range_opts: range_opts } ) end.to raise_error(Mongo::Error::CryptError, /expected matching 'min' and value type/) end it 'setting precision errors if the type is not a double' do skip if %w[ DoublePrecision DoubleNoPrecision DecimalPrecision DecimalNoPrecision ].include?(type) expect do client_encryption.encrypt( value_converter.call(6), { key_id: key1_id, algorithm: 'Range', contention_factor: 0, range_opts: { min: value_converter.call(0), max: value_converter.call(200), sparsity: 1, precision: 2 } } ) end.to raise_error(Mongo::Error::CryptError, /precision/) end end context 'when Int' do let(:type) do 'Int' end let(:value_converter) do proc do |value| if value.is_a?(Array) value.map(&:to_i) else value.to_i end end end let(:encrypted_fields) do range_encrypted_fields_int end let(:range_opts) do { min: BSON::Int32.new(0), max: BSON::Int32.new(200), sparsity: 1 } end before do [ 0, 6, 30, 200 ].each_with_index do |num, idx| insert_payload = client_encryption.encrypt( num, key_id: key1_id, algorithm: 'Range', contention_factor: 0, range_opts: range_opts ) encrypted_client['explicit_encryption'].insert_one( _id: idx, "encrypted#{type}" => insert_payload ) end end include_examples 'common cases' end context 'when Long' do let(:type) do 'Long' end let(:value_converter) do proc do |value| if value.is_a?(Array) value.map { |i| BSON::Int64.new(i) } else BSON::Int64.new(value) end end end let(:encrypted_fields) do range_encrypted_fields_long end let(:range_opts) do { min: BSON::Int64.new(0), max: BSON::Int64.new(200), sparsity: 1 } end before do [ 0, 6, 30, 200 ].each_with_index do |num, idx| insert_payload = client_encryption.encrypt( BSON::Int64.new(num), key_id: key1_id, algorithm: 'Range', contention_factor: 0, range_opts: range_opts ) encrypted_client['explicit_encryption'].insert_one( _id: idx, "encrypted#{type}" => insert_payload ) end end include_examples 'common cases' end context 'when DoublePrecision' do let(:type) do 'DoublePrecision' end let(:value_converter) do proc do |value| if value.is_a?(Array) value.map(&:to_f) else value.to_f end end end let(:encrypted_fields) do range_encrypted_fields_doubleprecision end let(:range_opts) do { min: 0.0, max: 200.0, sparsity: 1, precision: 2 } end before do [ 0.0, 6.0, 30.0, 200.0 ].each_with_index do |num, idx| insert_payload = client_encryption.encrypt( num, key_id: key1_id, algorithm: 'Range', contention_factor: 0, range_opts: range_opts ) encrypted_client['explicit_encryption'].insert_one( _id: idx, "encrypted#{type}" => insert_payload ) end end include_examples 'common cases' end context 'when DoubleNoPrecision' do let(:type) do 'DoubleNoPrecision' end 
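    # "NoPrecision" variants set no min/max/precision: the whole IEEE-754
    # double (or Decimal128) domain is used, so range_opts below only
    # specifies sparsity.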
let(:value_converter) do proc do |value| if value.is_a?(Array) value.map(&:to_f) else value.to_f end end end let(:encrypted_fields) do range_encrypted_fields_doublenoprecision end let(:range_opts) do { sparsity: 1 } end before do [ 0.0, 6.0, 30.0, 200.0 ].each_with_index do |num, idx| insert_payload = client_encryption.encrypt( num, key_id: key1_id, algorithm: 'Range', contention_factor: 0, range_opts: range_opts ) encrypted_client['explicit_encryption'].insert_one( _id: idx, "encrypted#{type}" => insert_payload ) end end include_examples 'common cases' end context 'when Date' do let(:type) do 'Date' end let(:value_converter) do proc do |value| if value.is_a?(Array) value.map { |i| Time.new(i) } else Time.new(value) end end end let(:encrypted_fields) do range_encrypted_fields_date end let(:range_opts) do { min: Time.new(0), max: Time.new(200), sparsity: 1 } end before do [ 0, 6, 30, 200 ].each_with_index do |num, idx| insert_payload = client_encryption.encrypt( Time.new(num), key_id: key1_id, algorithm: 'Range', contention_factor: 0, range_opts: range_opts ) encrypted_client['explicit_encryption'].insert_one( _id: idx, "encrypted#{type}" => insert_payload ) end end include_examples 'common cases' end context 'when DecimalPrecision' do require_topology :replica_set let(:type) do 'DecimalPrecision' end let(:value_converter) do proc do |value| if value.is_a?(Array) value.map { |val| BSON::Decimal128.new(val.to_s) } else BSON::Decimal128.new(value.to_s) end end end let(:encrypted_fields) do range_encrypted_fields_decimalprecision end let(:range_opts) do { min: BSON::Decimal128.new('0.0'), max: BSON::Decimal128.new('200.0'), sparsity: 1, precision: 2 } end before do %w[ 0 6 30 200 ].each_with_index do |num, idx| insert_payload = client_encryption.encrypt( BSON::Decimal128.new(num), key_id: key1_id, algorithm: 'Range', contention_factor: 0, range_opts: range_opts ) encrypted_client['explicit_encryption'].insert_one( _id: idx, "encrypted#{type}" => insert_payload ) end end include_examples 'common cases' end context 'when DecimalNoPrecision' do require_topology :replica_set let(:type) do 'DecimalNoPrecision' end let(:value_converter) do proc do |value| if value.is_a?(Array) value.map { |val| BSON::Decimal128.new(val.to_s) } else BSON::Decimal128.new(value.to_s) end end end let(:encrypted_fields) do range_encrypted_fields_decimalnoprecision end let(:range_opts) do { sparsity: 1 } end before do %w[ 0 6 30 200 ].each_with_index do |num, idx| insert_payload = client_encryption.encrypt( BSON::Decimal128.new(num), key_id: key1_id, algorithm: 'Range', contention_factor: 0, range_opts: range_opts ) encrypted_client['explicit_encryption'].insert_one( _id: idx, "encrypted#{type}" => insert_payload ) end end include_examples 'common cases' end describe 'Range Explicit Encryption applies defaults' do let(:payload_defaults) do client_encryption.encrypt( 123, key_id: key1_id, algorithm: 'Range', contention_factor: 0, range_opts: { min: 0, max: 1000 } ) end it 'uses libmongocrypt default' do payload = client_encryption.encrypt( 123, key_id: key1_id, algorithm: 'Range', contention_factor: 0, range_opts: { min: 0, max: 1000, sparsity: 2, trim_factor: 6 } ) expect(payload.to_s.size).to eq(payload_defaults.to_s.size) end it 'accepts trim_factor 0' do payload = client_encryption.encrypt( 123, key_id: key1_id, algorithm: 'Range', contention_factor: 0, range_opts: { min: 0, max: 1000, trim_factor: 0 } ) expect(payload.to_s.size).to eq(payload_defaults.to_s.size) end end end # rubocop:enable RSpec/ExampleLength 
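# Condensed, illustrative recap of the Range workflow exercised above
# (client_encryption, encrypted_client and key1_id are assumed to be set up
# as in the preceding spec, and 'encryptedInt' is a hypothetical field):
range_opts = { min: BSON::Int32.new(0), max: BSON::Int32.new(200), sparsity: 1 }
insert_payload = client_encryption.encrypt(
  6,
  key_id: key1_id, algorithm: 'Range', contention_factor: 0,
  range_opts: range_opts
)
encrypted_client['explicit_encryption'].insert_one('encryptedInt' => insert_payload)
find_payload = client_encryption.encrypt_expression(
  { '$and' => [ { 'encryptedInt' => { '$gte' => 0 } },
                { 'encryptedInt' => { '$lte' => 30 } } ] },
  key_id: key1_id, algorithm: 'Range', query_type: 'range',
  contention_factor: 0, range_opts: range_opts
)
encrypted_client['explicit_encryption'].find(find_payload).to_a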
mongo-ruby-driver-2.21.3/spec/integration/client_side_encryption/rewrap_prose_spec.rb000066400000000000000000000063451505113246500312600ustar00rootroot00000000000000# frozen_string_literal: true require 'spec_helper' describe 'RewrapManyDataKey' do require_libmongocrypt min_server_version '7.0.0-rc0' require_topology :replica_set, :sharded, :load_balanced include_context 'define shared FLE helpers' let(:kms_providers) do {}.merge(aws_kms_providers) .merge(azure_kms_providers) .merge(gcp_kms_providers) .merge(kmip_kms_providers) .merge(local_kms_providers) end let(:master_keys) do { aws: { region: 'us-east-1', key: 'arn:aws:kms:us-east-1:579766882180:key/89fcc2c4-08b0-4bd9-9f25-e30687b580d0', }, azure: { key_vault_endpoint: 'key-vault-csfle.vault.azure.net', key_name: 'key-name-csfle', }, gcp: { project_id: 'devprod-drivers', location: 'global', key_ring: 'key-ring-csfle', key_name: 'key-name-csfle', }, kmip: {} } end before do authorized_client.use('keyvault')['datakeys'].drop end %i[ aws azure gcp kmip local ].each do |src_provider| %i[ aws azure gcp kmip local ].each do |dst_provider| context "with #{src_provider} as source provider and #{dst_provider} as destination provider" do let(:client_encryption1) do key_vault_client = ClientRegistry.instance.new_local_client( SpecConfig.instance.addresses, SpecConfig.instance.test_options ) Mongo::ClientEncryption.new( key_vault_client, key_vault_namespace: 'keyvault.datakeys', kms_providers: kms_providers, kms_tls_options: { kmip: default_kms_tls_options_for_provider } ) end let(:client_encryption2) do key_vault_client = ClientRegistry.instance.new_local_client( SpecConfig.instance.addresses, SpecConfig.instance.test_options ) Mongo::ClientEncryption.new( key_vault_client, key_vault_namespace: 'keyvault.datakeys', kms_providers: kms_providers, kms_tls_options: { kmip: default_kms_tls_options_for_provider } ) end let(:key_id) do client_encryption1.create_data_key( src_provider.to_s, master_key: master_keys[src_provider] ) end let(:ciphertext) do client_encryption1.encrypt( 'test', key_id: key_id, algorithm: 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic' ) end before do client_encryption2.rewrap_many_data_key( {}, provider: dst_provider.to_s, master_key: master_keys[dst_provider] ) end it 'rewraps', :aggregate_failures do expect(client_encryption1.decrypt(ciphertext)).to eq('test') expect(client_encryption2.decrypt(ciphertext)).to eq('test') end context 'when master_key is present without provider' do it 'raises an exception' do expect { client_encryption1.rewrap_many_data_key({}, master_key: {}) } .to raise_error(ArgumentError, /provider/) end end end end end end unique_index_on_key_alt_names_prose_spec.rb000066400000000000000000000050261505113246500357600ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/integration/client_side_encryption# frozen_string_literal: true require 'spec_helper' # No need to rewrite legacy tests to use shorter examples, unless/until we # revisit these tests and need to make more significant changes. 
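# For reference, the rewrap operation tested in the previous file reduces to
# a single call: re-encrypt every key document matching the filter under a
# new KMS master key, leaving data encrypted with those keys untouched. A
# minimal sketch, assuming a client_encryption configured as above:
client_encryption.rewrap_many_data_key(
  {},                # empty filter rewraps every key in the key vault
  provider: 'local'  # cloud providers additionally take a master_key hash
)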
# rubocop:disable RSpec/ExampleLength
describe 'Unique index on keyAltNames' do
  require_enterprise
  min_server_fcv '4.2'
  require_libmongocrypt
  include_context 'define shared FLE helpers'
  min_server_version '7.0.0-rc0'

  let(:client) do
    ClientRegistry.instance.new_local_client(
      SpecConfig.instance.addresses,
      SpecConfig.instance.test_options.merge(
        database: SpecConfig.instance.test_db
      )
    )
  end

  let(:client_encryption) do
    Mongo::ClientEncryption.new(
      client,
      key_vault_namespace: "#{key_vault_db}.#{key_vault_coll}",
      kms_providers: local_kms_providers
    )
  end

  let(:existing_key_alt_name) do
    'def'
  end

  let(:existing_key_id) do
    client_encryption.create_data_key('local', key_alt_names: [ existing_key_alt_name ])
  end

  before do
    client.use(key_vault_db)[key_vault_coll].drop
    client.use(key_vault_db).command(
      createIndexes: key_vault_coll,
      indexes: [
        {
          name: 'keyAltNames_1',
          key: { keyAltNames: 1 },
          unique: true,
          partialFilterExpression: { keyAltNames: { '$exists' => true } },
        },
      ],
      writeConcern: { w: 'majority' }
    )
    # Force key creation
    existing_key_id
  end

  it 'tests create_data_key' do
    expect do
      client_encryption.create_data_key('local', key_alt_names: [ 'abc' ])
    end.not_to raise_error

    expect do
      client_encryption.create_data_key('local', key_alt_names: [ existing_key_alt_name ])
    end.to raise_error(Mongo::Error::OperationFailure, /E11000/) # duplicate key error
  end

  it 'tests add_key_alt_name' do
    key_id = client_encryption.create_data_key('local')

    expect do
      client_encryption.add_key_alt_name(key_id, 'abc')
    end.not_to raise_error

    expect do
      key_document = client_encryption.add_key_alt_name(key_id, 'abc')
      expect(key_document['keyAltNames']).to include('abc')
    end.not_to raise_error

    expect do
      client_encryption.add_key_alt_name(key_id, existing_key_alt_name)
    end.to raise_error(Mongo::Error::OperationFailure, /E11000/) # duplicate key error

    expect do
      key_document = client_encryption.add_key_alt_name(existing_key_id, existing_key_alt_name)
      expect(key_document['keyAltNames']).to include(existing_key_alt_name)
    end.not_to raise_error
  end
end
# rubocop:enable RSpec/ExampleLength
mongo-ruby-driver-2.21.3/spec/integration/client_side_encryption/views_spec.rb
# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe 'Client-Side Encryption' do
  describe 'Prose tests: Views are prohibited' do
    require_libmongocrypt
    require_enterprise
    min_server_fcv '4.2'

    include_context 'define shared FLE helpers'

    let(:client) do
      new_local_client(
        SpecConfig.instance.addresses,
        SpecConfig.instance.test_options
      )
    end

    let(:client_encrypted) do
      new_local_client(
        SpecConfig.instance.addresses,
        SpecConfig.instance.test_options.merge(
          auto_encryption_options: {
            kms_providers: local_kms_providers,
            key_vault_namespace: 'keyvault.datakeys',
            # Spawn mongocryptd on non-default port for sharded cluster tests
            extra_options: extra_options,
          },
          database: 'db',
        )
      )
    end

    before do
      client.use('db')['view'].drop
      client.use('db').database.command(create: 'view', viewOn: 'coll')
    end

    it 'does not perform encryption on views' do
      expect do
        client_encrypted['view'].insert_one({})
      end.to raise_error(Mongo::Error::CryptError, /cannot auto encrypt a view/)
    end
  end
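# Auto encryption refuses to write through views ("cannot auto encrypt a
# view"). An application that must go through a view can use a client with
# bypass_auto_encryption: true, which skips automatic encryption on writes
# while still transparently decrypting encrypted fields on reads. A sketch
# with placeholder settings (the address and local key are fabricated):
reader = Mongo::Client.new(
  [ '127.0.0.1:27017' ],
  database: 'db',
  auto_encryption_options: {
    key_vault_namespace: 'keyvault.datakeys',
    kms_providers: { local: { key: 'A' * 96 } }, # hypothetical 96-byte key
    bypass_auto_encryption: true
  }
)
reader['view'].find.first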
mongo-ruby-driver-2.21.3/spec/integration/client_side_operations_timeout/000077500000000000000000000000001505113246500267405ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/integration/client_side_operations_timeout/encryption_prose_spec.rb000066400000000000000000000103621505113246500337030ustar00rootroot00000000000000# frozen_string_literal: true require 'spec_helper' describe 'CSOT for encryption' do require_libmongocrypt require_no_multi_mongos min_server_fcv '4.2' include_context 'define shared FLE helpers' include_context 'with local kms_providers' let(:subscriber) { Mrss::EventSubscriber.new } describe 'mongocryptd' do before do Process.spawn( 'mongocryptd', '--pidfilepath=bypass-spawning-mongocryptd.pid', '--port=23000', '--idleShutdownTimeoutSecs=60', %i[ out err ] => '/dev/null' ) end let(:client) do Mongo::Client.new('mongodb://localhost:23000/?timeoutMS=1000').tap do |client| client.subscribe(Mongo::Monitoring::COMMAND, subscriber) end end let(:ping_command) do subscriber.started_events.find do |event| event.command_name == 'ping' end&.command end after do client.close end it 'does not set maxTimeMS for commands sent to mongocryptd' do expect do client.use('admin').command(ping: 1) end.to raise_error(Mongo::Error::OperationFailure) expect(ping_command).not_to have_key('maxTimeMS') end end describe 'ClientEncryption' do let(:key_vault_client) do ClientRegistry.instance.new_local_client( SpecConfig.instance.addresses, SpecConfig.instance.test_options.merge(timeout_ms: 20) ) end let(:client_encryption) do Mongo::ClientEncryption.new( key_vault_client, key_vault_namespace: key_vault_namespace, kms_providers: local_kms_providers ) end describe '#createDataKey' do before do authorized_client.use(key_vault_db)[key_vault_coll].drop authorized_client.use(key_vault_db)[key_vault_coll].create authorized_client.use(:admin).command({ configureFailPoint: 'failCommand', mode: { times: 1 }, data: { failCommands: [ 'insert' ], blockConnection: true, blockTimeMS: 30 } }) end after do authorized_client.use(:admin).command({ configureFailPoint: 'failCommand', mode: 'off', }) key_vault_client.close end it 'fails with timeout error' do expect do client_encryption.create_data_key('local') end.to raise_error(Mongo::Error::TimeoutError) end end describe '#encrypt' do let!(:data_key_id) do client_encryption.create_data_key('local') end before do authorized_client.use(:admin).command({ configureFailPoint: 'failCommand', mode: { times: 1 }, data: { failCommands: [ 'find' ], blockConnection: true, blockTimeMS: 30 } }) end after do authorized_client.use(:admin).command({ configureFailPoint: 'failCommand', mode: 'off', }) end it 'fails with timeout error' do expect do client_encryption.encrypt('hello', key_id: data_key_id, algorithm: 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic') end.to raise_error(Mongo::Error::TimeoutError) end end end end mongo-ruby-driver-2.21.3/spec/integration/client_spec.rb000066400000000000000000000027251505113246500232700ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe 'Client' do # TODO after the client is closed, operations should fail with an exception # that communicates this state, instead of failing with server selection or # pool errors. RUBY-3102, RUBY-3174. 
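  # Many integration tests in this area (for example the CSOT encryption
  # tests in the previous file) share one pattern: configure the failCommand
  # fail point so the server blocks or fails a single command, then assert
  # how the driver surfaces it. The fail point document used there looks
  # like this:
  fail_point = {
    configureFailPoint: 'failCommand',
    mode: { times: 1 },
    data: { failCommands: [ 'insert' ], blockConnection: true, blockTimeMS: 30 }
  }
  # ...and is installed with a command on the admin database, e.g.
  # authorized_client.use(:admin).command(fail_point).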
context 'after client is disconnected' do let(:client) { authorized_client.with(server_selection_timeout: 1) } before do client.close end it 'is still usable for operations' do resp = client.database.command(ping: 1) expect(resp).to be_a(Mongo::Operation::Result) end context 'operation that can use sessions' do it 'is still usable for operations' do client['collection'].insert_one(test: 1) end end context 'after all servers are marked unknown' do require_topology :single, :replica_set, :sharded before do client.cluster.servers.each do |server| server.unknown! end end context 'operation that never uses sessions' do it 'fails server selection' do expect do client.database.command(ping: 1) end.to raise_error(Mongo::Error::NoServerAvailable) end end context 'operation that can use sessions' do it 'fails server selection' do expect do client['collection'].insert_one(test: 1) end.to raise_error(Mongo::Error::NoServerAvailable) end end end end end mongo-ruby-driver-2.21.3/spec/integration/client_update_spec.rb000066400000000000000000000121571505113246500246320ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe Mongo::Client do clean_slate context 'auto encryption options' do require_libmongocrypt min_server_fcv '4.2' require_enterprise include_context 'define shared FLE helpers' include_context 'with local kms_providers' before do authorized_client.use(:keyvault)[:datakeys, write_concern: { w: :majority }].drop authorized_client.use(:keyvault)[:datakeys, write_concern: { w: :majority }].insert_one(data_key) authorized_client.use(:auto_encryption)[:users].drop authorized_client.use(:auto_encryption)[:users, { 'validator' => { '$jsonSchema' => schema_map } } ].create end describe '#with' do let(:old_client) do new_local_client( SpecConfig.instance.addresses, SpecConfig.instance.test_options.merge( auto_encryption_options: { kms_providers: kms_providers, key_vault_namespace: key_vault_namespace, # Spawn mongocryptd on non-default port for sharded cluster tests extra_options: extra_options, }, database: :auto_encryption ), ) end context 'with new, invalid auto_encryption_options' do let(:new_auto_encryption_options) { { kms_providers: nil } } let(:new_client) do old_client.with(auto_encryption_options: new_auto_encryption_options) end # Detection of leaked background threads only, these tests do not # actually require a clean slate. https://jira.mongodb.org/browse/RUBY-2138 clean_slate before do authorized_client.reconnect if authorized_client.closed? 
end

        it 'raises an exception' do
          expect do
            new_client
          end.to raise_error(ArgumentError)
        end

        it 'allows the original client to keep encrypting' do
          old_client[:users].insert_one(ssn: ssn)
          document = authorized_client.use(:auto_encryption)[:users].find.first
          expect(document['ssn']).to be_ciphertext
        end
      end

      context 'with new auto_encryption_options' do
        let!(:new_client) do
          old_client.with(auto_encryption_options: new_auto_encryption_options)
        end

        let(:new_auto_encryption_options) do
          {
            kms_providers: kms_providers,
            key_vault_namespace: key_vault_namespace,
            schema_map: { 'auto_encryption.users' => schema_map },
            # Spawn mongocryptd on non-default port for sharded cluster tests
            extra_options: extra_options,
          }
        end

        it 'creates a new client' do
          expect(new_client).not_to eq(old_client)
        end

        it 'maintains the old client\'s auto encryption options' do
          expect(old_client.encrypter.options[:schema_map]).to be_nil
        end

        it 'updates the client\'s auto encryption options' do
          expect(new_client.encrypter.options[:schema_map]).to eq('auto_encryption.users' => schema_map)
        end

        it 'shares a cluster with the old client' do
          expect(old_client.cluster).to eq(new_client.cluster)
        end

        it 'allows the original client to keep encrypting' do
          old_client[:users].insert_one(ssn: ssn)
          document = authorized_client.use(:auto_encryption)[:users].find.first
          expect(document['ssn']).to be_ciphertext
        end

        it 'allows the new client to keep encrypting' do
          new_client[:users].insert_one(ssn: ssn)
          document = authorized_client.use(:auto_encryption)[:users].find.first
          expect(document['ssn']).to be_ciphertext
        end
      end

      context 'with nil auto_encryption_options' do
        let!(:new_client) do
          old_client.with(auto_encryption_options: new_auto_encryption_options)
        end

        let(:new_auto_encryption_options) { nil }

        it 'removes auto encryption options' do
          expect(new_client.encrypter).to be_nil
        end

        it 'allows original client to keep encrypting' do
          old_client[:users].insert_one(ssn: ssn)
          document = authorized_client.use(:auto_encryption)[:users].find.first
          expect(document['ssn']).to be_ciphertext
        end
      end
    end

    describe '#use' do
      let(:old_client) do
        new_local_client(
          SpecConfig.instance.addresses,
          SpecConfig.instance.test_options.merge(
            auto_encryption_options: {
              kms_providers: kms_providers,
              key_vault_namespace: key_vault_namespace,
              # Spawn mongocryptd on non-default port for sharded cluster tests
              extra_options: extra_options,
            }
          )
        )
      end

      let(:new_client) do
        old_client.use(:auto_encryption)
      end

      it 'creates a new client with encryption enabled' do
        new_client[:users].insert_one(ssn: ssn)
        document = authorized_client.use(:auto_encryption)[:users].find.first
        expect(document['ssn']).to be_ciphertext
      end
    end
  end
end
mongo-ruby-driver-2.21.3/spec/integration/collection_indexes_prose_spec.rb
# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe 'Mongo::Collection#indexes / listIndexes prose tests' do
  let(:collection) do
    authorized_client['list-indexes-prose']
  end

  before do
    collection.drop
    collection.create
    collection.indexes.create_one({name: 1}, name: 'simple')
    collection.indexes.create_one({hello: 1, world: -1}, name: 'compound')
    collection.indexes.create_one({test: 1}, unique: true, name: 'unique')
    collection.insert_one(
      name: 'Stanley',
      hello: 'Yes',
      world: 'No',
      test: 'Always',
    )
  end

  let(:index_list) do
    collection.indexes.to_a
  end

  it 'returns all index names' do
    %w(simple compound unique).each do |name|
      index_list.detect do |spec|
        spec['name'] == name
      end.should be_a(Hash)
    end
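    # (The listing also contains the implicit _id_ index, which is why the
    # following example expects four entries in total.)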
end it 'does not return duplicate or nonexistent index names' do # There are 4 total indexes: 3 that we explicitly defined + the # implicit index on _id. index_list.length.should == 4 end it 'returns the unique flag for unique index' do unique_index = index_list.detect do |spec| spec['name'] == 'unique' end unique_index['unique'].should be true end it 'does not return the unique flag for non-unique index' do %w(simple compound).each do |name| index = index_list.detect do |spec| spec['name'] == name end index['unique'].should be nil end end end mongo-ruby-driver-2.21.3/spec/integration/command_monitoring_spec.rb000066400000000000000000000141211505113246500256660ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe 'Command monitoring' do let(:subscriber) { Mrss::EventSubscriber.new } let(:client) do authorized_client.with(app_name: 'command monitoring spec').tap do |client| client.subscribe(Mongo::Monitoring::COMMAND, subscriber) end end context 'pre 3.6 servers' do max_server_fcv '3.5' it 'notifies on successful commands' do result = client.database.command('ismaster' => 1) expect(result.documents.first['ismaster']).to be true started_events = subscriber.started_events.select do |event| event.command_name == 'ismaster' end expect(started_events.length).to eql(1) started_event = started_events.first expect(started_event.command_name).to eql('ismaster') expect(started_event.address).to be_a(Mongo::Address) expect(started_event.command).to have_key('$db') succeeded_events = subscriber.succeeded_events.select do |event| event.command_name == 'ismaster' end expect(succeeded_events.length).to eql(1) succeeded_event = succeeded_events.first expect(succeeded_event.command_name).to eql('ismaster') expect(succeeded_event.reply).to be_a(BSON::Document) expect(succeeded_event.reply['ismaster']).to eql(true) expect(succeeded_event.reply['ok']).to eq(1) expect(succeeded_event.address).to be_a(Mongo::Address) expect(succeeded_event.duration).to be_a(Float) expect(subscriber.failed_events.length).to eql(0) end end context '3.6+ servers' do min_server_fcv '3.6' it 'notifies on successful commands' do result = client.database.command(hello: 1) expect(result.documents.first['isWritablePrimary']).to be true started_events = subscriber.started_events.select do |event| event.command_name == 'hello' end expect(started_events.length).to eql(1) started_event = started_events.first expect(started_event.command_name).to eql('hello') expect(started_event.address).to be_a(Mongo::Address) expect(started_event.command).to have_key('$db') succeeded_events = subscriber.succeeded_events.select do |event| event.command_name == 'hello' end expect(succeeded_events.length).to eql(1) succeeded_event = succeeded_events.first expect(succeeded_event.command_name).to eql('hello') expect(succeeded_event.reply).to be_a(BSON::Document) expect(succeeded_event.reply['isWritablePrimary']).to eql(true) expect(succeeded_event.reply['ok']).to eq(1) expect(succeeded_event.address).to be_a(Mongo::Address) expect(succeeded_event.duration).to be_a(Float) expect(subscriber.failed_events.length).to eql(0) end end it 'notifies on failed commands' do expect do result = client.database.command(:bogus => 1) end.to raise_error(Mongo::Error::OperationFailure, /no such c(om)?m(an)?d/) started_events = subscriber.started_events.select do |event| event.command_name == 'bogus' end expect(started_events.length).to eql(1) started_event = started_events.first expect(started_event.command_name).to 
eql('bogus') expect(started_event.address).to be_a(Mongo::Address) succeeded_events = subscriber.succeeded_events.select do |event| event.command_name == 'bogus' end expect(succeeded_events.length).to eql(0) failed_events = subscriber.failed_events.select do |event| event.command_name == 'bogus' end expect(failed_events.length).to eql(1) failed_event = failed_events.first expect(failed_event.command_name).to eql('bogus') expect(failed_event.message).to match(/no such c(om)?m(an)?d/) expect(failed_event.address).to be_a(Mongo::Address) expect(failed_event.duration).to be_a(Float) end context 'client with no established connections' do # For simplicity use 3.6+ servers only, then we can assert # scram auth commands min_server_fcv '3.6' # X.509 auth uses authenticate instead of sasl* commands require_no_external_user shared_examples_for 'does not nest auth and find' do it 'does not nest auth and find' do expect(subscriber.started_events.length).to eq 0 client['test-collection'].find(a: 1).first command_names = subscriber.started_events.map(&:command_name) command_names.should == expected_command_names end end context 'pre-4.4 servers' do max_server_version '4.2' let(:expected_command_names) do # Long SCRAM conversation. %w(saslStart saslContinue saslContinue find) end it_behaves_like 'does not nest auth and find' end context '4.4+ servers' do min_server_fcv '4.4' let(:expected_command_names) do # Speculative auth + short SCRAM conversation. %w(saslContinue find) end it_behaves_like 'does not nest auth and find' end end context 'when write concern is specified outside of command document' do require_wired_tiger require_topology :replica_set min_server_fcv '4.0' let(:collection) do client['command-monitoring-test'] end let(:write_concern) { Mongo::WriteConcern.get({w: 42}) } let(:session) { client.start_session } let(:command) do Mongo::Operation::Command.new( selector: { commitTransaction: 1 }, db_name: 'admin', session: session, txn_num: 123, write_concern: write_concern, ) end it 'includes write concern in notified command document' do server = client.cluster.next_primary collection.insert_one(a: 1) session.start_transaction collection.insert_one({a: 1}, session: session) subscriber.clear_events! expect do command.execute(server, context: Mongo::Operation::Context.new(session: session)) end.to raise_error(Mongo::Error::OperationFailure, /100\b.*Not enough data-bearing nodes/) expect(subscriber.started_events.length).to eq(1) event = subscriber.started_events.first expect(event.command['writeConcern']['w']).to eq(42) end end end mongo-ruby-driver-2.21.3/spec/integration/command_spec.rb000066400000000000000000000101041505113246500234210ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe 'Command' do let(:subscriber) { Mrss::EventSubscriber.new } describe 'payload' do let(:server) { authorized_client.cluster.next_primary } let(:payload) do server.with_connection do |connection| command.send(:final_operation).send(:message, connection).payload.dup.tap do |payload| if payload['request_id'].is_a?(Integer) payload['request_id'] = 42 end # $clusterTime may be present depending on the client's state payload['command'].delete('$clusterTime') # 3.6+ servers also return a payload field, earlier ones do not. # The contents of this field duplicates the rest of the response # so we can get rid of it without losing information.
payload.delete('reply') end end end let(:session) { nil } context 'commitTransaction' do # Although these are unit tests, when targeting pre-4.0 servers # the driver does not add arguments like write concerns to commands that # it adds for 4.0+ servers, breaking expectations min_server_fcv '4.0' let(:selector) do { commitTransaction: 1 }.freeze end let(:write_concern) { nil } let(:command) do Mongo::Operation::Command.new( selector: selector, db_name: 'admin', session: session, txn_num: 123, write_concern: write_concern, ) end let(:expected_payload) do { 'command' => { 'commitTransaction' => 1, '$db' => 'admin', }, 'command_name' => 'commitTransaction', 'database_name' => 'admin', 'request_id' => 42, } end it 'returns expected payload' do expect(payload).to eq(expected_payload) end context 'with session' do min_server_fcv '3.6' let(:session) do authorized_client.start_session.tap do |session| # We are bypassing the normal transaction lifecycle, which would # set txn_options allow(session).to receive(:txn_options).and_return({}) end end let(:expected_payload) do { 'command' => { 'commitTransaction' => 1, 'lsid' => session.session_id, 'txnNumber' => BSON::Int64.new(123), '$db' => 'admin', }, 'command_name' => 'commitTransaction', 'database_name' => 'admin', 'request_id' => 42, } end it 'returns selector with write concern' do expect(payload).to eq(expected_payload) end end context 'with write concern' do let(:write_concern) { Mongo::WriteConcern.get(w: :majority) } let(:expected_payload) do { 'command' => { '$db' => 'admin', 'commitTransaction' => 1, 'writeConcern' => {'w' => 'majority'}, }, 'command_name' => 'commitTransaction', 'database_name' => 'admin', 'request_id' => 42, } end it 'returns selector with write concern' do expect(payload).to eq(expected_payload) end end end context 'find' do let(:selector) do { find: 'collection_name' }.freeze end let(:command) do Mongo::Operation::Command.new( selector: selector, db_name: 'foo', session: session, ) end context 'OP_MSG-capable servers' do min_server_fcv '3.6' let(:expected_payload) do { 'command' => { '$db' => 'foo', 'find' => 'collection_name', }, 'command_name' => 'find', 'database_name' => 'foo', 'request_id' => 42, } end it 'returns expected payload' do expect(payload).to eq(expected_payload) end end end end end mongo-ruby-driver-2.21.3/spec/integration/connect_single_rs_name_spec.rb000066400000000000000000000036361505113246500265120ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe 'Direct connection with RS name' do before(:all) do # preload ClusterConfig.instance.replica_set_name end clean_slate_for_all shared_examples_for 'passes RS name to topology' do it 'passes RS name to topology' do expect(client.cluster.topology.replica_set_name).to eq(replica_set_name) end end let(:client) do new_local_client( [SpecConfig.instance.addresses.first], SpecConfig.instance.test_options.merge( replica_set: replica_set_name, connect: :direct, server_selection_timeout: 3.32, )) end context 'in replica set' do require_topology :replica_set context 'with correct RS name' do let(:replica_set_name) { ClusterConfig.instance.replica_set_name } it_behaves_like 'passes RS name to topology' it 'creates a working client' do expect do res = client.database.command(ping: 1) p res end.not_to raise_error end end context 'with wrong RS name' do let(:replica_set_name) { 'wrong' } it_behaves_like 'passes RS name to topology' it 'creates a client which does not find a suitable server' do # TODO When RUBY-2197 
is implemented, assert the error message also expect do client.database.command(ping: 1) end.to raise_error(Mongo::Error::NoServerAvailable) end end end context 'in standalone' do require_topology :single context 'with any RS name' do let(:replica_set_name) { 'any' } it_behaves_like 'passes RS name to topology' it 'creates a client which raises on every operation' do # TODO When RUBY-2197 is implemented, assert the error message also expect do client.database.command(ping: 1) end.to raise_error(Mongo::Error::NoServerAvailable) end end end end mongo-ruby-driver-2.21.3/spec/integration/connection/000077500000000000000000000000001505113246500226045ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/integration/connection/faas_env_spec.rb000066400000000000000000000026751505113246500257370ustar00rootroot00000000000000# frozen_string_literal: true require 'spec_helper' # Test Plan scenarios from the handshake spec SCENARIOS = { 'Valid AWS' => { 'AWS_EXECUTION_ENV' => 'AWS_Lambda_ruby2.7', 'AWS_REGION' => 'us-east-2', 'AWS_LAMBDA_FUNCTION_MEMORY_SIZE' => '1024', }, 'Valid Azure' => { 'FUNCTIONS_WORKER_RUNTIME' => 'ruby', }, 'Valid GCP' => { 'K_SERVICE' => 'servicename', 'FUNCTION_MEMORY_MB' => '1024', 'FUNCTION_TIMEOUT_SEC' => '60', 'FUNCTION_REGION' => 'us-central1', }, 'Valid Vercel' => { 'VERCEL' => '1', 'VERCEL_REGION' => 'cdg1', }, 'Invalid - multiple providers' => { 'AWS_EXECUTION_ENV' => 'AWS_Lambda_ruby2.7', 'AWS_REGION' => 'us-east-2', 'AWS_LAMBDA_FUNCTION_MEMORY_SIZE' => '1024', 'FUNCTIONS_WORKER_RUNTIME' => 'ruby', }, 'Invalid - long string' => { 'AWS_EXECUTION_ENV' => 'AWS_Lambda_ruby2.7', 'AWS_REGION' => 'a' * 512, 'AWS_LAMBDA_FUNCTION_MEMORY_SIZE' => '1024', }, 'Invalid - wrong types' => { 'AWS_EXECUTION_ENV' => 'AWS_Lambda_ruby2.7', 'AWS_REGION' => 'us-east-2', 'AWS_LAMBDA_FUNCTION_MEMORY_SIZE' => 'big', }, }.freeze describe 'Connect under FaaS Env' do clean_slate SCENARIOS.each do |name, env| context "when given #{name}" do local_env(env) it 'connects successfully' do resp = authorized_client.database.command(ping: 1) expect(resp).to be_a(Mongo::Operation::Result) end end end end mongo-ruby-driver-2.21.3/spec/integration/connection_pool_populator_spec.rb000066400000000000000000000210551505113246500273040ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe 'Connection pool populator integration' do let(:options) { {} } let(:server_options) do Mongo::Utils.shallow_symbolize_keys(Mongo::Client.canonicalize_ruby_options( SpecConfig.instance.all_test_options, )).update(options) end let(:address) do Mongo::Address.new(SpecConfig.instance.addresses.first) end let(:monitoring) do Mongo::Monitoring.new(monitoring: false) end let(:listeners) do Mongo::Event::Listeners.new end declare_topology_double retry_test let(:app_metadata) do Mongo::Server::AppMetadata.new(options) end let(:cluster) do double('cluster').tap do |cl| allow(cl).to receive(:topology).and_return(topology) allow(cl).to receive(:app_metadata).and_return(app_metadata) allow(cl).to receive(:options).and_return({}) allow(cl).to receive(:update_cluster_time) allow(cl).to receive(:cluster_time).and_return(nil) allow(cl).to receive(:run_sdam_flow) end end let(:server) do register_server( Mongo::Server.new(address, cluster, monitoring, listeners, {monitoring_io: false}.update(server_options) ).tap do |server| allow(server).to receive(:description).and_return(ClusterConfig.instance.primary_description) end ) end let(:pool) do server.pool end describe '#initialize' do 
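# The assertions in this file are timing-based: the populator runs on a background thread, so each example sleeps to give it a chance to act.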
context 'when a min size is provided' do let(:options) do { min_pool_size: 2, max_pool_size: 5 } end it 'creates the pool with min pool size connections' do pool sleep 2 expect(pool.size).to eq(2) expect(pool.available_count).to eq(2) end it 'does not use the same objects in the pool' do expect(pool.check_out).to_not equal(pool.check_out) end end context 'when min size is zero' do it 'does start the background thread' do pool sleep 2 expect(pool.size).to eq(0) expect(pool.instance_variable_get('@populator')).to be_running end end end describe '#clear' do context 'when a min size is provided' do require_no_linting let(:options) do { min_pool_size: 1 } end it 'repopulates the pool periodically only up to min size' do pool.ready expect(pool.instance_variable_get('@populator')).to be_running sleep 2 expect(pool.size).to eq(1) expect(pool.available_count).to eq(1) first_connection = pool.check_out pool.check_in(first_connection) RSpec::Mocks.with_temporary_scope do allow(pool.server).to receive(:unknown?).and_return(true) if server.load_balancer? pool.clear(service_id: first_connection.service_id) else pool.clear end end ::Utils.wait_for_condition(3) do pool.size == 0 end expect(pool.size).to eq(0) pool.ready sleep 2 expect(pool.size).to eq(1) expect(pool.available_count).to eq(1) second_connection = pool.check_out pool.check_in(second_connection) expect(second_connection).to_not eq(first_connection) # When populate is re-run, the pool size should not change pool.populate expect(pool.size).to eq(1) expect(pool.available_count).to eq(1) third_connection = pool.check_out expect(third_connection).to eq(second_connection) end end end describe '#check_in' do context 'when a min size is provided' do let(:options) do { min_pool_size: 1 } end it 'repopulates the pool after check_in of a closed connection' do pool sleep 2 expect(pool.size).to eq(1) first_connection = pool.check_out first_connection.disconnect! expect(pool.size).to eq(1) pool.check_in(first_connection) sleep 2 expect(pool.size).to eq(1) expect(pool.available_count).to eq(1) second_connection = pool.check_out expect(second_connection).to_not eq(first_connection) end end end describe '#check_out' do context 'when min size and idle time are provided' do let(:options) do { max_pool_size: 2, min_pool_size: 2, max_idle_time: 0.5 } end it 'repopulates the pool after check_out empties idle connections' do pool first_connection = pool.check_out second_connection = pool.check_out first_connection.record_checkin! second_connection.record_checkin! 
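# record_checkin! stamps each connection's last checkin time, which the max_idle_time expiry exercised below is measured against.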
pool.check_in(first_connection) pool.check_in(second_connection) expect(pool.size).to eq(2) # let both connections become idle sleep 0.5 # check_out should discard first two connections, trigger in-flow # creation of a single connection, then wake up populate thread third_connection = pool.check_out expect(third_connection).to_not eq(first_connection) expect(third_connection).to_not eq(second_connection) # populate thread should create a new connection for the pool sleep 2 expect(pool.size).to eq(2) fourth_connection = pool.check_out expect(fourth_connection).to_not eq(first_connection) expect(fourth_connection).to_not eq(second_connection) expect(fourth_connection).to_not eq(third_connection) end end end describe '#close' do context 'when min size is provided' do let(:options) do { min_pool_size: 2, max_pool_size: 5 } end it 'terminates and does not repopulate the pool after pool is closed' do pool sleep 2 expect(pool.size).to eq(2) connection = pool.check_out expect(pool.size).to eq(2) pool.close(force: true) expect(pool.closed?).to be true expect(pool.instance_variable_get('@available_connections').empty?).to be true expect(pool.instance_variable_get('@checked_out_connections').empty?).to be true # populate thread should terminate sleep 2 expect(pool.instance_variable_get('@populator').running?).to be false expect(pool.closed?).to be true end end end describe '#close_idle_sockets' do context 'when min size and idle time are provided' do let(:options) do { min_pool_size: 1, max_idle_time: 0.5 } end it 'repopulates pool after sockets are closed' do pool sleep 2 expect(pool.size).to eq(1) connection = pool.check_out connection.record_checkin! pool.check_in(connection) # let the connection become idle sleep 0.5 # close idle_sockets should trigger populate pool.close_idle_sockets sleep 2 expect(pool.size).to eq(1) expect(pool.check_out).not_to eq(connection) end end end describe '#populate' do let(:options) do { min_pool_size: 1 } end context 'when populate encounters a network error twice' do it 'retries once and does not stop the populator' do expect_any_instance_of(Mongo::Server::ConnectionPool).to \ receive(:create_and_add_connection).twice.and_raise(Mongo::Error::SocketError) pool sleep 2 expect(pool.populator).to be_running end end context 'when populate encounters a non-network error' do it 'does not retry and does not stop the populator' do expect_any_instance_of(Mongo::Server::ConnectionPool).to \ receive(:create_and_add_connection).and_raise(Mongo::Error) pool sleep 2 expect(pool.populator).to be_running end end end describe 'when forking is enabled' do require_mri context 'when min size is provided' do min_server_version '2.8' it 'populates the parent and child pools' do client = ClientRegistry.instance.new_local_client([SpecConfig.instance.addresses.first], server_options.merge(min_pool_size: 2, max_pool_size: 5)) # force initialization of the pool client.cluster.servers.first.pool # let pool populate sleep 2 server = client.cluster.next_primary pool = server.pool expect(pool.size).to eq(2) fork do # follow forking guidance client.close client.reconnect # let pool populate sleep 2 server = client.cluster.next_primary pool = server.pool expect(pool.size).to eq(2) end end end end end mongo-ruby-driver-2.21.3/spec/integration/connection_spec.rb000066400000000000000000000274711505113246500241520ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe 'Connections' do clean_slate let(:client) do
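# Monitoring is stopped up front so that background hello checks cannot race with the handshake expectations set up in these examples.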
ClientRegistry.instance.global_client('authorized').tap do |client| stop_monitoring(client) end end let(:server) { client.cluster.servers.first } describe '#connect!' do let(:connection) do Mongo::Server::Connection.new(server, server.options) end context 'network error during handshake' do # On JRuby 9.2.7.0, this line: # expect_any_instance_of(Mongo::Socket).to receive(:write).and_raise(exception) # ... appears to produce a moment in which Mongo::Socket#write is undefined # entirely, resulting in this failure: # RSpec::Expectations::ExpectationNotMetError: expected Mongo::Error::SocketError, got # fails_on_jruby # 4.4 has two monitors and thus our socket mocks get hit twice max_server_version '4.2' let(:exception) { Mongo::Error::SocketError } let(:error) do connection expect_any_instance_of(Mongo::Socket).to receive(:write).and_raise(exception) expect do connection.connect! end.to raise_error(exception) end it 'sets server type to unknown' do expect(server).not_to be_unknown error expect(server).to be_unknown end context 'with sdam event subscription' do let(:subscriber) { Mrss::EventSubscriber.new } let(:client) do ClientRegistry.instance.global_client('authorized').with(app_name: 'connection_integration').tap do |client| client.subscribe(Mongo::Monitoring::SERVER_OPENING, subscriber) client.subscribe(Mongo::Monitoring::SERVER_CLOSED, subscriber) client.subscribe(Mongo::Monitoring::SERVER_DESCRIPTION_CHANGED, subscriber) client.subscribe(Mongo::Monitoring::TOPOLOGY_OPENING, subscriber) client.subscribe(Mongo::Monitoring::TOPOLOGY_CHANGED, subscriber) end end it 'publishes server description changed event' do expect(subscriber.succeeded_events).to be_empty wait_for_all_servers(client.cluster) connection subscriber.succeeded_events.clear error event = subscriber.first_event('server_description_changed_event') expect(event).not_to be_nil expect(event.address).to eq(server.address) expect(event.new_description).to be_unknown end it 'marks server unknown' do expect(server).not_to be_unknown connection error expect(server).to be_unknown end context 'in replica set topology' do require_topology :replica_set # need to use the primary here, otherwise a secondary will be # changed to unknown which wouldn't alter topology let(:server) { client.cluster.next_primary } it 'changes topology type' do # wait for topology to get discovered client.cluster.next_primary expect(client.cluster.topology.class).to eql(Mongo::Cluster::Topology::ReplicaSetWithPrimary) # stop background monitoring to prevent it from racing with the test client.cluster.servers_list.each do |server| server.monitor.stop! 
end connection error expect(client.cluster.topology.class).to eql(Mongo::Cluster::Topology::ReplicaSetNoPrimary) end end end context 'error during handshake to primary in a replica set' do require_topology :replica_set let(:server) { client.cluster.next_primary } before do # insert to perform server selection and get topology to primary client.cluster.next_primary end it 'sets cluster type to replica set without primary' do expect(client.cluster.topology).to be_a(Mongo::Cluster::Topology::ReplicaSetWithPrimary) error expect(client.cluster.topology).to be_a(Mongo::Cluster::Topology::ReplicaSetNoPrimary) end end describe 'number of sockets created' do before do server end shared_examples_for 'is 1 per connection' do it 'is 1 per connection' do # Instantiating a connection object should not create any sockets RSpec::Mocks.with_temporary_scope do expect(socket_cls).not_to receive(:new) connection end # When the connection connects, exactly one socket should be created # (and subsequently connected) RSpec::Mocks.with_temporary_scope do expect(socket_cls).to receive(:new).and_call_original connection.connect! end end end let(:socket_cls) { ::Socket } it_behaves_like 'is 1 per connection' context 'connection to Unix domain socket' do # Server does not allow Unix socket connections when TLS is enabled require_no_tls let(:port) { SpecConfig.instance.any_port } let(:client) do new_local_client(["/tmp/mongodb-#{port}.sock"], connect: :direct).tap do |client| stop_monitoring(client) end end let(:socket_cls) { ::UNIXSocket } it_behaves_like 'is 1 per connection' end end context 'when socket connection fails' do before do server end let(:socket_cls) { ::Socket } let(:socket) do double('socket').tap do |socket| allow(socket).to receive(:setsockopt) allow(socket).to receive(:set_encoding) allow(socket).to receive(:getsockopt) expect(socket).to receive(:connect).and_raise(IOError, 'test error') # This test is testing for the close call: expect(socket).to receive(:close) end end it 'closes the socket' do RSpec::Mocks.with_temporary_scope do expect(::Socket).to receive(:new).with( Socket::AF_INET, Socket::SOCK_STREAM, 0).and_return(socket) lambda do connection.connect! end.should raise_error(Mongo::Error::SocketError, /test error/) end end context 'with tls' do require_tls let(:socket) do double('socket').tap do |socket| allow(socket).to receive(:hostname=) allow(socket).to receive(:sync_close=) expect(socket).to receive(:connect).and_raise(IOError, 'test error') # This test is testing for the close call: expect(socket).to receive(:close) end end it 'closes the SSL socket' do RSpec::Mocks.with_temporary_scope do expect(OpenSSL::SSL::SSLSocket).to receive(:new).and_return(socket) lambda do connection.connect! end.should raise_error(Mongo::Error::SocketError, /test error/) end end end end end describe 'wire protocol version range update' do require_no_required_api_version # 3.2 wire protocol is 4. # Wire protocol < 2 means only scram auth is available, # which is not supported by modern mongos. # Instead of mucking with this we just limit this test to 3.2+ # so that we can downgrade protocol range to 0..3 instead of 0..1. 
min_server_fcv '3.2' let(:client) { ClientRegistry.instance.global_client('authorized').with(app_name: 'wire_protocol_update') } context 'non-lb' do require_topology :single, :replica_set, :sharded it 'updates on handshake response from non-monitoring connections' do # connect server client['test'].insert_one(test: 1) # kill background threads so that they are not interfering with # our mocked hello response client.cluster.servers.each do |server| server.monitor.stop! end server = client.cluster.servers.first expect(server.features.server_wire_versions.max >= 4).to be true max_version = server.features.server_wire_versions.max # Depending on server version, handshake here may return a # description that compares equal to the one we got from a # monitoring connection (pre-4.2) or not (4.2+). # Since we do run SDAM flow on handshake responses on # non-monitoring connections, force descriptions to be different # by setting the existing description here to unknown. server.monitor.instance_variable_set('@description', Mongo::Server::Description.new(server.address)) RSpec::Mocks.with_temporary_scope do # now pretend a handshake returned a different range features = Mongo::Server::Description::Features.new(0..3) # One Features instantiation is for SDAM event publication, this # one always happens. The second one happens on servers # where we do not negotiate auth mechanism. expect(Mongo::Server::Description::Features).to receive(:new).at_least(:once).and_return(features) connection = Mongo::Server::Connection.new(server, server.options) expect(connection.connect!).to be true # hello response should update server description via sdam flow, # which includes wire version range expect(server.features.server_wire_versions.max).to eq(3) end end end context 'lb' do require_topology :load_balanced it 'does not update on handshake response from non-monitoring connections since there are not any' do # connect server client['test'].insert_one(test: 1) server = client.cluster.servers.first server.load_balancer?.should be true server.features.server_wire_versions.max.should be 0 end end end describe 'SDAM flow triggered by hello on non-monitoring thread' do # replica sets can transition between having and not having a primary require_topology :replica_set let(:client) do # create a new client because we make manual state changes ClientRegistry.instance.global_client('authorized').with(app_name: 'non-monitoring thread sdam') end it 'performs SDAM flow' do client['foo'].insert_one(bar: 1) client.cluster.servers_list.each do |server| server.monitor.stop! end expect(client.cluster.topology.class).to eq(Mongo::Cluster::Topology::ReplicaSetWithPrimary) # need to connect to the primary for topology to change server = client.cluster.servers.detect do |server| server.primary? end # overwrite server description server.instance_variable_set('@description', Mongo::Server::Description.new( server.address)) # overwrite topology client.cluster.instance_variable_set('@topology', Mongo::Cluster::Topology::ReplicaSetNoPrimary.new( client.cluster.topology.options, client.cluster.topology.monitoring, client.cluster)) # now create a connection. 
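# The handshake response of this non-monitoring connection is dispatched through the SDAM flow, which the assertions below verify.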
connection = Mongo::Server::Connection.new(server, server.options) # verify everything once again expect(server).to be_unknown expect(client.cluster.topology.class).to eq(Mongo::Cluster::Topology::ReplicaSetNoPrimary) # this should dispatch the sdam event expect(connection.connect!).to be true # back to primary expect(server).to be_primary expect(client.cluster.topology.class).to eq(Mongo::Cluster::Topology::ReplicaSetWithPrimary) end end end end mongo-ruby-driver-2.21.3/spec/integration/crud_spec.rb000066400000000000000000000305751505113246500227530ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe 'CRUD operations' do let(:client) { authorized_client } let(:collection) { client['crud_integration'] } before do collection.delete_many end describe 'find' do context 'when allow_disk_use is true' do # Other cases are adequately covered by spec tests. context 'on server version < 3.2' do max_server_fcv '3.0' it 'raises an exception' do expect do collection.find({}, { allow_disk_use: true }).first end.to raise_error(Mongo::Error::UnsupportedOption, /The MongoDB server handling this request does not support the allow_disk_use option on this command./) end end end context 'when allow_disk_use is false' do # Other cases are adequately covered by spec tests. context 'on server version < 3.2' do max_server_fcv '3.0' it 'raises an exception' do expect do collection.find({}, { allow_disk_use: false }).first end.to raise_error(Mongo::Error::UnsupportedOption, /The MongoDB server handling this request does not support the allow_disk_use option on this command./) end end end context 'when using the legacy $query syntax' do before do collection.insert_one(_id: 1, test: 1) collection.insert_one(_id: 2, test: 2) collection.insert_one(_id: 3, test: 3) end context 'filter only' do it 'passes the filter' do collection.find(:'$query' => {test: 1}).first.should == {'_id' => 1, 'test' => 1} end end context 'empty filter with order' do it 'passes the filter' do collection.find(:'$query' => {}, :'$orderby' => {test: 1}).first.should == {'_id' => 1, 'test' => 1} collection.find(:'$query' => {}, :'$orderby' => {test: -1}).first.should == {'_id' => 3, 'test' => 3} end end context 'filter with order' do it 'passes both filter and order' do collection.find(:'$query' => {test: {'$gt' => 1}}, '$orderby' => {test: 1}).first.should == {'_id' => 2, 'test' => 2} collection.find(:'$query' => {test: {'$gt' => 1}}, '$orderby' => {test: -1}).first.should == {'_id' => 3, 'test' => 3} end end end context 'with read concern' do # Read concern requires 3.2+ server. 
min_server_fcv '3.2' context 'with read concern specified on operation level' do it 'passes the read concern' do event = Utils.get_command_event(client, 'find') do |client| client['foo'].find({}, read_concern: {level: :local}).to_a end event.command.fetch('readConcern').should == {'level' => 'local'} end end context 'with read concern specified on collection level' do it 'passes the read concern' do event = Utils.get_command_event(client, 'find') do |client| client['foo', read_concern: {level: :local}].find.to_a end event.command.fetch('readConcern').should == {'level' => 'local'} end end context 'with read concern specified on client level' do let(:client) { authorized_client.with(read_concern: {level: :local}) } it 'passes the read concern' do event = Utils.get_command_event(client, 'find') do |client| client['foo'].find.to_a end event.command.fetch('readConcern').should == {'level' => 'local'} end end end context 'with oplog_replay option' do let(:collection_name) { 'crud_integration_oplog_replay' } let(:oplog_query) do {ts: {'$gt' => 1}} end context 'passed to operation' do it 'passes the option' do event = Utils.get_command_event(client, 'find') do |client| client[collection_name].find(oplog_query, oplog_replay: true).to_a end event.command.fetch('oplogReplay').should be true end it 'warns' do client.should receive(:log_warn).with('The :oplog_replay option is deprecated and ignored by MongoDB 4.4 and later') client[collection_name].find(oplog_query, oplog_replay: true).to_a end end context 'set on collection' do it 'passes the option' do event = Utils.get_command_event(client, 'find') do |client| client[collection_name, oplog_replay: true].find(oplog_query).to_a end event.command.fetch('oplogReplay').should be true end it 'warns' do client.should receive(:log_warn).with('The :oplog_replay option is deprecated and ignored by MongoDB 4.4 and later') client[collection_name, oplog_replay: true].find(oplog_query).to_a end end end end describe 'explain' do context 'with explicit session' do min_server_fcv '3.6' it 'passes the session' do client.start_session do |session| event = Utils.get_command_event(client, 'explain') do |client| client['foo'].find({}, session: session).explain.should be_explain_output end event.command.fetch('lsid').should == session.session_id end end end context 'with read preference specified on operation level' do require_topology :sharded # RUBY-2706 min_server_fcv '3.6' it 'passes the read preference' do event = Utils.get_command_event(client, 'explain') do |client| client['foo'].find({}, read: {mode: :secondary_preferred}).explain.should be_explain_output end event.command.fetch('$readPreference').should == {'mode' => 'secondaryPreferred'} end end context 'with read preference specified on collection level' do require_topology :sharded # RUBY-2706 min_server_fcv '3.6' it 'passes the read preference' do event = Utils.get_command_event(client, 'explain') do |client| client['foo', read: {mode: :secondary_preferred}].find.explain.should be_explain_output end event.command.fetch('$readPreference').should == {'mode' => 'secondaryPreferred'} end end context 'with read preference specified on client level' do require_topology :sharded # RUBY-2706 min_server_fcv '3.6' let(:client) { authorized_client.with(read: {mode: :secondary_preferred}) } it 'passes the read preference' do event = Utils.get_command_event(client, 'explain') do |client| client['foo'].find.explain.should be_explain_output end event.command.fetch('$readPreference').should == {'mode' => 
'secondaryPreferred'} end end context 'with read concern' do # Read concern requires 3.2+ server. min_server_fcv '3.2' context 'with read concern specified on operation level' do # Read concern is not allowed in explain command, driver drops it. it 'drops the read concern' do event = Utils.get_command_event(client, 'explain') do |client| client['foo'].find({}, read_concern: {level: :local}).explain.should have_key('queryPlanner') end event.command.should_not have_key('readConcern') end end context 'with read concern specified on collection level' do # Read concern is not allowed in explain command, driver drops it. it 'drops the read concern' do event = Utils.get_command_event(client, 'explain') do |client| client['foo', read_concern: {level: :local}].find.explain.should have_key('queryPlanner') end event.command.should_not have_key('readConcern') end end context 'with read concern specified on client level' do let(:client) { authorized_client.with(read_concern: {level: :local}) } # Read concern is not allowed in explain command, driver drops it. it 'drops the read concern' do event = Utils.get_command_event(client, 'explain') do |client| client['foo'].find.explain.should have_key('queryPlanner') end event.command.should_not have_key('readConcern') end end end end describe 'insert' do context 'user documents' do let(:doc) do IceNine.deep_freeze(test: 42) end it 'does not mutate user documents' do lambda do collection.insert_one(doc) end.should_not raise_error end end context 'inserting a BSON::Int64' do before do collection.insert_one(int64: BSON::Int64.new(42)) end it 'is stored as the correct type' do # 18 is the number that represents the Int64 type for the $type # operator; string aliases in the $type operator are only supported on # server versions 3.2 and newer. result = collection.find(int64: { '$type' => 18 }).first expect(result).not_to be_nil expect(result['int64']).to eq(42) end end context 'inserting a BSON::Int32' do before do collection.insert_one(int32: BSON::Int32.new(42)) end it 'is stored as the correct type' do # 16 is the number that represents the Int32 type for the $type # operator; string aliases in the $type operator are only supported on # server versions 3.2 and newer. result = collection.find(int32: { '$type' => 16 }).first expect(result).not_to be_nil expect(result['int32']).to eq(42) end end context 'with automatic encryption' do require_libmongocrypt require_enterprise min_server_fcv '4.2' include_context 'define shared FLE helpers' include_context 'with local kms_providers' let(:encrypted_collection) do new_local_client( SpecConfig.instance.addresses, SpecConfig.instance.test_options.merge( auto_encryption_options: { kms_providers: kms_providers, key_vault_namespace: key_vault_namespace, schema_map: { 'auto_encryption.users' => schema_map }, # Spawn mongocryptd on non-default port for sharded cluster tests extra_options: extra_options, }, database: 'auto_encryption' ) )['users'] end let(:collection) { authorized_client.use('auto_encryption')['users'] } context 'inserting a BSON::Int64' do before do encrypted_collection.insert_one(ssn: '123-456-7890', int64: BSON::Int64.new(42)) end it 'is stored as the correct type' do # 18 is the number that represents the Int64 type for the $type # operator; string aliases in the $type operator are only supported on # server versions 3.2 and newer.
result = collection.find(int64: { '$type' => 18 }).first expect(result).not_to be_nil expect(result['int64']).to eq(42) end end context 'inserting a BSON::Int32' do before do encrypted_collection.insert_one(ssn: '123-456-7890', int32: BSON::Int32.new(42)) end it 'is stored as the correct type' do # 16 is the number that represents the Int32 type for the $type # operator; string aliases in the $type operator are only supported on # server versions 3.2 and newer. result = collection.find(int32: { '$type' => 16 }).first expect(result).not_to be_nil expect(result['int32']).to eq(42) end end end end describe 'upsert' do context 'with default write concern' do it 'upserts' do collection.count_documents.should == 0 res = collection.find(_id: 'foo').update_one({'$set' => {foo: 'bar'}}, upsert: true) res.documents.first['upserted'].length.should == 1 collection.count_documents.should == 1 end end context 'unacknowledged write' do let(:unack_collection) do collection.with(write_concern: {w: 0}) end before do unack_collection.write_concern.acknowledged?.should be false end it 'upserts' do unack_collection.count_documents.should == 0 res = unack_collection.find(_id: 'foo').update_one({'$set' => {foo: 'bar'}}, upsert: true) # since write concern is unacknowledged, wait for the data to be # persisted (hopefully) sleep 0.25 unack_collection.count_documents.should == 1 end end end end mongo-ruby-driver-2.21.3/spec/integration/cursor_pinning_spec.rb000066400000000000000000000041741505113246500250510ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe 'Cursor pinning' do let(:client) do authorized_client.tap do |client| client.reconnect if client.closed? end end let(:collection_name) { 'cursor_pinning' } let(:collection) { client[collection_name] } before do authorized_client[collection_name].insert_many([{test: 1}] * 200) end let(:server) { client.cluster.next_primary } clean_slate context 'non-lb' do require_topology :single, :replica_set, :sharded require_no_multi_mongos # When not in load-balanced topology, iterating a cursor creates # new connections as needed. it 'creates new connections for iteration' do server.pool.size.should == 0 # Use batch_size of 2 until RUBY-2727 is fixed. enum = collection.find({}, batch_size: 2).to_enum # Still zero because we haven't iterated server.pool.size.should == 0 enum.next enum.next server.pool.size.should == 1 # Grab the connection that was used server.with_connection do # This requires a new connection enum.next server.pool.size.should == 2 end end end context 'lb' do require_topology :load_balanced # In load-balanced topology, a cursor retains the connection used to create # it until the cursor is closed. context 'when connection is available' do require_multi_mongos let(:client) { authorized_client.with(max_pool_size: 2) } it 'does not return connection to the pool if cursor not drained' do expect(server.pool).not_to receive(:check_in) enum = collection.find({}, batch_size: 1).to_enum # Get the first element only; cursor is not drained, so there should # be no check_in of the connection. enum.next end it 'returns connection to the pool when cursor is drained' do view = collection.find({}, batch_size: 1) enum = view.to_enum expect_any_instance_of(Mongo::Cursor).to receive(:check_in_connection) # Drain the cursor enum.each { |it| it.nil? 
} end end end end mongo-ruby-driver-2.21.3/spec/integration/cursor_reaping_spec.rb000066400000000000000000000075771505113246500250460ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe 'Cursor reaping' do # JRuby does reap cursors but GC.start does not force GC to run like it does # in MRI, I don't currently know how to force GC to run in JRuby require_mri # Uncomment for debugging this test. =begin around(:all) do |example| saved_level = Mongo::Logger.logger.level Mongo::Logger.logger.level = Logger::DEBUG begin example.run ensure Mongo::Logger.logger.level = saved_level end end =end let(:subscriber) { Mrss::EventSubscriber.new } let(:client) do authorized_client.with(max_pool_size: 10).tap do |client| client.subscribe(Mongo::Monitoring::COMMAND, subscriber) end end let(:collection) { client['cursor_reaping_spec'] } before do data = [{a: 1}] * 10 authorized_client['cursor_reaping_spec'].delete_many authorized_client['cursor_reaping_spec'].insert_many(data) end context 'a no-timeout cursor' do it 'reaps nothing when we do not query' do # this is a base line test to ensure that the reaps in the other test # aren't done on some global cursor expect(Mongo::Operation::KillCursors).not_to receive(:new) # just the scope, no query is happening collection.find.batch_size(2).no_cursor_timeout events = subscriber.started_events.select do |event| event.command['killCursors'] end expect(events).to be_empty end def abandon_cursors [].tap do |cursor_ids| # scopes are weird, having this result in a let block # makes it not garbage collected 10.times do scope = collection.find.batch_size(2).no_cursor_timeout # Begin iteration, creating the cursor scope.each.first scope.cursor.should_not be nil cursor_ids << scope.cursor.id end end end # this let block is a kludge to avoid copy pasting all of this code let(:cursor_id_and_kill_event) do expect(Mongo::Operation::KillCursors).to receive(:new).at_least(:once).and_call_original cursor_ids = abandon_cursors cursor_ids.each do |cursor_id| expect(cursor_id).to be_a(Integer) expect(cursor_id > 0).to be true end GC.start sleep 1 # force periodic executor to run because its frequency is not configurable client.cluster.instance_variable_get('@periodic_executor').execute started_event = subscriber.started_events.detect do |event| event.command['killCursors'] end started_event.should_not be nil found_cursor_id = nil started_event = subscriber.started_events.detect do |event| found = false if event.command['killCursors'] cursor_ids.each do |cursor_id| if event.command['cursors'].map { |c| Utils.int64_value(c) }.include?(cursor_id) found_cursor_id = cursor_id found = true break end end end found end if started_event.nil? 
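# Dump the events we captured to make a missing killCursors event easier to debug.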
p subscriber.started_events end started_event.should_not be nil succeeded_event = subscriber.succeeded_events.detect do |event| event.command_name == 'killCursors' && event.request_id == started_event.request_id end expect(succeeded_event).not_to be_nil expect(succeeded_event.reply['ok']).to eq 1 [found_cursor_id, succeeded_event] end it 'is reaped' do cursor_id_and_kill_event end context 'newer servers' do min_server_fcv '3.2' it 'is really killed' do cursor_id, event = cursor_id_and_kill_event expect(event.reply['cursorsKilled']).to eq([cursor_id]) expect(event.reply['cursorsNotFound']).to be_empty expect(event.reply['cursorsAlive']).to be_empty expect(event.reply['cursorsUnknown']).to be_empty end end end end mongo-ruby-driver-2.21.3/spec/integration/docs_examples_spec.rb000066400000000000000000000123411505113246500246330ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe 'aggregation examples in Ruby' do before(:all) do # In sharded clusters we need to ensure the database exists before running # the tests in this file. begin ClientRegistry.instance.global_client('authorized')['_placeholder'].create rescue Mongo::Error::OperationFailure::Family => e # Collection already exists if e.code != 48 raise end end end let(:client) do authorized_client end context 'Aggregation Example 1 - Simple aggregation' do let(:example_code) do # Start Aggregation Example 1 client[:sales].aggregate( [ { '$match' => { 'items.fruit' => 'banana' } }, { '$sort' => { 'date' => 1 } } ]) # End Aggregation Example 1 end it 'successfully executes the aggregation' do example_code.to_a end end context 'Aggregation Example 2 - $match, $group, $project, $unwind, $sum, $sort, $dayOfWeek' do let(:example_code) do # Start Aggregation Example 2 client[:sales].aggregate( [ { '$unwind' => '$items' }, { '$match' => { 'items.fruit' => 'banana' } }, { '$group' => { '_id' => { 'day' => { '$dayOfWeek' => '$date' } }, 'count' => { '$sum' => '$items.quantity' } } }, { '$project' => { 'dayOfWeek' => '$_id.day', 'numberSold' => '$count', '_id' => 0 } }, { '$sort' => { 'numberSold' => 1 } } ]) # End Aggregation Example 2 end it 'successfully executes the aggregation' do example_code.to_a end end context 'Aggregation Example 3 - $unwind, $group, $sum, $dayOfWeek, $multiply, $project, $cond' do let(:example_code) do # Start Aggregation Example 3 client[:sales].aggregate( [ { '$unwind' => '$items' }, { '$group' => { '_id' => { 'day' => { '$dayOfWeek' => '$date' } }, 'items_sold' => { '$sum' => '$items.quantity' }, 'revenue' => { '$sum' => { '$multiply' => [ '$items.quantity', '$items.price' ] } } } }, { '$project' => { 'day' => '$_id.day', 'revenue' => 1, 'items_sold' => 1, 'discount' => { '$cond' => { 'if' => { '$lte' => ['$revenue', 250]}, 'then' => 25, 'else' => 0 } } } } ]) # End Aggregation Example 3 end it 'successfully executes the aggregation' do example_code.to_a end end context 'Aggregation Example 4 - $lookup, $filter, $match' do min_server_fcv '3.6' let(:example_code) do # Start Aggregation Example 4 client[:sales].aggregate( [ { '$lookup' => { 'from' => 'air_airlines', 'let' => { 'constituents' => '$airlines' }, 'pipeline' => [ { '$match' => { '$expr' => { '$in' => ['$name', '$$constituents'] } } }], 'as' => 'airlines' } }, { '$project' => { '_id' => 0, 'name' => 1, 'airlines' => { '$filter' => { 'input' => '$airlines', 'as' => 'airline', 'cond' => { '$eq' => ['$$airline.country', 'Canada'] } } } } } ]) # End Aggregation Example 4 end it 'successfully executes the 
aggregation' do example_code.to_a end end context 'runCommand Example 1' do let(:example_code) do # Start runCommand Example 1 client.database.command(buildInfo: 1) # End runCommand Example 1 end it 'successfully executes the command' do example_code end end context 'runCommand Example 2' do before do client[:restaurants].drop client[:restaurants].create end let(:example_code) do # Start runCommand Example 2 client.database.command(dbStats: 1) # End runCommand Example 2 end it 'successfully executes the command' do example_code end end context 'Index Example 1 - build simple ascending index' do let(:example_code) do # Start Index Example 1 client[:records].indexes.create_one(score: 1) # End Index Example 1 end it 'successfully executes the command' do example_code end end context 'Index Example 2 - build multikey index with partial filter expression' do let(:example_code) do # Start Index Example 2 client[:records].indexes.create_one({ cuisine: 1, name: 1 }, { partialFilterExpression: { rating: { '$gt' => 5 } } }) # End Index Example 2 end it 'successfully executes the command' do example_code end end end mongo-ruby-driver-2.21.3/spec/integration/error_detection_spec.rb000066400000000000000000000015541505113246500251760ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe 'Error detection' do context 'document contains a not master/node recovering code' do let(:document) { {code: 91} } let(:coll) { authorized_client_without_any_retries['error-detection'] } before do coll.delete_many end context 'cursors not used' do before do coll.insert_one(document) end it 'is not treated as an error when retrieved' do actual = coll.find.first expect(actual['code']).to eq(91) end end context 'cursors used' do before do 10.times do coll.insert_one(document) end end it 'is not treated as an error when retrieved' do actual = coll.find({}, batch_size: 2).first expect(actual['code']).to eq(91) end end end end mongo-ruby-driver-2.21.3/spec/integration/find_options_spec.rb000066400000000000000000000136131505113246500245030ustar00rootroot00000000000000# frozen_string_literal: true require 'spec_helper' describe 'Find operation options' do require_mri require_no_auth min_server_fcv '4.4' let(:subscriber) { Mrss::EventSubscriber.new } let(:seeds) do [ SpecConfig.instance.addresses.first ] end let(:client_options) do {} end let(:collection_options) do {} end let(:client) do ClientRegistry.instance.new_local_client( seeds, SpecConfig.instance.test_options .merge(database: SpecConfig.instance.test_db) .merge(client_options) ).tap do |client| client.subscribe(Mongo::Monitoring::COMMAND, subscriber) end end let(:collection) do client['find_options', collection_options] end let(:find_command) do subscriber.started_events.find { |cmd| cmd.command_name == 'find' } end let(:should_create_collection) { true } before do client['find_options'].drop collection.create if should_create_collection collection.insert_many([ { a: 1 }, { a: 2 }, { a: 3 } ]) end describe 'collation' do let(:client_options) do {} end let(:collation) do { 'locale' => 'en_US' } end context 'when defined on the collection' do let(:collection_options) do { collation: collation } end it 'does not send the collation defined on the collection' do collection.find.to_a expect(find_command.command['collation']).to be_nil end end context 'when defined on the operation' do let(:collection_options) do {} end it 'uses the collation defined on the operation' do collection.find({}, collation: collation).to_a
expect(find_command.command['collation']).to eq(collation) end end context 'when defined on both collection and operation' do let(:collection_options) do { collation: { 'locale' => 'de_AT' } } end let(:should_create_collection) { false } it 'uses the collation defined on the operation' do collection.find({}, collation: collation).to_a expect(find_command.command['collation']).to eq(collation) end end end describe 'read concern' do context 'when defined on the client' do let(:client_options) do { read_concern: { level: :local } } end let(:collection_options) do {} end it 'uses the read concern defined on the client' do collection.find.to_a expect(find_command.command['readConcern']).to eq('level' => 'local') end context 'when defined on the collection' do let(:collection_options) do { read_concern: { level: :majority } } end it 'uses the read concern defined on the collection' do collection.find.to_a expect(find_command.command['readConcern']).to eq('level' => 'majority') end context 'when defined on the operation' do let(:operation_read_concern) do { level: :available } end it 'uses the read concern defined on the operation' do collection.find({}, read_concern: operation_read_concern).to_a expect(find_command.command['readConcern']).to eq('level' => 'available') end end end context 'when defined on the operation' do let(:collection_options) do {} end let(:operation_read_concern) do { level: :available } end it 'uses the read concern defined on the operation' do collection.find({}, read_concern: operation_read_concern).to_a expect(find_command.command['readConcern']).to eq('level' => 'available') end end end context 'when defined on the collection' do let(:client_options) do {} end let(:collection_options) do { read_concern: { level: :majority } } end it 'uses the read concern defined on the collection' do collection.find.to_a expect(find_command.command['readConcern']).to eq('level' => 'majority') end context 'when defined on the operation' do let(:operation_read_concern) do { level: :available } end it 'uses the read concern defined on the operation' do collection.find({}, read_concern: operation_read_concern).to_a expect(find_command.command['readConcern']).to eq('level' => 'available') end end end end describe 'read preference' do require_topology :replica_set context 'when defined on the client' do let(:client_options) do { read: { mode: :secondary } } end let(:collection_options) do {} end it 'uses the read preference defined on the client' do collection.find.to_a expect(find_command.command['$readPreference']).to eq('mode' => 'secondary') end context 'when defined on the collection' do let(:collection_options) do { read: { mode: :secondary_preferred } } end it 'uses the read preference defined on the collection' do collection.find.to_a expect(find_command.command['$readPreference']).to eq('mode' => 'secondaryPreferred') end end end end describe 'cursor type' do let(:collection_options) do { capped: true, size: 1000 } end context 'when cursor type is :tailable' do it 'sets the cursor type to tailable' do collection.find({}, cursor_type: :tailable).first expect(find_command.command['tailable']).to be true expect(find_command.command['awaitData']).to be_falsey end end context 'when cursor type is :tailable_await' do it 'sets the cursor type to tailable_await' do collection.find({}, cursor_type: :tailable_await).first expect(find_command.command['tailable']).to be true expect(find_command.command['awaitData']).to be true end end end end
mongo-ruby-driver-2.21.3/spec/integration/fork_reconnect_spec.rb000066400000000000000000000150101505113246500250020ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe 'fork reconnect' do require_fork require_mri # On multi-shard sharded clusters a succeeding write request does not # guarantee that the next operation will succeed (since it could be sent to # another shard with a dead connection). require_no_multi_mongos let(:client) { authorized_client } let(:server) { client.cluster.next_primary } describe 'monitoring connection' do let(:monitor) do Mongo::Server::Monitor.new(server, [], Mongo::Monitoring.new, server.options.merge( app_metadata: client.cluster.monitor_app_metadata, push_monitor_app_metadata: client.cluster.push_monitor_app_metadata, )) end it 'reconnects' do monitor.send(:do_scan).should be_a(Hash) socket = monitor.connection.send(:socket).send(:socket) (socket.is_a?(Socket) || socket.is_a?(OpenSSL::SSL::SSLSocket)).should be true if pid = fork pid, status = Process.wait2(pid) status.exitstatus.should == 0 else monitor.send(:do_scan).should be_a(Hash) child_socket = monitor.connection.send(:socket).send(:socket) # fileno of child_socket may equal to fileno of socket, # as socket would've been closed first and file descriptors can be # reused by the kernel. child_socket.object_id.should_not == socket.object_id # Exec so that we do not close any clients etc. in the child. exec(Utils::BIN_TRUE) end # Connection should remain serviceable in the parent. # The operation here will be invoked again, since the earlier invocation # was in the child process. monitor.send(:do_scan).should be_a(Hash) # The child closes the connection's socket, but this races with the # parent. The parent can retain the original socket for a while. end end describe 'non-monitoring connection' do let(:connection) do Mongo::Server::Connection.new(server, server.options) end let(:operation) do connection.ping.should be true end it 'does not reconnect' do connection.connect! socket = connection.send(:socket).send(:socket) (socket.is_a?(Socket) || socket.is_a?(OpenSSL::SSL::SSLSocket)).should be true if pid = fork pid, status = Process.wait2(pid) status.exitstatus.should == 0 else Utils.wrap_forked_child do operation child_socket = connection.send(:socket).send(:socket) # fileno of child_socket may equal to fileno of socket, # as socket would've been closed first and file descriptors can be # reused by the kernel. child_socket.object_id.should == socket.object_id end end # The child closes the connection's socket, but this races with the # parent. The parent can retain the original socket for a while. end end describe 'connection pool' do it 'creates a new connection in child' do conn_id = server.with_connection do |connection| connection.id end if pid = fork pid, status = Process.wait2(pid) status.exitstatus.should == 0 else Utils.wrap_forked_child do new_conn_id = server.with_connection do |connection| connection.id end new_conn_id.should_not == conn_id end end parent_conn_id = server.with_connection do |connection| connection.id end parent_conn_id.should == conn_id end end describe 'client' do it 'works after fork' do # Perform a write so that we discover the current primary. # Previous test may have stepped down the server that authorized client # considers the primary. # In standalone deployments there are no retries, hence execute the # operation twice manually. 
      client['foo'].insert_one(test: 1) rescue nil
      client['foo'].insert_one(test: 1)

      if pid = fork
        pid, status = Process.wait2(pid)
        status.exitstatus.should == 0
      else
        Utils.wrap_forked_child do
          client.database.command(ping: 1).should be_a(Mongo::Operation::Result)
        end
      end

      # Perform a read which can be retried, so that the socket close
      # performed by the child is recovered from.
      client['foo'].find(test: 1)
    end

    # Test from Driver Sessions Spec
    # * Create ClientSession
    # * Record its lsid
    # * Delete it (so the lsid is pushed into the pool)
    # * Fork
    # * In the parent, create a ClientSession and assert its lsid is the same.
    # * In the child, create a ClientSession and assert its lsid is different.
    describe 'session pool' do
      it 'is cleared after fork' do
        session = client.get_session.materialize_if_needed
        parent_lsid = session.session_id
        session.end_session

        if pid = fork
          pid, status = Process.wait2(pid)
          status.exitstatus.should == 0
        else
          Utils.wrap_forked_child do
            client.reconnect
            child_session = client.get_session.materialize_if_needed
            child_lsid = child_session.session_id
            expect(child_lsid).not_to eq(parent_lsid)
          end
        end

        session = client.get_session.materialize_if_needed
        session_id = session.session_id
        expect(session_id).to eq(parent_lsid)
      end

      # Test from Driver Sessions Spec
      # * Create ClientSession
      # * Record its lsid
      # * Fork
      # * In the parent, return the ClientSession to the pool, create a new
      #   ClientSession, and assert its lsid is the same.
      # * In the child, return the ClientSession to the pool, create a new
      #   ClientSession, and assert its lsid is different.
      it 'does not return parent process sessions to child process pool' do
        session = client.get_session.materialize_if_needed
        parent_lsid = session.session_id

        if pid = fork
          pid, status = Process.wait2(pid)
          status.exitstatus.should == 0
        else
          Utils.wrap_forked_child do
            client.reconnect
            session.end_session
            child_session = client.get_session.materialize_if_needed
            child_lsid = child_session.session_id
            expect(child_lsid).not_to eq(parent_lsid)
          end
        end

        session.end_session
        session_id = client.get_session.materialize_if_needed.session_id
        expect(session_id).to eq(parent_lsid)
      end
    end
  end
end

mongo-ruby-driver-2.21.3/spec/integration/get_more_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe 'getMore operation' do
  # https://jira.mongodb.org/browse/RUBY-1987
  min_server_fcv '3.2'

  let(:subscriber) { Mrss::EventSubscriber.new }

  let(:client) do
    authorized_client.tap do |client|
      client.subscribe(Mongo::Monitoring::COMMAND, subscriber)
    end
  end

  let(:collection) do
    client['get_more_spec']
  end

  let(:scope) do
    collection.find.batch_size(1).each
  end

  before do
    collection.delete_many
    collection.insert_one(a: 1)
    #collection.insert_one(a: 2)
  end

  let(:get_more_command) do
    event = subscriber.single_command_started_event('getMore')
    event.command['getMore']
  end

  it 'sends cursor id as int64' do
    scope.to_a

    expect(get_more_command).to be_a(BSON::Int64)
  end
end

mongo-ruby-driver-2.21.3/spec/integration/grid_fs_bucket_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe 'GridFS bucket integration' do
  let(:fs) do
    authorized_client.database.fs
  end

  describe 'UTF-8 string write' do
    let(:data) { "hello\u2210" }

    before do
      data.length.should_not == data.bytesize
    end

    shared_examples 'round-trips' do
      it 'round-trips' do
        stream = fs.open_upload_stream('test') do |stream|
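          # GridFS stores raw bytes: whatever encoding data_to_write has here,
          # the download below is expected to come back BINARY-encoded.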
          stream.write(data_to_write)
        end

        actual = nil
        fs.open_download_stream(stream.file_id) do |stream|
          actual = stream.read
        end

        actual.encoding.should == Encoding::BINARY
        actual.should == data.b
      end
    end

    context 'in binary encoding' do
      let(:data_to_write) do
        data.dup.force_encoding('binary').freeze
      end

      it_behaves_like 'round-trips'
    end

    context 'in UTF-8 encoding' do
      let(:data_to_write) do
        data.encoding.should == Encoding::UTF_8
        data.freeze
      end

      it_behaves_like 'round-trips'
    end
  end
end

mongo-ruby-driver-2.21.3/spec/integration/heartbeat_events_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe 'Heartbeat events' do
  class HeartbeatEventsSpecTestException < StandardError; end

  # 4.4 has two monitors and thus issues heartbeats multiple times
  max_server_version '4.2'

  clean_slate_for_all

  let(:subscriber) { Mrss::EventSubscriber.new }

  before do
    Mongo::Monitoring::Global.subscribe(Mongo::Monitoring::SERVER_HEARTBEAT, subscriber)
  end

  after do
    Mongo::Monitoring::Global.unsubscribe(Mongo::Monitoring::SERVER_HEARTBEAT, subscriber)
  end

  let(:address_str) { ClusterConfig.instance.primary_address_str }

  let(:client) do
    new_local_client([address_str],
      SpecConfig.instance.all_test_options.merge(
        server_selection_timeout: 0.1, connect: :direct))
  end

  it 'notifies on successful heartbeats' do
    client.database.command(ping: 1)

    started_event = subscriber.started_events.first
    expect(started_event).not_to be nil
    expect(started_event.address).to be_a(Mongo::Address)
    expect(started_event.address.seed).to eq(address_str)

    succeeded_event = subscriber.succeeded_events.first
    expect(succeeded_event).not_to be nil
    expect(succeeded_event.address).to be_a(Mongo::Address)
    expect(succeeded_event.address.seed).to eq(address_str)

    failed_event = subscriber.failed_events.first
    expect(failed_event).to be nil
  end

  it 'notifies on failed heartbeats' do
    exc = HeartbeatEventsSpecTestException.new
    expect_any_instance_of(Mongo::Server::Monitor).to receive(:check).at_least(:once).and_raise(exc)

    expect do
      client.database.command(ping: 1)
    end.to raise_error(Mongo::Error::NoServerAvailable)

    started_event = subscriber.started_events.first
    expect(started_event).not_to be nil
    expect(started_event.address).to be_a(Mongo::Address)
    expect(started_event.address.seed).to eq(address_str)

    succeeded_event = subscriber.succeeded_events.first
    expect(succeeded_event).to be nil

    failed_event = subscriber.failed_events.first
    expect(failed_event).not_to be nil
    expect(failed_event.error).to be exc
    expect(failed_event.failure).to be exc
    expect(failed_event.address).to be_a(Mongo::Address)
    expect(failed_event.address.seed).to eq(address_str)
  end

  context 'when monitoring option is false' do
    let(:client) do
      new_local_client([address_str],
        SpecConfig.instance.all_test_options.merge(
          server_selection_timeout: 0.1, connect: :direct,
          monitoring: false))
    end

    shared_examples_for 'does not notify on heartbeats' do
      it 'does not notify on heartbeats' do
        client.database.command(ping: 1)

        started_event = subscriber.started_events.first
        expect(started_event).to be nil
      end
    end

    it_behaves_like 'does not notify on heartbeats'

    context 'when a subscriber is added manually' do
      let(:client) do
        sdam_proc = Proc.new do |client|
          client.subscribe(Mongo::Monitoring::SERVER_HEARTBEAT, subscriber)
        end
        new_local_client([address_str],
          SpecConfig.instance.all_test_options.merge(
            server_selection_timeout: 0.1, connect: :direct,
            monitoring: false, sdam_proc: sdam_proc))
      end
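      # For reference, a hedged sketch of the sdam_proc pattern used above:
      # sdam_proc attaches a subscriber before the client starts monitoring,
      # but with monitoring: false, as here, even such a subscriber receives
      # no heartbeat events. (The address below is illustrative.)
      #
      #   sdam_proc = Proc.new do |client|
      #     client.subscribe(Mongo::Monitoring::SERVER_HEARTBEAT, subscriber)
      #   end
      #   Mongo::Client.new(['127.0.0.1:27017'], sdam_proc: sdam_proc,
      #                     monitoring: false)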
      it_behaves_like 'does not notify on heartbeats'
    end
  end
end

mongo-ruby-driver-2.21.3/spec/integration/map_reduce_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe 'Map-reduce operations' do
  let(:client) { authorized_client }
  let(:collection) { client['mr_integration'] }

  let(:subscriber) { Mrss::EventSubscriber.new }

  let(:find_options) { {} }

  let(:operation) do
    collection.find({}, find_options).map_reduce('function(){}', 'function(){}')
  end

  before do
    collection.insert_one(test: 1)

    # Ensure all mongoses are aware of the collection.
    maybe_run_mongos_distincts(collection.database.name, collection.name)

    client.subscribe(Mongo::Monitoring::COMMAND, subscriber)
  end

  let(:event) { subscriber.single_command_started_event('mapReduce') }

  context 'read preference' do
    require_topology :sharded

    context 'specified on client' do
      let(:client) { authorized_client.with(read: {mode: :secondary_preferred }) }

      # RUBY-2706: read preference is not sent on pre-3.6 servers
      min_server_fcv '3.6'

      it 'is sent' do
        operation.to_a

        event.command['$readPreference'].should == {'mode' => 'secondaryPreferred'}
      end
    end

    context 'specified on collection' do
      let(:collection) { client['mr_integration', read: {mode: :secondary_preferred }] }

      # RUBY-2706: read preference is not sent on pre-3.6 servers
      min_server_fcv '3.6'

      it 'is sent' do
        operation.to_a

        event.command['$readPreference'].should == {'mode' => 'secondaryPreferred'}
      end
    end

    context 'specified on operation' do
      let(:find_options) { {read: {mode: :secondary_preferred }} }

      # RUBY-2706: read preference is not sent on pre-3.6 servers
      min_server_fcv '3.6'

      it 'is sent' do
        operation.to_a

        event.command['$readPreference'].should == {'mode' => 'secondaryPreferred'}
      end
    end
  end

  context 'session' do
    min_server_fcv '3.6'

    it 'is sent' do
      operation.to_a

      event.command['lsid'].should_not be nil
    end
  end
end

mongo-ruby-driver-2.21.3/spec/integration/mmapv1_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

# This test is a marker used to verify that the test suite runs on
# mmapv1 storage engine.
describe 'mmapv1' do
  require_mmapv1

  context 'standalone' do
    require_topology :single

    it 'is exercised' do
    end
  end

  context 'replica set' do
    require_topology :replica_set

    it 'is exercised' do
    end
  end

  context 'sharded' do
    require_topology :sharded

    it 'is exercised' do
    end
  end
end

mongo-ruby-driver-2.21.3/spec/integration/mongos_pinning_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe 'Mongos pinning' do
  require_topology :sharded
  min_server_fcv '4.2'

  let(:client) { authorized_client }
  let(:collection) { client['mongos_pinning_spec'] }

  before do
    collection.create
  end

  context 'successful operations' do
    it 'pins and unpins' do
      session = client.start_session
      expect(session.pinned_server).to be nil

      session.start_transaction
      expect(session.pinned_server).to be nil

      primary = client.cluster.next_primary
      collection.insert_one({a: 1}, session: session)
      expect(session.pinned_server).not_to be nil

      session.commit_transaction
      expect(session.pinned_server).not_to be nil

      collection.insert_one({a: 1}, session: session)
      expect(session.pinned_server).to be nil
    end
  end
end

mongo-ruby-driver-2.21.3/spec/integration/ocsp_connectivity_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'lite_spec_helper'

# These tests test the configurations described in
# https://github.com/mongodb/specifications/blob/master/source/ocsp-support/tests/README.md#integration-tests-permutations-to-be-tested
describe 'OCSP connectivity' do
  require_ocsp_connectivity
  clear_ocsp_cache

  let(:client) do
    new_local_client(ENV.fetch('MONGODB_URI'),
      server_selection_timeout: 5,
    )
  end

  if ENV['OCSP_CONNECTIVITY'] == 'fail'
    it 'fails to connect' do
      lambda do
        client.command(ping: 1)
      end.should raise_error(Mongo::Error::NoServerAvailable, /UNKNOWN/)
    end
  else
    it 'works' do
      client.command(ping: 1)
    end
  end
end

mongo-ruby-driver-2.21.3/spec/integration/ocsp_verifier_cache_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'lite_spec_helper'
require 'webrick'

describe Mongo::Socket::OcspVerifier do
  require_ocsp_verifier

  shared_examples 'verifies' do
    context 'mri' do
      fails_on_jruby

      it 'verifies the first time and reads from cache the second time' do
        RSpec::Mocks.with_temporary_scope do
          expect_any_instance_of(Mongo::Socket::OcspVerifier).to receive(:do_verify).and_call_original
          verifier.verify_with_cache.should be true
        end

        RSpec::Mocks.with_temporary_scope do
          expect_any_instance_of(Mongo::Socket::OcspVerifier).not_to receive(:do_verify)
          verifier.verify_with_cache.should be true
        end
      end
    end

    context 'jruby' do
      require_jruby

      # JRuby does not return OCSP endpoints, therefore we never perform
      # any validation.
      # https://github.com/jruby/jruby-openssl/issues/210
      it 'does not verify' do
        RSpec::Mocks.with_temporary_scope do
          expect_any_instance_of(Mongo::Socket::OcspVerifier).to receive(:do_verify).and_call_original
          verifier.verify.should be false
        end

        RSpec::Mocks.with_temporary_scope do
          expect_any_instance_of(Mongo::Socket::OcspVerifier).to receive(:do_verify).and_call_original
          verifier.verify.should be false
        end
      end
    end
  end

  shared_examples 'fails verification' do
    context 'mri' do
      fails_on_jruby

      it 'verifies the first time, reads from cache the second time, raises an exception in both cases' do
        RSpec::Mocks.with_temporary_scope do
          expect_any_instance_of(Mongo::Socket::OcspVerifier).to receive(:do_verify).and_call_original

          lambda do
            verifier.verify
            # Redirect tests receive responses from port 8101,
            # tests without redirects receive responses from port 8100.
          end.should raise_error(Mongo::Error::ServerCertificateRevoked, %r,TLS certificate of 'foo' has been revoked according to 'http://localhost:810[01]/status',)
        end

        RSpec::Mocks.with_temporary_scope do
          expect_any_instance_of(Mongo::Socket::OcspVerifier).not_to receive(:do_verify)

          lambda do
            verifier.verify
            # Redirect tests receive responses from port 8101,
            # tests without redirects receive responses from port 8100.
          end.should raise_error(Mongo::Error::ServerCertificateRevoked, %r,TLS certificate of 'foo' has been revoked according to 'http://localhost:810[01]/status',)
        end
      end
    end

    context 'jruby' do
      require_jruby

      # JRuby does not return OCSP endpoints, therefore we never perform
      # any validation.
      # https://github.com/jruby/jruby-openssl/issues/210
      it 'does not verify' do
        RSpec::Mocks.with_temporary_scope do
          expect_any_instance_of(Mongo::Socket::OcspVerifier).to receive(:do_verify).and_call_original
          verifier.verify.should be false
        end

        RSpec::Mocks.with_temporary_scope do
          expect_any_instance_of(Mongo::Socket::OcspVerifier).to receive(:do_verify).and_call_original
          verifier.verify.should be false
        end
      end
    end
  end

  shared_examples 'does not verify' do
    it 'does not verify and does not raise an exception' do
      RSpec::Mocks.with_temporary_scope do
        expect_any_instance_of(Mongo::Socket::OcspVerifier).to receive(:do_verify).and_call_original
        verifier.verify.should be false
      end

      RSpec::Mocks.with_temporary_scope do
        expect_any_instance_of(Mongo::Socket::OcspVerifier).to receive(:do_verify).and_call_original
        verifier.verify.should be false
      end
    end
  end

  shared_context 'verifier' do |opts|
    algorithm = opts[:algorithm]

    let(:cert_path) { SpecConfig.instance.ocsp_files_dir.join("#{algorithm}/server.pem") }
    let(:ca_cert_path) { SpecConfig.instance.ocsp_files_dir.join("#{algorithm}/ca.pem") }

    let(:cert) { OpenSSL::X509::Certificate.new(File.read(cert_path)) }
    let(:ca_cert) { OpenSSL::X509::Certificate.new(File.read(ca_cert_path)) }

    let(:cert_store) do
      OpenSSL::X509::Store.new.tap do |store|
        store.add_cert(ca_cert)
      end
    end

    let(:verifier) do
      described_class.new('foo', cert, ca_cert, cert_store, timeout: 3)
    end
  end

  include_context 'verifier', algorithm: 'rsa'
  algorithm = 'rsa'

  %w(ca delegate).each do |responder_cert|
    responder_cert_file_name = {
      'ca' => 'ca',
      'delegate' => 'ocsp-responder',
    }.fetch(responder_cert)

    context "when responder uses #{responder_cert} cert" do
      context 'good response' do
        with_ocsp_mock(
          SpecConfig.instance.ocsp_files_dir.join("#{algorithm}/ca.pem"),
          SpecConfig.instance.ocsp_files_dir.join("#{algorithm}/#{responder_cert_file_name}.crt"),
          SpecConfig.instance.ocsp_files_dir.join("#{algorithm}/#{responder_cert_file_name}.key"),
        )

        include_examples 'verifies'
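        # A good response should be served well within the 3-second timeout
        # configured on the verifier above, which the next example asserts.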
        it 'does not wait for the timeout' do
          lambda do
            verifier.verify
          end.should take_shorter_than 3
        end
      end

      context 'revoked response' do
        with_ocsp_mock(
          SpecConfig.instance.ocsp_files_dir.join("#{algorithm}/ca.pem"),
          SpecConfig.instance.ocsp_files_dir.join("#{algorithm}/#{responder_cert_file_name}.crt"),
          SpecConfig.instance.ocsp_files_dir.join("#{algorithm}/#{responder_cert_file_name}.key"),
          fault: 'revoked'
        )

        include_examples 'fails verification'
      end

      context 'unknown response' do
        with_ocsp_mock(
          SpecConfig.instance.ocsp_files_dir.join("#{algorithm}/ca.pem"),
          SpecConfig.instance.ocsp_files_dir.join("#{algorithm}/#{responder_cert_file_name}.crt"),
          SpecConfig.instance.ocsp_files_dir.join("#{algorithm}/#{responder_cert_file_name}.key"),
          fault: 'unknown',
        )

        include_examples 'does not verify'

        it 'does not wait for the timeout' do
          lambda do
            verifier.verify
          end.should take_shorter_than 3
        end
      end
    end
  end
end

mongo-ruby-driver-2.21.3/spec/integration/ocsp_verifier_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'lite_spec_helper'
require 'webrick'

describe Mongo::Socket::OcspVerifier do
  require_ocsp_verifier
  with_openssl_debug

  retry_test sleep: 5

  def self.with_ocsp_responder(port = 8100, path = '/', &setup)
    around do |example|
      server = WEBrick::HTTPServer.new(Port: port)
      server.mount_proc path, &setup
      Thread.new { server.start }
      begin
        example.run
      ensure
        server.shutdown
      end

      ::Utils.wait_for_port_free(port, 5)
    end
  end

  shared_examples 'verifies' do
    context 'mri' do
      fails_on_jruby

      it 'verifies' do
        verifier.verify.should be true
      end
    end

    context 'jruby' do
      require_jruby

      # JRuby does not return OCSP endpoints, therefore we never perform
      # any validation.
      # https://github.com/jruby/jruby-openssl/issues/210
      it 'does not verify' do
        verifier.verify.should be false
      end
    end
  end

  shared_examples 'fails verification' do
    context 'mri' do
      fails_on_jruby

      it 'raises an exception' do
        lambda do
          verifier.verify
          # Redirect tests receive responses from port 8101,
          # tests without redirects receive responses from port 8100.
        end.should raise_error(Mongo::Error::ServerCertificateRevoked, %r,TLS certificate of 'foo' has been revoked according to 'http://localhost:810[01]/status',)
      end

      it 'does not wait for the timeout' do
        lambda do
          lambda do
            verifier.verify
          end.should raise_error(Mongo::Error::ServerCertificateRevoked)
        end.should take_shorter_than 7
      end
    end

    context 'jruby' do
      require_jruby

      # JRuby does not return OCSP endpoints, therefore we never perform
      # any validation.
      # https://github.com/jruby/jruby-openssl/issues/210
      it 'does not verify' do
        verifier.verify.should be false
      end
    end
  end

  shared_examples 'does not verify' do
    it 'does not verify and does not raise an exception' do
      verifier.verify.should be false
    end
  end

  shared_context 'basic verifier' do
    let(:cert) { OpenSSL::X509::Certificate.new(File.read(cert_path)) }
    let(:ca_cert) { OpenSSL::X509::Certificate.new(File.read(ca_cert_path)) }

    let(:cert_store) do
      OpenSSL::X509::Store.new.tap do |store|
        store.add_cert(ca_cert)
      end
    end

    let(:verifier) do
      described_class.new('foo', cert, ca_cert, cert_store, timeout: 7)
    end
  end

  shared_context 'verifier' do |opts|
    algorithm = opts[:algorithm]

    let(:cert_path) { SpecConfig.instance.ocsp_files_dir.join("#{algorithm}/server.pem") }
    let(:ca_cert_path) { SpecConfig.instance.ocsp_files_dir.join("#{algorithm}/ca.pem") }

    include_context 'basic verifier'
  end

  %w(rsa ecdsa).each do |algorithm|
    context "when using #{algorithm} cert" do
      include_context 'verifier', algorithm: algorithm

      context 'responder not responding' do
        include_examples 'does not verify'

        it 'does not wait for the timeout' do
          # Loopback interface should be refusing connections, which will make
          # the operation complete quickly.
          lambda do
            verifier.verify
          end.should take_shorter_than 7
        end
      end

      %w(ca delegate).each do |responder_cert|
        responder_cert_file_name = {
          'ca' => 'ca',
          'delegate' => 'ocsp-responder',
        }.fetch(responder_cert)

        context "when responder uses #{responder_cert} cert" do
          context 'good response' do
            with_ocsp_mock(
              SpecConfig.instance.ocsp_files_dir.join("#{algorithm}/ca.pem"),
              SpecConfig.instance.ocsp_files_dir.join("#{algorithm}/#{responder_cert_file_name}.crt"),
              SpecConfig.instance.ocsp_files_dir.join("#{algorithm}/#{responder_cert_file_name}.key"),
            )

            include_examples 'verifies'

            it 'does not wait for the timeout' do
              lambda do
                verifier.verify
              end.should take_shorter_than 7
            end
          end

          context 'revoked response' do
            with_ocsp_mock(
              SpecConfig.instance.ocsp_files_dir.join("#{algorithm}/ca.pem"),
              SpecConfig.instance.ocsp_files_dir.join("#{algorithm}/#{responder_cert_file_name}.crt"),
              SpecConfig.instance.ocsp_files_dir.join("#{algorithm}/#{responder_cert_file_name}.key"),
              fault: 'revoked'
            )

            include_examples 'fails verification'
          end

          context 'unknown response' do
            with_ocsp_mock(
              SpecConfig.instance.ocsp_files_dir.join("#{algorithm}/ca.pem"),
              SpecConfig.instance.ocsp_files_dir.join("#{algorithm}/#{responder_cert_file_name}.crt"),
              SpecConfig.instance.ocsp_files_dir.join("#{algorithm}/#{responder_cert_file_name}.key"),
              fault: 'unknown',
            )

            include_examples 'does not verify'

            it 'does not wait for the timeout' do
              lambda do
                verifier.verify
              end.should take_shorter_than 7
            end
          end
        end
      end
    end
  end

  context 'when OCSP responder redirects' do
    algorithm = 'rsa'
    responder_cert_file_name = 'ca'

    let(:algorithm) { 'rsa' }
    let(:responder_cert_file_name) { 'ca' }

    context 'one time' do
      with_ocsp_responder do |req, res|
        res.status = 303
        res['locAtion'] = "http://localhost:8101#{req.path}"
        res.body = "See http://localhost:8101#{req.path}"
      end

      include_context 'verifier', algorithm: algorithm

      context 'good response' do
        with_ocsp_mock(
          SpecConfig.instance.ocsp_files_dir.join("#{algorithm}/ca.pem"),
          SpecConfig.instance.ocsp_files_dir.join("#{algorithm}/#{responder_cert_file_name}.crt"),
          SpecConfig.instance.ocsp_files_dir.join("#{algorithm}/#{responder_cert_file_name}.key"),
          port: 8101,
        )

        include_examples 'verifies'

        it 'does not wait for the timeout' do
          lambda do
            verifier.verify
          end.should take_shorter_than 7
        end
      end
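      # The responder stub above listens on port 8100 and 303-redirects each
      # request to the OCSP mock on port 8101, which serves the actual status.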
      context 'revoked response' do
        with_ocsp_mock(
          SpecConfig.instance.ocsp_files_dir.join("#{algorithm}/ca.pem"),
          SpecConfig.instance.ocsp_files_dir.join("#{algorithm}/#{responder_cert_file_name}.crt"),
          SpecConfig.instance.ocsp_files_dir.join("#{algorithm}/#{responder_cert_file_name}.key"),
          fault: 'revoked',
          port: 8101,
        )

        include_examples 'fails verification'
      end

      context 'unknown response' do
        with_ocsp_mock(
          SpecConfig.instance.ocsp_files_dir.join("#{algorithm}/ca.pem"),
          SpecConfig.instance.ocsp_files_dir.join("#{algorithm}/#{responder_cert_file_name}.crt"),
          SpecConfig.instance.ocsp_files_dir.join("#{algorithm}/#{responder_cert_file_name}.key"),
          fault: 'unknown',
          port: 8101,
        )

        include_examples 'does not verify'

        it 'does not wait for the timeout' do
          lambda do
            verifier.verify
          end.should take_shorter_than 7
        end
      end
    end

    context 'infinitely' do
      with_ocsp_mock(
        SpecConfig.instance.ocsp_files_dir.join("#{algorithm}/ca.pem"),
        SpecConfig.instance.ocsp_files_dir.join("#{algorithm}/#{responder_cert_file_name}.crt"),
        SpecConfig.instance.ocsp_files_dir.join("#{algorithm}/#{responder_cert_file_name}.key"),
        port: 8101,
      )

      with_ocsp_responder do |req, res|
        res.status = 303
        res['locAtion'] = req.path
        res.body = "See #{req.path} indefinitely"
      end

      include_context 'verifier', algorithm: algorithm

      include_examples 'does not verify'
    end
  end

  context 'responder returns unexpected status code' do
    include_context 'verifier', algorithm: 'rsa'

    [400, 404, 500, 503].each do |code|
      context "code #{code}" do
        with_ocsp_responder do |req, res|
          res.status = code
          res.body = "HTTP #{code}"
        end

        include_examples 'does not verify'
      end
    end

    context 'code 204' do
      with_ocsp_responder do |req, res|
        res.status = 204
      end

      include_examples 'does not verify'
    end
  end
end

mongo-ruby-driver-2.21.3/spec/integration/operation_failure_code_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe 'OperationFailure code' do
  let(:collection_name) { 'operation_failure_code_spec' }
  let(:collection) { authorized_client[collection_name] }

  before do
    collection.delete_many
  end

  context 'duplicate key error' do
    it 'is set' do
      begin
        collection.insert_one(_id: 1)
        collection.insert_one(_id: 1)
        fail('Should have raised')
      rescue Mongo::Error::OperationFailure::Family => e
        expect(e.code).to eq(11000)
        # 4.0 and 4.2 sharded clusters set code name.
        # 4.0 and 4.2 replica sets and standalones do not,
        # and neither do older versions.
        expect([nil, 'DuplicateKey']).to include(e.code_name)
      end
    end
  end
end

mongo-ruby-driver-2.21.3/spec/integration/operation_failure_message_spec.rb

# rubocop:todo all

require 'spec_helper'

describe 'OperationFailure message' do
  let(:client) { authorized_client }
  let(:collection_name) { 'operation_failure_message_spec' }
  let(:collection) { client[collection_name] }

  context 'crud error' do
    before do
      collection.delete_many
    end

    context 'a command error with code and code name' do
      context 'on modern servers that provide code name' do
        # Sharded clusters include the code name: SERVER-55582
        require_topology :single, :replica_set

        min_server_fcv '3.4'

        it 'reports code, code name and message' do
          begin
            client.command(bogus_command: nil)
            fail('Should have raised')
          rescue Mongo::Error::OperationFailure::Family => e
            e.code_name.should == 'CommandNotFound'
            e.message.should =~ %r,\A\[59:CommandNotFound\]: no such (?:command|cmd): '?bogus_command'?,
          end
        end
      end

      context 'on legacy servers where code name is not provided' do
        max_server_version '3.2'

        it 'reports code and message' do
          begin
            client.command(bogus_command: nil)
            fail('Should have raised')
          rescue Mongo::Error::OperationFailure::Family => e
            e.code_name.should be nil
            e.message.should =~ %r,\A\[59\]: no such (?:command|cmd): '?bogus_command'?,
          end
        end
      end
    end

    context 'a write error with code and no code name' do
      # Sharded clusters include the code name: SERVER-55582
      require_topology :single, :replica_set

      it 'reports code name, code and message' do
        begin
          collection.insert_one(_id: 1)
          collection.insert_one(_id: 1)
          fail('Should have raised')
        rescue Mongo::Error::OperationFailure::Family => e
          e.code_name.should be nil
          e.message.should =~ %r,\A\[11000\]: (?:insertDocument :: caused by :: 11000 )?E11000 duplicate key error (?:collection|index):,
        end
      end
    end
  end

  context 'authentication error' do
    require_no_external_user

    let(:client) do
      authorized_client.with(user: 'bogus', password: 'bogus')
    end

    context 'on modern servers where code name is provided' do
      min_server_fcv '3.4'

      it 'includes code and code name in the message' do
        lambda do
          client.command(ping: 1)
        end.should raise_error(Mongo::Auth::Unauthorized, /User bogus.*is not authorized.*\[18:AuthenticationFailed\]: Authentication failed/)
      end
    end

    context 'on legacy servers where code name is not provided' do
      max_server_version '3.2'

      it 'includes code only in the message' do
        lambda do
          client.command(ping: 1)
        end.should raise_error(Mongo::Auth::Unauthorized, /User bogus.*is not authorized.*\[18\]: (?:Authentication|auth) failed/)
      end
    end
  end
end

mongo-ruby-driver-2.21.3/spec/integration/query_cache_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe 'QueryCache' do
  around do |spec|
    Mongo::QueryCache.clear
    Mongo::QueryCache.cache { spec.run }
  end

  before do
    authorized_collection.delete_many
    subscriber.clear_events!
  end

  before(:all) do
    # It is likely that there are other session leaks in the driver that are
    # unrelated to the query cache. Clear the SessionRegistry at the start of
    # these tests in order to detect leaks that occur only within the scope of
    # these tests.
    #
    # Other session leaks will be detected and addressed as part of RUBY-2391.
    Mrss::SessionRegistry.instance.clear_registry
  end

  after do
    Mrss::SessionRegistry.instance.verify_sessions_ended!
  end

  let(:subscriber) { Mrss::EventSubscriber.new }

  let(:client) do
    authorized_client.tap do |client|
      client.subscribe(Mongo::Monitoring::COMMAND, subscriber)
    end
  end

  let(:authorized_collection) { client['collection_spec'] }

  let(:events) do
    subscriber.command_started_events('find')
  end

  describe '#cache' do
    before do
      Mongo::QueryCache.enabled = false
      authorized_collection.insert_one({ name: 'testing' })
      authorized_collection.find(name: 'testing').to_a
    end

    it 'enables the query cache inside the block' do
      Mongo::QueryCache.cache do
        authorized_collection.find(name: 'testing').to_a
        expect(Mongo::QueryCache.enabled?).to be(true)
        expect(Mongo::QueryCache.send(:cache_table).length).to eq(1)
        expect(events.length).to eq(2)
      end

      authorized_collection.find(name: 'testing').to_a
      expect(Mongo::QueryCache.enabled?).to be(false)
      expect(Mongo::QueryCache.send(:cache_table).length).to eq(1)
      expect(events.length).to eq(2)
    end
  end

  describe '#uncached' do
    before do
      authorized_collection.insert_one({ name: 'testing' })
      authorized_collection.find(name: 'testing').to_a
    end

    it 'disables the query cache inside the block' do
      expect(Mongo::QueryCache.send(:cache_table).length).to eq(1)

      Mongo::QueryCache.uncached do
        authorized_collection.find(name: 'testing').to_a
        expect(Mongo::QueryCache.enabled?).to be(false)
        expect(events.length).to eq(2)
      end

      authorized_collection.find(name: 'testing').to_a
      expect(Mongo::QueryCache.enabled?).to be(true)
      expect(Mongo::QueryCache.send(:cache_table).length).to eq(1)
      expect(events.length).to eq(2)
    end
  end

  describe 'query with multiple batches' do
    before do
      102.times { |i| authorized_collection.insert_one(_id: i) }
    end

    let(:expected_results) { [*0..101].map { |id| { "_id" => id } } }

    it 'returns the correct result' do
      result = authorized_collection.find.to_a
      expect(result.length).to eq(102)
      expect(result).to eq(expected_results)
    end

    it 'returns the correct result multiple times' do
      result1 = authorized_collection.find.to_a
      result2 = authorized_collection.find.to_a
      expect(result1).to eq(expected_results)
      expect(result2).to eq(expected_results)
    end

    it 'caches the query' do
      authorized_collection.find.to_a
      authorized_collection.find.to_a
      expect(subscriber.command_started_events('find').length).to eq(1)
      expect(subscriber.command_started_events('getMore').length).to eq(1)
    end

    it 'uses cached cursor when limited' do
      authorized_collection.find.to_a
      result = authorized_collection.find({}, limit: 5).to_a

      expect(result.length).to eq(5)
      expect(result).to eq(expected_results.first(5))
      expect(subscriber.command_started_events('find').length).to eq(1)
      expect(subscriber.command_started_events('getMore').length).to eq(1)
    end

    it 'can be used with a block API' do
      authorized_collection.find.to_a

      result = []
      authorized_collection.find.each do |doc|
        result << doc
      end

      expect(result).to eq(expected_results)
      expect(subscriber.command_started_events('find').length).to eq(1)
      expect(subscriber.command_started_events('getMore').length).to eq(1)
    end

    context 'when the cursor isn\'t fully iterated the first time' do
      it 'continues iterating' do
        result1 = authorized_collection.find.first(5)

        expect(result1.length).to eq(5)
        expect(result1).to eq(expected_results.first(5))
        expect(subscriber.command_started_events('find').length).to eq(1)
        expect(subscriber.command_started_events('getMore').length).to eq(0)

        result2 = authorized_collection.find.to_a

        expect(result2.length).to eq(102)
        expect(result2).to eq(expected_results)
        expect(subscriber.command_started_events('find').length).to eq(1)
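        # The full iteration resumes from the cached cursor: still a single
        # find, plus one getMore for the remaining batches.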
        expect(subscriber.command_started_events('getMore').length).to eq(1)
      end

      it 'can be iterated multiple times' do
        authorized_collection.find.first(5)
        authorized_collection.find.to_a

        result = authorized_collection.find.to_a

        expect(result.length).to eq(102)
        expect(result).to eq(expected_results)
        expect(subscriber.command_started_events('find').length).to eq(1)
        expect(subscriber.command_started_events('getMore').length).to eq(1)
      end

      it 'can be used with a block API' do
        authorized_collection.find.first(5)

        result = []
        authorized_collection.find.each do |doc|
          result << doc
        end

        expect(result.length).to eq(102)
        expect(result).to eq(expected_results)
        expect(subscriber.command_started_events('find').length).to eq(1)
        expect(subscriber.command_started_events('getMore').length).to eq(1)
      end
    end
  end

  describe 'queries with read concern' do
    require_wired_tiger
    min_server_fcv '3.6'

    before do
      authorized_client['test', write_concern: { w: :majority }].drop
    end

    context 'when two queries have same read concern' do
      before do
        authorized_client['test', read_concern: { level: :majority }].find.to_a
        authorized_client['test', read_concern: { level: :majority }].find.to_a
      end

      it 'executes one query' do
        expect(events.length).to eq(1)
      end
    end

    context 'when two queries have different read concerns' do
      before do
        authorized_client['test', read_concern: { level: :majority }].find.to_a
        authorized_client['test', read_concern: { level: :local }].find.to_a
      end

      it 'executes two queries' do
        expect(events.length).to eq(2)
      end
    end
  end

  describe 'queries with read preference' do
    before do
      subscriber.clear_events!
      authorized_client['test'].drop
    end

    context 'when two queries have different read preferences' do
      before do
        authorized_client['test', read: { mode: :primary }].find.to_a
        authorized_client['test', read: { mode: :primary_preferred }].find.to_a
      end

      it 'executes two queries' do
        expect(events.length).to eq(2)
      end
    end

    context 'when two queries have same read preference' do
      before do
        authorized_client['test', read: { mode: :primary }].find.to_a
        authorized_client['test', read: { mode: :primary }].find.to_a
      end

      it 'executes one query' do
        expect(events.length).to eq(1)
      end
    end
  end

  describe 'query fills up entire batch' do
    before do
      subscriber.clear_events!
      authorized_client['test'].drop
      2.times { |i| authorized_client['test'].insert_one(_id: i) }
    end

    let(:expected_result) do
      [{ "_id" => 0 }, { "_id" => 1 }]
    end

    # When the last batch runs out, try_next will return nil instead of a
    # document. This test checks that nil is not added to the list of cached
    # documents or returned as a result.
    it 'returns the correct response' do
      expect(authorized_client['test'].find({}, batch_size: 2).to_a).to eq(expected_result)
      expect(authorized_client['test'].find({}, batch_size: 2).to_a).to eq(expected_result)
    end
  end

  context 'when querying in the same collection' do
    before do
      10.times do |i|
        authorized_collection.insert_one(test: i)
      end
    end

    context 'when query cache is disabled' do
      before do
        Mongo::QueryCache.enabled = false
        authorized_collection.find(test: 1).to_a
      end

      it 'queries again' do
        authorized_collection.find(test: 1).to_a
        expect(events.length).to eq(2)
        expect(Mongo::QueryCache.send(:cache_table).length).to eq(0)
      end
    end

    context 'when query cache is enabled' do
      before do
        authorized_collection.find(test: 1).to_a
      end

      it 'does not query again' do
        authorized_collection.find(test: 1).to_a
        expect(events.length).to eq(1)
        expect(Mongo::QueryCache.send(:cache_table).length).to eq(1)
      end
    end

    context 'when query has collation' do
      min_server_fcv '3.4'

      let(:options1) do
        { :collation => { locale: 'fr' } }
      end

      let(:options2) do
        { collation: { locale: 'en_US' } }
      end

      before do
        authorized_collection.find({ test: 3 }, options1).to_a
      end

      context 'when query has the same collation' do
        it 'uses the cache' do
          authorized_collection.find({ test: 3 }, options1).to_a
          expect(events.length).to eq(1)
        end
      end

      context 'when query has a different collation' do
        it 'queries again' do
          authorized_collection.find({ test: 3 }, options2).to_a
          expect(events.length).to eq(2)
          expect(Mongo::QueryCache.send(:cache_table)['ruby-driver.collection_spec'].length).to eq(2)
        end
      end
    end

    describe 'queries with limits' do
      context 'when the first query has no limit and the second does' do
        before do
          authorized_collection.find.to_a.count
        end

        it 'uses the cache' do
          results_limit_5 = authorized_collection.find.limit(5).to_a
          results_limit_negative_5 = authorized_collection.find.limit(-5).to_a
          results_limit_3 = authorized_collection.find.limit(3).to_a
          results_limit_negative_3 = authorized_collection.find.limit(-3).to_a
          results_no_limit = authorized_collection.find.to_a
          results_limit_0 = authorized_collection.find.limit(0).to_a

          expect(results_limit_5.length).to eq(5)
          expect(results_limit_5.map { |r| r["test"] }).to eq([0, 1, 2, 3, 4])

          expect(results_limit_negative_5.length).to eq(5)
          expect(results_limit_negative_5.map { |r| r["test"] }).to eq([0, 1, 2, 3, 4])

          expect(results_limit_3.length).to eq(3)
          expect(results_limit_3.map { |r| r["test"] }).to eq([0, 1, 2])

          expect(results_limit_negative_3.length).to eq(3)
          expect(results_limit_negative_3.map { |r| r["test"] }).to eq([0, 1, 2])

          expect(results_no_limit.length).to eq(10)
          expect(results_no_limit.map { |r| r["test"] }).to eq([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])

          expect(results_limit_0.length).to eq(10)
          expect(results_limit_0.map { |r| r["test"] }).to eq([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])

          expect(events.length).to eq(1)
        end
      end

      context 'when the first query has a 0 limit' do
        before do
          authorized_collection.find.limit(0).to_a
        end

        it 'uses the cache' do
          results_limit_5 = authorized_collection.find.limit(5).to_a
          results_limit_negative_5 = authorized_collection.find.limit(-5).to_a
          results_limit_3 = authorized_collection.find.limit(3).to_a
          results_limit_negative_3 = authorized_collection.find.limit(-3).to_a
          results_no_limit = authorized_collection.find.to_a
          results_limit_0 = authorized_collection.find.limit(0).to_a

          expect(results_limit_5.length).to eq(5)
          expect(results_limit_5.map { |r| r["test"] }).to eq([0, 1, 2, 3, 4])

          expect(results_limit_negative_5.length).to eq(5)
|r| r["test"] }).to eq([0, 1, 2, 3, 4]) expect(results_limit_3.length).to eq(3) expect(results_limit_3.map { |r| r["test"] }).to eq([0, 1, 2]) expect(results_limit_negative_3.length).to eq(3) expect(results_limit_negative_3.map { |r| r["test"] }).to eq([0, 1, 2]) expect(results_no_limit.length).to eq(10) expect(results_no_limit.map { |r| r["test"] }).to eq([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) expect(results_limit_0.length).to eq(10) expect(results_limit_0.map { |r| r["test"] }).to eq([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) expect(events.length).to eq(1) end end context 'when the first query has a limit' do before do authorized_collection.find.limit(2).to_a end context 'and the second query has a larger limit' do let(:results) { authorized_collection.find.limit(3).to_a } it 'queries again' do expect(results.length).to eq(3) expect(results.map { |result| result["test"] }).to eq([0, 1, 2]) expect(events.length).to eq(2) end end context 'and two queries are performed with a larger limit' do it 'uses the query cache for the third query' do results1 = authorized_collection.find.limit(3).to_a results2 = authorized_collection.find.limit(3).to_a expect(results1.length).to eq(3) expect(results1.map { |r| r["test"] }).to eq([0, 1, 2]) expect(results2.length).to eq(3) expect(results2.map { |r| r["test"] }).to eq([0, 1, 2]) expect(events.length).to eq(2) end end context 'and two queries are performed with a larger negative limit' do it 'uses the query cache for the third query' do results1 = authorized_collection.find.limit(-3).to_a results2 = authorized_collection.find.limit(-3).to_a expect(results1.length).to eq(3) expect(results1.map { |r| r["test"] }).to eq([0, 1, 2]) expect(results2.length).to eq(3) expect(results2.map { |r| r["test"] }).to eq([0, 1, 2]) expect(events.length).to eq(2) end end context 'and the second query has a smaller limit' do let(:results) { authorized_collection.find.limit(1).to_a } it 'uses the cached query' do expect(results.count).to eq(1) expect(results.first["test"]).to eq(0) expect(events.length).to eq(1) end end context 'and the second query has a smaller negative limit' do let(:results) { authorized_collection.find.limit(-1).to_a } it 'uses the cached query' do expect(results.count).to eq(1) expect(results.first["test"]).to eq(0) expect(events.length).to eq(1) end end context 'and the second query has no limit' do it 'queries again' do expect(authorized_collection.find.to_a.count).to eq(10) expect(events.length).to eq(2) end end end context 'when the first query has a negative limit' do before do authorized_collection.find.limit(-2).to_a end context 'and the second query has a larger limit' do let(:results) { authorized_collection.find.limit(3).to_a } it 'queries again' do expect(results.length).to eq(3) expect(results.map { |result| result["test"] }).to eq([0, 1, 2]) expect(events.length).to eq(2) end end context 'and the second query has a larger negative limit' do let(:results) { authorized_collection.find.limit(-3).to_a } it 'queries again' do expect(results.length).to eq(3) expect(results.map { |result| result["test"] }).to eq([0, 1, 2]) expect(events.length).to eq(2) end end context 'and two queries are performed with a larger limit' do it 'uses the query cache for the third query' do results1 = authorized_collection.find.limit(3).to_a results2 = authorized_collection.find.limit(3).to_a expect(results1.length).to eq(3) expect(results1.map { |r| r["test"] }).to eq([0, 1, 2]) expect(results2.length).to eq(3) expect(results2.map { |r| r["test"] }).to eq([0, 1, 2]) 
            expect(events.length).to eq(2)
          end
        end

        context 'and two queries are performed with a larger negative limit' do
          it 'uses the query cache for the third query' do
            results1 = authorized_collection.find.limit(-3).to_a
            results2 = authorized_collection.find.limit(-3).to_a

            expect(results1.length).to eq(3)
            expect(results1.map { |r| r["test"] }).to eq([0, 1, 2])

            expect(results2.length).to eq(3)
            expect(results2.map { |r| r["test"] }).to eq([0, 1, 2])

            expect(events.length).to eq(2)
          end
        end

        context 'and the second query has a smaller limit' do
          let(:results) { authorized_collection.find.limit(1).to_a }

          it 'uses the cached query' do
            expect(results.count).to eq(1)
            expect(results.first["test"]).to eq(0)
            expect(events.length).to eq(1)
          end
        end

        context 'and the second query has a smaller negative limit' do
          let(:results) { authorized_collection.find.limit(-1).to_a }

          it 'uses the cached query' do
            expect(results.count).to eq(1)
            expect(results.first["test"]).to eq(0)
            expect(events.length).to eq(1)
          end
        end

        context 'and the second query has no limit' do
          it 'queries again' do
            expect(authorized_collection.find.to_a.count).to eq(10)
            expect(events.length).to eq(2)
          end
        end
      end
    end

    context 'when querying only the first' do
      before do
        5.times do |i|
          authorized_collection.insert_one(test: 11)
        end
      end

      before do
        authorized_collection.find({test: 11}).to_a
      end

      it 'does not query again' do
        expect(authorized_collection.find({test: 11}).count).to eq(5)
        authorized_collection.find({test: 11}).first
        expect(events.length).to eq(1)
      end

      context 'when limiting the result' do
        it 'does not query again' do
          authorized_collection.find({test: 11}, limit: 2).to_a
          expect(authorized_collection.find({test: 11}, limit: 2).to_a.count).to eq(2)
          expect(events.length).to eq(1)
        end
      end
    end

    context 'when specifying a different skip value' do
      before do
        authorized_collection.find({}, {limit: 2, skip: 3}).to_a
      end

      it 'queries again' do
        results = authorized_collection.find({}, {limit: 2, skip: 5}).to_a
        expect(results.count).to eq(2)
        expect(events.length).to eq(2)
      end
    end

    context 'when sorting documents' do
      before do
        authorized_collection.find({}, desc).to_a
      end

      let(:desc) do
        { sort: {test: -1} }
      end

      let(:asc) do
        { sort: {test: 1} }
      end

      context 'with different selector' do
        it 'queries again' do
          authorized_collection.find({}, asc).to_a
          expect(events.length).to eq(2)
        end
      end

      it 'does not query again' do
        authorized_collection.find({}, desc).to_a
        expect(events.length).to eq(1)
      end
    end

    context 'when inserting new documents' do
      context 'when inserting and querying from same collection' do
        before do
          authorized_collection.find.to_a
          authorized_collection.insert_one({ name: "bob" })
        end

        it 'queries again' do
          authorized_collection.find.to_a
          expect(events.length).to eq(2)
        end
      end

      context 'when inserting and querying from different collections' do
        before do
          authorized_collection.find.to_a
          authorized_client['different_collection'].insert_one({ name: "bob" })
        end

        it 'uses the cached query' do
          authorized_collection.find.to_a
          expect(events.length).to eq(1)
        end
      end
    end

    [:delete_many, :delete_one].each do |method|
      context "when deleting with #{method}" do
        context 'when deleting and querying from same collection' do
          before do
            authorized_collection.find.to_a
            authorized_collection.send(method)
          end

          it 'queries again' do
            authorized_collection.find.to_a
            expect(events.length).to eq(2)
          end
        end

        context 'when deleting and querying from different collections' do
          before do
            authorized_collection.find.to_a
            authorized_client['different_collection'].send(method)
          end
          it 'uses the cached query' do
            authorized_collection.find.to_a
            expect(events.length).to eq(1)
          end
        end
      end
    end

    [:find_one_and_delete, :find_one_and_replace, :find_one_and_update, :replace_one].each do |method|
      context "when updating with #{method}" do
        context 'when updating and querying from same collection' do
          before do
            authorized_collection.find.to_a
            authorized_collection.send(method, { field: 'value' }, { field: 'new value' })
          end

          it 'queries again' do
            authorized_collection.find.to_a
            expect(events.length).to eq(2)
          end
        end

        context 'when updating and querying from different collections' do
          before do
            authorized_collection.find.to_a
            authorized_client['different_collection'].send(method, { field: 'value' }, { field: 'new value' })
          end

          it 'uses the cached query' do
            authorized_collection.find.to_a
            expect(events.length).to eq(1)
          end
        end
      end
    end

    [:update_one, :update_many].each do |method|
      context "when updating with ##{method}" do
        context 'when updating and querying from same collection' do
          before do
            authorized_collection.find.to_a
            authorized_collection.send(method, { field: 'value' }, { "$inc" => { :field => 1 } })
          end

          it 'queries again' do
            authorized_collection.find.to_a
            expect(events.length).to eq(2)
          end
        end

        context 'when updating and querying from different collections' do
          before do
            authorized_collection.find.to_a
            authorized_client['different_collection'].send(method, { field: 'value' }, { "$inc" => { :field => 1 } })
          end

          it 'uses the cached query' do
            authorized_collection.find.to_a
            expect(events.length).to eq(1)
          end
        end
      end
    end

    context 'when performing bulk write' do
      context 'with insert_one' do
        context 'when inserting and querying from same collection' do
          before do
            authorized_collection.find.to_a
            authorized_collection.bulk_write([ { insert_one: { name: 'bob' } } ])
          end

          it 'queries again' do
            authorized_collection.find.to_a
            expect(events.length).to eq(2)
          end
        end

        context 'when inserting and querying from different collection' do
          before do
            authorized_collection.find.to_a
            authorized_client['different_collection'].bulk_write(
              [ { insert_one: { name: 'bob' } } ]
            )
          end

          it 'uses the cached query' do
            authorized_collection.find.to_a
            expect(events.length).to eq(1)
          end
        end
      end

      [:update_one, :update_many].each do |method|
        context "with #{method}" do
          context 'when updating and querying from same collection' do
            before do
              authorized_collection.find.to_a
              authorized_collection.bulk_write([
                {
                  method => {
                    filter: { field: 'value' },
                    update: { '$set' => { field: 'new value' } }
                  }
                }
              ])
            end

            it 'queries again' do
              authorized_collection.find.to_a
              expect(events.length).to eq(2)
            end
          end

          context 'when updating and querying from different collection' do
            before do
              authorized_collection.find.to_a
              authorized_client['different_collection'].bulk_write([
                {
                  method => {
                    filter: { field: 'value' },
                    update: { '$set' => { field: 'new value' } }
                  }
                }
              ])
            end

            it 'uses the cached query' do
              authorized_collection.find.to_a
              expect(events.length).to eq(1)
            end
          end
        end
      end

      [:delete_one, :delete_many].each do |method|
        context "with #{method}" do
          context 'when delete and querying from same collection' do
            before do
              authorized_collection.find.to_a
              authorized_collection.bulk_write([
                {
                  method => {
                    filter: { field: 'value' },
                  }
                }
              ])
            end

            it 'queries again' do
              authorized_collection.find.to_a
              expect(events.length).to eq(2)
            end
          end

          context 'when delete and querying from different collection' do
            before do
              authorized_collection.find.to_a
              authorized_client['different_collection'].bulk_write([
                {
                  method => {
                    filter: { field: 'value' },
                  }
                }
              ])
            end

            it 'uses the cached query' do
              authorized_collection.find.to_a
              expect(events.length).to eq(1)
            end
          end
        end
      end

      context 'with replace_one' do
        context 'when replacing and querying from same collection' do
          before do
            authorized_collection.find.to_a
            authorized_collection.bulk_write([
              {
                replace_one: {
                  filter: { field: 'value' },
                  replacement: { field: 'new value' }
                }
              }
            ])
          end

          it 'queries again' do
            authorized_collection.find.to_a
            expect(events.length).to eq(2)
          end
        end

        context 'when replacing and querying from different collection' do
          before do
            authorized_collection.find.to_a
            authorized_client['different_collection'].bulk_write([
              {
                replace_one: {
                  filter: { field: 'value' },
                  replacement: { field: 'new value' }
                }
              }
            ])
          end

          it 'uses the cached query' do
            authorized_collection.find.to_a
            expect(events.length).to eq(1)
          end
        end
      end

      context 'when query occurs between bulk write creation and execution' do
        before do
          authorized_collection.delete_many
        end

        it 'queries again' do
          bulk_write = Mongo::BulkWrite.new(
            authorized_collection,
            [{ insert_one: { test: 1 } }]
          )

          expect(authorized_collection.find(test: 1).to_a.length).to eq(0)
          bulk_write.execute
          expect(authorized_collection.find(test: 1).to_a.length).to eq(1)
          expect(events.length).to eq(2)
        end
      end
    end

    context 'when aggregating with $out' do
      before do
        authorized_collection.find.to_a
        authorized_collection.aggregate([
          { '$match' => { test: 1 } },
          { '$out' => { coll: 'new_coll' } }
        ])
      end

      it 'queries again' do
        authorized_collection.find.to_a
        expect(events.length).to eq(2)
      end

      it 'clears the cache' do
        expect(Mongo::QueryCache.send(:cache_table)).to be_empty
      end
    end

    context 'when aggregating with $merge' do
      min_server_fcv '4.2'

      before do
        authorized_collection.delete_many
        authorized_collection.find.to_a
        authorized_collection.aggregate([
          { '$match' => { 'test' => 1 } },
          {
            '$merge' => {
              into: {
                db: SpecConfig.instance.test_db,
                coll: 'new_coll',
              },
              on: "_id",
              whenMatched: "replace",
              whenNotMatched: "insert",
            }
          }
        ])
      end

      it 'queries again' do
        authorized_collection.find.to_a
        expect(events.length).to eq(2)
      end

      it 'clears the cache' do
        expect(Mongo::QueryCache.send(:cache_table)).to be_empty
      end
    end
  end

  context 'when aggregating' do
    before do
      3.times { authorized_collection.insert_one(test: 1) }
    end

    let(:events) do
      subscriber.command_started_events('aggregate')
    end

    let(:aggregation) do
      authorized_collection.aggregate([ { '$match' => { test: 1 } } ])
    end

    it 'caches the aggregation' do
      expect(aggregation.to_a.length).to eq(3)
      expect(aggregation.to_a.length).to eq(3)
      expect(events.length).to eq(1)
    end

    context 'with read concern' do
      require_wired_tiger
      min_server_fcv '3.6'

      let(:aggregation_read_concern) do
        authorized_client['collection_spec', { read_concern: { level: :local } }]
          .aggregate([ { '$match' => { test: 1 } } ])
      end

      it 'queries twice' do
        expect(aggregation.to_a.length).to eq(3)
        expect(aggregation_read_concern.to_a.length).to eq(3)
        expect(events.length).to eq(2)
      end
    end

    context 'with read preference' do
      let(:aggregation_read_preference) do
        authorized_client['collection_spec', { read: { mode: :primary } }]
          .aggregate([ { '$match' => { test: 1 } } ])
      end

      it 'queries twice' do
        expect(aggregation.to_a.length).to eq(3)
        expect(aggregation_read_preference.to_a.length).to eq(3)
        expect(events.length).to eq(2)
      end
    end

    context 'when collation is specified' do
      min_server_fcv '3.4'

      let(:aggregation_collation) do
        authorized_collection.aggregate(
          [ { '$match' => { test: 1 } } ],
          { collation: { locale: 'fr' } }
        )
      end

      it 'queries twice' do
        expect(aggregation.to_a.length).to eq(3)
        expect(aggregation_collation.to_a.length).to eq(3)
        expect(events.length).to eq(2)
      end
    end

    context 'when insert_one is performed on another collection' do
      before do
        aggregation.to_a
        authorized_client['different_collection'].insert_one(name: 'bob')
        aggregation.to_a
      end

      it 'queries again' do
        expect(events.length).to eq(2)
      end
    end

    context 'when insert_many is performed on another collection' do
      before do
        aggregation.to_a
        authorized_client['different_collection'].insert_many([name: 'bob'])
        aggregation.to_a
      end

      it 'queries again' do
        expect(events.length).to eq(2)
      end
    end

    [:delete_many, :delete_one].each do |method|
      context "when #{method} is performed on another collection" do
        before do
          aggregation.to_a
          authorized_client['different_collection'].send(method)
          aggregation.to_a
        end

        it 'queries again' do
          expect(events.length).to eq(2)
        end
      end
    end

    [:find_one_and_delete, :find_one_and_replace, :find_one_and_update, :replace_one].each do |method|
      context "when #{method} is performed on another collection" do
        before do
          aggregation.to_a
          authorized_client['different_collection'].send(method, { field: 'value' }, { field: 'new value' })
          aggregation.to_a
        end

        it 'queries again' do
          expect(events.length).to eq(2)
        end
      end
    end

    [:update_one, :update_many].each do |method|
      context 'when update_many is performed on another collection' do
        before do
          aggregation.to_a
          authorized_client['different_collection'].send(method, { field: 'value' }, { "$inc" => { :field => 1 } })
          aggregation.to_a
        end

        it 'queries again' do
          expect(events.length).to eq(2)
        end
      end
    end

    context '#count_documents' do
      context 'on same collection' do
        it 'caches the query' do
          expect(authorized_collection.count_documents(test: 1)).to eq(3)
          expect(authorized_collection.count_documents(test: 1)).to eq(3)

          expect(events.length).to eq(1)
        end
      end

      context 'on different collections' do
        let(:other_collection) { authorized_client['other_collection'] }

        before do
          other_collection.drop
          6.times { other_collection.insert_one(test: 1) }
        end

        it 'caches the query' do
          expect(authorized_collection.count_documents(test: 1)).to eq(3)
          expect(other_collection.count_documents(test: 1)).to eq(6)

          expect(events.length).to eq(2)
        end
      end
    end
  end

  context 'when find command fails and retries' do
    require_fail_command
    require_no_multi_mongos
    require_warning_clean

    before do
      5.times do |i|
        authorized_collection.insert_one(test: i)
      end
    end

    before do
      client.use('admin').command(
        configureFailPoint: 'failCommand',
        mode: { times: 1 },
        data: {
          failCommands: ['find'],
          closeConnection: true
        }
      )
    end

    let(:command_name) { 'find' }

    it 'uses modern retryable reads when using query cache' do
      expect(Mongo::QueryCache.enabled?).to be(true)
      expect(Mongo::Logger.logger).to receive(:warn).once.with(/modern.*attempt 1/).and_call_original

      authorized_collection.find(test: 1).to_a
      expect(Mongo::QueryCache.send(:cache_table).length).to eq(1)
      expect(subscriber.command_started_events('find').length).to eq(2)

      authorized_collection.find(test: 1).to_a
      expect(Mongo::QueryCache.send(:cache_table).length).to eq(1)
      expect(subscriber.command_started_events('find').length).to eq(2)
    end
  end

  context 'when querying in a different collection' do
    let(:database) { client.database }

    let(:new_collection) do
      Mongo::Collection.new(database, 'foo')
    end

    before do
      authorized_collection.find.to_a
    end

    it 'queries again' do
      new_collection.find.to_a
      expect(Mongo::QueryCache.send(:cache_table).length).to eq(2)
      expect(events.length).to eq(2)
    end
  end

  context 'with system collection' do
    let(:client) do
      ClientRegistry.instance.global_client('root_authorized').tap do |client|
        client.subscribe(Mongo::Monitoring::COMMAND, subscriber)
      end
    end

    before do
      begin
        client.database.users.remove('alanturing')
      rescue Mongo::Error::OperationFailure
        # can be user not found, ignore
      end
    end

    it 'does not use the query cache' do
      client['system.users'].find.to_a
      client['system.users'].find.to_a
      expect(events.length).to eq(2)
    end
  end

  context 'when result set has multiple documents and cursor is iterated partially' do
    before do
      Mongo::QueryCache.enabled = false
      5.times do
        authorized_collection.insert_one({ name: 'testing' })
      end
    end

    shared_examples 'retrieves full result set on second iteration' do
      it 'retrieves full result set on second iteration' do
        Mongo::QueryCache.clear
        Mongo::QueryCache.enabled = true

        partial_first_iteration

        authorized_collection.find.to_a.length.should == 5
      end
    end

    context 'using each & break' do
      let(:partial_first_iteration) do
        called = false
        authorized_collection.find.each do
          called = true
          break
        end
        called.should be true
      end

      include_examples 'retrieves full result set on second iteration'
    end

    context 'using next' do
      let(:partial_first_iteration) do
        # #next is executed in its own fiber, and query cache is disabled
        # for that operation.
        authorized_collection.find.to_enum.next
      end

      include_examples 'retrieves full result set on second iteration'
    end
  end

  describe 'concurrent queries with multiple batches' do
    before do
      102.times { |i| authorized_collection.insert_one(_id: i) }
    end

    # The query cache table is stored in thread local storage, so even though
    # we executed the same queries in the first thread (and waited for them to
    # finish), that query is going to be executed again (only once) in the
    # second thread.
    it "uses separate cache tables per thread" do
      thread1 = Thread.new do
        Mongo::QueryCache.cache do
          authorized_collection.find.to_a
          authorized_collection.find.to_a
          authorized_collection.find.to_a
          authorized_collection.find.to_a
        end
      end
      thread1.join

      thread2 = Thread.new do
        Mongo::QueryCache.cache do
          authorized_collection.find.to_a
          authorized_collection.find.to_a
          authorized_collection.find.to_a
          authorized_collection.find.to_a
        end
      end
      thread2.join

      expect(subscriber.command_started_events('find').length).to eq(2)
      expect(subscriber.command_started_events('getMore').length).to eq(2)
    end

    it "is able to query concurrently" do
      wait_for_first_thread = true
      wait_for_second_thread = true

      threads = []
      first_thread_docs = []

      threads << Thread.new do
        Mongo::QueryCache.cache do
          # 1. iterate first batch
          authorized_collection.find.each_with_index do |doc, i|
            # 2. verify that we're getting all of the correct documents
            first_thread_docs << doc
            expect(doc).to eq({ "_id" => i })

            if i == 50
              # 2. check that there hasn't been a getmore
              expect(subscriber.command_started_events('getMore').length).to eq(0)

              # 3. mark second thread ready to start
              wait_for_first_thread = false

              # 4. wait for second thread
              sleep 0.1 while wait_for_second_thread

              # 5. verify that the other thread sent a getmore
              expect(subscriber.command_started_events('getMore').length).to eq(1)
            end
          end
          # 6. finish iterating the batch

          # 7. verify that it still caches the query
          authorized_collection.find.to_a
        end
      end

      threads << Thread.new do
        Mongo::QueryCache.cache do
          # 1. wait for the first thread to finish first batch iteration
          sleep 0.1 while wait_for_first_thread

          # 2. iterate the entire result set
          authorized_collection.find.each_with_index do |doc, i|
            # 3. verify documents
            expect(doc).to eq({ "_id" => i })
          end
          # 4. verify that a getMore was sent
          expect(subscriber.command_started_events('getMore').length).to eq(1)

          # 5. mark second thread done
          wait_for_second_thread = false

          # 6. verify that it still caches the query
          authorized_collection.find.to_a
        end
      end

      threads.map(&:join)

      expect(first_thread_docs.length).to eq(102)
      expect(subscriber.command_started_events('find').length).to eq(2)
      expect(subscriber.command_started_events('getMore').length).to eq(2)
    end
  end
end

mongo-ruby-driver-2.21.3/spec/integration/query_cache_transactions_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe 'QueryCache with transactions' do
  # Work around https://jira.mongodb.org/browse/HELP-10518
  before(:all) do
    client = ClientRegistry.instance.global_client('authorized')
    Utils.create_collection(client, 'test')

    Utils.mongos_each_direct_client do |client|
      client['test'].distinct('foo').to_a
    end
  end

  around do |spec|
    Mongo::QueryCache.clear
    Mongo::QueryCache.cache { spec.run }
  end

  # These tests do not currently use the session registry because transactions
  # leak sessions independently of the query cache. This will be resolved by
  # RUBY-2391.

  let(:subscriber) { Mrss::EventSubscriber.new }

  let(:client) do
    authorized_client.tap do |client|
      client.subscribe(Mongo::Monitoring::COMMAND, subscriber)
    end
  end

  before do
    collection.delete_many

    # Work around https://jira.mongodb.org/browse/HELP-10518
    client.start_session do |session|
      session.with_transaction do
        collection.find({}, session: session).to_a
      end
    end

    subscriber.clear_events!
  end

  describe 'in transactions' do
    require_transaction_support
    require_wired_tiger

    let(:collection) { client['test'] }

    let(:events) do
      subscriber.command_started_events('find')
    end

    context 'with convenient API' do
      context 'when same query is performed inside and outside of transaction' do
        it 'performs one query' do
          collection.find.to_a

          session = client.start_session
          session.with_transaction do
            collection.find({}, session: session).to_a
          end

          expect(subscriber.command_started_events('find').length).to eq(1)
        end
      end

      context 'when transaction has a different read concern' do
        it 'performs two queries' do
          collection.find.to_a

          session = client.start_session
          session.with_transaction(
            read_concern: { level: :snapshot }
          ) do
            collection.find({}, session: session).to_a
          end

          expect(subscriber.command_started_events('find').length).to eq(2)
        end
      end

      context 'when transaction has a different read preference' do
        it 'performs two queries' do
          collection.find.to_a

          session = client.start_session
          session.with_transaction(
            read: { mode: :primary }
          ) do
            collection.find({}, session: session).to_a
          end

          expect(subscriber.command_started_events('find').length).to eq(2)
        end
      end
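      # Committing or aborting a transaction invalidates cached results:
      # queries issued with the session are cached while the transaction is
      # open, and the cache is cleared once the transaction ends, so the same
      # find issued afterwards goes back to the server.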
      context 'when transaction is committed' do
        it 'clears the cache' do
          session = client.start_session
          session.with_transaction do
            collection.insert_one({ test: 1 }, session: session)
            collection.insert_one({ test: 2 }, session: session)

            expect(collection.find({}, session: session).to_a.length).to eq(2)
            expect(collection.find({}, session: session).to_a.length).to eq(2)

            # The driver caches the queries within the transaction
            expect(subscriber.command_started_events('find').length).to eq(1)
            session.commit_transaction
          end

          expect(collection.find.to_a.length).to eq(2)

          # The driver clears the cache and runs the query again
          expect(subscriber.command_started_events('find').length).to eq(2)
        end
      end

      context 'when transaction is aborted' do
        it 'clears the cache' do
          session = client.start_session
          session.with_transaction do
            collection.insert_one({ test: 1 }, session: session)
            collection.insert_one({ test: 2 }, session: session)

            expect(collection.find({}, session: session).to_a.length).to eq(2)
            expect(collection.find({}, session: session).to_a.length).to eq(2)

            # The driver caches the queries within the transaction
            expect(subscriber.command_started_events('find').length).to eq(1)
            session.abort_transaction
          end

          expect(collection.find.to_a.length).to eq(0)

          # The driver clears the cache and runs the query again
          expect(subscriber.command_started_events('find').length).to eq(2)
        end
      end
    end

    context 'with low-level API' do
      context 'when transaction is committed' do
        it 'clears the cache' do
          session = client.start_session
          session.start_transaction

          collection.insert_one({ test: 1 }, session: session)
          collection.insert_one({ test: 2 }, session: session)

          expect(collection.find({}, session: session).to_a.length).to eq(2)
          expect(collection.find({}, session: session).to_a.length).to eq(2)

          # The driver caches the queries within the transaction
          expect(subscriber.command_started_events('find').length).to eq(1)

          session.commit_transaction

          expect(collection.find.to_a.length).to eq(2)

          # The driver clears the cache and runs the query again
          expect(subscriber.command_started_events('find').length).to eq(2)
        end
      end

      context 'when transaction is aborted' do
        it 'clears the cache' do
          session = client.start_session
          session.start_transaction

          collection.insert_one({ test: 1 }, session: session)
          collection.insert_one({ test: 2 }, session: session)

          expect(collection.find({}, session: session).to_a.length).to eq(2)
          expect(collection.find({}, session: session).to_a.length).to eq(2)

          # The driver caches the queries within the transaction
          expect(subscriber.command_started_events('find').length).to eq(1)

          session.abort_transaction

          expect(collection.find.to_a.length).to eq(0)

          # The driver clears the cache and runs the query again
          expect(subscriber.command_started_events('find').length).to eq(2)
        end
      end
    end
  end
end

mongo-ruby-driver-2.21.3/spec/integration/read_concern_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe 'read concern' do
  min_server_version '3.2'

  let(:subscriber) do
    Mrss::EventSubscriber.new
  end

  let(:specified_read_concern) do
    { :level => :local }
  end

  let(:expected_read_concern) do
    { 'level' => 'local' }
  end

  let(:sent_read_concern) do
    subscriber.clear_events!
    collection.count_documents
    subscriber.started_events.find { |c| c.command_name == 'aggregate' }.command[:readConcern]
  end

  shared_examples_for 'a read concern is specified' do
    it 'sends a read concern to the server' do
      expect(sent_read_concern).to eq(expected_read_concern)
    end
  end

  shared_examples_for 'no read concern is specified' do
    it 'does not send a read concern to the server' do
      expect(sent_read_concern).to be_nil
    end
  end

  context 'when the client has no read concern specified' do
    let(:client) do
      authorized_client.tap do |client|
        client.subscribe(Mongo::Monitoring::COMMAND, subscriber)
      end
    end

    context 'when the collection has no read concern specified' do
      let(:collection) do
        client[TEST_COLL]
      end

      it_behaves_like 'no read concern is specified'
    end

    context 'when the collection has a read concern specified' do
      let(:collection) do
        client[TEST_COLL].with(read_concern: specified_read_concern)
      end

      it_behaves_like 'a read concern is specified'
    end
  end

  context 'when the client has a read concern specified' do
    let(:client) do
      authorized_client.with(read_concern: specified_read_concern).tap do |client|
        client.subscribe(Mongo::Monitoring::COMMAND, subscriber)
      end
    end

    context 'when the collection has no read concern specified' do
      let(:collection) do
        client[TEST_COLL]
      end

      it_behaves_like 'a read concern is specified'
    end

    context 'when the collection has a read concern specified' do
      let(:collection) do
        client[TEST_COLL].with(read_concern: specified_read_concern)
      end

      it_behaves_like 'a read concern is specified'
    end
  end
end

mongo-ruby-driver-2.21.3/spec/integration/read_preference_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

# The only allowed read preference in a transaction is primary.
# Because of this, the tests assert that the final read preference is primary.
# It would be preferable to assert that some other read preference is selected,
# but this would only work for non-transactional tests and would require
# duplicating the examples.
describe 'Read preference' do
  clean_slate_on_evergreen

  let(:client) do
    authorized_client.with(client_options)
  end

  let(:subscriber) { Mrss::EventSubscriber.new }

  before do
    client.subscribe(Mongo::Monitoring::COMMAND, subscriber)
  end

  let(:client_options) do
    {}
  end

  let(:session_options) do
    {}
  end

  let(:tx_options) do
    {}
  end

  let(:collection) { client['tx_read_pref_test'] }

  before do
    collection.drop
    collection.create(write_concern: {w: :majority})
  end

  let(:find_options) do
    {}
  end

  shared_examples_for 'does not send read preference when reading' do
    it 'does not send read preference when reading' do
      read_operation

      event = subscriber.single_command_started_event('find')
      actual_preference = event.command['$readPreference']
      expect(actual_preference).to be nil
    end
  end

  shared_examples_for 'non-transactional read preference examples' do
    it 'does not send read preference when writing' do
      write_operation

      event = subscriber.single_command_started_event('insert')
      actual_preference = event.command['$readPreference']
      expect(actual_preference).to be nil
    end

    context 'standalone' do
      require_topology :single

      it_behaves_like 'does not send read preference when reading'
    end

    context 'replica set' do
      # Supposedly, read preference should only be sent in a sharded cluster
      # topology. However, the transactions spec tests contain read preference
      # assertions also when they are run in RS topologies.
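      # For a non-primary mode the driver attaches a read preference document
      # to the command, e.g. (illustrative values):
      #
      #   { '$readPreference' => { 'mode' => 'secondaryPreferred' } }
      #
      # With mode: :primary the field is omitted entirely, which is what the
      # examples below assert.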
      require_topology :replica_set

      context 'pre-OP_MSG server' do
        max_server_version '3.4'

        it_behaves_like 'does not send read preference when reading'
      end

      context 'server supporting OP_MSG' do
        min_server_fcv '3.6'

        it 'sends expected read preference when reading' do
          read_operation

          event = subscriber.single_command_started_event('find')
          actual_preference = event.command['$readPreference']
          if expected_read_preference&.[]("mode") == "primary"
            expect(actual_preference).to be_nil
          else
            expect(actual_preference).to eq(expected_read_preference)
          end
        end
      end
    end

    context 'sharded cluster' do
      # Driver does not send $readPreference document to mongos when
      # specified mode is primary.
      require_topology :sharded

      it_behaves_like 'does not send read preference when reading'
    end
  end

  shared_examples_for 'sends expected read preference' do
    it_behaves_like 'non-transactional read preference examples'
  end

  shared_context 'non-transactional read preference specifications' do
    context 'when read preference is not explicitly given' do
      let(:client_options) do
        {}
      end

      let(:expected_read_preference) do
        nil
      end

      it_behaves_like 'sends expected read preference'
    end

    context 'when read preference is given in client options' do
      let(:client_options) do
        {read: { mode: :primary }}
      end

      let(:expected_read_preference) do
        {'mode' => 'primary'}
      end

      it_behaves_like 'sends expected read preference'
    end

    context 'when read preference is given in operation options' do
      let(:expected_read_preference) do
        {'mode' => 'primary'}
      end

      let(:find_options) do
        {read: {mode: :primary}}
      end

      it_behaves_like 'sends expected read preference'
    end

    context 'when read preference is given in client and operation options' do
      let(:client_options) do
        {read: { mode: :secondary }}
      end

      # Operation should override the client.
      let(:expected_read_preference) do
        {'mode' => 'primary'}
      end

      let(:find_options) do
        {read: {mode: :primary}}
      end

      it_behaves_like 'sends expected read preference'
    end

    context 'when read preference is given in collection and operation options' do
      let(:collection) do
        client['tx_read_pref_test', {read: {mode: :secondary}}]
      end

      # Operation should override the collection.
      let(:expected_read_preference) do
        {'mode' => 'primary'}
      end

      let(:find_options) do
        {read: {mode: :primary}}
      end

      it_behaves_like 'sends expected read preference'
    end
  end

  context 'not in transaction' do
    let(:write_operation) do
      collection.insert_one(hello: 'world')
    end

    let(:read_operation) do
      collection.with(write: {w: :majority}).insert_one(hello: 'world')
      res = collection.find({}, find_options || {}).to_a.count
      expect(res).to eq(1)
    end

    include_context 'non-transactional read preference specifications'

    context 'when read preference is given in collection options' do
      let(:client_options) do
        {}
      end

      let(:collection) do
        client['tx_read_pref_test', {read: {mode: :primary}}]
      end

      let(:expected_read_preference) do
        {'mode' => 'primary'}
      end

      it_behaves_like 'sends expected read preference'
    end

    context 'when read preference is given in collection options via #with' do
      let(:collection) do
        client['tx_read_pref_test'].with(read: {mode: :primary})
      end

      let(:expected_read_preference) do
        {'mode' => 'primary'}
      end

      it_behaves_like 'sends expected read preference'
    end

    context 'when read preference is given in client and collection options' do
      let(:client_options) do
        {read: { mode: :secondary }}
      end

      let(:collection) do
        client['tx_read_pref_test', {read: {mode: :primary}}]
      end

      # Collection should override the client.
      let(:expected_read_preference) do
        {'mode' => 'primary'}
      end

      it_behaves_like 'sends expected read preference'
    end
  end

  context 'in transaction' do
    # 4.0/RS is a valid topology to test against, but our tooling doesn't
    # support multiple constraint specifications like runOn does.
    # There is no loss of generality to constrain these tests to 4.2+.
    min_server_fcv '4.2'
    require_topology :sharded, :replica_set

    let(:write_operation) do
      expect do
        session = client.start_session(session_options)
        session.with_transaction(tx_options) do
          collection.insert_one({hello: 'world'}, session: session)
        end
      end.not_to raise_error
    end

    let(:read_operation) do
      expect do
        session = client.start_session(session_options)
        session.with_transaction(tx_options) do
          collection.insert_one({hello: 'world'}, session: session)
          res = collection.find({}, {session: session}.merge(find_options || {})).to_a.count
          expect(res).to eq(1)
        end
      end.not_to raise_error
    end

    shared_examples_for 'sends expected read preference' do
      it_behaves_like 'non-transactional read preference examples'

      context 'on sharded cluster' do
        require_topology :sharded

        it 'does not send read preference' do
          # Driver does not send $readPreference document to mongos when
          # specified mode is primary.
          collection.insert_one(hello: 'world')

          session = client.start_session(session_options)
          session.with_transaction(tx_options) do
            res = collection.find({}, {session: session}.merge(find_options || {})).to_a.count
            expect(res).to eq(1)
          end

          event = subscriber.single_command_started_event('find')
          actual_preference = event.command['$readPreference']
          expect(actual_preference).to be_nil
        end
      end

      context 'on replica set' do
        require_topology :replica_set

        it 'sends expected read preference when starting transaction' do
          collection.insert_one(hello: 'world')

          session = client.start_session(session_options)
          session.with_transaction(tx_options) do
            res = collection.find({}, {session: session}.merge(find_options || {})).to_a.count
            expect(res).to eq(1)
          end

          event = subscriber.single_command_started_event('find')
          actual_preference = event.command['$readPreference']
          if expected_read_preference&.[]("mode") == "primary"
            expect(actual_preference).to be_nil
          else
            expect(actual_preference).to eq(expected_read_preference)
          end
        end
      end
    end

    include_context 'non-transactional read preference specifications'

    context 'when read preference is given in collection options' do
      let(:client_options) do
        {}
      end

      let(:collection) do
        client['tx_read_pref_test', {read: {mode: :primary}}]
      end

      # collection read preference is ignored
      let(:expected_read_preference) do
        nil
      end

      it_behaves_like 'sends expected read preference'
    end

    context 'when read preference is given in collection options via #with' do
      let(:collection) do
        client['tx_read_pref_test'].with(read: {mode: :primary})
      end

      # collection read preference is ignored
      let(:expected_read_preference) do
        nil
      end

      it_behaves_like 'sends expected read preference'
    end

    context 'when read preference is given in client and collection options' do
      let(:client_options) do
        {read: { mode: :primary }}
      end

      let(:collection) do
        client['tx_read_pref_test', {read: {mode: :secondary}}]
      end

      # collection read preference is ignored, client read preference is used
      let(:expected_read_preference) do
        {'mode' => 'primary'}
      end

      it_behaves_like 'sends expected read preference'
    end

    context 'when read preference is given in default transaction options' do
      let(:session_options) do
        {default_transaction_options: {read: { mode: :primary }}}
      end

      let(:expected_read_preference) do
        {'mode' => 'primary'}
      end
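      # default_transaction_options apply to every transaction started on the
      # session, so the read preference set there governs the find performed
      # inside with_transaction.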
      it_behaves_like 'sends expected read preference'
    end

    context 'when read preference is given in client and default transaction options' do
      let(:client_options) do
        {read: { mode: :secondary }}
      end

      let(:session_options) do
        {default_transaction_options: {read: { mode: :primary }}}
      end

      let(:expected_read_preference) do
        {'mode' => 'primary'}
      end

      it_behaves_like 'sends expected read preference'
    end

    context 'when read preference is given in collection and default transaction options' do
      let(:collection) do
        client['tx_read_pref_test', {read: {mode: :secondary}}]
      end

      let(:session_options) do
        {default_transaction_options: {read: { mode: :primary }}}
      end

      let(:expected_read_preference) do
        {'mode' => 'primary'}
      end

      it_behaves_like 'sends expected read preference'
    end

    context 'when read preference is given in default transaction and transaction options' do
      let(:session_options) do
        {default_transaction_options: {read: { mode: :secondary }}}
      end

      let(:tx_options) do
        {read: { mode: :primary }}
      end

      let(:expected_read_preference) do
        {'mode' => 'primary'}
      end

      it_behaves_like 'sends expected read preference'
    end

    context 'when read preference is given in default transaction and operation options' do
      let(:session_options) do
        {default_transaction_options: {read: { mode: :primary }}}
      end

      let(:find_options) do
        {read: {mode: :secondary}}
      end

      let(:expected_read_preference) do
        {'mode' => 'primary'}
      end

      it 'sends operation read preference and fails' do
        expect do
          session = client.start_session(session_options)
          session.with_transaction(tx_options) do
            collection.insert_one({hello: 'world'}, session: session)
            res = collection.find({}, {session: session}.merge(find_options || {})).to_a.count
            expect(res).to eq(1)
          end
        end.to raise_error(Mongo::Error::InvalidTransactionOperation, /read preference in a transaction must be primary \(requested: secondary\)/)
      end
    end

    context 'when read preference is given in transaction options' do
      let(:tx_options) do
        {read: { mode: :primary }}
      end

      let(:expected_read_preference) do
        {'mode' => 'primary'}
      end

      it_behaves_like 'sends expected read preference'
    end

    context 'when read preference is given in client and transaction options' do
      let(:client_options) do
        {read: { mode: :secondary }}
      end

      let(:tx_options) do
        {read: { mode: :primary }}
      end

      let(:expected_read_preference) do
        {'mode' => 'primary'}
      end

      it_behaves_like 'sends expected read preference'
    end

    context 'when read preference is given in collection and transaction options' do
      let(:collection) do
        client['tx_read_pref_test', {read: {mode: :secondary}}]
      end

      let(:tx_options) do
        {read: { mode: :primary }}
      end

      let(:expected_read_preference) do
        {'mode' => 'primary'}
      end

      it_behaves_like 'sends expected read preference'
    end

    context 'when read preference is given in transaction and operation options' do
      let(:tx_options) do
        {read: { mode: :primary }}
      end

      let(:find_options) do
        {read: {mode: :secondary}}
      end

      let(:expected_read_preference) do
        {'mode' => 'primary'}
      end

      it 'sends operation read preference and fails' do
        expect do
          session = client.start_session(session_options)
          session.with_transaction(tx_options) do
            collection.insert_one({hello: 'world'}, session: session)
            res = collection.find({}, {session: session}.merge(find_options || {})).to_a.count
            expect(res).to eq(1)
          end
        end.to raise_error(Mongo::Error::InvalidTransactionOperation, /read preference in a transaction must be primary \(requested: secondary\)/)
      end
    end
  end

  context 'secondary read with direct connection' do
    require_topology :replica_set

    let(:address_str) do
      Mongo::ServerSelector.get(mode: :secondary).
        select_server(authorized_client.cluster).address.seed
    end

    let(:secondary_client) do
      new_local_client([address_str],
        SpecConfig.instance.all_test_options.merge(connect: :direct))
    end

    it 'succeeds without read preference' do
      secondary_client['foo'].find.to_a
    end

    it 'succeeds with read preference: secondary' do
      secondary_client['foo', {read: {mode: :secondary}}].find.to_a
    end

    it 'succeeds with read preference: primary' do
      secondary_client['foo', {read: {mode: :primary}}].find.to_a
    end
  end
end

mongo-ruby-driver-2.21.3/spec/integration/reconnect_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe 'Client after reconnect' do
  let(:client) { authorized_client }

  it 'is a functioning client' do
    client['test'].insert_one('testk' => 'testv')

    client.reconnect

    doc = client['test'].find('testk' => 'testv').first
    expect(doc).not_to be_nil
    expect(doc['testk']).to eq('testv')
  end

  context 'non-lb' do
    require_topology :single, :replica_set, :sharded

    it 'recreates monitor thread' do
      thread = client.cluster.servers.first.monitor.instance_variable_get('@thread')
      expect(thread).to be_alive

      thread.kill
      # context switch to let the thread get killed
      sleep 0.1
      expect(thread).not_to be_alive

      client.reconnect

      new_thread = client.cluster.servers.first.monitor.instance_variable_get('@thread')
      expect(new_thread).not_to eq(thread)
      expect(new_thread).to be_alive
    end
  end

  context 'lb' do
    require_topology :load_balanced

    it 'does not recreate monitor thread' do
      thread = client.cluster.servers.first.monitor.instance_variable_get('@thread')
      expect(thread).to be nil

      client.reconnect

      new_thread = client.cluster.servers.first.monitor.instance_variable_get('@thread')
      expect(new_thread).to be nil
    end
  end

  context 'with min_pool_size > 0' do
    # This test causes live thread errors in JRuby in other tests.
    fails_on_jruby

    let(:client) { authorized_client.with(min_pool_size: 1) }

    it 'recreates connection pool populator thread' do
      server = client.cluster.next_primary
      thread = server.pool.populator.instance_variable_get('@thread')
      expect(thread).to be_alive

      thread.kill
      # context switch to let the thread get killed
      sleep 0.1
      expect(thread).not_to be_alive

      client.reconnect

      new_server = client.cluster.next_primary
      new_thread = new_server.pool.populator.instance_variable_get('@thread')
      expect(new_thread).not_to eq(thread)
      expect(new_thread).to be_alive
    end
  end

  context 'SRV monitor thread' do
    require_external_connectivity

    let(:uri) do
      "mongodb+srv://test1.test.build.10gen.cc/?tls=#{SpecConfig.instance.ssl?}"
    end

    # Debug logging to troubleshoot failures in Evergreen
    let(:logger) do
      Logger.new(STDERR).tap do |logger|
        logger.level = :debug
      end
    end

    let(:client) do
      new_local_client(uri, SpecConfig.instance.monitoring_options.merge(
        server_selection_timeout: 3.86, logger: logger))
    end

    let(:wait_for_discovery) do
      client.cluster.next_primary
    end

    let(:wait_for_discovery_again) do
      client.cluster.next_primary
    end

    shared_examples_for 'recreates SRV monitor' do
      # JRuby produces this error:
      #   RSpec::Expectations::ExpectationNotMetError: expected nil to respond to `alive?`
      # for this assertion:
      #   expect(thread).not_to be_alive
      # This is bizarre because if thread was nil, the earlier call to
      # thread.kill should've similarly failed, but it doesn't.
      fails_on_jruby
      minimum_mri_version '3.0.0'

      it 'recreates SRV monitor' do
        wait_for_discovery

        expect(client.cluster.topology).to be_a(expected_topology_cls)
        thread = client.cluster.srv_monitor.instance_variable_get('@thread')
        expect(thread).to be_alive

        thread.kill
        # context switch to let the thread get killed
        sleep 0.1
        expect(thread).not_to be_alive

        client.reconnect

        wait_for_discovery_again

        new_thread = client.cluster.srv_monitor.instance_variable_get('@thread')
        expect(new_thread).not_to eq(thread)
        expect(new_thread).to be_alive
      end
    end

    context 'in sharded topology' do
      require_topology :sharded
      require_default_port_deployment
      require_multi_mongos

      let(:expected_topology_cls) { Mongo::Cluster::Topology::Sharded }

      it_behaves_like 'recreates SRV monitor'
    end

    context 'in unknown topology' do
      require_external_connectivity

      # JRuby apparently does not implement non-blocking UDP I/O which is used
      # by RubyDNS:
      #   NotImplementedError: recvmsg_nonblock is not implemented
      fails_on_jruby

      let(:uri) do
        "mongodb+srv://test-fake.test.build.10gen.cc/"
      end

      let(:client) do
        ClientRegistry.instance.register_local_client(
          Mongo::Client.new(uri,
            timeout: 5,
            connect_timeout: 5,
            server_selection_timeout: 3.89,
            resolv_options: {
              nameserver: 'localhost',
              nameserver_port: [['localhost', 5300], ['127.0.0.1', 5300]],
            },
            logger: logger))
      end

      let(:expected_topology_cls) { Mongo::Cluster::Topology::Unknown }

      let(:wait_for_discovery) do
        # Since the entire test is done in unknown topology, we cannot use
        # next_primary to wait for the client to discover the topology.
        sleep 5
      end

      let(:wait_for_discovery_again) do
        sleep 5
      end

      around do |example|
        rules = [
          ['_mongodb._tcp.test-fake.test.build.10gen.cc', :srv,
            [0, 0, 2799, 'localhost.test.build.10gen.cc'],
          ],
        ]

        mock_dns(rules) do
          example.run
        end
      end

      it_behaves_like 'recreates SRV monitor'
    end
  end
end

mongo-ruby-driver-2.21.3/spec/integration/retryable_errors_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe 'Failing retryable operations' do
  # Requirement for fail point
  min_server_fcv '4.0'

  let(:subscriber) { Mrss::EventSubscriber.new }

  let(:client_options) do
    {}
  end

  let(:client) do
    authorized_client.with(client_options).tap do |client|
      client.subscribe(Mongo::Monitoring::COMMAND, subscriber)
    end
  end

  let(:collection) do
    client['retryable-errors-spec']
  end

  context 'when operation fails' do
    require_topology :replica_set

    let(:clear_fail_point_command) do
      {
        configureFailPoint: 'failCommand',
        mode: 'off',
      }
    end

    after do
      ClusterTools.instance.direct_client_for_each_data_bearing_server do |client|
        client.use(:admin).database.command(clear_fail_point_command)
      end
    end

    let(:collection) do
      client['retryable-errors-spec', read: {mode: :secondary_preferred}]
    end

    let(:first_server) do
      client.cluster.servers_list.detect do |server|
        server.address.seed == events.first.address.seed
      end
    end

    let(:second_server) do
      client.cluster.servers_list.detect do |server|
        server.address.seed == events.last.address.seed
      end
    end

    shared_context 'read operation' do
      let(:fail_point_command) do
        {
          configureFailPoint: 'failCommand',
          mode: {times: 1},
          data: {
            failCommands: ['find'],
            errorCode: 11600,
          },
        }
      end

      let(:set_fail_point) do
        client.cluster.servers_list.each do |server|
          server.monitor.stop!
        end

        ClusterTools.instance.direct_client_for_each_data_bearing_server do |client|
          client.use(:admin).database.command(fail_point_command)
        end
      end

      let(:operation_exception) do
        set_fail_point

        begin
          collection.find(a: 1).to_a
        rescue Mongo::Error::OperationFailure::Family => exception
        else
          fail('Expected operation to fail')
        end

        puts exception.message

        exception
      end

      let(:events) do
        subscriber.command_started_events('find')
      end
    end

    shared_context 'write operation' do
      let(:fail_point_command) do
        command = {
          configureFailPoint: 'failCommand',
          mode: {times: 2},
          data: {
            failCommands: ['insert'],
            errorCode: 11600,
          },
        }

        if ClusterConfig.instance.short_server_version >= '4.4'
          # Server versions 4.4 and newer will add the RetryableWriteError
          # label to all retryable errors, and the driver must not add the
          # label if it is not already present.
          command[:data][:errorLabels] = ['RetryableWriteError']
        end

        command
      end

      let(:set_fail_point) do
        client.use(:admin).database.command(fail_point_command)
      end

      let(:operation_exception) do
        set_fail_point

        begin
          collection.insert_one(a: 1)
        rescue Mongo::Error::OperationFailure::Family => exception
        else
          fail('Expected operation to fail')
        end

        #puts exception.message

        exception
      end

      let(:events) do
        subscriber.command_started_events('insert')
      end
    end

    shared_examples_for 'failing retry' do
      it 'indicates second attempt' do
        expect(operation_exception.message).to include('attempt 2')
        expect(operation_exception.message).not_to include('attempt 1')
        expect(operation_exception.message).not_to include('attempt 3')
      end

      it 'publishes two events' do
        operation_exception

        expect(events.length).to eq(2)
      end
    end

    shared_examples_for 'failing single attempt' do
      it 'does not indicate attempt' do
        expect(operation_exception.message).not_to include('attempt 1')
        expect(operation_exception.message).not_to include('attempt 2')
        expect(operation_exception.message).not_to include('attempt 3')
      end

      it 'publishes one event' do
        operation_exception

        expect(events.length).to eq(1)
      end
    end

    shared_examples_for 'failing retry on the same server' do
      it 'is reported on the server of the second attempt' do
        expect(operation_exception.message).to include(second_server.address.seed)
      end
    end

    shared_examples_for 'failing retry on a different server' do
      it 'is reported on the server of the second attempt' do
        expect(operation_exception.message).not_to include(first_server.address.seed)
        expect(operation_exception.message).to include(second_server.address.seed)
      end

      it 'marks servers used in both attempts unknown' do
        operation_exception

        expect(first_server).to be_unknown
        expect(second_server).to be_unknown
      end

      it 'publishes events for the different server addresses' do
        expect(events.length).to eq(2)
        expect(events.first.address.seed).not_to eq(events.last.address.seed)
      end
    end

    shared_examples_for 'modern retry' do
      it 'indicates modern retry' do
        expect(operation_exception.message).to include('modern retry')
        expect(operation_exception.message).not_to include('legacy retry')
        expect(operation_exception.message).not_to include('retries disabled')
      end
    end

    shared_examples_for 'legacy retry' do
      it 'indicates legacy retry' do
        expect(operation_exception.message).to include('legacy retry')
        expect(operation_exception.message).not_to include('modern retry')
        expect(operation_exception.message).not_to include('retries disabled')
      end
    end
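    # The driver appends a diagnostic suffix to exception messages indicating
    # which retry mode applied, the server address, and the attempt number,
    # along the lines of (illustrative format):
    #
    #   "... (on host:port, modern retry, attempt 2)"
    #
    # The shared examples above and below match on those fragments.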
    shared_examples_for 'disabled retry' do
      it 'indicates retries are disabled' do
        expect(operation_exception.message).to include('retries disabled')
        expect(operation_exception.message).not_to include('legacy retry')
        expect(operation_exception.message).not_to include('modern retry')
      end
    end

    context 'when read is retried and retry fails' do
      include_context 'read operation'

      context 'modern read retries' do
        require_wired_tiger_on_36

        let(:client_options) do
          {retry_reads: true}
        end

        it_behaves_like 'failing retry'
        it_behaves_like 'modern retry'
      end

      context 'legacy read retries' do
        let(:client_options) do
          {retry_reads: false, read_retry_interval: 0}
        end

        it_behaves_like 'failing retry'
        it_behaves_like 'legacy retry'
      end
    end

    context 'when read retries are disabled' do
      let(:client_options) do
        {retry_reads: false, max_read_retries: 0}
      end

      include_context 'read operation'

      it_behaves_like 'failing single attempt'
      it_behaves_like 'disabled retry'
    end

    context 'when write is retried and retry fails' do
      include_context 'write operation'

      context 'modern write retries' do
        require_wired_tiger_on_36

        let(:client_options) do
          {retry_writes: true}
        end

        it_behaves_like 'failing retry'
        it_behaves_like 'modern retry'
      end

      context 'legacy write' do
        let(:client_options) do
          {retry_writes: false}
        end

        it_behaves_like 'failing retry'
        it_behaves_like 'legacy retry'
      end
    end
  end
end

mongo-ruby-driver-2.21.3/spec/integration/retryable_reads_errors_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe 'Retryable reads errors tests' do
  retry_test

  let(:client) { authorized_client.with(options.merge(retry_reads: true)) }

  let(:collection) do
    client['retryable-reads-error-spec']
  end

  context "PoolClearedError retryability test" do
    require_topology :single, :replica_set, :sharded
    require_no_multi_mongos
    min_server_version '4.2.9'

    let(:options) { { max_pool_size: 1, heartbeat_frequency: 1000 } }

    let(:failpoint) do
      {
        configureFailPoint: "failCommand",
        mode: { times: 1 },
        data: {
          failCommands: [ "find" ],
          errorCode: 91,
          blockConnection: true,
          blockTimeMS: 1000
        }
      }
    end

    let(:subscriber) { Mrss::EventSubscriber.new }

    let(:threads) do
      threads = []
      threads << Thread.new do
        expect(collection.find(x: 1).first[:x]).to eq(1)
      end
      threads << Thread.new do
        expect(collection.find(x: 1).first[:x]).to eq(1)
      end
      threads
    end

    let(:find_events) do
      subscriber.started_events.select { |e| e.command_name == "find" }
    end

    let(:cmap_events) do
      subscriber.published_events
    end

    let(:event_types) do
      [
        Mongo::Monitoring::Event::Cmap::ConnectionCheckedOut,
        Mongo::Monitoring::Event::Cmap::ConnectionCheckOutFailed,
        Mongo::Monitoring::Event::Cmap::PoolCleared,
      ]
    end

    let(:check_out_results) do
      cmap_events.select do |e|
        event_types.include?(e.class)
      end
    end

    before do
      collection.insert_one(x: 1)
      authorized_client.use(:admin).command(failpoint)
      client.subscribe(Mongo::Monitoring::COMMAND, subscriber)
      client.subscribe(Mongo::Monitoring::CONNECTION_POOL, subscriber)
    end

    shared_examples_for 'retries on PoolClearedError' do
      it "retries on PoolClearedError" do
        # After the first find fails, the pool is paused and a retry is
        # triggered. A race then starts between the second find acquiring a
        # connection and the first find retrying the read: the retry causes
        # the cluster to be rescanned and the pool to be unpaused, which would
        # allow the second check out to succeed (when it should fail).
        # Therefore we want the second find's check out to win the race, so
        # the check out is given a little head start here.
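        # and_wrap_original yields the original ConnectionPool#ready method
        # as m; the stub below delays readying the pool until the third
        # relevant CMAP event has been observed, then calls through to the
        # original method.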
        allow_any_instance_of(Mongo::Server::ConnectionPool).to receive(:ready).and_wrap_original do |m, *args, &block|
          ::Utils.wait_for_condition(5) do
            # check_out_results should contain:
            # - find1 connection check out successful
            # - pool cleared
            # - find2 connection check out failed
            # We wait here for the third event to happen before we ready the pool.
            cmap_events.select do |e|
              event_types.include?(e.class)
            end.length >= 3
          end
          m.call(*args, &block)
        end

        threads.map(&:join)

        expect(check_out_results[0]).to be_a(Mongo::Monitoring::Event::Cmap::ConnectionCheckedOut)
        expect(check_out_results[1]).to be_a(Mongo::Monitoring::Event::Cmap::PoolCleared)
        expect(check_out_results[2]).to be_a(Mongo::Monitoring::Event::Cmap::ConnectionCheckOutFailed)
        expect(find_events.length).to eq(3)
      end
    end

    it_behaves_like 'retries on PoolClearedError'

    context 'legacy read retries' do
      let(:client) { authorized_client.with(options.merge(retry_reads: false, max_read_retries: 1)) }

      it_behaves_like 'retries on PoolClearedError'
    end

    after do
      authorized_client.use(:admin).command({
        configureFailPoint: "failCommand",
        mode: "off",
      })
    end
  end

  context 'Retries in a sharded cluster' do
    require_topology :sharded
    min_server_version '4.2'
    require_no_auth

    let(:subscriber) { Mrss::EventSubscriber.new }

    let(:find_started_events) do
      subscriber.started_events.select { |e| e.command_name == "find" }
    end

    let(:find_failed_events) do
      subscriber.failed_events.select { |e| e.command_name == "find" }
    end

    let(:find_succeeded_events) do
      subscriber.succeeded_events.select { |e| e.command_name == "find" }
    end

    context 'when another mongos is available' do
      let(:first_mongos) do
        Mongo::Client.new(
          [SpecConfig.instance.addresses.first],
          direct_connection: true,
          database: 'admin'
        )
      end

      let(:second_mongos) do
        Mongo::Client.new(
          [SpecConfig.instance.addresses.last],
          direct_connection: false,
          database: 'admin'
        )
      end

      let(:client) do
        new_local_client(
          [
            SpecConfig.instance.addresses.first,
            SpecConfig.instance.addresses.last,
          ],
          SpecConfig.instance.test_options.merge(retry_reads: true)
        )
      end

      let(:expected_servers) do
        [
          SpecConfig.instance.addresses.first.to_s,
          SpecConfig.instance.addresses.last.to_s
        ].sort
      end

      before do
        skip 'This test requires at least two mongos' if SpecConfig.instance.addresses.length < 2

        first_mongos.database.command(
          configureFailPoint: 'failCommand',
          mode: { times: 1 },
          data: {
            failCommands: %w(find),
            closeConnection: false,
            errorCode: 6
          }
        )

        second_mongos.database.command(
          configureFailPoint: 'failCommand',
          mode: { times: 1 },
          data: {
            failCommands: %w(find),
            closeConnection: false,
            errorCode: 6
          }
        )
      end

      after do
        [first_mongos, second_mongos].each do |admin_client|
          admin_client.database.command(
            configureFailPoint: 'failCommand',
            mode: 'off'
          )
          admin_client.close
        end
        client.close
      end

      it 'retries on different mongos' do
        client.subscribe(Mongo::Monitoring::COMMAND, subscriber)
        expect { collection.find.first }.to raise_error(Mongo::Error::OperationFailure)
        expect(find_started_events.map { |e| e.address.to_s }.sort).to eq(expected_servers)
        expect(find_failed_events.map { |e| e.address.to_s }.sort).to eq(expected_servers)
      end
    end

    context 'when no other mongos is available' do
      let(:mongos) do
        Mongo::Client.new(
          [SpecConfig.instance.addresses.first],
          direct_connection: true,
          database: 'admin'
        )
      end

      let(:client) do
        new_local_client(
          [
            SpecConfig.instance.addresses.first
          ],
          SpecConfig.instance.test_options.merge(retry_reads: true)
        )
      end

      before do
        mongos.database.command(
          configureFailPoint: 'failCommand',
          mode: { times: 1 },
          data: {
            failCommands: %w(find),
            closeConnection: false,
            errorCode: 6
          }
        )
      end

      after do
        mongos.database.command(
          configureFailPoint: 'failCommand',
          mode: 'off'
        )
        mongos.close
        client.close
      end

      it 'retries on the same mongos' do
        client.subscribe(Mongo::Monitoring::COMMAND, subscriber)
        expect { collection.find.first }.not_to raise_error
        expect(find_started_events.map { |e| e.address.to_s }.sort).to eq([
          SpecConfig.instance.addresses.first.to_s,
          SpecConfig.instance.addresses.first.to_s
        ])
        expect(find_failed_events.map { |e| e.address.to_s }.sort).to eq([
          SpecConfig.instance.addresses.first.to_s
        ])
        expect(find_succeeded_events.map { |e| e.address.to_s }.sort).to eq([
          SpecConfig.instance.addresses.first.to_s
        ])
      end
    end
  end
end

mongo-ruby-driver-2.21.3/spec/integration/retryable_writes/retryable_writes_36_and_older_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

# The tests raise OperationFailure in socket reads. This is done for
# convenience, to make the tests uniform between socket errors and operation
# failures; in reality a socket read will never raise OperationFailure, since
# that exception is raised by the wire protocol parsing code. For the purposes
# of testing retryable writes, it is acceptable to raise OperationFailure in
# socket reads because both exceptions end up getting handled in the same
# place by the retryable writes code. The SDAM error handling test
# specifically checks server state (i.e. being marked unknown) and scanning
# behavior that is performed by the wire protocol code; this test omits scan
# assertions as otherwise it quickly becomes unwieldy.
describe 'Retryable writes integration tests' do
  include PrimarySocket

  require_wired_tiger_on_36

  # These tests override the server selector, which fails if there are
  # multiple eligible servers, as would be the case in a multi-shard sharded
  # cluster.
  require_no_multi_mongos

  # Note: these tests are deprecated in favor of the tests in the file
  # spec/integration/retryable_writes/retryable_writes_40_and_newer_spec.rb
  # If you are changing functionality in the driver that only impacts server
  # versions 4.0 or newer, test that functionality in the other test file.
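  # The shared examples below stub the primary's socket so that the first
  # write attempt fails, and then exercise the retry logic, roughly along
  # these lines (names as used in the examples below):
  #
  #   allow(primary_socket).to receive(:do_write).and_raise(error)
  #   collection.insert_one(a: 1)  # first attempt fails, retry may succeed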
  max_server_fcv '3.6'

  before do
    authorized_collection.delete_many
  end

  let(:check_collection) do
    # Verify data in the collection using another client instance to avoid
    # having the verification read trigger cluster scans on the writing client
    root_authorized_client[TEST_COLL]
  end

  let(:primary_connection) do
    client.database.command(ping: 1)
    expect(primary_server.pool.size).to eq(1)
    expect(primary_server.pool.available_count).to eq(1)
    primary_server.pool.instance_variable_get('@available_connections').last
  end

  shared_examples_for 'an operation that is retried' do
    context 'when the operation fails on the first attempt and succeeds on the second attempt' do
      before do
        wait_for_all_servers(client.cluster)

        allow(primary_socket).to receive(:do_write).and_raise(error.dup)
      end

      context 'when the error is retryable' do
        before do
          expect(Mongo::Logger.logger).to receive(:warn).once.and_call_original
        end

        context 'when the error is a socket error' do
          let(:error) do
            IOError.new('first error')
          end

          it 'retries writes' do
            operation
            expect(expectation).to eq(successful_retry_value)
          end
        end

        context 'when the error is a socket timeout error' do
          let(:error) do
            Errno::ETIMEDOUT.new
          end

          it 'retries writes' do
            operation
            expect(expectation).to eq(successful_retry_value)
          end
        end

        context 'when the error is a retryable OperationFailure' do
          let(:error) do
            Mongo::Error::OperationFailure.new('not master')
          end

          let(:reply) do
            make_not_master_reply
          end

          it 'retries writes' do
            operation
            expect(expectation).to eq(successful_retry_value)
          end
        end
      end

      context 'when the error is not retryable' do
        context 'when the error is a non-retryable OperationFailure' do
          let(:error) do
            Mongo::Error::OperationFailure.new('other error', code: 123)
          end

          it 'does not retry writes' do
            expect do
              operation
            end.to raise_error(Mongo::Error::OperationFailure, /other error/)
            expect(expectation).to eq(unsuccessful_retry_value)
          end

          it 'indicates server used for operation' do
            expect do
              operation
            end.to raise_error(Mongo::Error::OperationFailure, /on #{ClusterConfig.instance.primary_address_str}/)
          end

          it 'indicates first attempt' do
            expect do
              operation
            end.to raise_error(Mongo::Error::OperationFailure, /attempt 1/)
          end
        end
      end
    end

    context 'when the operation fails on the first attempt and again on the second attempt' do
      before do
        allow(primary_socket).to receive(:do_write).and_raise(error.dup)
      end

      context 'when the selected server does not support retryable writes' do
        before do
          legacy_primary = double('legacy primary', :retry_writes? => false)
          expect(collection).to receive(:select_server).and_return(primary_server, legacy_primary)
          expect(primary_socket).to receive(:do_write).and_raise(error.dup)
        end

        context 'when the error is a socket error' do
          let(:error) do
            IOError.new('first error')
          end

          let(:exposed_error_class) do
            Mongo::Error::SocketError
          end

          it 'does not retry writes and raises the original error' do
            expect do
              operation
            end.to raise_error(exposed_error_class, /first error/)
            expect(expectation).to eq(unsuccessful_retry_value)
          end
        end

        context 'when the error is a socket timeout error' do
          let(:error) do
            Errno::ETIMEDOUT.new('first error')
          end

          it 'does not retry writes and raises the original error' do
            expect do
              operation
              # The exception message is different because of added diagnostics.
            end.to raise_error(Mongo::Error::SocketTimeoutError, /first error/)
            expect(expectation).to eq(unsuccessful_retry_value)
          end
        end

        context 'when the error is a retryable OperationFailure' do
          let(:error) do
            Mongo::Error::OperationFailure.new('not master')
          end

          it 'does not retry writes and raises the original error' do
            expect do
              operation
            end.to raise_error(Mongo::Error::OperationFailure, /not master/)
            expect(expectation).to eq(unsuccessful_retry_value)
          end
        end
      end

      [
        [IOError, 'first error', Mongo::Error::SocketError],
        [Errno::ETIMEDOUT, 'first error', Mongo::Error::SocketTimeoutError],
        [Mongo::Error::OperationFailure, 'first error: not master', Mongo::Error::OperationFailure],
        [Mongo::Error::OperationFailure, 'first error: node is recovering', Mongo::Error::OperationFailure],
      ].each do |error_cls, error_msg, exposed_first_error_class|
        # Note: actual exception instances must be different between tests
        context "when the first error is a #{error_cls}/#{error_msg}" do
          let(:error) do
            error_cls.new(error_msg)
          end

          before do
            wait_for_all_servers(client.cluster)

            bad_socket = primary_connection.address.socket(
              primary_connection.socket_timeout,
              primary_connection.send(:ssl_options))
            good_socket = primary_connection.address.socket(
              primary_connection.socket_timeout,
              primary_connection.send(:ssl_options))
            allow(bad_socket).to receive(:do_write).and_raise(second_error.dup)
            allow(primary_connection.address).to receive(:socket).and_return(bad_socket, good_socket)
          end

          context 'when the second error is a socket error' do
            let(:second_error) do
              IOError.new('second error')
            end

            let(:exposed_error_class) do
              Mongo::Error::SocketError
            end

            it 'raises the second error' do
              expect do
                operation
              end.to raise_error(exposed_error_class, /second error/)
              expect(expectation).to eq(unsuccessful_retry_value)
            end

            it 'indicates server used for operation' do
              expect do
                operation
              end.to raise_error(Mongo::Error, /on #{ClusterConfig.instance.primary_address_str}/)
            end

            it 'indicates second attempt' do
              expect do
                operation
              end.to raise_error(Mongo::Error, /attempt 2/)
            end
          end

          context 'when the second error is a socket timeout error' do
            let(:second_error) do
              Errno::ETIMEDOUT.new('second error')
            end

            let(:exposed_error_class) do
              Mongo::Error::SocketTimeoutError
            end

            it 'raises the second error' do
              expect do
                operation
              end.to raise_error(exposed_error_class, /second error/)
              expect(expectation).to eq(unsuccessful_retry_value)
            end
          end

          context 'when the second error is a retryable OperationFailure' do
            let(:second_error) do
              Mongo::Error::OperationFailure.new('second error: not master')
            end

            it 'raises the second error' do
              expect do
                operation
              end.to raise_error(Mongo::Error, /second error: not master/)
              expect(expectation).to eq(unsuccessful_retry_value)
            end
          end

          context 'when the second error is a non-retryable OperationFailure' do
            let(:second_error) do
              Mongo::Error::OperationFailure.new('other error')
            end

            it 'does not retry writes and raises the first error' do
              expect do
                operation
              end.to raise_error(exposed_first_error_class, /first error/)
              expect(expectation).to eq(unsuccessful_retry_value)
            end
          end

          # The driver shouldn't be producing non-Mongo::Error derived errors,
          # but if those are produced (like ArgumentError), they would be
          # immediately propagated to the application.
          context 'when the second error is another error' do
            let(:second_error) do
              StandardError.new('second error')
            end

            it 'raises the second error' do
              expect do
                operation
              end.to raise_error(StandardError, /second error/)
              expect(expectation).to eq(unsuccessful_retry_value)
            end
          end
        end
      end
    end
  end

  shared_examples_for 'an operation that is not retried' do
    let!(:client) do
      authorized_client_without_retry_writes
    end

    before do
      expect(primary_socket).to receive(:do_write).exactly(:once).and_raise(Mongo::Error::SocketError)
    end

    it 'does not retry writes' do
      expect do
        operation
      end.to raise_error(Mongo::Error::SocketError)
      expect(expectation).to eq(unsuccessful_retry_value)
    end
  end

  shared_examples_for 'an operation that does not support retryable writes' do
    let!(:client) do
      authorized_client_with_retry_writes
    end

    let!(:collection) do
      client[TEST_COLL]
    end

    before do
      expect(primary_socket).to receive(:do_write).and_raise(Mongo::Error::SocketError)
    end

    it 'does not retry writes' do
      expect do
        operation
      end.to raise_error(Mongo::Error::SocketError)
      expect(expectation).to eq(unsuccessful_retry_value)
    end
  end

  shared_examples_for 'operation that is retried when server supports retryable writes' do
    context 'when the server supports retryable writes' do
      min_server_fcv '3.6'

      before do
        allow(primary_server).to receive(:retry_writes?).and_return(true)
      end

      context 'standalone' do
        require_topology :single

        it_behaves_like 'an operation that is not retried'
      end

      context 'replica set or sharded cluster' do
        require_topology :replica_set, :sharded

        it_behaves_like 'an operation that is retried'
      end
    end

    context 'when the server does not support retryable writes' do
      before do
        allow(primary_server).to receive(:retry_writes?).and_return(false)
      end

      it_behaves_like 'an operation that is not retried'
    end
  end

  shared_examples_for 'supported retryable writes' do
    context 'when the client has retry_writes set to true' do
      let!(:client) do
        authorized_client_with_retry_writes
      end

      context 'when the collection has write concern acknowledged' do
        let!(:collection) do
          client[TEST_COLL, write: {w: :majority}]
        end

        it_behaves_like 'operation that is retried when server supports retryable writes'
      end

      context 'when the collection has write concern unacknowledged' do
        let!(:collection) do
          client[TEST_COLL, write: { w: 0 }]
        end

        it_behaves_like 'an operation that is not retried'
      end
    end

    context 'when the client has retry_writes set to false' do
      let!(:client) do
        authorized_client_without_retry_writes
      end

      context 'when the collection has write concern acknowledged' do
        let!(:collection) do
          client[TEST_COLL, write: {w: :majority}]
        end

        it_behaves_like 'an operation that is not retried'
      end

      context 'when the collection has write concern unacknowledged' do
        let!(:collection) do
          client[TEST_COLL, write: { w: 0 }]
        end

        it_behaves_like 'an operation that is not retried'
      end

      context 'when the collection has write concern not set' do
        let!(:collection) do
          client[TEST_COLL]
        end

        it_behaves_like 'an operation that is not retried'
      end
    end
  end

  context 'when the operation is insert_one' do
    let(:operation) do
      collection.insert_one(a: 1)
    end

    let(:expectation) do
      check_collection.find(a: 1).count
    end

    let(:successful_retry_value) do
      1
    end

    let(:unsuccessful_retry_value) do
      0
    end

    it_behaves_like 'supported retryable writes'
  end

  context 'when the operation is update_one' do
    before do
      # Account for when the collection has unacknowledged write concern
      # and use authorized_collection here.
      authorized_collection.insert_one(a: 0)
    end

    let(:operation) do
      collection.update_one({ a: 0 }, { '$set' => { a: 1 } })
    end

    let(:expectation) do
      check_collection.find(a: 1).count
    end

    let(:successful_retry_value) do
      1
    end

    let(:unsuccessful_retry_value) do
      0
    end

    it_behaves_like 'supported retryable writes'
  end

  context 'when the operation is replace_one' do
    before do
      # Account for when the collection has unacknowledged write concern
      # and use authorized_collection here.
      authorized_collection.insert_one(a: 0)
    end

    let(:operation) do
      collection.replace_one({ a: 0 }, { a: 1 })
    end

    let(:expectation) do
      check_collection.find(a: 1).count
    end

    let(:successful_retry_value) do
      1
    end

    let(:unsuccessful_retry_value) do
      0
    end

    it_behaves_like 'supported retryable writes'
  end

  context 'when the operation is delete_one' do
    before do
      # Account for when the collection has unacknowledged write concern
      # and use authorized_collection here.
      authorized_collection.insert_one(a: 1)
    end

    let(:operation) do
      collection.delete_one(a: 1)
    end

    let(:expectation) do
      check_collection.find(a: 1).count
    end

    let(:successful_retry_value) do
      0
    end

    let(:unsuccessful_retry_value) do
      1
    end

    it_behaves_like 'supported retryable writes'
  end

  context 'when the operation is find_one_and_update' do
    before do
      # Account for when the collection has unacknowledged write concern
      # and use authorized_collection here.
      authorized_collection.insert_one(a: 0)
    end

    let(:operation) do
      collection.find_one_and_update({ a: 0 }, { '$set' => { a: 1 } })
    end

    let(:expectation) do
      check_collection.find(a: 1).count
    end

    let(:successful_retry_value) do
      1
    end

    let(:unsuccessful_retry_value) do
      0
    end

    it_behaves_like 'supported retryable writes'
  end

  context 'when the operation is find_one_and_replace' do
    before do
      # Account for when the collection has unacknowledged write concern
      # and use authorized_collection here.
      authorized_collection.insert_one(a: 0)
    end

    let(:operation) do
      collection.find_one_and_replace({ a: 0 }, { a: 3 })
    end

    let(:expectation) do
      check_collection.find(a: 3).count
    end

    let(:successful_retry_value) do
      1
    end

    let(:unsuccessful_retry_value) do
      0
    end

    it_behaves_like 'supported retryable writes'
  end

  context 'when the operation is find_one_and_delete' do
    before do
      # Account for when the collection has unacknowledged write concern
      # and use authorized_collection here.
      authorized_collection.insert_one(a: 1)
    end

    let(:operation) do
      collection.find_one_and_delete({ a: 1 })
    end

    let(:expectation) do
      check_collection.find(a: 1).count
    end

    let(:successful_retry_value) do
      0
    end

    let(:unsuccessful_retry_value) do
      1
    end

    it_behaves_like 'supported retryable writes'
  end

  context 'when the operation is update_many' do
    before do
      # Account for when the collection has unacknowledged write concern
      # and use authorized_collection here.
      authorized_collection.insert_one(a: 0)
      authorized_collection.insert_one(a: 0)
    end

    let(:operation) do
      collection.update_many({ a: 0 }, { '$set' => { a: 1 } })
    end

    let(:expectation) do
      check_collection.find(a: 1).count
    end

    let(:unsuccessful_retry_value) do
      0
    end

    it_behaves_like 'an operation that does not support retryable writes'
  end

  context 'when the operation is delete_many' do
    before do
      # Account for when the collection has unacknowledged write concern
      # and use authorized_collection here.
      authorized_collection.insert_one(a: 1)
      authorized_collection.insert_one(a: 1)
    end

    let(:operation) do
      collection.delete_many(a: 1)
    end

    let(:expectation) do
      check_collection.find(a: 1).count
    end

    let(:unsuccessful_retry_value) do
      2
    end

    it_behaves_like 'an operation that does not support retryable writes'
  end

  context 'when the operation is a bulk write' do
    before do
      # Account for when the collection has unacknowledged write concern
      # and use authorized_collection here.
      authorized_collection.insert_one(a: 1)
    end

    let(:operation) do
      collection.bulk_write([
        { delete_one: { filter: { a: 1 } } },
        { insert_one: { a: 1 } },
        { insert_one: { a: 1 } }
      ])
    end

    let(:expectation) do
      check_collection.find(a: 1).count
    end

    let(:successful_retry_value) do
      2
    end

    let(:unsuccessful_retry_value) do
      1
    end

    it_behaves_like 'supported retryable writes'
  end

  context 'when the operation is bulk write including delete_many' do
    before do
      # Account for when the collection has unacknowledged write concern
      # and use authorized_collection here.
      authorized_collection.insert_one(a: 1)
      authorized_collection.insert_one(a: 1)
    end

    let(:operation) do
      collection.bulk_write([{ delete_many: { filter: { a: 1 } } }])
    end

    let(:expectation) do
      check_collection.find(a: 1).count
    end

    let(:unsuccessful_retry_value) do
      2
    end

    it_behaves_like 'an operation that does not support retryable writes'
  end

  context 'when the operation is bulk write including update_many' do
    before do
      # Account for when the collection has unacknowledged write concern
      # and use authorized_collection here.
      authorized_collection.insert_one(a: 0)
      authorized_collection.insert_one(a: 0)
    end

    let(:operation) do
      collection.bulk_write([{ update_many: { filter: { a: 0 }, update: { "$set" => { a: 1 } } } }])
    end

    let(:expectation) do
      check_collection.find(a: 1).count
    end

    let(:unsuccessful_retry_value) do
      0
    end

    it_behaves_like 'an operation that does not support retryable writes'
  end

  context 'when the operation is database#command' do
    let(:operation) do
      collection.database.command(ping: 1)
    end

    let(:expectation) do
      0
    end

    let(:unsuccessful_retry_value) do
      0
    end

    it_behaves_like 'an operation that does not support retryable writes'
  end
end

mongo-ruby-driver-2.21.3/spec/integration/retryable_writes/retryable_writes_40_and_newer_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

require_relative './shared/supports_retries'
require_relative './shared/only_supports_legacy_retries'
require_relative './shared/does_not_support_retries'

describe 'Retryable Writes' do
  require_fail_command
  require_wired_tiger
  require_no_multi_mongos
  require_warning_clean

  let(:client) do
    authorized_client.with(
      socket_timeout: socket_timeout,
      retry_writes: retry_writes,
      max_write_retries: max_write_retries,
    )
  end

  let(:socket_timeout) { nil }
  let(:retry_writes) { nil }
  let(:max_write_retries) { nil }

  let(:collection) { client['test'] }

  before do
    collection.drop
  end

  context 'collection#insert_one' do
    let(:command_name) { 'insert' }

    let(:perform_operation) do
      collection.insert_one(_id: 1)
    end

    let(:actual_result) do
      collection.count(_id: 1)
    end

    let(:expected_successful_result) do
      1
    end

    let(:expected_failed_result) do
      0
    end

    it_behaves_like 'it supports retries'
  end

  context 'collection#update_one' do
    before do
      collection.insert_one(_id: 1)
    end

    let(:command_name) { 'update' }

    let(:perform_operation) do
      collection.update_one({ _id: 1 }, { '$set' => { a: 1 } })
    end

    let(:actual_result) do
      collection.count(a: 1)
    end
    let(:expected_successful_result) do
      1
    end

    let(:expected_failed_result) do
      0
    end

    it_behaves_like 'it supports retries'
  end

  context 'collection#replace_one' do
    before do
      collection.insert_one(_id: 1, text: 'hello world')
    end

    let(:command_name) { 'update' }

    let(:perform_operation) do
      collection.replace_one({ text: 'hello world' }, { text: 'goodbye' })
    end

    let(:actual_result) do
      collection.count(text: 'goodbye')
    end

    let(:expected_successful_result) do
      1
    end

    let(:expected_failed_result) do
      0
    end

    it_behaves_like 'it supports retries'
  end

  context 'collection#delete_one' do
    before do
      collection.insert_one(_id: 1)
    end

    let(:command_name) { 'delete' }

    let(:perform_operation) do
      collection.delete_one(_id: 1)
    end

    let(:actual_result) do
      collection.count(_id: 1)
    end

    let(:expected_successful_result) do
      0
    end

    let(:expected_failed_result) do
      1
    end

    it_behaves_like 'it supports retries'
  end

  context 'collection#find_one_and_update' do
    before do
      collection.insert_one(_id: 1)
    end

    let(:command_name) { 'findAndModify' }

    let(:perform_operation) do
      collection.find_one_and_update({ _id: 1 }, { '$set' => { text: 'hello world' } })
    end

    let(:actual_result) do
      collection.count(text: 'hello world')
    end

    let(:expected_successful_result) do
      1
    end

    let(:expected_failed_result) do
      0
    end

    it_behaves_like 'it supports retries'
  end

  context 'collection#find_one_and_replace' do
    before do
      collection.insert_one(_id: 1, text: 'hello world')
    end

    let(:command_name) { 'findAndModify' }

    let(:perform_operation) do
      collection.find_one_and_replace({ text: 'hello world' }, { text: 'goodbye' })
    end

    let(:actual_result) do
      collection.count(text: 'goodbye')
    end

    let(:expected_successful_result) do
      1
    end

    let(:expected_failed_result) do
      0
    end

    it_behaves_like 'it supports retries'
  end

  context 'collection#find_one_and_delete' do
    before do
      collection.insert_one(_id: 1)
    end

    let(:command_name) { 'findAndModify' }

    let(:perform_operation) do
      collection.find_one_and_delete(_id: 1)
    end

    let(:actual_result) do
      collection.count(_id: 1)
    end

    let(:expected_successful_result) do
      0
    end

    let(:expected_failed_result) do
      1
    end

    it_behaves_like 'it supports retries'
  end

  context 'collection#update_many' do
    let(:command_name) { 'update' }

    before do
      collection.insert_one(_id: 1, text: 'hello world')
      collection.insert_one(_id: 2, text: 'hello world')
    end

    let(:perform_operation) do
      collection.update_many({ text: 'hello world' }, { '$set' => { text: 'goodbye' } })
    end

    let(:actual_result) do
      collection.count(text: 'goodbye')
    end

    let(:expected_successful_result) do
      2
    end

    let(:expected_failed_result) do
      0
    end

    it_behaves_like 'it only supports legacy retries'
  end

  context 'collection#delete_many' do
    let(:command_name) { 'delete' }

    before do
      collection.insert_one(_id: 1, text: 'hello world')
      collection.insert_one(_id: 2, text: 'hello world')
    end

    let(:perform_operation) do
      collection.delete_many(text: 'hello world')
    end

    let(:actual_result) do
      collection.count(text: 'hello world')
    end

    let(:expected_successful_result) do
      0
    end

    let(:expected_failed_result) do
      2
    end

    it_behaves_like 'it only supports legacy retries'
  end

  context 'collection#bulk_write' do
    context 'with insert_one' do
      let(:command_name) { 'insert' }

      let(:perform_operation) do
        collection.bulk_write([{ insert_one: { _id: 1 } }])
      end

      let(:actual_result) do
        collection.count(_id: 1)
      end

      let(:expected_successful_result) do
        1
      end

      let(:expected_failed_result) do
        0
      end

      it_behaves_like 'it supports retries'
    end

    context 'with delete_one' do
      let(:command_name) { 'delete' }

      before do
        collection.insert_one(_id: 1)
      end
      let(:perform_operation) { collection.bulk_write([{ delete_one: { filter: { _id: 1 } } }]) }
      let(:actual_result) { collection.count(_id: 1) }
      let(:expected_successful_result) { 0 }
      let(:expected_failed_result) { 1 }

      it_behaves_like 'it supports retries'
    end

    context 'with update_one' do
      let(:command_name) { 'update' }

      before do
        collection.insert_one(_id: 1, text: 'hello world')
      end

      let(:perform_operation) do
        collection.bulk_write([{ update_one: { filter: { text: 'hello world' },
                                               update: { '$set' => { text: 'goodbye' } } } }])
      end
      let(:actual_result) { collection.count(text: 'goodbye') }
      let(:expected_successful_result) { 1 }
      let(:expected_failed_result) { 0 }

      it_behaves_like 'it supports retries'
    end

    context 'with delete_many' do
      let(:command_name) { 'delete' }

      before do
        collection.insert_one(_id: 1, text: 'hello world')
        collection.insert_one(_id: 2, text: 'hello world')
      end

      let(:perform_operation) do
        collection.bulk_write([{ delete_many: { filter: { text: 'hello world' } } }])
      end
      let(:actual_result) { collection.count(text: 'hello world') }
      let(:expected_successful_result) { 0 }
      let(:expected_failed_result) { 2 }

      it_behaves_like 'it only supports legacy retries'
    end

    context 'with update_many' do
      let(:command_name) { 'update' }

      before do
        collection.insert_one(_id: 1, text: 'hello world')
        collection.insert_one(_id: 2, text: 'hello world')
      end

      let(:perform_operation) do
        collection.bulk_write([{ update_many: { filter: { text: 'hello world' },
                                                update: { '$set' => { text: 'goodbye' } } } }])
      end
      let(:actual_result) { collection.count(text: 'goodbye') }
      let(:expected_successful_result) { 2 }
      let(:expected_failed_result) { 0 }

      it_behaves_like 'it only supports legacy retries'
    end
  end

  context 'database#command' do
    let(:command_name) { 'ping' }
    let(:perform_operation) { collection.database.command(ping: 1) }

    it_behaves_like 'it does not support retries'
  end
end

mongo-ruby-driver-2.21.3/spec/integration/retryable_writes/shared/adds_diagnostics.rb

# frozen_string_literal: true
# rubocop:todo all

module AddsDiagnostics
  shared_examples 'it adds diagnostics' do
    it 'indicates the server used for the operation' do
      expect do
        perform_operation
      end.to raise_error(Mongo::Error, /on #{ClusterConfig.instance.primary_address_str}/)
    end

    it 'indicates the second attempt' do
      expect do
        perform_operation
      end.to raise_error(Mongo::Error, /attempt 2/)
    end
  end
end

mongo-ruby-driver-2.21.3/spec/integration/retryable_writes/shared/does_not_support_retries.rb

# frozen_string_literal: true
# rubocop:todo all

require_relative './performs_no_retries'

module DoesNotSupportRetries
  shared_examples 'it does not support retries' do
    context 'when retry_writes is true' do
      let(:retry_writes) { true }

      it_behaves_like 'it performs no retries'
    end

    context 'when retry_writes is false' do
      let(:retry_writes) { false }

      it_behaves_like 'it performs no retries'
    end

    context 'when retry_writes is false and max_write_retries is 0' do
      let(:retry_writes) { false }
      let(:max_write_retries) { 0 }

      it_behaves_like 'it performs no retries'
    end
  end
end
mongo-ruby-driver-2.21.3/spec/integration/retryable_writes/shared/only_supports_legacy_retries.rb

# frozen_string_literal: true
# rubocop:todo all

require_relative './performs_no_retries'
require_relative './performs_legacy_retries'

module OnlySupportsLegacyRetries
  shared_examples 'it only supports legacy retries' do
    context 'when retry_writes is true' do
      let(:retry_writes) { true }

      it_behaves_like 'it performs no retries'
    end

    context 'when retry_writes is false' do
      let(:retry_writes) { false }

      it_behaves_like 'it performs legacy retries'
    end

    context 'when retry_writes is false and max_write_retries is 0' do
      let(:retry_writes) { false }
      let(:max_write_retries) { 0 }

      it_behaves_like 'it performs no retries'
    end
  end
end

mongo-ruby-driver-2.21.3/spec/integration/retryable_writes/shared/performs_legacy_retries.rb

# frozen_string_literal: true
# rubocop:todo all

require_relative './adds_diagnostics'

module PerformsLegacyRetries
  shared_examples 'it performs legacy retries' do
    require_warning_clean

    context 'for connection error' do
      before do
        client.use('admin').command(
          configureFailPoint: 'failCommand',
          mode: { times: 1 },
          data: {
            failCommands: [command_name],
            closeConnection: true,
          }
        )
      end

      it 'does not retry the operation' do
        expect(Mongo::Logger.logger).not_to receive(:warn).with(/legacy/)
        expect do
          perform_operation
        end.to raise_error(Mongo::Error::SocketError)
        expect(actual_result).to eq(expected_failed_result)
      end
    end

    context 'for ETIMEDOUT' do
      min_server_fcv '4.4'

      # shorten socket timeout so these tests take less time to run
      let(:socket_timeout) { 1 }

      before do
        client.use('admin').command(
          configureFailPoint: 'failCommand',
          mode: { times: 1 },
          data: {
            failCommands: [command_name],
            blockConnection: true,
            blockTimeMS: 1100,
          }
        )
      end

      it 'does not retry the operation' do
        expect(Mongo::Logger.logger).not_to receive(:warn).with(/legacy/)
        expect do
          perform_operation
        end.to raise_error(Mongo::Error::SocketTimeoutError)
      end

      after do
        # Ensure that the server has completed the operation before moving
        # on to the next test.
        sleep 1
      end
    end

    context 'on server versions >= 4.4' do
      min_server_fcv '4.4'

      context 'for OperationFailure with RetryableWriteError label' do
        before do
          client.use('admin').command(
            configureFailPoint: 'failCommand',
            mode: { times: times },
            data: {
              failCommands: [command_name],
              errorCode: 5, # normally NOT a retryable error code
              errorLabels: ['RetryableWriteError']
            }
          )
        end

        context 'when error occurs once' do
          let(:times) { 1 }

          it 'retries the operation and succeeds' do
            expect(Mongo::Logger.logger).to receive(:warn).once.with(/legacy/).and_call_original
            perform_operation
            expect(actual_result).to eq(expected_successful_result)
          end
        end

        context 'when error occurs twice' do
          let(:times) { 2 }

          it 'retries the operation and fails' do
            expect(Mongo::Logger.logger).to receive(:warn).once.with(/legacy/).and_call_original
            expect do
              perform_operation
            end.to raise_error(Mongo::Error::OperationFailure, /5/)
            expect(actual_result).to eq(expected_failed_result)
          end

          it_behaves_like 'it adds diagnostics'

          context 'and max_write_retries is set to 2' do
            let(:max_write_retries) { 2 }

            it 'retries twice and the operation succeeds' do
              expect(Mongo::Logger.logger).to receive(:warn).twice.with(/legacy/).and_call_original
              perform_operation
              expect(actual_result).to eq(expected_successful_result)
            end
          end
        end
      end

      context 'for OperationFailure without RetryableWriteError label' do
        before do
          client.use('admin').command(
            configureFailPoint: 'failCommand',
            mode: { times: 1 },
            data: {
              failCommands: [command_name],
              errorCode: 91, # normally a retryable error code
              errorLabels: [],
            }
          )
        end

        it 'raises the error' do
          expect(Mongo::Logger.logger).not_to receive(:warn)
          expect do
            perform_operation
          end.to raise_error(Mongo::Error::OperationFailure, /91/)
          expect(actual_result).to eq(expected_failed_result)
        end
      end
    end

    context 'on server versions < 4.4' do
      max_server_fcv '4.2'

      context 'for OperationFailure with retryable code' do
        before do
          client.use('admin').command(
            configureFailPoint: 'failCommand',
            mode: { times: times },
            data: {
              failCommands: [command_name],
              errorCode: 91, # a retryable error code
            }
          )
        end

        context 'when error occurs once' do
          let(:times) { 1 }

          it 'retries the operation and succeeds' do
            expect(Mongo::Logger.logger).to receive(:warn).once.with(/legacy/).and_call_original
            perform_operation
            expect(actual_result).to eq(expected_successful_result)
          end
        end

        context 'when error occurs twice' do
          let(:times) { 2 }

          it 'retries the operation and fails' do
            expect(Mongo::Logger.logger).to receive(:warn).once.with(/legacy/).and_call_original
            expect do
              perform_operation
            end.to raise_error(Mongo::Error::OperationFailure, /91/)
            expect(actual_result).to eq(expected_failed_result)
          end

          it_behaves_like 'it adds diagnostics'

          context 'and max_write_retries is set to 2' do
            let(:max_write_retries) { 2 }

            it 'retries twice and the operation succeeds' do
              expect(Mongo::Logger.logger).to receive(:warn).twice.with(/legacy/).and_call_original
              perform_operation
              expect(actual_result).to eq(expected_successful_result)
            end
          end
        end
      end

      context 'for OperationFailure with non-retryable code' do
        before do
          client.use('admin').command(
            configureFailPoint: 'failCommand',
            mode: { times: 1 },
            data: {
              failCommands: [command_name],
              errorCode: 5, # a non-retryable error code
            }
          )
        end

        it 'raises the error' do
          expect(Mongo::Logger.logger).not_to receive(:warn)
          expect do
            perform_operation
          end.to raise_error(Mongo::Error::OperationFailure, /5/)
          expect(actual_result).to eq(expected_failed_result)
        end
      end
    end
  end
end
mongo-ruby-driver-2.21.3/spec/integration/retryable_writes/shared/performs_modern_retries.rb

# frozen_string_literal: true
# rubocop:todo all

require_relative './adds_diagnostics'

module PerformsModernRetries
  shared_examples 'it performs modern retries' do
    context 'for connection error' do
      before do
        client.use('admin').command(
          configureFailPoint: 'failCommand',
          mode: { times: times },
          data: {
            failCommands: [command_name],
            closeConnection: true,
          }
        )
      end

      context 'when error occurs once' do
        let(:times) { 1 }

        it 'retries and the operation succeeds' do
          expect(Mongo::Logger.logger).to receive(:warn).once.with(/modern.*attempt 1/).and_call_original
          perform_operation
          expect(actual_result).to eq(expected_successful_result)
        end
      end

      context 'when error occurs twice' do
        let(:times) { 2 }

        it 'retries the operation and fails' do
          expect(Mongo::Logger.logger).to receive(:warn).once.with(/modern.*attempt 1/).and_call_original
          expect do
            perform_operation
          end.to raise_error(Mongo::Error::SocketError)
          expect(actual_result).to eq(expected_failed_result)
        end

        it_behaves_like 'it adds diagnostics'
      end
    end

    context 'for ETIMEDOUT' do
      # blockConnection option in failCommand was introduced in
      # server version 4.4
      min_server_fcv '4.4'

      # shorten socket timeout so these tests take less time to run
      let(:socket_timeout) { 1 }

      before do
        client.use('admin').command(
          configureFailPoint: 'failCommand',
          mode: { times: times },
          data: {
            failCommands: [command_name],
            blockConnection: true,
            blockTimeMS: 1100,
          }
        )
      end

      context 'when error occurs once' do
        let(:times) { 1 }

        it 'retries and the operation succeeds' do
          expect(Mongo::Logger.logger).to receive(:warn).once.with(/modern.*attempt 1/).and_call_original
          perform_operation
          expect(actual_result).to eq(expected_successful_result)
        end
      end

      context 'when error occurs twice' do
        let(:times) { 2 }

        it 'retries the operation and fails' do
          expect(Mongo::Logger.logger).to receive(:warn).once.with(/modern.*attempt 1/).and_call_original
          expect do
            perform_operation
          end.to raise_error(Mongo::Error::SocketTimeoutError)
        end

        it_behaves_like 'it adds diagnostics'

        after do
          # Ensure that the server has completed the operation before moving
          # on to the next test.
          sleep 1
        end
      end
    end

    context 'on server versions >= 4.4' do
      min_server_fcv '4.4'

      context 'for OperationFailure with RetryableWriteError label' do
        before do
          client.use('admin').command(
            configureFailPoint: 'failCommand',
            mode: { times: times },
            data: {
              failCommands: [command_name],
              errorCode: 5, # normally NOT a retryable error code
              errorLabels: ['RetryableWriteError']
            }
          )
        end

        context 'when error occurs once' do
          let(:times) { 1 }

          it 'retries the operation and succeeds' do
            expect(Mongo::Logger.logger).to receive(:warn).once.with(/modern.*attempt 1/).and_call_original
            perform_operation
            expect(actual_result).to eq(expected_successful_result)
          end
        end

        context 'when error occurs twice' do
          let(:times) { 2 }

          it 'retries the operation and fails' do
            expect(Mongo::Logger.logger).to receive(:warn).once.with(/modern.*attempt 1/).and_call_original
            expect do
              perform_operation
            end.to raise_error(Mongo::Error::OperationFailure, /5/)
            expect(actual_result).to eq(expected_failed_result)
          end

          it_behaves_like 'it adds diagnostics'
        end
      end

      context 'for OperationFailure without RetryableWriteError label' do
        before do
          client.use('admin').command(
            configureFailPoint: 'failCommand',
            mode: { times: 1 },
            data: {
              failCommands: [command_name],
              errorCode: 91, # normally a retryable error code
              errorLabels: [],
            }
          )
        end

        it 'raises the error' do
          expect(Mongo::Logger.logger).not_to receive(:warn)
          expect do
            perform_operation
          end.to raise_error(Mongo::Error::OperationFailure, /91/)
          expect(actual_result).to eq(expected_failed_result)
        end
      end
    end

    context 'on server versions < 4.4' do
      max_server_fcv '4.2'

      context 'for OperationFailure with retryable code' do
        before do
          client.use('admin').command(
            configureFailPoint: 'failCommand',
            mode: { times: times },
            data: {
              failCommands: [command_name],
              errorCode: 91, # a retryable error code
            }
          )
        end

        context 'when error occurs once' do
          let(:times) { 1 }

          it 'retries and the operation succeeds' do
            expect(Mongo::Logger.logger).to receive(:warn).once.with(/modern.*attempt 1/).and_call_original
            perform_operation
            expect(actual_result).to eq(expected_successful_result)
          end
        end

        context 'when error occurs twice' do
          let(:times) { 2 }

          it 'retries the operation and fails' do
            expect(Mongo::Logger.logger).to receive(:warn).once.with(/modern.*attempt 1/).and_call_original
            expect do
              perform_operation
            end.to raise_error(Mongo::Error::OperationFailure, /91/)
            expect(actual_result).to eq(expected_failed_result)
          end

          it_behaves_like 'it adds diagnostics'
        end
      end

      context 'for OperationFailure with non-retryable code' do
        before do
          client.use('admin').command(
            configureFailPoint: 'failCommand',
            mode: { times: times },
            data: {
              failCommands: [command_name],
              errorCode: 5, # a non-retryable error code
            }
          )
        end

        let(:times) { 1 }

        it 'raises the error' do
          expect(Mongo::Logger.logger).not_to receive(:warn)
          expect do
            perform_operation
          end.to raise_error(Mongo::Error::OperationFailure, /5/)
          expect(actual_result).to eq(expected_failed_result)
        end
      end
    end
  end
end

mongo-ruby-driver-2.21.3/spec/integration/retryable_writes/shared/performs_no_retries.rb

# frozen_string_literal: true
# rubocop:todo all

module PerformsNoRetries
  shared_examples 'it performs no retries' do
    # required for failCommand
    min_server_fcv '4.0'

    context 'for connection error' do
      before do
        client.use('admin').command(
          configureFailPoint: 'failCommand',
          mode: { times: 1 },
          data: {
            failCommands: [command_name],
            closeConnection: true,
          }
        )
      end

      it 'does not retry the operation' do
        expect(Mongo::Logger.logger).not_to receive(:warn)
        expect do
          perform_operation
        end.to raise_error(Mongo::Error::SocketError)
      end
    end

    context 'for ETIMEDOUT' do
      min_server_fcv '4.4'

      # shorten socket timeout so these tests take less time to run
      let(:socket_timeout) { 1 }

      before do
        client.use('admin').command(
          configureFailPoint: 'failCommand',
          mode: { times: 1 },
          data: {
            failCommands: [command_name],
            blockConnection: true,
            blockTimeMS: 1100,
          }
        )
      end

      it 'does not retry the operation' do
        expect(Mongo::Logger.logger).not_to receive(:warn)
        expect do
          perform_operation
        end.to raise_error(Mongo::Error::SocketTimeoutError)
      end

      after do
        # Ensure that the server has completed the operation before moving
        # on to the next test.
        sleep 1
      end
    end

    context 'on server versions >= 4.4' do
      min_server_fcv '4.4'

      # These tests will be implemented in a follow-up PR
    end

    context 'on server versions < 4.4' do
      max_server_fcv '4.2'

      context 'for OperationFailure with retryable code' do
        before do
          client.use('admin').command(
            configureFailPoint: 'failCommand',
            mode: { times: 1 },
            data: {
              failCommands: [command_name],
              errorCode: 91, # a retryable error code
            }
          )
        end

        it 'does not retry the operation' do
          expect(Mongo::Logger.logger).not_to receive(:warn)
          expect do
            perform_operation
          end.to raise_error(Mongo::Error::OperationFailure, /91/)
        end
      end

      context 'for OperationFailure with non-retryable code' do
        before do
          client.use('admin').command(
            configureFailPoint: 'failCommand',
            mode: { times: 1 },
            data: {
              failCommands: [command_name],
              errorCode: 5, # a non-retryable error code
            }
          )
        end

        it 'does not retry the operation' do
          expect(Mongo::Logger.logger).not_to receive(:warn)
          expect do
            perform_operation
          end.to raise_error(Mongo::Error::OperationFailure, /5/)
        end
      end
    end
  end
end

mongo-ruby-driver-2.21.3/spec/integration/retryable_writes/shared/supports_legacy_retries.rb

# frozen_string_literal: true
# rubocop:todo all

require_relative './performs_legacy_retries'

module SupportsLegacyRetries
  shared_examples 'it supports legacy retries' do
    context 'when server does not support modern retries' do
      before do
        allow_any_instance_of(Mongo::Server).to receive(:retry_writes?).and_return(false)
      end

      it_behaves_like 'it performs legacy retries'
    end

    context 'when client is set to use legacy retries' do
      let(:retry_writes) { false }

      it_behaves_like 'it performs legacy retries'
    end
  end
end

mongo-ruby-driver-2.21.3/spec/integration/retryable_writes/shared/supports_modern_retries.rb

# frozen_string_literal: true
# rubocop:todo all

require_relative './performs_modern_retries'
require_relative './performs_no_retries'

module SupportsModernRetries
  shared_examples 'it supports modern retries' do
    let(:retry_writes) { true }

    context 'against a standalone server' do
      require_topology :single

      before(:all) do
        skip 'RUBY-2171: standalone topology currently uses legacy write retries ' \
          'by default. Standalone should NOT retry when modern retries are enabled.'
      end

      it_behaves_like 'it performs no retries'
    end

    context 'against a replica set or sharded cluster' do
      require_topology :replica_set, :sharded

      it_behaves_like 'it performs modern retries'
    end
  end
end

mongo-ruby-driver-2.21.3/spec/integration/retryable_writes/shared/supports_retries.rb

# frozen_string_literal: true
# rubocop:todo all

require_relative './supports_modern_retries'
require_relative './supports_legacy_retries'

module SupportsRetries
  shared_examples 'it supports retries' do
    it_behaves_like 'it supports modern retries'
    it_behaves_like 'it supports legacy retries'

    context 'when retry writes is off' do
      let(:retry_writes) { false }
      let(:max_write_retries) { 0 }

      it_behaves_like 'it performs no retries'
    end
  end
end

mongo-ruby-driver-2.21.3/spec/integration/retryable_writes_errors_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe 'Retryable writes errors tests' do
  let(:options) { {} }

  let(:client) do
    authorized_client.with(options.merge(retry_writes: true))
  end

  let(:collection) do
    client['retryable-writes-error-spec']
  end

  context 'when the storage engine does not support retryable writes but the server does' do
    require_mmapv1
    min_server_fcv '3.6'
    require_topology :replica_set, :sharded

    before do
      collection.delete_many
    end

    context 'when a retryable write is attempted' do
      it 'raises an actionable error message' do
        expect do
          collection.insert_one(a: 1)
        end.to raise_error(Mongo::Error::OperationFailure, /This MongoDB deployment does not support retryable writes. Please add retryWrites=false to your connection string or use the retry_writes: false Ruby client option/)
        expect(collection.find.count).to eq(0)
      end
    end
  end

  context "when encountering a NoWritesPerformed error after an error with a RetryableWriteError label" do
    require_topology :replica_set
    require_retry_writes
    min_server_version '4.4'

    let(:failpoint1) do
      {
        configureFailPoint: "failCommand",
        mode: { times: 1 },
        data: {
          writeConcernError: {
            code: 91,
            errorLabels: ["RetryableWriteError"],
          },
          failCommands: ["insert"],
        }
      }
    end

    let(:failpoint2) do
      {
        configureFailPoint: "failCommand",
        mode: { times: 1 },
        data: {
          errorCode: 10107,
          errorLabels: ["RetryableWriteError", "NoWritesPerformed"],
          failCommands: ["insert"],
        },
      }
    end

    let(:subscriber) { Mrss::EventSubscriber.new }

    before do
      authorized_client.subscribe(Mongo::Monitoring::COMMAND, subscriber)
      authorized_client.use(:admin).command(failpoint1)

      expect(authorized_collection.write_worker).to receive(:retry_write).once.and_wrap_original do |m, *args, **kwargs, &block|
        expect(args.first.code).to eq(91)
        authorized_client.use(:admin).command(failpoint2)
        m.call(*args, **kwargs, &block)
      end
    end

    after do
      authorized_client.use(:admin).command({
        configureFailPoint: "failCommand",
        mode: "off",
      })
    end

    it "returns the original error" do
      expect do
        authorized_collection.insert_one(x: 1)
      end.to raise_error(Mongo::Error::OperationFailure, /\[91\]/)
    end
  end

  context "PoolClearedError retryability test" do
    require_topology :single, :sharded
    require_no_multi_mongos
    require_fail_command
    require_retry_writes

    let(:options) { { max_pool_size: 1 } }

    let(:failpoint) do
      {
        configureFailPoint: "failCommand",
        mode: { times: 1 },
        data: {
          failCommands: [ "insert" ],
          errorCode: 91,
          blockConnection: true,
          blockTimeMS: 1000,
          errorLabels: ["RetryableWriteError"]
        }
      }
    end

    let(:subscriber) { Mrss::EventSubscriber.new }

    let(:threads) do
      threads = []
      threads << Thread.new do
        expect(collection.insert_one(x: 2)).to be_successful
      end
      threads << Thread.new do
        expect(collection.insert_one(x: 2)).to be_successful
      end
      threads
    end

    let(:insert_events) do
      subscriber.started_events.select { |e| e.command_name == "insert" }
    end

    let(:cmap_events) do
      subscriber.published_events
    end

    let(:event_types) do
      [
        Mongo::Monitoring::Event::Cmap::ConnectionCheckedOut,
        Mongo::Monitoring::Event::Cmap::ConnectionCheckOutFailed,
        Mongo::Monitoring::Event::Cmap::PoolCleared,
      ]
    end

    let(:check_out_results) do
      cmap_events.select do |e|
        event_types.include?(e.class)
      end
    end

    before do
      authorized_client.use(:admin).command(failpoint)
      client.subscribe(Mongo::Monitoring::COMMAND, subscriber)
      client.subscribe(Mongo::Monitoring::CONNECTION_POOL, subscriber)
    end

    it "retries on PoolClearedError" do
      # After the first insert fails, the pool is paused and retry is triggered.
      # Now, a race is started between the second insert acquiring a connection,
      # and the first retrying the write. Retries cause the cluster to be
      # rescanned and the pool to be unpaused, allowing the second checkout
      # to succeed (when it should fail). Therefore we want the second insert's
      # check out to win the race. This gives the check out a little head start.
      allow(collection.cluster.next_primary.pool).to receive(:ready).and_wrap_original do |m, *args, &block|
        ::Utils.wait_for_condition(3) do
          # check_out_results should contain:
          # - insert1 connection check out successful
          # - pool cleared
          # - insert2 connection check out failed
          # We wait here for the third event to happen before we ready the pool.
          cmap_events.select do |e|
            event_types.include?(e.class)
          end.length >= 3
        end
        m.call(*args, &block)
      end

      threads.map(&:join)

      expect(check_out_results[0]).to be_a(Mongo::Monitoring::Event::Cmap::ConnectionCheckedOut)
      expect(check_out_results[1]).to be_a(Mongo::Monitoring::Event::Cmap::PoolCleared)
      expect(check_out_results[2]).to be_a(Mongo::Monitoring::Event::Cmap::ConnectionCheckOutFailed)
      expect(insert_events.length).to eq(3)
    end

    after do
      authorized_client.use(:admin).command({
        configureFailPoint: "failCommand",
        mode: "off",
      })
    end
  end

  context 'Retries in a sharded cluster' do
    require_topology :sharded
    min_server_version '4.2'
    require_no_auth

    let(:subscriber) { Mrss::EventSubscriber.new }

    let(:insert_started_events) do
      subscriber.started_events.select { |e| e.command_name == "insert" }
    end

    let(:insert_failed_events) do
      subscriber.failed_events.select { |e| e.command_name == "insert" }
    end

    let(:insert_succeeded_events) do
      subscriber.succeeded_events.select { |e| e.command_name == "insert" }
    end

    context 'when another mongos is available' do
      let(:first_mongos) do
        Mongo::Client.new(
          [SpecConfig.instance.addresses.first],
          direct_connection: true,
          database: 'admin'
        )
      end

      let(:second_mongos) do
        Mongo::Client.new(
          [SpecConfig.instance.addresses.last],
          direct_connection: false,
          database: 'admin'
        )
      end

      let(:client) do
        new_local_client(
          [
            SpecConfig.instance.addresses.first,
            SpecConfig.instance.addresses.last,
          ],
          SpecConfig.instance.test_options.merge(retry_writes: true)
        )
      end

      let(:expected_servers) do
        [
          SpecConfig.instance.addresses.first.to_s,
          SpecConfig.instance.addresses.last.to_s
        ].sort
      end

      before do
        skip 'This test requires at least two mongos' if SpecConfig.instance.addresses.length < 2

        first_mongos.database.command(
          configureFailPoint: 'failCommand',
          mode: { times: 1 },
          data: {
            failCommands: %w(insert),
            closeConnection: false,
            errorCode: 6,
            errorLabels: ['RetryableWriteError']
          }
        )
        second_mongos.database.command(
          configureFailPoint: 'failCommand',
          mode: { times: 1 },
          data: {
            failCommands: %w(insert),
            closeConnection: false,
            errorCode: 6,
            errorLabels: ['RetryableWriteError']
          }
        )
      end

      after do
        [first_mongos, second_mongos].each do |admin_client|
          admin_client.database.command(
            configureFailPoint: 'failCommand',
            mode: 'off'
          )
          admin_client.close
        end
        client.close
      end

      it 'retries on different mongos' do
        client.subscribe(Mongo::Monitoring::COMMAND, subscriber)
        expect { collection.insert_one(x: 1) }.to raise_error(Mongo::Error::OperationFailure)
        expect(insert_started_events.map { |e| e.address.to_s }.sort).to eq(expected_servers)
        expect(insert_failed_events.map { |e| e.address.to_s }.sort).to eq(expected_servers)
      end
    end

    context 'when no other mongos is available' do
      let(:mongos) do
        Mongo::Client.new(
          [SpecConfig.instance.addresses.first],
          direct_connection: true,
          database: 'admin'
        )
      end

      let(:client) do
        new_local_client(
          [
            SpecConfig.instance.addresses.first
          ],
          SpecConfig.instance.test_options.merge(retry_writes: true)
        )
      end

      before do
        mongos.database.command(
          configureFailPoint: 'failCommand',
          mode: { times: 1 },
          data: {
            failCommands: %w(insert),
            closeConnection: false,
            errorCode: 6,
            errorLabels: ['RetryableWriteError']
          }
        )
      end

      after do
        mongos.database.command(
          configureFailPoint: 'failCommand',
          mode: 'off'
        )
        mongos.close
        client.close
      end

      it 'retries on the same mongos' do
        client.subscribe(Mongo::Monitoring::COMMAND, subscriber)
        expect { collection.insert_one(x: 1) }.not_to raise_error
        expect(insert_started_events.map { |e| e.address.to_s }.sort).to eq([
          SpecConfig.instance.addresses.first.to_s,
          SpecConfig.instance.addresses.first.to_s
        ])
        expect(insert_failed_events.map { |e| e.address.to_s }.sort).to eq([
          SpecConfig.instance.addresses.first.to_s
        ])
        expect(insert_succeeded_events.map { |e| e.address.to_s }.sort).to eq([
          SpecConfig.instance.addresses.first.to_s
        ])
      end
    end
  end
end

mongo-ruby-driver-2.21.3/spec/integration/sdam_error_handling_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe 'SDAM error handling' do
  require_topology :single, :replica_set, :sharded
  require_mri

  clean_slate
  retry_test

  after do
    # Close all clients after every test to avoid leaking expectations into
    # subsequent tests because we set global assertions on sockets.
    ClientRegistry.instance.close_all_clients
  end

  # These tests operate on specific servers, and don't work in a multi
  # shard cluster where multiple servers are equally eligible
  require_no_multi_mongos

  let(:diagnostic_subscriber) { Mrss::VerboseEventSubscriber.new }

  let(:client) do
    new_local_client(SpecConfig.instance.addresses,
      SpecConfig.instance.all_test_options.merge(
        socket_timeout: 3, connect_timeout: 3,
        heartbeat_frequency: 100,
        populator_io: false,
        # Uncomment to print all events to stdout:
        #sdam_proc: Utils.subscribe_all_sdam_proc(diagnostic_subscriber),
        **Utils.disable_retries_client_options)
    )
  end

  let(:server) { client.cluster.next_primary }

  shared_examples_for 'marks server unknown' do
    before do
      server.monitor.stop!
    end

    after do
      client.close
    end

    it 'marks server unknown' do
      expect(server).not_to be_unknown
      RSpec::Mocks.with_temporary_scope do
        operation
        expect(server).to be_unknown
      end
    end
  end

  shared_examples_for 'does not mark server unknown' do
    before do
      server.monitor.stop!
    end

    after do
      client.close
    end

    it 'does not mark server unknown' do
      expect(server).not_to be_unknown
      RSpec::Mocks.with_temporary_scope do
        operation
        expect(server).not_to be_unknown
      end
    end
  end

  shared_examples_for 'requests server scan' do
    it 'requests server scan' do
      RSpec::Mocks.with_temporary_scope do
        expect(server.scan_semaphore).to receive(:signal)
        operation
      end
    end
  end

  shared_examples_for 'does not request server scan' do
    it 'does not request server scan' do
      RSpec::Mocks.with_temporary_scope do
        expect(server.scan_semaphore).not_to receive(:signal)
        operation
      end
    end
  end

  shared_examples_for 'clears connection pool' do
    it 'clears connection pool' do
      generation = server.pool.generation
      RSpec::Mocks.with_temporary_scope do
        operation
        new_generation = server.pool_internal.generation
        expect(new_generation).to eq(generation + 1)
      end
    end
  end

  shared_examples_for 'does not clear connection pool' do
    it 'does not clear connection pool' do
      generation = server.pool.generation
      RSpec::Mocks.with_temporary_scope do
        operation
        new_generation = server.pool_internal.generation
        expect(new_generation).to eq(generation)
      end
    end
  end

  describe 'when there is an error during an operation' do
    before do
      client.cluster.next_primary
      # we also need a connection to the primary so that our error
      # expectations do not get triggered during handshakes which
      # have different behavior from non-handshake errors
      client.database.command(ping: 1)
    end

    let(:operation) do
      expect_any_instance_of(Mongo::Server::Connection).to receive(:deliver).and_return(reply)
      expect do
        client.database.command(ping: 1)
      end.to raise_error(Mongo::Error::OperationFailure, exception_message)
    end

    shared_examples_for 'not master or node recovering' do
      it_behaves_like 'marks server unknown'
      it_behaves_like 'requests server scan'

      context 'server 4.2 or higher' do
        min_server_fcv '4.2'

        it_behaves_like 'does not clear connection pool'
      end

      context 'server 4.0 or lower' do
        max_server_version '4.0'

        it_behaves_like 'clears connection pool'
      end
    end

    shared_examples_for 'node shutting down' do
      it_behaves_like 'marks server unknown'
      it_behaves_like 'requests server scan'
      it_behaves_like 'clears connection pool'
    end

    context 'not master error' do
      let(:exception_message) do
        /not master/
      end

      let(:reply) do
        make_not_master_reply
      end

      it_behaves_like 'not master or node recovering'
    end

    context 'node recovering error' do
      let(:exception_message) do
        /DueToStepDown/
      end

      let(:reply) do
        make_node_recovering_reply
      end

      it_behaves_like 'not master or node recovering'
    end

    context 'node shutting down error' do
      let(:exception_message) do
        /shutdown in progress/
      end

      let(:reply) do
        make_node_shutting_down_reply
      end

      it_behaves_like 'node shutting down'
    end

    context 'network error' do
      # With 4.4 servers we set up two monitoring connections, hence global
      # socket expectations get hit twice.
      max_server_version '4.2'

      let(:operation) do
        expect_any_instance_of(Mongo::Socket).to receive(:read).and_raise(exception)
        expect do
          client.database.command(ping: 1)
        end.to raise_error(exception)
      end

      context 'non-timeout network error' do
        let(:exception) do
          Mongo::Error::SocketError
        end

        it_behaves_like 'marks server unknown'
        it_behaves_like 'does not request server scan'
        it_behaves_like 'clears connection pool'
      end

      context 'network timeout error' do
        let(:exception) do
          Mongo::Error::SocketTimeoutError
        end

        it_behaves_like 'does not mark server unknown'
        it_behaves_like 'does not request server scan'
        it_behaves_like 'does not clear connection pool'
      end
    end
  end

  describe 'when there is an error during connection establishment' do
    require_topology :single

    # The push monitor creates sockets unpredictably and interferes with this
    # test.
    max_server_version '4.2'

    # When TLS is used there are two socket classes and we can't simply
    # mock the base Socket class.
    require_no_tls

    {
      SystemCallError => Mongo::Error::SocketError,
      Errno::ETIMEDOUT => Mongo::Error::SocketTimeoutError,
    }.each do |raw_error_cls, mapped_error_cls|
      context raw_error_cls.name do
        let(:socket) do
          double('mock socket').tap do |socket|
            allow(socket).to receive(:set_encoding)
            allow(socket).to receive(:setsockopt)
            allow(socket).to receive(:getsockopt)
            allow(socket).to receive(:connect)
            allow(socket).to receive(:close)
            socket.should receive(:write).and_raise(raw_error_cls, 'mocked failure')
          end
        end

        it 'marks server unknown' do
          server = client.cluster.next_primary
          pool = client.cluster.pool(server)
          client.cluster.servers.map(&:disconnect!)

          RSpec::Mocks.with_temporary_scope do
            Socket.should receive(:new).with(any_args).ordered.once.and_return(socket)
            allow(pool).to receive(:paused?).and_return(false)

            lambda do
              client.command(ping: 1)
            end.should raise_error(mapped_error_cls, /mocked failure/)

            server.should be_unknown
          end
        end

        it 'recovers' do
          server = client.cluster.next_primary
          # If we do not kill the monitor, the client will recover automatically.

          RSpec::Mocks.with_temporary_scope do
            Socket.should receive(:new).with(any_args).ordered.once.and_return(socket)
            Socket.should receive(:new).with(any_args).ordered.once.and_call_original

            lambda do
              client.command(ping: 1)
            end.should raise_error(mapped_error_cls, /mocked failure/)

            client.command(ping: 1)
          end
        end
      end
    end

    after do
      # Since we stopped monitoring on the client, close it.
      ClientRegistry.instance.close_all_clients
    end
  end

  describe 'when there is an error on monitoring connection' do
    clean_slate_for_all

    let(:subscriber) { Mrss::EventSubscriber.new }

    let(:set_subscribers) do
      client.subscribe(Mongo::Monitoring::SERVER_DESCRIPTION_CHANGED, subscriber)
      client.subscribe(Mongo::Monitoring::CONNECTION_POOL, subscriber)
    end

    let(:operation) do
      expect(server.monitor.connection).not_to be nil
      set_subscribers
      RSpec::Mocks.with_temporary_scope do
        expect(server.monitor).to receive(:check).and_raise(exception)
        server.monitor.scan!
      end
      expect_server_state_change
    end

    shared_examples_for 'marks server unknown - sdam event' do
      it 'marks server unknown' do
        expect(server).not_to be_unknown

        #subscriber.clear_events!
        events = subscriber.select_succeeded_events(Mongo::Monitoring::Event::ServerDescriptionChanged)
        events.should be_empty

        RSpec::Mocks.with_temporary_scope do
          operation

          events = subscriber.select_succeeded_events(Mongo::Monitoring::Event::ServerDescriptionChanged)
          events.should_not be_empty
          event = events.detect do |event|
            event.new_description.address == server.address &&
              event.new_description.unknown?
          end
          event.should_not be_nil
        end
      end
    end

    shared_examples_for 'clears connection pool - cmap event' do
      it 'clears connection pool' do
        #subscriber.clear_events!
        events = subscriber.select_published_events(Mongo::Monitoring::Event::Cmap::PoolCleared)
        events.should be_empty

        RSpec::Mocks.with_temporary_scope do
          operation

          events = subscriber.select_published_events(Mongo::Monitoring::Event::Cmap::PoolCleared)
          events.should_not be_empty
          event = events.detect do |event|
            event.address == server.address
          end
          event.should_not be_nil
        end
      end
    end

    shared_examples_for 'marks server unknown and clears connection pool' do
=begin These tests are not reliable
      context 'via object inspection' do
        let(:expect_server_state_change) do
          server.summary.should =~ /unknown/i
          expect(server).to be_unknown
        end

        it_behaves_like 'marks server unknown'
        it_behaves_like 'clears connection pool'
      end
=end

      context 'via events' do
        # When we use events we do not need to examine object state, therefore
        # it does not matter whether the server stays unknown or gets
        # successfully checked.
        let(:expect_server_state_change) do
          # nothing
        end

        it_behaves_like 'marks server unknown - sdam event'
        it_behaves_like 'clears connection pool - cmap event'
      end
    end

    context 'via stubs' do
      # With 4.4 servers we set up two monitoring connections, hence global
      # socket expectations get hit twice.
      max_server_version '4.2'

      context 'network timeout' do
        let(:exception) { Mongo::Error::SocketTimeoutError }

        it_behaves_like 'marks server unknown and clears connection pool'
      end

      context 'non-timeout network error' do
        let(:exception) { Mongo::Error::SocketError }

        it_behaves_like 'marks server unknown and clears connection pool'
      end
    end

    context 'non-timeout network error via fail point' do
      require_fail_command

      let(:admin_client) { client.use(:admin) }

      let(:set_fail_point) do
        admin_client.command(
          configureFailPoint: 'failCommand',
          mode: {times: 2},
          data: {
            failCommands: %w(isMaster hello),
            closeConnection: true,
          },
        )
      end

      let(:operation) do
        expect(server.monitor.connection).not_to be nil
        set_subscribers
        set_fail_point
        server.monitor.scan!
        expect_server_state_change
      end

      # https://jira.mongodb.org/browse/RUBY-2523
      # it_behaves_like 'marks server unknown and clears connection pool'

      after do
        admin_client.command(configureFailPoint: 'failCommand', mode: 'off')
      end
    end
  end

  context "when there is an error on the handshake" do
    # require appName for fail point
    min_server_version "4.9"

    let(:admin_client) do
      new_local_client(
        [SpecConfig.instance.addresses.first],
        SpecConfig.instance.test_options.merge({
          connect: :direct,
          populator_io: false,
          direct_connection: true,
          app_name: "SDAMMinHeartbeatFrequencyTest",
          database: 'admin'
        })
      )
    end

    let(:cmd_client) do
      # Change the server selection timeout so that we are given a new cluster.
      admin_client.with(server_selection_timeout: 5)
    end

    let(:set_fail_point) do
      admin_client.command(
        configureFailPoint: 'failCommand',
        mode: { times: 5 },
        data: {
          failCommands: %w(isMaster hello),
          errorCode: 1234,
          appName: "SDAMMinHeartbeatFrequencyTest"
        },
      )
    end

    let(:operation) do
      expect(server.monitor.connection).not_to be nil
      set_fail_point
    end

    it "waits 500ms between failed hello checks" do
      operation
      start = Mongo::Utils.monotonic_time
      cmd_client.command(hello: 1)
      duration = Mongo::Utils.monotonic_time - start
      expect(duration).to be >= 2
      expect(duration).to be <= 3.5

      # The cluster that we use to set up the failpoint should not be the same
      # one we ping on, so that the ping will have to select a server. The admin
      # client has already selected a server.
      expect(admin_client.cluster.object_id).to_not eq(cmd_client.cluster.object_id)
    end

    after do
      admin_client.command(configureFailPoint: 'failCommand', mode: 'off')
      cmd_client.close
    end
  end
end

mongo-ruby-driver-2.21.3/spec/integration/sdam_events_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe 'SDAM events' do
  let(:subscriber) { Mrss::EventSubscriber.new }

  describe 'server closed event' do
    it 'is published when client is closed' do
      client = ClientRegistry.instance.new_local_client(
        SpecConfig.instance.addresses, SpecConfig.instance.test_options)

      client.subscribe(Mongo::Monitoring::SERVER_CLOSED, subscriber)

      # get the client connected
      client.database.command(ping: 1)
      expect(subscriber.succeeded_events).to be_empty

      client.close

      expect(subscriber.succeeded_events).not_to be_empty
      event = subscriber.first_event('server_closed_event')
      expect(event).not_to be_nil
    end
  end

  describe 'topology closed event' do
    it 'is published when client is closed' do
      client = ClientRegistry.instance.new_local_client(
        SpecConfig.instance.addresses, SpecConfig.instance.test_options)

      client.subscribe(Mongo::Monitoring::TOPOLOGY_CLOSED, subscriber)

      # get the client connected
      client.database.command(ping: 1)
      expect(subscriber.succeeded_events).to be_empty

      client.close

      expect(subscriber.succeeded_events).not_to be_empty
      event = subscriber.first_event('topology_closed_event')
      expect(event).not_to be_nil

      expect(event.topology).to eql(client.cluster.topology)
    end
  end

  describe 'heartbeat event' do
    require_topology :single

    context 'pre-4.4 servers' do
      max_server_version '4.2'

      let(:sdam_proc) do
        Proc.new do |client|
          client.subscribe(Mongo::Monitoring::SERVER_HEARTBEAT, subscriber)
        end
      end

      let(:client) do
        new_local_client(SpecConfig.instance.addresses,
          # Heartbeat interval is bound by 500 ms
          SpecConfig.instance.test_options.merge(
            heartbeat_frequency: 0.5,
            sdam_proc: sdam_proc
          ),
        )
      end

      it 'is published every heartbeat interval' do
        client
        sleep 4
        client.close

        started_events = subscriber.select_started_events(Mongo::Monitoring::Event::ServerHeartbeatStarted)
        # Expect about 8 events, maybe 9 or 7
        started_events.length.should >= 6
        started_events.length.should <= 10

        succeeded_events = subscriber.select_succeeded_events(Mongo::Monitoring::Event::ServerHeartbeatSucceeded)
        started_events.length.should > 1
        (succeeded_events.length..succeeded_events.length+1).should include(started_events.length)
      end
    end

    context '4.4+ servers' do
      min_server_fcv '4.4'

      let(:sdam_proc) do
        Proc.new do |client|
          client.subscribe(Mongo::Monitoring::SERVER_HEARTBEAT, subscriber)
        end
      end

      let(:client) do
        new_local_client(SpecConfig.instance.addresses,
          # Heartbeat interval is bound by 500 ms
          SpecConfig.instance.test_options.merge(
            heartbeat_frequency: 0.5,
            sdam_proc: sdam_proc
          ),
        )
      end

      it 'is published up to twice every heartbeat interval' do
        client
        sleep 3
        client.close

        started_events = subscriber.select_started_events(
          Mongo::Monitoring::Event::ServerHeartbeatStarted
        )
        # We could have up to 16 events and should have no fewer than 8 events.
        # Whenever an awaited hello succeeds while the regular monitor is
        # waiting, the regular monitor's next scan is pushed forward.
        started_events.length.should >= 6
        started_events.length.should <= 18
        (started_awaited = started_events.select(&:awaited?)).should_not be_empty
        (started_regular = started_events.reject(&:awaited?)).should_not be_empty

        completed_events = subscriber.select_completed_events(
          Mongo::Monitoring::Event::ServerHeartbeatSucceeded,
          Mongo::Monitoring::Event::ServerHeartbeatFailed,
        )
        completed_events.length.should >= 6
        completed_events.length.should <= 18
        (succeeded_awaited = completed_events.select(&:awaited?)).should_not be_empty
        (succeeded_regular = completed_events.reject(&:awaited?)).should_not be_empty

        # There may be in-flight hellos that don't complete, both
        # regular and awaited.
        started_awaited.length.should > 1
        (succeeded_awaited.length..succeeded_awaited.length+1).should include(started_awaited.length)
        started_regular.length.should > 1
        (succeeded_regular.length..succeeded_regular.length+1).should include(started_regular.length)
      end
    end
  end

  describe 'server description changed' do
    require_topology :single

    let(:sdam_proc) do
      Proc.new do |client|
        client.subscribe(Mongo::Monitoring::SERVER_DESCRIPTION_CHANGED, subscriber)
      end
    end

    let(:client) do
      new_local_client(SpecConfig.instance.addresses,
        # Heartbeat interval is bound by 500 ms
        SpecConfig.instance.test_options.merge(client_options).merge(
          heartbeat_frequency: 0.5,
          sdam_proc: sdam_proc,
        ),
      )
    end

    let(:client_options) do
      {}
    end

    it 'is not published when there are no changes in server state' do
      client
      sleep 6
      client.close

      events = subscriber.select_succeeded_events(Mongo::Monitoring::Event::ServerDescriptionChanged)

      # In 6 seconds we should have about 10 or 12 heartbeats.
      # We expect 1 or 2 description changes:
      # The first one from unknown to known,
      # The second one because server changes the fields it returns based on
      # driver server check payload (e.g. ismaster/isWritablePrimary).
      events.length.should >= 1
      events.length.should <= 2
    end
  end
end

mongo-ruby-driver-2.21.3/spec/integration/sdam_prose_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe 'SDAM prose tests' do
  # The "streaming protocol tests" are covered by the tests in
  # sdam_events_spec.rb.
  describe 'RTT tests' do
    min_server_fcv '4.4'
    require_topology :single

    let(:subscriber) { Mrss::EventSubscriber.new }

    let(:client) do
      new_local_client(SpecConfig.instance.addresses,
        # Heartbeat interval is bound by 500 ms
        SpecConfig.instance.test_options.merge(
          heartbeat_frequency: 0.5,
          app_name: 'streamingRttTest',
        ),
      ).tap do |client|
        client.subscribe(Mongo::Monitoring::SERVER_HEARTBEAT, subscriber)
      end
    end

    it 'updates RTT' do
      server = client.cluster.next_primary

      sleep 2

      events = subscriber.select_succeeded_events(Mongo::Monitoring::Event::ServerHeartbeatSucceeded)
      events.each do |event|
        event.round_trip_time.should be_a(Numeric)
        event.round_trip_time.should > 0
      end

      root_authorized_client.use('admin').database.command(
        configureFailPoint: 'failCommand',
        mode: {times: 1000},
        data: {
          failCommands: %w(isMaster hello),
          blockConnection: true,
          blockTimeMS: 500,
          appName: "streamingRttTest",
        },
      )

      deadline = Mongo::Utils.monotonic_time + 10
      loop do
        if server.average_round_trip_time > 0.25
          break
        end
        if Mongo::Utils.monotonic_time >= deadline
          raise "Failed to witness RTT growing to >= 250 ms in 10 seconds"
        end
        sleep 0.2
      end
    end

    after do
      root_authorized_client.use('admin').database.command(
        configureFailPoint: 'failCommand',
        mode: 'off')
    end
  end
end

mongo-ruby-driver-2.21.3/spec/integration/search_indexes_prose_spec.rb

# frozen_string_literal: true

require 'spec_helper'

class SearchIndexHelper
  attr_reader :client, :collection_name

  def initialize(client)
    @client = client

    # https://github.com/mongodb/specifications/blob/master/source/index-management/tests/README.md#search-index-management-helpers
    # "...each test uses a randomly generated collection name. Drivers may
    # generate this collection name however they like, but a suggested
    # implementation is a hex representation of an ObjectId..."
    @collection_name = BSON::ObjectId.new.to_s
  end

  # `soft_create` means to create the collection object without forcing it to
  # be created in the database.
  def collection(soft_create: false)
    @collection ||= client.database[collection_name].tap do |collection|
      collection.create unless soft_create
    end
  end

  # Wait for all of the indexes with the given names to be ready; then return
  # the list of index definitions corresponding to those names.
  def wait_for(*names, &condition)
    timeboxed_wait do
      result = collection.search_indexes
      return filter_results(result, names) if names.all? { |name| ready?(result, name, &condition) }
    end
  end

  # Wait until all of the indexes with the given names are absent from the
  # search index list.
  def wait_for_absense_of(*names)
    names.each do |name|
      timeboxed_wait do
        break if collection.search_indexes(name: name).empty?
      end
    end
  end

  private

  def timeboxed_wait(step: 5, max: 300)
    start = Mongo::Utils.monotonic_time

    loop do
      yield

      sleep step
      raise Timeout::Error, 'wait took too long' if Mongo::Utils.monotonic_time - start > max
    end
  end

  # Returns true if the list of search indexes includes one with the given name,
  # which is ready to be queried.
  def ready?(list, name, &condition)
    condition ||= ->(index) { index['queryable'] }
    list.any? { |index| index['name'] == name && condition[index] }
  end

  def filter_results(result, names)
    result.select { |index| names.include?(index['name']) }
  end
end

describe 'Mongo::Collection#search_indexes prose tests' do
  # https://github.com/mongodb/specifications/blob/master/source/index-management/tests/README.md#setup
  # "These tests must run against an Atlas cluster with a 7.0+ server."
  require_atlas

  let(:client) do
    Mongo::Client.new(
      ENV['ATLAS_URI'],
      database: SpecConfig.instance.test_db,
      ssl: true,
      ssl_verify: true
    )
  end

  let(:helper) { SearchIndexHelper.new(client) }

  let(:name) { 'test-search-index' }
  let(:definition) { { 'mappings' => { 'dynamic' => false } } }
  let(:create_index) { helper.collection.search_indexes.create_one(definition, name: name) }

  after do
    client.close
  end

  # Case 1: Driver can successfully create and list search indexes
  context 'when creating and listing search indexes' do
    let(:index) { helper.wait_for(name).first }

    it 'succeeds' do
      expect(create_index).to be == name
      expect(index['latestDefinition']).to be == definition
    end
  end

  # Case 2: Driver can successfully create multiple indexes in batch
  context 'when creating multiple indexes in batch' do
    let(:specs) do
      [
        { 'name' => 'test-search-index-1', 'definition' => definition },
        { 'name' => 'test-search-index-2', 'definition' => definition }
      ]
    end

    let(:names) { specs.map { |spec| spec['name'] } }
    let(:create_indexes) { helper.collection.search_indexes.create_many(specs) }

    let(:indexes) { helper.wait_for(*names) }

    let(:index1) { indexes[0] }
    let(:index2) { indexes[1] }

    it 'succeeds' do
      expect(create_indexes).to be == names
      expect(index1['latestDefinition']).to be == specs[0]['definition']
      expect(index2['latestDefinition']).to be == specs[1]['definition']
    end
  end

  # Case 3: Driver can successfully drop search indexes
  context 'when dropping search indexes' do
    it 'succeeds' do
      expect(create_index).to be == name
      helper.wait_for(name)

      helper.collection.search_indexes.drop_one(name: name)

      expect { helper.wait_for_absense_of(name) }.not_to raise_error
    end
  end

  # Case 4: Driver can update a search index
  context 'when updating search indexes' do
    let(:new_definition) { { 'mappings' => { 'dynamic' => true } } }

    let(:index) do
      helper
        .wait_for(name) { |idx| idx['queryable'] && idx['status'] == 'READY' }
        .first
    end

    it 'succeeds' do
      expect(create_index).to be == name
      helper.wait_for(name)

      expect do
        helper.collection.search_indexes.update_one(new_definition, name: name)
      end.not_to raise_error

      expect(index['latestDefinition']).to be == new_definition
    end
  end

  # Case 5: dropSearchIndex suppresses namespace not found errors
  context 'when dropping a non-existent search index' do
    it 'ignores `namespace not found` errors' do
      collection = helper.collection(soft_create: true)
      expect { collection.search_indexes.drop_one(name: name) }
        .not_to raise_error
    end
  end
end

mongo-ruby-driver-2.21.3/spec/integration/secondary_reads_spec.rb

# rubocop:todo all

require 'spec_helper'

describe 'Secondary reads' do
  before do
    root_authorized_client.use('sr')['secondary_reads'].drop
    root_authorized_client.use('sr')['secondary_reads'].insert_one(test: 1)
  end

  shared_examples 'performs reads as per read preference' do
    %i(primary primary_preferred).each do |mode|
      context mode.inspect do
        let(:client) do
          root_authorized_client.with(read: {mode: mode}).use('sr')
        end

        it 'reads from primary' do
          start_stats = get_read_counters

          30.times do
            client['secondary_reads'].find.to_a
          end

          end_stats = get_read_counters

          end_stats[:secondary].should be_within(10).of(start_stats[:secondary])
          end_stats[:primary].should >= start_stats[:primary] + 30
        end
      end
    end

    %i(secondary secondary_preferred).each do |mode|
      context mode.inspect do
        let(:client) do
          root_authorized_client.with(read: {mode: mode}).use('sr')
        end

        it 'reads from secondaries' do
          start_stats = get_read_counters

          30.times do
            client['secondary_reads'].find.to_a
          end

          end_stats = get_read_counters

          end_stats[:primary].should be_within(10).of(start_stats[:primary])
          end_stats[:secondary].should >= start_stats[:secondary] + 30
        end
      end
    end
  end

  context 'replica set' do
    require_topology :replica_set

    include_examples 'performs reads as per read preference'
  end

  context 'sharded cluster' do
    require_topology :sharded

    include_examples 'performs reads as per read preference'
  end

  def get_read_counters
    client = ClientRegistry.instance.global_client('root_authorized')
    addresses = []
    if client.cluster.sharded?
      doc = client.use('admin').command(listShards: 1).documents.first
      doc['shards'].each do |shard|
        addresses += shard['host'].split('/').last.split(',')
      end
    else
      client.cluster.servers.each do |server|
        next unless server.primary? || server.secondary?
        addresses << server.address.seed
      end
    end
    stats = Hash.new(0)
    addresses.each do |address|
      ClientRegistry.instance.new_local_client(
        [address],
        SpecConfig.instance.all_test_options.merge(connect: :direct),
      ) do |c|
        server = c.cluster.servers.first
        next unless server.primary? || server.secondary?
        stat = c.command(serverStatus: 1).documents.first
        queries = stat['opcounters']['query']
        if server.primary?
          stats[:primary] += queries
        else
          stats[:secondary] += queries
        end
      end
    end
    stats
  end
end

mongo-ruby-driver-2.21.3/spec/integration/server_description_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe 'Server description' do
  clean_slate

  let(:client) { ClientRegistry.instance.global_client('authorized') }

  let(:desc) do
    client.cluster.next_primary.description
  end

  let!(:start_time) { Time.now }

  describe '#op_time' do
    require_topology :replica_set
    min_server_fcv '3.4'

    it 'is set' do
      expect(desc).not_to be_unknown
      expect(desc.op_time).to be_a(BSON::Timestamp)
    end
  end

  describe '#last_write_date' do
    require_topology :replica_set
    min_server_fcv '3.4'

    it 'is set' do
      expect(desc).not_to be_unknown
      expect(desc.last_write_date).to be_a(Time)
    end
  end

  describe '#last_update_time' do
    it 'is set' do
      expect(desc).not_to be_unknown
      expect(desc.last_update_time).to be_a(Time)
      # checked while this test was running
      expect(desc.last_update_time).to be > start_time
    end
  end
end

mongo-ruby-driver-2.21.3/spec/integration/server_monitor_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe 'Server::Monitor' do
  require_topology :single, :replica_set, :sharded

  let(:client) do
    new_local_client([ClusterConfig.instance.primary_address_str],
      SpecConfig.instance.test_options.merge(SpecConfig.instance.auth_options.merge(
        monitor_options)))
  end

  let(:monitor_options) do
    {heartbeat_frequency: 1}
  end

  retry_test
  it 'refreshes server descriptions in background' do
    server = client.cluster.next_primary

    expect(server.description).not_to be_unknown

    server.unknown!

    # This is racy, especially in JRuby, because the monitor may have
    # already run and updated the description. Because of this we retry
    # the test a few times.
    expect(server.description).to be_unknown

    # Wait for background thread to update the description
    sleep 1.5

    expect(server.description).not_to be_unknown
  end

  context 'server-pushed hello' do
    min_server_fcv '4.4'
    require_topology :replica_set

    let(:monitor_options) do
      {heartbeat_frequency: 20}
    end

    it 'updates server description' do
      starting_primary_address = client.cluster.next_primary.address

      ClusterTools.instance.step_down

      sleep 2

      new_primary_address = client.cluster.next_primary.address
      new_primary_address.should_not == starting_primary_address
    end
  end
end

mongo-ruby-driver-2.21.3/spec/integration/server_selection_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe 'Server selection' do
  context 'replica set' do
    require_topology :replica_set
    # 2.6 server does not provide replSetGetConfig and hence we cannot add
    # the tags to the members.
    min_server_version '3.0'

    context 'when mixed case tag names are used' do
      # For simplicity this test assumes our Evergreen configuration:
      # nodes are started from port 27017 onwards and there are more than
      # one of them.
      let(:desired_index) do
        if authorized_client.cluster.next_primary.address.port == 27017
          1
        else
          0
        end
      end

      let(:client) do
        new_local_client(SpecConfig.instance.addresses,
          SpecConfig.instance.authorized_test_options.merge(
            server_selection_timeout: 4,
            read: {mode: :secondary, tag_sets: [nodeIndex: desired_index.to_s]},
          ))
      end

      it 'selects the server' do
        client['nonexistent'].count.should == 0
      end
    end
  end
end

mongo-ruby-driver-2.21.3/spec/integration/server_selector_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe 'Server selector' do
  require_no_linting

  let(:selector) { Mongo::ServerSelector::Primary.new }
  let(:client) { authorized_client }
  let(:cluster) { client.cluster }

  describe '#select_server' do
    # These tests operate on specific servers, and don't work in a multi
    # shard cluster where multiple servers are equally eligible
    require_no_multi_mongos

    let(:result) { selector.select_server(cluster) }

    it 'selects' do
      expect(result).to be_a(Mongo::Server)
    end

    context 'no servers in the cluster' do
      let(:client) { new_local_client_nmio([], server_selection_timeout: 2) }

      it 'raises NoServerAvailable with a message explaining the situation' do
        expect do
          result
        end.to raise_error(Mongo::Error::NoServerAvailable, "Cluster has no addresses, and therefore will never have a server")
      end

      it 'does not wait for server selection timeout' do
        start_time = Mongo::Utils.monotonic_time
        expect do
          result
        end.to raise_error(Mongo::Error::NoServerAvailable)
        time_passed = Mongo::Utils.monotonic_time - start_time
        expect(time_passed).to be < 1
      end
    end

    context 'client is closed' do
      context 'there is a known primary' do
        before do
          client.cluster.next_primary
          client.close
          expect(client.cluster.connected?).to be false
        end

        it 'returns the primary for BC reasons' do
          expect(result).to be_a(Mongo::Server)
        end
      end

      context 'there is no known primary' do
        require_topology :single, :replica_set, :sharded

        before do
          primary_server = client.cluster.next_primary
          client.close
          expect(client.cluster.connected?).to be false
          primary_server.unknown!
        end

        context 'non-lb' do
          require_topology :single, :replica_set, :sharded

          it 'raises NoServerAvailable with a message explaining the situation' do
            expect do
              result
            end.to raise_error(Mongo::Error::NoServerAvailable, /The cluster is disconnected \(client may have been closed\)/)
          end
        end

        context 'lb' do
          require_topology :load_balanced

          it 'returns the load balancer' do
            expect(result).to be_a(Mongo::Server)
            result.should be_load_balancer
          end
        end
      end
    end

    context 'monitoring thread is dead' do
      require_topology :single, :replica_set, :sharded

      before do
        client.cluster.servers.each do |server|
          server.monitor.instance_variable_get('@thread').kill
        end
        server = client.cluster.next_primary
        if server
          server.instance_variable_set('@description', Mongo::Server::Description.new({}))
        end
      end

      it 'raises NoServerAvailable with a message explaining the situation' do
        expect do
          result
        end.to raise_error(Mongo::Error::NoServerAvailable, /The following servers have dead monitor threads/)
      end
    end
  end
end

mongo-ruby-driver-2.21.3/spec/integration/server_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe 'Server' do
  let(:client) { authorized_client }

  let(:context) { Mongo::Operation::Context.new(client: client) }

  let(:server) { client.cluster.next_primary }

  let(:collection) { client['collection'] }
  let(:view) { Mongo::Collection::View.new(collection) }

  describe 'operations when client/cluster are disconnected' do
    context 'it performs read operations and receives the correct result type' do
      context 'normal server' do
        it 'can be used for reads' do
          result = view.send(:send_initial_query, server, context)
          expect(result).to be_a(Mongo::Operation::Find::Result)
        end
      end

      context 'known server in disconnected cluster' do
        require_topology :single, :replica_set, :sharded
        require_no_linting

        before do
          server.disconnect!
          expect(server).not_to be_unknown
        end

        after do
          server.close
        end

        it 'can be used for reads' do
          # See also RUBY-3102.
          result = view.send(:send_initial_query, server, context)
          expect(result).to be_a(Mongo::Operation::Find::Result)
        end
      end

      context 'unknown server in disconnected cluster' do
        require_topology :single, :replica_set, :sharded
        require_no_linting

        before do
          client.close
          server.unknown!
          expect(server).to be_unknown
        end

        after do
          server.close
        end

        it 'is unusable' do
          # See also RUBY-3102.
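          # An unknown server in a closed cluster has no usable description,
          # so the read below is rejected without attempting any I/O.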
          lambda do
            view.send(:send_initial_query, server, context)
          end.should raise_error(Mongo::Error::ServerNotUsable)
        end
      end
    end
  end
end

mongo-ruby-driver-2.21.3/spec/integration/shell_examples_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe 'shell examples in Ruby' do
  let(:client) do
    authorized_client
  end

  before do
    client[:inventory].drop
  end

  after do
    client[:inventory].drop
  end

  context 'insert examples' do
    before do
      # Start Example 1
      client[:inventory].insert_one({ item: 'canvas',
                                      qty: 100,
                                      tags: [ 'cotton' ],
                                      size: { h: 28, w: 35.5, uom: 'cm' } })
      # End Example 1
    end

    context 'example 2' do
      let(:example) do
        # Start Example 2
        client[:inventory].find(item: 'canvas')
        # End Example 2
      end

      it 'matches the expected output' do
        expect(example.count).to eq(1)
      end
    end

    context 'example 3' do
      let(:example) do
        # Start Example 3
        client[:inventory].insert_many([
          { item: 'journal',
            qty: 25,
            tags: ['blank', 'red'],
            size: { h: 14, w: 21, uom: 'cm' } },
          { item: 'mat',
            qty: 85,
            tags: ['gray'],
            size: { h: 27.9, w: 35.5, uom: 'cm' } },
          { item: 'mousepad',
            qty: 25,
            tags: ['gel', 'blue'],
            size: { h: 19, w: 22.85, uom: 'cm' } }
        ])
        # End Example 3
      end

      it 'matches the expected output' do
        expect(example.inserted_count).to eq(3)
      end
    end
  end

  context 'query top-level' do
    before do
      # Start Example 6
      client[:inventory].insert_many([
        { item: 'journal',
          qty: 25,
          size: { h: 14, w: 21, uom: 'cm' },
          status: 'A' },
        { item: 'notebook',
          qty: 50,
          size: { h: 8.5, w: 11, uom: 'in' },
          status: 'A' },
        { item: 'paper',
          qty: 100,
          size: { h: 8.5, w: 11, uom: 'in' },
          status: 'D' },
        { item: 'planner',
          qty: 75,
          size: { h: 22.85, w: 30, uom: 'cm' },
          status: 'D' },
        { item: 'postcard',
          qty: 45,
          size: { h: 10, w: 15.25, uom: 'cm' },
          status: 'A' }
      ])
      # End Example 6
    end

    context 'example 7' do
      let(:example) do
        # Start Example 7
        client[:inventory].find({})
        # End Example 7
      end

      it 'matches the expected output' do
        expect(example.to_a.size).to eq(5)
      end
    end

    context 'example 8' do
      let(:example) do
        # Start Example 8
        client[:inventory].find
        # End Example 8
      end

      it 'matches the expected output' do
        expect(example.to_a.size).to eq(5)
      end
    end

    context 'example 9' do
      let(:example) do
        # Start Example 9
        client[:inventory].find(status: 'D')
        # End Example 9
      end

      it 'matches the expected output' do
        expect(example.to_a.size).to eq(2)
      end
    end

    context 'example 10' do
      let(:example) do
        # Start Example 10
        client[:inventory].find(status: { '$in' => [ 'A', 'D' ]})
        # End Example 10
      end

      it 'matches the expected output' do
        expect(example.to_a.size).to eq(5)
      end
    end

    context 'example 11' do
      let(:example) do
        # Start Example 11
        client[:inventory].find(status: 'A', qty: { '$lt' => 30 })
        # End Example 11
      end

      it 'matches the expected output' do
        expect(example.to_a.size).to eq(1)
      end
    end

    context 'example 12' do
      let(:example) do
        # Start Example 12
        client[:inventory].find('$or' => [{ status: 'A' },
                                          { qty: { '$lt' => 30 } }])
        # End Example 12
      end

      it 'matches the expected output' do
        expect(example.to_a.size).to eq(3)
      end
    end

    context 'example 13' do
      let(:example) do
        # Start Example 13
        client[:inventory].find(status: 'A',
                                '$or' => [{ qty: { '$lt' => 30 } },
                                          { item: { '$regex' => BSON::Regexp::Raw.new('^p') } }])
        # End Example 13
      end

      it 'matches the expected output' do
        expect(example.to_a.size).to eq(2)
      end
    end
  end

  context 'query embedded documents' do
    before do
      # Start Example 14
      client[:inventory].insert_many([
        { item: 'journal',
          qty: 25,
          size: { h: 14, w: 21, uom: 'cm' },
          status: 'A' },
        { item: 'notebook',
          qty: 50,
          size: { h: 8.5, w: 11, uom: 'in' },
          status: 'A' },
        { item: 'paper',
          qty: 100,
          size: { h: 8.5, w: 11, uom: 'in' },
          status: 'D' },
        { item: 'planner',
          qty: 75,
          size: { h: 22.85, w: 30, uom: 'cm' },
          status: 'D' },
        { item: 'postcard',
          qty: 45,
          size: { h: 10, w: 15.25, uom: 'cm' },
          status: 'A' }
      ])
      # End Example 14
    end

    context 'example 15' do
      let(:example) do
        # Start Example 15
        client[:inventory].find(size: { h: 14, w: 21, uom: 'cm' })
        # End Example 15
      end

      it 'matches the expected output' do
        expect(example.to_a.size).to eq(1)
      end
    end

    context 'example 16' do
      let(:example) do
        # Start Example 16
        client[:inventory].find(size: { h: 21, w: 14, uom: 'cm' })
        # End Example 16
      end

      it 'matches the expected output' do
        expect(example.to_a.size).to eq(0)
      end
    end

    context 'example 17' do
      let(:example) do
        # Start Example 17
        client[:inventory].find('size.uom' => 'in')
        # End Example 17
      end

      it 'matches the expected output' do
        expect(example.to_a.size).to eq(2)
      end
    end

    context 'example 18' do
      let(:example) do
        # Start Example 18
        client[:inventory].find('size.h' => { '$lt' => 15 })
        # End Example 18
      end

      it 'matches the expected output' do
        expect(example.to_a.size).to eq(4)
      end
    end

    context 'example 19' do
      let(:example) do
        # Start Example 19
        client[:inventory].find('size.h' => { '$lt' => 15 },
                                'size.uom' => 'in',
                                'status' => 'D')
        # End Example 19
      end

      it 'matches the expected output' do
        expect(example.to_a.size).to eq(1)
      end
    end
  end

  context 'query arrays' do
    before do
      # Start Example 20
      client[:inventory].insert_many([
        { item: 'journal',
          qty: 25,
          tags: ['blank', 'red'],
          dim_cm: [ 14, 21 ] },
        { item: 'notebook',
          qty: 50,
          tags: ['red', 'blank'],
          dim_cm: [ 14, 21 ] },
        { item: 'paper',
          qty: 100,
          tags: ['red', 'blank', 'plain'],
          dim_cm: [ 14, 21 ] },
        { item: 'planner',
          qty: 75,
          tags: ['blank', 'red'],
          dim_cm: [ 22.85, 30 ] },
        { item: 'postcard',
          qty: 45,
          tags: ['blue'],
          dim_cm: [ 10, 15.25 ] }
      ])
      # End Example 20
    end

    context 'example 21' do
      let(:example) do
        # Start Example 21
        client[:inventory].find(tags: ['red', 'blank'])
        # End Example 21
      end

      it 'matches the expected output' do
        expect(example.to_a.size).to eq(1)
      end
    end

    context 'example 22' do
      let(:example) do
        # Start Example 22
        client[:inventory].find(tags: { '$all' => ['red', 'blank'] })
        # End Example 22
      end

      it 'matches the expected output' do
        expect(example.to_a.size).to eq(4)
      end
    end

    context 'example 23' do
      let(:example) do
        # Start Example 23
        client[:inventory].find(tags: 'red')
        # End Example 23
      end

      it 'matches the expected output' do
        expect(example.count).to eq(4)
      end
    end

    context 'example 24' do
      let(:example) do
        # Start Example 24
        client[:inventory].find(dim_cm: { '$gt' => 25 })
        # End Example 24
      end

      it 'matches the expected output' do
        expect(example.count).to eq(1)
      end
    end

    context 'example 25' do
      let(:example) do
        # Start Example 25
        client[:inventory].find(dim_cm: { '$gt' => 15,
                                          '$lt' => 20 })
        # End Example 25
      end

      it 'matches the expected output' do
        expect(example.count).to eq(4)
      end
    end

    context 'example 26' do
      let(:example) do
        # Start Example 26
        client[:inventory].find(dim_cm: { '$elemMatch' => { '$gt' => 22,
                                                            '$lt' => 30 } })
        # End Example 26
      end

      it 'matches the expected output' do
        expect(example.count).to eq(1)
      end
    end

    context 'example 27' do
      let(:example) do
        # Start Example 27
        client[:inventory].find('dim_cm.1' => { '$gt' => 25 })
        # End Example 27
      end

      it 'matches the expected output' do
        expect(example.count).to eq(1)
      end
    end

    context 'example 28' do
      let(:example) do
        # Start Example 28
        client[:inventory].find(tags: { '$size' =>
          3 })
        # End Example 28
      end

      it 'matches the expected output' do
        expect(example.count).to eq(1)
      end
    end
  end

  context 'query array of embedded documents' do
    before do
      # Start Example 29
      client[:inventory].insert_many([
        { item: 'journal',
          instock: [ { warehouse: 'A', qty: 5 },
                     { warehouse: 'C', qty: 15 }] },
        { item: 'notebook',
          instock: [ { warehouse: 'C', qty: 5 }] },
        { item: 'paper',
          instock: [ { warehouse: 'A', qty: 60 },
                     { warehouse: 'B', qty: 15 }] },
        { item: 'planner',
          instock: [ { warehouse: 'A', qty: 40 },
                     { warehouse: 'B', qty: 5 }] },
        { item: 'postcard',
          instock: [ { warehouse: 'B', qty: 15 },
                     { warehouse: 'C', qty: 35 }] }
      ])
      # End Example 29
    end

    context 'example 30' do
      let(:example) do
        # Start Example 30
        client[:inventory].find(instock: { warehouse: 'A', qty: 5 })
        # End Example 30
      end

      it 'matches the expected output' do
        expect(example.to_a.size).to eq(1)
      end
    end

    context 'example 31' do
      let(:example) do
        # Start Example 31
        client[:inventory].find(instock: { qty: 5, warehouse: 'A' })
        # End Example 31
      end

      it 'matches the expected output' do
        expect(example.to_a.size).to eq(0)
      end
    end

    context 'example 32' do
      let(:example) do
        # Start Example 32
        client[:inventory].find('instock.0.qty' => { '$lte' => 20 })
        # End Example 32
      end

      it 'matches the expected output' do
        expect(example.to_a.size).to eq(3)
      end
    end

    context 'example 33' do
      let(:example) do
        # Start Example 33
        client[:inventory].find('instock.qty' => { '$lte' => 20 })
        # End Example 33
      end

      it 'matches the expected output' do
        expect(example.to_a.size).to eq(5)
      end
    end

    context 'example 34' do
      let(:example) do
        # Start Example 34
        client[:inventory].find(instock: { '$elemMatch' => { qty: 5,
                                                             warehouse: 'A' } })
        # End Example 34
      end

      it 'matches the expected output' do
        expect(example.to_a.size).to eq(1)
      end
    end

    context 'example 35' do
      let(:example) do
        # Start Example 35
        client[:inventory].find(instock: { '$elemMatch' => { qty: { '$gt' => 10,
                                                                    '$lte' => 20 } } })
        # End Example 35
      end

      it 'matches the expected output' do
        expect(example.to_a.size).to eq(3)
      end
    end

    context 'example 36' do
      let(:example) do
        # Start Example 36
        client[:inventory].find('instock.qty' => { '$gt' => 10, '$lte' => 20 })
        # End Example 36
      end

      it 'matches the expected output' do
        expect(example.to_a.size).to eq(4)
      end
    end

    context 'example 37' do
      let(:example) do
        # Start Example 37
        client[:inventory].find('instock.qty' => 5,
                                'instock.warehouse' => 'A')
        # End Example 37
      end

      it 'matches the expected output' do
        expect(example.to_a.size).to eq(2)
      end
    end
  end

  context 'query null' do
    before do
      # Start Example 38
      client[:inventory].insert_many([{ _id: 1, item: nil },
                                      { _id: 2 }])
      # End Example 38
    end

    context 'example 39' do
      let(:example) do
        # Start Example 39
        client[:inventory].find(item: nil)
        # End Example 39
      end

      it 'matches the expected output' do
        expect(example.to_a.size).to eq(2)
      end
    end

    context 'example 40' do
      let(:example) do
        # Start Example 40
        client[:inventory].find(item: { '$type' => 10 })
        # End Example 40
      end

      it 'matches the expected output' do
        expect(example.to_a.size).to eq(1)
      end
    end

    context 'example 41' do
      let(:example) do
        # Start Example 41
        client[:inventory].find(item: { '$exists' => false })
        # End Example 41
      end

      it 'matches the expected output' do
        expect(example.to_a.size).to eq(1)
      end
    end
  end

  context 'projection' do
    before do
      # Start Example 42
      client[:inventory].insert_many([
        { item: 'journal',
          status: 'A',
          size: { h: 14, w: 21, uom: 'cm' },
          instock: [ { warehouse: 'A', qty: 5 }] },
        { item: 'notebook',
          status: 'A',
          size: { h: 8.5, w: 11, uom: 'in' },
          instock: [ { warehouse:
                       'C', qty: 5 }] },
        { item: 'paper',
          status: 'D',
          size: { h: 8.5, w: 11, uom: 'in' },
          instock: [ { warehouse: 'A', qty: 60 }] },
        { item: 'planner',
          status: 'D',
          size: { h: 22.85, w: 30, uom: 'cm' },
          instock: [ { warehouse: 'A', qty: 40 }] },
        { item: 'postcard',
          status: 'A',
          size: { h: 10, w: 15.25, uom: 'cm' },
          instock: [ { warehouse: 'B', qty: 15 },
                     { warehouse: 'C', qty: 35 }] }])
      # End Example 42
    end

    context 'example 43' do
      let(:example) do
        # Start Example 43
        client[:inventory].find(status: 'A')
        # End Example 43
      end

      it 'matches the expected output' do
        expect(example.to_a.size).to eq(3)
      end
    end

    context 'example 44' do
      let!(:example) do
        # Start Example 44
        client[:inventory].find({ status: 'A' },
                                projection: { item: 1, status: 1 })
        # End Example 44
      end

      it 'matches the expected output' do
        expect(example.to_a[1]['_id']).not_to be_nil
        expect(example.to_a[1]['item']).not_to be_nil
        expect(example.to_a[1]['status']).not_to be_nil
        expect(example.to_a[1]['size']).to be_nil
        expect(example.to_a[1]['instock']).to be_nil
      end
    end

    context 'example 45' do
      let!(:example) do
        # Start Example 45
        client[:inventory].find({ status: 'A' },
                                projection: { item: 1, status: 1, _id: 0 })
        # End Example 45
      end

      it 'matches the expected output' do
        expect(example.to_a[1]['_id']).to be_nil
        expect(example.to_a[1]['item']).not_to be_nil
        expect(example.to_a[1]['status']).not_to be_nil
        expect(example.to_a[1]['size']).to be_nil
        expect(example.to_a[1]['instock']).to be_nil
      end
    end

    context 'example 46' do
      let!(:example) do
        # Start Example 46
        client[:inventory].find({ status: 'A' },
                                projection: { status: 0, instock: 0 })
        # End Example 46
      end

      it 'matches the expected output' do
        expect(example.to_a[1]['_id']).not_to be_nil
        expect(example.to_a[1]['item']).not_to be_nil
        expect(example.to_a[1]['status']).to be_nil
        expect(example.to_a[1]['size']).not_to be_nil
        expect(example.to_a[1]['instock']).to be_nil
      end
    end

    context 'example 47' do
      let!(:example) do
        # Start Example 47
        client[:inventory].find({ status: 'A' },
                                projection: { 'item' => 1, 'status' => 1, 'size.uom' => 1 })
        # End Example 47
      end

      it 'matches the expected output' do
        expect(example.to_a[1]['_id']).not_to be_nil
        expect(example.to_a[1]['item']).not_to be_nil
        expect(example.to_a[1]['status']).not_to be_nil
        expect(example.to_a[1]['size']).not_to be_nil
        expect(example.to_a[1]['instock']).to be_nil
        expect(example.to_a[1]['size']).not_to be_nil
        expect(example.to_a[1]['size']['uom']).not_to be_nil
        expect(example.to_a[1]['size']['h']).to be_nil
        expect(example.to_a[1]['size']['w']).to be_nil
      end
    end

    context 'example 48' do
      let!(:example) do
        # Start Example 48
        client[:inventory].find({ status: 'A' },
                                projection: { 'size.uom' => 0 })
        # End Example 48
      end

      it 'matches the expected output' do
        expect(example.to_a[1]['_id']).not_to be_nil
        expect(example.to_a[1]['item']).not_to be_nil
        expect(example.to_a[1]['status']).not_to be_nil
        expect(example.to_a[1]['size']).not_to be_nil
        expect(example.to_a[1]['instock']).not_to be_nil
        expect(example.to_a[1]['size']).not_to be_nil
        expect(example.to_a[1]['size']['uom']).to be_nil
        expect(example.to_a[1]['size']['h']).not_to be_nil
        expect(example.to_a[1]['size']['w']).not_to be_nil
      end
    end

    context 'example 49' do
      let!(:example) do
        # Start Example 49
        client[:inventory].find({ status: 'A' },
                                projection: {'item' => 1, 'status' => 1, 'instock.qty' => 1 })
        # End Example 49
      end

      let(:instock_list) do
        example.to_a[1]['instock']
      end

      it 'matches the expected output' do
        expect(example.to_a[1]['_id']).not_to be_nil
        expect(example.to_a[1]['item']).not_to be_nil
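        # Projecting the dotted field 'instock.qty' returns each instock
        # array element with only its qty field, as verified below.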
        expect(example.to_a[1]['status']).not_to be_nil
        expect(example.to_a[1]['size']).to be_nil
        expect(example.to_a[1]['instock']).not_to be_nil
        expect(instock_list.collect { |doc| doc['warehouse'] }.compact).to be_empty
        expect(instock_list.collect { |doc| doc['qty'] }).to eq([5])
      end
    end

    context 'example 50' do
      let!(:example) do
        # Start Example 50
        client[:inventory].find({ status: 'A' },
                                projection: {'item' => 1, 'status' => 1, 'instock' => { '$slice' => -1 } })
        # End Example 50
      end

      let(:instock_list) do
        example.to_a[1]['instock']
      end

      it 'matches the expected output' do
        expect(example.to_a[1]['_id']).not_to be_nil
        expect(example.to_a[1]['item']).not_to be_nil
        expect(example.to_a[1]['status']).not_to be_nil
        expect(example.to_a[1]['size']).to be_nil
        expect(example.to_a[1]['instock']).not_to be_nil
        expect(instock_list.size).to eq(1)
      end
    end
  end

  context 'update' do
    before do
      # Start Example 51
      client[:inventory].insert_many([
        { item: 'canvas',
          qty: 100,
          size: { h: 28, w: 35.5, uom: 'cm' },
          status: 'A' },
        { item: 'journal',
          qty: 25,
          size: { h: 14, w: 21, uom: 'cm' },
          status: 'A' },
        { item: 'mat',
          qty: 85,
          size: { h: 27.9, w: 35.5, uom: 'cm' },
          status: 'A' },
        { item: 'mousepad',
          qty: 25,
          size: { h: 19, w: 22.85, uom: 'cm' },
          status: 'P' },
        { item: 'notebook',
          qty: 50,
          size: { h: 8.5, w: 11, uom: 'in' },
          status: 'P' },
        { item: 'paper',
          qty: 100,
          size: { h: 8.5, w: 11, uom: 'in' },
          status: 'D' },
        { item: 'planner',
          qty: 75,
          size: { h: 22.85, w: 30, uom: 'cm' },
          status: 'D' },
        { item: 'postcard',
          qty: 45,
          size: { h: 10, w: 15.25, uom: 'cm' },
          status: 'A' },
        { item: 'sketchbook',
          qty: 80,
          size: { h: 14, w: 21, uom: 'cm' },
          status: 'A' },
        { item: 'sketch pad',
          qty: 95,
          size: { h: 22.85, w: 30.5, uom: 'cm' },
          status: 'A' }
      ])
      # End Example 51
    end

    context 'example 52' do
      let!(:example) do
        # Start Example 52
        client[:inventory].update_one({ item: 'paper'},
                                      { '$set' => { 'size.uom' => 'cm', 'status' => 'P' },
                                        '$currentDate' => { 'lastModified' => true } })
        # End Example 52
      end

      it 'matches the expected output' do
        expect(client[:inventory].find(item: 'paper').all? { |doc| doc['size']['uom'] == 'cm'}).to be(true)
        expect(client[:inventory].find(item: 'paper').all? { |doc| doc['status'] == 'P'}).to be(true)
        expect(client[:inventory].find(item: 'paper').all? { |doc| doc['lastModified'] }).to be(true)
      end
    end

    context 'example 53' do
      let!(:example) do
        # Start Example 53
        client[:inventory].update_many({ qty: { '$lt' => 50 } },
                                       { '$set' => { 'size.uom' => 'in', 'status' => 'P' },
                                         '$currentDate' => { 'lastModified' => true } })
        # End Example 53
      end

      let(:from_db) do
        client[:inventory].find(qty: { '$lt' => 50 })
      end

      it 'matches the expected output' do
        expect(from_db.all? { |doc| doc['size']['uom'] == 'in'}).to be(true)
        expect(from_db.all? { |doc| doc['status'] == 'P'}).to be(true)
        expect(from_db.all?
          { |doc| doc['lastModified'] }).to be(true)
      end
    end

    context 'example 54' do
      let!(:example) do
        # Start Example 54
        client[:inventory].replace_one({ item: 'paper' },
                                       { item: 'paper',
                                         instock: [ { warehouse: 'A', qty: 60 },
                                                    { warehouse: 'B', qty: 40 } ] })
        # End Example 54
      end

      let(:from_db) do
        client[:inventory].find({ item: 'paper' }, projection: { _id: 0 })
      end

      it 'matches the expected output' do
        expect(from_db.first.keys.size).to eq(2)
        expect(from_db.first.key?('item')).to be(true)
        expect(from_db.first.key?('instock')).to be(true)
        expect(from_db.first['instock'].size).to eq(2)
      end
    end
  end

  context 'delete' do
    before do
      # Start Example 55
      client[:inventory].insert_many([
        { item: 'journal',
          qty: 25,
          size: { h: 14, w: 21, uom: 'cm' },
          status: 'A' },
        { item: 'notebook',
          qty: 50,
          size: { h: 8.5, w: 11, uom: 'in' },
          status: 'P' },
        { item: 'paper',
          qty: 100,
          size: { h: 8.5, w: 11, uom: 'in' },
          status: 'D' },
        { item: 'planner',
          qty: 75,
          size: { h: 22.85, w: 30, uom: 'cm' },
          status: 'D' },
        { item: 'postcard',
          qty: 45,
          size: { h: 10, w: 15.25, uom: 'cm' },
          status: 'A' },
      ])
      # End Example 55
    end

    context 'example 56' do
      let(:example) do
        # Start Example 56
        client[:inventory].delete_many({})
        # End Example 56
      end

      it 'matches the expected output' do
        expect(example.deleted_count).to eq(5)
        expect(client[:inventory].find.to_a.size).to eq(0)
      end
    end

    context 'example 57' do
      let(:example) do
        # Start Example 57
        client[:inventory].delete_many(status: 'A')
        # End Example 57
      end

      it 'matches the expected output' do
        expect(example.deleted_count).to eq(2)
        expect(client[:inventory].find.to_a.size).to eq(3)
      end
    end

    context 'example 58' do
      let(:example) do
        # Start Example 58
        client[:inventory].delete_one(status: 'D')
        # End Example 58
      end

      it 'matches the expected output' do
        expect(example.deleted_count).to eq(1)
        expect(client[:inventory].find.to_a.size).to eq(4)
      end
    end
  end
end

mongo-ruby-driver-2.21.3/spec/integration/size_limit_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe 'BSON & command size limits' do
  # https://jira.mongodb.org/browse/RUBY-3016
  retry_test

  let(:max_document_size) { 16*1024*1024 }

  before do
    authorized_collection.delete_many
  end

  # This test uses a large document that is significantly smaller than the
  # size limit. It is a basic sanity check.
  it 'allows user-provided documents to be 15MiB' do
    document = { key: 'a' * 15*1024*1024, _id: 'foo' }

    authorized_collection.insert_one(document)
  end

  # This test uses a large document that is significantly larger than the
  # size limit. It is a basic sanity check.
  it 'fails single write of oversized documents' do
    document = { key: 'a' * 17*1024*1024, _id: 'foo' }

    lambda do
      authorized_collection.insert_one(document)
    end.should raise_error(Mongo::Error::MaxBSONSize, /The document exceeds maximum allowed BSON object size after serialization/)
  end

  # This test checks our bulk write splitting when documents are not close
  # to the limit, but where splitting is definitely required.
  it 'allows split bulk write of medium sized documents' do
    # 8 documents of 4 MiB each = 32 MiB total data, should be split over
    # either 2 or 3 bulk writes depending on how well the driver splits
    documents = []
    1.upto(8) do |index|
      documents << { key: 'a' * 4*1024*1024, _id: "in#{index}" }
    end

    authorized_collection.insert_many(documents)
    authorized_collection.count_documents.should == 8
  end

  # This test ensures that documents which are too big definitely fail
  # insertion.
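  # Each document below exceeds the 16 MiB BSON limit on its own, so the
  # bulk write must fail before anything is persisted.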
  it 'fails bulk write of oversized documents' do
    documents = []
    1.upto(3) do |index|
      documents << { key: 'a' * 17*1024*1024, _id: "in#{index}" }
    end

    lambda do
      authorized_collection.insert_many(documents)
    end.should raise_error(Mongo::Error::MaxBSONSize, /The document exceeds maximum allowed BSON object size after serialization/)
    authorized_collection.count_documents.should == 0
  end

  it 'allows user-provided documents to be exactly 16MiB' do
    # The document must contain the _id field, otherwise the server will
    # add it which will increase the size of the document as persisted by
    # the server.
    document = { key: 'a' * (max_document_size - 28), _id: 'foo' }
    expect(document.to_bson.length).to eq(max_document_size)

    authorized_collection.insert_one(document)
  end

  it 'fails on the driver when a document larger than 16MiB is inserted' do
    document = { key: 'a' * (max_document_size - 27), _id: 'foo' }
    expect(document.to_bson.length).to eq(max_document_size+1)

    lambda do
      authorized_collection.insert_one(document)
    end.should raise_error(Mongo::Error::MaxBSONSize, /The document exceeds maximum allowed BSON object size after serialization/)
  end

  it 'fails on the driver when an update larger than 16MiB is performed' do
    document = { "$set" => { key: 'a' * (max_document_size - 25) } }
    expect(document.to_bson.length).to eq(max_document_size+1)

    lambda do
      authorized_collection.update_one({ _id: 'foo' }, document)
    end.should raise_error(Mongo::Error::MaxBSONSize, /The document exceeds maximum allowed BSON object size after serialization/)
  end

  it 'fails on the driver when a delete larger than 16MiB is performed' do
    document = { key: 'a' * (max_document_size - 14) }
    expect(document.to_bson.length).to eq(max_document_size+1)

    lambda do
      authorized_collection.delete_one(document)
    end.should raise_error(Mongo::Error::MaxBSONSize, /The document exceeds maximum allowed BSON object size after serialization/)
  end

  it 'fails in the driver when a document larger than 16MiB+16KiB is inserted' do
    document = { key: 'a' * (max_document_size - 27 + 16*1024), _id: 'foo' }
    expect(document.to_bson.length).to eq(max_document_size+16*1024+1)

    lambda do
      authorized_collection.insert_one(document)
    end.should raise_error(Mongo::Error::MaxBSONSize, /The document exceeds maximum allowed BSON object size after serialization/)
  end

  it 'allows bulk writes of multiple documents of exactly 16 MiB each' do
    documents = []
    1.upto(3) do |index|
      document = { key: 'a' * (max_document_size - 28), _id: "in#{index}" }
      expect(document.to_bson.length).to eq(max_document_size)
      documents << document
    end

    authorized_collection.insert_many(documents)
    authorized_collection.count_documents.should == 3
  end
end

mongo-ruby-driver-2.21.3/spec/integration/snappy_compression_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe 'Snappy compression' do
  require_snappy_compression

  before do
    authorized_client['test'].drop
  end

  context 'when client has snappy compressor option enabled' do
    it 'compresses the message to the server' do
      # Double check that the client has snappy compression enabled
      expect(authorized_client.options[:compressors]).to include('snappy')

      expect(Mongo::Protocol::Compressed).to receive(:new).twice.and_call_original
      expect(Snappy).to receive(:deflate).twice.and_call_original
      expect(Snappy).to receive(:inflate).twice.and_call_original

      authorized_client['test'].insert_one(_id: 1, text: 'hello world')
      document = authorized_client['test'].find(_id:
        1).first
      expect(document['text']).to eq('hello world')
    end
  end
end

mongo-ruby-driver-2.21.3/spec/integration/snapshot_query_examples_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe 'Snapshot Query Examples' do
  require_topology :replica_set, :sharded
  require_no_auth
  require_no_tls

  min_server_fcv '5.0'

  let(:uri_string) do
    "mongodb://#{SpecConfig.instance.addresses.join(',')}/?w=majority"
  end

  context "Snapshot Query Example 1" do
    before do
      client = authorized_client.use('pets')
      client['cats', write_concern: { w: :majority }].delete_many
      client['dogs', write_concern: { w: :majority }].delete_many

      client['cats', write_concern: { w: :majority }].insert_one(
        name: "Whiskers",
        color: "white",
        age: 10,
        adoptable: true
      )

      client['dogs', write_concern: { w: :majority }].insert_one(
        name: "Pebbles",
        color: "Brown",
        age: 10,
        adoptable: true
      )
      if ClusterConfig.instance.topology == :sharded
        run_mongos_distincts "pets", "cats"
      else
        wait_for_snapshot(db: 'pets', collection: 'cats')
        wait_for_snapshot(db: 'pets', collection: 'dogs')
      end
    end

    it "returns a snapshot of the data" do
      adoptable_pets_count = 0

      # Start Snapshot Query Example 1

      client = Mongo::Client.new(uri_string, database: "pets")

      client.start_session(snapshot: true) do |session|
        adoptable_pets_count = client['cats'].aggregate([
          { "$match": { "adoptable": true } },
          { "$count": "adoptable_cats_count" }
        ], session: session).first["adoptable_cats_count"]

        adoptable_pets_count += client['dogs'].aggregate([
          { "$match": { "adoptable": true } },
          { "$count": "adoptable_dogs_count" }
        ], session: session).first["adoptable_dogs_count"]

        puts adoptable_pets_count
      end

      # End Snapshot Query Example 1

      expect(adoptable_pets_count).to eq 2

      client.close
    end
  end

  context "Snapshot Query Example 2" do
    retry_test

    before do
      client = authorized_client.use('retail')
      client['sales', write_concern: { w: :majority }].delete_many

      client['sales', write_concern: { w: :majority }].insert_one(
        shoeType: "boot",
        price: 30,
        saleDate: Time.now
      )

      if ClusterConfig.instance.topology == :sharded
        run_mongos_distincts "retail", "sales"
      else
        wait_for_snapshot(db: 'retail', collection: 'sales')
      end
    end

    it "returns a snapshot of the data" do
      total = 0

      # Start Snapshot Query Example 2

      client = Mongo::Client.new(uri_string, database: "retail")

      client.start_session(snapshot: true) do |session|
        total = client['sales'].aggregate([
          {
            "$match": {
              "$expr": {
                "$gt": [
                  "$saleDate",
                  {
                    "$dateSubtract": {
                      startDate: "$$NOW",
                      unit: "day",
                      amount: 1
                    }
                  }
                ]
              }
            }
          },
          { "$count": "total_daily_sales" }
        ], session: session).first["total_daily_sales"]
      end

      # End Snapshot Query Example 2

      expect(total).to eq 1

      client.close
    end
  end
end

mongo-ruby-driver-2.21.3/spec/integration/srv_monitoring_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe 'SRV Monitoring' do
  clean_slate_for_all
  require_external_connectivity

  context 'with SRV lookups mocked at Resolver' do
    let(:srv_result) do
      double('srv result').tap do |result|
        allow(result).to receive(:empty?).and_return(false)
        allow(result).to receive(:address_strs).and_return(
          [ClusterConfig.instance.primary_address_str])
      end
    end

    let(:client) do
      allow_any_instance_of(Mongo::Srv::Resolver).to receive(:get_records).and_return(srv_result)
      allow_any_instance_of(Mongo::Srv::Resolver).to receive(:get_txt_options_string)
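      # With both resolver methods stubbed, the client below can be
      # constructed from an SRV URI ('foo.a.b' is a placeholder hostname)
      # without performing any real DNS lookups.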
      new_local_client_nmio('mongodb+srv://foo.a.b', server_selection_timeout: 3.15)
    end

    context 'standalone/replica set' do
      require_topology :single, :replica_set

      it 'does not create SRV monitor' do
        expect(client.cluster.topology).to be_a(Mongo::Cluster::Topology::Unknown)

        client.cluster.run_sdam_flow(
          Mongo::Server::Description.new(ClusterConfig.instance.primary_address_str),
          ClusterConfig.instance.primary_description,
        )

        expect(client.cluster.topology).not_to be_a(Mongo::Cluster::Topology::Unknown)

        expect(client.cluster.instance_variable_get('@srv_monitor')).to be nil
      end
    end

    context 'sharded cluster' do
      require_topology :sharded

      it 'creates SRV monitor' do
        expect(client.cluster.topology).to be_a(Mongo::Cluster::Topology::Unknown)

        # Since we force the cluster to run sdam flow which creates a monitor,
        # we need to manually adjust its state.
        client.cluster.instance_variable_set('@connecting', true)

        client.cluster.run_sdam_flow(
          Mongo::Server::Description.new(ClusterConfig.instance.primary_address_str),
          ClusterConfig.instance.primary_description,
        )

        expect(client.cluster.topology).to be_a(Mongo::Cluster::Topology::Sharded)

        expect(client.cluster.instance_variable_get('@srv_monitor')).to be_a(Mongo::Srv::Monitor)

        # Close the client in the test rather than allowing our post-test cleanup
        # to take care of it, since the client references test doubles.
        client.close
      end
    end
  end

  # These tests require a sharded cluster to be launched on localhost:27017
  # and localhost:27018, plus internet connectivity for SRV record lookups.
  context 'end to end' do
    require_default_port_deployment

    # JRuby apparently does not implement non-blocking UDP I/O which is used
    # by RubyDNS:
    # NotImplementedError: recvmsg_nonblock is not implemented
    fails_on_jruby

    minimum_mri_version '3.0.0'

    around do |example|
      # Speed up the tests by listening on the fake ports we are using.
      done = false

      servers = []
      threads = [27998, 27999].map do |port|
        Thread.new do
          server = TCPServer.open(port)
          servers << server
          begin
            loop do
              break if done
              server.accept.close rescue nil
            end
          ensure
            server.close
          end
        end
      end

      begin
        example.run
      ensure
        done = true
        servers.map(&:close)
        threads.map(&:kill)
        threads.map(&:join)
      end
    end

    let(:uri) do
      "mongodb+srv://test-fake.test.build.10gen.cc/?tls=#{SpecConfig.instance.ssl?}&tlsInsecure=true"
    end

    let(:logger) do
      Logger.new(STDERR, level: Logger::DEBUG)
    end

    let(:client) do
      new_local_client(uri,
        SpecConfig.instance.monitoring_options.merge(
          server_selection_timeout: 3.16,
          socket_timeout: 8.11,
          connect_timeout: 8.12,
          resolv_options: {
            # Using localhost instead of 127.0.0.1 here causes Ruby's resolv
            # client to drop responses.
            nameserver: '127.0.0.1',
            # TODO figure out why the address & port here need to be given
            # twice - if given once, DNS resolution fails.
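            # Port 5300 is where the test's mock DNS server (set up via
            # mock_dns below) is assumed to listen locally.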
            nameserver_port: [['127.0.0.1', 5300], ['127.0.0.1', 5300]],
          },
          logger: logger,
          populator_io: false,
        ),
      )
    end

    before do
      # Expedite the polling process
      allow_any_instance_of(Mongo::Srv::Monitor).to receive(:scan_interval).and_return(1)
    end

    context 'sharded cluster' do
      require_topology :sharded
      require_multi_mongos

      it 'updates topology via SRV records' do
        rules = [
          ['_mongodb._tcp.test-fake.test.build.10gen.cc', :srv,
            [0, 0, 27017, 'localhost.test.build.10gen.cc'],
          ],
        ]

        mock_dns(rules) do
          client.cluster.next_primary
          expect(client.cluster.topology).to be_a(Mongo::Cluster::Topology::Sharded)

          address_strs = client.cluster.servers.map(&:address).map(&:seed).sort
          expect(address_strs).to eq(%w(
            localhost.test.build.10gen.cc:27017
          ))
        end

        # In Evergreen there are replica set nodes on the next port number
        # after mongos nodes, therefore the addresses in DNS need to accurately
        # reflect how many mongos we have.
        rules = [
          ['_mongodb._tcp.test-fake.test.build.10gen.cc', :srv,
            [0, 0, 27018, 'localhost.test.build.10gen.cc'],
            [0, 0, 27017, 'localhost.test.build.10gen.cc'],
          ],
        ]

        mock_dns(rules) do
          15.times do
            address_strs = client.cluster.servers_list.map(&:address).map(&:seed).sort
            if address_strs == %w(
              localhost.test.build.10gen.cc:27017
              localhost.test.build.10gen.cc:27018
            )
            then
              break
            end
            sleep 1
          end

          address_strs = client.cluster.servers_list.map(&:address).map(&:seed).sort
          expect(address_strs).to eq(%w(
            localhost.test.build.10gen.cc:27017
            localhost.test.build.10gen.cc:27018
          ))
        end

        # And because we have only two mongos in Evergreen, test removal
        # separately here.
        rules = [
          ['_mongodb._tcp.test-fake.test.build.10gen.cc', :srv,
            [0, 0, 27018, 'localhost.test.build.10gen.cc'],
          ],
        ]

        mock_dns(rules) do
          15.times do
            address_strs = client.cluster.servers_list.map(&:address).map(&:seed).sort
            if address_strs == %w(
              localhost.test.build.10gen.cc:27018
            )
            then
              break
            end
            sleep 1
          end

          address_strs = client.cluster.servers_list.map(&:address).map(&:seed).sort
          expect(address_strs).to eq(%w(
            localhost.test.build.10gen.cc:27018
          ))

          expect(client.cluster.srv_monitor).to be_running
        end
      end
    end

    context 'unknown topology' do
      it 'updates topology via SRV records' do
        rules = [
          ['_mongodb._tcp.test-fake.test.build.10gen.cc', :srv,
            [0, 0, 27999, 'localhost.test.build.10gen.cc'],
          ],
        ]

        mock_dns(rules) do
          expect(client.cluster.topology).to be_a(Mongo::Cluster::Topology::Unknown)

          address_strs = client.cluster.servers_list.map(&:address).map(&:seed).sort
          expect(address_strs).to eq(%w(
            localhost.test.build.10gen.cc:27999
          ))
        end

        rules = [
          ['_mongodb._tcp.test-fake.test.build.10gen.cc', :srv,
            [0, 0, 27998, 'localhost.test.build.10gen.cc'],
            [0, 0, 27999, 'localhost.test.build.10gen.cc'],
          ],
        ]

        mock_dns(rules) do
          15.times do
            address_strs = client.cluster.servers_list.map(&:address).map(&:seed).sort
            if address_strs == %w(
              localhost.test.build.10gen.cc:27998
              localhost.test.build.10gen.cc:27999
            )
            then
              break
            end
            sleep 1
          end

          address_strs = client.cluster.servers_list.map(&:address).map(&:seed).sort
          expect(address_strs).to eq(%w(
            localhost.test.build.10gen.cc:27998
            localhost.test.build.10gen.cc:27999
          ))
        end

        rules = [
          ['_mongodb._tcp.test-fake.test.build.10gen.cc', :srv,
            [0, 0, 27997, 'localhost.test.build.10gen.cc'],
          ],
        ]

        mock_dns(rules) do
          15.times do
            address_strs = client.cluster.servers_list.map(&:address).map(&:seed).sort
            if address_strs == %w(
              localhost.test.build.10gen.cc:27997
            )
            then
              break
            end
            sleep 1
          end

          address_strs = client.cluster.servers_list.map(&:address).map(&:seed).sort
          expect(address_strs).to eq(%w(
            localhost.test.build.10gen.cc:27997
          ))

          expect(client.cluster.topology).to be_a(Mongo::Cluster::Topology::Unknown)

          expect(client.cluster.srv_monitor).to be_running
        end
      end
    end

    context 'unknown to sharded' do
      require_topology :sharded

      it 'updates topology via SRV records' do
        rules = [
          ['_mongodb._tcp.test-fake.test.build.10gen.cc', :srv,
            [0, 0, 27999, 'localhost.test.build.10gen.cc'],
          ],
        ]

        mock_dns(rules) do
          expect(client.cluster.topology).to be_a(Mongo::Cluster::Topology::Unknown)

          address_strs = client.cluster.servers_list.map(&:address).map(&:seed).sort
          expect(address_strs).to eq(%w(
            localhost.test.build.10gen.cc:27999
          ))
        end

        rules = [
          ['_mongodb._tcp.test-fake.test.build.10gen.cc', :srv,
            [0, 0, 27017, 'localhost.test.build.10gen.cc'],
          ],
        ]

        mock_dns(rules) do
          15.times do
            address_strs = client.cluster.servers.map(&:address).map(&:seed).sort
            if address_strs == %w(
              localhost.test.build.10gen.cc:27017
            )
            then
              break
            end
            sleep 1
          end

          address_strs = client.cluster.servers.map(&:address).map(&:seed).sort
          expect(address_strs).to eq(%w(
            localhost.test.build.10gen.cc:27017
          ))

          expect(client.cluster.topology).to be_a(Mongo::Cluster::Topology::Sharded)

          expect(client.cluster.srv_monitor).to be_running
        end
      end
    end

    context 'unknown to replica set' do
      require_topology :replica_set

      it 'updates topology via SRV records then stops SRV monitor' do
        rules = [
          ['_mongodb._tcp.test-fake.test.build.10gen.cc', :srv,
            [0, 0, 27999, 'localhost.test.build.10gen.cc'],
          ],
        ]

        mock_dns(rules) do
          expect(client.cluster.topology).to be_a(Mongo::Cluster::Topology::Unknown)

          address_strs = client.cluster.servers_list.map(&:address).map(&:seed).sort
          expect(address_strs).to eq(%w(
            localhost.test.build.10gen.cc:27999
          ))
        end

        rules = [
          ['_mongodb._tcp.test-fake.test.build.10gen.cc', :srv,
            [0, 0, 27017, 'localhost.test.build.10gen.cc'],
          ],
        ]

        mock_dns(rules) do
          15.times do
            address_strs = client.cluster.servers.map(&:address).map(&:seed).sort
            if address_strs == %w(
              localhost.test.build.10gen.cc:27017
            )
            then
              break
            end
            sleep 1
          end

          address_strs = client.cluster.servers.map(&:address).map(&:seed).sort
          # The actual address will be localhost:27017 or 127.0.0.1:27017,
          # depending on how the replica set is configured.
          expect(address_strs.any?
            { |str| str =~ /27017/ }).to be true

          # Covers both NoPrimary and WithPrimary replica sets
          expect(client.cluster.topology).to be_a(Mongo::Cluster::Topology::ReplicaSetNoPrimary)

          # give the thread another moment to stop
          sleep 0.1
          expect(client.cluster.srv_monitor).not_to be_running
        end
      end
    end

    context 'when the client mocks the srvServiceName' do
      let(:uri) do
        "mongodb+srv://test-fake.test.build.10gen.cc/?tls=#{SpecConfig.instance.ssl?}&tlsInsecure=true&srvServiceName=customname"
      end

      it 'finds the records using the custom service name' do
        rules = [
          ['_customname._tcp.test-fake.test.build.10gen.cc', :srv,
            [0, 0, 27998, 'localhost.test.build.10gen.cc'],
            [0, 0, 27999, 'localhost.test.build.10gen.cc'],
          ],
        ]

        mock_dns(rules) do
          15.times do
            address_strs = client.cluster.servers_list.map(&:address).map(&:seed).sort
            if address_strs == %w(
              localhost.test.build.10gen.cc:27998
              localhost.test.build.10gen.cc:27999
            )
            then
              break
            end
            sleep 1
          end

          address_strs = client.cluster.servers_list.map(&:address).map(&:seed).sort
          expect(address_strs).to eq(%w(
            localhost.test.build.10gen.cc:27998
            localhost.test.build.10gen.cc:27999
          ))
        end
      end
    end
  end
end

mongo-ruby-driver-2.21.3/spec/integration/srv_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe 'SRV lookup' do
  context 'end to end' do
    require_external_connectivity

    # JRuby apparently does not implement non-blocking UDP I/O which is used
    # by RubyDNS:
    # NotImplementedError: recvmsg_nonblock is not implemented
    fails_on_jruby

    let(:uri) do
      "mongodb+srv://test-fake.test.build.10gen.cc/?tls=#{SpecConfig.instance.ssl?}&tlsInsecure=true"
    end

    let(:client) do
      new_local_client(uri,
        SpecConfig.instance.ssl_options.merge(
          server_selection_timeout: 3.16,
          timeout: 4.11,
          connect_timeout: 4.12,
          resolv_options: {
            nameserver: 'localhost',
            nameserver_port: [['localhost', 5300], ['127.0.0.1', 5300]],
          },
        ),
      )
    end

    context 'DNS resolver not responding' do
      it 'fails to create client' do
        lambda do
          client
        end.should raise_error(Mongo::Error::NoSRVRecords, /The DNS query returned no SRV records for 'test-fake.test.build.10gen.cc'/)
      end

      it 'times out in connect_timeout' do
        start_time = Mongo::Utils.monotonic_time

        lambda do
          client
        end.should raise_error(Mongo::Error::NoSRVRecords)

        elapsed_time = Mongo::Utils.monotonic_time - start_time
        elapsed_time.should > 4
        # The number of queries performed depends on local DNS search suffixes,
        # therefore we cannot reliably assert how long it would take for this
        # resolution to time out.
        #elapsed_time.should < 8
      end
    end
  end
end

mongo-ruby-driver-2.21.3/spec/integration/ssl_uri_options_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe 'SSL connections with URI options' do
  # SpecConfig currently creates clients exclusively through non-URI options.
  # Because we don't currently have a way to create what the URI would look
  # like for a given client, it's simpler just to test that TLS works when
  # configured from a URI on a standalone server without auth required, since
  # that allows us to build the URI more easily.
  require_no_auth
  require_topology :single
  require_tls

  let(:hosts) do
    SpecConfig.instance.addresses.join(',')
  end

  let(:uri) do
    "mongodb://#{hosts}/?tls=true&tlsInsecure=true&tlsCertificateKeyFile=#{SpecConfig.instance.client_pem_path}"
  end

  it 'successfully connects and runs an operation' do
    client = new_local_client(uri)
    expect { client[:foo].count_documents }.not_to raise_error
  end
end

mongo-ruby-driver-2.21.3/spec/integration/step_down_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe 'Step down behavior' do
  require_topology :replica_set

  # This setup reduces the runtime of the test and makes execution more
  # reliable. The spec as written requests a simple brute force step down,
  # but this causes intermittent failures.
  before(:all) do
    # These before/after blocks are run even if the tests themselves are
    # skipped due to server version not being appropriate
    ClientRegistry.instance.close_all_clients
    if ClusterConfig.instance.fcv_ish >= '4.2' && ClusterConfig.instance.topology == :replica_set
      # It seems that a short election timeout can cause unintended elections,
      # which makes the server close connections which causes the driver to
      # reconnect which then fails the step down test.
      # The election timeout here is greater than the catch up period and
      # step down timeout specified in cluster tools.
      ClusterTools.instance.set_election_timeout(5)
      ClusterTools.instance.set_election_handoff(false)
    end
  end

  after(:all) do
    if ClusterConfig.instance.fcv_ish >= '4.2' && ClusterConfig.instance.topology == :replica_set
      ClusterTools.instance.set_election_timeout(10)
      ClusterTools.instance.set_election_handoff(true)
      ClusterTools.instance.reset_priorities
    end
  end

  let(:subscriber) { Mrss::EventSubscriber.new }

  let(:test_client) do
    authorized_client_without_any_retries.with(server_selection_timeout: 20).tap do |client|
      client.subscribe(Mongo::Monitoring::CONNECTION_POOL, subscriber)
    end
  end

  let(:collection) { test_client['step-down'].with(write: write_concern) }

  let(:admin_support_client) do
    ClientRegistry.instance.global_client('root_authorized').use('admin')
  end

  describe 'getMore iteration' do
    min_server_fcv '4.2'
    require_no_linting

    let(:subscribed_client) do
      test_client.tap do |client|
        client.subscribe(Mongo::Monitoring::COMMAND, subscriber)
        client.subscribe(Mongo::Monitoring::CONNECTION_POOL, subscriber)
      end
    end

    let(:collection) { subscribed_client['step-down'] }

    before do
      collection.insert_many([{test: 1}] * 100)
    end

    let(:view) { collection.find({test: 1}, batch_size: 10) }
    let(:enum) { view.to_enum }

    it 'continues through step down' do
      server = subscribed_client.cluster.next_primary
      server.pool_internal.do_clear
      server.pool_internal.ready

      subscriber.clear_events!

      # get the first item
      item = enum.next
      expect(item['test']).to eq(1)

      connection_created_events = subscriber.published_events.select do |event|
        event.is_a?(Mongo::Monitoring::Event::Cmap::ConnectionCreated)
      end
      expect(connection_created_events).not_to be_empty

      current_primary = subscribed_client.cluster.next_primary
      ClusterTools.instance.change_primary

      subscriber.clear_events!
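      # Clear events once more so the getMore assertions below are not
      # polluted by events from the primary change.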
      # exhaust the batch
      9.times do
        enum.next
      end

      # this should issue a getMore
      item = enum.next
      expect(item['test']).to eq(1)

      get_more_events = subscriber.started_events.select do |event|
        event.command['getMore']
      end

      expect(get_more_events.length).to eq(1)

      # getMore should have been sent on the same connection as find
      connection_created_events = subscriber.published_events.select do |event|
        event.is_a?(Mongo::Monitoring::Event::Cmap::ConnectionCreated)
      end
      expect(connection_created_events).to be_empty
    end

    after do
      # The tests normally operate with a low server selection timeout,
      # but since this test caused a cluster election we may need to wait
      # longer for the cluster to reestablish itself.
      # To keep all other tests' timeouts low, wait for primary to be
      # elected at the end of this test
      test_client.cluster.servers.each do |server|
        server.unknown!
      end
      test_client.cluster.next_primary

      # Since we are changing which server is primary, close all clients
      # to prevent subsequent tests setting fail points on servers which
      # are not primary
      ClientRegistry.instance.close_all_clients
    end
  end

  describe 'writes on connections' do
    min_server_fcv '4.0'

    let(:server) do
      client = test_client.with(app_name: rand)
      client['test'].insert_one(test: 1)
      client.cluster.next_primary
    end

    let(:fail_point) do
      { configureFailPoint: 'failCommand',
        data: {
          failCommands: ['insert'],
          errorCode: fail_point_code,
        },
        mode: { times: 1 }
      }
    end

    before do
      collection.find
      admin_support_client.command(fail_point)
    end

    after do
      admin_support_client.command(configureFailPoint: 'failCommand', mode: 'off')
    end

    describe 'not master - 4.2' do
      min_server_fcv '4.2'

      let(:write_concern) { {:w => 1} }

      # not master
      let(:fail_point_code) { 10107 }

      it 'keeps connection open' do
        subscriber.clear_events!

        expect do
          collection.insert_one(test: 1)
        end.to raise_error(Mongo::Error::OperationFailure, /10107/)

        expect(subscriber.select_published_events(Mongo::Monitoring::Event::Cmap::PoolCleared).count).to eq(0)

        expect do
          collection.insert_one(test: 1)
        end.to_not raise_error
      end
    end

    describe 'not master - 4.0' do
      max_server_version '4.0'

      let(:write_concern) { {:w => 1} }

      # not master
      let(:fail_point_code) { 10107 }

      it 'closes the connection' do
        subscriber.clear_events!

        expect do
          collection.insert_one(test: 1)
        end.to raise_error(Mongo::Error::OperationFailure, /10107/)

        expect(subscriber.select_published_events(Mongo::Monitoring::Event::Cmap::PoolCleared).count).to eq(1)

        expect do
          collection.insert_one(test: 1)
        end.to_not raise_error
      end
    end

    describe 'node shutting down' do
      let(:write_concern) { {:w => 1} }

      # interrupted at shutdown
      let(:fail_point_code) { 11600 }

      it 'closes the connection' do
        subscriber.clear_events!
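        # Code 11600 (interrupted at shutdown) is a state change error, so
        # the driver is expected to clear the connection pool.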
        expect do
          collection.insert_one(test: 1)
        end.to raise_error(Mongo::Error::OperationFailure, /11600/)

        expect(subscriber.select_published_events(Mongo::Monitoring::Event::Cmap::PoolCleared).count).to eq(1)

        expect do
          collection.insert_one(test: 1)
        end.to_not raise_error
      end
    end
  end
end

mongo-ruby-driver-2.21.3/spec/integration/time_zone_querying_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe 'Time zone querying' do
  let(:collection) { authorized_client[:time_zone_querying] }

  before do
    collection.delete_many
    collection.insert_many([
      {id: 1, created_at: Time.utc(2020, 10, 1, 23)},
      {id: 2, created_at: Time.utc(2020, 10, 2, 0)},
      {id: 3, created_at: Time.utc(2020, 10, 2, 1)},
    ])
  end

  context 'UTC time' do
    let(:time) { Time.utc(2020, 10, 1, 23, 22) }

    it 'finds correctly' do
      view = collection.find({created_at: {'$gt' => time}})
      expect(view.count).to eq(2)
      expect(view.map { |doc| doc[:id] }.sort).to eq([2, 3])
    end
  end

  context 'local time with zone' do
    let(:time) { Time.parse('2020-10-01T19:30:00-0500') }

    it 'finds correctly' do
      view = collection.find({created_at: {'$gt' => time}})
      expect(view.count).to eq(1)
      expect(view.first[:id]).to eq(3)
    end
  end

  context 'when ActiveSupport support is enabled' do
    before do
      unless SpecConfig.instance.active_support?
        skip "ActiveSupport support is not enabled"
      end
    end

    context 'ActiveSupport::TimeWithZone' do
      let(:time) { Time.parse('2020-10-01T19:30:00-0500').in_time_zone('America/New_York') }

      it 'finds correctly' do
        view = collection.find({created_at: {'$gt' => time}})
        expect(view.count).to eq(1)
        expect(view.first[:id]).to eq(3)
      end
    end
  end
end

mongo-ruby-driver-2.21.3/spec/integration/transaction_pinning_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe 'Transaction pinning' do
  let(:client) { authorized_client.with(max_pool_size: 4) }
  let(:collection_name) { 'tx_pinning' }
  let(:collection) { client[collection_name] }

  before do
    authorized_client[collection_name].insert_many([{test: 1}] * 200)
  end

  let(:server) { client.cluster.next_primary }

  clean_slate

  context 'non-lb' do
    require_topology :sharded
    min_server_fcv '4.2'

    # Start several transactions, then complete each of them.
    # Force each transaction to be on its own connection.

    before do
      client.reconnect if client.closed?
      4.times do |i|
        # Collections cannot be created inside transactions.
        client["tx_pin_t#{i}"].drop
        client["tx_pin_t#{i}"].create
      end
    end

    after do
      if pool = server.pool_internal
        pool.close
      end
    end

    it 'works' do
      sessions = []
      connections = []

      4.times do |i|
        session = client.start_session
        session.start_transaction
        client["tx_pin_t#{i}"].insert_one({test: 1}, session: session)
        session.pinned_server.should be_a(Mongo::Server)
        sessions << session
        connections << server.pool.check_out
      end
      server.pool.size.should == 4

      connections.each do |c|
        server.pool.check_in(c)
      end

      sessions.each_with_index do |session, i|
        client["tx_pin_t#{i}"].insert_one({test: 2}, session: session)
        session.commit_transaction
      end
    end
  end

  context 'lb' do
    require_topology :load_balanced
    min_server_fcv '4.2'

    # In load-balanced topology, we cannot create new connections to a
    # particular service.

    context 'when no connection is available' do
      require_no_linting

      before do
        client.reconnect if client.closed?
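        # As noted above, collections cannot be created inside transactions,
        # so set up the collection before the transaction starts.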
client["tx_pin"].drop client["tx_pin"].create end it 'raises MissingConnection' do session = client.start_session session.start_transaction client["tx_pin"].insert_one({test: 1}, session: session) session.pinned_server.should be nil session.pinned_connection_global_id.should_not be nil server.pool.size.should == 1 service_id = server.pool.instance_variable_get(:@available_connections).first.service_id server.pool.clear(service_id: service_id) server.pool.size.should == 0 lambda do client["tx_pin"].insert_one({test: 2}, session: session) end.should raise_error(Mongo::Error::MissingConnection) end end context 'when connection is available' do before do client.reconnect if client.closed? end it 'uses the available connection' do sessions = [] connections = [] 4.times do |i| session = client.start_session session.start_transaction client["tx_pin_t#{i}"].insert_one({test: 1}, session: session) session.pinned_server.should be nil session.pinned_connection_global_id.should_not be nil sessions << session connections << server.pool.check_out end server.pool.size.should == 4 connections.each do |c| server.pool.check_in(c) end sessions.each_with_index do |session, i| client["tx_pin_t#{i}"].insert_one({test: 2}, session: session) session.commit_transaction end end end end end mongo-ruby-driver-2.21.3/spec/integration/transactions_api_examples_spec.rb000066400000000000000000000037211505113246500272460ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe 'Transactions API examples' do require_wired_tiger require_transaction_support # Until https://jira.mongodb.org/browse/RUBY-1768 is implemented, limit # the tests to simple configurations require_no_auth require_no_tls let(:uri_string) do "mongodb://#{SpecConfig.instance.addresses.join(',')}" end it 'with_transaction API example 1' do # Start Transactions withTxn API Example 1 # For a replica set, include the replica set name and a seedlist of the members in the URI string; e.g. # uriString = 'mongodb://mongodb0.example.com:27017,mongodb1.example.com:27017/?replicaSet=myRepl' # For a sharded cluster, connect to the mongos instances; e.g. # uri_string = 'mongodb://mongos0.example.com:27017,mongos1.example.com:27017/' client = Mongo::Client.new(uri_string, write_concern: {w: :majority, wtimeout: 1000}) # Prereq: Create collections. client.use('mydb1')['foo'].insert_one(abc: 0) client.use('mydb2')['bar'].insert_one(xyz: 0) # Step 1: Define the callback that specifies the sequence of operations to perform inside the transactions. callback = Proc.new do |my_session| collection_one = client.use('mydb1')['foo'] collection_two = client.use('mydb2')['bar'] # Important: You must pass the session to the operations. collection_one.insert_one({'abc': 1}, session: my_session) collection_two.insert_one({'xyz': 999}, session: my_session) end #. Step 2: Start a client session. session = client.start_session # Step 3: Use with_transaction to start a transaction, execute the callback, and commit (or abort on error). session.with_transaction( read_concern: {level: :local}, write_concern: {w: :majority, wtimeout: 1000}, read: {mode: :primary}, &callback) # End Transactions withTxn API Example 1 # Do not leak clients. 
    client.close
  end
end

mongo-ruby-driver-2.21.3/spec/integration/transactions_examples_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe 'Transactions examples' do
  require_wired_tiger
  require_transaction_support

  let(:client) do
    authorized_client.with(read_concern: {level: :majority}, write: {w: :majority})
  end

  before do
    if SpecConfig.instance.client_debug?
      Mongo::Logger.logger.level = 0
    end
  end

  let(:hr) do
    client.use(:hr).database
  end

  let(:reporting) do
    client.use(:reporting).database
  end

  before(:each) do
    hr[:employees].insert_one(employee: 3, status: 'Active')
    # Sanity check since this test likes to fail
    employee = hr[:employees].find({ employee: 3 }, limit: 1).first
    expect(employee).to_not be_nil

    reporting[:events].insert_one(employee: 3, status: { new: 'Active', old: nil})
  end

  after(:each) do
    hr.drop
    reporting.drop

    # Work around https://jira.mongodb.org/browse/SERVER-53015
    ::Utils.mongos_each_direct_client do |client|
      client.database.command(flushRouterConfig: 1)
    end
  end

  context 'individual examples' do
    let(:session) do
      client.start_session
    end

    # Start Transactions Intro Example 1

    def update_employee_info(session)
      employees_coll = session.client.use(:hr)[:employees]
      events_coll = session.client.use(:reporting)[:events]

      session.start_transaction(read_concern: { level: :snapshot },
                                write_concern: { w: :majority })
      employees_coll.update_one({ employee: 3 }, { '$set' => { status: 'Inactive'} },
                                session: session)
      events_coll.insert_one({ employee: 3, status: { new: 'Inactive', old: 'Active' } },
                             session: session)

      begin
        session.commit_transaction
        puts 'Transaction committed.'
      rescue Mongo::Error => e
        if e.label?('UnknownTransactionCommitResult')
          puts "UnknownTransactionCommitResult, retrying commit operation..."
          retry
        else
          puts 'Error during commit ...'
          raise
        end
      end
    end

    # End Transactions Intro Example 1

    context 'Transactions Intro Example 1' do
      let(:run_transaction) do
        update_employee_info(session)
      end

      it 'makes the changes to the database' do
        run_transaction
        employee = hr[:employees].find({ employee: 3 }, limit: 1).first
        expect(employee).to_not be_nil
        expect(employee['status']).to eq('Inactive')
      end
    end

    context 'Transactions Retry Example 1' do
      # Start Transactions Retry Example 1

      def run_transaction_with_retry(session)
        begin
          yield session # performs transaction
        rescue Mongo::Error => e
          puts 'Transaction aborted. Caught exception during transaction.'
          raise unless e.label?('TransientTransactionError')

          puts "TransientTransactionError, retrying transaction ..."
          retry
        end
      end

      # End Transactions Retry Example 1

      let(:run_transaction) do
        run_transaction_with_retry(session) { |s| update_employee_info(s) }
      end

      it 'makes the changes to the database' do
        run_transaction
        employee = hr[:employees].find({ employee: 3 }, limit: 1).first
        expect(employee).to_not be_nil
        expect(employee['status']).to eq('Inactive')
      end
    end

    context 'Transactions Retry Example 2' do
      # Start Transactions Retry Example 2

      def commit_with_retry(session)
        begin
          session.commit_transaction
          puts 'Transaction committed.'
        rescue Mongo::Error => e
          if e.label?('UnknownTransactionCommitResult')
            puts "UnknownTransactionCommitResult, retrying commit operation..."
            retry
          else
            puts 'Error during commit ...'
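            # Errors without the UnknownTransactionCommitResult label are
            # not safe to retry, so re-raise them.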
raise end end end # End Transactions Retry Example 2 let(:run_transaction) do session.start_transaction hr[:employees].insert_one({ employee: 4, status: 'Active' }, session: session) reporting[:events].insert_one({ employee: 4, status: { new: 'Active', old: nil } }, session: session) commit_with_retry(session) end it 'makes the changes to the database' do run_transaction employee = hr[:employees].find({ employee: 4 }, limit: 1).first expect(employee).to_not be_nil expect(employee['status']).to eq('Active') end end end context 'Transactions Retry Example 3 (combined example)' do let(:run_transaction) do # Start Transactions Retry Example 3 def run_transaction_with_retry(session) begin yield session # performs transaction rescue Mongo::Error => e puts 'Transaction aborted. Caught exception during transaction.' raise unless e.label?('TransientTransactionError') puts "TransientTransactionError, retrying transaction ..." retry end end def commit_with_retry(session) begin session.commit_transaction puts 'Transaction committed.' rescue Mongo::Error => e if e.label?('UnknownTransactionCommitResult') puts "UnknownTransactionCommitResult, retrying commit operation ..." retry else puts 'Error during commit ...' raise end end end # updates two collections in a transaction def update_employee_info(session) employees_coll = session.client.use(:hr)[:employees] events_coll = session.client.use(:reporting)[:events] session.start_transaction(read_concern: { level: :snapshot }, write_concern: { w: :majority }, read: {mode: :primary}) employees_coll.update_one({ employee: 3 }, { '$set' => { status: 'Inactive'} }, session: session) events_coll.insert_one({ employee: 3, status: { new: 'Inactive', old: 'Active' } }, session: session) commit_with_retry(session) end session = client.start_session begin run_transaction_with_retry(session) do update_employee_info(session) end rescue StandardError => e # Do something with error raise end # End Transactions Retry Example 3 end it 'makes the changes to the database' do run_transaction employee = hr[:employees].find({ employee: 3 }, limit: 1).first expect(employee).to_not be_nil expect(employee['status']).to eq('Inactive') end end end mongo-ruby-driver-2.21.3/spec/integration/truncated_utf8_spec.rb000066400000000000000000000010161505113246500247410ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe 'truncated UTF-8 in server error messages' do let(:rep) do '(╯°□°)╯︵ ┻━┻' end let(:collection) do authorized_client['truncated_utf8'] end before(:all) do ClientRegistry.instance.global_client('authorized')['truncated_utf8'].indexes.create_one( {k: 1}, unique: true) end it 'works' do pending 'RUBY-2560' collection.insert_one(k: rep*20) collection.insert_one(k: rep*20) end end mongo-ruby-driver-2.21.3/spec/integration/versioned_api_examples_spec.rb000066400000000000000000000072661505113246500265440ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe 'Versioned API examples' do # Until https://jira.mongodb.org/browse/RUBY-1768 is implemented, limit # the tests to simple configurations require_no_auth require_no_tls min_server_version("5.0") let(:uri_string) do "mongodb://#{SpecConfig.instance.addresses.join(',')}/versioned-api-examples" end it 'Versioned API example 1' do # Start Versioned API Example 1 client = Mongo::Client.new(uri_string, server_api: {version: "1"}) # End Versioned API Example 1 # Run a command to ensure the client works. 
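# (server_api accepts up to three keys; the examples below introduce
# :strict and :deprecation_errors separately, and they can also be
# combined, e.g. as a sketch:
#
#   Mongo::Client.new(uri_string,
#     server_api: {version: "1", strict: true, deprecation_errors: true})
#
# The find below issues a real command under the declared API version.)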
client['test'].find.to_a.should be_a(Array) # Do not leak clients. client.close end it 'Versioned API example 2' do # Start Versioned API Example 2 client = Mongo::Client.new(uri_string, server_api: {version: "1", strict: true}) # End Versioned API Example 2 # Run a command to ensure the client works. client['test'].find.to_a.should be_a(Array) # Do not leak clients. client.close end it 'Versioned API example 3' do # Start Versioned API Example 3 client = Mongo::Client.new(uri_string, server_api: {version: "1", strict: false}) # End Versioned API Example 3 # Run a command to ensure the client works. client['test'].find.to_a.should be_a(Array) # Do not leak clients. client.close end it 'Versioned API example 4' do # Start Versioned API Example 4 client = Mongo::Client.new(uri_string, server_api: {version: "1", deprecation_errors: true}) # End Versioned API Example 4 # Run a command to ensure the client works. client['test'].find.to_a.should be_a(Array) # Do not leak clients. client.close end # See also RUBY-2922 for count in versioned api v1. context 'servers that exclude count from versioned api' do max_server_version '5.0.8' it "Versioned API Strict Migration Example" do client = Mongo::Client.new(uri_string, server_api: {version: "1", strict: true}) client[:sales].drop # Start Versioned API Example 5 client[:sales].insert_many([ { _id: 1, item: "abc", price: 10, quantity: 2, date: DateTime.parse("2021-01-01T08:00:00Z") }, { _id: 2, item: "jkl", price: 20, quantity: 1, date: DateTime.parse("2021-02-03T09:00:00Z") }, { _id: 3, item: "xyz", price: 5, quantity: 5, date: DateTime.parse("2021-02-03T09:05:00Z") }, { _id: 4, item: "abc", price: 10, quantity: 10, date: DateTime.parse("2021-02-15T08:00:00Z") }, { _id: 5, item: "xyz", price: 5, quantity: 10, date: DateTime.parse("2021-02-15T09:05:00Z") }, { _id: 6, item: "xyz", price: 5, quantity: 5, date: DateTime.parse("2021-02-15T12:05:10Z") }, { _id: 7, item: "xyz", price: 5, quantity: 10, date: DateTime.parse("2021-02-15T14:12:12Z") }, { _id: 8, item: "abc", price: 10, quantity: 5, date: DateTime.parse("2021-03-16T20:20:13Z") } ]) # End Versioned API Example 5 expect do client.database.command(count: :sales) end.to raise_error(Mongo::Error::OperationFailure) # Start Versioned API Example 6 # Mongo::Error::OperationFailure: # [323:APIStrictError]: Provided apiStrict:true, but the command count is not in API Version 1. Information on supported commands and migrations in API Version 1 can be found at https://www.mongodb.com/docs/manual/reference/stable-api # End Versioned API Example 6 # Start Versioned API Example 7 client[:sales].count_documents # End Versioned API Example 7 # Start Versioned API Example 8 # 8 # End Versioned API Example 8 # Do not leak clients. client.close end end end mongo-ruby-driver-2.21.3/spec/integration/x509_auth_spec.rb000066400000000000000000000070261505113246500235370ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' # These tests assume the server was started with the certificates in # spec/support/certificates, and has the user that Evergreen scripts create # corresponding to the client certificate. 
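# A typical client configuration for these tests pairs the TLS options
# with the X.509 mechanism (a sketch only; the real options come from
# SpecConfig.instance.ssl_options, and the certificate paths here are
# illustrative):
#
#   Mongo::Client.new(['localhost:27017'],
#     ssl: true,
#     ssl_cert: 'spec/support/certificates/client.pem',
#     ssl_key: 'spec/support/certificates/client.pem',
#     ssl_ca_cert: 'spec/support/certificates/ca.crt',
#     auth_mech: :mongodb_x509)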
describe 'X.509 auth integration tests' do require_x509_auth let(:authenticated_user_info) do # https://stackoverflow.com/questions/21414608/mongodb-show-current-user info = client.database.command(connectionStatus: 1).documents.first info[:authInfo][:authenticatedUsers].first end let(:authenticated_user_name) { authenticated_user_info[:user] } let(:client) do new_local_client(SpecConfig.instance.addresses, client_options) end let(:base_client_options) { SpecConfig.instance.ssl_options } context 'when auth not specified' do let(:client_options) do base_client_options end it 'does not authenticate' do authenticated_user_info.should be nil end end context 'certificate matching a defined user' do let(:common_name) do "C=US,ST=New York,L=New York City,O=MongoDB,OU=x509,CN=localhost".freeze end let(:subscriber) { Mrss::EventSubscriber.new } shared_examples 'authenticates successfully' do it 'authenticates successfully' do authenticated_user_name.should == common_name end let(:commands) do client.subscribe(Mongo::Monitoring::COMMAND, subscriber) authenticated_user_name commands = subscriber.started_events.map(&:command_name) end context 'server 4.2 and lower' do max_server_version '4.2' it 'uses the authenticate command to authenticate' do commands.should == %w(authenticate connectionStatus) end end context 'server 4.4 and higher' do min_server_fcv '4.4' it 'uses speculative authentication in hello to authenticate' do commands.should == %w(connectionStatus) end end end context 'when user name is not explicitly provided' do let(:client_options) do base_client_options.merge(auth_mech: :mongodb_x509) end it_behaves_like 'authenticates successfully' end context 'when user name is explicitly provided and matches certificate common name' do let(:client_options) do base_client_options.merge(auth_mech: :mongodb_x509, user: common_name) end it_behaves_like 'authenticates successfully' end context 'when user name is explicitly provided and does not match certificate common name' do let(:client_options) do base_client_options.merge(auth_mech: :mongodb_x509, user: 'OU=world,CN=hello') end it 'fails to authenticate' do lambda do authenticated_user_name end.should raise_error(Mongo::Auth::Unauthorized, /Client certificate.*is not authorized/) end # This test applies to both pre-4.4 and 4.4+. # When speculative authentication fails, the response is indistinguishable # from that of a server that does not support speculative authentication, # and we will try to authenticate as a separate command. 
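# (Hence the expectation below: only the authenticate command is started,
# and no connectionStatus is issued because authorization fails first.)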
it 'uses the authenticate command to authenticate' do client.subscribe(Mongo::Monitoring::COMMAND, subscriber) lambda do authenticated_user_name end.should raise_error(Mongo::Auth::Unauthorized, /Client certificate.*is not authorized/) commands = subscriber.started_events.map(&:command_name) commands.should == %w(authenticate) end end end end mongo-ruby-driver-2.21.3/spec/integration/zlib_compression_spec.rb000066400000000000000000000015501505113246500253660ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe 'Zlib compression' do require_zlib_compression before do authorized_client['test'].drop end context 'when client has zlib compressor option enabled' do it 'compresses the message to the server' do # Double check that the client has zlib compression enabled expect(authorized_client.options[:compressors]).to include('zlib') expect(Mongo::Protocol::Compressed).to receive(:new).twice.and_call_original expect(Zlib::Deflate).to receive(:deflate).twice.and_call_original expect(Zlib::Inflate).to receive(:inflate).twice.and_call_original authorized_client['test'].insert_one(_id: 1, text: 'hello world') document = authorized_client['test'].find(_id: 1).first expect(document['text']).to eq('hello world') end end end mongo-ruby-driver-2.21.3/spec/integration/zstd_compression_spec.rb000066400000000000000000000015671505113246500254170ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe 'Zstd compression' do min_server_version '4.2' require_zstd_compression before do authorized_client['test'].drop end context 'when client has zstd compressor option enabled' do it 'compresses the message to the server' do # Double check that the client has zstd compression enabled expect(authorized_client.options[:compressors]).to include('zstd') expect(Mongo::Protocol::Compressed).to receive(:new).twice.and_call_original expect(Zstd).to receive(:compress).twice.and_call_original expect(Zstd).to receive(:decompress).twice.and_call_original authorized_client['test'].insert_one(_id: 1, text: 'hello world') document = authorized_client['test'].find(_id: 1).first expect(document['text']).to eq('hello world') end end end mongo-ruby-driver-2.21.3/spec/kerberos/000077500000000000000000000000001505113246500177365ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/kerberos/kerberos_spec.rb000066400000000000000000000042611505113246500231140ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'lite_spec_helper' describe 'kerberos authentication' do require_mongo_kerberos before(:all) do unless %w(1 yes true).include?(ENV['MONGO_RUBY_DRIVER_KERBEROS_INTEGRATION']&.downcase) skip "Set MONGO_RUBY_DRIVER_KERBEROS_INTEGRATION=1 in environment to run the Kerberos integration tests" end end def require_env_value(key) ENV[key].tap do |value| if value.nil? || value.empty?
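# Treat an empty string the same as a missing key so a misconfigured
# environment fails fast here rather than during authentication.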
raise "Value for key #{key} is not present in environment as required" end end end after do client&.close end let(:user) do "#{require_env_value('SASL_USER')}%40#{realm}" end let(:host) do require_env_value('SASL_HOST') end let(:realm) do require_env_value('SASL_REALM') end let(:kerberos_db) do require_env_value('KERBEROS_DB') end let(:auth_source) do require_env_value('SASL_DB') end let(:uri) do uri = "mongodb://#{user}@#{host}/#{kerberos_db}?authMechanism=GSSAPI&authSource=#{auth_source}" end let(:client) do Mongo::Client.new(uri, server_selection_timeout: 6.31) end let(:doc) do client.database[:test].find.first end shared_examples_for 'correctly authenticates' do it 'correctly authenticates' do expect(doc['kerberos']).to eq(true) expect(doc['authenticated']).to eq('yeah') end end it_behaves_like 'correctly authenticates' context 'when host is lowercased' do let(:host) do require_env_value('SASL_HOST').downcase end it_behaves_like 'correctly authenticates' end context 'when host is uppercased' do let(:host) do require_env_value('SASL_HOST').upcase end it_behaves_like 'correctly authenticates' end context 'when canonicalize_host_name is true' do let(:host) do "#{require_env_value('IP_ADDR')}" end let(:uri) do uri = "mongodb://#{user}@#{host}/#{kerberos_db}?authMechanism=GSSAPI&authSource=#{auth_source}&authMechanismProperties=CANONICALIZE_HOST_NAME:true" end it 'correctly authenticates when using the IP' do expect(doc['kerberos']).to eq(true) expect(doc['authenticated']).to eq('yeah') end end end mongo-ruby-driver-2.21.3/spec/lite_spec_helper.rb000066400000000000000000000140471505113246500217630ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all $LOAD_PATH.unshift(File.join(File.dirname(__FILE__), "shared", "lib")) COVERAGE_MIN = 90 CURRENT_PATH = File.expand_path(File.dirname(__FILE__)) SERVER_DISCOVERY_TESTS = Dir.glob("#{CURRENT_PATH}/spec_tests/data/sdam/**/*.yml").sort SDAM_MONITORING_TESTS = Dir.glob("#{CURRENT_PATH}/spec_tests/data/sdam_monitoring/*.yml").sort SERVER_SELECTION_RTT_TESTS = Dir.glob("#{CURRENT_PATH}/spec_tests/data/server_selection_rtt/*.yml").sort CRUD_TESTS = Dir.glob("#{CURRENT_PATH}/spec_tests/data/crud/**/*.yml").sort CONNECTION_STRING_TESTS = Dir.glob("#{CURRENT_PATH}/spec_tests/data/connection_string/*.yml").sort URI_OPTIONS_TESTS = Dir.glob("#{CURRENT_PATH}/spec_tests/data/uri_options/*.yml").sort GRIDFS_TESTS = Dir.glob("#{CURRENT_PATH}/spec_tests/data/gridfs/*.yml").sort TRANSACTIONS_TESTS = Dir.glob("#{CURRENT_PATH}/spec_tests/data/transactions/*.yml").sort TRANSACTIONS_API_TESTS = Dir.glob("#{CURRENT_PATH}/spec_tests/data/transactions_api/*.yml").sort CHANGE_STREAMS_TESTS = Dir.glob("#{CURRENT_PATH}/spec_tests/data/change_streams/*.yml").sort CMAP_TESTS = Dir.glob("#{CURRENT_PATH}/spec_tests/data/cmap/*.yml").sort.select do |f| # Skip tests that are flaky on JRuby. 
# https://jira.mongodb.org/browse/RUBY-3292 !defined?(JRUBY_VERSION) || !f.include?('pool-checkout-minPoolSize-connection-maxConnecting.yml') end AUTH_TESTS = Dir.glob("#{CURRENT_PATH}/spec_tests/data/auth/*.yml").sort CLIENT_SIDE_ENCRYPTION_TESTS = Dir.glob("#{CURRENT_PATH}/spec_tests/data/client_side_encryption/*.yml").sort.delete_if do |spec| ![ 1, '1', 'yes', 'true' ].include?(ENV['CSOT_SPEC_TESTS']) && spec =~ /.*timeoutMS.yml$/ end # Disable output buffering: https://www.rubyguides.com/2019/02/ruby-io/ STDOUT.sync = true STDERR.sync = true if %w(1 true yes).include?(ENV['CI']&.downcase) autoload :Byebug, 'byebug' else # Load debuggers before loading the driver code, so that breakpoints # can be placed in the driver code on file/class level. begin require 'byebug' rescue LoadError begin require 'ruby-debug' rescue LoadError end end end require 'mongo' require 'pp' if BSON::Environment.jruby? # Autoloading appears to not work in some environments without these # gem calls. May have to do with rubygems version? gem 'ice_nine' gem 'timecop' end autoload :Benchmark, 'benchmark' autoload :IceNine, 'ice_nine' autoload :Timecop, 'timecop' autoload :ChildProcess, 'childprocess' require 'rspec/retry' if BSON::Environment.jruby? require 'concurrent-ruby' PossiblyConcurrentArray = Concurrent::Array else PossiblyConcurrentArray = Array end require 'support/utils' require 'support/spec_config' Mongo::Logger.logger = Logger.new(STDOUT) unless SpecConfig.instance.client_debug? Mongo::Logger.logger.level = Logger::INFO end Encoding.default_external = Encoding::UTF_8 module Mrss autoload :Utils, 'mrss/utils' end require 'mrss/lite_constraints' require 'support/matchers' require 'mrss/event_subscriber' require 'support/common_shortcuts' require 'support/client_registry' require 'support/client_registry_macros' require 'support/mongos_macros' require 'support/macros' require 'support/crypt' require 'support/json_ext_formatter' require 'support/sdam_formatter_integration' require 'support/background_thread_registry' require 'mrss/session_registry' require 'support/local_resource_registry' if SpecConfig.instance.mri? && !SpecConfig.instance.windows? require 'timeout_interrupt' else require 'timeout' TimeoutInterrupt = Timeout end Mrss.patch_mongo_for_session_registry class ExampleTimeout < StandardError; end STANDARD_TIMEOUTS = { stress: 210, jruby: 90, default: 45, }.freeze def timeout_type if ENV['EXAMPLE_TIMEOUT'].to_i > 0 :custom elsif %w(1 true yes).include?(ENV['STRESS']&.downcase) :stress elsif BSON::Environment.jruby? :jruby else :default end end def example_timeout_seconds STANDARD_TIMEOUTS.fetch( timeout_type, (ENV['EXAMPLE_TIMEOUT'] || STANDARD_TIMEOUTS[:default]).to_i ) end RSpec.configure do |config| config.extend(CommonShortcuts::ClassMethods) config.include(CommonShortcuts::InstanceMethods) config.extend(Mrss::LiteConstraints) config.include(ClientRegistryMacros) config.include(MongosMacros) config.extend(Mongo::Macros) # Used for spec/solo/* def require_solo before(:all) do unless %w(1 true yes).include?(ENV['SOLO']) skip 'Set SOLO=1 in environment to run solo tests' end end end def require_atlas before do skip 'Set ATLAS_URI in environment to run atlas tests' if ENV['ATLAS_URI'].nil? end end if SpecConfig.instance.ci? 
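# In CI, capture SDAM log output and re-assign it to each example's id
# so failures can be correlated with server discovery events.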
SdamFormatterIntegration.subscribe config.add_formatter(JsonExtFormatter, File.join(File.dirname(__FILE__), '../tmp/rspec.json')) config.around(:each) do |example| SdamFormatterIntegration.assign_log_entries(nil) begin example.run ensure SdamFormatterIntegration.assign_log_entries(example.id) end end end if SpecConfig.instance.ci? if defined?(Rfc::Rif) unless BSON::Environment.jruby? Rfc::Rif.output_object_space_stats = true end # Uncomment this line to log memory and CPU statistics during # test suite execution to diagnose issues potentially related to # system resource exhaustion. #Rfc::Rif.output_system_load = true end end config.expect_with :rspec do |c| c.syntax = [:should, :expect] c.max_formatted_output_length = 10000 end if config.respond_to?(:fuubar_output_pending_results=) config.fuubar_output_pending_results = false end end if SpecConfig.instance.active_support? require "active_support/version" if ActiveSupport.version >= Gem::Version.new(7) # ActiveSupport wants us to require ALL of it all of the time. # See: https://github.com/rails/rails/issues/43851, # https://github.com/rails/rails/issues/43889, etc. require 'active_support' end require "active_support/time" require 'mongo/active_support' end if File.exist?('.env.private') require 'dotenv' Dotenv.load('.env.private') end mongo-ruby-driver-2.21.3/spec/mongo/000077500000000000000000000000001505113246500172415ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/mongo/address/000077500000000000000000000000001505113246500206665ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/mongo/address/ipv4_spec.rb000066400000000000000000000042361505113246500231140ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe Mongo::Address::IPv4 do let(:resolver) do described_class.new(*described_class.parse(address)) end describe 'self.parse' do context 'when a port is provided' do it 'returns the host and port' do expect(described_class.parse('127.0.0.1:27017')).to eq(['127.0.0.1', 27017]) end end context 'when no port is provided' do it 'returns the host and port' do expect(described_class.parse('127.0.0.1')).to eq(['127.0.0.1', 27017]) end end end describe '#initialize' do context 'when a port is provided' do let(:address) do '127.0.0.1:27017' end it 'sets the port' do expect(resolver.port).to eq(27017) end it 'sets the host' do expect(resolver.host).to eq('127.0.0.1') end end context 'when no port is provided' do let(:address) do '127.0.0.1' end it 'sets the port to 27017' do expect(resolver.port).to eq(27017) end it 'sets the host' do expect(resolver.host).to eq('127.0.0.1') end end end describe '#socket' do let(:address) do '127.0.0.1' end context 'when ssl options are provided' do let(:socket) do resolver.socket(5, ssl: true) end it 'returns an ssl socket' do allow_any_instance_of(Mongo::Socket::SSL).to receive(:connect!) expect(socket).to be_a(Mongo::Socket::SSL) end it 'sets the family as ipv4' do allow_any_instance_of(Mongo::Socket::SSL).to receive(:connect!) expect(socket.family).to eq(Socket::PF_INET) end end context 'when ssl options are not provided' do let(:socket) do resolver.socket(5) end it 'returns a tcp socket' do allow_any_instance_of(Mongo::Socket::TCP).to receive(:connect!) expect(socket).to be_a(Mongo::Socket::TCP) end it 'sets the family a ipv4' do allow_any_instance_of(Mongo::Socket::TCP).to receive(:connect!) 
expect(socket.family).to eq(Socket::PF_INET) end end end end mongo-ruby-driver-2.21.3/spec/mongo/address/ipv6_spec.rb000066400000000000000000000065311505113246500231160ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe Mongo::Address::IPv6 do let(:resolver) do described_class.new(*described_class.parse(address)) end describe 'self.parse' do context 'when a port is provided' do it 'returns the host and port' do expect(described_class.parse('[::1]:27017')).to eq(['::1', 27017]) end end context 'when no port is provided and host is in brackets' do it 'returns the host and port' do expect(described_class.parse('[::1]')).to eq(['::1', 27017]) end end context 'when no port is provided and host is not in brackets' do it 'returns the host and port' do expect(described_class.parse('::1')).to eq(['::1', 27017]) end end context 'when invalid address is provided' do it 'raises ArgumentError' do expect do described_class.parse('::1:27017') end.to raise_error(ArgumentError, 'Invalid IPv6 address: ::1:27017') end it 'rejects extra data around the address' do expect do described_class.parse('[::1]:27017oh') end.to raise_error(ArgumentError, 'Invalid IPv6 address: [::1]:27017oh') end it 'rejects bogus data in brackets' do expect do described_class.parse('[::hello]:27017') end.to raise_error(ArgumentError, 'Invalid IPv6 address: [::hello]:27017') end end end describe '#initialize' do context 'when a port is provided' do let(:address) do '[::1]:27017' end it 'sets the port' do expect(resolver.port).to eq(27017) end it 'sets the host' do expect(resolver.host).to eq('::1') end end context 'when no port is provided' do let(:address) do '[::1]' end it 'sets the port to 27017' do expect(resolver.port).to eq(27017) end it 'sets the host' do expect(resolver.host).to eq('::1') end end end describe '#socket' do # In JRuby 9.3.2.0 Socket::PF_INET6 is nil, causing IPv6 tests to fail. # https://github.com/jruby/jruby/issues/7069 # JRuby 9.2 works correctly, this test is skipped on all JRuby versions # because we intend to remove JRuby support altogether and therefore # adding logic to condition on JRuby versions does not make sense. fails_on_jruby let(:address) do '[::1]' end context 'when ssl options are provided' do let(:socket) do resolver.socket(5, :ssl => true) end it 'returns an ssl socket' do allow_any_instance_of(Mongo::Socket::SSL).to receive(:connect!) expect(socket).to be_a(Mongo::Socket::SSL) end it 'sets the family as ipv6' do allow_any_instance_of(Mongo::Socket::SSL).to receive(:connect!) expect(socket.family).to eq(Socket::PF_INET6) end end context 'when ssl options are not provided' do let(:socket) do resolver.socket(5) end it 'returns a tcp socket' do allow_any_instance_of(Mongo::Socket::TCP).to receive(:connect!) expect(socket).to be_a(Mongo::Socket::TCP) end it 'sets the family a ipv6' do allow_any_instance_of(Mongo::Socket::TCP).to receive(:connect!) 
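# connect! is stubbed, so no real connection is attempted; only the
# address family of the constructed socket is asserted.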
expect(socket.family).to eq(Socket::PF_INET6) end end end end mongo-ruby-driver-2.21.3/spec/mongo/address/unix_spec.rb000066400000000000000000000015171505113246500232140ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe Mongo::Address::Unix do let(:resolver) do described_class.new(*described_class.parse(address)) end describe 'self.parse' do it 'returns the host and no port' do expect(described_class.parse('/path/to/socket.sock')).to eq(['/path/to/socket.sock']) end end describe '#initialize' do let(:address) do '/path/to/socket.sock' end it 'sets the host' do expect(resolver.host).to eq('/path/to/socket.sock') end end describe '#socket' do require_unix_socket let(:address) do "/tmp/mongodb-#{SpecConfig.instance.any_port}.sock" end let(:socket) do resolver.socket(5) end it 'returns a unix socket' do expect(socket).to be_a(Mongo::Socket::Unix) end end end mongo-ruby-driver-2.21.3/spec/mongo/address/validator_spec.rb000066400000000000000000000022551505113246500242160ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'lite_spec_helper' class ValidatorHost include Mongo::Address::Validator end describe Mongo::Address::Validator do let(:host) { ValidatorHost.new } describe '#validate_address_str!' do shared_examples_for 'raises InvalidAddress' do it 'raises InvalidAddress' do expect do host.validate_address_str!(address_str) end.to raise_error(Mongo::Error::InvalidAddress) end end shared_examples_for 'passes validation' do it 'passes validation' do expect do host.validate_address_str!(address_str) end.not_to raise_error end end context 'leading dots' do let(:address_str) { '.foo.bar.com' } it_behaves_like 'raises InvalidAddress' end context 'trailing dots' do let(:address_str) { 'foo.bar.com.' 
} it_behaves_like 'raises InvalidAddress' end context 'runs of multiple dots' do let(:address_str) { 'foo..bar.com' } it_behaves_like 'raises InvalidAddress' end context 'no dots' do let(:address_str) { 'foo' } it_behaves_like 'passes validation' end end end mongo-ruby-driver-2.21.3/spec/mongo/address_spec.rb000066400000000000000000000204471505113246500222340ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe Mongo::Address do describe '#==' do context 'when the other host and port are the same' do let(:address) do described_class.new('127.0.0.1:27017') end let(:other) do described_class.new('127.0.0.1:27017') end it 'returns true' do expect(address).to eq(other) end end context 'when the other port is different' do let(:address) do described_class.new('127.0.0.1:27017') end let(:other) do described_class.new('127.0.0.1:27018') end it 'returns false' do expect(address).to_not eq(other) end end context 'when the other host is different' do let(:address) do described_class.new('127.0.0.1:27017') end let(:other) do described_class.new('127.0.0.2:27017') end it 'returns false' do expect(address).to_not eq(other) end end context 'when the other object is not an address' do let(:address) do described_class.new('127.0.0.1:27017') end it 'returns false' do expect(address).to_not eq('test') end end context 'when the addresses are identical unix sockets' do let(:address) do described_class.new('/path/to/socket.sock') end let(:other) do described_class.new('/path/to/socket.sock') end it 'returns true' do expect(address).to eq(other) end end end describe '#hash' do let(:address) do described_class.new('127.0.0.1:27017') end it 'hashes on the host and port' do expect(address.hash).to eq([ '127.0.0.1', 27017 ].hash) end end describe '#initialize' do context 'when providing an ipv4 host' do context 'when a port is provided' do let(:address) do described_class.new('127.0.0.1:27017') end it 'sets the port' do expect(address.port).to eq(27017) end it 'sets the host' do expect(address.host).to eq('127.0.0.1') end end context 'when no port is provided' do let(:address) do described_class.new('127.0.0.1') end it 'sets the port to 27017' do expect(address.port).to eq(27017) end it 'sets the host' do expect(address.host).to eq('127.0.0.1') end end end context 'when providing an ipv6 host' do context 'when a port is provided' do let(:address) do described_class.new('[::1]:27017') end it 'sets the port' do expect(address.port).to eq(27017) end it 'sets the host' do expect(address.host).to eq('::1') end end context 'when no port is provided' do let(:address) do described_class.new('[::1]') end it 'sets the port to 27017' do expect(address.port).to eq(27017) end it 'sets the host' do expect(address.host).to eq('::1') end end end context 'when providing a DNS entry' do context 'when a port is provided' do let(:address) do described_class.new('localhost:27017') end it 'sets the port' do expect(address.port).to eq(27017) end it 'sets the host' do expect(address.host).to eq('localhost') end end context 'when a port is not provided' do let(:address) do described_class.new('localhost') end it 'sets the port to 27017' do expect(address.port).to eq(27017) end it 'sets the host' do expect(address.host).to eq('localhost') end end end context 'when providing a socket path' do let(:address) do described_class.new('/path/to/socket.sock') end it 'sets the port to nil' do expect(address.port).to be_nil end it 'sets the host' do expect(address.host).to 
eq('/path/to/socket.sock') end end end describe "#socket" do let(:address) do default_address end let(:host) do address.host end let(:addr_info) do family = (host == 'localhost') ? ::Socket::AF_INET : ::Socket::AF_UNSPEC ::Socket.getaddrinfo(host, nil, family, ::Socket::SOCK_STREAM) end let(:socket_address_or_host) do (host == 'localhost') ? addr_info.first[3] : host end context 'when providing a DNS entry that resolves to both IPv6 and IPv4' do # In JRuby 9.3.2.0 Socket::PF_INET6 is nil, causing IPv6 tests to fail. # https://github.com/jruby/jruby/issues/7069 # JRuby 9.2 works correctly, this test is skipped on all JRuby versions # because we intend to remove JRuby support altogether and therefore # adding logic to condition on JRuby versions does not make sense. fails_on_jruby let(:custom_hostname) do 'not_localhost' end let(:ip) do '127.0.0.1' end let(:address) do Mongo::Address.new("#{custom_hostname}:#{SpecConfig.instance.any_port}") end before do allow(::Socket).to receive(:getaddrinfo).and_return( [ ["AF_INET6", 0, '::2', '::2', ::Socket::AF_INET6, 1, 6], ["AF_INET", 0, custom_hostname, ip, ::Socket::AF_INET, 1, 6]] ) end it "attempts to use IPv6 and fallbacks to IPv4" do expect(address.socket(0.0).host).to eq(ip) end end context 'when creating a socket' do it 'uses the host, not the IP address' do expect(address.socket(0.0).host).to eq(socket_address_or_host) end let(:socket) do if SpecConfig.instance.ssl? address.socket(0.0, SpecConfig.instance.ssl_options).instance_variable_get(:@tcp_socket) else address.socket(0.0).instance_variable_get(:@socket) end end context 'keep-alive options' do fails_on_jruby if Socket.const_defined?(:TCP_KEEPINTVL) it 'sets the socket TCP_KEEPINTVL option' do expect(socket.getsockopt(Socket::IPPROTO_TCP, Socket::TCP_KEEPINTVL).int).to be <= 10 end end if Socket.const_defined?(:TCP_KEEPCNT) it 'sets the socket TCP_KEEPCNT option' do expect(socket.getsockopt(Socket::IPPROTO_TCP, Socket::TCP_KEEPCNT).int).to be <= 9 end end if Socket.const_defined?(:TCP_KEEPIDLE) it 'sets the socket TCP_KEEPIDLE option' do expect(socket.getsockopt(Socket::IPPROTO_TCP, Socket::TCP_KEEPIDLE).int).to be <= 120 end end if Socket.const_defined?(:TCP_USER_TIMEOUT) it 'sets the socket TCP_KEEPIDLE option' do expect(socket.getsockopt(Socket::IPPROTO_TCP, Socket::TCP_USER_TIMEOUT).int).to be <= 210 end end end end describe ':connect_timeout option' do clean_slate let(:address) { Mongo::Address.new('127.0.0.1') } it 'defaults to 10' do RSpec::Mocks.with_temporary_scope do resolved_address = double('address') # This test's expectation expect(resolved_address).to receive(:socket).with(0, {connect_timeout: 10}) expect(Mongo::Address::IPv4).to receive(:new).and_return(resolved_address) address.socket(0) end end end end describe '#to_s' do context 'address with ipv4 host only' do let(:address) { Mongo::Address.new('127.0.0.1') } it 'is host with port' do expect(address.to_s).to eql('127.0.0.1:27017') end end context 'address with ipv4 host and port' do let(:address) { Mongo::Address.new('127.0.0.1:27000') } it 'is host with port' do expect(address.to_s).to eql('127.0.0.1:27000') end end context 'address with ipv6 host only' do let(:address) { Mongo::Address.new('::1') } it 'is host with port' do expect(address.to_s).to eql('[::1]:27017') end end context 'address with ipv6 host and port' do let(:address) { Mongo::Address.new('[::1]:27000') } it 'is host with port' do expect(address.to_s).to eql('[::1]:27000') end end end end 
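# Note: #to_s re-adds the brackets for IPv6 hosts, so an address survives
# a round trip through its string form; a quick sketch (not one of the
# examples above):
#
#   addr = Mongo::Address.new('::1')
#   Mongo::Address.new(addr.to_s) == addr # => true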
mongo-ruby-driver-2.21.3/spec/mongo/auth/000077500000000000000000000000001505113246500202025ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/mongo/auth/aws/000077500000000000000000000000001505113246500207745ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/mongo/auth/aws/credential_cache_spec.rb000066400000000000000000000031401505113246500255660ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'lite_spec_helper' describe Mongo::Auth::Aws::CredentialsCache do let(:subject) do described_class.new end describe '#fetch' do context 'when credentials are not cached' do it 'yields to the block' do expect { |b| subject.fetch(&b) }.to yield_control end it 'sets the credentials' do credentials = double('credentials') subject.fetch { credentials } expect(subject.credentials).to eq(credentials) end end context 'when credentials are cached' do context 'when credentials are not expired' do let(:credentials) do double('credentials', expired?: false) end it 'does not yield to the block' do subject.credentials = credentials expect { |b| subject.fetch(&b) }.not_to yield_control end end end context 'when credentials are expired' do let(:credentials) do double('credentials', expired?: true) end it 'yields to the block' do subject.credentials = credentials expect { |b| subject.fetch(&b) }.to yield_control end it 'sets the credentials' do subject.credentials = credentials new_credentials = double('new credentials') subject.fetch { new_credentials } expect(subject.credentials).to eq(new_credentials) end end end describe '#clear' do it 'clears the credentials' do subject.credentials = double('credentials') subject.clear expect(subject.credentials).to be nil end end end mongo-ruby-driver-2.21.3/spec/mongo/auth/aws/credentials_retriever_spec.rb000066400000000000000000000050401505113246500267160ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'lite_spec_helper' describe Mongo::Auth::Aws::CredentialsRetriever do describe '#credentials' do context 'when credentials should be obtained from endpoints' do let(:cache) do Mongo::Auth::Aws::CredentialsCache.new end let(:subject) do described_class.new(credentials_cache: cache).tap do |retriever| allow(retriever).to receive(:credentials_from_environment).and_return(nil) end end context 'when cached credentials are not expired' do let(:credentials) do double('credentials', expired?: false) end before(:each) do cache.credentials = credentials end it 'returns the cached credentials' do expect(subject.credentials).to eq(credentials) end it 'does not obtain credentials from endpoints' do expect(subject).not_to receive(:obtain_credentials_from_endpoints) described_class.new(credentials_cache: cache).credentials end end shared_examples_for 'obtains credentials from endpoints' do context 'when obtained credentials are not expired' do let(:credentials) do double('credentials', expired?: false) end before(:each) do expect(subject) .to receive(:obtain_credentials_from_endpoints) .and_return(credentials) end it 'returns the obtained credentials' do expect(subject.credentials).not_to be_expired end it 'caches the obtained credentials' do subject.credentials expect(cache.credentials).to eq(credentials) end end context 'when cannot obtain credentials from endpoints' do before(:each) do expect(subject) .to receive(:obtain_credentials_from_endpoints) .and_return(nil) end it 'raises an error' do expect { subject.credentials }.to raise_error(Mongo::Auth::Aws::CredentialsNotFound) end end end context 
'when cached credentials expired' do before(:each) do cache.credentials = double('credentials', expired?: true) end it_behaves_like 'obtains credentials from endpoints' end context 'when no credentials cached' do before(:each) do cache.clear end it_behaves_like 'obtains credentials from endpoints' end end end end mongo-ruby-driver-2.21.3/spec/mongo/auth/aws/credentials_spec.rb000066400000000000000000000022221505113246500246260ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'lite_spec_helper' describe Mongo::Auth::Aws::Credentials do describe '#expired?' do context 'when expiration is nil' do let(:credentials) do described_class.new('access_key_id', 'secret_access_key', nil, nil) end it 'returns false' do expect(credentials.expired?).to be false end end context 'when expiration is not nil' do before do Timecop.freeze end after do Timecop.return end context 'when the expiration is more than five minutes away' do let(:credentials) do described_class.new('access_key_id', 'secret_access_key', nil, Time.now.utc + 400) end it 'returns false' do expect(credentials.expired?).to be false end end context 'when the expiration is less than five minutes away' do let(:credentials) do described_class.new('access_key_id', 'secret_access_key', nil, Time.now.utc + 200) end it 'returns true' do expect(credentials.expired?).to be true end end end end end mongo-ruby-driver-2.21.3/spec/mongo/auth/aws/request_region_spec.rb000066400000000000000000000024131505113246500253660ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'lite_spec_helper' AWS_REGION_TEST_CASES = { 'sts.amazonaws.com' => 'us-east-1', 'sts.us-west-2.amazonaws.com' => 'us-west-2', 'sts.us-west-2.amazonaws.com.ch' => 'us-west-2', 'example.com' => 'com', 'localhost' => 'us-east-1', 'sts..com' => Mongo::Error::InvalidServerAuthHost, '.amazonaws.com' => Mongo::Error::InvalidServerAuthHost, 'sts.amazonaws.' 
=> Mongo::Error::InvalidServerAuthHost, '' => Mongo::Error::InvalidServerAuthResponse, 'x' * 256 => Mongo::Error::InvalidServerAuthHost, } describe 'AWS auth region tests' do AWS_REGION_TEST_CASES.each do |host, expected_region| context "host '#{host}'" do let(:request) do Mongo::Auth::Aws::Request.new(access_key_id: 'access_key_id', secret_access_key: 'secret_access_key', session_token: 'session_token', host: host, server_nonce: 'server_nonce', ) end if expected_region.is_a?(String) it 'derives expected region' do request.region.should == expected_region end else it 'fails with an error' do lambda do request.region end.should raise_error(expected_region) end end end end end mongo-ruby-driver-2.21.3/spec/mongo/auth/aws/request_spec.rb000066400000000000000000000042641505113246500240310ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe Mongo::Auth::Aws::Request do describe "#formatted_time" do context "when time is provided and frozen" do let(:original_time) { Time.at(1592399523).freeze } let(:request) do described_class.new(access_key_id: 'access_key_id', secret_access_key: 'secret_access_key', session_token: 'session_token', host: 'host', server_nonce: 'server_nonce', time: original_time ) end it 'doesn\'t modify the time instance variable' do expect { request.formatted_time }.to_not raise_error end it 'returns the correct formatted time' do expect(request.formatted_time).to eq('20200617T131203Z') end end context "when time is not provided" do let(:request) do described_class.new(access_key_id: 'access_key_id', secret_access_key: 'secret_access_key', session_token: 'session_token', host: 'host', server_nonce: 'server_nonce' ) end it 'doesn\'t raise an error on formatted_time' do expect { request.formatted_time }.to_not raise_error end end end describe "#signature" do context "when time is provided and frozen" do let(:original_time) { Time.at(1592399523).freeze } let(:request) do described_class.new(access_key_id: 'access_key_id', secret_access_key: 'secret_access_key', session_token: 'session_token', host: 'host', server_nonce: 'server_nonce', time: original_time ) end it 'doesn\'t raise error on signature' do expect { request.signature }.to_not raise_error end end context "when time is not provided" do let(:request) do described_class.new(access_key_id: 'access_key_id', secret_access_key: 'secret_access_key', session_token: 'session_token', host: 'host', server_nonce: 'server_nonce' ) end it 'doesn\'t raise error on signature' do expect { request.signature }.to_not raise_error end end end end mongo-ruby-driver-2.21.3/spec/mongo/auth/cr_spec.rb000066400000000000000000000021721505113246500221470ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' require 'support/shared/auth_context' describe Mongo::Auth::CR do let(:server) do authorized_client.cluster.next_primary end include_context 'auth unit tests' describe '#login' do before do connection.connect! 
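# The connection handshake must complete before the login attempts
# below can send their authenticate commands.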
end context 'when the user is not authorized' do max_server_fcv "4.0" let(:user) do Mongo::Auth::User.new( database: 'driver', user: 'notauser', password: 'password' ) end let(:cr) do described_class.new(user, connection) end let(:login) do cr.login.documents[0] end it 'raises an exception' do expect { cr.login }.to raise_error(Mongo::Auth::Unauthorized) end context 'when compression is used' do require_compression it 'does not compress the message' do expect(Mongo::Protocol::Compressed).not_to receive(:new) expect { cr.login }.to raise_error(Mongo::Auth::Unauthorized) end end end end end mongo-ruby-driver-2.21.3/spec/mongo/auth/gssapi/000077500000000000000000000000001505113246500214705ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/mongo/auth/gssapi/conversation_spec.rb000066400000000000000000000053511505113246500255450ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe Mongo::Auth::Gssapi::Conversation do require_mongo_kerberos let(:user) do Mongo::Auth::User.new(user: 'test') end let(:conversation) do described_class.new(user, 'test.example.com') end let(:authenticator) do double('authenticator') end let(:connection) do double('connection') end before do expect(Mongo::Auth::Gssapi::Authenticator).to receive(:new). with(user, 'test.example.com'). and_return(authenticator) end context 'when the user has a realm', if: RUBY_PLATFORM == 'java' do let(:user) do Mongo::Auth::User.new(user: 'user1@MYREALM.ME') end it 'includes the realm in the username as it was provided' do expect(conversation.user.name).to eq(user.name) end end describe '#start' do let(:query) do conversation.start(connection) end let(:selector) do query.selector end before do expect(authenticator).to receive(:initialize_challenge).and_return('test') end it 'sets the sasl start flag' do expect(selector[:saslStart]).to eq(1) end it 'sets the auto authorize flag' do expect(selector[:autoAuthorize]).to eq(1) end it 'sets the mechanism' do expect(selector[:mechanism]).to eq('GSSAPI') end it 'sets the payload', unless: BSON::Environment.jruby? do expect(selector[:payload]).to start_with('test') end it 'sets the payload', if: BSON::Environment.jruby? do expect(selector[:payload].data).to start_with('test') end end describe '#finalize' do let(:continue_token) do BSON::Environment.jruby? ? BSON::Binary.new('testing') : 'testing' end context 'when the conversation is a success' do let(:reply_document) do BSON::Document.new( 'conversationId' => 1, 'done' => false, 'payload' => continue_token, 'ok' => 1.0, ) end let(:query) do conversation.finalize(reply_document, connection) end let(:selector) do query.selector end before do expect(authenticator).to receive(:evaluate_challenge). with('testing').and_return(continue_token) end it 'sets the conversation id' do expect(selector[:conversationId]).to eq(1) end it 'sets the payload', unless: BSON::Environment.jruby? do expect(selector[:payload]).to eq(continue_token) end it 'sets the payload', if: BSON::Environment.jruby? 
do expect(selector[:payload].data).to eq(continue_token) end it 'sets the continue flag' do expect(selector[:saslContinue]).to eq(1) end end end end mongo-ruby-driver-2.21.3/spec/mongo/auth/invalid_mechanism_spec.rb000066400000000000000000000006551505113246500252210ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'lite_spec_helper' describe Mongo::Auth::InvalidMechanism do describe 'message' do let(:exception) { described_class.new(:foo) } it 'includes all built in mechanisms' do expect(exception.message).to eq(':foo is invalid, please use one of the following mechanisms: :aws, :gssapi, :mongodb_cr, :mongodb_x509, :plain, :scram, :scram256') end end end mongo-ruby-driver-2.21.3/spec/mongo/auth/ldap/000077500000000000000000000000001505113246500211225ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/mongo/auth/ldap/conversation_spec.rb000066400000000000000000000015301505113246500251720ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe Mongo::Auth::LDAP::Conversation do let(:user) do Mongo::Auth::User.new( database: '$external', user: 'user', password: 'pencil' ) end let(:conversation) do described_class.new(user, double('connection')) end describe '#start' do let(:query) do conversation.start(nil) end let(:selector) do query.selector end it 'sets the sasl start flag' do expect(selector[:saslStart]).to eq(1) end it 'sets the auto authorize flag' do expect(selector[:autoAuthorize]).to eq(1) end it 'sets the mechanism' do expect(selector[:mechanism]).to eq('PLAIN') end it 'sets the payload' do expect(selector[:payload].data).to eq("\x00user\x00pencil") end end end mongo-ruby-driver-2.21.3/spec/mongo/auth/ldap_spec.rb000066400000000000000000000015021505113246500224570ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' require 'support/shared/auth_context' describe Mongo::Auth::LDAP do let(:server) do authorized_client.cluster.next_primary end include_context 'auth unit tests' let(:user) do Mongo::Auth::User.new( database: '$external', user: 'driver', password: 'password', ) end describe '#login' do before do connection.connect! end context 'when the user is not authorized for the database' do let(:cr) do described_class.new(user, connection) end let(:login) do cr.login.documents[0] end it 'attempts to log the user into the connection' do expect { cr.login }.to raise_error(Mongo::Auth::Unauthorized) end end end end mongo-ruby-driver-2.21.3/spec/mongo/auth/scram/000077500000000000000000000000001505113246500213075ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/mongo/auth/scram/conversation_spec.rb000066400000000000000000000122361505113246500253640ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'lite_spec_helper' require 'support/shared/scram_conversation' describe Mongo::Auth::Scram::Conversation do # Test uses global assertions clean_slate_for_all_if_possible include_context 'scram conversation context' let(:conversation) do described_class.new(user, double('connection')) end it_behaves_like 'scram conversation' let(:user) do Mongo::Auth::User.new( database: Mongo::Database::ADMIN, user: 'user', password: 'pencil', # We specify SCRAM-SHA-1 so that we don't accidentally use # SCRAM-SHA-256 on newer server versions. 
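# (Without an explicit mechanism the driver would negotiate one from
# the server's handshake response instead.)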
auth_mech: :scram, ) end let(:mechanism) do :scram end describe '#start' do let(:msg) do conversation.start(connection) end before do expect(SecureRandom).to receive(:base64).once.and_return('NDA2NzU3MDY3MDYwMTgy') end let(:command) do msg.payload['command'] end it 'sets the sasl start flag' do expect(command[:saslStart]).to eq(1) end it 'sets the auto authorize flag' do expect(command[:autoAuthorize]).to eq(1) end it 'sets the mechanism' do expect(command[:mechanism]).to eq('SCRAM-SHA-1') end it 'sets the command' do expect(command[:payload].data).to eq('n,,n=user,r=NDA2NzU3MDY3MDYwMTgy') end end describe '#continue' do include_context 'scram continue and finalize replies' before do expect(SecureRandom).to receive(:base64).once.and_return('NDA2NzU3MDY3MDYwMTgy') end context 'when the server rnonce starts with the nonce' do let(:continue_payload) do BSON::Binary.new( 'r=NDA2NzU3MDY3MDYwMTgyt7/+IWaw1HaZZ5NmPJUTWapLpH2Gg+d8,s=AVvQXzAbxweH2RYDICaplw==,i=10000' ) end let(:msg) do conversation.continue(continue_document, connection) end let(:command) do msg.payload['command'] end it 'sets the conversation id' do expect(command[:conversationId]).to eq(1) end it 'sets the command' do expect(command[:payload].data).to eq( 'c=biws,r=NDA2NzU3MDY3MDYwMTgyt7/+IWaw1HaZZ5NmPJUTWapLpH2Gg+d8,p=qYUYNy6SQ9Jucq9rFA9nVgXQdbM=' ) end it 'sets the continue flag' do expect(command[:saslContinue]).to eq(1) end end context 'when the server nonce does not start with the nonce' do let(:continue_payload) do BSON::Binary.new( 'r=NDA2NzU4MDY3MDYwMTgyt7/+IWaw1HaZZ5NmPJUTWapLpH2Gg+d8,s=AVvQXzAbxweH2RYDICaplw==,i=10000' ) end it 'raises an error' do expect { conversation.continue(continue_document, connection) }.to raise_error(Mongo::Error::InvalidNonce) end end end describe '#finalize' do include_context 'scram continue and finalize replies' let(:continue_payload) do BSON::Binary.new( 'r=NDA2NzU3MDY3MDYwMTgyt7/+IWaw1HaZZ5NmPJUTWapLpH2Gg+d8,s=AVvQXzAbxweH2RYDICaplw==,i=10000' ) end before do expect(SecureRandom).to receive(:base64).once.and_return('NDA2NzU3MDY3MDYwMTgy') end context 'when the verifier matches the server signature' do let(:finalize_payload) do BSON::Binary.new('v=gwo9E8+uifshm7ixj441GvIfuUY=') end let(:msg) do conversation.continue(continue_document, connection) conversation.process_continue_response(finalize_document) conversation.finalize(connection) end let(:command) do msg.payload['command'] end it 'sets the conversation id' do expect(command[:conversationId]).to eq(1) end it 'sets the empty command' do expect(command[:payload].data).to eq('') end it 'sets the continue flag' do expect(command[:saslContinue]).to eq(1) end end context 'when the verifier does not match the server signature' do let(:finalize_payload) do BSON::Binary.new('v=LQ+8yhQeVL2a3Dh+TDJ7xHz4Srk=') end it 'raises an error' do expect { conversation.continue(continue_document, connection) conversation.process_continue_response(finalize_document) conversation.finalize(connection) }.to raise_error(Mongo::Error::InvalidSignature) end end context 'when server signature is empty' do let(:finalize_payload) do BSON::Binary.new('v=') end it 'raises an error' do expect { conversation.continue(continue_document, connection) conversation.process_continue_response(finalize_document) conversation.finalize(connection) }.to raise_error(Mongo::Error::InvalidSignature) end end context 'when server signature is not provided' do let(:finalize_payload) do BSON::Binary.new('ok=absolutely') end it 'succeeds but does not mark conversation server verified' do 
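# A reply with no v= field at all means there is nothing to verify;
# unlike the empty or mismatched signatures above, this does not raise.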
conversation.continue(continue_document, connection) conversation.process_continue_response(finalize_document) conversation.finalize(connection) conversation.server_verified?.should be false end end end end mongo-ruby-driver-2.21.3/spec/mongo/auth/scram256/000077500000000000000000000000001505113246500215445ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/mongo/auth/scram256/conversation_spec.rb000066400000000000000000000103671505113246500256240ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'lite_spec_helper' require 'support/shared/scram_conversation' describe Mongo::Auth::Scram256::Conversation do # Test uses global assertions clean_slate_for_all_if_possible include_context 'scram conversation context' let(:conversation) do described_class.new(user, double('connection')) end it_behaves_like 'scram conversation' let(:user) do Mongo::Auth::User.new( database: Mongo::Database::ADMIN, user: 'user', password: 'pencil', auth_mech: :scram256, ) end let(:mechanism) do :scram256 end describe '#start' do let(:msg) do conversation.start(connection) end before do expect(SecureRandom).to receive(:base64).once.and_return('rOprNGfwEbeRWgbNEkqO') end let(:command) do msg.payload['command'] end it 'sets the sasl start flag' do expect(command[:saslStart]).to eq(1) end it 'sets the auto authorize flag' do expect(command[:autoAuthorize]).to eq(1) end it 'sets the mechanism' do expect(command[:mechanism]).to eq('SCRAM-SHA-256') end it 'sets the payload' do expect(command[:payload].data).to eq('n,,n=user,r=rOprNGfwEbeRWgbNEkqO') end end describe '#continue' do include_context 'scram continue and finalize replies' before do expect(SecureRandom).to receive(:base64).once.and_return('rOprNGfwEbeRWgbNEkqO') end context 'when the server rnonce starts with the nonce' do let(:continue_payload) do BSON::Binary.new( 'r=rOprNGfwEbeRWgbNEkqO%hvYDpWUa2RaTCAfuxFIlj)hNlF$k0,s=W22ZaJ0SNY7soEsUEjb6gQ==,i=4096' ) end let(:msg) do conversation.continue(continue_document, connection) end let(:command) do msg.payload['command'] end it 'sets the conversation id' do expect(command[:conversationId]).to eq(1) end it 'sets the payload' do expect(command[:payload].data).to eq( 'c=biws,r=rOprNGfwEbeRWgbNEkqO%hvYDpWUa2RaTCAfuxFIlj)hNlF$k0,p=dHzbZapWIk4jUhN+Ute9ytag9zjfMHgsqmmiz7AndVQ=' ) end it 'sets the continue flag' do expect(command[:saslContinue]).to eq(1) end end context 'when the server nonce does not start with the nonce' do let(:continue_payload) do BSON::Binary.new( 'r=sOprNGfwEbeRWgbNEkqO%hvYDpWUa2RaTCAfuxFIlj)hNlF$k0,s=W22ZaJ0SNY7soEsUEjb6gQ==,i=4096' ) end it 'raises an error' do expect { conversation.continue(continue_document, connection) }.to raise_error(Mongo::Error::InvalidNonce) end end end describe '#finalize' do include_context 'scram continue and finalize replies' let(:continue_payload) do BSON::Binary.new( 'r=rOprNGfwEbeRWgbNEkqO%hvYDpWUa2RaTCAfuxFIlj)hNlF$k0,s=W22ZaJ0SNY7soEsUEjb6gQ==,i=4096' ) end before do expect(SecureRandom).to receive(:base64).once.and_return('rOprNGfwEbeRWgbNEkqO') end context 'when the verifier matches the server signature' do let(:finalize_payload) do BSON::Binary.new(' v=6rriTRBi23WpRR/wtup+mMhUZUn/dB5nLTJRsjl95G4=') end let(:msg) do conversation.continue(continue_document, connection) conversation.process_continue_response(finalize_document) conversation.finalize(connection) end let(:command) do msg.payload['command'] end it 'sets the conversation id' do expect(command[:conversationId]).to eq(1) end it 'sets the empty payload' do 
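# The final saslContinue round trips no client data; the server only
# needs the conversation id.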
expect(command[:payload].data).to eq('') end it 'sets the continue flag' do expect(command[:saslContinue]).to eq(1) end end context 'when the verifier does not match the server signature' do let(:finalize_payload) do BSON::Binary.new('v=7rriTRBi23WpRR/wtup+mMhUZUn/dB5nLTJRsjl95G4=') end it 'raises an error' do expect do conversation.continue(continue_document, connection) conversation.process_continue_response(finalize_document) conversation.finalize(connection) end.to raise_error(Mongo::Error::InvalidSignature) end end end end mongo-ruby-driver-2.21.3/spec/mongo/auth/scram_negotiation_spec.rb000066400000000000000000000316421505113246500252540ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' # max_pool_size is set to 1 to force a single connection being used for # all operations in a client. describe 'SCRAM-SHA auth mechanism negotiation' do min_server_fcv '4.0' require_no_external_user require_topology :single, :replica_set, :sharded # Test uses global assertions clean_slate let(:create_user!) do root_authorized_admin_client.tap do |client| users = client.database.users if users.info(user.name).any? users.remove(user.name) end client.database.command( createUser: user.name, pwd: password, roles: ['root'], mechanisms: server_user_auth_mechanisms, ) client.close end end let(:password) do user.password end let(:result) do client.database['admin'].find(nil, limit: 1).first end context 'when the configuration is specified in code' do let(:client) do opts = { database: 'admin', user: user.name, password: password }.tap do |o| o[:auth_mech] = auth_mech if auth_mech end new_local_client( SpecConfig.instance.addresses, SpecConfig.instance.test_options.merge(opts).update(max_pool_size: 1) ) end context 'when the user exists' do context 'when the user only can use SCRAM-SHA-1 to authenticate' do let(:server_user_auth_mechanisms) do ['SCRAM-SHA-1'] end let(:user) do Mongo::Auth::User.new( user: 'sha1', password: 'sha1', auth_mech: auth_mech ) end context 'when no auth mechanism is specified' do let(:auth_mech) do nil end it 'authenticates successfully' do create_user! expect { result }.not_to raise_error end end context 'when SCRAM-SHA-1 is specified as the auth mechanism' do let(:auth_mech) do :scram end it 'authenticates successfully' do create_user! expect { result }.not_to raise_error end end context 'when SCRAM-SHA-256 is specified as the auth mechanism' do let(:auth_mech) do :scram256 end it 'fails with a Mongo::Auth::Unauthorized error' do create_user! expect { result }.to raise_error(Mongo::Auth::Unauthorized) end end end context 'when the user only can use SCRAM-SHA-256 to authenticate' do let(:server_user_auth_mechanisms) do ['SCRAM-SHA-256'] end let(:user) do Mongo::Auth::User.new( user: 'sha256', password: 'sha256', auth_mech: auth_mech ) end context 'when no auth mechanism is specified' do let(:auth_mech) do nil end it 'authenticates successfully' do create_user! expect { client.database['admin'].find(options = { limit: 1 }).first }.not_to raise_error end end context 'when SCRAM-SHA-1 is specified as the auth mechanism' do let(:auth_mech) do :scram end it 'fails with a Mongo::Auth::Unauthorized error' do create_user! expect { result }.to raise_error(Mongo::Auth::Unauthorized) end end context 'when SCRAM-SHA-256 is specified as the auth mechanism' do let(:auth_mech) do :scram256 end it 'authenticates successfully' do create_user! 
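# This user supports only SCRAM-SHA-256 and that is also the mechanism
# requested, so authentication is expected to succeed.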
            expect { result }.not_to raise_error
          end
        end
      end

      context 'when the user only can use either SCRAM-SHA-1 or SCRAM-SHA-256 to authenticate' do
        let(:server_user_auth_mechanisms) { ['SCRAM-SHA-1', 'SCRAM-SHA-256'] }

        let(:user) do
          Mongo::Auth::User.new(
            user: 'both',
            password: 'both',
            auth_mech: auth_mech
          )
        end

        context 'when no auth mechanism is specified' do
          let(:auth_mech) { nil }

          it 'authenticates successfully' do
            create_user!

            expect { result }.not_to raise_error
          end
        end

        context 'when SCRAM-SHA-1 is specified as the auth mechanism' do
          let(:auth_mech) { :scram }

          before do
            create_user!
          end

          it 'authenticates successfully' do
            RSpec::Mocks.with_temporary_scope do
              mechanism = nil
              # With speculative auth, Auth is instantiated twice.
              expect(Mongo::Auth).to receive(:get).at_least(:once).at_most(:twice).and_wrap_original do |m, user, connection|
                # copy mechanism here rather than whole user
                # in case something mutates mechanism later
                mechanism = user.mechanism
                m.call(user, connection)
              end

              expect do
                result
              end.not_to raise_error
              expect(mechanism).to eq(:scram)
            end
          end
        end

        context 'when SCRAM-SHA-256 is specified as the auth mechanism' do
          let(:auth_mech) { :scram256 }

          before do
            create_user!
          end

          it 'authenticates successfully with SCRAM-SHA-256' do
            RSpec::Mocks.with_temporary_scope do
              mechanism = nil
              # With speculative auth, Auth is instantiated twice.
              expect(Mongo::Auth).to receive(:get).at_least(:once).at_most(:twice).and_wrap_original do |m, user, connection|
                # copy mechanism here rather than whole user
                # in case something mutates mechanism later
                mechanism = user.mechanism
                m.call(user, connection)
              end

              expect { result }.not_to raise_error
              expect(mechanism).to eq(:scram256)
            end
          end
        end
      end
    end

    context 'when the user does not exist' do
      let(:auth_mech) { nil }

      let(:user) do
        Mongo::Auth::User.new(
          user: 'nonexistent',
          password: 'nonexistent',
        )
      end

      it 'fails with a Mongo::Auth::Unauthorized error' do
        expect { result }.to raise_error(Mongo::Auth::Unauthorized)
      end
    end

    context 'when the username and password provided require saslprep' do
      let(:auth_mech) { nil }

      let(:server_user_auth_mechanisms) { ['SCRAM-SHA-256'] }

      context 'when the username and password are ASCII' do
        let(:user) do
          Mongo::Auth::User.new(
            user: 'IX',
            password: 'IX'
          )
        end

        let(:password) { "I\u00ADX" }

        it 'authenticates successfully after saslprepping password' do
          create_user!

          expect { result }.not_to raise_error
        end
      end

      context 'when the username and password are non-ASCII' do
        let(:user) do
          Mongo::Auth::User.new(
            user: "\u2168",
            password: "\u2163"
          )
        end

        let(:password) { "I\u00ADV" }

        it 'authenticates successfully after saslprepping password' do
          create_user!

          expect { result }.not_to raise_error
        end
      end
    end
  end

  context 'when the configuration is specified in the URI' do
    let(:uri) do
      Utils.create_mongodb_uri(
        SpecConfig.instance.addresses,
        username: user.name,
        password: password,
        uri_options: SpecConfig.instance.uri_options.merge(
          auth_mech: auth_mech,
        ),
      )
    end

    let(:client) do
      new_local_client(uri, SpecConfig.instance.monitoring_options.merge(max_pool_size: 1))
    end

    context 'when the user exists' do
      context 'when the user only can use SCRAM-SHA-1 to authenticate' do
        let(:server_user_auth_mechanisms) { ['SCRAM-SHA-1'] }

        let(:user) do
          Mongo::Auth::User.new(
            user: 'sha1',
            password: 'sha1',
            auth_mech: auth_mech,
          )
        end

        context 'when no auth mechanism is specified' do
          let(:auth_mech) { nil }

          it 'authenticates successfully' do
            create_user!
            expect { result }.not_to raise_error
          end
        end

        context 'when SCRAM-SHA-1 is specified as the auth mechanism' do
          let(:auth_mech) { :scram }

          it 'authenticates successfully' do
            create_user!

            expect { result }.not_to raise_error
          end
        end

        context 'when SCRAM-SHA-256 is specified as the auth mechanism' do
          let(:auth_mech) { :scram256 }

          it 'fails with a Mongo::Auth::Unauthorized error' do
            create_user!

            expect { result }.to raise_error(Mongo::Auth::Unauthorized)
          end
        end
      end

      context 'when the user only can use SCRAM-SHA-256 to authenticate' do
        let(:server_user_auth_mechanisms) { ['SCRAM-SHA-256'] }

        let(:user) do
          Mongo::Auth::User.new(
            user: 'sha256',
            password: 'sha256',
            auth_mech: auth_mech,
          )
        end

        context 'when no auth mechanism is specified' do
          let(:auth_mech) { nil }

          it 'authenticates successfully' do
            create_user!

            expect { client.database['admin'].find(nil, limit: 1).first }.not_to raise_error
          end
        end

        context 'when SCRAM-SHA-1 is specified as the auth mechanism' do
          let(:auth_mech) { :scram }

          it 'fails with a Mongo::Auth::Unauthorized error' do
            create_user!

            expect { result }.to raise_error(Mongo::Auth::Unauthorized)
          end
        end

        context 'when SCRAM-SHA-256 is specified as the auth mechanism' do
          let(:auth_mech) { :scram256 }

          it 'authenticates successfully' do
            create_user!

            expect { result }.not_to raise_error
          end
        end
      end

      context 'when the user only can use either SCRAM-SHA-1 or SCRAM-SHA-256 to authenticate' do
        let(:server_user_auth_mechanisms) { ['SCRAM-SHA-1', 'SCRAM-SHA-256'] }

        let(:user) do
          Mongo::Auth::User.new(
            user: 'both',
            password: 'both',
            auth_mech: auth_mech,
          )
        end

        context 'when no auth mechanism is specified' do
          let(:auth_mech) { nil }

          it 'authenticates successfully' do
            create_user!

            expect { result }.not_to raise_error
          end
        end

        context 'when SCRAM-SHA-1 is specified as the auth mechanism' do
          let(:auth_mech) { :scram }

          before do
            create_user!
            expect(user.mechanism).to eq(:scram)
          end

          it 'authenticates successfully' do
            RSpec::Mocks.with_temporary_scope do
              mechanism = nil
              # With speculative auth, Auth is instantiated twice.
              expect(Mongo::Auth).to receive(:get).at_least(:once).at_most(:twice).and_wrap_original do |m, user, connection|
                # copy mechanism here rather than whole user
                # in case something mutates mechanism later
                mechanism = user.mechanism
                m.call(user, connection)
              end

              expect { result }.not_to raise_error
              expect(mechanism).to eq(:scram)
            end
          end
        end

        context 'when SCRAM-SHA-256 is specified as the auth mechanism' do
          let(:auth_mech) { :scram256 }

          before do
            create_user!
          end

          it 'authenticates successfully with SCRAM-SHA-256' do
            RSpec::Mocks.with_temporary_scope do
              mechanism = nil
              # With speculative auth, Auth is instantiated twice.
              expect(Mongo::Auth).to receive(:get).at_least(:once).at_most(:twice).and_wrap_original do |m, user, connection|
                # copy mechanism here rather than whole user
                # in case something mutates mechanism later
                mechanism = user.mechanism
                m.call(user, connection)
              end

              expect { result }.not_to raise_error
              expect(mechanism).to eq(:scram256)
            end
          end
        end
      end
    end
  end
end

mongo-ruby-driver-2.21.3/spec/mongo/auth/scram_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'
require 'support/shared/auth_context'

describe Mongo::Auth::Scram do
  require_no_external_user

  let(:server) { authorized_client.cluster.next_primary }

  include_context 'auth unit tests'

  let(:cache_mod) { Mongo::Auth::CredentialCache }

  shared_examples_for 'caches scram credentials' do |cache_key|
    it 'caches scram credentials' do
      cache_mod.clear
      expect(cache_mod.store).to be_empty

      expect(login['ok']).to eq(1)
      expect(cache_mod.store).not_to be_empty
      client_key_entry = cache_mod.store.keys.detect do |key|
        key.include?(test_user.password) && key.include?(cache_key)
      end
      expect(client_key_entry).not_to be nil
    end
  end

  shared_examples_for 'works correctly' do
    before do
      connection.connect!
    end

    describe '#login' do
      context 'when the user is not authorized' do
        let(:user) do
          Mongo::Auth::User.new(
            database: 'driver',
            user: 'notauser',
            password: 'password',
            auth_mech: auth_mech,
          )
        end

        let(:authenticator) { described_class.new(user, connection) }

        it 'raises an exception' do
          expect do
            authenticator.login
          end.to raise_error(Mongo::Auth::Unauthorized)
        end

        context 'when compression is used' do
          require_compression
          min_server_fcv '3.6'

          it 'does not compress the message' do
            expect(Mongo::Protocol::Compressed).not_to receive(:new)
            expect { authenticator.login }.to raise_error(Mongo::Auth::Unauthorized)
          end
        end
      end

      context 'when the user is authorized for the database' do
        let(:authenticator) { described_class.new(test_user, connection) }

        let(:login) { authenticator.login }

        it 'logs the user into the connection' do
          expect(login['ok']).to eq(1)
        end

        it_behaves_like 'caches scram credentials', :salted_password
        it_behaves_like 'caches scram credentials', :client_key
        it_behaves_like 'caches scram credentials', :server_key

        context 'if conversation has not verified server signature' do
          it 'raises an exception' do
            expect_any_instance_of(Mongo::Auth::ScramConversationBase).to receive(:server_verified?).and_return(false)

            lambda do
              login
            end.should raise_error(Mongo::Error::MissingScramServerSignature)
          end
        end
      end
    end
  end

  context 'when SCRAM-SHA-1 is used' do
    min_server_fcv '3.0'

    let(:auth_mech) { :scram }

    it_behaves_like 'works correctly'
  end

  context 'when SCRAM-SHA-256 is used' do
    min_server_fcv '4.0'

    let(:auth_mech) { :scram256 }

    it_behaves_like 'works correctly'
  end
end

mongo-ruby-driver-2.21.3/spec/mongo/auth/stringprep/profiles/sasl_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe Mongo::Auth::StringPrep::Profiles::SASL do
  let(:prepared_data) do
    Mongo::Auth::StringPrep.prepare(
      data,
      mappings,
      prohibited,
      options
    )
  end
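  # Editorial note: MAPPINGS and PROHIBITED implement the SASLprep profile
  # of stringprep (RFC 4013), which the examples below exercise: soft
  # hyphens are mapped to nothing, non-ASCII spaces are mapped to an ASCII
  # space, the result is NFKC-normalized, and prohibited or bidi-invalid
  # output raises an error.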
  let(:mappings) { Mongo::Auth::StringPrep::Profiles::SASL::MAPPINGS }

  let(:prohibited) { Mongo::Auth::StringPrep::Profiles::SASL::PROHIBITED }

  let(:options) do
    { normalize: true, bidi: true }
  end

  describe 'StringPrep#prepare' do
    context 'when there is unnecessary punctuation' do
      let(:data) { "I\u00ADX" }

      it 'removes the punctuation' do
        expect(prepared_data).to eq('IX')
      end
    end

    context 'when there are non-ASCII spaces' do
      let(:data) { "I\u2000X" }

      it 'replaces them with ASCII spaces' do
        expect(prepared_data).to eq('I X')
      end
    end

    context 'when the input is ASCII' do
      let(:data) { 'user' }

      it 'returns the same string' do
        expect(prepared_data).to eq('user')
      end
    end

    context 'when the data contains uppercase characters' do
      let(:data) { 'USER' }

      it 'preserves case' do
        expect(prepared_data).to eq('USER')
      end
    end

    context 'when the data contains single-character codes' do
      let(:data) { "\u00AA" }

      it 'normalizes the codes' do
        expect(prepared_data).to eq('a')
      end
    end

    context 'when the data contains multi-character codes' do
      let(:data) { "\u2168" }

      it 'normalizes the codes' do
        expect(prepared_data).to eq('IX')
      end
    end

    context 'when the data contains prohibited input' do
      let(:data) { "\u0007" }

      it 'raises an error' do
        expect { prepared_data }.to raise_error(Mongo::Error::FailedStringPrepValidation)
      end
    end

    context 'when the data contains invalid bidi input' do
      let(:data) { "\u0627\u0031" }

      it 'raises an error' do
        expect { prepared_data }.to raise_error(Mongo::Error::FailedStringPrepValidation)
      end
    end
  end
end

mongo-ruby-driver-2.21.3/spec/mongo/auth/stringprep_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe Mongo::Auth::StringPrep do
  include Mongo::Auth::StringPrep

  describe '#prepare' do
    let(:prepared_data) do
      prepare(data, mappings, prohibited, options)
    end

    context 'with no options' do
      let(:mappings) { [] }
      let(:prohibited) { [] }
      let(:options) { {} }

      context 'when the data has invalid bidi' do
        let(:data) { "\u0627\u0031" }

        it 'does not raise an error' do
          expect(prepared_data).to eq("\u0627\u0031")
        end
      end

      context 'when the data has unicode codes' do
        let(:data) { "ua\u030Aer" }

        it 'does not normalize the data' do
          expect(prepared_data).to eq("ua\u030Aer")
        end
      end
    end

    context 'with options specified' do
      let(:mappings) do
        [Mongo::Auth::StringPrep::Tables::B1, Mongo::Auth::StringPrep::Tables::B2]
      end

      let(:prohibited) do
        [
          Mongo::Auth::StringPrep::Tables::C1_1,
          Mongo::Auth::StringPrep::Tables::C1_2,
          Mongo::Auth::StringPrep::Tables::C6,
        ]
      end

      let(:options) do
        {
          normalize: true,
          bidi: true,
        }
      end

      context 'when the input is empty' do
        let(:data) { '' }

        it 'returns the empty string' do
          expect(prepared_data).to eq('')
        end
      end

      context 'when the input is ASCII' do
        let(:data) { 'user' }

        it 'returns the same string on ASCII input' do
          expect(prepared_data).to eq('user')
        end
      end

      context 'when the input contains zero-width spaces' do
        let(:data) { "u\u200Ber" }

        it 'removes the zero-width spaces' do
          expect(prepared_data).to eq('uer')
        end
      end

      context 'when the input contains non-ASCII characters' do
        let(:data) { "u\u00DFer" }

        it 'maps the non-ASCII characters to ASCII' do
          expect(prepared_data).to eq('usser')
        end
      end

      context 'when the input contains unicode codes' do
        let(:data) { "ua\u030Aer" }

        it 'unicode normalizes the input' do
          expect(prepared_data).to eq("u\u00e5er")
        end
      end
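      # Editorial note: the remaining examples cover the two rejection paths
      # defined by stringprep (RFC 3454): prohibited output (section 5;
      # U+FFFD below is the Unicode replacement character) and the
      # bidirectional-text rules (section 6), which constrain how
      # right-to-left and left-to-right characters may be mixed.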
      context 'when the input contains prohibited characters' do
        let(:data) { "u\uFFFDer" }

        it 'raises an error' do
          expect { prepared_data }.to raise_error(Mongo::Error::FailedStringPrepValidation)
        end
      end

      context 'when the data is proper bidi' do
        let(:data) { "\u0627\u0031\u0628" }

        it 'does not raise an error' do
          expect(prepared_data).to eq("\u0627\u0031\u0628")
        end
      end

      context 'when bidi input contains prohibited bidi characters' do
        let(:data) { "\u0627\u0589\u0628" }

        it 'raises an error' do
          expect { prepared_data }.to raise_error(Mongo::Error::FailedStringPrepValidation)
        end
      end

      context 'when bidi input has an invalid first bidi character' do
        let(:data) { "\u0031\u0627" }

        it 'raises an error' do
          expect { prepared_data }.to raise_error(Mongo::Error::FailedStringPrepValidation)
        end
      end

      context 'when bidi input has an invalid last bidi character' do
        let(:data) { "\u0627\u0031" }

        it 'raises an error' do
          expect { prepared_data }.to raise_error(Mongo::Error::FailedStringPrepValidation)
        end
      end

      context 'when bidi input has a bad character' do
        let(:data) { "\u206D" }

        it 'raises an error' do
          expect { prepared_data }.to raise_error(Mongo::Error::FailedStringPrepValidation)
        end
      end
    end
  end
end

mongo-ruby-driver-2.21.3/spec/mongo/auth/user/view_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe Mongo::Auth::User::View do
  let(:database) { root_authorized_client.database }

  let(:view) { described_class.new(database) }

  before do
    # Separate view instance to not interfere with test assertions
    view = described_class.new(root_authorized_client.database)
    begin
      view.remove('durran')
    rescue Mongo::Error::OperationFailure
    end
  end

  shared_context 'testing write concern' do
    let(:subscriber) { Mrss::EventSubscriber.new }

    let(:client) do
      root_authorized_client.tap do |client|
        client.subscribe(Mongo::Monitoring::COMMAND, subscriber)
      end
    end

    let(:view) { described_class.new(client.database) }

    before do
      allow_any_instance_of(Mongo::Monitoring::Event::CommandStarted).to receive(:redacted) do |instance, command_name, document|
        document
      end
    end
  end

  shared_examples_for 'forwards write concern to server' do
    # w:2 requires more than one node in the deployment
    require_topology :replica_set

    it 'forwards write concern to server' do
      response

      expect(event.command['writeConcern']).to eq('w' => 2)
    end
  end

  describe '#create' do
    context 'when password is not provided' do
      let(:database) { root_authorized_client.use('$external').database }

      let(:username) { 'passwordless-user' }

      let(:response) do
        view.create(
          username,
          # https://stackoverflow.com/questions/55939832/mongodb-external-database-cannot-create-new-user-with-user-defined-role
          roles: [{role: 'read', db: 'admin'}],
        )
      end

      before do
        begin
          view.remove(username)
        rescue Mongo::Error::OperationFailure
          # can be user not found, ignore
        end
      end

      it 'creates the user' do
        view.info(username).should == []

        lambda do
          response
        end.should_not raise_error

        view.info(username).first['user'].should == username
      end
    end

    context 'when a session is not used' do
      let!(:response) do
        view.create(
          'durran',
          {
            password: 'password',
            roles: [Mongo::Auth::Roles::READ_WRITE],
          }
        )
      end

      context 'when user creation was successful' do
        it 'saves the user in the database' do
          expect(response).to be_successful
        end

        context 'when compression is used' do
          require_compression
          min_server_fcv '3.6'

          it 'does not compress the message' do
            expect(Mongo::Protocol::Compressed).not_to receive(:new)
            expect(response).to be_successful
          end
        end
      end

      context 'when creation was not successful' do
        it 'raises an exception' do
          expect {
            view.create('durran', password: 'password')
          }.to raise_error(Mongo::Error::OperationFailure)
        end
      end
    end

    context 'when a session is used' do
      let(:operation) do
        view.create(
          'durran',
          password: 'password',
          roles: [Mongo::Auth::Roles::READ_WRITE],
          session: session
        )
      end

      let(:session) { client.start_session }
      let(:client) { root_authorized_client }

      it_behaves_like 'an operation using a session'
    end

    context 'when write concern is given' do
      include_context 'testing write concern'

      let(:response) do
        view.create(
          'durran',
          password: 'password',
          roles: [Mongo::Auth::Roles::READ_WRITE],
          write_concern: {w: 2},
        )
      end

      let(:event) do
        subscriber.single_command_started_event('createUser')
      end

      it_behaves_like 'forwards write concern to server'
    end
  end

  describe '#update' do
    before do
      view.create(
        'durran',
        password: 'password',
        roles: [Mongo::Auth::Roles::READ_WRITE]
      )
    end

    context 'when a user password is updated' do
      context 'when a session is not used' do
        let!(:response) do
          view.update(
            'durran',
            password: '123',
            roles: [ Mongo::Auth::Roles::READ_WRITE ]
          )
        end

        it 'updates the password' do
          expect(response).to be_successful
        end

        context 'when compression is used' do
          require_compression
          min_server_fcv '3.6'

          it 'does not compress the message' do
            expect(Mongo::Protocol::Compressed).not_to receive(:new)
            expect(response).to be_successful
          end
        end
      end

      context 'when a session is used' do
        let(:operation) do
          view.update(
            'durran',
            password: '123',
            roles: [ Mongo::Auth::Roles::READ_WRITE ],
            session: session
          )
        end

        let(:session) { client.start_session }
        let(:client) { root_authorized_client }

        it_behaves_like 'an operation using a session'
      end
    end

    context 'when the roles of a user are updated' do
      context 'when a session is not used' do
        let!(:response) do
          view.update(
            'durran',
            password: 'password',
            roles: [ Mongo::Auth::Roles::READ ]
          )
        end

        it 'updates the roles' do
          expect(response).to be_successful
        end

        context 'when compression is used' do
          require_compression
          min_server_fcv '3.6'

          it 'does not compress the message' do
            expect(Mongo::Protocol::Compressed).not_to receive(:new)
            expect(response).to be_successful
          end
        end
      end

      context 'when a session is used' do
        let(:operation) do
          view.update(
            'durran',
            password: 'password',
            roles: [ Mongo::Auth::Roles::READ ],
            session: session
          )
        end

        let(:session) { client.start_session }
        let(:client) { root_authorized_client }

        it_behaves_like 'an operation using a session'
      end
    end

    context 'when write concern is given' do
      include_context 'testing write concern'

      let(:response) do
        view.update(
          'durran',
          password: 'password1',
          roles: [Mongo::Auth::Roles::READ_WRITE],
          write_concern: {w: 2},
        )
      end

      let(:event) do
        subscriber.single_command_started_event('updateUser')
      end

      it_behaves_like 'forwards write concern to server'
    end
  end

  describe '#remove' do
    context 'when a session is not used' do
      context 'when user removal was successful' do
        before do
          view.create(
            'durran',
            password: 'password',
            roles: [ Mongo::Auth::Roles::READ_WRITE ]
          )
        end

        let(:response) do
          view.remove('durran')
        end

        it 'removes the user from the database' do
          expect(response).to be_successful
        end
      end

      context 'when removal was not successful' do
        it 'raises an exception' do
          expect {
            view.remove('notauser')
          }.to raise_error(Mongo::Error::OperationFailure)
        end
      end
    end

    context 'when a session is used' do
      context 'when user removal was successful' do
        before do
          view.create(
            'durran',
            password: 'password',
            roles: [ Mongo::Auth::Roles::READ_WRITE ]
          )
        end

        let(:operation) do
          view.remove('durran', session: session)
        end

        let(:session) { client.start_session }
        let(:client) { root_authorized_client }

        it_behaves_like 'an operation using a session'
      end

      context 'when removal was not successful' do
        let(:failed_operation) do
          view.remove('notauser', session: session)
        end

        let(:session) { client.start_session }
        let(:client) { root_authorized_client }

        it_behaves_like 'a failed operation using a session'
      end
    end

    context 'when write concern is given' do
      include_context 'testing write concern'

      before do
        view.create(
          'durran',
          password: 'password',
          roles: [ Mongo::Auth::Roles::READ_WRITE ]
        )
      end

      let(:response) do
        view.remove(
          'durran',
          write_concern: {w: 2},
        )
      end

      let(:event) do
        subscriber.single_command_started_event('dropUser')
      end

      it_behaves_like 'forwards write concern to server'
    end
  end

  describe '#info' do
    context 'when a session is not used' do
      before do
        view.remove('emily') rescue nil
      end

      context 'when a user exists in the database' do
        before do
          view.create(
            'emily',
            password: 'password'
          )
        end

        it 'returns information for that user' do
          expect(view.info('emily')).to_not be_empty
        end
      end

      context 'when a user does not exist in the database' do
        it 'returns an empty result' do
          expect(view.info('emily')).to be_empty
        end
      end

      context 'when a user is not authorized' do
        require_auth

        let(:view) { described_class.new(unauthorized_client.database) }

        it 'raises an OperationFailure' do
          expect do
            view.info('emily')
          end.to raise_exception(Mongo::Error::OperationFailure)
        end
      end
    end

    context 'when a session is used' do
      context 'when a user exists in the database' do
        before do
          view.create(
            'durran',
            password: 'password'
          )
        end

        let(:operation) do
          view.info('durran', session: session)
        end

        let(:session) { client.start_session }
        let(:client) { root_authorized_client }

        it_behaves_like 'an operation using a session'
      end

      context 'when a user does not exist in the database' do
        let(:operation) do
          view.info('emily', session: session)
        end

        let(:session) { client.start_session }
        let(:client) { root_authorized_client }

        it_behaves_like 'an operation using a session'
      end
    end
  end

  context "when the result is a write concern error" do
    require_topology :replica_set
    min_server_version '4.0'

    let(:user) do
      Mongo::Auth::User.new({
        user: 'user',
        roles: [ Mongo::Auth::Roles::READ_WRITE ],
        password: 'password'
      })
    end

    before do
      authorized_client.use('admin').database.command(
        configureFailPoint: "failCommand",
        mode: { times: 1 },
        data: {
          failCommands: [ failCommand ],
          writeConcernError: {
            code: 64,
            codeName: "WriteConcernFailed",
            errmsg: "waiting for replication timed out",
            errInfo: { wtimeout: true }
          }
        }
      )
    end

    shared_examples "raises the correct write concern error" do
      it "raises a write concern error" do
        expect do
          view.send(method, input)
        end.to raise_error(Mongo::Error::OperationFailure, /[64:WriteConcernFailed]/)
      end

      it "raises and reports the write concern error correctly" do
        begin
          view.send(method, input)
        rescue Mongo::Error::OperationFailure::Family => e
          expect(e.write_concern_error?).to be true
          expect(e.write_concern_error_document).to eq(
            "code" => 64,
            "codeName" => "WriteConcernFailed",
            "errmsg" => "waiting for replication timed out",
            "errInfo" => { "wtimeout" => true }
          )
        end
      end
    end
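    # Editorial note: each context below arms the server's failCommand fail
    # point (available on MongoDB 4.0+, which matches the min_server_version
    # gate above) for exactly one user-management command, so the next
    # createUser/updateUser/dropUser round trip returns the injected
    # writeConcernError that the shared examples assert on.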
    context "when creating a user" do
      let(:failCommand) { "createUser" }
      let(:method) { :create }
      let(:input) { user }

      after do
        view.remove(user.name)
      end

      include_examples "raises the correct write concern error"
    end

    context "when updating a user" do
      let(:failCommand) { "updateUser" }
      let(:method) { :update }
      let(:input) { user.name }

      before do
        view.create(user)
      end

      after do
        view.remove(user.name)
      end

      include_examples "raises the correct write concern error"
    end

    context "when removing a user" do
      let(:failCommand) { "dropUser" }
      let(:method) { :remove }
      let(:input) { user.name }

      before do
        view.create(user)
      end

      include_examples "raises the correct write concern error"
    end
  end
end

mongo-ruby-driver-2.21.3/spec/mongo/auth/user_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe Mongo::Auth::User do
  let(:options) do
    { database: 'testing', user: 'user', password: 'pass' }
  end

  let(:user) { described_class.new(options) }

  shared_examples_for 'sets database and auth source to admin' do
    it 'sets database to admin' do
      expect(user.database).to eq('admin')
    end

    it 'sets auth source to admin' do
      expect(user.auth_source).to eq('admin')
    end
  end

  shared_examples_for 'sets auth source to $external' do
    it 'sets auth source to $external' do
      expect(user.auth_source).to eq('$external')
    end
  end

  describe '#initialize' do
    let(:user) { Mongo::Auth::User.new(options) }

    context 'no options' do
      let(:options) { {} }

      it 'succeeds' do
        expect(user).to be_a(Mongo::Auth::User)
      end

      it_behaves_like 'sets database and auth source to admin'
    end

    context 'invalid mechanism' do
      let(:options) { {auth_mech: :invalid} }

      it 'raises ArgumentError' do
        expect do
          user
        end.to raise_error(Mongo::Auth::InvalidMechanism, ":invalid is invalid, please use one of the following mechanisms: :aws, :gssapi, :mongodb_cr, :mongodb_x509, :plain, :scram, :scram256")
      end
    end

    context 'mechanism given as string' do
      let(:options) { {auth_mech: 'scram'} }

      context 'not linting' do
        require_no_linting

        it 'warns' do
          expect(Mongo::Logger.logger).to receive(:warn)
          user
        end

        it 'converts mechanism to symbol' do
          expect(user.mechanism).to eq(:scram)
        end

        it_behaves_like 'sets database and auth source to admin'
      end

      context 'linting' do
        require_linting

        it 'raises LintError' do
          expect do
            user
          end.to raise_error(Mongo::Error::LintError, "Auth mechanism \"scram\" must be specified as a symbol")
        end
      end
    end

    context 'mechanism given as symbol' do
      let(:options) { {auth_mech: :scram} }

      it 'does not warn' do
        expect(Mongo::Logger.logger).not_to receive(:warn)
        user
      end

      it 'stores mechanism' do
        expect(user.mechanism).to eq(:scram)
      end

      it_behaves_like 'sets database and auth source to admin'
    end

    context 'mechanism is x509' do
      let(:options) { {auth_mech: :mongodb_x509} }

      it 'sets database to admin' do
        expect(user.database).to eq('admin')
      end

      it_behaves_like 'sets auth source to $external'

      context 'database is explicitly given' do
        let(:options) { {auth_mech: :mongodb_x509, database: 'foo'} }

        it 'sets database to the specified one' do
          expect(user.database).to eq('foo')
        end

        it_behaves_like 'sets auth source to $external'
      end
    end

    it 'sets the database' do
      expect(user.database).to eq('testing')
    end

    it 'sets the name' do
      expect(user.name).to eq('user')
    end

    it 'sets the password' do
      expect(user.password).to eq('pass')
    end
  end

  describe '#auth_key' do
    let(:nonce) do
    end

    let(:expected) do
      Digest::MD5.hexdigest("#{nonce}#{user.name}#{user.hashed_password}")
    end

    it 'returns the users authentication key' do
      expect(user.auth_key(nonce)).to eq(expected)
    end
  end
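  # Editorial note: auth_key is the legacy MONGODB-CR proof -- the MD5 of
  # the server nonce, the username, and the user's hashed password
  # concatenated together, where hashed_password is itself
  # MD5("#{name}:mongo:#{password}"); see the '#hashed_password' examples
  # below.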
  describe '#encoded_name' do
    context 'when the user name contains an =' do
      let(:options) do
        { user: 'user=' }
      end

      it 'escapes the = character to =3D' do
        expect(user.encoded_name).to eq('user=3D')
      end

      it 'returns a UTF-8 string' do
        expect(user.encoded_name.encoding.name).to eq('UTF-8')
      end
    end

    context 'when the user name contains a ,' do
      let(:options) do
        { user: 'user,' }
      end

      it 'escapes the , character to =2C' do
        expect(user.encoded_name).to eq('user=2C')
      end

      it 'returns a UTF-8 string' do
        expect(user.encoded_name.encoding.name).to eq('UTF-8')
      end
    end

    context 'when the user name contains no special characters' do
      it 'does not alter the user name' do
        expect(user.name).to eq('user')
      end

      it 'returns a UTF-8 string' do
        expect(user.encoded_name.encoding.name).to eq('UTF-8')
      end
    end
  end

  describe '#hashed_password' do
    let(:expected) do
      Digest::MD5.hexdigest("user:mongo:pass")
    end

    it 'returns the hashed password' do
      expect(user.hashed_password).to eq(expected)
    end

    context 'password not given' do
      let(:options) { {user: 'foo'} }

      it 'raises MissingPassword' do
        expect do
          user.hashed_password
        end.to raise_error(Mongo::Error::MissingPassword)
      end
    end
  end

  describe '#sasl_prepped_password' do
    let(:expected) { 'pass' }

    it 'returns the clear text password' do
      expect(user.send(:sasl_prepped_password)).to eq(expected)
    end

    it 'returns the password encoded in utf-8' do
      expect(user.sasl_prepped_password.encoding.name).to eq('UTF-8')
    end

    context 'password not given' do
      let(:options) { {user: 'foo'} }

      it 'raises MissingPassword' do
        expect do
          user.sasl_prepped_password
        end.to raise_error(Mongo::Error::MissingPassword)
      end
    end
  end

  describe '#mechanism' do
    context 'when the option is provided' do
      let(:options) do
        { database: 'testing', user: 'user', password: 'pass', auth_mech: :plain }
      end

      let(:user) { described_class.new(options) }

      it 'returns the option' do
        expect(user.mechanism).to eq(:plain)
      end
    end

    context 'when no option is provided' do
      let(:user) { described_class.new(options) }

      it 'returns the default' do
        expect(user.mechanism).to be_nil
      end
    end
  end

  describe '#auth_mech_properties' do
    context 'when the option is provided' do
      let(:auth_mech_properties) do
        { service_name: 'test', service_realm: 'test', canonicalize_host_name: true }
      end

      let(:options) do
        { database: 'testing', user: 'user', password: 'pass', auth_mech_properties: auth_mech_properties }
      end

      let(:user) { described_class.new(options) }

      it 'returns the option' do
        expect(user.auth_mech_properties).to eq(auth_mech_properties)
      end
    end

    context 'when no option is provided' do
      let(:user) { described_class.new(options) }

      it 'returns an empty hash' do
        expect(user.auth_mech_properties).to eq({})
      end
    end
  end

  describe '#roles' do
    context 'when roles are provided' do
      let(:roles) do
        [ Mongo::Auth::Roles::ROOT ]
      end

      let(:user) { described_class.new(roles: roles) }

      it 'returns the roles' do
        expect(user.roles).to eq(roles)
      end
    end

    context 'when no roles are provided' do
      let(:user) { described_class.new({}) }

      it 'returns an empty array' do
        expect(user.roles).to be_empty
      end
    end
  end

  describe '#spec' do
    context 'when no password and no roles are set' do
      let(:user) { described_class.new(user: 'foo') }

      it 'is a hash with empty roles' do
        user.spec.should == {roles: []}
      end
    end
  end
end

mongo-ruby-driver-2.21.3/spec/mongo/auth/x509/conversation_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe Mongo::Auth::X509::Conversation do
  let(:user) do
    Mongo::Auth::User.new(
      database: '$external',
      user: 'user',
    )
  end

  let(:conversation) { described_class.new(user, double('connection')) }

  describe '#start' do
    let(:query) { conversation.start(nil) }

    let(:selector) { query.selector }

    it 'sets username' do
      expect(selector[:user]).to eq('user')
    end

    it 'sets the mechanism' do
      expect(selector[:mechanism]).to eq('MONGODB-X509')
    end
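    # Editorial note: a username is optional for MONGODB-X509 -- recent
    # servers (3.4+) can derive the user from the subject of the client
    # certificate -- so the conversation must also work when no username
    # was supplied, as the following contexts verify.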
    context 'when a username is not provided' do
      let(:user) do
        Mongo::Auth::User.new(
          database: '$external',
        )
      end

      it 'does not set the username' do
        expect(selector[:user]).to be_nil
      end

      it 'sets the mechanism' do
        expect(selector[:mechanism]).to eq('MONGODB-X509')
      end
    end

    context 'when the username is nil' do
      let(:user) do
        Mongo::Auth::User.new(
          database: '$external',
          user: nil
        )
      end

      it 'does not set the username' do
        expect(selector.has_key?(:user)).to be(false)
      end

      it 'sets the mechanism' do
        expect(selector[:mechanism]).to eq('MONGODB-X509')
      end
    end
  end
end

mongo-ruby-driver-2.21.3/spec/mongo/auth/x509_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'
require 'support/shared/auth_context'

describe Mongo::Auth::X509 do
  let(:server) { authorized_client.cluster.next_primary }

  include_context 'auth unit tests'

  let(:user) { Mongo::Auth::User.new(database: '$external') }

  describe '#initialize' do
    context 'when user specifies database $external' do
      let(:user) { Mongo::Auth::User.new(database: '$external') }

      it 'works' do
        described_class.new(user, connection)
      end
    end

    context 'when user specifies database other than $external' do
      let(:user) { Mongo::Auth::User.new(database: 'foo') }

      it 'raises InvalidConfiguration' do
        expect do
          described_class.new(user, connection)
        end.to raise_error(Mongo::Auth::InvalidConfiguration, /User specifies auth source 'foo', but the only valid auth source for X.509 is '\$external'/)
      end
    end
  end

  describe '#login' do
    # When x509 auth is configured, the login would work and this test
    # requires the login to fail.
    require_no_external_user

    context 'when the user is not authorized for the database' do
      before do
        connection.connect!
      end

      let(:x509) { described_class.new(user, connection) }

      let(:login) { x509.login.documents[0] }

      it 'attempts to log the user into the connection' do
        expect do
          x509.login
        end.to raise_error(Mongo::Auth::Unauthorized)
      end
    end
  end
end

mongo-ruby-driver-2.21.3/spec/mongo/auth_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe Mongo::Auth do
  describe '#get' do
    context 'when a mongodb_cr user is provided' do
      let(:user) { Mongo::Auth::User.new(auth_mech: :mongodb_cr) }

      let(:cr) { described_class.get(user, double('connection')) }

      it 'returns CR' do
        expect(cr).to be_a(Mongo::Auth::CR)
      end
    end

    context 'when a mongodb_x509 user is provided' do
      let(:user) { Mongo::Auth::User.new(auth_mech: :mongodb_x509) }

      let(:x509) { described_class.get(user, double('connection')) }

      it 'returns X509' do
        expect(x509).to be_a(Mongo::Auth::X509)
      end
    end

    context 'when a plain user is provided' do
      let(:user) { Mongo::Auth::User.new(auth_mech: :plain) }

      let(:ldap) { described_class.get(user, double('connection')) }

      it 'returns LDAP' do
        expect(ldap).to be_a(Mongo::Auth::LDAP)
      end
    end

    context 'when an invalid mechanism is provided' do
      let(:user) { Mongo::Auth::User.new(auth_mech: :nothing) }

      it 'raises an error' do
        expect {
          described_class.get(user, double('connection'))
        }.to raise_error(Mongo::Auth::InvalidMechanism)
      end
    end
  end
end

mongo-ruby-driver-2.21.3/spec/mongo/bson_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe Symbol do
  describe '#bson_type' do
    it 'serializes to a symbol type' do
      expect(:test.bson_type).to eq(14.chr)
    end
  end
end

mongo-ruby-driver-2.21.3/spec/mongo/bulk_write/ordered_combiner_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe Mongo::BulkWrite::OrderedCombiner do
  describe '#combine' do
    let(:combiner) { described_class.new(requests) }

    context 'when provided a series of delete one' do
      context 'when the documents are valid' do
        let(:requests) do
          [
            { delete_one: { filter: { _id: 0 }}},
            { delete_one: { filter: { _id: 1 }}}
          ]
        end

        it 'returns a single delete one' do
          expect(combiner.combine).to eq(
            [
              {
                delete_one: [
                  { 'q' => { _id: 0 }, 'limit' => 1 },
                  { 'q' => { _id: 1 }, 'limit' => 1 }
                ]
              }
            ]
          )
        end
      end

      context 'when a document is not valid' do
        let(:requests) do
          [
            { delete_one: { filter: { _id: 0 }}},
            { delete_one: 'whoami' }
          ]
        end

        it 'raises an exception' do
          expect {
            combiner.combine
          }.to raise_error(Mongo::Error::InvalidBulkOperation)
        end
      end
    end

    context 'when provided a series of delete many' do
      context 'when the documents are valid' do
        let(:requests) do
          [
            { delete_many: { filter: { _id: 0 }}},
            { delete_many: { filter: { _id: 1 }}}
          ]
        end

        it 'returns a single delete many' do
          expect(combiner.combine).to eq(
            [
              {
                delete_many: [
                  { 'q' => { _id: 0 }, 'limit' => 0 },
                  { 'q' => { _id: 1 }, 'limit' => 0 }
                ]
              }
            ]
          )
        end
      end

      context 'when a document is not valid' do
        let(:requests) do
          [
            { delete_many: { filter: { _id: 0 }}},
            { delete_many: 'whoami' }
          ]
        end

        it 'raises an exception' do
          expect {
            combiner.combine
          }.to raise_error(Mongo::Error::InvalidBulkOperation)
        end
      end
    end
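    # Editorial note: the ordered combiner preserves the order of the
    # requests as given, so it only merges runs of adjacent same-type
    # operations (see the mixed-operations example at the end of this
    # file); the UnorderedCombiner, tested in a later file, is free to
    # merge all operations of the same type together.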
    context 'when provided a series of insert one' do
      context 'when providing only one operation' do
        let(:requests) do
          [{ insert_one: { _id: 0 }}]
        end

        it 'returns a single insert one' do
          expect(combiner.combine).to eq(
            [{ insert_one: [{ _id: 0 }]}]
          )
        end
      end

      context 'when the documents are valid' do
        let(:requests) do
          [{ insert_one: { _id: 0 }}, { insert_one: { _id: 1 }}]
        end

        it 'returns a single insert one' do
          expect(combiner.combine).to eq(
            [{ insert_one: [{ _id: 0 }, { _id: 1 }]}]
          )
        end
      end

      context 'when a document is not valid' do
        let(:requests) do
          [{ insert_one: { _id: 0 }}, { insert_one: 'whoami' }]
        end

        it 'raises an exception' do
          expect {
            combiner.combine
          }.to raise_error(Mongo::Error::InvalidBulkOperation)
        end
      end
    end

    context 'when provided a series of replace one' do
      context 'when the documents are valid' do
        let(:requests) do
          [
            { replace_one: { filter: { _id: 0 }, replacement: { name: 'test' }}},
            { replace_one: { filter: { _id: 1 }, replacement: { name: 'test' }}}
          ]
        end

        it 'returns a single replace one' do
          expect(combiner.combine).to eq(
            [
              {
                replace_one: [
                  { 'q' => { _id: 0 }, 'u' => { name: 'test' }, },
                  { 'q' => { _id: 1 }, 'u' => { name: 'test' }, },
                ]
              }
            ]
          )
        end
      end

      context 'when a document is not valid' do
        let(:requests) do
          [
            { replace_one: { filter: { _id: 0 }, replacement: { name: 'test' }}},
            { replace_one: 'whoami' }
          ]
        end

        it 'raises an exception' do
          expect {
            combiner.combine
          }.to raise_error(Mongo::Error::InvalidBulkOperation)
        end
      end
    end

    context 'when provided a series of update one' do
      context 'when the documents are valid' do
        let(:requests) do
          [
            { update_one: { filter: { _id: 0 }, update: { '$set' => { name: 'test' }}}},
            { update_one: { filter: { _id: 1 }, update: { '$set' => { name: 'test' }}}}
          ]
        end

        it 'returns a single update one' do
          expect(combiner.combine).to eq(
            [
              {
                update_one: [
                  { 'q' => { _id: 0 }, 'u' => { '$set' => { name: 'test' }}, },
                  { 'q' => { _id: 1 }, 'u' => { '$set' => { name: 'test' }}, },
                ]
              }
            ]
          )
        end
      end

      context 'when a document is not valid' do
        let(:requests) do
          [
            { update_one: { filter: { _id: 0 }, update: { '$set' => { name: 'test' }}}},
            { update_one: 'whoami' }
          ]
        end

        it 'raises an exception' do
          expect {
            combiner.combine
          }.to raise_error(Mongo::Error::InvalidBulkOperation)
        end
      end
    end

    context 'when provided a series of update many ops' do
      context 'when the documents are valid' do
        let(:requests) do
          [
            { update_many: { filter: { _id: 0 }, update: { '$set' => { name: 'test' }}}},
            { update_many: { filter: { _id: 1 }, update: { '$set' => { name: 'test' }}}}
          ]
        end

        it 'returns a single update many' do
          expect(combiner.combine).to eq(
            [
              {
                update_many: [
                  { 'q' => { _id: 0 }, 'u' => { '$set' => { name: 'test' }}, 'multi' => true, },
                  { 'q' => { _id: 1 }, 'u' => { '$set' => { name: 'test' }}, 'multi' => true, },
                ]
              }
            ]
          )
        end
      end

      context 'when a document is not valid' do
        let(:requests) do
          [
            { update_many: { filter: { _id: 0 }, update: { '$set' => { name: 'test' }}}},
            { update_many: 'whoami' }
          ]
        end

        it 'raises an exception' do
          expect {
            combiner.combine
          }.to raise_error(Mongo::Error::InvalidBulkOperation)
        end
      end
    end

    context 'when provided a mix of operations' do
      let(:requests) do
        [
          { insert_one: { _id: 0 }},
          { delete_one: { filter: { _id: 0 }}},
          { insert_one: { _id: 1 }}
        ]
      end

      it 'returns an ordered grouping' do
        expect(combiner.combine).to eq(
          [
            { insert_one: [{ _id: 0 }]},
            { delete_one: [{ 'q' => { _id: 0 }, 'limit' => 1 }]},
            { insert_one: [{ _id: 1 }]}
          ]
        )
      end
    end
  end
end

mongo-ruby-driver-2.21.3/spec/mongo/bulk_write/result_spec.rb
# frozen_string_literal: true
# rubocop:todo all

require 'lite_spec_helper'

describe Mongo::BulkWrite::Result do
  let(:results_document) do
    {'n_inserted' => 2, 'n' => 3, 'inserted_ids' => [1, 2]}
  end

  let(:subject) { described_class.new(results_document, true) }

  describe 'construction' do
    it 'works' do
      expect(subject).to be_a(described_class)
    end
  end

  describe '#inserted_count' do
    it 'is taken from results document' do
      expect(subject.inserted_count).to eql(2)
    end
  end

  describe '#inserted_ids' do
    it 'is taken from results document' do
      expect(subject.inserted_ids).to eql([1, 2])
    end
  end

  describe '#deleted_count' do
    let(:results_document) do
      {'n_removed' => 2, 'n' => 3}
    end

    it 'is taken from results document' do
      expect(subject.deleted_count).to eql(2)
    end
  end

  describe '#matched_count' do
    let(:results_document) do
      {'n_modified' => 1, 'n_matched' => 2, 'n' => 3}
    end

    it 'is taken from results document' do
      expect(subject.matched_count).to eql(2)
    end
  end

  describe '#modified_count' do
    let(:results_document) do
      {'n_modified' => 1, 'n_matched' => 2, 'n' => 3}
    end

    it 'is taken from results document' do
      expect(subject.modified_count).to eql(1)
    end
  end

  describe '#upserted_count' do
    let(:results_document) do
      {'n_upserted' => 2, 'n' => 3, 'upserted_ids' => [1, 2]}
    end

    it 'is taken from results document' do
      expect(subject.upserted_count).to eql(2)
    end
  end

  describe '#upserted_ids' do
    let(:results_document) do
      {'n_upserted' => 2, 'n' => 3, 'upserted_ids' => [1, 2]}
    end

    it 'is taken from results document' do
      expect(subject.upserted_ids).to eql([1, 2])
    end
  end

  describe '#validate!' do
    context 'no errors' do
      it 'returns self' do
        expect(subject.validate!).to eql(subject)
      end
    end

    context 'with top level error' do
      let(:results_document) do
        {
          'writeErrors' => [
            {
              'ok' => 0,
              'errmsg' => 'not master',
              'code' => 10107,
              'codeName' => 'NotMaster',
            }
          ]
        }
      end

      it 'raises BulkWriteError' do
        expect do
          subject.validate!
          # BulkWriteErrors don't have any messages on them
        end.to raise_error(Mongo::Error::BulkWriteError, /not master/)
      end
    end

    context 'with write concern error' do
      let(:results_document) do
        {'n' => 1, 'writeConcernErrors' => {
          'errmsg' => 'Not enough data-bearing nodes',
          'code' => 100,
          'codeName' => 'CannotSatisfyWriteConcern',
        }}
      end

      it 'raises BulkWriteError' do
        expect do
          subject.validate!
          # BulkWriteErrors don't have any messages on them
        end.to raise_error(Mongo::Error::BulkWriteError, nil)
      end
    end
  end
  describe "#acknowledged?" do
    [true, false].each do |b|
      context "when acknowledged is passed as #{b}" do
        let(:result) { described_class.new(results_document, b) }

        it "acknowledged? is #{b}" do
          expect(result.acknowledged?).to be b
        end
      end
    end
  end
end

mongo-ruby-driver-2.21.3/spec/mongo/bulk_write/unordered_combiner_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe Mongo::BulkWrite::UnorderedCombiner do
  describe '#combine' do
    let(:combiner) { described_class.new(requests) }

    context 'when provided a series of delete one' do
      context 'when the documents are valid' do
        let(:requests) do
          [
            { delete_one: { filter: { _id: 0 }}},
            { delete_one: { filter: { _id: 1 }}}
          ]
        end

        it 'returns a single delete one' do
          expect(combiner.combine).to eq(
            [
              {
                delete_one: [
                  { 'q' => { _id: 0 }, 'limit' => 1 },
                  { 'q' => { _id: 1 }, 'limit' => 1 }
                ]
              }
            ]
          )
        end
      end

      context 'when a document is not valid' do
        let(:requests) do
          [
            { delete_one: { filter: { _id: 0 }}},
            { delete_one: 'whoami' }
          ]
        end

        it 'raises an exception' do
          expect {
            combiner.combine
          }.to raise_error(Mongo::Error::InvalidBulkOperation)
        end
      end
    end

    context 'when provided a series of delete many' do
      context 'when the documents are valid' do
        let(:requests) do
          [
            { delete_many: { filter: { _id: 0 }}},
            { delete_many: { filter: { _id: 1 }}}
          ]
        end

        it 'returns a single delete many' do
          expect(combiner.combine).to eq(
            [
              {
                delete_many: [
                  { 'q' => { _id: 0 }, 'limit' => 0 },
                  { 'q' => { _id: 1 }, 'limit' => 0 }
                ]
              }
            ]
          )
        end
      end

      context 'when a document is not valid' do
        let(:requests) do
          [
            { delete_many: { filter: { _id: 0 }}},
            { delete_many: 'whoami' }
          ]
        end

        it 'raises an exception' do
          expect {
            combiner.combine
          }.to raise_error(Mongo::Error::InvalidBulkOperation)
        end
      end
    end

    context 'when provided a series of insert one' do
      context 'when the documents are valid' do
        let(:requests) do
          [{ insert_one: { _id: 0 }}, { insert_one: { _id: 1 }}]
        end

        it 'returns a single insert one' do
          expect(combiner.combine).to eq(
            [{ insert_one: [{ _id: 0 }, { _id: 1 }]}]
          )
        end
      end

      context 'when a document is not valid' do
        let(:requests) do
          [{ insert_one: { _id: 0 }}, { insert_one: 'whoami' }]
        end

        it 'raises an exception' do
          expect {
            combiner.combine
          }.to raise_error(Mongo::Error::InvalidBulkOperation)
        end
      end
    end

    context 'when provided a series of update one' do
      context 'when the documents are valid' do
        let(:requests) do
          [
            { update_one: { filter: { _id: 0 }, update: { '$set' => { name: 'test' }}}},
            { update_one: { filter: { _id: 1 }, update: { '$set' => { name: 'test' }}}}
          ]
        end

        it 'returns a single update one' do
          expect(combiner.combine).to eq(
            [
              {
                update_one: [
                  { 'q' => { _id: 0 }, 'u' => { '$set' => { name: 'test' }}, },
                  { 'q' => { _id: 1 }, 'u' => { '$set' => { name: 'test' }}, },
                ]
              }
            ]
          )
        end
      end

      context 'when a document is not valid' do
        let(:requests) do
          [
            { update_one: { filter: { _id: 0 }, update: { '$set' => { name: 'test' }}}},
            { update_one: 'whoami' }
          ]
        end

        it 'raises an exception' do
          expect {
            combiner.combine
          }.to raise_error(Mongo::Error::InvalidBulkOperation)
        end
      end
    end

    context 'when provided a series of update many ops' do
      context 'when the documents are valid' do
        let(:requests) do
          [
            { update_many: { filter: { _id: 0 }, update: { '$set' => { name: 'test' }}}},
            { update_many: { filter: { _id: 1 }, update: { '$set' => { name: 'test' }}}}
          ]
        end

        it 'returns a single update many' do
          expect(combiner.combine).to eq(
            [
              {
                update_many: [
                  { 'q' => { _id: 0 }, 'u' => { '$set' => { name: 'test' }}, 'multi' => true, },
                  { 'q' => { _id: 1 }, 'u' => { '$set' => { name: 'test' }}, 'multi' => true, },
                ]
              }
            ]
          )
        end
      end

      context 'when a document is not valid' do
        let(:requests) do
          [
            { update_many: { filter: { _id: 0 }, update: { '$set' => { name: 'test' }}}},
            { update_many: 'whoami' }
          ]
        end

        it 'raises an exception' do
          expect {
            combiner.combine
          }.to raise_error(Mongo::Error::InvalidBulkOperation)
        end
      end
    end

    context 'when provided a mix of operations' do
      let(:requests) do
        [
          { insert_one: { _id: 0 }},
          { delete_one: { filter: { _id: 0 }}},
          { insert_one: { _id: 1 }},
          { delete_one: { filter: { _id: 1 }}}
        ]
      end

      it 'returns an unordered mixed grouping' do
        expect(combiner.combine).to eq(
          [
            {
              insert_one: [
                { _id: 0 },
                { _id: 1 }
              ]
            },
            {
              delete_one: [
                { 'q' => { _id: 0 }, 'limit' => 1 },
                { 'q' => { _id: 1 }, 'limit' => 1 }
              ]
            }
          ]
        )
      end
    end
  end
end

mongo-ruby-driver-2.21.3/spec/mongo/bulk_write_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe Mongo::BulkWrite do
  before do
    authorized_collection.drop
  end

  let(:collection_invalid_write_concern) do
    authorized_collection.client.with(write: INVALID_WRITE_CONCERN)[authorized_collection.name]
  end

  let(:collation) do
    { locale: 'en_US', strength: 2 }
  end

  let(:array_filters) do
    [{ 'i.y' => 3 }]
  end

  let(:collection) { authorized_collection }

  let(:client) { authorized_client }

  shared_examples_for 'bulk write with write concern yielding operation failure' do
    require_topology :single

    it 'raises an OperationFailure' do
      expect {
        bulk_write_invalid_write_concern.execute
      }.to raise_error(Mongo::Error::OperationFailure)
    end
  end

  describe '#execute' do
    shared_examples_for 'an executable bulk write' do
      context 'when providing a bad operation' do
        let(:requests) do
          [{ not_an_operation: { _id: 0 }}]
        end

        it 'raises an exception' do
          expect {
            bulk_write.execute
          }.to raise_error(Mongo::Error::InvalidBulkOperationType)
        end
      end

      context 'when providing no requests' do
        let(:requests) { [] }

        it 'raises an exception' do
          expect {
            bulk_write.execute
          }.to raise_error(ArgumentError, /Bulk write requests cannot be empty/)
        end
      end

      context 'when the operations do not need to be split' do
        context 'when a write error occurs' do
          let(:requests) do
            [
              { insert_one: { _id: 0 }},
              { insert_one: { _id: 1 }},
              { insert_one: { _id: 0 }},
              { insert_one: { _id: 1 }}
            ]
          end

          let(:error) do
            begin
              bulk_write.execute
            rescue => e
              e
            end
          end

          it 'raises an exception' do
            expect {
              bulk_write.execute
            }.to raise_error(Mongo::Error::BulkWriteError)
          end

          it 'sets the document index on the error' do
            expect(error.result['writeErrors'].first['index']).to eq(2)
          end

          context 'when a session is provided' do
            let(:extra_options) do
              { session: session }
            end

            let(:client) { authorized_client }

            let(:failed_operation) do
              bulk_write.execute
            end

            it_behaves_like 'a failed operation using a session'
          end
        end

        context 'when provided a single insert one' do
          let(:requests) do
            [{ insert_one: { _id: 0 }}]
          end

          let(:result) do
            bulk_write.execute
          end

          it 'inserts the document' do
            expect(result.inserted_count).to eq(1)
            expect(authorized_collection.find(_id: 0).count).to eq(1)
          end

          it 'only inserts that document' do
            result
            expect(authorized_collection.find.first['_id']).to eq(0)
          end

          context 'when a session is provided' do
            let(:operation) { result }

            let(:extra_options) do
              { session: session }
            end

            let(:client) { authorized_client }

            it_behaves_like 'an operation using a session'
          end

          context 'when there is a write concern error' do
            it_behaves_like 'bulk write with write concern yielding operation failure'

            context 'when a session is provided' do
              let(:extra_options) do
                { session: session }
              end
              let(:client) { collection_invalid_write_concern.client }

              let(:failed_operation) do
                bulk_write_invalid_write_concern.execute
              end

              it_behaves_like 'a failed operation using a session'
            end
          end
        end

        context 'when provided multiple insert ones' do
          let(:requests) do
            [
              { insert_one: { _id: 0 }},
              { insert_one: { _id: 1 }},
              { insert_one: { _id: 2 }}
            ]
          end

          let(:result) do
            bulk_write.execute
          end

          it 'inserts the documents' do
            expect(result.inserted_count).to eq(3)
            expect(authorized_collection.find(_id: { '$in' => [ 0, 1, 2 ]}).count).to eq(3)
          end

          context 'when there is a write failure' do
            let(:requests) do
              [{ insert_one: { _id: 1 }}, { insert_one: { _id: 1 }}]
            end

            it 'raises a BulkWriteError' do
              expect {
                bulk_write.execute
              }.to raise_error(Mongo::Error::BulkWriteError)
            end
          end

          context 'when there is a write concern error' do
            it_behaves_like 'bulk write with write concern yielding operation failure'

            context 'when a session is provided' do
              let(:extra_options) do
                { session: session }
              end

              let(:client) { collection_invalid_write_concern.client }

              let(:failed_operation) do
                bulk_write_invalid_write_concern.execute
              end

              it_behaves_like 'a failed operation using a session'
            end
          end
        end

        context 'when provided a single delete one' do
          let(:requests) do
            [{ delete_one: { filter: { _id: 0 }}}]
          end

          let(:result) do
            bulk_write.execute
          end

          before do
            authorized_collection.insert_one({ _id: 0 })
          end

          it 'deletes the document' do
            expect(result.deleted_count).to eq(1)
            expect(authorized_collection.find(_id: 0).count).to eq(0)
          end

          context 'when the write has specified a hint option' do
            let(:requests) do
              [{
                delete_one: {
                  filter: { _id: 1 },
                  hint: '_id_',
                }
              }]
            end

            context 'with unacknowledged write concern' do
              let(:bulk_write) do
                described_class.new(
                  collection,
                  requests,
                  options.merge(write_concern: { w: 0 })
                )
              end

              context "on 4.4+ servers" do
                min_server_version '4.4'

                it "doesn't raise an error" do
                  expect do
                    bulk_write.execute
                  end.to_not raise_error(Mongo::Error::UnsupportedOption)
                end
              end

              context "on <=4.2 servers" do
                max_server_version '4.2'

                it 'raises a client-side error' do
                  expect do
                    bulk_write.execute
                  end.to raise_error(Mongo::Error::UnsupportedOption, /The hint option cannot be specified on an unacknowledged write operation/)
                end
              end
            end

            # Functionality on more recent servers is sufficiently covered by spec tests.
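            # Editorial note: as these examples exercise it, the driver's
            # client-side validation of the hint option has two parts -- on
            # pre-3.4 servers that do not understand hint at all it raises
            # Mongo::Error::UnsupportedOption before sending anything, and
            # with an unacknowledged (w: 0) write concern it does the same
            # unless the server is new enough to accept the option (4.4+
            # for deletes, 4.2+ for updates).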
            context 'on server versions < 3.4' do
              max_server_fcv '3.2'

              it 'raises a client-side error' do
                expect do
                  bulk_write.execute
                end.to raise_error(Mongo::Error::UnsupportedOption, /The MongoDB server handling this request does not support the hint option on this command./)
              end
            end
          end

          context 'when a session is provided' do
            let(:operation) { result }

            let(:client) { authorized_client }

            let(:extra_options) do
              { session: session }
            end

            it_behaves_like 'an operation using a session'
          end

          context 'when there is a write concern error' do
            it_behaves_like 'bulk write with write concern yielding operation failure'

            context 'when a session is provided' do
              let(:extra_options) do
                { session: session }
              end

              let(:client) { collection_invalid_write_concern.client }

              let(:failed_operation) do
                bulk_write_invalid_write_concern.execute
              end

              it_behaves_like 'a failed operation using a session'
            end

            context 'when the write has a collation specified' do
              before do
                authorized_collection.insert_one(name: 'bang')
              end

              let(:requests) do
                [{
                  delete_one: {
                    filter: { name: 'BANG' },
                    collation: collation
                  }
                }]
              end

              context 'when the server selected supports collations' do
                min_server_fcv '3.4'

                let!(:result) do
                  bulk_write.execute
                end

                it 'applies the collation' do
                  expect(authorized_collection.find(name: 'bang').count).to eq(0)
                end

                it 'reports the deleted count' do
                  expect(result.deleted_count).to eq(1)
                end
              end

              context 'when the server selected does not support collations' do
                max_server_version '3.2'

                it 'raises an exception' do
                  expect {
                    bulk_write.execute
                  }.to raise_exception(Mongo::Error::UnsupportedCollation)
                end

                context 'when a String key is used' do
                  let(:requests) do
                    [{
                      delete_one: {
                        filter: { name: 'BANG' },
                        'collation' => collation
                      }
                    }]
                  end

                  it 'raises an exception' do
                    expect {
                      bulk_write.execute
                    }.to raise_exception(Mongo::Error::UnsupportedCollation)
                  end
                end
              end
            end

            context 'when a collation is not specified' do
              before do
                authorized_collection.insert_one(name: 'bang')
              end

              let(:requests) do
                [{ delete_one: { filter: { name: 'BANG' }}}]
              end

              let!(:result) do
                bulk_write.execute
              end

              it 'does not apply the collation' do
                expect(authorized_collection.find(name: 'bang').count).to eq(1)
              end

              it 'reports the deleted count' do
                expect(result.deleted_count).to eq(0)
              end
            end
          end

          context 'when bulk executing update_one' do
            context 'when the write has specified a hint option' do
              let(:requests) do
                [{
                  update_one: {
                    filter: { _id: 1 },
                    update: { '$set' => { 'x.$[i].y' => 5 } },
                    hint: '_id_',
                  }
                }]
              end

              # Functionality on more recent servers is sufficiently covered by spec tests.
              context 'on server versions < 3.4' do
                max_server_fcv '3.2'

                it 'raises a client-side error' do
                  expect do
                    bulk_write.execute
                  end.to raise_error(Mongo::Error::UnsupportedOption, /The MongoDB server handling this request does not support the hint option on this command./)
                end
              end

              context 'with unacknowledged write concern' do
                let(:bulk_write) do
                  described_class.new(
                    collection,
                    requests,
                    options.merge(write_concern: { w: 0 })
                  )
                end

                context "on 4.2+ servers" do
                  min_server_version '4.2'

                  it "doesn't raise an error" do
                    expect do
                      bulk_write.execute
                    end.to_not raise_error(Mongo::Error::UnsupportedOption)
                  end
                end

                context "on <=4.0 servers" do
                  max_server_version '4.0'

                  it 'raises a client-side error' do
                    expect do
                      bulk_write.execute
                    end.to raise_error(Mongo::Error::UnsupportedOption, /The hint option cannot be specified on an unacknowledged write operation/)
                  end
                end
              end
            end

            context 'when the write has specified arrayFilters' do
              before do
                authorized_collection.insert_one(_id: 1, x: [{ y: 1 }, { y: 2 }, { y: 3 }])
              end

              let(:requests) do
                [{
                  update_one: {
                    filter: { _id: 1 },
                    update: { '$set' => { 'x.$[i].y' => 5 } },
                    array_filters: array_filters,
                  }
                }]
              end

              context 'when the server selected supports arrayFilters' do
                min_server_fcv '3.6'

                let!(:result) do
                  bulk_write.execute
                end

                it 'applies the arrayFilters' do
                  expect(result.matched_count).to eq(1)
                  expect(result.modified_count).to eq(1)
                  expect(authorized_collection.find(_id: 1).first['x'].last['y']).to eq(5)
                end
              end

              context 'when the server selected does not support arrayFilters' do
                max_server_version '3.4'

                it 'raises an exception' do
                  expect {
                    bulk_write.execute
                  }.to raise_exception(Mongo::Error::UnsupportedArrayFilters)
                end
              end
            end
          end

          context 'when bulk executing update_many' do
            context 'when the write has specified a hint option' do
              let(:requests) do
                [{
                  update_many: {
                    filter: { '$or' => [{ _id: 1 }, { _id: 2 }]},
                    update: { '$set' => { 'x.$[i].y' => 5 } },
                    hint: '_id_',
                  }
                }]
              end

              # Functionality on more recent servers is sufficiently covered by spec tests.
context 'on server versions < 3.4' do max_server_fcv '3.2' it 'raises a client-side error' do expect do bulk_write.execute end.to raise_error(Mongo::Error::UnsupportedOption, /The MongoDB server handling this request does not support the hint option on this command./) end end context 'with unacknowledged write concern' do let(:bulk_write) do described_class.new( collection, requests, options.merge(write_concern: { w: 0 }) ) end context "on 4.2+ servers" do min_server_version '4.2' it "doesn't raise an error" do expect do bulk_write.execute end.to_not raise_error(Mongo::Error::UnsupportedOption) end end context "on <=4.0 servers" do max_server_version '4.0' it 'raises a client-side error' do expect do bulk_write.execute end.to raise_error(Mongo::Error::UnsupportedOption, /The hint option cannot be specified on an unacknowledged write operation/) end end end end context 'when the write has specified arrayFilters' do before do authorized_collection.insert_many([{ _id: 1, x: [ { y: 1 }, { y: 2 }, { y: 3 } ] }, { _id: 2, x: [ { y: 3 }, { y: 2 }, { y: 1 } ] }]) end let(:selector) do { '$or' => [{ _id: 1 }, { _id: 2 }]} end let(:requests) do [{ update_many: { filter: { '$or' => [{ _id: 1 }, { _id: 2 }]}, update: { '$set' => { 'x.$[i].y' => 5 } }, array_filters: array_filters, } }] end context 'when the server selected supports arrayFilters' do min_server_fcv '3.6' let!(:result) do bulk_write.execute end it 'applies the arrayFilters' do expect(result.matched_count).to eq(2) expect(result.modified_count).to eq(2) docs = authorized_collection.find(selector, sort: { _id: 1 }).to_a expect(docs[0]['x']).to eq ([{ 'y' => 1 }, { 'y' => 2 }, { 'y' => 5}]) expect(docs[1]['x']).to eq ([{ 'y' => 5 }, { 'y' => 2 }, { 'y' => 1}]) end end context 'when the server selected does not support arrayFilters' do max_server_version '3.4' it 'raises an exception' do expect { bulk_write.execute }.to raise_exception(Mongo::Error::UnsupportedArrayFilters) end end end end context 'when multiple documents match delete selector' do before do authorized_collection.insert_many([{ a: 1 }, { a: 1 }]) end let(:requests) do [{ delete_one: { filter: { a: 1 }}}] end it 'reports n_removed correctly' do expect(bulk_write.execute.deleted_count).to eq(1) end it 'deletes only matching documents' do bulk_write.execute expect(authorized_collection.find(a: 1).count).to eq(1) end end end context 'when provided multiple delete ones' do let(:requests) do [ { delete_one: { filter: { _id: 0 }}}, { delete_one: { filter: { _id: 1 }}}, { delete_one: { filter: { _id: 2 }}} ] end let(:result) do bulk_write.execute end before do authorized_collection.insert_many([ { _id: 0 }, { _id: 1 }, { _id: 2 } ]) end it 'deletes the documents' do expect(result.deleted_count).to eq(3) expect(authorized_collection.find(_id: { '$in'=> [ 0, 1, 2 ]}).count).to eq(0) end context 'when a session is provided' do let(:operation) do result end let(:client) do authorized_client end let(:extra_options) do { session: session } end it_behaves_like 'an operation using a session' end context 'when there is a write concern error' do it_behaves_like 'bulk write with write concern yielding operation failure' context 'when a session is provided' do let(:extra_options) do {session: session} end let(:client) do collection_invalid_write_concern.client end let(:failed_operation) do bulk_write_invalid_write_concern.execute end it_behaves_like 'a failed operation using a session' end end context 'when the write has a collation specified' do before do authorized_collection.insert_one(name:
'bang') authorized_collection.insert_one(name: 'doink') end let(:requests) do [{ delete_one: { filter: { name: 'BANG' }, collation: collation }}, { delete_one: { filter: { name: 'DOINK' }, collation: collation }}] end context 'when the server selected supports collations' do min_server_fcv '3.4' let!(:result) do bulk_write.execute end it 'applies the collation' do expect(authorized_collection.find(name: { '$in' => ['bang', 'doink']}).count).to eq(0) end it 'reports the deleted count' do expect(result.deleted_count).to eq(2) end end context 'when the server selected does not support collations' do max_server_version '3.2' it 'raises an exception' do expect { bulk_write.execute }.to raise_exception(Mongo::Error::UnsupportedCollation) end context 'when a String key is used' do let(:requests) do [{ delete_one: { filter: { name: 'BANG' }, 'collation' => collation }}, { delete_one: { filter: { name: 'DOINK' }, 'collation' => collation }}] end it 'raises an exception' do expect { bulk_write.execute }.to raise_exception(Mongo::Error::UnsupportedCollation) end end end end context 'when the write does not have a collation specified' do before do authorized_collection.insert_one(name: 'bang') authorized_collection.insert_one(name: 'doink') end let(:requests) do [{ delete_one: { filter: { name: 'BANG' }}}, { delete_one: { filter: { name: 'DOINK' }}}] end let!(:result) do bulk_write.execute end it 'does not apply the collation' do expect(authorized_collection.find(name: { '$in' => ['bang', 'doink']}).count).to eq(2) end it 'reports the deleted count' do expect(result.deleted_count).to eq(0) end end end context 'when provided a single delete many' do let(:requests) do [{ delete_many: { filter: { _id: 0 }}}] end let(:result) do bulk_write.execute end before do authorized_collection.insert_one({ _id: 0 }) end it 'deletes the documents' do expect(result.deleted_count).to eq(1) expect(authorized_collection.find(_id: 0).count).to eq(0) end context 'when the write has specified a hint option' do let(:requests) do [{ delete_many: { filter: { _id: 1 }, hint: '_id_', } }] end # Functionality on more recent servers is sufficiently covered by spec tests. 
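# For illustration only, not part of the original spec: deletes gained server
# support for hints later than updates (server 4.4 vs. 4.2), which is why the
# unacknowledged-write contexts below pin min_server_version '4.4'. A hedged
# sketch of the call being modeled (collection name hypothetical):
#
#   client[:coll].bulk_write(
#     [{ delete_many: { filter: { a: 1 }, hint: '_id_' } }]
#   )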
context 'on server versions < 3.4' do max_server_fcv '3.2' it 'raises a client-side error' do expect do bulk_write.execute end.to raise_error(Mongo::Error::UnsupportedOption, /The MongoDB server handling this request does not support the hint option on this command./) end end context 'with unacknowledged write concern' do let(:bulk_write) do described_class.new( collection, requests, options.merge(write_concern: { w: 0 }) ) end context "on 4.4+ servers" do min_server_version '4.4' it "doesn't raise an error" do expect do bulk_write.execute end.to_not raise_error(Mongo::Error::UnsupportedOption) end end context "on <=4.2 servers" do max_server_version '4.2' it 'raises a client-side error' do expect do bulk_write.execute end.to raise_error(Mongo::Error::UnsupportedOption, /The hint option cannot be specified on an unacknowledged write operation/) end end end end context 'when a session is provided' do let(:operation) do result end let(:client) do authorized_client end let(:extra_options) do { session: session } end it_behaves_like 'an operation using a session' end context 'when there is a write concern error' do it_behaves_like 'bulk write with write concern yielding operation failure' context 'when a session is provided' do let(:extra_options) do {session: session} end let(:client) do collection_invalid_write_concern.client end let(:failed_operation) do bulk_write_invalid_write_concern.execute end it_behaves_like 'a failed operation using a session' end end context 'when the write has a collation specified' do before do authorized_collection.insert_one(name: 'bang') authorized_collection.insert_one(name: 'bang') end let(:requests) do [{ delete_many: { filter: { name: 'BANG' }, collation: collation }}] end context 'when the server selected supports collations' do min_server_fcv '3.4' let!(:result) do bulk_write.execute end it 'applies the collation' do expect(authorized_collection.find(name: 'bang').count).to eq(0) end it 'reports the deleted count' do expect(result.deleted_count).to eq(2) end end context 'when the server selected does not support collations' do max_server_version '3.2' it 'raises an exception' do expect { bulk_write.execute }.to raise_exception(Mongo::Error::UnsupportedCollation) end context 'when a String key is used' do let(:requests) do [{ delete_many: { filter: { name: 'BANG' }, 'collation' => collation }}] end it 'raises an exception' do expect { bulk_write.execute }.to raise_exception(Mongo::Error::UnsupportedCollation) end end end end context 'when a collation is not specified' do before do authorized_collection.insert_one(name: 'bang') authorized_collection.insert_one(name: 'bang') end let(:requests) do [{ delete_many: { filter: { name: 'BANG' }}}] end let!(:result) do bulk_write.execute end it 'does not apply the collation' do expect(authorized_collection.find(name: 'bang').count).to eq(2) end it 'reports the deleted count' do expect(result.deleted_count).to eq(0) end end end context 'when provided multiple delete many ops' do let(:requests) do [ { delete_many: { filter: { _id: 0 }}}, { delete_many: { filter: { _id: 1 }}}, { delete_many: { filter: { _id: 2 }}} ] end let(:result) do bulk_write.execute end before do authorized_collection.insert_many([ { _id: 0 }, { _id: 1 }, { _id: 2 } ]) end it 'deletes the documents' do expect(result.deleted_count).to eq(3) expect(authorized_collection.find(_id: { '$in'=> [ 0, 1, 2 ]}).count).to eq(0) end context 'when a session is provided' do let(:operation) do result end let(:client) do authorized_client end
let(:extra_options) do { session: session } end it_behaves_like 'an operation using a session' end context 'when there is a write concern error' do it_behaves_like 'bulk write with write concern yielding operation failure' context 'when a session is provided' do let(:operation) do result end let(:client) do authorized_client end let(:extra_options) do {session: session} end it_behaves_like 'an operation using a session' end end context 'when the write has a collation specified' do before do authorized_collection.insert_one(name: 'bang') authorized_collection.insert_one(name: 'bang') authorized_collection.insert_one(name: 'doink') end let(:requests) do [{ delete_many: { filter: { name: 'BANG' }, collation: collation }}, { delete_many: { filter: { name: 'DOINK' }, collation: collation }}] end context 'when the server selected supports collations' do min_server_fcv '3.4' let!(:result) do bulk_write.execute end it 'applies the collation' do expect(authorized_collection.find(name: { '$in' => ['bang', 'doink'] }).count).to eq(0) end it 'reports the deleted count' do expect(result.deleted_count).to eq(3) end end context 'when the server selected does not support collations' do max_server_version '3.2' it 'raises an exception' do expect { bulk_write.execute }.to raise_exception(Mongo::Error::UnsupportedCollation) end context 'when a String key is used' do let(:requests) do [{ delete_many: { filter: { name: 'BANG' }, 'collation' => collation }}, { delete_many: { filter: { name: 'DOINK' }, 'collation' => collation }}] end it 'raises an exception' do expect { bulk_write.execute }.to raise_exception(Mongo::Error::UnsupportedCollation) end end end end context 'when a collation is not specified' do before do authorized_collection.insert_one(name: 'bang') authorized_collection.insert_one(name: 'bang') authorized_collection.insert_one(name: 'doink') end let(:requests) do [{ delete_many: { filter: { name: 'BANG' }}}, { delete_many: { filter: { name: 'DOINK' }}}] end let!(:result) do bulk_write.execute end it 'does not apply the collation' do expect(authorized_collection.find(name: { '$in' => ['bang', 'doink'] }).count).to eq(3) end it 'reports the deleted count' do expect(result.deleted_count).to eq(0) end end end context 'when providing a single replace one' do let(:requests) do [{ replace_one: { filter: { _id: 0 }, replacement: { name: 'test' }}}] end let(:result) do bulk_write.execute end before do authorized_collection.insert_one({ _id: 0 }) end it 'replaces the document' do expect(result.modified_count).to eq(1) expect(authorized_collection.find(_id: 0).first[:name]).to eq('test') end context 'when a hint option is provided' do let(:requests) do [{ replace_one: { filter: { _id: 0 }, replacement: { name: 'test' }, hint: '_id_' } }] end # Functionality on more recent servers is sufficiently covered by spec tests.
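# For illustration only, not part of the original spec: a minimal sketch of a
# replace_one request with a hint (collection name hypothetical). Unlike
# update_one, replace_one supplies a whole replacement document rather than
# update operators:
#
#   client[:coll].bulk_write(
#     [{ replace_one: { filter: { _id: 0 },
#                       replacement: { name: 'test' },
#                       hint: '_id_' } }]
#   )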
context 'on server versions < 3.4' do max_server_fcv '3.2' it 'raises a client-side error' do expect do bulk_write.execute end.to raise_error(Mongo::Error::UnsupportedOption, /The MongoDB server handling this request does not support the hint option on this command./) end end context 'with unacknowledged write concern' do let(:bulk_write) do described_class.new( collection, requests, options.merge(write_concern: { w: 0 }) ) end context "on 4.2+ servers" do min_server_version '4.2' it "doesn't raise an error" do expect do bulk_write.execute end.to_not raise_error(Mongo::Error::UnsupportedOption) end end context "on <=4.0 servers" do max_server_version '4.0' it 'raises a client-side error' do expect do bulk_write.execute end.to raise_error(Mongo::Error::UnsupportedOption, /The hint option cannot be specified on an unacknowledged write operation/) end end end end context 'when a session is provided' do let(:operation) do result end let(:client) do authorized_client end let(:extra_options) do { session: session } end it_behaves_like 'an operation using a session' end context 'when there is a write concern error' do it_behaves_like 'bulk write with write concern yielding operation failure' context 'when a session is provided' do let(:extra_options) do {session: session} end let(:client) do collection_invalid_write_concern.client end let(:failed_operation) do bulk_write_invalid_write_concern.execute end it_behaves_like 'a failed operation using a session' end end context 'when the write has a collation specified' do before do authorized_collection.insert_one(name: 'bang') end let(:requests) do [{ replace_one: { filter: { name: 'BANG' }, replacement: { other: 'pong' }, collation: collation }}] end context 'when the server selected supports collations' do min_server_fcv '3.4' let!(:result) do bulk_write.execute end it 'applies the collation' do expect(authorized_collection.find(other: 'pong').count).to eq(1) end it 'reports the upserted id' do expect(result.upserted_ids).to eq([]) end it 'reports the upserted count' do expect(result.upserted_count).to eq(0) end it 'reports the modified count' do expect(result.modified_count).to eq(1) end it 'reports the matched count' do expect(result.matched_count).to eq(1) end end context 'when the server selected does not support collations' do max_server_version '3.2' it 'raises an exception' do expect { bulk_write.execute }.to raise_exception(Mongo::Error::UnsupportedCollation) end context 'when a String key is used' do let(:requests) do [{ replace_one: { filter: { name: 'BANG' }, replacement: { other: 'pong' }, 'collation' => collation }}] end it 'raises an exception' do expect { bulk_write.execute }.to raise_exception(Mongo::Error::UnsupportedCollation) end end end end context 'when the write does not have a collation specified' do before do authorized_collection.insert_one(name: 'bang') end let(:requests) do [{ replace_one: { filter: { name: 'BANG' }, replacement: { other: 'pong' }}}] end let!(:result) do bulk_write.execute end it 'does not apply the collation' do expect(authorized_collection.find(other: 'pong').count).to eq(0) end it 'reports the upserted id' do expect(result.upserted_ids).to eq([]) end it 'reports the upserted count' do expect(result.upserted_count).to eq(0) end it 'reports the modified count' do expect(result.modified_count).to eq(0) end it 'reports the matched count' do expect(result.matched_count).to eq(0) end end end context 'when providing a single update one' do context 'when upsert is false' do let(:requests) do [{ update_one: {
filter: { _id: 0 }, update: { "$set" => { name: 'test' }}}}] end let(:result) do bulk_write.execute end before do authorized_collection.insert_one({ _id: 0 }) end it 'updates the document' do result expect(authorized_collection.find(_id: 0).first[:name]).to eq('test') end it 'reports the upserted id' do expect(result.upserted_ids).to eq([]) end it 'reports the upserted count' do expect(result.upserted_count).to eq(0) end it 'reports the modified count' do expect(result.modified_count).to eq(1) end it 'reports the matched count' do expect(result.matched_count).to eq(1) end context 'when a session is provided' do let(:operation) do result end let(:client) do authorized_client end let(:extra_options) do { session: session } end it_behaves_like 'an operation using a session' end context 'when documents match but are not modified' do before do authorized_collection.insert_one({ a: 0 }) end let(:requests) do [{ update_one: { filter: { a: 0 }, update: { "$set" => { a: 0 }}}}] end it 'reports the upserted id' do expect(result.upserted_ids).to eq([]) end it 'reports the upserted count' do expect(result.upserted_count).to eq(0) end it 'reports the modified count' do expect(result.modified_count).to eq(0) end it 'reports the matched count' do expect(result.matched_count).to eq(1) end end context 'when the number of updates exceeds the max batch size' do # Test uses doubles for server descriptions, doubles are # incompatible with freezing which linting does for descriptions require_no_linting let(:batch_size) do 11 end before do allow_any_instance_of(Mongo::Server::Description).to \ receive(:max_write_batch_size).and_return(batch_size - 1) end let(:requests) do batch_size.times.collect do |i| { update_one: { filter: { a: i }, update: { "$set" => { a: i, b: 3 }}, upsert: true }} end end it 'updates the documents and reports the correct number of upserted ids' do expect(result.upserted_ids.size).to eq(batch_size) expect(authorized_collection.find(b: 3).count).to eq(batch_size) end end context 'when there is a write concern error' do it_behaves_like 'bulk write with write concern yielding operation failure' context 'when a session is provided' do let(:extra_options) do {session: session} end let(:client) do collection_invalid_write_concern.client end let(:failed_operation) do bulk_write_invalid_write_concern.execute end it_behaves_like 'a failed operation using a session' end end end context 'when upsert is true' do let(:requests) do [{ update_one: { filter: { _id: 0 }, update: { "$set" => { name: 'test' } }, upsert: true }}] end let(:result) do bulk_write.execute end it 'updates the document' do result expect(authorized_collection.find(_id: 0).first[:name]).to eq('test') end it 'reports the upserted count' do expect(result.upserted_count).to eq(1) end it 'reports the modified_count count' do expect(result.modified_count).to eq(0) end it 'reports the matched count' do expect(result.matched_count).to eq(0) end it 'reports the upserted id' do expect(result.upserted_ids).to eq([0]) end context 'when there is a write concern error' do it_behaves_like 'bulk write with write concern yielding operation failure' end context 'when write_concern is specified as an option' do # In a multi-sharded cluster, the write seems to go to a # different shard from the read require_no_multi_mongos let(:extra_options) do { write_concern: { w: 0 } } end let(:result) do bulk_write.execute end it 'updates the document' do result expect(authorized_collection.find(_id: 0).first[:name]).to eq('test') end it 'does not report the 
upserted count' do expect(result.upserted_count).to eq(0) end it 'does not report the modified_count count' do expect(result.modified_count).to eq(0) end it 'does not report the matched count' do expect(result.matched_count).to eq(0) end it 'does not report the upserted id' do expect(result.upserted_ids).to eq([]) end end end context 'when the write has a collation specified' do before do authorized_collection.insert_one(name: 'bang') end let(:requests) do [{ update_one: { filter: { name: 'BANG' }, update: { "$set" => { name: 'pong' }}, collation: collation }}] end context 'when the server selected supports collations' do min_server_fcv '3.4' let!(:result) do bulk_write.execute end it 'applies the collation' do expect(authorized_collection.find(name: 'pong').count).to eq(1) end it 'reports the upserted id' do expect(result.upserted_ids).to eq([]) end it 'reports the upserted count' do expect(result.upserted_count).to eq(0) end it 'reports the modified count' do expect(result.modified_count).to eq(1) end it 'reports the matched count' do expect(result.matched_count).to eq(1) end end context 'when the server selected does not support collations' do max_server_version '3.2' it 'raises an exception' do expect { bulk_write.execute }.to raise_exception(Mongo::Error::UnsupportedCollation) end context 'when a String key is used' do let(:requests) do [{ update_one: { filter: { name: 'BANG' }, update: { "$set" => { name: 'pong' }}, 'collation' => collation }}] end it 'raises an exception' do expect { bulk_write.execute }.to raise_exception(Mongo::Error::UnsupportedCollation) end end end end context 'when the write does not have a collation specified' do before do authorized_collection.insert_one(name: 'bang') end let(:requests) do [{ update_one: { filter: { name: 'BANG' }, update: { "$set" => { name: 'pong' }}}}] end let!(:result) do bulk_write.execute end it 'does not apply the collation' do expect(authorized_collection.find(name: 'pong').count).to eq(0) end it 'reports the upserted id' do expect(result.upserted_ids).to eq([]) end it 'reports the upserted count' do expect(result.upserted_count).to eq(0) end it 'reports the modified count' do expect(result.modified_count).to eq(0) end it 'reports the matched count' do expect(result.matched_count).to eq(0) end end end context 'when providing multiple update ones' do context 'when the write has a collation specified' do before do authorized_collection.insert_one(name: 'bang') authorized_collection.insert_one(name: 'doink') end let(:requests) do [{ update_one: { filter: { name: 'BANG' }, update: { "$set" => { name: 'pong' }}, collation: collation }}, { update_one: { filter: { name: 'DOINK' }, update: { "$set" => { name: 'pong' }}, collation: collation }}] end context 'when the server selected supports collations' do min_server_fcv '3.4' let!(:result) do bulk_write.execute end it 'applies the collation' do expect(authorized_collection.find(name: 'pong').count).to eq(2) end it 'reports the upserted id' do expect(result.upserted_ids).to eq([]) end it 'reports the upserted count' do expect(result.upserted_count).to eq(0) end it 'reports the modified count' do expect(result.modified_count).to eq(2) end it 'reports the matched count' do expect(result.matched_count).to eq(2) end end context 'when the server selected does not support collations' do max_server_version '3.2' it 'raises an exception' do expect { bulk_write.execute }.to raise_exception(Mongo::Error::UnsupportedCollation) end context 'when a String key is used' do let(:requests) do [{ update_one: { 
filter: { name: 'BANG' }, update: { "$set" => { name: 'pong' }}, 'collation' => collation }}, { update_one: { filter: { name: 'DOINK' }, update: { "$set" => { name: 'pong' }}, 'collation' => collation }}] end it 'raises an exception' do expect { bulk_write.execute }.to raise_exception(Mongo::Error::UnsupportedCollation) end end end end context 'when the write does not have a collation specified' do before do authorized_collection.insert_one(name: 'bang') authorized_collection.insert_one(name: 'doink') end let(:requests) do [{ update_one: { filter: { name: 'BANG' }, update: { "$set" => { name: 'pong' }}}}, { update_one: { filter: { name: 'DOINK' }, update: { "$set" => { name: 'pong' }}}}] end let!(:result) do bulk_write.execute end it 'does not apply the collation' do expect(authorized_collection.find(name: 'pong').count).to eq(0) end it 'reports the upserted id' do expect(result.upserted_ids).to eq([]) end it 'reports the upserted count' do expect(result.upserted_count).to eq(0) end it 'reports the modified count' do expect(result.modified_count).to eq(0) end it 'reports the matched count' do expect(result.matched_count).to eq(0) end end context 'when upsert is false' do let(:requests) do [{ update_one: { filter: { _id: 0 }, update: { "$set" => { name: 'test' }}}}, { update_one: { filter: { _id: 1 }, update: { "$set" => { name: 'test' }}}}] end let(:result) do bulk_write.execute end before do authorized_collection.insert_many([{ _id: 0 }, { _id: 1 }]) end it 'updates the document' do result expect(authorized_collection.find(name: 'test').count).to eq(2) end it 'reports the upserted id' do expect(result.upserted_ids).to eq([]) end it 'reports the upserted count' do expect(result.upserted_count).to eq(0) end it 'reports the modified count' do expect(result.modified_count).to eq(2) end it 'reports the matched count' do expect(result.matched_count).to eq(2) end context 'when there is a mix of updates and matched without an update' do let(:requests) do [{ update_one: { filter: { a: 0 }, update: { "$set" => { a: 1 }}}}, { update_one: { filter: { a: 2 }, update: { "$set" => { a: 2 }}}}] end let(:result) do bulk_write.execute end before do authorized_collection.insert_many([{ a: 0 }, { a: 2 }]) end it 'updates the document' do result expect(authorized_collection.find(a: { '$lt' => 3 }).count).to eq(2) end it 'reports the upserted id' do expect(result.upserted_ids).to eq([]) end it 'reports the upserted count' do expect(result.upserted_count).to eq(0) end it 'reports the modified count' do expect(result.modified_count).to eq(1) end it 'reports the matched count' do expect(result.matched_count).to eq(2) end end context 'when there is a write concern error' do it_behaves_like 'bulk write with write concern yielding operation failure' end end context 'when upsert is true' do let(:requests) do [{ update_one: { filter: { _id: 0 }, update: { "$set" => { name: 'test' }}, upsert: true }}, { update_one: { filter: { _id: 1 }, update: { "$set" => { name: 'test1' }}, upsert: true }}] end let(:result) do bulk_write.execute end it 'updates the document' do expect(result.modified_count).to eq(0) expect(authorized_collection.find(name: { '$in' => ['test', 'test1'] }).count).to eq(2) end it 'reports the upserted count' do expect(result.upserted_count).to eq(2) end it 'reports the modified count' do expect(result.modified_count).to eq(0) end it 'reports the matched count' do expect(result.matched_count).to eq(0) end it 'reports the upserted id' do expect(result.upserted_ids).to eq([0, 1]) end context 'when there is 
a mix of updates, upsert, and matched without an update' do let(:requests) do [{ update_one: { filter: { a: 0 }, update: { "$set" => { a: 1 }}}}, { update_one: { filter: { a: 2 }, update: { "$set" => { a: 2 }}}}, { update_one: { filter: { _id: 3 }, update: { "$set" => { a: 4 }}, upsert: true }}] end let(:result) do bulk_write.execute end before do authorized_collection.insert_many([{ a: 0 }, { a: 2 }]) end it 'updates the documents' do result expect(authorized_collection.find(a: { '$lt' => 3 }).count).to eq(2) expect(authorized_collection.find(a: 4).count).to eq(1) end it 'reports the upserted id' do expect(result.upserted_ids).to eq([3]) end it 'reports the upserted count' do expect(result.upserted_count).to eq(1) end it 'reports the modified count' do expect(result.modified_count).to eq(1) end it 'reports the matched count' do expect(result.matched_count).to eq(2) end end context 'when there is a write concern error' do it_behaves_like 'bulk write with write concern yielding operation failure' end end end context 'when providing a single update many' do context 'when the write has a collation specified' do before do authorized_collection.insert_one(name: 'bang') authorized_collection.insert_one(name: 'bang') end let(:requests) do [{ update_many: { filter: { name: 'BANG' }, update: { "$set" => { name: 'pong' }}, collation: collation }}] end context 'when the server selected supports collations' do min_server_fcv '3.4' let!(:result) do bulk_write.execute end it 'applies the collation' do expect(authorized_collection.find(name: 'pong').count).to eq(2) end it 'reports the upserted id' do expect(result.upserted_ids).to eq([]) end it 'reports the upserted count' do expect(result.upserted_count).to eq(0) end it 'reports the modified count' do expect(result.modified_count).to eq(2) end it 'reports the matched count' do expect(result.matched_count).to eq(2) end end context 'when the server selected does not support collations' do max_server_version '3.2' it 'raises an exception' do expect { bulk_write.execute }.to raise_exception(Mongo::Error::UnsupportedCollation) end context 'when a String key is used' do let(:requests) do [{ update_many: { filter: { name: 'BANG' }, update: { "$set" => { name: 'pong' }}, 'collation' => collation }}] end it 'raises an exception' do expect { bulk_write.execute }.to raise_exception(Mongo::Error::UnsupportedCollation) end end end end context 'when the write does not have a collation specified' do before do authorized_collection.insert_one(name: 'bang') authorized_collection.insert_one(name: 'bang') end let(:requests) do [{ update_many: { filter: { name: 'BANG' }, update: { "$set" => { name: 'pong' }}}}] end let!(:result) do bulk_write.execute end it 'does not apply the collation' do expect(authorized_collection.find(name: 'pong').count).to eq(0) end it 'reports the upserted id' do expect(result.upserted_ids).to eq([]) end it 'reports the upserted count' do expect(result.upserted_count).to eq(0) end it 'reports the modified count' do expect(result.modified_count).to eq(0) end it 'reports the matched count' do expect(result.matched_count).to be(0) end end context 'when upsert is false' do let(:requests) do [{ update_many: { filter: { a: 0 }, update: { "$set" => { name: 'test' }}}}] end let(:result) do bulk_write.execute end before do authorized_collection.insert_many([{ a: 0 }, { a: 0 }]) end it 'updates the documents' do expect(authorized_collection.find(a: 0).count).to eq(2) end it 'reports the upserted ids' do expect(result.upserted_ids).to eq([]) end it 'reports 
the upserted count' do expect(result.upserted_count).to eq(0) end it 'reports the modified count' do expect(result.modified_count).to eq(2) end it 'reports the matched count' do expect(result.matched_count).to eq(2) end context 'when there is a write concern error' do it_behaves_like 'bulk write with write concern yielding operation failure' end end context 'when upsert is true' do let(:requests) do [{ update_many: { filter: { _id: 0 }, update: { "$set" => { name: 'test' }}, upsert: true }}] end let(:result) do bulk_write.execute end it 'updates the document' do result expect(authorized_collection.find(name: 'test').count).to eq(1) end it 'reports the upserted count' do expect(result.upserted_count).to eq(1) end it 'reports the matched count' do expect(result.matched_count).to eq(0) end it 'reports the modified count' do expect(result.modified_count).to eq(0) end it 'reports the upserted id' do expect(result.upserted_ids).to eq([0]) end context 'when there is a write concern error' do it_behaves_like 'bulk write with write concern yielding operation failure' end end end end context 'when the operations need to be split' do # Test uses doubles for server descriptions, doubles are # incompatible with freezing which linting does for descriptions require_no_linting let(:batch_size) do 11 end let(:connection) do server = client.cluster.next_primary end before do allow_any_instance_of(Mongo::Server::Description).to \ receive(:max_write_batch_size).and_return(batch_size - 1) end context 'when a write error occurs' do let(:requests) do batch_size.times.map do |i| { insert_one: { _id: i }} end end let(:error) do begin bulk_write.execute rescue => e e end end it 'raises an exception' do expect { requests.push({ insert_one: { _id: 5 }}) bulk_write.execute }.to raise_error(Mongo::Error::BulkWriteError) end it 'sets the document index on the error' do requests.push({ insert_one: { _id: 5 }}) expect(error.result['writeErrors'].first['index']).to eq(batch_size) end end context 'when no write errors occur' do let(:requests) do batch_size.times.map do |i| { insert_one: { _id: i }} end end let(:result) do bulk_write.execute end it 'inserts the documents' do expect(result.inserted_count).to eq(batch_size) end it 'combines the inserted ids' do expect(result.inserted_ids.size).to eq(batch_size) end context 'when a session is provided' do let(:operation) do result end let(:client) do authorized_client end let(:extra_options) do { session: session } end it_behaves_like 'an operation using a session' end context 'when retryable writes are supported' do require_wired_tiger min_server_fcv '3.6' require_topology :replica_set, :sharded # In a multi-shard cluster, retries may go to a different server # than original command which these tests are not prepared to handle require_no_multi_mongos let(:subscriber) { Mrss::EventSubscriber.new } let(:client) do authorized_client_with_retry_writes.tap do |client| # We do not unsubscribe any of these subscribers. # This is harmless since they simply store the events in themselves. 
client.subscribe(Mongo::Monitoring::COMMAND, subscriber) end end let(:collection) do client[authorized_collection.name] end let!(:result) do bulk_write.execute end let(:started_events) do subscriber.started_events.select do |event| event.command['insert'] end end let(:first_txn_number) do Utils.int64_value(started_events[-2].command['txnNumber']) end let(:second_txn_number) do Utils.int64_value(started_events[-1].command['txnNumber']) end it 'inserts the documents' do expect(result.inserted_count).to eq(batch_size) end it 'combines the inserted ids' do expect(result.inserted_ids.size).to eq(batch_size) end it 'publishes the expected number of events' do expect(started_events.length).to eq 2 end it 'increments the transaction number' do expect(second_txn_number). to eq(first_txn_number + 1) end end end end context 'when an operation exceeds the max bson size' do let(:requests) do 5.times.map do |i| { insert_one: { _id: i, x: 'y' * 4000000 }} end end let(:result) do bulk_write.execute end it 'inserts the documents' do expect(result.inserted_count).to eq(5) end context 'when a session is provided' do let(:operation) do result end let(:client) do authorized_client end let(:extra_options) do { session: session } end it_behaves_like 'an operation using a session' end end end context 'when the bulk write is unordered' do let(:bulk_write) do described_class.new(collection, requests, options) end let(:options) do { ordered: false }.merge(extra_options) end let(:extra_options) do {} end let(:bulk_write_invalid_write_concern) do described_class.new(collection_invalid_write_concern, requests, options) end it_behaves_like 'an executable bulk write' end context 'when the bulk write is ordered' do let(:bulk_write) do described_class.new(collection, requests, options) end let(:options) do { ordered: true }.merge(extra_options) end let(:extra_options) do {} end let(:bulk_write_invalid_write_concern) do described_class.new(collection_invalid_write_concern, requests, options) end it_behaves_like 'an executable bulk write' end end describe '#initialize' do let(:requests) do [{ insert_one: { _id: 0 }}] end shared_examples_for 'a bulk write initializer' do it 'sets the collection' do expect(bulk_write.collection).to eq(authorized_collection) end it 'sets the requests' do expect(bulk_write.requests).to eq(requests) end end context 'when no options are provided' do let(:bulk_write) do described_class.new(authorized_collection, requests) end it 'sets empty options' do expect(bulk_write.options).to be_empty end it_behaves_like 'a bulk write initializer' end context 'when options are provided' do let(:bulk_write) do described_class.new(authorized_collection, requests, ordered: true) end it 'sets the options' do expect(bulk_write.options).to eq(ordered: true) end end context 'when nil options are provided' do let(:bulk_write) do described_class.new(authorized_collection, requests, nil) end it 'sets empty options' do expect(bulk_write.options).to be_empty end end end describe '#ordered?' 
do context 'when no option provided' do let(:bulk_write) do described_class.new(authorized_collection, []) end it 'returns true' do expect(bulk_write).to be_ordered end end context 'when the option is provided' do context 'when the option is true' do let(:options) do { ordered: true } end let(:bulk_write) do described_class.new(authorized_collection, [], options) end it 'returns true' do expect(bulk_write).to be_ordered end end context 'when the option is false' do let(:options) do { ordered: false } end let(:bulk_write) do described_class.new(authorized_collection, [], options) end it 'returns false' do expect(bulk_write).to_not be_ordered end end end end describe 'when the collection has a validator' do min_server_fcv '3.2' let(:collection_with_validator) do authorized_client[:validating, :validator => { :a => { '$exists' => true } }].tap do |c| c.create end end before do begin; authorized_client[:validating].drop; rescue; end collection_with_validator.delete_many collection_with_validator.insert_many([{ :a => 1 }, { :a => 2 }]) end context 'when the documents are invalid' do let(:ops) do [ { insert_one: { :x => 1 } }, { update_one: { filter: { :a => 1 }, update: { '$unset' => { :a => '' } } } }, { replace_one: { filter: { :a => 2 }, replacement: { :x => 2 } } } ] end context 'when bypass_document_validation is not set' do let(:result) do collection_with_validator.bulk_write(ops) end it 'raises BulkWriteError' do expect { result }.to raise_exception(Mongo::Error::BulkWriteError) end end context 'when bypass_document_validation is true' do let(:result2) do collection_with_validator.bulk_write( ops, :bypass_document_validation => true) end it 'executes successfully' do expect(result2.modified_count).to eq(2) expect(result2.inserted_count).to eq(1) end end end end describe "#acknowledged?" do let(:requests) { [ { insert_one: { x: 1 } } ] } let(:options) { {} } let(:bulk_write) do described_class.new( collection, requests, options ) end let(:result) { bulk_write.execute } context "when using unacknowledged writes with one request" do let(:options) { { write_concern: { w: 0 } } } it 'acknowledged? returns false' do expect(result.acknowledged?).to be false end end context "when using unacknowledged writes with multiple requests" do let(:options) { { write_concern: { w: 0 } } } let(:requests) { [ { insert_one: { x: 1 } }, { insert_one: { x: 1 } } ] } it 'acknowledged? returns false' do expect(result.acknowledged?).to be false end end context "when not using unacknowledged writes" do let(:options) { { write_concern: { w: 1 } } } it 'acknowledged? 
returns true' do expect(result.acknowledged?).to be true end end end end mongo-ruby-driver-2.21.3/spec/mongo/caching_cursor_spec.rb000066400000000000000000000032611505113246500235730ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe Mongo::CachingCursor do around do |spec| Mongo::QueryCache.clear Mongo::QueryCache.cache { spec.run } end let(:authorized_collection) do authorized_client['caching_cursor'] end before do authorized_collection.drop end let(:server) do view.send(:server_selector).select_server(authorized_client.cluster) end let(:reply) do view.send(:send_initial_query, server, Mongo::Operation::Context.new(client: authorized_client)) end let(:cursor) do described_class.new(view, reply, server) end let(:view) do Mongo::Collection::View.new(authorized_collection) end before do authorized_collection.delete_many 3.times { |i| authorized_collection.insert_one(_id: i) } end describe '#cached_docs' do context 'when no query has been performed' do it 'returns nil' do expect(cursor.cached_docs).to be_nil end end context 'when a query has been performed' do it 'returns the number of documents' do cursor.to_a expect(cursor.cached_docs.length).to eq(3) expect(cursor.cached_docs).to eq([{ '_id' => 0 }, { '_id' => 1 }, { '_id' => 2 }]) end end end describe '#try_next' do it 'fetches the next document' do expect(cursor.try_next).to eq('_id' => 0) expect(cursor.try_next).to eq('_id' => 1) expect(cursor.try_next).to eq('_id' => 2) end end describe '#each' do it 'iterates the cursor' do result = cursor.each.to_a expect(result.length).to eq(3) expect(result).to eq([{ '_id' => 0 }, { '_id' => 1 }, { '_id' => 2 }]) end end end mongo-ruby-driver-2.21.3/spec/mongo/client_construction_spec.rb000066400000000000000000002604611505113246500247010ustar00rootroot00000000000000# frozen_string_literal: true require 'spec_helper' SINGLE_CLIENT = [ '127.0.0.1:27017' ].freeze # let these existing styles stand, rather than going in for a deep refactoring # of these specs. # # possible future work: re-enable these one at a time and do the hard work of # making them right. # # rubocop:disable RSpec/ExpectInHook, RSpec/ExampleLength # rubocop:disable RSpec/ContextWording, RSpec/RepeatedExampleGroupDescription # rubocop:disable RSpec/ExampleWording, Style/BlockComments, RSpec/AnyInstance # rubocop:disable RSpec/VerifiedDoubles describe Mongo::Client do clean_slate let(:subscriber) { Mrss::EventSubscriber.new } describe '.new' do context 'with scan: false' do fails_on_jruby it 'does not perform i/o' do allow_any_instance_of(Mongo::Server::Monitor).to receive(:run!) expect_any_instance_of(Mongo::Server::Monitor).not_to receive(:scan!) 
# return should be instant c = Timeout.timeout(1) do ClientRegistry.instance.new_local_client([ '1.1.1.1' ], scan: false) end expect(c.cluster.servers).to be_empty c.close end end context 'with default scan: true' do shared_examples 'does not wait for server selection timeout' do let(:logger) do Logger.new($stdout, level: Logger::DEBUG) end let(:subscriber) do Mongo::Monitoring::UnifiedSdamLogSubscriber.new( logger: logger, log_prefix: 'CCS-SDAM' ) end let(:client) do ClientRegistry.instance.new_local_client( [ address ], # Specify server selection timeout here because test suite sets # one by default and it's fairly low SpecConfig.instance.test_options.merge( connect_timeout: 1, socket_timeout: 1, server_selection_timeout: 8, logger: logger, log_prefix: 'CCS-CLIENT', sdam_proc: ->(client) { subscriber.subscribe(client) } ) ) end it 'does not wait for server selection timeout' do time_taken = Benchmark.realtime do # Client is created here. client end puts "client_construction_spec.rb: Cluster is: #{client.cluster.summary}" # Because the first round of sdam waits for server statuses to change # rather than for server selection semaphore on the cluster which # is signaled after topology is updated, the topology here could be # old (i.e. a monitor thread was just about to update the topology # but hasn't quite gotten to it. Add a small delay to compensate. # This issue doesn't apply to real applications which will wait for # server selection semaphore. sleep 0.1 actual_class = client.cluster.topology.class expect([ Mongo::Cluster::Topology::ReplicaSetWithPrimary, Mongo::Cluster::Topology::Single, Mongo::Cluster::Topology::Sharded, Mongo::Cluster::Topology::LoadBalanced, ]).to include(actual_class) expect(time_taken).to be < 5 # run a command to ensure the client is a working one client.database.command(ping: 1) end end context 'when cluster is monitored' do require_topology :single, :replica_set, :sharded # TODO: this test requires there being no outstanding background # monitoring threads running, as otherwise the scan! expectation # can be executed on a thread that belongs to one of the global # clients for instance it 'performs one round of sdam' do # Does not work due to # https://github.com/rspec/rspec-mocks/issues/1242. # # expect_any_instance_of(Mongo::Server::Monitor).to receive(:scan!). 
# exactly(SpecConfig.instance.addresses.length).times.and_call_original c = new_local_client(SpecConfig.instance.addresses, SpecConfig.instance.test_options) expect(c.cluster.servers).not_to be_empty end # This checks the case of all initial seeds being removed from # cluster during SDAM context 'me mismatch on the only initial seed' do let(:address) do ClusterConfig.instance.alternate_address.to_s end include_examples 'does not wait for server selection timeout' end end context 'when cluster is not monitored' do require_topology :load_balanced let(:address) do ClusterConfig.instance.alternate_address.to_s end include_examples 'does not wait for server selection timeout' end end context 'with monitoring_io: false' do let(:client) do new_local_client(SINGLE_CLIENT, monitoring_io: false) end it 'passes monitoring_io: false to cluster' do expect(client.cluster.options[:monitoring_io]).to be false end end end describe '#initialize' do context 'when providing options' do context 'with auto_encryption_options' do require_libmongocrypt include_context 'define shared FLE helpers' let(:client) do new_local_client_nmio( SpecConfig.instance.addresses, SpecConfig.instance.test_options.merge(client_opts) ) end let(:client_opts) { { auto_encryption_options: auto_encryption_options } } let(:auto_encryption_options) do { key_vault_client: key_vault_client, key_vault_namespace: key_vault_namespace, kms_providers: kms_providers, schema_map: schema_map, bypass_auto_encryption: bypass_auto_encryption, extra_options: extra_options, } end let(:key_vault_client) { new_local_client_nmio(SpecConfig.instance.addresses) } let(:bypass_auto_encryption) { true } let(:extra_options) do { mongocryptd_uri: mongocryptd_uri, mongocryptd_bypass_spawn: mongocryptd_bypass_spawn, mongocryptd_spawn_path: mongocryptd_spawn_path, mongocryptd_spawn_args: mongocryptd_spawn_args, } end let(:mongocryptd_uri) { 'mongodb://localhost:27021' } let(:mongocryptd_bypass_spawn) { true } let(:mongocryptd_spawn_path) { '/spawn/path' } let(:mongocryptd_spawn_args) { [ '--idleShutdownTimeoutSecs=100' ] } shared_examples 'a functioning auto encryption client' do let(:encryption_options) { client.encrypter.options } context 'when auto_encrypt_opts are nil' do let(:auto_encryption_options) { nil } it 'does not raise an exception' do expect { client }.not_to raise_error end end context 'when key_vault_namespace is nil' do let(:key_vault_namespace) { nil } it 'raises an exception' do expect { client }.to raise_error(ArgumentError, /key_vault_namespace option cannot be nil/) end end context 'when key_vault_namespace is incorrectly formatted' do let(:key_vault_namespace) { 'not.good.formatting' } it 'raises an exception' do expect { client }.to raise_error( ArgumentError, /key_vault_namespace option must be in the format database.collection/ ) end end context 'when kms_providers is nil' do let(:kms_providers) { nil } it 'raises an exception' do expect { client }.to raise_error(ArgumentError, /KMS providers options must not be nil/) end end context 'when kms_providers doesn\'t have local or aws keys' do let(:kms_providers) { { random_key: 'hello' } } it 'raises an exception' do expect { client }.to raise_error( ArgumentError, /KMS providers options must have one of the following keys: :aws, :azure, :gcp, :kmip, :local/ ) end end context 'when local kms_provider is incorrectly formatted' do let(:kms_providers) { { local: { wrong_key: 'hello' } } } it 'raises an exception' do expect { client }.to raise_error( ArgumentError, /Local KMS provider options must 
be in the format: { key: 'MASTER-KEY' }/ ) end end context 'when aws kms_provider is incorrectly formatted' do let(:kms_providers) { { aws: { wrong_key: 'hello' } } } let(:expected_options_format) do "{ access_key_id: 'YOUR-ACCESS-KEY-ID', secret_access_key: 'SECRET-ACCESS-KEY' }" end it 'raises an exception' do expect { client }.to raise_error( ArgumentError, / AWS KMS provider options must be in the format: #{expected_options_format}/ ) end end context 'with an invalid schema map' do let(:schema_map) { '' } it 'raises an exception' do expect { client }.to raise_error(ArgumentError, /schema_map must be a Hash or nil/) end end context 'with valid options' do it 'does not raise an exception' do expect { client }.not_to raise_error end context 'with a nil schema_map' do let(:schema_map) { nil } it 'does not raise an exception' do expect { client }.not_to raise_error end end it 'sets options on the client' do expect(encryption_options[:key_vault_client]).to eq(key_vault_client) expect(encryption_options[:key_vault_namespace]).to eq(key_vault_namespace) # Don't explicitly expect kms_providers to avoid accidentally exposing # sensitive data in evergreen logs expect(encryption_options[:kms_providers]).to be_a(Hash) expect(encryption_options[:schema_map]).to eq(schema_map) expect(encryption_options[:bypass_auto_encryption]).to eq(bypass_auto_encryption) expect(encryption_options[:extra_options][:mongocryptd_uri]).to eq(mongocryptd_uri) expect(encryption_options[:extra_options][:mongocryptd_bypass_spawn]).to eq(mongocryptd_bypass_spawn) expect(encryption_options[:extra_options][:mongocryptd_spawn_path]).to eq(mongocryptd_spawn_path) expect(encryption_options[:extra_options][:mongocryptd_spawn_args]).to eq(mongocryptd_spawn_args) expect(client.encrypter.mongocryptd_client.options[:monitoring_io]).to be false end context 'with default extra options' do let(:auto_encryption_options) do { key_vault_namespace: key_vault_namespace, kms_providers: kms_providers, schema_map: schema_map, } end it 'sets key_vault_client with no encryption options' do key_vault_client = client.encrypter.key_vault_client expect(key_vault_client.options['auto_encryption_options']).to be_nil end it 'sets bypass_auto_encryption to false' do expect(encryption_options[:bypass_auto_encryption]).to be false end it 'sets extra options to defaults' do expect(encryption_options[:extra_options][:mongocryptd_uri]).to eq('mongodb://localhost:27020') expect(encryption_options[:extra_options][:mongocryptd_bypass_spawn]).to be false expect(encryption_options[:extra_options][:mongocryptd_spawn_path]).to eq('mongocryptd') expect(encryption_options[:extra_options][:mongocryptd_spawn_args]) .to eq([ '--idleShutdownTimeoutSecs=60' ]) end end context 'with mongocryptd_spawn_args that don\'t include idleShutdownTimeoutSecs' do let(:mongocryptd_spawn_args) { [ '--otherArgument=true' ] } it 'adds a default value to mongocryptd_spawn_args' do expect(encryption_options[:extra_options][:mongocryptd_spawn_args]) .to eq(mongocryptd_spawn_args + [ '--idleShutdownTimeoutSecs=60' ]) end end context 'with mongocryptd_spawn_args that has idleShutdownTimeoutSecs as two arguments' do let(:mongocryptd_spawn_args) { [ '--idleShutdownTimeoutSecs', 100 ] } it 'does not modify mongocryptd_spawn_args' do expect(encryption_options[:extra_options][:mongocryptd_spawn_args]).to eq(mongocryptd_spawn_args) end end context 'with default key_vault_client' do let(:key_vault_client) { nil } it 'creates a key_vault_client' do key_vault_client = 
encryption_options[:key_vault_client] expect(key_vault_client).to be_a(described_class) end end end end context 'with AWS KMS providers' do include_context 'with AWS kms_providers' do it_behaves_like 'a functioning auto encryption client' end end context 'with local KMS providers' do include_context 'with local kms_providers' do it_behaves_like 'a functioning auto encryption client' end end end context 'timeout options' do let(:client) do new_local_client( SpecConfig.instance.addresses, SpecConfig.instance.authorized_test_options.merge(options) ) end context 'when network timeouts are zero' do let(:options) { { socket_timeout: 0, connect_timeout: 0 } } it 'sets options to zeros' do expect(client.options[:socket_timeout]).to be == 0 expect(client.options[:connect_timeout]).to be == 0 end it 'connects and performs operations successfully' do expect { client.database.command(ping: 1) } .not_to raise_error end end %i[ socket_timeout connect_timeout ].each do |option| context "when #{option} is negative" do let(:options) { { option => -1 } } it 'fails client creation' do expect { client } .to raise_error(ArgumentError, /#{option} must be a non-negative number/) end end context "when #{option} is of the wrong type" do let(:options) { { option => '42' } } it 'fails client creation' do expect { client } .to raise_error(ArgumentError, /#{option} must be a non-negative number/) end end end context 'when :connect_timeout is very small' do # The driver reads first and checks the deadline second. # This means the read (in a monitor) can technically take more than # the connect timeout. Restrict to TLS configurations to make # the network I/O take longer. require_tls let(:options) do { connect_timeout: 1e-6, server_selection_timeout: 2 } end it 'allows client creation' do expect { client }.not_to raise_error end context 'non-lb' do require_topology :single, :replica_set, :sharded it 'fails server selection due to very small timeout' do expect { client.database.command(ping: 1) } .to raise_error(Mongo::Error::NoServerAvailable) end end context 'lb' do require_topology :load_balanced it 'fails the operation after successful server selection' do expect { client.database.command(ping: 1) } .to raise_error(Mongo::Error::SocketTimeoutError, /socket took over.*to connect/) end end end context 'when :socket_timeout is very small' do # The driver reads first and checks the deadline second. # This means the read (in a monitor) can technically take more than # the connect timeout. Restrict to TLS configurations to make # the network I/O take longer. 
require_tls let(:options) do { socket_timeout: 1e-6, server_selection_timeout: 2 } end it 'allows client creation' do expect { client }.not_to raise_error end retry_test it 'fails operations due to very small timeout' do expect { client.database.command(ping: 1) } .to raise_error(Mongo::Error::SocketTimeoutError) end end end context 'retry_writes option' do let(:client) do new_local_client_nmio(SpecConfig.instance.addresses, options) end context 'when retry_writes is true' do let(:options) do { retry_writes: true } end it 'sets retry_writes to true' do expect(client.options['retry_writes']).to be true end end context 'when retry_writes is false' do let(:options) do { retry_writes: false } end it 'sets retry_writes to false' do expect(client.options['retry_writes']).to be false end end context 'when retry_writes is not given' do let(:options) { {} } it 'sets retry_writes to true' do expect(client.options['retry_writes']).to be true end end end context 'when compressors are provided' do let(:client) do new_local_client( SpecConfig.instance.addresses, SpecConfig.instance.all_test_options.merge(options) ) end context 'when the compressor is not supported by the driver' do require_warning_clean let(:options) do { compressors: %w[ snoopy ] } end it 'does not set the compressor and warns' do expect(Mongo::Logger.logger).to receive(:warn).with(/Unsupported compressor/) expect(client.options['compressors']).to be_nil end it 'sets the compression key of the handshake document to an empty array' do expect(client.cluster.app_metadata.send(:document)[:compression]).to eq([]) end context 'when one supported compressor and one unsupported compressor are provided' do require_compression min_server_fcv '3.6' let(:options) do { compressors: %w[ zlib snoopy ] } end it 'does not set the unsupported compressor and warns' do expect(Mongo::Logger.logger).to receive(:warn).at_least(:once) expect(client.options['compressors']).to eq(%w[ zlib ]) end it 'sets the compression key of the handshake document to the list of supported compressors' do expect(client.cluster.app_metadata.send(:document)[:compression]).to eq(%w[ zlib ]) end end end context 'when the compressor is not supported by the server' do max_server_version '3.4' let(:options) do { compressors: %w[ zlib ] } end it 'does not set the compressor and warns' do expect(Mongo::Logger.logger).to receive(:warn).at_least(:once) expect(client.cluster.next_primary.monitor.compressor).to be_nil end end context 'when zlib compression is requested' do require_zlib_compression let(:options) do { compressors: %w[ zlib ] } end it 'sets the compressor' do expect(client.options['compressors']).to eq(options[:compressors]) end it 'sends the compressor in the compression key of the handshake document' do expect(client.cluster.app_metadata.send(:document)[:compression]).to eq(options[:compressors]) end context 'when server supports compression' do min_server_fcv '3.6' it 'uses compression for messages' do expect(Mongo::Protocol::Compressed).to receive(:new).at_least(:once).and_call_original client[TEST_COLL].find({}, limit: 1).first end end it 'does not use compression for authentication messages' do expect(Mongo::Protocol::Compressed).not_to receive(:new) client.cluster.next_primary.send(:with_connection, &:connect!) 
end end context 'when snappy compression is requested and supported by the server' do min_server_version '3.6' let(:options) do { compressors: %w[ snappy ] } end context 'when snappy gem is installed' do require_snappy_compression it 'creates the client' do expect(client.options['compressors']).to eq(%w[ snappy ]) end end context 'when snappy gem is not installed' do require_no_snappy_compression it 'raises an exception' do expect do client end.to raise_error(Mongo::Error::UnmetDependency, /Cannot enable snappy compression/) end end end context 'when zstd compression is requested and supported by the server' do min_server_version '4.2' let(:options) do { compressors: %w[ zstd ] } end context 'when zstd gem is installed' do require_zstd_compression it 'creates the client' do expect(client.options['compressors']).to eq(%w[ zstd ]) end end context 'when zstd gem is not installed' do require_no_zstd_compression it 'raises an exception' do expect do client end.to raise_error(Mongo::Error::UnmetDependency, /Cannot enable zstd compression/) end end end end context 'when compressors are not provided' do require_no_compression let(:client) do authorized_client end it 'does not set the compressor' do expect(client.options['compressors']).to be_nil end it 'sets the compression key of the handshake document to an empty array' do expect(client.cluster.app_metadata.send(:document)[:compression]).to eq([]) end it 'does not use compression for messages' do client[TEST_COLL].find({}, limit: 1).first expect(Mongo::Protocol::Compressed).not_to receive(:new) end end context 'when a zlib_compression_level option is provided' do require_compression min_server_fcv '3.6' let(:client) do new_local_client_nmio( SpecConfig.instance.addresses, SpecConfig.instance.test_options.merge(zlib_compression_level: 1) ) end it 'sets the option on the client' do expect(client.options[:zlib_compression_level]).to eq(1) end end context 'when ssl options are provided' do let(:options) do { ssl: true, ssl_ca_cert: SpecConfig.instance.ca_cert_path, ssl_ca_cert_string: 'ca cert string', ssl_ca_cert_object: 'ca cert object', ssl_cert: SpecConfig.instance.client_cert_path, ssl_cert_string: 'cert string', ssl_cert_object: 'cert object', ssl_key: SpecConfig.instance.client_key_path, ssl_key_string: 'key string', ssl_key_object: 'key object', ssl_key_pass_phrase: 'passphrase', ssl_verify: true, } end let(:client) do new_local_client_nmio(SINGLE_CLIENT, options) end it 'sets the ssl option' do expect(client.options[:ssl]).to eq(options[:ssl]) end it 'sets the ssl_ca_cert option' do expect(client.options[:ssl_ca_cert]).to eq(options[:ssl_ca_cert]) end it 'sets the ssl_ca_cert_string option' do expect(client.options[:ssl_ca_cert_string]).to eq(options[:ssl_ca_cert_string]) end it 'sets the ssl_ca_cert_object option' do expect(client.options[:ssl_ca_cert_object]).to eq(options[:ssl_ca_cert_object]) end it 'sets the ssl_cert option' do expect(client.options[:ssl_cert]).to eq(options[:ssl_cert]) end it 'sets the ssl_cert_string option' do expect(client.options[:ssl_cert_string]).to eq(options[:ssl_cert_string]) end it 'sets the ssl_cert_object option' do expect(client.options[:ssl_cert_object]).to eq(options[:ssl_cert_object]) end it 'sets the ssl_key option' do expect(client.options[:ssl_key]).to eq(options[:ssl_key]) end it 'sets the ssl_key_string option' do expect(client.options[:ssl_key_string]).to eq(options[:ssl_key_string]) end it 'sets the ssl_key_object option' do expect(client.options[:ssl_key_object]).to eq(options[:ssl_key_object]) 
    it 'sets the ssl_key_pass_phrase option' do
      expect(client.options[:ssl_key_pass_phrase]).to eq(options[:ssl_key_pass_phrase])
    end

    it 'sets the ssl_verify option' do
      expect(client.options[:ssl_verify]).to eq(options[:ssl_verify])
    end
  end

  context 'when no database is provided' do
    let(:client) do
      new_local_client_nmio(SINGLE_CLIENT, read: { mode: :secondary })
    end

    it 'defaults the database to admin' do
      expect(client.database.name).to eq('admin')
    end
  end

  context 'when a database is provided' do
    let(:client) do
      new_local_client_nmio(SINGLE_CLIENT, database: :testdb)
    end

    it 'sets the current database' do
      expect(client[:users].name).to eq('users')
    end
  end

  context 'when providing a custom logger' do
    let(:logger) do
      Logger.new($stdout).tap do |l|
        l.level = Logger::FATAL
      end
    end

    let(:client) do
      authorized_client.with(logger: logger)
    end

    it 'does not use the global logger' do
      expect(client.cluster.logger).not_to eq(Mongo::Logger.logger)
    end
  end

  context 'when providing a heartbeat_frequency' do
    let(:client) do
      new_local_client_nmio(SINGLE_CLIENT, heartbeat_frequency: 2)
    end

    it 'sets the heartbeat frequency' do
      expect(client.cluster.options[:heartbeat_frequency]).to eq(client.options[:heartbeat_frequency])
    end
  end

  context 'when max_connecting is provided' do
    let(:client) do
      new_local_client_nmio(SINGLE_CLIENT, options)
    end

    context 'when max_connecting is a positive integer' do
      let(:options) do
        { max_connecting: 5 }
      end

      it 'sets the max connecting' do
        expect(client.options[:max_connecting]).to eq(options[:max_connecting])
      end
    end

    context 'when max_connecting is a negative integer' do
      let(:options) do
        { max_connecting: -5 }
      end

      it 'raises an exception' do
        expect { client }.to raise_error(Mongo::Error::InvalidMaxConnecting)
      end
    end
  end

  context 'when min_pool_size is provided' do
    let(:client) { new_local_client_nmio(SINGLE_CLIENT, options) }

    context 'when max_pool_size is provided' do
      context 'when the min_pool_size is greater than the max_pool_size' do
        let(:options) { { min_pool_size: 20, max_pool_size: 10 } }

        it 'raises an Exception' do
          expect { client }
            .to raise_exception(Mongo::Error::InvalidMinPoolSize)
        end
      end

      context 'when the min_pool_size is less than the max_pool_size' do
        let(:options) { { min_pool_size: 10, max_pool_size: 20 } }

        it 'sets the option' do
          expect(client.options[:min_pool_size]).to eq(options[:min_pool_size])
          expect(client.options[:max_pool_size]).to eq(options[:max_pool_size])
        end
      end

      context 'when the min_pool_size is equal to the max_pool_size' do
        let(:options) { { min_pool_size: 10, max_pool_size: 10 } }

        it 'sets the option' do
          expect(client.options[:min_pool_size]).to eq(options[:min_pool_size])
          expect(client.options[:max_pool_size]).to eq(options[:max_pool_size])
        end
      end

      context 'when max_pool_size is zero (unlimited)' do
        let(:options) { { min_pool_size: 10, max_pool_size: 0 } }

        it 'sets the option' do
          expect(client.options[:min_pool_size]).to eq(options[:min_pool_size])
          expect(client.options[:max_pool_size]).to eq(options[:max_pool_size])
        end
      end
    end

    context 'when max_pool_size is not provided' do
      context 'when the min_pool_size is greater than the default max_pool_size' do
        let(:options) { { min_pool_size: 30 } }

        it 'raises an Exception' do
          expect { client }
            .to raise_exception(Mongo::Error::InvalidMinPoolSize)
        end
      end

      context 'when the min_pool_size is less than the default max_pool_size' do
        let(:options) { { min_pool_size: 3 } }

        it 'sets the option' do
          expect(client.options[:min_pool_size]).to eq(options[:min_pool_size])
        end
      end

      context 'when the min_pool_size is equal to the max_pool_size' do
        let(:options) do
          { min_pool_size: Mongo::Server::ConnectionPool::DEFAULT_MAX_SIZE }
        end

        it 'sets the option' do
          expect(client.options[:min_pool_size]).to eq(options[:min_pool_size])
        end
      end
    end
  end
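
  # Illustrative sketch (not part of the original spec): the pool sizing
  # options validated above. min_pool_size must not exceed max_pool_size
  # unless max_pool_size is 0 (unlimited). The values shown are hypothetical.
  #
  #   client = Mongo::Client.new(
  #     [ '127.0.0.1:27017' ],
  #     min_pool_size: 5,
  #     max_pool_size: 20,
  #     max_connecting: 4
  #   )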
  context 'when max_pool_size is 0 (unlimited)' do
    let(:client) { new_local_client_nmio(SINGLE_CLIENT, options) }
    let(:options) { { max_pool_size: 0 } }

    it 'sets the option' do
      expect(client.options[:max_pool_size]).to eq(options[:max_pool_size])
    end
  end

  context 'when max_pool_size and min_pool_size are both nil' do
    let(:options) { { min_pool_size: nil, max_pool_size: nil } }
    let(:client) { new_local_client_nmio(SINGLE_CLIENT, options) }

    it 'does not set either option' do
      expect(client.options[:max_pool_size]).to be_nil
      expect(client.options[:min_pool_size]).to be_nil
    end
  end

  context 'when platform details are specified' do
    let(:app_metadata) do
      client.cluster.app_metadata
    end

    let(:client) do
      new_local_client_nmio(SINGLE_CLIENT, platform: 'mongoid-6.0.2')
    end

    it 'includes the platform info in the app metadata' do
      expect(app_metadata.client_document[:platform]).to match(/mongoid-6\.0\.2/)
    end
  end

  context 'when platform details are not specified' do
    let(:app_metadata) do
      client.cluster.app_metadata
    end

    let(:client) do
      new_local_client_nmio(SINGLE_CLIENT)
    end

    context 'mri' do
      require_mri

      let(:platform_string) do
        [
          "Ruby #{RUBY_VERSION}",
          RUBY_PLATFORM,
          RbConfig::CONFIG['build'],
          'A',
        ].join(', ')
      end

      it 'does not include the platform info in the app metadata' do
        expect(app_metadata.client_document[:platform]).to eq(platform_string)
      end
    end

    context 'jruby' do
      require_jruby

      let(:platform_string) do
        [
          "JRuby #{JRUBY_VERSION}",
          "like Ruby #{RUBY_VERSION}",
          RUBY_PLATFORM,
          "JVM #{java.lang.System.get_property('java.version')}",
          RbConfig::CONFIG['build'],
          'A',
        ].join(', ')
      end

      it 'does not include the platform info in the app metadata' do
        expect(app_metadata.client_document[:platform]).to eq(platform_string)
      end
    end
  end
end

context 'when providing a connection string' do
  context 'when the string uses the SRV Protocol' do
    require_external_connectivity

    let(:uri) { 'mongodb+srv://test5.test.build.10gen.cc/testdb' }
    let(:client) { new_local_client_nmio(uri) }

    it 'sets the database' do
      expect(client.options[:database]).to eq('testdb')
    end
  end

  context 'when a database is provided' do
    let(:uri) { 'mongodb://127.0.0.1:27017/testdb' }
    let(:client) { new_local_client_nmio(uri) }

    it 'sets the database' do
      expect { client[:users] }.not_to raise_error
    end
  end

  context 'when a database is not provided' do
    let(:uri) { 'mongodb://127.0.0.1:27017' }
    let(:client) { new_local_client_nmio(uri) }

    it 'defaults the database to admin' do
      expect(client.database.name).to eq('admin')
    end
  end

  context 'when URI options are provided' do
    let(:uri) { 'mongodb://127.0.0.1:27017/testdb?w=3' }
    let(:client) { new_local_client_nmio(uri) }

    let(:expected_options) do
      Mongo::Options::Redacted.new(
        write_concern: { w: 3 },
        monitoring_io: false,
        database: 'testdb',
        retry_writes: true,
        retry_reads: true
      )
    end

    it 'sets the options' do
      expect(client.options).to eq(expected_options)
    end

    context 'when max_connecting is provided' do
      context 'when max_connecting is a positive integer' do
        let(:uri) do
          'mongodb://127.0.0.1:27017/?maxConnecting=10'
        end

        it 'sets the max connecting' do
          expect(client.options[:max_connecting]).to eq(10)
        end
      end

      context 'when max_connecting is zero' do
        let(:uri) do
          'mongodb://127.0.0.1:27017/?maxConnecting=0'
        end

        it 'raises an exception' do
          expect { client }.to raise_error(Mongo::Error::InvalidMaxConnecting)
        end
      end
    end
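
    # Illustrative sketch (not part of the original spec): the same options
    # can be supplied as URI query parameters; the URI below is hypothetical.
    #
    #   client = Mongo::Client.new(
    #     'mongodb://127.0.0.1:27017/testdb?w=3&maxConnecting=10&retryWrites=true'
    #   )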
    context 'when min_pool_size is provided' do
      context 'when max_pool_size is provided' do
        context 'when the min_pool_size is greater than the max_pool_size' do
          let(:uri) do
            'mongodb://127.0.0.1:27017/?minPoolSize=20&maxPoolSize=10'
          end

          it 'raises an Exception' do
            expect { client }
              .to raise_exception(Mongo::Error::InvalidMinPoolSize)
          end
        end

        context 'when the min_pool_size is less than the max_pool_size' do
          let(:uri) do
            'mongodb://127.0.0.1:27017/?minPoolSize=10&maxPoolSize=20'
          end

          it 'sets the option' do
            expect(client.options[:min_pool_size]).to eq(10)
            expect(client.options[:max_pool_size]).to eq(20)
          end
        end

        context 'when the min_pool_size is equal to the max_pool_size' do
          let(:uri) do
            'mongodb://127.0.0.1:27017/?minPoolSize=10&maxPoolSize=10'
          end

          it 'sets the option' do
            expect(client.options[:min_pool_size]).to eq(10)
            expect(client.options[:max_pool_size]).to eq(10)
          end
        end

        context 'when max_pool_size is 0 (unlimited)' do
          let(:uri) do
            'mongodb://127.0.0.1:27017/?minPoolSize=10&maxPoolSize=0'
          end

          it 'sets the option' do
            expect(client.options[:min_pool_size]).to eq(10)
            expect(client.options[:max_pool_size]).to eq(0)
          end
        end
      end

      context 'when max_pool_size is not provided' do
        context 'when the min_pool_size is greater than the default max_pool_size' do
          let(:uri) { 'mongodb://127.0.0.1:27017/?minPoolSize=30' }

          it 'raises an Exception' do
            expect { client }
              .to raise_exception(Mongo::Error::InvalidMinPoolSize)
          end
        end

        context 'when the min_pool_size is less than the default max_pool_size' do
          let(:uri) { 'mongodb://127.0.0.1:27017/?minPoolSize=3' }

          it 'sets the option' do
            expect(client.options[:min_pool_size]).to eq(3)
          end
        end

        context 'when the min_pool_size is equal to the default max_pool_size' do
          let(:uri) { 'mongodb://127.0.0.1:27017/?minPoolSize=5' }

          it 'sets the option' do
            expect(client.options[:min_pool_size]).to eq(5)
          end
        end
      end
    end

    context 'when retryReads URI option is given' do
      context 'it is false' do
        let(:uri) { 'mongodb://127.0.0.1:27017/testdb?retryReads=false' }

        it 'sets the option on the client' do
          expect(client.options[:retry_reads]).to be false
        end
      end

      context 'it is true' do
        let(:uri) { 'mongodb://127.0.0.1:27017/testdb?retryReads=true' }

        it 'sets the option on the client' do
          expect(client.options[:retry_reads]).to be true
        end
      end
    end

    context 'when retryWrites URI option is given' do
      context 'it is false' do
        let(:uri) { 'mongodb://127.0.0.1:27017/testdb?retryWrites=false' }

        it 'sets the option on the client' do
          expect(client.options[:retry_writes]).to be false
        end
      end

      context 'it is true' do
        let(:uri) { 'mongodb://127.0.0.1:27017/testdb?retryWrites=true' }

        it 'sets the option on the client' do
          expect(client.options[:retry_writes]).to be true
        end
      end
    end
  end

  context 'when options are provided not in the string' do
    let(:uri) { 'mongodb://127.0.0.1:27017/testdb' }

    let(:client) do
      new_local_client_nmio(uri, write: { w: 3 })
    end

    let(:expected_options) do
      Mongo::Options::Redacted.new(
        write: { w: 3 },
        monitoring_io: false,
        database: 'testdb',
        retry_writes: true,
        retry_reads: true
      )
    end

    it 'sets the options' do
      expect(client.options).to eq(expected_options)
    end
  end

  context 'when options are provided in the URI and as Ruby options' do
    let(:uri) { 'mongodb://127.0.0.1:27017/testdb?w=3' }

    let(:client) do
      new_local_client_nmio(uri, option_name => { w: 4 })
    end

    let(:expected_options) do
      Mongo::Options::Redacted.new(
        option_name => { w: 4 },
        monitoring_io: false,
        database: 'testdb',
        retry_writes: true,
        retry_reads: true
      )
    end
    shared_examples_for 'allows explicit options to take preference' do
      it 'allows explicit options to take preference' do
        expect(client.options).to eq(expected_options)
      end
    end

    context 'when using :write' do
      let(:option_name) { :write }

      it_behaves_like 'allows explicit options to take preference'
    end

    context 'when using :write_concern' do
      let(:option_name) { :write_concern }

      it_behaves_like 'allows explicit options to take preference'
    end
  end

  context 'when a replica set name is provided' do
    let(:uri) { 'mongodb://127.0.0.1:27017/testdb?replicaSet=testing' }
    let(:client) { new_local_client_nmio(uri) }

    it 'sets the correct cluster topology' do
      expect(client.cluster.topology).to be_a(Mongo::Cluster::Topology::ReplicaSetNoPrimary)
    end
  end
end

context 'when Ruby options are provided' do
  let(:client) do
    new_local_client_nmio(SINGLE_CLIENT, options)
  end

  describe 'connection option conflicts' do
    context 'direct_connection: true and multiple seeds' do
      let(:client) do
        new_local_client_nmio([ '127.0.0.1:27017', '127.0.0.2:27017' ], direct_connection: true)
      end

      it 'is rejected' do
        expect { client }
          .to raise_error(ArgumentError, /direct_connection=true cannot be used with multiple seeds/)
      end
    end

    context 'direct_connection: true and connect: :direct' do
      let(:options) do
        { direct_connection: true, connect: :direct }
      end

      it 'is accepted' do
        expect(client.options[:direct_connection]).to be true
        expect(client.options[:connect]).to be :direct
      end
    end

    context 'direct_connection: true and connect: :replica_set' do
      let(:options) do
        { direct_connection: true, connect: :replica_set }
      end

      it 'is rejected' do
        expect { client }
          .to raise_error(
            ArgumentError,
            /Conflicting client options: direct_connection=true and connect=replica_set/
          )
      end
    end

    context 'direct_connection: true and connect: :sharded' do
      let(:options) do
        { direct_connection: true, connect: :sharded }
      end

      it 'is rejected' do
        expect { client }
          .to raise_error(ArgumentError, /Conflicting client options: direct_connection=true and connect=sharded/)
      end
    end

    context 'direct_connection: false and connect: :direct' do
      let(:options) do
        { direct_connection: false, connect: :direct }
      end

      it 'is rejected' do
        expect { client }
          .to raise_error(ArgumentError, /Conflicting client options: direct_connection=false and connect=direct/)
      end
    end

    context 'direct_connection: false and connect: :replica_set' do
      let(:options) do
        { direct_connection: false, connect: :replica_set, replica_set: 'foo' }
      end

      it 'is accepted' do
        expect(client.options[:direct_connection]).to be false
        expect(client.options[:connect]).to be :replica_set
      end
    end

    context 'direct_connection: false and connect: :sharded' do
      let(:options) do
        { direct_connection: false, connect: :sharded }
      end

      it 'is accepted' do
        expect(client.options[:direct_connection]).to be false
        expect(client.options[:connect]).to be :sharded
      end
    end

    context 'load_balanced: true and multiple seeds' do
      let(:client) do
        new_local_client_nmio([ '127.0.0.1:27017', '127.0.0.2:27017' ], load_balanced: true)
      end

      it 'is rejected' do
        expect { client }
          .to raise_error(ArgumentError, /load_balanced=true cannot be used with multiple seeds/)
      end
    end

    context 'load_balanced: false and multiple seeds' do
      let(:client) do
        new_local_client_nmio([ '127.0.0.1:27017', '127.0.0.2:27017' ], load_balanced: false)
      end

      it 'is accepted' do
        expect { client }.not_to raise_error
        expect(client.options[:load_balanced]).to be false
      end
    end

    context 'load_balanced: true and direct_connection: true' do
      let(:client) do
        new_local_client_nmio(SINGLE_CLIENT, load_balanced: true, direct_connection: true)
      end

      it 'is rejected' do
        expect { client }
          .to raise_error(ArgumentError, /direct_connection=true cannot be used with load_balanced=true/)
      end
    end
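
    # Illustrative sketch (not part of the original spec): a valid
    # direct-connection client takes exactly one seed. Host is hypothetical.
    #
    #   client = Mongo::Client.new([ '127.0.0.1:27017' ], direct_connection: true)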
    context 'load_balanced: true and direct_connection: false' do
      let(:client) do
        new_local_client_nmio(SINGLE_CLIENT, load_balanced: true, direct_connection: false)
      end

      it 'is accepted' do
        expect { client }.not_to raise_error
        expect(client.options[:load_balanced]).to be true
        expect(client.options[:direct_connection]).to be false
      end
    end

    context 'load_balanced: false and direct_connection: true' do
      let(:client) do
        new_local_client_nmio(SINGLE_CLIENT, load_balanced: false, direct_connection: true)
      end

      it 'is accepted' do
        expect { client }.not_to raise_error
        expect(client.options[:load_balanced]).to be false
        expect(client.options[:direct_connection]).to be true
      end
    end

    [ :direct, 'direct', :sharded, 'sharded' ].each do |v|
      context "load_balanced: true and connect: #{v.inspect}" do
        let(:client) do
          new_local_client_nmio(SINGLE_CLIENT, load_balanced: true, connect: v)
        end

        it 'is rejected' do
          expect { client }
            .to raise_error(ArgumentError, /connect=#{v} cannot be used with load_balanced=true/)
        end
      end
    end

    [ nil ].each do |v|
      context "load_balanced: true and connect: #{v.inspect}" do
        let(:client) do
          new_local_client_nmio(SINGLE_CLIENT, load_balanced: true, connect: v)
        end

        it 'is accepted' do
          expect { client }.not_to raise_error
          expect(client.options[:load_balanced]).to be true
          expect(client.options[:connect]).to eq v
        end
      end
    end

    [ :load_balanced, 'load_balanced' ].each do |v|
      context "load_balanced: true and connect: #{v.inspect}" do
        let(:client) do
          new_local_client_nmio(SINGLE_CLIENT, load_balanced: true, connect: v)
        end

        it 'is accepted' do
          expect { client }.not_to raise_error
          expect(client.options[:load_balanced]).to be true
          expect(client.options[:connect]).to eq v
        end
      end

      context "replica_set and connect: #{v.inspect}" do
        let(:client) do
          new_local_client_nmio(SINGLE_CLIENT, replica_set: 'foo', connect: v)
        end

        it 'is rejected' do
          expect { client }
            .to raise_error(ArgumentError, /connect=load_balanced cannot be used with replica_set option/)
        end
      end

      context "direct_connection=true and connect: #{v.inspect}" do
        let(:client) do
          new_local_client_nmio(SINGLE_CLIENT, direct_connection: true, connect: v)
        end

        it 'is rejected' do
          expect { client }
            .to raise_error(
              ArgumentError,
              /Conflicting client options: direct_connection=true and connect=load_balanced/
            )
        end
      end

      context "multiple seed addresses and connect: #{v.inspect}" do
        let(:client) do
          new_local_client_nmio([ '127.0.0.1:27017', '127.0.0.1:1234' ], connect: v)
        end

        it 'is rejected' do
          expect { client }
            .to raise_error(ArgumentError, /connect=load_balanced cannot be used with multiple seeds/)
        end
      end
    end

    [ :replica_set, 'replica_set' ].each do |v|
      context "load_balanced: true and connect: #{v.inspect}" do
        let(:client) do
          new_local_client_nmio(SINGLE_CLIENT, load_balanced: true, connect: v, replica_set: 'x')
        end

        it 'is rejected' do
          expect { client }
            .to raise_error(ArgumentError, /connect=replica_set cannot be used with load_balanced=true/)
        end
      end

      context "load_balanced: true and #{v.inspect} option" do
        let(:client) do
          new_local_client_nmio(SINGLE_CLIENT, load_balanced: true, v => 'rs')
        end

        it 'is rejected' do
          expect { client }
            .to raise_error(ArgumentError, /load_balanced=true cannot be used with replica_set option/)
        end
      end
    end

    context 'srv_max_hosts > 0 and load_balanced: true' do
      let(:client) do
        new_local_client_nmio(SINGLE_CLIENT, srv_max_hosts: 1, load_balanced: true)
      end

      it 'is rejected' do
        expect { client }
          .to raise_error(ArgumentError, /:srv_max_hosts > 0 cannot be used with :load_balanced=true/)
      end
    end
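
    # Illustrative sketch (not part of the original spec): in load-balanced
    # mode the client takes a single address and no SRV host limit. The host
    # below is hypothetical.
    #
    #   client = Mongo::Client.new([ 'lb.example.com:27017' ], load_balanced: true)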
    context 'srv_max_hosts > 0 and replica_set' do
      let(:client) do
        new_local_client_nmio(SINGLE_CLIENT, srv_max_hosts: 1, replica_set: 'rs')
      end

      it 'is rejected' do
        expect do
          client
        end.to raise_error(ArgumentError, /:srv_max_hosts > 0 cannot be used with :replica_set option/)
      end
    end

    context 'srv_max_hosts < 0' do
      let(:client) do
        new_local_client_nmio(SINGLE_CLIENT, srv_max_hosts: -1)
      end

      it 'is accepted and does not add the srv_max_hosts to uri_options' do
        expect { client }.not_to raise_error
        expect(client.options).not_to have_key(:srv_max_hosts)
      end
    end

    context 'srv_max_hosts invalid type' do
      let(:client) do
        new_local_client_nmio(SINGLE_CLIENT, srv_max_hosts: 'foo')
      end

      it 'is accepted and does not add the srv_max_hosts to uri_options' do
        expect { client }.not_to raise_error
        expect(client.options).not_to have_key(:srv_max_hosts)
      end
    end

    context 'srv_max_hosts with non-SRV URI' do
      let(:client) do
        new_local_client_nmio(SINGLE_CLIENT, srv_max_hosts: 1)
      end

      it 'is rejected' do
        expect { client }
          .to raise_error(ArgumentError, /:srv_max_hosts cannot be used on non-SRV URI/)
      end
    end

    context 'srv_service_name with non-SRV URI' do
      let(:client) do
        new_local_client_nmio(SINGLE_CLIENT, srv_service_name: 'customname')
      end

      it 'is rejected' do
        expect { client }
          .to raise_error(ArgumentError, /:srv_service_name cannot be used on non-SRV URI/)
      end
    end
  end

  context 'with SRV lookups mocked at Resolver' do
    let(:srv_result) do
      double('srv result').tap do |result|
        allow(result).to receive(:empty?).and_return(false)
        allow(result).to receive(:address_strs).and_return(
          [ ClusterConfig.instance.primary_address_str ]
        )
      end
    end

    let(:client) do
      allow_any_instance_of(Mongo::Srv::Resolver).to receive(:get_records).and_return(srv_result)
      allow_any_instance_of(Mongo::Srv::Resolver).to receive(:get_txt_options_string)

      new_local_client_nmio('mongodb+srv://foo.a.b', options)
    end

    context 'when setting srv_max_hosts' do
      let(:srv_max_hosts) { 1 }
      let(:options) { { srv_max_hosts: srv_max_hosts } }

      it 'is accepted and sets srv_max_hosts' do
        expect { client }.not_to raise_error
        expect(client.options[:srv_max_hosts]).to eq(srv_max_hosts)
      end
    end

    context 'when setting srv_max_hosts to 0' do
      let(:srv_max_hosts) { 0 }
      let(:options) { { srv_max_hosts: srv_max_hosts } }

      it 'is accepted and sets srv_max_hosts' do
        expect { client }.not_to raise_error
        expect(client.options[:srv_max_hosts]).to eq(srv_max_hosts)
      end
    end

    context 'when setting srv_service_name' do
      let(:srv_service_name) { 'customname' }
      let(:options) { { srv_service_name: srv_service_name } }

      it 'is accepted and sets srv_service_name' do
        expect { client }.not_to raise_error
        expect(client.options[:srv_service_name]).to eq(srv_service_name)
      end
    end
  end

  context ':bg_error_backtrace option' do
    [ true, false, nil, 42 ].each do |valid_value|
      context "valid value: #{valid_value.inspect}" do
        let(:options) do
          { bg_error_backtrace: valid_value }
        end

        it 'is accepted' do
          expect(client.options[:bg_error_backtrace]).to be == valid_value
        end
      end
    end

    context 'invalid value type' do
      let(:options) do
        { bg_error_backtrace: 'yes' }
      end

      it 'is rejected' do
        expect { client }
          .to raise_error(
            ArgumentError,
            /:bg_error_backtrace option value must be true, false, nil or a positive integer/
          )
      end
    end

    context 'invalid value' do
      [ 0, -1, 42.0 ].each do |invalid_value|
        context "invalid value: #{invalid_value.inspect}" do
          let(:options) do
            { bg_error_backtrace: invalid_value }
          end
          it 'is rejected' do
            expect { client }
              .to raise_error(
                ArgumentError,
                /:bg_error_backtrace option value must be true, false, nil or a positive integer/
              )
          end
        end
      end
    end
  end

  describe ':read option' do
    %i[ primary primary_preferred secondary secondary_preferred nearest ].each do |sym|
      describe sym.to_s do
        context 'when given as symbol' do
          let(:options) do
            { read: { mode: sym } }
          end

          it 'is accepted' do
            # the key got converted to a string here
            expect(client.read_preference).to eq({ 'mode' => sym })
          end
        end

        context 'when given as string' do
          let(:options) do
            { read: { mode: sym.to_s } }
          end

          # string keys are not documented as being allowed
          # but the code accepts them
          it 'is accepted' do
            # the key got converted to a string here
            # the value remains a string
            expect(client.read_preference).to eq({ 'mode' => sym.to_s })
          end
        end
      end
    end

    context 'when not linting' do
      require_no_linting

      it 'rejects bogus read preference as symbol' do
        expect do
          new_local_client_nmio(SINGLE_CLIENT, read: { mode: :bogus })
        end.to raise_error(
          Mongo::Error::InvalidReadOption,
          'Invalid read preference value: {"mode"=>:bogus}: ' \
          'mode bogus is not one of recognized modes'
        )
      end

      it 'rejects bogus read preference as string' do
        expect do
          new_local_client_nmio(SINGLE_CLIENT, read: { mode: 'bogus' })
        end.to raise_error(
          Mongo::Error::InvalidReadOption,
          'Invalid read preference value: {"mode"=>"bogus"}: mode bogus is not one of recognized modes'
        )
      end

      it 'rejects read option specified as a string' do
        expect do
          new_local_client_nmio(SINGLE_CLIENT, read: 'primary')
        end.to raise_error(
          Mongo::Error::InvalidReadOption,
          'Invalid read preference value: "primary": ' \
          'the read preference must be specified as a hash: { mode: "primary" }'
        )
      end

      it 'rejects read option specified as a symbol' do
        expect do
          new_local_client_nmio(SINGLE_CLIENT, read: :primary)
        end.to raise_error(
          Mongo::Error::InvalidReadOption,
          'Invalid read preference value: :primary: ' \
          'the read preference must be specified as a hash: { mode: :primary }'
        )
      end
    end
  end

  context 'when setting read concern options' do
    min_server_fcv '3.2'

    context 'when read concern is valid' do
      let(:options) do
        { read_concern: { level: :local } }
      end

      it 'does not warn' do
        expect(Mongo::Logger.logger).not_to receive(:warn)
        new_local_client_nmio(SpecConfig.instance.addresses, options)
      end
    end

    context 'when read concern has an invalid key' do
      require_no_linting

      let(:options) do
        { read_concern: { hello: :local } }
      end

      it 'logs a warning' do
        expect(Mongo::Logger.logger).to receive(:warn).with(/Read concern has invalid keys: hello/)
        new_local_client_nmio(SpecConfig.instance.addresses, options)
      end
    end

    context 'when read concern has a non-user-settable key' do
      let(:options) do
        { read_concern: { after_cluster_time: 100 } }
      end

      it 'raises an exception' do
        expect do
          new_local_client_nmio(SpecConfig.instance.addresses, options)
        end.to raise_error(
          Mongo::Error::InvalidReadConcern,
          'The after_cluster_time read_concern option cannot be specified by the user'
        )
      end
    end
  end

  context 'when an invalid option is provided' do
    let(:options) do
      { ssl: false, invalid: :test }
    end

    it 'does not set the option' do
      expect(client.options.keys).not_to include('invalid')
    end

    it 'sets the valid options' do
      expect(client.options.keys).to include('ssl')
    end

    it 'warns that an invalid option has been specified' do
      expect(Mongo::Logger.logger).to receive(:warn)
      expect(client.options.keys).not_to include('invalid')
    end
  end

=begin WriteConcern object support
  context 'when write concern is provided via a WriteConcern object' do
    let(:options) do
      { write_concern: wc }
    end

    let(:wc) { Mongo::WriteConcern.get(w: 2) }

    it 'stores write concern options in client options' do
      expect(client.options[:write_concern]).to eq(
        Mongo::Options::Redacted.new(w: 2))
    end

    it 'caches write concern object' do
      expect(client.write_concern).to be wc
    end
  end
=end

  context ':wrapping_libraries option' do
    let(:options) do
      { wrapping_libraries: wrapping_libraries }
    end

    context 'valid input' do
      context 'symbol keys' do
        let(:wrapping_libraries) do
          [ { name: 'Mongoid', version: '7.1.2' } ].freeze
        end

        it 'works' do
          expect(client.options[:wrapping_libraries]).to be == [ { 'name' => 'Mongoid', 'version' => '7.1.2' } ]
        end
      end

      context 'string keys' do
        let(:wrapping_libraries) do
          [ { 'name' => 'Mongoid', 'version' => '7.1.2' } ].freeze
        end

        it 'works' do
          expect(client.options[:wrapping_libraries]).to be == [ { 'name' => 'Mongoid', 'version' => '7.1.2' } ]
        end
      end

      context 'Redacted keys' do
        let(:wrapping_libraries) do
          [ Mongo::Options::Redacted.new(name: 'Mongoid', version: '7.1.2') ].freeze
        end

        it 'works' do
          expect(client.options[:wrapping_libraries]).to be == [ { 'name' => 'Mongoid', 'version' => '7.1.2' } ]
        end
      end

      context 'two libraries' do
        let(:wrapping_libraries) do
          [
            { name: 'Mongoid', version: '7.1.2' },
            { name: 'Rails', version: '4.0', platform: 'Foobar' },
          ].freeze
        end

        it 'works' do
          expect(client.options[:wrapping_libraries]).to be == [
            { 'name' => 'Mongoid', 'version' => '7.1.2' },
            { 'name' => 'Rails', 'version' => '4.0', 'platform' => 'Foobar' },
          ]
        end
      end

      context 'empty array' do
        let(:wrapping_libraries) do
          []
        end

        it 'works' do
          expect(client.options[:wrapping_libraries]).to be == []
        end
      end

      context 'nil value' do
        let(:wrapping_libraries) do
          nil
        end

        it 'works' do
          expect(client.options[:wrapping_libraries]).to be_nil
        end
      end
    end

    context 'invalid input' do
      context 'hash given instead of an array' do
        let(:wrapping_libraries) do
          { name: 'Mongoid', version: '7.1.2' }.freeze
        end

        it 'is rejected' do
          expect { client }
            .to raise_error(ArgumentError, /:wrapping_libraries must be an array of hashes/)
        end
      end

      context 'invalid keys' do
        let(:wrapping_libraries) do
          [ { name: 'Mongoid', invalid: '7.1.2' } ].freeze
        end

        it 'is rejected' do
          expect { client }
            .to raise_error(ArgumentError, /:wrapping_libraries element has invalid keys/)
        end
      end

      context 'value includes |' do
        let(:wrapping_libraries) do
          [ { name: 'Mongoid|on|Rails', version: '7.1.2' } ].freeze
        end

        it 'is rejected' do
          expect { client }
            .to raise_error(ArgumentError, /:wrapping_libraries element value cannot include '|'/)
        end
      end
    end
  end

  context ':auth_mech_properties option' do
    context 'is nil' do
      let(:options) { { auth_mech_properties: nil } }

      it 'creates the client without the option' do
        expect(client.options).not_to have_key(:auth_mech_properties)
      end
    end
  end

  context ':server_api parameter' do
    context 'is a hash with symbol keys' do
      context 'using known keys' do
        let(:options) do
          {
            server_api: {
              version: '1',
              strict: true,
              deprecation_errors: false,
            }
          }
        end

        it 'is accepted' do
          expect(client.options[:server_api]).to be == {
            'version' => '1',
            'strict' => true,
            'deprecation_errors' => false,
          }
        end
      end

      context 'using an unknown version' do
        let(:options) do
          { server_api: { version: '42' } }
        end

        it 'is rejected' do
          expect { client }
            .to raise_error(ArgumentError, 'Unknown server API version: 42')
        end
      end

      context 'using an unknown option' do
        let(:options) do
          { server_api: { vversion: '1' } }
        end

        it 'is rejected' do
          expect { client }
            .to raise_error(ArgumentError, 'Unknown keys under :server_api: "vversion"')
        end
      end
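
      # Illustrative sketch (not part of the original spec): pinning the
      # server API version at client construction. Host is hypothetical.
      #
      #   client = Mongo::Client.new(
      #     [ '127.0.0.1:27017' ],
      #     server_api: { version: '1', strict: true }
      #   )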
      context 'using a value which is not a hash' do
        let(:options) do
          { server_api: 42 }
        end

        it 'is rejected' do
          expect { client }
            .to raise_error(ArgumentError, ':server_api value must be a hash: 42')
        end
      end
    end

    context 'when connected to a pre-OP_MSG server' do
      max_server_version '3.4'

      let(:options) do
        { server_api: { version: 1 } }
      end

      let(:client) do
        new_local_client(
          SpecConfig.instance.addresses,
          SpecConfig.instance.all_test_options.merge(options)
        )
      end

      it 'constructs the client' do
        expect(client).to be_a(described_class)
      end

      it 'does not discover servers' do
        client.cluster.servers_list.each do |s|
          expect(s.status).to eq('UNKNOWN')
        end
      end

      it 'fails operations' do
        expect { client.command(ping: 1) }
          .to raise_error(Mongo::Error::NoServerAvailable)
      end
    end
  end
end

context 'when making a block client' do
  context 'when the block doesn\'t raise an error' do
    let(:block_client) do
      c = nil
      described_class.new(
        SpecConfig.instance.addresses,
        SpecConfig.instance.test_options.merge(database: SpecConfig.instance.test_db)
      ) do |client|
        c = client
      end
      c
    end

    it 'is closed after block' do
      expect(block_client.cluster.connected?).to be false
    end

    context 'with auto encryption options' do
      require_libmongocrypt
      min_server_fcv '4.2'
      require_enterprise
      clean_slate

      include_context 'define shared FLE helpers'
      include_context 'with local kms_providers'

      let(:auto_encryption_options) do
        {
          key_vault_client: key_vault_client,
          key_vault_namespace: key_vault_namespace,
          kms_providers: kms_providers,
          schema_map: schema_map,
          extra_options: extra_options,
        }
      end

      let(:key_vault_client) { new_local_client_nmio(SpecConfig.instance.addresses) }

      let(:block_client) do
        c = nil
        described_class.new(
          SpecConfig.instance.addresses,
          SpecConfig.instance.test_options.merge(
            auto_encryption_options: auto_encryption_options,
            database: SpecConfig.instance.test_db
          )
        ) do |client|
          c = client
        end
        c
      end

      it 'closes all clients after block' do
        expect(block_client.cluster.connected?).to be false
        [
          block_client.encrypter.mongocryptd_client,
          block_client.encrypter.key_vault_client,
          block_client.encrypter.metadata_client
        ].each do |crypt_client|
          expect(crypt_client.cluster.connected?).to be false
        end
      end
    end
  end

  context 'when the block raises an error' do
    it 'is closed after the block' do
      block_client_raise = nil
      expect do
        described_class.new(
          SpecConfig.instance.addresses,
          SpecConfig.instance.test_options.merge(database: SpecConfig.instance.test_db)
        ) do |client|
          block_client_raise = client
          raise 'This is an error!'
        end
      end.to raise_error(StandardError, 'This is an error!')
      expect(block_client_raise.cluster.connected?).to be false
    end
  end

  context 'when the hosts given include the protocol' do
    it 'raises an error on mongodb://' do
      expect do
        described_class.new([ 'mongodb://127.0.0.1:27017/test' ])
      end.to raise_error(ArgumentError,
                         "Host 'mongodb://127.0.0.1:27017/test' should not contain protocol. " \
                         'Did you mean to not use an array?')
    end

    it 'raises an error on mongodb+srv://' do
      expect do
        described_class.new([ 'mongodb+srv://127.0.0.1:27017/test' ])
      end.to raise_error(ArgumentError,
                         "Host 'mongodb+srv://127.0.0.1:27017/test' should not contain protocol. " \
                         'Did you mean to not use an array?')
    end

    it 'raises an error on multiple items' do
      expect do
        described_class.new([ '127.0.0.1:27017', 'mongodb+srv://127.0.0.1:27017/test' ])
      end.to raise_error(ArgumentError,
                         "Host 'mongodb+srv://127.0.0.1:27017/test' should not contain protocol. " \
                         'Did you mean to not use an array?')
    end
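
    # Illustrative sketch (not part of the original spec): seeds given in
    # array form are bare host:port pairs; a URI must be passed as a string.
    #
    #   Mongo::Client.new([ '127.0.0.1:27017' ])        # array of hosts
    #   Mongo::Client.new('mongodb://127.0.0.1:27017')  # connection string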
" \ 'Did you mean to not use an array?') end it 'raises an error only at beginning of string' do expect do described_class .new([ 'somethingmongodb://127.0.0.1:27017/test', 'mongodb+srv://127.0.0.1:27017/test' ]) end.to raise_error(ArgumentError, "Host 'mongodb+srv://127.0.0.1:27017/test' should not contain protocol. " \ 'Did you mean to not use an array?') end it 'raises an error with different case' do expect { described_class.new([ 'MongOdB://127.0.0.1:27017/test' ]) } .to raise_error(ArgumentError, "Host 'MongOdB://127.0.0.1:27017/test' should not contain protocol. " \ 'Did you mean to not use an array?') end end end end shared_examples_for 'duplicated client with duplicated monitoring' do let(:monitoring) { client.send(:monitoring) } let(:new_monitoring) { new_client.send(:monitoring) } it 'duplicates monitoring' do expect(new_monitoring).not_to eql(monitoring) end it 'copies monitoring subscribers' do monitoring.subscribers.clear client.subscribe(Mongo::Monitoring::SERVER_HEARTBEAT, subscriber) expect(monitoring.present_subscribers.length).to eq(1) expect(monitoring.subscribers[Mongo::Monitoring::SERVER_HEARTBEAT].length).to eq(1) # this duplicates the client expect(new_monitoring.present_subscribers.length).to eq(1) expect(new_monitoring.subscribers[Mongo::Monitoring::SERVER_HEARTBEAT].length).to eq(1) end it 'does not change subscribers on original client' do monitoring.subscribers.clear client.subscribe(Mongo::Monitoring::SERVER_HEARTBEAT, subscriber) expect(monitoring.present_subscribers.length).to eq(1) expect(monitoring.subscribers[Mongo::Monitoring::SERVER_HEARTBEAT].length).to eq(1) new_client.subscribe(Mongo::Monitoring::SERVER_HEARTBEAT, subscriber) new_client.subscribe(Mongo::Monitoring::SERVER_HEARTBEAT, subscriber) expect(new_monitoring.present_subscribers.length).to eq(1) expect(new_monitoring.subscribers[Mongo::Monitoring::SERVER_HEARTBEAT].length).to eq(3) # original client should not have gotten any of the new subscribers expect(monitoring.present_subscribers.length).to eq(1) expect(monitoring.subscribers[Mongo::Monitoring::SERVER_HEARTBEAT].length).to eq(1) end end shared_examples_for 'duplicated client with reused monitoring' do let(:monitoring) { client.send(:monitoring) } let(:new_monitoring) { new_client.send(:monitoring) } it 'reuses monitoring' do expect(new_monitoring).to eql(monitoring) end end shared_examples_for 'duplicated client with clean slate monitoring' do let(:monitoring) { client.send(:monitoring) } let(:new_monitoring) { new_client.send(:monitoring) } it 'does not reuse monitoring' do expect(new_monitoring).not_to eql(monitoring) end it 'resets monitoring subscribers' do monitoring.subscribers.clear client.subscribe(Mongo::Monitoring::SERVER_HEARTBEAT, subscriber) expect(monitoring.present_subscribers.length).to eq(1) expect(monitoring.subscribers[Mongo::Monitoring::SERVER_HEARTBEAT].length).to eq(1) # this duplicates the client # 7 is how many subscribers driver sets up by default expect(new_monitoring.present_subscribers.length).to eq(7) # ... 
    expect(new_monitoring.subscribers[Mongo::Monitoring::SERVER_HEARTBEAT].length).to eq(0)
  end

  it 'does not change subscribers on original client' do
    monitoring.subscribers.clear
    client.subscribe(Mongo::Monitoring::SERVER_HEARTBEAT, subscriber)
    expect(monitoring.present_subscribers.length).to eq(1)
    expect(monitoring.subscribers[Mongo::Monitoring::SERVER_HEARTBEAT].length).to eq(1)

    new_client.subscribe(Mongo::Monitoring::SERVER_HEARTBEAT, subscriber)
    new_client.subscribe(Mongo::Monitoring::SERVER_HEARTBEAT, subscriber)
    # 7 default subscribers + heartbeat
    expect(new_monitoring.present_subscribers.length).to eq(8)
    # the heartbeat subscriber on the original client is not inherited
    expect(new_monitoring.subscribers[Mongo::Monitoring::SERVER_HEARTBEAT].length).to eq(2)
    # original client should not have gotten any of the new subscribers
    expect(monitoring.present_subscribers.length).to eq(1)
    expect(monitoring.subscribers[Mongo::Monitoring::SERVER_HEARTBEAT].length).to eq(1)
  end
end

describe '#use' do
  let(:client) do
    new_local_client_nmio(SINGLE_CLIENT, database: SpecConfig.instance.test_db)
  end

  shared_examples_for 'a database switching object' do
    it 'returns the new client' do
      expect(client.send(:database).name).to eq('ruby-driver')
    end

    it 'keeps the same cluster' do
      expect(database.cluster).to equal(client.cluster)
    end
  end

  context 'when provided a string' do
    let(:database) do
      client.use('testdb')
    end

    it_behaves_like 'a database switching object'
  end

  context 'when provided a symbol' do
    let(:database) do
      client.use(:testdb)
    end

    it_behaves_like 'a database switching object'
  end

  context 'when providing nil' do
    it 'raises an exception' do
      expect { client.use(nil) }
        .to raise_error(Mongo::Error::InvalidDatabaseName)
    end
  end
end

describe '#with' do
  let(:client) do
    new_local_client_nmio(SINGLE_CLIENT, database: SpecConfig.instance.test_db)
  end

  context 'when providing nil' do
    it 'returns the cloned client' do
      expect(client.with(nil)).to eq(client)
    end
  end

  context 'when the app_name is changed' do
    let(:client) { authorized_client }
    let(:original_options) { client.options }
    let(:new_options) { { app_name: 'client_test' } }
    let(:new_client) { authorized_client.with(new_options) }

    it 'returns a new client' do
      expect(new_client).not_to equal(client)
    end

    it 'replaces the existing options' do
      expect(new_client.options).to eq(client.options.merge(new_options))
    end

    it 'does not modify the original client' do
      expect(client.options).to eq(original_options)
    end

    it 'does not keep the same cluster' do
      expect(new_client.cluster).not_to be(client.cluster)
    end
  end

  context 'when direct_connection option is given' do
    let(:client) do
      options = SpecConfig.instance.test_options
      options.delete(:connect)
      new_local_client(SpecConfig.instance.addresses, options)
    end

    let(:new_client) do
      client.with(new_options)
    end

    before do
      expect(client.options[:direct_connection]).to be_nil
    end

    context 'direct_connection set to false' do
      let(:new_options) do
        { direct_connection: false }
      end

      it 'is accepted' do
        expect(new_client.options[:direct_connection]).to be false
      end
    end

    context 'direct_connection set to true' do
      let(:new_options) do
        { direct_connection: true }
      end

      context 'in single topology' do
        require_topology :single

        it 'is accepted' do
          expect(new_client.options[:direct_connection]).to be true
          expect(new_client.cluster.topology).to be_a(Mongo::Cluster::Topology::Single)
        end
      end

      context 'in replica set or sharded cluster topology' do
        require_topology :replica_set, :sharded

        it 'is rejected' do
          expect { new_client }
            .to raise_error(ArgumentError, /direct_connection=true cannot be used with topologies other than Single/)
        end

        context 'when a new cluster is created' do
          let(:new_options) do
            { direct_connection: true, app_name: 'new-client' }
          end

          it 'is rejected' do
            expect { new_client }
              .to raise_error(ArgumentError, /direct_connection=true cannot be used with topologies other than Single/)
          end
        end
      end
    end
  end

  context 'when the write concern is not changed' do
    let(:client) do
      new_local_client_nmio(
        SINGLE_CLIENT,
        read: { mode: :secondary },
        write: { w: 1 },
        database: SpecConfig.instance.test_db
      )
    end

    let(:new_client) { client.with(read: { mode: :primary }) }

    let(:new_options) do
      Mongo::Options::Redacted.new(
        read: { mode: :primary },
        write: { w: 1 },
        monitoring_io: false,
        database: SpecConfig.instance.test_db,
        retry_writes: true,
        retry_reads: true
      )
    end

    let(:original_options) do
      Mongo::Options::Redacted.new(
        read: { mode: :secondary },
        write: { w: 1 },
        monitoring_io: false,
        database: SpecConfig.instance.test_db,
        retry_writes: true,
        retry_reads: true
      )
    end

    it 'returns a new client' do
      expect(new_client).not_to equal(client)
    end

    it 'replaces the existing options' do
      expect(new_client.options).to eq(new_options)
    end

    it 'does not modify the original client' do
      expect(client.options).to eq(original_options)
    end

    it 'keeps the same cluster' do
      expect(new_client.cluster).to be(client.cluster)
    end
  end

  context 'when the write concern is changed' do
    let(:client) do
      new_local_client(
        SINGLE_CLIENT,
        { monitoring_io: false }.merge(client_options)
      )
    end

    let(:client_options) do
      { write: { w: 1 } }
    end

    context 'when the write concern has not been accessed' do
      let(:new_client) { client.with(write: { w: 0 }) }

      let(:get_last_error) do
        new_client.write_concern.get_last_error
      end

      it 'returns the correct write concern' do
        expect(get_last_error).to be_nil
      end
    end

    context 'when the write concern has been accessed' do
      let(:new_client) do
        client.write_concern
        client.with(write: { w: 0 })
      end

      let(:get_last_error) do
        new_client.write_concern.get_last_error
      end

      it 'returns the correct write concern' do
        expect(get_last_error).to be_nil
      end
    end

    context 'when write concern is given as :write' do
      let(:client_options) do
        { write: { w: 1 } }
      end

      it 'sets :write option' do
        expect(client.options[:write]).to eq(Mongo::Options::Redacted.new(w: 1))
      end

      it 'does not set :write_concern option' do
        expect(client.options[:write_concern]).to be_nil
      end

      it 'returns correct write concern' do
        expect(client.write_concern).to be_a(Mongo::WriteConcern::Acknowledged)
        expect(client.write_concern.options).to eq(w: 1)
      end
    end

    context 'when write concern is given as :write_concern' do
      let(:client_options) do
        { write_concern: { w: 1 } }
      end

      it 'sets :write_concern option' do
        expect(client.options[:write_concern]).to eq(Mongo::Options::Redacted.new(w: 1))
      end

      it 'does not set :write option' do
        expect(client.options[:write]).to be_nil
      end

      it 'returns correct write concern' do
        expect(client.write_concern).to be_a(Mongo::WriteConcern::Acknowledged)
        expect(client.write_concern.options).to eq(w: 1)
      end
    end

    context 'when write concern is given as both :write and :write_concern' do
      context 'with identical values' do
        let(:client_options) do
          { write: { w: 1 }, write_concern: { w: 1 } }
        end

        it 'sets :write_concern option' do
          expect(client.options[:write_concern]).to eq(Mongo::Options::Redacted.new(w: 1))
        end

        it 'sets :write option' do
          expect(client.options[:write]).to eq(Mongo::Options::Redacted.new(w: 1))
        end

        it 'returns correct write concern' do
          expect(client.write_concern).to be_a(Mongo::WriteConcern::Acknowledged)
          expect(client.write_concern.options).to eq(w: 1)
        end
      end
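
      # Illustrative sketch (not part of the original spec): changing the
      # write concern on a derived client with #with.
      #
      #   relaxed = client.with(write_concern: { w: 0 })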
      context 'with different values' do
        let(:client_options) do
          { write: { w: 1 }, write_concern: { w: 2 } }
        end

        it 'raises an exception' do
          expect do
            client
          end.to raise_error(ArgumentError, /If :write and :write_concern are both given, they must be identical/)
        end
      end
    end

    context 'when #with uses a different write concern option name' do
      context 'from :write_concern to :write' do
        let(:client_options) do
          { write_concern: { w: 1 } }
        end

        let(:new_client) { client.with(write: { w: 2 }) }

        it 'uses the new option' do
          expect(new_client.options[:write]).to eq(Mongo::Options::Redacted.new(w: 2))
          expect(new_client.options[:write_concern]).to be_nil
        end
      end

      context 'from :write to :write_concern' do
        let(:client_options) do
          { write: { w: 1 } }
        end

        let(:new_client) { client.with(write_concern: { w: 2 }) }

        it 'uses the new option' do
          expect(new_client.options[:write_concern]).to eq(Mongo::Options::Redacted.new(w: 2))
          expect(new_client.options[:write]).to be_nil
        end
      end
    end
  end

  context 'when an invalid option is provided' do
    let(:new_client) do
      client.with(invalid: :option, ssl: false)
    end

    it 'does not set the invalid option' do
      expect(new_client.options.keys).not_to include('invalid')
    end

    it 'sets the valid options' do
      expect(new_client.options.keys).to include('ssl')
    end

    it 'warns that an invalid option has been specified' do
      expect(Mongo::Logger.logger).to receive(:warn)
      expect(new_client.options.keys).not_to include('invalid')
    end
  end

  context 'when client is created with ipv6 address' do
    let(:client) do
      new_local_client_nmio([ '[::1]:27017' ], database: SpecConfig.instance.test_db)
    end

    context 'when providing nil' do
      it 'returns the cloned client' do
        expect(client.with(nil)).to eq(client)
      end
    end

    context 'when changing options' do
      let(:new_options) { { app_name: 'client_test' } }
      let(:new_client) { client.with(new_options) }

      it 'returns a new client' do
        expect(new_client).not_to equal(client)
      end
    end
  end

  context 'when new client has a new cluster' do
    let(:client) do
      new_local_client(
        SINGLE_CLIENT,
        database: SpecConfig.instance.test_db,
        server_selection_timeout: 0.5,
        socket_timeout: 0.1,
        connect_timeout: 0.1,
        populator_io: false
      )
    end

    let(:new_client) do
      client.with(app_name: 'client_construction_spec').tap do |new_client|
        expect(new_client.cluster).not_to eql(client.cluster)
      end
    end

    it_behaves_like 'duplicated client with clean slate monitoring'
  end

  context 'when new client shares cluster with original client' do
    let(:new_client) do
      client.with(database: 'client_construction_spec').tap do |new_client|
        expect(new_client.cluster).to eql(client.cluster)
      end
    end

    it_behaves_like 'duplicated client with reused monitoring'
  end

  # Since we either reuse monitoring or reset it to a clean slate
  # in #with, the consistent behavior is to never transfer sdam_proc to
  # the new client.
  context 'when sdam_proc is given on original client' do
    let(:sdam_proc) do
      proc do |client|
        client.subscribe(Mongo::Monitoring::SERVER_HEARTBEAT, subscriber)
      end
    end

    let(:client) do
      new_local_client(
        SpecConfig.instance.addresses,
        SpecConfig.instance.test_options.merge(
          sdam_proc: sdam_proc,
          connect_timeout: 3.08,
          socket_timeout: 3.09,
          server_selection_timeout: 2.92,
          heartbeat_frequency: 100,
          database: SpecConfig.instance.test_db
        )
      )
    end

    let(:new_client) do
      client.with(app_name: 'foo').tap do |new_client|
        expect(new_client.cluster).not_to be == client.cluster
      end
    end

    before do
      client.cluster.next_primary
      events = subscriber.select_started_events(Mongo::Monitoring::Event::ServerHeartbeatStarted)
      if ClusterConfig.instance.topology == :load_balanced
        # No server monitoring in LB topology
        expect(events.length).to be == 0
      else
        expect(events.length).to be > 0
      end
    end

    it 'does not copy sdam_proc option to new client' do
      expect(new_client.options[:sdam_proc]).to be_nil
    end

    it 'does not notify subscribers set up by sdam_proc' do
      # On 4.4, the push monitor also is receiving heartbeats.
      # Give those some time to be processed.
      sleep 2

      if ClusterConfig.instance.topology == :load_balanced
        # No server monitoring in LB topology
        expect(subscriber.started_events.length).to eq 0
      else
        expect(subscriber.started_events.length).to be > 0
      end
      subscriber.started_events.clear

      # If this test takes longer than heartbeat interval,
      # subscriber may receive events from the original client.
      new_client.cluster.next_primary

      # Diagnostics
      # rubocop:disable Style/IfUnlessModifier, Lint/Debugger
      unless subscriber.started_events.empty?
        p subscriber.started_events
      end
      # rubocop:enable Style/IfUnlessModifier, Lint/Debugger

      expect(subscriber.started_events.length).to eq 0
      expect(new_client.cluster.topology.class).not_to be Mongo::Cluster::Topology::Unknown
    end
  end

  context 'when :server_api is changed' do
    let(:client) do
      new_local_client_nmio(SINGLE_CLIENT)
    end

    let(:new_client) do
      client.with(server_api: { version: '1' })
    end

    it 'changes :server_api' do
      expect(new_client.options[:server_api]).to be == { 'version' => '1' }
    end
  end

  context 'when :server_api is cleared' do
    let(:client) do
      new_local_client_nmio(SINGLE_CLIENT, server_api: { version: '1' })
    end

    let(:new_client) do
      client.with(server_api: nil)
    end

    it 'clears :server_api' do
      expect(new_client.options[:server_api]).to be_nil
    end
  end
end

describe '#dup' do
  let(:client) do
    new_local_client_nmio(
      SINGLE_CLIENT,
      read: { mode: :primary },
      database: SpecConfig.instance.test_db
    )
  end

  let(:new_client) { client.dup }

  it 'creates a client with Redacted options' do
    expect(new_client.options).to be_a(Mongo::Options::Redacted)
  end

  it_behaves_like 'duplicated client with reused monitoring'
end
end
# rubocop:enable RSpec/ExpectInHook, RSpec/ExampleLength
# rubocop:enable RSpec/ContextWording, RSpec/RepeatedExampleGroupDescription
# rubocop:enable RSpec/ExampleWording, Style/BlockComments, RSpec/AnyInstance
# rubocop:enable RSpec/VerifiedDoubles
mongo-ruby-driver-2.21.3/spec/mongo/client_encryption_spec.rb000066400000000000000000000264111505113246500243340ustar00rootroot00000000000000# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe Mongo::ClientEncryption do
  require_libmongocrypt
  include_context 'define shared FLE helpers'

  let(:client) do
    ClientRegistry.instance.new_local_client(
      SpecConfig.instance.addresses,
      SpecConfig.instance.test_options
    )
  end

  let(:client_encryption) do
    described_class.new(client, {
      key_vault_namespace: key_vault_namespace,
      kms_providers: kms_providers
    })
  end
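  # Illustrative sketch (not part of the original spec): constructing a
  # ClientEncryption by hand. The namespace and the local master key
  # variable below are hypothetical.
  #
  #   client_encryption = Mongo::ClientEncryption.new(
  #     client,
  #     key_vault_namespace: 'encryption.__keyVault',
  #     kms_providers: { local: { key: local_master_key } }
  #   )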
kms_providers: kms_providers }) end describe '#initialize' do shared_examples 'a functioning ClientEncryption' do context 'with nil key_vault_namespace' do let(:key_vault_namespace) { nil } it 'raises an exception' do expect do client_encryption end.to raise_error(ArgumentError, /:key_vault_namespace option cannot be nil/) end end context 'with invalid key_vault_namespace' do let(:key_vault_namespace) { 'three.word.namespace' } it 'raises an exception' do expect do client_encryption end.to raise_error(ArgumentError, /invalid key vault namespace/) end end context 'with valid options' do it 'creates a ClientEncryption object' do expect do client_encryption end.not_to raise_error end end end context 'with local KMS providers' do include_context 'with local kms_providers' it_behaves_like 'a functioning ClientEncryption' end context 'with AWS KMS providers' do include_context 'with AWS kms_providers' it_behaves_like 'a functioning ClientEncryption' end context 'with invalid KMS provider information' do let(:kms_providers) { { random_key: {} } } it 'raises an exception' do expect do client_encryption end.to raise_error(ArgumentError, /KMS providers options must have one of the following keys/) end end end describe '#create_data_key' do let(:data_key_id) { client_encryption.create_data_key(kms_provider_name, options) } let(:key_alt_names) { nil } shared_examples 'it creates a data key' do |with_key_alt_names: false| it 'returns the data key id and inserts it into the key vault collection' do expect(data_key_id).to be_uuid documents = client.use(key_vault_db)[key_vault_coll].find(_id: data_key_id) expect(documents.count).to eq(1) if with_key_alt_names expect(documents.first['keyAltNames']).to match_array(key_alt_names) else expect(documents.first['keyAltNames']).to be_nil end end end shared_examples 'it supports key_alt_names' do let(:options) { base_options.merge(key_alt_names: key_alt_names) } context 'with one value in key_alt_names' do let(:key_alt_names) { ['keyAltName1'] } it_behaves_like 'it creates a data key', **{ with_key_alt_names: true } end context 'with multiple values in key_alt_names' do let(:key_alt_names) { ['keyAltName1', 'keyAltName2'] } it_behaves_like 'it creates a data key', **{ with_key_alt_names: true } end context 'with empty key_alt_names' do let(:key_alt_names) { [] } it_behaves_like 'it creates a data key' end context 'with invalid key_alt_names option' do let(:key_alt_names) { 'keyAltName1' } it 'raises an exception' do expect do data_key_id end.to raise_error(ArgumentError, /key_alt_names option must be an Array/) end end context 'with invalid key_alt_names values' do let(:key_alt_names) { ['keyAltNames1', 3] } it 'raises an exception' do expect do data_key_id end.to raise_error(ArgumentError, /values of the :key_alt_names option Array must be Strings/) end end end context 'with AWS KMS provider' do include_context 'with AWS kms_providers' let(:base_options) { { master_key: { region: aws_region, key: aws_arn } } } it_behaves_like 'it supports key_alt_names' context 'with nil options' do let(:options) { nil } it 'raises an exception' do expect do data_key_id end.to raise_error(ArgumentError, /Key document options must not be nil/) end end context 'with nil master key' do let(:options) { { master_key: nil } } it 'raises an exception' do expect do data_key_id end.to raise_error(ArgumentError, /Key document options must contain a key named :master_key with a Hash value/) end end context 'with invalid master key' do let(:options) { { master_key: 'master-key' } } it 
      context 'with empty master key' do
        let(:options) { { master_key: {} } }

        it 'raises an exception' do
          expect do
            data_key_id
          end.to raise_error(ArgumentError, /The specified KMS provider options are invalid: {}. AWS key document must be in the format: { region: 'REGION', key: 'KEY' }/)
        end
      end

      context 'with nil region' do
        let(:options) { { master_key: { region: nil, key: aws_arn } } }

        it 'raises an exception' do
          expect do
            data_key_id
          end.to raise_error(ArgumentError, /The region option must be a String with at least one character; currently have nil/)
        end
      end

      context 'with invalid region' do
        let(:options) { { master_key: { region: 5, key: aws_arn } } }

        it 'raises an exception' do
          expect do
            data_key_id
          end.to raise_error(ArgumentError, /The region option must be a String with at least one character; currently have 5/)
        end
      end

      context 'with nil key' do
        let(:options) { { master_key: { key: nil, region: aws_region } } }

        it 'raises an exception' do
          expect do
            data_key_id
          end.to raise_error(ArgumentError, /The key option must be a String with at least one character; currently have nil/)
        end
      end

      context 'with invalid key' do
        let(:options) { { master_key: { key: 5, region: aws_region } } }

        it 'raises an exception' do
          expect do
            data_key_id
          end.to raise_error(ArgumentError, /The key option must be a String with at least one character; currently have 5/)
        end
      end

      context 'with invalid endpoint' do
        let(:options) { { master_key: { key: aws_arn, region: aws_region, endpoint: 5 } } }

        it 'raises an exception' do
          expect do
            data_key_id
          end.to raise_error(ArgumentError, /The endpoint option must be a String with at least one character; currently have 5/)
        end
      end

      context 'with nil endpoint' do
        let(:options) do
          { master_key: { key: aws_arn, region: aws_region, endpoint: nil } }
        end

        it_behaves_like 'it creates a data key'
      end

      context 'with valid endpoint, no port' do
        let(:options) do
          { master_key: { key: aws_arn, region: aws_region, endpoint: aws_endpoint_host } }
        end

        it_behaves_like 'it creates a data key'
      end

      context 'with valid endpoint' do
        let(:options) { data_key_options }
        it_behaves_like 'it creates a data key'
      end

      context 'with https' do
        let(:options) do
          { master_key: { key: aws_arn, region: aws_region, endpoint: "https://#{aws_endpoint_host}:#{aws_endpoint_port}" } }
        end

        it_behaves_like 'it creates a data key'
      end

      context 'with unresolvable endpoint' do
        let(:options) do
          { master_key: { key: aws_arn, region: aws_region, endpoint: "invalid-nonsense-endpoint.com" } }
        end

        it 'raises an exception' do
          expect do
            data_key_id
          end.to raise_error(Mongo::Error::KmsError, /SocketError|ResolutionError/)
        end
      end

      context 'when socket connect errors out' do
        let(:options) { data_key_options }

        before do
          allow_any_instance_of(OpenSSL::SSL::SSLSocket)
            .to receive(:connect)
            .and_raise('Error while connecting to socket')
        end

        it 'raises a KmsError' do
          skip 'https://jira.mongodb.org/browse/RUBY-3375'

          expect do
            data_key_id
          end.to raise_error(Mongo::Error::KmsError, /Error while connecting to socket/)
        end
      end

      context 'when socket close errors out' do
        let(:options) { data_key_options }

        before do
          allow_any_instance_of(OpenSSL::SSL::SSLSocket)
            .to receive(:sysclose)
            .and_raise('Error while closing socket')
        end

        it 'does not raise an exception' do
          expect do
            data_key_id
          end.not_to raise_error
        end
      end
    end

    context 'with local KMS provider' do
      include_context 'with local kms_providers'
      let(:options) { {} }
      let(:base_options) { {} }

      it_behaves_like 'it supports key_alt_names'
      it_behaves_like 'it creates a data key'
    end
  end

  describe '#encrypt/decrypt' do
    let(:value) { ssn }
    let(:encrypted_value) { encrypted_ssn }

    before do
      key_vault_collection.drop
      key_vault_collection.insert_one(data_key)
    end

    shared_examples 'an encrypter' do
      let(:encrypted) do
        client_encryption.encrypt(
          value,
          {
            key_id: key_id,
            key_alt_name: key_alt_name,
            algorithm: 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'
          }
        )
      end

      context 'with key_id option' do
        let(:key_alt_name) { nil }

        it 'correctly encrypts a string' do
          expect(encrypted).to be_ciphertext
          expect(encrypted.data).to eq(Base64.decode64(encrypted_value))
        end
      end

      context 'with key_alt_name option' do
        let(:key_id) { nil }

        it 'correctly encrypts a string' do
          expect(encrypted).to be_ciphertext
          expect(encrypted.data).to eq(Base64.decode64(encrypted_value))
        end
      end
    end

    shared_examples 'a decrypter' do
      it 'correctly decrypts a string' do
        encrypted = BSON::Binary.new(Base64.decode64(encrypted_value), :ciphertext)

        result = client_encryption.decrypt(encrypted)
        expect(result).to eq(value)
      end
    end

    context 'with local KMS providers' do
      include_context 'with local kms_providers'

      it_behaves_like 'an encrypter'
      it_behaves_like 'a decrypter'
    end

    context 'with AWS KMS providers' do
      include_context 'with AWS kms_providers'

      it_behaves_like 'an encrypter'
      it_behaves_like 'a decrypter'
    end
  end
end
mongo-ruby-driver-2.21.3/spec/mongo/client_spec.rb000066400000000000000000001133331505113246500220620ustar00rootroot00000000000000# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

DEFAULT_LOCAL_HOST = '127.0.0.1:27017'
ALT_LOCAL_HOST = '127.0.0.1:27010'

# NB: tests for .new, #initialize, #use, #with and #dup are in
# client_construction_spec.rb.
mongo-ruby-driver-2.21.3/spec/mongo/client_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

DEFAULT_LOCAL_HOST = '127.0.0.1:27017'
ALT_LOCAL_HOST = '127.0.0.1:27010'

# NB: tests for .new, #initialize, #use, #with and #dup are in
# client_construction_spec.rb.
describe Mongo::Client do
  let(:subscriber) { Mrss::EventSubscriber.new }

  let(:monitored_client) do
    root_authorized_client.tap do |client|
      client.subscribe(Mongo::Monitoring::COMMAND, subscriber)
    end
  end

  describe '#==' do
    let(:client) do
      new_local_client_nmio(
        [ DEFAULT_LOCAL_HOST ],
        read: { mode: :primary },
        database: SpecConfig.instance.test_db
      )
    end

    context 'when the other is a client' do
      context 'when the options and cluster are equal' do
        let(:other) do
          new_local_client_nmio(
            [ DEFAULT_LOCAL_HOST ],
            read: { mode: :primary },
            database: SpecConfig.instance.test_db
          )
        end

        it 'returns true' do
          expect(client).to eq(other)
        end
      end

      context 'when the options are not equal' do
        let(:other) do
          new_local_client_nmio(
            [ DEFAULT_LOCAL_HOST ],
            read: { mode: :secondary },
            database: SpecConfig.instance.test_db
          )
        end

        it 'returns false' do
          expect(client).not_to eq(other)
        end
      end

      context 'when cluster is not equal' do
        let(:other) do
          new_local_client_nmio(
            [ ALT_LOCAL_HOST ],
            read: { mode: :primary },
            database: SpecConfig.instance.test_db
          )
        end

        it 'returns false' do
          expect(client).not_to eq(other)
        end
      end
    end

    context 'when the other is not a client' do
      it 'returns false' do
        expect(client).not_to eq('test')
      end
    end
  end
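  # Illustrative sketch (not part of the suite): as specified above and in
  # '#eql' and '#hash' below, client comparison looks at both the options and
  # the cluster, so all three methods agree with one another.
  #
  #   a = new_local_client_nmio([ DEFAULT_LOCAL_HOST ], database: 'test')
  #   b = new_local_client_nmio([ DEFAULT_LOCAL_HOST ], database: 'test')
  #   a == b # true only if both options and cluster compare equal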
  describe '#[]' do
    let(:client) do
      new_local_client_nmio([ DEFAULT_LOCAL_HOST ], database: SpecConfig.instance.test_db)
    end

    shared_examples_for 'a collection switching object' do
      before do
        client.use(:dbtest)
      end

      it 'returns the new collection' do
        expect(collection.name).to eq('users')
      end
    end

    context 'when provided a string' do
      let(:collection) do
        client['users']
      end

      it_behaves_like 'a collection switching object'
    end

    context 'when provided a symbol' do
      let(:collection) do
        client[:users]
      end

      it_behaves_like 'a collection switching object'
    end
  end

  describe '#eql' do
    let(:client) do
      new_local_client_nmio(
        [ DEFAULT_LOCAL_HOST ],
        read: { mode: :primary },
        database: SpecConfig.instance.test_db
      )
    end

    context 'when the other is a client' do
      context 'when the options and cluster are equal' do
        let(:other) do
          new_local_client_nmio(
            [ DEFAULT_LOCAL_HOST ],
            read: { mode: :primary },
            database: SpecConfig.instance.test_db
          )
        end

        it 'returns true' do
          expect(client).to eql(other)
        end
      end

      context 'when the options are not equal' do
        let(:other) do
          new_local_client_nmio(
            [ DEFAULT_LOCAL_HOST ],
            read: { mode: :secondary },
            database: SpecConfig.instance.test_db
          )
        end

        it 'returns false' do
          expect(client).not_to eql(other)
        end
      end

      context 'when the cluster is not equal' do
        let(:other) do
          new_local_client_nmio(
            [ ALT_LOCAL_HOST ],
            read: { mode: :primary },
            database: SpecConfig.instance.test_db
          )
        end

        it 'returns false' do
          expect(client).not_to eql(other)
        end
      end
    end

    context 'when the other is not a client' do
      let(:client) do
        new_local_client_nmio(
          [ DEFAULT_LOCAL_HOST ],
          read: { mode: :primary },
          database: SpecConfig.instance.test_db
        )
      end

      it 'returns false' do
        expect(client).not_to eql('test')
      end
    end
  end

  describe '#hash' do
    let(:client) do
      new_local_client_nmio(
        [ DEFAULT_LOCAL_HOST ],
        read: { mode: :primary },
        local_threshold: 0.010,
        server_selection_timeout: 10000,
        database: SpecConfig.instance.test_db
      )
    end

    let(:default_options) do
      Mongo::Options::Redacted.new(
        retry_writes: true, retry_reads: true, monitoring_io: false)
    end

    let(:options) do
      Mongo::Options::Redacted.new(read: { mode: :primary },
        local_threshold: 0.010,
        server_selection_timeout: 10000,
        database: SpecConfig.instance.test_db)
    end

    let(:expected) do
      [ client.cluster, default_options.merge(options) ].hash
    end

    it 'returns a hash of the cluster and options' do
      expect(client.hash).to eq(expected)
    end
  end

  describe '#inspect' do
    let(:client) do
      new_local_client_nmio(
        [ DEFAULT_LOCAL_HOST ],
        read: { mode: :primary },
        database: SpecConfig.instance.test_db
      )
    end

    it 'returns the cluster information' do
      expect(client.inspect).to match(/Cluster(.|\n)*topology=(.|\n)*servers=/)
    end

    context 'when there is sensitive data in the options' do
      let(:client) do
        new_local_client_nmio(
          [ DEFAULT_LOCAL_HOST ],
          read: { mode: :primary },
          database: SpecConfig.instance.test_db,
          password: 'some_password',
          user: 'emily'
        )
      end

      it 'does not print out sensitive data' do
        expect(client.inspect).not_to match('some_password')
      end
    end
  end

  describe '#server_selector' do
    context 'when there is a read preference set' do
      let(:client) do
        new_local_client_nmio([ DEFAULT_LOCAL_HOST ],
          database: SpecConfig.instance.test_db,
          read: mode,
          server_selection_timeout: 2)
      end

      let(:server_selector) do
        client.server_selector
      end

      context 'when mode is primary' do
        let(:mode) do
          { mode: :primary }
        end

        it 'returns a primary server selector' do
          expect(server_selector).to be_a(Mongo::ServerSelector::Primary)
        end

        it 'passes the options to the cluster' do
          expect(client.cluster.options[:server_selection_timeout]).to eq(2)
        end
      end

      context 'when mode is primary_preferred' do
        let(:mode) do
          { mode: :primary_preferred }
        end

        it 'returns a primary preferred server selector' do
          expect(server_selector).to be_a(Mongo::ServerSelector::PrimaryPreferred)
        end
      end

      context 'when mode is secondary' do
        let(:mode) do
          { mode: :secondary }
        end

        it 'uses a Secondary server selector' do
          expect(server_selector).to be_a(Mongo::ServerSelector::Secondary)
        end
      end

      context 'when mode is secondary preferred' do
        let(:mode) do
          { mode: :secondary_preferred }
        end

        it 'uses a SecondaryPreferred server selector' do
          expect(server_selector).to be_a(Mongo::ServerSelector::SecondaryPreferred)
        end
      end

      context 'when mode is nearest' do
        let(:mode) do
          { mode: :nearest }
        end

        it 'uses a Nearest server selector' do
          expect(server_selector).to be_a(Mongo::ServerSelector::Nearest)
        end
      end

      context 'when no mode provided' do
        let(:client) do
          new_local_client_nmio([ DEFAULT_LOCAL_HOST ],
            database: SpecConfig.instance.test_db,
            server_selection_timeout: 2)
        end

        it 'returns a primary server selector' do
          expect(server_selector).to be_a(Mongo::ServerSelector::Primary)
        end
      end

      context 'when the read preference is printed' do
        let(:client) do
          new_local_client_nmio(SpecConfig.instance.addresses, options)
        end

        let(:options) do
          { user: 'Emily', password: 'sensitive_data', server_selection_timeout: 0.1 }
        end

        before do
          allow(client.database.cluster).to receive(:single?).and_return(false)
        end

        let(:error) do
          begin
            client.database.command(ping: 1)
          rescue StandardError => e
            e
          end
        end

        it 'redacts sensitive client options' do
          expect(error.message).not_to match(options[:password])
        end
      end
    end
  end

  describe '#read_preference' do
    let(:client) do
      new_local_client_nmio([ DEFAULT_LOCAL_HOST ],
        database: SpecConfig.instance.test_db,
        read: mode,
        server_selection_timeout: 2)
    end

    let(:preference) do
      client.read_preference
    end

    context 'when mode is primary' do
      let(:mode) do
        { mode: :primary }
      end

      it 'returns a primary read preference' do
        expect(preference).to eq(BSON::Document.new(mode))
      end
    end

    context 'when mode is primary_preferred' do
      let(:mode) do
        { mode: :primary_preferred }
      end

      it 'returns a primary preferred read preference' do
        expect(preference).to eq(BSON::Document.new(mode))
      end
    end

    context 'when mode is secondary' do
      let(:mode) do
        { mode: :secondary }
      end
      it 'returns a secondary read preference' do
        expect(preference).to eq(BSON::Document.new(mode))
      end
    end

    context 'when mode is secondary preferred' do
      let(:mode) do
        { mode: :secondary_preferred }
      end

      it 'returns a secondary preferred read preference' do
        expect(preference).to eq(BSON::Document.new(mode))
      end
    end

    context 'when mode is nearest' do
      let(:mode) do
        { mode: :nearest }
      end

      it 'returns a nearest read preference' do
        expect(preference).to eq(BSON::Document.new(mode))
      end
    end

    context 'when no mode provided' do
      let(:client) do
        new_local_client_nmio([ DEFAULT_LOCAL_HOST ],
          database: SpecConfig.instance.test_db,
          server_selection_timeout: 2)
      end

      it 'returns nil' do
        expect(preference).to be_nil
      end
    end
  end

  describe '#write_concern' do
    let(:concern) { client.write_concern }

    context 'when no option was provided to the client' do
      let(:client) { new_local_client_nmio([ DEFAULT_LOCAL_HOST ], database: SpecConfig.instance.test_db) }

      it 'does not set the write concern' do
        expect(concern).to be_nil
      end
    end

    context 'when an option is provided' do
      context 'when the option is acknowledged' do
        let(:client) do
          new_local_client_nmio([ DEFAULT_LOCAL_HOST ], write: { j: true }, database: SpecConfig.instance.test_db)
        end

        it 'returns an acknowledged write concern' do
          expect(concern.get_last_error).to eq(getlasterror: 1, j: true)
        end
      end

      context 'when the option is unacknowledged' do
        context 'when the w is 0' do
          let(:client) do
            new_local_client_nmio([ DEFAULT_LOCAL_HOST ], write: { w: 0 }, database: SpecConfig.instance.test_db)
          end

          it 'returns an unacknowledged write concern' do
            expect(concern.get_last_error).to be_nil
          end
        end

        context 'when the w is -1' do
          let(:client) do
            new_local_client_nmio([ DEFAULT_LOCAL_HOST ], write: { w: -1 }, database: SpecConfig.instance.test_db)
          end

          it 'raises an error' do
            expect { concern }.to raise_error(Mongo::Error::InvalidWriteConcern)
          end
        end
      end
    end
  end
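  # Illustrative sketch (not part of the suite): the write concern variants
  # the examples above specify, assuming a reachable deployment.
  #
  #   Mongo::Client.new([ DEFAULT_LOCAL_HOST ], write: { j: true }).write_concern
  #   # => acknowledged (journaled)
  #   Mongo::Client.new([ DEFAULT_LOCAL_HOST ], write: { w: 0 }).write_concern
  #   # => unacknowledged; w: -1 raises Mongo::Error::InvalidWriteConcern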
  [
    [ :max_read_retries, 1 ],
    [ :read_retry_interval, 5 ],
    [ :max_write_retries, 1 ],
  ].each do |opt, default|
    describe "##{opt}" do
      let(:client_options) { {} }

      let(:client) do
        new_local_client_nmio([ DEFAULT_LOCAL_HOST ], client_options)
      end

      it "defaults to #{default}" do
        expect(default).not_to be nil
        expect(client.options[opt]).to be nil
        expect(client.send(opt)).to eq(default)
      end

      context 'specified on client' do
        let(:client_options) { { opt => 2 } }

        it 'inherits from client' do
          expect(client.options[opt]).to eq(2)
          expect(client.send(opt)).to eq(2)
        end
      end
    end
  end

  shared_context 'ensure test db exists' do
    before(:all) do
      # Ensure the database we are querying exists.
      # When the entire test suite is run, it will generally have been
      # created by a previous test, but if this test is run on a fresh
      # deployment the database won't exist.
      client = ClientRegistry.instance.global_client('authorized')
      client['any-collection-name'].insert_one(any: :value)
    end
  end

  describe '#database' do
    let(:database) { client.database }

    context 'when client has :server_api option' do
      let(:client) do
        new_local_client_nmio([ 'localhost' ], server_api: { version: '1' })
      end

      it 'is not transferred to the database' do
        expect(database.options[:server_api]).to be_nil
      end
    end
  end

  describe '#database_names' do
    it 'returns a list of database names' do
      expect(root_authorized_client.database_names).to include(
        'admin'
      )
    end

    context 'when filter criteria is present' do
      min_server_fcv '3.6'
      include_context 'ensure test db exists'

      let(:result) do
        root_authorized_client.database_names(filter)
      end

      let(:filter) do
        { name: SpecConfig.instance.test_db }
      end

      it 'returns a filtered list of database names' do
        expect(result.length).to eq(1)
        expect(result.first).to eq(filter[:name])
      end
    end

    context 'with comment' do
      min_server_version '4.4'

      it 'returns a list of database names and sends the comment' do
        result = monitored_client.database_names({}, comment: 'comment')
        expect(result).to include('admin')
        command = subscriber.command_started_events('listDatabases').last&.command
        expect(command).not_to be_nil
        expect(command['comment']).to eq('comment')
      end
    end

    context 'with timeout_ms' do
      # To make it easier with failCommand
      require_topology :single
      min_server_version '4.4'

      before do
        root_authorized_client.use('admin').command({
          configureFailPoint: "failCommand",
          mode: "alwaysOn",
          data: {
            failCommands: ["listDatabases"],
            blockConnection: true,
            blockTimeMS: 100
          }
        })
      end

      after do
        root_authorized_client.use('admin').command({
          configureFailPoint: "failCommand",
          mode: "off"
        })
      end

      context 'when timeout_ms is set on command level' do
        context 'when there is not enough time' do
          it 'raises' do
            expect do
              monitored_client.database_names({}, timeout_ms: 50)
            end.to raise_error(Mongo::Error::TimeoutError)
          end
        end

        context 'when there is enough time' do
          it 'does not raise' do
            expect do
              monitored_client.database_names({}, timeout_ms: 200)
            end.not_to raise_error
          end
        end
      end

      context 'when timeout_ms is set on client level' do
        context 'when there is not enough time' do
          let(:client) do
            root_authorized_client.with(timeout_ms: 50)
          end

          it 'raises' do
            expect do
              client.database_names({})
            end.to raise_error(Mongo::Error::TimeoutError)
          end
        end

        context 'when there is enough time' do
          let(:client) do
            root_authorized_client.with(timeout_ms: 200)
          end

          it 'does not raise' do
            expect do
              monitored_client.database_names({})
            end.not_to raise_error
          end
        end
      end
    end
  end
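  # Illustrative sketch (not part of the suite): the per-operation and
  # per-client timeout options exercised above, assuming a reachable
  # deployment.
  #
  #   client.database_names({}, timeout_ms: 200)    # bounds just this call
  #   client.with(timeout_ms: 200).database_names   # bounds all operations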
  describe '#list_databases' do
    it 'returns a list of database info documents' do
      expect(
        root_authorized_client.list_databases.collect do |i|
          i['name']
        end).to include('admin')
    end

    context 'when filter criteria is present' do
      include_context 'ensure test db exists'

      let(:result) do
        root_authorized_client.list_databases(filter)
      end

      let(:filter) do
        { name: SpecConfig.instance.test_db }
      end

      it 'returns a filtered list of database info documents' do
        expect(result.length).to eq(1)
        expect(result[0]['name']).to eq(filter[:name])
      end
    end

    context 'when name_only is true' do
      let(:command) do
        Utils.get_command_event(root_authorized_client, 'listDatabases') do |client|
          client.list_databases({}, true)
        end.command
      end

      it 'sends the command with the nameOnly flag set to true' do
        expect(command[:nameOnly]).to be(true)
      end
    end

    context 'when authorized_databases is provided' do
      min_server_fcv '4.0'

      let(:client_options) do
        root_authorized_client.options.merge(heartbeat_frequency: 100, monitoring: true)
      end

      let(:subscriber) { Mrss::EventSubscriber.new }

      let(:client) do
        ClientRegistry.instance.new_local_client(
          SpecConfig.instance.addresses,
          client_options
        ).tap do |cl|
          cl.subscribe(Mongo::Monitoring::COMMAND, subscriber)
        end
      end

      let(:command) do
        subscriber.started_events.find { |c| c.command_name == 'listDatabases' }.command
      end

      let(:authDb) do
        { authorized_databases: true }
      end

      let(:noAuthDb) do
        { authorized_databases: false }
      end

      before do
        client.list_databases({}, true, authDb)
        client.list_databases({}, true, noAuthDb)
      end

      let(:events) do
        subscriber.command_started_events('listDatabases')
      end

      it 'sends the command with the authorizedDatabases flag set to true' do
        expect(events.length).to eq(2)
        command = events.first.command
        expect(command[:authorizedDatabases]).to be(true)
      end

      it 'sends the command with the authorizedDatabases flag set to nil' do
        command = events.last.command
        expect(command[:authorizedDatabases]).to be_nil
      end
    end

    context 'with comment' do
      min_server_version '4.4'

      it 'returns a list of database names and sends the comment' do
        result = monitored_client.list_databases({}, false, comment: 'comment').collect do |i|
          i['name']
        end
        expect(result).to include('admin')
        command = subscriber.command_started_events('listDatabases').last&.command
        expect(command).not_to be_nil
        expect(command['comment']).to eq('comment')
      end
    end

    context 'with timeout_ms' do
      # To make it easier with failCommand
      require_topology :single
      min_server_version '4.4'

      before do
        root_authorized_client.use('admin').command({
          configureFailPoint: "failCommand",
          mode: "alwaysOn",
          data: {
            failCommands: ["listDatabases"],
            blockConnection: true,
            blockTimeMS: 100
          }
        })
      end

      after do
        root_authorized_client.use('admin').command({
          configureFailPoint: "failCommand",
          mode: "off"
        })
      end

      context 'when timeout_ms is set on command level' do
        context 'when there is not enough time' do
          it 'raises' do
            expect do
              monitored_client.list_databases({}, false, timeout_ms: 50)
            end.to raise_error(Mongo::Error::TimeoutError)
          end
        end

        context 'when there is enough time' do
          it 'does not raise' do
            expect do
              monitored_client.list_databases({}, false, timeout_ms: 200)
            end.not_to raise_error
          end
        end
      end

      context 'when timeout_ms is set on client level' do
        context 'when there is not enough time' do
          let(:client) do
            root_authorized_client.with(timeout_ms: 50)
          end

          it 'raises' do
            expect do
              client.list_databases({})
            end.to raise_error(Mongo::Error::TimeoutError)
          end
        end

        context 'when there is enough time' do
          let(:client) do
            root_authorized_client.with(timeout_ms: 200)
          end

          it 'does not raise' do
            expect do
              monitored_client.list_databases({})
            end.not_to raise_error
          end
        end
      end
    end
  end

  describe '#list_mongo_databases' do
    let(:options) do
      { read: { mode: :secondary } }
    end

    let(:client) do
      root_authorized_client.with(options)
    end

    let(:result) do
      client.list_mongo_databases
    end

    it 'returns a list of Mongo::Database objects' do
      expect(result).to all(be_a(Mongo::Database))
    end

    it 'creates database with specified options' do
      expect(result.first.options[:read]).to eq(BSON::Document.new(options)[:read])
    end

    context 'when filter criteria is present' do
      min_server_fcv '3.6'
      include_context 'ensure test db exists'

      let(:result) do
        client.list_mongo_databases(filter)
      end

      let(:filter) do
        { name: SpecConfig.instance.test_db }
      end

      it 'returns a filtered list of Mongo::Database objects' do
        expect(result.length).to eq(1)
        expect(result.first.name).to eq(filter[:name])
      end
    end
    context 'with comment' do
      min_server_version '4.4'

      it 'returns a list of database names and sends the comment' do
        result = monitored_client.list_mongo_databases({}, comment: 'comment')
        expect(result).to all(be_a(Mongo::Database))
        command = subscriber.command_started_events('listDatabases').last&.command
        expect(command).not_to be_nil
        expect(command['comment']).to eq('comment')
      end
    end
  end

  describe '#close' do
    let(:client) do
      new_local_client_nmio([ DEFAULT_LOCAL_HOST ])
    end

    it 'disconnects the cluster and returns true' do
      RSpec::Mocks.with_temporary_scope do
        expect(client.cluster).to receive(:close).and_call_original
        expect(client.close).to be(true)
      end
    end
  end

  describe '#reconnect' do
    let(:client) do
      new_local_client_nmio([ ClusterConfig.instance.primary_address_str ])
    end

    it 'replaces the cluster' do
      old_id = client.cluster.object_id
      client.reconnect
      new_id = client.cluster.object_id
      expect(new_id).not_to eql(old_id)
    end

    it 'replaces the session pool' do
      old_id = client.cluster.session_pool.object_id
      client.reconnect
      new_id = client.cluster.session_pool.object_id
      expect(new_id).not_to eql(old_id)
    end

    it 'returns true' do
      expect(client.reconnect).to be(true)
    end
  end

  describe '#collections' do
    before do
      authorized_client.database[:users].drop
      authorized_client.database[:users].create
    end

    let(:collection) do
      Mongo::Collection.new(authorized_client.database, 'users')
    end

    it 'returns the current database collections' do
      expect(authorized_client.collections).to include(collection)
      expect(authorized_client.collections).to all(be_a(Mongo::Collection))
    end
  end
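  # Illustrative sketch (not part of the suite): the session API the
  # '#start_session' examples below exercise, assuming a deployment that
  # supports sessions.
  #
  #   session = client.start_session(causal_consistency: true)
  #   client[:users].insert_one({ name: 'test' }, session: session)
  #   session.end_session  # the server session returns to the pool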
  describe '#start_session' do
    let(:session) do
      authorized_client.start_session
    end

    context 'when sessions are supported' do
      min_server_fcv '3.6'
      require_topology :replica_set, :sharded

      it 'creates a session' do
        expect(session).to be_a(Mongo::Session)
      end

      retry_test tries: 4
      it 'sets the last use field to the current time' do
        expect(session.instance_variable_get(:@server_session).last_use).to be_within(1).of(Time.now)
      end

      context 'when options are provided' do
        let(:options) do
          { causal_consistency: true }
        end

        let(:session) do
          authorized_client.start_session(options)
        end

        it 'sets the options on the session' do
          expect(session.options[:causal_consistency]).to eq(options[:causal_consistency])
        end
      end

      context 'when options are not provided' do
        it 'does not set options on the session' do
          expect(session.options).to eq({ implicit: false })
        end
      end

      context 'when a session is checked out and checked back in' do
        let!(:session_a) do
          authorized_client.start_session
        end

        let!(:session_b) do
          authorized_client.start_session
        end

        let!(:session_a_server_session) do
          session_a.instance_variable_get(:@server_session)
        end

        let!(:session_b_server_session) do
          session_b.instance_variable_get(:@server_session)
        end

        before do
          session_a_server_session.next_txn_num
          session_a_server_session.next_txn_num
          session_b_server_session.next_txn_num
          session_b_server_session.next_txn_num
          session_a.end_session
          session_b.end_session
        end

        it 'is returned to the front of the queue' do
          expect(authorized_client.start_session.instance_variable_get(:@server_session)).to be(session_b_server_session)
          expect(authorized_client.start_session.instance_variable_get(:@server_session)).to be(session_a_server_session)
        end

        it 'preserves the transaction numbers on the server sessions' do
          expect(authorized_client.start_session.next_txn_num).to be(3)
          expect(authorized_client.start_session.next_txn_num).to be(3)
        end
      end

      context 'when an implicit session is used' do
        before do
          authorized_client.database.command(ping: 1)
        end

        let(:pool) do
          authorized_client.cluster.session_pool
        end

        let!(:before_last_use) do
          pool.instance_variable_get(:@queue)[0].last_use
        end

        it 'uses the session and updates the last use time' do
          authorized_client.database.command(ping: 1)
          expect(before_last_use).to be < (pool.instance_variable_get(:@queue)[0].last_use)
        end
      end

      context 'when an implicit session is used without enough connections' do
        require_no_multi_mongos
        require_wired_tiger

        let(:client) do
          authorized_client.with(options).tap do |cl|
            cl.subscribe(Mongo::Monitoring::COMMAND, subscriber)
          end
        end

        let(:options) do
          { max_pool_size: 1, retry_writes: true }
        end

        shared_examples 'a single connection' do
          # JRuby, due to being concurrent, does not like rspec setting mocks
          # in threads while other threads are calling the methods being mocked.
          # My theory is that rspec removes & redefines methods as part of
          # the mocking process, but while a method is undefined JRuby is
          # running another thread that calls it leading to this exception:
          # NoMethodError: undefined method `with_connection' for #
          fails_on_jruby

          before do
            sessions_checked_out = 0
            allow_any_instance_of(Mongo::Server).to receive(:with_connection).and_wrap_original do |m, *args, **kwargs, &block|
              m.call(*args, **kwargs) do |connection|
                sessions_checked_out = 0
                res = block.call(connection)
                expect(sessions_checked_out).to be < 2
                res
              end
            end
          end

          it "doesn't have any live sessions" do
            threads.each do |thread|
              thread.join
            end
          end
        end

        context 'when doing three inserts' do
          let(:threads) do
            (1..3).map do |i|
              Thread.new do
                client['test'].insert_one({ test: "test#{i}" })
              end
            end
          end

          include_examples 'a single connection'
        end

        context 'when doing an insert and two updates' do
          let(:threads) do
            threads = []
            threads << Thread.new do
              client['test'].insert_one({ test: 'test' })
            end
            threads << Thread.new do
              client['test'].update_one({ test: 'test' }, { '$set' => { test: 'test2' } })
            end
            threads << Thread.new do
              client['test'].update_one({ test: 'test' }, { '$set' => { test: 'test2' } })
            end
            threads
          end

          include_examples 'a single connection'
        end

        context 'when doing an insert, update and delete' do
          let(:threads) do
            threads = []
            threads << Thread.new do
              client['test'].insert_one({ test: 'test' })
            end
            threads << Thread.new do
              client['test'].update_one({ test: 'test' }, { '$set' => { test: 'test2' } })
            end
            threads << Thread.new do
              client['test'].delete_one({ test: 'test' })
            end
            threads
          end

          include_examples 'a single connection'
        end

        context 'when doing an insert, update and find' do
          let(:threads) do
            threads = []
            threads << Thread.new do
              client['test'].insert_one({ test: 'test' })
            end
            threads << Thread.new do
              client['test'].update_one({ test: 'test' }, { '$set' => { test: 'test2' } })
            end
            threads << Thread.new do
              client['test'].find({ test: 'test' }).to_a
            end
            threads
          end

          include_examples 'a single connection'
        end

        context 'when doing an insert, update and bulk write' do
          let(:threads) do
            threads = []
            threads << Thread.new do
              client['test'].insert_one({ test: 'test' })
            end
            threads << Thread.new do
              client['test'].update_one({ test: 'test' }, { '$set' => { test: 'test2' } })
            end
            threads << Thread.new do
              client['test'].bulk_write([
                { insert_one: { test: 'test1' } },
                { update_one: { filter: { test: 'test1' }, update: { '$set' => { test: 'test2' } } } }
              ])
            end
            threads
          end

          include_examples 'a single connection'
        end
        context 'when doing an insert, update and find_one_and_delete' do
          let(:threads) do
            threads = []
            threads << Thread.new do
              client['test'].insert_one({ test: 'test' })
            end
            threads << Thread.new do
              client['test'].update_one({ test: 'test' }, { '$set' => { test: 'test2' } })
            end
            threads << Thread.new do
              client['test'].find_one_and_delete({ test: 'test' })
            end
            threads
          end

          include_examples 'a single connection'
        end

        context 'when doing an insert, update and find_one_and_update' do
          let(:threads) do
            threads = []
            threads << Thread.new do
              client['test'].insert_one({ test: 'test' })
            end
            threads << Thread.new do
              client['test'].update_one({ test: 'test' }, { '$set' => { test: 'test2' } })
            end
            threads << Thread.new do
              client['test'].find_one_and_update({ test: 'test' }, { test: 'test2' })
            end
            threads
          end

          include_examples 'a single connection'
        end

        context 'when doing an insert, update and find_one_and_replace' do
          let(:threads) do
            threads = []
            threads << Thread.new do
              client['test'].insert_one({ test: 'test' })
            end
            threads << Thread.new do
              client['test'].update_one({ test: 'test' }, { '$set' => { test: 'test2' } })
            end
            threads << Thread.new do
              client['test'].find_one_and_replace({ test: 'test' }, { test: 'test2' })
            end
            threads
          end

          include_examples 'a single connection'
        end

        context 'when doing an insert, update and a replace' do
          let(:threads) do
            threads = []
            threads << Thread.new do
              client['test'].insert_one({ test: 'test' })
            end
            threads << Thread.new do
              client['test'].update_one({ test: 'test' }, { '$set' => { test: 'test2' } })
            end
            threads << Thread.new do
              client['test'].replace_one({ test: 'test' }, { test: 'test2' })
            end
            threads
          end

          include_examples 'a single connection'
        end

        context 'when doing all of the operations' do
          let(:threads) do
            threads = []
            threads << Thread.new do
              client['test'].insert_one({ test: 'test' })
            end
            threads << Thread.new do
              client['test'].update_one({ test: 'test' }, { '$set' => { test: 1 } })
            end
            threads << Thread.new do
              client['test'].find_one_and_replace({ test: 'test' }, { test: 'test2' })
            end
            threads << Thread.new do
              client['test'].delete_one({ test: 'test' })
            end
            threads << Thread.new do
              client['test'].find({ test: 'test' }).to_a
            end
            threads << Thread.new do
              client['test'].bulk_write([
                { insert_one: { test: 'test1' } },
                { update_one: { filter: { test: 'test1' }, update: { '$set' => { test: 'test2' } } } }
              ])
            end
            threads << Thread.new do
              client['test'].find_one_and_delete({ test: 'test' })
            end
            threads << Thread.new do
              client['test'].find_one_and_update({ test: 'test' }, { test: 'test2' })
            end
            threads << Thread.new do
              client['test'].find_one_and_replace({ test: 'test' }, { test: 'test2' })
            end
            threads << Thread.new do
              client['test'].replace_one({ test: 'test' }, { test: 'test2' })
            end
            threads
          end

          include_examples 'a single connection'
        end
      end
    end

    context 'when two clients have the same cluster' do
      min_server_fcv '3.6'
      require_topology :replica_set, :sharded

      let(:client) do
        authorized_client.with(read: { mode: :secondary })
      end

      let(:session) do
        authorized_client.start_session
      end

      it 'allows the session to be used across the clients' do
        client[TEST_COLL].insert_one({ a: 1 }, session: session)
      end
    end

    context 'when two clients have different clusters' do
      min_server_fcv '3.6'
      require_topology :replica_set, :sharded

      let(:client) do
        another_authorized_client
      end

      let(:session) do
        authorized_client.start_session
      end

      it 'raises an exception' do
        expect {
          client[TEST_COLL].insert_one({ a: 1 }, session: session)
        }.to raise_exception(Mongo::Error::InvalidSession)
      end
    end

    context 'when sessions are not supported' do
      max_server_version '3.4'

      it 'raises an exception' do
        expect { session }.to raise_exception(Mongo::Error::InvalidSession)
      end
    end
    context 'when CSOT is set on the client' do
      require_topology :replica_set

      let(:timeout_ms) { 10 }
      let(:timeout_sec) { timeout_ms / 1_000.0 }

      let(:client) do
        authorized_client.with(timeout_ms: timeout_ms)
      end

      it 'uses CSOT timeout set on the client' do
        expect_any_instance_of(Mongo::ServerSelector::PrimaryPreferred).to(
          receive(:select_server).with(anything, {timeout: timeout_sec}).and_call_original
        )
        client.start_session
      end
    end
  end

  describe '#summary' do
    context 'monitoring omitted' do
      let(:client) do
        new_local_client_nmio(
          [ DEFAULT_LOCAL_HOST ],
          read: { mode: :primary },
          database: SpecConfig.instance.test_db
        )
      end

      it 'indicates lack of monitoring' do
        expect(client.summary).to match /servers=.*UNKNOWN.*NO-MONITORING/
      end
    end

    context 'monitoring present' do
      require_topology :single, :replica_set, :sharded

      let(:client) do
        authorized_client
      end

      it 'does not indicate lack of monitoring' do
        expect(client.summary).to match /servers=.*(?:STANDALONE|PRIMARY|MONGOS)/
        expect(client.summary).not_to match /servers=.*(?:STANDALONE|PRIMARY|MONGOS).*NO-MONITORING/
      end
    end

    context 'background threads killed' do
      let(:client) do
        authorized_client.tap do |client|
          client.cluster.servers.map do |server|
            server.monitor&.stop!
          end
        end
      end

      it 'indicates lack of monitoring' do
        expect(client.summary).to match /servers=.*(STANDALONE|PRIMARY|MONGOS|\bLB\b).*NO-MONITORING/
      end
    end
  end
end

mongo-ruby-driver-2.21.3/spec/mongo/cluster/cursor_reaper_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe Mongo::Cluster::CursorReaper do
  let(:cluster) { double('cluster') }

  before do
    authorized_collection.drop
  end

  let(:reaper) do
    described_class.new(cluster)
  end

  let(:active_cursor_ids) do
    reaper.instance_variable_get(:@active_cursor_ids)
  end

  describe '#initialize' do
    it 'initializes a hash for servers and their kill cursors ops' do
      expect(reaper.instance_variable_get(:@to_kill)).to be_a(Hash)
    end

    it 'initializes a set for the list of active cursors' do
      expect(reaper.instance_variable_get(:@active_cursor_ids)).to be_a(Set)
    end
  end

  describe '#schedule_kill_cursor' do
    let(:address) { Mongo::Address.new('localhost') }

    let(:server) do
      double('server').tap do |server|
        allow(server).to receive(:address).and_return(address)
      end
    end

    let(:session) do
      double(Mongo::Session)
    end

    let(:cursor_id) { 1 }

    let(:cursor_kill_spec_1) do
      Mongo::Cursor::KillSpec.new(
        cursor_id: cursor_id,
        coll_name: 'c',
        db_name: 'd',
        server_address: address,
        connection_global_id: 1,
        session: session,
      )
    end

    let(:cursor_kill_spec_2) do
      Mongo::Cursor::KillSpec.new(
        cursor_id: cursor_id,
        coll_name: 'c',
        db_name: 'q',
        server_address: address,
        connection_global_id: 1,
        session: session,
      )
    end

    let(:to_kill) { reaper.instance_variable_get(:@to_kill) }

    context 'when the cursor is on the list of active cursors' do
      before do
        reaper.register_cursor(cursor_id)
      end

      context 'when there is not a list already for the server' do
        before do
          reaper.schedule_kill_cursor(cursor_kill_spec_1)
          reaper.read_scheduled_kill_specs
        end

        it 'initializes the list of op specs to a set' do
          expect(to_kill.keys).to eq([ address ])
          expect(to_kill[address]).to contain_exactly(cursor_kill_spec_1)
        end
      end

      context 'when there is a list of ops already for the server' do
        before do
          reaper.schedule_kill_cursor(cursor_kill_spec_1)
          reaper.read_scheduled_kill_specs
          reaper.schedule_kill_cursor(cursor_kill_spec_2)
          reaper.read_scheduled_kill_specs
        end

        it 'adds the op to the server list' do
          expect(to_kill.keys).to eq([ address ])
          expect(to_kill[address]).to contain_exactly(cursor_kill_spec_1, cursor_kill_spec_2)
        end

        context 'when the same op is added more than once' do
          before do
            reaper.schedule_kill_cursor(cursor_kill_spec_2)
            reaper.read_scheduled_kill_specs
          end

          it 'does not allow duplicate ops for a server' do
            expect(to_kill.keys).to eq([ address ])
            expect(to_kill[address]).to contain_exactly(cursor_kill_spec_1, cursor_kill_spec_2)
          end
        end
      end
    end

    context 'when the cursor is not on the list of active cursors' do
      before do
        reaper.schedule_kill_cursor(cursor_kill_spec_1)
      end

      it 'does not add the kill cursors op spec to the list' do
        expect(to_kill).to eq({})
      end
    end
  end

  describe '#register_cursor' do
    context 'when the cursor id is nil' do
      let(:cursor_id) do
        nil
      end

      it 'raises exception' do
        expect do
          reaper.register_cursor(cursor_id)
        end.to raise_error(ArgumentError, /register_cursor called with nil cursor_id/)
      end
    end

    context 'when the cursor id is 0' do
      let(:cursor_id) do
        0
      end

      it 'raises exception' do
        expect do
          reaper.register_cursor(cursor_id)
        end.to raise_error(ArgumentError, /register_cursor called with cursor_id=0/)
      end
    end

    context 'when the cursor id is a valid id' do
      let(:cursor_id) do
        2
      end

      before do
        reaper.register_cursor(cursor_id)
      end

      it 'registers the cursor id as active' do
        expect(active_cursor_ids).to eq(Set.new([2]))
      end
    end
  end

  describe '#unregister_cursor' do
    context 'when the cursor id is in the active cursors list' do
      before do
        reaper.register_cursor(2)
        reaper.unregister_cursor(2)
      end

      it 'removes the cursor id' do
        expect(active_cursor_ids.size).to eq(0)
      end
    end
  end
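  # Illustrative sketch (not part of the suite): the registration lifecycle
  # the examples above exercise. Only cursor ids that are still registered
  # (i.e. still open) have their kill ops queued; unregistering first makes
  # scheduling a no-op.
  #
  #   reaper.register_cursor(cursor_id)
  #   reaper.schedule_kill_cursor(kill_spec)  # queued for the next sweep
  #   reaper.unregister_cursor(cursor_id)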
  context 'when a non-exhausted cursor goes out of scope' do
    let(:docs) do
      103.times.collect { |i| { a: i } }
    end

    let(:periodic_executor) do
      cluster.instance_variable_get(:@periodic_executor)
    end

    let(:cluster) do
      authorized_client.cluster
    end

    let(:cursor) do
      view = authorized_collection.find
      view.to_enum.next
      cursor = view.instance_variable_get(:@cursor)
    end

    around do |example|
      authorized_collection.insert_many(docs)
      periodic_executor.stop!
      cluster.schedule_kill_cursor(
        cursor.kill_spec(
          cursor.instance_variable_get(:@server)
        )
      )
      periodic_executor.flush
      example.run
      periodic_executor.run!
    end

    it 'schedules the kill cursor op' do
      expect {
        cursor.to_a
        # Mongo::Error::SessionEnded is raised here because the periodic executor
        # called in around block kills the cursor and closes the session.
        # This code is normally scheduled in cursor finalizer, so the cursor object
        # is garbage collected when the code is executed. So, a user won't get
        # this exception.
      }.to raise_exception(Mongo::Error::SessionEnded)
    end
  end
end

mongo-ruby-driver-2.21.3/spec/mongo/cluster/periodic_executor_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe Mongo::Cluster::PeriodicExecutor do
  let(:cluster) { double('cluster') }

  let(:executor) do
    described_class.new(cluster)
  end

  describe '#log_warn' do
    it 'works' do
      expect do
        executor.log_warn('test warning')
      end.not_to raise_error
    end
  end
end

mongo-ruby-driver-2.21.3/spec/mongo/cluster/socket_reaper_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe Mongo::Cluster::SocketReaper do
  let(:cluster) do
    authorized_client.cluster
  end

  let(:reaper) do
    described_class.new(cluster)
  end

  describe '#initialize' do
    it 'takes a cluster as an argument' do
      expect(reaper).to be_a(described_class)
    end
  end

  describe '#execute' do
    before do
      # Ensure all servers are discovered
      cluster.servers_list.each do |server|
        server.scan!
      end

      # Stop the reaper that is attached to the cluster, since it
      # runs the same code we are running and can interfere with our assertions
      cluster.instance_variable_get('@periodic_executor').stop!
    end

    it 'calls close_idle_sockets on each connection pool in the cluster' do
      RSpec::Mocks.with_temporary_scope do
        cluster.servers.each do |s|
          expect(s.pool).to receive(:close_idle_sockets).and_call_original
        end

        reaper.execute
      end
    end
  end
end
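# Illustrative sketch (not part of the suite): the cluster wires both reapers
# into a single background executor, so driving it manually (as the cursor
# reaper specs above do) runs the same code path as the background thread.
#
#   executor = cluster.instance_variable_get(:@periodic_executor)
#   executor.stop!   # pause the background sweep
#   executor.flush   # run pending kill-cursor ops now
#   executor.run!    # resume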
mongo-ruby-driver-2.21.3/spec/mongo/cluster/topology/replica_set_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe Mongo::Cluster::Topology::ReplicaSetNoPrimary do
  let(:address) do
    Mongo::Address.new('127.0.0.1:27017')
  end

  let(:listeners) do
    Mongo::Event::Listeners.new
  end

  let(:monitoring) do
    Mongo::Monitoring.new(monitoring: false)
  end

  # Cluster needs a topology and topology needs a cluster...
  # This temporary cluster is used for topology construction.
  let(:temp_cluster) do
    double('temp cluster').tap do |cluster|
      allow(cluster).to receive(:servers_list).and_return([])
    end
  end

  let(:cluster) do
    double('cluster').tap do |cl|
      allow(cl).to receive(:topology).and_return(topology)
      allow(cl).to receive(:app_metadata).and_return(app_metadata)
      allow(cl).to receive(:options).and_return({})
    end
  end

  describe '#servers' do
    let(:mongos) do
      Mongo::Server.new(address, cluster, monitoring, listeners,
        SpecConfig.instance.test_options.merge(monitoring_io: false)
      ).tap do |server|
        allow(server).to receive(:description).and_return(mongos_description)
      end
    end

    let(:standalone) do
      Mongo::Server.new(address, cluster, monitoring, listeners,
        SpecConfig.instance.test_options.merge(monitoring_io: false)
      ).tap do |server|
        allow(server).to receive(:description).and_return(standalone_description)
      end
    end

    let(:replica_set) do
      Mongo::Server.new(address, cluster, monitoring, listeners,
        SpecConfig.instance.test_options.merge(monitoring_io: false)
      ).tap do |server|
        allow(server).to receive(:description).and_return(replica_set_description)
      end
    end

    let(:replica_set_two) do
      Mongo::Server.new(address, cluster, monitoring, listeners,
        SpecConfig.instance.test_options.merge(monitoring_io: false)
      ).tap do |server|
        allow(server).to receive(:description).and_return(replica_set_two_description)
      end
    end

    let(:mongos_description) do
      Mongo::Server::Description.new(address, { 'msg' => 'isdbgrid',
        'minWireVersion' => 2, 'maxWireVersion' => 8, 'ok' => 1 })
    end

    let(:standalone_description) do
      Mongo::Server::Description.new(address, { 'isWritablePrimary' => true,
        'minWireVersion' => 2, 'maxWireVersion' => 8, 'ok' => 1 })
    end

    let(:replica_set_description) do
      Mongo::Server::Description.new(address, { 'isWritablePrimary' => true,
        'minWireVersion' => 2, 'maxWireVersion' => 8, 'setName' => 'testing', 'ok' => 1 })
    end

    let(:replica_set_two_description) do
      Mongo::Server::Description.new(address, { 'isWritablePrimary' => true,
        'minWireVersion' => 2, 'maxWireVersion' => 8, 'setName' => 'test', 'ok' => 1 })
    end

    context 'when a replica set name is provided' do
      let(:topology) do
        described_class.new({ :replica_set_name => 'testing' }, monitoring, temp_cluster)
      end

      let(:servers) do
        topology.servers([ mongos, standalone, replica_set, replica_set_two ])
      end

      it 'returns only replica set members in the provided set' do
        expect(servers).to eq([ replica_set ])
      end
    end
  end

  describe '.replica_set?' do
    it 'returns true' do
      expect(described_class.new({replica_set_name: 'foo'}, monitoring, temp_cluster)).to be_replica_set
    end
  end

  describe '.sharded?' do
    it 'returns false' do
      expect(described_class.new({replica_set_name: 'foo'}, monitoring, temp_cluster)).to_not be_sharded
    end
  end

  describe '.single?' do
    it 'returns false' do
      expect(described_class.new({replica_set_name: 'foo'}, monitoring, temp_cluster)).to_not be_single
    end
  end

  describe '#max_election_id' do
    let(:election_id) { BSON::ObjectId.new }

    it 'returns value given in constructor options' do
      topology = described_class.new({replica_set_name: 'foo', max_election_id: election_id},
        monitoring, temp_cluster)
      expect(topology.max_election_id).to eql(election_id)
    end
  end

  describe '#max_set_version' do
    it 'returns value given in constructor options' do
      topology = described_class.new({replica_set_name: 'foo', max_set_version: 5},
        monitoring, temp_cluster)
      expect(topology.max_set_version).to eq(5)
    end
  end
  describe '#has_readable_servers?' do
    let(:topology) do
      described_class.new({replica_set_name: 'foo'}, monitoring, temp_cluster)
    end

    let(:cluster) do
      double('cluster',
        servers: servers,
        single?: false,
        replica_set?: true,
        sharded?: false,
        unknown?: false,
      )
    end

    context 'when the read preference is primary' do
      let(:selector) do
        Mongo::ServerSelector.get(:mode => :primary)
      end

      context 'when a primary exists' do
        let(:servers) do
          [ double('server',
            primary?: true,
            # for runs with linting enabled
            average_round_trip_time: 42,
          ) ]
        end

        it 'returns true' do
          expect(topology).to have_readable_server(cluster, selector)
        end
      end

      context 'when a primary does not exist' do
        let(:servers) do
          [ double('server', primary?: false) ]
        end

        it 'returns false' do
          expect(topology).to_not have_readable_server(cluster, selector)
        end
      end
    end

    context 'when the read preference is primary preferred' do
      let(:selector) do
        Mongo::ServerSelector.get(:mode => :primary_preferred)
      end

      context 'when a primary exists' do
        let(:servers) do
          [ double('server',
            primary?: true,
            secondary?: false,
            # for runs with linting enabled
            average_round_trip_time: 42,
          ) ]
        end

        it 'returns true' do
          expect(topology).to have_readable_server(cluster, selector)
        end
      end

      context 'when a primary does not exist' do
        let(:servers) do
          [ double('server', primary?: false, secondary?: true, average_round_trip_time: 0.01) ]
        end

        it 'returns true' do
          expect(topology).to have_readable_server(cluster, selector)
        end
      end
    end

    context 'when the read preference is secondary' do
      let(:selector) do
        Mongo::ServerSelector.get(:mode => :secondary)
      end

      context 'when a secondary exists' do
        let(:servers) do
          [ double('server', primary?: false, secondary?: true, average_round_trip_time: 0.01) ]
        end

        it 'returns true' do
          expect(topology).to have_readable_server(cluster, selector)
        end
      end

      context 'when a secondary does not exist' do
        let(:servers) do
          [ double('server', primary?: true, secondary?: false) ]
        end

        it 'returns false' do
          expect(topology).to_not have_readable_server(cluster, selector)
        end
      end
    end

    context 'when the read preference is secondary preferred' do
      let(:selector) do
        Mongo::ServerSelector.get(:mode => :secondary_preferred)
      end

      context 'when a secondary exists' do
        let(:servers) do
          [ double('server', primary?: false, secondary?: true, average_round_trip_time: 0.01) ]
        end

        it 'returns true' do
          expect(topology).to have_readable_server(cluster, selector)
        end
      end

      context 'when a secondary does not exist' do
        let(:servers) do
          [ double('server',
            secondary?: false,
            primary?: true,
            # for runs with linting enabled
            average_round_trip_time: 42,
          ) ]
        end

        it 'returns true' do
          expect(topology).to have_readable_server(cluster, selector)
        end
      end
    end

    context 'when the read preference is nearest' do
      let(:selector) do
        Mongo::ServerSelector.get(:mode => :nearest)
      end

      let(:servers) do
        [ double('server', primary?: false, secondary?: true, average_round_trip_time: 0.01) ]
      end

      it 'returns true' do
        expect(topology).to have_readable_server(cluster, selector)
      end
    end

    context 'when the read preference is not provided' do
      context 'when a primary exists' do
        let(:servers) do
          [ double('server',
            primary?: true,
            secondary?: false,
            # for runs with linting enabled
            average_round_trip_time: 42,
          ) ]
        end

        it 'returns true' do
          expect(topology).to have_readable_server(cluster)
        end
      end

      context 'when a primary does not exist' do
        let(:servers) do
          [ double('server', primary?: false, secondary?: true, average_round_trip_time: 0.01) ]
        end

        it 'returns false' do
          expect(topology).to_not have_readable_server(cluster)
        end
      end
    end
  end
  describe '#has_writable_servers?' do
    let(:topology) do
      described_class.new({replica_set_name: 'foo'}, monitoring, temp_cluster)
    end

    context 'when a primary server exists' do
      let(:primary) do
        double('server',
          :primary? => true,
          # for runs with linting enabled
          average_round_trip_time: 42,
        )
      end

      let(:secondary) do
        double('server',
          :primary? => false,
          # for runs with linting enabled
          average_round_trip_time: 42,
        )
      end

      let(:cluster) do
        double('cluster',
          single?: false,
          replica_set?: true,
          sharded?: false,
          servers: [ primary, secondary ],
        )
      end

      it 'returns true' do
        expect(topology).to have_writable_server(cluster)
      end
    end

    context 'when no primary server exists' do
      let(:server) do
        double('server', :primary? => false)
      end

      let(:cluster) do
        double('cluster',
          single?: false,
          replica_set?: true,
          sharded?: false,
          servers: [ server ],
        )
      end

      it 'returns false' do
        expect(topology).to_not have_writable_server(cluster)
      end
    end
  end

  describe '#new_max_set_version' do
    context 'initially nil' do
      let(:topology) do
        described_class.new({replica_set_name: 'foo'}, monitoring, temp_cluster).tap do |topology|
          expect(topology.max_set_version).to be nil
        end
      end

      context 'description with non-nil max set version' do
        let(:description) do
          Mongo::Server::Description.new('a', { 'setVersion' => 5 }).tap do |description|
            expect(description.set_version).to eq(5)
          end
        end

        it 'is set to max set version in description' do
          expect(topology.new_max_set_version(description)).to eq(5)
        end
      end

      context 'description with nil max set version' do
        let(:description) do
          Mongo::Server::Description.new('a').tap do |description|
            expect(description.set_version).to be nil
          end
        end

        it 'is nil' do
          expect(topology.new_max_set_version(description)).to be nil
        end
      end
    end

    context 'initially not nil' do
      let(:topology) do
        described_class.new({replica_set_name: 'foo', max_set_version: 4},
          monitoring, temp_cluster
        ).tap do |topology|
          expect(topology.max_set_version).to eq(4)
        end
      end

      context 'description with a higher max set version' do
        let(:description) do
          Mongo::Server::Description.new('a', { 'setVersion' => 5 }).tap do |description|
            expect(description.set_version).to eq(5)
          end
        end

        it 'is set to max set version in description' do
          expect(topology.new_max_set_version(description)).to eq(5)
        end
      end

      context 'description with a lower max set version' do
        let(:description) do
          Mongo::Server::Description.new('a', { 'setVersion' => 3 }).tap do |description|
            expect(description.set_version).to eq(3)
          end
        end

        it 'is set to topology max set version' do
          expect(topology.new_max_set_version(description)).to eq(4)
        end
      end

      context 'description with nil max set version' do
        let(:description) do
          Mongo::Server::Description.new('a').tap do |description|
            expect(description.set_version).to be nil
          end
        end

        it 'is set to topology max set version' do
          expect(topology.new_max_set_version(description)).to eq(4)
        end
      end
    end
  end

  describe '#new_max_election_id' do
    context 'initially nil' do
      let(:topology) do
        described_class.new({replica_set_name: 'foo'}, monitoring, temp_cluster,
        ).tap do |topology|
          expect(topology.max_election_id).to be nil
        end
      end

      context 'description with non-nil max election id' do
        let(:new_election_id) { BSON::ObjectId.from_string('7fffffff000000000000004f') }

        let(:description) do
          Mongo::Server::Description.new('a', { 'electionId' => new_election_id }).tap do |description|
            expect(description.election_id).to be new_election_id
          end
        end

        it 'is set to max election id in description' do
          expect(topology.new_max_election_id(description)).to be new_election_id
        end
      end
      context 'description with nil max election id' do
        let(:description) do
          Mongo::Server::Description.new('a').tap do |description|
            expect(description.election_id).to be nil
          end
        end

        it 'is nil' do
          expect(topology.new_max_election_id(description)).to be nil
        end
      end
    end

    context 'initially not nil' do
      let(:old_election_id) { BSON::ObjectId.from_string('7fffffff000000000000004c') }

      let(:topology) do
        described_class.new({replica_set_name: 'foo', max_election_id: old_election_id},
          monitoring, temp_cluster,
        ).tap do |topology|
          expect(topology.max_election_id).to be old_election_id
        end
      end

      context 'description with a higher max election id' do
        let(:new_election_id) { BSON::ObjectId.from_string('7fffffff000000000000004f') }

        let(:description) do
          Mongo::Server::Description.new('a', { 'electionId' => new_election_id }).tap do |description|
            expect(description.election_id).to be new_election_id
          end
        end

        it 'is set to max election id in description' do
          expect(topology.new_max_election_id(description)).to be new_election_id
        end
      end

      context 'description with a lower max election id' do
        let(:low_election_id) { BSON::ObjectId.from_string('7fffffff0000000000000042') }

        let(:description) do
          Mongo::Server::Description.new('a', { 'electionId' => low_election_id }).tap do |description|
            expect(description.election_id).to be low_election_id
          end
        end

        it 'is set to topology max election id' do
          expect(topology.new_max_election_id(description)).to be old_election_id
        end
      end

      context 'description with nil max election id' do
        let(:description) do
          Mongo::Server::Description.new('a').tap do |description|
            expect(description.election_id).to be nil
          end
        end

        it 'is set to topology max election id' do
          expect(topology.new_max_election_id(description)).to be old_election_id
        end
      end
    end
  end

  describe '#summary' do
    require_no_linting

    let(:desc) do
      Mongo::Server::Description.new(Mongo::Address.new('127.0.0.2:27017'))
    end

    let(:topology) do
      described_class.new({replica_set_name: 'foo'}, monitoring, temp_cluster)
    end

    it 'renders correctly' do
      expect(topology).to receive(:server_descriptions).and_return({desc.address.to_s => desc})
      expect(topology.summary).to eq('ReplicaSetNoPrimary[127.0.0.2:27017,name=foo]')
    end

    context 'with max set version and max election id' do
      let(:topology) do
        described_class.new({
          replica_set_name: 'foo',
          max_set_version: 5,
          max_election_id: BSON::ObjectId.from_string('7fffffff0000000000000042'),
        }, monitoring, temp_cluster)
      end

      it 'renders correctly' do
        expect(topology).to receive(:server_descriptions).and_return({desc.address.to_s => desc})
        expect(topology.summary).to eq('ReplicaSetNoPrimary[127.0.0.2:27017,name=foo,v=5,e=7fffffff0000000000000042]')
      end
    end
  end
end
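# Illustrative sketch (not part of the suite): the monotonic tracking the
# examples above specify. The topology keeps the larger of its current value
# and the one reported in a server description, and never moves backwards.
#
#   topology.new_max_set_version(description)  # => max(current, description's)
#   topology.new_max_election_id(description)  # => the newer of the two ids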
mongo-ruby-driver-2.21.3/spec/mongo/cluster/topology/sharded_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe Mongo::Cluster::Topology::Sharded do
  let(:address) do
    Mongo::Address.new('127.0.0.1:27017')
  end

  # Cluster needs a topology and topology needs a cluster...
  # This temporary cluster is used for topology construction.
  let(:temp_cluster) do
    double('temp cluster').tap do |cluster|
      allow(cluster).to receive(:servers_list).and_return([])
    end
  end

  let(:topology) do
    described_class.new({}, monitoring, temp_cluster)
  end

  let(:monitoring) do
    Mongo::Monitoring.new(monitoring: false)
  end

  let(:listeners) do
    Mongo::Event::Listeners.new
  end

  let(:cluster) do
    double('cluster').tap do |cl|
      allow(cl).to receive(:topology).and_return(topology)
      allow(cl).to receive(:app_metadata).and_return(app_metadata)
      allow(cl).to receive(:options).and_return({})
    end
  end

  let(:mongos) do
    Mongo::Server.new(address, cluster, monitoring, listeners,
      SpecConfig.instance.test_options.merge(monitoring_io: false)
    ).tap do |server|
      allow(server).to receive(:description).and_return(mongos_description)
    end
  end

  let(:standalone) do
    Mongo::Server.new(address, cluster, monitoring, listeners,
      SpecConfig.instance.test_options.merge(monitoring_io: false)
    ).tap do |server|
      allow(server).to receive(:description).and_return(standalone_description)
    end
  end

  let(:replica_set) do
    Mongo::Server.new(address, cluster, monitoring, listeners,
      SpecConfig.instance.test_options.merge(monitoring_io: false)
    ).tap do |server|
      allow(server).to receive(:description).and_return(replica_set_description)
    end
  end

  let(:mongos_description) do
    Mongo::Server::Description.new(address, { 'msg' => 'isdbgrid',
      'minWireVersion' => 2, 'maxWireVersion' => 8, 'ok' => 1 })
  end

  let(:standalone_description) do
    Mongo::Server::Description.new(address, { 'isWritablePrimary' => true,
      'minWireVersion' => 2, 'maxWireVersion' => 8, 'ok' => 1 })
  end

  let(:replica_set_description) do
    Mongo::Server::Description.new(address, { 'isWritablePrimary' => true,
      'minWireVersion' => 2, 'maxWireVersion' => 8, 'setName' => 'testing', 'ok' => 1 })
  end

  describe '#initialize' do
    let(:topology) do
      Mongo::Cluster::Topology::Sharded.new(
        {replica_set_name: 'foo'}, monitoring, temp_cluster)
    end

    it 'does not accept RS name' do
      expect do
        topology
      end.to raise_error(ArgumentError, 'Topology Mongo::Cluster::Topology::Sharded cannot have the :replica_set_name option set')
    end
  end

  describe '.servers' do
    let(:servers) do
      topology.servers([ mongos, standalone, replica_set ])
    end

    it 'returns only mongos servers' do
      expect(servers).to eq([ mongos ])
    end
  end

  describe '.replica_set?' do
    it 'returns false' do
      expect(topology).to_not be_replica_set
    end
  end

  describe '.sharded?' do
    it 'returns true' do
      expect(topology).to be_sharded
    end
  end

  describe '.single?' do
    it 'returns false' do
      expect(topology).to_not be_single
    end
  end

  describe '#has_readable_servers?' do
    it 'returns true' do
      expect(topology).to have_readable_server(nil, nil)
    end
  end
  describe '#has_writable_servers?' do
    it 'returns true' do
      expect(topology).to have_writable_server(nil)
    end
  end

  describe '#summary' do
    require_no_linting

    let(:desc1) do
      Mongo::Server::Description.new(Mongo::Address.new('127.0.0.2:27017'))
    end

    let(:desc2) do
      Mongo::Server::Description.new(Mongo::Address.new('127.0.0.2:27027'))
    end

    it 'renders correctly' do
      expect(topology).to receive(:server_descriptions).and_return({
        desc1.address.to_s => desc1,
        desc2.address.to_s => desc2,
      })
      expect(topology.summary).to eq('Sharded[127.0.0.2:27017,127.0.0.2:27027]')
    end
  end
end

mongo-ruby-driver-2.21.3/spec/mongo/cluster/topology/single_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe Mongo::Cluster::Topology::Single do
  let(:address) do
    Mongo::Address.new('127.0.0.1:27017')
  end

  let(:monitoring) do
    Mongo::Monitoring.new(monitoring: false)
  end

  # Cluster needs a topology and topology needs a cluster...
  # This temporary cluster is used for topology construction.
  let(:temp_cluster) do
    double('temp cluster').tap do |cluster|
      allow(cluster).to receive(:servers_list).and_return([])
    end
  end

  let(:topology) do
    described_class.new({}, monitoring, temp_cluster)
  end

  let(:listeners) do
    Mongo::Event::Listeners.new
  end

  let(:cluster) do
    double('cluster').tap do |cl|
      allow(cl).to receive(:app_metadata).and_return(app_metadata)
      allow(cl).to receive(:topology).and_return(topology)
      allow(cl).to receive(:options).and_return({})
    end
  end

  describe '.servers' do
    let(:mongos) do
      Mongo::Server.new(address, cluster, monitoring, listeners,
        SpecConfig.instance.test_options.merge(monitoring_io: false)
      ).tap do |server|
        allow(server).to receive(:description).and_return(mongos_description)
      end
    end

    let(:standalone) do
      Mongo::Server.new(address, cluster, monitoring, listeners,
        SpecConfig.instance.test_options.merge(monitoring_io: false)
      ).tap do |server|
        allow(server).to receive(:description).and_return(standalone_description)
      end
    end

    let(:standalone_two) do
      Mongo::Server.new(address, cluster, monitoring, listeners,
        SpecConfig.instance.test_options.merge(monitoring_io: false)
      ).tap do |server|
        allow(server).to receive(:description).and_return(standalone_description)
      end
    end

    let(:replica_set) do
      Mongo::Server.new(address, cluster, monitoring, listeners,
        SpecConfig.instance.test_options.merge(monitoring_io: false)
      ).tap do |server|
        allow(server).to receive(:description).and_return(replica_set_description)
      end
    end

    let(:mongos_description) do
      Mongo::Server::Description.new(address, { 'msg' => 'isdbgrid' })
    end

    let(:standalone_description) do
      Mongo::Server::Description.new(address, { 'isWritablePrimary' => true,
        'minWireVersion' => 2, 'maxWireVersion' => 8, 'ok' => 1 })
    end

    let(:replica_set_description) do
      Mongo::Server::Description.new(address, { 'isWritablePrimary' => true,
        'minWireVersion' => 2, 'maxWireVersion' => 8, 'setName' => 'testing' })
    end

    let(:servers) do
      topology.servers([ mongos, standalone, standalone_two, replica_set ])
    end

    it 'returns all data-bearing non-unknown servers' do
      # mongos and replica_set do not have ok: 1 in their descriptions,
      # and are considered unknown.
      expect(servers).to eq([ standalone, standalone_two ])
    end
  end

  describe '#initialize' do
    context 'with RS name' do
      let(:topology) do
        Mongo::Cluster::Topology::Single.new(
          {replica_set_name: 'foo'}, monitoring, temp_cluster)
      end

      it 'accepts RS name' do
        expect(topology.replica_set_name).to eq('foo')
      end
    end

    context 'with more than one server in topology' do
      let(:topology) do
        Mongo::Cluster::Topology::Single.new({}, monitoring, temp_cluster)
      end

      let(:server_1) do
        double('server').tap do |server|
          allow(server).to receive(:address).and_return(Mongo::Address.new('one'))
        end
      end

      let(:server_2) do
        double('server').tap do |server|
          allow(server).to receive(:address).and_return(Mongo::Address.new('two'))
        end
      end

      let(:temp_cluster) do
        double('temp cluster').tap do |cluster|
          allow(cluster).to receive(:servers_list).and_return([server_1, server_2])
        end
      end

      it 'fails' do
        expect do
          topology
        end.to raise_error(ArgumentError, /Cannot instantiate a single topology with more than one server in the cluster: one, two/)
      end
    end
  end

  describe '.replica_set?' do
    it 'returns false' do
      expect(topology).to_not be_replica_set
    end
  end

  describe '.sharded?' do
    it 'returns false' do
      expect(topology).to_not be_sharded
    end
  end

  describe '.single?' do
    it 'returns true' do
      expect(topology).to be_single
    end
  end

  describe '#has_readable_servers?' do
    it 'returns true' do
      expect(topology).to have_readable_server(nil, nil)
    end
  end

  describe '#has_writable_servers?' do
    it 'returns true' do
      expect(topology).to have_writable_server(nil)
    end
  end

  describe '#summary' do
    require_no_linting

    let(:desc) do
      Mongo::Server::Description.new(Mongo::Address.new('127.0.0.2:27017'))
    end

    it 'renders correctly' do
      expect(topology).to receive(:server_descriptions).and_return({desc.address.to_s => desc})
      expect(topology.summary).to eq('Single[127.0.0.2:27017]')
    end
  end
end

mongo-ruby-driver-2.21.3/spec/mongo/cluster/topology/unknown_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe Mongo::Cluster::Topology::Unknown do
  let(:monitoring) do
    Mongo::Monitoring.new(monitoring: false)
  end

  # Cluster needs a topology and topology needs a cluster...
  # This temporary cluster is used for topology construction.
  let(:temp_cluster) do
    double('temp cluster').tap do |cluster|
      allow(cluster).to receive(:servers_list).and_return([])
    end
  end

  let(:topology) do
    described_class.new({}, monitoring, temp_cluster)
  end

  describe '#initialize' do
    let(:topology) do
      Mongo::Cluster::Topology::Unknown.new(
        {replica_set_name: 'foo'}, monitoring, temp_cluster)
    end

    it 'does not accept RS name' do
      expect do
        topology
      end.to raise_error(ArgumentError, 'Topology Mongo::Cluster::Topology::Unknown cannot have the :replica_set_name option set')
    end
  end

  describe '.servers' do
    let(:servers) do
      topology.servers([ double('mongos'), double('standalone') ])
    end

    it 'returns an empty array' do
      expect(servers).to eq([ ])
    end
  end

  describe '.replica_set?' do
    it 'returns false' do
      expect(topology).to_not be_replica_set
    end
  end

  describe '.sharded?' do
    it 'returns false' do
      expect(topology).not_to be_sharded
    end
  end

  describe '.single?' do
    it 'returns false' do
      expect(topology).not_to be_single
    end
  end

  describe '.unknown?' do
    it 'returns true' do
      expect(topology.unknown?).to be(true)
    end
  end

  describe '#has_readable_servers?' do
    it 'returns false' do
      expect(topology).to_not have_readable_server(nil, nil)
    end
  end
do it 'returns false' do expect(topology).to_not have_writable_server(nil) end end describe '#summary' do require_no_linting let(:desc) do Mongo::Server::Description.new(Mongo::Address.new('127.0.0.2:27017')) end it 'renders correctly' do expect(topology).to receive(:server_descriptions).and_return({desc.address.to_s => desc}) expect(topology.summary).to eq('Unknown[127.0.0.2:27017]') end end end mongo-ruby-driver-2.21.3/spec/mongo/cluster/topology_spec.rb000066400000000000000000000133301505113246500241350ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'lite_spec_helper' describe Mongo::Cluster::Topology do let(:monitoring) do Mongo::Monitoring.new(monitoring: false) end let(:cluster) { Mongo::Cluster.new(['a'], Mongo::Monitoring.new, monitoring_io: false) } describe '.initial' do context 'when provided a replica set option' do let(:topology) do described_class.initial(cluster, monitoring, connect: :replica_set, replica_set_name: 'foo') end it 'returns a replica set topology' do expect(topology).to be_a(Mongo::Cluster::Topology::ReplicaSetNoPrimary) end context 'when the option is a String (due to YAML parsing)' do let(:topology) do described_class.initial(cluster, monitoring, connect: 'replica_set', replica_set_name: 'foo') end it 'returns a replica set topology' do expect(topology).to be_a(Mongo::Cluster::Topology::ReplicaSetNoPrimary) end end end context 'when provided a single option' do let(:topology) do described_class.initial(cluster, monitoring, connect: :direct) end it 'returns a single topology' do expect(topology).to be_a(Mongo::Cluster::Topology::Single) end it 'sets the seed on the topology' do expect(topology.addresses).to eq(['a']) end context 'when the option is a String (due to YAML parsing)' do let(:topology) do described_class.initial(cluster, monitoring, connect: 'direct') end it 'returns a single topology' do expect(topology).to be_a(Mongo::Cluster::Topology::Single) end it 'sets the seed on the topology' do expect(topology.addresses).to eq(['a']) end end end context 'when provided a sharded option' do let(:topology) do described_class.initial(cluster, monitoring, connect: :sharded) end it 'returns a sharded topology' do expect(topology).to be_a(Mongo::Cluster::Topology::Sharded) end context 'when the option is a String (due to YAML parsing)' do let(:topology) do described_class.initial(cluster, monitoring, connect: 'sharded') end it 'returns a sharded topology' do expect(topology).to be_a(Mongo::Cluster::Topology::Sharded) end end end context 'when provided no option' do context 'when a set name is in the options' do let(:topology) do described_class.initial(cluster, monitoring, replica_set_name: 'testing') end it 'returns a replica set topology' do expect(topology).to be_a(Mongo::Cluster::Topology::ReplicaSetNoPrimary) end end context 'when no set name is in the options' do let(:topology) do described_class.initial(cluster, monitoring, {}) end it 'returns an unknown topology' do expect(topology).to be_a(Mongo::Cluster::Topology::Unknown) end end end end describe '#logical_session_timeout' do require_no_linting let(:listeners) do Mongo::Event::Listeners.new end let(:monitoring) do Mongo::Monitoring.new(monitoring: false) end let(:server_one) do Mongo::Server.new(Mongo::Address.new('a:27017'), cluster, monitoring, listeners, monitoring_io: false) end let(:server_two) do Mongo::Server.new(Mongo::Address.new('b:27017'), cluster, monitoring, listeners, monitoring_io: false) end let(:servers) do [ server_one, server_two ] end 
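# Summary of the behavior exercised below: the topology-wide logical session
# timeout is the minimum value reported across the data-bearing servers, and
# it collapses to nil as soon as any data-bearing server reports nil, e.g.
#   [7, 3].min # => 3, whereas 7 paired with nil yields nil (no usable timeout)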
let(:topology) do Mongo::Cluster::Topology::Sharded.new({}, monitoring, cluster) end before do expect(cluster).to receive(:servers_list).and_return(servers) end context 'when servers are data bearing' do before do expect(server_one.description).to receive(:primary?).and_return(true) allow(server_two.description).to receive(:primary?).and_return(true) end context 'when one server has a nil logical session timeout value' do before do expect(server_one.description).to receive(:logical_session_timeout).and_return(7) expect(server_two.description).to receive(:logical_session_timeout).and_return(nil) end it 'returns nil' do expect(topology.logical_session_timeout).to be(nil) end end context 'when all servers have a logical session timeout value' do before do expect(server_one.description).to receive(:logical_session_timeout).and_return(7) expect(server_two.description).to receive(:logical_session_timeout).and_return(3) end it 'returns the minimum' do expect(topology.logical_session_timeout).to be(3) end end context 'when no servers have a logical session timeout value' do before do expect(server_one.description).to receive(:logical_session_timeout).and_return(nil) allow(server_two.description).to receive(:logical_session_timeout).and_return(nil) end it 'returns nil' do expect(topology.logical_session_timeout).to be(nil) end end end context 'when servers are not data bearing' do before do expect(server_one).to be_unknown expect(server_two).to be_unknown end context 'when all servers have a logical session timeout value' do before do expect(server_one).not_to receive(:logical_session_timeout) expect(server_two).not_to receive(:logical_session_timeout) end it 'returns nil' do expect(topology.logical_session_timeout).to be nil end end end end end mongo-ruby-driver-2.21.3/spec/mongo/cluster_spec.rb000066400000000000000000000544661505113246500223000ustar00rootroot00000000000000# frozen_string_literal: true require 'spec_helper' require 'support/recording_logger' # let these existing styles stand, rather than going in for a deep refactoring # of these specs. # # possible future work: re-enable these one at a time and do the hard work of # making them right. 
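# RecordingLogger (required from spec support above) is assumed to buffer log
# output in memory so examples can match against logger.lines, e.g.
#   expect(logger.lines).to include(a_string_matching(/cosmosdb/))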
# # rubocop:disable RSpec/ContextWording, RSpec/VerifiedDoubles, RSpec/MessageSpies # rubocop:disable RSpec/ExpectInHook, RSpec/ExampleLength describe Mongo::Cluster do let(:monitoring) do Mongo::Monitoring.new(monitoring: false) end let(:cluster_with_semaphore) do register_cluster( described_class.new( SpecConfig.instance.addresses, monitoring, SpecConfig.instance.test_options.merge( server_selection_semaphore: Mongo::Semaphore.new ) ) ) end let(:cluster_without_io) do register_cluster( described_class.new( SpecConfig.instance.addresses, monitoring, SpecConfig.instance.test_options.merge(monitoring_io: false) ) ) end let(:cluster) { cluster_without_io } describe 'initialize' do context 'when there are duplicate addresses' do let(:addresses) do SpecConfig.instance.addresses + SpecConfig.instance.addresses end let(:cluster_with_dup_addresses) do register_cluster( described_class.new(addresses, monitoring, SpecConfig.instance.test_options) ) end it 'does not raise an exception' do expect { cluster_with_dup_addresses }.not_to raise_error end end context 'when topology is load-balanced' do require_topology :load_balanced it 'emits SDAM events' do allow(monitoring).to receive(:succeeded) register_cluster( described_class.new( SpecConfig.instance.addresses, monitoring, SpecConfig.instance.test_options ) ) expect(monitoring).to have_received(:succeeded).with( Mongo::Monitoring::TOPOLOGY_OPENING, any_args ) expect(monitoring).to have_received(:succeeded).with( Mongo::Monitoring::TOPOLOGY_CHANGED, any_args ).twice expect(monitoring).to have_received(:succeeded).with( Mongo::Monitoring::SERVER_OPENING, any_args ) expect(monitoring).to have_received(:succeeded).with( Mongo::Monitoring::SERVER_DESCRIPTION_CHANGED, any_args ) end end context 'when a non-genuine host is detected' do before { described_class.new(host_names, monitoring, logger: logger, monitoring_io: false) } let(:logger) { RecordingLogger.new } shared_examples 'an action that logs' do it 'writes a warning to the log' do expect(logger.lines).to include(a_string_matching(expected_log_output)) end end context 'when CosmosDB is detected' do let(:host_names) { %w[ xyz.cosmos.azure.com ] } let(:expected_log_output) { %r{https://www.mongodb.com/supportability/cosmosdb} } it_behaves_like 'an action that logs' end context 'when DocumentDB is detected' do let(:expected_log_output) { %r{https://www.mongodb.com/supportability/documentdb} } context 'with docdb uri' do let(:host_names) { [ 'xyz.docdb.amazonaws.com' ] } it_behaves_like 'an action that logs' end context 'with docdb-elastic uri' do let(:host_names) { [ 'xyz.docdb-elastic.amazonaws.com' ] } it_behaves_like 'an action that logs' end end end end describe '#==' do context 'when the other is a cluster' do context 'when the addresses are the same' do context 'when the options are the same' do let(:other) do described_class.new( SpecConfig.instance.addresses, monitoring, SpecConfig.instance.test_options.merge(monitoring_io: false) ) end it 'returns true' do expect(cluster_without_io).to eq(other) end end context 'when the options are not the same' do let(:other) do described_class.new( [ '127.0.0.1:27017' ], monitoring, SpecConfig.instance.test_options.merge(replica_set: 'test', monitoring_io: false) ) end it 'returns false' do expect(cluster_without_io).not_to eq(other) end end end context 'when the addresses are not the same' do let(:other) do described_class.new( [ '127.0.0.1:27999' ], monitoring, SpecConfig.instance.test_options.merge(monitoring_io: false) ) end it 'returns false' do 
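# Per the contexts above, cluster equality requires both the address list and
# the options to match; a differing port alone is enough to break equality.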
expect(cluster_without_io).not_to eq(other) end end end context 'when the other is not a cluster' do it 'returns false' do expect(cluster_without_io).not_to eq('test') end end end describe '#has_readable_server?' do let(:selector) do Mongo::ServerSelector.primary end it 'delegates to the topology' do expect(cluster_without_io.has_readable_server?) .to eq(cluster_without_io.topology.has_readable_server?(cluster_without_io)) end end describe '#has_writable_server?' do it 'delegates to the topology' do expect(cluster_without_io.has_writable_server?) .to eq(cluster_without_io.topology.has_writable_server?(cluster_without_io)) end end describe '#inspect' do let(:preference) do Mongo::ServerSelector.primary end it 'displays the cluster seeds and topology' do expect(cluster_without_io.inspect).to include('topology') expect(cluster_without_io.inspect).to include('servers') end end describe '#replica_set_name' do let(:preference) do Mongo::ServerSelector.primary end context 'when the option is provided' do let(:cluster) do described_class.new( [ '127.0.0.1:27017' ], monitoring, { monitoring_io: false, connect: :replica_set, replica_set: 'testing' } ) end it 'returns the name' do expect(cluster.replica_set_name).to eq('testing') end end context 'when the option is not provided' do let(:cluster) do described_class.new( [ '127.0.0.1:27017' ], monitoring, { monitoring_io: false, connect: :direct } ) end it 'returns nil' do expect(cluster.replica_set_name).to be_nil end end end describe '#scan!' do let(:preference) do Mongo::ServerSelector.primary end let(:known_servers) do cluster.instance_variable_get(:@servers) end let(:server) { known_servers.first } let(:monitor) do double('monitor') end before do expect(server).to receive(:monitor).at_least(:once).and_return(monitor) expect(monitor).to receive(:scan!) # scan! complains that there isn't a monitor on the server, calls summary allow(monitor).to receive(:running?) end it 'returns true' do expect(cluster.scan!).to be true end end describe '#servers' do let(:cluster) { cluster_with_semaphore } context 'when topology is single' do before do skip 'Topology is not a single server' unless ClusterConfig.instance.single_server? 
end context 'when the server is a mongos' do require_topology :sharded it 'returns the mongos' do expect(cluster.servers.size).to eq(1) end end context 'when the server is a replica set member' do require_topology :replica_set it 'returns the replica set member' do expect(cluster.servers.size).to eq(1) end end end context 'when the cluster has no servers' do let(:servers) do [] end before do cluster_without_io.instance_variable_set(:@servers, servers) cluster_without_io.instance_variable_set(:@topology, topology) end context 'when topology is Single' do let(:topology) do Mongo::Cluster::Topology::Single.new({}, monitoring, cluster_without_io) end it 'returns an empty array' do expect(cluster_without_io.servers).to eq([]) end end context 'when topology is ReplicaSetNoPrimary' do let(:topology) do Mongo::Cluster::Topology::ReplicaSetNoPrimary.new({ replica_set_name: 'foo' }, monitoring, cluster_without_io) end it 'returns an empty array' do expect(cluster_without_io.servers).to eq([]) end end context 'when topology is Sharded' do let(:topology) do Mongo::Cluster::Topology::Sharded.new({}, monitoring, cluster_without_io) end it 'returns an empty array' do expect(cluster_without_io.servers).to eq([]) end end context 'when topology is Unknown' do let(:topology) do Mongo::Cluster::Topology::Unknown.new({}, monitoring, cluster_without_io) end it 'returns an empty array' do expect(cluster_without_io.servers).to eq([]) end end end end describe '#add' do context 'topology is Sharded' do require_topology :sharded let(:topology) do Mongo::Cluster::Topology::Sharded.new({}, cluster) end before do cluster.add('a') end it 'creates server with nil last_scan' do server = cluster.servers_list.detect do |srv| srv.address.seed == 'a' end expect(server).not_to be_nil expect(server.last_scan).to be_nil expect(server.last_scan_monotime).to be_nil end end end describe '#close' do let(:cluster) { cluster_with_semaphore } let(:known_servers) do cluster.instance_variable_get(:@servers) end let(:periodic_executor) do cluster.instance_variable_get(:@periodic_executor) end describe 'closing' do before do expect(known_servers).to all receive(:close).and_call_original expect(periodic_executor).to receive(:stop!).and_call_original end it 'disconnects each server and the cursor reaper and returns nil' do expect(cluster.close).to be_nil end end describe 'repeated closing' do before do expect(known_servers).to all receive(:close).and_call_original expect(periodic_executor).to receive(:stop!).and_call_original end let(:monitoring) { Mongo::Monitoring.new } let(:subscriber) { Mrss::EventSubscriber.new } it 'publishes server closed event once' do monitoring.subscribe(Mongo::Monitoring::SERVER_CLOSED, subscriber) expect(cluster.close).to be_nil expect(subscriber.first_event('server_closed_event')).not_to be_nil subscriber.succeeded_events.clear expect(cluster.close).to be_nil expect(subscriber.first_event('server_closed_event')).to be_nil end end end describe '#reconnect!' 
do let(:cluster) { cluster_with_semaphore } let(:periodic_executor) do cluster.instance_variable_get(:@periodic_executor) end before do cluster.next_primary expect(cluster.servers).to all receive(:reconnect!).and_call_original expect(periodic_executor).to receive(:restart!).and_call_original end it 'reconnects each server and the cursor reaper and returns true' do expect(cluster.reconnect!).to be(true) end end describe '#remove' do let(:address_a) { Mongo::Address.new('127.0.0.1:25555') } let(:address_b) { Mongo::Address.new('127.0.0.1:25556') } let(:monitoring) { Mongo::Monitoring.new(monitoring: false) } let(:server_a) do register_server( Mongo::Server.new( address_a, cluster, monitoring, Mongo::Event::Listeners.new, monitor: false ) ) end let(:server_b) do register_server( Mongo::Server.new( address_b, cluster, monitoring, Mongo::Event::Listeners.new, monitor: false ) ) end let(:servers) do [ server_a, server_b ] end let(:addresses) do [ address_a, address_b ] end before do cluster.instance_variable_set(:@servers, servers) cluster.remove('127.0.0.1:25555') end it 'removes the host from the list of servers' do expect(cluster.instance_variable_get(:@servers)).to eq([ server_b ]) end it 'removes the host from the list of addresses' do expect(cluster.addresses).to eq([ address_b ]) end end describe '#next_primary' do let(:cluster) do # We use next_primary to wait for server selection, and this is # also the method we are testing. authorized_client.tap { |client| client.cluster.next_primary }.cluster end let(:primary_candidates) do if cluster.single? || cluster.load_balanced? || cluster.sharded? cluster.servers else cluster.servers.select(&:primary?) end end it 'always returns the primary, mongos, or standalone' do expect(primary_candidates).to include(cluster.next_primary) end end describe '#app_metadata' do it 'returns an AppMetadata object' do expect(cluster_without_io.app_metadata).to be_a(Mongo::Server::AppMetadata) end context 'when the client has an app_name set' do let(:cluster) do authorized_client.with(app_name: 'cluster_test', monitoring_io: false).cluster end it 'constructs an AppMetadata object with the app_name' do expect(cluster.app_metadata.client_document[:application]).to eq('name' => 'cluster_test') end end context 'when the client does not have an app_name set' do let(:cluster) do authorized_client.cluster end it 'constructs an AppMetadata object with no app_name' do expect(cluster.app_metadata.client_document[:application]).to be_nil end end end describe '#cluster_time' do let(:operation) do client.command(ping: 1) end let(:operation_with_session) do client.command({ ping: 1 }, session: session) end let(:second_operation) do client.command({ ping: 1 }, session: session) end it_behaves_like 'an operation updating cluster time' end describe '#update_cluster_time' do let(:cluster) do described_class.new( SpecConfig.instance.addresses, monitoring, SpecConfig.instance.test_options.merge( heartbeat_frequency: 1000, monitoring_io: false ) ) end let(:result) do double('result', cluster_time: cluster_time_doc) end context 'when the cluster_time variable is nil' do before do cluster.instance_variable_set(:@cluster_time, nil) cluster.update_cluster_time(result) end context 'when the cluster time received is nil' do let(:cluster_time_doc) do nil end it 'does not set the cluster_time variable' do expect(cluster.cluster_time).to be_nil end end context 'when the cluster time received is not nil' do let(:cluster_time_doc) do BSON::Document.new(Mongo::Cluster::CLUSTER_TIME => 
BSON::Timestamp.new(1, 1)) end it 'sets the cluster_time variable to the cluster time doc' do expect(cluster.cluster_time).to eq(cluster_time_doc) end end end context 'when the cluster_time variable has a value' do before do cluster.instance_variable_set( :@cluster_time, Mongo::ClusterTime.new( Mongo::Cluster::CLUSTER_TIME => BSON::Timestamp.new(1, 1) ) ) cluster.update_cluster_time(result) end context 'when the cluster time received is nil' do let(:cluster_time_doc) do nil end it 'does not update the cluster_time variable' do expect(cluster.cluster_time).to eq( BSON::Document.new( Mongo::Cluster::CLUSTER_TIME => BSON::Timestamp.new(1, 1) ) ) end end context 'when the cluster time received is not nil' do context 'when the cluster time received is greater than the cluster_time variable' do let(:cluster_time_doc) do BSON::Document.new(Mongo::Cluster::CLUSTER_TIME => BSON::Timestamp.new(1, 2)) end it 'sets the cluster_time variable to the cluster time' do expect(cluster.cluster_time).to eq(cluster_time_doc) end end context 'when the cluster time received is less than the cluster_time variable' do let(:cluster_time_doc) do BSON::Document.new(Mongo::Cluster::CLUSTER_TIME => BSON::Timestamp.new(0, 1)) end it 'does not set the cluster_time variable to the cluster time' do expect(cluster.cluster_time).to eq( BSON::Document.new( Mongo::Cluster::CLUSTER_TIME => BSON::Timestamp.new(1, 1) ) ) end end context 'when the cluster time received is equal to the cluster_time variable' do let(:cluster_time_doc) do BSON::Document.new(Mongo::Cluster::CLUSTER_TIME => BSON::Timestamp.new(1, 1)) end it 'does not change the cluster_time variable' do expect(cluster.cluster_time).to eq( BSON::Document.new( Mongo::Cluster::CLUSTER_TIME => BSON::Timestamp.new(1, 1) ) ) end end end end end describe '#validate_session_support!' do shared_examples 'supports sessions' do it 'supports sessions' do expect { cluster.validate_session_support! } .not_to raise_error end end shared_examples 'does not support sessions' do it 'does not support sessions' do expect { cluster.validate_session_support! } .to raise_error(Mongo::Error::SessionsNotSupported) end end context 'in server < 3.6' do max_server_version '3.4' let(:cluster) { client.cluster } context 'in single topology' do require_topology :single let(:client) { ClientRegistry.instance.global_client('authorized') } it_behaves_like 'does not support sessions' end context 'in single topology with replica set name set' do require_topology :replica_set let(:client) do new_local_client( [ SpecConfig.instance.addresses.first ], SpecConfig.instance.test_options.merge( connect: :direct, replica_set: ClusterConfig.instance.replica_set_name ) ) end it_behaves_like 'does not support sessions' end context 'in replica set topology' do require_topology :replica_set let(:client) { ClientRegistry.instance.global_client('authorized') } it_behaves_like 'does not support sessions' end context 'in sharded topology' do require_topology :sharded let(:client) { ClientRegistry.instance.global_client('authorized') } it_behaves_like 'does not support sessions' end end context 'in server 3.6+' do min_server_fcv '3.6' let(:cluster) { client.cluster } context 'in single topology' do require_topology :single let(:client) { ClientRegistry.instance.global_client('authorized') } # Contrary to the session spec, 3.6 and 4.0 standalone servers # report a logical session timeout and thus are considered to # support sessions. 
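# Hedged sketch of why: a 3.6/4.0 standalone's hello/ismaster response is
# assumed to include logicalSessionTimeoutMinutes, so validate_session_support!
# sees a timeout and does not raise Mongo::Error::SessionsNotSupported.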
it_behaves_like 'supports sessions' end context 'in single topology with replica set name set' do require_topology :replica_set let(:client) do new_local_client( [ SpecConfig.instance.addresses.first ], SpecConfig.instance.test_options.merge( connect: :direct, replica_set: ClusterConfig.instance.replica_set_name ) ) end it_behaves_like 'supports sessions' end context 'in replica set topology' do require_topology :replica_set let(:client) { ClientRegistry.instance.global_client('authorized') } it_behaves_like 'supports sessions' end context 'in sharded topology' do require_topology :sharded let(:client) { ClientRegistry.instance.global_client('authorized') } it_behaves_like 'supports sessions' end end end { max_read_retries: 1, read_retry_interval: 5 }.each do |opt, default| describe "##{opt}" do let(:client_options) { {} } let(:client) do new_local_client_nmio([ '127.0.0.1:27017' ], client_options) end let(:cluster) do client.cluster end it "defaults to #{default}" do expect(default).not_to be_nil expect(client.options[opt]).to be_nil expect(cluster.send(opt)).to eq(default) end context 'specified on client' do let(:client_options) { { opt => 2 } } it 'inherits from client' do expect(client.options[opt]).to eq(2) expect(cluster.send(opt)).to eq(2) end end end end describe '#summary' do let(:default_address) { SpecConfig.instance.addresses.first } context 'cluster has unknown servers' do # Servers are never unknown in load-balanced topology. require_topology :single, :replica_set, :sharded it 'includes unknown servers' do expect(cluster.servers_list).to all be_unknown expect(cluster.summary).to match(/Server address=#{default_address}/) end end context 'cluster has known servers' do let(:client) { ClientRegistry.instance.global_client('authorized') } let(:cluster) { client.cluster } before do wait_for_all_servers(cluster) end it 'includes known servers' do cluster.servers_list.each do |server| expect(server).not_to be_unknown end expect(cluster.summary).to match(/Server address=#{default_address}/) end end end end # rubocop:enable RSpec/ContextWording, RSpec/VerifiedDoubles, RSpec/MessageSpies # rubocop:enable RSpec/ExpectInHook, RSpec/ExampleLength mongo-ruby-driver-2.21.3/spec/mongo/cluster_time_spec.rb000066400000000000000000000106111505113246500232760ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'lite_spec_helper' describe Mongo::ClusterTime do describe '#>=' do context 'equal but different objects' do let(:one) { described_class.new(clusterTime: BSON::Timestamp.new(123, 456)) } let(:two) { described_class.new(clusterTime: BSON::Timestamp.new(123, 456)) } it 'is true' do expect(one).to be >= two end end context 'first is greater' do let(:one) { described_class.new(clusterTime: BSON::Timestamp.new(124, 456)) } let(:two) { described_class.new(clusterTime: BSON::Timestamp.new(123, 456)) } it 'is true' do expect(one).to be >= two end end context 'second is greater' do let(:one) { described_class.new(clusterTime: BSON::Timestamp.new(123, 456)) } let(:two) { described_class.new(clusterTime: BSON::Timestamp.new(123, 457)) } it 'is false' do expect(one).not_to be >= two end end end describe '#>' do context 'equal but different objects' do let(:one) { described_class.new(clusterTime: BSON::Timestamp.new(123, 456)) } let(:two) { described_class.new(clusterTime: BSON::Timestamp.new(123, 456)) } it 'is false' do expect(one).not_to be > two end end context 'first is greater' do let(:one) { described_class.new(clusterTime: BSON::Timestamp.new(124, 456)) } 
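# The ordering under test compares the seconds component first and then the
# increment: (124, 456) > (123, 456), and (123, 457) > (123, 456).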
let(:two) { described_class.new(clusterTime: BSON::Timestamp.new(123, 456)) } it 'is true' do expect(one).to be > two end end context 'second is greater' do let(:one) { described_class.new(clusterTime: BSON::Timestamp.new(123, 456)) } let(:two) { described_class.new(clusterTime: BSON::Timestamp.new(123, 457)) } it 'is false' do expect(one).not_to be > two end end end describe '#<=' do context 'equal but different objects' do let(:one) { described_class.new(clusterTime: BSON::Timestamp.new(123, 456)) } let(:two) { described_class.new(clusterTime: BSON::Timestamp.new(123, 456)) } it 'is true' do expect(one).to be <= two end end context 'first is greater' do let(:one) { described_class.new(clusterTime: BSON::Timestamp.new(124, 456)) } let(:two) { described_class.new(clusterTime: BSON::Timestamp.new(123, 456)) } it 'is false' do expect(one).not_to be <= two end end context 'second is greater' do let(:one) { described_class.new(clusterTime: BSON::Timestamp.new(123, 456)) } let(:two) { described_class.new(clusterTime: BSON::Timestamp.new(123, 457)) } it 'is true' do expect(one).to be <= two end end end describe '#<' do context 'equal but different objects' do let(:one) { described_class.new(clusterTime: BSON::Timestamp.new(123, 456)) } let(:two) { described_class.new(clusterTime: BSON::Timestamp.new(123, 456)) } it 'is false' do expect(one).not_to be < two end end context 'first is greater' do let(:one) { described_class.new(clusterTime: BSON::Timestamp.new(124, 456)) } let(:two) { described_class.new(clusterTime: BSON::Timestamp.new(123, 456)) } it 'is false' do expect(one).not_to be < two end end context 'second is greater' do let(:one) { described_class.new(clusterTime: BSON::Timestamp.new(123, 456)) } let(:two) { described_class.new(clusterTime: BSON::Timestamp.new(123, 457)) } it 'is true' do expect(one).to be < two end end end describe '#==' do context 'equal but different objects' do let(:one) { described_class.new(clusterTime: BSON::Timestamp.new(123, 456)) } let(:two) { described_class.new(clusterTime: BSON::Timestamp.new(123, 456)) } it 'is true' do expect(one).to be == two end end context 'first is greater' do let(:one) { described_class.new(clusterTime: BSON::Timestamp.new(124, 456)) } let(:two) { described_class.new(clusterTime: BSON::Timestamp.new(123, 456)) } it 'is false' do expect(one).not_to be == two end end context 'second is greater' do let(:one) { described_class.new(clusterTime: BSON::Timestamp.new(123, 456)) } let(:two) { described_class.new(clusterTime: BSON::Timestamp.new(123, 457)) } it 'is false' do expect(one).not_to be == two end end end end mongo-ruby-driver-2.21.3/spec/mongo/collection/000077500000000000000000000000001505113246500213745ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/mongo/collection/view/000077500000000000000000000000001505113246500223465ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/mongo/collection/view/aggregation_spec.rb000066400000000000000000000432111505113246500261750ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe Mongo::Collection::View::Aggregation do let(:pipeline) do [] end let(:view_options) do {} end let(:options) do {} end let(:selector) do {} end let(:view) do Mongo::Collection::View.new(authorized_collection, selector, view_options) end let(:aggregation) do described_class.new(view, pipeline, options) end let(:server) do double('server') end let(:session) do double('session') end let(:aggregation_spec) do aggregation.send(:aggregate_spec, session, 
nil) end before do authorized_collection.delete_many end describe '#allow_disk_use' do let(:new_agg) do aggregation.allow_disk_use(true) end it 'sets the value in the options' do expect(new_agg.allow_disk_use).to be true end end describe '#each' do let(:documents) do [ { city: "Berlin", pop: 18913, neighborhood: "Kreuzberg" }, { city: "Berlin", pop: 84143, neighborhood: "Mitte" }, { city: "New York", pop: 40270, neighborhood: "Brooklyn" } ] end let(:pipeline) do [{ "$group" => { "_id" => "$city", "totalpop" => { "$sum" => "$pop" } } }] end before do authorized_collection.delete_many authorized_collection.insert_many(documents) end context 'when provided a session' do let(:options) do { session: session } end let(:operation) do aggregation.to_a end let(:client) do authorized_client end it_behaves_like 'an operation using a session' end context 'when a block is provided' do context 'when no batch size is provided' do it 'yields to each document' do aggregation.each do |doc| expect(doc[:totalpop]).to_not be_nil end end end context 'when a batch size of 0 is provided' do let(:aggregation) do described_class.new(view.batch_size(0), pipeline, options) end it 'yields to each document' do aggregation.each do |doc| expect(doc[:totalpop]).to_not be_nil end end end context 'when a batch size of greater than zero is provided' do let(:aggregation) do described_class.new(view.batch_size(5), pipeline, options) end it 'yields to each document' do aggregation.each do |doc| expect(doc[:totalpop]).to_not be_nil end end end end context 'when no block is provided' do it 'returns an enumerated cursor' do expect(aggregation.each).to be_a(Enumerator) end end context 'when an invalid pipeline operator is provided' do let(:pipeline) do [{ '$invalid' => 'operator' }] end it 'raises an OperationFailure' do expect { aggregation.to_a }.to raise_error(Mongo::Error::OperationFailure) end end context 'when the initial response has no results but an active cursor' do let(:documents) do [ { city: 'a'*6000000 }, { city: 'b'*6000000 } ] end let(:options) do {} end let(:pipeline) do [{ '$sample' => { 'size' => 2 } }] end it 'iterates over the result documents' do expect(aggregation.to_a.size).to eq(2) end end context 'when the view has a write concern' do let(:collection) do authorized_collection.with(write: INVALID_WRITE_CONCERN) end let(:view) do Mongo::Collection::View.new(collection, selector, view_options) end context 'when the server supports write concern on the aggregate command' do min_server_fcv '3.4' it 'does not apply the write concern' do expect(aggregation.to_a.size).to eq(2) end end context 'when the server does not support write concern on the aggregation command' do max_server_version '3.2' it 'does not apply the write concern' do expect(aggregation.to_a.size).to eq(2) end end end end describe '#initialize' do let(:options) do { :cursor => true } end it 'sets the view' do expect(aggregation.view).to eq(view) end it 'sets the pipeline' do expect(aggregation.pipeline).to eq(pipeline) end it 'sets the options' do expect(aggregation.options).to eq(BSON::Document.new(options)) end it 'dups the options' do expect(aggregation.options).not_to be(options) end end describe '#explain' do it 'executes an explain' do expect(aggregation.explain).to_not be_empty end context 'session id' do min_server_fcv '3.6' require_topology :replica_set, :sharded let(:options) do { session: session } end let(:subscriber) { Mrss::EventSubscriber.new } let(:client) do authorized_client.tap do |client| 
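# The subscriber captures command monitoring events so the example can inspect
# the started aggregate command (e.g. its lsid) after running the explain.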
client.subscribe(Mongo::Monitoring::COMMAND, subscriber) end end let(:session) do client.start_session end let(:view) do Mongo::Collection::View.new(client[TEST_COLL], selector, view_options) end let(:command) do aggregation.explain subscriber.started_events.find { |c| c.command_name == 'aggregate'}.command end it 'sends the session id' do expect(command['lsid']).to eq(session.session_id) end end context 'when a collation is specified' do before do authorized_collection.insert_many([ { name: 'bang' }, { name: 'bang' }]) end let(:pipeline) do [{ "$match" => { "name" => "BANG" } }] end let(:result) do aggregation.explain['$cursor']['queryPlanner']['collation']['locale'] end context 'when the server selected supports collations' do min_server_fcv '3.4' shared_examples_for 'applies the collation' do context 'when the collation key is a String' do let(:options) do { 'collation' => { locale: 'en_US', strength: 2 } } end it 'applies the collation' do expect(result).to eq('en_US') end end context 'when the collation key is a Symbol' do let(:options) do { collation: { locale: 'en_US', strength: 2 } } end it 'applies the collation' do expect(result).to eq('en_US') end end end context '4.0-' do max_server_version '4.0' it_behaves_like 'applies the collation' end context '4.2+' do min_server_fcv '4.2' let(:result) do if aggregation.explain.key?('queryPlanner') aggregation.explain['queryPlanner']['collation']['locale'] else # 7.2+ sharded cluster aggregation.explain['shards'].first.last['queryPlanner']['collation']['locale'] end end it_behaves_like 'applies the collation' end end context 'when the server selected does not support collations' do max_server_version '3.2' let(:options) do { collation: { locale: 'en_US', strength: 2 } } end it 'raises an exception' do expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation) end context 'when a String key is used' do let(:options) do { 'collation' => { locale: 'en_US', strength: 2 } } end it 'raises an exception' do expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation) end end end end end describe '#aggregate_spec' do context 'when a read preference is given' do let(:read_preference) do BSON::Document.new({mode: :secondary}) end it 'includes the read preference in the spec' do spec = aggregation.send(:aggregate_spec, session, read_preference) expect(spec[:read]).to eq(read_preference) end end context 'when allow_disk_use is set' do let(:aggregation) do described_class.new(view, pipeline, options).allow_disk_use(true) end it 'includes the option in the spec' do expect(aggregation_spec[:selector][:allowDiskUse]).to eq(true) end context 'when allow_disk_use is specified as an option' do let(:options) do { :allow_disk_use => true } end let(:aggregation) do described_class.new(view, pipeline, options) end it 'includes the option in the spec' do expect(aggregation_spec[:selector][:allowDiskUse]).to eq(true) end context 'when #allow_disk_use is also called' do let(:options) do { :allow_disk_use => true } end let(:aggregation) do described_class.new(view, pipeline, options).allow_disk_use(false) end it 'overrides the first option with the second' do expect(aggregation_spec[:selector][:allowDiskUse]).to eq(false) end end end end context 'when max_time_ms is an option' do let(:options) do { :max_time_ms => 100 } end it 'includes the option in the spec' do expect(aggregation_spec[:selector][:maxTimeMS]).to eq(options[:max_time_ms]) end end context 'when comment is an option' do let(:options) do { :comment => 'testing' } end it 
'includes the option in the spec' do expect(aggregation_spec[:selector][:comment]).to eq(options[:comment]) end end context 'when batch_size is set' do context 'when batch_size is set on the view' do let(:view_options) do { :batch_size => 10 } end it 'uses the batch_size on the view' do expect(aggregation_spec[:selector][:cursor][:batchSize]).to eq(view_options[:batch_size]) end end context 'when batch_size is provided in the options' do let(:options) do { :batch_size => 20 } end it 'includes the option in the spec' do expect(aggregation_spec[:selector][:cursor][:batchSize]).to eq(options[:batch_size]) end context 'when batch_size is also set on the view' do let(:view_options) do { :batch_size => 10 } end it 'overrides the view batch_size with the option batch_size' do expect(aggregation_spec[:selector][:cursor][:batchSize]).to eq(options[:batch_size]) end end end end context 'when a hint is specified' do let(:options) do { 'hint' => { 'y' => 1 } } end it 'includes the option in the spec' do expect(aggregation_spec[:selector][:hint]).to eq(options['hint']) end end context 'when batch_size is set' do let(:options) do { :batch_size => 10 } end it 'sets a batch size document in the spec' do expect(aggregation_spec[:selector][:cursor][:batchSize]).to eq(options[:batch_size]) end end context 'when batch_size is not set' do let(:options) do {} end it 'sets an empty document in the spec' do expect(aggregation_spec[:selector][:cursor]).to eq({}) end end end context 'when the aggregation has a collation defined' do before do authorized_collection.insert_many([ { name: 'bang' }, { name: 'bang' }]) end let(:pipeline) do [{ "$match" => { "name" => "BANG" } }] end let(:options) do { collation: { locale: 'en_US', strength: 2 } } end let(:result) do aggregation.collect { |doc| doc['name']} end context 'when the server selected supports collations' do min_server_fcv '3.4' it 'applies the collation' do expect(result).to eq(['bang', 'bang']) end end context 'when the server selected does not support collations' do max_server_version '3.2' it 'raises an exception' do expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation) end context 'when a String key is used' do let(:options) do { 'collation' => { locale: 'en_US', strength: 2 } } end it 'raises an exception' do expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation) end end end end context 'when $out is in the pipeline' do [['$out', 'string'], [:$out, 'symbol']].each do |op, type| context "when #{op} is a #{type}" do let(:pipeline) do [{ "$group" => { "_id" => "$city", "totalpop" => { "$sum" => "$pop" } } }, { op => 'output_collection' } ] end before do authorized_client['output_collection'].delete_many end let(:features) do double() end let(:server) do double().tap do |server| allow(server).to receive(:features).and_return(features) end end context 'when the view has a write concern' do let(:collection) do authorized_collection.with(write: INVALID_WRITE_CONCERN) end let(:view) do Mongo::Collection::View.new(collection, selector, view_options) end context 'when the server supports write concern on the aggregate command' do min_server_fcv '3.4' it 'uses the write concern' do expect { aggregation.to_a }.to raise_exception(Mongo::Error::OperationFailure) end end context 'when the server does not support write concern on the aggregation command' do max_server_version '3.2' let(:documents) do [ { city: "Berlin", pop: 18913, neighborhood: "Kreuzberg" }, { city: "Berlin", pop: 84143, neighborhood: "Mitte" }, { city: "New York", 
pop: 40270, neighborhood: "Brooklyn" } ] end before do authorized_collection.insert_many(documents) aggregation.to_a end it 'does not apply the write concern' do expect(authorized_client['output_collection'].find.count).to eq(2) end end end end end end context "when there is a filter on the view" do context "when broken_view_aggregate is turned off" do config_override :broken_view_aggregate, false let(:documents) do [ { city: "Berlin", pop: 18913, neighborhood: "Kreuzberg" }, { city: "Berlin", pop: 84143, neighborhood: "Mitte" }, { city: "New York", pop: 40270, neighborhood: "Brooklyn" } ] end let(:pipeline) do [{ "$project" => { city: 1 } }] end let(:view) do authorized_collection.find(city: "Berlin") end before do authorized_collection.delete_many authorized_collection.insert_many(documents) end it "uses the filter on the view" do expect(aggregation.to_a.length).to eq(2) end it "adds a match stage" do expect(aggregation.pipeline.length).to eq(2) expect(aggregation.pipeline.first).to eq({ :$match => { "city" => "Berlin" } }) end end context "when broken_view_aggregate is turned on" do config_override :broken_view_aggregate, true let(:documents) do [ { city: "Berlin", pop: 18913, neighborhood: "Kreuzberg" }, { city: "Berlin", pop: 84143, neighborhood: "Mitte" }, { city: "New York", pop: 40270, neighborhood: "Brooklyn" } ] end let(:pipeline) do [{ "$project" => { city: 1 } }] end let(:view) do authorized_collection.find(city: "Berlin") end before do authorized_collection.delete_many authorized_collection.insert_many(documents) end it "ignores the view filter" do expect(aggregation.to_a.length).to eq(3) end it "does not add a match stage" do expect(aggregation.pipeline.length).to eq(1) expect(aggregation.pipeline).to eq([ { "$project" => { city: 1 } } ]) end end end context "when there is no filter on the view" do with_config_values :broken_view_aggregate, true, false do let(:documents) do [ { city: "Berlin", pop: 18913, neighborhood: "Kreuzberg" }, { city: "Berlin", pop: 84143, neighborhood: "Mitte" }, { city: "New York", pop: 40270, neighborhood: "Brooklyn" } ] end let(:pipeline) do [{ "$project" => { city: 1 } }] end let(:view) do authorized_collection.find end before do authorized_collection.delete_many authorized_collection.insert_many(documents) end it "ignores the view filter" do expect(aggregation.to_a.length).to eq(3) end it "does not add a match stage" do expect(aggregation.pipeline.length).to eq(1) expect(aggregation.pipeline).to eq([ { "$project" => { city: 1 } } ]) end end end end mongo-ruby-driver-2.21.3/spec/mongo/collection/view/builder/000077500000000000000000000000001505113246500237745ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/mongo/collection/view/builder/find_command_spec.rb000066400000000000000000000305271505113246500277600ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # TODO convert, move or delete these tests as part of RUBY-2706. 
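# Hedged summary of the disabled examples below: negative limits are expected
# to map to a positive limit plus singleBatch: true in the find command, e.g.
#   find({}, limit: -5)  # => selector with 'limit' => 5, 'singleBatch' => true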
=begin require 'lite_spec_helper' describe Mongo::Collection::View::Builder::FindCommand do let(:client) do new_local_client_nmio(['127.0.0.1:27017']) end let(:base_collection) { client['find-command-spec'] } describe '#specification' do let(:view) do Mongo::Collection::View.new(base_collection, filter, options) end let(:builder) do described_class.new(view, nil) end let(:specification) do builder.specification end let(:selector) do specification[:selector] end context 'when the options are standard' do let(:filter) do { 'name' => 'test' } end let(:options) do { sort: { _id: 1 }, projection: { name: 1 }, hint: { name: 1 }, skip: 10, limit: 20, batch_size: 5, single_batch: false, comment: "testing", max_scan: 200, max_time_ms: 40, max_value: { name: 'joe' }, min_value: { name: 'albert' }, return_key: true, show_disk_loc: true, snapshot: true, tailable: true, oplog_replay: true, no_cursor_timeout: true, await_data: true, allow_partial_results: true, collation: { locale: 'en_US' } } end context 'when the operation has a session' do let(:session) do double('session') end let(:builder) do described_class.new(view, session) end it 'adds the session to the specification' do expect(builder.specification[:session]).to be(session) end end it 'maps the collection name' do expect(selector['find']).to eq(base_collection.name) end it 'maps the filter' do expect(selector['filter']).to eq(filter) end it 'maps sort' do expect(selector['sort']).to eq('_id' => 1) end it 'maps projection' do expect(selector['projection']).to eq('name' => 1) end it 'maps hint' do expect(selector['hint']).to eq('name' => 1) end it 'maps skip' do expect(selector['skip']).to eq(10) end it 'maps limit' do expect(selector['limit']).to eq(20) end it 'maps batch size' do expect(selector['batchSize']).to eq(5) end it 'maps single batch' do expect(selector['singleBatch']).to be false end it 'maps comment' do expect(selector['comment']).to eq('testing') end it 'maps max scan' do expect(selector['maxScan']).to eq(200) end it 'maps max time ms' do expect(selector['maxTimeMS']).to eq(40) end it 'maps max' do expect(selector['max']).to eq('name' => 'joe') end it 'maps min' do expect(selector['min']).to eq('name' => 'albert') end it 'maps return key' do expect(selector['returnKey']).to be true end it 'maps show record id' do expect(selector['showRecordId']).to be true end it 'maps snapshot' do expect(selector['snapshot']).to be true end it 'maps tailable' do expect(selector['tailable']).to be true end it 'maps oplog_replay' do expect(selector['oplogReplay']).to be true end it 'warns when using oplog_replay' do client.should receive(:log_warn).with('oplogReplay is deprecated and ignored by MongoDB 4.4 and later') selector end it 'maps no cursor timeout' do expect(selector['noCursorTimeout']).to be true end it 'maps await data' do expect(selector['awaitData']).to be true end it 'maps allow partial results' do expect(selector['allowPartialResults']).to be true end it 'maps collation' do expect(selector['collation']).to eq('locale' => 'en_US') end end context 'when there is a limit' do let(:filter) do { 'name' => 'test' } end context 'when limit is 0' do context 'when batch_size is also 0' do let(:options) do { limit: 0, batch_size: 0 } end it 'does not set the singleBatch' do expect(selector['singleBatch']).to be nil end it 'does not set the limit' do expect(selector['limit']).to be nil end it 'does not set the batch size' do expect(selector['batchSize']).to be nil end end context 'when batch_size is not set' do let(:options) do { limit: 0 } 
end it 'does not set the singleBatch' do expect(selector['singleBatch']).to be nil end it 'does not set the limit' do expect(selector['limit']).to be nil end it 'does not set the batch size' do expect(selector['batchSize']).to be nil end end end context 'when the limit is negative' do context 'when there is a batch_size' do context 'when the batch_size is positive' do let(:options) do { limit: -1, batch_size: 3 } end it 'sets single batch to true' do expect(selector['singleBatch']).to be true end it 'converts the limit to a positive value' do expect(selector['limit']).to be(options[:limit].abs) end it 'sets the batch size' do expect(selector['batchSize']).to be(options[:batch_size]) end end context 'when the batch_size is negative' do let(:options) do { limit: -1, batch_size: -3 } end it 'sets single batch to true' do expect(selector['singleBatch']).to be true end it 'converts the limit to a positive value' do expect(selector['limit']).to be(options[:limit].abs) end it 'sets the batch size to the limit' do expect(selector['batchSize']).to be(options[:limit].abs) end end end context 'when there is not a batch_size' do let(:options) do { limit: -5 } end it 'sets single batch to true' do expect(selector['singleBatch']).to be true end it 'converts the limit to a positive value' do expect(selector['limit']).to be(options[:limit].abs) end it 'does not set the batch size' do expect(selector['batchSize']).to be_nil end end end context 'when the limit is positive' do context 'when there is a batch_size' do context 'when the batch_size is positive' do let(:options) do { limit: 5, batch_size: 3 } end it 'does not set singleBatch' do expect(selector['singleBatch']).to be nil end it 'sets the limit' do expect(selector['limit']).to be(options[:limit]) end it 'sets the batch size' do expect(selector['batchSize']).to be(options[:batch_size]) end end context 'when the batch_size is negative' do let(:options) do { limit: 5, batch_size: -3 } end it 'sets the singleBatch' do expect(selector['singleBatch']).to be true end it 'sets the limit' do expect(selector['limit']).to be(options[:limit]) end it 'sets the batch size to a positive value' do expect(selector['batchSize']).to be(options[:batch_size].abs) end end end context 'when there is not a batch_size' do let(:options) do { limit: 5 } end it 'does not set the singleBatch' do expect(selector['singleBatch']).to be nil end it 'sets the limit' do expect(selector['limit']).to be(options[:limit]) end it 'does not set the batch size' do expect(selector['batchSize']).to be nil end end end end context 'when there is a batch_size' do let(:filter) do { 'name' => 'test' } end context 'when there is no limit' do context 'when the batch_size is positive' do let(:options) do { batch_size: 3 } end it 'does not set the singleBatch' do expect(selector['singleBatch']).to be nil end it 'does not set the limit' do expect(selector['limit']).to be nil end it 'sets the batch size' do expect(selector['batchSize']).to be(options[:batch_size]) end end context 'when the batch_size is negative' do let(:options) do { batch_size: -3 } end it 'sets the singleBatch' do expect(selector['singleBatch']).to be true end it 'does not set the limit' do expect(selector['limit']).to be nil end it 'sets the batch size to a positive value' do expect(selector['batchSize']).to be(options[:batch_size].abs) end end context 'when batch_size is 0' do let(:options) do { batch_size: 0 } end it 'does not set the singleBatch' do expect(selector['singleBatch']).to be nil end it 'does not set the limit' do 
expect(selector['limit']).to be nil end it 'does not set the batch size' do expect(selector['batchSize']).to be nil end end end end context 'when limit and batch_size are negative' do let(:filter) do { 'name' => 'test' } end let(:options) do { limit: -1, batch_size: -3 } end it 'sets single batch to true' do expect(selector['singleBatch']).to be true end it 'converts the limit to a positive value' do expect(selector['limit']).to be(options[:limit].abs) end end context 'when cursor_type is specified' do let(:filter) do { 'name' => 'test' } end context 'when cursor_type is :tailable' do let(:options) do { cursor_type: :tailable, } end it 'maps to tailable' do expect(selector['tailable']).to be true end it 'does not map to awaitData' do expect(selector['awaitData']).to be_nil end end context 'when cursor_type is :tailable_await' do let(:options) do { cursor_type: :tailable_await, } end it 'maps to tailable' do expect(selector['tailable']).to be true end it 'maps to awaitData' do expect(selector['awaitData']).to be true end end end context 'when the collection has a read concern defined' do let(:collection) do base_collection.with(read_concern: { level: 'invalid' }) end let(:view) do Mongo::Collection::View.new(collection, {}) end it 'applies the read concern of the collection' do expect(selector['readConcern']).to eq(BSON::Document.new(level: 'invalid')) end context 'when explain is called for the find' do let(:collection) do base_collection.with(read_concern: { level: 'invalid' }) end let(:view) do Mongo::Collection::View.new(collection, {}) end it 'applies the read concern of the collection' do expect( builder.explain_specification[:selector][:explain][:readConcern]).to eq(BSON::Document.new(level: 'invalid')) end end end context 'when the collection does not have a read concern defined' do let(:filter) do {} end let(:options) do {} end it 'does not apply a read concern' do expect(selector['readConcern']).to be_nil end end end end =end mongo-ruby-driver-2.21.3/spec/mongo/collection/view/builder/op_query_spec.rb000066400000000000000000000070021505113246500271750ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # TODO convert, move or delete these tests as part of RUBY-2706. 
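# Hedged summary of the disabled examples below: for legacy OP_QUERY, options
# map to $-prefixed modifiers ($query, $orderby, $hint, $comment, $maxTimeMS)
# and cursor behaviors map to wire-protocol flags such as :tailable_cursor,
# :oplog_replay, :no_cursor_timeout, :await_data and :partial.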
=begin require 'spec_helper' describe Mongo::Collection::View::Builder::OpQuery do describe '#specification' do let(:filter) do { 'name' => 'test' } end let(:builder) do described_class.new(view) end let(:specification) do builder.specification end let(:view) do Mongo::Collection::View.new(authorized_collection, filter, options) end context 'when there are modifiers in the options' do let(:options) do { sort: { _id: 1 }, projection: { name: 1 }, hint: { name: 1 }, skip: 10, limit: 20, batch_size: 5, single_batch: false, comment: "testing", max_scan: 200, max_time_ms: 40, max_value: { name: 'joe' }, min_value: { name: 'albert' }, return_key: true, show_disk_loc: true, snapshot: true, tailable: true, oplog_replay: true, no_cursor_timeout: true, tailable_await: true, allow_partial_results: true, read_concern: { level: 'local' } } end let(:selector) do specification[:selector] end let(:opts) do specification[:options] end let(:flags) do opts[:flags] end it 'maps the collection name' do expect(specification[:coll_name]).to eq(authorized_collection.name) end it 'maps the filter' do expect(selector['$query']).to eq(filter) end it 'maps sort' do expect(selector['$orderby']).to eq('_id' => 1) end it 'maps projection' do expect(opts['project']).to eq('name' => 1) end it 'maps hint' do expect(selector['$hint']).to eq('name' => 1) end it 'maps skip' do expect(opts['skip']).to eq(10) end it 'maps limit' do expect(opts['limit']).to eq(20) end it 'maps batch size' do expect(opts['batch_size']).to eq(5) end it 'maps comment' do expect(selector['$comment']).to eq('testing') end it 'maps max scan' do expect(selector['$maxScan']).to eq(200) end it 'maps max time ms' do expect(selector['$maxTimeMS']).to eq(40) end it 'maps max' do expect(selector['$max']).to eq('name' => 'joe') end it 'maps min' do expect(selector['$min']).to eq('name' => 'albert') end it 'does not map read concern' do expect(selector['$readConcern']).to be_nil expect(selector['readConcern']).to be_nil expect(opts['readConcern']).to be_nil end it 'maps return key' do expect(selector['$returnKey']).to be true end it 'maps show record id' do expect(selector['$showDiskLoc']).to be true end it 'maps snapshot' do expect(selector['$snapshot']).to be true end it 'maps tailable' do expect(flags).to include(:tailable_cursor) end it 'maps oplog replay' do expect(flags).to include(:oplog_replay) end it 'maps no cursor timeout' do expect(flags).to include(:no_cursor_timeout) end it 'maps await data' do expect(flags).to include(:await_data) end it 'maps allow partial results' do expect(flags).to include(:partial) end end end end =end mongo-ruby-driver-2.21.3/spec/mongo/collection/view/change_stream_resume_spec.rb000066400000000000000000000220151505113246500300650ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe Mongo::Collection::View::ChangeStream do require_wired_tiger min_server_fcv '3.6' require_topology :replica_set max_example_run_time 7 let(:pipeline) do [] end let(:options) do {} end let(:view_options) do {} end let(:client) do authorized_client_without_any_retry_reads end let(:collection) do client['mcv-change-stream'] end let(:view) do Mongo::Collection::View.new(collection, {}, view_options) end let(:change_stream) do @change_stream = described_class.new(view, pipeline, nil, options) end let(:cursor) do change_stream.instance_variable_get(:@cursor) end let(:change_stream_document) do change_stream.send(:instance_variable_set, '@resuming', false) 
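# This helper resets @resuming and then reads the $changeStream stage out of
# the first pipeline entry, letting examples assert on e.g. :resumeAfter.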
    change_stream.send(:pipeline)[0]['$changeStream']
  end

  let(:connection_description) do
    Mongo::Server::Description.new(
      double('description address'),
      { 'minWireVersion' => 0, 'maxWireVersion' => 2 }
    )
  end

  let(:result) do
    Mongo::Operation::GetMore::Result.new(
      Mongo::Protocol::Message.new,
      connection_description,
    )
  end

  context 'when an error is encountered the first time the command is run' do
    include PrimarySocket

    before do
      expect(primary_socket).to receive(:write).and_raise(error).once
    end

    let(:document) { change_stream.to_enum.next }

    shared_examples_for 'a resumable change stream' do
      before do
        expect(view.send(:server_selector)).to receive(:select_server).twice.and_call_original
        change_stream
        collection.insert_one(a: 1)
      end

      it 'runs the command again while using the same read preference and caches the resume token' do
        expect(document[:fullDocument][:a]).to eq(1)
        expect(change_stream_document[:resumeAfter]).to eq(document[:_id])
      end

      context 'when provided a session' do
        let(:options) { { session: session } }
        let(:session) { client.start_session }

        before do
          change_stream.to_enum.next
        end

        it 'does not close the session' do
          expect(session.ended?).to be(false)
        end
      end
    end

    shared_examples_for 'a non-resumed change stream' do
      it 'does not run the command again and instead raises the error' do
        expect do
          document
        end.to raise_exception(error)
      end
    end

    context 'when the error is a resumable error' do
      context 'when the error is a SocketError' do
        let(:error) { Mongo::Error::SocketError }

        it_behaves_like 'a non-resumed change stream'
      end

      context 'when the error is a SocketTimeoutError' do
        let(:error) { Mongo::Error::SocketTimeoutError }

        it_behaves_like 'a non-resumed change stream'
      end

      context "when the error is a 'not master' error" do
        let(:error) { Mongo::Error::OperationFailure.new('not master') }

        it_behaves_like 'a non-resumed change stream'
      end

      context "when the error is a 'node is recovering' error" do
        let(:error) { Mongo::Error::OperationFailure.new('node is recovering') }

        it_behaves_like 'a non-resumed change stream'
      end
    end

    context 'when the error is another server error' do
      let(:error) { Mongo::Error::MissingResumeToken }

      before do
        expect(view.send(:server_selector)).to receive(:select_server).and_call_original
      end

      it_behaves_like 'a non-resumed change stream'

      context 'when provided a session' do
        let(:options) { { session: session } }
        let(:session) { client.start_session }

        before do
          expect do
            change_stream
          end.to raise_error(error)
        end

        it 'does not close the session' do
          expect(session.ended?).to be(false)
        end
      end
    end
  end

  context 'when a killCursors command is issued for the cursor' do
    context 'using Enumerable' do
      require_mri

      before do
        change_stream
        collection.insert_one(a: 1)
        enum.next
        collection.insert_one(a: 2)
      end

      let(:enum) { change_stream.to_enum }

      it 'resumes on a cursor not found error' do
        original_cursor_id = cursor.id
        client.use(:admin).command({
          killCursors: collection.name,
          cursors: [cursor.id]
        })
        expect do
          enum.next
        end.not_to raise_error
      end
    end

    context 'using try_next' do
      before do
        change_stream
        collection.insert_one(a: 1)
        expect(change_stream.try_next).to be_a(BSON::Document)
        collection.insert_one(a: 2)
      end

      it 'resumes on a cursor not found error' do
        original_cursor_id = cursor.id
        client.use(:admin).command({
          killCursors: collection.name,
          cursors: [cursor.id]
        })
        expect do
          change_stream.try_next
        end.not_to raise_error
      end
    end
  end

  context 'when a server error is encountered during a getMore' do
    fails_on_jruby

    shared_examples_for 'a change stream that is not resumed' do
      before do
        change_stream
        collection.insert_one(a: 1)
        enum.next
        collection.insert_one(a: 2)
        expect(cursor).to receive(:get_more).once.and_raise(error)
      end

      let(:enum) { change_stream.to_enum }
      let(:document) { enum.next }

      it 'is not resumed' do
        expect do
          document
        end.to raise_error(error)
      end
    end

    context 'when the error is a resumable error' do
      shared_examples_for 'a change stream that encounters an error from a getMore' do
        before do
          change_stream
          collection.insert_one(a: 1)
          enum.next
          collection.insert_one(a: 2)
          expect(cursor).to receive(:get_more).once.and_raise(error)
        end

        let(:enum) { change_stream.to_enum }
        let(:document) { enum.next }

        it 'runs the command again while using the same read preference and caches the resume token' do
          expect(cursor).to receive(:close).and_call_original
          expect(view.send(:server_selector)).to receive(:select_server).once.and_call_original
          expect(Mongo::Operation::Aggregate).to receive(:new).and_call_original
          expect(document[:fullDocument][:a]).to eq(2)
          expect(change_stream_document[:resumeAfter]).to eq(document[:_id])
        end

        context 'when provided a session' do
          let(:options) { { session: session } }
          let(:session) { client.start_session }

          before do
            enum.next
          end

          it 'does not close the session' do
            expect(session.ended?).to be(false)
          end
        end
      end

      context 'when the error is a SocketError' do
        let(:error) { Mongo::Error::SocketError }

        it_behaves_like 'a change stream that encounters an error from a getMore'
      end

      context 'when the error is a SocketTimeoutError' do
        let(:error) { Mongo::Error::SocketTimeoutError }

        it_behaves_like 'a change stream that encounters an error from a getMore'
      end

      context "when the error is 'not master'" do
        let(:error) { Mongo::Error::OperationFailure.new('not master', result) }

        it_behaves_like 'a change stream that is not resumed'
      end

      context "when the error is 'node is recovering'" do
        let(:error) { Mongo::Error::OperationFailure.new('node is recovering', result) }

        it_behaves_like 'a change stream that is not resumed'
      end
    end

    context 'when the error is another server error' do
      before do
        change_stream
        collection.insert_one(a: 1)
        enum.next
        collection.insert_one(a: 2)
        expect(cursor).to receive(:get_more).and_raise(Mongo::Error::MissingResumeToken)
        expect(Mongo::Operation::Aggregate).not_to receive(:new)
      end

      let(:enum) { change_stream.to_enum }

      it 'does not run the command again and instead raises the error' do
        expect { enum.next }.to raise_exception(Mongo::Error::MissingResumeToken)
      end

      context 'when provided a session' do
        let(:options) { { session: session } }
        let(:session) { client.start_session }

        before do
          expect do
            enum.next
          end.to raise_error(Mongo::Error::MissingResumeToken)
        end

        it 'does not close the session' do
          expect(session.ended?).to be(false)
        end
      end
    end
  end
end

mongo-ruby-driver-2.21.3/spec/mongo/collection/view/change_stream_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe Mongo::Collection::View::ChangeStream do
  require_wired_tiger
  min_server_fcv '3.6'
  require_topology :replica_set
  max_example_run_time 7

  let(:pipeline) { [] }
  let(:options) { {} }
  let(:view_options) { {} }

  let(:client) { authorized_client_without_any_retry_reads }
  let(:collection) { client['mcv-change-stream'] }

  let(:view) do
    Mongo::Collection::View.new(collection, {}, view_options)
  end

  let(:change_stream) do
    @change_stream = described_class.new(view, pipeline, nil, options)
  end
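  # The helpers below reach into the driver's internals (via #send and
  # instance variables) to inspect the aggregation command that the change
  # stream builds. Roughly, the first pipeline stage of that command looks
  # like this (a sketch; the exact keys depend on the options passed):
  #
  #   { '$changeStream' => { 'fullDocument' => 'updateLookup', 'resumeAfter' => <resume token> } }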
  let(:change_stream_document) do
    change_stream.send(:instance_variable_set, '@resuming', false)
    change_stream.send(:pipeline)[0]['$changeStream']
  end

  let!(:sample_resume_token) do
    stream = collection.watch
    collection.insert_one(a: 1)
    doc = stream.to_enum.next
    stream.close
    doc[:_id]
  end

  let(:command_selector) do
    command_spec[:selector]
  end

  let(:command_spec) do
    change_stream.send(:instance_variable_set, '@resuming', false)
    change_stream.send(:aggregate_spec, double('session'), nil)
  end

  let(:cursor) do
    change_stream.instance_variable_get(:@cursor)
  end

  let(:error) do
    begin
      change_stream
    rescue => e
      e
    else
      nil
    end
  end

  before do
    collection.delete_many
  end

  after do
    # Only close the change stream if one was successfully created by the test
    if @change_stream
      @change_stream.close
    end
  end

  describe '#initialize' do
    it 'sets the view' do
      expect(change_stream.view).to be(view)
    end

    it 'sets the options' do
      expect(change_stream.options).to eq(options)
    end

    context 'when full_document is provided' do
      context "when the value is 'default'" do
        let(:options) { { full_document: 'default' } }

        it 'sets the fullDocument value to default' do
          expect(change_stream_document[:fullDocument]).to eq('default')
        end
      end

      context "when the value is 'updateLookup'" do
        let(:options) { { full_document: 'updateLookup' } }

        it 'sets the fullDocument value to updateLookup' do
          expect(change_stream_document[:fullDocument]).to eq('updateLookup')
        end
      end
    end

    context 'when full_document is not provided' do
      it 'does not set fullDocument' do
        expect(change_stream_document).not_to have_key(:fullDocument)
      end
    end

    context 'when resume_after is provided' do
      let(:options) { { resume_after: sample_resume_token } }

      it 'sets the resumeAfter value to the provided document' do
        expect(change_stream_document[:resumeAfter]).to eq(sample_resume_token)
      end
    end

    context 'when max_await_time_ms is provided' do
      let(:options) { { max_await_time_ms: 10 } }

      it 'sets the maxTimeMS value to the provided value' do
        expect(command_selector[:maxTimeMS]).to eq(10)
      end
    end

    context 'when batch_size is provided' do
      let(:options) { { batch_size: 5 } }

      it 'sets the batchSize value to the provided value' do
        expect(command_selector[:cursor][:batchSize]).to eq(5)
      end
    end

    context 'when collation is provided' do
      let(:options) { { 'collation' => { locale: 'en_US', strength: 2 } } }

      it 'sets the collation value to the provided document' do
        expect(command_selector['collation']).to eq(BSON::Document.new(options['collation']))
      end
    end

    context 'when a changeStream operator is provided by the user as well' do
      let(:pipeline) { [{ '$changeStream' => { fullDocument: 'default' } }] }

      it 'raises the error from the server' do
        expect(error).to be_a(Mongo::Error::OperationFailure)
        expect(error.message).to include('is only valid as the first stage in a pipeline')
      end
    end

    context 'when the collection has a readConcern' do
      let(:collection) do
        client['mcv-change-stream'].with(read_concern: { level: 'majority' })
      end

      let(:view) do
        Mongo::Collection::View.new(collection, {}, options)
      end

      it 'uses the read concern of the collection' do
        expect(command_selector[:readConcern]).to eq('level' => 'majority')
      end
    end

    context 'when no pipeline is supplied' do
      it 'uses an empty pipeline' do
        expect(command_selector[:pipeline][0].keys).to eq(['$changeStream'])
      end
    end

    context 'when other pipeline operators are supplied' do
      context 'when the other pipeline operators are supported' do
        let(:pipeline) { [{ '$project' => { '_id' => 0 } }] }

        it 'uses the pipeline operators' do
          expect(command_selector[:pipeline][1]).to eq(pipeline[0])
        end
      end

      context 'when the other pipeline operators are not supported' do
        let(:pipeline) { [{ '$unwind' => '$test' }] }

        it 'sends the pipeline to the server without a custom error' do
          expect { change_stream }.to raise_exception(Mongo::Error::OperationFailure)
        end

        context 'when the operation fails' do
          let!(:before_last_use) do
            session.instance_variable_get(:@server_session).last_use
          end

          let!(:before_operation_time) do
            (session.operation_time || 0)
          end

          let(:pipeline) { [{ '$invalid' => '$test' }] }
          let(:options) { { session: session } }

          let!(:operation_result) do
            begin; change_stream; rescue => e; e; end
          end

          let(:session) { client.start_session }

          it 'raises an error' do
            expect(operation_result.class).to eq(Mongo::Error::OperationFailure)
          end

          it 'updates the last use value' do
            expect(session.instance_variable_get(:@server_session).last_use).not_to eq(before_last_use)
          end

          it 'updates the operation time value' do
            expect(session.operation_time).not_to eq(before_operation_time)
          end
        end
      end
    end

    context 'when the initial batch is empty' do
      before do
        change_stream
      end

      it 'does not close the cursor' do
        expect(cursor).to be_a(Mongo::Cursor)
        expect(cursor.closed?).to be false
      end
    end

    context 'when provided a session' do
      let(:options) { { session: session } }

      let(:operation) do
        change_stream
        collection.insert_one(a: 1)
        change_stream.to_enum.next
      end

      context 'when the session is created from the same client used for the operation' do
        let(:session) { client.start_session }
        let(:server_session) { session.instance_variable_get(:@server_session) }

        let!(:before_last_use) { server_session.last_use }
        let!(:before_operation_time) { (session.operation_time || 0) }
        let!(:operation_result) { operation }

        it 'updates the last use value' do
          expect(server_session.last_use).not_to eq(before_last_use)
        end

        it 'updates the operation time value' do
          expect(session.operation_time).not_to eq(before_operation_time)
        end

        it 'does not close the session when the operation completes' do
          expect(session.ended?).to be(false)
        end
      end

      context 'when a session from another client is provided' do
        let(:session) do
          another_authorized_client.with(retry_reads: false).start_session
        end

        let(:operation_result) { operation }

        it 'raises an exception' do
          expect { operation_result }.to raise_exception(Mongo::Error::InvalidSession)
        end
      end

      context 'when the session is ended before it is used' do
        let(:session) { client.start_session }

        before do
          session.end_session
        end

        let(:operation_result) { operation }

        it 'raises an exception' do
          expect { operation_result }.to raise_exception(Mongo::Error::InvalidSession)
        end
      end
    end
  end

  describe '#close' do
    context 'ignores any exceptions or errors' do
      [
        Mongo::Error::OperationFailure,
        Mongo::Error::SocketError,
        Mongo::Error::SocketTimeoutError
      ].each do |err|
        it "ignores #{err}" do
          expect(cursor).to receive(:close).and_raise(err)
          change_stream.close
        end
      end
    end

    context 'when documents have not been retrieved and the stream is closed' do
      before do
        expect(cursor).to receive(:close).and_call_original
        change_stream.close
      end

      it 'closes the cursor' do
        expect(change_stream.instance_variable_get(:@cursor)).to be(nil)
        expect(change_stream.closed?).to be(true)
      end

      it 'raises an error when the stream is attempted to be iterated' do
        expect { change_stream.to_enum.next }.to raise_exception(StopIteration)
      end
    end

    context 'when some documents have been retrieved and the stream is closed before sending getMore' do
      fails_on_jruby

      before do
        change_stream
        collection.insert_one(a: 1)
        enum.next
        change_stream.close
      end

      let(:enum) { change_stream.to_enum }

      it 'raises an error' do
        expect { enum.next }.to raise_exception(StopIteration)
      end
    end
  end

  describe '#closed?' do
    context 'when the change stream has not been closed' do
      it 'returns false' do
        expect(change_stream.closed?).to be(false)
      end
    end

    context 'when the change stream has been closed' do
      before do
        change_stream.close
      end

      it 'returns true' do
        expect(change_stream.closed?).to be(true)
      end
    end
  end

  context 'when the first response does not contain the resume token' do
    let(:pipeline) do
      # This removes the _id from the change stream document, which is used
      # as the resume token.
      [{ '$project' => { _id: 0 } }]
    end

    before do
      change_stream
      collection.insert_one(a: 1)
    end

    context 'pre-4.2 server' do
      max_server_version '4.0'

      it 'driver raises an exception and closes the cursor' do
        expect(cursor).to receive(:close).and_call_original
        expect do
          change_stream.to_enum.next
        end.to raise_exception(Mongo::Error::MissingResumeToken)
      end
    end

    context '4.2+ server' do
      min_server_fcv '4.2'

      it 'server errors, driver closes the cursor' do
        expect(cursor).to receive(:close).and_call_original
        expect do
          change_stream.to_enum.next
        end.to raise_exception(Mongo::Error::OperationFailure, /Encountered an event whose _id field, which contains the resume token, was modified by the pipeline. Modifying the _id field of an event makes it impossible to resume the stream from that point. Only transformations that retain the unmodified _id field are allowed./)
      end
    end
  end

  describe '#inspect' do
    it 'includes the Ruby object_id in the formatted string' do
      expect(change_stream.inspect).to include(change_stream.object_id.to_s)
    end

    context 'when resume_after is provided' do
      let(:options) { { resume_after: sample_resume_token } }

      it 'includes the resume_after value in the formatted string' do
        expect(change_stream.inspect).to include(sample_resume_token.to_s)
      end
    end

    context 'when max_await_time_ms is provided' do
      let(:options) { { max_await_time_ms: 10 } }

      it 'includes the max_await_time value in the formatted string' do
        expect(change_stream.inspect).to include({ 'max_await_time_ms' => 10 }.to_s)
      end
    end

    context 'when batch_size is provided' do
      let(:options) { { batch_size: 5 } }

      it 'includes the batch_size value in the formatted string' do
        expect(change_stream.inspect).to include({ 'batch_size' => 5 }.to_s)
      end
    end

    context 'when collation is provided' do
      let(:options) { { 'collation' => { locale: 'en_US', strength: 2 } } }

      it 'includes the collation value in the formatted string' do
        expect(change_stream.inspect).to include({ 'collation' => { 'locale' => 'en_US', 'strength' => 2 } }.to_s)
      end
    end

    context 'when pipeline operators are provided' do
      let(:pipeline) { [{ '$project' => { '_id' => 0 } }] }

      it 'includes the filters in the formatted string' do
        expect(change_stream.inspect).to include([{ '$project' => { '_id' => 0 } }].to_s)
      end
    end
  end
end

mongo-ruby-driver-2.21.3/spec/mongo/collection/view/explainable_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe Mongo::Collection::View::Explainable do
  let(:selector) { {} }
  let(:options) { {} }

  let(:view) do
    Mongo::Collection::View.new(authorized_collection, selector, options)
  end

  before do
    authorized_collection.delete_many
  end
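  # #explain runs the underlying query as an explain command and returns the
  # server's explain output. A rough usage sketch (the output shape varies by
  # server version and topology, as the contexts below demonstrate):
  #
  #   view.explain                            # default verbosity
  #   view.explain(verbosity: :query_planner) # or 'executionStats', etc.
  #   # => { "queryPlanner" => { "namespace" => "db.coll", ... }, ... }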
  describe '#explain' do
    shared_examples 'executes the explain' do
      context '3.0+ server' do
        min_server_fcv '3.0'

        context 'not sharded' do
          require_topology :single, :replica_set

          it 'executes the explain' do
            explain[:queryPlanner][:namespace].should == authorized_collection.namespace
          end
        end

        context 'sharded' do
          require_topology :sharded

          context 'pre-3.2 server' do
            max_server_version '3.0'

            it 'executes the explain' do
              skip 'https://jira.mongodb.org/browse/RUBY-3399'
              explain[:queryPlanner][:parsedQuery].should be_a(Hash)
            end
          end

          context '3.2+ server' do
            min_server_fcv '3.2'

            it 'executes the explain' do
              skip 'https://jira.mongodb.org/browse/RUBY-3399'
              explain[:queryPlanner][:mongosPlannerVersion].should == 1
            end
          end
        end
      end

      context '2.6 server' do
        max_server_version '2.6'

        it 'executes the explain' do
          explain[:cursor].should == 'BasicCursor'
        end
      end
    end

    context 'without arguments' do
      let(:explain) { view.explain }

      include_examples 'executes the explain'
    end

    context 'with verbosity argument' do
      let(:explain) { view.explain(verbosity: verbosity) }

      shared_examples 'triggers server error' do
        # 3.0 does not produce the error.
        min_server_fcv '3.2'

        it 'triggers server error' do
          lambda do
            explain
          end.should raise_error(Mongo::Error::OperationFailure, /verbosity string must be|value .* for field .*verbosity.* is not a valid value/)
        end
      end

      context 'valid symbol value' do
        let(:verbosity) { :query_planner }

        include_examples 'executes the explain'
      end

      context 'valid string value' do
        let(:verbosity) { 'executionStats' }

        include_examples 'executes the explain'
      end

      context 'invalid symbol value' do
        let(:verbosity) { :bogus }

        include_examples 'triggers server error'
      end

      context 'invalid string value' do
        let(:verbosity) { 'bogus' }

        include_examples 'triggers server error'
      end
    end
  end
end

mongo-ruby-driver-2.21.3/spec/mongo/collection/view/immutable_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe Mongo::Collection::View::Immutable do
  let(:selector) { {} }
  let(:options) { {} }

  let(:view) do
    Mongo::Collection::View.new(authorized_collection, selector, options)
  end

  before do
    authorized_collection.delete_many
  end

  describe '#configure' do
    context 'when the options have modifiers' do
      let(:options) { { :max_time_ms => 500 } }

      let(:new_view) { view.projection(_id: 1) }

      it 'returns a new view' do
        expect(view).not_to be(new_view)
      end

      it 'creates a new options hash' do
        expect(view.options).not_to be(new_view.options)
      end

      it 'keeps the modifier fields already in the options hash' do
        expect(new_view.modifiers[:$maxTimeMS]).to eq(500)
      end

      it 'sets the option' do
        expect(new_view.projection).to eq('_id' => 1)
      end

      it 'creates a new modifiers document' do
        expect(view.modifiers).not_to be(new_view.modifiers)
      end
    end
  end
end

mongo-ruby-driver-2.21.3/spec/mongo/collection/view/iterable_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe Mongo::Collection::View::Iterable do
  let(:selector) { {} }
  let(:options) { {} }

  let(:view) do
    Mongo::Collection::View.new(authorized_collection, selector, options)
  end

  before do
    authorized_collection.drop
  end

  describe '#each' do
    context 'when allow_disk_use is provided' do
      let(:options) { { allow_disk_use: true } }

      # Other cases are adequately covered by spec tests.
      context 'on server versions < 3.2' do
        max_server_fcv '3.0'

        it 'raises an exception' do
          expect do
            view.each do |document|
              # Do nothing
            end
          end.to raise_error(Mongo::Error::UnsupportedOption, /The MongoDB server handling this request does not support the allow_disk_use option on this command/)
        end
      end
    end
  end
end

mongo-ruby-driver-2.21.3/spec/mongo/collection/view/map_reduce_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe Mongo::Collection::View::MapReduce do
  clean_slate_on_evergreen

  let(:map) do
    %Q{
    function() {
      emit(this.name, { population: this.population });
    }}
  end

  let(:reduce) do
    %Q{
    function(key, values) {
      var result = { population: 0 };
      values.forEach(function(value) {
        result.population += value.population;
      });
      return result;
    }}
  end

  let(:documents) do
    [
      { name: 'Berlin', population: 3000000 },
      { name: 'London', population: 9000000 }
    ]
  end

  let(:selector) { {} }
  let(:view_options) { {} }

  let(:view) do
    authorized_client.cluster.servers.map do |server|
      server.pool.ready
    end
    Mongo::Collection::View.new(authorized_collection, selector, view_options)
  end

  let(:options) { {} }

  let(:map_reduce_spec) do
    map_reduce.send(:map_reduce_spec, double('session'))
  end

  before do
    authorized_collection.delete_many
    authorized_collection.insert_many(documents)
  end

  let(:map_reduce) do
    described_class.new(view, map, reduce, options)
  end

  describe '#initialize' do
    it 'warns of deprecation' do
      Mongo::Logger.logger.should receive(:warn).with('MONGODB | The map_reduce operation is deprecated, please use the aggregation pipeline instead')
      map_reduce
    end
  end

  describe '#map_function' do
    it 'returns the map function' do
      expect(map_reduce.map_function).to eq(map)
    end
  end

  describe '#reduce_function' do
    it 'returns the reduce function' do
      expect(map_reduce.reduce_function).to eq(reduce)
    end
  end

  describe '#map' do
    let(:results) do
      map_reduce.map do |document|
        document
      end
    end

    it 'calls the Enumerable method' do
      expect(results.sort_by { |d| d['_id'] }).to eq(map_reduce.to_a.sort_by { |d| d['_id'] })
    end
  end

  describe '#reduce' do
    let(:results) do
      map_reduce.reduce(0) { |sum, doc| sum + doc['value']['population'] }
    end

    it 'calls the Enumerable method' do
      expect(results).to eq(12000000)
    end
  end

  describe '#each' do
    context 'when no options are provided' do
      it 'iterates over the documents in the result' do
        map_reduce.each do |document|
          expect(document[:value]).to_not be_nil
        end
      end
    end

    context 'when provided a session' do
      let(:options) { { session: session } }
      let(:operation) { map_reduce.to_a }
      let(:client) { authorized_client }

      it_behaves_like 'an operation using a session'
    end

    context 'when out is in the options' do
      before do
        authorized_client['output_collection'].delete_many
      end

      context 'when out is a string' do
        let(:options) { { :out => 'output_collection' } }

        it 'iterates over the documents in the result' do
          map_reduce.each do |document|
            expect(document[:value]).to_not be_nil
          end
        end
      end

      context 'when out is a document' do
        let(:options) { { :out => { replace: 'output_collection' } } }

        it 'iterates over the documents in the result' do
          map_reduce.each do |document|
            expect(document[:value]).to_not be_nil
          end
        end
      end
    end

    context 'when out is inline' do
      let(:new_map_reduce) { map_reduce.out(inline: 1) }

      it 'iterates over the documents in the result' do
        new_map_reduce.each do |document|
          expect(document[:value]).to_not be_nil
        end
      end
    end
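    # The :out option controls where mapReduce writes its results. From the
    # contexts in this file, the accepted forms are roughly:
    #
    #   map_reduce.out(inline: 1)                    # results returned inline in the reply
    #   map_reduce.out(replace: 'output_collection') # replace the target collection
    #   map_reduce.out(merge: 'output_collection')   # merge results into the target
    #   map_reduce.out(reduce: 'output_collection')  # re-reduce results into the target
    #   map_reduce.out('output_collection')          # a bare name writes to that collection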
    context 'when out is a collection' do
      before do
        authorized_client['output_collection'].delete_many
      end

      context 'when #each is called without a block' do
        let(:new_map_reduce) { map_reduce.out(replace: 'output_collection') }

        before do
          new_map_reduce.each
        end

        it 'executes the map reduce' do
          expect(new_map_reduce.to_a.sort_by { |d| d['_id'] }).to eq(map_reduce.to_a.sort_by { |d| d['_id'] })
        end
      end

      context 'when the option is to replace' do
        let(:new_map_reduce) { map_reduce.out(replace: 'output_collection') }

        it 'iterates over the documents in the result' do
          new_map_reduce.each do |document|
            expect(document[:value]).to_not be_nil
          end
        end

        it 'fetches the results from the collection' do
          expect(new_map_reduce.count).to eq(2)
        end

        context 'when provided a session' do
          let(:options) { { session: session } }
          let(:operation) { new_map_reduce.to_a }
          let(:client) { authorized_client }

          it_behaves_like 'an operation using a session'
        end

        context 'when the output collection is iterated' do
          min_server_fcv '3.6'
          require_topology :replica_set, :sharded

          let(:options) { { session: session } }
          let(:session) { client.start_session }

          let(:view) do
            Mongo::Collection::View.new(client[TEST_COLL], selector, view_options)
          end

          let(:subscriber) { Mrss::EventSubscriber.new }

          let(:client) do
            authorized_client.tap do |client|
              client.subscribe(Mongo::Monitoring::COMMAND, subscriber)
            end
          end

          let(:find_command) do
            subscriber.started_events[-1].command
          end

          before do
            begin; client[TEST_COLL].create; rescue; end
            begin; client.use('another-db')[TEST_COLL].create; rescue; end
          end

          it 'uses the session when iterating over the output collection' do
            new_map_reduce.to_a
            expect(find_command["lsid"]).to eq(BSON::Document.new(session.session_id))
          end
        end

        context 'when another db is specified' do
          min_server_fcv '3.6'
          require_topology :single, :replica_set
          require_no_auth

          let(:new_map_reduce) do
            map_reduce.out(db: 'another-db', replace: 'output_collection')
          end

          it 'iterates over the documents in the result' do
            new_map_reduce.each do |document|
              expect(document[:value]).to_not be_nil
            end
          end

          it 'fetches the results from the collection' do
            expect(new_map_reduce.count).to eq(2)
          end
        end
      end

      context 'when the option is to merge' do
        let(:new_map_reduce) { map_reduce.out(merge: 'output_collection') }

        it 'iterates over the documents in the result' do
          new_map_reduce.each do |document|
            expect(document[:value]).to_not be_nil
          end
        end

        it 'fetches the results from the collection' do
          expect(new_map_reduce.count).to eq(2)
        end

        context 'when another db is specified' do
          min_server_fcv '3.0'
          require_topology :single, :replica_set
          require_no_auth

          let(:new_map_reduce) do
            map_reduce.out(db: 'another-db', merge: 'output_collection')
          end

          it 'iterates over the documents in the result' do
            new_map_reduce.each do |document|
              expect(document[:value]).to_not be_nil
            end
          end

          it 'fetches the results from the collection' do
            expect(new_map_reduce.count).to eq(2)
          end
        end
      end

      context 'when the option is to reduce' do
        let(:new_map_reduce) { map_reduce.out(reduce: 'output_collection') }

        it 'iterates over the documents in the result' do
          new_map_reduce.each do |document|
            expect(document[:value]).to_not be_nil
          end
        end

        it 'fetches the results from the collection' do
          expect(new_map_reduce.count).to eq(2)
        end

        context 'when another db is specified' do
          min_server_fcv '3.0'
          require_topology :single, :replica_set
          require_no_auth

          let(:new_map_reduce) do
            map_reduce.out(db: 'another-db', reduce: 'output_collection')
          end

          it 'iterates over the documents in the result' do
            new_map_reduce.each do |document|
              expect(document[:value]).to_not be_nil
            end
          end

          it 'fetches the results from the collection' do
            expect(new_map_reduce.count).to eq(2)
          end
        end
      end

      context 'when the option is a collection name' do
        let(:new_map_reduce) { map_reduce.out('output_collection') }

        it 'fetches the results from the collection' do
          expect(new_map_reduce.count).to eq(2)
        end
      end
    end

    context 'when the view has a selector' do
      context 'when the selector is basic' do
        let(:selector) { { 'name' => 'Berlin' } }

        it 'applies the selector to the map/reduce' do
          map_reduce.each do |document|
            expect(document[:_id]).to eq('Berlin')
          end
        end

        it 'includes the selector in the operation spec' do
          expect(map_reduce_spec[:selector][:query]).to eq(selector)
        end
      end

      context 'when the selector is advanced' do
        let(:selector) { { :$query => { 'name' => 'Berlin' } } }

        it 'applies the selector to the map/reduce' do
          map_reduce.each do |document|
            expect(document[:_id]).to eq('Berlin')
          end
        end

        it 'includes the selector in the operation spec' do
          expect(map_reduce_spec[:selector][:query]).to eq(selector[:$query])
        end
      end
    end

    context 'when the view has a limit' do
      let(:view_options) { { limit: 1 } }

      it 'applies the limit to the map/reduce' do
        map_reduce.each do |document|
          expect(document[:_id]).to eq('Berlin')
        end
      end
    end
  end

  describe '#execute' do
    context 'when output is to a collection' do
      let(:options) { { out: 'output_collection' } }

      let!(:result) { map_reduce.execute }

      it 'executes the map reduce' do
        expect(authorized_client['output_collection'].count).to eq(2)
      end

      it 'returns a result object' do
        expect(result).to be_a(Mongo::Operation::Result)
      end
    end

    context 'when there is no output' do
      let(:result) { map_reduce.execute }

      it 'executes the map reduce' do
        expect(result.documents.size).to eq(2)
      end

      it 'returns a result object' do
        expect(result).to be_a(Mongo::Operation::Result)
      end
    end

    context 'when a session is provided' do
      let(:session) { authorized_client.start_session }
      let(:options) { { session: session } }
      let(:operation) { map_reduce.execute }

      let(:failed_operation) do
        described_class.new(view, '$invalid', reduce, options).execute
      end

      let(:client) { authorized_client }

      it_behaves_like 'an operation using a session'
      it_behaves_like 'a failed operation using a session'
    end
  end

  describe '#finalize' do
    let(:finalize) do
      %Q{
      function(key, value) {
        value.testing = test;
        return value;
      }}
    end

    let(:new_map_reduce) { map_reduce.finalize(finalize) }

    it 'sets the finalize function' do
      expect(new_map_reduce.finalize).to eq(finalize)
    end

    it 'includes the finalize function in the operation spec' do
      expect(new_map_reduce.send(:map_reduce_spec, double('session'))[:selector][:finalize]).to eq(finalize)
    end
  end

  describe '#js_mode' do
    let(:new_map_reduce) { map_reduce.js_mode(true) }

    it 'sets the js mode value' do
      expect(new_map_reduce.js_mode).to be true
    end

    it 'includes the js mode value in the operation spec' do
      expect(new_map_reduce.send(:map_reduce_spec, double('session'))[:selector][:jsMode]).to be(true)
    end
  end

  describe '#out' do
    let(:location) { { 'replace' => 'testing' } }

    let(:new_map_reduce) { map_reduce.out(location) }

    it 'sets the out value' do
      expect(new_map_reduce.out).to eq(location)
    end

    it 'includes the out value in the operation spec' do
      expect(new_map_reduce.send(:map_reduce_spec, double('session'))[:selector][:out]).to eq(location)
    end

    context 'when out is not defined' do
      it 'defaults to inline' do
        expect(map_reduce_spec[:selector][:out]).to eq('inline' => 1)
      end
    end
    context 'when out is specified in the options' do
      let(:location) { { 'replace' => 'testing' } }
      let(:options) { { :out => location } }

      it 'sets the out value' do
        expect(map_reduce.out).to eq(location)
      end

      it 'includes the out value in the operation spec' do
        expect(map_reduce_spec[:selector][:out]).to eq(location)
      end
    end

    context 'when out is not inline' do
      let(:location) { { 'replace' => 'testing' } }
      let(:options) { { :out => location } }

      it 'does not allow the operation on a secondary' do
        expect(map_reduce.send(:secondary_ok?)).to be false
      end

      context 'when the server is not valid for writing' do
        clean_slate
        require_warning_clean
        require_no_linting

        before do
          stop_monitoring(authorized_client)
        end

        it 'reroutes the operation to a primary' do
          RSpec::Mocks.with_temporary_scope do
            allow(map_reduce).to receive(:valid_server?).and_return(false)
            expect(Mongo::Logger.logger).to receive(:warn).once do |msg|
              expect(msg).to include('Rerouting the MapReduce operation to the primary server')
            end
            map_reduce.to_a
          end
        end

        context 'when the view has a write concern' do
          let(:collection) do
            authorized_collection.with(write: INVALID_WRITE_CONCERN)
          end

          let(:view) do
            authorized_client.cluster.servers.map do |server|
              server.pool.ready
            end
            Mongo::Collection::View.new(collection, selector, view_options)
          end

          shared_examples_for 'map reduce that writes accepting write concern' do
            context 'when the server supports write concern on the mapReduce command' do
              min_server_fcv '3.4'
              require_topology :single

              it 'uses the write concern' do
                expect { map_reduce.to_a }.to raise_exception(Mongo::Error::OperationFailure)
              end
            end

            context 'when the server does not support write concern on the mapReduce command' do
              max_server_version '3.2'

              it 'does not apply the write concern' do
                expect(map_reduce.to_a.size).to eq(2)
              end
            end
          end

          context 'when out is a String' do
            let(:options) { { :out => 'new-collection' } }

            it_behaves_like 'map reduce that writes accepting write concern'
          end

          context 'when out is a document and not inline' do
            let(:options) { { :out => { merge: 'existing-collection' } } }

            it_behaves_like 'map reduce that writes accepting write concern'
          end

          context 'when out is a document but inline is specified' do
            let(:options) { { :out => { inline: 1 } } }

            it 'does not use the write concern' do
              expect(map_reduce.to_a.size).to eq(2)
            end
          end
        end
      end

      context 'when the server is valid for writing' do
        clean_slate
        require_warning_clean
        require_no_linting

        before do
          stop_monitoring(authorized_client)
        end

        it 'does not reroute the operation to a primary' do
          # We produce a deprecation warning, but there shouldn't be
          # the reroute warning.
          expect(Mongo::Logger.logger).to receive(:warn).once do |msg|
            expect(msg).not_to include('Rerouting the MapReduce operation to the primary server')
          end
          map_reduce.to_a
        end
      end
    end
  end

  describe '#scope' do
    let(:object) { { 'value' => 'testing' } }

    let(:new_map_reduce) { map_reduce.scope(object) }

    it 'sets the scope object' do
      expect(new_map_reduce.scope).to eq(object)
    end

    it 'includes the scope object in the operation spec' do
      expect(new_map_reduce.send(:map_reduce_spec, double('session'))[:selector][:scope]).to eq(object)
    end
  end

  describe '#verbose' do
    let(:verbose) { false }

    let(:new_map_reduce) { map_reduce.verbose(verbose) }

    it 'sets the verbose value' do
      expect(new_map_reduce.verbose).to be(false)
    end

    it 'includes the verbose option in the operation spec' do
      expect(new_map_reduce.send(:map_reduce_spec, double('session'))[:selector][:verbose]).to eq(verbose)
    end
  end

  context 'when limit is set on the view' do
    let(:limit) { 3 }
    let(:view_options) { { limit: limit } }

    it 'includes the limit in the operation spec' do
      expect(map_reduce_spec[:selector][:limit]).to be(limit)
    end
  end

  context 'when sort is set on the view' do
    let(:sort) { { name: -1 } }
    let(:view_options) { { sort: sort } }

    it 'includes the sort object in the operation spec' do
      expect(map_reduce_spec[:selector][:sort][:name]).to eq(sort[:name])
    end
  end

  context 'when the collection has a read preference' do
    let(:read_preference) { { mode: :secondary } }

    it 'includes the read preference in the spec' do
      allow(authorized_collection).to receive(:read_preference).and_return(read_preference)
      expect(map_reduce_spec[:read]).to eq(read_preference)
    end
  end

  context 'when collation is specified' do
    let(:map) do
      %Q{
      function() {
        emit(this.name, 1);
      }}
    end

    let(:reduce) do
      %Q{
      function(key, values) {
        return Array.sum(values);
      }}
    end

    before do
      authorized_collection.insert_many([{ name: 'bang' }, { name: 'bang' }])
    end

    let(:selector) { { name: 'BANG' } }

    context 'when the server selected supports collations' do
      min_server_fcv '3.4'

      context 'when the collation key is a String' do
        let(:options) { { 'collation' => { locale: 'en_US', strength: 2 } } }

        it 'applies the collation' do
          expect(map_reduce.first['value']).to eq(2)
        end
      end

      context 'when the collation key is a Symbol' do
        let(:options) { { collation: { locale: 'en_US', strength: 2 } } }

        it 'applies the collation' do
          expect(map_reduce.first['value']).to eq(2)
        end
      end
    end

    context 'when the server selected does not support collations' do
      max_server_version '3.2'

      context 'when the map reduce has collation specified in its options' do
        let(:options) { { collation: { locale: 'en_US', strength: 2 } } }

        it 'raises an exception' do
          expect { map_reduce.to_a }.to raise_exception(Mongo::Error::UnsupportedCollation)
        end

        context 'when a String key is used' do
          let(:options) { { 'collation' => { locale: 'en_US', strength: 2 } } }

          it 'raises an exception' do
            expect { map_reduce.to_a }.to raise_exception(Mongo::Error::UnsupportedCollation)
          end
        end
      end

      context 'when the view has collation specified in its options' do
        let(:view_options) { { collation: { locale: 'en_US', strength: 2 } } }

        it 'raises an exception' do
          expect { map_reduce.to_a }.to raise_exception(Mongo::Error::UnsupportedCollation)
        end

        context 'when a String key is used' do
          let(:options) { { 'collation' => { locale: 'en_US', strength: 2 } } }

          it 'raises an exception' do
            expect { map_reduce.to_a }.to raise_exception(Mongo::Error::UnsupportedCollation)
          end
        end
      end
    end
  end

  describe '#map_reduce_spec' do
    context 'when read preference is given' do
      let(:view_options) do
        { read: { mode: :secondary } }
      end

      context 'selector' do
        # For compatibility with released versions of Mongoid, this method
        # must return read preference under the :read key.
        it 'contains read preference' do
          map_reduce_spec[:selector][:read].should == { 'mode' => :secondary }
        end
      end
    end
  end
end

mongo-ruby-driver-2.21.3/spec/mongo/collection/view/readable_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe Mongo::Collection::View::Readable do
  let(:selector) { {} }
  let(:options) { {} }

  let(:view) do
    Mongo::Collection::View.new(authorized_collection, selector, options)
  end

  before do
    authorized_collection.delete_many
  end

  shared_examples_for 'a read concern aware operation' do
    context 'when a read concern is provided' do
      min_server_fcv '3.2'

      let(:new_view) do
        Mongo::Collection::View.new(new_collection, selector, options)
      end

      context 'when the read concern is valid' do
        let(:new_collection) do
          authorized_collection.with(read_concern: { level: 'local' })
        end

        it 'sends the read concern' do
          expect { result }.to_not raise_error
        end
      end

      context 'when the read concern is not valid' do
        let(:new_collection) do
          authorized_collection.with(read_concern: { level: 'na' })
        end

        it 'raises an exception' do
          expect { result }.to raise_error(Mongo::Error::OperationFailure)
        end
      end
    end
  end

  describe '#allow_partial_results' do
    let(:new_view) { view.allow_partial_results }

    it 'sets the flag' do
      expect(new_view.options[:allow_partial_results]).to be true
    end

    it 'returns a new View' do
      expect(new_view).not_to be(view)
    end
  end

  describe '#allow_disk_use' do
    let(:new_view) { view.allow_disk_use }

    it 'sets the flag' do
      expect(new_view.options[:allow_disk_use]).to be true
    end

    it 'returns the new View' do
      expect(new_view).not_to be(view)
    end
  end

  describe '#aggregate' do
    let(:documents) do
      [
        { city: "Berlin", pop: 18913, neighborhood: "Kreuzberg" },
        { city: "Berlin", pop: 84143, neighborhood: "Mitte" },
        { city: "New York", pop: 40270, neighborhood: "Brooklyn" }
      ]
    end

    let(:pipeline) do
      [{ "$group" => { "_id" => "$city", "totalpop" => { "$sum" => "$pop" } } }]
    end

    before do
      authorized_collection.insert_many(documents)
    end

    let(:aggregation) { view.aggregate(pipeline) }

    context 'when incorporating read concern' do
      let(:result) do
        new_view.aggregate(pipeline, options).to_a
      end

      it_behaves_like 'a read concern aware operation'
    end

    context 'when not iterating the aggregation' do
      it 'returns the aggregation object' do
        expect(aggregation).to be_a(Mongo::Collection::View::Aggregation)
      end
    end

    context 'when iterating the aggregation' do
      it 'yields to each document' do
        aggregation.each do |doc|
          expect(doc[:totalpop]).to_not be_nil
        end
      end
    end

    context 'when options are specified' do
      let(:agg_options) { { :max_time_ms => 500 } }

      let(:aggregation) do
        view.aggregate(pipeline, agg_options)
      end

      it 'passes the option to the Aggregation object' do
        expect(aggregation.options[:max_time_ms]).to eq(agg_options[:max_time_ms])
      end
    end
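    # The contexts below exercise the :broken_view_options config flag. With
    # the flag off, options set via chained view methods are forwarded to the
    # aggregation; with the flag on, the driver preserves the older (broken)
    # behavior and drops them. Roughly:
    #
    #   view.batch_size(2).aggregate(pipeline).options[:batch_size]
    #   # => 2 with the flag off, nil with the flag on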
    context "when using methods to set aggregate options" do
      context "when the broken_view_options flag is off" do
        config_override :broken_view_options, false

        let(:aggregate) do
          view.send(opt, param).aggregate(pipeline, options)
        end

        context "when a :allow_disk_use is given" do
          let(:aggregate) do
            view.allow_disk_use.aggregate(pipeline, options)
          end

          let(:opt) { :allow_disk_use }

          it "sets the option correctly" do
            expect(aggregate.options[opt]).to eq(true)
          end
        end

        context "when a :batch_size is given" do
          let(:opt) { :batch_size }
          let(:param) { 2 }

          it "sets the option correctly" do
            expect(aggregate.options[opt]).to eq(param)
          end
        end

        context "when a :max_time_ms is given" do
          let(:opt) { :max_time_ms }
          let(:param) { 2 }

          it "sets the option correctly" do
            expect(aggregate.options[opt]).to eq(param)
          end
        end

        context "when a :max_await_time_ms is given" do
          let(:opt) { :max_await_time_ms }
          let(:param) { 2 }

          it "sets the option correctly" do
            expect(aggregate.options[opt]).to eq(param)
          end
        end

        context "when a :comment is given" do
          let(:opt) { :comment }
          let(:param) { "comment" }

          it "sets the option correctly" do
            expect(aggregate.options[opt]).to eq(param)
          end
        end

        context "when a :hint is given" do
          let(:opt) { :hint }
          let(:param) { "_id_" }

          it "sets the option correctly" do
            expect(aggregate.options[opt]).to eq(param)
          end
        end

        context "when a :session is given on the view" do
          let(:opt) { :session }
          let(:param) { authorized_client.start_session }

          let(:aggregate) do
            authorized_collection.find({}, session: param).aggregate(pipeline, options)
          end

          after do
            param.end_session
          end

          context "when broken_view_options is false" do
            config_override :broken_view_options, false

            it "sets the option correctly" do
              expect(aggregate.options[opt]).to eq(param)
            end
          end

          context "when broken_view_options is true" do
            config_override :broken_view_options, true

            it "does not set the option" do
              expect(aggregate.options[opt]).to be nil
            end
          end
        end

        context "when also including in options" do
          let(:aggregate) do
            view.limit(1).aggregate(pipeline, { limit: 2 })
          end

          it "sets the option correctly" do
            expect(aggregate.options[:limit]).to eq(2)
          end
        end
      end

      context "when the broken_view_options flag is on" do
        config_override :broken_view_options, true

        let(:aggregate) do
          view.send(opt, param).aggregate(pipeline, options)
        end

        context "when a :allow_disk_use is given" do
          let(:aggregate) do
            view.allow_disk_use.aggregate(pipeline, options)
          end

          let(:opt) { :allow_disk_use }

          it "doesn't set the option" do
            expect(aggregate.options[opt]).to be_nil
          end
        end

        context "when a :batch_size is given" do
          let(:opt) { :batch_size }
          let(:param) { 2 }

          it "doesn't set the option" do
            expect(aggregate.options[opt]).to be_nil
          end
        end

        context "when a :max_time_ms is given" do
          let(:opt) { :max_time_ms }
          let(:param) { 2 }

          it "doesn't set the option" do
            expect(aggregate.options[opt]).to be_nil
          end
        end

        context "when a :max_await_time_ms is given" do
          let(:opt) { :max_await_time_ms }
          let(:param) { 2 }

          it "doesn't set the option" do
            expect(aggregate.options[opt]).to be_nil
          end
        end

        context "when a :comment is given" do
          let(:opt) { :comment }
          let(:param) { "comment" }

          it "doesn't set the option" do
            expect(aggregate.options[opt]).to be_nil
          end
        end

        context "when a :hint is given" do
          let(:opt) { :hint }
          let(:param) { "_id_" }

          it "doesn't set the option" do
            expect(aggregate.options[opt]).to be_nil
          end
        end

        context "when also including in options" do
          let(:aggregate) do
            view.limit(1).aggregate(pipeline, { limit: 2 })
          end

          it "sets the option correctly" do
            expect(aggregate.options[:limit]).to eq(2)
          end
        end
      end
    end
  end

  describe '#map_reduce' do
    let(:map) do
      %Q{
      function() {
        emit(this.name, { population: this.population });
      }}
    end

    let(:reduce) do
      %Q{
      function(key, values) {
        var result = { population: 0 };
        values.forEach(function(value) {
          result.population += value.population;
        });
        return result;
      }}
    end
    let(:documents) do
      [
        { name: 'Berlin', population: 3000000 },
        { name: 'London', population: 9000000 }
      ]
    end

    before do
      authorized_collection.insert_many(documents)
    end

    let(:map_reduce) do
      view.map_reduce(map, reduce)
    end

    context 'when incorporating read concern' do
      let(:result) do
        new_view.map_reduce(map, reduce, options).to_a
      end

      it_behaves_like 'a read concern aware operation'
    end

    context 'when a session supporting causal consistency is used' do
      let(:view) do
        Mongo::Collection::View.new(collection, selector, session: session)
      end

      let(:operation) do
        begin; view.map_reduce(map, reduce).to_a; rescue; end
      end

      let(:command) do
        operation
        subscriber.started_events.find { |cmd| cmd.command_name == 'mapReduce' }.command
      end

      it_behaves_like 'an operation supporting causally consistent reads'
    end

    context 'when not iterating the map/reduce' do
      it 'returns the map/reduce object' do
        expect(map_reduce).to be_a(Mongo::Collection::View::MapReduce)
      end
    end

    context 'when iterating the map/reduce' do
      it 'yields to each document' do
        map_reduce.each do |doc|
          expect(doc[:_id]).to_not be_nil
        end
      end
    end

    context "when using methods to set map_reduce options" do
      let(:map_reduce) do
        view.send(opt, param).map_reduce(map, reduce, options)
      end

      context "when a :limit is given" do
        let(:opt) { :limit }
        let(:param) { 1 }

        it "sets the option correctly" do
          expect(map_reduce.options[opt]).to eq(param)
        end
      end

      context "when a :sort is given" do
        let(:opt) { :sort }
        let(:param) { { 'x' => Mongo::Index::ASCENDING } }

        it "sets the option correctly" do
          expect(map_reduce.options[opt]).to eq(param)
        end
      end

      context "when also including in options" do
        let(:map_reduce) do
          view.limit(1).map_reduce(map, reduce, { limit: 2 })
        end

        it "sets the option correctly" do
          expect(map_reduce.options[:limit]).to eq(2)
        end
      end

      context "when a :session is given on the view" do
        let(:opt) { :session }
        let(:param) { authorized_client.start_session }

        let(:map_reduce) do
          authorized_collection.find({}, session: param).map_reduce(map, reduce, options)
        end

        after do
          param.end_session
        end

        with_config_values :broken_view_options, true, false do
          it "sets the option correctly" do
            expect(map_reduce.options[opt]).to eq(param)
          end
        end
      end
    end
  end

  describe '#batch_size' do
    let(:options) { { :batch_size => 13 } }

    context 'when a batch size is specified' do
      let(:new_batch_size) { 15 }

      it 'sets the batch size' do
        new_view = view.batch_size(new_batch_size)
        expect(new_view.batch_size).to eq(new_batch_size)
      end

      it 'returns a new View' do
        expect(view.batch_size(new_batch_size)).not_to be(view)
      end
    end

    context 'when a batch size is not specified' do
      it 'returns the batch_size' do
        expect(view.batch_size).to eq(options[:batch_size])
      end
    end
  end

  describe '#comment' do
    let(:options) { { :comment => 'test1' } }

    context 'when a comment is specified' do
      let(:new_comment) { 'test2' }

      it 'sets the comment' do
        new_view = view.comment(new_comment)
        expect(new_view.comment).to eq(new_comment)
      end

      it 'returns a new View' do
        expect(view.comment(new_comment)).not_to be(view)
      end
    end

    context 'when a comment is not specified' do
      it 'returns the comment' do
        expect(view.comment).to eq(options[:comment])
      end
    end
  end

  describe '#count' do
    let(:documents) do
      (1..10).map { |i| { field: "test#{i}" } }
    end

    before do
      authorized_collection.delete_many
      authorized_collection.insert_many(documents)
    end

    let(:result) do
      view.count(options)
    end

    context 'when incorporating read concern' do
      let(:result) do
        new_view.count(options)
      end

      it_behaves_like 'a read concern aware operation'
    end

    context 'when a selector is provided' do
      let(:selector) { { field: 'test1' } }

      it 'returns the count of matching documents' do
        expect(view.count).to eq(1)
      end

      it 'returns an integer' do
        expect(view.count).to be_a(Integer)
      end
    end

    context 'when no selector is provided' do
      it 'returns the count of matching documents' do
        expect(view.count).to eq(10)
      end
    end

    context 'not sharded' do
      require_topology :single, :replica_set

      it 'takes a read preference option' do
        # The secondary may be delayed; since this test wants 10 documents,
        # it must query the primary.
        expect(view.count(read: { mode: :primary })).to eq(10)
      end
    end

    context 'when a read preference is set on the view' do
      require_topology :single, :replica_set

      let(:client) do
        # Set a timeout; otherwise the test will hang for 30 seconds.
        authorized_client.with(server_selection_timeout: 1)
      end

      let(:collection) { client[authorized_collection.name] }

      before do
        allow(client.cluster).to receive(:single?).and_return(false)
      end

      let(:view) do
        Mongo::Collection::View.new(collection, selector, options)
      end

      let(:view_with_read_pref) do
        view.read(:mode => :secondary, :tag_sets => [{ 'non' => 'existent' }])
      end

      let(:result) do
        view_with_read_pref.count
      end

      it 'uses the read preference setting on the view' do
        expect { result }.to raise_exception(Mongo::Error::NoServerAvailable)
      end
    end

    context 'when the collection has a read preference set' do
      let(:client) do
        # Set a timeout in case the collection read_preference does get used;
        # otherwise the test will hang for 30 seconds.
        authorized_client.with(server_selection_timeout: 1)
      end

      let(:read_preference) do
        { :mode => :secondary, :tag_sets => [{ 'non' => 'existent' }] }
      end

      let(:collection) do
        client[authorized_collection.name, read: read_preference]
      end

      let(:view) do
        Mongo::Collection::View.new(collection, selector, options)
      end

      context 'when a read preference argument is provided' do
        let(:result) do
          view.count(read: { mode: :primary })
        end

        it 'uses the read preference passed to the method' do
          expect(result).to eq(10)
        end
      end

      context 'when a read preference is set on the view' do
        let(:view_with_read_pref) do
          view.read(mode: :primary)
        end

        let(:result) do
          view_with_read_pref.count
        end

        it 'uses the read preference of the view' do
          expect(result).to eq(10)
        end
      end

      context 'when no read preference argument is provided' do
        require_topology :single, :replica_set

        before do
          allow(view.collection.client.cluster).to receive(:single?).and_return(false)
        end

        let(:result) do
          view.count
        end

        it 'uses the read preference of the collection' do
          expect { result }.to raise_exception(Mongo::Error::NoServerAvailable)
        end
      end

      context 'when the collection does not have a read preference set' do
        require_topology :single, :replica_set

        let(:client) do
          authorized_client.with(server_selection_timeout: 1)
        end

        before do
          allow(view.collection.client.cluster).to receive(:single?).and_return(false)
        end

        let(:collection) { client[authorized_collection.name] }

        let(:view) do
          Mongo::Collection::View.new(collection, selector, options)
        end

        let(:result) do
          read_preference = { :mode => :secondary, :tag_sets => [{ 'non' => 'existent' }] }
          view.count(read: read_preference)
        end

        it 'uses the read preference passed to the method' do
          expect { result }.to raise_exception(Mongo::Error::NoServerAvailable)
        end
      end

      context 'when a read preference is set on the view' do
        let(:view_with_read_pref) do
          view.read(:mode => :primary)
        end

        let(:result) do
          view_with_read_pref.count
        end

        it 'uses the read preference passed to the method' do
          expect(result).to eq(10)
        end
      end
    end
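    # Taken together, the contexts above imply this precedence for the read
    # preference used by #count: an explicit read: argument to #count wins,
    # then a read preference set on the view, and finally the collection's
    # own read preference.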
    it 'takes a max_time_ms option' do
      expect {
        view.count(max_time_ms: 0.1)
      }.to raise_error(Mongo::Error::OperationFailure)
    end

    it 'sets the max_time_ms option on the command' do
      expect(view.count(max_time_ms: 100)).to eq(10)
    end

    context 'when a collation is specified' do
      let(:selector) { { name: 'BANG' } }

      let(:result) do
        view.count
      end

      before do
        authorized_collection.insert_one(name: 'bang')
      end

      let(:options) { { collation: { locale: 'en_US', strength: 2 } } }

      context 'when the server selected supports collations' do
        min_server_fcv '3.4'

        it 'applies the collation to the count' do
          expect(result).to eq(1)
        end
      end

      context 'when the server selected does not support collations' do
        max_server_version '3.2'

        it 'raises an exception' do
          expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation)
        end

        context 'when a String key is used' do
          let(:options) { { 'collation' => { locale: 'en_US', strength: 2 } } }

          it 'raises an exception' do
            expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation)
          end
        end
      end
    end

    context 'when a collation is specified in the method options' do
      let(:selector) { { name: 'BANG' } }

      let(:result) do
        view.count(count_options)
      end

      before do
        authorized_collection.insert_one(name: 'bang')
      end

      let(:count_options) { { collation: { locale: 'en_US', strength: 2 } } }

      context 'when the server selected supports collations' do
        min_server_fcv '3.4'

        it 'applies the collation to the count' do
          expect(result).to eq(1)
        end
      end

      context 'when the server selected does not support collations' do
        max_server_version '3.2'

        it 'raises an exception' do
          expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation)
        end

        context 'when a String key is used' do
          let(:count_options) { { 'collation' => { locale: 'en_US', strength: 2 } } }

          it 'raises an exception' do
            expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation)
          end
        end
      end
    end

    context "when using methods to set count options" do
      let(:obj_path) { [:selector, opt] }

      shared_examples "a count option" do
        context "when the broken_view_options flag is off" do
          config_override :broken_view_options, false

          it "sets the option correctly" do
            expect(Mongo::Operation::Count).to receive(:new).once.and_wrap_original do |m, *args|
              opts = args.first.slice(*args.first.keys - [:session])
              expect(opts.dig(*obj_path)).to eq(param)
              m.call(*args)
            end
            view.send(opt, param).count(options)
          end
        end

        context "when the broken_view_options flag is on" do
          config_override :broken_view_options, true

          it "doesn't set the option" do
            expect(Mongo::Operation::Count).to receive(:new).once.and_wrap_original do |m, *args|
              opts = args.first.slice(*args.first.keys - [:session])
              expect(opts.dig(*obj_path)).to be_nil
              m.call(*args)
            end
            view.send(opt, param).count(options)
          end
        end
      end

      context "when a :hint is given" do
        let(:opt) { :hint }
        let(:param) { "_id_" }

        it_behaves_like "a count option"
      end

      context "when a :max_time_ms is given" do
        let(:opt) { :max_time_ms }
        let(:param) { 5000 }
        let(:obj_path) { [:selector, :maxTimeMS] }

        it_behaves_like "a count option"
      end

      context "when a :comment is given" do
        let(:opt) { :comment }
        let(:param) { "comment" }
        let(:obj_path) { opt }

        it_behaves_like "a count option"
      end

      context "when a :limit is given" do
        let(:opt) { :limit }
        let(:param) { 1 }

        it_behaves_like "a count option"
      end

      context "when a :skip is given" do
        let(:opt) { :skip }
        let(:param) { 1 }

        it_behaves_like "a count option"
      end

      context "when a :session is given on the view" do
        let(:opt) { :session }
        let(:param) { authorized_client.start_session }

        let(:aggregate) do
          authorized_collection.find({}, session: param).aggregate(pipeline, options)
        end

        after do
          param.end_session
        end

        with_config_values :broken_view_options, true, false do
          it "sets the option correctly" do
            expect(Mongo::Operation::Count).to receive(:new).once.and_wrap_original do |m, *args|
              expect(args.first[opt]).to eq(param)
              m.call(*args)
            end
            authorized_collection.find({}, session: param).count(options)
          end
        end
      end

      context "when also including in options" do
        with_config_values :broken_view_options, true, false do
          it "gives options higher precedence" do
            expect(Mongo::Operation::Count).to receive(:new).once.and_wrap_original do |m, *args|
              opts = args.first.slice(:selector)
              expect(opts.dig(:selector, :limit)).to eq(2)
              m.call(*args)
            end
            view.limit(1).count({ limit: 2 })
          end
        end
      end
    end
  end

  describe "#estimated_document_count" do
    let(:result) do
      view.estimated_document_count(options)
    end

    context 'when limit is set' do
      it 'raises an error' do
        expect {
          view.limit(5).estimated_document_count(options)
        }.to raise_error(ArgumentError, "Cannot call estimated_document_count when querying with limit")
      end
    end

    context 'when skip is set' do
      it 'raises an error' do
        expect {
          view.skip(5).estimated_document_count(options)
        }.to raise_error(ArgumentError, "Cannot call estimated_document_count when querying with skip")
      end
    end

    context 'when limit passed as an option' do
      it 'raises an error' do
        expect {
          view.estimated_document_count(options.merge(limit: 5))
        }.to raise_error(ArgumentError, "Cannot call estimated_document_count when querying with limit")
      end
    end

    context 'when skip passed as an option' do
      it 'raises an error' do
        expect {
          view.estimated_document_count(options.merge(skip: 5))
        }.to raise_error(ArgumentError, "Cannot call estimated_document_count when querying with skip")
      end
    end

    context 'when collection has documents' do
      let(:documents) do
        (1..10).map { |i| { field: "test#{i}" } }
      end

      before do
        authorized_collection.delete_many
        authorized_collection.insert_many(documents)
      end

      context 'when a selector is provided' do
        let(:selector) { { field: 'test1' } }

        it 'raises an error' do
          expect {
            result
          }.to raise_error(ArgumentError, "Cannot call estimated_document_count when querying with a filter")
        end
      end

      context 'when no selector is provided' do
        it 'returns the estimated count of matching documents' do
          expect(view.estimated_document_count).to eq(10)
        end
      end
    end

    context 'when collection does not exist' do
      let(:view) do
        Mongo::Collection::View.new(
          authorized_client['nonexistent-collection-for-estimated-document-count'],
          selector, options)
      end

      it 'returns 0' do
        view.estimated_document_count.should == 0
      end
    end

    context "when using methods to set options" do
      context "when the broken_view_options flag is on" do
        config_override :broken_view_options, true

        context "when a :max_time_ms is given" do
          let(:opt) { :max_time_ms }
          let(:param) { 5000 }

          it "doesn't set the option" do
            expect(Mongo::Operation::Count).to receive(:new).once.and_wrap_original do |m, *args|
              opts = args.first.slice(*args.first.keys - [:session])
              expect(opts.dig(:selector, :maxTimeMS)).to be_nil
              m.call(*args)
            end
            view.send(opt, param).estimated_document_count(options)
          end
        end

        context "when a :comment is given" do
          let(:opt) { :comment }
          let(:param) { "comment" }
          let(:obj_path) { opt }

          it "doesn't set the option" do
            expect(Mongo::Operation::Count).to receive(:new).once.and_wrap_original do |m, *args|
              opts = args.first.slice(*args.first.keys - [:session])
              expect(opts[opt]).to be_nil
              m.call(*args)
            end
            view.send(opt, param).estimated_document_count(options)
          end
        end
      end
      context "when the broken_view_options flag is off" do
        config_override :broken_view_options, false

        context "when a :max_time_ms is given" do
          let(:opt) { :max_time_ms }
          let(:param) { 5000 }

          it "sets the option correctly" do
            expect(Mongo::Operation::Count).to receive(:new).once.and_wrap_original do |m, *args|
              opts = args.first.slice(*args.first.keys - [:session])
              expect(opts.dig(:selector, :maxTimeMS)).to eq(param)
              m.call(*args)
            end
            view.send(opt, param).estimated_document_count(options)
          end
        end

        context "when a :comment is given" do
          let(:opt) { :comment }
          let(:param) { "comment" }
          let(:obj_path) { opt }

          it "sets the option correctly" do
            expect(Mongo::Operation::Count).to receive(:new).once.and_wrap_original do |m, *args|
              opts = args.first.slice(*args.first.keys - [:session])
              expect(opts[opt]).to eq(param)
              m.call(*args)
            end
            view.send(opt, param).estimated_document_count(options)
          end
        end

        context "when a :session is given on the view" do
          let(:opt) { :session }
          let(:param) { authorized_client.start_session }

          after do
            param.end_session
          end

          with_config_values :broken_view_options, true, false do
            it "sets the option correctly" do
              expect(Mongo::Operation::Count).to receive(:new).once.and_wrap_original do |m, *args|
                expect(args.first[opt]).to eq(param)
                m.call(*args)
              end
              authorized_collection.find({}, session: param).estimated_document_count(options)
            end
          end
        end

        context "when also including in options" do
          with_config_values :broken_view_options, true, false do
            it "gives options higher precedence" do
              expect(Mongo::Operation::Count).to receive(:new).once.and_wrap_original do |m, *args|
                opts = args.first.slice(:selector)
                expect(opts.dig(:selector, :maxTimeMS)).to eq(2000)
                m.call(*args)
              end
              view.max_time_ms(1500).estimated_document_count({ max_time_ms: 2000 })
            end
          end
        end
      end
    end
  end

  describe '#count_documents' do
    context 'when session is given' do
      min_server_fcv '3.6'

      let(:subscriber) { Mrss::EventSubscriber.new }

      before do
        authorized_collection.client.subscribe(Mongo::Monitoring::COMMAND, subscriber)
      end

      let(:connection) do
        double('connection').tap do |connection|
          allow(connection).to receive_message_chain(:server, :cluster).and_return(authorized_client.cluster)
        end
      end

      it 'passes the session' do
        authorized_collection.client.with_session do |session|
          session.materialize_if_needed
          session_id = session.session_id
          authorized_collection.count_documents({}, session: session)
          event = subscriber.single_command_started_event('aggregate')
          event.command['lsid'].should == session_id
        end
      end
    end

    context "when using methods to set count options" do
      shared_examples "a count option" do
        context "when the broken_view_options flag is on" do
          config_override :broken_view_options, true

          it "doesn't set the option" do
            expect_any_instance_of(Mongo::Collection::View).to receive(:aggregate).once.and_wrap_original do |m, *args|
              opts = args[1]
              expect(opts[opt]).to be_nil
              m.call(*args)
            end
            view.send(opt, param).count_documents(options)
          end
        end

        context "when the broken_view_options flag is off" do
          config_override :broken_view_options, false

          it "sets the option correctly" do
            expect_any_instance_of(Mongo::Collection::View).to receive(:aggregate).once.and_wrap_original do |m, *args|
              opts = args[1]
              expect(opts[opt]).to eq(param)
              m.call(*args)
            end
            view.send(opt, param).count_documents(options)
          end
        end
      end

      context "when a :hint is given" do
        let(:opt) { :hint }
        let(:param) { "_id_" }

        it_behaves_like "a count option"
      end

      context "when a :max_time_ms is given" do
        let(:opt) { :max_time_ms }
        let(:param) { 5000 }

        it_behaves_like "a count option"
      end

      context "when a :comment is given" do
        let(:opt) { :comment }
        let(:param) { "comment" }

        it_behaves_like "a count option"
      end

      context "when a :limit is given" do
        context "when the broken_view_options flag is false" do
          config_override :broken_view_options, false

          it "sets the option correctly" do
            expect_any_instance_of(Mongo::Collection::View).to receive(:aggregate).once.and_wrap_original do |m, *args|
              pipeline, opts = args
              expect(pipeline[1][:'$limit']).to eq(1)
              m.call(*args)
            end
            view.limit(1).count_documents(options)
          end
        end

        context "when the broken_view_options flag is on" do
          config_override :broken_view_options, true

          it "doesn't set the option" do
            expect_any_instance_of(Mongo::Collection::View).to receive(:aggregate).once.and_wrap_original do |m, *args|
              pipeline, opts = args
              expect(pipeline[1][:'$limit']).to be_nil
              m.call(*args)
            end
            view.limit(1).count_documents(options)
          end
        end
      end

      context "when a :skip is given" do
        context "when the broken_view_options flag is on" do
          config_override :broken_view_options, true

          it "doesn't set the option" do
            expect_any_instance_of(Mongo::Collection::View).to receive(:aggregate).once.and_wrap_original do |m, *args|
              pipeline, opts = args
              expect(pipeline[1][:'$skip']).to be_nil
              m.call(*args)
            end
            view.skip(1).count_documents(options)
          end
        end

        context "when the broken_view_options flag is off" do
          config_override :broken_view_options, false

          it "sets the option correctly" do
            expect_any_instance_of(Mongo::Collection::View).to receive(:aggregate).once.and_wrap_original do |m, *args|
              pipeline, opts = args
              expect(pipeline[1][:'$skip']).to eq(1)
              m.call(*args)
            end
            view.skip(1).count_documents(options)
          end
        end
      end

      context "when a :session is given on the view" do
        let(:opt) { :session }
        let(:param) { authorized_client.start_session }

        after do
          param.end_session
        end

        context "when broken_view_options is false" do
          config_override :broken_view_options, false

          it "sets the option correctly" do
            expect_any_instance_of(Mongo::Collection::View).to receive(:aggregate).once.and_wrap_original do |m, *args|
              expect(args[1][opt]).to eq(param)
              m.call(*args)
            end
            authorized_collection.find({}, session: param).count_documents(options)
          end
        end

        context "when broken_view_options is true" do
          config_override :broken_view_options, true

          it "does not set the option" do
            expect_any_instance_of(Mongo::Collection::View).to receive(:aggregate).once.and_wrap_original do |m, *args|
              expect(args[1][opt]).to be nil
              m.call(*args)
            end
            authorized_collection.find({}, session: param).count_documents(options)
          end
        end
      end

      context "when also including in options" do
        with_config_values :broken_view_options, true, false do
          it "gives options higher precedence" do
            expect_any_instance_of(Mongo::Collection::View).to receive(:aggregate).once.and_wrap_original do |m, *args|
              pipeline, opts = args
              expect(pipeline[1][:'$limit']).to eq(2)
              m.call(*args)
            end
            view.limit(1).count_documents({ limit: 2 })
          end
        end
      end
    end
  end

  describe '#distinct' do
    context 'when incorporating read concern' do
      let(:result) do
        new_view.distinct(:field, options)
      end

      it_behaves_like 'a read concern aware operation'
    end

    context 'when a selector is provided' do
      let(:selector) { { field: 'test' } }

      let(:documents) do
        (1..3).map { |i| { field: "test" } }
      end

      before do
        authorized_collection.insert_many(documents)
      end

      context 'when the field is a symbol' do
        let(:distinct) do
          view.distinct(:field)
        end

        it 'returns the distinct values' do
          expect(distinct).to eq(['test'])
        end
      end

      context 'when the field is a string' do
        let(:distinct) do
          view.distinct('field')
        end

        it 'returns the distinct values' do
          expect(distinct).to eq([ 'test' ])
        end
      end

      context 'when the field is nil' do
        let(:distinct) do
          view.distinct(nil)
        end

        it 'raises ArgumentError' do
          expect do
            distinct
          end.to raise_error(ArgumentError, 'Field name for distinct operation must be not nil')
        end
      end

      context 'when the field does not exist' do
        let(:distinct) do
          view.distinct(:doesnotexist)
        end

        it 'returns an empty array' do
          expect(distinct).to be_empty
        end
      end
    end

    context 'when no selector is provided' do
      let(:documents) do
        (1..3).map{ |i| { field: "test#{i}" }}
      end

      before do
        authorized_collection.insert_many(documents)
      end

      context 'when the field is a symbol' do
        let(:distinct) do
          view.distinct(:field)
        end

        it 'returns the distinct values' do
          expect(distinct.sort).to eq([ 'test1', 'test2', 'test3' ])
        end
      end

      context 'when the field is a string' do
        let(:distinct) do
          view.distinct('field')
        end

        it 'returns the distinct values' do
          expect(distinct.sort).to eq([ 'test1', 'test2', 'test3' ])
        end
      end

      context 'when the field is nil' do
        let(:distinct) do
          view.distinct(nil)
        end

        it 'raises ArgumentError' do
          expect do
            distinct
          end.to raise_error(ArgumentError, 'Field name for distinct operation must be not nil')
        end
      end
    end

    context 'when a read preference is set on the view' do
      require_topology :single, :replica_set

      let(:client) do
        # Set a timeout; otherwise, the test will hang for 30 seconds.
        authorized_client.with(server_selection_timeout: 1)
      end

      let(:collection) do
        client[authorized_collection.name]
      end

      before do
        allow(client.cluster).to receive(:single?).and_return(false)
      end

      let(:view) do
        Mongo::Collection::View.new(collection, selector, options)
      end

      let(:view_with_read_pref) do
        view.read(:mode => :secondary, :tag_sets => [{ 'non' => 'existent' }])
      end

      let(:result) do
        view_with_read_pref.distinct(:field)
      end

      it 'uses the read preference setting on the view' do
        expect { result }.to raise_exception(Mongo::Error::NoServerAvailable)
      end
    end

    context 'when the collection has a read preference set' do
      let(:documents) do
        (1..3).map{ |i| { field: "test#{i}" }}
      end

      before do
        authorized_collection.insert_many(documents)
      end

      let(:client) do
        # Set a timeout in case the collection read_preference does get used.
        # Otherwise, the test will hang for 30 seconds.
        authorized_client.with(server_selection_timeout: 1)
      end

      let(:read_preference) do
        { :mode => :secondary, :tag_sets => [{ 'non' => 'existent' }] }
      end

      let(:collection) do
        client[authorized_collection.name, read: read_preference]
      end

      let(:view) do
        Mongo::Collection::View.new(collection, selector, options)
      end

      context 'when a read preference argument is provided' do
        let(:distinct) do
          view.distinct(:field, read: { mode: :primary })
        end

        it 'uses the read preference passed to the method' do
          expect(distinct.sort).to eq([ 'test1', 'test2', 'test3' ])
        end
      end

      context 'when no read preference argument is provided' do
        require_topology :single, :replica_set

        before do
          allow(view.collection.client.cluster).to receive(:single?).and_return(false)
        end

        let(:distinct) do
          view.distinct(:field)
        end

        it 'uses the read preference of the collection' do
          expect { distinct }.to raise_exception(Mongo::Error::NoServerAvailable)
        end
      end

      context 'when the collection does not have a read preference set' do
        require_topology :single, :replica_set

        let(:documents) do
          (1..3).map{ |i| { field: "test#{i}" }}
        end

        before do
          authorized_collection.insert_many(documents)
          allow(view.collection.client.cluster).to receive(:single?).and_return(false)
        end

        let(:client) do
          authorized_client.with(server_selection_timeout: 1)
        end

        let(:collection) do
          client[authorized_collection.name]
        end

        let(:view) do
          Mongo::Collection::View.new(collection, selector, options)
        end

        let(:distinct) do
          read_preference = { :mode => :secondary, :tag_sets => [{ 'non' => 'existent' }] }
          view.distinct(:field, read: read_preference)
        end

        it 'uses the read preference passed to the method' do
          expect { distinct }.to raise_exception(Mongo::Error::NoServerAvailable)
        end
      end

      context 'when a read preference is set on the view' do
        let(:view_with_read_pref) do
          view.read(:mode => :secondary, :tag_sets => [{ 'non' => 'existent' }])
        end

        let(:distinct) do
          view_with_read_pref.distinct(:field, read: { mode: :primary })
        end

        it 'uses the read preference passed to the method' do
          expect(distinct.sort).to eq([ 'test1', 'test2', 'test3' ])
        end
      end
    end

    context 'when a max_time_ms is specified' do
      let(:documents) do
        (1..3).map{ |i| { field: "test" }}
      end

      before do
        authorized_collection.insert_many(documents)
      end

      it 'raises an error when the max_time_ms value is invalid' do
        expect {
          view.distinct(:field, max_time_ms: 0.1)
        }.to raise_error(Mongo::Error::OperationFailure)
      end

      it 'applies the max_time_ms option to the command' do
        expect(view.distinct(:field, max_time_ms: 100)).to eq([ 'test' ])
      end
    end

    context 'when the field does not exist' do
      it 'returns an empty array' do
        expect(view.distinct(:nofieldexists)).to be_empty
      end
    end

    context 'when a collation is specified on the view' do
      let(:result) do
        view.distinct(:name)
      end

      before do
        authorized_collection.insert_one(name: 'bang')
        authorized_collection.insert_one(name: 'BANG')
      end

      let(:options) do
        { collation: { locale: 'en_US', strength: 2 } }
      end

      context 'when the server selected supports collations' do
        min_server_fcv '3.4'

        it 'applies the collation to the distinct' do
          expect(result).to eq(['bang'])
        end
      end

      context 'when the server selected does not support collations' do
        max_server_version '3.2'

        it 'raises an exception' do
          expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation)
        end

        context 'when a String key is used' do
          let(:options) do
            { 'collation' => { locale: 'en_US', strength: 2 } }
          end

          it 'raises an exception' do
            expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation)
          end
        end
      end
    end

    context 'when a collation is
specified in the method options' do let(:result) do view.distinct(:name, distinct_options) end before do authorized_collection.insert_one(name: 'bang') authorized_collection.insert_one(name: 'BANG') end let(:distinct_options) do { collation: { locale: 'en_US', strength: 2 } } end context 'when the server selected supports collations' do min_server_fcv '3.4' it 'applies the collation to the distinct' do expect(result).to eq(['bang']) end end context 'when the server selected does not support collations' do max_server_version '3.2' it 'raises an exception' do expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation) end context 'when a String key is used' do let(:distinct_options) do { 'collation' => { locale: 'en_US', strength: 2 } } end it 'raises an exception' do expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation) end end end end context 'when a collation is not specified' do let(:result) do view.distinct(:name) end before do authorized_collection.insert_one(name: 'bang') authorized_collection.insert_one(name: 'BANG') end it 'does not apply the collation to the distinct' do expect(result).to match_array(['bang', 'BANG']) end end context "when using methods to set options" do context "when a :max_time_ms is given" do let(:opt) { :max_time_ms } let(:param) { 5000 } context "when the broken_view_options flag is on" do config_override :broken_view_options, true it "doesn't set the option correctly" do expect(Mongo::Operation::Distinct).to receive(:new).once.and_wrap_original do |m, *args| opts = args.first.slice(*args.first.keys - [:session]) expect(opts.dig(:selector, :maxTimeMS)).to be_nil m.call(*args) end view.send(opt, param).distinct(:name, options) end end context "when the broken_view_options flag is off" do config_override :broken_view_options, false it "sets the option correctly" do expect(Mongo::Operation::Distinct).to receive(:new).once.and_wrap_original do |m, *args| opts = args.first.slice(*args.first.keys - [:session]) expect(opts.dig(:selector, :maxTimeMS)).to eq(param) m.call(*args) end view.send(opt, param).distinct(:name, options) end end end context "when a :comment is given" do let(:opt) { :comment } let(:param) { "comment" } let(:obj_path) { opt } context "when the broken_view_options flag is on" do config_override :broken_view_options, true it "doesn't set the option correctly" do expect(Mongo::Operation::Distinct).to receive(:new).once.and_wrap_original do |m, *args| opts = args.first.slice(*args.first.keys - [:session]) expect(opts[opt]).to be_nil m.call(*args) end view.send(opt, param).distinct(:name, options) end end context "when the broken_view_options flag is off" do config_override :broken_view_options, false it "sets the option correctly" do expect(Mongo::Operation::Distinct).to receive(:new).once.and_wrap_original do |m, *args| opts = args.first.slice(*args.first.keys - [:session]) expect(opts[opt]).to eq(param) m.call(*args) end view.send(opt, param).distinct(:name, options) end end end context "when a :session is given on the view" do let(:opt) { :session } let(:param) { authorized_client.start_session } after do param.end_session end with_config_values :broken_view_options, true, false do it "sets the option correctly" do expect(Mongo::Operation::Distinct).to receive(:new).once.and_wrap_original do |m, *args| expect(args.first[opt]).to eq(param) m.call(*args) end authorized_collection.find({}, session: param).distinct(options) end end end context "when also including in options" do with_config_values :broken_view_options, 
true, false do it "gives options higher precedence" do expect(Mongo::Operation::Distinct).to receive(:new).once.and_wrap_original do |m, *args| opts = args.first.slice(:selector) expect(opts.dig(:selector, :maxTimeMS)).to eq(2000) m.call(*args) end view.max_time_ms(1500).distinct(:name, { max_time_ms: 2000 }) end end end end end describe '#hint' do context 'when a hint is specified' do let(:options) do { :hint => { 'x' => Mongo::Index::ASCENDING } } end let(:new_hint) do { 'x' => Mongo::Index::DESCENDING } end it 'sets the hint' do new_view = view.hint(new_hint) expect(new_view.hint).to eq(new_hint) end it 'returns a new View' do expect(view.hint(new_hint)).not_to be(view) end end context 'when a hint is not specified' do let(:options) do { :hint => 'x' } end it 'returns the hint' do expect(view.hint).to eq(options[:hint]) end end end describe '#limit' do context 'when a limit is specified' do let(:options) do { :limit => 5 } end let(:new_limit) do 10 end it 'sets the limit' do new_view = view.limit(new_limit) expect(new_view.limit).to eq(new_limit) end it 'returns a new View' do expect(view.limit(new_limit)).not_to be(view) end end context 'when a limit is not specified' do let(:options) do { :limit => 5 } end it 'returns the limit' do expect(view.limit).to eq(options[:limit]) end end end describe '#max_scan' do let(:new_view) do view.max_scan(10) end it 'sets the value in the options' do expect(new_view.max_scan).to eq(10) end end describe '#max_value' do let(:new_view) do view.max_value(_id: 1) end it 'sets the value in the options' do expect(new_view.max_value).to eq('_id' => 1) end end describe '#min_value' do let(:new_view) do view.min_value(_id: 1) end it 'sets the value in the options' do expect(new_view.min_value).to eq('_id' => 1) end end describe '#no_cursor_timeout' do let(:new_view) do view.no_cursor_timeout end it 'sets the flag' do expect(new_view.options[:no_cursor_timeout]).to be true end it 'returns a new View' do expect(new_view).not_to be(view) end context 'when sending to server' do let(:subscriber) { Mrss::EventSubscriber.new } before do authorized_collection.client.subscribe(Mongo::Monitoring::COMMAND, subscriber) end let(:event) do subscriber.single_command_started_event('find') end it 'is sent to server' do new_view.to_a event.command.slice('noCursorTimeout').should == {'noCursorTimeout' => true} end end context 'integration test' do require_topology :single # The number of open cursors with the option set to prevent timeout. def current_no_timeout_count root_authorized_client .command(serverStatus: 1) .documents .first .fetch('metrics') .fetch('cursor') .fetch('open') .fetch('noTimeout') end it 'is applied on the server' do # Initialize collection with two documents. new_view.collection.insert_many([{}, {}]) expect(new_view.count).to be == 2 # Initial "noTimeout" count should be zero. states = [current_no_timeout_count] # The "noTimeout" count should be one while iterating. new_view.batch_size(1).each { states << current_no_timeout_count } # Final "noTimeout" count should be back to zero. 
states << current_no_timeout_count # This succeeds on: # commit aab776ebdfb15ddb9765039f7300e15796de0c5c # # This starts failing with [0, 0, 0, 0] from: # commit 2d9f0217ec904a1952a1ada2136502eefbca562e expect(states).to be == [0, 1, 1, 0] end end end describe '#projection' do let(:options) do { :projection => { 'x' => 1 } } end context 'when projection are specified' do let(:new_projection) do { 'y' => 1 } end before do authorized_collection.insert_one(y: 'value', a: 'other_value') end it 'sets the projection' do new_view = view.projection(new_projection) expect(new_view.projection).to eq(new_projection) end it 'returns a new View' do expect(view.projection(new_projection)).not_to be(view) end it 'returns only that field on the collection' do expect(view.projection(new_projection).first.keys).to match_array(['_id', 'y']) end end context 'when projection is not specified' do it 'returns the projection' do expect(view.projection).to eq(options[:projection]) end end context 'when projection is not a document' do let(:new_projection) do 'y' end it 'raises an error' do expect do view.projection(new_projection) end.to raise_error(Mongo::Error::InvalidDocument) end end end describe '#read' do context 'when a read pref is specified' do let(:options) do { :read => { :mode => :secondary } } end let(:new_read) do { :mode => :secondary_preferred } end it 'sets the read preference' do new_view = view.read(new_read) expect(new_view.read).to eq(BSON::Document.new(new_read)) end it 'returns a new View' do expect(view.read(new_read)).not_to be(view) end end context 'when a read pref is not specified' do let(:options) do { :read => {:mode => :secondary} } end it 'returns the read preference' do expect(view.read).to eq(BSON::Document.new(options[:read])) end context 'when no read pref is set on initialization' do let(:options) do {} end it 'returns the collection read preference' do expect(view.read).to eq(authorized_collection.read_preference) end end end end describe '#show_disk_loc' do let(:options) do { :show_disk_loc => true } end context 'when show_disk_loc is specified' do let(:new_show_disk_loc) do false end it 'sets the show_disk_loc value' do new_view = view.show_disk_loc(new_show_disk_loc) expect(new_view.show_disk_loc).to eq(new_show_disk_loc) end it 'returns a new View' do expect(view.show_disk_loc(new_show_disk_loc)).not_to be(view) end end context 'when show_disk_loc is not specified' do it 'returns the show_disk_loc value' do expect(view.show_disk_loc).to eq(options[:show_disk_loc]) end end end describe '#modifiers' do let(:options) do { :modifiers => { '$orderby' => 1 } } end context 'when a modifiers document is specified' do let(:new_modifiers) do { '$orderby' => -1 } end it 'sets the new_modifiers document' do new_view = view.modifiers(new_modifiers) expect(new_view.modifiers).to eq(new_modifiers) end it 'returns a new View' do expect(view.modifiers(new_modifiers)).not_to be(view) end end context 'when a modifiers document is not specified' do it 'returns the modifiers value' do expect(view.modifiers).to eq(options[:modifiers]) end end end describe '#max_time_ms' do let(:options) do { :max_time_ms => 200 } end context 'when max_time_ms is specified' do let(:new_max_time_ms) do 300 end it 'sets the max_time_ms value' do new_view = view.max_time_ms(new_max_time_ms) expect(new_view.max_time_ms).to eq(new_max_time_ms) end it 'returns a new View' do expect(view.max_time_ms(new_max_time_ms)).not_to be(view) end end context 'when max_time_ms is not specified' do it 'returns the max_time_ms 
value' do
        expect(view.max_time_ms).to eq(options[:max_time_ms])
      end
    end
  end

  describe '#cursor_type' do
    let(:options) do
      { :cursor_type => :tailable }
    end

    context 'when cursor_type is specified' do
      let(:new_cursor_type) do
        :tailable_await
      end

      it 'sets the cursor_type value' do
        new_view = view.cursor_type(new_cursor_type)
        expect(new_view.cursor_type).to eq(new_cursor_type)
      end

      it 'returns a new View' do
        expect(view.cursor_type(new_cursor_type)).not_to be(view)
      end
    end

    context 'when cursor_type is not specified' do
      it 'returns the cursor_type value' do
        expect(view.cursor_type).to eq(options[:cursor_type])
      end
    end
  end

  describe '#skip' do
    context 'when a skip is specified' do
      let(:options) do
        { :skip => 5 }
      end

      let(:new_skip) do
        10
      end

      it 'sets the skip value' do
        new_view = view.skip(new_skip)
        expect(new_view.skip).to eq(new_skip)
      end

      it 'returns a new View' do
        expect(view.skip(new_skip)).not_to be(view)
      end
    end

    context 'when a skip is not specified' do
      let(:options) do
        { :skip => 5 }
      end

      it 'returns the skip value' do
        expect(view.skip).to eq(options[:skip])
      end
    end
  end

  describe '#snapshot' do
    let(:new_view) do
      view.snapshot(true)
    end

    it 'sets the value in the options' do
      expect(new_view.snapshot).to be true
    end
  end

  describe '#sort' do
    context 'when a sort is specified' do
      let(:options) do
        { :sort => { 'x' => Mongo::Index::ASCENDING }}
      end

      let(:new_sort) do
        { 'x' => Mongo::Index::DESCENDING }
      end

      it 'sets the sort option' do
        new_view = view.sort(new_sort)
        expect(new_view.sort).to eq(new_sort)
      end

      it 'returns a new View' do
        expect(view.sort(new_sort)).not_to be(view)
      end
    end

    context 'when a sort is not specified' do
      let(:options) do
        { :sort => { 'x' => Mongo::Index::ASCENDING }}
      end

      it 'returns the sort' do
        expect(view.sort).to eq(options[:sort])
      end
    end
  end
end
mongo-ruby-driver-2.21.3/spec/mongo/collection/view/writable_spec.rb000066400000000000000000001440761505113246500255260ustar00rootroot00000000000000
# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe Mongo::Collection::View::Writable do

  let(:selector) do
    {}
  end

  let(:options) do
    {}
  end

  let(:view_collection) do
    authorized_collection
  end

  let(:view) do
    Mongo::Collection::View.new(view_collection, selector, options)
  end

  before do
    authorized_collection.delete_many
  end

  describe '#find_one_and_delete' do

    before do
      authorized_collection.insert_many([{ field: 'test1' }])
    end

    context 'when hint option is provided' do
      # Functionality on more recent servers is sufficiently covered by spec tests.
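      # Servers older than 4.2 do not support the hint option on findAndModify
      # commands, so the driver fails fast with a client-side error instead of
      # sending the option to the server.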
context 'on server versions < 4.2' do max_server_fcv '4.0' it 'raises a client-side exception' do expect do view.find_one_and_delete(hint: '_id_') end.to raise_error(Mongo::Error::UnsupportedOption, /The MongoDB server handling this request does not support the hint option on this command./) end end context 'when the write concern is unacknowledged' do let(:view_collection) do client = authorized_client.with(write_concern: { w: 0 }) client[authorized_collection.name] end context "on 4.4+ servers" do min_server_version '4.4' it "doesn't raise an error" do expect do view.find_one_and_delete(hint: '_id_') end.to_not raise_error(Mongo::Error::UnsupportedOption) end end context "on <=4.2 servers" do max_server_version '4.2' it 'raises a client-side error' do expect do view.find_one_and_delete(hint: '_id_') end.to raise_error(Mongo::Error::UnsupportedOption, /The hint option cannot be specified on an unacknowledged write operation/) end end end end context 'when a matching document is found' do let(:selector) do { field: 'test1' } end context 'when no options are provided' do let!(:document) do view.find_one_and_delete end it 'deletes the document from the database' do expect(view.to_a).to be_empty end it 'returns the document' do expect(document['field']).to eq('test1') end end context 'when a projection is provided' do let!(:document) do view.projection(_id: 1).find_one_and_delete end it 'deletes the document from the database' do expect(view.to_a).to be_empty end it 'returns the document with limited fields' do expect(document['field']).to be_nil expect(document['_id']).to_not be_nil end end context 'when a sort is provided' do let!(:document) do view.sort(field: 1).find_one_and_delete end it 'deletes the document from the database' do expect(view.to_a).to be_empty end it 'returns the document with limited fields' do expect(document['field']).to eq('test1') end end context 'when collation is specified' do let(:selector) do { name: 'BANG' } end let(:result) do view.find_one_and_delete end before do authorized_collection.insert_one(name: 'bang') end let(:options) do { collation: { locale: 'en_US', strength: 2 } } end context 'when the server selected supports collations' do min_server_fcv '3.4' it 'applies the collation' do expect(result['name']).to eq('bang') end end context 'when the server selected does not support collations' do max_server_version '3.2' it 'raises an exception' do expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation) end context 'when a String key is used' do let(:options) do { 'collation' => { locale: 'en_US', strength: 2 } } end it 'raises an exception' do expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation) end end end end context 'when collation is not specified' do let(:selector) do { name: 'BANG' } end let(:result) do view.find_one_and_delete end before do authorized_collection.insert_one(name: 'bang') end it 'does not apply the collation' do expect(result).to be_nil end end context 'when collation is specified as a method option' do let(:selector) do { name: 'BANG' } end let(:result) do view.find_one_and_delete(method_options) end before do authorized_collection.insert_one(name: 'bang') end let(:method_options) do { collation: { locale: 'en_US', strength: 2 } } end context 'when the server selected supports collations' do min_server_fcv '3.4' it 'applies the collation' do expect(result['name']).to eq('bang') end end context 'when the server selected does not support collations' do max_server_version '3.2' it 'raises an 
exception' do expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation) end context 'when a String key is used' do let(:method_options) do { 'collation' => { locale: 'en_US', strength: 2 } } end it 'raises an exception' do expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation) end end end end end context 'when no matching document is found' do let(:selector) do { field: 'test5' } end let!(:document) do view.find_one_and_delete end it 'returns nil' do expect(document).to be_nil end end end describe '#find_one_and_replace' do before do authorized_collection.insert_many([{ field: 'test1', other: 'sth' }]) end context 'when hint option is provided' do # Functionality on more recent servers is sufficiently covered by spec tests. context 'on server versions < 4.2' do max_server_fcv '4.0' it 'raises a client-side exception' do expect do view.find_one_and_replace({ field: 'testing' }, { hint: '_id_' }) end.to raise_error(Mongo::Error::UnsupportedOption, /The MongoDB server handling this request does not support the hint option on this command./) end end context 'when the write concern is unacknowledged' do let(:view_collection) do client = authorized_client.with(write_concern: { w: 0 }) client[authorized_collection.name] end context "on 4.4+ servers" do min_server_version '4.4' it "doesn't raise an error" do expect do view.find_one_and_replace({ field: 'testing' }, { hint: '_id_' }) end.to_not raise_error(Mongo::Error::UnsupportedOption) end end context "on <=4.2 servers" do max_server_version '4.2' it 'raises a client-side error' do expect do view.find_one_and_replace({ field: 'testing' }, { hint: '_id_' }) end.to raise_error(Mongo::Error::UnsupportedOption, /The hint option cannot be specified on an unacknowledged write operation/) end end end end context 'when a matching document is found' do let(:selector) do { field: 'test1' } end context 'when no options are provided' do let(:document) do view.find_one_and_replace({ field: 'testing' }) end it 'returns the original document' do expect(document['field']).to eq('test1') end end context 'when return_document options are provided' do let(:document) do view.find_one_and_replace({ field: 'testing' }, :return_document => :after) end it 'returns the new document' do expect(document['field']).to eq('testing') end it 'replaces the document' do expect(document['other']).to be_nil end end context 'when a projection is provided' do let(:document) do view.projection(_id: 1).find_one_and_replace({ field: 'testing' }) end it 'returns the document with limited fields' do expect(document['field']).to be_nil expect(document['_id']).to_not be_nil end end context 'when a sort is provided' do let(:document) do view.sort(field: 1).find_one_and_replace({ field: 'testing' }) end it 'returns the original document' do expect(document['field']).to eq('test1') end end context 'when collation is provided' do let(:selector) do { name: 'BANG' } end let(:result) do view.find_one_and_replace(name: 'doink') end before do authorized_collection.insert_one(name: 'bang') end let(:options) do { collation: { locale: 'en_US', strength: 2 } } end context 'when the server selected supports collations' do min_server_fcv '3.4' it 'applies the collation' do expect(result['name']).to eq('bang') expect(authorized_collection.find({ name: 'doink' }, limit: -1).first['name']).to eq('doink') end end context 'when the server selected does not support collations' do max_server_version '3.2' it 'raises an exception' do expect { result }.to 
raise_exception(Mongo::Error::UnsupportedCollation) end context 'when a String key is used' do let(:options) do { 'collation' => { locale: 'en_US', strength: 2 } } end it 'raises an exception' do expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation) end end end end context 'when collation is provided as a method option' do let(:selector) do { name: 'BANG' } end let(:result) do view.find_one_and_replace({ name: 'doink' }, method_options) end before do authorized_collection.insert_one(name: 'bang') end let(:method_options) do { collation: { locale: 'en_US', strength: 2 } } end context 'when the server selected supports collations' do min_server_fcv '3.4' it 'applies the collation' do expect(result['name']).to eq('bang') expect(authorized_collection.find({ name: 'doink' }, limit: -1).first['name']).to eq('doink') end end context 'when the server selected does not support collations' do max_server_version '3.2' it 'raises an exception' do expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation) end context 'when a String key is used' do let(:method_options) do { 'collation' => { locale: 'en_US', strength: 2 } } end it 'raises an exception' do expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation) end end end end context 'when collation is not provided' do let(:selector) do { name: 'BANG' } end let(:result) do view.find_one_and_replace(name: 'doink') end before do authorized_collection.insert_one(name: 'bang') end it 'does not apply the collation' do expect(result).to be_nil end end end context 'when no matching document is found' do context 'when no upsert options are provided' do let(:selector) do { field: 'test5' } end let(:document) do view.find_one_and_replace({ field: 'testing' }) end it 'returns nil' do expect(document).to be_nil end end context 'when upsert options are provided' do let(:selector) do { field: 'test5' } end let(:document) do view.find_one_and_replace({ field: 'testing' }, :upsert => true, :return_document => :after) end it 'returns the new document' do expect(document['field']).to eq('testing') end end end end describe '#find_one_and_update' do before do authorized_collection.insert_many([{ field: 'test1' }]) end context 'when hint option is provided' do # Functionality on more recent servers is sufficiently covered by spec tests. 
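      # hint behaves as it does for #find_one_and_delete above: servers that do
      # not support it are rejected client-side before the command is sent.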
context 'on server versions < 4.2' do max_server_fcv '4.0' it 'raises a client-side exception' do expect do view.find_one_and_update({ '$set' => { field: 'testing' } }, { hint: '_id_' }) end.to raise_error(Mongo::Error::UnsupportedOption, /The MongoDB server handling this request does not support the hint option on this command./) end end context 'when the write concern is unacknowledged' do let(:view_collection) do client = authorized_client.with(write_concern: { w: 0 }) client[authorized_collection.name] end context "on 4.4+ servers" do min_server_version '4.4' it "doesn't raise an error" do expect do view.find_one_and_update({ '$set' => { field: 'testing' } }, { hint: '_id_' }) end.to_not raise_error(Mongo::Error::UnsupportedOption) end end context "on <=4.2 servers" do max_server_version '4.2' it 'raises a client-side error' do expect do view.find_one_and_update({ '$set' => { field: 'testing' } }, { hint: '_id_' }) end.to raise_error(Mongo::Error::UnsupportedOption, /The hint option cannot be specified on an unacknowledged write operation/) end end end end context 'when a matching document is found' do let(:selector) do { field: 'test1' } end context 'when no options are provided' do let(:document) do view.find_one_and_update({ '$set' => { field: 'testing' }}) end it 'returns the original document' do expect(document['field']).to eq('test1') end end context 'when return_document options are provided' do let(:document) do view.find_one_and_update({ '$set' => { field: 'testing' }}, :return_document => :after) end it 'returns the new document' do expect(document['field']).to eq('testing') end end context 'when a projection is provided' do let(:document) do view.projection(_id: 1).find_one_and_update({ '$set' => { field: 'testing' }}) end it 'returns the document with limited fields' do expect(document['field']).to be_nil expect(document['_id']).to_not be_nil end end context 'when a sort is provided' do let(:document) do view.sort(field: 1).find_one_and_update({ '$set' => { field: 'testing' } }) end it 'returns the original document' do expect(document['field']).to eq('test1') end end end context 'when a collation is specified' do let(:selector) do { name: 'BANG' } end let(:result) do view.find_one_and_update({ '$set' => { other: 'doink' } }) end before do authorized_collection.insert_one(name: 'bang') end let(:options) do { collation: { locale: 'en_US', strength: 2 } } end context 'when the server selected supports collations' do min_server_fcv '3.4' it 'applies the collation' do expect(result['name']).to eq('bang') expect(authorized_collection.find({ name: 'bang' }, limit: -1).first['other']).to eq('doink') end end context 'when the server selected does not support collations' do max_server_version '3.2' it 'raises an exception' do expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation) end context 'when a String key is used' do let(:options) do { 'collation' => { locale: 'en_US', strength: 2 } } end it 'raises an exception' do expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation) end end end end context 'when a collation is specified as a method option' do let(:selector) do { name: 'BANG' } end let(:result) do view.find_one_and_update({ '$set' => { other: 'doink' } }, method_options) end before do authorized_collection.insert_one(name: 'bang') end let(:method_options) do { collation: { locale: 'en_US', strength: 2 } } end context 'when the server selected supports collations' do min_server_fcv '3.4' it 'applies the collation' do 
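          # The strength-2 collation is case-insensitive, so the 'BANG' selector
          # matches the previously inserted 'bang' document.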
expect(result['name']).to eq('bang') expect(authorized_collection.find({ name: 'bang' }, limit: -1).first['other']).to eq('doink') end end context 'when the server selected does not support collations' do max_server_version '3.2' it 'raises an exception' do expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation) end context 'when a String key is used' do let(:method_options) do { 'collation' => { locale: 'en_US', strength: 2 } } end it 'raises an exception' do expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation) end end end end context 'when no collation is specified' do let(:selector) do { name: 'BANG' } end let(:result) do view.find_one_and_update({ '$set' => { other: 'doink' } }) end before do authorized_collection.insert_one(name: 'bang') end it 'does not apply the collation' do expect(result).to be_nil end end context 'when no matching document is found' do let(:selector) do { field: 'test5' } end let(:document) do view.find_one_and_update({ '$set' => { field: 'testing' }}) end it 'returns nil' do expect(document).to be_nil end end end describe '#delete_many' do context 'when a hint option is provided' do # Functionality on more recent servers is sufficiently covered by spec tests. context 'on server versions < 3.4' do max_server_fcv '3.2' it 'raises a client-side exception' do expect do view.delete_many(hint: '_id_') end.to raise_error(Mongo::Error::UnsupportedOption, /The MongoDB server handling this request does not support the hint option on this command./) end end context 'when the write concern is unacknowledged' do let(:view_collection) do client = authorized_client.with(write_concern: { w: 0 }) client[authorized_collection.name] end context "on 4.4+ servers" do min_server_version '4.4' it "doesn't raise an error" do expect do view.delete_many(hint: '_id_') end.to_not raise_error(Mongo::Error::UnsupportedOption) end end context "on <=4.2 servers" do max_server_version '4.2' it 'raises a client-side error' do expect do view.delete_many(hint: '_id_') end.to raise_error(Mongo::Error::UnsupportedOption, /The hint option cannot be specified on an unacknowledged write operation/) end end end end context 'when a selector was provided' do let(:selector) do { field: 'test1' } end before do authorized_collection.insert_many([{ field: 'test1' }, { field: 'test2' }]) end let(:response) do view.delete_many end it 'deletes the matching documents in the collection' do expect(response.written_count).to eq(1) end end context 'when no selector was provided' do before do authorized_collection.insert_many([{ field: 'test1' }, { field: 'test2' }]) end let(:response) do view.delete_many end it 'deletes all the documents in the collection' do expect(response.written_count).to eq(2) end end context 'when a collation is specified' do let(:selector) do { name: 'BANG' } end let(:result) do view.delete_many end before do authorized_collection.insert_one(name: 'bang') authorized_collection.insert_one(name: 'bang') end let(:options) do { collation: { locale: 'en_US', strength: 2 } } end context 'when the server selected supports collations' do min_server_fcv '3.4' it 'applies the collation' do expect(result.written_count).to eq(2) expect(authorized_collection.find(name: 'bang').to_a.size).to eq(0) end end context 'when the server selected does not support collations' do max_server_version '3.2' it 'raises an exception' do expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation) end context 'when a String key is used' do let(:options) do { 'collation' => { 
locale: 'en_US', strength: 2 } } end it 'raises an exception' do expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation) end end end end context 'when a collation is specified as a method option' do let(:selector) do { name: 'BANG' } end let(:result) do view.delete_many(method_options) end before do authorized_collection.insert_one(name: 'bang') authorized_collection.insert_one(name: 'bang') end let(:method_options) do { collation: { locale: 'en_US', strength: 2 } } end context 'when the server selected supports collations' do min_server_fcv '3.4' it 'applies the collation' do expect(result.written_count).to eq(2) expect(authorized_collection.find(name: 'bang').to_a.size).to eq(0) end end context 'when the server selected does not support collations' do max_server_version '3.2' it 'raises an exception' do expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation) end context 'when a String key is used' do let(:method_options) do { 'collation' => { locale: 'en_US', strength: 2 } } end it 'raises an exception' do expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation) end end end end context 'when a collation is not specified' do let(:selector) do { name: 'BANG' } end let(:result) do view.delete_many end before do authorized_collection.insert_one(name: 'bang') authorized_collection.insert_one(name: 'bang') end it 'does not apply the collation' do expect(result.written_count).to eq(0) expect(authorized_collection.find(name: 'bang').to_a.size).to eq(2) end end end describe '#delete_one' do context 'when a hint option is provided' do # Functionality on more recent servers is sufficiently covered by spec tests. context 'on server versions < 3.4' do max_server_fcv '3.2' it 'raises a client-side exception' do expect do view.delete_one(hint: '_id_') end.to raise_error(Mongo::Error::UnsupportedOption, /The MongoDB server handling this request does not support the hint option on this command./) end end context 'when the write concern is unacknowledged' do let(:view_collection) do client = authorized_client.with(write_concern: { w: 0 }) client[authorized_collection.name] end context "on 4.4+ servers" do min_server_version '4.4' it "doesn't raise an error" do expect do view.delete_one(hint: '_id_') end.to_not raise_error(Mongo::Error::UnsupportedOption) end end context "on <=4.2 servers" do max_server_version '4.2' it 'raises a client-side error' do expect do view.delete_one(hint: '_id_') end.to raise_error(Mongo::Error::UnsupportedOption, /The hint option cannot be specified on an unacknowledged write operation/) end end end end context 'when a selector was provided' do let(:selector) do { field: 'test1' } end before do authorized_collection.insert_many([ { field: 'test1' }, { field: 'test1' }, { field: 'test1' } ]) end let(:response) do view.delete_one end it 'deletes the first matching document in the collection' do expect(response.written_count).to eq(1) end end context 'when no selector was provided' do before do authorized_collection.insert_many([{ field: 'test1' }, { field: 'test2' }]) end let(:response) do view.delete_one end it 'deletes the first document in the collection' do expect(response.written_count).to eq(1) end end context 'when a collation is provided' do let(:selector) do { name: 'BANG' } end let(:result) do view.delete_one end before do authorized_collection.insert_one(name: 'bang') end let(:options) do { collation: { locale: 'en_US', strength: 2 } } end context 'when the server selected supports collations' do min_server_fcv '3.4' it 
'applies the collation' do expect(result.written_count).to eq(1) expect(authorized_collection.find(name: 'bang').to_a.size).to eq(0) end end context 'when the server selected does not support collations' do max_server_version '3.2' it 'raises an exception' do expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation) end context 'when a String key is used' do let(:options) do { 'collation' => { locale: 'en_US', strength: 2 } } end it 'raises an exception' do expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation) end end end end context 'when a collation is provided as a method_option' do let(:selector) do { name: 'BANG' } end let(:result) do view.delete_one(method_options) end before do authorized_collection.insert_one(name: 'bang') end let(:method_options) do { collation: { locale: 'en_US', strength: 2 } } end context 'when the server selected supports collations' do min_server_fcv '3.4' it 'applies the collation' do expect(result.written_count).to eq(1) expect(authorized_collection.find(name: 'bang').to_a.size).to eq(0) end end context 'when the server selected does not support collations' do max_server_version '3.2' it 'raises an exception' do expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation) end context 'when a String key is used' do let(:method_options) do { 'collation' => { locale: 'en_US', strength: 2 } } end it 'raises an exception' do expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation) end end end end context 'when a collation is not specified' do let(:selector) do {name: 'BANG'} end let(:result) do view.delete_one end before do authorized_collection.insert_one(name: 'bang') end it 'does not apply the collation' do expect(result.written_count).to eq(0) expect(authorized_collection.find(name: 'bang').to_a.size).to eq(1) end end end describe '#replace_one' do context 'when a hint option is provided' do # Functionality on more recent servers is sufficiently covered by spec tests. 
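      # The unacknowledged write concern contexts below verify that hint is only
      # allowed when the selected server is new enough to enforce it, since a
      # w: 0 write gives the server no way to report a hint error.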
context 'on server versions < 3.4' do max_server_fcv '3.2' it 'raises a client-side exception' do expect do view.replace_one({ field: 'testing' }, { hint: '_id_' }) end.to raise_error(Mongo::Error::UnsupportedOption, /The MongoDB server handling this request does not support the hint option on this command./) end end context 'when the write concern is unacknowledged' do let(:view_collection) do client = authorized_client.with(write_concern: { w: 0 }) client[authorized_collection.name] end context "on 4.2+ servers" do min_server_version '4.2' it "doesn't raise an error" do expect do view.replace_one({ field: 'testing' }, { hint: '_id_' }) end.to_not raise_error(Mongo::Error::UnsupportedOption) end end context "on <=4.0 servers" do max_server_version '4.0' it 'raises a client-side error' do expect do view.replace_one({ field: 'testing' }, { hint: '_id_' }) end.to raise_error(Mongo::Error::UnsupportedOption, /The hint option cannot be specified on an unacknowledged write operation/) end end end end context 'when a selector was provided' do let(:selector) do { field: 'test1' } end before do authorized_collection.insert_many([{ field: 'test1' }, { field: 'test1' }]) end let!(:response) do view.replace_one({ field: 'testing' }) end let(:updated) do authorized_collection.find(field: 'testing').first end it 'updates the first matching document in the collection' do expect(response.written_count).to eq(1) end it 'updates the documents in the collection' do expect(updated[:field]).to eq('testing') end end context 'when no selector was provided' do before do authorized_collection.insert_many([{ field: 'test1' }, { field: 'test2' }]) end let!(:response) do view.replace_one({ field: 'testing' }) end let(:updated) do authorized_collection.find(field: 'testing').first end it 'updates the first document in the collection' do expect(response.written_count).to eq(1) end it 'updates the documents in the collection' do expect(updated[:field]).to eq('testing') end end context 'when upsert is false' do let!(:response) do view.replace_one({ field: 'test1' }, upsert: false) end let(:updated) do authorized_collection.find(field: 'test1').to_a end it 'reports that no documents were written' do expect(response.written_count).to eq(0) end it 'does not insert the document' do expect(updated).to be_empty end end context 'when upsert is true' do let!(:response) do view.replace_one({ field: 'test1' }, upsert: true) end let(:updated) do authorized_collection.find(field: 'test1').first end it 'reports that a document was written' do expect(response.written_count).to eq(1) end it 'inserts the document' do expect(updated[:field]).to eq('test1') end end context 'when upsert is not specified' do let!(:response) do view.replace_one({ field: 'test1' }) end let(:updated) do authorized_collection.find(field: 'test1').to_a end it 'reports that no documents were written' do expect(response.written_count).to eq(0) end it 'does not insert the document' do expect(updated).to be_empty end end context 'when a collation is specified' do let(:selector) do { name: 'BANG' } end let(:result) do view.replace_one({ name: 'doink' }) end before do authorized_collection.insert_one(name: 'bang') end let(:options) do { collation: { locale: 'en_US', strength: 2 } } end context 'when the server selected supports collations' do min_server_fcv '3.4' it 'applies the collation' do expect(result.written_count).to eq(1) expect(authorized_collection.find(name: 'doink').to_a.size).to eq(1) end end context 'when the server selected does not support collations' 
do max_server_version '3.2' it 'raises an exception' do expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation) end context 'when a String key is used' do let(:options) do { 'collation' => { locale: 'en_US', strength: 2 } } end it 'raises an exception' do expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation) end end end end context 'when a collation is specified as method option' do let(:selector) do { name: 'BANG' } end let(:result) do view.replace_one({ name: 'doink' }, method_options) end before do authorized_collection.insert_one(name: 'bang') end let(:method_options) do { collation: { locale: 'en_US', strength: 2 } } end context 'when the server selected supports collations' do min_server_fcv '3.4' it 'applies the collation' do expect(result.written_count).to eq(1) expect(authorized_collection.find(name: 'doink').to_a.size).to eq(1) end end context 'when the server selected does not support collations' do max_server_version '3.2' it 'raises an exception' do expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation) end context 'when a String key is used' do let(:method_options) do { 'collation' => { locale: 'en_US', strength: 2 } } end it 'raises an exception' do expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation) end end end end context 'when a collation is not specified' do let(:selector) do { name: 'BANG' } end let(:result) do view.replace_one(name: 'doink') end before do authorized_collection.insert_one(name: 'bang') end it 'does not apply the collation' do expect(result.written_count).to eq(0) expect(authorized_collection.find(name: 'bang').to_a.size).to eq(1) end end end describe '#update_many' do context 'when a hint option is provided' do # Functionality on more recent servers is sufficiently covered by spec tests. 
context 'on server versions < 3.4' do max_server_fcv '3.2' it 'raises a client-side exception' do expect do view.update_many({ '$set' => { field: 'testing' } }, { hint: '_id_' }) end.to raise_error(Mongo::Error::UnsupportedOption, /The MongoDB server handling this request does not support the hint option on this command./) end end context 'when the write concern is unacknowledged' do let(:view_collection) do client = authorized_client.with(write_concern: { w: 0 }) client[authorized_collection.name] end context "on 4.2+ servers" do min_server_version '4.2' it "doesn't raise an error" do expect do view.update_many({ '$set' => { field: 'testing' } }, { hint: '_id_' }) end.to_not raise_error(Mongo::Error::UnsupportedOption) end end context "on <=4.0 servers" do max_server_version '4.0' it 'raises a client-side error' do expect do view.update_many({ '$set' => { field: 'testing' } }, { hint: '_id_' }) end.to raise_error(Mongo::Error::UnsupportedOption, /The hint option cannot be specified on an unacknowledged write operation/) end end end end context 'when a selector was provided' do let(:selector) do { field: 'test' } end before do authorized_collection.insert_many([{ field: 'test' }, { field: 'test' }]) end let!(:response) do view.update_many('$set'=> { field: 'testing' }) end let(:updated) do authorized_collection.find(field: 'testing').first end it 'returns the number updated' do expect(response.written_count).to eq(2) end it 'updates the documents in the collection' do expect(updated[:field]).to eq('testing') end end context 'when no selector was provided' do before do authorized_collection.insert_many([{ field: 'test1' }, { field: 'test2' }]) end let!(:response) do view.update_many('$set'=> { field: 'testing' }) end let(:updated) do authorized_collection.find end it 'returns the number updated' do expect(response.written_count).to eq(2) end it 'updates all the documents in the collection' do updated.each do |doc| expect(doc[:field]).to eq('testing') end end end context 'when upsert is false' do let(:response) do view.update_many({ '$set'=> { field: 'testing' } }, upsert: false) end let(:updated) do authorized_collection.find.to_a end it 'reports that no documents were updated' do expect(response.written_count).to eq(0) end it 'updates no documents in the collection' do expect(updated).to be_empty end end context 'when upsert is true' do let!(:response) do view.update_many({ '$set'=> { field: 'testing' } }, upsert: true) end let(:updated) do authorized_collection.find.first end it 'reports that a document was written' do expect(response.written_count).to eq(1) end it 'inserts a document into the collection' do expect(updated[:field]).to eq('testing') end end context 'when upsert is not specified' do let(:response) do view.update_many({ '$set'=> { field: 'testing' } }) end let(:updated) do authorized_collection.find.to_a end it 'reports that no documents were updated' do expect(response.written_count).to eq(0) end it 'updates no documents in the collection' do expect(updated).to be_empty end end context 'when a collation is specified' do let(:selector) do { name: 'BANG' } end let(:result) do view.update_many({ '$set' => { other: 'doink' } }) end before do authorized_collection.insert_one(name: 'bang') authorized_collection.insert_one(name: 'baNG') end let(:options) do { collation: { locale: 'en_US', strength: 2 } } end context 'when the server selected supports collations' do min_server_fcv '3.4' it 'applies the collation' do expect(result.written_count).to eq(2) 
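          # Both 'bang' and 'baNG' match the 'BANG' selector under the
          # case-insensitive collation, so two documents are updated.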
expect(authorized_collection.find(other: 'doink').to_a.size).to eq(2) end end context 'when the server selected does not support collations' do max_server_version '3.2' it 'raises an exception' do expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation) end context 'when a String key is used' do let(:options) do { 'collation' => { locale: 'en_US', strength: 2 } } end it 'raises an exception' do expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation) end end end end context 'when a collation is specified as a method option' do let(:selector) do { name: 'BANG' } end let(:result) do view.update_many({ '$set' => { other: 'doink' } }, method_options) end before do authorized_collection.insert_one(name: 'bang') authorized_collection.insert_one(name: 'baNG') end let(:method_options) do { collation: { locale: 'en_US', strength: 2 } } end context 'when the server selected supports collations' do min_server_fcv '3.4' it 'applies the collation' do expect(result.written_count).to eq(2) expect(authorized_collection.find(other: 'doink').to_a.size).to eq(2) end end context 'when the server selected does not support collations' do max_server_version '3.2' it 'raises an exception' do expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation) end context 'when a String key is used' do let(:method_options) do { 'collation' => { locale: 'en_US', strength: 2 } } end it 'raises an exception' do expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation) end end end end context 'when collation is not specified' do let(:selector) do {name: 'BANG'} end let(:result) do view.update_many('$set' => {other: 'doink'}) end before do authorized_collection.insert_one(name: 'bang') authorized_collection.insert_one(name: 'baNG') end it 'does not apply the collation' do expect(result.written_count).to eq(0) end end end describe '#update_one' do context 'when a hint option is provided' do # Functionality on more recent servers is sufficiently covered by spec tests. 
context 'on server versions < 3.4' do max_server_fcv '3.2' it 'raises a client-side exception' do expect do view.update_one({ '$set' => { field: 'testing' } }, { hint: '_id_' }) end.to raise_error(Mongo::Error::UnsupportedOption, /The MongoDB server handling this request does not support the hint option on this command./) end end context 'when the write concern is unacknowledged' do let(:view_collection) do client = authorized_client.with(write_concern: { w: 0 }) client[authorized_collection.name] end context "on 4.2+ servers" do min_server_version '4.2' it "doesn't raise an error" do expect do view.update_one({ '$set' => { field: 'testing' } }, { hint: '_id_' }) end.to_not raise_error(Mongo::Error::UnsupportedOption) end end context "on <=4.0 servers" do max_server_version '4.0' it 'raises a client-side error' do expect do view.update_one({ '$set' => { field: 'testing' } }, { hint: '_id_' }) end.to raise_error(Mongo::Error::UnsupportedOption, /The hint option cannot be specified on an unacknowledged write operation/) end end end end context 'when a selector was provided' do let(:selector) do { field: 'test1' } end before do authorized_collection.insert_many([{ field: 'test1' }, { field: 'test1' }]) end let!(:response) do view.update_one('$set'=> { field: 'testing' }) end let(:updated) do authorized_collection.find(field: 'testing').first end it 'updates the first matching document in the collection' do expect(response.written_count).to eq(1) end it 'updates the documents in the collection' do expect(updated[:field]).to eq('testing') end end context 'when no selector was provided' do before do authorized_collection.insert_many([{ field: 'test1' }, { field: 'test2' }]) end let!(:response) do view.update_one('$set'=> { field: 'testing' }) end let(:updated) do authorized_collection.find(field: 'testing').first end it 'updates the first document in the collection' do expect(response.written_count).to eq(1) end it 'updates the documents in the collection' do expect(updated[:field]).to eq('testing') end end context 'when upsert is false' do let(:response) do view.update_one({ '$set'=> { field: 'testing' } }, upsert: false) end let(:updated) do authorized_collection.find.to_a end it 'reports that no documents were updated' do expect(response.written_count).to eq(0) end it 'updates no documents in the collection' do expect(updated).to be_empty end end context 'when upsert is true' do let!(:response) do view.update_one({ '$set'=> { field: 'testing' } }, upsert: true) end let(:updated) do authorized_collection.find.first end it 'reports that a document was written' do expect(response.written_count).to eq(1) end it 'inserts a document into the collection' do expect(updated[:field]).to eq('testing') end end context 'when upsert is not specified' do let(:response) do view.update_one({ '$set'=> { field: 'testing' } }) end let(:updated) do authorized_collection.find.to_a end it 'reports that no documents were updated' do expect(response.written_count).to eq(0) end it 'updates no documents in the collection' do expect(updated).to be_empty end end context 'when there is a collation specified' do let(:selector) do { name: 'BANG' } end let(:result) do view.update_one({ '$set' => { other: 'doink' } }) end before do authorized_collection.insert_one(name: 'bang') end let(:options) do { collation: { locale: 'en_US', strength: 2 } } end context 'when the server selected supports collations' do min_server_fcv '3.4' it 'applies the collation' do expect(result.written_count).to eq(1) 
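          # written_count is the server-reported number of documents affected;
          # update_one modifies at most one document, so it is 1 here.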
        expect(authorized_collection.find(other: 'doink').to_a.size).to eq(1)
      end
    end

    context 'when the server selected does not support collations' do
      max_server_version '3.2'

      it 'raises an exception' do
        expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation)
      end

      context 'when a String key is used' do
        let(:options) do
          { 'collation' => { locale: 'en_US', strength: 2 } }
        end

        it 'raises an exception' do
          expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation)
        end
      end
    end
  end

  context 'when there is a collation specified as a method option' do
    let(:selector) do
      { name: 'BANG' }
    end

    let(:result) do
      view.update_one({ '$set' => { other: 'doink' } }, method_options)
    end

    before do
      authorized_collection.insert_one(name: 'bang')
    end

    let(:method_options) do
      { collation: { locale: 'en_US', strength: 2 } }
    end

    context 'when the server selected supports collations' do
      min_server_fcv '3.4'

      it 'applies the collation' do
        expect(result.written_count).to eq(1)
        expect(authorized_collection.find(other: 'doink').to_a.size).to eq(1)
      end
    end

    context 'when the server selected does not support collations' do
      max_server_version '3.2'

      it 'raises an exception' do
        expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation)
      end

      context 'when a String key is used' do
        let(:method_options) do
          { 'collation' => { locale: 'en_US', strength: 2 } }
        end

        it 'raises an exception' do
          expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation)
        end
      end
    end
  end

  context 'when a collation is not specified' do
    let(:selector) do
      { name: 'BANG' }
    end

    let(:result) do
      view.update_one('$set' => { other: 'doink' })
    end

    before do
      authorized_collection.insert_one(name: 'bang')
    end

    it 'does not apply the collation' do
      expect(result.written_count).to eq(0)
    end
  end
end

mongo-ruby-driver-2.21.3/spec/mongo/collection/view_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe Mongo::Collection::View do

  let(:filter) do
    {}
  end

  let(:options) do
    {}
  end

  let(:view) do
    described_class.new(authorized_collection, filter, options)
  end

  before do
    authorized_collection.delete_many
  end

  describe '#==' do

    context 'when the other object is not a collection view' do
      let(:other) { 'test' }

      it 'returns false' do
        expect(view).to_not eq(other)
      end
    end

    context 'when the views have the same collection, filter, and options' do
      let(:other) do
        described_class.new(authorized_collection, filter, options)
      end

      it 'returns true' do
        expect(view).to eq(other)
      end
    end

    context 'when two views have a different collection' do
      let(:other_collection) do
        authorized_client[:other]
      end

      let(:other) do
        described_class.new(other_collection, filter, options)
      end

      it 'returns false' do
        expect(view).not_to eq(other)
      end
    end

    context 'when two views have a different filter' do
      let(:other_filter) do
        { 'name' => 'Emily' }
      end

      let(:other) do
        described_class.new(authorized_collection, other_filter, options)
      end

      it 'returns false' do
        expect(view).not_to eq(other)
      end
    end

    context 'when two views have different options' do
      let(:other_options) do
        { 'limit' => 20 }
      end

      let(:other) do
        described_class.new(authorized_collection, filter, other_options)
      end

      it 'returns false' do
        expect(view).not_to eq(other)
      end
    end
  end

  describe 'copy' do
    let(:view_clone) do
      view.clone
    end

    it 'dups the options' do
      expect(view.options).not_to be(view_clone.options)
    end

    it 'dups the filter' do
      expect(view.filter).not_to be(view_clone.filter)
    end
    it 'references the same collection' do
      expect(view.collection).to be(view_clone.collection)
    end
  end

  describe '#each' do

    let(:documents) do
      (1..10).map{ |i| { field: "test#{i}" }}
    end

    before do
      authorized_collection.delete_many
      authorized_collection.insert_many(documents)
    end

    context 'when a block is not provided' do
      let(:enumerator) do
        view.each
      end

      it 'returns an enumerator' do
        enumerator.each do |doc|
          expect(doc).to have_key('field')
        end
      end
    end

    describe '#close_query' do
      let(:options) do
        { :batch_size => 1 }
      end

      let(:cursor) do
        view.instance_variable_get(:@cursor)
      end

      before do
        view.to_enum.next
        if ClusterConfig.instance.fcv_ish < '3.2'
          cursor.instance_variable_set(:@cursor_id, 1)
        end
      end

      it 'sends a kill cursors command for the cursor' do
        expect(cursor).to receive(:close).and_call_original
        view.close_query
      end
    end

    describe 'collation' do

      context 'when the view has a collation set' do
        let(:options) do
          { collation: { locale: 'en_US', strength: 2 } }
        end

        let(:filter) do
          { name: 'BANG' }
        end

        before do
          authorized_collection.insert_one(name: 'bang')
        end

        let(:result) do
          view.limit(-1).first
        end

        context 'when the server selected supports collations' do
          min_server_fcv '3.4'

          it 'applies the collation' do
            expect(result['name']).to eq('bang')
          end
        end

        context 'when the server selected does not support collations' do
          max_server_version '3.2'

          it 'raises an exception' do
            expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation)
          end

          context 'when a String key is used' do
            let(:options) do
              { 'collation' => { locale: 'en_US', strength: 2 } }
            end

            it 'raises an exception' do
              expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation)
            end
          end
        end
      end

      context 'when the view does not have a collation set' do
        let(:filter) do
          { name: 'BANG' }
        end

        before do
          authorized_collection.insert_one(name: 'bang')
        end

        let(:result) do
          view.limit(-1).first
        end

        it 'does not apply the collation' do
          expect(result).to be_nil
        end
      end
    end
  end

  describe '#hash' do
    let(:other) do
      described_class.new(authorized_collection, filter, options)
    end

    it 'returns a unique value based on collection, filter, options' do
      expect(view.hash).to eq(other.hash)
    end

    context 'when two views only have different collections' do
      let(:other_collection) do
        authorized_client[:other]
      end

      let(:other) do
        described_class.new(other_collection, filter, options)
      end

      it 'returns different hash values' do
        expect(view.hash).not_to eq(other.hash)
      end
    end

    context 'when two views only have different filter' do
      let(:other_filter) do
        { 'name' => 'Emily' }
      end

      let(:other) do
        described_class.new(authorized_collection, other_filter, options)
      end

      it 'returns different hash values' do
        expect(view.hash).not_to eq(other.hash)
      end
    end

    context 'when two views only have different options' do
      let(:other_options) do
        { 'limit' => 20 }
      end

      let(:other) do
        described_class.new(authorized_collection, filter, other_options)
      end

      it 'returns different hash values' do
        expect(view.hash).not_to eq(other.hash)
      end
    end
  end

  describe '#initialize' do

    context 'when the filter is not a valid document' do
      let(:filter) do
        'y'
      end

      let(:options) do
        { limit: 5 }
      end

      it 'raises an error' do
        expect do
          view
        end.to raise_error(Mongo::Error::InvalidDocument)
      end
    end

    context 'when the filter and options are standard' do
      let(:filter) do
        { 'name' => 'test' }
      end

      let(:options) do
        { 'sort' => { 'name' => 1 }}
      end

      it 'parses a standard filter' do
        expect(view.filter).to eq(filter)
      end

      it 'parses standard options' do
        expect(view.options).to eq(options)
      end
      it 'only freezes the view filter, not the user filter' do
        expect(view.filter.frozen?).to be(true)
        expect(filter.frozen?).to be(false)
      end

      it 'only freezes the view options, not the user options' do
        expect(view.options.frozen?).to be(true)
        expect(options.frozen?).to be(false)
      end
    end

    context 'when the filter contains modifiers' do
      let(:filter) do
        { :$query => { :name => 'test' }, :$comment => 'testing' }
      end

      let(:options) do
        { :sort => { name: 1 }}
      end

      it 'parses a standard filter' do
        expect(view.filter).to eq('name' => 'test')
      end

      it 'parses standard options' do
        expect(view.options).to eq('sort' => { 'name' => 1 }, 'comment' => 'testing')
      end
    end

    context 'when the options contain modifiers' do
      let(:filter) do
        { 'name' => 'test' }
      end

      let(:options) do
        { :sort => { name: 1 }, :modifiers => { :$comment => 'testing'}}
      end

      it 'parses a standard filter' do
        expect(view.filter).to eq('name' => 'test')
      end

      it 'parses standard options' do
        expect(view.options).to eq('sort' => { 'name' => 1 }, 'comment' => 'testing')
      end
    end

    context 'when the filter and options both contain modifiers' do
      let(:filter) do
        { :$query => { 'name' => 'test' }, :$hint => { name: 1 }}
      end

      let(:options) do
        { :sort => { name: 1 }, :modifiers => { :$comment => 'testing' }}
      end

      it 'parses a standard filter' do
        expect(view.filter).to eq('name' => 'test')
      end

      it 'parses standard options' do
        expect(view.options).to eq(
          'sort' => { 'name' => 1 },
          'comment' => 'testing',
          'hint' => { 'name' => 1 }
        )
      end
    end
  end

  describe '#inspect' do

    context 'when there is a namespace, filter, and options' do
      let(:options) do
        { 'limit' => 5 }
      end

      let(:filter) do
        { 'name' => 'Emily' }
      end

      it 'returns a string' do
        expect(view.inspect).to be_a(String)
      end

      it 'returns a string containing the collection namespace' do
        expect(view.inspect).to match(/.*#{authorized_collection.namespace}.*/)
      end

      it 'returns a string containing the filter' do
        expect(view.inspect).to match(/.*#{filter.inspect}.*/)
      end

      it 'returns a string containing the options' do
        expect(view.inspect).to match(/.*#{options.inspect}.*/)
      end
    end
  end
end

mongo-ruby-driver-2.21.3/spec/mongo/collection_crud_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe Mongo::Collection do
  retry_test

  let(:subscriber) { Mrss::EventSubscriber.new }

  let(:client) do
    authorized_client.tap do |client|
      client.subscribe(Mongo::Monitoring::COMMAND, subscriber)
    end
  end

  let(:authorized_collection) { client['collection_spec'] }

  before do
    authorized_client['collection_spec'].drop
  end

  let(:collection_invalid_write_concern) do
    authorized_collection.client.with(write: INVALID_WRITE_CONCERN)[authorized_collection.name]
  end

  let(:collection_with_validator) do
    authorized_client[:validating]
  end

  describe '#find' do

    describe 'updating cluster time' do
      let(:operation) do
        client[TEST_COLL].find.first
      end

      let(:operation_with_session) do
        client[TEST_COLL].find({}, session: session).first
      end

      let(:second_operation) do
        client[TEST_COLL].find({}, session: session).first
      end

      it_behaves_like 'an operation updating cluster time'
    end

    context 'when provided a filter' do
      let(:view) do
        authorized_collection.find(name: 1)
      end

      it 'returns an authorized_collection view for the filter' do
        expect(view.filter).to eq('name' => 1)
      end
    end

    context 'when provided no filter' do
      let(:view) do
        authorized_collection.find
      end

      it 'returns an authorized_collection view with an empty filter' do
        expect(view.filter).to be_empty
      end
    end
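    # For illustration only (hypothetical chaining, not from the original
    # suite): the view returned by #find is lazy, so query options can be
    # composed before any server round trip happens; nothing is sent until
    # the view is iterated.
    #
    #   authorized_collection.find(name: 1).sort(_id: 1).limit(5).to_a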
    context 'when providing a bad filter' do
      let(:view) do
        authorized_collection.find('$or' => [])
      end

      it 'raises an exception when iterating' do
        expect { view.to_a }.to raise_exception(Mongo::Error::OperationFailure)
      end
    end

    context 'when iterating the authorized_collection view' do
      before do
        authorized_collection.insert_many([{ field: 'test1' }, { field: 'test2' }])
      end

      let(:view) do
        authorized_collection.find
      end

      it 'iterates over the documents' do
        view.each do |document|
          expect(document).to_not be_nil
        end
      end
    end

    context 'when the user is not authorized' do
      require_auth

      let(:view) do
        unauthorized_collection.find
      end

      it 'iterates over the documents' do
        expect { view.each{ |document| document } }.to raise_error(Mongo::Error::OperationFailure)
      end
    end

    context 'when documents contain potential error message fields' do
      [ 'errmsg', 'error', Mongo::Operation::Result::OK ].each do |field|
        context "when the document contains a '#{field}' field" do
          let(:value) do
            'testing'
          end

          let(:view) do
            authorized_collection.find
          end

          before do
            authorized_collection.insert_one({ field => value })
          end

          it 'iterates over the documents' do
            view.each do |document|
              expect(document[field]).to eq(value)
            end
          end
        end
      end
    end

    context 'when provided options' do

      context 'when a session is provided' do
        require_wired_tiger

        let(:operation) do
          authorized_collection.find({}, session: session).to_a
        end

        let(:session) do
          authorized_client.start_session
        end

        let(:failed_operation) do
          client[authorized_collection.name].find({ '$._id' => 1 }, session: session).to_a
        end

        let(:client) do
          authorized_client
        end

        it_behaves_like 'an operation using a session'
        it_behaves_like 'a failed operation using a session'
      end

      context 'session id' do
        min_server_fcv '3.6'
        require_topology :replica_set, :sharded
        require_wired_tiger

        let(:options) do
          { session: session }
        end

        let(:session) do
          client.start_session
        end

        let(:view) do
          Mongo::Collection::View.new(client[TEST_COLL], selector, view_options)
        end

        let(:command) do
          client[TEST_COLL].find({}, session: session).explain
          subscriber.started_events.find { |c| c.command_name == 'explain' }.command
        end

        it 'sends the session id' do
          expect(command['lsid']).to eq(session.session_id)
        end
      end

      context 'when a session supporting causal consistency is used' do
        require_wired_tiger

        let(:operation) do
          collection.find({}, session: session).to_a
        end

        let(:command) do
          operation
          subscriber.started_events.find { |cmd| cmd.command_name == 'find' }.command
        end

        it_behaves_like 'an operation supporting causally consistent reads'
      end

      let(:view) do
        authorized_collection.find({}, options)
      end

      context 'when provided :allow_partial_results' do
        let(:options) do
          { allow_partial_results: true }
        end

        it 'returns a view with :allow_partial_results set' do
          expect(view.options[:allow_partial_results]).to be(options[:allow_partial_results])
        end
      end

      context 'when provided :batch_size' do
        let(:options) do
          { batch_size: 100 }
        end

        it 'returns a view with :batch_size set' do
          expect(view.options[:batch_size]).to eq(options[:batch_size])
        end
      end

      context 'when provided :comment' do
        let(:options) do
          { comment: 'slow query' }
        end

        it 'returns a view with :comment set' do
          expect(view.modifiers[:$comment]).to eq(options[:comment])
        end
      end

      context 'when provided :cursor_type' do
        let(:options) do
          { cursor_type: :tailable }
        end

        it 'returns a view with :cursor_type set' do
          expect(view.options[:cursor_type]).to eq(options[:cursor_type])
        end
      end

      context 'when provided :max_time_ms' do
        let(:options) do
          { max_time_ms: 500 }
        end
        it 'returns a view with :max_time_ms set' do
          expect(view.modifiers[:$maxTimeMS]).to eq(options[:max_time_ms])
        end
      end

      context 'when provided :modifiers' do
        let(:options) do
          { modifiers: { '$orderby' => Mongo::Index::ASCENDING } }
        end

        it 'returns a view with modifiers set' do
          expect(view.modifiers).to eq(options[:modifiers])
        end

        it 'dups the modifiers hash' do
          expect(view.modifiers).not_to be(options[:modifiers])
        end
      end

      context 'when provided :no_cursor_timeout' do
        let(:options) do
          { no_cursor_timeout: true }
        end

        it 'returns a view with :no_cursor_timeout set' do
          expect(view.options[:no_cursor_timeout]).to eq(options[:no_cursor_timeout])
        end
      end

      context 'when provided :oplog_replay' do
        let(:options) do
          { oplog_replay: false }
        end

        it 'returns a view with :oplog_replay set' do
          expect(view.options[:oplog_replay]).to eq(options[:oplog_replay])
        end
      end

      context 'when provided :projection' do
        let(:options) do
          { projection: { 'x' => 1 } }
        end

        it 'returns a view with :projection set' do
          expect(view.options[:projection]).to eq(options[:projection])
        end
      end

      context 'when provided :skip' do
        let(:options) do
          { skip: 5 }
        end

        it 'returns a view with :skip set' do
          expect(view.options[:skip]).to eq(options[:skip])
        end
      end

      context 'when provided :sort' do
        let(:options) do
          { sort: { 'x' => Mongo::Index::ASCENDING } }
        end

        it 'returns a view with :sort set' do
          expect(view.modifiers[:$orderby]).to eq(options[:sort])
        end
      end

      context 'when provided :collation' do
        let(:options) do
          { collation: { 'locale' => 'en_US' } }
        end

        it 'returns a view with :collation set' do
          expect(view.options[:collation]).to eq(options[:collation])
        end
      end
    end
  end

  describe '#insert_many' do

    let(:result) do
      authorized_collection.insert_many([{ name: 'test1' }, { name: 'test2' }])
    end

    it 'inserts the documents into the collection' do
      expect(result.inserted_count).to eq(2)
    end

    it 'contains the ids in the result' do
      expect(result.inserted_ids.size).to eq(2)
    end

    context 'when an enumerable is used instead of an array' do

      context 'when the enumerable is not empty' do
        let(:source_data) do
          [{ name: 'test1' }, { name: 'test2' }]
        end

        let(:result) do
          authorized_collection.insert_many(source_data.lazy)
        end

        it 'accepts them without raising an error' do
          expect { result }.to_not raise_error
          expect(result.inserted_count).to eq(source_data.size)
        end
      end

      context 'when the enumerable is empty' do
        let(:source_data) do
          []
        end

        let(:result) do
          authorized_collection.insert_many(source_data.lazy)
        end

        it 'should raise ArgumentError' do
          expect do
            result
          end.to raise_error(ArgumentError, /Bulk write requests cannot be empty/)
        end
      end
    end

    context 'when a session is provided' do
      let(:session) do
        authorized_client.start_session
      end

      let(:operation) do
        authorized_collection.insert_many([{ name: 'test1' }, { name: 'test2' }], session: session)
      end

      let(:failed_operation) do
        authorized_collection.insert_many([{ _id: 'test1' }, { _id: 'test1' }], session: session)
      end

      let(:client) do
        authorized_client
      end

      it_behaves_like 'an operation using a session'
      it_behaves_like 'a failed operation using a session'
    end

    context 'when unacknowledged writes is used with an explicit session' do
      let(:collection_with_unacknowledged_write_concern) do
        authorized_collection.with(write: { w: 0 })
      end

      let(:operation) do
        collection_with_unacknowledged_write_concern.insert_many([{ name: 'test1' }, { name: 'test2' }], session: session)
      end

      it_behaves_like 'an explicit session with an unacknowledged write'
    end
    context 'when unacknowledged writes is used with an implicit session' do
      let(:collection_with_unacknowledged_write_concern) do
        client.with(write: { w: 0 })[TEST_COLL]
      end

      let(:operation) do
        collection_with_unacknowledged_write_concern.insert_many([{ name: 'test1' }, { name: 'test2' }])
      end

      it_behaves_like 'an implicit session with an unacknowledged write'
    end

    context 'when a document contains dotted keys' do
      let(:docs) do
        [ { 'first.name' => 'test1' }, { name: 'test2' } ]
      end

      let(:view) { authorized_collection.find({}, { sort: { name: 1 } }) }

      it 'inserts the documents correctly' do
        expect { authorized_collection.insert_many(docs) }.to_not raise_error

        expect(view.count).to eq(2)
        expect(view.first['first.name']).to eq('test1')
        expect(view.to_a[1]['name']).to eq('test2')
      end
    end

    context 'when the client has a custom id generator' do
      let(:generator) do
        Class.new do
          def generate
            1
          end
        end.new
      end

      let(:custom_client) do
        authorized_client.with(id_generator: generator)
      end

      let(:custom_collection) do
        custom_client['custom_id_generator_test_collection']
      end

      before do
        custom_collection.delete_many
        custom_collection.insert_many([{ name: 'testing' }])
        expect(custom_collection.count).to eq(1)
      end

      it 'inserts with the custom id' do
        expect(custom_collection.count).to eq(1)
        expect(custom_collection.find.first[:_id]).to eq(1)
      end
    end

    context 'when the inserts fail' do
      let(:result) do
        authorized_collection.insert_many([{ _id: 1 }, { _id: 1 }])
      end

      it 'raises a BulkWriteError' do
        expect { result }.to raise_exception(Mongo::Error::BulkWriteError)
      end
    end

    context "when the documents exceed the max bson size" do
      let(:documents) do
        [{ '_id' => 1, 'name' => '1'*17000000 }]
      end

      it 'raises a MaxBSONSize error' do
        expect { authorized_collection.insert_many(documents) }.to raise_error(Mongo::Error::MaxBSONSize)
      end
    end

    context 'when the documents are sent with OP_MSG' do
      min_server_fcv '3.6'

      let(:documents) do
        [{ '_id' => 1, 'name' => '1'*16777191 }, { '_id' => 'y' }]
      end

      before do
        authorized_collection.insert_many(documents)
      end

      let(:insert_events) do
        subscriber.started_events.select { |e| e.command_name == 'insert' }
      end

      it 'sends the documents in one OP_MSG' do
        expect(insert_events.size).to eq(1)
        expect(insert_events[0].command['documents']).to eq(documents)
      end
    end

    context 'when collection has a validator' do
      min_server_fcv '3.2'

      around(:each) do |spec|
        authorized_client[:validating].drop
        authorized_client[:validating, :validator => { :a => { '$exists' => true } }].tap do |c|
          c.create
        end
        spec.run
        collection_with_validator.drop
      end

      context 'when the document is valid' do
        let(:result) do
          collection_with_validator.insert_many([{ a: 1 }, { a: 2 }])
        end

        it 'inserts successfully' do
          expect(result.inserted_count).to eq(2)
        end
      end

      context 'when the document is invalid' do

        context 'when bypass_document_validation is not set' do
          let(:result2) do
            collection_with_validator.insert_many([{ x: 1 }, { x: 2 }])
          end

          it 'raises a BulkWriteError' do
            expect { result2 }.to raise_exception(Mongo::Error::BulkWriteError)
          end
        end

        context 'when bypass_document_validation is true' do
          let(:result3) do
            collection_with_validator.insert_many(
              [{ x: 1 }, { x: 2 }], :bypass_document_validation => true)
          end

          it 'inserts successfully' do
            expect(result3.inserted_count).to eq(2)
          end
        end
      end
    end

    context 'when unacknowledged writes is used' do
      let(:collection_with_unacknowledged_write_concern) do
        authorized_collection.with(write: { w: 0 })
      end

      let(:result) do
        collection_with_unacknowledged_write_concern.insert_many([{ _id: 1 }, { _id: 1 }])
      end

      it 'does not raise an exception' do
        expect(result.inserted_count).to be(0)
      end
    end
    context 'when various options passed in' do
      # w: 2 requires a replica set
      require_topology :replica_set

      # https://jira.mongodb.org/browse/RUBY-2306
      min_server_fcv '3.6'

      let(:session) do
        authorized_client.start_session
      end

      let(:events) do
        subscriber.command_started_events('insert')
      end

      let(:collection) do
        authorized_collection.with(write_concern: {w: 2})
      end

      let!(:command) do
        Utils.get_command_event(authorized_client, 'insert') do |client|
          collection.insert_many([{ name: 'test1' }, { name: 'test2' }], session: session,
            write_concern: {w: 1}, bypass_document_validation: true)
        end.command
      end

      it 'inserts many successfully with correct options sent to server' do
        expect(events.length).to eq(1)
        expect(command[:writeConcern]).to_not be_nil
        expect(command[:writeConcern][:w]).to eq(1)
        expect(command[:bypassDocumentValidation]).to be(true)
      end
    end
  end

  describe '#insert_one' do

    describe 'updating cluster time' do
      let(:operation) do
        client[TEST_COLL].insert_one({ name: 'testing' })
      end

      let(:operation_with_session) do
        client[TEST_COLL].insert_one({ name: 'testing' }, session: session)
      end

      let(:second_operation) do
        client[TEST_COLL].insert_one({ name: 'testing' }, session: session)
      end

      it_behaves_like 'an operation updating cluster time'
    end

    let(:result) do
      authorized_collection.insert_one({ name: 'testing' })
    end

    it 'inserts the document into the collection' do
      expect(result.written_count).to eq(1)
    end

    it 'contains the id in the result' do
      expect(result.inserted_id).to_not be_nil
    end

    context 'when a session is provided' do
      let(:session) do
        authorized_client.start_session
      end

      let(:operation) do
        authorized_collection.insert_one({ name: 'testing' }, session: session)
      end

      let(:failed_operation) do
        authorized_collection.insert_one({ _id: 'testing' })
        authorized_collection.insert_one({ _id: 'testing' }, session: session)
      end

      let(:client) do
        authorized_client
      end

      it_behaves_like 'an operation using a session'
      it_behaves_like 'a failed operation using a session'
    end

    context 'when unacknowledged writes is used with an explicit session' do
      let(:collection_with_unacknowledged_write_concern) do
        authorized_collection.with(write: { w: 0 })
      end

      let(:operation) do
        collection_with_unacknowledged_write_concern.insert_one({ name: 'testing' }, session: session)
      end

      it_behaves_like 'an explicit session with an unacknowledged write'
    end

    context 'when unacknowledged writes is used with an implicit session' do
      let(:collection_with_unacknowledged_write_concern) do
        client.with(write: { w: 0 })[TEST_COLL]
      end

      let(:operation) do
        collection_with_unacknowledged_write_concern.insert_one({ name: 'testing' })
      end

      it_behaves_like 'an implicit session with an unacknowledged write'
    end

    context 'when various options passed in' do
      # https://jira.mongodb.org/browse/RUBY-2306
      min_server_fcv '3.6'

      let(:session) do
        authorized_client.start_session
      end

      let(:events) do
        subscriber.command_started_events('insert')
      end

      let(:collection) do
        authorized_collection.with(write_concern: {w: 3})
      end

      let!(:command) do
        Utils.get_command_event(authorized_client, 'insert') do |client|
          collection.insert_one({name: 'test1'}, session: session, write_concern: {w: 1},
            bypass_document_validation: true)
        end.command
      end

      it 'inserts one successfully with correct options sent to server' do
        expect(events.length).to eq(1)
        expect(command[:writeConcern]).to_not be_nil
        expect(command[:writeConcern][:w]).to eq(1)
        expect(command[:bypassDocumentValidation]).to be(true)
      end
    end
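    # For illustration only (hypothetical values): per-operation options such
    # as :write_concern override the collection-level setting for that single
    # call, which is what the expectation on command[:writeConcern][:w] above
    # verifies.
    #
    #   coll = authorized_collection.with(write_concern: { w: 3 })
    #   coll.insert_one({ name: 'x' }, write_concern: { w: 1 })  # sends w: 1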
    context 'when the document contains dotted keys' do
      let(:doc) do
        { 'testing.test' => 'value' }
      end

      it 'inserts the document correctly' do
        expect { authorized_collection.insert_one(doc) }.to_not raise_error

        expect(authorized_collection.count).to eq(1)
        expect(authorized_collection.find.first['testing.test']).to eq('value')
      end
    end

    context 'when the document is nil' do
      let(:result) do
        authorized_collection.insert_one(nil)
      end

      it 'raises an ArgumentError' do
        expect { result }.to raise_error(ArgumentError, "Document to be inserted cannot be nil")
      end
    end

    context 'when the insert fails' do
      let(:result) do
        authorized_collection.insert_one(_id: 1)
        authorized_collection.insert_one(_id: 1)
      end

      it 'raises an OperationFailure' do
        expect { result }.to raise_exception(Mongo::Error::OperationFailure)
      end
    end

    context 'when the client has a custom id generator' do
      let(:generator) do
        Class.new do
          def generate
            1
          end
        end.new
      end

      let(:custom_client) do
        authorized_client.with(id_generator: generator)
      end

      let(:custom_collection) do
        custom_client[TEST_COLL]
      end

      before do
        custom_collection.delete_many
        custom_collection.insert_one({ name: 'testing' })
      end

      it 'inserts with the custom id' do
        expect(custom_collection.find.first[:_id]).to eq(1)
      end
    end

    context 'when collection has a validator' do
      min_server_fcv '3.2'

      around(:each) do |spec|
        authorized_client[:validating, :validator => { :a => { '$exists' => true } }].tap do |c|
          c.create
        end
        spec.run
        collection_with_validator.drop
      end

      context 'when the document is valid' do
        let(:result) do
          collection_with_validator.insert_one({ a: 1 })
        end

        it 'inserts successfully' do
          expect(result.written_count).to eq(1)
        end
      end

      context 'when the document is invalid' do

        context 'when bypass_document_validation is not set' do
          let(:result2) do
            collection_with_validator.insert_one({ x: 1 })
          end

          it 'raises an OperationFailure' do
            expect { result2 }.to raise_exception(Mongo::Error::OperationFailure)
          end
        end

        context 'when bypass_document_validation is true' do
          let(:result3) do
            collection_with_validator.insert_one(
              { x: 1 }, :bypass_document_validation => true)
          end

          it 'inserts successfully' do
            expect(result3.written_count).to eq(1)
          end
        end
      end
    end
  end

  describe '#bulk_write' do

    context 'when various options passed in' do
      min_server_fcv '3.2'
      require_topology :replica_set

      # https://jira.mongodb.org/browse/RUBY-2306
      min_server_fcv '3.6'

      let(:requests) do
        [
          { insert_one: { name: "anne" }},
          { insert_one: { name: "bob" }},
          { insert_one: { name: "charlie" }}
        ]
      end

      let(:session) do
        authorized_client.start_session
      end

      let!(:command) do
        Utils.get_command_event(authorized_client, 'insert') do |client|
          collection.bulk_write(requests, session: session, write_concern: {w: 1},
            bypass_document_validation: true)
        end.command
      end

      let(:events) do
        subscriber.command_started_events('insert')
      end

      let(:collection) do
        authorized_collection.with(write_concern: {w: 2})
      end

      it 'inserts successfully with correct options sent to server' do
        expect(collection.count).to eq(3)
        expect(events.length).to eq(1)
        expect(command[:writeConcern]).to_not be_nil
        expect(command[:writeConcern][:w]).to eq(1)
        expect(command[:bypassDocumentValidation]).to eq(true)
      end
    end
  end

  describe '#aggregate' do

    describe 'updating cluster time' do
      let(:operation) do
        client[TEST_COLL].aggregate([]).first
      end

      let(:operation_with_session) do
        client[TEST_COLL].aggregate([], session: session).first
      end

      let(:second_operation) do
        client[TEST_COLL].aggregate([], session: session).first
      end

      it_behaves_like 'an operation updating cluster time'
    end

    context 'when a session supporting causal consistency is used' do
      require_wired_tiger

      let(:operation) do
        collection.aggregate([], session: session).first
      end
      let(:command) do
        operation
        subscriber.started_events.find { |cmd| cmd.command_name == 'aggregate' }.command
      end

      it_behaves_like 'an operation supporting causally consistent reads'
    end

    it 'returns an Aggregation object' do
      expect(authorized_collection.aggregate([])).to be_a(Mongo::Collection::View::Aggregation)
    end

    context 'when options are provided' do
      let(:options) do
        { :allow_disk_use => true, :bypass_document_validation => true }
      end

      it 'sets the options on the Aggregation object' do
        expect(authorized_collection.aggregate([], options).options).to eq(BSON::Document.new(options))
      end

      context 'when the :comment option is provided' do
        let(:options) do
          { :comment => 'testing' }
        end

        it 'sets the options on the Aggregation object' do
          expect(authorized_collection.aggregate([], options).options).to eq(BSON::Document.new(options))
        end
      end

      context 'when a session is provided' do
        let(:session) do
          authorized_client.start_session
        end

        let(:operation) do
          authorized_collection.aggregate([], session: session).to_a
        end

        let(:failed_operation) do
          authorized_collection.aggregate([ { '$invalid' => 1 }], session: session).to_a
        end

        let(:client) do
          authorized_client
        end

        it_behaves_like 'an operation using a session'
        it_behaves_like 'a failed operation using a session'
      end

      context 'when a hint is provided' do
        let(:options) do
          { 'hint' => { 'y' => 1 } }
        end

        it 'sets the options on the Aggregation object' do
          expect(authorized_collection.aggregate([], options).options).to eq(options)
        end
      end

      context 'when collation is provided' do
        before do
          authorized_collection.insert_many([ { name: 'bang' }, { name: 'bang' }])
        end

        let(:pipeline) do
          [{ "$match" => { "name" => "BANG" } }]
        end

        let(:options) do
          { collation: { locale: 'en_US', strength: 2 } }
        end

        let(:result) do
          authorized_collection.aggregate(pipeline, options).collect { |doc| doc['name']}
        end

        context 'when the server selected supports collations' do
          min_server_fcv '3.4'

          it 'applies the collation' do
            expect(result).to eq(['bang', 'bang'])
          end
        end

        context 'when the server selected does not support collations' do
          max_server_version '3.2'

          it 'raises an exception' do
            expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation)
          end

          context 'when a String key is used' do
            let(:options) do
              { 'collation' => { locale: 'en_US', strength: 2 } }
            end

            it 'raises an exception' do
              expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation)
            end
          end
        end
      end
    end
  end

  describe '#count_documents' do

    before do
      authorized_collection.delete_many
    end

    context 'no argument provided' do

      context 'when collection is empty' do
        it 'returns 0 matching documents' do
          expect(authorized_collection.count_documents).to eq(0)
        end
      end

      context 'when collection is not empty' do
        let(:documents) do
          documents = []
          1.upto(10) do |index|
            documents << { key: 'a', _id: "in#{index}" }
          end
          documents
        end

        before do
          authorized_collection.insert_many(documents)
        end

        it 'returns 10 matching documents' do
          expect(authorized_collection.count_documents).to eq(10)
        end
      end
    end

    context 'when transactions are enabled' do
      require_wired_tiger
      require_transaction_support

      before do
        # Ensure that the collection is created
        authorized_collection.insert_one(x: 1)
        authorized_collection.delete_many({})
      end

      let(:session) do
        authorized_client.start_session
      end

      it 'successfully starts a transaction and executes a transaction' do
        session.start_transaction
        expect(
          session.instance_variable_get(:@state)
        ).to eq(Mongo::Session::STARTING_TRANSACTION_STATE)

        expect(authorized_collection.count_documents({}, { session: session })).to eq(0)
        expect(
          session.instance_variable_get(:@state)
        ).to eq(Mongo::Session::TRANSACTION_IN_PROGRESS_STATE)

        authorized_collection.insert_one({ x: 1 }, { session: session })
        expect(authorized_collection.count_documents({}, { session: session })).to eq(1)

        session.commit_transaction
        expect(
          session.instance_variable_get(:@state)
        ).to eq(Mongo::Session::TRANSACTION_COMMITTED_STATE)
      end
    end
  end

  describe '#count' do

    let(:documents) do
      (1..10).map{ |i| { field: "test#{i}" }}
    end

    before do
      authorized_collection.insert_many(documents)
    end

    it 'returns an integer count' do
      expect(authorized_collection.count).to eq(10)
    end

    context 'when options are provided' do

      it 'passes the options to the count' do
        expect(authorized_collection.count({}, limit: 5)).to eq(5)
      end

      context 'when a session is provided' do
        require_wired_tiger

        let(:session) do
          authorized_client.start_session
        end

        let(:operation) do
          authorized_collection.count({}, session: session)
        end

        let(:failed_operation) do
          authorized_collection.count({ '$._id' => 1 }, session: session)
        end

        let(:client) do
          authorized_client
        end

        it_behaves_like 'an operation using a session'
        it_behaves_like 'a failed operation using a session'
      end

      context 'when a session supporting causal consistency is used' do
        require_wired_tiger

        let(:operation) do
          collection.count({}, session: session)
        end

        let(:command) do
          operation
          subscriber.started_events.find { |cmd| cmd.command_name == 'count' }.command
        end

        it_behaves_like 'an operation supporting causally consistent reads'
      end

      context 'when a collation is specified' do
        let(:selector) do
          { name: 'BANG' }
        end

        let(:result) do
          authorized_collection.count(selector, options)
        end

        before do
          authorized_collection.insert_one(name: 'bang')
        end

        let(:options) do
          { collation: { locale: 'en_US', strength: 2 } }
        end

        context 'when the server selected supports collations' do
          min_server_fcv '3.4'

          it 'applies the collation to the count' do
            expect(result).to eq(1)
          end
        end

        context 'when the server selected does not support collations' do
          max_server_version '3.2'

          it 'raises an exception' do
            expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation)
          end

          context 'when a String key is used' do
            let(:options) do
              { 'collation' => { locale: 'en_US', strength: 2 } }
            end

            it 'raises an exception' do
              expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation)
            end
          end
        end
      end
    end
  end

  describe '#distinct' do

    let(:documents) do
      (1..3).map{ |i| { field: "test#{i}" }}
    end

    before do
      authorized_collection.insert_many(documents)
    end

    it 'returns the distinct values' do
      expect(authorized_collection.distinct(:field).sort).to eq([ 'test1', 'test2', 'test3' ])
    end

    context 'when a selector is provided' do
      it 'returns the distinct values' do
        expect(authorized_collection.distinct(:field, field: 'test1')).to eq([ 'test1' ])
      end
    end

    context 'when options are provided' do

      it 'passes the options to the distinct command' do
        expect(authorized_collection.distinct(:field, {}, max_time_ms: 100).sort).to eq([ 'test1', 'test2', 'test3' ])
      end

      context 'when a session is provided' do
        require_wired_tiger

        let(:session) do
          authorized_client.start_session
        end

        let(:operation) do
          authorized_collection.distinct(:field, {}, session: session)
        end

        let(:failed_operation) do
          authorized_collection.distinct(:field, { '$._id' => 1 }, session: session)
        end

        let(:client) do
          authorized_client
        end

        it_behaves_like 'an operation using a session'
        it_behaves_like 'a failed operation using a session'
      end
    end
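    # For illustration only (not from the original suite): causally consistent
    # reads require an explicit session; :causal_consistency here is the
    # driver's session option, shown with hypothetical values.
    #
    #   session = client.start_session(causal_consistency: true)
    #   collection.distinct(:field, {}, session: session)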
    context 'when a session supporting causal consistency is used' do
      require_wired_tiger

      let(:operation) do
        collection.distinct(:field, {}, session: session)
      end

      let(:command) do
        operation
        subscriber.started_events.find { |cmd| cmd.command_name == 'distinct' }.command
      end

      it_behaves_like 'an operation supporting causally consistent reads'
    end

    context 'when a collation is specified' do
      let(:result) do
        authorized_collection.distinct(:name, {}, options)
      end

      before do
        authorized_collection.insert_one(name: 'bang')
        authorized_collection.insert_one(name: 'BANG')
      end

      let(:options) do
        { collation: { locale: 'en_US', strength: 2 } }
      end

      context 'when the server selected supports collations' do
        min_server_fcv '3.4'

        it 'applies the collation to the distinct' do
          expect(result).to eq(['bang'])
        end
      end

      context 'when the server selected does not support collations' do
        max_server_version '3.2'

        it 'raises an exception' do
          expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation)
        end

        context 'when a String key is used' do
          let(:options) do
            { 'collation' => { locale: 'en_US', strength: 2 } }
          end

          it 'raises an exception' do
            expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation)
          end
        end
      end
    end

    context 'when a collation is not specified' do
      let(:result) do
        authorized_collection.distinct(:name)
      end

      before do
        authorized_collection.insert_one(name: 'bang')
        authorized_collection.insert_one(name: 'BANG')
      end

      it 'does not apply the collation to the distinct' do
        expect(result).to match_array(['bang', 'BANG'])
      end
    end
  end

  describe '#delete_one' do

    context 'when a selector was provided' do
      let(:selector) do
        { field: 'test1' }
      end

      before do
        authorized_collection.insert_many([
          { field: 'test1' },
          { field: 'test1' },
          { field: 'test1' }
        ])
      end

      let(:response) do
        authorized_collection.delete_one(selector)
      end

      it 'deletes the first matching document in the collection' do
        expect(response.deleted_count).to eq(1)
      end
    end

    context 'when no selector was provided' do
      before do
        authorized_collection.insert_many([{ field: 'test1' }, { field: 'test2' }])
      end

      let(:response) do
        authorized_collection.delete_one
      end

      it 'deletes the first document in the collection' do
        expect(response.deleted_count).to eq(1)
      end
    end

    context 'when the delete fails' do
      require_topology :single

      let(:result) do
        collection_invalid_write_concern.delete_one
      end

      it 'raises an OperationFailure' do
        expect { result }.to raise_exception(Mongo::Error::OperationFailure)
      end
    end

    context 'when a session is provided' do
      let(:session) do
        authorized_client.start_session
      end

      let(:operation) do
        authorized_collection.delete_one({}, session: session)
      end

      let(:failed_operation) do
        authorized_collection.delete_one({ '$._id' => 1}, session: session)
      end

      let(:client) do
        authorized_client
      end

      it_behaves_like 'an operation using a session'
      it_behaves_like 'a failed operation using a session'
    end

    context 'when unacknowledged writes is used' do
      let(:collection_with_unacknowledged_write_concern) do
        authorized_collection.with(write: { w: 0 })
      end

      let(:operation) do
        collection_with_unacknowledged_write_concern.delete_one({}, session: session)
      end

      it_behaves_like 'an explicit session with an unacknowledged write'
    end

    context 'when unacknowledged writes is used with an implicit session' do
      let(:collection_with_unacknowledged_write_concern) do
        client.with(write: { w: 0 })[TEST_COLL]
      end

      let(:operation) do
        collection_with_unacknowledged_write_concern.delete_one
      end

      it_behaves_like 'an implicit session with an unacknowledged write'
    end

    context 'when a collation is provided' do
      let(:selector) do
        { name: 'BANG' }
      end
      let(:result) do
        authorized_collection.delete_one(selector, options)
      end

      before do
        authorized_collection.insert_one(name: 'bang')
      end

      let(:options) do
        { collation: { locale: 'en_US', strength: 2 } }
      end

      context 'when the server selected supports collations' do
        min_server_fcv '3.4'

        it 'applies the collation' do
          expect(result.written_count).to eq(1)
          expect(authorized_collection.find(name: 'bang').count).to eq(0)
        end

        context 'when unacknowledged writes is used' do
          let(:collection_with_unacknowledged_write_concern) do
            authorized_collection.with(write: { w: 0 })
          end

          let(:result) do
            collection_with_unacknowledged_write_concern.delete_one(selector, options)
          end

          it 'raises an exception' do
            expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation)
          end

          context 'when a String key is used' do
            let(:options) do
              { 'collation' => { locale: 'en_US', strength: 2 } }
            end

            it 'raises an exception' do
              expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation)
            end
          end
        end
      end

      context 'when the server selected does not support collations' do
        max_server_version '3.2'

        it 'raises an exception' do
          expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation)
        end

        context 'when a String key is used' do
          let(:options) do
            { 'collation' => { locale: 'en_US', strength: 2 } }
          end

          it 'raises an exception' do
            expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation)
          end
        end
      end
    end

    context 'when collation is not specified' do
      let(:selector) do
        { name: 'BANG' }
      end

      let(:result) do
        authorized_collection.delete_one(selector)
      end

      before do
        authorized_collection.insert_one(name: 'bang')
      end

      it 'does not apply the collation' do
        expect(result.written_count).to eq(0)
        expect(authorized_collection.find(name: 'bang').count).to eq(1)
      end
    end

    context 'when various options passed in' do
      # w: 2 requires a replica set
      require_topology :replica_set

      # https://jira.mongodb.org/browse/RUBY-2306
      min_server_fcv '3.6'

      before do
        authorized_collection.insert_many([{ name: 'test1' }, { name: 'test2' }])
      end

      let(:selector) do
        {name: 'test2'}
      end

      let(:session) do
        authorized_client.start_session
      end

      let(:events) do
        subscriber.command_started_events('delete')
      end

      let(:collection) do
        authorized_collection.with(write_concern: {w: 2})
      end

      let!(:command) do
        Utils.get_command_event(authorized_client, 'delete') do |client|
          collection.delete_one(selector, session: session, write_concern: {w: 1},
            bypass_document_validation: true)
        end.command
      end

      it 'deletes one successfully with correct options sent to server' do
        expect(events.length).to eq(1)
        expect(command[:writeConcern]).to_not be_nil
        expect(command[:writeConcern][:w]).to eq(1)
        expect(command[:bypassDocumentValidation]).to eq(true)
      end
    end
  end

  describe '#delete_many' do

    before do
      authorized_collection.insert_many([{ field: 'test1' }, { field: 'test2' }])
    end

    context 'when a selector was provided' do
      let(:selector) do
        { field: 'test1' }
      end

      it 'deletes the matching documents in the collection' do
        expect(authorized_collection.delete_many(selector).deleted_count).to eq(1)
      end
    end

    context 'when no selector was provided' do
      it 'deletes all the documents in the collection' do
        expect(authorized_collection.delete_many.deleted_count).to eq(2)
      end
    end

    context 'when the deletes fail' do
      require_topology :single

      let(:result) do
        collection_invalid_write_concern.delete_many
      end

      it 'raises an OperationFailure' do
        expect { result }.to raise_exception(Mongo::Error::OperationFailure)
      end
    end
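    # As the collation contexts in this file illustrate, a collation cannot be
    # combined with an unacknowledged (w: 0) write concern: the server would
    # never report whether the collation was honored, so the driver raises
    # Mongo::Error::UnsupportedCollation on the client side before sending
    # anything to the server.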
    context 'when a session is provided' do
      let(:session) do
        authorized_client.start_session
      end

      let(:operation) do
        authorized_collection.delete_many({}, session: session)
      end

      let(:failed_operation) do
        authorized_collection.delete_many({ '$._id' => 1}, session: session)
      end

      let(:client) do
        authorized_client
      end

      it_behaves_like 'an operation using a session'
      it_behaves_like 'a failed operation using a session'
    end

    context 'when unacknowledged writes are used with an explicit session' do
      let(:collection_with_unacknowledged_write_concern) do
        authorized_collection.with(write: { w: 0 })
      end

      let(:operation) do
        collection_with_unacknowledged_write_concern.delete_many({ '$._id' => 1}, session: session)
      end

      it_behaves_like 'an explicit session with an unacknowledged write'
    end

    context 'when unacknowledged writes are used with an implicit session' do
      let(:collection_with_unacknowledged_write_concern) do
        client.with(write: { w: 0 })[TEST_COLL]
      end

      let(:operation) do
        collection_with_unacknowledged_write_concern.delete_many({ '$._id' => 1 })
      end

      it_behaves_like 'an implicit session with an unacknowledged write'
    end

    context 'when a collation is specified' do
      let(:selector) do
        { name: 'BANG' }
      end

      let(:result) do
        authorized_collection.delete_many(selector, options)
      end

      before do
        authorized_collection.insert_one(name: 'bang')
        authorized_collection.insert_one(name: 'bang')
      end

      let(:options) do
        { collation: { locale: 'en_US', strength: 2 } }
      end

      context 'when the server selected supports collations' do
        min_server_fcv '3.4'

        it 'applies the collation' do
          expect(result.written_count).to eq(2)
          expect(authorized_collection.find(name: 'bang').count).to eq(0)
        end

        context 'when unacknowledged writes is used' do
          let(:collection_with_unacknowledged_write_concern) do
            authorized_collection.with(write: { w: 0 })
          end

          let(:result) do
            collection_with_unacknowledged_write_concern.delete_many(selector, options)
          end

          it 'raises an exception' do
            expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation)
          end

          context 'when a String key is used' do
            let(:options) do
              { 'collation' => { locale: 'en_US', strength: 2 } }
            end

            it 'raises an exception' do
              expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation)
            end
          end
        end
      end

      context 'when the server selected does not support collations' do
        max_server_version '3.2'

        it 'raises an exception' do
          expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation)
        end

        context 'when a String key is used' do
          let(:options) do
            { 'collation' => { locale: 'en_US', strength: 2 } }
          end

          it 'raises an exception' do
            expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation)
          end
        end
      end
    end

    context 'when a collation is not specified' do
      let(:selector) do
        { name: 'BANG' }
      end

      let(:result) do
        authorized_collection.delete_many(selector)
      end

      before do
        authorized_collection.insert_one(name: 'bang')
        authorized_collection.insert_one(name: 'bang')
      end

      it 'does not apply the collation' do
        expect(result.written_count).to eq(0)
        expect(authorized_collection.find(name: 'bang').count).to eq(2)
      end
    end

    context 'when various options passed in' do
      # w: 2 requires a replica set
      require_topology :replica_set

      # https://jira.mongodb.org/browse/RUBY-2306
      min_server_fcv '3.6'

      before do
        collection.insert_many([{ name: 'test1' }, { name: 'test2' }, { name: 'test3'}])
      end

      let(:selector) do
        {name: 'test1'}
      end

      let(:session) do
        authorized_client.start_session
      end

      let(:events) do
        subscriber.command_started_events('delete')
      end

      let(:collection) do
        authorized_collection.with(write_concern: {w: 1})
      end
      let!(:command) do
        Utils.get_command_event(authorized_client, 'delete') do |client|
          collection.delete_many(selector, session: session, write_concern: {w: 2},
            bypass_document_validation: true)
        end.command
      end

      it 'deletes many successfully with correct options sent to server' do
        expect(events.length).to eq(1)
        expect(command[:writeConcern]).to_not be_nil
        expect(command[:writeConcern][:w]).to eq(2)
        expect(command[:bypassDocumentValidation]).to be(true)
      end
    end
  end

  describe '#parallel_scan' do
    max_server_version '4.0'
    require_topology :single, :replica_set

    let(:documents) do
      (1..200).map do |i|
        { name: "testing-scan-#{i}" }
      end
    end

    before do
      authorized_collection.insert_many(documents)
    end

    let(:cursors) do
      authorized_collection.parallel_scan(2)
    end

    it 'returns an array of cursors' do
      cursors.each do |cursor|
        expect(cursor.class).to be(Mongo::Cursor)
      end
    end

    it 'returns the correct number of documents' do
      expect(
        cursors.reduce(0) { |total, cursor| total + cursor.to_a.size }
      ).to eq(200)
    end

    context 'when a session is provided' do
      require_wired_tiger

      let(:cursors) do
        authorized_collection.parallel_scan(2, session: session)
      end

      let(:operation) do
        cursors.reduce(0) { |total, cursor| total + cursor.to_a.size }
      end

      let(:failed_operation) do
        authorized_collection.parallel_scan(-2, session: session)
      end

      let(:client) do
        authorized_client
      end

      it_behaves_like 'an operation using a session'
      it_behaves_like 'a failed operation using a session'
    end

    context 'when a session is not provided' do
      let(:collection) { client['test'] }

      let(:cursors) do
        collection.parallel_scan(2)
      end

      let(:operation) do
        cursors.reduce(0) { |total, cursor| total + cursor.to_a.size }
      end

      let(:failed_operation) do
        collection.parallel_scan(-2)
      end

      let(:command) do
        operation
        event = subscriber.started_events.find { |cmd| cmd.command_name == 'parallelCollectionScan' }
        expect(event).not_to be_nil
        event.command
      end

      it_behaves_like 'an operation not using a session'
      it_behaves_like 'a failed operation not using a session'
    end

    context 'when a session supporting causal consistency is used' do
      require_wired_tiger

      before do
        collection.drop
        collection.create
      end

      let(:cursors) do
        collection.parallel_scan(2, session: session)
      end

      let(:operation) do
        cursors.reduce(0) { |total, cursor| total + cursor.to_a.size }
      end

      let(:command) do
        operation
        event = subscriber.started_events.find { |cmd| cmd.command_name == 'parallelCollectionScan' }
        expect(event).not_to be_nil
        event.command
      end

      it_behaves_like 'an operation supporting causally consistent reads'
    end

    context 'when a read concern is provided' do
      require_wired_tiger
      min_server_fcv '3.2'

      let(:result) do
        authorized_collection.with(options).parallel_scan(2)
      end

      context 'when the read concern is valid' do
        let(:options) do
          { read_concern: { level: 'local' }}
        end

        it 'sends the read concern' do
          expect { result }.to_not raise_error
        end
      end

      context 'when the read concern is not valid' do
        let(:options) do
          { read_concern: { level: 'idontknow' }}
        end

        it 'raises an exception' do
          expect { result }.to raise_error(Mongo::Error::OperationFailure)
        end
      end
    end

    context 'when the collection has a read preference' do
      require_topology :single, :replica_set

      before do
        allow(collection.client.cluster).to receive(:single?).and_return(false)
      end

      let(:client) do
        authorized_client.with(server_selection_timeout: 0.2)
      end

      let(:collection) do
        client[authorized_collection.name,
               read: { :mode => :secondary, :tag_sets => [{ 'non' => 'existent' }] }]
      end

      let(:result) do
        collection.parallel_scan(2)
      end

      it 'uses that read preference' do
        expect { result }.to raise_exception(Mongo::Error::NoServerAvailable)
      end
    end
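    # The parallelCollectionScan server command was removed in MongoDB 4.2,
    # which is why this whole describe block is gated with
    # max_server_version '4.0' above.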
    context 'when a max time ms value is provided' do
      require_topology :single, :replica_set

      let(:result) do
        authorized_collection.parallel_scan(2, options)
      end

      context 'when the max time ms value is valid' do
        let(:options) do
          { max_time_ms: 5 }
        end

        it 'sends the max time ms value' do
          expect { result }.to_not raise_error
        end
      end

      context 'when the max time ms is not valid' do
        let(:options) do
          { max_time_ms: 0.1 }
        end

        it 'raises an exception' do
          expect { result }.to raise_error(Mongo::Error::OperationFailure)
        end
      end
    end
  end

  describe '#replace_one' do

    let(:selector) do
      { field: 'test1' }
    end

    context 'when a selector was provided' do
      before do
        authorized_collection.insert_many([{ field: 'test1' }, { field: 'test1' }])
      end

      let!(:response) do
        authorized_collection.replace_one(selector, { field: 'testing' })
      end

      let(:updated) do
        authorized_collection.find(field: 'testing').first
      end

      it 'updates the first matching document in the collection' do
        expect(response.modified_count).to eq(1)
      end

      it 'updates the documents in the collection' do
        expect(updated[:field]).to eq('testing')
      end
    end

    context 'when upsert is false' do
      let!(:response) do
        authorized_collection.replace_one(selector, { field: 'test1' }, upsert: false)
      end

      let(:updated) do
        authorized_collection.find(field: 'test1').to_a
      end

      it 'reports that no documents were written' do
        expect(response.modified_count).to eq(0)
      end

      it 'does not insert the document' do
        expect(updated).to be_empty
      end
    end

    context 'when upsert is true' do
      let!(:response) do
        authorized_collection.replace_one(selector, { field: 'test1' }, upsert: true)
      end

      let(:updated) do
        authorized_collection.find(field: 'test1').first
      end

      it 'reports that a document was written' do
        expect(response.written_count).to eq(1)
      end

      it 'inserts the document' do
        expect(updated[:field]).to eq('test1')
      end
    end

    context 'when upsert is not specified' do
      let!(:response) do
        authorized_collection.replace_one(selector, { field: 'test1' })
      end

      let(:updated) do
        authorized_collection.find(field: 'test1').to_a
      end

      it 'reports that no documents were written' do
        expect(response.modified_count).to eq(0)
      end

      it 'does not insert the document' do
        expect(updated).to be_empty
      end
    end

    context 'when the replace has an invalid key' do

      context "when validate_update_replace is true" do
        config_override :validate_update_replace, true

        let(:result) do
          authorized_collection.replace_one(selector, { '$s' => 'test1' })
        end

        it 'raises an InvalidReplacementDocument error' do
          expect { result }.to raise_exception(Mongo::Error::InvalidReplacementDocument)
        end
      end

      context "when validate_update_replace is false" do
        config_override :validate_update_replace, false

        let(:result) do
          authorized_collection.replace_one(selector, { '$set' => { 'test1' => 1 } })
        end

        it 'does not raise an error' do
          expect { result }.to_not raise_exception
        end
      end
    end

    context 'when collection has a validator' do
      min_server_fcv '3.2'

      around(:each) do |spec|
        collection_with_validator.drop
        authorized_client[:validating, :validator => { :a => { '$exists' => true } }].tap do |c|
          c.create
        end
        spec.run
        collection_with_validator.drop
      end

      before do
        collection_with_validator.insert_one({ a: 1 })
      end

      context 'when the document is valid' do
        let(:result) do
          collection_with_validator.replace_one({ a: 1 }, { a: 5 })
        end

        it 'replaces successfully' do
          expect(result.modified_count).to eq(1)
        end
      end

      context 'when the document is invalid' do

        context 'when bypass_document_validation is not set' do
          let(:result2) do
            collection_with_validator.replace_one({ a: 1 }, { x: 5 })
          end
          it 'raises an OperationFailure' do
            expect { result2 }.to raise_exception(Mongo::Error::OperationFailure)
          end
        end

        context 'when bypass_document_validation is true' do
          let(:result3) do
            collection_with_validator.replace_one(
              { a: 1 }, { x: 1 }, :bypass_document_validation => true)
          end

          it 'replaces successfully' do
            expect(result3.written_count).to eq(1)
          end
        end
      end
    end

    context 'when a collation is specified' do
      let(:selector) do
        { name: 'BANG' }
      end

      let(:result) do
        authorized_collection.replace_one(selector, { name: 'doink' }, options)
      end

      before do
        authorized_collection.insert_one(name: 'bang')
      end

      let(:options) do
        { collation: { locale: 'en_US', strength: 2 } }
      end

      context 'when the server selected supports collations' do
        min_server_fcv '3.4'

        it 'applies the collation' do
          expect(result.written_count).to eq(1)
          expect(authorized_collection.find(name: 'doink').count).to eq(1)
        end

        context 'when unacknowledged writes is used' do
          let(:collection_with_unacknowledged_write_concern) do
            authorized_collection.with(write: { w: 0 })
          end

          let(:result) do
            collection_with_unacknowledged_write_concern.replace_one(selector, { name: 'doink' }, options)
          end

          it 'raises an exception' do
            expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation)
          end

          context 'when a String key is used' do
            let(:options) do
              { 'collation' => { locale: 'en_US', strength: 2 } }
            end

            it 'raises an exception' do
              expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation)
            end
          end
        end
      end

      context 'when the server selected does not support collations' do
        max_server_version '3.2'

        it 'raises an exception' do
          expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation)
        end

        context 'when a String key is used' do
          let(:options) do
            { 'collation' => { locale: 'en_US', strength: 2 } }
          end

          it 'raises an exception' do
            expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation)
          end
        end
      end
    end

    context 'when a collation is not specified' do
      let(:selector) do
        { name: 'BANG' }
      end

      let(:result) do
        authorized_collection.replace_one(selector, { name: 'doink' })
      end

      before do
        authorized_collection.insert_one(name: 'bang')
      end

      it 'does not apply the collation' do
        expect(result.written_count).to eq(0)
        expect(authorized_collection.find(name: 'bang').count).to eq(1)
      end
    end

    context 'when a session is provided' do
      let(:selector) do
        { name: 'BANG' }
      end

      before do
        authorized_collection.insert_one(name: 'bang')
      end

      let(:session) do
        authorized_client.start_session
      end

      let(:operation) do
        authorized_collection.replace_one(selector, { name: 'doink' }, session: session)
      end

      let(:failed_operation) do
        authorized_collection.replace_one({ '$._id' => 1 }, { name: 'doink' }, session: session)
      end

      let(:client) do
        authorized_client
      end

      it_behaves_like 'an operation using a session'
      it_behaves_like 'a failed operation using a session'
    end

    context 'when unacknowledged writes is used with an explicit session' do
      let(:collection_with_unacknowledged_write_concern) do
        authorized_collection.with(write: { w: 0 })
      end

      let(:operation) do
        collection_with_unacknowledged_write_concern.replace_one({ a: 1 }, { x: 5 }, session: session)
      end

      it_behaves_like 'an explicit session with an unacknowledged write'
    end

    context 'when unacknowledged writes is used with an implicit session' do
      let(:collection_with_unacknowledged_write_concern) do
        client.with(write: { w: 0 })[TEST_COLL]
      end

      let(:operation) do
        collection_with_unacknowledged_write_concern.replace_one({ a: 1 }, { x: 5 })
      end

      it_behaves_like 'an implicit session with an unacknowledged write'
    end
    context 'when various options passed in' do
      # w: 2 requires a replica set
      require_topology :replica_set

      # https://jira.mongodb.org/browse/RUBY-2306
      min_server_fcv '3.6'

      before do
        authorized_collection.insert_one({field: 'test1'})
      end

      let(:session) do
        authorized_client.start_session
      end

      let(:events) do
        subscriber.command_started_events('update')
      end

      let(:collection) do
        authorized_collection.with(write_concern: {w: 3})
      end

      let(:updated) do
        collection.find(field: 'test4').first
      end

      let!(:command) do
        Utils.get_command_event(authorized_client, 'update') do |client|
          collection.replace_one(selector, { field: 'test4'},
            session: session, :return_document => :after, write_concern: {w: 2},
            upsert: true, bypass_document_validation: true)
        end.command
      end

      it 'replaces one successfully with correct options sent to server' do
        expect(updated[:field]).to eq('test4')
        expect(events.length).to eq(1)
        expect(command[:writeConcern]).to_not be_nil
        expect(command[:writeConcern][:w]).to eq(2)
        expect(command[:bypassDocumentValidation]).to be(true)
        expect(command[:updates][0][:upsert]).to be(true)
      end
    end
  end

  describe '#update_many' do

    let(:selector) do
      { field: 'test' }
    end

    context 'when a selector was provided' do
      before do
        authorized_collection.insert_many([{ field: 'test' }, { field: 'test' }])
      end

      let!(:response) do
        authorized_collection.update_many(selector, '$set'=> { field: 'testing' })
      end

      let(:updated) do
        authorized_collection.find(field: 'testing').to_a.last
      end

      it 'returns the number updated' do
        expect(response.modified_count).to eq(2)
      end

      it 'updates the documents in the collection' do
        expect(updated[:field]).to eq('testing')
      end
    end

    context 'when upsert is false' do
      let(:response) do
        authorized_collection.update_many(selector, { '$set'=> { field: 'testing' } },
          upsert: false)
      end

      let(:updated) do
        authorized_collection.find.to_a
      end

      it 'reports that no documents were updated' do
        expect(response.modified_count).to eq(0)
      end

      it 'updates no documents in the collection' do
        expect(updated).to be_empty
      end
    end

    context 'when upsert is true' do
      let!(:response) do
        authorized_collection.update_many(selector, { '$set'=> { field: 'testing' } },
          upsert: true)
      end

      let(:updated) do
        authorized_collection.find.sort(_id: 1).to_a.last
      end

      it 'reports that a document was written' do
        expect(response.written_count).to eq(1)
      end

      it 'inserts a document into the collection' do
        expect(updated[:field]).to eq('testing')
      end
    end

    context 'when upsert is not specified' do
      let(:response) do
        authorized_collection.update_many(selector, { '$set'=> { field: 'testing' } })
      end

      let(:updated) do
        authorized_collection.find.to_a
      end

      it 'reports that no documents were updated' do
        expect(response.modified_count).to eq(0)
      end

      it 'updates no documents in the collection' do
        expect(updated).to be_empty
      end
    end

    context 'when arrayFilters is provided' do
      let(:selector) do
        { '$or' => [{ _id: 0 }, { _id: 1 }]}
      end

      context 'when the server supports arrayFilters' do
        min_server_fcv '3.6'

        before do
          authorized_collection.insert_many([{
            _id: 0, x: [ { y: 1 }, { y: 2 }, { y: 3 } ]
          }, {
            _id: 1, x: [ { y: 3 }, { y: 2 }, { y: 1 } ]
          }])
        end

        let(:result) do
          authorized_collection.update_many(selector,
            { '$set' => { 'x.$[i].y' => 5 } }, options)
        end

        context 'when a Symbol key is used' do
          let(:options) do
            { array_filters: [{ 'i.y' => 3 }] }
          end

          it 'applies the arrayFilters' do
            expect(result.matched_count).to eq(2)
            expect(result.modified_count).to eq(2)

            docs = authorized_collection.find(selector, sort: { _id: 1 }).to_a
5 }]) expect(docs[1]['x']).to eq ([{ 'y' => 5 }, { 'y' => 2 }, { 'y' => 1 }]) end end context 'when a String key is used' do let(:options) do { 'array_filters' => [{ 'i.y' => 3 }] } end it 'applies the arrayFilters' do expect(result.matched_count).to eq(2) expect(result.modified_count).to eq(2) docs = authorized_collection.find({}, sort: { _id: 1 }).to_a expect(docs[0]['x']).to eq ([{ 'y' => 1 }, { 'y' => 2 }, { 'y' => 5 }]) expect(docs[1]['x']).to eq ([{ 'y' => 5 }, { 'y' => 2 }, { 'y' => 1 }]) end end end context 'when the server does not support arrayFilters' do max_server_version '3.4' let(:result) do authorized_collection.update_many(selector, { '$set' => { 'x.$[i].y' => 5 } }, options) end context 'when a Symbol key is used' do let(:options) do { array_filters: [{ 'i.y' => 3 }] } end it 'raises an exception' do expect { result }.to raise_exception(Mongo::Error::UnsupportedArrayFilters) end end context 'when a String key is used' do let(:options) do { 'array_filters' => [{ 'i.y' => 3 }] } end it 'raises an exception' do expect { result }.to raise_exception(Mongo::Error::UnsupportedArrayFilters) end end end end context 'when the updates fail' do let(:result) do authorized_collection.update_many(selector, { '$s'=> { field: 'testing' } }) end it 'raises an OperationFailure' do expect { result }.to raise_exception(Mongo::Error::OperationFailure) end end context 'when collection has a validator' do min_server_fcv '3.2' around(:each) do |spec| authorized_client[:validating, :validator => { :a => { '$exists' => true } }].tap do |c| c.create end spec.run collection_with_validator.drop end before do collection_with_validator.insert_many([{ a: 1 }, { a: 2 }]) end context 'when the document is valid' do let(:result) do collection_with_validator.update_many( { :a => { '$gt' => 0 } }, '$inc' => { :a => 1 } ) end it 'updates successfully' do expect(result.modified_count).to eq(2) end end context 'when the document is invalid' do context 'when bypass_document_validation is not set' do let(:result2) do collection_with_validator.update_many( { :a => { '$gt' => 0 } }, '$unset' => { :a => '' }) end it 'raises OperationFailure' do expect { result2 }.to raise_exception(Mongo::Error::OperationFailure) end end context 'when bypass_document_validation is true' do let(:result3) do collection_with_validator.update_many( { :a => { '$gt' => 0 } }, { '$unset' => { :a => '' } }, :bypass_document_validation => true) end it 'updates successfully' do expect(result3.written_count).to eq(2) end end end end context 'when a collation is specified' do let(:selector) do { name: 'BANG' } end let(:result) do authorized_collection.update_many(selector, { '$set' => { other: 'doink' } }, options) end before do authorized_collection.insert_one(name: 'bang') authorized_collection.insert_one(name: 'baNG') end let(:options) do { collation: { locale: 'en_US', strength: 2 } } end context 'when the server selected supports collations' do min_server_fcv '3.4' it 'applies the collation' do expect(result.written_count).to eq(2) expect(authorized_collection.find(other: 'doink').count).to eq(2) end context 'when unacknowledged writes is used' do let(:collection_with_unacknowledged_write_concern) do authorized_collection.with(write: { w: 0 }) end let(:result) do collection_with_unacknowledged_write_concern.update_many(selector, { '$set' => { other: 'doink' } }, options) end it 'raises an exception' do expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation) end context 'when a String key is used' do let(:options) do { 
'collation' => { locale: 'en_US', strength: 2 } } end it 'raises an exception' do expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation) end end end end context 'when the server selected does not support collations' do max_server_version '3.2' it 'raises an exception' do expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation) end context 'when a String key is used' do let(:options) do { 'collation' => { locale: 'en_US', strength: 2 } } end it 'raises an exception' do expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation) end end end end context 'when collation is not specified' do let(:selector) do {name: 'BANG'} end let(:result) do authorized_collection.update_many(selector, { '$set' => {other: 'doink'} }) end before do authorized_collection.insert_one(name: 'bang') authorized_collection.insert_one(name: 'baNG') end it 'does not apply the collation' do expect(result.written_count).to eq(0) end end context 'when a session is provided' do let(:selector) do { name: 'BANG' } end let(:operation) do authorized_collection.update_many(selector, { '$set' => {other: 'doink'} }, session: session) end before do authorized_collection.insert_one(name: 'bang') authorized_collection.insert_one(name: 'baNG') end let(:session) do authorized_client.start_session end let(:failed_operation) do authorized_collection.update_many({ '$._id' => 1 }, { '$set' => {other: 'doink'} }, session: session) end let(:client) do authorized_client end it_behaves_like 'an operation using a session' it_behaves_like 'a failed operation using a session' end context 'when unacknowledged writes is used with an explicit session' do let(:collection_with_unacknowledged_write_concern) do authorized_collection.with(write: { w: 0 }) end let(:operation) do collection_with_unacknowledged_write_concern.update_many({a: 1}, { '$set' => {x: 1} }, session: session) end it_behaves_like 'an explicit session with an unacknowledged write' end context 'when unacknowledged writes is used with an implicit session' do let(:collection_with_unacknowledged_write_concern) do client.with(write: { w: 0 })[TEST_COLL] end let(:operation) do collection_with_unacknowledged_write_concern.update_many({a: 1}, {'$set' => {x: 1}}) end it_behaves_like 'an implicit session with an unacknowledged write' end context 'when various options passed in' do # w: 2 requires a replica set require_topology :replica_set # https://jira.mongodb.org/browse/RUBY-2306 min_server_fcv '3.6' before do collection.insert_many([{ field: 'test' }, { field: 'test2' }], session: session) end let(:session) do authorized_client.start_session end let(:collection) do authorized_collection.with(write_concern: {w: 1}) end let(:events) do subscriber.command_started_events('update') end let!(:command) do Utils.get_command_event(authorized_client, 'update') do |client| collection.update_many(selector, {'$set'=> { field: 'testing' }}, session: session, write_concern: {w: 2}, bypass_document_validation: true, upsert: true) end.command end it 'updates many successfully with correct options sent to server' do expect(events.length).to eq(1) expect(collection.options[:write_concern]).to eq(w: 1) expect(command[:writeConcern][:w]).to eq(2) expect(command[:bypassDocumentValidation]).to be(true) expect(command[:updates][0][:upsert]).to be(true) end end end describe '#update_one' do let(:selector) do { field: 'test1' } end context 'when a selector was provided' do before do authorized_collection.insert_many([{ field: 'test1' }, { field: 'test1' }]) end 
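# A minimal usage sketch (hypothetical data, not part of the suite): with two
# documents matching the selector, update_one modifies only the first match,
# whereas update_many would modify both.
#
#   coll.insert_many([{ field: 'test1' }, { field: 'test1' }])
#   coll.update_one({ field: 'test1' }, { '$set' => { field: 'testing' } }).modified_count
#   # => 1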
let!(:response) do authorized_collection.update_one(selector, '$set'=> { field: 'testing' }) end let(:updated) do authorized_collection.find(field: 'testing').first end it 'updates the first matching document in the collection' do expect(response.modified_count).to eq(1) end it 'updates the documents in the collection' do expect(updated[:field]).to eq('testing') end end context 'when upsert is false' do let(:response) do authorized_collection.update_one(selector, { '$set'=> { field: 'testing' } }, upsert: false) end let(:updated) do authorized_collection.find.to_a end it 'reports that no documents were updated' do expect(response.modified_count).to eq(0) end it 'updates no documents in the collection' do expect(updated).to be_empty end end context 'when upsert is true' do let!(:response) do authorized_collection.update_one(selector, { '$set'=> { field: 'testing' } }, upsert: true) end let(:updated) do authorized_collection.find.first end it 'reports that a document was written' do expect(response.written_count).to eq(1) end it 'inserts a document into the collection' do expect(updated[:field]).to eq('testing') end end context 'when upsert is not specified' do let(:response) do authorized_collection.update_one(selector, { '$set'=> { field: 'testing' } }) end let(:updated) do authorized_collection.find.to_a end it 'reports that no documents were updated' do expect(response.modified_count).to eq(0) end it 'updates no documents in the collection' do expect(updated).to be_empty end end context 'when the update fails' do let(:result) do authorized_collection.update_one(selector, { '$s'=> { field: 'testing' } }) end it 'raises an OperationFailure' do expect { result }.to raise_exception(Mongo::Error::OperationFailure) end end context 'when collection has a validator' do min_server_fcv '3.2' around(:each) do |spec| authorized_client[:validating, :validator => { :a => { '$exists' => true } }].tap do |c| c.create end spec.run collection_with_validator.drop end before do collection_with_validator.insert_one({ a: 1 }) end context 'when the document is valid' do let(:result) do collection_with_validator.update_one( { :a => { '$gt' => 0 } }, '$inc' => { :a => 1 } ) end it 'updates successfully' do expect(result.modified_count).to eq(1) end end context 'when the document is invalid' do context 'when bypass_document_validation is not set' do let(:result2) do collection_with_validator.update_one( { :a => { '$gt' => 0 } }, '$unset' => { :a => '' }) end it 'raises OperationFailure' do expect { result2 }.to raise_exception(Mongo::Error::OperationFailure) end end context 'when bypass_document_validation is true' do let(:result3) do collection_with_validator.update_one( { :a => { '$gt' => 0 } }, { '$unset' => { :a => '' } }, :bypass_document_validation => true) end it 'updates successfully' do expect(result3.written_count).to eq(1) end end end end context 'when there is a collation specified' do let(:selector) do { name: 'BANG' } end let(:result) do authorized_collection.update_one(selector, { '$set' => { other: 'doink' } }, options) end before do authorized_collection.insert_one(name: 'bang') end let(:options) do { collation: { locale: 'en_US', strength: 2 } } end context 'when the server selected supports collations' do min_server_fcv '3.4' it 'applies the collation' do expect(result.written_count).to eq(1) expect(authorized_collection.find(other: 'doink').count).to eq(1) end context 'when unacknowledged writes is used' do let(:collection_with_unacknowledged_write_concern) do authorized_collection.with(write: 
{ w: 0 }) end let(:result) do collection_with_unacknowledged_write_concern.update_one(selector, { '$set' => { other: 'doink' } }, options) end it 'raises an exception' do expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation) end context 'when a String key is used' do let(:options) do { 'collation' => { locale: 'en_US', strength: 2 } } end it 'raises an exception' do expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation) end end end end context 'when the server selected does not support collations' do max_server_version '3.2' it 'raises an exception' do expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation) end context 'when a String key is used' do let(:options) do { 'collation' => { locale: 'en_US', strength: 2 } } end it 'raises an exception' do expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation) end end end end context 'when a collation is not specified' do let(:selector) do { name: 'BANG' } end let(:result) do authorized_collection.update_one(selector, { '$set' => { other: 'doink' } }) end before do authorized_collection.insert_one(name: 'bang') end it 'does not apply the collation' do expect(result.written_count).to eq(0) end end context 'when arrayFilters is provided' do let(:selector) do { _id: 0} end context 'when the server supports arrayFilters' do min_server_fcv '3.6' before do authorized_collection.insert_one(_id: 0, x: [{ y: 1 }, { y: 2 }, {y: 3 }]) end let(:result) do authorized_collection.update_one(selector, { '$set' => { 'x.$[i].y' => 5 } }, options) end context 'when a Symbol key is used' do let(:options) do { array_filters: [{ 'i.y' => 3 }] } end it 'applies the arrayFilters' do expect(result.matched_count).to eq(1) expect(result.modified_count).to eq(1) expect(authorized_collection.find(selector).first['x'].last['y']).to eq(5) end end context 'when a String key is used' do let(:options) do { 'array_filters' => [{ 'i.y' => 3 }] } end it 'applies the arrayFilters' do expect(result.matched_count).to eq(1) expect(result.modified_count).to eq(1) expect(authorized_collection.find(selector).first['x'].last['y']).to eq(5) end end end context 'when the server does not support arrayFilters' do max_server_version '3.4' let(:result) do authorized_collection.update_one(selector, { '$set' => { 'x.$[i].y' => 5 } }, options) end context 'when a Symbol key is used' do let(:options) do { array_filters: [{ 'i.y' => 3 }] } end it 'raises an exception' do expect { result }.to raise_exception(Mongo::Error::UnsupportedArrayFilters) end end context 'when a String key is used' do let(:options) do { 'array_filters' => [{ 'i.y' => 3 }] } end it 'raises an exception' do expect { result }.to raise_exception(Mongo::Error::UnsupportedArrayFilters) end end end end context 'when the documents are sent with OP_MSG' do min_server_fcv '3.6' let(:documents) do [{ '_id' => 1, 'name' => '1'*16777191 }, { '_id' => 'y' }] end before do authorized_collection.insert_many([{ field: 'test1' }, { field: 'test1' }]) client[TEST_COLL].update_one({ a: 1 }, {'$set' => { 'name' => '1'*16777149 }}) end let(:update_events) do subscriber.started_events.select { |e| e.command_name == 'update' } end it 'sends the documents in one OP_MSG' do expect(update_events.size).to eq(1) end end context 'when a session is provided' do before do authorized_collection.insert_many([{ field: 'test1' }, { field: 'test1' }]) end let(:session) do authorized_client.start_session end let(:operation) do authorized_collection.update_one({ field: 'test' }, { '$set'=> { 
field: 'testing' } }, session: session) end let(:failed_operation) do authorized_collection.update_one({ '$._id' => 1 }, { '$set'=> { field: 'testing' } }, session: session) end let(:client) do authorized_client end it_behaves_like 'an operation using a session' it_behaves_like 'a failed operation using a session' end context 'when unacknowledged writes is used with an explicit session' do let(:collection_with_unacknowledged_write_concern) do authorized_collection.with(write: { w: 0 }) end let(:operation) do collection_with_unacknowledged_write_concern.update_one({ a: 1 }, { '$set' => { x: 1 } }, session: session) end it_behaves_like 'an explicit session with an unacknowledged write' end context 'when unacknowledged writes is used with an implicit session' do let(:collection_with_unacknowledged_write_concern) do client.with(write: { w: 0 })[TEST_COLL] end let(:operation) do collection_with_unacknowledged_write_concern.update_one({ a: 1 }, { '$set' => { x: 1 }}) end it_behaves_like 'an implicit session with an unacknowledged write' end context 'when various options passed in' do # w: 2 requires a replica set require_topology :replica_set # https://jira.mongodb.org/browse/RUBY-2306 min_server_fcv '3.6' before do collection.insert_many([{ field: 'test1' }, { field: 'test2' }], session: session) end let(:session) do authorized_client.start_session end let(:collection) do authorized_collection.with(write_concern: {w: 1}) end let(:events) do subscriber.command_started_events('update') end let!(:command) do Utils.get_command_event(authorized_client, 'update') do |client| collection.update_one(selector, { '$set'=> { field: 'testing' } }, session: session, write_concern: {w: 2}, bypass_document_validation: true, :return_document => :after, upsert: true) end.command end it 'updates one successfully with correct options sent to server' do expect(events.length).to eq(1) expect(command[:writeConcern]).to_not be_nil expect(command[:writeConcern][:w]).to eq(2) expect(collection.options[:write_concern]).to eq(w:1) expect(command[:bypassDocumentValidation]).to be(true) expect(command[:updates][0][:upsert]).to be(true) end end end describe '#find_one_and_delete' do before do authorized_collection.insert_many([{ field: 'test1' }]) end let(:selector) do { field: 'test1' } end context 'when a matching document is found' do context 'when a session is provided' do let(:operation) do authorized_collection.find_one_and_delete(selector, session: session) end let(:failed_operation) do authorized_collection.find_one_and_delete({ '$._id' => 1 }, session: session) end let(:session) do authorized_client.start_session end let(:client) do authorized_client end it_behaves_like 'an operation using a session' it_behaves_like 'a failed operation using a session' end context 'when no options are provided' do let!(:document) do authorized_collection.find_one_and_delete(selector) end it 'deletes the document from the database' do expect(authorized_collection.find.to_a).to be_empty end it 'returns the document' do expect(document['field']).to eq('test1') end end context 'when a projection is provided' do let!(:document) do authorized_collection.find_one_and_delete(selector, projection: { _id: 1 }) end it 'deletes the document from the database' do expect(authorized_collection.find.to_a).to be_empty end it 'returns the document with limited fields' do expect(document['field']).to be_nil expect(document['_id']).to_not be_nil end end context 'when a sort is provided' do let!(:document) do 
authorized_collection.find_one_and_delete(selector, sort: { field: 1 }) end it 'deletes the document from the database' do expect(authorized_collection.find.to_a).to be_empty end it 'returns the document with limited fields' do expect(document['field']).to eq('test1') end end context 'when max_time_ms is provided' do it 'includes the max_time_ms value in the command' do expect { authorized_collection.find_one_and_delete(selector, max_time_ms: 0.1) }.to raise_error(Mongo::Error::OperationFailure) end end end context 'when no matching document is found' do let(:selector) do { field: 'test5' } end let!(:document) do authorized_collection.find_one_and_delete(selector) end it 'returns nil' do expect(document).to be_nil end end context 'when the operation fails' do let(:result) do authorized_collection.find_one_and_delete(selector, max_time_ms: 0.1) end it 'raises an OperationFailure' do expect { result }.to raise_exception(Mongo::Error::OperationFailure) end end context 'when write_concern is provided' do min_server_fcv '3.2' require_topology :single it 'uses the write concern' do expect { authorized_collection.find_one_and_delete(selector, write_concern: { w: 2 }) }.to raise_error(Mongo::Error::OperationFailure) end end context 'when the collection has a write concern' do min_server_fcv '3.2' require_topology :single let(:collection) do authorized_collection.with(write: { w: 2 }) end it 'uses the write concern' do expect { collection.find_one_and_delete(selector, write_concern: { w: 2 }) }.to raise_error(Mongo::Error::OperationFailure) end end context 'when collation is specified' do let(:selector) do { name: 'BANG' } end let(:result) do authorized_collection.find_one_and_delete(selector, options) end before do authorized_collection.insert_one(name: 'bang') end let(:options) do { collation: { locale: 'en_US', strength: 2 } } end context 'when the server selected supports collations' do min_server_fcv '3.4' it 'applies the collation' do expect(result['name']).to eq('bang') expect(authorized_collection.find(name: 'bang').count).to eq(0) end end context 'when the server selected does not support collations' do max_server_version '3.2' it 'raises an exception' do expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation) end context 'when a String key is used' do let(:options) do { 'collation' => { locale: 'en_US', strength: 2 } } end it 'raises an exception' do expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation) end end end end context 'when collation is not specified' do let(:selector) do { name: 'BANG' } end let(:result) do authorized_collection.find_one_and_delete(selector) end before do authorized_collection.insert_one(name: 'bang') end it 'does not apply the collation' do expect(result).to be_nil end end context 'when various options passed in' do # w: 2 requires a replica set require_topology :replica_set # https://jira.mongodb.org/browse/RUBY-2306 min_server_fcv '3.6' before do authorized_collection.delete_many authorized_collection.insert_many([{ name: 'test1' }, { name: 'test2' }]) end let(:collection) do authorized_collection.with(write_concern: {w: 2}) end let(:session) do authorized_client.start_session end let!(:command) do Utils.get_command_event(authorized_client, 'findAndModify') do |client| collection.find_one_and_delete(selector, session: session, write_concern: {w: 2}, bypass_document_validation: true, max_time_ms: 300) end.command end let(:events) do subscriber.command_started_events('findAndModify') end it 'finds and deletes successfully 
with correct options sent to server' do expect(events.length).to eq(1) expect(command[:writeConcern]).to_not be_nil expect(command[:writeConcern][:w]).to eq(2) expect(command[:bypassDocumentValidation]).to eq(true) expect(command[:maxTimeMS]).to eq(300) end end end describe '#find_one_and_update' do let(:selector) do { field: 'test1' } end before do authorized_collection.insert_many([{ field: 'test1' }]) end context 'when a matching document is found' do context 'when no options are provided' do let(:document) do authorized_collection.find_one_and_update(selector, { '$set' => { field: 'testing' }}) end it 'returns the original document' do expect(document['field']).to eq('test1') end end context 'when a session is provided' do let(:operation) do authorized_collection.find_one_and_update(selector, { '$set' => { field: 'testing' }}, session: session) end let(:failed_operation) do authorized_collection.find_one_and_update({ '$._id' => 1 }, { '$set' => { field: 'testing' }}, session: session) end let(:session) do authorized_client.start_session end let(:client) do authorized_client end it_behaves_like 'an operation using a session' it_behaves_like 'a failed operation using a session' end context 'when no options are provided' do let(:document) do authorized_collection.find_one_and_update(selector, { '$set' => { field: 'testing' }}) end it 'returns the original document' do expect(document['field']).to eq('test1') end end context 'when return_document options are provided' do context 'when return_document is :after' do let(:document) do authorized_collection.find_one_and_update(selector, { '$set' => { field: 'testing' }}, :return_document => :after) end it 'returns the new document' do expect(document['field']).to eq('testing') end end context 'when return_document is :before' do let(:document) do authorized_collection.find_one_and_update(selector, { '$set' => { field: 'testing' }}, :return_document => :before) end it 'returns the original document' do expect(document['field']).to eq('test1') end end end context 'when a projection is provided' do let(:document) do authorized_collection.find_one_and_update(selector, { '$set' => { field: 'testing' }}, projection: { _id: 1 }) end it 'returns the document with limited fields' do expect(document['field']).to be_nil expect(document['_id']).to_not be_nil end end context 'when a sort is provided' do let(:document) do authorized_collection.find_one_and_update(selector, { '$set' => { field: 'testing' }}, sort: { field: 1 }) end it 'returns the original document' do expect(document['field']).to eq('test1') end end end context 'when max_time_ms is provided' do it 'includes the max_time_ms value in the command' do expect { authorized_collection.find_one_and_update(selector, { '$set' => { field: 'testing' }}, max_time_ms: 0.1) }.to raise_error(Mongo::Error::OperationFailure) end end context 'when no matching document is found' do let(:selector) do { field: 'test5' } end let(:document) do authorized_collection.find_one_and_update(selector, { '$set' => { field: 'testing' }}) end it 'returns nil' do expect(document).to be_nil end end context 'when no matching document is found' do context 'when no upsert options are provided' do let(:selector) do { field: 'test5' } end let(:document) do authorized_collection.find_one_and_update(selector, { '$set' => { field: 'testing' }}) end it 'returns nil' do expect(document).to be_nil end end context 'when upsert options are provided' do let(:selector) do { field: 'test5' } end let(:document) do 
authorized_collection.find_one_and_update(selector, { '$set' => { field: 'testing' }}, :upsert => true, :return_document => :after) end it 'returns the new document' do expect(document['field']).to eq('testing') end end end context 'when the operation fails' do let(:result) do authorized_collection.find_one_and_update(selector, { '$set' => { field: 'testing' }}, max_time_ms: 0.1) end it 'raises an OperationFailure' do expect { result }.to raise_exception(Mongo::Error::OperationFailure) end end context 'when collection has a validator' do min_server_fcv '3.2' around(:each) do |spec| authorized_client[:validating].drop authorized_client[:validating, :validator => { :a => { '$exists' => true } }].tap do |c| c.create end spec.run collection_with_validator.drop end before do collection_with_validator.insert_one({ a: 1 }) end context 'when the document is valid' do let(:result) do collection_with_validator.find_one_and_update( { a: 1 }, { '$inc' => { :a => 1 } }, :return_document => :after) end it 'updates successfully' do expect(result['a']).to eq(2) end end context 'when the document is invalid' do context 'when bypass_document_validation is not set' do let(:result2) do collection_with_validator.find_one_and_update( { a: 1 }, { '$unset' => { :a => '' } }, :return_document => :after) end it 'raises OperationFailure' do expect { result2 }.to raise_exception(Mongo::Error::OperationFailure) end end context 'when bypass_document_validation is true' do let(:result3) do collection_with_validator.find_one_and_update( { a: 1 }, { '$unset' => { :a => '' } }, :bypass_document_validation => true, :return_document => :after) end it 'updates successfully' do expect(result3['a']).to be_nil end end end end context 'when write_concern is provided' do min_server_fcv '3.2' require_topology :single it 'uses the write concern' do expect { authorized_collection.find_one_and_update(selector, { '$set' => { field: 'testing' }}, write_concern: { w: 2 }) }.to raise_error(Mongo::Error::OperationFailure) end end context 'when the collection has a write concern' do min_server_fcv '3.2' require_topology :single let(:collection) do authorized_collection.with(write: { w: 2 }) end it 'uses the write concern' do expect { collection.find_one_and_update(selector, { '$set' => { field: 'testing' }}, write_concern: { w: 2 }) }.to raise_error(Mongo::Error::OperationFailure) end end context 'when a collation is specified' do let(:selector) do { name: 'BANG' } end let(:result) do authorized_collection.find_one_and_update(selector, { '$set' => { other: 'doink' } }, options) end before do authorized_collection.insert_one(name: 'bang') end let(:options) do { collation: { locale: 'en_US', strength: 2 } } end context 'when the server selected supports collations' do min_server_fcv '3.4' it 'applies the collation' do expect(result['name']).to eq('bang') expect(authorized_collection.find({ name: 'bang' }, limit: -1).first['other']).to eq('doink') end end context 'when the server selected does not support collations' do max_server_version '3.2' it 'raises an exception' do expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation) end context 'when a String key is used' do let(:options) do { 'collation' => { locale: 'en_US', strength: 2 } } end it 'raises an exception' do expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation) end end end end context 'when there is no collation specified' do let(:selector) do { name: 'BANG' } end let(:result) do authorized_collection.find_one_and_update(selector, { '$set' => { 
other: 'doink' } }) end before do authorized_collection.insert_one(name: 'bang') end it 'does not apply the collation' do expect(result).to be_nil end end context 'when arrayFilters is provided' do let(:selector) do { _id: 0 } end context 'when the server supports arrayFilters' do min_server_fcv '3.6' before do authorized_collection.insert_one(_id: 0, x: [{ y: 1 }, { y: 2 }, { y: 3 }]) end let(:result) do authorized_collection.find_one_and_update(selector, { '$set' => { 'x.$[i].y' => 5 } }, options) end context 'when a Symbol key is used' do let(:options) do { array_filters: [{ 'i.y' => 3 }] } end it 'applies the arrayFilters' do expect(result['x']).to eq([{ 'y' => 1 }, { 'y' => 2 }, { 'y' => 3 }]) expect(authorized_collection.find(selector).first['x'].last['y']).to eq(5) end end context 'when a String key is used' do let(:options) do { 'array_filters' => [{ 'i.y' => 3 }] } end it 'applies the arrayFilters' do expect(result['x']).to eq([{ 'y' => 1 }, { 'y' => 2 }, { 'y' => 3 }]) expect(authorized_collection.find(selector).first['x'].last['y']).to eq(5) end end end context 'when the server selected does not support arrayFilters' do max_server_version '3.4' let(:result) do authorized_collection.find_one_and_update(selector, { '$set' => { 'x.$[i].y' => 5 } }, options) end context 'when a Symbol key is used' do let(:options) do { array_filters: [{ 'i.y' => 3 }] } end it 'raises an exception' do expect { result }.to raise_exception(Mongo::Error::UnsupportedArrayFilters) end end context 'when a String key is used' do let(:options) do { 'array_filters' => [{ 'i.y' => 3 }] } end it 'raises an exception' do expect { result }.to raise_exception(Mongo::Error::UnsupportedArrayFilters) end end end end context 'when various options passed in' do # w: 2 requires a replica set require_topology :replica_set # https://jira.mongodb.org/browse/RUBY-2306 min_server_fcv '3.6' let(:session) do authorized_client.start_session end let(:events) do subscriber.command_started_events('findAndModify') end let(:collection) do authorized_collection.with(write_concern: {w: 2}) end let(:selector) do {field: 'test1'} end before do collection.insert_one({field: 'test1'}, session: session) end let!(:command) do Utils.get_command_event(authorized_client, 'findAndModify') do |client| collection.find_one_and_update(selector, { '$set' => {field: 'testing'}}, :return_document => :after, write_concern: {w: 1}, upsert: true, bypass_document_validation: true, max_time_ms: 100, session: session) end.command end it 'find and updates successfully with correct options sent to server' do expect(events.length).to eq(1) expect(command[:writeConcern]).to_not be_nil expect(command[:writeConcern][:w]).to eq(1) expect(command[:upsert]).to eq(true) expect(command[:bypassDocumentValidation]).to be(true) expect(command[:maxTimeMS]).to eq(100) end end end describe '#find_one_and_replace' do before do authorized_collection.insert_many([{ field: 'test1', other: 'sth' }]) end let(:selector) do { field: 'test1' } end context 'when a matching document is found' do context 'when no options are provided' do let(:document) do authorized_collection.find_one_and_replace(selector, { field: 'testing' }) end it 'returns the original document' do expect(document['field']).to eq('test1') end end context 'when a session is provided' do let(:operation) do authorized_collection.find_one_and_replace(selector, { field: 'testing' }, session: session) end let(:failed_operation) do authorized_collection.find_one_and_replace({ '$._id' => 1}, { field: 'testing' }, session: 
session) end let(:session) do authorized_client.start_session end let(:client) do authorized_client end it_behaves_like 'an operation using a session' it_behaves_like 'a failed operation using a session' end context 'when return_document options are provided' do context 'when return_document is :after' do let(:document) do authorized_collection.find_one_and_replace(selector, { field: 'testing' }, :return_document => :after) end it 'returns the new document' do expect(document['field']).to eq('testing') end end context 'when return_document is :before' do let(:document) do authorized_collection.find_one_and_replace(selector, { field: 'testing' }, :return_document => :before) end it 'returns the original document' do expect(document['field']).to eq('test1') end end end context 'when a projection is provided' do let(:document) do authorized_collection.find_one_and_replace(selector, { field: 'testing' }, projection: { _id: 1 }) end it 'returns the document with limited fields' do expect(document['field']).to be_nil expect(document['_id']).to_not be_nil end end context 'when a sort is provided' do let(:document) do authorized_collection.find_one_and_replace(selector, { field: 'testing' }, :sort => { field: 1 }) end it 'returns the original document' do expect(document['field']).to eq('test1') end end end context 'when no matching document is found' do context 'when no upsert options are provided' do let(:selector) do { field: 'test5' } end let(:document) do authorized_collection.find_one_and_replace(selector, { field: 'testing' }) end it 'returns nil' do expect(document).to be_nil end end context 'when upsert options are provided' do let(:selector) do { field: 'test5' } end let(:document) do authorized_collection.find_one_and_replace(selector, { field: 'testing' }, :upsert => true, :return_document => :after) end it 'returns the new document' do expect(document['field']).to eq('testing') end end end context 'when max_time_ms is provided' do it 'includes the max_time_ms value in the command' do expect { authorized_collection.find_one_and_replace(selector, { field: 'testing' }, max_time_ms: 0.1) }.to raise_error(Mongo::Error::OperationFailure) end end context 'when the operation fails' do let(:result) do authorized_collection.find_one_and_replace(selector, { field: 'testing' }, max_time_ms: 0.1) end it 'raises an OperationFailure' do expect { result }.to raise_exception(Mongo::Error::OperationFailure) end end context 'when collection has a validator' do min_server_fcv '3.2' around(:each) do |spec| authorized_client[:validating].drop authorized_client[:validating, :validator => { :a => { '$exists' => true } }].tap do |c| c.create end spec.run collection_with_validator.drop end before do collection_with_validator.insert_one({ a: 1 }) end context 'when the document is valid' do let(:result) do collection_with_validator.find_one_and_replace( { a: 1 }, { a: 5 }, :return_document => :after) end it 'replaces successfully when document is valid' do expect(result[:a]).to eq(5) end end context 'when the document is invalid' do context 'when bypass_document_validation is not set' do let(:result2) do collection_with_validator.find_one_and_replace( { a: 1 }, { x: 5 }, :return_document => :after) end it 'raises OperationFailure' do expect { result2 }.to raise_exception(Mongo::Error::OperationFailure) end end context 'when bypass_document_validation is true' do let(:result3) do collection_with_validator.find_one_and_replace( { a: 1 }, { x: 1 }, :bypass_document_validation => true, :return_document => :after) 
end it 'replaces successfully' do expect(result3[:x]).to eq(1) expect(result3[:a]).to be_nil end end end end context 'when write_concern is provided' do min_server_fcv '3.2' require_topology :single it 'uses the write concern' do expect { authorized_collection.find_one_and_replace(selector, { field: 'testing' }, write_concern: { w: 2 }) }.to raise_error(Mongo::Error::OperationFailure) end end context 'when the collection has a write concern' do min_server_fcv '3.2' require_topology :single let(:collection) do authorized_collection.with(write: { w: 2 }) end it 'uses the write concern' do expect { collection.find_one_and_replace(selector, { field: 'testing' }, write_concern: { w: 2 }) }.to raise_error(Mongo::Error::OperationFailure) end end context 'when collation is provided' do let(:selector) do { name: 'BANG' } end let(:result) do authorized_collection.find_one_and_replace(selector, { name: 'doink' }, options) end before do authorized_collection.insert_one(name: 'bang') end let(:options) do { collation: { locale: 'en_US', strength: 2 } } end context 'when the server selected supports collations' do min_server_fcv '3.4' it 'applies the collation' do expect(result['name']).to eq('bang') expect(authorized_collection.find(name: 'doink').count).to eq(1) end end context 'when the server selected does not support collations' do max_server_version '3.2' it 'raises an exception' do expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation) end context 'when a String key is used' do let(:options) do { 'collation' => { locale: 'en_US', strength: 2 } } end it 'raises an exception' do expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation) end end end end context 'when collation is not specified' do let(:selector) do { name: 'BANG' } end let(:result) do authorized_collection.find_one_and_replace(selector, { name: 'doink' }) end before do authorized_collection.insert_one(name: 'bang') end it 'does not apply the collation' do expect(result).to be_nil end end context 'when various options passed in' do # https://jira.mongodb.org/browse/RUBY-2306 min_server_fcv '3.6' before do authorized_collection.insert_one({field: 'test1'}) end let(:session) do authorized_client.start_session end let(:events) do subscriber.command_started_events('findAndModify') end let(:collection) do authorized_collection.with(write_concern: { w: 2 }) end let!(:command) do Utils.get_command_event(authorized_client, 'findAndModify') do |client| collection.find_one_and_replace(selector, { '$set' => {field: 'test5'}}, :return_document => :after, write_concern: {w: 1}, session: session, upsert: true, bypass_document_validation: false, max_time_ms: 200) end.command end it 'find and replaces successfully with correct options sent to server' do expect(events.length).to eq(1) expect(command[:writeConcern]).to_not be_nil expect(command[:writeConcern][:w]).to eq(1) expect(command[:upsert]).to be(true) expect(command[:bypassDocumentValidation]).to be false expect(command[:maxTimeMS]).to eq(200) end end end context 'when unacknowledged writes is used on find_one_and_update' do let(:selector) do { name: 'BANG' } end let(:collection_with_unacknowledged_write_concern) do authorized_collection.with(write: { w: 0 }) end let(:result) do collection_with_unacknowledged_write_concern.find_one_and_update(selector, { '$set' => { field: 'testing' }}, write_concern: { w: 0 }) end it 'does not raise an exception' do expect(result).to be_nil end end context "when creating collection with view_on and pipeline" do before do 
authorized_client["my_view"].drop authorized_collection.insert_one({ bar: "here!" }) authorized_client["my_view", view_on: authorized_collection.name, pipeline: [ { :'$project' => { "baz": "$bar" } } ] ].create end it "the view has a document" do expect(authorized_client["my_view"].find.to_a.length).to eq(1) end it "applies the pipeline" do expect(authorized_client["my_view"].find.first).to have_key("baz") expect(authorized_client["my_view"].find.first["baz"]).to eq("here!") end end end mongo-ruby-driver-2.21.3/spec/mongo/collection_ddl_spec.rb000066400000000000000000000347431505113246500235710ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe Mongo::Collection do let(:subscriber) { Mrss::EventSubscriber.new } let(:client) do authorized_client.tap do |client| client.subscribe(Mongo::Monitoring::COMMAND, subscriber) end end let(:authorized_collection) { client['collection_spec'] } before do authorized_client['collection_spec'].drop end describe '#create' do before do authorized_client[:specs].drop end let(:database) do authorized_client.database end context 'when the collection has no options' do let(:collection) do described_class.new(database, :specs) end let!(:response) do collection.create end it 'executes the command' do expect(response).to be_successful end it 'creates the collection in the database' do expect(database.collection_names).to include('specs') end end context 'when the collection has options' do context 'when the collection is capped' do shared_examples 'a capped collection command' do let!(:response) do collection.create end let(:options) do { :capped => true, :size => 1024 } end it 'executes the command' do expect(response).to be_successful end it 'sets the collection as capped' do expect(collection).to be_capped end it 'creates the collection in the database' do expect(database.collection_names).to include('specs') end end shared_examples 'a validated collection command' do let!(:response) do collection.create end let(:options) do { :validator => { fieldName: { '$gte' => 1024 } }, :validationLevel => 'strict' } end let(:collection_info) do database.list_collections.find { |i| i['name'] == 'specs' } end it 'executes the command' do expect(response).to be_successful end it 'sets the collection with validators' do expect(collection_info['options']['validator']).to eq({ 'fieldName' => { '$gte' => 1024 } }) end it 'creates the collection in the database' do expect(database.collection_names).to include('specs') end end context 'when instantiating a collection directly' do let(:collection) do described_class.new(database, :specs, options) end it_behaves_like 'a capped collection command' it_behaves_like 'a validated collection command' end context 'when instantiating a collection through the database' do let(:collection) do authorized_client[:specs, options] end it_behaves_like 'a capped collection command' it_behaves_like 'a validated collection command' end context 'when instantiating a collection using create' do before do authorized_client[:specs].drop end let!(:response) do authorized_client[:specs].create(options) end let(:collection) do authorized_client[:specs] end let(:collstats) do collection.aggregate([ {'$collStats' => { 'storageStats' => {} }} ]).first end let(:storage_stats) do collstats.fetch('storageStats', {}) end let(:options) do { :capped => true, :size => 4096, :max => 512 } end it 'executes the command' do expect(response).to be_successful end it 'sets the collection as capped' do 
expect(collection).to be_capped end it 'creates the collection in the database' do expect(database.collection_names).to include('specs') end it "applies the options" do expect(storage_stats["capped"]).to be true expect(storage_stats["max"]).to eq(512) expect(storage_stats["maxSize"]).to eq(4096) end end end context 'when the collection has a write concern' do before do database[:specs].drop end let(:options) do { write: INVALID_WRITE_CONCERN } end let(:collection) do described_class.new(database, :specs, options) end context 'when the server supports write concern on the create command' do require_topology :replica_set it 'applies the write concern' do expect{ collection.create }.to raise_exception(Mongo::Error::OperationFailure) end end context 'when write concern passed in as an option' do require_topology :replica_set before do database['collection_spec'].drop end let(:events) do subscriber.command_started_events('create') end let(:options) do { write_concern: {w: 1} } end let!(:collection) do authorized_collection.with(options) end let!(:command) do Utils.get_command_event(authorized_client, 'create') do |client| collection.create({ write_concern: {w: 2} }) end.command end it 'applies the write concern passed in as an option' do expect(events.length).to eq(1) expect(command[:writeConcern][:w]).to eq(2) end end end context 'when the collection has a collation' do shared_examples 'a collection command with a collation option' do let(:response) do collection.create end let(:options) do { :collation => { locale: 'fr' } } end let(:collection_info) do database.list_collections.find { |i| i['name'] == 'specs' } end before do collection.drop end it 'executes the command' do expect(response).to be_successful end it 'sets the collection with a collation' do response expect(collection_info['options']['collation']['locale']).to eq('fr') end it 'creates the collection in the database' do response expect(database.collection_names).to include('specs') end end context 'when instantiating a collection directly' do let(:collection) do described_class.new(database, :specs, options) end it_behaves_like 'a collection command with a collation option' end context 'when instantiating a collection through the database' do let(:collection) do authorized_client[:specs, options] end it_behaves_like 'a collection command with a collation option' end context 'when passing the options through create' do let(:collection) do authorized_client[:specs] end let(:response) do collection.create(options) end let(:options) do { :collation => { locale: 'fr' } } end let(:collection_info) do database.list_collections.find { |i| i['name'] == 'specs' } end before do collection.drop end it 'executes the command' do expect(response).to be_successful end it 'sets the collection with a collation' do response expect(collection_info['options']['collation']['locale']).to eq('fr') end it 'creates the collection in the database' do response expect(database.collection_names).to include('specs') end end end context 'when a session is provided' do let(:collection) do authorized_client[:specs] end let(:operation) do collection.create(session: session) end let(:session) do authorized_client.start_session end let(:client) do authorized_client end let(:failed_operation) do authorized_client[:specs, invalid: true].create(session: session) end before do collection.drop end it_behaves_like 'an operation using a session' it_behaves_like 'a failed operation using a session' end end context 'when collation has a strength' do let(:band_collection) do 
described_class.new(database, :bands) end before do band_collection.delete_many band_collection.insert_many([{ name: "Depeche Mode" }, { name: "New Order" }]) end let(:options) do { collation: { locale: 'en_US', strength: 2 } } end let(:band_result) do band_collection.find({ name: 'DEPECHE MODE' }, options) end it 'finds Capitalize from UPPER CASE' do expect(band_result.count_documents).to eq(1) end end end describe '#drop' do let(:database) do authorized_client.database end let(:collection) do described_class.new(database, :specs) end context 'when the collection exists' do before do authorized_client[:specs].drop collection.create # wait for the collection to be created sleep 0.4 end context 'when a session is provided' do let(:operation) do collection.drop(session: session) end let(:failed_operation) do collection.with(write: INVALID_WRITE_CONCERN).drop(session: session) end let(:session) do authorized_client.start_session end let(:client) do authorized_client end it_behaves_like 'an operation using a session' context 'can set write concern' do require_set_write_concern it_behaves_like 'a failed operation using a session' end end context 'when the collection does not have a write concern set' do let!(:response) do collection.drop end it 'executes the command' do expect(response).to be_successful end it 'drops the collection from the database' do expect(database.collection_names).to_not include('specs') end context 'when the collection does not exist' do require_set_write_concern max_server_fcv '6.99.99' it 'does not raise an error' do expect(database['non-existent-coll'].drop).to be(false) end end end context 'when the collection has a write concern' do let(:write_options) do { write: INVALID_WRITE_CONCERN } end let(:collection_with_write_options) do collection.with(write_options) end context 'when the server supports write concern on the drop command' do require_set_write_concern it 'applies the write concern' do expect{ collection_with_write_options.drop }.to raise_exception(Mongo::Error::OperationFailure) end end context 'when write concern passed in as an option' do require_set_write_concern let(:events) do subscriber.command_started_events('drop') end let(:options) do { write_concern: {w: 1} } end let!(:collection) do authorized_collection.with(options) end let!(:command) do Utils.get_command_event(authorized_client, 'drop') do |client| collection.drop({ write_concern: {w: 0} }) end.command end it 'applies the write concern passed in as an option' do expect(events.length).to eq(1) expect(command[:writeConcern][:w]).to eq(0) end end end end context 'when the collection does not exist' do require_set_write_concern max_server_fcv '6.99.99' before do begin collection.drop rescue Mongo::Error::OperationFailure end end it 'returns false' do expect(collection.drop).to be(false) end end context "when providing a pipeline in create" do let(:options) do { view_on: "specs", pipeline: [ { :'$project' => { "baz": "$bar" } } ] } end before do authorized_client["my_view"].drop authorized_client[:specs].drop end it "the pipeline gets passed to the command" do expect(Mongo::Operation::Create).to receive(:new).and_wrap_original do |m, *args| expect(args.first.slice(:selector)[:selector]).to have_key(:pipeline) expect(args.first.slice(:selector)[:selector]).to have_key(:viewOn) m.call(*args) end expect_any_instance_of(Mongo::Operation::Create).to receive(:execute) authorized_client[:specs].create(options) end end end describe '#indexes' do let(:index_spec) do { name: 1 } end let(:batch_size) { nil } 
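    # A reference sketch (hypothetical collection name, not part of the suite)
    # of the listing API exercised below: #indexes returns an index view that
    # is enumerated like a cursor, and :batch_size bounds how many index
    # documents each server batch returns.
    #
    #   coll.indexes.create_one({ name: 1 }, unique: true)
    #   coll.indexes(batch_size: 1).collect { |i| i['name'] }
    #   # => e.g. ["_id_", "name_1"]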
let(:index_names) do authorized_collection.indexes(batch_size: batch_size).collect { |i| i['name'] } end before do authorized_collection.indexes.create_one(index_spec, unique: true) end it 'returns a list of indexes' do expect(index_names).to include(*'name_1', '_id_') end context 'when a session is provided' do require_wired_tiger let(:session) do authorized_client.start_session end let(:operation) do authorized_collection.indexes(batch_size: batch_size, session: session).collect { |i| i['name'] } end let(:failed_operation) do authorized_collection.indexes(batch_size: -100, session: session).collect { |i| i['name'] } end let(:client) do authorized_client end it_behaves_like 'an operation using a session' it_behaves_like 'a failed operation using a session' end context 'when batch size is specified' do let(:batch_size) { 1 } it 'returns a list of indexes' do expect(index_names).to include(*'name_1', '_id_') end end end end mongo-ruby-driver-2.21.3/spec/mongo/collection_spec.rb000066400000000000000000000560231505113246500227410ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe Mongo::Collection do let(:subscriber) { Mrss::EventSubscriber.new } let(:client) do authorized_client.tap do |client| client.subscribe(Mongo::Monitoring::COMMAND, subscriber) end end let(:authorized_collection) { client['collection_spec'] } before do authorized_client['collection_spec'].drop end describe '#==' do let(:database) do Mongo::Database.new(authorized_client, :test) end let(:collection) do described_class.new(database, :users) end context 'when the names are the same' do context 'when the databases are the same' do let(:other) do described_class.new(database, :users) end it 'returns true' do expect(collection).to eq(other) end end context 'when the databases are not the same' do let(:other_db) do Mongo::Database.new(authorized_client, :testing) end let(:other) do described_class.new(other_db, :users) end it 'returns false' do expect(collection).to_not eq(other) end end context 'when the options are the same' do let(:other) do described_class.new(database, :users) end it 'returns true' do expect(collection).to eq(other) end end context 'when the options are not the same' do let(:other) do described_class.new(database, :users, :capped => true) end it 'returns false' do expect(collection).to_not eq(other) end end end context 'when the names are not the same' do let(:other) do described_class.new(database, :sounds) end it 'returns false' do expect(collection).to_not eq(other) end end context 'when the object is not a collection' do it 'returns false' do expect(collection).to_not eq('test') end end end describe '#initialize' do let(:client) do new_local_client(SpecConfig.instance.addresses, SpecConfig.instance.test_options.merge(monitoring_io: false)) end let(:database) { client.database } context 'write concern given in :write option' do let(:collection) do Mongo::Collection.new(database, 'foo', write: {w: 1}) end it 'stores write concern' do expect(collection.write_concern).to be_a(Mongo::WriteConcern::Acknowledged) expect(collection.write_concern.options).to eq(w: 1) end it 'stores write concern under :write' do expect(collection.options[:write]).to eq(w: 1) expect(collection.options[:write_concern]).to be nil end end context 'write concern given in :write_concern option' do let(:collection) do Mongo::Collection.new(database, 'foo', write_concern: {w: 1}) end it 'stores write concern' do expect(collection.write_concern).to 
be_a(Mongo::WriteConcern::Acknowledged) expect(collection.write_concern.options).to eq(w: 1) end it 'stores write concern under :write_concern' do expect(collection.options[:write_concern]).to eq(w: 1) expect(collection.options[:write]).to be nil end end context 'write concern given in both :write and :write_concern options' do context 'identical values' do let(:collection) do Mongo::Collection.new(database, 'foo', write: {w: 1}, write_concern: {w: 1}) end it 'stores write concern' do expect(collection.write_concern).to be_a(Mongo::WriteConcern::Acknowledged) expect(collection.write_concern.options).to eq(w: 1) end it 'stores write concern under both options' do expect(collection.options[:write]).to eq(w: 1) expect(collection.options[:write_concern]).to eq(w: 1) end end context 'different values' do let(:collection) do Mongo::Collection.new(database, 'foo', write: {w: 1}, write_concern: {w: 2}) end it 'raises an exception' do expect do collection end.to raise_error(ArgumentError, /If :write and :write_concern are both given, they must be identical/) end end end =begin WriteConcern object support context 'when write concern is provided via a WriteConcern object' do let(:collection) do Mongo::Collection.new(database, 'foo', write_concern: wc) end let(:wc) { Mongo::WriteConcern.get(w: 2) } it 'stores write concern options in collection options' do expect(collection.options[:write_concern]).to eq(w: 2) end it 'caches write concern object' do expect(collection.write_concern).to be wc end end =end end describe '#with' do let(:client) do new_local_client_nmio(SpecConfig.instance.addresses, SpecConfig.instance.test_options.merge( SpecConfig.instance.auth_options )) end let(:database) do Mongo::Database.new(client, SpecConfig.instance.test_db) end let(:collection) do database.collection('test-collection') end let(:new_collection) do collection.with(new_options) end context 'when new read options are provided' do let(:new_options) do { read: { mode: :secondary } } end it 'returns a new collection' do expect(new_collection).not_to be(collection) end it 'sets the new read options on the new collection' do expect(new_collection.read_preference).to eq(new_options[:read]) end context 'when the client has a server selection timeout setting' do let(:client) do new_local_client(SpecConfig.instance.addresses, SpecConfig.instance.test_options.merge(server_selection_timeout: 2, monitoring_io: false)) end it 'passes the the server_selection_timeout to the cluster' do expect(client.cluster.options[:server_selection_timeout]).to eq(client.options[:server_selection_timeout]) end end context 'when the client has a read preference set' do let(:client) do authorized_client.with(client_options).tap do |client| expect(client.options[:read]).to eq(Mongo::Options::Redacted.new( mode: :primary_preferred)) client.subscribe(Mongo::Monitoring::COMMAND, subscriber) end end let(:client_options) do { read: { mode: :primary_preferred }, monitoring_io: false, } end let(:new_options) do { read: { mode: :secondary } } end it 'sets the new read options on the new collection' do # This is strictly a Hash, not a BSON::Document like the client's # read preference. expect(new_collection.read_preference).to eq(mode: :secondary) end it 'duplicates the read option' do expect(new_collection.read_preference).not_to eql(client.read_preference) end context 'when reading from collection' do # Since we are requesting a secondary read, we need a replica set. 
require_topology :replica_set let(:client_options) do {read: { mode: :primary_preferred }} end shared_examples_for "uses collection's read preference when reading" do it "uses collection's read preference when reading" do expect do new_collection.find.to_a.count end.not_to raise_error event = subscriber.started_events.detect do |event| event.command['find'] end actual_rp = event.command['$readPreference'] expect(actual_rp).to eq(expected_read_preference) end end context 'post-OP_MSG server' do min_server_fcv '3.6' context 'standalone' do require_topology :single let(:expected_read_preference) do nil end it_behaves_like "uses collection's read preference when reading" end context 'RS, sharded' do require_topology :replica_set, :sharded let(:expected_read_preference) do {'mode' => 'secondary'} end it_behaves_like "uses collection's read preference when reading" end end context 'pre-OP-MSG server' do max_server_version '3.4' let(:expected_read_preference) do nil end it_behaves_like "uses collection's read preference when reading" end end end context 'when the client has a read preference and server selection timeout set' do let(:client) do new_local_client(SpecConfig.instance.addresses, SpecConfig.instance.test_options.merge( read: { mode: :primary_preferred }, server_selection_timeout: 2, monitoring_io: false )) end it 'sets the new read options on the new collection' do expect(new_collection.read_preference).to eq(new_options[:read]) end it 'passes the server_selection_timeout setting to the cluster' do expect(client.cluster.options[:server_selection_timeout]).to eq(client.options[:server_selection_timeout]) end end end context 'when new write options are provided' do let(:new_options) do { write: { w: 5 } } end it 'returns a new collection' do expect(new_collection).not_to be(collection) end it 'sets the new write options on the new collection' do expect(new_collection.write_concern.options).to eq(Mongo::WriteConcern.get(new_options[:write]).options) end context 'when the client has a write concern set' do let(:client) do new_local_client(SpecConfig.instance.addresses, SpecConfig.instance.test_options.merge( write: INVALID_WRITE_CONCERN, monitoring_io: false, )) end it 'sets the new write options on the new collection' do expect(new_collection.write_concern.options).to eq(Mongo::WriteConcern.get(new_options[:write]).options) end context 'when client uses :write_concern and collection uses :write' do let(:client) do new_local_client(SpecConfig.instance.addresses, SpecConfig.instance.test_options.merge( write_concern: {w: 1}, monitoring_io: false, )) end it 'uses :write from collection options only' do expect(new_collection.options[:write]).to eq(w: 5) expect(new_collection.options[:write_concern]).to be nil end end context 'when client uses :write and collection uses :write_concern' do let(:client) do new_local_client(SpecConfig.instance.addresses, SpecConfig.instance.test_options.merge( write: {w: 1}, monitoring_io: false, )) end let(:new_options) do { write_concern: { w: 5 } } end it 'uses :write_concern from collection options only' do expect(new_collection.options[:write_concern]).to eq(w: 5) expect(new_collection.options[:write]).to be nil end end context 'when collection previously had :wrte_concern and :write is used with a different value' do let(:collection) do database.collection(:users, write_concern: {w: 2}) end let(:new_options) do { write: { w: 5 } } end it 'uses the new option' do expect(new_collection.options[:write]).to eq(w: 5) 
expect(new_collection.options[:write_concern]).to be nil end end context 'when collection previously had :write and :write_concern is used with a different value' do let(:collection) do database.collection(:users, write: {w: 2}) end let(:new_options) do { write_concern: { w: 5 } } end it 'uses the new option' do expect(new_collection.options[:write_concern]).to eq(w: 5) expect(new_collection.options[:write]).to be nil end end context 'when collection previously had :write_concern and :write is used with the same value' do let(:collection) do database.collection(:users, write_concern: {w: 2}) end let(:new_options) do { write: { w: 2 } } end it 'uses the new option' do expect(new_collection.options[:write]).to eq(w: 2) expect(new_collection.options[:write_concern]).to be nil end end context 'when collection previously had :write and :write_concern is used with the same value' do let(:collection) do database.collection(:users, write: {w: 2}) end let(:new_options) do { write_concern: { w: 2 } } end it 'uses the new option' do expect(new_collection.options[:write]).to be nil expect(new_collection.options[:write_concern]).to eq(w: 2) end end end end context 'when new read and write options are provided' do let(:new_options) do { read: { mode: :secondary }, write: { w: 4 } } end it 'returns a new collection' do expect(new_collection).not_to be(collection) end it 'sets the new read options on the new collection' do expect(new_collection.read_preference).to eq(new_options[:read]) end it 'sets the new write options on the new collection' do expect(new_collection.write_concern.options).to eq(Mongo::WriteConcern.get(new_options[:write]).options) end context 'when the client has a server selection timeout setting' do let(:client) do new_local_client(SpecConfig.instance.addresses, SpecConfig.instance.test_options.merge( server_selection_timeout: 2, monitoring_io: false, )) end it 'passes the server_selection_timeout setting to the cluster' do expect(client.cluster.options[:server_selection_timeout]).to eq(client.options[:server_selection_timeout]) end end context 'when the client has a read preference set' do let(:client) do new_local_client(SpecConfig.instance.addresses, SpecConfig.instance.test_options.merge( read: { mode: :primary_preferred }, monitoring_io: false, )) end it 'sets the new read options on the new collection' do expect(new_collection.read_preference).to eq(new_options[:read]) expect(new_collection.read_preference).not_to be(client.read_preference) end end end context 'when neither read nor write options are provided' do let(:new_options) do { some_option: 'invalid' } end it 'raises an error' do expect { new_collection }.to raise_exception(Mongo::Error::UnchangeableCollectionOption) end end end describe '#read_preference' do let(:collection) do described_class.new(authorized_client.database, :users, options) end let(:options) { {} } context 'when a read preference is set in the options' do let(:options) do { read: { mode: :secondary } } end it 'returns the read preference' do expect(collection.read_preference).to eq(options[:read]) end end context 'when a read preference is not set in the options' do context 'when the database has a read preference set' do let(:client) do authorized_client.with(read: { mode: :secondary_preferred }) end let(:collection) do described_class.new(client.database, :users, options) end it 'returns the database read preference' do expect(collection.read_preference).to eq(BSON::Document.new({ mode: :secondary_preferred })) end end context 'when the database does not 
have a read preference' do it 'returns nil' do expect(collection.read_preference).to be_nil end end end end describe '#server_selector' do let(:collection) do described_class.new(authorized_client.database, :users, options) end let(:options) { {} } context 'when a read preference is set in the options' do let(:options) do { read: { mode: :secondary } } end it 'returns the server selector for that read preference' do expect(collection.server_selector).to be_a(Mongo::ServerSelector::Secondary) end end context 'when a read preference is not set in the options' do context 'when the database has a read preference set' do let(:client) do authorized_client.with(read: { mode: :secondary_preferred }) end let(:collection) do described_class.new(client.database, :users, options) end it 'returns the server selector for that read preference' do expect(collection.server_selector).to be_a(Mongo::ServerSelector::SecondaryPreferred) end end context 'when the database does not have a read preference' do it 'returns a primary server selector' do expect(collection.server_selector).to be_a(Mongo::ServerSelector::Primary) end end end end describe '#capped?' do let(:database) do authorized_client.database end context 'when the collection is capped' do let(:collection) do described_class.new(database, :specs, :capped => true, :size => 4096, :max => 512) end let(:collstats) do collection.aggregate([ {'$collStats' => { 'storageStats' => {} }} ]).first end let(:storage_stats) do collstats.fetch('storageStats', {}) end before do authorized_client[:specs].drop collection.create end it 'returns true' do expect(collection).to be_capped end it "applies the options" do expect(storage_stats["capped"]).to be true expect(storage_stats["max"]).to eq(512) expect(storage_stats["maxSize"]).to eq(4096) end end context 'when the collection is not capped' do let(:collection) do described_class.new(database, :specs) end before do authorized_client[:specs].drop collection.create end it 'returns false' do expect(collection).to_not be_capped end end end describe '#inspect' do it 'includes the object id' do expect(authorized_collection.inspect).to include(authorized_collection.object_id.to_s) end it 'includes the namespace' do expect(authorized_collection.inspect).to include(authorized_collection.namespace) end end describe '#watch' do context 'when change streams can be tested' do require_wired_tiger min_server_fcv '3.6' require_topology :replica_set let(:change_stream) do authorized_collection.watch end let(:enum) do change_stream.to_enum end before do change_stream authorized_collection.insert_one(a: 1) end context 'when no options are provided' do context 'when the operation type is an insert' do it 'returns the change' do expect(enum.next[:fullDocument][:a]).to eq(1) end end context 'when the operation type is an update' do before do authorized_collection.update_one({ a: 1 }, { '$set' => { a: 2 } }) end let(:change_doc) do enum.next enum.next end it 'returns the change' do expect(change_doc[:operationType]).to eq('update') expect(change_doc[:updateDescription][:updatedFields]).to eq('a' => 2) end end end context 'when options are provided' do context 'when full_document is updateLookup' do let(:change_stream) do authorized_collection.watch([], full_document: 'updateLookup').to_enum end before do authorized_collection.update_one({ a: 1 }, { '$set' => { a: 2 } }) end let(:change_doc) do enum.next enum.next end it 'returns the change' do expect(change_doc[:operationType]).to eq('update') expect(change_doc[:fullDocument][:a]).to eq(2) 
end end context 'when batch_size is provided' do before do Thread.new do sleep 1 authorized_collection.insert_one(a: 2) authorized_collection.insert_one(a: 3) end end let(:change_stream) do authorized_collection.watch([], batch_size: 2) end it 'returns the documents in the batch size specified' do expect(change_stream.instance_variable_get(:@cursor)).to receive(:get_more).once.and_call_original enum.next end end context 'when collation is provided' do before do authorized_collection.update_one({ a: 1 }, { '$set' => { a: 2 } }) end let(:change_doc) do enum.next end let(:change_stream) do authorized_collection.watch([ { '$match' => { operationType: 'UPDATE'}}], collation: { locale: 'en_US', strength: 2 } ).to_enum end it 'returns the change' do expect(change_doc['operationType']).to eq('update') expect(change_doc['updateDescription']['updatedFields']['a']).to eq(2) end end end end context 'when the change stream is empty' do require_wired_tiger min_server_fcv '3.6' require_topology :replica_set context 'when setting the max_await_time_ms' do let(:change_stream) do authorized_collection.watch([], max_await_time_ms: 3000) end let(:enum) { change_stream.to_enum } let(:get_more) { subscriber.started_events.detect { |e| e.command['getMore'] }.command } it 'sets the option correctly' do enum.try_next expect(get_more).not_to be_nil expect(get_more['maxTimeMS']).to be == 3000 end it "waits the appropriate amount of time" do start_time = Mongo::Utils.monotonic_time enum.try_next end_time = Mongo::Utils.monotonic_time expect(end_time-start_time).to be >= 3 end end end end end mongo-ruby-driver-2.21.3/spec/mongo/condition_variable_spec.rb000066400000000000000000000041661505113246500244420ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'lite_spec_helper' describe Mongo::ConditionVariable do let(:lock) { Mutex.new } let(:condition_variable) do described_class.new(lock) end it 'waits until signaled' do result = nil consumer = Thread.new do lock.synchronize do result = condition_variable.wait(3) end end # Context switch to start the thread sleep 0.1 start_time = Mongo::Utils.monotonic_time lock.synchronize do condition_variable.signal end consumer.join (Mongo::Utils.monotonic_time - start_time).should < 1 end it 'waits until broadcast' do result = nil consumer = Thread.new do lock.synchronize do result = condition_variable.wait(3) end end # Context switch to start the thread sleep 0.1 start_time = Mongo::Utils.monotonic_time lock.synchronize do condition_variable.broadcast end consumer.join (Mongo::Utils.monotonic_time - start_time).should < 1 end it 'times out' do result = nil consumer = Thread.new do lock.synchronize do result = condition_variable.wait(2) end end # Context switch to start the thread sleep 0.1 start_time = Mongo::Utils.monotonic_time consumer.join (Mongo::Utils.monotonic_time - start_time).should > 1 end context "when acquiring the lock and waiting" do it "releases the lock while waiting" do lock_acquired = false Timeout::timeout(1) do thread = Thread.new do until lock_acquired sleep 0.1 end lock.synchronize do condition_variable.signal end end lock.synchronize do lock_acquired = true condition_variable.wait(10) end end end end context "when waiting but not signaling" do it "waits until timeout" do lock.synchronize do start = Mongo::Utils.monotonic_time condition_variable.wait(1) duration = Mongo::Utils.monotonic_time - start expect(duration).to be > 1 end end end end 
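# A minimal usage sketch of the wait/signal pattern the examples above exercise.
# This is illustrative only -- Mongo::ConditionVariable is driver-internal and the
# names below are invented -- so it is fenced off from execution:
=begin ConditionVariable usage sketch
lock = Mutex.new
cv = Mongo::ConditionVariable.new(lock)
ready = false

waiter = Thread.new do
  lock.synchronize do
    # wait releases the lock while blocked and reacquires it before returning,
    # which is why the signaling thread below can enter its synchronize block.
    cv.wait(5) until ready
  end
end

lock.synchronize do
  ready = true
  cv.signal
end
waiter.join
=end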
mongo-ruby-driver-2.21.3/spec/mongo/config/000077500000000000000000000000001505113246500205065ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/mongo/config/options_spec.rb000066400000000000000000000026671505113246500235510ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require "spec_helper" describe Mongo::Config::Options do let(:config) do Mongo::Config end describe "#defaults" do it "returns the default options" do expect(config.defaults).to_not be_empty end end describe "#option" do context "when a default is provided" do after do config.reset end it "defines a getter" do expect(config.validate_update_replace).to be false end it "defines a setter" do expect(config.validate_update_replace = true).to be true expect(config.validate_update_replace).to be true end it "defines a presence check" do expect(config.validate_update_replace?).to be false end end context 'when option is not a boolean' do before do config.validate_update_replace = 'foo' end after do config.reset end context 'presence check' do it 'is a boolean' do expect(config.validate_update_replace?).to be true end end end end describe "#reset" do before do config.validate_update_replace = true config.reset end it "resets the settings to the defaults" do expect(config.validate_update_replace).to be false end end describe "#settings" do it "returns the settings" do expect(config.settings).to_not be_empty end end end mongo-ruby-driver-2.21.3/spec/mongo/config_spec.rb000066400000000000000000000027221505113246500220500ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require "spec_helper" describe Mongo::Config do shared_examples "a config option" do before do Mongo::Config.reset end context 'when the value is false' do before do Mongo.send("#{option}=", false) end it "is set to false" do expect(Mongo.send(option)).to be(false) end end context 'when the value is true' do before do Mongo.send("#{option}=", true) end it "is set to true" do expect(Mongo.send(option)).to be(true) end end context "when it is not set in the config" do it "it is set to its default" do expect(Mongo.send(option)).to be(default) end end end context 'when setting the validate_update_replace option in the config' do let(:option) { :validate_update_replace } let(:default) { false } it_behaves_like "a config option" end describe "#options=" do context "when an option is provided" do before do described_class.options = { validate_update_replace: true } end it "assigns the option correctly" do expect(described_class.validate_update_replace).to be true end end context "when provided a non-existent option" do it "raises an error" do expect { described_class.options = { bad_option: true } }.to raise_error(Mongo::Error::InvalidConfigOption) end end end end mongo-ruby-driver-2.21.3/spec/mongo/crypt/000077500000000000000000000000001505113246500204025ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/mongo/crypt/auto_decryption_context_spec.rb000066400000000000000000000067631505113246500267270ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'mongo' require 'lite_spec_helper' describe Mongo::Crypt::AutoDecryptionContext do require_libmongocrypt include_context 'define shared FLE helpers' let(:credentials) { Mongo::Crypt::KMS::Credentials.new(kms_providers) } let(:mongocrypt) { Mongo::Crypt::Handle.new(credentials, logger: logger) } let(:context) { described_class.new(mongocrypt, io, command) } let(:logger) { nil } let(:io) { double("Mongo::ClientEncryption::IO") } 
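# A bare double suffices for the io dependency here because constructing the
# decryption context does not appear to perform any I/O; only the state machine
# (not exercised in this file) would touch it.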
let(:command) do { "find": "test", "filter": { "ssn": "457-55-5462" } } end describe '#initialize' do shared_examples 'a functioning AutoDecryptionContext' do it 'initializes without error' do expect do context end.not_to raise_error end context 'with nil command' do let(:command) { nil } it 'raises an exception' do expect do context end.to raise_error(Mongo::Error::CryptError, /Attempted to pass nil data to libmongocrypt/) end end context 'with non-document command' do let(:command) { 'command-to-decrypt' } it 'raises an exception' do expect do context end.to raise_error(Mongo::Error::CryptError, /Attempted to pass invalid data to libmongocrypt/) end end end context 'when mongocrypt is initialized with local KMS provider options' do include_context 'with local kms_providers' it_behaves_like 'a functioning AutoDecryptionContext' end context 'when mongocrypt is initialized with AWS KMS provider options' do include_context 'with AWS kms_providers' it_behaves_like 'a functioning AutoDecryptionContext' end context 'when mongocrypt is initialized with Azure KMS provider options' do include_context 'with Azure kms_providers' it_behaves_like 'a functioning AutoDecryptionContext' end context 'when mongocrypt is initialized with GCP KMS provider options' do include_context 'with GCP kms_providers' it_behaves_like 'a functioning AutoDecryptionContext' end context 'when mongocrypt is initialized with KMIP KMS provider options' do include_context 'with KMIP kms_providers' it_behaves_like 'a functioning AutoDecryptionContext' end context 'with verbose logging' do include_context 'with local kms_providers' before(:all) do # Logging from libmongocrypt requires the C library to be built with the -DENABLE_TRACE=ON # option; none of the pre-built packages on Evergreen have been built with logging enabled. # # It is still useful to be able to run these tests locally to confirm that logging is working # while debugging any problems. # # For now, skip this test by default and revisit once we have determined how we want to # package libmongocrypt with the Ruby driver (see: https://jira.mongodb.org/browse/RUBY-1966) skip "These tests require libmongocrypt to be built with the '-DENABLE_TRACE=ON' cmake option." + " They also require the MONGOCRYPT_TRACE environment variable to be set to 'ON'." 
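# (Hypothetical build sketch, not verified against any particular libmongocrypt
# release: a trace-enabled library can generally be produced with something like
# `cmake -DENABLE_TRACE=ON . && cmake --build .` in a libmongocrypt checkout,
# after which running with MONGOCRYPT_TRACE=ON exercises the expectation below.)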
end let(:logger) do ::Logger.new(STDOUT).tap do |logger| logger.level = ::Logger::DEBUG end end it 'receives log messages from libmongocrypt' do expect(logger).to receive(:debug).with(/mongocrypt_ctx_decrypt_init/) context end end end end mongo-ruby-driver-2.21.3/spec/mongo/crypt/auto_encrypter_spec.rb000066400000000000000000000302721505113246500250100ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' require 'tempfile' describe Mongo::Crypt::AutoEncrypter do require_libmongocrypt min_server_fcv '4.2' require_enterprise clean_slate include_context 'define shared FLE helpers' let(:auto_encrypter) do described_class.new( auto_encryption_options.merge( client: authorized_client.use(:auto_encryption), extra_options: auto_encrypter_extra_options ) ) end let(:auto_encrypter_extra_options) do # Spawn mongocryptd on non-default port for sharded cluster tests extra_options end let(:client) { authorized_client } let(:db_name) { 'auto_encryption' } let(:collection_name) { 'users' } let(:command) do { 'insert' => collection_name, 'ordered' => true, 'lsid' => { 'id' => BSON::Binary.new(Base64.decode64("CzgjT+byRK+FKUWG6QbyjQ==\n"), :uuid) }, 'documents' => [ { 'ssn' => ssn, '_id' => BSON::ObjectId('5e16516e781d8a89b94df6df') } ] } end let(:encrypted_command) do command.merge( 'documents' => [ { 'ssn' => BSON::Binary.new(Base64.decode64(encrypted_ssn), :ciphertext), '_id' => BSON::ObjectId('5e16516e781d8a89b94df6df') } ] ) end let(:operation_context) { Mongo::Operation::Context.new } shared_context 'with jsonSchema validator' do before do users_collection = client.use(db_name)[collection_name] users_collection.drop client.use(db_name)[collection_name, { 'validator' => { '$jsonSchema' => schema_map } } ].create end end shared_context 'without jsonSchema validator' do before do users_collection = client.use(db_name)[collection_name] users_collection.drop users_collection.create end end shared_examples 'a functioning auto encrypter' do describe '#encrypt' do it 'replaces the ssn field with a BSON::Binary' do result = auto_encrypter.encrypt(db_name, command, operation_context) expect(result).to eq(encrypted_command) end end describe '#decrypt' do it 'returns the unencrypted document' do result = auto_encrypter.decrypt(encrypted_command, operation_context) expect(result).to eq(command) end end end before do key_vault_collection.drop key_vault_collection.insert_one(data_key) end after do auto_encrypter.close end describe '#initialize' do include_context 'with local kms_providers' let(:auto_encryption_options) do { kms_providers: local_kms_providers, key_vault_namespace: key_vault_namespace, schema_map: { "#{db_name}.#{collection_name}": schema_map }, } end let(:auto_encrypter) do described_class.new( auto_encryption_options.merge( client: client, # Spawn mongocryptd on non-default port for sharded cluster tests extra_options: extra_options ) ) end context 'when client has an unlimited pool' do let(:client) do new_local_client_nmio( SpecConfig.instance.addresses, SpecConfig.instance.test_options.merge( max_pool_size: 0, database: 'auto_encryption' ), ) end it 'reuses the client as key_vault_client and metadata_client' do expect(auto_encrypter.key_vault_client).to eq(client) expect(auto_encrypter.metadata_client).to eq(client) end end context 'when client has a limited pool' do let(:client) do new_local_client_nmio( SpecConfig.instance.addresses, SpecConfig.instance.test_options.merge( max_pool_size: 20, database: 'auto_encryption' ), ) end it 'creates new client 
for key_vault_client and metadata_client' do expect(auto_encrypter.key_vault_client).not_to eq(client) expect(auto_encrypter.metadata_client).not_to eq(client) end end context 'when crypt shared library is available' do it 'does not create a mongocryptd client' do allow_any_instance_of(Mongo::Crypt::Handle).to receive(:"crypt_shared_lib_available?").and_return true expect(auto_encrypter.mongocryptd_client).to be_nil end end end shared_examples 'with schema map in auto encryption commands' do include_context 'without jsonSchema validator' let(:auto_encryption_options) do { kms_providers: kms_providers, kms_tls_options: kms_tls_options, key_vault_namespace: key_vault_namespace, schema_map: { "#{db_name}.#{collection_name}": schema_map }, } end context 'with AWS KMS providers' do include_context 'with AWS kms_providers' it_behaves_like 'a functioning auto encrypter' end context 'with Azure KMS providers' do include_context 'with Azure kms_providers' it_behaves_like 'a functioning auto encrypter' end context 'with GCP KMS providers' do include_context 'with GCP kms_providers' it_behaves_like 'a functioning auto encrypter' end context 'with KMIP KMS providers' do include_context 'with KMIP kms_providers' it_behaves_like 'a functioning auto encrypter' end context 'with local KMS providers' do include_context 'with local kms_providers' it_behaves_like 'a functioning auto encrypter' end end shared_examples 'with schema map file in auto encryption commands' do include_context 'without jsonSchema validator' let(:schema_map_file) do file = Tempfile.new('schema_map.json') file.write(JSON.dump( { "#{db_name}.#{collection_name}" => schema_map } )) file.flush file end after do schema_map_file.close end let(:auto_encryption_options) do { kms_providers: kms_providers, kms_tls_options: kms_tls_options, key_vault_namespace: key_vault_namespace, schema_map_path: schema_map_file.path } end context 'with AWS KMS providers' do include_context 'with AWS kms_providers' it_behaves_like 'a functioning auto encrypter' end context 'with Azure KMS providers' do include_context 'with Azure kms_providers' it_behaves_like 'a functioning auto encrypter' end context 'with GCP KMS providers' do include_context 'with GCP kms_providers' it_behaves_like 'a functioning auto encrypter' end context 'with KMIP KMS providers' do include_context 'with KMIP kms_providers' it_behaves_like 'a functioning auto encrypter' end context 'with local KMS providers' do include_context 'with local kms_providers' it_behaves_like 'a functioning auto encrypter' end end shared_examples 'with schema map collection validator' do include_context 'with jsonSchema validator' let(:auto_encryption_options) do { kms_providers: kms_providers, kms_tls_options: kms_tls_options, key_vault_namespace: key_vault_namespace } end context 'with AWS KMS providers' do include_context 'with AWS kms_providers' it_behaves_like 'a functioning auto encrypter' end context 'with Azure KMS providers' do include_context 'with Azure kms_providers' it_behaves_like 'a functioning auto encrypter' end context 'with GCP KMS providers' do include_context 'with GCP kms_providers' it_behaves_like 'a functioning auto encrypter' end context 'with GCP KMS providers and PEM key' do require_mri include_context 'with GCP kms_providers' let(:kms_providers) do { gcp: { email: SpecConfig.instance.fle_gcp_email, private_key: OpenSSL::PKey.read( Base64.decode64(SpecConfig.instance.fle_gcp_private_key) ).export, } } end it_behaves_like 'a functioning auto encrypter' end context 'with KMIP KMS 
providers' do include_context 'with KMIP kms_providers' it_behaves_like 'a functioning auto encrypter' end context 'with local KMS providers' do include_context 'with local kms_providers' it_behaves_like 'a functioning auto encrypter' end end shared_examples 'with no validator or client option' do include_context 'without jsonSchema validator' let(:auto_encryption_options) do { kms_providers: kms_providers, kms_tls_options: kms_tls_options, key_vault_namespace: key_vault_namespace, } end context 'with AWS KMS providers' do include_context 'with AWS kms_providers' describe '#encrypt' do it 'does not perform encryption' do result = auto_encrypter.encrypt(db_name, command, operation_context) expect(result).to eq(command) end end describe '#decrypt' do it 'still performs decryption' do result = auto_encrypter.decrypt(encrypted_command, operation_context) expect(result).to eq(command) end end end context 'with Azure KMS providers' do include_context 'with Azure kms_providers' describe '#encrypt' do it 'does not perform encryption' do result = auto_encrypter.encrypt(db_name, command, operation_context) expect(result).to eq(command) end end describe '#decrypt' do it 'still performs decryption' do result = auto_encrypter.decrypt(encrypted_command, operation_context) expect(result).to eq(command) end end end context 'with GCP KMS providers' do include_context 'with GCP kms_providers' describe '#encrypt' do it 'does not perform encryption' do result = auto_encrypter.encrypt(db_name, command, operation_context) expect(result).to eq(command) end end describe '#decrypt' do it 'still performs decryption' do result = auto_encrypter.decrypt(encrypted_command, operation_context) expect(result).to eq(command) end end end context 'with KMIP KMS providers' do include_context 'with KMIP kms_providers' describe '#encrypt' do it 'does not perform encryption' do result = auto_encrypter.encrypt(db_name, command, operation_context) expect(result).to eq(command) end end describe '#decrypt' do it 'still performs decryption' do result = auto_encrypter.decrypt(encrypted_command, operation_context) expect(result).to eq(command) end end end context 'with local KMS providers' do include_context 'with local kms_providers' describe '#encrypt' do it 'does not perform encryption' do result = auto_encrypter.encrypt(db_name, command, operation_context) expect(result).to eq(command) end end describe '#decrypt' do it 'still performs decryption' do result = auto_encrypter.decrypt(encrypted_command, operation_context) expect(result).to eq(command) end end end end context 'when using crypt shared library' do min_server_version '6.0.0' let(:auto_encrypter_extra_options) do { crypt_shared_lib_path: SpecConfig.instance.crypt_shared_lib_path } end let(:auto_encryption_options) do { kms_providers: kms_providers, kms_tls_options: kms_tls_options, key_vault_namespace: key_vault_namespace, schema_map: { "#{db_name}.#{collection_name}": schema_map }, } end it_behaves_like 'with schema map in auto encryption commands' it_behaves_like 'with schema map file in auto encryption commands' it_behaves_like 'with schema map collection validator' it_behaves_like 'with no validator or client option' end context 'when using mongocryptd' do it_behaves_like 'with schema map in auto encryption commands' it_behaves_like 'with schema map file in auto encryption commands' it_behaves_like 'with schema map collection validator' it_behaves_like 'with no validator or client option' end end 
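# For orientation, a sketch of how the AutoEncrypter exercised above is normally
# reached through the public client option. Host, database name and key material
# below are placeholders, so the block is fenced off from execution:
=begin auto-encryption client sketch
client = Mongo::Client.new(
  ['localhost:27017'],
  database: 'auto_encryption',
  auto_encryption_options: {
    key_vault_namespace: 'encryption.__keyVault',
    kms_providers: { local: { key: BSON::Binary.new("\x00" * 96, :generic) } },
    schema_map: { 'auto_encryption.users' => { 'bsonType' => 'object' } },
  }
)
# Fields covered by the schema map are encrypted transparently on insert.
client['users'].insert_one(ssn: '457-55-5462')
=end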
mongo-ruby-driver-2.21.3/spec/mongo/crypt/auto_encryption_context_spec.rb000066400000000000000000000073211505113246500267320ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'mongo' require 'lite_spec_helper' describe Mongo::Crypt::AutoEncryptionContext do require_libmongocrypt include_context 'define shared FLE helpers' let(:credentials) { Mongo::Crypt::KMS::Credentials.new(kms_providers) } let(:mongocrypt) { Mongo::Crypt::Handle.new(credentials, logger: logger) } let(:context) { described_class.new(mongocrypt, io, db_name, command) } let(:logger) { nil } let(:io) { double("Mongo::ClientEncryption::IO") } let(:db_name) { 'admin' } let(:command) do { "find": "test", "filter": { "ssn": "457-55-5462" } } end describe '#initialize' do shared_examples 'a functioning AutoEncryptionContext' do context 'with valid command' do it 'initializes context' do expect do context end.not_to raise_error end end context 'with invalid command' do let(:command) do { incorrect_key: 'value' } end it 'raises an exception' do expect do context end.to raise_error(/command not supported for auto encryption: incorrect_key/) end end context 'with nil command' do let(:command) { nil } it 'raises an exception' do expect do context end.to raise_error(Mongo::Error::CryptError, /Attempted to pass nil data to libmongocrypt/) end end context 'with non-document command' do let(:command) { 'command-to-encrypt' } it 'raises an exception' do expect do context end.to raise_error(Mongo::Error::CryptError, /Attempted to pass invalid data to libmongocrypt/) end end end context 'with local KMS providers' do include_context 'with local kms_providers' it_behaves_like 'a functioning AutoEncryptionContext' end context 'with AWS KMS providers' do include_context 'with AWS kms_providers' it_behaves_like 'a functioning AutoEncryptionContext' end context 'with Azure KMS providers' do include_context 'with Azure kms_providers' it_behaves_like 'a functioning AutoEncryptionContext' end context 'with GCP KMS providers' do include_context 'with GCP kms_providers' it_behaves_like 'a functioning AutoEncryptionContext' end context 'with KMIP KMS providers' do include_context 'with KMIP kms_providers' it_behaves_like 'a functioning AutoEncryptionContext' end context 'with verbose logging' do include_context 'with local kms_providers' before(:all) do # Logging from libmongocrypt requires the C library to be built with the -DENABLE_TRACE=ON # option; none of the pre-built packages on Evergreen have been built with logging enabled. # # It is still useful to be able to run these tests locally to confirm that logging is working # while debugging any problems. # # For now, skip this test by default and revisit once we have determined how we want to # package libmongocrypt with the Ruby driver (see: https://jira.mongodb.org/browse/RUBY-1966) skip "These tests require libmongocrypt to be built with the '-DENABLE_TRACE=ON' cmake option." + " They also require the MONGOCRYPT_TRACE environment variable to be set to 'ON'." 
end let(:logger) do ::Logger.new(STDOUT).tap do |logger| logger.level = ::Logger::DEBUG end end it 'receives log messages from libmongocrypt' do expect(logger).to receive(:debug).with(/mongocrypt_ctx_encrypt_init/) context end end end end mongo-ruby-driver-2.21.3/spec/mongo/crypt/binary_spec.rb000066400000000000000000000055101505113246500232260ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'lite_spec_helper' describe Mongo::Crypt::Binary do require_libmongocrypt let(:data) { 'I love Ruby' } let(:binary) { described_class.from_data(data) } describe '#initialize' do context 'with nil data' do let(:binary) { described_class.new } it 'creates a new Mongo::Crypt::Binary object' do expect do binary end.not_to raise_error end end context 'with valid data' do let(:binary) { described_class.new(data: data) } it 'creates a new Mongo::Crypt::Binary object' do expect do binary end.not_to raise_error end end context 'with pointer' do let(:pointer) { Mongo::Crypt::Binding.mongocrypt_binary_new } let(:binary) { described_class.new(pointer: pointer) } after do Mongo::Crypt::Binding.mongocrypt_binary_destroy(pointer) end it 'creates a new Mongo::Crypt::Binary object from pointer' do expect do binary end.not_to raise_error expect(binary.ref).to eq(pointer) end end end describe '#self.from_data' do let(:binary) { described_class.from_data(data) } it 'creates a new Mongo::Crypt::Binary object' do expect do binary end.not_to raise_error end end describe '#self.from_pointer' do let(:pointer) { Mongo::Crypt::Binding.mongocrypt_binary_new } let(:binary) { described_class.from_pointer(pointer) } after do Mongo::Crypt::Binding.mongocrypt_binary_destroy(pointer) end it 'creates a new Mongo::Crypt::Binary object from pointer' do expect do binary end.not_to raise_error expect(binary.ref).to eq(pointer) end end describe '#to_s' do it 'returns the original string' do expect(binary.to_s).to eq(data) end end describe '#write' do # Binary must have enough space pre-allocated let(:binary) { described_class.from_data("\00" * data.length) } it 'writes data to the binary object' do expect(binary.write(data)).to be true expect(binary.to_s).to eq(data) end context 'with no space allocated' do let(:binary) { described_class.new } it 'raises an exception' do expect do binary.write(data) end.to raise_error(ArgumentError, /Cannot write #{data.length} bytes of data to a Binary object that was initialized with 0 bytes/) end end context 'without enough space allocated' do let(:binary) { described_class.from_data("\00" * (data.length - 1)) } it 'raises an exception' do expect do binary.write(data) end.to raise_error(ArgumentError, /Cannot write #{data.length} bytes of data to a Binary object that was initialized with #{data.length - 1} bytes/) end end end end mongo-ruby-driver-2.21.3/spec/mongo/crypt/binding/000077500000000000000000000000001505113246500220145ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/mongo/crypt/binding/binary_spec.rb000066400000000000000000000042571505113246500246470ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'lite_spec_helper' describe 'Mongo::Crypt::Binding' do describe 'binary_t bindings' do require_libmongocrypt let(:bytes) { [104, 101, 108, 108, 111] } let(:bytes_pointer) do # FFI::MemoryPointer automatically frees memory when it goes out of scope p = FFI::MemoryPointer.new(bytes.size) p.write_array_of_type(FFI::TYPE_UINT8, :put_uint8, bytes) end after do Mongo::Crypt::Binding.mongocrypt_binary_destroy(binary) end describe 
'#mongocrypt_binary_new' do let(:binary) { Mongo::Crypt::Binding.mongocrypt_binary_new } it 'returns a pointer' do expect(binary).to be_a_kind_of(FFI::Pointer) end end describe '#mongocrypt_binary_new_from_data' do let(:binary) { Mongo::Crypt::Binding.mongocrypt_binary_new_from_data(bytes_pointer, bytes.length) } it 'returns a pointer' do expect(binary).to be_a_kind_of(FFI::Pointer) end end describe '#mongocrypt_binary_data' do let(:binary) { Mongo::Crypt::Binding.mongocrypt_binary_new_from_data(bytes_pointer, bytes.length) } it 'returns the pointer to the data' do expect(Mongo::Crypt::Binding.mongocrypt_binary_data(binary)).to eq(bytes_pointer) end end describe '#get_binary_data_direct' do let(:binary) { Mongo::Crypt::Binding.mongocrypt_binary_new_from_data(bytes_pointer, bytes.length) } it 'returns the pointer to the data' do expect(Mongo::Crypt::Binding.get_binary_data_direct(binary)).to eq(bytes_pointer) end end describe '#mongocrypt_binary_len' do let(:binary) { Mongo::Crypt::Binding.mongocrypt_binary_new_from_data(bytes_pointer, bytes.length) } it 'returns the length of the data' do expect(Mongo::Crypt::Binding.mongocrypt_binary_len(binary)).to eq(bytes.length) end end describe '#get_binary_len_direct' do let(:binary) { Mongo::Crypt::Binding.mongocrypt_binary_new_from_data(bytes_pointer, bytes.length) } it 'returns the length of the data' do expect(Mongo::Crypt::Binding.get_binary_len_direct(binary)).to eq(bytes.length) end end end end mongo-ruby-driver-2.21.3/spec/mongo/crypt/binding/context_spec.rb000066400000000000000000000200241505113246500250350ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'lite_spec_helper' require_relative '../helpers/mongo_crypt_spec_helper' shared_context 'initialized for data key creation' do let(:master_key) { "ru\xfe\x00" * 24 } let(:kms_providers) do BSON::Document.new({ local: { key: BSON::Binary.new(master_key, :generic), } }) end let(:binary) do MongoCryptSpecHelper.mongocrypt_binary_t_from(kms_providers.to_bson.to_s) end let(:key_document) do MongoCryptSpecHelper.mongocrypt_binary_t_from( BSON::Document.new({provider: 'local'}).to_bson.to_s) end before do Mongo::Crypt::Binding.mongocrypt_setopt_kms_providers(mongocrypt, binary) MongoCryptSpecHelper.bind_crypto_hooks(mongocrypt) Mongo::Crypt::Binding.mongocrypt_init(mongocrypt) Mongo::Crypt::Binding.mongocrypt_ctx_setopt_key_encryption_key(context, key_document) end after do Mongo::Crypt::Binding.mongocrypt_binary_destroy(key_document) Mongo::Crypt::Binding.mongocrypt_binary_destroy(binary) end end shared_context 'initialized for explicit encryption' do # TODO: replace with code showing how to generate this value let(:key_id) { "\xDEd\x00\xDC\x0E\xF8J\x99\x97\xFA\xCC\x04\xBF\xAA\x00\xF5" } let(:key_id_binary) { MongoCryptSpecHelper.mongocrypt_binary_t_from(key_id) } let(:value) do { 'v': 'Hello, world!' 
}.to_bson.to_s end let(:value_binary) { MongoCryptSpecHelper.mongocrypt_binary_t_from(value) } before do MongoCryptSpecHelper.bind_crypto_hooks(mongocrypt) Mongo::Crypt::Binding.mongocrypt_init(mongocrypt) Mongo::Crypt::Binding.mongocrypt_ctx_setopt_key_id(context, key_id_binary) Mongo::Crypt::Binding.mongocrypt_ctx_setopt_algorithm( context, 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic', -1 ) end after do Mongo::Crypt::Binding.mongocrypt_binary_destroy(key_id_binary) Mongo::Crypt::Binding.mongocrypt_binary_destroy(value_binary) end end describe 'Mongo::Crypt::Binding' do describe 'mongocrypt_ctx_t bindings' do require_libmongocrypt fails_on_jruby let(:mongocrypt) { Mongo::Crypt::Binding.mongocrypt_new } let(:context) { Mongo::Crypt::Binding.mongocrypt_ctx_new(mongocrypt) } after do Mongo::Crypt::Binding.mongocrypt_destroy(mongocrypt) Mongo::Crypt::Binding.mongocrypt_ctx_destroy(context) end describe '#mongocrypt_ctx_new' do it 'returns a pointer' do expect(context).to be_a_kind_of(FFI::Pointer) end end describe '#mongocrypt_ctx_status' do let(:status) { Mongo::Crypt::Binding.mongocrypt_status_new } after do Mongo::Crypt::Binding.mongocrypt_status_destroy(status) end context 'for a new mongocrypt_ctx_t object' do it 'returns an ok status' do Mongo::Crypt::Binding.mongocrypt_ctx_status(context, status) expect(Mongo::Crypt::Binding.mongocrypt_status_type(status)).to eq(:ok) end end end describe '#mongocrypt_ctx_datakey_init' do let(:result) do Mongo::Crypt::Binding.mongocrypt_ctx_datakey_init(context) end context 'a master key option and KMS provider have been set' do include_context 'initialized for data key creation' it 'returns true' do expect(result).to be true end end end describe '#mongocrypt_ctx_setopt_key_id' do let(:binary) { MongoCryptSpecHelper.mongocrypt_binary_t_from(uuid) } let(:result) do Mongo::Crypt::Binding.mongocrypt_ctx_setopt_key_id(context, binary) end before do Mongo::Crypt::Binding.mongocrypt_init(mongocrypt) end after do Mongo::Crypt::Binding.mongocrypt_binary_destroy(binary) end context 'with valid key id' do # 16-byte binary uuid string # TODO: replace with code showing how to generate this value let(:uuid) { "\xDEd\x00\xDC\x0E\xF8J\x99\x97\xFA\xCC\x04\xBF\xAA\x00\xF5" } it 'returns true' do expect(result).to be true end end context 'with invalid key id' do # invalid uuid string -- a truncated string of bytes let(:uuid) { "\xDEd\x00\xDC\x0E\xF8J\x99\x97\xFA\xCC\x04\xBF" } it 'returns false' do expect(result).to be false end end end describe '#mongocrypt_ctx_setopt_algorithm' do let(:result) do Mongo::Crypt::Binding.mongocrypt_ctx_setopt_algorithm( context, algo, -1 ) end before do Mongo::Crypt::Binding.mongocrypt_init(mongocrypt) end context 'with deterministic algorithm' do let(:algo) { 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic' } it 'returns true' do expect(result).to be true end end context 'with random algorithm' do let(:algo) { 'AEAD_AES_256_CBC_HMAC_SHA_512-Random' } it 'returns true' do expect(result).to be true end end context 'with invalid algorithm' do let(:algo) { 'fake-algorithm' } it 'returns false' do expect(result).to be false end end context 'with nil algorithm' do let(:algo) { nil } it 'returns false' do expect(result).to be false end end end describe '#mongocrypt_ctx_explicit_encrypt_init' do let(:result) do Mongo::Crypt::Binding.mongocrypt_ctx_explicit_encrypt_init(context, value_binary) end context 'a key_id and algorithm have been set' do include_context 'initialized for explicit encryption' it 'returns true' do expect(result).to be true end 
end end describe '#mongocrypt_ctx_mongo_op' do context 'ctx is initialized for explicit encryption' do include_context 'initialized for explicit encryption' before do Mongo::Crypt::Binding.mongocrypt_ctx_explicit_encrypt_init(context, value_binary) end let(:out_binary) { Mongo::Crypt::Binding.mongocrypt_binary_new } let(:result) { Mongo::Crypt::Binding.mongocrypt_ctx_mongo_op(context, out_binary) } after do Mongo::Crypt::Binding.mongocrypt_binary_destroy(out_binary) end it 'returns a BSON document' do expect(result).to be true data = Mongo::Crypt::Binding.get_binary_data_direct(out_binary) len = Mongo::Crypt::Binding.get_binary_len_direct(out_binary) response = data.get_array_of_uint8(0, len).pack('C*') expect(response).to be_a_kind_of(String) end end end describe '#mongocrypt_ctx_state' do let(:result) do Mongo::Crypt::Binding.mongocrypt_ctx_state(context) end context 'the mongocrypt_ctx has been properly initialized' do include_context 'initialized for data key creation' before do Mongo::Crypt::Binding.mongocrypt_ctx_datakey_init(context) end it 'returns ready state' do expect(result).to eq(:ready) end end end describe '#mongocrypt_ctx_setopt_query_type' do let(:result) do Mongo::Crypt::Binding.mongocrypt_ctx_setopt_query_type( context, query_type, -1 ) end before do Mongo::Crypt::Binding.mongocrypt_init(mongocrypt) end context 'with equality query type' do let(:query_type) do "equality" end it 'returns true' do expect(result).to be true end end end describe '#mongocrypt_ctx_setopt_contention_factor' do let(:result) do Mongo::Crypt::Binding.mongocrypt_ctx_setopt_contention_factor( context, contention_factor ) end before do Mongo::Crypt::Binding.mongocrypt_init(mongocrypt) end context 'with non zero contention factor' do let(:contention_factor) do 10 end it 'returns true' do expect(result).to be true end end end end end mongo-ruby-driver-2.21.3/spec/mongo/crypt/binding/helpers_spec.rb000066400000000000000000000023351505113246500250200ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'lite_spec_helper' describe 'Mongo::Crypt::Binding' do describe 'helper methods' do require_libmongocrypt describe '#validate_document' do context 'with BSON::Document data' do it 'does not raise an exception' do expect do Mongo::Crypt::Binding.validate_document(BSON::Document.new) end.not_to raise_error end end context 'with Hash data' do it 'does not raise an exception' do expect do Mongo::Crypt::Binding.validate_document({}) end.not_to raise_error end end context 'with nil data' do it 'raises an exception' do expect do Mongo::Crypt::Binding.validate_document(nil) end.to raise_error(Mongo::Error::CryptError, /Attempted to pass nil data to libmongocrypt/) end end context 'with non-document data' do it 'raises an exception' do expect do Mongo::Crypt::Binding.validate_document('not a bson document') end.to raise_error(Mongo::Error::CryptError, /Attempted to pass invalid data to libmongocrypt/) end end end end end mongo-ruby-driver-2.21.3/spec/mongo/crypt/binding/mongocrypt_spec.rb000066400000000000000000000063041505113246500255570ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'lite_spec_helper' require_relative '../helpers/mongo_crypt_spec_helper' describe 'Mongo::Crypt::Binding' do describe 'mongocrypt_t binding' do require_libmongocrypt after do Mongo::Crypt::Binding.mongocrypt_destroy(mongocrypt) end describe '#mongocrypt_new' do let(:mongocrypt) { Mongo::Crypt::Binding.mongocrypt_new } it 'returns a pointer' do expect(mongocrypt).to 
be_a_kind_of(FFI::Pointer) end end describe '#mongocrypt_init' do let(:key_bytes) { [114, 117, 98, 121] * 24 } # 96 bytes let(:kms_providers) do BSON::Document.new({ local: { key: BSON::Binary.new(key_bytes.pack('C*'), :generic), } }) end let(:binary) do data = kms_providers.to_bson.to_s Mongo::Crypt::Binding.mongocrypt_binary_new_from_data( FFI::MemoryPointer.from_string(data), data.bytesize, ) end let(:mongocrypt) do Mongo::Crypt::Binding.mongocrypt_new.tap do |mongocrypt| Mongo::Crypt::Binding.mongocrypt_setopt_kms_providers(mongocrypt, binary) end end after do Mongo::Crypt::Binding.mongocrypt_binary_destroy(binary) end context 'with valid kms option' do before do MongoCryptSpecHelper.bind_crypto_hooks(mongocrypt) end it 'returns true' do expect(Mongo::Crypt::Binding.mongocrypt_init(mongocrypt)).to be true end end context 'with invalid kms option' do before do MongoCryptSpecHelper.bind_crypto_hooks(mongocrypt) end let(:key_bytes) { [114, 117, 98, 121] * 23 } # NOT 96 bytes it 'returns false' do expect(Mongo::Crypt::Binding.mongocrypt_init(mongocrypt)).to be false end end end describe '#mongocrypt_status' do let(:status) { Mongo::Crypt::Binding.mongocrypt_status_new } let(:mongocrypt) { Mongo::Crypt::Binding.mongocrypt_new } after do Mongo::Crypt::Binding.mongocrypt_status_destroy(status) end context 'for a new mongocrypt_t object' do it 'returns an ok status' do Mongo::Crypt::Binding.mongocrypt_status(mongocrypt, status) expect(Mongo::Crypt::Binding.mongocrypt_status_type(status)).to eq(:ok) end end context 'for a mongocrypt_t object with invalid kms options' do let(:key_bytes) { [114, 117, 98, 121] * 23 } # NOT 96 bytes let(:binary) do p = FFI::MemoryPointer.new(key_bytes.size) .write_array_of_type(FFI::TYPE_UINT8, :put_uint8, key_bytes) Mongo::Crypt::Binding.mongocrypt_binary_new_from_data(p, key_bytes.length) end after do Mongo::Crypt::Binding.mongocrypt_binary_destroy(binary) end it 'returns an error_client status' do Mongo::Crypt::Binding.mongocrypt_setopt_kms_providers(mongocrypt, binary) Mongo::Crypt::Binding.mongocrypt_status(mongocrypt, status) expect(Mongo::Crypt::Binding.mongocrypt_status_type(status)).to eq(:error_client) end end end end end mongo-ruby-driver-2.21.3/spec/mongo/crypt/binding/status_spec.rb000066400000000000000000000051321505113246500246770ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'lite_spec_helper' describe 'Mongo::Crypt::Binding' do describe 'mongocrypt_status_t binding' do require_libmongocrypt let(:status) { Mongo::Crypt::Binding.mongocrypt_status_new } let(:message) { "Operation unauthorized" } let(:status_with_info) do Mongo::Crypt::Binding.mongocrypt_status_set( status, :error_client, 401, message, message.length + 1 ) status end after do Mongo::Crypt::Binding.mongocrypt_status_destroy(status) end describe '#mongocrypt_status_new' do it 'returns a pointer' do expect(status).to be_a_kind_of(FFI::Pointer) end end describe '#mongocrypt_status_type' do context 'when status has no type' do it 'returns :ok/0' do expect(Mongo::Crypt::Binding.mongocrypt_status_type(status)).to eq(:ok) end end context 'when status has type' do it 'returns type' do expect(Mongo::Crypt::Binding.mongocrypt_status_type(status_with_info)).to eq(:error_client) end end end describe '#mongocrypt_status_code' do context 'when status has no code' do it 'returns 0' do expect(Mongo::Crypt::Binding.mongocrypt_status_code(status)).to eq(0) end end context 'when status has code' do it 'returns code' do 
expect(Mongo::Crypt::Binding.mongocrypt_status_code(status_with_info)).to eq(401) end end end describe '#mongocrypt_status_message' do context 'when status has no message' do it 'returns nil' do expect(Mongo::Crypt::Binding.mongocrypt_status_message(status, nil)).to eq(nil) end end context 'when status has message' do it 'returns message' do expect(Mongo::Crypt::Binding.mongocrypt_status_message(status_with_info, nil)).to eq(message) end end end describe '#mongocrypt_status_ok' do context 'when status_type is not ok' do it 'returns false' do expect(Mongo::Crypt::Binding.mongocrypt_status_ok(status_with_info)).to be false end end context 'when status_type is ok' do let(:message) { 'Operation successful' } let(:status_with_info) do Mongo::Crypt::Binding.mongocrypt_status_set(status, :ok, 200, message, message.length + 1) status end it 'returns true' do expect(Mongo::Crypt::Binding.mongocrypt_status_ok(status_with_info)).to be true end end end end end mongo-ruby-driver-2.21.3/spec/mongo/crypt/binding/version_spec.rb000066400000000000000000000040311505113246500250360ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'lite_spec_helper' describe 'Mongo::Crypt::Binding' do require_libmongocrypt describe '#mongocrypt_version' do let(:version) { Mongo::Crypt::Binding.mongocrypt_version(nil) } it 'is a string' do expect(version).to be_a_kind_of(String) end it 'is in the x.y.z-tag format' do expect(version).to match(/\A(\d+\.){2}(\d+)?(-[A-Za-z\+\d]+)?\z/) end end describe '#validate_version' do context 'when not satisfied' do let(:older_version) do Mongo::Crypt::Binding::MIN_LIBMONGOCRYPT_VERSION.to_s.sub(/^\d+/, '0') end it 'raises an error' do expect do Mongo::Crypt::Binding.validate_version(older_version) end.to raise_error(LoadError, /libmongocrypt version .* or above is required, but version .* was found./) end end context 'when satisfied' do let(:newer_version) do Mongo::Crypt::Binding::MIN_LIBMONGOCRYPT_VERSION.bump.to_s end it 'does not raise an error' do expect do Mongo::Crypt::Binding.validate_version(newer_version) end.not_to raise_error(LoadError, /libmongocrypt version .* or above is required, but version .* was found./) end end context 'when in a non-parsable format' do let(:base_version) { Mongo::Crypt::Binding::MIN_LIBMONGOCRYPT_VERSION.to_s } shared_examples_for 'non-standard version format' do it 'does not raise an exception' do expect do Mongo::Crypt::Binding.validate_version(version) end.not_to raise_error end end context 'when the version is MAJOR.MINOR.PATCH-dev+datecommit' do let(:version) { "#{base_version}-dev+20220730git8f8675fa11" } include_examples 'non-standard version format' end context 'when the version is MAJOR.MINOR.PATCH-date+commit' do let(:version) { "#{base_version}-20230601+git9b07846bef" } include_examples 'non-standard version format' end end end end mongo-ruby-driver-2.21.3/spec/mongo/crypt/binding_unloaded_spec.rb000066400000000000000000000021441505113246500252270ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'lite_spec_helper' describe 'Mongo::Crypt::Binding' do require_no_libmongocrypt before(:all) do if ENV['FLE'] == 'helper' skip 'FLE=helper is incompatible with unloaded binding tests' end end context 'when load fails' do # JRuby 9.3.2.0 converts our custom LoadErrors to generic NameErrors # and trashes the exception messages. 
# https://github.com/jruby/jruby/issues/7070 # JRuby 9.2 works correctly, this test is skipped on all JRuby versions # because we intend to remove JRuby support altogether and therefore # adding logic to condition on JRuby versions does not make sense. fails_on_jruby it 'retries loading at the next reference' do lambda do Mongo::Crypt::Binding end.should raise_error(LoadError, /no path to libmongocrypt specified/) # second load should also be attempted and should fail with the # LoadError exception lambda do Mongo::Crypt::Binding end.should raise_error(LoadError, /no path to libmongocrypt specified/) end end end mongo-ruby-driver-2.21.3/spec/mongo/crypt/data_key_context_spec.rb000066400000000000000000000067601505113246500252730ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'mongo' require 'base64' require 'lite_spec_helper' describe Mongo::Crypt::DataKeyContext do require_libmongocrypt include_context 'define shared FLE helpers' let(:credentials) { Mongo::Crypt::KMS::Credentials.new(kms_providers) } let(:kms_tls_options) do {} end let(:mongocrypt) do Mongo::Crypt::Handle.new(credentials, kms_tls_options) end let(:io) { double("Mongo::Crypt::EncryptionIO") } let(:key_alt_names) { [] } let(:context) { described_class.new(mongocrypt, io, key_document, key_alt_names, nil) } describe '#initialize' do shared_examples 'it properly sets key_alt_names' do context 'with one key_alt_names' do let(:key_alt_names) { ['keyAltName1'] } it 'does not raise an exception' do expect do context end.not_to raise_error end end context 'with multiple key_alt_names' do let(:key_alt_names) { ['keyAltName1', 'keyAltName2'] } it 'does not raise an exception' do expect do context end.not_to raise_error end end context 'with empty key_alt_names' do let(:key_alt_names) { [] } it 'does not raise an exception' do expect do context end.not_to raise_error end end context 'with invalid key_alt_names' do let(:key_alt_names) { ['keyAltName1', 3] } it 'raises an exception' do expect do context end.to raise_error(ArgumentError, /All values of the :key_alt_names option Array must be Strings/) end end context 'with non-array key_alt_names' do let(:key_alt_names) { "keyAltName1" } it 'raises an exception' do expect do context end.to raise_error(ArgumentError, /key_alt_names option must be an Array/) end end end context 'with aws kms provider' do include_context 'with AWS kms_providers' let(:key_document) do Mongo::Crypt::KMS::MasterKeyDocument.new( 'aws', { master_key: { region: 'us-east-2', key: 'arn' } } ) end it_behaves_like 'it properly sets key_alt_names' context 'with valid options' do it 'does not raise an exception' do expect do context end.not_to raise_error end end context 'with valid endpoint' do let(:key_document) do Mongo::Crypt::KMS::MasterKeyDocument.new( 'aws', { master_key: { region: 'us-east-2', key: 'arn', endpoint: 'kms.us-east-2.amazonaws.com:443' } } ) end it 'does not raise an exception' do expect do context end.not_to raise_error end end end end describe '#run_state_machine' do # TODO: test with AWS KMS provider context 'with local KMS provider' do include_context 'with local kms_providers' let(:key_document) do Mongo::Crypt::KMS::MasterKeyDocument.new( 'local', { master_key: { key: 'MASTER-KEY' } } ) end let(:operation_context) { Mongo::Operation::Context.new } it 'creates a data key' do expect(context.run_state_machine(operation_context)).to be_a_kind_of(Hash) end end end end 
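# The examples above drive libmongocrypt's data-key state machine directly; in
# application code the equivalent flow goes through the public ClientEncryption
# API. A sketch under assumed placeholder values, fenced off from execution:
=begin data key creation sketch
client_encryption = Mongo::ClientEncryption.new(
  key_vault_client, # a plain Mongo::Client pointed at the key vault deployment
  key_vault_namespace: 'encryption.__keyVault',
  kms_providers: { local: { key: BSON::Binary.new("\x00" * 96, :generic) } }
)
data_key_id = client_encryption.create_data_key(
  'local', key_alt_names: ['my_key_alt_name']
)
=end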
mongo-ruby-driver-2.21.3/spec/mongo/crypt/encryption_io_spec.rb000066400000000000000000000072251505113246500246300ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'mongo' require 'spec_helper' describe Mongo::Crypt::EncryptionIO do let(:subject) do described_class.new( key_vault_namespace: 'foo.bar', key_vault_client: authorized_client, metadata_client: authorized_client.with(auto_encryption_options: nil), mongocryptd_options: mongocryptd_options, ) end describe '#spawn_mongocryptd' do context 'no spawn path' do let(:mongocryptd_options) do { mongocryptd_spawn_args: ['test'], } end it 'fails with an exception' do lambda do subject.send(:spawn_mongocryptd) end.should raise_error(ArgumentError, /Cannot spawn mongocryptd process when no.*mongocryptd_spawn_path/) end end context 'no spawn args' do let(:mongocryptd_options) do { mongocryptd_spawn_path: 'echo', } end it 'fails with an exception' do lambda do subject.send(:spawn_mongocryptd) end.should raise_error(ArgumentError, /Cannot spawn mongocryptd process when no.*mongocryptd_spawn_args/) end end context 'empty array for spawn args' do let(:mongocryptd_options) do { mongocryptd_spawn_path: 'echo', mongocryptd_spawn_args: [], } end it 'fails with an exception' do lambda do subject.send(:spawn_mongocryptd) end.should raise_error(ArgumentError, /Cannot spawn mongocryptd process when no.*mongocryptd_spawn_args/) end end context 'good spawn path and args' do let(:mongocryptd_options) do { mongocryptd_spawn_path: 'echo', mongocryptd_spawn_args: ['hi'], } end it 'spawns' do subject.send(:spawn_mongocryptd) end end context '-- for args to emulate no args' do let(:mongocryptd_options) do { mongocryptd_spawn_path: 'echo', mongocryptd_spawn_args: ['--'], } end it 'spawns' do subject.send(:spawn_mongocryptd) end end end describe '#mark_command' do let(:mock_client) do double('mongocryptd client').tap do |client| database = double('mock database') expect(database).to receive(:command).and_raise(Mongo::Error::NoServerAvailable.new(Mongo::ServerSelector::Primary.new, nil, 'test message')) allow(database).to receive(:command).and_return([]) expect(client).to receive(:database).at_least(:once).and_return(database) end end let(:base_options) do { mongocryptd_spawn_path: 'echo', mongocryptd_spawn_args: ['--'], } end let(:subject) do described_class.new( mongocryptd_client: mock_client, key_vault_namespace: 'foo.bar', key_vault_client: authorized_client, metadata_client: authorized_client.with(auto_encryption_options: nil), mongocryptd_options: mongocryptd_options, ) end context ':mongocryptd_bypass_spawn not given' do let(:mongocryptd_options) do base_options end it 'spawns' do expect(subject).to receive(:spawn_mongocryptd) subject.mark_command({}) end end context ':mongocryptd_bypass_spawn given' do let(:mongocryptd_options) do base_options.merge( mongocryptd_bypass_spawn: true, ) end it 'does not spawn' do expect(subject).not_to receive(:spawn_mongocryptd) lambda do subject.mark_command({}) end.should raise_error(Mongo::Error::NoServerAvailable, /test message/) end end end end mongo-ruby-driver-2.21.3/spec/mongo/crypt/explicit_decryption_context_spec.rb000066400000000000000000000066001505113246500275700ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'mongo' require 'lite_spec_helper' describe Mongo::Crypt::ExplicitDecryptionContext do require_libmongocrypt include_context 'define shared FLE helpers' let(:credentials) { Mongo::Crypt::KMS::Credentials.new(kms_providers) } 
let(:mongocrypt) { Mongo::Crypt::Handle.new(credentials, logger: logger) } let(:context) { described_class.new(mongocrypt, io, value) } let(:logger) { nil } let(:io) { double("Mongo::ClientEncryption::IO") } # A binary string representing a value previously encrypted by libmongocrypt let(:encrypted_data) do "\x01\xDF2~\x89\xD2+N}\x84;i(\xE5\xF4\xBF \x024\xE5\xD2\n\x9E\x97\x9F\xAF\x9D\xC7\xC9\x1A\a\x87z\xAE_;r\xAC\xA9\xF6n\x1D\x0F\xB5\xB1#O\xB7\xCA\xEE$/\xF1\xFA\b\xA7\xEC\xDB\xB6\xD4\xED\xEAMw3+\xBBv\x18\x97\xF9\x99\xD5\x13@\x80y\n{\x19R\xD3\xF0\xA1C\x05\xF7)\x93\x9Bh\x8AA.\xBB\xD3&\xEA" end let(:value) do { 'v': BSON::Binary.new(encrypted_data, :ciphertext) } end describe '#initialize' do context 'when mongocrypt is initialized with local KMS provider options' do include_context 'with local kms_providers' it 'initializes context' do expect do context end.not_to raise_error end end context 'when mongocrypt is initialized with AWS KMS provider options' do include_context 'with AWS kms_providers' it 'initializes context' do expect do context end.not_to raise_error end end context 'when mongocrypt is initialized with Azure KMS provider options' do include_context 'with Azure kms_providers' it 'initializes context' do expect do context end.not_to raise_error end end context 'when mongocrypt is initialized with GCP KMS provider options' do include_context 'with GCP kms_providers' it 'initializes context' do expect do context end.not_to raise_error end end context 'when mongocrypt is initialized with KMIP KMS provider options' do include_context 'with KMIP kms_providers' it 'initializes context' do expect do context end.not_to raise_error end end context 'with verbose logging' do include_context 'with local kms_providers' before(:all) do # Logging from libmongocrypt requires the C library to be built with the -DENABLE_TRACE=ON # option; none of the pre-built packages on Evergreen have been built with logging enabled. # # It is still useful to be able to run these tests locally to confirm that logging is working # while debugging any problems. # # For now, skip this test by default and revisit once we have determined how we want to # package libmongocrypt with the Ruby driver (see: https://jira.mongodb.org/browse/RUBY-1966) skip "These tests require libmongocrypt to be built with the '-DENABLE_TRACE=ON' cmake option." + " They also require the MONGOCRYPT_TRACE environment variable to be set to 'ON'." end let(:logger) do ::Logger.new(STDOUT).tap do |logger| logger.level = ::Logger::DEBUG end end it 'receives log messages from libmongocrypt' do expect(logger).to receive(:debug).with(/mongocrypt_ctx_explicit_decrypt_init/) context end end end end mongo-ruby-driver-2.21.3/spec/mongo/crypt/explicit_encryption_context_spec.rb000066400000000000000000000172771505113246500276160ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'mongo' require 'lite_spec_helper' describe Mongo::Crypt::ExplicitEncryptionContext do require_libmongocrypt include_context 'define shared FLE helpers' let(:credentials) { Mongo::Crypt::KMS::Credentials.new(kms_providers) } let(:mongocrypt) { Mongo::Crypt::Handle.new(credentials, logger: logger) } let(:context) { described_class.new(mongocrypt, io, value, options) } let(:logger) { nil } let(:io) { double("Mongo::ClientEncryption::IO") } let(:value) { { 'v': 'Hello, world!' 
} } let(:options) do { key_id: key_id, key_alt_name: key_alt_name, algorithm: algorithm } end describe '#initialize' do shared_examples 'a functioning ExplicitEncryptionContext' do context 'with nil key_id and key_alt_name options' do let(:key_id) { nil } let(:key_alt_name) { nil } it 'raises an exception' do expect do context end.to raise_error(ArgumentError, /:key_id and :key_alt_name options cannot both be nil/) end end context 'with both key_id and key_alt_name options' do it 'raises an exception' do expect do context end.to raise_error(ArgumentError, /:key_id and :key_alt_name options cannot both be present/) end end context 'with invalid key_id' do let(:key_id) { 'random string' } let(:key_alt_name) { nil } it 'raises an exception' do expect do context end.to raise_error(ArgumentError, /Expected the :key_id option to be a BSON::Binary object/) end end context 'with invalid key_alt_name' do let(:key_id) { nil } let(:key_alt_name) { 5 } it 'raises an exception' do expect do context end.to raise_error(ArgumentError, /key_alt_name option must be a String/) end end context 'with valid key_alt_name' do let(:key_id) { nil } context 'with nil algorithm' do let(:algorithm) { nil } it 'raises exception' do expect do context end.to raise_error(Mongo::Error::CryptError, /passed null algorithm/) end end context 'with invalid algorithm' do let(:algorithm) { 'unsupported-algorithm' } it 'raises an exception' do expect do context end.to raise_error(Mongo::Error::CryptError, /unsupported algorithm/) end end it 'initializes context' do expect do context end.not_to raise_error end end context 'with valid key_id' do let(:key_alt_name) { nil } context 'with nil algorithm' do let(:algorithm) { nil } it 'raises exception' do expect do context end.to raise_error(Mongo::Error::CryptError, /passed null algorithm/) end end context 'with invalid algorithm' do let(:algorithm) { 'unsupported-algorithm' } it 'raises an exception' do expect do context end.to raise_error(Mongo::Error::CryptError, /unsupported algorithm/) end end it 'initializes context' do expect do context end.not_to raise_error end end context 'with query_type' do let(:key_alt_name) { nil } it 'raises exception' do expect do described_class.new( mongocrypt, io, value, options.merge(query_type: "equality") ) end.to raise_error(ArgumentError, /query_type is allowed only for "Indexed" or "Range" algorithm/) end end context 'with contention_factor' do let(:key_alt_name) { nil } it 'raises exception' do expect do described_class.new( mongocrypt, io, value, options.merge(contention_factor: 10) ) end.to raise_error(ArgumentError, /contention_factor is allowed only for "Indexed" or "Range" algorithm/) end end context 'with Indexed algorithm' do let(:algorithm) do 'Indexed' end let(:key_alt_name) do nil end it 'initializes context' do expect do described_class.new( mongocrypt, io, value, options.merge(contention_factor: 0) ) end.not_to raise_error end context 'with query_type' do it 'initializes context' do expect do described_class.new( mongocrypt, io, value, options.merge(query_type: "equality", contention_factor: 0) ) end.not_to raise_error end end context 'with contention_factor' do it 'initializes context' do expect do described_class.new( mongocrypt, io, value, options.merge(contention_factor: 10) ) end.not_to raise_error end end end end context 'when mongocrypt is initialized with AWS KMS provider options' do include_context 'with AWS kms_providers' it_behaves_like 'a functioning ExplicitEncryptionContext' end context 'when mongocrypt is initialized 
with Azure KMS provider options' do include_context 'with Azure kms_providers' it_behaves_like 'a functioning ExplicitEncryptionContext' end context 'when mongocrypt is initialized with GCP KMS provider options' do include_context 'with GCP kms_providers' it_behaves_like 'a functioning ExplicitEncryptionContext' end context 'when mongocrypt is initialized with KMIP KMS provider options' do include_context 'with KMIP kms_providers' it_behaves_like 'a functioning ExplicitEncryptionContext' end context 'when mongocrypt is initialized with local KMS provider options' do include_context 'with local kms_providers' it_behaves_like 'a functioning ExplicitEncryptionContext' end context 'with verbose logging' do include_context 'with local kms_providers' before(:all) do # Logging from libmongocrypt requires the C library to be built with the -DENABLE_TRACE=ON # option; none of the pre-built packages on Evergreen have been built with logging enabled. # # It is still useful to be able to run these tests locally to confirm that logging is working # while debugging any problems. # # For now, skip this test by default and revisit once we have determined how we want to # package libmongocrypt with the Ruby driver (see: https://jira.mongodb.org/browse/RUBY-1966) skip "These tests require libmongocrypt to be built with the '-DENABLE_TRACE=ON' cmake option." + " They also require the MONGOCRYPT_TRACE environment variable to be set to 'ON'." end let(:key_alt_name) { nil } let(:logger) do ::Logger.new(STDOUT).tap do |logger| logger.level = ::Logger::DEBUG end end it 'receives log messages from libmongocrypt' do expect(logger).to receive(:debug).with(/mongocrypt_ctx_setopt_key_id/) expect(logger).to receive(:debug).with(/mongocrypt_ctx_setopt_algorithm/) expect(logger).to receive(:debug).with(/mongocrypt_ctx_explicit_encrypt_init/) context end end end end mongo-ruby-driver-2.21.3/spec/mongo/crypt/handle_spec.rb000066400000000000000000000154121505113246500231770ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'mongo' require 'base64' require 'spec_helper' describe Mongo::Crypt::Handle do require_libmongocrypt include_context 'define shared FLE helpers' describe '#initialize' do let(:credentials) { Mongo::Crypt::KMS::Credentials.new(kms_providers) } let(:kms_tls_options) { {} } let(:handle) do described_class.new( credentials, kms_tls_options, schema_map: schema_map, schema_map_path: schema_map_path, bypass_query_analysis: bypass_query_analysis, crypt_shared_lib_path: crypt_shared_lib_path, crypt_shared_lib_required: crypt_shared_lib_required, explicit_encryption_only: explicit_encryption_only, ) end let(:schema_map) do nil end let(:schema_map_path) do nil end let(:bypass_query_analysis) do nil end let(:crypt_shared_lib_path) do nil end let(:crypt_shared_lib_required) do nil end let(:explicit_encryption_only) do nil end shared_examples 'a functioning Mongo::Crypt::Handle' do context 'with valid schema map' do it 'does not raise an exception' do expect { handle }.not_to raise_error end end context 'with valid schema map in a file' do let(:schema_map_path) do schema_map_file_path end context 'without schema_map set' do let(:schema_map) do nil end it 'does not raise an exception' do expect { handle }.not_to raise_error end end context 'with schema_map set' do it 'raises an exception' do expect { handle }.to raise_error(ArgumentError, /Cannot set both schema_map and schema_map_path options/) end end end context 'with invalid schema map' do let(:schema_map) { '' } it 'raises an 
exception' do expect { handle }.to raise_error(ArgumentError, /invalid schema_map; schema_map must be a Hash or nil/) end end context 'with nil schema map' do let(:schema_map) { nil } it 'does not raise an exception' do expect { handle }.not_to raise_error end end context 'with crypt_shared_lib_path' do min_server_version '6.0.0' context 'with correct path' do let(:crypt_shared_lib_path) do SpecConfig.instance.crypt_shared_lib_path end it 'loads the crypt shared lib' do expect(handle.crypt_shared_lib_version).not_to eq(0) end end context 'with incorrect path' do let(:crypt_shared_lib_path) do '/some/bad/path/mongo_crypt_v1.so' end it 'raises an exception' do expect { handle }.to raise_error(Mongo::Error::CryptError) end end end context 'with crypt_shared_lib_required' do min_server_version '6.0.0' context 'set to true' do let(:crypt_shared_lib_required) do true end context 'when shared lib is available' do let(:crypt_shared_lib_path) do SpecConfig.instance.crypt_shared_lib_path end it 'does not raise an exception' do expect { handle }.not_to raise_error end end context 'when shared lib is not available' do let(:crypt_shared_lib_path) do '/some/bad/path/mongo_crypt_v1.so' end it 'raises an exception' do expect { handle }.to raise_error(Mongo::Error::CryptError) end end end end context 'if bypass_query_analysis is true' do min_server_version '6.0.0' let(:bypass_query_analysis) do true end it 'does not load the crypt shared lib' do expect(Mongo::Crypt::Binding).not_to receive(:setopt_append_crypt_shared_lib_search_path) expect(handle.crypt_shared_lib_version).to eq(0) end end context 'if explicit_encryption_only is true' do min_server_version '6.0.0' let(:explicit_encryption_only) do true end it 'does not load the crypt shared lib' do expect(Mongo::Crypt::Binding).not_to receive(:setopt_append_crypt_shared_lib_search_path) expect(handle.crypt_shared_lib_version).to eq(0) end end end context 'local' do context 'with invalid local kms master key' do let(:kms_providers) do { local: { key: 'ruby' * 23 # NOT 96 bytes } } end it 'raises an exception' do expect { handle }.to raise_error(Mongo::Error::CryptError, /local key must be 96 bytes \(libmongocrypt error code 1\)/) end end context 'with valid local kms_providers' do include_context 'with local kms_providers' it_behaves_like 'a functioning Mongo::Crypt::Handle' end end context 'AWS' do context 'with valid AWS kms_providers' do include_context 'with AWS kms_providers' it_behaves_like 'a functioning Mongo::Crypt::Handle' end context 'with empty AWS kms_providers' do let(:kms_providers) do { aws: {} } end it 'instructs libmongocrypt to handle empty AWS credentials' do expect(Mongo::Crypt::Binding).to receive( :setopt_use_need_kms_credentials_state ).once.and_call_original handle end end end context 'Azure' do context 'with valid azure kms_providers' do include_context 'with Azure kms_providers' it_behaves_like 'a functioning Mongo::Crypt::Handle' end end context 'GCP' do context 'with valid gcp kms_providers' do include_context 'with GCP kms_providers' it_behaves_like 'a functioning Mongo::Crypt::Handle' end end context 'GCP with PEM private key' do require_mri context 'with valid gcp kms_providers' do include_context 'with GCP kms_providers' let(:kms_providers) do { gcp: { email: SpecConfig.instance.fle_gcp_email, private_key: OpenSSL::PKey.read( Base64.decode64(SpecConfig.instance.fle_gcp_private_key) ).export, } } end it_behaves_like 'a functioning Mongo::Crypt::Handle' end end context 'KMIP' do context 'with valid kmip kms_providers' do 
include_context 'with KMIP kms_providers' it_behaves_like 'a functioning Mongo::Crypt::Handle' end end end end mongo-ruby-driver-2.21.3/spec/mongo/crypt/helpers/000077500000000000000000000000001505113246500220445ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/mongo/crypt/helpers/mongo_crypt_spec_helper.rb000066400000000000000000000061131505113246500273030ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all module MongoCryptSpecHelper def bind_crypto_hooks(mongocrypt) Mongo::Crypt::Binding.mongocrypt_setopt_crypto_hooks( mongocrypt, method(:aes_encrypt), method(:aes_decrypt), method(:random), method(:hmac_sha_512), method(:hmac_sha_256), method(:hmac_hash), nil ) end module_function :bind_crypto_hooks def mongocrypt_binary_t_from(string) bytes = string.unpack('C*') p = FFI::MemoryPointer .new(bytes.size) .write_array_of_type(FFI::TYPE_UINT8, :put_uint8, bytes) Mongo::Crypt::Binding.mongocrypt_binary_new_from_data(p, bytes.length) end module_function :mongocrypt_binary_t_from private def string_from_binary(binary_p) str_p = Mongo::Crypt::Binding.get_binary_data_direct(binary_p) len = Mongo::Crypt::Binding.get_binary_len_direct(binary_p) str_p.read_string(len) end module_function :string_from_binary def write_to_binary(binary_p, data) str_p = Mongo::Crypt::Binding.get_binary_data_direct(binary_p) str_p.put_bytes(0, data) end module_function :write_to_binary def aes_encrypt(_, key_binary_p, iv_binary_p, input_binary_p, output_binary_p, response_length_p, status_p) key = string_from_binary(key_binary_p) iv = string_from_binary(iv_binary_p) input = string_from_binary(input_binary_p) output = Mongo::Crypt::Hooks.aes(key, iv, input) write_to_binary(output_binary_p, output) response_length_p.write_int(output.length) true end module_function :aes_encrypt def aes_decrypt(_, key_binary_p, iv_binary_p, input_binary_p, output_binary_p, response_length_p, status_p) key = string_from_binary(key_binary_p) iv = string_from_binary(iv_binary_p) input = string_from_binary(input_binary_p) output = Mongo::Crypt::Hooks.aes(key, iv, input, decrypt: true) write_to_binary(output_binary_p, output) response_length_p.write_int(output.length) true end module_function :aes_decrypt def random(_, output_binary_p, num_bytes, status_p) output = Mongo::Crypt::Hooks.random(num_bytes) write_to_binary(output_binary_p, output) true end module_function :random def hmac_sha_512(_, key_binary_p, input_binary_p, output_binary_p, status_p) key = string_from_binary(key_binary_p) input = string_from_binary(input_binary_p) output = Mongo::Crypt::Hooks.hmac_sha('SHA512', key, input) write_to_binary(output_binary_p, output) true end module_function :hmac_sha_512 def hmac_sha_256(_, key_binary_p, input_binary_p, output_binary_p, status_p) key = string_from_binary(key_binary_p) input = string_from_binary(input_binary_p) output = Mongo::Crypt::Hooks.hmac_sha('SHA256', key, input) write_to_binary(output_binary_p, output) true end module_function :hmac_sha_256 def hmac_hash(_, input_binary_p, output_binary_p, status_p) input = string_from_binary(input_binary_p) output = Mongo::Crypt::Hooks.hash_sha256(input) write_to_binary(output_binary_p, output) true end module_function :hmac_hash end mongo-ruby-driver-2.21.3/spec/mongo/crypt/hooks_spec.rb000066400000000000000000000046661505113246500231000ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'mongo' require 'base64' require 'lite_spec_helper' describe Mongo::Crypt::Hooks do context '#rsaes_pkcs_signature' do 
let(:private_key_data_b64) do 'MIIEvgIBADANBgkqhkiG9w0BAQEFAASCBKgwggSkAgEAAoIBAQC4JOyv5z05cL18ztpknRC7CFY2gYol4DAKerdVUoDJxCTmFMf39dVUEqD0WDiw/qcRtSO1/FRut08PlSPmvbyKetsLoxlpS8lukSzEFpFK7+L+R4miFOl6HvECyg7lbC1H/WGAhIz9yZRlXhRo9qmO/fB6PV9IeYtU+1xYuXicjCDPp36uuxBAnCz7JfvxJ3mdVc0vpSkbSb141nWuKNYR1mgyvvL6KzxO6mYsCo4hRAdhuizD9C4jDHk0V2gDCFBk0h8SLEdzStX8L0jG90/Og4y7J1b/cPo/kbYokkYisxe8cPlsvGBf+rZex7XPxc1yWaP080qeABJb+S88O//LAgMBAAECggEBAKVxP1m3FzHBUe2NZ3fYCc0Qa2zjK7xl1KPFp2u4CU+9sy0oZJUqQHUdm5CMprqWwIHPTftWboFenmCwrSXFOFzujljBO7Z3yc1WD3NJl1ZNepLcsRJ3WWFH5V+NLJ8Bdxlj1DMEZCwr7PC5+vpnCuYWzvT0qOPTl9RNVaW9VVjHouJ9Fg+s2DrShXDegFabl1iZEDdI4xScHoYBob06A5lw0WOCTayzw0Naf37lM8Y4psRAmI46XLiF/Vbuorna4hcChxDePlNLEfMipICcuxTcei1RBSlBa2t1tcnvoTy6cuYDqqImRYjp1KnMKlKQBnQ1NjS2TsRGm+F0FbreVCECgYEA4IDJlm8q/hVyNcPe4OzIcL1rsdYN3bNm2Y2O/YtRPIkQ446ItyxD06d9VuXsQpFp9jNACAPfCMSyHpPApqlxdc8z/xATlgHkcGezEOd1r4E7NdTpGg8y6Rj9b8kVlED6v4grbRhKcU6moyKUQT3+1B6ENZTOKyxuyDEgTwZHtFECgYEA0fqdv9h9s77d6eWmIioP7FSymq93pC4umxf6TVicpjpMErdD2ZfJGulN37dq8FOsOFnSmFYJdICj/PbJm6p1i8O21lsFCltEqVoVabJ7/0alPfdG2U76OeBqI8ZubL4BMnWXAB/VVEYbyWCNpQSDTjHQYs54qa2I0dJB7OgJt1sCgYEArctFQ02/7H5Rscl1yo3DBXO94SeiCFSPdC8f2Kt3MfOxvVdkAtkjkMACSbkoUsgbTVqTYSEOEc2jTgR3iQ13JgpHaFbbsq64V0QP3TAxbLIQUjYGVgQaF1UfLOBv8hrzgj45z/ST/G80lOl595+0nCUbmBcgG1AEWrmdF0/3RmECgYAKvIzKXXB3+19vcT2ga5Qq2l3TiPtOGsppRb2XrNs9qKdxIYvHmXo/9QP1V3SRW0XoD7ez8FpFabp42cmPOxUNk3FK3paQZABLxH5pzCWI9PzIAVfPDrm+sdnbgG7vAnwfL2IMMJSA3aDYGCbF9EgefG+STcpfqq7fQ6f5TBgLFwKBgCd7gn1xYL696SaKVSm7VngpXlczHVEpz3kStWR5gfzriPBxXgMVcWmcbajRser7ARpCEfbxM1UJyv6oAYZWVSNErNzNVb4POqLYcCNySuC6xKhs9FrEQnyKjyk8wI4VnrEMGrQ8e+qYSwYk9Gh6dKGoRMAPYVXQAO0fIsHF/T0a' end let(:signature) do Base64.decode64( 'VocBRhpMmQ2XCzVehWSqheQLnU889gf3dhU4AnVnQTJjsKx/CM23qKDPkZDd2A/BnQsp99SN7ksIX5Raj0TPwyN5OCN/YrNFNGoOFlTsGhgP/hyE8X3Duiq6sNO0SMvRYNPFFGlJFsp1Fw3Z94eYMg4/Wpw5s4+Jo5Zm/qY7aTJIqDKDQ3CNHLeJgcMUOc9sz01/GzoUYKDVODHSxrYEk5ireFJFz9vP8P7Ha+VDUZuQIQdXer9NBbGFtYmWprY3nn4D3Dw93Sn0V0dIqYeIo91oKyslvMebmUM95S2PyIJdEpPb2DJDxjvX/0LLwSWlSXRWy9gapWoBkb4ynqZBsg==' ) end let(:input) do 'data to sign' end it 'signs data with private key' do expect( subject.rsaes_pkcs_signature(private_key_data_b64, input) ).to eq(signature) end end end mongo-ruby-driver-2.21.3/spec/mongo/crypt/kms/000077500000000000000000000000001505113246500211745ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/mongo/crypt/kms/azure/000077500000000000000000000000001505113246500223225ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/mongo/crypt/kms/azure/credentials_retriever_spec.rb000066400000000000000000000052711505113246500302520ustar00rootroot00000000000000# frozen_string_literal: true require 'lite_spec_helper' describe Mongo::Crypt::KMS::Azure::CredentialsRetriever do # The tests here require fake azure server, which is started in FLE # configurations on evergreen. If you want to run these tests locally, # you need to start the server manually. See .evergreen/run-tests.sh # for the command to start the server. before do skip 'These tests require fake azure server to be running' unless SpecConfig.instance.fle? 
end let(:metadata_host) do 'localhost:8080' end describe '.fetch_access_token' do context 'when response is valid' do let(:token) do described_class.fetch_access_token(metadata_host: metadata_host) end it 'returns access token' do expect(token.access_token).to eq('magic-cookie') end it 'returns expiration time' do expect(token.expires_in).to eq(70) end end context 'when response contains empty json' do it 'raises error' do expect do described_class.fetch_access_token( extra_headers: { 'X-MongoDB-HTTP-TestParams' => 'case=empty-json' }, metadata_host: metadata_host ) end.to raise_error(Mongo::Crypt::KMS::CredentialsNotFound) end end context 'when response contains invalid json' do it 'raises error' do expect do described_class.fetch_access_token( extra_headers: { 'X-MongoDB-HTTP-TestParams' => 'case=bad-json' }, metadata_host: metadata_host ) end.to raise_error(Mongo::Crypt::KMS::CredentialsNotFound) end end context 'when metadata host responds with 500' do it 'raises error' do expect do described_class.fetch_access_token( extra_headers: { 'X-MongoDB-HTTP-TestParams' => 'case=500' }, metadata_host: metadata_host ) end.to raise_error(Mongo::Crypt::KMS::CredentialsNotFound) end end context 'when metadata host responds with 404' do it 'raises error' do expect do described_class.fetch_access_token( extra_headers: { 'X-MongoDB-HTTP-TestParams' => 'case=404' }, metadata_host: metadata_host ) end.to raise_error(Mongo::Crypt::KMS::CredentialsNotFound) end end context 'when metadata host is slow' do # On JRuby Timeout.timeout does not work in this case. fails_on_jruby it 'raises error' do expect do described_class.fetch_access_token( extra_headers: { 'X-MongoDB-HTTP-TestParams' => 'case=slow' }, metadata_host: metadata_host ) end.to raise_error(Mongo::Crypt::KMS::CredentialsNotFound) end end end end mongo-ruby-driver-2.21.3/spec/mongo/crypt/kms/credentials_spec.rb000066400000000000000000000252701505113246500250360ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'mongo' require 'lite_spec_helper' describe Mongo::Crypt::KMS::Credentials do require_libmongocrypt include_context 'define shared FLE helpers' context 'AWS' do let (:params) do Mongo::Crypt::KMS::AWS::Credentials.new(kms_provider) end %i(access_key_id secret_access_key).each do |key| context "with nil AWS #{key}" do let(:kms_provider) do { access_key_id: SpecConfig.instance.fle_aws_key, secret_access_key: SpecConfig.instance.fle_aws_secret, }.update({key => nil}) end it 'raises an exception' do expect do params end.to raise_error(ArgumentError, /The #{key} option must be a String with at least one character; currently have nil/) end end context "with non-string AWS #{key}" do let(:kms_provider) do { access_key_id: SpecConfig.instance.fle_aws_key, secret_access_key: SpecConfig.instance.fle_aws_secret, }.update({key => 5}) end it 'raises an exception' do expect do params end.to raise_error(ArgumentError, /The #{key} option must be a String with at least one character; currently have 5/) end end context "with empty string AWS #{key}" do let(:kms_provider) do { access_key_id: SpecConfig.instance.fle_aws_key, secret_access_key: SpecConfig.instance.fle_aws_secret, }.update({key => ''}) end it 'raises an exception' do expect do params end.to raise_error(ArgumentError, /The #{key} option must be a String with at least one character; it is currently an empty string/) end end end context 'with valid params' do let(:kms_provider) do { access_key_id: SpecConfig.instance.fle_aws_key, secret_access_key: 
SpecConfig.instance.fle_aws_secret, } end it 'returns valid libmongocrypt credentials' do expect(params.to_document).to eq( BSON::Document.new({ accessKeyId: SpecConfig.instance.fle_aws_key, secretAccessKey: SpecConfig.instance.fle_aws_secret, }) ) end end end context 'Azure' do let (:params) do Mongo::Crypt::KMS::Azure::Credentials.new(kms_provider) end %i(tenant_id client_id client_secret).each do |param| context "with nil azure #{param}" do let(:kms_provider) do { tenant_id: SpecConfig.instance.fle_azure_tenant_id, client_id: SpecConfig.instance.fle_azure_client_id, client_secret: SpecConfig.instance.fle_azure_client_secret }.update(param => nil) end it 'raises an exception' do expect do params end.to raise_error(ArgumentError, /The #{param} option must be a String with at least one character; currently have nil/) end end context "with non-string azure #{param}" do let(:kms_provider) do { tenant_id: SpecConfig.instance.fle_azure_tenant_id, client_id: SpecConfig.instance.fle_azure_client_id, client_secret: SpecConfig.instance.fle_azure_client_secret }.update(param => 5) end it 'raises an exception' do expect do params end.to raise_error(ArgumentError, /The #{param} option must be a String with at least one character; currently have 5/) end end context "with empty string azure #{param}" do let(:kms_provider) do { tenant_id: SpecConfig.instance.fle_azure_tenant_id, client_id: SpecConfig.instance.fle_azure_client_id, client_secret: SpecConfig.instance.fle_azure_client_secret }.update(param => '') end it 'raises an exception' do expect do params end.to raise_error(ArgumentError, /The #{param} option must be a String with at least one character; it is currently an empty string/) end end end context "with non-string azure identity_platform_endpoint" do let(:kms_provider) do { tenant_id: SpecConfig.instance.fle_azure_tenant_id, client_id: SpecConfig.instance.fle_azure_client_id, client_secret: SpecConfig.instance.fle_azure_client_secret, identity_platform_endpoint: 5 } end it 'raises an exception' do expect do params end.to raise_error(ArgumentError, /The identity_platform_endpoint option must be a String with at least one character; currently have 5/) end end context "with empty string azure identity_platform_endpoint" do let(:kms_provider) do { tenant_id: SpecConfig.instance.fle_azure_tenant_id, client_id: SpecConfig.instance.fle_azure_client_id, client_secret: SpecConfig.instance.fle_azure_client_secret, identity_platform_endpoint: '' } end it 'raises an exception' do expect do params end.to raise_error(ArgumentError, /The identity_platform_endpoint option must be a String with at least one character; it is currently an empty string/) end end context 'with valid params' do let(:kms_provider) do { tenant_id: SpecConfig.instance.fle_azure_tenant_id, client_id: SpecConfig.instance.fle_azure_client_id, client_secret: SpecConfig.instance.fle_azure_client_secret, } end it 'returns valid libmongocrypt credentials' do expect(params.to_document).to eq( BSON::Document.new({ tenantId: SpecConfig.instance.fle_azure_tenant_id, clientId: SpecConfig.instance.fle_azure_client_id, clientSecret: SpecConfig.instance.fle_azure_client_secret, }) ) end end end context 'GCP' do let (:params) do Mongo::Crypt::KMS::GCP::Credentials.new(kms_provider) end %i(email private_key).each do |key| context "with nil GCP #{key}" do let(:kms_provider) do { email: SpecConfig.instance.fle_gcp_email, private_key: SpecConfig.instance.fle_gcp_private_key, }.update({key => nil}) end it 'raises an exception' do expect do params end.to 
raise_error(ArgumentError, /The #{key} option must be a String with at least one character; currently have nil/) end end context "with non-string GCP #{key}" do let(:kms_provider) do { email: SpecConfig.instance.fle_gcp_email, private_key: SpecConfig.instance.fle_gcp_private_key, }.update({key => 5}) end it 'raises an exception' do expect do params end.to raise_error(ArgumentError, /The #{key} option must be a String with at least one character; currently have 5/) end end context "with empty string GCP #{key}" do let(:kms_provider) do { email: SpecConfig.instance.fle_gcp_email, private_key: SpecConfig.instance.fle_gcp_private_key, }.update({key => ''}) end it 'raises an exception' do expect do params end.to raise_error(ArgumentError, /The #{key} option must be a String with at least one character; it is currently an empty string/) end end end context 'with valid params' do let(:kms_provider) do { email: SpecConfig.instance.fle_gcp_email, private_key: SpecConfig.instance.fle_gcp_private_key, } end it 'returns valid libmongocrypt credentials' do expect(params.to_document).to eq( BSON::Document.new({ email: SpecConfig.instance.fle_gcp_email, privateKey: BSON::Binary.new(SpecConfig.instance.fle_gcp_private_key, :generic), }) ) end context 'PEM private key' do require_mri before(:all) do if RUBY_VERSION < "3.0" skip "Ruby version 3.0 or higher required" end end let(:private_key_pem) do OpenSSL::PKey.read( Base64.decode64(SpecConfig.instance.fle_gcp_private_key) ).export end let(:kms_provider) do { email: SpecConfig.instance.fle_gcp_email, private_key: private_key_pem, } end it 'returns valid libmongocrypt credentials' do private_key = params.to_document[:privateKey] expect(Base64.decode64(private_key.data)).to eq( Base64.decode64(SpecConfig.instance.fle_gcp_private_key) ) end end end context 'with access token' do let(:kms_provider) do { access_token: 'access_token' } end it 'returns valid libmongocrypt credentials' do expect(params.to_document).to eq( BSON::Document.new({ accessToken: 'access_token' }) ) end end end context 'KMIP' do let (:params) do Mongo::Crypt::KMS::KMIP::Credentials.new(kms_provider) end context "with nil KMIP endpoint" do let(:kms_provider) do { endpoint: nil } end it 'raises an exception' do expect do params end.to raise_error(ArgumentError, /The endpoint option must be a String with at least one character; currently have nil/) end end context "with non-string KMIP endpoint" do let(:kms_provider) do { endpoint: 5, } end it 'raises an exception' do expect do params end.to raise_error(ArgumentError, /The endpoint option must be a String with at least one character; currently have 5/) end end context "with empty string KMIP endpoint" do let(:kms_provider) do { endpoint: '', } end it 'raises an exception' do expect do params end.to raise_error(ArgumentError, /The endpoint option must be a String with at least one character; it is currently an empty string/) end end context 'with valid params' do let(:kms_provider) do { endpoint: SpecConfig.instance.fle_kmip_endpoint, } end it 'returns valid libmongocrypt credentials' do expect(params.to_document).to eq( BSON::Document.new({ endpoint: SpecConfig.instance.fle_kmip_endpoint, }) ) end end end end mongo-ruby-driver-2.21.3/spec/mongo/crypt/kms_spec.rb000066400000000000000000000030651505113246500225370ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'mongo' require 'lite_spec_helper' describe Mongo::Crypt::KMS do context 'Validations' do context '.validate_tls_options' do it 'returns valid 
options for nil parameter' do expect( Mongo::Crypt::KMS::Validations.validate_tls_options(nil) ).to eq({}) end it 'accepts empty hash' do expect( Mongo::Crypt::KMS::Validations.validate_tls_options({}) ).to eq({}) end it 'does not allow disabled ssl' do expect { Mongo::Crypt::KMS::Validations.validate_tls_options( { aws: {ssl: false} } ) }.to raise_error(ArgumentError, /TLS is required/) end it 'does not allow insecure tls options' do %i( ssl_verify_certificate ssl_verify_hostname ).each do |insecure_opt| expect { Mongo::Crypt::KMS::Validations.validate_tls_options( { aws: {insecure_opt => false} } ) }.to raise_error(ArgumentError, /Insecure TLS options prohibited/) end end it 'allows valid options' do expect do Mongo::Crypt::KMS::Validations.validate_tls_options( { aws: { ssl: true, ssl_cert_string: 'Content is not validated', ssl_verify_ocsp_endpoint: false } } ) end.not_to raise_error end end end end mongo-ruby-driver-2.21.3/spec/mongo/crypt/status_spec.rb000066400000000000000000000064271505113246500232750ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'lite_spec_helper' describe Mongo::Crypt::Status do require_libmongocrypt let(:status) { described_class.new } let(:label) { :error_client } let(:code) { 401 } let(:message) { 'Unauthorized' } let(:status_with_info) do status.update(label, code, message) end describe '#initialize' do it 'doesn\'t throw an error' do expect { status }.not_to raise_error end end describe '#self.from_pointer' do let(:pointer) { Mongo::Crypt::Binding.mongocrypt_status_new } let(:status) { described_class.from_pointer(pointer) } after do Mongo::Crypt::Binding.mongocrypt_status_destroy(pointer) end it 'creates a status from the pointer passed in' do expect do status end.not_to raise_error expect(status.ref).to eq(pointer) end end describe '#update' do context 'with invalid label' do it 'raises an exception' do expect do status.update(:random_label, 0, '') end.to raise_error(ArgumentError, /random_label is an invalid value for a Mongo::Crypt::Status label/) end it 'works with an empty message' do status.update(:ok, 0, '') expect(status.message).to eq('') end end end describe '#label' do context 'new status' do it 'returns :ok' do expect(status.label).to eq(:ok) end end context 'status with info' do it 'returns label' do expect(status_with_info.label).to eq(label) end end end describe '#code' do context 'new status' do it 'returns 0' do expect(status.code).to eq(0) end end context 'status with info' do it 'returns code' do expect(status_with_info.code).to eq(code) end end end describe '#message' do context 'new status' do it 'returns an empty string' do expect(status.message).to eq('') end end context 'status with info' do it 'returns a message' do expect(status_with_info.message).to eq(message) end end end describe '#ok?' 
do context 'new status' do it 'returns true' do expect(status.ok?).to be true end end context 'status with info' do it 'returns false' do expect(status_with_info.ok?).to be false end end end describe '#raise_crypt_error' do context 'when status is ok' do before do status.update(:ok, 0, '') end it 'does not raise exception' do expect do status.raise_crypt_error end.not_to raise_error end end context 'when status is :error_kms' do before do status.update(:error_kms, 100, 'KMS error message') end it 'raises exception' do expect do status.raise_crypt_error end.to raise_error(Mongo::Error::KmsError, 'KMS error message (libmongocrypt error code 100)') end end context 'when status is :error_client' do before do status.update(:error_client, 2, 'Client Error') end it 'raises exception' do expect do status.raise_crypt_error end.to raise_error(Mongo::Error::CryptError, 'Client Error (libmongocrypt error code 2)') end end end end mongo-ruby-driver-2.21.3/spec/mongo/crypt_spec.rb000066400000000000000000000011761505113246500217460ustar00rootroot00000000000000# frozen_string_literal: true require 'spec_helper' describe Mongo::Crypt do describe '.validate_ffi!' do context 'when ffi is available' do context 'when ffi is loaded' do it 'does not raise' do expect do described_class.validate_ffi! end.not_to raise_error end end end # There is no reasonably simple way to test the path where ffi is not # available. The ffi gem is a part of our standard test dependencies, so # it's always available. So, we would need a dedicated configuration # just to test this feature; it does not seem worth the overhead. end end mongo-ruby-driver-2.21.3/spec/mongo/cursor/000077500000000000000000000000001505113246500205565ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/mongo/cursor/builder/000077500000000000000000000000001505113246500222045ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/mongo/cursor/builder/get_more_command_spec.rb000066400000000000000000000110161505113246500270410ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # TODO convert, move or delete these tests as part of RUBY-2706. 
=begin require 'spec_helper' describe Mongo::Cursor::Builder::GetMoreCommand do describe '#specification' do let(:reply) do Mongo::Protocol::Reply.allocate.tap do |reply| allow(reply).to receive(:cursor_id).and_return(8000) end end let(:description) do Mongo::Server::Description.new( double('description address'), { 'minWireVersion' => 0, 'maxWireVersion' => 2 } ) end let(:result) do Mongo::Operation::Result.new(reply, description) end let(:cursor) do Mongo::Cursor.new(view, result, authorized_primary) end let(:builder) do described_class.new(cursor) end let(:specification) do builder.specification end let(:selector) do specification[:selector] end context 'when the operation has a session' do let(:view) do Mongo::Collection::View.new(authorized_collection) end let(:session) do double('session') end let(:builder) do described_class.new(cursor, session) end it 'adds the session to the specification' do expect(builder.specification[:session]).to be(session) end end shared_examples_for 'a getMore command builder' do it 'includes the database name' do expect(specification[:db_name]).to eq(SpecConfig.instance.test_db) end it 'includes getMore with cursor id' do expect(selector[:getMore]).to eq(BSON::Int64.new(8000)) end it 'includes the collection name' do expect(selector[:collection]).to eq(TEST_COLL) end end context 'when the query is standard' do let(:view) do Mongo::Collection::View.new(authorized_collection) end it_behaves_like 'a getMore command builder' it 'does not include max time' do expect(selector[:maxTimeMS]).to be_nil end it 'does not include batch size' do expect(selector[:batchSize]).to be_nil end end context 'when the query has a batch size' do let(:view) do Mongo::Collection::View.new(authorized_collection, {}, batch_size: 10) end it_behaves_like 'a getMore command builder' it 'does not include max time' do expect(selector[:maxTimeMS]).to be_nil end it 'includes batch size' do expect(selector[:batchSize]).to eq(10) end end context 'when a max await time is specified' do context 'when the cursor is not tailable' do let(:view) do Mongo::Collection::View.new(authorized_collection, {}, max_await_time_ms: 100) end it_behaves_like 'a getMore command builder' it 'does not include max time' do expect(selector[:maxTimeMS]).to be_nil end it 'does not include max await time' do expect(selector[:maxAwaitTimeMS]).to be_nil end it 'does not include batch size' do expect(selector[:batchSize]).to be_nil end end context 'when the cursor is tailable' do context 'when await data is true' do let(:view) do Mongo::Collection::View.new( authorized_collection, {}, await_data: true, tailable: true, max_await_time_ms: 100 ) end it_behaves_like 'a getMore command builder' it 'includes max time' do expect(selector[:maxTimeMS]).to eq(100) end it 'does not include max await time' do expect(selector[:maxAwaitTimeMS]).to be_nil end it 'does not include batch size' do expect(selector[:batchSize]).to be_nil end end context 'when await data is false' do let(:view) do Mongo::Collection::View.new( authorized_collection, {}, tailable: true, max_await_time_ms: 100 ) end it_behaves_like 'a getMore command builder' it 'does not include max time' do expect(selector[:maxTimeMS]).to be_nil end it 'does not include max await time' do expect(selector[:maxAwaitTimeMS]).to be_nil end it 'does not include batch size' do expect(selector[:batchSize]).to be_nil end end end end end end =end 
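# Until the conversion tracked in RUBY-2706 happens, the disabled examples
# above document the expected shape of the getMore command. As a sketch
# (cursor id, collection name and values here are illustrative, taken from
# the expectations above):
#
#   {
#     getMore: BSON::Int64.new(8000),  # cursor id from the initial reply
#     collection: 'test_collection',   # collection the cursor was opened on
#     batchSize: 10,                   # present only when the view sets one
#     maxTimeMS: 100                   # present only for tailable + await_data cursors
#   }
#
# together with db_name and, when provided, the session in the operation
# specification.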
mongo-ruby-driver-2.21.3/spec/mongo/cursor/builder/op_get_more_spec.rb000066400000000000000000000026621505113246500260500ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # TODO convert, move or delete these tests as part of RUBY-2706. =begin require 'spec_helper' describe Mongo::Cursor::Builder::OpGetMore do describe '#specification' do let(:reply) do Mongo::Protocol::Reply.allocate.tap do |reply| allow(reply).to receive(:cursor_id).and_return(8000) end end let(:description) do Mongo::Server::Description.new( double('description address'), { 'minWireVersion' => 0, 'maxWireVersion' => 2 } ) end let(:result) do Mongo::Operation::Result.new(reply, description) end let(:view) do Mongo::Collection::View.new( authorized_collection, {}, tailable: true, max_time_ms: 100 ) end let(:cursor) do Mongo::Cursor.new(view, result, authorized_primary) end let(:builder) do described_class.new(cursor) end let(:specification) do builder.specification end it 'includes to return' do expect(specification[:to_return]).to eq(0) end it 'includes the cursor id' do expect(specification[:cursor_id]).to eq(BSON::Int64.new(8000)) end it 'includes the database name' do expect(specification[:db_name]).to eq(SpecConfig.instance.test_db) end it 'includes the collection name' do expect(specification[:coll_name]).to eq(TEST_COLL) end end end =end mongo-ruby-driver-2.21.3/spec/mongo/cursor_spec.rb000066400000000000000000000566621505113246500221340ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe Mongo::Cursor do let(:authorized_collection) do authorized_client['cursor_spec_collection'] end let(:context) do Mongo::Operation::Context.new(client: authorized_client) end before do authorized_collection.drop end describe '#initialize' do let(:server) do view.send(:server_selector).select_server(authorized_client.cluster) end let(:reply) do view.send(:send_initial_query, server, context) end let(:cursor) do described_class.new(view, reply, server) end before do documents = [{test: 1}] * 10 authorized_collection.insert_many(documents) end shared_context 'with initialized pool' do before do ClientRegistry.instance.close_all_clients # These tests really like creating pools (and thus scheduling # the pools' finalizers) when querying collections. # Deal with this by pre-creating pools for all known servers. 
cluster = authorized_collection.client.cluster cluster.next_primary cluster.servers.each do |server| reset_pool(server) end end after do authorized_client.cluster.servers.each do |server| if pool = server.pool_internal pool.close end end end end context 'cursor exhausted by initial result' do include_context 'with initialized pool' require_no_linting let(:view) do Mongo::Collection::View.new(authorized_collection) end it 'does not schedule the finalizer' do # Due to https://jira.mongodb.org/browse/RUBY-1772, restrict # the scope of the assertion RSpec::Mocks.with_temporary_scope do expect(ObjectSpace).not_to receive(:define_finalizer) cursor end end end context 'cursor not exhausted by initial result' do include_context 'with initialized pool' require_no_linting let(:view) do Mongo::Collection::View.new(authorized_collection, {}, batch_size: 2) end it 'schedules the finalizer' do # Due to https://jira.mongodb.org/browse/RUBY-1772, restrict # the scope of the assertion RSpec::Mocks.with_temporary_scope do expect(ObjectSpace).to receive(:define_finalizer) cursor end end end context 'server is unknown' do require_topology :single, :replica_set, :sharded let(:server) do view.send(:server_selector).select_server(authorized_client.cluster).tap do |server| authorized_client.cluster.close server.unknown! end end let(:view) do Mongo::Collection::View.new(authorized_collection) end it 'raises ServerNotUsable' do lambda do cursor end.should raise_error(Mongo::Error::ServerNotUsable) end end end describe '#each' do let(:server) do view.send(:server_selector).select_server(authorized_client.cluster) end let(:reply) do view.send(:send_initial_query, server, context) end let(:cursor) do described_class.new(view, reply, server) end context 'when no options are provided to the view' do let(:view) do Mongo::Collection::View.new(authorized_collection) end context 'when the initial query retrieves all documents' do let(:documents) do (1..10).map{ |i| { field: "test#{i}" }} end before do authorized_collection.insert_many(documents) end it 'returns the correct amount' do expect(cursor.to_a.count).to eq(10) end it 'iterates the documents' do cursor.each do |doc| expect(doc).to have_key('field') end end end context 'when the initial query does not retrieve all documents' do let(:documents) do (1..102).map{ |i| { field: "test#{i}" }} end before do authorized_collection.insert_many(documents) end context 'when a getMore gets a socket error' do let(:op) do double('operation') end before do expect(cursor).to receive(:get_more_operation).and_return(op).ordered if SpecConfig.instance.connect_options[:connect] == :load_balanced expect(op).to receive(:execute_with_connection).and_raise(Mongo::Error::SocketError).ordered else expect(op).to receive(:execute).and_raise(Mongo::Error::SocketError).ordered end end it 'raises the error' do expect do cursor.each do |doc| end end.to raise_error(Mongo::Error::SocketError) end end context 'when no errors occur' do it 'returns the correct amount' do expect(cursor.to_a.count).to eq(102) end it 'iterates the documents' do cursor.each do |doc| expect(doc).to have_key('field') end end end end end context 'when options are provided to the view' do let(:documents) do (1..10).map{ |i| { field: "test#{i}" }} end before do authorized_collection.drop authorized_collection.insert_many(documents) end context 'when a limit is provided' do context 'when no batch size is provided' do context 'when the limit is positive' do let(:view) do Mongo::Collection::View.new(authorized_collection, {}, :limit 
=> 2) end it 'returns the correct amount' do expect(cursor.to_a.count).to eq(2) end it 'iterates the documents' do cursor.each do |doc| expect(doc).to have_key('field') end end end context 'when the limit is negative' do let(:view) do Mongo::Collection::View.new(authorized_collection, {}, :limit => -2) end it 'returns the positive number of documents' do expect(cursor.to_a.count).to eq(2) end it 'iterates the documents' do cursor.each do |doc| expect(doc).to have_key('field') end end end context 'when the limit is zero' do let(:view) do Mongo::Collection::View.new(authorized_collection, {}, :limit => 0) end it 'returns all documents' do expect(cursor.to_a.count).to eq(10) end it 'iterates the documents' do cursor.each do |doc| expect(doc).to have_key('field') end end end end context 'when a batch size is provided' do context 'when the batch size is less than the limit' do let(:view) do Mongo::Collection::View.new( authorized_collection, {}, :limit => 5, :batch_size => 3 ) end it 'returns the limited number of documents' do expect(cursor.to_a.count).to eq(5) end it 'iterates the documents' do cursor.each do |doc| expect(doc).to have_key('field') end end end context 'when the batch size is more than the limit' do let(:view) do Mongo::Collection::View.new( authorized_collection, {}, :limit => 5, :batch_size => 7 ) end it 'returns the limited number of documents' do expect(cursor.to_a.count).to eq(5) end it 'iterates the documents' do cursor.each do |doc| expect(doc).to have_key('field') end end end context 'when the batch size is the same as the limit' do let(:view) do Mongo::Collection::View.new( authorized_collection, {}, :limit => 5, :batch_size => 5 ) end it 'returns the limited number of documents' do expect(cursor.to_a.count).to eq(5) end it 'iterates the documents' do cursor.each do |doc| expect(doc).to have_key('field') end end end end end end context 'when the cursor is not fully iterated and is garbage collected' do let(:documents) do (1..6).map{ |i| { field: "test#{i}" }} end let(:cluster) do authorized_client.cluster end before do authorized_collection.insert_many(documents) cluster.schedule_kill_cursor( cursor.kill_spec(cursor.instance_variable_get(:@server)) ) end let(:view) do Mongo::Collection::View.new( authorized_collection, {}, :batch_size => 2, ) end let!(:cursor) do view.to_enum.next view.instance_variable_get(:@cursor) end it 'schedules a kill cursors op' do cluster.instance_variable_get(:@periodic_executor).flush expect do cursor.to_a # Mongo::Error::SessionEnded is raised here because the periodic executor # called above kills the cursor and closes the session. # This code is normally scheduled in cursor finalizer, so the cursor object # is garbage collected when the code is executed. So, a user won't get # this exception. end.to raise_exception(Mongo::Error::SessionEnded) end context 'when the cursor is unregistered before the kill cursors operations are executed' do # Sometimes JRuby yields 4 documents even though we are allowing # repeated cursor iteration below fails_on_jruby it 'does not send a kill cursors operation for the unregistered cursor' do # We need to verify that the cursor was able to retrieve more documents # from the server so that more than one batch is successfully received cluster.unregister_cursor(cursor.id) # The initial read is done on an enum obtained from the cursor. # The read below is done directly on the cursor. These are two # different objects. 
In MRI, iterating like this yields all of the # documents, hence we retrieved one document in the setup and # we expect to retrieve the remaining 5 here. In JRuby it appears that # the enum may buffer the first batch, such that the second document # sometimes is lost to the iteration and we retrieve 4 documents below. # But sometimes we get all 5 documents. In either case, all of the # documents are retrieved via two batches, thus fulfilling the # requirement of the test to continue iterating the cursor. =begin When repeated iteration of cursors is prohibited, these are the expectations if BSON::Environment.jruby? expected_counts = [4, 5] else expected_counts = [5] end =end # Since currently repeated iteration of cursors is allowed, calling # to_a on the cursor would perform such an iteration and return # all documents of the initial read. expected_counts = [6] expect(expected_counts).to include(cursor.to_a.size) end end end context 'when the cursor is fully iterated' do let(:documents) do (1..3).map{ |i| { field: "test#{i}" }} end before do authorized_collection.delete_many authorized_collection.insert_many(documents) end let(:view) do authorized_collection.find({}, batch_size: 2) end let(:cursor) do view.instance_variable_get(:@cursor) end let!(:cursor_id) do enum.next enum.next cursor.id end let(:enum) do view.to_enum end let(:cursor_reaper) do authorized_collection.client.cluster.instance_variable_get(:@cursor_reaper) end it 'removes the cursor id from the active cursors tracked by the cluster cursor manager' do enum.next expect(cursor_reaper.instance_variable_get(:@active_cursor_ids)).not_to include(cursor_id) end end end context 'when an implicit session is used' do min_server_fcv '3.6' let(:subscriber) { Mrss::EventSubscriber.new } let(:subscribed_client) do authorized_client.tap do |client| client.subscribe(Mongo::Monitoring::COMMAND, subscriber) end end let(:collection) do subscribed_client[TEST_COLL] end before do collection.delete_many collection.insert_many(documents) end let(:cursor) do view.instance_variable_get(:@cursor) end let(:enum) do view.to_enum end let(:session_pool_ids) do queue = view.client.cluster.session_pool.instance_variable_get(:@queue) queue.collect { |s| s.session_id } end let(:find_events) do subscriber.started_events.select { |e| e.command_name == "find" } end context 'when all results are retrieved in the first response' do let(:documents) do (1..2).map{ |i| { field: "test#{i}" }} end let(:view) do collection.find end it 'returns the session to the cluster session pool' do 1.times { enum.next } expect(find_events.collect { |event| event.command['lsid'] }.uniq.size).to eq(1) expect(session_pool_ids).to include(find_events.collect { |event| event.command['lsid'] }.uniq.first) end end context 'when a getMore is needed to retrieve all results' do min_server_fcv '3.6' require_topology :single, :replica_set let(:documents) do (1..4).map{ |i| { field: "test#{i}" }} end let(:view) do collection.find({}, batch_size: 2, limit: 4) end context 'when result set is not iterated fully but the known # of documents is retrieved' do # These tests set up a collection with 4 documents and find all # of them but, instead of iterating the result set to completion, # manually retrieve the 4 documents that are expected to exist. # On 4.9 and lower servers, the server closes the cursor after
# On 5.0, the server does not close the cursor after the 4 documents # have been retrieved, and the client must attempt to retrieve the # next batch (which would be empty) for the server to realize that # the result set is fully iterated and close the cursor. max_server_version '4.9' context 'when not all documents are iterated' do it 'returns the session to the cluster session pool' do 3.times { enum.next } expect(find_events.collect { |event| event.command['lsid'] }.uniq.size).to eq(1) expect(session_pool_ids).to include(find_events.collect { |event| event.command['lsid'] }.uniq.first) end end context 'when the same number of documents is iterated as # in the collection' do it 'returns the session to the cluster session pool' do 4.times { enum.next } expect(find_events.collect { |event| event.command['lsid'] }.uniq.size).to eq(1) expect(session_pool_ids).to include(find_events.collect { |event| event.command['lsid'] }.uniq.first) end end end context 'when result set is iterated fully' do it 'returns the session to the cluster session pool' do # Iterate fully and assert that there are 4 documents total enum.to_a.length.should == 4 expect(find_events.collect { |event| event.command['lsid'] }.uniq.size).to eq(1) expect(session_pool_ids).to include(find_events.collect { |event| event.command['lsid'] }.uniq.first) end end end context 'when the result set is iterated fully and the cursor id is non-zero' do min_server_fcv '5.0' let(:documents) do (1..5).map{ |i| { field: "test#{i}" }} end let(:view) { collection.find(field:{'$gte'=>BSON::MinKey.new}).sort(field:1).limit(5).batch_size(4) } before do view.to_a end it 'schedules a get more command' do get_more_commands = subscriber.started_events.select { |e| e.command_name == 'getMore' } expect(get_more_commands.length).to be 1 end it 'has a non-zero cursor id on successful get more' do get_more_commands = subscriber.succeeded_events.select { |e| e.command_name == 'getMore' } expect(get_more_commands.length).to be 1 expect(get_more_commands[0].reply['cursor']['id']).to_not be 0 end it 'schedules a kill cursors command' do get_more_commands = subscriber.started_events.select { |e| e.command_name == 'killCursors' } expect(get_more_commands.length).to be 1 end end end describe '#inspect' do let(:view) do Mongo::Collection::View.new(authorized_collection) end let(:query_spec) do { selector: {}, options: {}, db_name: SpecConfig.instance.test_db, coll_name: TEST_COLL } end let(:conn_desc) do double('connection description').tap do |cd| allow(cd).to receive(:service_id).and_return(nil) end end let(:reply) do double('reply').tap do |reply| allow(reply).to receive(:is_a?).with(Mongo::Operation::Result).and_return(true) allow(reply).to receive(:namespace) allow(reply).to receive(:connection_description).and_return(conn_desc) allow(reply).to receive(:cursor_id).and_return(42) allow(reply).to receive(:connection_global_id).and_return(1) if SpecConfig.instance.connect_options[:connect] == :load_balanced allow(reply).to receive(:connection).and_return(nil) end end end let(:cursor) do described_class.new(view, reply, authorized_primary) end it 'returns a string' do expect(cursor.inspect).to be_a(String) end it 'returns a string containing the collection view inspect string' do expect(cursor.inspect).to match(/.*#{view.inspect}.*/) end end describe '#to_a' do let(:view) do Mongo::Collection::View.new(authorized_collection, {}, batch_size: 10) end let(:query_spec) do { :selector => {}, :options => {}, :db_name => SpecConfig.instance.test_db, :coll_name => 
authorized_collection.name } end let(:reply) do view.send(:send_initial_query, authorized_primary, context) end let(:cursor) do described_class.new(view, reply, authorized_primary) end context 'after partially iterating the cursor' do before do authorized_collection.drop docs = [] 100.times do |i| docs << {a: i} end authorized_collection.insert_many(docs) end context 'after #each was called once' do before do cursor.each do |doc| break end end it 'iterates from the beginning of the view' do expect(cursor.to_a.map { |doc| doc['a'] }).to eq((0..99).to_a) end end context 'after exactly one batch was iterated' do before do cursor.each_with_index do |doc, i| break if i == 9 end end it 'iterates from the beginning of the view' do expect(cursor.to_a.map { |doc| doc['a'] }).to eq((0..99).to_a) end end context 'after two batches were iterated' do before do cursor.each_with_index do |doc, i| break if i == 19 end end =begin Behavior of pre-2.10 driver: it 'skips the second batch' do expect(cursor.to_a.map { |doc| doc['a'] }).to eq((0..9).to_a + (20..99).to_a) end =end it 'raises InvalidCursorOperation' do expect do cursor.to_a end.to raise_error(Mongo::Error::InvalidCursorOperation, 'Cannot restart iteration of a cursor which issued a getMore') end end end end describe '#close' do let(:view) do Mongo::Collection::View.new( authorized_collection, {}, batch_size: 2, ) end let(:server) do view.send(:server_selector).select_server(authorized_client.cluster) end let(:reply) do view.send(:send_initial_query, server, context) end let(:cursor) do described_class.new(view, reply, server) end let(:documents) do (1..10).map{ |i| { field: "test#{i}" }} end before do authorized_collection.drop authorized_collection.insert_many(documents) end it 'closes' do expect(cursor).not_to be_closed cursor.close expect(cursor).to be_closed end context 'when closed from another thread' do it 'raises an error' do Thread.new do cursor.close end sleep(1) expect(cursor).to be_closed expect do cursor.to_a end.to raise_error Mongo::Error::InvalidCursorOperation end end context 'when there is a socket error during close' do clean_slate require_no_linting before do reset_pool(server) end after do server.pool.close end it 'does not raise an error' do cursor if SpecConfig.instance.connect_options[:connect] == :load_balanced expect(cursor.connection).to receive(:deliver) .at_least(:once) .and_raise(Mongo::Error::SocketError, "test error") else server.with_connection do |conn| expect(conn).to receive(:deliver) .at_least(:once) .and_raise(Mongo::Error::SocketError, "test error") end end expect do cursor.close end.not_to raise_error end end end describe '#batch_size' do let(:subscriber) { Mrss::EventSubscriber.new } let(:subscribed_client) do authorized_client.tap do |client| client.subscribe(Mongo::Monitoring::COMMAND, subscriber) end end let(:collection) do subscribed_client[TEST_COLL] end let(:view) do collection.find({}, limit: limit) end before do collection.drop collection.insert_many([].fill({ "bar": "baz" }, 0, 102)) end context 'when limit is 0 and batch_size is not set' do let(:limit) do 0 end it 'does not set batch_size' do view.to_a get_more_commands = subscriber.started_events.select { |e| e.command_name == 'getMore' } expect(get_more_commands.length).to eq(1) expect(get_more_commands.first.command.keys).not_to include('batchSize') end end context 'when limit is not zero and batch_size is not set' do let(:limit) do 1000 end it 'sets batch_size' do view.to_a get_more_commands = subscriber.started_events.select { |e| 
e.command_name == 'getMore' } expect(get_more_commands.length).to eq(1) expect(get_more_commands.first.command.keys).to include('batchSize') end end end end mongo-ruby-driver-2.21.3/spec/mongo/database_spec.rb000066400000000000000000001057311505113246500223530ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe Mongo::Database do shared_context 'more than 100 collections' do let(:client) do root_authorized_client.use('many-collections') end before do 120.times do |i| client["coll-#{i}"].drop client["coll-#{i}"].create end end end let(:subscriber) { Mrss::EventSubscriber.new } let(:monitored_client) do root_authorized_client.tap do |client| client.subscribe(Mongo::Monitoring::COMMAND, subscriber) end end describe '#==' do let(:database) do described_class.new(authorized_client, SpecConfig.instance.test_db) end context 'when the names are the same' do let(:other) do described_class.new(authorized_client, SpecConfig.instance.test_db) end it 'returns true' do expect(database).to eq(other) end end context 'when the names are not the same' do let(:other) do described_class.new(authorized_client, :other) end it 'returns false' do expect(database).to_not eq(other) end end context 'when the object is not a database' do it 'returns false' do expect(database).to_not eq('test') end end end describe '#[]' do let(:database) do described_class.new(authorized_client, SpecConfig.instance.test_db) end context 'when providing a valid name' do let(:collection) do database[:users] end it 'returns a new collection' do expect(collection.name).to eq('users') end end context 'when providing an invalid name' do it 'raises an error' do expect do database[nil] end.to raise_error(Mongo::Error::InvalidCollectionName) end end context 'when the client has options' do let(:client) do new_local_client([default_address.host], SpecConfig.instance.test_options.merge(read: { mode: :secondary })) end let(:database) do client.database end let(:collection) do database[:with_read_pref] end it 'applies the options to the collection' do expect(collection.server_selector).to eq(Mongo::ServerSelector.get(mode: :secondary)) expect(collection.read_preference).to eq(BSON::Document.new(mode: :secondary)) end context ':server_api option' do let(:client) do new_local_client_nmio(['localhost'], server_api: {version: '1'}) end it 'is not transfered to the collection' do client.options[:server_api].should == {'version' => '1'} collection.options[:server_api].should be nil end end end context 'when providing :server_api option' do it 'is rejected' do lambda do database['foo', server_api: {version: '1'}] end.should raise_error(ArgumentError, 'The :server_api option cannot be specified for collection objects. It can only be specified on Client level') end end end describe '#collection_names' do let(:database) do described_class.new(authorized_client, SpecConfig.instance.test_db) end before do database['users'].drop database['users'].create end let(:actual) do database.collection_names end it 'returns the stripped names of the collections' do expect(actual).to include('users') end it 'does not include system collections' do expect(actual).to_not include('version') expect(actual).to_not include('system.version') end context 'on 2.6 server' do max_server_version '2.6' end it 'does not include collections with $ in names' do expect(actual.none? 
{ |name| name.include?('$') }).to be true end context 'when provided a session' do let(:operation) do database.collection_names(session: session) end let(:client) do authorized_client end it_behaves_like 'an operation using a session' end context 'when specifying a batch size' do it 'returns the stripped names of the collections' do expect(database.collection_names(batch_size: 1).to_a).to include('users') end end context 'when there are more collections than the initial batch size' do before do 2.times do |i| database["#{i}_dalmatians"].drop end 2.times do |i| database["#{i}_dalmatians"].create end end it 'returns all collections' do collection_names = database.collection_names(batch_size: 1) expect(collection_names).to include('0_dalmatians') expect(collection_names).to include('1_dalmatians') end end context 'when provided a filter' do min_server_fcv '3.0' before do database['users2'].drop database['users2'].create end let(:result) do database.collection_names(filter: { name: 'users2' }) end it 'returns users2 collection' do expect(result.length).to eq(1) expect(result.first).to eq('users2') end end context 'when provided authorized_collections or not' do context 'on server versions >= 4.0' do min_server_fcv '4.0' let(:database) do described_class.new(client, SpecConfig.instance.test_db) end let(:subscriber) { Mrss::EventSubscriber.new } let(:client) do authorized_client.tap do |client| client.subscribe(Mongo::Monitoring::COMMAND, subscriber) end end context 'when authorized_collections is provided' do let(:options) do { authorized_collections: true } end let!(:result) do database.collections(options) end let(:events) do subscriber.command_started_events('listCollections') end it 'passes authorized_collections to the server' do expect(events.length).to eq(1) command = events.first.command expect(command['authorizedCollections']).to eq(true) end end context 'when no options are provided' do let!(:result) do database.collection_names end let(:events) do subscriber.command_started_events('listCollections') end it 'authorized_collections not passed to server' do expect(events.length).to eq(1) command = events.first.command expect(command['nameOnly']).to eq(true) expect(command['authorizedCollections']).to be_nil end end end end context 'when there are more than 100 collections' do include_context 'more than 100 collections' let(:collection_names) do client.database.collection_names.sort end it 'lists all collections' do collection_names.length.should == 120 collection_names.should include('coll-0') collection_names.should include('coll-119') end end context 'with comment' do min_server_version '4.4' it 'returns collection names and send comment' do database = described_class.new(monitored_client, SpecConfig.instance.test_db) database.collection_names(comment: "comment") command = subscriber.command_started_events("listCollections").last&.command expect(command).not_to be_nil expect(command["comment"]).to eq("comment") end end end describe '#list_collections' do let(:database) do described_class.new(authorized_client, SpecConfig.instance.test_db) end let(:result) do database.list_collections.map do |info| info['name'] end end before do database['acol'].drop database['acol'].create end context 'server 3.0+' do min_server_fcv '3.0' it 'returns a list of the collections info' do expect(result).to include('acol') end context 'with more than one collection' do before do database['anothercol'].drop database['anothercol'].create expect(database.collections.length).to be > 1 end let(:result) do 
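          # listCollections accepts a server-side filter document, so
          # filtering on "name" narrows the result to the single matching
          # collection's info document.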
database.list_collections(filter: { name: 'anothercol' }).map do |info| info['name'] end end it 'can filter by collection name' do expect(result.length).to eq(1) expect(result.first).to eq('anothercol') end end end context 'server 2.6' do max_server_fcv '2.6' it 'returns a list of the collections info' do expect(result).to include("#{SpecConfig.instance.test_db}.acol") end end it 'does not include collections with $ in names' do expect(result.none? { |name| name.include?('$') }).to be true end context 'on admin database' do let(:database) do described_class.new(root_authorized_client, 'admin') end shared_examples 'does not include system collections' do it 'does not include system collections' do expect(result.none? { |name| name =~ /(^|\.)system\./ }).to be true end end context 'server 4.7+' do min_server_fcv '4.7' # https://jira.mongodb.org/browse/SERVER-35804 require_topology :single, :replica_set include_examples 'does not include system collections' it 'returns results' do expect(result).to include('acol') end end context 'server 3.0-4.5' do min_server_fcv '3.0' max_server_version '4.5' include_examples 'does not include system collections' it 'returns results' do expect(result).to include('acol') end end context 'server 2.6' do max_server_version '2.6' include_examples 'does not include system collections' it 'returns results' do expect(result).to include('admin.acol') end end end context 'when provided authorized_collections or name_only options or not' do context 'on server versions >= 4.0' do min_server_fcv '4.0' let(:database) do described_class.new(client, SpecConfig.instance.test_db) end let(:subscriber) { Mrss::EventSubscriber.new } let(:client) do authorized_client.tap do |client| client.subscribe(Mongo::Monitoring::COMMAND, subscriber) end end context 'when both are provided' do let(:options) do { name_only: true, authorized_collections: true } end let!(:result) do database.list_collections(options) end let(:events) do subscriber.command_started_events('listCollections') end it 'passes original options to the server' do expect(events.length).to eq(1) command = events.first.command expect(command['nameOnly']).to eq(true) expect(command['authorizedCollections']).to eq(true) end end context 'when name_only is provided' do let(:options) do { name_only: false } end let!(:result) do database.list_collections(options) end let(:events) do subscriber.command_started_events('listCollections') end it 'no options passed to server because false' do expect(events.length).to eq(1) command = events.first.command expect(command['nameOnly']).to be_nil expect(command['authorizedCollections']).to be_nil end end context 'when no options provided' do let!(:result) do database.list_collections end let(:events) do subscriber.command_started_events('listCollections') end it 'no options passed to server because none provided' do expect(events.length).to eq(1) command = events.first.command expect(command['nameOnly']).to be_nil expect(command['authorizedCollections']).to be_nil end end end end context 'when there are more than 100 collections' do include_context 'more than 100 collections' let(:collections) do client.database.list_collections end let(:collection_names) do # 2.6 server prefixes collection names with database name collections.map { |info| info['name'].sub(/^many-collections\./, '') }.sort end it 'lists all collections' do collections.length.should == 120 collection_names.should include('coll-0') collection_names.should include('coll-119') end end context 'with comment' do 
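      # The comment option is forwarded verbatim to the listCollections
      # command on server 4.4+. A hedged usage sketch (names here are
      # placeholders, not fixtures from this file):
      #
      #   client.database.list_collections(comment: 'inventory-audit')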
      min_server_version '4.4'

      it 'returns collection names and send comment' do
        database = described_class.new(monitored_client, SpecConfig.instance.test_db)
        database.list_collections(comment: "comment")
        command = subscriber.command_started_events("listCollections").last&.command
        expect(command).not_to be_nil
        expect(command["comment"]).to eq("comment")
      end
    end
  end

  describe '#collections' do
    context 'when the database exists' do
      let(:database) do
        described_class.new(authorized_client, SpecConfig.instance.test_db)
      end

      let(:collection) do
        Mongo::Collection.new(database, 'users')
      end

      before do
        database['users'].drop
        database['users'].create
      end

      it 'returns collection objects for each name' do
        expect(database.collections).to include(collection)
      end

      it 'does not include collections with $ in names' do
        expect(database.collections.none? { |c| c.name.include?('$') }).to be true
      end
    end

    context 'on admin database' do
      let(:database) do
        described_class.new(root_authorized_client, 'admin')
      end

      it 'does not include the system collections' do
        collection_names = database.collections.map(&:name)
        expect(collection_names).not_to include('system.version')
        expect(collection_names.none? { |name| name =~ /(^|\.)system\./ }).to be true
      end
    end

    context 'when the database does not exist' do
      let(:database) do
        described_class.new(authorized_client, 'invalid_database')
      end

      it 'returns an empty list' do
        expect(database.collections).to be_empty
      end
    end

    context 'when the user is not authorized' do
      require_auth

      let(:database) do
        described_class.new(unauthorized_client, SpecConfig.instance.test_db)
      end

      it 'raises an exception' do
        expect { database.collections }.to raise_error(Mongo::Error::OperationFailure)
      end
    end

    context 'when provided a filter' do
      min_server_fcv '3.0'

      let(:database) do
        described_class.new(authorized_client, SpecConfig.instance.test_db)
      end

      let(:collection2) do
        Mongo::Collection.new(database, 'users2')
      end

      before do
        database['users1'].drop
        database['users1'].create
        database['users2'].drop
        database['users2'].create
      end

      let(:result) do
        database.collections(filter: { name: 'users2' })
      end

      it 'returns users2 collection' do
        expect(result.length).to eq(1)
        expect(database.collections).to include(collection2)
      end
    end

    context 'when provided authorized_collections or not' do
      context 'on server versions >= 4.0' do
        min_server_fcv '4.0'

        let(:database) do
          described_class.new(client, SpecConfig.instance.test_db)
        end

        let(:subscriber) { Mrss::EventSubscriber.new }

        let(:client) do
          authorized_client.tap do |client|
            client.subscribe(Mongo::Monitoring::COMMAND, subscriber)
          end
        end

        context 'when authorized_collections are provided as false' do
          let(:options) do
            { authorized_collections: false }
          end

          let!(:result) do
            database.collections(options)
          end

          let(:events) do
            subscriber.command_started_events('listCollections')
          end

          it 'authorized_collections not passed to server because false' do
            expect(events.length).to eq(1)
            command = events.first.command
            expect(command['nameOnly']).to eq(true)
            expect(command['authorizedCollections']).to be_nil
          end
        end

        context 'when authorized_collections are provided as true' do
          let(:options) do
            { authorized_collections: true }
          end

          let!(:result) do
            database.collections(options)
          end

          let(:events) do
            subscriber.command_started_events('listCollections')
          end

          it 'authorized_collections passed to server because true' do
            expect(events.length).to eq(1)
            command = events.first.command
            expect(command['nameOnly']).to eq(true)
            expect(command['authorizedCollections']).to eq(true)
          end
        end

        context 'when no options are provided' do
          let!(:result) do
            database.collections
          end

          let(:events) do
            subscriber.command_started_events('listCollections')
          end

          it 'authorized_collections not passed to server because not provided' do
            expect(events.length).to eq(1)
            command = events.first.command
            expect(command['authorizedCollections']).to be_nil
          end
        end
      end
    end

    context 'when there are more than 100 collections' do
      include_context 'more than 100 collections'

      let(:collections) do
        client.database.collections
      end

      let(:collection_names) do
        collections.map(&:name).sort
      end

      it 'lists all collections' do
        collections.length.should == 120
        collection_names.should include('coll-0')
        collection_names.should include('coll-119')
      end
    end

    context 'with comment' do
      min_server_version '4.4'

      it 'returns collection names and send comment' do
        database = described_class.new(monitored_client, SpecConfig.instance.test_db)
        database.collections(comment: "comment")
        command = subscriber.command_started_events("listCollections").last&.command
        expect(command).not_to be_nil
        expect(command["comment"]).to eq("comment")
      end
    end
  end

  describe '#command' do
    let(:database) do
      described_class.new(authorized_client, SpecConfig.instance.test_db)
    end

    it 'sends the query command to the cluster' do
      expect(database.command(:ping => 1).written_count).to eq(0)
    end

    it 'does not mutate the command selector' do
      expect(database.command({:ping => 1}.freeze).written_count).to eq(0)
    end

    context 'when provided a session' do
      min_server_fcv '3.6'

      let(:operation) do
        client.database.command({ :ping => 1 }, session: session)
      end

      let(:failed_operation) do
        client.database.command({ :invalid => 1 }, session: session)
      end

      let(:session) do
        client.start_session
      end

      let(:subscriber) { Mrss::EventSubscriber.new }

      let(:client) do
        authorized_client.tap do |client|
          client.subscribe(Mongo::Monitoring::COMMAND, subscriber)
        end
      end

      it_behaves_like 'an operation using a session'
      it_behaves_like 'a failed operation using a session'

      let(:full_command) do
        subscriber.started_events.find { |cmd| cmd.command_name == 'ping' }.command
      end

      it 'does not add an afterClusterTime field' do
        # Ensure that the session has an operation time
        client.database.command({ ping: 1 }, session: session)
        operation
        expect(full_command['readConcern']).to be_nil
      end
    end

    context 'when a read concern is provided' do
      min_server_fcv '3.2'

      context 'when the read concern is valid' do
        it 'sends the read concern' do
          expect { database.command(:ping => 1, readConcern: { level: 'local' }) }.to_not raise_error
        end
      end

      context 'when the read concern is not valid' do
        require_topology :single, :replica_set

        it 'raises an exception' do
          expect { database.command(:ping => 1, readConcern: { level: 'yay' }) }.to raise_error(Mongo::Error::OperationFailure)
        end
      end
    end

    context 'when no read preference is provided' do
      require_topology :single, :replica_set

      let!(:primary_server) do
        database.cluster.next_primary
      end

      it 'uses read preference of primary' do
        RSpec::Mocks.with_temporary_scope do
          expect(primary_server).to receive(:with_connection).with(any_args).and_call_original
          expect(database.command(ping: 1)).to be_successful
        end
      end
    end

    context 'when the client has a read preference set' do
      require_topology :single, :replica_set

      let!(:primary_server) do
        database.cluster.next_primary
      end

      let(:read_preference) do
        { :mode => :secondary, :tag_sets => [{ 'non' => 'existent' }] }
      end

      let(:client) do
        authorized_client.with(read: read_preference)
      end

      let(:database) do
        described_class.new(client, SpecConfig.instance.test_db, client.options)
      end

      it 'does not use the client read preference' do
        RSpec::Mocks.with_temporary_scope do
          expect(primary_server).to receive(:with_connection).with(any_args).and_call_original
          expect(database.command(ping: 1)).to be_successful
        end
      end
    end

    context 'when there is a read preference argument provided' do
      require_topology :single, :replica_set

      let(:read_preference) do
        { :mode => :secondary, :tag_sets => [{ 'non' => 'existent' }] }
      end

      let(:client) do
        authorized_client.with(server_selection_timeout: 0.2)
      end

      let(:database) do
        described_class.new(client, SpecConfig.instance.test_db, client.options)
      end

      before do
        allow(database.cluster).to receive(:single?).and_return(false)
      end

      it 'uses the read preference argument' do
        expect { database.command({ ping: 1 }, read: read_preference) }.to raise_error(Mongo::Error::NoServerAvailable)
      end
    end

    context 'when the client has a server_selection_timeout set' do
      require_topology :single, :replica_set

      let(:client) do
        authorized_client.with(server_selection_timeout: 0)
      end

      let(:database) do
        described_class.new(client, SpecConfig.instance.test_db, client.options)
      end

      it 'uses the client server_selection_timeout' do
        expect { database.command(ping: 1) }.to raise_error(Mongo::Error::NoServerAvailable)
      end
    end

    context 'when a write concern is not defined on the client/database object' do
      context 'when a write concern is provided in the selector' do
        require_topology :single

        let(:cmd) do
          { insert: TEST_COLL, documents: [ { a: 1 } ], writeConcern: INVALID_WRITE_CONCERN }
        end

        it 'uses the write concern' do
          expect { database.command(cmd) }.to raise_exception(Mongo::Error::OperationFailure)
        end
      end
    end

    context 'when a write concern is defined on the client/database object' do
      let(:client_options) do
        { write: INVALID_WRITE_CONCERN }
      end

      let(:database) do
        described_class.new(authorized_client.with(client_options), SpecConfig.instance.test_db)
      end

      context 'when a write concern is not in the command selector' do
        let(:cmd) do
          { insert: TEST_COLL, documents: [ { a: 1 } ] }
        end

        it 'does not apply a write concern' do
          expect(database.command(cmd).written_count).to eq(1)
        end
      end

      context 'when a write concern is provided in the command selector' do
        require_topology :single

        let(:cmd) do
          { insert: TEST_COLL, documents: [ { a: 1 } ], writeConcern: INVALID_WRITE_CONCERN }
        end

        it 'uses the write concern' do
          expect { database.command(cmd) }.to raise_exception(Mongo::Error::OperationFailure)
        end
      end
    end

    context 'when client server api is not set' do
      require_no_required_api_version
      min_server_fcv '4.7'

      it 'passes server api parameters' do
        lambda do
          database.command(ping: 1, apiVersion: 'does-not-exist')
        end.should raise_error(
          an_instance_of(Mongo::Error::OperationFailure).and having_attributes(code: 322))
      end
    end

    context 'when client server api is set' do
      require_required_api_version
      min_server_fcv '4.7'

      it 'reports server api conflict' do
        lambda do
          database.command(ping: 1, apiVersion: 'does-not-exist')
        end.should raise_error(Mongo::Error::ServerApiConflict)
      end
    end
  end

  describe '#drop' do
    let(:database) do
      described_class.new(authorized_client, SpecConfig.instance.test_db)
    end

    it 'drops the database' do
      expect(database.drop).to be_successful
    end

    context 'when provided a session' do
      let(:operation) do
        database.drop(session: session)
      end

      let(:client) do
        authorized_client
      end

      it_behaves_like 'an operation using a session'
    end

    context 'when the client/database has a write concern' do
      let(:client_options) do
        { write: INVALID_WRITE_CONCERN, database: :safe_to_drop }
      end

      let(:client) do
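        # INVALID_WRITE_CONCERN is defined in the suite's shared constants as
        # a write concern the test deployment cannot satisfy, so a drop that
        # honors it is expected to fail once the server enforces write
        # concern for dropDatabase (3.4+).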
root_authorized_client.with(client_options) end let(:database_with_write_options) do client.database end context 'when the server supports write concern on the dropDatabase command' do min_server_fcv '3.4' require_topology :single it 'applies the write concern' do expect{ database_with_write_options.drop }.to raise_exception(Mongo::Error::OperationFailure) end end context 'when write concern is passed in as an option' do min_server_fcv '3.4' require_topology :single let(:client_options) do { write_concern: {w: 0}, database: :test } end let(:session) do client.start_session end let(:subscriber) { Mrss::EventSubscriber.new } let(:client) do root_authorized_client.tap do |client| client.subscribe(Mongo::Monitoring::COMMAND, subscriber) end.with(client_options) end let(:events) do subscriber.command_started_events('dropDatabase') end let(:database_test_wc) do client.database end let!(:command) do Utils.get_command_event(client, 'dropDatabase') do |client| database_test_wc.drop({ write_concern: {w: 'majority'} }) end.command end it 'applies the write concern passed in as an option' do expect(events.length).to eq(1) expect(command).to_not be_nil expect(command[:writeConcern][:w]).to eq('majority') end end context 'when the server does not support write concern on the dropDatabase command' do max_server_version '3.2' it 'does not apply the write concern' do expect(database_with_write_options.drop).to be_successful end end end end describe '#initialize' do context 'when provided a valid name' do let(:database) do described_class.new(authorized_client, SpecConfig.instance.test_db) end it 'sets the name as a string' do expect(database.name).to eq(SpecConfig.instance.test_db) end it 'sets the client' do expect(database.client).to eq(authorized_client) end end context 'when the name is nil' do it 'raises an error' do expect do described_class.new(authorized_client, nil) end.to raise_error(Mongo::Error::InvalidDatabaseName) end end end describe '#inspect' do let(:database) do described_class.new(authorized_client, SpecConfig.instance.test_db) end it 'includes the object id' do expect(database.inspect).to include(database.object_id.to_s) end it 'includes the name' do expect(database.inspect).to include(database.name) end end describe '#fs' do require_topology :single, :replica_set let(:database) do described_class.new(authorized_client, SpecConfig.instance.test_db) end shared_context 'a GridFS database' do it 'returns a Grid::FS for the db' do expect(fs).to be_a(Mongo::Grid::FSBucket) end context 'when operating on the fs' do let(:file) do Mongo::Grid::File.new('Hello!', :filename => 'test.txt') end before do fs.files_collection.delete_many fs.chunks_collection.delete_many end let(:from_db) do fs.insert_one(file) fs.find({ filename: 'test.txt' }, limit: 1).first end it 'returns the assembled file from the db' do expect(from_db['filename']).to eq(file.info.filename) end end end context 'when no options are provided' do let(:fs) do database.fs end it_behaves_like 'a GridFS database' end context 'when a custom prefix is provided' do context 'when the option is fs_name' do let(:fs) do database.fs(:fs_name => 'grid') end it 'sets the custom prefix' do expect(fs.prefix).to eq('grid') end it_behaves_like 'a GridFS database' end context 'when the option is bucket_name' do let(:fs) do database.fs(:bucket_name => 'grid') end it 'sets the custom prefix' do expect(fs.prefix).to eq('grid') end it_behaves_like 'a GridFS database' end end end describe '#write_concern' do let(:client) do 
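      # monitoring_io: false keeps the client from starting background
      # monitoring, so these examples exercise only option parsing; :write
      # and :write_concern are expected to produce equivalent write concerns.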
new_local_client(['127.0.0.1:27017'], {monitoring_io: false}.merge(client_options)) end let(:database) { client.database } context 'when client write concern uses :write' do let(:client_options) do { :write => { :w => 1 } } end it 'is the correct write concern' do expect(database.write_concern).to be_a(Mongo::WriteConcern::Acknowledged) expect(database.write_concern.options).to eq(w: 1) end end context 'when client write concern uses :write_concern' do let(:client_options) do { :write_concern => { :w => 1 } } end it 'is the correct write concern' do expect(database.write_concern).to be_a(Mongo::WriteConcern::Acknowledged) expect(database.write_concern.options).to eq(w: 1) end end end describe '#aggregate' do min_server_fcv '3.6' let(:client) do root_authorized_admin_client end let(:database) { client.database } let(:pipeline) do [{'$currentOp' => {}}] end describe 'updating cluster time' do # The shared examples use their own client which we cannot override # from here, and it uses the wrong credentials for admin database which # is the one we need for our pipeline when auth is on. require_no_auth let(:database_via_client) do client.use(:admin).database end let(:operation) do database_via_client.aggregate(pipeline).first end let(:operation_with_session) do database_via_client.aggregate(pipeline, session: session).first end let(:second_operation) do database_via_client.aggregate(pipeline, session: session).first end it_behaves_like 'an operation updating cluster time' end it 'returns an Aggregation object' do expect(database.aggregate(pipeline)).to be_a(Mongo::Collection::View::Aggregation) end context 'when options are provided' do let(:options) do { :allow_disk_use => true, :bypass_document_validation => true } end it 'sets the options on the Aggregation object' do expect(database.aggregate(pipeline, options).options).to eq(BSON::Document.new(options)) end context 'when the :comment option is provided' do let(:options) do { :comment => 'testing' } end it 'sets the options on the Aggregation object' do expect(database.aggregate(pipeline, options).options).to eq(BSON::Document.new(options)) end end context 'when a session is provided' do let(:session) do client.start_session end let(:operation) do database.aggregate(pipeline, session: session).to_a end let(:failed_operation) do database.aggregate([ { '$invalid' => 1 }], session: session).to_a end it_behaves_like 'an operation using a session' it_behaves_like 'a failed operation using a session' end context 'when a hint is provided' do let(:options) do { 'hint' => { 'y' => 1 } } end it 'sets the options on the Aggregation object' do expect(database.aggregate(pipeline, options).options).to eq(options) end end context 'when collation is provided' do let(:pipeline) do [{ "$currentOp" => {} }] end let(:options) do { collation: { locale: 'en_US', strength: 2 } } end let(:result) do database.aggregate(pipeline, options).collect { |doc| doc.keys.grep(/host/).first } end context 'when the server selected supports collations' do min_server_fcv '3.4' it 'applies the collation' do expect(result.uniq).to eq(['host']) end end context 'when the server selected does not support collations' do max_server_version '3.2' it 'raises an exception' do expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation) end context 'when a String key is used' do let(:options) do { 'collation' => { locale: 'en_US', strength: 2 } } end it 'raises an exception' do expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation) end end end end end end end 
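# A condensed usage sketch of the Database API exercised above (illustrative
# comments only; `client` is an assumed placeholder, not a fixture from this
# file):
#
#   db = client.database
#   db.collection_names(filter: { name: 'users' })  # => ["users"]
#   db.list_collections(name_only: true)            # collection info documents
#   db.command(ping: 1)                             # arbitrary commands
#   db.drop(session: client.start_session)          # honors write concern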
mongo-ruby-driver-2.21.3/spec/mongo/distinguishing_semaphore_spec.rb000066400000000000000000000022701505113246500256740ustar00rootroot00000000000000
# frozen_string_literal: true
# rubocop:todo all

require 'lite_spec_helper'

describe Mongo::DistinguishingSemaphore do
  let(:semaphore) do
    described_class.new
  end

  it 'waits until signaled' do
    result = nil

    consumer = Thread.new do
      result = semaphore.wait(3)
    end

    # Context switch to start the thread
    sleep 0.1

    start_time = Mongo::Utils.monotonic_time
    semaphore.signal
    consumer.join

    (Mongo::Utils.monotonic_time - start_time).should < 1
    result.should be true
  end

  it 'waits until broadcast' do
    result = nil

    consumer = Thread.new do
      result = semaphore.wait(3)
    end

    # Context switch to start the thread
    sleep 0.1

    start_time = Mongo::Utils.monotonic_time
    semaphore.broadcast
    consumer.join

    (Mongo::Utils.monotonic_time - start_time).should < 1
    result.should be true
  end

  it 'times out' do
    result = nil

    consumer = Thread.new do
      result = semaphore.wait(2)
    end

    # Context switch to start the thread
    sleep 0.1

    start_time = Mongo::Utils.monotonic_time
    consumer.join

    (Mongo::Utils.monotonic_time - start_time).should > 1
    result.should be false
  end
end
mongo-ruby-driver-2.21.3/spec/mongo/error/000077500000000000000000000000001505113246500203725ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/mongo/error/bulk_write_error_spec.rb000066400000000000000000000021701505113246500253110ustar00rootroot00000000000000
# frozen_string_literal: true
# rubocop:todo all

require 'lite_spec_helper'

describe Mongo::Error::BulkWriteError do
  let(:result) do
    {
      'writeErrors' => [
        { 'code' => 1, 'errmsg' => 'message1' },
        { 'code' => 2, 'errmsg' => 'message2' },
      ]
    }
  end

  let(:error) { described_class.new(result) }

  before do
    error.add_note('note1')
    error.add_note('note2')
  end

  describe '#result' do
    it 'returns the result' do
      expect(error.result).to eq(result)
    end
  end

  describe '#labels' do
    it 'returns an empty array' do
      expect(error.labels).to eq([])
    end
  end

  describe '#message' do
    it 'is correct' do
      expect(error.message).to eq("Multiple errors: [1]: message1; [2]: message2 (note1, note2)")
    end
  end

  describe '#to_s' do
    it 'is correct' do
      expect(error.to_s).to eq("Multiple errors: [1]: message1; [2]: message2 (note1, note2)")
    end
  end

  describe '#inspect' do
    it 'is correct' do
      expect(error.inspect).to eq("#<Mongo::Error::BulkWriteError: Multiple errors: [1]: message1; [2]: message2 (note1, note2)>")
    end
  end
end
mongo-ruby-driver-2.21.3/spec/mongo/error/crypt_error_spec.rb000066400000000000000000000012661505113246500243100ustar00rootroot00000000000000
# frozen_string_literal: true
# rubocop:todo all

require 'lite_spec_helper'

describe Mongo::Error::CryptError do
  let(:label) { :error_client }
  let(:code) { 401 }
  let(:message) { 'Operation unauthorized' }

  describe '#initialize' do
    context 'with code' do
      let(:error) { described_class.new(message, code: code) }

      it 'correctly generates the error message' do
        expect(error.message).to eq("#{message} (libmongocrypt error code #{code})")
      end
    end

    context 'without code' do
      let(:error) { described_class.new(message) }

      it 'correctly generates the error message' do
        expect(error.message).to eq(message)
      end
    end
  end
end
mongo-ruby-driver-2.21.3/spec/mongo/error/max_bson_size_spec.rb000066400000000000000000000017031505113246500245720ustar00rootroot00000000000000
# frozen_string_literal: true
# rubocop:todo all

require 'lite_spec_helper'

describe Mongo::Error::MaxBSONSize do
  describe 'message' do
    context 'when constructor is given no arguments' do
      let(:error) do
        described_class.new
      end

      it 'is the predefined message' do
        error.message.should == 'The document exceeds maximum allowed BSON size'
      end
    end

    context 'when constructor is given an integer argument' do
      let(:error) do
        described_class.new(42)
      end

      it 'is the predefined message with the size added' do
        error.message.should == 'The document exceeds maximum allowed BSON size. The maximum allowed size is 42'
      end
    end

    context 'when constructor is given a string argument' do
      let(:error) do
        described_class.new('hello world')
      end

      it 'is the provided message' do
        error.message.should == 'hello world'
      end
    end
  end
end
mongo-ruby-driver-2.21.3/spec/mongo/error/no_server_available_spec.rb000066400000000000000000000017021505113246500257330ustar00rootroot00000000000000
# frozen_string_literal: true
# rubocop:todo all

require 'lite_spec_helper'

describe Mongo::Error::NoServerAvailable do
  describe 'message' do
    let(:selector) do
      Mongo::ServerSelector::Primary.new
    end

    let(:cluster) do
      Mongo::Cluster.new(['127.0.0.1:27017'], Mongo::Monitoring.new, monitoring_io: false)
    end

    let(:error) do
      Mongo::Error::NoServerAvailable.new(selector, cluster)
    end

    it 'is correct' do
      expect(error.message).to eq('No primary server is available in cluster: #<Cluster topology=Unknown[127.0.0.1:27017] servers=[#<Server address=127.0.0.1:27017 UNKNOWN>]> with timeout=30, LT=0.015')
    end

    context 'when cluster is nil' do
      let(:error) do
        Mongo::Error::NoServerAvailable.new(selector, nil)
      end

      it 'is correct' do
        expect(error.message).to eq('No primary server is available with timeout=30, LT=0.015')
      end
    end
  end
end
mongo-ruby-driver-2.21.3/spec/mongo/error/notable_spec.rb000066400000000000000000000025451505113246500233630ustar00rootroot00000000000000
# frozen_string_literal: true
# rubocop:todo all

require 'lite_spec_helper'

describe Mongo::Error::Notable do
  let(:exception_cls) do
    # Since Notable is a module, we need a class that includes it for testing
    Mongo::Error
  end

  context 'when there are no notes' do
    let(:exception) do
      exception_cls.new('hello world')
    end

    describe '#message' do
      it 'is correct' do
        exception.message.should == 'hello world'
      end
    end

    describe '#to_s' do
      it 'is correct' do
        exception.to_s.should == 'hello world'
      end
    end

    describe '#inspect' do
      it 'is correct' do
        exception.inspect.should == '#<Mongo::Error: hello world>'
      end
    end
  end

  context 'when there are notes' do
    let(:exception) do
      exception_cls.new('hello world').tap do |exception|
        exception.add_note('brilliant')
        exception.add_note('weird')
      end
    end

    describe '#message' do
      it 'is correct' do
        exception.message.should == 'hello world (brilliant, weird)'
      end
    end

    describe '#to_s' do
      it 'is correct' do
        exception.to_s.should == 'hello world (brilliant, weird)'
      end
    end

    describe '#inspect' do
      it 'is correct' do
        exception.inspect.should == '#<Mongo::Error: hello world (brilliant, weird)>'
      end
    end
  end
end
mongo-ruby-driver-2.21.3/spec/mongo/error/operation_failure_heavy_spec.rb000066400000000000000000000064441505113246500266420ustar00rootroot00000000000000
# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe Mongo::Error::OperationFailure do
  describe '#write_concern_error' do
    # Fail point will work on 4.0 mongod but requires 4.2 for mongos
    min_server_fcv '4.2'
    # Fail point must be set on the same server to which the query is sent
    require_no_multi_mongos

    # https://github.com/mongodb/specifications/commit/7745234f93039a83ae42589a6c0cdbefcffa32fa
    let(:fail_point_command) do
      {
        "configureFailPoint": "failCommand",
        "data": {
          "failCommands": ["insert"],
          "writeConcernError": {
            "code": 100,
            "codeName": "UnsatisfiableWriteConcern",
            "errmsg": "Not enough data-bearing nodes",
            "errInfo": {
              "writeConcern": {
                "w": 2,
                "wtimeout": 0,
                "provenance": "clientSupplied"
              }
            }
          }
        },
        "mode": { "times": 1 }
      }
    end

    it 'exposes all server-provided fields' do
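      # The failCommand fail point arms the server to answer the next insert
      # with the writeConcernError defined above; the assertions below verify
      # that its code, codeName, errmsg and errInfo all surface on the raised
      # OperationFailure.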
authorized_client.use('admin').command(fail_point_command) begin authorized_client['foo'].insert_one(test: 1) rescue Mongo::Error::OperationFailure::Family => exc expect(exc.details).to eq(exc.document['writeConcernError']['errInfo']) expect(exc.server_message).to eq(exc.document['writeConcernError']['errmsg']) expect(exc.code).to eq(exc.document['writeConcernError']['code']) else fail 'Expected an OperationFailure' end exc.write_concern_error_document.should == { 'code' => 100, 'codeName' => 'UnsatisfiableWriteConcern', 'errmsg' => 'Not enough data-bearing nodes', 'errInfo' => { 'writeConcern' => { 'w' => 2, 'wtimeout' => 0, 'provenance' => 'clientSupplied', }, }, } end end describe 'WriteError details' do min_server_fcv '5.0' let(:subscriber) { Mrss::EventSubscriber.new } let(:subscribed_client) do authorized_client.tap do |client| client.subscribe(Mongo::Monitoring::COMMAND, subscriber) end end let(:collection_name) { 'write_error_prose_spec' } let(:collection) do subscribed_client[:collection_name].drop subscribed_client[:collection_name, { 'validator' => { 'x' => { '$type' => 'string' }, } }].create subscribed_client[:collection_name] end context 'when there is a write error' do it 'succeeds and prints the error' do begin collection.insert_one({x: 1}) rescue Mongo::Error::OperationFailure::Family => e insert_events = subscriber.succeeded_events.select { |e| e.command_name == "insert" } expect(insert_events.length).to eq 1 expect(e.message).to match(/\[#{e.code}(:.*)?\].+ -- .+/) expect(e.details).to eq(e.document['writeErrors'][0]['errInfo']) expect(e.server_message).to eq(e.document['writeErrors'][0]['errmsg']) expect(e.code).to eq(e.document['writeErrors'][0]['code']) expect(e.code).to eq 121 expect(e.details).to eq(insert_events[0].reply['writeErrors'][0]['errInfo']) else fail 'Expected an OperationFailure' end end end end end mongo-ruby-driver-2.21.3/spec/mongo/error/operation_failure_spec.rb000066400000000000000000000346731505113246500254550ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'lite_spec_helper' describe Mongo::Error::OperationFailure do describe '#code' do subject do described_class.new('not master (10107)', nil, :code => 10107, :code_name => 'NotMaster') end it 'returns the code' do expect(subject.code).to eq(10107) end end describe '#code_name' do subject do described_class.new('not master (10107)', nil, :code => 10107, :code_name => 'NotMaster') end it 'returns the code name' do expect(subject.code_name).to eq('NotMaster') end end describe '#write_retryable?' 
do context 'when there is a read retryable message' do let(:error) { Mongo::Error::OperationFailure.new('problem: socket exception', nil) } it 'returns false' do expect(error.write_retryable?).to eql(false) end end context 'when there is a write retryable message' do let(:error) { Mongo::Error::OperationFailure.new('problem: node is recovering', nil) } it 'returns true' do expect(error.write_retryable?).to eql(true) end end context 'when there is a non-retryable message' do let(:error) { Mongo::Error::OperationFailure.new('something happened', nil) } it 'returns false' do expect(error.write_retryable?).to eql(false) end end context 'when there is a retryable code' do let(:error) { Mongo::Error::OperationFailure.new('no message', nil, :code => 91, :code_name => 'ShutdownInProgress') } it 'returns true' do expect(error.write_retryable?).to eql(true) end end context 'when there is a non-retryable code' do let(:error) { Mongo::Error::OperationFailure.new('no message', nil, :code => 43, :code_name => 'SomethingHappened') } it 'returns false' do expect(error.write_retryable?).to eql(false) end end end describe '#change_stream_resumable?' do context 'when there is a resumable code' do context 'getMore response' do let(:result) do Mongo::Operation::GetMore::Result.new( Mongo::Protocol::Message.new, description) end let(:error) { Mongo::Error::OperationFailure.new('no message', result, :code => 91, :code_name => 'ShutdownInProgress') } context 'wire protocol version < 9' do let(:description) do Mongo::Server::Description.new( '', { 'minWireVersion' => 0, 'maxWireVersion' => 8, } ) end it 'returns true' do expect(error.change_stream_resumable?).to eql(true) end end context 'wire protocol version >= 9' do let(:description) do Mongo::Server::Description.new( '', { 'minWireVersion' => 0, 'maxWireVersion' => 9, } ) end it 'returns false' do # Error code is not consulted with wire version >= 9 expect(error.change_stream_resumable?).to eql(false) end end end context 'not a getMore response' do let(:result) do Mongo::Operation::Result.new( Mongo::Protocol::Message.new, description) end let(:error) { Mongo::Error::OperationFailure.new('no message', nil, :code => 91, :code_name => 'ShutdownInProgress') } context 'wire protocol version < 9' do let(:description) do Mongo::Server::Description.new( '', { 'minWireVersion' => 0, 'maxWireVersion' => 8, } ) end it 'returns false' do expect(error.change_stream_resumable?).to eql(false) end end end end context 'when there is a non-resumable code' do context 'getMore response' do let(:result) do Mongo::Operation::GetMore::Result.new( Mongo::Protocol::Message.new, description) end let(:error) { Mongo::Error::OperationFailure.new('no message', result, :code => 136, :code_name => 'CappedPositionLost') } context 'wire protocol version < 9' do let(:description) do Mongo::Server::Description.new( '', { 'minWireVersion' => 0, 'maxWireVersion' => 8, } ) end it 'returns false' do expect(error.change_stream_resumable?).to eql(false) end end context 'wire protocol version >= 9' do let(:description) do Mongo::Server::Description.new( '', { 'minWireVersion' => 0, 'maxWireVersion' => 9, } ) end it 'returns false' do expect(error.change_stream_resumable?).to eql(false) end end end context 'not a getMore response' do let(:result) do Mongo::Operation::Result.new( Mongo::Protocol::Message.new, description) end let(:error) { Mongo::Error::OperationFailure.new('no message', nil, :code => 136, :code_name => 'CappedPositionLost') } it 'returns false' do 
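          # Only getMore results participate in change stream resumption; a
          # generic operation result is never treated as resumable, whatever
          # its error code.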
expect(error.change_stream_resumable?).to eql(false) end end end context 'when there is a non-resumable label' do context 'getMore response' do let(:result) do Mongo::Operation::GetMore::Result.new( Mongo::Protocol::Message.new, description) end let(:error) { Mongo::Error::OperationFailure.new('no message', result, :code => 91, :code_name => 'ShutdownInProgress', :labels => ['NonResumableChangeStreamError']) } context 'wire protocol version < 9' do let(:description) do Mongo::Server::Description.new( '', { 'minWireVersion' => 0, 'maxWireVersion' => 8, } ) end it 'returns true' do # Error code is consulted => error is resumable expect(error.change_stream_resumable?).to eql(true) end end context 'wire protocol version >= 9' do let(:description) do Mongo::Server::Description.new( '', { 'minWireVersion' => 0, 'maxWireVersion' => 9, } ) end it 'returns false' do # Error code is not consulted, there is no resumable label => # error is not resumable expect(error.change_stream_resumable?).to eql(false) end end end context 'when the error code is 43 (CursorNotFound)' do let(:error) { Mongo::Error::OperationFailure.new(nil, result, code: 43, code_name: 'CursorNotFound') } let(:result) do Mongo::Operation::GetMore::Result.new( Mongo::Protocol::Message.new, description) end context 'wire protocol < 9' do let(:description) do Mongo::Server::Description.new( '', { 'minWireVersion' => 0, 'maxWireVersion' => 8, } ) end it 'returns true' do # CursorNotFound exceptions are resumable even if they don't have # a ResumableChangeStreamError label because the server is not aware # of the cursor id, and thus cannot determine if it is a change stream. expect(error.change_stream_resumable?).to be true end end context 'wire protocol >= 9' do let(:description) do Mongo::Server::Description.new( '', { 'minWireVersion' => 0, 'maxWireVersion' => 9, } ) end it 'returns true' do # CursorNotFound exceptions are resumable even if they don't have # a ResumableChangeStreamError label because the server is not aware # of the cursor id, and thus cannot determine if it is a change stream. expect(error.change_stream_resumable?).to be true end end end context 'not a getMore response' do let(:result) do Mongo::Operation::Result.new( Mongo::Protocol::Message.new, description) end let(:error) { Mongo::Error::OperationFailure.new('no message', result, :code => 91, :code_name => 'ShutdownInProgress', :labels => ['NonResumableChangeStreamError']) } context 'wire protocol version < 9' do let(:description) do Mongo::Server::Description.new( '', { 'minWireVersion' => 0, 'maxWireVersion' => 8, } ) end it 'returns false' do expect(error.change_stream_resumable?).to eql(false) end end end end end describe '#labels' do context 'when the result is nil' do subject do described_class.new('not master (10107)', nil, :code => 10107, :code_name => 'NotMaster') end it 'has no labels' do expect(subject.labels).to eq([]) end end context 'when the result is not nil' do let(:reply_document) do { 'code' => 251, 'codeName' => 'NoSuchTransaction', 'errorLabels' => labels, } end let(:reply) do Mongo::Protocol::Reply.new.tap do |r| # Because this was not created by Mongo::Protocol::Reply::deserialize, we need to manually # initialize the fields. 
r.instance_variable_set(:@documents, [reply_document]) r.instance_variable_set(:@flags, []) end end let(:result) do Mongo::Operation::Result.new(reply, Mongo::Server::Description.new('')) end subject do begin result.send(:raise_operation_failure) rescue => e e end end context 'when the error has no labels' do let(:labels) do [] end it 'has the correct labels' do expect(subject.labels).to eq(labels) end end context 'when the error has labels' do let(:labels) do %w(TransientTransactionError) end it 'has the correct labels' do expect(subject.labels).to eq(labels) end end end end describe '#not_master?' do [10107, 13435].each do |code| context "error code #{code}" do subject do described_class.new("thingy (#{code})", nil, :code => code, :code_name => 'thingy') end it 'is true' do expect(subject.not_master?).to be true end end end # node is recovering error codes [11600, 11602, 13436, 189, 91].each do |code| context "error code #{code}" do subject do described_class.new("thingy (#{code})", nil, :code => code, :code_name => 'thingy') end it 'is false' do expect(subject.not_master?).to be false end end end context 'another error code' do subject do described_class.new('some error (123)', nil, :code => 123, :code_name => 'SomeError') end it 'is false' do expect(subject.not_master?).to be false end end context 'not master in message with different code' do subject do described_class.new('not master (999)', nil, :code => 999, :code_name => nil) end it 'is false' do expect(subject.not_master?).to be false end end context 'not master in message without code' do subject do described_class.new('not master)', nil) end it 'is true' do expect(subject.not_master?).to be true end end context 'not master or secondary text' do subject do described_class.new('not master or secondary (999)', nil, :code => 999, :code_name => nil) end it 'is false' do expect(subject.not_master?).to be false end end end describe '#node_recovering?' 
do [11600, 11602, 13436, 189, 91].each do |code| context "error code #{code}" do subject do described_class.new("thingy (#{code})", nil, :code => code, :code_name => 'thingy') end it 'is true' do expect(subject.node_recovering?).to be true end end end # not master error codes [10107, 13435].each do |code| context "error code #{code}" do subject do described_class.new("thingy (#{code})", nil, :code => code, :code_name => 'thingy') end it 'is false' do expect(subject.node_recovering?).to be false end end end context 'another error code' do subject do described_class.new('some error (123)', nil, :code => 123, :code_name => 'SomeError') end it 'is false' do expect(subject.node_recovering?).to be false end end context 'node is recovering in message with different code' do subject do described_class.new('node is recovering (999)', nil, :code => 999, :code_name => nil) end it 'is false' do expect(subject.node_recovering?).to be false end end context 'node is recovering in message without code' do subject do described_class.new('node is recovering', nil) end it 'is true' do expect(subject.node_recovering?).to be true end end context 'not master or secondary text with a code' do subject do described_class.new('not master or secondary (999)', nil, :code => 999, :code_name => nil) end it 'is false' do expect(subject.node_recovering?).to be false end end context 'not master or secondary text without code' do subject do described_class.new('not master or secondary', nil) end it 'is true' do expect(subject.node_recovering?).to be true end end end end mongo-ruby-driver-2.21.3/spec/mongo/error/parser_spec.rb000066400000000000000000000315761505113246500232410ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe Mongo::Error::Parser do let(:parser) do described_class.new(document) end describe '#message' do context 'when the document contains no error message' do let(:document) do { 'ok' => 1 } end it 'returns an empty string' do expect(parser.message).to be_empty end end context 'when the document contains an errmsg' do let(:document) do { 'errmsg' => 'no such command: notacommand', 'code' => 59 } end it 'returns the message' do expect(parser.message).to eq('[59]: no such command: notacommand') end end context 'when the document contains an errmsg and code name' do let(:document) do { 'errmsg' => 'no such command: notacommand', 'code' => 59, 'codeName' => 'foo' } end it 'returns the message' do expect(parser.message).to eq('[59:foo]: no such command: notacommand') end end =begin context 'when the document contains writeErrors' do context 'when only a single error exists' do let(:document) do { 'writeErrors' => [{ 'code' => 9, 'errmsg' => 'Unknown modifier: $st' }]} end it 'returns the message' do expect(parser.message).to eq('[9]: Unknown modifier: $st') end end context 'when multiple errors exist' do let(:document) do { 'writeErrors' => [ { 'code' => 9, 'errmsg' => 'Unknown modifier: $st' }, { 'code' => 9, 'errmsg' => 'Unknown modifier: $bl' } ] } end it 'returns the messages concatenated' do expect(parser.message).to eq( 'Multiple errors: 9: Unknown modifier: $st; 9: Unknown modifier: $bl' ) end end context 'when multiple errors with code names exist' do let(:document) do { 'writeErrors' => [ { 'code' => 9, 'codeName' => 'foo', 'errmsg' => 'Unknown modifier: $st' }, { 'code' => 9, 'codeName' => 'foo', 'errmsg' => 'Unknown modifier: $bl' }, ] } end it 'returns the messages concatenated' do expect(parser.message).to eq( 'Multiple errors: [9:foo]: 
Unknown modifier: $st; [9:foo]: Unknown modifier: $bl' ) end end end =end context 'when the document contains $err' do let(:document) do { '$err' => 'not authorized for query', 'code' => 13 } end it 'returns the message' do expect(parser.message).to eq('[13]: not authorized for query') end end context 'when the document contains err' do let(:document) do { 'err' => 'not authorized for query', 'code' => 13 } end it 'returns the message' do expect(parser.message).to eq('[13]: not authorized for query') end end context 'when the document contains a writeConcernError' do let(:document) do { 'writeConcernError' => { 'code' => 100, 'errmsg' => 'Not enough data-bearing nodes' } } end it 'returns the message' do expect(parser.message).to eq('[100]: Not enough data-bearing nodes') end end end describe '#code' do context 'when document contains code and ok: 1' do let(:document) do { 'ok' => 1, 'errmsg' => 'not master', 'code' => 10107, 'codeName' => 'NotMaster' } end it 'returns nil' do expect(parser.code).to be nil end end context 'when document contains code and ok: 1.0' do let(:document) do { 'ok' => 1.0, 'errmsg' => 'not master', 'code' => 10107, 'codeName' => 'NotMaster' } end it 'returns nil' do expect(parser.code).to be nil end end context 'when document contains code' do let(:document) do { 'ok' => 0, 'errmsg' => 'not master', 'code' => 10107, 'codeName' => 'NotMaster' } end it 'returns the code' do expect(parser.code).to eq(10107) end context 'with legacy option' do let(:parser) do described_class.new(document, nil, legacy: true) end it 'returns nil' do expect(parser.code).to be nil end end end context 'when document does not contain code' do let(:document) do { 'ok' => 0, 'errmsg' => 'not master' } end it 'returns nil' do expect(parser.code).to eq(nil) end end context 'when the document contains a writeConcernError with a code' do let(:document) do { 'writeConcernError' => { 'code' => 100, 'errmsg' => 'Not enough data-bearing nodes' } } end it 'returns the code' do expect(parser.code).to eq(100) end end context 'when the document contains a writeConcernError without a code' do let(:document) do { 'writeConcernError' => { 'errmsg' => 'Not enough data-bearing nodes' } } end it 'returns nil' do expect(parser.code).to be nil end end context 'when both top level code and write concern code are present' do let(:document) do { 'ok' => 0, 'errmsg' => 'not master', 'code' => 10107, 'codeName' => 'NotMaster', 'writeConcernError' => { 'code' => 100, 'errmsg' => 'Not enough data-bearing nodes' } } end it 'returns top level code' do expect(parser.code).to eq(10107) end end end describe '#code_name' do context 'when document contains code name and ok: 1' do let(:document) do { 'ok' => 1, 'errmsg' => 'not master', 'code' => 10107, 'codeName' => 'NotMaster' } end it 'returns nil' do expect(parser.code_name).to be nil end end context 'when document contains code name and ok: 1.0' do let(:document) do { 'ok' => 1.0, 'errmsg' => 'not master', 'code' => 10107, 'codeName' => 'NotMaster' } end it 'returns nil' do expect(parser.code_name).to be nil end end context 'when document contains code name' do let(:document) do { 'ok' => 0, 'errmsg' => 'not master', 'code' => 10107, 'codeName' => 'NotMaster' } end it 'returns the code name' do expect(parser.code_name).to eq('NotMaster') end context 'with legacy option' do let(:parser) do described_class.new(document, nil, legacy: true) end it 'returns nil' do expect(parser.code_name).to be nil end end end context 'when document does not contain code name' do 
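      # Older servers may omit codeName entirely; the parser is expected to
      # return nil rather than inventing one. A hedged sketch:
      #
      #   Mongo::Error::Parser.new('ok' => 0, 'errmsg' => 'not master').code_name
      #   # => nil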
let(:document) do { 'ok' => 0, 'errmsg' => 'not master' } end it 'returns nil' do expect(parser.code_name).to eq(nil) end end context 'when the document contains a writeConcernError with a code' do let(:document) do { 'writeConcernError' => { 'code' => 100, 'codeName' => 'CannotSatisfyWriteConcern', 'errmsg' => 'Not enough data-bearing nodes' } } end it 'returns the code name' do expect(parser.code_name).to eq('CannotSatisfyWriteConcern') end end context 'when the document contains a writeConcernError without a code' do let(:document) do { 'writeConcernError' => { 'code' => 100, 'errmsg' => 'Not enough data-bearing nodes' } } end it 'returns nil' do expect(parser.code_name).to be nil end end context 'when both top level code and write concern code are present' do let(:document) do { 'ok' => 0, 'errmsg' => 'not master', 'code' => 10107, 'codeName' => 'NotMaster', 'writeConcernError' => { 'code' => 100, 'errmsg' => 'Not enough data-bearing nodes' } } end it 'returns top level code' do expect(parser.code_name).to eq('NotMaster') end end end describe '#write_concern_error?' do context 'there is a write concern error' do let(:document) do { 'ok' => 1, 'writeConcernError' => { 'code' => 100, 'errmsg' => 'Not enough data-bearing nodes' } } end it 'is true' do expect(parser.write_concern_error?).to be true end end context 'there is no write concern error' do let(:document) do { 'ok' => 0, 'errmsg' => 'not master', 'code' => 10107, 'codeName' => 'NotMaster', } end it 'is false' do expect(parser.write_concern_error?).to be false end end context 'there is a top level error and write concern error' do let(:document) do { 'ok' => 0, 'errmsg' => 'not master', 'code' => 10107, 'codeName' => 'NotMaster', 'writeConcernError' => { 'code' => 100, 'errmsg' => 'Not enough data-bearing nodes' } } end it 'is true' do expect(parser.write_concern_error?).to be true end end end describe '#write_concern_error_code' do context 'there is a write concern error' do let(:document) do { 'ok' => 1, 'writeConcernError' => { 'code' => 100, 'errmsg' => 'Not enough data-bearing nodes' } } end it 'is true' do expect(parser.write_concern_error_code).to eq(100) end end context 'there is no write concern error' do let(:document) do { 'ok' => 0, 'errmsg' => 'not master', 'code' => 10107, 'codeName' => 'NotMaster', } end it 'is nil' do expect(parser.write_concern_error_code).to be nil end end context 'there is a top level error and write concern error' do let(:document) do { 'ok' => 0, 'errmsg' => 'not master', 'code' => 10107, 'codeName' => 'NotMaster', 'writeConcernError' => { 'code' => 100, 'errmsg' => 'Not enough data-bearing nodes' } } end it 'is true' do expect(parser.write_concern_error_code).to eq(100) end end end describe '#write_concern_error_code_name' do context 'there is a write concern error' do let(:document) do { 'ok' => 1, 'writeConcernError' => { 'code' => 100, 'codeName' => 'SomeCodeName', 'errmsg' => 'Not enough data-bearing nodes' } } end it 'is the code name' do expect(parser.write_concern_error_code_name).to eq('SomeCodeName') end end context 'there is no write concern error' do let(:document) do { 'ok' => 0, 'errmsg' => 'not master', 'code' => 10107, 'codeName' => 'NotMaster', } end it 'is nil' do expect(parser.write_concern_error_code_name).to be nil end end context 'there is a top level error and write concern error' do let(:document) do { 'ok' => 0, 'errmsg' => 'not master', 'code' => 10107, 'codeName' => 'NotMaster', 'writeConcernError' => { 'code' => 100, 'codeName' => 'SomeCodeName', 'errmsg' => 'Not 
enough data-bearing nodes' } }
      end

      it 'is the code name' do
        expect(parser.write_concern_error_code_name).to eq('SomeCodeName')
      end
    end
  end

  describe '#document' do
    let(:document) do
      { 'ok' => 0, 'errmsg' => 'not master', 'code' => 10107, 'codeName' => 'NotMaster' }
    end

    it 'returns the document' do
      expect(parser.document).to eq(document)
    end
  end

  describe '#replies' do
    context 'when there are no replies' do
      let(:document) do
        { 'ok' => 0, 'errmsg' => 'not master', 'code' => 10107, 'codeName' => 'NotMaster' }
      end

      it 'returns nil' do
        expect(parser.replies).to eq(nil)
      end
    end
  end

  describe '#labels' do
    let(:document) do
      {
        'code' => 251,
        'codeName' => 'NoSuchTransaction',
        'errorLabels' => labels,
      }
    end

    context 'when there are no labels' do
      let(:labels) do
        []
      end

      it 'has the correct labels' do
        expect(parser.labels).to eq(labels)
      end
    end

    context 'when there are labels' do
      let(:labels) do
        %w(TransientTransactionError)
      end

      it 'has the correct labels' do
        expect(parser.labels).to eq(labels)
      end
    end
  end

  describe '#wtimeout' do
    context 'when document contains wtimeout' do
      let(:document) do
        { 'ok' => 1, 'writeConcernError' => { 'errmsg' => 'replication timed out', 'code' => 64, 'errInfo' => { 'wtimeout' => true } } }
      end

      it 'returns true' do
        expect(parser.wtimeout).to be true
      end
    end

    context 'when document does not contain wtimeout' do
      let(:document) do
        { 'ok' => 1, 'writeConcernError' => { 'errmsg' => 'replication did not time out', 'code' => 55 } }
      end

      it 'returns nil' do
        expect(parser.wtimeout).to be nil
      end
    end
  end
end

# ---- mongo-ruby-driver-2.21.3/spec/mongo/error/pool_cleared_error_spec.rb ----

# frozen_string_literal: true

require 'lite_spec_helper'

describe Mongo::Error::PoolClearedError do
  describe '#initialize' do
    let(:error) do
      described_class.new(
        instance_double(Mongo::Address),
        instance_double(Mongo::Server::ConnectionPool)
      )
    end

    it 'appends TransientTransactionError' do
      expect(error.labels).to include('TransientTransactionError')
    end
  end
end

# ---- mongo-ruby-driver-2.21.3/spec/mongo/error/unsupported_option_spec.rb ----

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe Mongo::Error::UnsupportedOption do
  describe '.hint_error' do
    context 'with no options' do
      let(:error) { described_class.hint_error }

      it 'creates an error with a default message' do
        expect(error.message).to eq(
          "The MongoDB server handling this request does not support the hint " \
          "option on this command. The hint option is supported on update commands " \
          "on MongoDB server versions 4.2 and later and on findAndModify and delete " \
          "commands on MongoDB server versions 4.4 and later"
        )
      end

      context 'with unacknowledged_write: true' do
        let(:error) { described_class.hint_error(unacknowledged_write: true) }

        it 'creates an error with a default unacknowledged writes message' do
          expect(error.message).to eq(
            "The hint option cannot be specified on an unacknowledged " \
            "write operation. Remove the hint option or perform this " \
            "operation with a write concern of at least { w: 1 }"
          )
        end
      end
    end
  end

  describe '.allow_disk_use_error' do
    let(:error) { described_class.allow_disk_use_error }

    it 'creates an error with a default message' do
      expect(error.message).to eq(
        "The MongoDB server handling this request does not support the allow_disk_use " \
        "option on this command. The allow_disk_use option is supported on find commands " \
        "on MongoDB server versions 4.4 and later"
      )
    end
  end

  describe '.commit_quorum_error' do
    let(:error) { described_class.commit_quorum_error }

    it 'creates an error with a default message' do
      expect(error.message).to eq(
        "The MongoDB server handling this request does not support the commit_quorum " \
        "option on this command. The commit_quorum option is supported on createIndexes commands " \
        "on MongoDB server versions 4.4 and later"
      )
    end
  end
end

# ---- mongo-ruby-driver-2.21.3/spec/mongo/event/publisher_spec.rb ----

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe Mongo::Event::Publisher do
  describe '#publish' do
    let(:listeners) do
      Mongo::Event::Listeners.new
    end

    let(:klass) do
      Class.new do
        include Mongo::Event::Publisher

        def initialize(listeners)
          @event_listeners = listeners
        end
      end
    end

    let(:publisher) do
      klass.new(listeners)
    end

    let(:listener) do
      double('listener')
    end

    context 'when the event has listeners' do
      before do
        listeners.add_listener('test', listener)
        listeners.add_listener('test', listener)
      end

      it 'handles the event for each listener' do
        expect(listener).to receive(:handle).with('test').twice
        publisher.publish('test', 'test')
      end
    end

    context 'when the event has no listeners' do
      it 'does not handle anything' do
        expect(listener).to receive(:handle).never
        publisher.publish('test', 'test')
      end
    end
  end
end

# ---- mongo-ruby-driver-2.21.3/spec/mongo/event/subscriber_spec.rb ----

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe Mongo::Event::Subscriber do
  let(:listeners) do
    Mongo::Event::Listeners.new
  end

  let(:klass) do
    Class.new do
      include Mongo::Event::Subscriber

      def initialize(listeners)
        @event_listeners = listeners
      end
    end
  end

  describe '#subscribe_to' do
    let(:listener) do
      double('listener')
    end

    let(:subscriber) do
      klass.new(listeners)
    end

    it 'subscribes the listener to the publisher' do
      expect(listeners).to receive(:add_listener).with('test', listener)
      subscriber.subscribe_to('test', listener)
    end
  end
end

# ---- mongo-ruby-driver-2.21.3/spec/mongo/grid/file/chunk_spec.rb ----

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'
require 'stringio'

describe Mongo::Grid::File::Chunk do
  let(:data) do
    BSON::Binary.new('testing')
  end

  let(:file_id) do
    BSON::ObjectId.new
  end

  let(:file_info) do
    Mongo::Grid::File::Info.new(:files_id => file_id)
  end

  describe '#==' do
    let(:chunk) do
      described_class.new(:data => data, :files_id => file_id, :n => 5)
    end

    context 'when the other is not a chunk' do
      it 'returns false' do
        expect(chunk).to_not eq('test')
      end
    end

    context 'when the other object is a chunk' do
      context 'when the documents are equal' do
        it 'returns true' do
          expect(chunk).to eq(chunk)
        end
      end

      context 'when the documents are not equal' do
        let(:other) do
          described_class.new(:data => data, :files_id => file_id, :n => 6)
        end

        it 'returns false' do
          expect(chunk).to_not eq(other)
        end
      end
    end
  end
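  # Illustrative sketch (not part of the original specs): the split/assemble
  # round trip exercised by the examples below, written as plain driver calls.
  #
  #   raw = +'testing'
  #   (1..Mongo::Grid::File::Chunk::DEFAULT_SIZE * 3).each { raw << '1' }
  #   chunks = described_class.split(raw, file_info)   # => four Chunk objects
  #   described_class.assemble(chunks) == raw          # => true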
  describe '.assemble' do
    let(:data_size) do
      Mongo::Grid::File::Chunk::DEFAULT_SIZE * 3
    end

    let(:raw_data) do
      +'testing'
    end

    let(:data) do
      BSON::Binary.new(raw_data)
    end

    let(:assembled) do
      described_class.assemble(chunks)
    end

    before do
      (1..data_size).each{ |i| raw_data << '1' }
    end

    let(:chunks) do
      described_class.split(raw_data, file_info)
    end

    it 'returns the chunks assembled into the raw data' do
      expect(assembled).to eq(raw_data)
    end
  end

  describe '#document' do
    let(:chunk) do
      described_class.new(:data => data, :files_id => file_id, :n => 5)
    end

    let(:document) do
      chunk.document
    end

    it 'sets the data' do
      expect(document[:data]).to eq(data)
    end

    it 'sets the files_id' do
      expect(document[:files_id]).to eq(file_id)
    end

    it 'sets the position' do
      expect(document[:n]).to eq(5)
    end

    it 'sets an object id' do
      expect(document[:_id]).to be_a(BSON::ObjectId)
    end

    context 'when asking for the document multiple times' do
      it 'returns the same document' do
        expect(document[:_id]).to eq(chunk.document[:_id])
      end
    end
  end

  describe '#initialize' do
    let(:chunk) do
      described_class.new(:data => data, :files_id => file_id, :n => 5)
    end

    it 'sets the document' do
      expect(chunk.data).to eq(data)
    end

    it 'sets a default id' do
      expect(chunk.id).to be_a(BSON::ObjectId)
    end
  end

  describe '#to_bson' do
    let(:chunk) do
      described_class.new(:data => data, :files_id => file_id, :n => 5)
    end

    let(:document) do
      chunk.document
    end

    it 'returns the document as bson' do
      expect(chunk.to_bson.to_s).to eq(document.to_bson.to_s)
    end
  end

  describe '.split' do
    context 'when the data is smaller than the default size' do
      let(:raw_data) do
        +'testing'
      end

      let(:data) do
        BSON::Binary.new(raw_data)
      end

      let(:chunks) do
        described_class.split(raw_data, file_info)
      end

      let(:chunk) do
        chunks.first
      end

      it 'returns a single chunk' do
        expect(chunks.size).to eq(1)
      end

      it 'sets the correct chunk position' do
        expect(chunk.n).to eq(0)
      end

      it 'sets the correct chunk data' do
        expect(chunk.data).to eq(data)
      end
    end

    context 'when the data is larger than the default size' do
      let(:data_size) do
        Mongo::Grid::File::Chunk::DEFAULT_SIZE * 3
      end

      let(:raw_data) do
        +'testing'
      end

      let(:data) do
        BSON::Binary.new(raw_data)
      end

      let(:assembled) do
        full_data = +''
        chunks.each do |chunk|
          full_data << chunk.data.data
        end
        full_data
      end

      before do
        (1..data_size).each{ |i| raw_data << '1' }
      end

      let(:chunks) do
        described_class.split(raw_data, file_info)
      end

      it 'returns the correct number of chunks' do
        expect(chunks.size).to eq(4)
      end

      it 'sets the correct chunk positions' do
        expect(chunks[0].n).to eq(0)
        expect(chunks[1].n).to eq(1)
        expect(chunks[2].n).to eq(2)
        expect(chunks[3].n).to eq(3)
      end

      it 'does not miss any bytes' do
        expect(assembled).to eq(raw_data)
      end
    end
  end
end

# ---- mongo-ruby-driver-2.21.3/spec/mongo/grid/file/info_spec.rb ----

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe Mongo::Grid::File::Info do
  describe '#==' do
    let(:upload_date) do
      Time.now.utc
    end

    let(:info) do
      described_class.new(:filename => 'test.txt', :length => 7, :uploadDate => upload_date)
    end

    context 'when the other is not a file info object' do
      it 'returns false' do
        expect(info).to_not eq('test')
      end
    end

    context 'when the other object is file info object' do
      context 'when the documents are equal' do
        it 'returns true' do
          expect(info).to eq(info)
        end
      end

      context 'when the documents are not equal' do
        let(:other) do
          described_class.new(:filename => 'testing.txt')
        end

        it 'returns false' do
          expect(info).to_not eq(other)
        end
      end
    end
  end

  describe '#initialize' do
    context 'when provided only a filename and length' do
      let(:info) do
        described_class.new(:filename => 'test.txt', :length => 7)
      end

      it 'sets the default id' do
        expect(info.id).to be_a(BSON::ObjectId)
      end

      it 'sets the upload date' do
        expect(info.upload_date).to be_a(Time)
      end

      it 'sets the chunk size' do
        expect(info.chunk_size).to eq(Mongo::Grid::File::Chunk::DEFAULT_SIZE)
      end

      it 'sets the content type' do
        expect(info.content_type).to eq(Mongo::Grid::File::Info::DEFAULT_CONTENT_TYPE)
      end
    end
  end

  describe '#inspect' do
    let(:info) do
      described_class.new(:filename => 'test.txt', :length => 7)
    end

    it 'includes the chunk size' do
      expect(info.inspect).to include(info.chunk_size.to_s)
    end

    it 'includes the filename' do
      expect(info.inspect).to include(info.filename)
    end

    it 'includes the md5' do
      expect(info.inspect).to include(info.md5.to_s)
    end

    it 'includes the id' do
      expect(info.inspect).to include(info.id.to_s)
    end
  end

  context 'when there are extra options' do
    let(:info) do
      described_class.new(:filename => 'test.txt', :extra_field => 'extra')
    end

    it 'includes them in the document written to the database' do
      expect(info.document['extra_field']).to eq('extra')
      expect(info.document[:extra_field]).to eq('extra')
    end
  end
end

# ---- mongo-ruby-driver-2.21.3/spec/mongo/grid/file_spec.rb ----

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe Mongo::Grid::File do
  describe '#==' do
    let(:file) do
      described_class.new('test', :filename => 'test.txt')
    end

    context 'when the object is not a file' do
      it 'returns false' do
        expect(file).to_not eq('testing')
      end
    end

    context 'when the object is a file' do
      context 'when the objects are equal' do
        it 'returns true' do
          expect(file).to eq(file)
        end
      end

      context 'when the objects are not equal' do
        let(:other) do
          described_class.new('tester', :filename => 'test.txt')
        end

        it 'returns false' do
          expect(file).to_not eq(other)
        end
      end
    end
  end

  describe '#initialize' do
    let(:data_size) do
      Mongo::Grid::File::Chunk::DEFAULT_SIZE * 3
    end

    let(:data) do
      +'testing'
    end

    before do
      (1..data_size).each{ |i| data << '1' }
    end

    context 'when provided data and file information' do
      let(:file) do
        described_class.new(data, :filename => 'test.txt')
      end

      it 'creates the chunks' do
        expect(file.chunks.size).to eq(4)
      end

      it 'returns data' do
        expect(file.data).to eq(data)
      end
    end

    context 'when data is a ruby file' do
      let(:ruby_file) do
        File.open(__FILE__)
      end

      let(:data) do
        ruby_file.read
      end

      let(:file) do
        described_class.new(data, :filename => File.basename(ruby_file.path))
      end

      it 'creates the chunks' do
        expect(file.chunks.size).to eq(4)
      end

      it 'returns data' do
        expect(file.data).to eq(data)
      end
    end

    context 'when data is an IO object' do
      let(:io) do
        StringIO.new('testing')
      end

      let(:file) do
        described_class.new(io, filename: "test.txt")
      end

      it 'creates the chunks' do
        expect(file.chunks).not_to be_empty
      end

      it 'returns data' do
        expect(file.data).to eq 'testing'
      end
    end

    context 'when using idiomatic ruby field names' do
      let(:time) do
        Time.now.utc
      end

      let(:file) do
        described_class.new(
          data,
          :filename => 'test.txt',
          :chunk_size => 100,
          :upload_date => time,
          :content_type => 'text/plain'
        )
      end

      it 'normalizes the chunk size name' do
        expect(file.chunk_size).to eq(100)
      end

      it 'normalizes the upload date name' do
        expect(file.upload_date).to eq(time)
      end

      it 'normalizes the content type name' do
        expect(file.content_type).to eq('text/plain')
      end
    end

    context 'when provided chunks and file information' do
      let(:file_id) do
        BSON::ObjectId.new
      end

      let(:info) do
        BSON::Document.new(
          :_id => file_id,
          :uploadDate => Time.now.utc,
          :filename => 'test.txt',
          :chunkSize => Mongo::Grid::File::Chunk::DEFAULT_SIZE,
          :length => data.length,
          :contentType => Mongo::Grid::File::Info::DEFAULT_CONTENT_TYPE
        )
      end

      let(:chunks) do
        Mongo::Grid::File::Chunk.split(
          data, Mongo::Grid::File::Info.new(info)
        ).map{ |chunk| chunk.document }
      end

      let(:file) do
        described_class.new(chunks, info)
      end

      it 'sets the chunks' do
        expect(file.chunks.size).to eq(4)
      end

      it 'assembles to data' do
        expect(file.data).to eq(data)
      end

      it 'sets the file information' do
        expect(file.info.id).to eq(info[:_id])
      end
    end
  end

  describe '#inspect' do
    let(:file) do
      described_class.new('Hi', :filename => 'test.txt')
    end

    it 'includes the filename' do
      expect(file.inspect).to include('test.txt')
    end
  end
end

# ---- mongo-ruby-driver-2.21.3/spec/mongo/grid/fs_bucket_spec.rb ----

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe Mongo::Grid::FSBucket do
  let(:fs) do
    described_class.new(client.database, options)
  end

  # A different instance so that fs creates indexes correctly
  let(:support_fs) do
    described_class.new(client.database, options)
  end

  let(:client) do
    authorized_client
  end

  let(:options) do
    { }
  end

  let(:filename) do
    'specs.rb'
  end

  let(:file) do
    File.open(__FILE__)
  end

  before do
    support_fs.files_collection.drop rescue nil
    support_fs.chunks_collection.drop rescue nil
  end

  describe '#initialize' do
    it 'sets the files collection' do
      expect(fs.files_collection.name).to eq('fs.files')
    end

    it 'sets the chunks collection' do
      expect(fs.chunks_collection.name).to eq('fs.chunks')
    end

    context 'when options are provided' do
      let(:fs) do
        described_class.new(authorized_client.database, options)
      end

      context 'when a write concern is set' do
        context 'when the option :write is provided' do
          let(:options) do
            { write: { w: 2 } }
          end

          it 'sets the write concern' do
            expect(fs.send(:write_concern).options).to eq(Mongo::WriteConcern.get(w: 2).options)
          end
        end
      end

      context 'when a read preference is set' do
        context 'when given as a hash with symbol keys' do
          let(:options) do
            { read: { mode: :secondary } }
          end

          it 'returns the read preference as a BSON::Document' do
            expect(fs.send(:read_preference)).to be_a(BSON::Document)
            expect(fs.send(:read_preference)).to eq('mode' => :secondary)
          end
        end

        context 'when given as a BSON::Document' do
          let(:options) do
            BSON::Document.new(read: { mode: :secondary })
          end

          it 'returns the read preference as set' do
            expect(fs.send(:read_preference)).to eq(options[:read])
          end
        end
      end

      context 'when a read preference is not set' do
        let(:database) do
          authorized_client.with(read: { mode: :secondary }).database
        end

        let(:fs) do
          described_class.new(database, options)
        end

        it 'uses the read preference of the database' do
          expect(fs.read_preference).to be(database.read_preference)
        end
      end

      context 'when a write stream is opened' do
        let(:stream) do
          fs.open_upload_stream('test.txt')
        end

        let(:fs) do
          described_class.new(authorized_client.database, options)
        end

        context 'when a write option is specified' do
          let(:options) do
            { write: { w: 2 } }
          end

          it 'passes the write concern to the write stream' do
            expect(stream.write_concern.options).to eq(Mongo::WriteConcern.get(options[:write]).options)
          end
        end

        context 'when disable_md5 is not specified' do
          it 'does not set the option on the write stream' do
            expect(stream.options[:disable_md5]).to be_nil
          end
        end

        context 'when
disable_md5 is specified' do context 'when disable_md5 is true' do let(:options) do { disable_md5: true } end it 'passes the option to the write stream' do expect(stream.options[:disable_md5]).to be(true) end end context 'when disable_md5 is false' do let(:options) do { disable_md5: false } end it 'passes the option to the write stream' do expect(stream.options[:disable_md5]).to be(false) end end end end end end describe '#find' do let(:fs) do described_class.new(authorized_client.database) end context 'when there is no selector provided' do let(:files) do [ Mongo::Grid::File.new('hello world!', :filename => 'test.txt'), Mongo::Grid::File.new('goodbye world!', :filename => 'test1.txt') ] end before do files.each do |file| fs.insert_one(file) end end it 'returns a collection view' do expect(fs.find).to be_a(Mongo::Collection::View) end it 'iterates over the documents in the result' do fs.find.each do |document| expect(document).to_not be_nil end end end context 'when provided a filter' do let(:view) do fs.find(filename: 'test.txt') end it 'returns a collection view for the filter' do expect(view.filter).to eq('filename' => 'test.txt') end end context 'when options are provided' do let(:view) do fs.find({filename: 'test.txt'}, options) end context 'when provided allow_disk_use' do context 'when allow_disk_use is true' do let(:options) { { allow_disk_use: true } } it 'sets allow_disk_use on the view' do expect(view.options[:allow_disk_use]).to be true end end context 'when allow_disk_use is false' do let(:options) { { allow_disk_use: false } } it 'sets allow_disk_use on the view' do expect(view.options[:allow_disk_use]).to be false end end end context 'when provided batch_size' do let(:options) do { batch_size: 5 } end it 'sets the batch_size on the view' do expect(view.batch_size).to eq(options[:batch_size]) end end context 'when provided limit' do let(:options) do { limit: 5 } end it 'sets the limit on the view' do expect(view.limit).to eq(options[:limit]) end end context 'when provided no_cursor_timeout' do let(:options) do { no_cursor_timeout: true } end it 'sets the no_cursor_timeout on the view' do expect(view.options[:no_cursor_timeout]).to eq(options[:no_cursor_timeout]) end end context 'when provided skip' do let(:options) do { skip: 5 } end it 'sets the skip on the view' do expect(view.skip).to eq(options[:skip]) end end context 'when provided sort' do let(:options) do { sort: { 'x' => Mongo::Index::ASCENDING } } end it 'sets the sort on the view' do expect(view.sort).to eq(options[:sort]) end end end end describe '#find_one' do let(:fs) do described_class.new(authorized_client.database) end let(:file) do Mongo::Grid::File.new('hello world!', :filename => 'test.txt') end before do fs.insert_one(file) end let(:from_db) do fs.find_one(:filename => 'test.txt') end let(:from_db_upload_date) do from_db.info.upload_date.strftime("%Y-%m-%d %H:%M:%S") end let(:file_info_upload_date) do file.info.upload_date.strftime("%Y-%m-%d %H:%M:%S") end it 'returns the assembled file from the db' do expect(from_db.filename).to eq(file.info.filename) end it 'maps the file info correctly' do expect(from_db.info.length).to eq(file.info.length) expect(from_db_upload_date).to eq(file_info_upload_date) end end describe '#insert_one' do let(:fs) do described_class.new(authorized_client.database) end let(:file) do Mongo::Grid::File.new('Hello!', :filename => 'test.txt') end let(:support_file) do Mongo::Grid::File.new('Hello!', :filename => 'support_test.txt') end context 'when inserting the file once' do 
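    # Illustrative sketch (not part of the original specs): the insert_one /
    # find_one round trip these examples cover, as an application would call it.
    #
    #   fs   = client.database.fs
    #   file = Mongo::Grid::File.new('Hello!', filename: 'test.txt')
    #   fs.insert_one(file)                      # => the file's id
    #   fs.find_one(filename: 'test.txt').data   # => "Hello!"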
let!(:result) do fs.insert_one(file) end let(:from_db) do fs.find_one(:filename => 'test.txt') end it 'inserts the file into the database' do expect(from_db.filename).to eq(file.info.filename) end it 'includes the chunks and data with the file' do expect(from_db.data).to eq('Hello!') end it 'returns the file id' do expect(result).to eq(file.id) end end context 'when the files collection is empty' do before do fs.database[fs.files_collection.name].indexes end let(:operation) do expect(fs.files_collection).to receive(:indexes).and_call_original expect(fs.chunks_collection).to receive(:indexes).and_call_original fs.insert_one(file) end let(:chunks_index) do fs.database[fs.chunks_collection.name].indexes.get(:files_id => 1, :n => 1) end let(:files_index) do fs.database[fs.files_collection.name].indexes.get(:filename => 1, :uploadDate => 1) end it 'tries to create indexes' do expect(fs).to receive(:create_index_if_missing!).twice.and_call_original operation end it 'creates an index on the files collection' do operation expect(files_index[:name]).to eq('filename_1_uploadDate_1') end it 'creates an index on the chunks collection' do operation expect(chunks_index[:name]).to eq('files_id_1_n_1') end context 'when a write operation is called more than once' do let(:file2) do Mongo::Grid::File.new('Goodbye!', :filename => 'test2.txt') end it 'only creates the indexes the first time' do RSpec::Mocks.with_temporary_scope do expect(fs).to receive(:create_index_if_missing!).twice.and_call_original operation end RSpec::Mocks.with_temporary_scope do expect(fs).not_to receive(:create_index_if_missing!) expect(fs.insert_one(file2)).to be_a(BSON::ObjectId) end end end end context 'when the index creation encounters an error' do before do fs.chunks_collection.indexes.create_one(Mongo::Grid::FSBucket::CHUNKS_INDEX, :unique => false) end it 'should not raise an error to the user' do expect { fs.insert_one(file) }.not_to raise_error end end context 'when the files collection is not empty' do before do support_fs.insert_one(support_file) fs.insert_one(file) end let(:files_index) do fs.database[fs.files_collection.name].indexes.get(:filename => 1, :uploadDate => 1) end it 'assumes indexes already exist' do expect(files_index[:name]).to eq('filename_1_uploadDate_1') end end context 'when inserting the file more than once' do it 'raises an error' do expect { fs.insert_one(file) fs.insert_one(file) }.to raise_error(Mongo::Error::BulkWriteError) end end context 'when the file exceeds the max bson size' do let(:fs) do described_class.new(authorized_client.database) end let(:file) do str = 'y' * 16777216 Mongo::Grid::File.new(str, :filename => 'large-file.txt') end before do fs.insert_one(file) end it 'successfully inserts the file' do expect( fs.find_one(:filename => 'large-file.txt').chunks ).to eq(file.chunks) end end end describe '#delete_one' do let(:file) do Mongo::Grid::File.new('Hello!', :filename => 'test.txt') end before do fs.insert_one(file) fs.delete_one(file) end let(:from_db) do fs.find_one(:filename => 'test.txt') end it 'removes the file from the db' do expect(from_db).to be_nil end end describe '#delete' do let(:file_id) do fs.upload_from_stream(filename, file) end before do fs.delete(file_id) end let(:from_db) do fs.find_one(:filename => filename) end it 'removes the file from the db' do expect(from_db).to be_nil end context 'when a custom file id is used' do let(:custom_file_id) do fs.upload_from_stream(filename, file, file_id: 'Custom ID') end before do fs.delete(custom_file_id) end let(:from_db) do 
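      # Illustrative sketch (not part of the original specs): deleting by id,
      # including a caller-supplied file_id as in this context.
      #
      #   file_id = fs.upload_from_stream('specs.rb', file, file_id: 'Custom ID')
      #   fs.delete(file_id)                 # removes the files document and its chunks
      #   fs.find_one(filename: 'specs.rb')  # => nil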
fs.find_one(:filename => filename) end it 'removes the file from the db' do expect(from_db).to be_nil end end end context 'when a read stream is opened' do let(:fs) do described_class.new(authorized_client.database, options) end let(:io) do StringIO.new end describe '#open_download_stream' do let!(:file_id) do fs.open_upload_stream(filename) do |stream| stream.write(file) end.file_id end context 'when a block is provided' do let!(:stream) do fs.open_download_stream(file_id) do |stream| io.write(stream.read) end end it 'returns a Stream::Read object' do expect(stream).to be_a(Mongo::Grid::FSBucket::Stream::Read) end it 'closes the stream after the block completes' do expect(stream.closed?).to be(true) end it 'yields the stream to the block' do expect(io.size).to eq(file.size) end end context 'when a block is not provided' do let!(:stream) do fs.open_download_stream(file_id) end it 'returns a Stream::Read object' do expect(stream).to be_a(Mongo::Grid::FSBucket::Stream::Read) end it 'does not close the stream' do expect(stream.closed?).to be(false) end it 'does not yield the stream to the block' do expect(io.size).to eq(0) end end context 'when a custom file id is provided' do let(:file) do File.open(__FILE__) end let!(:file_id) do fs.open_upload_stream(filename, file_id: 'Custom ID') do |stream| stream.write(file) end.file_id end context 'when a block is provided' do let!(:stream) do fs.open_download_stream(file_id) do |stream| io.write(stream.read) end end it 'yields the stream to the block' do expect(io.size).to eq(file.size) end end context 'when a block is not provided' do let!(:stream) do fs.open_download_stream(file_id) end it 'returns a Stream::Read object' do expect(stream).to be_a(Mongo::Grid::FSBucket::Stream::Read) end it 'does not close the stream' do expect(stream.closed?).to be(false) end it 'does not yield the stream to the block' do expect(io.size).to eq(0) end end end end describe '#download_to_stream' do context 'sessions' do let(:options) do { session: session } end let(:file_id) do fs.open_upload_stream(filename) do |stream| stream.write(file) end.file_id end let(:operation) do fs.download_to_stream(file_id, io) end let(:client) do authorized_client end it_behaves_like 'an operation using a session' end context 'when the file is found' do let!(:file_id) do fs.open_upload_stream(filename) do |stream| stream.write(file) end.file_id end before do fs.download_to_stream(file_id, io) end it 'writes to the provided stream' do expect(io.size).to eq(file.size) end it 'does not close the stream' do expect(io.closed?).to be(false) end context 'when the file has length 0' do let(:file) do StringIO.new('') end let(:from_db) do fs.open_upload_stream(filename) { |s| s.write(file) } fs.find_one(:filename => filename) end it 'can read the file back' do expect(from_db.data.size).to eq(file.size) end end end context 'when there is no files collection document found' do it 'raises an exception' do expect{ fs.download_to_stream(BSON::ObjectId.new, io) }.to raise_exception(Mongo::Error::FileNotFound) end end context 'when a file has an id that is not an ObjectId' do before do fs.insert_one(file) fs.download_to_stream(file_id, io) end let(:file_id) do 'non-object-id' end let(:file) do Mongo::Grid::File.new(File.open(__FILE__).read, :filename => filename, :_id => file_id) end it 'reads the file successfully' do expect(io.size).to eq(file.data.size) end end end context 'when a read preference is specified' do let(:fs) do described_class.new(authorized_client.database, options) end let(:options) 
do { read: { mode: :secondary } } end let(:stream) do fs.open_download_stream(BSON::ObjectId) end it 'sets the read preference on the Stream::Read object' do expect(stream.read_preference).to be_a(BSON::Document) expect(stream.read_preference).to eq(BSON::Document.new(options[:read])) end end describe '#download_to_stream_by_name' do let(:files) do [ StringIO.new('hello 1'), StringIO.new('hello 2'), StringIO.new('hello 3'), StringIO.new('hello 4') ] end context ' when using a session' do let(:options) do { session: session } end let(:operation) do fs.download_to_stream_by_name('test.txt', io) end let(:client) do authorized_client end before do files.each do |file| authorized_client.database.fs.upload_from_stream('test.txt', file) end end let(:io) do StringIO.new end it_behaves_like 'an operation using a session' end context 'when not using a session' do before do files.each do |file| fs.upload_from_stream('test.txt', file) end end let(:io) do StringIO.new end context 'when revision is not specified' do let!(:result) do fs.download_to_stream_by_name('test.txt', io) end it 'returns the most recent version' do expect(io.string).to eq('hello 4') end end context 'when revision is 0' do let!(:result) do fs.download_to_stream_by_name('test.txt', io, revision: 0) end it 'returns the original stored file' do expect(io.string).to eq('hello 1') end end context 'when revision is negative' do let!(:result) do fs.download_to_stream_by_name('test.txt', io, revision: -2) end it 'returns that number of versions from the most recent' do expect(io.string).to eq('hello 3') end end context 'when revision is positive' do let!(:result) do fs.download_to_stream_by_name('test.txt', io, revision: 1) end it 'returns that number revision' do expect(io.string).to eq('hello 2') end end context 'when the file revision is not found' do it 'raises a FileNotFound error' do expect { fs.download_to_stream_by_name('test.txt', io, revision: 100) }.to raise_exception(Mongo::Error::InvalidFileRevision) end end context 'when the file is not found' do it 'raises a FileNotFound error' do expect { fs.download_to_stream_by_name('non-existent.txt', io) }.to raise_exception(Mongo::Error::FileNotFound) end end end end describe '#open_download_stream_by_name' do let(:files) do [ StringIO.new('hello 1'), StringIO.new('hello 2'), StringIO.new('hello 3'), StringIO.new('hello 4') ] end let(:io) do StringIO.new end context ' when using a session' do let(:options) do { session: session } end let(:operation) do fs.download_to_stream_by_name('test.txt', io) end let(:client) do authorized_client end before do files.each do |file| authorized_client.database.fs.upload_from_stream('test.txt', file) end end let(:io) do StringIO.new end it_behaves_like 'an operation using a session' end context 'when not using a session' do before do files.each do |file| fs.upload_from_stream('test.txt', file) end end context 'when a block is provided' do let(:stream) do fs.open_download_stream_by_name('test.txt') do |stream| io.write(stream.read) end end it 'returns a Stream::Read object' do expect(stream).to be_a(Mongo::Grid::FSBucket::Stream::Read) end it 'closes the stream after the block completes' do expect(stream.closed?).to be(true) end it 'yields the stream to the block' do stream expect(io.size).to eq(files[0].size) end context 'when revision is not specified' do let!(:result) do fs.open_download_stream_by_name('test.txt') do |stream| io.write(stream.read) end end it 'returns the most recent version' do expect(io.string).to eq('hello 4') end end context 
'when revision is 0' do let!(:result) do fs.open_download_stream_by_name('test.txt', revision: 0) do |stream| io.write(stream.read) end end it 'returns the original stored file' do expect(io.string).to eq('hello 1') end end context 'when revision is negative' do let!(:result) do fs.open_download_stream_by_name('test.txt', revision: -2) do |stream| io.write(stream.read) end end it 'returns that number of versions from the most recent' do expect(io.string).to eq('hello 3') end end context 'when revision is positive' do let!(:result) do fs.open_download_stream_by_name('test.txt', revision: 1) do |stream| io.write(stream.read) end end it 'returns that number revision' do expect(io.string).to eq('hello 2') end end context 'when the file revision is not found' do it 'raises a FileNotFound error' do expect { fs.open_download_stream_by_name('test.txt', revision: 100) }.to raise_exception(Mongo::Error::InvalidFileRevision) end end context 'when the file is not found' do it 'raises a FileNotFound error' do expect { fs.open_download_stream_by_name('non-existent.txt') }.to raise_exception(Mongo::Error::FileNotFound) end end end context 'when a block is not provided' do let!(:stream) do fs.open_download_stream_by_name('test.txt') end it 'returns a Stream::Read object' do expect(stream).to be_a(Mongo::Grid::FSBucket::Stream::Read) end it 'does not close the stream' do expect(stream.closed?).to be(false) end it 'does not yield the stream to the block' do expect(io.size).to eq(0) end end end end end context 'when a write stream is opened' do let(:stream) do fs.open_upload_stream(filename) end describe '#open_upload_stream' do context 'when a block is not provided' do it 'returns a Stream::Write object' do expect(stream).to be_a(Mongo::Grid::FSBucket::Stream::Write) end it 'creates an ObjectId for the file' do expect(stream.file_id).to be_a(BSON::ObjectId) end context 'when a custom file ID is provided' do let(:stream) do fs.open_upload_stream(filename, file_id: 'Custom ID') end it 'returns a Stream::Write object' do expect(stream).to be_a(Mongo::Grid::FSBucket::Stream::Write) end it 'creates an ObjectId for the file' do expect(stream.file_id).to eq('Custom ID') end end end context 'when a block is provided' do context 'when a session is not used' do let!(:stream) do fs.open_upload_stream(filename) do |stream| stream.write(file) end end let(:result) do fs.find_one(filename: filename) end it 'returns the stream' do expect(stream).to be_a(Mongo::Grid::FSBucket::Stream::Write) end it 'creates an ObjectId for the file' do expect(stream.file_id).to be_a(BSON::ObjectId) end it 'yields the stream to the block' do expect(result.data.size).to eq(file.size) end it 'closes the stream when the block completes' do expect(stream.closed?).to be(true) end end end end describe '#upload_from_stream' do let!(:result) do fs.upload_from_stream(filename, file) end let(:file_from_db) do fs.find_one(:filename => filename) end it 'writes to the provided stream' do expect(file_from_db.data.length).to eq(file.size) end it 'does not close the stream' do expect(file.closed?).to be(false) end it 'returns the id of the file' do expect(result).to be_a(BSON::ObjectId) end context 'when the io stream raises an error' do let(:stream) do fs.open_upload_stream(filename) end before do allow(fs).to receive(:open_upload_stream).and_yield(stream) end context 'when stream#abort does not raise an OperationFailure' do before do expect(stream).to receive(:abort).and_call_original file.close end it 'raises the original IOError' do expect { 
            fs.upload_from_stream(filename, file)
          }.to raise_exception(IOError)
        end

        it 'closes the stream' do
          begin; fs.upload_from_stream(filename, file); rescue; end
          expect(stream.closed?).to be(true)
        end
      end

      context 'when stream#abort raises an OperationFailure' do
        before do
          allow(stream).to receive(:abort).and_raise(Mongo::Error::OperationFailure)
          file.close
        end

        it 'raises the original IOError' do
          expect {
            fs.upload_from_stream(filename, file)
          }.to raise_exception(IOError)
        end
      end
    end
  end

  context 'when options are provided when opening the write stream' do
    let(:stream) do
      fs.open_upload_stream(filename, stream_options)
    end

    context 'when a custom file id is provided' do
      let(:stream_options) do
        { file_id: 'Custom ID' }
      end

      it 'sets the file id on the stream' do
        expect(stream.file_id).to eq('Custom ID')
      end
    end

    context 'when a write option is specified' do
      let(:stream_options) do
        { write: { w: 2 } }
      end

      it 'sets the write concern on the write stream' do
        expect(stream.write_concern.options).to eq(Mongo::WriteConcern.get(stream_options[:write]).options)
      end
    end

    context 'when there is a chunk size set on the FSBucket' do
      let(:stream_options) do
        { }
      end

      let(:options) do
        { chunk_size: 100 }
      end

      it 'sets the chunk size as the default on the write stream' do
        expect(stream.options[:chunk_size]).to eq(options[:chunk_size])
      end
    end

    context 'when a chunk size option is specified' do
      let(:stream_options) do
        { chunk_size: 50 }
      end

      it 'sets the chunk size on the write stream' do
        expect(stream.options[:chunk_size]).to eq(stream_options[:chunk_size])
      end

      context 'when there is a chunk size set on the FSBucket' do
        let(:options) do
          { chunk_size: 100 }
        end

        let(:fs) do
          described_class.new(authorized_client.database, options)
        end

        it 'uses the chunk size set on the write stream' do
          expect(stream.options[:chunk_size]).to eq(stream_options[:chunk_size])
        end
      end
    end

    context 'when a file metadata option is specified' do
      let(:stream_options) do
        { metadata: { some_field: 1 } }
      end

      it 'sets the file metadata option on the write stream' do
        expect(stream.options[:metadata]).to eq(stream_options[:metadata])
      end
    end

    context 'when a content type option is specified' do
      let(:stream_options) do
        { content_type: 'text/plain' }
      end

      it 'sets the content type on the write stream' do
        expect(stream.options[:content_type]).to eq(stream_options[:content_type])
      end
    end

    context 'when an aliases option is specified' do
      let(:stream_options) do
        { aliases: [ 'another-name.txt' ] }
      end

      it 'sets the alias option on the write stream' do
        expect(stream.options[:aliases]).to eq(stream_options[:aliases])
      end
    end
  end
end
end

# ---- mongo-ruby-driver-2.21.3/spec/mongo/grid/stream/read_spec.rb ----

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe Mongo::Grid::FSBucket::Stream::Read do
  let(:support_fs) do
    authorized_client.database.fs(fs_options)
  end

  before do
    support_fs.files_collection.drop rescue nil
    support_fs.chunks_collection.drop rescue nil
  end

  let(:fs_options) do
    { }
  end

  let(:fs) do
    authorized_client.database.fs(fs_options)
  end

  let(:options) do
    { file_id: file_id }
  end

  let(:filename) do
    'specs.rb'
  end

  let!(:file_id) do
    fs.upload_from_stream(filename, File.open(__FILE__))
  end

  let(:stream) do
    described_class.new(fs, options)
  end

  describe '#initialize' do
    it 'sets the file id' do
      expect(stream.file_id).to eq(file_id)
    end

    it 'sets the
fs object' do expect(stream.fs).to eq(fs) end context 'when there is a read preference set on the FSBucket' do let(:fs_options) do { read: { mode: :secondary } } end it 'uses the read preference of the fs as a default' do expect(stream.read_preference).to eq(fs.read_preference) end end it 'opens a stream' do expect(stream.close).to eq(file_id) end context 'when provided options' do context 'when provided read preference' do context 'when given as a hash with symbol keys' do let(:options) do { file_id: file_id, read: { mode: :primary_preferred }, } end it 'sets the read preference as a BSON::Document' do expect(stream.read_preference).to be_a(BSON::Document) expect(stream.read_preference).to eq('mode' => :primary_preferred) end it 'sets the read preference on the view' do expect(stream.send(:view).read).to eq(BSON::Document.new(options[:read])) end end context 'when given as a BSON::Document' do let(:options) do BSON::Document.new( file_id: file_id, read: { mode: :primary_preferred }, ) end it 'sets the read preference' do expect(stream.read_preference).to eq(options[:read]) end it 'sets the read preference on the view' do expect(stream.send(:view).read).to eq(options[:read]) end end end context 'when provided a file_id' do it 'sets the file id' do expect(stream.file_id).to eq(options[:file_id]) end end end end describe '#each' do let(:filename) do 'specs.rb' end let!(:file_id) do fs.upload_from_stream(filename, File.open(__FILE__)) end let(:fs_options) do { chunk_size: 5 } end it 'iterates over all the chunks of the file' do stream.each do |chunk| expect(chunk).not_to be(nil) end end context 'when the stream is closed' do before do stream.close end it 'does not allow further iteration' do expect { stream.to_a }.to raise_error(Mongo::Error::ClosedStream) end end context 'when a chunk is found out of order' do before do view = stream.fs.chunks_collection.find({ :files_id => file_id }, options).sort(:n => -1) stream.instance_variable_set(:@view, view) expect(stream).to receive(:close) end it 'raises an exception' do expect { stream.to_a }.to raise_error(Mongo::Error::MissingFileChunk) end it 'closes the query' do begin stream.to_a rescue Mongo::Error::MissingFileChunk end end end context 'when a chunk does not have the expected length' do before do stream.send(:file_info) stream.instance_variable_get(:@file_info).document['chunkSize'] = 4 expect(stream).to receive(:close) end it 'raises an exception' do expect { stream.to_a }.to raise_error(Mongo::Error::UnexpectedChunkLength) end it 'closes the query' do begin stream.to_a rescue Mongo::Error::UnexpectedChunkLength end end end context 'when there is no files document found' do before do fs.files_collection.delete_many end it 'raises an Exception' do expect{ stream.to_a }.to raise_exception(Mongo::Error::FileNotFound) end end end describe '#read' do let(:filename) do 'specs.rb' end let(:file) do File.open(__FILE__) end let(:file_id) do fs.upload_from_stream(filename, file) end it 'returns a string of all data' do expect(stream.read.size).to eq(file.size) end end describe '#file_info' do it 'returns a files information document' do expect(stream.file_info).to be_a(Mongo::Grid::File::Info) end end describe '#close' do let(:view) do stream.instance_variable_get(:@view) end before do stream.to_a end it 'returns the file id' do expect(stream.close).to eq(file_id) end context 'when the stream is closed' do before do stream.to_a expect(view).to receive(:close_query).and_call_original end it 'calls close_query on the view' do expect(stream.close).to 
be_a(BSON::ObjectId)
      end
    end

    context 'when the stream is already closed' do
      before do
        stream.close
      end

      it 'does not raise an exception' do
        expect { stream.close }.not_to raise_error
      end
    end
  end

  describe '#closed?' do
    context 'when the stream is closed' do
      before do
        stream.close
      end

      it 'returns true' do
        expect(stream.closed?).to be(true)
      end
    end

    context 'when the stream is still open' do
      it 'returns false' do
        expect(stream.closed?).to be(false)
      end
    end
  end
end

# ---- mongo-ruby-driver-2.21.3/spec/mongo/grid/stream/write_spec.rb ----

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe Mongo::Grid::FSBucket::Stream::Write do
  let(:support_fs) do
    authorized_client.database.fs(fs_options)
  end

  before do
    support_fs.files_collection.drop rescue nil
    support_fs.chunks_collection.drop rescue nil
  end

  let(:file) do
    File.open(__FILE__)
  end

  let(:file2) do
    File.open(__FILE__)
  end

  let(:fs_options) do
    { }
  end

  let(:fs) do
    authorized_client.database.fs(fs_options)
  end

  let(:filename) do
    'specs.rb'
  end

  let(:extra_options) do
    { }
  end

  let(:options) do
    { filename: filename }.merge(extra_options).merge(fs.options)
  end

  let(:stream) do
    described_class.new(fs, options)
  end

  describe '#initialize' do
    it 'sets the file id' do
      expect(stream.file_id).to be_a(BSON::ObjectId)
    end

    it 'sets the fs object' do
      expect(stream.fs).to eq(fs)
    end

    it 'opens a stream' do
      expect(stream.close).to be_a(BSON::ObjectId)
    end

    context 'when the fs does not have disable_md5 specified' do
      it 'sets an md5 for the file' do
        stream.send(:file_info).to_bson
        expect(stream.send(:file_info).document[:md5].size).to eq(32)
      end
    end

    context 'when the fs has disable_md5 specified' do
      before do
        stream.send(:file_info).to_bson
      end

      context 'when disable_md5 is true' do
        let(:fs_options) do
          { disable_md5: true }
        end

        it 'does not set an md5 for the file' do
          expect(stream.send(:file_info).document.has_key?(:md5)).
            to be(false)
          expect(stream.send(:file_info).document[:md5]).
to be_nil end end context 'when disabled_md5 is false' do let(:fs_options) do { disable_md5: false } end it 'sets an md5 for the file' do stream.send(:file_info).to_bson expect(stream.send(:file_info).document[:md5].size).to eq(32) end end end context 'when the fs has a write concern' do require_topology :single let(:fs_options) do { write: INVALID_WRITE_CONCERN } end it 'uses the write concern of the fs as a default' do expect{ stream.close }.to raise_exception(Mongo::Error::OperationFailure) end end context 'when the fs does not have a write concern' do let(:fs) do authorized_client.with(write: nil).database.fs end it 'uses the write concern default at the operation level' do expect(stream.write(file).closed?).to eq(false) end end context 'when provided options' do context 'when provided a write option' do let(:extra_options) do { write: INVALID_WRITE_CONCERN } end let(:expected) do Mongo::WriteConcern.get(options[:write]).options end it 'sets the write concern' do expect(stream.write_concern.options).to eq(expected) end context 'when chunks are inserted' do it 'uses that write concern' do expect(stream.send(:chunks_collection).write_concern.options[:w]).to eq(expected[:w]) end end context 'when a files document is inserted' do it 'uses that write concern' do expect(stream.send(:files_collection).write_concern.options[:w]).to eq(expected[:w]) end end end context 'when provided a metadata document' do let(:options) do { metadata: { 'some_field' => 'test-file' } } end it 'sets the metadata document' do expect(stream.send(:file_info).metadata).to eq(options[:metadata]) end end context 'when provided a chunk size option' do let(:options) do { chunk_size: 50 } end it 'sets the chunk size' do expect(stream.send(:file_info).chunk_size).to eq(options[:chunk_size]) end context 'when chunk size is also set on the FSBucket object' do let(:fs_options) do { chunk_size: 100 } end it 'uses the write stream options' do expect(stream.send(:file_info).chunk_size).to eq(options[:chunk_size]) end end end context 'when provided a content type option' do let(:options) do { content_type: 'text/plain' } end it 'sets the content type' do expect(stream.send(:file_info).content_type).to eq(options[:content_type]) end end context 'when provided an aliases option' do let(:options) do { aliases: [ 'testing-file' ] } end it 'sets the aliases' do expect(stream.send(:file_info).document[:aliases]).to eq(options[:aliases]) end end context 'when provided a file_id option' do let(:options) do { file_id: 'Custom ID' } end it 'assigns the stream the file id' do expect(stream.file_id).to eq(options[:file_id]) end end end end describe '#write' do let(:file_from_db) do fs.find_one(filename: filename) end context 'when the stream is written to' do before do stream.write(file) end it 'does not close the stream' do expect(stream).not_to receive(:close) end end context 'when indexes need to be ensured' do context 'when the files collection is empty' do before do stream.write(file) end let(:chunks_index) do fs.database[fs.chunks_collection.name].indexes.get(:files_id => 1, :n => 1) end let(:files_index) do fs.database[fs.files_collection.name].indexes.get(:filename => 1, :uploadDate => 1) end it 'creates an index on the files collection' do expect(files_index[:name]).to eq('filename_1_uploadDate_1') end it 'creates an index on the chunks collection' do expect(chunks_index[:name]).to eq('files_id_1_n_1') end context 'when write is called more than once' do before do expect(fs).not_to receive(:ensure_indexes!) 
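          # Illustrative sketch (not part of the original specs): the GridFS
          # indexes the first write creates, as they appear via the driver:
          #
          #   fs.files_collection.indexes.map { |i| i['name'] }
          #   # => ["_id_", "filename_1_uploadDate_1"]
          #   fs.chunks_collection.indexes.map { |i| i['name'] }
          #   # => ["_id_", "files_id_1_n_1"]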
end it 'only creates the indexes the first time' do stream.write(file2) end end end context 'when the files collection is empty but indexes already exist with double values' do before do fs.files_collection.indexes.create_one( { filename: 1.0, uploadDate: 1.0 }, name: 'filename_1_uploadDate_1' ) fs.chunks_collection.indexes.create_one( { files_id: 1.0, n: 1.0 }, name: 'files_id_1_n_1', unique: true ) end it 'does not raise an exception' do expect do stream.write(file) end.not_to raise_error end it 'does not create new indexes' do stream.write(file) files_indexes = fs.files_collection.indexes.map { |index| index['key'] } chunks_indexes = fs.chunks_collection.indexes.map { |index| index['key'] } # Ruby parses the index keys with integer values expect(files_indexes).to eq([{ '_id' => 1 }, { 'filename' => 1, 'uploadDate' => 1 }]) expect(chunks_indexes).to eq([{ '_id' => 1 }, { 'files_id' => 1, 'n' => 1 }]) end end context 'when the files collection is not empty' do before do support_fs.send(:ensure_indexes!) support_fs.files_collection.insert_one(a: 1) stream.write(file) end let(:files_index) do fs.database[fs.files_collection.name].indexes.get(:filename => 1, :uploadDate => 1) end it 'assumes indexes already exist' do expect(files_index[:name]).to eq('filename_1_uploadDate_1') end end context 'when the index creation is done explicitely' do before do fs.chunks_collection.indexes.create_one(Mongo::Grid::FSBucket::CHUNKS_INDEX, :unique => false) end it 'should not raise an error to the user' do expect { stream.write(file) }.not_to raise_error end end end context 'when provided an io stream' do context 'when no file id is specified' do before do stream.write(file) stream.close end it 'writes the contents of the stream' do expect(file_from_db.data.size).to eq(file.size) end it 'updates the length written' do expect(stream.send(:file_info).document['length']).to eq(file.size) end it 'updates the position (n)' do expect(stream.instance_variable_get(:@n)).to eq(1) end end context 'when a custom file id is provided' do let(:extra_options) do { file_id: 'Custom ID' } end let!(:id) do stream.write(file) stream.close end it 'writes the contents of the stream' do expect(file_from_db.data.size).to eq(file.size) end it 'updates the length written' do expect(stream.send(:file_info).document['length']).to eq(file.size) end it 'updates the position (n)' do expect(stream.instance_variable_get(:@n)).to eq(1) end it 'uses the custom file id' do expect(id).to eq(options[:file_id]) end end context 'when the user file contains no data' do before do stream.write(file) stream.close end let(:file) do StringIO.new('') end let(:files_coll_doc) do stream.fs.files_collection.find(filename: filename).to_a.first end let(:chunks_documents) do stream.fs.chunks_collection.find(files_id: stream.file_id).to_a end it 'creates a files document' do expect(files_coll_doc).not_to be(nil) end it 'sets length to 0 in the files document' do expect(files_coll_doc['length']).to eq(0) end it 'does not insert any chunks' do expect(file_from_db.data.size).to eq(file.size) end end end context 'when the stream is written to multiple times' do before do stream.write(file) stream.write(file2) stream.close end it 'writes the contents of the stream' do expect(file_from_db.data.size).to eq(file.size * 2) end it 'updates the length written' do expect(stream.send(:file_info).document['length']).to eq(file.size * 2) end it 'updates the position (n)' do expect(stream.instance_variable_get(:@n)).to eq(2) end end context 'when the stream is closed' do 
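      # Illustrative sketch (not part of the original specs): the write-stream
      # lifecycle pinned down here; writes are only valid until #close.
      #
      #   stream = fs.open_upload_stream('specs.rb')
      #   stream.write(StringIO.new('payload'))
      #   file_id = stream.close     # finalizes the files-collection document
      #   stream.write(anything)     # => raises Mongo::Error::ClosedStream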
      before do
        stream.close
      end

      it 'does not allow further writes' do
        expect {
          stream.write(file)
        }.to raise_error(Mongo::Error::ClosedStream)
      end
    end
  end

  describe '#close' do
    let(:file_content) do
      File.open(__FILE__).read
    end

    context 'when close is called on the stream' do
      before do
        stream.write(file)
      end

      let(:file_id) do
        stream.file_id
      end

      it 'returns the file id' do
        expect(stream.close).to eq(file_id)
      end
    end

    context 'when the stream is closed' do
      before do
        stream.write(file)
        stream.close
      end

      let(:md5) do
        Digest::MD5.new.update(file_content).hexdigest
      end

      let(:files_coll_doc) do
        stream.fs.files_collection.find(filename: filename).to_a.first
      end

      it 'inserts a file document in the files collection' do
        expect(files_coll_doc['_id']).to eq(stream.file_id)
      end

      it 'updates the length in the files collection file document' do
        expect(stream.send(:file_info).document[:length]).to eq(file.size)
      end

      it 'updates the md5 in the files collection file document' do
        expect(stream.send(:file_info).document[:md5]).to eq(md5)
      end
    end

    context 'when the stream is already closed' do
      before do
        stream.close
      end

      it 'raises an exception' do
        expect {
          stream.close
        }.to raise_error(Mongo::Error::ClosedStream)
      end
    end
  end

  describe '#closed?' do
    context 'when the stream is closed' do
      before do
        stream.close
      end

      it 'returns true' do
        expect(stream.closed?).to be(true)
      end
    end

    context 'when the stream is still open' do
      it 'returns false' do
        expect(stream.closed?).to be(false)
      end
    end
  end
end

# ---- mongo-ruby-driver-2.21.3/spec/mongo/grid/stream_spec.rb ----

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe Mongo::Grid::FSBucket::Stream do
  let(:fs) do
    authorized_client.database.fs
  end

  describe '.get' do
    let(:stream) do
      described_class.get(fs, mode)
    end

    context 'when mode is read' do
      let(:mode) do
        Mongo::Grid::FSBucket::Stream::READ_MODE
      end

      it 'returns a Stream::Read object' do
        expect(stream).to be_a(Mongo::Grid::FSBucket::Stream::Read)
      end
    end

    context 'when mode is write' do
      let(:mode) do
        Mongo::Grid::FSBucket::Stream::WRITE_MODE
      end

      it 'returns a Stream::Write object' do
        expect(stream).to be_a(Mongo::Grid::FSBucket::Stream::Write)
      end

      context 'when options are provided' do
        let(:stream) do
          described_class.get(fs, mode, chunk_size: 100)
        end

        it 'sets the options on the stream object' do
          expect(stream.options[:chunk_size]).to eq(100)
        end
      end
    end
  end
end

# ---- mongo-ruby-driver-2.21.3/spec/mongo/id_spec.rb ----

# frozen_string_literal: true
# rubocop:todo all

require 'lite_spec_helper'

describe Mongo::Id do
  it 'starts with ID 1' do
    class IdA
      include Mongo::Id
    end

    expect(IdA.next_id).to eq(1)
  end

  it 'increases each subsequent ID' do
    class IdB
      include Mongo::Id
    end

    expect(IdB.next_id).to eq(1)
    expect(IdB.next_id).to eq(2)
  end

  it 'correctly generates independent IDs for separate classes' do
    class IdC
      include Mongo::Id
    end

    class IdD
      include Mongo::Id
    end

    expect(IdC.next_id).to eq(1)
    expect(IdD.next_id).to eq(1)
    expect(IdC.next_id).to eq(2)
    expect(IdD.next_id).to eq(2)
  end
end

# ---- mongo-ruby-driver-2.21.3/spec/mongo/index/view_spec.rb ----

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe Mongo::Index::View do
  let(:subscriber) { Mrss::EventSubscriber.new
} let(:client) do authorized_client.tap do |client| client.subscribe(Mongo::Monitoring::COMMAND, subscriber) end end let(:authorized_collection) do client[TEST_COLL] end let(:view) do described_class.new(authorized_collection, options) end let(:options) do {} end before do begin authorized_collection.delete_many rescue Mongo::Error::OperationFailure end begin authorized_collection.indexes.drop_all rescue Mongo::Error::OperationFailure end end describe '#drop_one' do let(:spec) do { another: -1 } end before do view.create_one(spec, unique: true) end context 'when provided a session' do let(:view_with_session) do described_class.new(authorized_collection, session: session) end let(:client) do authorized_client end let(:operation) do view_with_session.drop_one('another_-1') end let(:failed_operation) do view_with_session.drop_one('_another_-1') end it_behaves_like 'an operation using a session' it_behaves_like 'a failed operation using a session' end context 'when the index exists' do let(:result) do view.drop_one('another_-1') end it 'drops the index' do expect(result).to be_successful end end context 'when passing a * as the name' do it 'raises an exception' do expect { view.drop_one('*') }.to raise_error(Mongo::Error::MultiIndexDrop) end end context 'when the collection has a write concern' do let(:collection) do authorized_collection.with(write: INVALID_WRITE_CONCERN) end let(:view_with_write_concern) do described_class.new(collection) end let(:result) do view_with_write_concern.drop_one('another_-1') end context 'when the server accepts writeConcern for the dropIndexes operation' do min_server_fcv '3.4' it 'applies the write concern' do expect { result }.to raise_exception(Mongo::Error::OperationFailure) end end context 'when the server does not accept writeConcern for the dropIndexes operation' do max_server_version '3.2' it 'does not apply the write concern' do expect(result).to be_successful end end end context 'when there are multiple indexes with the same key pattern' do min_server_fcv '3.4' before do view.create_one({ random: 1 }, unique: true) view.create_one({ random: 1 }, name: 'random_1_with_collation', unique: true, collation: { locale: 'en_US', strength: 2 }) end context 'when a name is supplied' do let!(:result) do view.drop_one('random_1_with_collation') end let(:index_names) do view.collect { |model| model['name'] } end it 'returns ok' do expect(result).to be_successful end it 'drops the correct index' do expect(index_names).not_to include('random_1_with_collation') expect(index_names).to include('random_1') end end end context 'with a comment' do min_server_version '4.4' it 'drops the index' do expect(view.drop_one('another_-1', comment: "comment")).to be_successful command = subscriber.command_started_events("dropIndexes").last&.command expect(command).not_to be_nil expect(command["comment"]).to eq("comment") end end end describe '#drop_all' do let(:spec) do { another: -1 } end before do view.create_one(spec, unique: true) end context 'when indexes exists' do let(:result) do view.drop_all end it 'drops the index' do expect(result).to be_successful end context 'when provided a session' do let(:view_with_session) do described_class.new(authorized_collection, session: session) end let(:operation) do view_with_session.drop_all end let(:client) do authorized_client end it_behaves_like 'an operation using a session' end context 'when the collection has a write concern' do let(:collection) do authorized_collection.with(write: INVALID_WRITE_CONCERN) end 
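      # Illustrative sketch (not part of the original specs): the index-view
      # calls exercised in this file, as an application would issue them.
      #
      #   indexes = collection.indexes
      #   indexes.create_one({ another: -1 }, unique: true)  # named "another_-1"
      #   indexes.drop_one('another_-1')
      #   indexes.drop_all   # drop_one('*') is rejected with MultiIndexDrop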
let(:view_with_write_concern) do described_class.new(collection) end let(:result) do view_with_write_concern.drop_all end context 'when the server accepts writeConcern for the dropIndexes operation' do min_server_fcv '3.4' it 'applies the write concern' do expect { result }.to raise_exception(Mongo::Error::OperationFailure) end end context 'when the server does not accept writeConcern for the dropIndexes operation' do max_server_version '3.2' it 'does not apply the write concern' do expect(result).to be_successful end end end context 'with a comment' do min_server_version '4.4' it 'drops indexes' do expect(view.drop_all(comment: "comment")).to be_successful command = subscriber.command_started_events("dropIndexes").last&.command expect(command).not_to be_nil expect(command["comment"]).to eq("comment") end end end end describe '#create_many' do context 'when the indexes are created' do context 'when passing multi-args' do context 'when the index creation is successful' do let!(:result) do view.create_many( { key: { random: 1 }, unique: true }, { key: { testing: -1 }, unique: true } ) end it 'returns ok' do expect(result).to be_successful end context 'when provided a session' do let(:view_with_session) do described_class.new(authorized_collection, session: session) end let(:operation) do view_with_session.create_many( { key: { random: 1 }, unique: true }, { key: { testing: -1 }, unique: true } ) end let(:client) do authorized_client end let(:failed_operation) do view_with_session.create_many( { key: { random: 1 }, invalid: true } ) end it_behaves_like 'an operation using a session' it_behaves_like 'a failed operation using a session' end end context 'when commit quorum options are specified' do require_topology :replica_set, :sharded context 'on server versions >= 4.4' do min_server_fcv '4.4' let(:subscriber) { Mrss::EventSubscriber.new } let(:client) do authorized_client.tap do |client| client.subscribe(Mongo::Monitoring::COMMAND, subscriber) end end let(:authorized_collection) { client['view-subscribed'] } context 'when commit_quorum value is supported' do let!(:result) do view.create_many( { key: { random: 1 }, unique: true }, { key: { testing: -1 }, unique: true }, { commit_quorum: 'majority' } ) end let(:events) do subscriber.command_started_events('createIndexes') end it 'returns ok' do expect(result).to be_successful end it 'passes the commit_quorum option to the server' do expect(events.length).to eq(1) command = events.first.command expect(command['commitQuorum']).to eq('majority') end end context 'when commit_quorum value is not supported' do it 'raises an exception' do expect do view.create_many( { key: { random: 1 }, unique: true }, { key: { testing: -1 }, unique: true }, { commit_quorum: 'unsupported-value' } ) # 4.4.4 changed the text of the error message end.to raise_error(Mongo::Error::OperationFailure, /Commit quorum cannot be satisfied with the current replica set configuration|No write concern mode named 'unsupported-value' found in replica set configuration/) end end end context 'on server versions < 4.4' do max_server_fcv '4.2' it 'raises an exception' do expect do view.create_many( { key: { random: 1 }, unique: true }, { key: { testing: -1 }, unique: true }, { commit_quorum: 'majority' } ) end.to raise_error(Mongo::Error::UnsupportedOption, /The MongoDB server handling this request does not support the commit_quorum option/) end end end context 'when hidden is specified' do let(:index) { view.get('with_hidden_1') } context 'on server versions <= 3.2' do # DRIVERS-1220 
Server versions 3.2 and older do not perform any option # checking on index creation. The server will allow the user to create # the index with the hidden option, but the server does not support this # option and will not use it. max_server_fcv '3.2' let!(:result) do view.create_many({ key: { with_hidden: 1 }, hidden: true }) end it 'returns ok' do expect(result).to be_successful end it 'creates an index' do expect(index).to_not be_nil end end context 'on server versions between 3.4 and 4.2' do max_server_fcv '4.2' min_server_fcv '3.4' it 'raises an exception' do expect do view.create_many({ key: { with_hidden: 1 }, hidden: true }) end.to raise_error(/The field 'hidden' is not valid for an index specification/) end end context 'on server versions >= 4.4' do min_server_fcv '4.4' context 'when hidden is true' do let!(:result) do view.create_many({ key: { with_hidden: 1 }, hidden: true }) end it 'returns ok' do expect(result).to be_successful end it 'creates an index' do expect(index).to_not be_nil end it 'applies the hidden option to the index' do expect(index['hidden']).to be true end end context 'when hidden is false' do let!(:result) do view.create_many({ key: { with_hidden: 1 }, hidden: false }) end it 'returns ok' do expect(result).to be_successful end it 'creates an index' do expect(index).to_not be_nil end it 'does not apply the hidden option to the index' do expect(index['hidden']).to be_nil end end end end context 'when collation is specified' do min_server_fcv '3.4' let(:result) do view.create_many( { key: { random: 1 }, unique: true, collation: { locale: 'en_US', strength: 2 } } ) end let(:index_info) do view.get('random_1') end context 'when the server supports collations' do min_server_fcv '3.4' it 'returns ok' do expect(result).to be_successful end it 'applies the collation to the new index' do result expect(index_info['collation']).not_to be_nil expect(index_info['collation']['locale']).to eq('en_US') expect(index_info['collation']['strength']).to eq(2) end end context 'when the server does not support collations' do max_server_version '3.2' it 'raises an exception' do expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation) end context 'when a String key is used' do let(:result) do view.create_many( { key: { random: 1 }, unique: true, 'collation' => { locale: 'en_US', strength: 2 } } ) end it 'raises an exception' do expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation) end end end end context 'when the collection has a write concern' do let(:collection) do authorized_collection.with(write: INVALID_WRITE_CONCERN) end let(:view_with_write_concern) do described_class.new(collection) end let(:result) do view_with_write_concern.create_many( { key: { random: 1 }, unique: true }, { key: { testing: -1 }, unique: true } ) end context 'when the server accepts writeConcern for the createIndexes operation' do min_server_fcv '3.4' it 'applies the write concern' do expect { result }.to raise_exception(Mongo::Error::OperationFailure) end end context 'when the server does not accept writeConcern for the createIndexes operation' do max_server_version '3.2' it 'does not apply the write concern' do expect(result).to be_successful end end end end context 'when passing an array' do context 'when the index creation is successful' do let!(:result) do view.create_many([ { key: { random: 1 }, unique: true }, { key: { testing: -1 }, unique: true } ]) end it 'returns ok' do expect(result).to be_successful end context 'when provided a session' do 
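      # Illustrative sketch (not part of the original suite): index helpers
      # accept an explicit session option, which is what the shared session
      # examples below exercise. Assumes a reachable deployment.
      #
      #   session = authorized_client.start_session
      #   authorized_client[TEST_COLL].indexes.create_many(
      #     [{ key: { random: 1 }, unique: true }],
      #     session: session
      #   )
      #   session.end_session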
let(:view_with_session) do described_class.new(authorized_collection, session: session) end let(:operation) do view_with_session.create_many([ { key: { random: 1 }, unique: true }, { key: { testing: -1 }, unique: true } ]) end let(:failed_operation) do view_with_session.create_many([ { key: { random: 1 }, invalid: true }]) end let(:client) do authorized_client end it_behaves_like 'an operation using a session' it_behaves_like 'a failed operation using a session' end end context 'when collation is specified' do let(:result) do view.create_many([ { key: { random: 1 }, unique: true, collation: { locale: 'en_US', strength: 2 }}, ]) end let(:index_info) do view.get('random_1') end context 'when the server supports collations' do min_server_fcv '3.4' it 'returns ok' do expect(result).to be_successful end it 'applies the collation to the new index' do result expect(index_info['collation']).not_to be_nil expect(index_info['collation']['locale']).to eq('en_US') expect(index_info['collation']['strength']).to eq(2) end end context 'when the server does not support collations' do max_server_version '3.2' it 'raises an exception' do expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation) end context 'when a String key is used' do let(:result) do view.create_many([ { key: { random: 1 }, unique: true, 'collation' => { locale: 'en_US', strength: 2 }}, ]) end it 'raises an exception' do expect { result }.to raise_exception(Mongo::Error::UnsupportedCollation) end end end end context 'when the collection has a write concern' do let(:collection) do authorized_collection.with(write: INVALID_WRITE_CONCERN) end let(:view_with_write_concern) do described_class.new(collection) end let(:result) do view_with_write_concern.create_many([ { key: { random: 1 }, unique: true }, { key: { testing: -1 }, unique: true } ]) end context 'when the server accepts writeConcern for the createIndexes operation' do min_server_fcv '3.4' it 'applies the write concern' do expect { result }.to raise_exception(Mongo::Error::OperationFailure) end end context 'when the server does not accept writeConcern for the createIndexes operation' do max_server_version '3.2' it 'does not apply the write concern' do expect(result).to be_successful end end end end context 'when index creation fails' do let(:spec) do { name: 1 } end before do view.create_one(spec, unique: true) end it 'raises an exception' do expect { view.create_many([{ key: { name: 1 }, unique: false }]) }.to raise_error(Mongo::Error::OperationFailure) end end end context 'when using bucket option' do # Option is removed in 4.9 max_server_version '4.7' let(:spec) do { 'any' => 1 } end let(:result) do view.create_many([key: spec, bucket_size: 1]) end it 'warns of deprecation' do RSpec::Mocks.with_temporary_scope do view.client.should receive(:log_warn).and_call_original result end end end context 'with a comment' do min_server_version '4.4' it 'creates indexes' do expect( view.create_many( [ { key: { random: 1 }, unique: true }, { key: { testing: -1 }, unique: true } ], comment: "comment" ) ).to be_successful command = subscriber.single_command_started_event("createIndexes")&.command expect(command).not_to be_nil expect(command["comment"]).to eq("comment") end end end describe '#create_one' do context 'when the index is created' do let(:spec) do { random: 1 } end let(:result) do view.create_one(spec, unique: true) end it 'returns ok' do expect(result).to be_successful end context 'when provided a session' do let(:view_with_session) do 
        described_class.new(authorized_collection, session: session)
      end

      let(:operation) do
        view_with_session.create_one(spec, unique: true)
      end

      let(:failed_operation) do
        view_with_session.create_one(spec, invalid: true)
      end

      let(:client) do
        authorized_client
      end

      it_behaves_like 'an operation using a session'
      it_behaves_like 'a failed operation using a session'
    end

    context 'when the collection has a write concern' do

      let(:collection) do
        authorized_collection.with(write: INVALID_WRITE_CONCERN)
      end

      let(:view_with_write_concern) do
        described_class.new(collection)
      end

      let(:result) do
        view_with_write_concern.create_one(spec, unique: true)
      end

      context 'when the server accepts writeConcern for the createIndexes operation' do
        min_server_fcv '3.4'

        it 'applies the write concern' do
          expect {
            result
          }.to raise_exception(Mongo::Error::OperationFailure)
        end
      end

      context 'when the server does not accept writeConcern for the createIndexes operation' do
        max_server_version '3.2'

        it 'does not apply the write concern' do
          expect(result).to be_successful
        end
      end
    end

    context 'when the index is created on a subdocument field' do

      let(:spec) do
        { 'sub_document.random' => 1 }
      end

      let(:result) do
        view.create_one(spec, unique: true)
      end

      it 'returns ok' do
        expect(result).to be_successful
      end
    end

    context 'when using bucket option' do
      # Option is removed in 4.9
      max_server_version '4.7'

      let(:spec) do
        { 'any' => 1 }
      end

      let(:result) do
        view.create_one(spec, bucket_size: 1)
      end

      it 'warns of deprecation' do
        RSpec::Mocks.with_temporary_scope do
          view.client.should receive(:log_warn).and_call_original
          result
        end
      end
    end
  end

  context 'when index creation fails' do

    let(:spec) do
      { name: 1 }
    end

    before do
      view.create_one(spec, unique: true)
    end

    it 'raises an exception' do
      expect {
        view.create_one(spec, unique: false)
      }.to raise_error(Mongo::Error::OperationFailure)
    end
  end

  context 'when providing an index name' do

    let(:spec) do
      { random: 1 }
    end

    let!(:result) do
      view.create_one(spec, unique: true, name: 'random_name')
    end

    it 'returns ok' do
      expect(result).to be_successful
    end

    it 'defines the index with the provided name' do
      expect(view.get('random_name')).to_not be_nil
    end
  end

  context 'when providing an invalid partial index filter' do
    min_server_fcv '3.2'

    it 'raises an exception' do
      expect {
        view.create_one({'x' => 1}, partial_filter_expression: 5)
      }.to raise_error(Mongo::Error::OperationFailure)
    end
  end

  context 'when providing a valid partial index filter' do
    min_server_fcv '3.2'

    let(:expression) do
      {'a' => {'$lte' => 1.5}}
    end

    let!(:result) do
      view.create_one({'x' => 1}, partial_filter_expression: expression)
    end

    let(:indexes) do
      authorized_collection.indexes.get('x_1')
    end

    it 'returns ok' do
      expect(result).to be_successful
    end

    it 'creates an index' do
      expect(indexes).to_not be_nil
    end

    it 'passes partialFilterExpression correctly' do
      expect(indexes[:partialFilterExpression]).to eq(expression)
    end
  end

  context 'when providing an invalid wildcard projection expression' do
    min_server_fcv '4.2'

    it 'raises an exception' do
      expect {
        view.create_one({ '$**' => 1 }, wildcard_projection: 5)
      }.to raise_error(Mongo::Error::OperationFailure,
        /Error in specification.*wildcardProjection|wildcardProjection.*must be a non-empty object/)
    end
  end

  context 'when providing a wildcard projection to an invalid base index' do
    min_server_fcv '4.2'

    it 'raises an exception' do
      expect {
        view.create_one({ 'x' => 1 }, wildcard_projection: { rating: 1 })
      }.to raise_error(Mongo::Error::OperationFailure,
        /Error in specification.*wildcardProjection|wildcardProjection.*is
only allowed/) end end context 'when providing a valid wildcard projection' do min_server_fcv '4.2' let!(:result) do view.create_one({ '$**' => 1 }, wildcard_projection: { 'rating' => 1 }) end let(:indexes) do authorized_collection.indexes.get('$**_1') end it 'returns ok' do expect(result).to be_successful end it 'creates an index' do expect(indexes).to_not be_nil end context 'on server versions <= 4.4' do max_server_fcv '4.4' it 'passes wildcardProjection correctly' do expect(indexes[:wildcardProjection]).to eq({ 'rating' => 1 }) end end context 'on server versions > 5.3' do min_server_fcv '5.4' it 'passes wildcardProjection correctly' do expect(indexes[:wildcardProjection]).to eq({ 'rating' => 1 }) end end end context 'when providing hidden option' do let(:index) { view.get('with_hidden_1') } context 'on server versions <= 3.2' do # DRIVERS-1220 Server versions 3.2 and older do not perform any option # checking on index creation. The server will allow the user to create # the index with the hidden option, but the server does not support this # option and will not use it. max_server_fcv '3.2' let!(:result) do view.create_one({ 'with_hidden' => 1 }, { hidden: true }) end it 'returns ok' do expect(result).to be_successful end it 'creates an index' do expect(index).to_not be_nil end end context 'on server versions between 3.4 and 4.2' do max_server_fcv '4.2' min_server_fcv '3.4' it 'raises an exception' do expect do view.create_one({ 'with_hidden' => 1 }, { hidden: true }) end.to raise_error(/The field 'hidden' is not valid for an index specification/) end end context 'on server versions >= 4.4' do min_server_fcv '4.4' context 'when hidden is true' do let!(:result) { view.create_one({ 'with_hidden' => 1 }, { hidden: true }) } it 'returns ok' do expect(result).to be_successful end it 'creates an index' do expect(index).to_not be_nil end it 'applies the hidden option to the index' do expect(index['hidden']).to be true end end context 'when hidden is false' do let!(:result) { view.create_one({ 'with_hidden' => 1 }, { hidden: false }) } it 'returns ok' do expect(result).to be_successful end it 'creates an index' do expect(index).to_not be_nil end it 'does not apply the hidden option to the index' do expect(index['hidden']).to be_nil end end end end context 'when providing commit_quorum option' do require_topology :replica_set, :sharded context 'on server versions >= 4.4' do min_server_fcv '4.4' let(:subscriber) { Mrss::EventSubscriber.new } let(:client) do authorized_client.tap do |client| client.subscribe(Mongo::Monitoring::COMMAND, subscriber) end end let(:authorized_collection) { client['view-subscribed'] } let(:indexes) do authorized_collection.indexes.get('x_1') end context 'when commit_quorum value is supported' do let!(:result) { view.create_one({ 'x' => 1 }, commit_quorum: 'majority') } it 'returns ok' do expect(result).to be_successful end it 'creates an index' do expect(indexes).to_not be_nil end let(:events) do subscriber.command_started_events('createIndexes') end it 'passes the commit_quorum option to the server' do expect(events.length).to eq(1) command = events.first.command expect(command['commitQuorum']).to eq('majority') end end context 'when commit_quorum value is not supported' do it 'raises an exception' do expect do view.create_one({ 'x' => 1 }, commit_quorum: 'unsupported-value') # 4.4.4 changed the text of the error message end.to raise_error(Mongo::Error::OperationFailure, /Commit quorum cannot be satisfied with the current replica set configuration|No write concern mode 
named 'unsupported-value' found in replica set configuration/) end end end context 'on server versions < 4.4' do max_server_fcv '4.2' it 'raises an exception' do expect do view.create_one({ 'x' => 1 }, commit_quorum: 'majority') end.to raise_error(Mongo::Error::UnsupportedOption, /The MongoDB server handling this request does not support the commit_quorum option/) end end end context 'with a comment' do min_server_version '4.4' it 'creates index' do expect( view.create_one( { 'x' => 1 }, comment: "comment" ) ).to be_successful command = subscriber.single_command_started_event("createIndexes")&.command expect(command).not_to be_nil expect(command["comment"]).to eq("comment") end end end describe '#get' do let(:spec) do { random: 1 } end let!(:result) do view.create_one(spec, unique: true, name: 'random_name') end context 'when providing a name' do let(:index) do view.get('random_name') end it 'returns the index' do expect(index['name']).to eq('random_name') end end context 'when providing a spec' do let(:index) do view.get(random: 1) end it 'returns the index' do expect(index['name']).to eq('random_name') end end context 'when provided a session' do let(:view_with_session) do described_class.new(authorized_collection, session: session) end let(:operation) do view_with_session.get(random: 1) end let(:client) do authorized_client end it_behaves_like 'an operation using a session' end context 'when the index does not exist' do it 'returns nil' do expect(view.get(other: 1)).to be_nil end end end describe '#each' do context 'when the collection exists' do let(:spec) do { name: 1 } end before do view.create_one(spec, unique: true) end let(:indexes) do view.each end it 'returns all the indexes for the database' do expect(indexes.to_a.count).to eq(2) end end context 'when the collection does not exist' do min_server_fcv '3.0' let(:nonexistent_collection) do authorized_client[:not_a_collection] end let(:nonexistent_view) do described_class.new(nonexistent_collection) end it 'raises a nonexistent collection error' do expect { nonexistent_view.each.to_a }.to raise_error(Mongo::Error::OperationFailure) end end end describe '#normalize_models' do context 'when providing options' do let(:options) do { :key => { :name => 1 }, :bucket_size => 5, :default_language => 'deutsch', :expire_after => 10, :language_override => 'language', :sphere_version => 1, :storage_engine => 'wiredtiger', :text_version => 2, :version => 1 } end let(:models) do view.send(:normalize_models, [ options ], authorized_primary) end let(:expected) do { :key => { :name => 1 }, :name => 'name_1', :bucketSize => 5, :default_language => 'deutsch', :expireAfterSeconds => 10, :language_override => 'language', :'2dsphereIndexVersion' => 1, :storageEngine => 'wiredtiger', :textIndexVersion => 2, :v => 1 } end it 'maps the ruby options to the server options' do expect(models).to eq([ expected ]) end context 'when using alternate names' do let(:extended_options) do options.merge!(expire_after_seconds: 5) end let(:extended_expected) do expected.tap { |exp| exp[:expireAfterSeconds] = 5 } end let(:models) do view.send(:normalize_models, [ extended_options ], authorized_primary) end it 'maps the ruby options to the server options' do expect(models).to eq([ extended_expected ]) end end context 'when the server supports collations' do min_server_fcv '3.4' let(:extended_options) do options.merge(:collation => { locale: 'en_US' } ) end let(:models) do view.send(:normalize_models, [ extended_options ], authorized_primary) end let(:extended_expected) do 
expected.tap { |exp| exp[:collation] = { locale: 'en_US' } } end it 'maps the ruby options to the server options' do expect(models).to eq([ extended_expected ]) end end end end end mongo-ruby-driver-2.21.3/spec/mongo/lint_spec.rb000066400000000000000000000157361505113246500215620ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe Mongo::Lint do before(:all) do # Since we are installing an expectation on ENV, close any open clients # which may have background threads reading ENV ClientRegistry.instance.close_all_clients end before do expect(ENV).to receive(:[]).with('MONGO_RUBY_DRIVER_LINT').at_least(:once).and_return('1') end describe '.validate_underscore_read_preference' do %w(primary primary_preferred secondary secondary_preferred nearest).each do |mode| it "accepts #{mode} as string" do expect do described_class.validate_underscore_read_preference(mode: mode) end.to_not raise_error end it "accepts #{mode} with string mode key" do expect do described_class.validate_underscore_read_preference('mode' => mode) end.to_not raise_error end it "accepts #{mode} as symbol" do expect do described_class.validate_underscore_read_preference(mode: mode.to_sym) end.to_not raise_error end end %w(primaryPreferred secondaryPreferred).each do |mode| it "rejects #{mode} as string" do expect do described_class.validate_underscore_read_preference(mode: mode) end.to raise_error(Mongo::Error::LintError) end it "rejects #{mode} with string mode key" do expect do described_class.validate_underscore_read_preference('mode' => mode) end.to raise_error(Mongo::Error::LintError) end it "rejects #{mode} as symbol" do expect do described_class.validate_underscore_read_preference(mode: mode.to_sym) end.to raise_error(Mongo::Error::LintError) end end end describe '.validate_underscore_read_preference_mode' do %w(primary primary_preferred secondary secondary_preferred nearest).each do |mode| it "accepts #{mode} as string" do expect do described_class.validate_underscore_read_preference_mode(mode) end.to_not raise_error end it "accepts #{mode} as symbol" do expect do described_class.validate_underscore_read_preference_mode(mode.to_sym) end.to_not raise_error end end %w(primaryPreferred secondaryPreferred).each do |mode| it "rejects #{mode} as string" do expect do described_class.validate_underscore_read_preference_mode(mode) end.to raise_error(Mongo::Error::LintError) end it "rejects #{mode} as symbol" do expect do described_class.validate_underscore_read_preference_mode(mode.to_sym) end.to raise_error(Mongo::Error::LintError) end end end describe '.validate_camel_case_read_preference' do %w(primary primaryPreferred secondary secondaryPreferred nearest).each do |mode| it "accepts #{mode} as string" do expect do described_class.validate_camel_case_read_preference(mode: mode) end.to_not raise_error end it "accepts #{mode} with string mode key" do expect do described_class.validate_camel_case_read_preference('mode' => mode) end.to_not raise_error end it "accepts #{mode} as symbol" do expect do described_class.validate_camel_case_read_preference(mode: mode.to_sym) end.to_not raise_error end end %w(primary_preferred secondary_preferred).each do |mode| it "rejects #{mode} as string" do expect do described_class.validate_camel_case_read_preference(mode: mode) end.to raise_error(Mongo::Error::LintError) end it "rejects #{mode} with string mode key" do expect do described_class.validate_camel_case_read_preference('mode' => mode) end.to raise_error(Mongo::Error::LintError) end 
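      # Illustrative note (not part of the original suite): lint-mode
      # validation is opt-in via the environment variable stubbed in the
      # before hook above.
      #
      #   ENV['MONGO_RUBY_DRIVER_LINT'] = '1'
      #   Mongo::Lint.enabled? # => true; underscore modes are then rejected
      #                        # by the camelCase validators, and vice versa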
it "rejects #{mode} as symbol" do expect do described_class.validate_camel_case_read_preference(mode: mode.to_sym) end.to raise_error(Mongo::Error::LintError) end end end describe '.validate_camel_case_read_preference_mode' do %w(primary primaryPreferred secondary secondaryPreferred nearest).each do |mode| it "accepts #{mode} as string" do expect do described_class.validate_camel_case_read_preference_mode(mode) end.to_not raise_error end it "accepts #{mode} as symbol" do expect do described_class.validate_camel_case_read_preference_mode(mode.to_sym) end.to_not raise_error end end %w(primary_preferred secondary_preferred).each do |mode| it "rejects #{mode} as string" do expect do described_class.validate_camel_case_read_preference_mode(mode) end.to raise_error(Mongo::Error::LintError) end it "rejects #{mode} as symbol" do expect do described_class.validate_camel_case_read_preference_mode(mode.to_sym) end.to raise_error(Mongo::Error::LintError) end end end describe '.validate_read_concern_option' do it 'accepts nil' do expect do described_class.validate_read_concern_option(nil) end.to_not raise_error end it 'accepts empty hash' do expect do described_class.validate_read_concern_option({}) end.to_not raise_error end it "rejects an object which is not a hash" do expect do described_class.validate_read_concern_option(:local) end.to raise_error(Mongo::Error::LintError) end [:local, :majority, :snapshot].each do |level| it "accepts :#{level}" do expect do described_class.validate_read_concern_option({level: level}) end.to_not raise_error end it "rejects #{level} as string" do expect do described_class.validate_read_concern_option({level: level.to_s}) end.to raise_error(Mongo::Error::LintError) end end it "rejects a bogus level" do expect do described_class.validate_read_concern_option({level: :bogus}) end.to raise_error(Mongo::Error::LintError) end it "rejects level given as a string key" do expect do described_class.validate_read_concern_option({'level' => :snapshot}) end.to raise_error(Mongo::Error::LintError) end it "rejects a bogus key as symbol" do expect do described_class.validate_read_concern_option({foo: 'bar'}) end.to raise_error(Mongo::Error::LintError) end it "rejects a bogus key as string" do expect do described_class.validate_read_concern_option({'foo' => 'bar'}) end.to raise_error(Mongo::Error::LintError) end %w(afterClusterTime after_cluster_time).each do |key| [:to_s, :to_sym].each do |conv| key = key.send(conv) it "rejects #{key.inspect}" do expect do described_class.validate_read_concern_option({key => 123}) end.to raise_error(Mongo::Error::LintError) end end end end end mongo-ruby-driver-2.21.3/spec/mongo/logger_spec.rb000066400000000000000000000021061505113246500220560ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'lite_spec_helper' describe Mongo::Logger do let(:logger) do described_class.logger end around do |example| saved_logger = Mongo::Logger.logger begin example.run ensure Mongo::Logger.logger = saved_logger end end describe '.logger' do context 'when no logger has been set' do let(:test_logger) do Mongo::Logger.logger end before do Mongo::Logger.logger = nil end it 'returns the default logger' do expect(logger.level).to eq(Logger::INFO) end end context 'when a logger has been set' do let(:info) do Logger.new(STDOUT).tap do |log| log.level = Logger::INFO end end let(:debug) do Logger.new(STDOUT).tap do |log| log.level = Logger::DEBUG end end before do described_class.logger = debug end it 'returns the provided logger' do 
expect(logger.level).to eq(Logger::DEBUG) end end end end mongo-ruby-driver-2.21.3/spec/mongo/monitoring/000077500000000000000000000000001505113246500214265ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/mongo/monitoring/command_log_subscriber_spec.rb000066400000000000000000000032021505113246500274640ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe Mongo::Monitoring::CommandLogSubscriber do describe '#started' do let(:filter) do (1...100).reduce({}) do |hash, i| hash[i] = i hash end end let(:command) do { find: 'users', filter: filter } end let(:event) do Mongo::Monitoring::Event::CommandStarted.new( 'find', 'users', Mongo::Address.new('127.0.0.1:27017'), 12345, 67890, command ) end before do Mongo::Logger.level = Logger::DEBUG end after do Mongo::Logger.level = Logger::INFO end context 'when truncating the logs' do context 'when no option is provided' do let(:subscriber) do described_class.new end it 'truncates the logs at 250 characters' do expect(subscriber).to receive(:truncate).with(command).and_call_original subscriber.started(event) end end context 'when true option is provided' do let(:subscriber) do described_class.new(truncate_logs: true) end it 'truncates the logs at 250 characters' do expect(subscriber).to receive(:truncate).with(command).and_call_original subscriber.started(event) end end end context 'when not truncating the logs' do let(:subscriber) do described_class.new(truncate_logs: false) end it 'does not truncate the logs' do expect(subscriber).to_not receive(:truncate) subscriber.started(event) end end end end mongo-ruby-driver-2.21.3/spec/mongo/monitoring/event/000077500000000000000000000000001505113246500225475ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/mongo/monitoring/event/cmap/000077500000000000000000000000001505113246500234675ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/mongo/monitoring/event/cmap/connection_check_out_failed_spec.rb000066400000000000000000000010231505113246500325110ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'lite_spec_helper' describe Mongo::Monitoring::Event::Cmap::ConnectionCheckOutFailed do describe '#summary' do let(:address) do Mongo::Address.new('127.0.0.1:27017') end let(:reason) do described_class::TIMEOUT end let(:event) do described_class.new(address, reason) end it 'renders correctly' do expect(event.summary).to eq('#') end end end mongo-ruby-driver-2.21.3/spec/mongo/monitoring/event/cmap/connection_check_out_started_spec.rb000066400000000000000000000007021505113246500327360ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'lite_spec_helper' describe Mongo::Monitoring::Event::Cmap::ConnectionCheckOutStarted do describe '#summary' do let(:address) do Mongo::Address.new('127.0.0.1:27017') end let(:event) do described_class.new(address) end it 'renders correctly' do expect(event.summary).to eq('#') end end end mongo-ruby-driver-2.21.3/spec/mongo/monitoring/event/cmap/connection_checked_in_spec.rb000066400000000000000000000012221505113246500313160ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe Mongo::Monitoring::Event::Cmap::ConnectionCheckedIn do describe '#summary' do let(:address) do Mongo::Address.new('127.0.0.1:27017') end let(:id) do 1 end declare_topology_double let(:pool) do server = make_server(:primary) Mongo::Server::ConnectionPool.new(server) end let(:event) do described_class.new(address, 
id, pool) end it 'renders correctly' do expect(event.summary).to eq("#") end end end mongo-ruby-driver-2.21.3/spec/mongo/monitoring/event/cmap/connection_checked_out_spec.rb000066400000000000000000000012241505113246500315210ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe Mongo::Monitoring::Event::Cmap::ConnectionCheckedOut do describe '#summary' do let(:address) do Mongo::Address.new('127.0.0.1:27017') end let(:id) do 1 end declare_topology_double let(:pool) do server = make_server(:primary) Mongo::Server::ConnectionPool.new(server) end let(:event) do described_class.new(address, id, pool) end it 'renders correctly' do expect(event.summary).to eq("#") end end end mongo-ruby-driver-2.21.3/spec/mongo/monitoring/event/cmap/connection_closed_spec.rb000066400000000000000000000010641505113246500305170ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'lite_spec_helper' describe Mongo::Monitoring::Event::Cmap::ConnectionClosed do describe '#summary' do let(:address) do Mongo::Address.new('127.0.0.1:27017') end let(:reason) do described_class::STALE end let(:id) do 1 end let(:event) do described_class.new(address, id, reason) end it 'renders correctly' do expect(event.summary).to eq('#') end end end mongo-ruby-driver-2.21.3/spec/mongo/monitoring/event/cmap/connection_created_spec.rb000066400000000000000000000007501505113246500306560ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'lite_spec_helper' describe Mongo::Monitoring::Event::Cmap::ConnectionCreated do describe '#summary' do let(:address) do Mongo::Address.new('127.0.0.1:27017') end let(:id) do 1 end let(:event) do described_class.new(address, id) end it 'renders correctly' do expect(event.summary).to eq('#') end end end mongo-ruby-driver-2.21.3/spec/mongo/monitoring/event/cmap/connection_ready_spec.rb000066400000000000000000000007441505113246500303560ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'lite_spec_helper' describe Mongo::Monitoring::Event::Cmap::ConnectionReady do describe '#summary' do let(:address) do Mongo::Address.new('127.0.0.1:27017') end let(:id) do 1 end let(:event) do described_class.new(address, id) end it 'renders correctly' do expect(event.summary).to eq('#') end end end mongo-ruby-driver-2.21.3/spec/mongo/monitoring/event/cmap/pool_cleared_spec.rb000066400000000000000000000006461505113246500274640ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'lite_spec_helper' describe Mongo::Monitoring::Event::Cmap::PoolCleared do describe '#summary' do let(:address) do Mongo::Address.new('127.0.0.1:27017') end let(:event) do described_class.new(address) end it 'renders correctly' do expect(event.summary).to eq('#') end end end mongo-ruby-driver-2.21.3/spec/mongo/monitoring/event/cmap/pool_closed_spec.rb000066400000000000000000000011131505113246500273240ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe Mongo::Monitoring::Event::Cmap::PoolClosed do describe '#summary' do let(:address) do Mongo::Address.new('127.0.0.1:27017') end declare_topology_double let(:pool) do server = make_server(:primary) Mongo::Server::ConnectionPool.new(server) end let(:event) do described_class.new(address, pool) end it 'renders correctly' do expect(event.summary).to eq("#") end end end 
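# Illustrative usage sketch for the CMAP events covered by the specs above
# (not part of the original suite; assumes a deployment at 127.0.0.1:27017).
# CMAP subscribers implement a single #published method:
#
#   class PoolEventPrinter
#     def published(event)
#       puts event.summary
#     end
#   end
#
#   client = Mongo::Client.new(['127.0.0.1:27017'])
#   client.subscribe(Mongo::Monitoring::CONNECTION_POOL, PoolEventPrinter.new)
#   client[:coll].find.first # triggers connection check-out/check-in events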
mongo-ruby-driver-2.21.3/spec/mongo/monitoring/event/cmap/pool_created_spec.rb000066400000000000000000000013631505113246500274710ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe Mongo::Monitoring::Event::Cmap::PoolCreated do describe '#summary' do let(:address) do Mongo::Address.new('127.0.0.1:27017') end let(:options) do { wait_queue_timeout: 3, min_pool_size: 5, } end declare_topology_double let(:pool) do server = make_server(:primary) Mongo::Server::ConnectionPool.new(server) end let(:event) do described_class.new(address, options, pool) end it 'renders correctly' do expect(event.summary).to eq("#3, :min_pool_size=>5} pool=0x#{pool.object_id}>") end end end mongo-ruby-driver-2.21.3/spec/mongo/monitoring/event/command_failed_spec.rb000066400000000000000000000041201505113246500270250ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'lite_spec_helper' describe Mongo::Monitoring::Event::CommandFailed do let(:address) do Mongo::Address.new('127.0.0.1:27017') end let(:failure) do BSON::Document.new(test: 'value') end describe '#initialize' do context 'when the failure should be redacted' do context 'sensitive command' do let(:started_event) do double.tap do |evt| expect(evt).to receive(:sensitive).and_return(false) end end let(:event) do described_class.new( 'copydb', 'admin', address, 1, 2, "msg", failure, 0.5, started_event: started_event ) end it 'sets the failure to an empty document' do expect(event.failure).to be_empty end end context 'sensitive started event' do let(:started_event) do double.tap do |evt| expect(evt).to receive(:sensitive).and_return(true) end end let(:event) do described_class.new( 'find', 'admin', address, 1, 2, "msg", failure, 0.5, started_event: started_event ) end it 'sets the failure to an empty document' do expect(event.failure).to be_empty end end end end describe '#command_name' do let(:started_event) do double.tap do |evt| expect(evt).to receive(:sensitive).and_return(false) end end context 'when command_name is given as a string' do let(:event) do described_class.new( 'find', 'admin', address, 1, 2, 'Uh oh', nil, 0.5, started_event: started_event ) end it 'is a string' do expect(event.command_name).to eql('find') end end context 'when command_name is given as a symbol' do let(:event) do described_class.new( :find, 'admin', address, 1, 2, 'Uh oh', nil, 0.5, started_event: started_event ) end it 'is a string' do expect(event.command_name).to eql('find') end end end end mongo-ruby-driver-2.21.3/spec/mongo/monitoring/event/command_started_spec.rb000066400000000000000000000021231505113246500272500ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'lite_spec_helper' describe Mongo::Monitoring::Event::CommandStarted do let(:address) do Mongo::Address.new('127.0.0.1:27017') end describe '#initialize' do let(:command) do BSON::Document.new(test: 'value') end context 'when the command should be redacted' do let(:event) do described_class.new('copydb', 'admin', address, 1, 2, command) end it 'sets the command to an empty document' do expect(event.command).to be_empty end end end describe '#command_name' do context 'when command_name is given as a string' do let(:event) do described_class.new('find', 'admin', address, 1, 2, {}) end it 'is a string' do expect(event.command_name).to eql('find') end end context 'when command_name is given as a symbol' do let(:event) do described_class.new(:find, 'admin', address, 1, 2, {}) end it 'is a string' 
do expect(event.command_name).to eql('find') end end end end mongo-ruby-driver-2.21.3/spec/mongo/monitoring/event/command_succeeded_spec.rb000066400000000000000000000040501505113246500275270ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'lite_spec_helper' describe Mongo::Monitoring::Event::CommandSucceeded do let(:address) do Mongo::Address.new('127.0.0.1:27017') end let(:reply) do BSON::Document.new(test: 'value') end describe '#initialize' do context 'when the reply should be redacted' do context 'sensitive command' do let(:started_event) do double.tap do |evt| expect(evt).to receive(:sensitive).and_return(false) end end let(:event) do described_class.new( 'copydb', 'admin', address, 1, 2, reply, 0.5, started_event: started_event ) end it 'sets the reply to an empty document' do expect(event.reply).to be_empty end end context 'sensitive started event' do let(:started_event) do double.tap do |evt| expect(evt).to receive(:sensitive).and_return(true) end end let(:event) do described_class.new( 'find', 'admin', address, 1, 2, reply, 0.5, started_event: started_event ) end it 'sets the reply to an empty document' do expect(event.reply).to be_empty end end end end describe '#command_name' do let(:started_event) do double.tap do |evt| expect(evt).to receive(:sensitive).and_return(false) end end context 'when command_name is given as a string' do let(:event) do described_class.new( 'find', 'admin', address, 1, 2, reply, 0.5, started_event: started_event ) end it 'is a string' do expect(event.command_name).to eql('find') end end context 'when command_name is given as a symbol' do let(:event) do described_class.new( :find, 'admin', address, 1, 2, reply, 0.5, started_event: started_event ) end it 'is a string' do expect(event.command_name).to eql('find') end end end end mongo-ruby-driver-2.21.3/spec/mongo/monitoring/event/secure_spec.rb000066400000000000000000000052541505113246500254020ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'lite_spec_helper' describe Mongo::Monitoring::Event::Secure do let(:document) do BSON::Document.new(test: 'value') end let(:klass) do Class.new do include Mongo::Monitoring::Event::Secure end end describe '#redacted' do let(:secure) do klass.new end context 'when the command must be redacted' do context 'when the command name is a string' do let(:redacted) do secure.redacted('saslStart', document) end it 'returns an empty document' do expect(redacted).to be_empty end end context 'when the command name is a symbol' do let(:redacted) do secure.redacted(:saslStart, document) end it 'returns an empty document' do expect(redacted).to be_empty end end end context 'when the command is not in the redacted list' do context 'the command is not a hello/legacy hello command' do let(:redacted) do secure.redacted(:find, document) end it 'returns the document' do expect(redacted).to eq(document) end end %w(hello ismaster isMaster).each do |command| context command do it 'returns an empty document if speculative auth' do expect( secure.redacted(command, BSON::Document.new('speculativeAuthenticate' => "foo")) ).to be_empty end it 'returns an original document if no speculative auth' do expect( secure.redacted(command, document) ).to eq(document) end end end end end describe '#compression_allowed?' 
do

    context 'when the selector represents a command for which compression is not allowed' do

      let(:secure) do
        klass.new
      end

      Mongo::Monitoring::Event::Secure::REDACTED_COMMANDS.each do |command|

        let(:selector) do
          { command => 1 }
        end

        context "when the command is #{command}" do

          it 'does not allow compression for the command' do
            expect(secure.compression_allowed?(selector.keys.first)).to be(false)
          end
        end
      end
    end

    context 'when the selector represents a command for which compression is allowed' do

      let(:selector) do
        { ping: 1 }
      end

      let(:secure) do
        klass.new
      end

      context 'when the command is :ping' do

        it 'allows compression for the command' do
          expect(secure.compression_allowed?(selector.keys.first)).to be(true)
        end
      end
    end
  end
end
mongo-ruby-driver-2.21.3/spec/mongo/monitoring/event/server_closed_spec.rb000066400000000000000000000017271505113246500267540ustar00rootroot00000000000000# frozen_string_literal: true
# rubocop:todo all

require 'lite_spec_helper'

describe Mongo::Monitoring::Event::ServerClosed do

  let(:address) do
    Mongo::Address.new('127.0.0.1:27017')
  end

  let(:monitoring) { double('monitoring') }

  let(:cluster) do
    double('cluster').tap do |cluster|
      allow(cluster).to receive(:addresses).and_return([address])
      allow(cluster).to receive(:servers_list).and_return([])
    end
  end

  let(:topology) do
    Mongo::Cluster::Topology::Unknown.new({}, monitoring, cluster)
  end

  let(:event) do
    described_class.new(address, topology)
  end

  describe '#summary' do
    require_no_linting

    it 'renders correctly' do
      expect(topology).to receive(:server_descriptions).and_return(
        { '127.0.0.1:27017' => Mongo::Server::Description.new(Mongo::Address.new('127.0.0.1:27017')) })
      expect(event.summary).to eq('#')
    end
  end
end
mongo-ruby-driver-2.21.3/spec/mongo/monitoring/event/server_description_changed_spec.rb000066400000000000000000000017061505113246500314740ustar00rootroot00000000000000# frozen_string_literal: true
# rubocop:todo all

require 'lite_spec_helper'

describe Mongo::Monitoring::Event::ServerDescriptionChanged do

  let(:address) do
    Mongo::Address.new('127.0.0.1:27017')
  end

  let(:monitoring) { double('monitoring') }

  let(:cluster) do
    double('cluster').tap do |cluster|
      allow(cluster).to receive(:addresses).and_return([address])
      allow(cluster).to receive(:servers_list).and_return([])
    end
  end

  let(:topology) do
    Mongo::Cluster::Topology::Unknown.new({}, monitoring, cluster)
  end

  let(:previous_desc) { Mongo::Server::Description.new(address) }
  let(:updated_desc) { Mongo::Server::Description.new(address) }

  let(:event) do
    described_class.new(address, topology, previous_desc, updated_desc)
  end

  describe '#summary' do
    it 'renders correctly' do
      expect(event.summary).to eq("#")
    end
  end
end
mongo-ruby-driver-2.21.3/spec/mongo/monitoring/event/server_heartbeat_failed_spec.rb000066400000000000000000000015401505113246500307370ustar00rootroot00000000000000# frozen_string_literal: true
# rubocop:todo all

require 'lite_spec_helper'

describe Mongo::Monitoring::Event::ServerHeartbeatFailed do

  let(:address) do
    Mongo::Address.new('127.0.0.1:27017')
  end

  let(:monitoring) { double('monitoring') }

  let(:cluster) do
    double('cluster').tap do |cluster|
      allow(cluster).to receive(:addresses).and_return([address])
      allow(cluster).to receive(:servers_list).and_return([])
    end
  end

  let(:topology) do
    Mongo::Cluster::Topology::Unknown.new({}, monitoring, cluster)
  end

  let(:event) do
    described_class.new(address, 1, Mongo::Error::SocketError.new('foo'), started_event: nil)
  end

  describe '#summary' do
    it 'renders correctly' do
      expect(event.summary).to eq('#>')
    end
  end
end
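# Illustrative usage sketch for the SDAM heartbeat events covered by these
# specs (not part of the original suite; assumes a reachable deployment).
# Heartbeat subscribers implement #started, #succeeded and #failed, and
# global subscribers must be registered before the client is created.

require 'mongo'

class HeartbeatPrinter
  def started(event)
    puts "heartbeat started: #{event.summary}"
  end

  def succeeded(event)
    puts "heartbeat succeeded: #{event.summary}"
  end

  def failed(event)
    puts "heartbeat failed: #{event.summary}"
  end
end

Mongo::Monitoring::Global.subscribe(
  Mongo::Monitoring::SERVER_HEARTBEAT, HeartbeatPrinter.new)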
mongo-ruby-driver-2.21.3/spec/mongo/monitoring/event/server_heartbeat_started_spec.rb000066400000000000000000000013751505113246500311670ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'lite_spec_helper' describe Mongo::Monitoring::Event::ServerHeartbeatStarted do let(:address) do Mongo::Address.new('127.0.0.1:27017') end let(:monitoring) { double('monitoring') } let(:cluster) do double('cluster').tap do |cluster| allow(cluster).to receive(:addresses).and_return([address]) allow(cluster).to receive(:servers_list).and_return([]) end end let(:topology) do Mongo::Cluster::Topology::Unknown.new({}, monitoring, cluster) end let(:event) do described_class.new(address) end describe '#summary' do it 'renders correctly' do expect(event.summary).to eq('#') end end end mongo-ruby-driver-2.21.3/spec/mongo/monitoring/event/server_heartbeat_succeeded_spec.rb000066400000000000000000000014301505113246500314350ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'lite_spec_helper' describe Mongo::Monitoring::Event::ServerHeartbeatSucceeded do let(:address) do Mongo::Address.new('127.0.0.1:27017') end let(:monitoring) { double('monitoring') } let(:cluster) do double('cluster').tap do |cluster| allow(cluster).to receive(:addresses).and_return([address]) allow(cluster).to receive(:servers_list).and_return([]) end end let(:topology) do Mongo::Cluster::Topology::Unknown.new({}, monitoring, cluster) end let(:event) do described_class.new(address, 1, started_event: nil) end describe '#summary' do it 'renders correctly' do expect(event.summary).to eq('#') end end end mongo-ruby-driver-2.21.3/spec/mongo/monitoring/event/server_opening_spec.rb000066400000000000000000000017311505113246500271350ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'lite_spec_helper' describe Mongo::Monitoring::Event::ServerOpening do let(:address) do Mongo::Address.new('127.0.0.1:27017') end let(:monitoring) { double('monitoring') } let(:cluster) do double('cluster').tap do |cluster| allow(cluster).to receive(:addresses).and_return([address]) allow(cluster).to receive(:servers_list).and_return([]) end end let(:topology) do Mongo::Cluster::Topology::Unknown.new({}, monitoring, cluster) end let(:event) do described_class.new(address, topology) end describe '#summary' do require_no_linting it 'renders correctly' do expect(topology).to receive(:server_descriptions).and_return({ '127.0.0.1:27017' => Mongo::Server::Description.new(Mongo::Address.new('127.0.0.1:27017'))}) expect(event.summary).to eq('#') end end end mongo-ruby-driver-2.21.3/spec/mongo/monitoring/event/topology_changed_spec.rb000066400000000000000000000024021505113246500274310ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'lite_spec_helper' describe Mongo::Monitoring::Event::TopologyChanged do let(:address) do Mongo::Address.new('127.0.0.1:27017') end let(:monitoring) { double('monitoring') } let(:cluster) do double('cluster').tap do |cluster| allow(cluster).to receive(:addresses).and_return([address]) allow(cluster).to receive(:servers_list).and_return([]) end end let(:prev_topology) do Mongo::Cluster::Topology::Unknown.new({}, monitoring, cluster) end let(:new_topology) do Mongo::Cluster::Topology::Unknown.new({}, monitoring, cluster) end let(:event) do described_class.new(prev_topology, new_topology) end describe '#summary' do require_no_linting it 'renders correctly' do expect(prev_topology).to 
receive(:server_descriptions).and_return({ '127.0.0.1:27017' => Mongo::Server::Description.new(Mongo::Address.new('127.0.0.1:27017'))}) expect(new_topology).to receive(:server_descriptions).and_return({ '127.0.0.1:99999' => Mongo::Server::Description.new(Mongo::Address.new('127.0.0.1:99999'))}) expect(event.summary).to eq('#') end end end mongo-ruby-driver-2.21.3/spec/mongo/monitoring/event/topology_closed_spec.rb000066400000000000000000000016721505113246500273210ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'lite_spec_helper' describe Mongo::Monitoring::Event::TopologyClosed do let(:address) do Mongo::Address.new('127.0.0.1:27017') end let(:monitoring) { double('monitoring') } let(:cluster) do double('cluster').tap do |cluster| allow(cluster).to receive(:addresses).and_return([address]) allow(cluster).to receive(:servers_list).and_return([]) end end let(:topology) do Mongo::Cluster::Topology::Unknown.new({}, monitoring, cluster) end let(:event) do described_class.new(topology) end describe '#summary' do require_no_linting it 'renders correctly' do expect(topology).to receive(:server_descriptions).and_return({ '127.0.0.1:27017' => Mongo::Server::Description.new(Mongo::Address.new('127.0.0.1:27017'))}) expect(event.summary).to eq('#') end end end mongo-ruby-driver-2.21.3/spec/mongo/monitoring/event/topology_opening_spec.rb000066400000000000000000000016741505113246500275110ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'lite_spec_helper' describe Mongo::Monitoring::Event::TopologyOpening do let(:address) do Mongo::Address.new('127.0.0.1:27017') end let(:monitoring) { double('monitoring') } let(:cluster) do double('cluster').tap do |cluster| allow(cluster).to receive(:addresses).and_return([address]) allow(cluster).to receive(:servers_list).and_return([]) end end let(:topology) do Mongo::Cluster::Topology::Unknown.new({}, monitoring, cluster) end let(:event) do described_class.new(topology) end describe '#summary' do require_no_linting it 'renders correctly' do expect(topology).to receive(:server_descriptions).and_return({ '127.0.0.1:27017' => Mongo::Server::Description.new(Mongo::Address.new('127.0.0.1:27017'))}) expect(event.summary).to eq('#') end end end mongo-ruby-driver-2.21.3/spec/mongo/monitoring_spec.rb000066400000000000000000000100421505113246500227620ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe Mongo::Monitoring do describe '#dup' do let(:monitoring) do described_class.new end let(:copy) do monitoring.dup end it 'dups the subscribers' do expect(monitoring.subscribers).to_not equal(copy.subscribers) end it 'keeps the same subscriber instances' do expect(monitoring.subscribers).to eq(copy.subscribers) end context 'when adding to the copy' do let(:subscriber) do double('subscriber') end before do copy.subscribe('topic', subscriber) end it 'does not modify the original subscribers' do expect(monitoring.subscribers).to_not eq(copy.subscribers) end end end describe '#initialize' do context 'when no monitoring options provided' do let(:monitoring) do described_class.new end it 'includes the global subscribers' do expect(monitoring.subscribers.size).to eq(7) end end context 'when monitoring options provided' do context 'when monitoring is true' do let(:monitoring) do described_class.new(monitoring: true) end it 'includes the global subscribers' do expect(monitoring.subscribers.size).to eq(7) end end context 'when monitoring is false' do 
let(:monitoring) do described_class.new(monitoring: false) end it 'does not include the global subscribers' do expect(monitoring.subscribers).to be_empty end end end end describe '#subscribe' do let(:monitoring) do described_class.new(monitoring: false) end let(:subscriber) do double('subscriber') end it 'subscribes to the topic' do monitoring.subscribe('topic', subscriber) expect(monitoring.subscribers['topic']).to eq([ subscriber ]) end it 'subscribes to the topic twice' do monitoring.subscribe('topic', subscriber) monitoring.subscribe('topic', subscriber) expect(monitoring.subscribers['topic']).to eq([ subscriber, subscriber ]) end end describe '#unsubscribe' do let(:monitoring) do described_class.new(monitoring: false) end let(:subscriber) do double('subscriber') end it 'unsubscribes from the topic' do monitoring.subscribe('topic', subscriber) monitoring.unsubscribe('topic', subscriber) expect(monitoring.subscribers['topic']).to eq([ ]) end it 'unsubscribes from the topic when not subscribed' do monitoring.unsubscribe('topic', subscriber) expect(monitoring.subscribers['topic']).to eq([ ]) end end describe '#started' do let(:monitoring) do described_class.new(monitoring: false) end let(:subscriber) do double('subscriber') end let(:event) do double('event') end before do monitoring.subscribe('topic', subscriber) end it 'calls the started method on each subscriber' do expect(subscriber).to receive(:started).with(event) monitoring.started('topic', event) end end describe '#succeeded' do let(:monitoring) do described_class.new(monitoring: false) end let(:subscriber) do double('subscriber') end let(:event) do double('event') end before do monitoring.subscribe('topic', subscriber) end it 'calls the succeeded method on each subscriber' do expect(subscriber).to receive(:succeeded).with(event) monitoring.succeeded('topic', event) end end describe '#failed' do let(:monitoring) do described_class.new(monitoring: false) end let(:subscriber) do double('subscriber') end let(:event) do double('event') end before do monitoring.subscribe('topic', subscriber) end it 'calls the failed method on each subscriber' do expect(subscriber).to receive(:failed).with(event) monitoring.failed('topic', event) end end end mongo-ruby-driver-2.21.3/spec/mongo/operation/000077500000000000000000000000001505113246500212415ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/mongo/operation/aggregate/000077500000000000000000000000001505113246500231675ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/mongo/operation/aggregate/result_spec.rb000066400000000000000000000040561505113246500260510ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe Mongo::Operation::Aggregate::Result do let(:description) do Mongo::Server::Description.new( double('description address'), { 'minWireVersion' => 0, 'maxWireVersion' => 2 } ) end let(:result) do described_class.new(reply, description) end let(:cursor_id) { 0 } let(:documents) { [] } let(:flags) { [] } let(:starting_from) { 0 } let(:reply) do Mongo::Protocol::Reply.new.tap do |reply| reply.instance_variable_set(:@flags, flags) reply.instance_variable_set(:@cursor_id, cursor_id) reply.instance_variable_set(:@starting_from, starting_from) reply.instance_variable_set(:@number_returned, documents.size) reply.instance_variable_set(:@documents, documents) end end let(:aggregate) do [ { '_id' => 'New York', 'totalpop' => 40270 }, { '_id' => 'Berlin', 'totalpop' => 103056 } ] end describe '#cursor_id' do context 'when the 
result is not using a cursor' do let(:documents) do [{ 'result' => aggregate, 'ok' => 1.0 }] end it 'returns zero' do expect(result.cursor_id).to eq(0) end end context 'when the result is using a cursor' do let(:documents) do [{ 'cursor' => { 'id' => 15, 'ns' => 'test', 'firstBatch' => aggregate }, 'ok' => 1.0 }] end it 'returns the cursor id' do expect(result.cursor_id).to eq(15) end end end describe '#documents' do context 'when the result is not using a cursor' do let(:documents) do [{ 'result' => aggregate, 'ok' => 1.0 }] end it 'returns the documents' do expect(result.documents).to eq(aggregate) end end context 'when the result is using a cursor' do let(:documents) do [{ 'cursor' => { 'id' => 15, 'ns' => 'test', 'firstBatch' => aggregate }, 'ok' => 1.0 }] end it 'returns the documents' do expect(result.documents).to eq(aggregate) end end end end mongo-ruby-driver-2.21.3/spec/mongo/operation/aggregate_spec.rb000066400000000000000000000027401505113246500245310ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe Mongo::Operation::Aggregate do let(:options) do {} end let(:selector) do { :aggregate => TEST_COLL, :pipeline => [], } end let(:spec) do { :selector => selector, :options => options, :db_name => SpecConfig.instance.test_db } end let(:op) { described_class.new(spec) } let(:context) { Mongo::Operation::Context.new } describe '#initialize' do context 'spec' do it 'sets the spec' do expect(op.spec).to be(spec) end end end describe '#==' do context ' when two ops have different specs' do let(:other_selector) do { :aggregate => 'another_test_coll', :pipeline => [], } end let(:other_spec) do { :selector => other_selector, :options => options, :db_name => SpecConfig.instance.test_db, } end let(:other) { described_class.new(other_spec) } it 'returns false' do expect(op).not_to eq(other) end end end describe '#execute' do context 'when the aggregation fails' do let(:selector) do { :aggregate => TEST_COLL, :pipeline => [{ '$invalid' => 'operator' }], } end it 'raises an exception' do expect { op.execute(authorized_primary, context: context) }.to raise_error(Mongo::Error::OperationFailure) end end end end mongo-ruby-driver-2.21.3/spec/mongo/operation/collections_info_spec.rb000066400000000000000000000016531505113246500261360ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe Mongo::Operation::CollectionsInfo do require_no_required_api_version let(:spec) do { selector: { listCollections: 1 }, db_name: SpecConfig.instance.test_db } end let(:names) do [ 'berlin', 'london' ] end let(:op) do described_class.new(spec) end let(:context) { Mongo::Operation::Context.new } describe '#execute' do before do names.each do |name| authorized_client[name].insert_one(x: 1) end end after do names.each do |name| authorized_client[name].drop end end let(:info) do docs = op.execute(authorized_primary, context: context).documents docs.collect { |info| info['name'].sub("#{SpecConfig.instance.test_db}.", '') } end it 'returns the list of collection info' do expect(info).to include(*names) end end end mongo-ruby-driver-2.21.3/spec/mongo/operation/command_spec.rb000066400000000000000000000034251505113246500242220ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe Mongo::Operation::Command do require_no_required_api_version let(:selector) { { :ping => 1 } } let(:options) { { :limit => -1 } } let(:spec) do { :selector => selector, :options => options, 
      :db_name => SpecConfig.instance.test_db
    }
  end

  let(:op) { described_class.new(spec) }
  let(:context) { Mongo::Operation::Context.new }

  describe '#initialize' do

    it 'sets the spec' do
      expect(op.spec).to be(spec)
    end
  end

  describe '#==' do

    context 'when the ops have different specs' do

      let(:other_selector) { { :ping => 1 } }
      let(:other_spec) do
        { :selector => other_selector,
          :options => {},
          :db_name => 'test',
        }
      end
      let(:other) { described_class.new(other_spec) }

      it 'returns false' do
        expect(op).not_to eq(other)
      end
    end
  end

  describe '#execute' do

    context 'when the command succeeds' do

      let(:response) do
        op.execute(authorized_primary, context: context)
      end

      it 'returns the response' do
        expect(response).to be_successful
      end
    end

    context 'when the command fails' do

      let(:selector) do
        { notacommand: 1 }
      end

      it 'raises an exception' do
        expect {
          op.execute(authorized_primary, context: context)
        }.to raise_error(Mongo::Error::OperationFailure)
      end
    end

    context 'when a document exceeds max bson size' do

      let(:selector) do
        { :hello => '1'*17000000 }
      end

      it 'raises an error' do
        expect {
          op.execute(authorized_primary, context: context)
        }.to raise_error(Mongo::Error::MaxBSONSize)
      end
    end
  end
end

mongo-ruby-driver-2.21.3/spec/mongo/operation/context_spec.rb

# frozen_string_literal: true

require 'lite_spec_helper'

describe Mongo::Operation::Context do
  describe '#initialize' do
    context 'when timeout_ms is negative' do
      it 'raises an error' do
        expect do
          described_class.new(operation_timeouts: { operation_timeout_ms: -1 })
        end.to raise_error ArgumentError, /must be a non-negative integer/
      end
    end
  end

  describe '#deadline' do
    let(:context) { described_class.new(operation_timeouts: { operation_timeout_ms: timeout_ms }) }

    context 'when timeout_ms is nil' do
      let(:timeout_ms) { nil }

      it 'returns nil' do
        expect(context.deadline).to be_nil
      end
    end

    context 'when timeout_ms is zero' do
      let(:timeout_ms) { 0 }

      it 'returns zero' do
        expect(context.deadline).to eq(0)
      end
    end

    context 'when timeout_ms is positive' do
      before do
        allow(Mongo::Utils).to receive(:monotonic_time).and_return(100.0)
      end

      let(:timeout_ms) { 10_000 }

      it 'calculates the deadline' do
        expect(context.deadline).to eq(110)
      end
    end
  end

  describe '#remaining_timeout_ms' do
    let(:context) { described_class.new(operation_timeouts: { operation_timeout_ms: timeout_ms }) }

    context 'when timeout_ms is nil' do
      let(:timeout_ms) { nil }

      it 'returns nil' do
        expect(context.remaining_timeout_ms).to be_nil
      end
    end

    context 'when timeout_ms is zero' do
      let(:timeout_ms) { 0 }

      it 'returns nil' do
        expect(context.remaining_timeout_ms).to be_nil
      end
    end

    context 'when timeout_ms is positive' do
      before do
        allow(Mongo::Utils).to receive(:monotonic_time).and_return(100.0, 105.0)
      end

      let(:timeout_ms) { 10_000 }

      it 'calculates the remaining time' do
        expect(context.remaining_timeout_ms).to eq(5_000)
      end
    end
  end
end

mongo-ruby-driver-2.21.3/spec/mongo/operation/create/op_msg_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'
require_relative '../shared/csot/examples'

describe Mongo::Operation::Create::OpMsg do
  include CSOT::Examples

  let(:context) { Mongo::Operation::Context.new }

  let(:write_concern) do
    Mongo::WriteConcern.get(w: :majority)
  end

  let(:session) { nil }
  let(:spec) do
    {
      :selector => { :create => authorized_collection.name },
      :db_name => authorized_collection.database.name,
      :write_concern => write_concern,
      :session => session
    }
  end

  let(:op) { described_class.new(spec) }

  let(:connection) do
    double('connection').tap do |connection|
      allow(connection).to receive(:server).and_return(authorized_primary)
      allow(connection).to receive(:features).and_return(authorized_primary.features)
      allow(connection).to receive(:description).and_return(authorized_primary.description)
      allow(connection).to receive(:cluster_time).and_return(authorized_primary.cluster_time)
    end
  end

  describe '#initialize' do

    context 'spec' do

      it 'sets the spec' do
        expect(op.spec).to eq(spec)
      end
    end
  end

  describe '#==' do

    context 'spec' do

      context 'when two ops have the same specs' do
        let(:other) { described_class.new(spec) }

        it 'returns true' do
          expect(op).to eq(other)
        end
      end

      context 'when two ops have different specs' do

        let(:other_selector) do
          { :create => "other_collection_name" }
        end

        let(:other_spec) do
          { :selector => other_selector,
            :db_name => authorized_collection.database.name,
            :write_concern => write_concern,
            :ordered => true
          }
        end

        let(:other) { described_class.new(other_spec) }

        it 'returns false' do
          expect(op).not_to eq(other)
        end
      end
    end
  end

  describe '#selector' do

    it 'does not mutate user input' do
      user_input = IceNine.deep_freeze(spec.dup)
      expect do
        described_class.new(user_input).send(:selector, connection)
      end.not_to raise_error
    end
  end

  describe '#message' do
    # https://jira.mongodb.org/browse/RUBY-2224
    require_no_linting

    let(:global_args) do
      {
        create: TEST_COLL,
        writeConcern: write_concern.options,
        '$db' => SpecConfig.instance.test_db,
        lsid: session.session_id
      }
    end

    let(:session) do
      authorized_client.start_session
    end

    context 'when the topology is replica set or sharded' do
      require_topology :replica_set, :sharded

      let(:expected_global_args) do
        global_args.merge(Mongo::Operation::CLUSTER_TIME => authorized_client.cluster.cluster_time)
      end

      it 'creates the correct OP_MSG message' do
        authorized_client.command(ping:1)
        expect(Mongo::Protocol::Msg).to receive(:new).with([], {}, expected_global_args)
        op.send(:message, connection)
      end
    end

    context 'when the topology is standalone' do
      require_topology :single

      let(:expected_global_args) do
        global_args
      end

      it 'creates the correct OP_MSG message' do
        authorized_client.command(ping:1)
        expect(Mongo::Protocol::Msg).to receive(:new).with([], {}, expected_global_args)
        op.send(:message, connection)
      end

      context 'when an implicit session is created and the topology is then updated and the server does not support sessions' do
        # Mocks on features are incompatible with linting
        require_no_linting

        let(:expected_global_args) do
          global_args.dup.tap do |args|
            args.delete(:lsid)
          end
        end

        let(:session) do
          Mongo::Session.new(nil, authorized_client, implicit: true).tap do |session|
            allow(session).to receive(:session_id).and_return(42)
            session.should be_implicit
          end
        end

        it 'creates the correct OP_MSG message' do
          RSpec::Mocks.with_temporary_scope do
            expect(connection.features).to receive(:sessions_enabled?).and_return(false)
            expect(expected_global_args[:session]).to be nil
            expect(Mongo::Protocol::Msg).to receive(:new).with([], {}, expected_global_args)
            op.send(:message, connection)
          end
        end
      end
    end

    context 'when the write concern is 0' do

      let(:write_concern) do
        Mongo::WriteConcern.get(w: 0)
      end

      context 'when the session is implicit' do

        let(:session) do
          Mongo::Session.new(nil, authorized_client, implicit: true).tap do |session|
            allow(session).to receive(:session_id).and_return(42)
            session.should be_implicit
          end
        end

        context 'when the topology is replica set or sharded' do
          require_topology :replica_set, :sharded

          let(:expected_global_args) do
            global_args.dup.tap do |args|
              args.delete(:lsid)
              args.merge!(Mongo::Operation::CLUSTER_TIME => authorized_client.cluster.cluster_time)
            end
          end

          it 'does not send a session id in the command' do
            authorized_client.command(ping:1)
            expect(Mongo::Protocol::Msg).to receive(:new).with([:more_to_come], {}, expected_global_args)
            op.send(:message, connection)
          end
        end

        context 'when the topology is standalone' do
          require_topology :single

          let(:expected_global_args) do
            global_args.dup.tap do |args|
              args.delete(:lsid)
            end
          end

          it 'creates the correct OP_MSG message' do
            authorized_client.command(ping:1)
            expect(Mongo::Protocol::Msg).to receive(:new).with([:more_to_come], {}, expected_global_args)
            op.send(:message, connection)
          end
        end
      end

      context 'when the session is explicit' do
        require_topology :replica_set, :sharded

        let(:session) do
          authorized_client.start_session
        end

        before do
          session.should_not be_implicit
        end

        let(:expected_global_args) do
          global_args.dup.tap do |args|
            args.delete(:lsid)
            args.merge!(Mongo::Operation::CLUSTER_TIME => authorized_client.cluster.cluster_time)
          end
        end

        it 'does not send a session id in the command' do
          authorized_client.command(ping:1)
          RSpec::Mocks.with_temporary_scope do
            expect(Mongo::Protocol::Msg).to receive(:new).with([:more_to_come], {}, expected_global_args)
            op.send(:message, connection)
          end
        end
      end
    end
  end

  it_behaves_like 'a CSOT-compliant OpMsg subclass'
end

mongo-ruby-driver-2.21.3/spec/mongo/operation/create_index_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe Mongo::Operation::CreateIndex do
  require_no_required_api_version

  let(:context) { Mongo::Operation::Context.new }

  before do
    authorized_collection.drop
    authorized_collection.insert_one(test: 1)
  end

  describe '#execute' do

    context 'when the index is created' do

      let(:spec) do
        { key: { random: 1 }, name: 'random_1', unique: true }
      end

      let(:operation) do
        described_class.new(indexes: [ spec ], db_name: SpecConfig.instance.test_db, coll_name: TEST_COLL)
      end

      let(:response) do
        operation.execute(authorized_primary, context: context)
      end

      it 'returns ok' do
        expect(response).to be_successful
      end
    end

    context 'when index creation fails' do

      let(:spec) do
        { key: { random: 1 }, name: 'random_1', unique: true }
      end

      let(:operation) do
        described_class.new(indexes: [ spec ], db_name: SpecConfig.instance.test_db, coll_name: TEST_COLL)
      end

      let(:second_operation) do
        described_class.new(indexes: [ spec.merge(unique: false) ], db_name: SpecConfig.instance.test_db, coll_name: TEST_COLL)
      end

      before do
        operation.execute(authorized_primary, context: context)
      end

      it 'raises an exception' do
        expect {
          second_operation.execute(authorized_primary, context: context)
        }.to raise_error(Mongo::Error::OperationFailure)
      end
    end
  end
end

mongo-ruby-driver-2.21.3/spec/mongo/operation/create_user_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe Mongo::Operation::CreateUser do
  require_no_required_api_version

  let(:context) { Mongo::Operation::Context.new }

  describe '#execute' do

    let(:user) do
      Mongo::Auth::User.new(
        user: 'durran',
        password: 'password',
        roles: [ Mongo::Auth::Roles::READ_WRITE ]
      )
    end

    let(:operation) do
      described_class.new(user: user, db_name: SpecConfig.instance.test_db)
    end

    before do
      users = root_authorized_client.database.users
      if users.info('durran').any?
        users.remove('durran')
      end
    end

    context 'when user creation was successful' do

      let!(:response) do
        operation.execute(root_authorized_primary, context: context)
      end

      it 'saves the user in the database' do
        expect(response).to be_successful
      end
    end

    context 'when creation was not successful' do

      it 'raises an exception' do
        expect {
          operation.execute(root_authorized_primary, context: context)
          operation.execute(root_authorized_primary, context: context)
        }.to raise_error(Mongo::Error::OperationFailure)
      end
    end
  end
end

mongo-ruby-driver-2.21.3/spec/mongo/operation/delete/bulk_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe Mongo::Operation::Delete do
  require_no_required_api_version

  let(:context) { Mongo::Operation::Context.new }

  let(:documents) do
    [ { 'q' => { foo: 1 }, 'limit' => 1 } ]
  end

  let(:spec) do
    { :deletes => documents,
      :db_name => SpecConfig.instance.test_db,
      :coll_name => TEST_COLL,
      :ordered => true
    }
  end

  let(:op) { described_class.new(spec) }

  describe '#initialize' do

    context 'spec' do

      it 'sets the spec' do
        expect(op.spec).to eq(spec)
      end
    end
  end

  describe '#==' do

    context 'spec' do

      context 'when two ops have the same specs' do
        let(:other) { described_class.new(spec) }

        it 'returns true' do
          expect(op).to eq(other)
        end
      end

      context 'when two ops have different specs' do

        let(:other_docs) do
          [ { 'q' => { bar: 1 }, 'limit' => 1 } ]
        end

        let(:other_spec) do
          { :deletes => other_docs,
            :db_name => SpecConfig.instance.test_db,
            :coll_name => TEST_COLL,
            :ordered => true
          }
        end

        let(:other) { described_class.new(other_spec) }

        it 'returns false' do
          expect(op).not_to eq(other)
        end
      end
    end
  end

  describe '#bulk_execute' do

    before do
      begin
        authorized_collection.delete_many
      rescue Mongo::Error::OperationFailure
      end
      begin
        authorized_collection.indexes.drop_all
      rescue Mongo::Error::OperationFailure
      end
      authorized_collection.insert_many([
        { name: 'test', field: 'test' },
        { name: 'testing', field: 'test' }
      ])
    end

    after do
      authorized_collection.delete_many
    end

    context 'when deleting a single document' do

      let(:op) do
        described_class.new({
          deletes: documents,
          db_name: SpecConfig.instance.test_db,
          coll_name: TEST_COLL,
          write_concern: Mongo::WriteConcern.get(w: 1)
        })
      end

      context 'when the delete succeeds' do

        let(:documents) do
          [{ 'q' => { field: 'test' }, 'limit' => 1 }]
        end

        it 'deletes the document from the database' do
          authorized_primary.with_connection do |connection|
            op.bulk_execute(connection, context: context)
          end
          expect(authorized_collection.find.count).to eq(1)
        end
      end
    end

    context 'when deleting multiple documents' do

      let(:op) do
        described_class.new({
          deletes: documents,
          db_name: SpecConfig.instance.test_db,
          coll_name: TEST_COLL,
        })
      end

      context 'when the deletes succeed' do

        let(:documents) do
          [{ 'q' => { field: 'test' }, 'limit' => 0 }]
        end

        it 'deletes the documents from the database' do
          authorized_primary.with_connection do |connection|
            op.bulk_execute(connection, context: context)
          end
          expect(authorized_collection.find.count).to eq(0)
        end
      end
    end

    context 'when the deletes are ordered' do

      let(:documents) do
        [ { q: { '$set' => { a: 1 } }, limit: 0 },
          { 'q' => { field: 'test' }, 'limit' => 1 }
        ]
      end

      let(:spec) do
        { :deletes => documents,
          :db_name => SpecConfig.instance.test_db,
          :coll_name => TEST_COLL,
          :ordered => true
        }
      end

      let(:failing_delete) do
        described_class.new(spec)
      end

      context 'when the delete fails' do

        context 'when write concern is acknowledged' do

          let(:write_concern) do
            Mongo::WriteConcern.get(w: :majority)
          end

          it 'aborts after first error' do
            authorized_primary.with_connection do |connection|
              failing_delete.bulk_execute(connection, context: context)
            end
            expect(authorized_collection.find.count).to eq(2)
          end
        end

        context 'when write concern is unacknowledged' do

          let(:write_concern) do
            Mongo::WriteConcern.get(w: 0)
          end

          it 'aborts after first error' do
            authorized_primary.with_connection do |connection|
              failing_delete.bulk_execute(connection, context: context)
            end
            expect(authorized_collection.find.count).to eq(2)
          end
        end
      end
    end

    context 'when the deletes are unordered' do

      let(:documents) do
        [ { q: { '$set' => { a: 1 } }, limit: 0 },
          { 'q' => { field: 'test' }, 'limit' => 1 }
        ]
      end

      let(:spec) do
        { :deletes => documents,
          :db_name => SpecConfig.instance.test_db,
          :coll_name => TEST_COLL,
          :ordered => false
        }
      end

      let(:failing_delete) do
        described_class.new(spec)
      end

      context 'when the delete fails' do

        context 'when write concern is acknowledged' do

          let(:write_concern) do
            Mongo::WriteConcern.get(w: 1)
          end

          it 'does not abort after first error' do
            authorized_primary.with_connection do |connection|
              failing_delete.bulk_execute(connection, context: context)
            end
            expect(authorized_collection.find.count).to eq(1)
          end
        end

        context 'when write concern is unacknowledged' do

          let(:write_concern) do
            Mongo::WriteConcern.get(w: 0)
          end

          it 'does not abort after first error' do
            authorized_primary.with_connection do |connection|
              failing_delete.bulk_execute(connection, context: context)
            end
            expect(authorized_collection.find.count).to eq(1)
          end
        end
      end
    end
  end
end

mongo-ruby-driver-2.21.3/spec/mongo/operation/delete/op_msg_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'
require_relative '../shared/csot/examples'

describe Mongo::Operation::Delete::OpMsg do
  include CSOT::Examples

  let(:context) { Mongo::Operation::Context.new }

  let(:write_concern) do
    Mongo::WriteConcern.get(w: :majority)
  end

  let(:session) { nil }
  let(:deletes) { [{:q => { :foo => 1 }, :limit => 1}] }

  let(:spec) do
    { :deletes => deletes,
      :db_name => authorized_collection.database.name,
      :coll_name => authorized_collection.name,
      :write_concern => write_concern,
      :ordered => true,
      :session => session
    }
  end

  let(:op) { described_class.new(spec) }

  let(:connection) do
    double('connection').tap do |connection|
      allow(connection).to receive(:server).and_return(authorized_primary)
      allow(connection).to receive(:features).and_return(authorized_primary.features)
      allow(connection).to receive(:description).and_return(authorized_primary.description)
      allow(connection).to receive(:cluster_time).and_return(authorized_primary.cluster_time)
    end
  end

  describe '#initialize' do

    context 'spec' do

      it 'sets the spec' do
        expect(op.spec).to eq(spec)
      end
    end
  end

  describe '#==' do

    context 'spec' do

      context 'when two ops have the same specs' do
        let(:other) { described_class.new(spec) }

        it 'returns true' do
          expect(op).to eq(other)
        end
      end

      context 'when two ops have different specs' do

        let(:other_deletes) { [{:q => { :bar => 1 }, :limit => 1}] }

        let(:other_spec) do
          { :deletes => other_deletes,
            :db_name => authorized_collection.database.name,
            :coll_name => authorized_collection.name,
            :write_concern => write_concern,
            :ordered => true
          }
        end

        let(:other) { described_class.new(other_spec) }

        it 'returns false' do
          expect(op).not_to eq(other)
        end
      end
    end
  end

  describe 'write concern' do
    # https://jira.mongodb.org/browse/RUBY-2224
    require_no_linting

    context 'when write concern is not specified' do

      let(:spec) do
        { :deletes => deletes,
          :db_name => authorized_collection.database.name,
          :coll_name => authorized_collection.name,
          :ordered => true
        }
      end

      it 'does not include write concern in the selector' do
        expect(op.send(:command, connection)[:writeConcern]).to be_nil
      end
    end

    context 'when write concern is specified' do

      it 'includes write concern in the selector' do
        expect(op.send(:command, connection)[:writeConcern]).to eq(BSON::Document.new(write_concern.options))
      end
    end
  end

  describe '#message' do
    # https://jira.mongodb.org/browse/RUBY-2224
    require_no_linting

    context 'when the server supports OP_MSG' do

      let(:global_args) do
        {
          delete: TEST_COLL,
          ordered: true,
          writeConcern: write_concern.options,
          '$db' => SpecConfig.instance.test_db,
          lsid: session.session_id
        }
      end

      let(:expected_payload_1) do
        Mongo::Protocol::Msg::Section1.new('deletes', deletes)
      end

      let(:session) do
        authorized_client.start_session
      end

      context 'when the topology is replica set or sharded' do
        require_topology :replica_set, :sharded

        let(:expected_global_args) do
          global_args.merge(Mongo::Operation::CLUSTER_TIME => authorized_client.cluster.cluster_time)
        end

        it 'creates the correct OP_MSG message' do
          authorized_client.command(ping:1)
          expect(Mongo::Protocol::Msg).to receive(:new).with([], {}, expected_global_args, expected_payload_1)
          op.send(:message, connection)
        end
      end

      context 'when the topology is standalone' do
        require_topology :single

        let(:expected_global_args) do
          global_args
        end

        it 'creates the correct OP_MSG message' do
          authorized_client.command(ping:1)
          expect(Mongo::Protocol::Msg).to receive(:new).with([], {}, expected_global_args, expected_payload_1)
          op.send(:message, connection)
        end

        context 'when an implicit session is created and the topology is then updated and the server does not support sessions' do
          # Mocks on features are incompatible with linting
          require_no_linting

          let(:expected_global_args) do
            global_args.dup.tap do |args|
              args.delete(:lsid)
            end
          end

          let(:session) do
            Mongo::Session.new(nil, authorized_client, implicit: true).tap do |session|
              allow(session).to receive(:session_id).and_return(42)
              session.should be_implicit
            end
          end

          it 'creates the correct OP_MSG message' do
            RSpec::Mocks.with_temporary_scope do
              expect(connection.features).to receive(:sessions_enabled?).and_return(false)
              expect(expected_global_args[:session]).to be nil
              expect(Mongo::Protocol::Msg).to receive(:new).with([], {}, expected_global_args, expected_payload_1)
              op.send(:message, connection)
            end
          end
        end
      end

      context 'when the write concern is 0' do

        let(:write_concern) do
          Mongo::WriteConcern.get(w: 0)
        end

        context 'when the session is implicit' do

          let(:session) do
            Mongo::Session.new(nil, authorized_client, implicit: true).tap do |session|
              allow(session).to receive(:session_id).and_return(42)
              session.should be_implicit
            end
          end

          context 'when the topology is replica set or sharded' do
            require_topology :replica_set, :sharded

            let(:expected_global_args) do
              global_args.dup.tap do |args|
                args.delete(:lsid)
                args.merge!(Mongo::Operation::CLUSTER_TIME => authorized_client.cluster.cluster_time)
              end
            end

            it 'does not send a session id in the command' do
              authorized_client.command(ping:1)
              expect(Mongo::Protocol::Msg).to receive(:new).with([:more_to_come],
{}, expected_global_args, expected_payload_1) op.send(:message, connection) end end context 'when the topology is standalone' do require_topology :single let(:expected_global_args) do global_args.dup.tap do |args| args.delete(:lsid) end end it 'creates the correct OP_MSG message' do authorized_client.command(ping:1) expect(Mongo::Protocol::Msg).to receive(:new).with([:more_to_come], {}, expected_global_args, expected_payload_1) op.send(:message, connection) end end end context 'when the session is explicit' do require_topology :replica_set, :sharded let(:session) do authorized_client.start_session end before do session.should_not be_implicit end let(:expected_global_args) do global_args.dup.tap do |args| args.delete(:lsid) args.merge!(Mongo::Operation::CLUSTER_TIME => authorized_client.cluster.cluster_time) end end it 'does not send a session id in the command' do authorized_client.command(ping:1) RSpec::Mocks.with_temporary_scope do expect(Mongo::Protocol::Msg).to receive(:new).with([:more_to_come], {}, expected_global_args, expected_payload_1) op.send(:message, connection) end end end end end end it_behaves_like 'a CSOT-compliant OpMsg subclass' end mongo-ruby-driver-2.21.3/spec/mongo/operation/delete_spec.rb000066400000000000000000000112011505113246500240350ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe Mongo::Operation::Delete do require_no_required_api_version before do begin authorized_collection.delete_many rescue Mongo::Error::OperationFailure end begin authorized_collection.indexes.drop_all rescue Mongo::Error::OperationFailure end end let(:document) do { :q => { :foo => 1 }, :limit => 1 } end let(:spec) do { :deletes => [ document ], :db_name => SpecConfig.instance.test_db, :coll_name => TEST_COLL, :write_concern => Mongo::WriteConcern.get(w: :majority), :ordered => true } end let(:op) { described_class.new(spec) } let(:context) { Mongo::Operation::Context.new } describe '#initialize' do context 'spec' do it 'sets the spec' do expect(op.spec).to eq(spec) end end end describe '#==' do context 'spec' do context 'when two ops have the same specs' do let(:other) { described_class.new(spec) } it 'returns true' do expect(op).to eq(other) end end context 'when two ops have different specs' do let(:other_doc) { { :q => { :bar => 1 }, :limit => 1 } } let(:other_spec) do { :deletes => [ other_doc ], :db_name => SpecConfig.instance.test_db, :coll_name => TEST_COLL, :write_concern => Mongo::WriteConcern.get(w: :majority), :ordered => true } end let(:other) { described_class.new(other_spec) } it 'returns false' do expect(op).not_to eq(other) end end end end describe '#execute' do before do authorized_collection.insert_many([ { name: 'test', field: 'test' }, { name: 'testing', field: 'test' } ]) end after do authorized_collection.delete_many end context 'when deleting a single document' do let(:delete) do described_class.new({ deletes: [ document ], db_name: SpecConfig.instance.test_db, coll_name: TEST_COLL, write_concern: Mongo::WriteConcern.get(w: :majority) }) end context 'when the delete succeeds' do let(:document) do { 'q' => { field: 'test' }, 'limit' => 1 } end let(:result) do delete.execute(authorized_primary, context: context) end it 'deletes the documents from the database' do expect(result.written_count).to eq(1) end it 'reports the correct deleted count' do expect(result.deleted_count).to eq(1) end end context 'when the delete fails' do let(:document) do { que: { field: 'test' } } end it 'raises an exception' do expect { 
            delete.execute(authorized_primary, context: context)
          }.to raise_error(Mongo::Error::OperationFailure)
        end
      end
    end

    context 'when deleting multiple documents' do

      let(:delete) do
        described_class.new({
          deletes: [ document ],
          db_name: SpecConfig.instance.test_db,
          coll_name: TEST_COLL,
          write_concern: Mongo::WriteConcern.get(w: :majority)
        })
      end

      context 'when the deletes succeed' do

        let(:document) do
          { 'q' => { field: 'test' }, 'limit' => 0 }
        end

        let(:result) do
          delete.execute(authorized_primary, context: context)
        end

        it 'deletes the documents from the database' do
          expect(result.written_count).to eq(2)
        end

        it 'reports the correct deleted count' do
          expect(result.deleted_count).to eq(2)
        end
      end

      context 'when a delete fails' do

        let(:document) do
          { q: { '$set' => { a: 1 } }, limit: 0 }
        end

        let(:result) do
          delete.execute(authorized_primary, context: context)
        end

        it 'does not delete any documents' do
          expect {
            op.execute(authorized_primary, context: context)
          }.to raise_error(Mongo::Error::OperationFailure)
          expect(authorized_collection.find.count).to eq(2)
        end
      end

      context 'when a document exceeds max bson size' do

        let(:document) do
          { 'q' => { field: 't'*17000000 }, 'limit' => 0 }
        end

        it 'raises an error' do
          expect {
            op.execute(authorized_primary, context: context)
          }.to raise_error(Mongo::Error::MaxBSONSize)
        end
      end
    end
  end
end

mongo-ruby-driver-2.21.3/spec/mongo/operation/drop_index_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe Mongo::Operation::DropIndex do
  require_no_required_api_version

  before do
    authorized_collection.indexes.drop_all
  end

  let(:context) { Mongo::Operation::Context.new }

  describe '#execute' do

    context 'when the index exists' do

      let(:spec) do
        { another: -1 }
      end

      before do
        authorized_collection.indexes.create_one(spec, unique: true)
      end

      let(:operation) do
        described_class.new(
          db_name: SpecConfig.instance.test_db,
          coll_name: TEST_COLL,
          index_name: 'another_-1'
        )
      end

      let(:response) do
        operation.execute(authorized_primary, context: context)
      end

      it 'removes the index' do
        expect(response).to be_successful
      end
    end

    context 'when the index does not exist' do

      let(:operation) do
        described_class.new(
          db_name: SpecConfig.instance.test_db,
          coll_name: TEST_COLL,
          index_name: 'another_blah'
        )
      end

      it 'raises an exception' do
        expect {
          operation.execute(authorized_primary, context: context)
        }.to raise_error(Mongo::Error::OperationFailure)
      end
    end
  end
end

mongo-ruby-driver-2.21.3/spec/mongo/operation/find/builder/flags_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'lite_spec_helper'

describe Mongo::Operation::Find::Builder::Flags do

  describe '.map_flags' do

    shared_examples_for 'a flag mapper' do

      let(:flags) do
        described_class.map_flags(options)
      end

      it 'maps allow partial results' do
        expect(flags).to include(:partial)
      end

      it 'maps oplog replay' do
        expect(flags).to include(:oplog_replay)
      end

      it 'maps no cursor timeout' do
        expect(flags).to include(:no_cursor_timeout)
      end

      it 'maps tailable' do
        expect(flags).to include(:tailable_cursor)
      end

      it 'maps await data' do
        expect(flags).to include(:await_data)
      end

      it 'maps exhaust' do
        expect(flags).to include(:exhaust)
      end
    end

    context 'when the options are standard' do

      let(:options) do
        {
          :allow_partial_results => true,
          :oplog_replay => true,
          :no_cursor_timeout => true,
          :tailable => true,
          :await_data => true,
          :exhaust => true
        }
      end

      it_behaves_like 'a flag mapper'
    end

    context 'when the options already have flags' do

      let(:options) do
        {
          :flags => [
            :partial,
            :oplog_replay,
            :no_cursor_timeout,
            :tailable_cursor,
            :await_data,
            :exhaust
          ]
        }
      end

      it_behaves_like 'a flag mapper'
    end

    context 'when the options include tailable_await' do

      let(:options) do
        { :tailable_await => true }
      end

      let(:flags) do
        described_class.map_flags(options)
      end

      it 'maps the await data option' do
        expect(flags).to include(:await_data)
      end

      it 'maps the tailable option' do
        expect(flags).to include(:tailable_cursor)
      end
    end

    context 'when the options provide a cursor type' do

      let(:options) do
        { :cursor_type => :await_data }
      end

      let(:flags) do
        described_class.map_flags(options)
      end

      it 'maps the cursor type to a flag' do
        expect(flags).to include(:await_data)
      end
    end
  end
end

mongo-ruby-driver-2.21.3/spec/mongo/operation/find/builder/modifiers_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'lite_spec_helper'

describe Mongo::Operation::Find::Builder::Modifiers do

  describe '.map_driver_options' do

    shared_examples_for 'transformable driver options' do

      it 'maps hint' do
        expect(transformed[:hint]).to eq("_id" => 1)
      end

      it 'maps comment' do
        expect(transformed[:comment]).to eq('testing')
      end

      it 'maps max scan' do
        expect(transformed[:max_scan]).to eq(200)
      end

      it 'maps max time ms' do
        expect(transformed[:max_time_ms]).to eq(500)
      end

      it 'maps max' do
        expect(transformed[:max_value]).to eq("name" => 'joe')
      end

      it 'maps min' do
        expect(transformed[:min_value]).to eq("name" => 'albert')
      end

      it 'maps return key' do
        expect(transformed[:return_key]).to be true
      end

      it 'maps show record id' do
        expect(transformed[:show_disk_loc]).to be true
      end

      it 'maps snapshot' do
        expect(transformed[:snapshot]).to be true
      end

      it 'maps explain' do
        expect(transformed[:explain]).to be true
      end

      it 'returns a BSON document' do
        expect(transformed).to be_a(BSON::Document)
      end
    end

    context 'when the keys are strings' do

      let(:modifiers) do
        {
          '$orderby' => { name: 1 },
          '$hint' => { _id: 1 },
          '$comment' => 'testing',
          '$snapshot' => true,
          '$maxScan' => 200,
          '$max' => { name: 'joe' },
          '$min' => { name: 'albert' },
          '$maxTimeMS' => 500,
          '$returnKey' => true,
          '$showDiskLoc' => true,
          '$explain' => true
        }
      end

      let(:transformed) do
        described_class.map_driver_options(modifiers)
      end

      it_behaves_like 'transformable driver options'
    end

    context 'when the keys are symbols' do

      let(:modifiers) do
        {
          :$orderby => { name: 1 },
          :$hint => { _id: 1 },
          :$comment => 'testing',
          :$snapshot => true,
          :$maxScan => 200,
          :$max => { name: 'joe' },
          :$min => { name: 'albert' },
          :$maxTimeMS => 500,
          :$returnKey => true,
          :$showDiskLoc => true,
          :$explain => true
        }
      end

      let(:transformed) do
        described_class.map_driver_options(modifiers)
      end

      it_behaves_like 'transformable driver options'
    end
  end

  describe '.map_server_modifiers' do

    shared_examples_for 'transformable server modifiers' do

      it 'maps hint' do
        expect(transformed[:$hint]).to eq("_id" => 1)
      end

      it 'maps comment' do
        expect(transformed[:$comment]).to eq('testing')
      end

      it 'maps max scan' do
        expect(transformed[:$maxScan]).to eq(200)
      end

      it 'maps max time ms' do
        expect(transformed[:$maxTimeMS]).to eq(500)
      end

      it 'maps max' do
        expect(transformed[:$max]).to eq("name" => 'joe')
      end

      it 'maps min' do
        expect(transformed[:$min]).to eq("name" => 'albert')
      end

      it 'maps return key' do
        expect(transformed[:$returnKey]).to be true
      end

      it 'maps show record id' do
        expect(transformed[:$showDiskLoc]).to be true
      end

      it 'maps snapshot' do
        expect(transformed[:$snapshot]).to be true
      end

      it 'maps explain' do
        expect(transformed[:$explain]).to be true
      end

      it 'returns a BSON document' do
        expect(transformed).to be_a(BSON::Document)
      end

      it 'does not include non modifiers' do
        expect(transformed[:limit]).to be_nil
      end
    end

    context 'when the keys are strings' do

      let(:options) do
        {
          'sort' => { name: 1 },
          'hint' => { _id: 1 },
          'comment' => 'testing',
          'snapshot' => true,
          'max_scan' => 200,
          'max_value' => { name: 'joe' },
          'min_value' => { name: 'albert' },
          'max_time_ms' => 500,
          'return_key' => true,
          'show_disk_loc' => true,
          'explain' => true,
          'limit' => 10
        }
      end

      let(:transformed) do
        described_class.map_server_modifiers(options)
      end

      it_behaves_like 'transformable server modifiers'
    end

    context 'when the keys are symbols' do

      let(:options) do
        {
          :sort => { name: 1 },
          :hint => { _id: 1 },
          :comment => 'testing',
          :snapshot => true,
          :max_scan => 200,
          :max_value => { name: 'joe' },
          :min_value => { name: 'albert' },
          :max_time_ms => 500,
          :return_key => true,
          :show_disk_loc => true,
          :explain => true,
          :limit => 10
        }
      end

      let(:transformed) do
        described_class.map_server_modifiers(options)
      end

      it_behaves_like 'transformable server modifiers'
    end
  end
end

mongo-ruby-driver-2.21.3/spec/mongo/operation/find/op_msg_spec.rb

# frozen_string_literal: true

require 'spec_helper'
require_relative '../shared/csot/examples'

describe Mongo::Operation::Find::OpMsg do
  include CSOT::Examples

  let(:spec) do
    {
      coll_name: 'coll_name',
      filter: {},
      db_name: 'db_name'
    }
  end

  let(:op) { described_class.new(spec) }

  context 'when it is a CSOT-compliant OpMsg' do
    include_examples 'mock CSOT environment'

    context 'when no timeout_ms set' do
      it 'does not set maxTimeMS' do
        expect(body.key?(:maxTimeMS)).to be false
      end
    end

    context 'when timeout_ms is set' do
      let(:remaining_timeout_sec) { 3 }

      context 'when cursor is non-tailable' do
        let(:cursor_type) { nil }

        context 'when timeout_mode is cursor_lifetime' do
          let(:timeout_mode) { :cursor_lifetime }

          it 'sets maxTimeMS' do
            expect(body[:maxTimeMS]).to be == 3_000
          end
        end

        context 'when timeout_mode is iteration' do
          let(:timeout_mode) { :iteration }

          it 'omits maxTimeMS' do
            expect(body[:maxTimeMS]).to be_nil
          end
        end
      end

      context 'when cursor is tailable' do
        let(:cursor_type) { :tailable }

        it 'omits maxTimeMS' do
          expect(body[:maxTimeMS]).to be_nil
        end
      end

      context 'when cursor is tailable_await' do
        let(:cursor_type) { :tailable_await }

        it 'sets maxTimeMS' do
          expect(body[:maxTimeMS]).to be == 3_000
        end
      end
    end
  end
end

mongo-ruby-driver-2.21.3/spec/mongo/operation/get_more/op_msg_spec.rb

# frozen_string_literal: true

require 'spec_helper'
require_relative '../shared/csot/examples'

describe Mongo::Operation::GetMore::OpMsg do
  include CSOT::Examples

  let(:spec) do
    {
      options: {},
      db_name: 'db_name',
      coll_name: 'coll_name',
      cursor_id: 1_234_567_890,
    }
  end

  let(:op) { described_class.new(spec) }

  context 'when it is a CSOT-compliant OpMsg' do
    include_examples 'mock CSOT environment'

    context 'when no timeout_ms set' do
      it 'does not set maxTimeMS' do
        expect(body.key?(:maxTimeMS)).to be false
      end
    end

    context 'when timeout_ms is set' do
      let(:remaining_timeout_sec) { 3 }

      context 'when cursor is non-tailable' do
        it 'omits maxTimeMS' do
          expect(body[:maxTimeMS]).to be_nil
        end
      end

      context 'when cursor is tailable' do
        let(:cursor_type) { :tailable }

        it 'omits maxTimeMS' do
          expect(body[:maxTimeMS]).to be_nil
        end
      end

      context 'when cursor is tailable_await' do
        let(:cursor_type) { :tailable_await }

        context 'when max_await_time_ms is omitted' do
          it 'omits maxTimeMS' do
            expect(body[:maxTimeMS]).to be_nil
          end
        end

        context 'when max_await_time_ms is given' do
          let(:max_await_time_ms) { 1_234 }

          it 'sets maxTimeMS' do
            expect(body[:maxTimeMS]).to be == 1_234
          end
        end
      end
    end
  end
end

mongo-ruby-driver-2.21.3/spec/mongo/operation/indexes_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe Mongo::Operation::Indexes do
  require_no_required_api_version

  let(:context) { Mongo::Operation::Context.new }

  describe '#execute' do

    let(:index_spec) do
      { name: 1 }
    end

    before do
      authorized_collection.drop
      authorized_collection.insert_one(test: 1)
      authorized_collection.indexes.create_one(index_spec, unique: true)
    end

    after do
      authorized_collection.indexes.drop_one('name_1')
    end

    let(:operation) do
      described_class.new({
        selector: { listIndexes: TEST_COLL },
        coll_name: TEST_COLL,
        db_name: SpecConfig.instance.test_db
      })
    end

    let(:indexes) do
      operation.execute(authorized_primary, context: context)
    end

    it 'returns the indexes for the collection' do
      expect(indexes.documents.size).to eq(2)
    end
  end
end

mongo-ruby-driver-2.21.3/spec/mongo/operation/insert/bulk_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe Mongo::Operation::Insert do
  require_no_multi_mongos
  require_no_required_api_version

  let(:context) { Mongo::Operation::Context.new }

  before do
    begin
      authorized_collection.delete_many
    rescue Mongo::Error::OperationFailure
    end
    begin
      authorized_collection.indexes.drop_all
    rescue Mongo::Error::OperationFailure
    end
  end

  let(:documents) do
    [{ :name => 'test' }]
  end

  let(:write_concern) do
    Mongo::WriteConcern.get(w: :majority)
  end

  let(:spec) do
    { documents: documents,
      db_name: authorized_collection.database.name,
      coll_name: authorized_collection.name,
      write_concern: write_concern
    }
  end

  let(:op) do
    described_class.new(spec)
  end

  after do
    authorized_collection.delete_many
  end

  describe '#initialize' do

    context 'spec' do

      it 'sets the spec' do
        expect(op.spec).to eq(spec)
      end
    end
  end

  describe '#==' do

    context 'spec' do

      context 'when two inserts have the same specs' do

        let(:other) do
          described_class.new(spec)
        end

        it 'returns true' do
          expect(op).to eq(other)
        end
      end

      context 'when two inserts have different specs' do

        let(:other_docs) do
          [{ :bar => 1 }]
        end

        let(:other_spec) do
          { :documents => other_docs,
            :db_name => 'test',
            :coll_name => 'coll_name',
            :write_concern => { 'w' => 1 },
            :ordered => true
          }
        end

        let(:other) do
          described_class.new(other_spec)
        end

        it 'returns false' do
          expect(op).not_to eq(other)
        end
      end
    end
  end

  describe 'document ids' do

    context 'when documents do not contain an id' do

      let(:documents) do
        [{ 'field' => 'test' }, { 'field' => 'test' }]
      end

      let(:inserted_ids) do
        authorized_primary.with_connection do |connection|
          op.bulk_execute(connection, context: context).inserted_ids
        end
      end

      let(:collection_ids) do
        authorized_collection.find(field: 'test').collect { |d| d['_id'] }
      end

      it 'adds an id to the documents' do
        expect(inserted_ids).to eq(collection_ids)
      end
    end
  end

  describe '#bulk_execute' do

    before do
      authorized_collection.indexes.create_one({ name: 1 }, { unique: true })
    end

    after do
      authorized_collection.delete_many
      authorized_collection.indexes.drop_one('name_1')
    end

    context 'when inserting a single document' do

      context 'when the insert succeeds' do

        let(:response) do
          authorized_primary.with_connection do |connection|
            op.bulk_execute(connection, context: context)
          end
        end

        it 'inserts the documents into the database' do
          expect(response.written_count).to eq(1)
        end
      end
    end

    context 'when inserting multiple documents' do

      context 'when the insert succeeds' do

        let(:documents) do
          [{ name: 'test1' }, { name: 'test2' }]
        end

        let(:response) do
          authorized_primary.with_connection do |connection|
            op.bulk_execute(connection, context: context)
          end
        end

        it 'inserts the documents into the database' do
          expect(response.written_count).to eq(2)
        end
      end
    end

    context 'when the inserts are ordered' do

      let(:documents) do
        [{ name: 'test' }, { name: 'test' }, { name: 'test1' }]
      end

      let(:spec) do
        { documents: documents,
          db_name: authorized_collection.database.name,
          coll_name: authorized_collection.name,
          write_concern: write_concern,
          ordered: true
        }
      end

      let(:failing_insert) do
        described_class.new(spec)
      end

      context 'when write concern is acknowledged' do

        let(:write_concern) do
          Mongo::WriteConcern.get(w: 1)
        end

        context 'when the insert fails' do

          it 'aborts after first error' do
            authorized_primary.with_connection do |connection|
              failing_insert.bulk_execute(connection, context: context)
            end
            expect(authorized_collection.find.count).to eq(1)
          end
        end
      end

      context 'when write concern is unacknowledged' do

        let(:write_concern) do
          Mongo::WriteConcern.get(w: 0)
        end

        context 'when the insert fails' do

          it 'aborts after first error' do
            authorized_primary.with_connection do |connection|
              failing_insert.bulk_execute(connection, context: context)
            end
            expect(authorized_collection.find.count).to eq(1)
          end
        end
      end
    end

    context 'when the inserts are unordered' do

      let(:documents) do
        [{ name: 'test' }, { name: 'test' }, { name: 'test1' }]
      end

      let(:spec) do
        { documents: documents,
          db_name: authorized_collection.database.name,
          coll_name: authorized_collection.name,
          write_concern: write_concern,
          ordered: false
        }
      end

      let(:failing_insert) do
        described_class.new(spec)
      end

      context 'when write concern is acknowledged' do

        context 'when the insert fails' do

          it 'does not abort after first error' do
            authorized_primary.with_connection do |connection|
              failing_insert.bulk_execute(connection, context: context)
            end
            expect(authorized_collection.find.count).to eq(2)
          end
        end
      end

      context 'when write concern is unacknowledged' do

        let(:write_concern) do
          Mongo::WriteConcern.get(w: 0)
        end

        context 'when the insert fails' do

          it 'does not abort after first error' do
            authorized_primary.with_connection do |connection|
              failing_insert.bulk_execute(connection, context: context)
            end
            expect(authorized_collection.find.count).to eq(2)
          end
        end
      end
    end
  end
end

mongo-ruby-driver-2.21.3/spec/mongo/operation/insert/op_msg_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'
require_relative '../shared/csot/examples'

describe Mongo::Operation::Insert::OpMsg do
  include CSOT::Examples

  let(:context) { Mongo::Operation::Context.new }

  let(:documents) { [{ :_id => 1, :foo => 1 }] }
  let(:session) { nil }
  let(:spec) do
    { :documents => documents,
      :db_name => authorized_collection.database.name,
      :coll_name => authorized_collection.name,
      :write_concern => write_concern,
      :ordered => true,
      :session => session
    }
  end

  let(:write_concern) do
    Mongo::WriteConcern.get(w: :majority)
  end

  let(:op) { described_class.new(spec) }

  let(:connection) do
    double('connection').tap do |connection|
      allow(connection).to receive(:server).and_return(authorized_primary)
      allow(connection).to receive(:features).and_return(authorized_primary.features)
      allow(connection).to receive(:description).and_return(authorized_primary.description)
      allow(connection).to receive(:cluster_time).and_return(authorized_primary.cluster_time)
    end
  end

  describe '#initialize' do

    context 'spec' do

      it 'sets the spec' do
        expect(op.spec).to eq(spec)
      end
    end
  end

  describe '#==' do

    context 'spec' do

      context 'when two ops have the same specs' do
        let(:other) { described_class.new(spec) }

        it 'returns true' do
          expect(op).to eq(other)
        end
      end

      context 'when two ops have different specs' do

        let(:other_documents) { [{ :bar => 1 }] }
        let(:other_spec) do
          { :documents => other_documents,
            :db_name => authorized_collection.database.name,
            :insert => authorized_collection.name,
            :write_concern => write_concern.options,
            :ordered => true
          }
        end
        let(:other) { described_class.new(other_spec) }

        it 'returns false' do
          expect(op).not_to eq(other)
        end
      end
    end
  end

  describe 'write concern' do
    # https://jira.mongodb.org/browse/RUBY-2224
    require_no_linting

    context 'when write concern is not specified' do

      let(:spec) do
        { :documents => documents,
          :db_name => authorized_collection.database.name,
          :coll_name => authorized_collection.name,
          :ordered => true
        }
      end

      it 'does not include write concern in the selector' do
        expect(op.send(:command, connection)[:writeConcern]).to be_nil
      end
    end

    context 'when write concern is specified' do

      it 'includes write concern in the selector' do
        expect(op.send(:command, connection)[:writeConcern]).to eq(BSON::Document.new(write_concern.options))
      end
    end
  end

  describe '#message' do
    # https://jira.mongodb.org/browse/RUBY-2224
    require_no_linting

    let(:documents) do
      [ { foo: 1 }, { bar: 2 }]
    end

    let(:global_args) do
      {
        insert: TEST_COLL,
        ordered: true,
        writeConcern: write_concern.options,
        '$db' => SpecConfig.instance.test_db,
        lsid: session.session_id
      }
    end

    let!(:expected_payload_1) do
      Mongo::Protocol::Msg::Section1.new('documents', op.documents)
    end

    let(:session) do
      Mongo::Session.new(nil, authorized_client, implicit: true).tap do |session|
        allow(session).to receive(:session_id).and_return(42)
      end
    end

    context 'when the topology is replica set or sharded' do
      require_topology :replica_set, :sharded

      let(:expected_global_args) do
        global_args.merge(Mongo::Operation::CLUSTER_TIME => authorized_client.cluster.cluster_time)
      end

      it 'creates the correct OP_MSG message' do
        authorized_client.command(ping:1)
        RSpec::Mocks.with_temporary_scope do
          expect(Mongo::Protocol::Msg).to receive(:new).with([], {}, expected_global_args, expected_payload_1)
          op.send(:message, connection)
        end
      end
    end

    context 'when the topology is standalone' do
      require_topology :single

      let(:expected_global_args) do
        global_args
      end

      it 'creates the correct OP_MSG message' do
        RSpec::Mocks.with_temporary_scope do
          authorized_client.command(ping:1)
          expect(Mongo::Protocol::Msg).to receive(:new).with([], {}, expected_global_args,
            expected_payload_1)
          op.send(:message, connection)
        end
      end

      context 'when an implicit session is created and the topology is then updated and the server does not support sessions' do
        # Mocks on features are incompatible with linting
        require_no_linting

        let(:expected_global_args) do
          global_args.dup.tap do |args|
            args.delete(:lsid)
          end
        end

        before do
          session.implicit?.should be true
        end

        it 'creates the correct OP_MSG message' do
          RSpec::Mocks.with_temporary_scope do
            expect(connection.features).to receive(:sessions_enabled?).and_return(false)
            expect(expected_global_args).not_to have_key(:lsid)
            expect(Mongo::Protocol::Msg).to receive(:new).with([], {}, expected_global_args, expected_payload_1)
            op.send(:message, connection)
          end
        end
      end
    end

    context 'when the write concern is 0' do

      let(:write_concern) do
        Mongo::WriteConcern.get(w: 0)
      end

      context 'when the session is implicit' do

        let(:session) do
          Mongo::Session.new(nil, authorized_client, implicit: true).tap do |session|
            allow(session).to receive(:session_id).and_return(42)
            session.should be_implicit
          end
        end

        context 'when the topology is replica set or sharded' do
          require_topology :replica_set, :sharded

          let(:expected_global_args) do
            global_args.dup.tap do |args|
              args.delete(:lsid)
              args.merge!(Mongo::Operation::CLUSTER_TIME => authorized_client.cluster.cluster_time)
            end
          end

          it 'does not send a session id in the command' do
            authorized_client.command(ping:1)
            RSpec::Mocks.with_temporary_scope do
              expect(Mongo::Protocol::Msg).to receive(:new).with([:more_to_come], {}, expected_global_args, expected_payload_1)
              op.send(:message, connection)
            end
          end
        end

        context 'when the topology is standalone' do
          require_topology :single

          let(:expected_global_args) do
            global_args.dup.tap do |args|
              args.delete(:lsid)
            end
          end

          it 'creates the correct OP_MSG message' do
            authorized_client.command(ping:1)
            RSpec::Mocks.with_temporary_scope do
              expect(Mongo::Protocol::Msg).to receive(:new).with([:more_to_come], {}, expected_global_args, expected_payload_1)
              op.send(:message, connection)
            end
          end
        end
      end

      context 'when the session is explicit' do
        require_topology :replica_set, :sharded

        let(:session) do
          authorized_client.start_session
        end

        before do
          session.should_not be_implicit
        end

        let(:expected_global_args) do
          global_args.dup.tap do |args|
            args.delete(:lsid)
            args.merge!(Mongo::Operation::CLUSTER_TIME => authorized_client.cluster.cluster_time)
          end
        end

        it 'does not send a session id in the command' do
          authorized_client.command(ping:1)
          RSpec::Mocks.with_temporary_scope do
            expect(Mongo::Protocol::Msg).to receive(:new).with([:more_to_come], {}, expected_global_args, expected_payload_1)
            op.send(:message, connection)
          end
        end
      end
    end
  end

  it_behaves_like 'a CSOT-compliant OpMsg subclass'
end

mongo-ruby-driver-2.21.3/spec/mongo/operation/insert_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe Mongo::Operation::Insert do
  require_no_required_api_version

  let(:context) { Mongo::Operation::Context.new }

  let(:documents) do
    [{ '_id' => 1, 'name' => 'test' }]
  end

  let(:spec) do
    { :documents => documents,
      :db_name => SpecConfig.instance.test_db,
      :coll_name => TEST_COLL,
      :write_concern => Mongo::WriteConcern.get(:w => 1)
    }
  end

  after do
    authorized_collection.delete_many
  end

  let(:insert) do
    described_class.new(spec)
  end

  describe '#initialize' do

    context 'spec' do

      it 'sets the spec' do
        expect(insert.spec).to eq(spec)
      end
    end
  end

  describe '#==' do

    context 'spec' do

      context 'when two inserts have the same specs' do

        let(:other) do
          described_class.new(spec)
        end

        it 'returns true' do
          expect(insert).to eq(other)
        end
      end

      context 'when two inserts have different specs' do

        let(:other_docs) do
          [{ :bar => 1 }]
        end

        let(:other_spec) do
          { :documents => other_docs,
            :db_name => 'test',
            :coll_name => 'test_coll',
            :write_concern => { 'w' => 1 }
          }
        end

        let(:other) do
          described_class.new(other_spec)
        end

        it 'returns false' do
          expect(insert).not_to eq(other)
        end
      end
    end
  end

  describe 'document ids' do

    context 'when documents do not contain an id' do

      let(:documents) do
        [{ 'field' => 'test' }, { 'field' => 'test' }]
      end

      let(:inserted_ids) do
        insert.execute(authorized_primary, context: context).inserted_ids
      end

      let(:collection_ids) do
        authorized_collection.find(field: 'test').collect { |d| d['_id'] }
      end

      it 'adds an id to the documents' do
        expect(inserted_ids).to eq(collection_ids)
      end
    end
  end

  describe '#execute' do

    before do
      authorized_collection.indexes.create_one({ name: 1 }, { unique: true })
    end

    after do
      authorized_collection.delete_many
      authorized_collection.indexes.drop_one('name_1')
    end

    context 'when inserting a single document' do

      context 'when the insert succeeds' do

        let!(:response) do
          insert.execute(authorized_primary, context: context)
        end

        it 'reports the correct written count' do
          expect(response.written_count).to eq(1)
        end

        it 'inserts the document into the collection' do
          expect(authorized_collection.find(_id: 1).to_a).
            to eq(documents)
        end
      end

      context 'when the insert fails' do

        let(:documents) do
          [{ name: 'test' }]
        end

        let(:spec) do
          { :documents => documents,
            :db_name => SpecConfig.instance.test_db,
            :coll_name => TEST_COLL,
            :write_concern => Mongo::WriteConcern.get(:w => 1)
          }
        end

        let(:failing_insert) do
          described_class.new(spec)
        end

        it 'raises an error' do
          expect {
            failing_insert.execute(authorized_primary, context: context)
            failing_insert.execute(authorized_primary, context: context)
          }.to raise_error(Mongo::Error::OperationFailure)
        end
      end
    end

    context 'when inserting multiple documents' do

      context 'when the insert succeeds' do

        let(:documents) do
          [{ '_id' => 1, 'name' => 'test1' },
           { '_id' => 2, 'name' => 'test2' }]
        end

        let!(:response) do
          insert.execute(authorized_primary, context: context)
        end

        it 'reports the correct written count' do
          expect(response.written_count).to eq(2)
        end

        it 'inserts the documents into the collection' do
          expect(authorized_collection.find.sort(_id: 1).to_a).
to eq(documents) end end context 'when the insert fails on the last document' do let(:documents) do [{ name: 'test3' }, { name: 'test' }] end let(:spec) do { :documents => documents, :db_name => SpecConfig.instance.test_db, :coll_name => TEST_COLL, :write_concern => Mongo::WriteConcern.get(:w => 1) } end let(:failing_insert) do described_class.new(spec) end it 'raises an error' do expect { failing_insert.execute(authorized_primary, context: context) failing_insert.execute(authorized_primary, context: context) }.to raise_error(Mongo::Error::OperationFailure) end end context 'when the insert fails on the first document' do let(:documents) do [{ name: 'test' }, { name: 'test4' }] end let(:spec) do { :documents => documents, :db_name => SpecConfig.instance.test_db, :coll_name => TEST_COLL, :write_concern => Mongo::WriteConcern.get(:w => 1) } end let(:failing_insert) do described_class.new(spec) end it 'raises an error' do expect { failing_insert.execute(authorized_primary, context: context) failing_insert.execute(authorized_primary, context: context) }.to raise_error(Mongo::Error::OperationFailure) end end context 'when a document exceeds max bson size' do let(:documents) do [{ :x => 'y'* 17000000 }] end it 'raises an error' do expect { insert.execute(authorized_primary, context: context) }.to raise_error(Mongo::Error::MaxBSONSize) end it 'does not insert the document' do expect { insert.execute(authorized_primary, context: context) }.to raise_error(Mongo::Error::MaxBSONSize) expect(authorized_collection.find.count).to eq(0) end end end end end mongo-ruby-driver-2.21.3/spec/mongo/operation/limited_spec.rb000066400000000000000000000022331505113246500242270ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe Mongo::Operation::Limited do describe '#options' do let(:limited) do Class.new do include Mongo::Operation::Specifiable include Mongo::Operation::Limited end.new({ :options => spec }) end let(:server) { double('server') } context 'when no limit is provided' do let(:spec) do { :skip => 5 } end it 'returns a limit of -1' do expect(limited.send(:options, server)).to eq({ :skip => 5, :limit => -1 }) end end context 'when a limit is already provided' do context 'when the limit is -1' do let(:spec) do { :skip => 5, :limit => -1 } end it 'returns a limit of -1' do expect(limited.send(:options, server)).to eq({ :skip => 5, :limit => -1 }) end end context 'when the limit is not -1' do let(:spec) do { :skip => 5, :limit => 5 } end it 'returns a limit of -1' do expect(limited.send(:options, server)).to eq({ :skip => 5, :limit => -1 }) end end end end end mongo-ruby-driver-2.21.3/spec/mongo/operation/map_reduce_spec.rb000066400000000000000000000046751505113246500247200ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe Mongo::Operation::MapReduce do require_no_required_api_version let(:context) { Mongo::Operation::Context.new } let(:map) do %Q{ function() { emit(this.name, { population: this.population }); }} end let(:reduce) do %Q{ function(key, values) { var result = { population: 0 }; values.forEach(function(value) { result.population += value.population; }); return result; }} end let(:options) do {} end let(:selector) do { :mapreduce => TEST_COLL, :map => map, :reduce => reduce, :query => {}, :out => { inline: 1 } } end let(:spec) do { :selector => selector, :options => options, :db_name => SpecConfig.instance.test_db } end let(:op) do described_class.new(spec) end describe '#initialize' 
do context 'spec' do it 'sets the spec' do expect(op.spec).to be(spec) end end end describe '#==' do context ' when two ops have different specs' do let(:other_selector) do { :mapreduce => 'other_test_coll', :map => '', :reduce => '', } end let(:other_spec) do { :selector => other_selector, :options => {}, :db_name => SpecConfig.instance.test_db, } end let(:other) { described_class.new(other_spec) } it 'returns false' do expect(op).not_to eq(other) end end end describe '#execute' do let(:documents) do [ { name: 'Berlin', population: 3000000 }, { name: 'London', population: 9000000 } ] end before do authorized_collection.insert_many(documents) end after do authorized_collection.delete_many end context 'when the map/reduce succeeds' do let(:response) do op.execute(authorized_primary, context: context) end it 'returns the response' do expect(response).to be_successful end end context 'when the map/reduce fails' do let(:selector) do { :mapreduce => TEST_COLL, :map => map, :reduce => reduce, :query => {} } end it 'raises an exception' do expect { op.execute(authorized_primary, context: context) }.to raise_error(Mongo::Error::OperationFailure) end end end end mongo-ruby-driver-2.21.3/spec/mongo/operation/read_preference_legacy_spec.rb000066400000000000000000000232571505113246500272460ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe Mongo::Operation::ReadPreferenceSupported do let(:selector) do { name: 'test' } end let(:options) do {} end let(:cluster) do double('cluster').tap do |cluster| allow(cluster).to receive(:single?).and_return(single?) end end let(:operation) do Class.new do include Mongo::Operation::ReadPreferenceSupported end.new.tap do |op| allow(op).to receive(:read).and_return(read_pref) allow(op).to receive(:selector).and_return(selector) allow(op).to receive(:options).and_return(options) end end let(:description) do double('description').tap do |description| allow(description).to receive(:mongos?).and_return(mongos?) allow(description).to receive(:standalone?).and_return(standalone?) # TODO consider adding tests for load-balanced topologies also allow(description).to receive(:load_balancer?).and_return(false) end end let(:server) do double('server').tap do |server| allow(server).to receive(:cluster).and_return(cluster) # TODO consider adding tests for load-balanced topologies also allow(server).to receive(:load_balancer?).and_return(false) end end let(:connection) do double('connection').tap do |connection| allow(connection).to receive(:server).and_return(server) allow(connection).to receive(:description).and_return(description) end end describe '#add_secondary_ok_flag?' 
  describe '#add_secondary_ok_flag?' do

    let(:actual) do
      operation.send(:add_secondary_ok_flag?, connection)
    end

    shared_examples_for 'sets the secondary_ok flag as expected' do
      it 'sets the secondary_ok flag as expected' do
        expect(actual).to eq(expected)
      end
    end

    shared_examples_for 'never sets secondary_ok' do

      let(:expected) { false }

      context 'when no read preference is specified' do
        let(:read_pref) { Mongo::ServerSelector.get }

        it_behaves_like 'sets the secondary_ok flag as expected'
      end

      context 'when primary read preference is specified' do
        let(:read_pref) { Mongo::ServerSelector.get(:mode => :primary) }

        it_behaves_like 'sets the secondary_ok flag as expected'
      end

      context 'when secondary read preference is specified' do
        let(:read_pref) { Mongo::ServerSelector.get(:mode => :secondary) }

        it_behaves_like 'sets the secondary_ok flag as expected'
      end
    end

    shared_examples_for 'always sets secondary_ok' do

      let(:expected) { true }

      context 'when no read preference is specified' do
        let(:read_pref) { Mongo::ServerSelector.get }

        it_behaves_like 'sets the secondary_ok flag as expected'
      end

      context 'when primary read preference is specified' do
        let(:read_pref) { Mongo::ServerSelector.get(:mode => :primary) }

        it_behaves_like 'sets the secondary_ok flag as expected'
      end

      context 'when secondary read preference is specified' do
        let(:read_pref) { Mongo::ServerSelector.get(:mode => :secondary) }

        it_behaves_like 'sets the secondary_ok flag as expected'
      end
    end

    shared_examples_for 'sets secondary_ok if read preference is specified and is not primary' do

      context 'when there is no read preference set' do
        let(:read_pref) { Mongo::ServerSelector.get }
        let(:expected) { false }

        it_behaves_like 'sets the secondary_ok flag as expected'
      end

      context 'when there is a read preference' do

        context 'when the read preference requires the secondary_ok flag' do
          let(:read_pref) { Mongo::ServerSelector.get(:mode => :secondary) }
          let(:expected) { true }

          it_behaves_like 'sets the secondary_ok flag as expected'
        end

        context 'when the read preference does not require the secondary_ok flag' do
          let(:read_pref) { Mongo::ServerSelector.get(:mode => :primary) }
          let(:expected) { false }

          it_behaves_like 'sets the secondary_ok flag as expected'
        end
      end
    end

    context 'when the topology is Single' do

      let(:single?) { true }
      let(:mongos?) { false }

      context 'when the server is a standalone' do
        let(:standalone?) { true }

        it_behaves_like 'never sets secondary_ok'
      end

      context 'when the server is a mongos' do
        let(:standalone?) { false }
        let(:mongos?) { true }

        it_behaves_like 'always sets secondary_ok'
      end

      context 'when the server is a replica set member' do
        let(:standalone?) { false }
        let(:mongos?) { false }

        it_behaves_like 'always sets secondary_ok'
      end
    end

    context 'when the topology is not Single' do

      let(:single?) { false }
      let(:mongos?) { false }

      context 'when the server is a standalone' do
        let(:standalone?) { true }

        it_behaves_like 'never sets secondary_ok'
      end

      context 'when the server is a mongos' do
        let(:standalone?) { false }
        let(:mongos?) { true }

        it_behaves_like 'sets secondary_ok if read preference is specified and is not primary'
      end

      context 'when the server is a replica set member' do
        let(:standalone?) { false }
        let(:mongos?) { false }

        it_behaves_like 'sets secondary_ok if read preference is specified and is not primary'
      end
    end
  end

  describe '#add_read_preference_legacy' do

    let(:read_pref) do
      Mongo::ServerSelector.get(:mode => mode)
    end

    # Behavior of sending $readPreference is the same regardless of topology.
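    # Illustrative example of the behavior specified below: a selector of
    # { name: 'test' } sent to a mongos with a :secondary read preference
    # becomes
    #   { :$query => { name: 'test' }, :$readPreference => { mode: 'secondary' } }
    # while primary and default reads leave the selector untouched.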
    shared_examples_for '$readPreference in the command' do

      let(:actual) do
        operation.send(:add_read_preference_legacy, operation.send(:selector), connection)
      end

      let(:expected_read_preference) do
        { mode: mode.to_s.gsub(/_(.)/) { $1.upcase } }
      end

      shared_examples_for 'adds read preference moving existing contents to $query' do

        let(:expected) do
          { :$query => selector, :$readPreference => expected_read_preference }
        end

        it 'moves existing selector contents under $query and adds read preference' do
          expect(actual).to eq(expected)
        end

        context 'when the selector already has $query in it' do

          let(:selector) do
            { :$query => { :name => 'test' },
              :$orderby => { :name => -1 } }
          end

          let(:expected) do
            selector.merge(:$readPreference => expected_read_preference)
          end

          it 'keeps existing $query and adds read preference' do
            expect(actual).to eq(expected)
          end
        end
      end

      shared_examples_for 'does not modify selector' do
        it 'does not modify selector' do
          expect(actual).to eq(selector)
        end
      end

      shared_examples_for 'does not send read preference' do
        ([nil] + %i(primary primary_preferred secondary secondary_preferred nearest)).each do |_mode|
          active_mode = _mode

          context "when read preference mode is #{active_mode}" do
            let(:mode) { active_mode }

            it_behaves_like 'does not modify selector'
          end
        end
      end

      context 'when the server is a standalone' do
        let(:standalone?) { true }
        let(:mongos?) { false }

        it_behaves_like 'does not send read preference'
      end

      context 'when the server is a mongos' do
        let(:standalone?) { false }
        let(:mongos?) { true }

        context 'when the read preference mode is nil' do
          let(:mode) { nil }

          it_behaves_like 'does not modify selector'
        end

        context 'when the read preference mode is primary' do
          let(:mode) { :primary }

          it_behaves_like 'does not modify selector'
        end

        context 'when the read preference mode is primary_preferred' do
          let(:mode) { :primary_preferred }

          it_behaves_like 'adds read preference moving existing contents to $query'
        end

        context 'when the read preference mode is secondary' do
          let(:mode) { :secondary }

          it_behaves_like 'adds read preference moving existing contents to $query'
        end

        context 'when the read preference mode is secondary_preferred' do
          let(:mode) { :secondary_preferred }

          it_behaves_like 'does not modify selector'

          context 'when there are fields in the selector besides :mode' do
            let(:read_pref) do
              Mongo::ServerSelector.get(:mode => mode, tag_sets: ['dc' => 'nyc'])
            end

            let(:expected_read_preference) do
              { mode: mode.to_s.gsub(/_(.)/) { $1.upcase }, tags: ['dc' => 'nyc'] }
            end

            it_behaves_like 'adds read preference moving existing contents to $query'
          end
        end

        context 'when the read preference mode is nearest' do
          let(:mode) { :nearest }

          it_behaves_like 'adds read preference moving existing contents to $query'
        end
      end

      context 'when the server is a replica set member' do
        let(:standalone?) { false }
        let(:mongos?) { false }

        # $readPreference is not sent to replica set nodes running legacy
        # servers - the allowance of secondary reads is handled by secondary_ok
        # flag.
        it_behaves_like 'does not send read preference'
      end
    end

    context 'in single topology' do
      let(:single?) { true }

      it_behaves_like '$readPreference in the command'
    end

    context 'not in single topology' do
      let(:single?) { false }

      it_behaves_like '$readPreference in the command'
    end
  end
end

mongo-ruby-driver-2.21.3/spec/mongo/operation/read_preference_op_msg_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe Mongo::Operation::SessionsSupported do
  # https://jira.mongodb.org/browse/RUBY-2224
  require_no_linting

  let(:selector) do
    BSON::Document.new(name: 'test')
  end

  let(:options) do
    {}
  end

  let(:cluster) do
    double('cluster').tap do |cluster|
      allow(cluster).to receive(:single?).and_return(single?)
    end
  end

  let(:operation) do
    Class.new do
      include Mongo::Operation::SessionsSupported
    end.new.tap do |op|
      allow(op).to receive(:read).and_return(read_pref)
      allow(op).to receive(:selector).and_return(selector)
      allow(op).to receive(:options).and_return(options)
    end
  end

  let(:description) do
    double('description').tap do |description|
      allow(description).to receive(:mongos?).and_return(mongos?)
      allow(description).to receive(:standalone?).and_return(standalone?)
    end
  end

  let(:server) do
    double('server').tap do |server|
      allow(server).to receive(:cluster).and_return(cluster)
      # TODO consider adding tests for load-balanced topologies also
      allow(server).to receive(:load_balancer?).and_return(false)
    end
  end

  let(:connection) do
    double('connection').tap do |connection|
      allow(connection).to receive(:server).and_return(server)
      allow(connection).to receive(:description).and_return(description)
    end
  end

  describe '#add_read_preference' do

    let(:read_pref) do
      Mongo::ServerSelector.get(:mode => mode)
    end

    let(:actual) do
      sel = operation.send(:selector).dup
      operation.send(:add_read_preference, sel, connection)
      sel
    end

    let(:expected_read_preference) do
      { mode: mode.to_s.gsub(/_(.)/) { $1.upcase } }
    end

    shared_examples_for 'adds read preference' do

      let(:expected) do
        selector.merge(:$readPreference => expected_read_preference)
      end

      it 'adds read preference' do
        expect(actual).to eq(expected)
      end
    end

    shared_examples_for 'does not modify selector' do
      it 'does not modify selector' do
        expect(actual).to eq(selector)
      end
    end

    shared_examples_for 'does not send read preference' do
      ([nil] + %i(primary primary_preferred secondary secondary_preferred nearest)).each do |_mode|
        active_mode = _mode

        context "when read preference mode is #{active_mode}" do
          let(:mode) { active_mode }

          it_behaves_like 'does not modify selector'
        end
      end
    end

    shared_examples_for 'sends read preference correctly for replica set' do

      context "when read preference mode is primary" do
        let(:mode) { :primary }

        it_behaves_like 'does not modify selector'
      end

      %i(primary_preferred secondary secondary_preferred nearest).each do |_mode|
        active_mode = _mode

        context "when read preference mode is #{active_mode}" do
          let(:mode) { active_mode }

          let(:expected) do
            selector.merge(:$readPreference => expected_read_preference)
          end

          it 'adds read preference' do
            expect(actual).to eq(expected)
          end
        end
      end
    end

    shared_examples_for 'sends user-specified read preference' do

      %i(primary primary_preferred secondary secondary_preferred nearest).each do |_mode|
        active_mode = _mode

        context "when read preference mode is #{active_mode}" do
          let(:mode) { active_mode }

          it_behaves_like 'adds read preference'
        end
      end

      context "when read preference mode is nil" do
        let(:mode) { nil }

        let(:expected_read_preference) do
          { mode: 'primary' }
        end

        it_behaves_like 'adds read preference'
      end
    end
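    # With a direct connection to a replica set member, reads must still be
    # able to run if that member is a secondary, so the examples below expect
    # a :primary or missing read preference to be upgraded to
    # { mode: 'primaryPreferred' }.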
    shared_examples_for 'changes read preference to allow secondary reads' do

      %i(primary_preferred secondary secondary_preferred nearest).each do |_mode|
        active_mode = _mode

        context "when read preference mode is #{active_mode}" do
          let(:mode) { active_mode }

          it_behaves_like 'adds read preference'
        end
      end

      context "when read preference mode is primary" do
        let(:mode) { :primary }

        let(:expected_read_preference) do
          { mode: 'primaryPreferred' }
        end

        it_behaves_like 'adds read preference'
      end

      context "when read preference mode is nil" do
        let(:mode) { nil }

        let(:expected_read_preference) do
          { mode: 'primaryPreferred' }
        end

        it_behaves_like 'adds read preference'
      end
    end
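    # mongos receives the user-specified $readPreference verbatim for
    # non-primary modes; tag sets (and, in the hedged-read contexts further
    # below, hedge documents) are expected to pass through unchanged.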
    shared_examples_for 'sends read preference correctly for mongos' do

      %i(primary_preferred secondary nearest).each do |_mode|
        active_mode = _mode

        context "when read preference mode is #{active_mode}" do
          let(:mode) { active_mode }

          it_behaves_like 'adds read preference'
        end
      end

      context 'when read preference mode is primary' do
        let(:mode) { 'primary' }

        it_behaves_like 'does not modify selector'
      end

      context 'when read preference mode is secondary_preferred' do
        let(:mode) { 'secondary_preferred' }

        let(:read_pref) do
          Mongo::ServerSelector.get(mode: mode, tag_sets: tag_sets)
        end

        let(:tag_sets) { nil }

        context 'without tag_sets specified' do
          it_behaves_like 'adds read preference'
        end

        context 'with empty tag_sets' do
          let(:tag_sets) { [] }

          it_behaves_like 'adds read preference'
        end

        context 'with tag_sets specified' do
          let(:tag_sets) { [{ dc: 'ny' }] }

          let(:expected_read_preference) do
            { mode: 'secondaryPreferred', tags: tag_sets }
          end

          it_behaves_like 'adds read preference'
        end
      end
    end

    context 'in single topology' do
      let(:single?) { true }

      context 'when the server is a standalone' do
        let(:standalone?) { true }
        let(:mongos?) { false }

        it_behaves_like 'does not send read preference'
      end

      context 'when the server is a mongos' do
        let(:standalone?) { false }
        let(:mongos?) { true }

        it_behaves_like 'sends read preference correctly for mongos'
      end

      context 'when the server is a replica set member' do
        let(:standalone?) { false }
        let(:mongos?) { false }

        it_behaves_like 'changes read preference to allow secondary reads'
      end
    end

    context 'not in single topology' do
      let(:single?) { false }

      context 'when the server is a standalone' do
        let(:standalone?) { true }
        let(:mongos?) { false }

        it_behaves_like 'does not send read preference'
      end

      context 'when the server is a mongos' do
        let(:standalone?) { false }
        let(:mongos?) { true }

        it_behaves_like 'sends read preference correctly for mongos'

        context 'when read preference mode is secondary_preferred' do
          let(:read_pref) do
            Mongo::ServerSelector.get(
              mode: mode,
              tag_sets: tag_sets,
              hedge: hedge
            )
          end

          let(:mode) { 'secondary_preferred' }
          let(:tag_sets) { nil }
          let(:hedge) { nil }

          context 'when tag_sets and hedge are not specified' do
            it_behaves_like 'adds read preference'
          end

          context 'when tag_sets are specified' do
            let(:tag_sets) { [{ dc: 'ny' }] }

            let(:expected_read_preference) do
              { mode: 'secondaryPreferred', tags: tag_sets }
            end

            it_behaves_like 'adds read preference'
          end

          context 'when hedge is specified' do
            let(:hedge) { { enabled: true } }

            let(:expected_read_preference) do
              { mode: 'secondaryPreferred', hedge: hedge }
            end

            it_behaves_like 'adds read preference'
          end

          context 'when hedge and tag_sets are specified' do
            let(:hedge) { { enabled: true } }
            let(:tag_sets) { [{ dc: 'ny' }] }

            let(:expected_read_preference) do
              { mode: 'secondaryPreferred', tags: tag_sets, hedge: hedge }
            end

            it_behaves_like 'adds read preference'
          end
        end
      end

      context 'when the server is a replica set member' do
        let(:standalone?) { false }
        let(:mongos?) { false }

        it_behaves_like 'sends read preference correctly for replica set'
      end
    end
  end
end

mongo-ruby-driver-2.21.3/spec/mongo/operation/remove_user_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe Mongo::Operation::RemoveUser do
  require_no_required_api_version

  let(:context) { Mongo::Operation::Context.new }

  describe '#execute' do

    before do
      users = root_authorized_client.database.users
      if users.info('durran').any?
        users.remove('durran')
      end
      users.create(
        'durran',
        password: 'password',
        roles: [ Mongo::Auth::Roles::READ_WRITE ]
      )
    end

    let(:operation) do
      described_class.new(user_name: 'durran', db_name: SpecConfig.instance.test_db)
    end

    context 'when user removal was successful' do

      let!(:response) do
        operation.execute(root_authorized_primary, context: context)
      end

      it 'removes the user from the database' do
        expect(response).to be_successful
      end
    end

    context 'when removal was not successful' do

      before do
        operation.execute(root_authorized_primary, context: context)
      end

      it 'raises an exception' do
        expect {
          operation.execute(root_authorized_primary, context: context)
        }.to raise_error(Mongo::Error::OperationFailure)
      end
    end
  end
end

mongo-ruby-driver-2.21.3/spec/mongo/operation/result_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe Mongo::Operation::Result do

  let(:description) do
    Mongo::Server::Description.new(
      double('description address'),
      { 'minWireVersion' => 0, 'maxWireVersion' => 2 }
    )
  end

  let(:result) do
    described_class.new(reply, description)
  end

  let(:cursor_id) { 0 }
  let(:documents) { [] }
  let(:flags) { [] }
  let(:starting_from) { 0 }

  let(:reply) do
    Mongo::Protocol::Reply.new.tap do |reply|
      reply.instance_variable_set(:@flags, flags)
      reply.instance_variable_set(:@cursor_id, cursor_id)
      reply.instance_variable_set(:@starting_from, starting_from)
      reply.instance_variable_set(:@number_returned, documents.size)
      reply.instance_variable_set(:@documents, documents)
    end
  end

  describe '#acknowledged?' do

    context 'when the reply is for a read command' do

      let(:documents) do
        [{ 'isWritablePrimary' => true, 'ok' => 1.0 }]
      end

      it 'returns true' do
        expect(result).to be_acknowledged
      end
    end

    context 'when the reply is for a write command' do

      context 'when the command was acknowledged' do

        let(:documents) do
          [{ "ok" => 1, "n" => 2 }]
        end

        it 'returns true' do
          expect(result).to be_acknowledged
        end
      end

      context 'when the command was not acknowledged' do

        let(:reply) { nil }

        it 'returns false' do
          expect(result).to_not be_acknowledged
        end
      end
    end
  end

  describe '#cursor_id' do

    context 'when the reply exists' do

      let(:cursor_id) { 5 }

      it 'delegates to the reply' do
        expect(result.cursor_id).to eq(5)
      end
    end

    context 'when the reply does not exist' do

      let(:reply) { nil }

      it 'returns zero' do
        expect(result.cursor_id).to eq(0)
      end
    end
  end
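  # In the examples below, a nil reply models an unacknowledged write: the
  # accessors fall back to zero/empty values instead of raising.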
  describe '#has_cursor_id?' do

    context 'when the reply exists' do

      let(:cursor_id) { 5 }

      it 'returns true' do
        expect(result).to have_cursor_id
      end
    end

    context 'when the reply does not exist' do

      let(:reply) { nil }

      it 'returns false' do
        expect(result).not_to have_cursor_id
      end
    end
  end

  describe '#documents' do

    context 'when the result is for a command' do

      context 'when a reply is received' do

        let(:documents) do
          [{ "ok" => 1, "n" => 2 }]
        end

        it 'returns the documents' do
          expect(result.documents).to eq(documents)
        end
      end

      context 'when a reply is not received' do

        let(:reply) { nil }

        it 'returns an empty array' do
          expect(result.documents).to be_empty
        end
      end
    end
  end

  describe '#each' do

    let(:documents) do
      [{ "ok" => 1, "n" => 2 }]
    end

    context 'when a block is given' do

      it 'yields to each document' do
        result.each do |document|
          expect(document).to eq(documents.first)
        end
      end
    end

    context 'when no block is given' do

      it 'returns an enumerator' do
        expect(result.each).to be_a(Enumerator)
      end
    end
  end

  describe '#initialize' do

    it 'sets the replies' do
      expect(result.replies).to eq([ reply ])
    end
  end

  describe '#returned_count' do

    context 'when the reply is for a read command' do

      let(:documents) do
        [{ 'hello' => true, 'ok' => 1.0 }]
      end

      it 'returns the number returned' do
        expect(result.returned_count).to eq(1)
      end
    end

    context 'when the reply is for a write command' do

      context 'when the write is acknowledged' do

        let(:documents) do
          [{ "ok" => 1, "n" => 2 }]
        end

        it 'returns the number returned' do
          expect(result.returned_count).to eq(1)
        end
      end

      context 'when the write is not acknowledged' do

        let(:reply) { nil }

        it 'returns zero' do
          expect(result.returned_count).to eq(0)
        end
      end
    end
  end

  describe '#successful?' do

    context 'when the reply is for a read command' do

      let(:documents) do
        [{ 'ismaster' => true, 'ok' => 1.0 }]
      end

      it 'returns true' do
        expect(result).to be_successful
      end
    end

    context 'when the reply is for a query' do

      context 'when the query has no errors' do

        let(:documents) do
          [{ 'field' => 'name' }]
        end

        it 'returns true' do
          expect(result).to be_successful
        end
      end

      context 'when the query has errors' do

        let(:documents) do
          [{ '$err' => 'not authorized for query on test.system.namespaces', 'code' => 16550 }]
        end

        it 'returns false' do
          expect(result).to_not be_successful
        end
      end

      context 'when the query reply has the cursor_not_found flag set' do

        let(:flags) do
          [ :cursor_not_found ]
        end

        let(:documents) do
          []
        end

        it 'returns false' do
          expect(result).to_not be_successful
        end
      end
    end

    context 'when the reply is for a write command' do

      context 'when the write is acknowledged' do

        context 'when ok is 1' do

          let(:documents) do
            [{ "ok" => 1, "n" => 2 }]
          end

          it 'returns true' do
            expect(result).to be_successful
          end
        end

        context 'when ok is not 1' do

          let(:documents) do
            [{ "ok" => 0, "n" => 0 }]
          end

          it 'returns false' do
            expect(result).to_not be_successful
          end
        end
      end

      context 'when the write is not acknowledged' do

        let(:reply) { nil }

        it 'returns true' do
          expect(result).to be_successful
        end
      end
    end

    context 'when there is a write concern error' do

      let(:documents) do
        [{ 'ok' => 1.0, 'writeConcernError' => {
          'code' => 91, 'errmsg' => 'Replication is being shut down' } }]
      end

      it 'is false' do
        expect(result).not_to be_successful
      end
    end
  end
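  # written_count is taken from the server's "n" field on write replies;
  # read replies carry no such field, so the count is zero.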
  describe '#written_count' do

    context 'when the reply is for a read command' do

      let(:documents) do
        [{ 'ismaster' => true, 'ok' => 1.0 }]
      end

      it 'returns the number written' do
        expect(result.written_count).to eq(0)
      end
    end

    context 'when the reply is for a write command' do

      let(:documents) do
        [{ "ok" => 1, "n" => 2 }]
      end

      it 'returns the number written' do
        expect(result.written_count).to eq(2)
      end
    end
  end

  context 'when there is a top-level Result class defined' do

    let(:client) do
      new_local_client(SpecConfig.instance.addresses, SpecConfig.instance.test_options)
    end

    before do
      class Result
        def get_result(client)
          client.database.command(:ping => 1)
        end
      end
    end

    let(:result) do
      Result.new.get_result(client)
    end

    it 'uses the Result class of the operation' do
      expect(result).to be_a(Mongo::Operation::Result)
    end
  end

  describe '#validate!' do

    context 'when there is a write concern error' do

      let(:documents) do
        [{ 'ok' => 1.0, 'writeConcernError' => {
          'code' => 91, 'errmsg' => 'Replication is being shut down' } }]
      end

      it 'raises OperationFailure' do
        expect do
          result.validate!
        end.to raise_error(Mongo::Error::OperationFailure, /\[91\]: Replication is being shut down/)
      end
    end
  end
end

mongo-ruby-driver-2.21.3/spec/mongo/operation/shared/csot/examples.rb

# frozen_string_literal: true
# rubocop:todo all

module CSOT
  module Examples
    # expects the following values to be available:
    # `op` -- an instance of an OpMsgBase subclass
    def self.included(example_context)
      example_context.shared_examples 'mock CSOT environment' do
        # Linting freaks out because of the doubles used in these specs.
        require_no_linting

        let(:message) { op.send(:message, connection) }
        let(:body) { message.documents.first }

        let(:cursor_type) { nil }
        let(:timeout_mode) { nil }
        let(:remaining_timeout_sec) { nil }
        let(:minimum_round_trip_time) { 0 }
        let(:view_options) { {} }
        let(:max_await_time_ms) { nil }

        let(:view) do
          instance_double(Mongo::Collection::View).tap do |view|
            allow(view).to receive(:cursor_type).and_return(cursor_type)
            allow(view).to receive(:timeout_mode).and_return(timeout_mode)
            allow(view).to receive(:options).and_return(view_options)
            allow(view).to receive(:max_await_time_ms).and_return(max_await_time_ms)
          end
        end

        let(:context) do
          Mongo::Operation::Context.new(view: view).tap do |context|
            allow(context).to receive(:remaining_timeout_sec).and_return(remaining_timeout_sec)
            allow(context).to receive(:timeout?).and_return(!remaining_timeout_sec.nil?)
          end
        end

        let(:server) do
          instance_double(Mongo::Server).tap do |server|
            allow(server).to receive(:minimum_round_trip_time).and_return(minimum_round_trip_time)
          end
        end

        let(:address) { Mongo::Address.new('127.0.0.1') }

        let(:description) do
          Mongo::Server::Description.new(
            address, { Mongo::Operation::Result::OK => 1 }
          )
        end

        let(:features) do
          Mongo::Server::Description::Features.new(
            Mongo::Server::Description::Features::DRIVER_WIRE_VERSIONS,
            address
          )
        end

        let(:connection) do
          instance_double(Mongo::Server::Connection).tap do |conn|
            allow(conn).to receive(:server).and_return(server)
            allow(conn).to receive(:description).and_return(description)
            allow(conn).to receive(:features).and_return(features)
          end
        end

        before do
          # context is normally set when calling `execute` on the operation,
          # but since we're not doing that, we have to tell the operation
          # what the context is.
          op.context = context
        end
      end
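      # CSOT = client-side operation timeouts. The examples below verify the
      # maxTimeMS calculation: the remaining timeout budget minus the
      # connection's minimum round-trip time, converted to milliseconds.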
      example_context.shared_examples 'a CSOT-compliant OpMsg subclass' do
        include_examples 'mock CSOT environment'

        context 'when no timeout_ms set' do
          it 'does not set maxTimeMS' do
            expect(body.key?(:maxTimeMS)).to be false
          end
        end

        context 'when there is enough time to send the message' do
          # Ten seconds remaining
          let(:remaining_timeout_sec) { 10 }
          # One second RTT
          let(:minimum_round_trip_time) { 1 }

          it 'sets the maxTimeMS' do
            # Nine seconds
            expect(body[:maxTimeMS]).to eq(9_000)
          end
        end

        context 'when there is not enough time to send the message' do
          # A tenth of a second remaining
          let(:remaining_timeout_sec) { 0.1 }
          # One second RTT
          let(:minimum_round_trip_time) { 1 }

          it 'fails with an exception' do
            expect { message }.to raise_error(Mongo::Error::TimeoutError)
          end
        end
      end
    end
  end
end

mongo-ruby-driver-2.21.3/spec/mongo/operation/specifiable_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe Mongo::Operation::Specifiable do

  let(:spec) do
    {}
  end

  let(:specifiable) do
    Class.new do
      include Mongo::Operation::Specifiable
    end.new(spec)
  end

  describe '#==' do

    context 'when the other object is a specifiable' do

      context 'when the specs are equal' do

        let(:other) do
          Class.new do
            include Mongo::Operation::Specifiable
          end.new(spec)
        end

        it 'returns true' do
          expect(specifiable).to eq(other)
        end
      end

      context 'when the specs are not equal' do

        let(:other) do
          Class.new do
            include Mongo::Operation::Specifiable
          end.new({ :db_name => 'test' })
        end

        it 'returns false' do
          expect(specifiable).to_not eq(other)
        end
      end
    end

    context 'when the other object is not a specifiable' do

      it 'returns false' do
        expect(specifiable).to_not eq('test')
      end
    end
  end

  describe '#read' do

    context 'when read is specified' do

      let(:spec) do
        { read: { mode: :secondary } }
      end

      let(:server_selector) do
        Mongo::ServerSelector.get(spec[:read])
      end

      it 'converts the read option to a ServerSelector' do
        expect(specifiable.read).to be_a(Mongo::ServerSelector::Secondary)
      end

      it 'uses the read option provided' do
        expect(specifiable.read).to eq(server_selector)
      end
    end

    context 'when read is not specified' do

      it 'returns nil' do
        expect(specifiable.read).to be_nil
      end
    end
  end
end

mongo-ruby-driver-2.21.3/spec/mongo/operation/update/bulk_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe Mongo::Operation::Update do
  require_no_multi_mongos
  require_no_required_api_version

  let(:context) { Mongo::Operation::Context.new }

  let(:documents) do
    [{ :q => { :foo => 1 },
       :u => { :$set => { :bar => 1 } },
       :multi => true,
       :upsert => false }]
  end

  let(:spec) do
    { updates: documents,
      db_name: authorized_collection.database.name,
      coll_name: authorized_collection.name,
      write_concern: write_concern,
      ordered: true
    }
  end

  let(:write_concern) do
    Mongo::WriteConcern.get(w: :majority)
  end

  let(:op) do
    described_class.new(spec)
  end

  describe '#initialize' do

    context 'spec' do

      it 'sets the spec' do
        expect(op.spec).to eq(spec)
      end
    end
  end

  describe '#==' do

    context 'spec' do

      context 'when two ops have the same specs' do

        let(:other) { described_class.new(spec) }

        it 'returns true' do
          expect(op).to eq(other)
        end
      end

      context 'when two ops have different specs' do

        let(:other_docs) do
          [{ :q => { :foo => 1 },
             :u => { :$set => { :bar => 1 } },
             :multi => true,
             :upsert => true }]
        end

        let(:other_spec) do
          { updates: other_docs,
            db_name: authorized_collection.database.name,
            coll_name: authorized_collection.name,
            write_concern: write_concern,
            ordered: true
          }
        end

        let(:other) { described_class.new(other_spec) }

        it 'returns false' do
          expect(op).not_to eq(other)
        end
      end
    end
  end
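  # Unlike #execute, #bulk_execute takes an already checked-out connection
  # and does not raise on write errors; the ordered flag decides whether
  # the remaining updates still run after a failure, as the examples below
  # demonstrate for acknowledged and unacknowledged write concerns.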
  describe '#bulk_execute' do

    before do
      authorized_collection.drop
      authorized_collection.insert_many([
        { name: 'test', field: 'test', other: 'test' },
        { name: 'testing', field: 'test', other: 'test' }
      ])
    end

    after do
      authorized_collection.delete_many
    end

    context 'when updating a single document' do

      context 'when the update passes' do

        let(:documents) do
          [{ 'q' => { other: 'test' }, 'u' => { '$set' => { field: 'blah' } }, 'multi' => false }]
        end

        it 'updates the document' do
          authorized_primary.with_connection do |connection|
            op.bulk_execute(connection, context: context)
          end
          expect(authorized_collection.find(field: 'blah').count).to eq(1)
        end
      end
    end

    context 'when updating multiple documents' do

      let(:update) do
        described_class.new({
          updates: documents,
          db_name: authorized_collection.database.name,
          coll_name: authorized_collection.name,
          write_concern: write_concern
        })
      end

      context 'when the updates succeed' do

        let(:documents) do
          [{ 'q' => { other: 'test' }, 'u' => { '$set' => { field: 'blah' } }, 'multi' => true }]
        end

        it 'updates the documents' do
          authorized_primary.with_connection do |connection|
            op.bulk_execute(connection, context: context)
          end
          expect(authorized_collection.find(field: 'blah').count).to eq(2)
        end
      end
    end

    context 'when the updates are ordered' do

      # '$st' is not a valid update operator; the first update in the batch
      # is therefore rejected by the server.
      let(:documents) do
        [ { 'q' => { name: 'test' }, 'u' => { '$st' => { field: 'blah' } }, 'multi' => true },
          { 'q' => { field: 'test' }, 'u' => { '$set' => { other: 'blah' } }, 'multi' => true }
        ]
      end

      let(:spec) do
        { updates: documents,
          db_name: authorized_collection.database.name,
          coll_name: authorized_collection.name,
          write_concern: write_concern,
          ordered: true
        }
      end

      let(:failing_update) do
        described_class.new(spec)
      end

      context 'when the update fails' do

        context 'when write concern is acknowledged' do

          it 'aborts after first error' do
            authorized_primary.with_connection do |connection|
              failing_update.bulk_execute(connection, context: context)
            end
            expect(authorized_collection.find(other: 'blah').count).to eq(0)
          end
        end

        context 'when write concern is unacknowledged' do

          let(:write_concern) do
            Mongo::WriteConcern.get(w: 0)
          end

          it 'aborts after first error' do
            authorized_primary.with_connection do |connection|
              failing_update.bulk_execute(connection, context: context)
            end
            expect(authorized_collection.find(other: 'blah').count).to eq(0)
          end
        end
      end
    end

    context 'when the updates are unordered' do

      let(:documents) do
        [ { 'q' => { name: 'test' }, 'u' => { '$st' => { field: 'blah' } }, 'multi' => true },
          { 'q' => { field: 'test' }, 'u' => { '$set' => { other: 'blah' } }, 'multi' => false }
        ]
      end

      let(:spec) do
        { updates: documents,
          db_name: authorized_collection.database.name,
          coll_name: authorized_collection.name,
          write_concern: write_concern,
          ordered: false
        }
      end

      let(:failing_update) do
        described_class.new(spec)
      end

      context 'when the update fails' do

        context 'when write concern is acknowledged' do

          it 'does not abort after first error' do
            authorized_primary.with_connection do |connection|
              failing_update.bulk_execute(connection, context: context)
            end
            expect(authorized_collection.find(other: 'blah').count).to eq(1)
          end
        end
        context 'when write concern is unacknowledged' do

          let(:write_concern) do
            Mongo::WriteConcern.get(w: 0)
          end

          it 'does not abort after first error' do
            authorized_primary.with_connection do |connection|
              failing_update.bulk_execute(connection, context: context)
            end
            expect(authorized_collection.find(other: 'blah').count).to eq(1)
          end
        end
      end
    end
  end
end

mongo-ruby-driver-2.21.3/spec/mongo/operation/update/op_msg_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe Mongo::Operation::Update::OpMsg do

  let(:updates) do
    [{ :q => { :foo => 1 },
       :u => { :$set => { :bar => 1 } },
       :multi => true,
       :upsert => false }]
  end

  let(:write_concern) do
    Mongo::WriteConcern.get(w: :majority)
  end

  let(:session) { nil }

  let(:spec) do
    { :updates => updates,
      :db_name => SpecConfig.instance.test_db,
      :coll_name => TEST_COLL,
      :write_concern => write_concern,
      :ordered => true,
      :session => session
    }
  end

  let(:op) { described_class.new(spec) }

  let(:connection) do
    double('connection').tap do |connection|
      allow(connection).to receive(:server).and_return(authorized_primary)
      allow(connection).to receive(:features).and_return(authorized_primary.features)
      allow(connection).to receive(:description).and_return(authorized_primary.description)
      allow(connection).to receive(:cluster_time).and_return(authorized_primary.cluster_time)
    end
  end

  describe '#initialize' do

    context 'spec' do

      it 'sets the spec' do
        expect(op.spec).to eq(spec)
      end
    end
  end

  describe '#==' do

    context 'spec' do

      context 'when two ops have the same specs' do

        let(:other) { described_class.new(spec) }

        it 'returns true' do
          expect(op).to eq(other)
        end
      end

      context 'when two ops have different specs' do

        let(:other_updates) do
          [{ :q => { :bar => 1 },
             :u => { :$set => { :bar => 2 } },
             :multi => true,
             :upsert => false }]
        end

        let(:other_spec) do
          { :updates => other_updates,
            :db_name => SpecConfig.instance.test_db,
            :coll_name => TEST_COLL,
            :write_concern => Mongo::WriteConcern.get(w: :majority),
            :ordered => true
          }
        end

        let(:other) { described_class.new(other_spec) }

        it 'returns false' do
          expect(op).not_to eq(other)
        end
      end
    end
  end

  describe 'write concern' do
    # https://jira.mongodb.org/browse/RUBY-2224
    require_no_linting

    context 'when write concern is not specified' do

      let(:spec) do
        { :updates => updates,
          :db_name => SpecConfig.instance.test_db,
          :coll_name => TEST_COLL,
          :ordered => true
        }
      end

      it 'does not include write concern in the selector' do
        expect(op.send(:command, connection)[:writeConcern]).to be_nil
      end
    end

    context 'when write concern is specified' do

      it 'includes write concern in the selector' do
        expect(op.send(:command, connection)[:writeConcern]).to eq(BSON::Document.new(write_concern.options))
      end
    end
  end

  describe '#message' do
    # https://jira.mongodb.org/browse/RUBY-2224
    require_no_linting

    context 'when the server supports OP_MSG' do
      min_server_fcv '3.6'

      let(:global_args) do
        { update: TEST_COLL,
          ordered: true,
          writeConcern: write_concern.options,
          '$db' => SpecConfig.instance.test_db,
          lsid: session.session_id
        }
      end

      let(:expected_payload_1) do
        Mongo::Protocol::Msg::Section1.new('updates', updates)
      end

      let(:session) do
        authorized_client.start_session
      end

      context 'when the topology is replica set or sharded' do
        min_server_fcv '3.6'
        require_topology :replica_set, :sharded

        let(:expected_global_args) do
          global_args.merge(Mongo::Operation::CLUSTER_TIME => authorized_client.cluster.cluster_time)
        end

        it 'creates the correct OP_MSG message' do
          authorized_client.command(ping: 1)
          expect(Mongo::Protocol::Msg).to receive(:new).with([], {}, expected_global_args, expected_payload_1)
          op.send(:message, connection)
        end
      end
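      # Standalone deployments do not gossip $clusterTime, so unlike the
      # replica set/sharded case above, the expected global arguments are
      # used as-is with no cluster time merged in.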
      context 'when the topology is standalone' do
        min_server_fcv '3.6'
        require_topology :single

        let(:expected_global_args) do
          global_args
        end

        it 'creates the correct OP_MSG message' do
          authorized_client.command(ping: 1)
          expect(Mongo::Protocol::Msg).to receive(:new).with([], {}, expected_global_args, expected_payload_1)
          op.send(:message, connection)
        end

        context 'when an implicit session is created and the topology is then updated and the server does not support sessions' do
          # Mocks on features are incompatible with linting
          require_no_linting

          let(:expected_global_args) do
            global_args.dup.tap do |args|
              args.delete(:lsid)
            end
          end

          let(:session) do
            Mongo::Session.new(nil, authorized_client, implicit: true).tap do |session|
              allow(session).to receive(:session_id).and_return(42)
              session.should be_implicit
            end
          end

          it 'creates the correct OP_MSG message' do
            RSpec::Mocks.with_temporary_scope do
              expect(connection.features).to receive(:sessions_enabled?).and_return(false)
              expect(Mongo::Protocol::Msg).to receive(:new).with([], {}, expected_global_args, expected_payload_1)
              op.send(:message, connection)
            end
          end
        end
      end

      context 'when the write concern is 0' do

        let(:write_concern) do
          Mongo::WriteConcern.get(w: 0)
        end

        context 'when the session is implicit' do

          let(:session) do
            Mongo::Session.new(nil, authorized_client, implicit: true).tap do |session|
              allow(session).to receive(:session_id).and_return(42)
              session.should be_implicit
            end
          end

          context 'when the topology is replica set or sharded' do
            min_server_fcv '3.6'
            require_topology :replica_set, :sharded

            let(:expected_global_args) do
              global_args.dup.tap do |args|
                args.delete(:lsid)
                args.merge!(Mongo::Operation::CLUSTER_TIME => authorized_client.cluster.cluster_time)
              end
            end

            it 'does not send a session id in the command' do
              authorized_client.command(ping: 1)
              expect(Mongo::Protocol::Msg).to receive(:new).with([:more_to_come], {}, expected_global_args, expected_payload_1)
              op.send(:message, connection)
            end
          end

          context 'when the topology is standalone' do
            min_server_fcv '3.6'
            require_topology :single

            let(:expected_global_args) do
              global_args.dup.tap do |args|
                args.delete(:lsid)
              end
            end

            it 'creates the correct OP_MSG message' do
              authorized_client.command(ping: 1)
              expect(Mongo::Protocol::Msg).to receive(:new).with([:more_to_come], {}, expected_global_args, expected_payload_1)
              op.send(:message, connection)
            end
          end
        end

        context 'when the session is explicit' do
          min_server_fcv '3.6'
          require_topology :replica_set, :sharded

          let(:session) do
            authorized_client.start_session
          end

          before do
            session.should_not be_implicit
          end

          let(:expected_global_args) do
            global_args.dup.tap do |args|
              args.delete(:lsid)
              args.merge!(Mongo::Operation::CLUSTER_TIME => authorized_client.cluster.cluster_time)
            end
          end

          it 'does not send a session id in the command' do
            authorized_client.command(ping: 1)
            RSpec::Mocks.with_temporary_scope do
              expect(Mongo::Protocol::Msg).to receive(:new).with([:more_to_come], {}, expected_global_args, expected_payload_1)
              op.send(:message, connection)
            end
          end
        end
      end
    end
  end
end

mongo-ruby-driver-2.21.3/spec/mongo/operation/update_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe Mongo::Operation::Update do
  require_no_required_api_version

  let(:context) { Mongo::Operation::Context.new }

  let(:document) do
    { :q => { :foo => 1 },
      :u => { :$set => { :bar => 1 } },
      :multi => true,
      :upsert => false }
  end

  let(:spec) do
    { :updates => [ document ],
      :db_name => SpecConfig.instance.test_db,
      :coll_name => TEST_COLL,
      :write_concern => Mongo::WriteConcern.get(:w => 1),
      :ordered => true
    }
  end
  let(:update) do
    described_class.new(spec)
  end

  describe '#initialize' do

    context 'spec' do

      it 'sets the spec' do
        expect(update.spec).to eq(spec)
      end
    end
  end

  describe '#==' do

    context 'spec' do

      context 'when two ops have the same specs' do

        let(:other) { described_class.new(spec) }

        it 'returns true' do
          expect(update).to eq(other)
        end
      end

      context 'when two ops have different specs' do

        let(:other_doc) do
          { :q => { :foo => 1 },
            :u => { :$set => { :bar => 1 } },
            :multi => true,
            :upsert => true }
        end

        let(:other_spec) do
          { :update => other_doc,
            :db_name => SpecConfig.instance.test_db,
            :coll_name => TEST_COLL,
            :write_concern => Mongo::WriteConcern.get(:w => 1),
            :ordered => true
          }
        end

        let(:other) { described_class.new(other_spec) }

        it 'returns false' do
          expect(update).not_to eq(other)
        end
      end
    end
  end

  describe '#execute' do

    before do
      authorized_collection.drop
      authorized_collection.insert_many([
        { name: 'test', field: 'test', other: 'test' },
        { name: 'testing', field: 'test', other: 'test' }
      ])
    end

    after do
      authorized_collection.delete_many
    end

    context 'when updating a single document' do

      let(:update) do
        described_class.new({
          updates: [ document ],
          db_name: SpecConfig.instance.test_db,
          coll_name: TEST_COLL,
          write_concern: Mongo::WriteConcern.get(:w => 1)
        })
      end

      context 'when the update succeeds' do

        let(:document) do
          { 'q' => { name: 'test' }, 'u' => { '$set' => { field: 'blah' } } }
        end

        let(:result) do
          update.execute(authorized_primary, context: context)
        end

        it 'updates the document' do
          expect(result.written_count).to eq(1)
        end

        it 'reports the modified count' do
          expect(result.modified_count).to eq(1)
        end

        it 'reports the matched count' do
          expect(result.matched_count).to eq(1)
        end

        it 'reports the upserted id as nil' do
          expect(result.upserted_id).to eq(nil)
        end
      end

      context 'when the update fails' do

        let(:document) do
          { 'q' => { name: 'test' }, 'u' => { '$st' => { field: 'blah' } } }
        end

        it 'raises an exception' do
          expect {
            update.execute(authorized_primary, context: context)
          }.to raise_error(Mongo::Error::OperationFailure)
        end
      end
    end

    context 'when updating multiple documents' do

      let(:update) do
        described_class.new({
          updates: [ document ],
          db_name: SpecConfig.instance.test_db,
          coll_name: TEST_COLL,
          write_concern: Mongo::WriteConcern.get(:w => 1)
        })
      end

      context 'when the updates succeed' do

        let(:document) do
          { 'q' => { field: 'test' }, 'u' => { '$set' => { other: 'blah' } }, 'multi' => true }
        end

        let(:result) do
          update.execute(authorized_primary, context: context)
        end

        it 'updates the documents' do
          expect(result.written_count).to eq(2)
        end

        it 'reports the modified count' do
          expect(result.modified_count).to eq(2)
        end

        it 'reports the matched count' do
          expect(result.matched_count).to eq(2)
        end

        it 'reports the upserted id as nil' do
          expect(result.upserted_id).to eq(nil)
        end
      end

      context 'when an update fails' do

        let(:document) do
          { 'q' => { name: 'test' }, 'u' => { '$st' => { field: 'blah' } }, 'multi' => true }
        end

        it 'raises an exception' do
          expect {
            update.execute(authorized_primary, context: context)
          }.to raise_error(Mongo::Error::OperationFailure)
        end
      end

      context 'when a document exceeds max bson size' do

        let(:document) do
          { 'q' => { name: 't' * 17000000 }, 'u' => { '$set' => { field: 'blah' } } }
        end

        it 'raises an error' do
          expect {
            update.execute(authorized_primary, context: context)
          }.to raise_error(Mongo::Error::MaxBSONSize)
        end
      end
      context 'when upsert is true' do

        let(:document) do
          { 'q' => { field: 'non-existent' }, 'u' => { '$set' => { other: 'blah' } }, 'upsert' => true }
        end

        let(:result) do
          update.execute(authorized_primary, context: context)
        end

        it 'inserts the document' do
          expect(result.written_count).to eq(1)
        end

        it 'reports the modified count' do
          expect(result.modified_count).to eq(0)
        end

        it 'reports the matched count' do
          expect(result.matched_count).to eq(0)
        end

        it 'returns the upserted id' do
          expect(result.upserted_id).to be_a(BSON::ObjectId)
        end
      end
    end
  end
end

mongo-ruby-driver-2.21.3/spec/mongo/operation/update_user_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe Mongo::Operation::UpdateUser do
  require_no_required_api_version

  let(:context) { Mongo::Operation::Context.new }

  describe '#execute' do

    let(:user) do
      Mongo::Auth::User.new(
        user: 'durran',
        password: 'password',
        roles: [ Mongo::Auth::Roles::READ_WRITE ]
      )
    end

    let(:user_updated) do
      Mongo::Auth::User.new(
        user: 'durran',
        password: '123',
        roles: [ Mongo::Auth::Roles::READ ]
      )
    end

    let(:operation) do
      described_class.new(user: user_updated, db_name: SpecConfig.instance.test_db)
    end

    before do
      users = root_authorized_client.database.users
      if users.info('durran').any?
        users.remove('durran')
      end
      users.create(user)
    end

    context 'when user update was successful' do

      let!(:response) do
        operation.execute(root_authorized_primary, context: context)
      end

      it 'updates the user in the database' do
        expect(response).to be_successful
      end
    end
  end
end

mongo-ruby-driver-2.21.3/spec/mongo/options/redacted_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe Mongo::Options::Redacted do

  let(:options) do
    described_class.new(original_opts)
  end

  describe '#to_s' do

    context 'when the hash contains a sensitive key' do

      let(:original_opts) do
        { password: 'sensitive_data' }
      end

      it 'does not print the sensitive value' do
        expect(options.to_s).not_to match(original_opts[:password])
      end

      it 'replaces the value with the redacted string' do
        expect(options.to_s).to match(Mongo::Options::Redacted::STRING_REPLACEMENT)
      end
    end

    context 'when the hash does not contain a sensitive key' do

      let(:original_opts) do
        { user: 'emily' }
      end

      it 'prints all the values' do
        expect(options.to_s).to match(original_opts[:user])
      end
    end
  end

  describe '#inspect' do

    context 'when the hash contains a sensitive key' do

      let(:original_opts) do
        { password: 'sensitive_data' }
      end

      it 'does not print the sensitive value' do
        expect(options.inspect).not_to match(original_opts[:password])
      end

      it 'replaces the value with the redacted string' do
        expect(options.inspect).to match(Mongo::Options::Redacted::STRING_REPLACEMENT)
      end
    end

    context 'when the hash does not contain a sensitive key' do

      let(:original_opts) do
        { name: 'some_name' }
      end

      it 'prints the value' do
        expect(options.inspect).to match(original_opts[:name])
      end

      it 'does not use the redacted string' do
        expect(options.inspect).not_to match(Mongo::Options::Redacted::STRING_REPLACEMENT)
      end
    end
  end
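  # Redacted acts like a Hash whose keys can be looked up with either a
  # String or a Symbol, which is what the #has_key? examples below exercise.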
  describe '#has_key?' do

    context 'when the original key is a String' do

      let(:original_opts) do
        { 'name' => 'Emily' }
      end

      context 'when the method argument is a String' do

        it 'returns true' do
          expect(options.has_key?('name')).to be(true)
        end
      end

      context 'when the method argument is a Symbol' do

        it 'returns true' do
          expect(options.has_key?(:name)).to be(true)
        end
      end
    end

    context 'when the original key is a Symbol' do

      let(:original_opts) do
        { name: 'Emily' }
      end

      context 'when the method argument is a String' do

        it 'returns true' do
          expect(options.has_key?('name')).to be(true)
        end
      end

      context 'when the method argument is a Symbol' do

        it 'returns true' do
          expect(options.has_key?(:name)).to be(true)
        end
      end
    end

    context 'when the hash does not contain the key' do

      let(:original_opts) do
        { other: 'Emily' }
      end

      context 'when the method argument is a String' do

        it 'returns false' do
          expect(options.has_key?('name')).to be(false)
        end
      end

      context 'when the method argument is a Symbol' do

        it 'returns false' do
          expect(options.has_key?(:name)).to be(false)
        end
      end
    end
  end

  describe '#reject' do

    let(:options) do
      described_class.new(a: 1, b: 2, c: 3)
    end

    context 'when no block is provided' do

      it 'returns an enumerable' do
        expect(options.reject).to be_a(Enumerator)
      end
    end

    context 'when a block is provided' do

      context 'when the block evaluates to true for some pairs' do

        let(:result) do
          options.reject { |k, v| k == 'a' }
        end

        it 'returns an object consisting of only the remaining pairs' do
          expect(result).to eq(described_class.new(b: 2, c: 3))
        end

        it 'returns a new object' do
          expect(result).not_to be(options)
        end
      end

      context 'when the block does not evaluate to true for any pairs' do

        let(:result) do
          options.reject { |k, v| k == 'd' }
        end

        it 'returns an object with all pairs intact' do
          expect(result).to eq(described_class.new(a: 1, b: 2, c: 3))
        end

        it 'returns a new object' do
          expect(result).not_to be(options)
        end
      end
    end
  end

  describe '#reject!' do

    let(:options) do
      described_class.new(a: 1, b: 2, c: 3)
    end

    context 'when no block is provided' do

      it 'returns an enumerable' do
        expect(options.reject!).to be_a(Enumerator)
      end
    end

    context 'when a block is provided' do

      context 'when the block evaluates to true for some pairs' do

        let(:result) do
          options.reject! { |k, v| k == 'a' }
        end

        it 'returns an object consisting of only the remaining pairs' do
          expect(result).to eq(described_class.new(b: 2, c: 3))
        end

        it 'returns the same object' do
          expect(result).to be(options)
        end
      end
      context 'when the block does not evaluate to true for any pairs' do

        let(:result) do
          options.reject! { |k, v| k == 'd' }
        end

        it 'returns nil' do
          expect(result).to be(nil)
        end
      end
    end
  end

  describe '#select' do

    let(:options) do
      described_class.new(a: 1, b: 2, c: 3)
    end

    context 'when no block is provided' do

      it 'returns an enumerable' do
        expect(options.select).to be_a(Enumerator)
      end
    end

    context 'when a block is provided' do

      context 'when the block evaluates to true for some pairs' do

        let(:result) do
          options.select { |k, v| k == 'a' }
        end

        it 'returns an object consisting of those pairs' do
          expect(result).to eq(described_class.new(a: 1))
        end

        it 'returns a new object' do
          expect(result).not_to be(options)
        end
      end

      context 'when the block does not evaluate to true for any pairs' do

        let(:result) do
          options.select { |k, v| k == 'd' }
        end

        it 'returns an object with no pairs' do
          expect(result).to eq(described_class.new)
        end

        it 'returns a new object' do
          expect(result).not_to be(options)
        end
      end

      context 'when the object is unchanged' do

        let(:options) do
          described_class.new(a: 1, b: 2, c: 3)
        end

        let(:result) do
          options.select { |k, v| ['a', 'b', 'c'].include?(k) }
        end

        it 'returns a new object' do
          expect(result).to eq(described_class.new(a: 1, b: 2, c: 3))
        end
      end
    end
  end

  describe '#select!' do

    let(:options) do
      described_class.new(a: 1, b: 2, c: 3)
    end

    context 'when no block is provided' do

      it 'returns an enumerable' do
        expect(options.select!).to be_a(Enumerator)
      end
    end

    context 'when a block is provided' do

      context 'when the block evaluates to true for some pairs' do

        let(:result) do
          options.select! { |k, v| k == 'a' }
        end

        it 'returns an object consisting of those pairs' do
          expect(result).to eq(described_class.new(a: 1))
        end

        it 'returns the same object' do
          expect(result).to be(options)
        end
      end

      context 'when the block does not evaluate to true for any pairs' do

        let(:result) do
          options.select! { |k, v| k == 'd' }
        end

        it 'returns an object with no pairs' do
          expect(result).to eq(described_class.new)
        end

        it 'returns the same object' do
          expect(result).to be(options)
        end
      end
      context 'when the object is unchanged' do

        let(:options) do
          described_class.new(a: 1, b: 2, c: 3)
        end

        let(:result) do
          options.select! { |k, v| ['a', 'b', 'c'].include?(k) }
        end

        it 'returns nil' do
          expect(result).to be(nil)
        end
      end
    end
  end
end

mongo-ruby-driver-2.21.3/spec/mongo/protocol/caching_hash_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe Mongo::Protocol::CachingHash do

  let(:hash) { described_class.new(x: 1) }
  let(:bson_reg) { { x: 1 }.to_bson }

  describe "#to_bson" do

    context "when serializing to bson" do
      it "caches the results" do
        hash.to_bson
        expect(hash.instance_variable_get("@bytes")).to eq(bson_reg.to_s)
      end
    end

    context "when giving a non empty buffer to_bson" do

      let!(:buffer) { { z: 1 }.to_bson }
      let!(:bytes) { buffer.to_s }

      it "updates the given buffer" do
        hash.to_bson(buffer)
        expect(buffer.to_s).to eq(bytes + bson_reg.to_s)
      end

      it "given buffer is not included in the cached bytes" do
        hash.to_bson(buffer)
        expect(hash.instance_variable_get("@bytes")).to eq(bson_reg.to_s)
        expect(hash.to_bson.to_s).to eq(bson_reg.to_s)
      end
    end
  end
end

mongo-ruby-driver-2.21.3/spec/mongo/protocol/compressed_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe Mongo::Protocol::Compressed do

  let(:original_message) do
    Mongo::Protocol::Query.new(SpecConfig.instance.test_db, 'protocol-test', { ping: 1 })
  end

  let(:compressor) { 'zlib' }
  let(:level) { nil }

  let(:message) do
    described_class.new(original_message, compressor, level)
  end

  let(:original_message_bytes) do
    buf = BSON::ByteBuffer.new
    original_message.send(:serialize_fields, buf)
    buf.to_s
  end

  describe '#serialize' do

    context "when using the snappy compressor" do
      require_snappy_compression

      let(:compressor) { 'snappy' }

      it "uses snappy" do
        expect(Snappy).to receive(:deflate).with(original_message_bytes).and_call_original
        message.serialize
      end
    end

    context "when using the zstd compressor" do
      require_zstd_compression

      let(:compressor) { 'zstd' }

      it "uses zstd with default compression level" do
        expect(Zstd).to receive(:compress).with(original_message_bytes).and_call_original
        message.serialize
      end
    end

    context 'when zlib compression level is not provided' do

      it 'does not set a compression level' do
        expect(Zlib::Deflate).to receive(:deflate).with(original_message_bytes, nil).and_call_original
        message.serialize
      end
    end

    context 'when zlib compression level is provided' do

      let(:level) { 1 }

      it 'uses the compression level' do
        expect(Zlib::Deflate).to receive(:deflate).with(original_message_bytes, 1).and_call_original
        message.serialize
      end
    end
  end
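  # OP_COMPRESSED wraps another wire protocol message, so whether a reply
  # is expected is delegated to the wrapped message; an OP_MSG flagged
  # :more_to_come is fire-and-forget and is not replyable.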
  describe '#replyable?' do

    context 'when the original message is replyable' do

      it 'returns true' do
        expect(message.replyable?).to be(true)
      end
    end

    context 'when the original message is not replyable' do

      let(:original_message) do
        Mongo::Protocol::Msg.new([:more_to_come], {}, { ping: 1 })
      end

      it 'returns false' do
        expect(message.replyable?).to be(false)
      end
    end
  end
end

mongo-ruby-driver-2.21.3/spec/mongo/protocol/get_more_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'lite_spec_helper'
require 'support/shared/protocol'

describe Mongo::Protocol::GetMore do

  let(:opcode) { 2005 }
  let(:db) { SpecConfig.instance.test_db }
  let(:collection_name) { 'protocol-test' }
  let(:ns) { "#{db}.#{collection_name}" }
  let(:limit) { 25 }
  let(:cursor_id) { 12345 }

  let(:message) do
    described_class.new(db, collection_name, limit, cursor_id)
  end

  describe '#initialize' do

    it 'sets the namespace' do
      expect(message.namespace).to eq(ns)
    end

    it 'sets the number to return' do
      expect(message.number_to_return).to eq(limit)
    end

    it 'sets the cursor id' do
      expect(message.cursor_id).to eq(cursor_id)
    end
  end

  describe '#==' do

    context 'when the other is a getMore' do

      context 'when the fields are equal' do
        let(:other) do
          described_class.new(db, collection_name, limit, cursor_id)
        end

        it 'returns true' do
          expect(message).to eq(other)
        end
      end

      context 'when the database is not equal' do
        let(:other) do
          described_class.new('tyler', collection_name, limit, cursor_id)
        end

        it 'returns false' do
          expect(message).not_to eq(other)
        end
      end

      context 'when the collection is not equal' do
        let(:other) do
          described_class.new(db, 'tyler', limit, cursor_id)
        end

        it 'returns false' do
          expect(message).not_to eq(other)
        end
      end

      context 'when the limit is not equal' do
        let(:other) do
          described_class.new(db, collection_name, 123, cursor_id)
        end

        it 'returns false' do
          expect(message).not_to eq(other)
        end
      end

      context 'when the cursor id is not equal' do
        let(:other) do
          described_class.new(db, collection_name, limit, 7777)
        end

        it 'returns false' do
          expect(message).not_to eq(other)
        end
      end
    end

    context 'when the other is not a getMore' do

      it 'returns false' do
        expect(message).not_to eq('test')
      end
    end
  end

  describe '#hash' do

    let(:values) do
      message.send(:fields).map do |field|
        message.instance_variable_get(field[:name])
      end
    end

    it 'returns a hash of the field values' do
      expect(message.hash).to eq(values.hash)
    end
  end
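  # Legacy OP_GET_MORE always expects an OP_REPLY carrying the next batch
  # of documents, hence replyable? is unconditionally true below.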
  describe '#replyable?' do

    it 'returns true' do
      expect(message).to be_replyable
    end
  end

  describe '#serialize' do

    let(:bytes) { message.serialize }

    include_examples 'message with a header'

    describe 'zero' do
      let(:field) { bytes.to_s[16..19] }

      it 'does not set any bits' do
        expect(field).to be_int32(0)
      end
    end

    describe 'namespace' do
      let(:field) { bytes.to_s[20..36] }

      it 'serializes the namespace' do
        expect(field).to be_cstring(ns)
      end
    end

    describe 'number to return' do
      let(:field) { bytes.to_s[37..40] }

      it 'serializes the number to return' do
        expect(field).to be_int32(limit)
      end
    end

    describe 'cursor id' do
      let(:field) { bytes.to_s[41..48] }

      it 'serializes the cursor id' do
        expect(field).to be_int64(cursor_id)
      end
    end
  end

  describe '#registry' do

    context 'when the class is loaded' do

      it 'registers the op code in the Protocol Registry' do
        expect(Mongo::Protocol::Registry.get(described_class::OP_CODE)).to be(described_class)
      end

      it 'creates an #op_code instance method' do
        expect(message.op_code).to eq(described_class::OP_CODE)
      end
    end
  end
end

mongo-ruby-driver-2.21.3/spec/mongo/protocol/kill_cursors_spec.rb

# frozen_string_literal: true
# rubocop:todo all

require 'lite_spec_helper'
require 'support/shared/protocol'

describe Mongo::Protocol::KillCursors do

  let(:opcode) { 2007 }
  let(:cursor_ids) { [123, 456, 789] }
  let(:id_count) { cursor_ids.size }
  let(:collection_name) { 'protocol-test' }
  let(:database) { SpecConfig.instance.test_db }

  let(:message) do
    described_class.new(collection_name, database, cursor_ids)
  end

  describe '#initialize' do

    it 'sets the cursor ids' do
      expect(message.cursor_ids).to eq(cursor_ids)
    end

    it 'sets the count' do
      expect(message.id_count).to eq(id_count)
    end
  end

  describe '#==' do

    context 'when the other is a killcursors' do

      context 'when the cursor ids are equal' do
        let(:other) do
          described_class.new(collection_name, database, cursor_ids)
        end

        it 'returns true' do
          expect(message).to eq(other)
        end
      end

      context 'when the cursor ids are not equal' do
        let(:other) do
          described_class.new(collection_name, database, [123, 456])
        end

        it 'returns false' do
          expect(message).not_to eq(other)
        end
      end
    end

    context 'when the other is not a killcursors' do

      it 'returns false' do
        expect(message).not_to eq('test')
      end
    end
  end

  describe '#hash' do

    let(:values) do
      message.send(:fields).map do |field|
        message.instance_variable_get(field[:name])
      end
    end

    it 'returns a hash of the field values' do
      expect(message.hash).to eq(values.hash)
    end
  end
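  # OP_KILL_CURSORS is fire-and-forget: the server sends no reply, so
  # replyable? is false below.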
do it 'returns false' do expect(message).to_not be_replyable end end describe '#serialize' do let(:bytes) { message.serialize } include_examples 'message with a header' describe 'zero' do let(:field) { bytes.to_s[16..19] } it 'serializes a zero' do expect(field).to be_int32(0) end end describe 'number of cursors' do let(:field) { bytes.to_s[20..23] } it 'serializes the cursor count' do expect(field).to be_int32(id_count) end end describe 'cursor ids' do let(:field) { bytes.to_s[24..-1] } it 'serializes the selector' do expect(field).to be_int64_sequence(cursor_ids) end end end describe '#registry' do context 'when the class is loaded' do it 'registers the op code in the Protocol Registry' do expect(Mongo::Protocol::Registry.get(described_class::OP_CODE)).to be(described_class) end it 'creates an #op_code instance method' do expect(message.op_code).to eq(described_class::OP_CODE) end end end end mongo-ruby-driver-2.21.3/spec/mongo/protocol/msg_spec.rb000066400000000000000000000311531505113246500232320ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' require 'support/shared/protocol' describe Mongo::Protocol::Msg do let(:opcode) { 2013 } let(:flags) { [] } let(:options) { {} } let(:main_document) { { '$db' => SpecConfig.instance.test_db, ping: 1 } } let(:sequences) { [ ] } let(:message) do described_class.new(flags, options, main_document, *sequences) end let(:deserialized) do Mongo::Protocol::Message.deserialize(StringIO.new(message.serialize.to_s)) end describe '#initialize' do it 'adds the main_document to the sections' do expect(message.sections[0]).to eq(type: 0, payload: main_document) end context 'when flag bits are provided' do context 'when valid flags are provided' do let(:flags) { [:more_to_come] } it 'sets the flags' do expect(message.flags).to eq(flags) end end context 'when flags are not provided' do let(:flags) { nil } it 'sets the flags to []' do expect(message.flags).to eq([]) end end context 'when an invalid flag is provided' do let(:flags) { [:checksum_present] } let(:flag_bytes) { message.serialize.to_s[16..19] } it 'sets the flags' do expect(message.flags).to eq([:checksum_present]) end it 'only serializes the valid flags' do expect(flag_bytes).to be_int32(1) end end end context 'with user-provided and driver-generated keys in main_document' do let(:main_document) do { 'ping' => 1, 'lsid' => '__lsid__', 'a' => 'b', '$clusterTime' => '__ct__', 'signature' => '__signature__', 'd' => 'f'} end it 'reorders main_document for better logging' do expect(message.payload[:command].keys).to eq(%w(ping a d lsid $clusterTime signature)) end end end describe '#==' do context 'when the other is a msg' do context 'when the fields are equal' do let(:other) do described_class.new(flags, options, main_document) end it 'returns true' do expect(message).to eq(other) end end context 'when the flags are not equal' do let(:other) do described_class.new([:more_to_come], options, main_document) end it 'returns false' do expect(message).not_to eq(other) end end context 'when the main_document is not equal' do let(:other_main_document) do { '$db'=> SpecConfig.instance.test_db, hello: 1 } end let(:other) do described_class.new(flags, nil, other_main_document) end it 'returns false' do expect(message).not_to eq(other) end end end context 'when the other is not a msg' do it 'returns false' do expect(message).not_to eq('test') end end end describe '#hash' do let(:values) do message.send(:fields).map do |field| message.instance_variable_get(field[:name]) end
end it 'returns a hash of the field values' do expect(message.hash).to eq(values.hash) end end describe '#replyable?' do context 'when the :more_to_come flag is set' do let(:flags) { [:more_to_come] } it 'returns false' do expect(message).to_not be_replyable end end context 'when the :more_to_come flag is not set' do it 'returns true' do expect(message).to be_replyable end end end describe '#serialize' do let(:bytes) do message.serialize end let(:flag_bytes) { bytes.to_s[16..19] } let(:payload_type) { bytes.to_s[20] } let(:payload_bytes) { bytes.to_s[21..-1] } let(:main_document) { { ping: 1 } } include_examples 'message with a header' context 'when flags are provided' do context 'when checksum_present is provided' do let(:flags) do [:checksum_present] end it 'sets the flag bits' do expect(flag_bytes).to be_int32(1) end end context 'when more_to_come is provided' do let(:flags) do [:more_to_come] end it 'sets the flag bits' do expect(flag_bytes).to be_int32(2) end end end context 'when no flag is provided' do let(:flags) do nil end it 'sets the flag bits to 0' do expect(flag_bytes).to be_int32(0) end end context 'when global args are provided' do it 'sets the payload type' do expect(payload_type).to eq(0.chr) end it 'serializes the global arguments' do expect(payload_bytes).to be_bson(main_document) end end context 'when sequences are provided' do let(:sequences) do [ section ] end context 'when an invalid payload type is specified' do let(:section) do { type: 2, payload: { identifier: 'documents', sequence: [ { a: 1 } ] } } end it 'raises an exception' do expect do message end.to raise_exception(ArgumentError, /All sequences must be Section1 instances/) end end context 'when a payload of type 1 is specified' do let(:section) do Mongo::Protocol::Msg::Section1.new('documents', [ { a: 1 } ]) end let(:section_payload_type) { bytes.to_s[36] } let(:section_size) { bytes.to_s[37..40] } let(:section_identifier) { bytes.to_s[41..50] } let(:section_bytes) { bytes.to_s[51..-1] } it 'sets the payload type' do expect(section_payload_type).to eq(1.chr) end it 'sets the section size' do expect(section_size).to be_int32(26) end it 'serializes the section identifier' do expect(section_identifier).to eq("documents#{BSON::NULL_BYTE}") end it 'serializes the section bytes' do expect(section_bytes).to be_bson({ a: 1 }) end context 'when two sections are specified' do let(:sequences) do [ section1, section2 ] end let(:section1) do Mongo::Protocol::Msg::Section1.new('documents', [ { a: 1 } ]) end let(:section2) do Mongo::Protocol::Msg::Section1.new('updates', [ { :q => { :bar => 1 }, :u => { :$set => { :bar => 2 } }, :multi => true, :upsert => false, } ]) end let(:section1_payload_type) { bytes.to_s[36] } let(:section1_size) { bytes.to_s[37..40] } let(:section1_identifier) { bytes.to_s[41..50] } let(:section1_bytes) { bytes.to_s[51..62] } it 'sets the first payload type' do expect(section1_payload_type).to eq(1.chr) end it 'sets the first section size' do expect(section1_size).to be_int32(26) end it 'serializes the first section identifier' do expect(section1_identifier).to eq("documents#{BSON::NULL_BYTE}") end it 'serializes the first section bytes' do expect(section1_bytes).to be_bson({ a: 1 }) end let(:section2_payload_type) { bytes.to_s[63] } let(:section2_size) { bytes.to_s[64..67] } let(:section2_identifier) { bytes.to_s[68..75] } let(:section2_bytes) { bytes.to_s[76..-1] } it 'sets the second payload type' do expect(section2_payload_type).to eq(1.chr) end it 'sets the second section size' do 
expect(section2_size).to be_int32(79) end it 'serializes the second section identifier' do expect(section2_identifier).to eq("updates#{BSON::NULL_BYTE}") end it 'serializes the second section bytes' do expect(section2_bytes).to be_bson(section2.documents[0]) end end end end context 'when the validating_keys option is true with payload 1' do let(:sequences) do [ section ] end let(:section) do Mongo::Protocol::Msg::Section1.new('documents', [ { '$b' => 2 } ]) end let(:options) do { validating_keys: true } end it 'does not check the sequence document keys' do expect(message.serialize).to be_a(BSON::ByteBuffer) end end context 'when the validating_keys option is false with payload 1' do let(:sequences) do [ section ] end let(:section) do Mongo::Protocol::Msg::Section1.new('documents', [ { '$b' => 2 } ]) end let(:options) do { validating_keys: false } end it 'does not check the sequence document keys' do expect(message.serialize).to be_a(BSON::ByteBuffer) end end [:more_to_come, :exhaust_allowed].each do |flag| context "with #{flag} flag" do let(:flags) { [flag] } it "round trips #{flag} flag" do expect(deserialized.flags).to eq(flags) end end end end describe '#deserialize' do context 'when the payload type is valid' do it 'deserializes the message' do expect(deserialized.documents).to eq([ BSON::Document.new(main_document) ]) end end context 'when the payload type is not valid' do let(:invalid_payload_message) do message.serialize.to_s.tap do |s| s[20] = 5.chr end end it 'raises an exception' do expect do Mongo::Protocol::Message.deserialize(StringIO.new(invalid_payload_message)) end.to raise_exception(Mongo::Error::UnknownPayloadType) end end end describe '#payload' do context 'when the msg only contains a payload type 0' do it 'creates a payload with the command' do expect(message.payload[:command_name]).to eq('ping') expect(message.payload[:database_name]).to eq(SpecConfig.instance.test_db) expect(message.payload[:command]).to eq('ping' => 1, '$db' => SpecConfig.instance.test_db) expect(message.payload[:request_id]).to eq(message.request_id) end end context 'when the msg contains a payload type 1' do let(:section) do Mongo::Protocol::Msg::Section1.new('documents', [ { a: 1 } ]) end let(:main_document) do { '$db' => SpecConfig.instance.test_db, 'insert' => 'foo', 'ordered' => true } end let(:sequences) do [ section ] end let(:expected_command_doc) do { 'insert' => 'foo', 'documents' => [{ 'a' => 1 }], 'ordered' => true, '$db' => SpecConfig.instance.test_db, } end it 'creates a payload with the command' do expect(message.payload[:command_name]).to eq('insert') expect(message.payload[:database_name]).to eq(SpecConfig.instance.test_db) expect(message.payload[:command]).to eq(expected_command_doc) expect(message.payload[:request_id]).to eq(message.request_id) end end end describe '#registry' do context 'when the class is loaded' do it 'registers the op code in the Protocol Registry' do expect(Mongo::Protocol::Registry.get(described_class::OP_CODE)).to be(described_class) end it 'creates an #op_code instance method' do expect(message.op_code).to eq(described_class::OP_CODE) end end end describe '#number_returned' do let(:batch) do (1..2).map{ |i| { field: "test#{i}" }} end context 'when the msg contains a find document' do let(:find_document) { { "cursor" => { "firstBatch" => batch } } } let(:find_message) do described_class.new(flags, options, find_document, *sequences) end it 'returns the correct number_returned' do expect(find_message.number_returned).to eq(2) end end context 'when the msg
contains a getmore document' do let(:next_document) { { "cursor" => { "nextBatch" => batch } } } let(:next_message) do described_class.new(flags, options, next_document, *sequences) end it 'returns the correct number_returned' do expect(next_message.number_returned).to eq(2) end end context 'when the msg contains a document without first/nextBatch' do it 'raises NotImplementedError' do lambda do message.number_returned end.should raise_error(NotImplementedError, /number_returned is only defined for cursor replies/) end end end end mongo-ruby-driver-2.21.3/spec/mongo/protocol/query_spec.rb000066400000000000000000000203341505113246500236100ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'lite_spec_helper' require 'support/shared/protocol' describe Mongo::Protocol::Query do let(:opcode) { 2004 } let(:db) { SpecConfig.instance.test_db } let(:collection_name) { 'protocol-test' } let(:ns) { "#{db}.#{collection_name}" } let(:selector) { { :name => 'Tyler' } } let(:options) { Hash.new } let(:message) do described_class.new(db, collection_name, selector, options) end describe '#initialize' do it 'sets the namespace' do expect(message.namespace).to eq(ns) end it 'sets the selector' do expect(message.selector).to eq(selector) end context 'when options are provided' do context 'when flags are provided' do let(:options) { { :flags => [:secondary_ok] } } it 'sets the flags' do expect(message.flags).to eq(options[:flags]) end end context 'when a limit is provided' do let(:options) { { :limit => 5 } } it 'sets the limit' do expect(message.limit).to eq(options[:limit]) end end context 'when a skip is provided' do let(:options) { { :skip => 13 } } it 'sets the skip' do expect(message.skip).to eq(options[:skip]) end end context 'when a projection is provided' do let(:options) { { :project => { :_id => 0 } } } it 'sets the projection' do expect(message.project).to eq(options[:project]) end end end end describe '#==' do context 'when the other is a query' do context 'when the fields are equal' do let(:other) do described_class.new(db, collection_name, selector, options) end it 'returns true' do expect(message).to eq(other) end end context 'when the database is not equal' do let(:other) do described_class.new('tyler', collection_name, selector, options) end it 'returns false' do expect(message).not_to eq(other) end end context 'when the collection is not equal' do let(:other) do described_class.new(db, 'tyler', selector, options) end it 'returns false' do expect(message).not_to eq(other) end end context 'when the selector is not equal' do let(:other) do described_class.new(db, collection_name, { :a => 1 }, options) end it 'returns false' do expect(message).not_to eq(other) end end context 'when the options are not equal' do let(:other) do described_class.new(db, collection_name, selector, :skip => 2) end it 'returns false' do expect(message).not_to eq(other) end end end context 'when the other is not a query' do it 'returns false' do expect(message).not_to eq('test') end end end describe '#hash' do let(:values) do message.send(:fields).map do |field| message.instance_variable_get(field[:name]) end end it 'returns a hash of the field values' do expect(message.hash).to eq(values.hash) end end describe '#replyable?'
do it 'returns true' do expect(message).to be_replyable end end describe '#serialize' do let(:bytes) { message.serialize } include_examples 'message with a header' describe 'flags' do let(:field) { bytes.to_s[16..19] } context 'when no flags are provided' do it 'does not set any bits' do expect(field).to be_int32(0) end end context 'when flags are provided' do let(:options) { { :flags => flags } } context 'tailable cursor flag' do let(:flags) { [:tailable_cursor] } it 'sets the second bit' do expect(field).to be_int32(2) end end context 'secondary ok flag' do let(:flags) { [:secondary_ok] } it 'sets the third bit' do expect(field).to be_int32(4) end end context 'oplog replay flag' do let(:flags) { [:oplog_replay] } it 'sets the fourth bit' do expect(field).to be_int32(8) end end context 'no cursor timeout flag' do let(:flags) { [:no_cursor_timeout] } it 'sets the fifth bit' do expect(field).to be_int32(16) end end context 'await data flag' do let(:flags) { [:await_data] } it 'sets the sixth bit' do expect(field).to be_int32(32) end end context 'exhaust flag' do let(:flags) { [:exhaust] } it 'sets the seventh bit' do expect(field).to be_int32(64) end end context 'partial flag' do let(:flags) { [:partial] } it 'sets the eighth bit' do expect(field).to be_int32(128) end end context 'multiple flags' do let(:flags) { [:await_data, :secondary_ok] } it 'sets the correct bits' do expect(field).to be_int32(36) end end end end describe 'namespace' do let(:field) { bytes.to_s[20..36] } it 'serializes the namespace' do expect(field).to be_cstring(ns) end context 'when the namespace contains unicode characters' do let(:field) { bytes.to_s[20..40] } let(:collection_name) do 'områder' end it 'serializes the namespace' do expect(field).to be_cstring(ns) end end end describe 'skip' do let(:field) { bytes.to_s[37..40] } context 'when no skip is provided' do it 'serializes a zero' do expect(field).to be_int32(0) end end context 'when skip is provided' do let(:options) { { :skip => 5 } } it 'serializes the skip' do expect(field).to be_int32(options[:skip]) end end end describe 'limit' do let(:field) { bytes.to_s[41..44] } context 'when no limit is provided' do it 'serializes a zero' do expect(field).to be_int32(0) end end context 'when limit is provided' do let(:options) { { :limit => 123 } } it 'serializes the limit' do expect(field).to be_int32(options[:limit]) end end end describe 'selector' do let(:field) { bytes.to_s[45..65] } it 'serializes the selector' do expect(field).to be_bson(selector) end end describe 'project' do let(:field) { bytes.to_s[66..-1] } context 'when no projection is provided' do it 'does not serialize a projection' do expect(field).to be_empty end end context 'when projection is provided' do let(:options) { { :project => projection } } let(:projection) { { :_id => 0 } } it 'serializes the projection' do expect(field).to be_bson(projection) end end end end describe '#registry' do context 'when the class is loaded' do it 'registers the op code in the Protocol Registry' do expect(Mongo::Protocol::Registry.get(described_class::OP_CODE)).to be(described_class) end it 'creates an #op_code instance method' do expect(message.op_code).to eq(described_class::OP_CODE) end end end describe '#compress' do context 'when the selector represents a command that can be compressed' do let(:selector) do { ping: 1 } end it 'returns a compressed message' do expect(message.maybe_compress('zlib')).to be_a(Mongo::Protocol::Compressed) end end context 'when the selector represents a command for which compression
is not allowed' do Mongo::Monitoring::Event::Secure::REDACTED_COMMANDS.each do |command| let(:selector) do { command => 1 } end context "when the command is #{command}" do it 'does not allow compression for the command' do expect(message.maybe_compress('zlib')).to be(message) end end end end end end mongo-ruby-driver-2.21.3/spec/mongo/protocol/registry_spec.rb000066400000000000000000000013611505113246500243120ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'lite_spec_helper' describe Mongo::Protocol::Registry do describe ".get" do context "when the type has a corresponding class" do before do described_class.register(Mongo::Protocol::Query::OP_CODE, Mongo::Protocol::Query) end let(:klass) do described_class.get(Mongo::Protocol::Query::OP_CODE, "message") end it "returns the class" do expect(klass).to eq(Mongo::Protocol::Query) end end context "when the type has no corresponding class" do it "raises an error" do expect { described_class.get(-100) }.to raise_error(Mongo::Error::UnsupportedMessageType) end end end end mongo-ruby-driver-2.21.3/spec/mongo/protocol/reply_spec.rb000066400000000000000000000117541505113246500236020ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'lite_spec_helper' describe Mongo::Protocol::Reply do let(:length) { 78 } let(:request_id) { 0 } let(:response_to) { 0 } let(:op_code) { 1 } let(:flags) { 0 } let(:start) { 0 } let(:n_returned) { 2 } let(:cursor_id) { 999_999 } let(:doc) { { 'name' => 'Tyler' } } let(:documents) { [doc] * 2 } let(:header) do [length, request_id, response_to, op_code].pack('l 1 end end mongo-ruby-driver-2.21.3/spec/mongo/server/000077500000000000000000000000001505113246500205475ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/mongo/server/app_metadata/000077500000000000000000000000001505113246500231675ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/mongo/server/app_metadata/environment_spec.rb000066400000000000000000000225641505113246500271030ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' require 'fileutils' MOCKED_DOCKERENV_PATH = File.expand_path(File.join(Dir.pwd, '.dockerenv-mocked')) module ContainerChecking def mock_dockerenv_path before do allow_any_instance_of(Mongo::Server::AppMetadata::Environment) .to receive(:dockerenv_path) .and_return(MOCKED_DOCKERENV_PATH) end end def with_docker mock_dockerenv_path around do |example| File.write(MOCKED_DOCKERENV_PATH, 'placeholder') example.run ensure File.delete(MOCKED_DOCKERENV_PATH) end end def without_docker mock_dockerenv_path around do |example| FileUtils.rm_f(MOCKED_DOCKERENV_PATH) example.run end end def with_kubernetes local_env 'KUBERNETES_SERVICE_HOST' => 'kubernetes.default.svc.cluster.local' end def without_kubernetes local_env 'KUBERNETES_SERVICE_HOST' => nil end end describe Mongo::Server::AppMetadata::Environment do extend ContainerChecking let(:env) { described_class.new } shared_examples_for 'running in a FaaS environment' do it 'reports that a FaaS environment is detected' do expect(env.faas?).to be true end end shared_examples_for 'running outside a FaaS environment' do it 'reports that no FaaS environment is detected' do expect(env.faas?).to be false end end shared_examples_for 'not running in a Docker container' do it 'does not detect Docker' do expect(env.container || {}).not_to include :runtime end end shared_examples_for 'not running under Kubernetes' do it 'does not detect Kubernetes' do expect(env.container || {}).not_to
include :orchestrator end end shared_examples_for 'running under Kubernetes' do it 'detects that Kubernetes is present' do expect(env.container[:orchestrator]).to be == 'kubernetes' end end shared_examples_for 'running in a Docker container' do it 'detects that Docker is present' do expect(env.container[:runtime]).to be == 'docker' end end context 'when run outside of a FaaS environment' do it_behaves_like 'running outside a FaaS environment' end context 'when run in a FaaS environment' do context 'when environment is invalid due to type mismatch' do local_env( 'AWS_EXECUTION_ENV' => 'AWS_Lambda_ruby2.7', 'AWS_REGION' => 'us-east-2', 'AWS_LAMBDA_FUNCTION_MEMORY_SIZE' => 'big' ) it_behaves_like 'running outside a FaaS environment' it 'fails due to type mismatch' do expect(env.error).to match(/AWS_LAMBDA_FUNCTION_MEMORY_SIZE must be integer/) end end context 'when environment is invalid due to long string' do local_env( 'AWS_EXECUTION_ENV' => 'AWS_Lambda_ruby2.7', 'AWS_REGION' => 'a' * 512, 'AWS_LAMBDA_FUNCTION_MEMORY_SIZE' => '1024' ) it_behaves_like 'running outside a FaaS environment' it 'fails due to long string' do expect(env.error).to match(/too long/) end end context 'when environment is invalid due to multiple providers' do local_env( 'AWS_EXECUTION_ENV' => 'AWS_Lambda_ruby2.7', 'AWS_REGION' => 'us-east-2', 'AWS_LAMBDA_FUNCTION_MEMORY_SIZE' => '1024', 'FUNCTIONS_WORKER_RUNTIME' => 'ruby' ) it_behaves_like 'running outside a FaaS environment' it 'fails due to multiple providers' do expect(env.error).to match(/too many environments/) end end context 'when VERCEL and AWS are both given' do local_env( 'AWS_EXECUTION_ENV' => 'AWS_Lambda_ruby2.7', 'AWS_REGION' => 'us-east-2', 'AWS_LAMBDA_FUNCTION_MEMORY_SIZE' => '1024', 'VERCEL' => '1', 'VERCEL_REGION' => 'cdg1' ) it_behaves_like 'running in a FaaS environment' it 'prefers vercel' do expect(env.aws?).to be false expect(env.vercel?).to be true expect(env.fields[:region]).to be == 'cdg1' end end context 'when environment is invalid due to missing variable' do local_env( 'AWS_EXECUTION_ENV' => 'AWS_Lambda_ruby2.7', 'AWS_LAMBDA_FUNCTION_MEMORY_SIZE' => '1024' ) it_behaves_like 'running outside a FaaS environment' it 'fails due to missing variable' do expect(env.error).to match(/missing environment variable/) end end context 'when FaaS environment is AWS' do shared_examples_for 'running in an AWS environment' do context 'when environment is valid' do local_env( 'AWS_REGION' => 'us-east-2', 'AWS_LAMBDA_FUNCTION_MEMORY_SIZE' => '1024' ) it_behaves_like 'running in a FaaS environment' it 'recognizes AWS' do expect(env.name).to be == 'aws.lambda' expect(env.fields[:region]).to be == 'us-east-2' expect(env.fields[:memory_mb]).to be == 1024 end end end # per DRIVERS-2623, AWS_EXECUTION_ENV must be prefixed # with 'AWS_Lambda_'.
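# A minimal sketch of the guard these AWS examples exercise, assuming the
# environment type only treats AWS_EXECUTION_ENV as Lambda when it carries
# the documented prefix (the helper name below is illustrative, not the
# driver's actual internal API):
#
#   def aws_lambda_execution_env?(value)
#     value.to_s.start_with?('AWS_Lambda_')
#   end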
context 'when AWS_EXECUTION_ENV is invalid' do local_env( 'AWS_EXECUTION_ENV' => 'EC2', 'AWS_REGION' => 'us-east-2', 'AWS_LAMBDA_FUNCTION_MEMORY_SIZE' => '1024' ) it_behaves_like 'running outside a FaaS environment' end context 'when AWS_EXECUTION_ENV is detected' do local_env('AWS_EXECUTION_ENV' => 'AWS_Lambda_ruby2.7') it_behaves_like 'running in an AWS environment' end context 'when AWS_LAMBDA_RUNTIME_API is detected' do local_env('AWS_LAMBDA_RUNTIME_API' => 'lambda.aws.amazon.com/api') it_behaves_like 'running in an AWS environment' end end context 'when FaaS environment is Azure' do local_env('FUNCTIONS_WORKER_RUNTIME' => 'ruby') it_behaves_like 'running in a FaaS environment' it 'recognizes Azure' do expect(env.name).to be == 'azure.func' end end context 'when FaaS environment is GCP' do local_env( 'FUNCTION_MEMORY_MB' => '1024', 'FUNCTION_TIMEOUT_SEC' => '60', 'FUNCTION_REGION' => 'us-central1' ) shared_examples_for 'running in a GCP environment' do it_behaves_like 'running in a FaaS environment' it 'recognizes GCP' do expect(env.name).to be == 'gcp.func' expect(env.fields[:region]).to be == 'us-central1' expect(env.fields[:memory_mb]).to be == 1024 expect(env.fields[:timeout_sec]).to be == 60 end end context 'when K_SERVICE is present' do local_env('K_SERVICE' => 'servicename') it_behaves_like 'running in a GCP environment' end context 'when FUNCTION_NAME is present' do local_env('FUNCTION_NAME' => 'functionName') it_behaves_like 'running in a GCP environment' end end context 'when FaaS environment is Vercel' do local_env( 'VERCEL' => '1', 'VERCEL_REGION' => 'cdg1' ) it_behaves_like 'running in a FaaS environment' it 'recognizes Vercel' do expect(env.name).to be == 'vercel' expect(env.fields[:region]).to be == 'cdg1' end end context 'when converting environment to a hash' do local_env( 'K_SERVICE' => 'servicename', 'FUNCTION_MEMORY_MB' => '1024', 'FUNCTION_TIMEOUT_SEC' => '60', 'FUNCTION_REGION' => 'us-central1' ) it 'includes name and all fields' do expect(env.to_h).to be == { name: 'gcp.func', memory_mb: 1024, timeout_sec: 60, region: 'us-central1', } end context 'when a container is present' do with_kubernetes with_docker it 'includes a container key' do expect(env.to_h[:container]).to be == { runtime: 'docker', orchestrator: 'kubernetes' } end end context 'when no container is present' do without_kubernetes without_docker it 'does not include a container key' do expect(env.to_h).not_to include(:container) end end end end # have a specific test for this, since the tests that check # for Docker use a mocked value for the .dockerenv path. 
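# For orientation, the container detection asserted throughout this file
# reduces to two independent signals (a sketch under those assumptions, not
# the driver's verbatim implementation):
#
#   container = {}
#   container[:runtime] = 'docker' if File.exist?(dockerenv_path)
#   container[:orchestrator] = 'kubernetes' if ENV['KUBERNETES_SERVICE_HOST']
#   container # merged into the handshake's env document when non-empty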
it 'should look for dockerenv in root directory' do expect(described_class::DOCKERENV_PATH).to be == '/.dockerenv' end context 'when no container is present' do without_kubernetes without_docker it_behaves_like 'not running in a Docker container' it_behaves_like 'not running under Kubernetes' end context 'when container is present' do context 'when kubernetes is present' do without_docker with_kubernetes it_behaves_like 'not running in a Docker container' it_behaves_like 'running under Kubernetes' end context 'when docker is present' do with_docker without_kubernetes it_behaves_like 'running in a Docker container' it_behaves_like 'not running under Kubernetes' end context 'when both kubernetes and docker are present' do with_docker with_kubernetes it_behaves_like 'running in a Docker container' it_behaves_like 'running under Kubernetes' end end end mongo-ruby-driver-2.21.3/spec/mongo/server/app_metadata/truncator_spec.rb000066400000000000000000000107171505113246500265550ustar00rootroot00000000000000# frozen_string_literal: true require 'spec_helper' # Quoted from specifications/source/mongodb-handshake/handshake.rst: # # Implementors SHOULD cumulatively update fields in the following order # until the document is under the size limit: # # 1. Omit fields from env except env.name. # 2. Omit fields from os except os.type. # 3. Omit the env document entirely. # 4. Truncate platform. describe Mongo::Server::AppMetadata::Truncator do let(:truncator) { described_class.new(Marshal.load(Marshal.dump(metadata))) } let(:app_name) { 'application' } let(:driver) { { name: 'driver', version: '1.2.3' } } let(:os) { { type: 'Darwin', name: 'macOS', architecture: 'arm64', version: '13.4' } } let(:platform) { { platform: 'platform' } } let(:env) { { name: 'aws.lambda', region: 'region', memory_mb: 1024 } } let(:metadata) do BSON::Document.new.tap do |doc| doc[:application] = { name: app_name } doc[:driver] = driver doc[:os] = os doc[:platform] = platform doc[:env] = env end end let(:untruncated_length) { metadata.to_bson.to_s.length } let(:truncated_length) { truncator.document.to_bson.to_s.length } shared_examples_for 'a truncated document' do it 'is shorter' do expect(truncated_length).to be < untruncated_length end it 'is not longer than the maximum document size' do expect(truncated_length).to be <= described_class::MAX_DOCUMENT_SIZE end end describe 'MAX_DOCUMENT_SIZE' do it 'is 512 bytes' do # This test is an additional check that MAX_DOCUMENT_SIZE # has not been accidentally changed.
expect(described_class::MAX_DOCUMENT_SIZE).to be == 512 end end context 'when document does not need truncating' do it 'does not truncate anything' do expect(truncated_length).to be == untruncated_length end end context 'when modifying env is sufficient' do context 'when a single value is too long' do let(:env) { { name: 'name', a: 'a' * 1000, b: 'b' } } it 'preserves name' do expect(truncator.document[:env][:name]).to be == 'name' end it 'removes the too-long entry and keeps name' do expect(truncator.document[:env].keys).to be == %w[ name b ] end it_behaves_like 'a truncated document' end context 'when multiple values are too long' do let(:env) { { name: 'name', a: 'a' * 1000, b: 'b', c: 'c' * 1000, d: 'd' } } it 'preserves name' do expect(truncator.document[:env][:name]).to be == 'name' end it 'removes all other entries until size is satisfied' do expect(truncator.document[:env].keys).to be == %w[ name d ] end it_behaves_like 'a truncated document' end end context 'when modifying os is sufficient' do context 'when a single value is too long' do let(:os) { { type: 'type', a: 'a' * 1000, b: 'b' } } it 'truncates env' do expect(truncator.document[:env].keys).to be == %w[ name ] end it 'preserves type' do expect(truncator.document[:os][:type]).to be == 'type' end it 'removes the too-long entry and keeps type' do expect(truncator.document[:os].keys).to be == %w[ type b ] end it_behaves_like 'a truncated document' end context 'when multiple values are too long' do let(:os) { { type: 'type', a: 'a' * 1000, b: 'b', c: 'c' * 1000, d: 'd' } } it 'truncates env' do expect(truncator.document[:env].keys).to be == %w[ name ] end it 'preserves type' do expect(truncator.document[:os][:type]).to be == 'type' end it 'removes all other entries until size is satisfied' do expect(truncator.document[:os].keys).to be == %w[ type d ] end it_behaves_like 'a truncated document' end end context 'when truncating os is insufficient' do let(:env) { { name: 'n' * 1000 } } it 'truncates os' do expect(truncator.document[:os].keys).to be == %w[ type ] end it 'removes env' do expect(truncator.document.key?(:env)).to be false end it_behaves_like 'a truncated document' end context 'when platform is too long' do let(:platform) { 'n' * 1000 } it 'truncates os' do expect(truncator.document[:os].keys).to be == %w[ type ] end it 'removes env' do expect(truncator.document.key?(:env)).to be false end it 'truncates platform' do expect(truncator.document[:platform].length).to be < 1000 end end end mongo-ruby-driver-2.21.3/spec/mongo/server/app_metadata_spec.rb000066400000000000000000000110131505113246500245240ustar00rootroot00000000000000# frozen_string_literal: true require 'spec_helper' describe Mongo::Server::AppMetadata do let(:max_size) { described_class::Truncator::MAX_DOCUMENT_SIZE } let(:app_metadata) do described_class.new(cluster.options) end let(:cluster) do authorized_client.cluster end describe '#initialize' do context 'when the cluster has an app name option set' do let(:client) do authorized_client.with(app_name: :app_metadata_test) end let(:cluster) do client.cluster end it 'sets the app name' do expect(app_metadata.client_document[:application][:name]).to eq('app_metadata_test') end context 'when the app name exceeds the max length of 128' do let(:client) do authorized_client.with(app_name: "\u3042" * 43) end let(:cluster) do client.cluster end it 'raises an error' do expect { app_metadata.validated_document } .to raise_exception(Mongo::Error::InvalidApplicationName) end end end context 'when the cluster does not
have an app name option set' do it 'does not set the app name' do expect(app_metadata.client_document[:application]).to be_nil end end context 'when the client document exceeds the max of 512 bytes' do shared_examples_for 'a truncated document' do it 'is too long before validation' do expect(app_metadata.client_document.to_bson.to_s.size).to be > max_size end it 'is acceptable after validation' do app_metadata.validated_document # force validation expect(app_metadata.client_document.to_bson.to_s.size).to be <= max_size end end context 'when the os.name length is too long' do before do allow(app_metadata).to receive(:name).and_return('x' * 500) end it_behaves_like 'a truncated document' end context 'when the os.architecture length is too long' do before do allow(app_metadata).to receive(:architecture).and_return('x' * 500) end it_behaves_like 'a truncated document' end context 'when the platform length is too long' do before do allow(app_metadata).to receive(:platform).and_return('x' * 500) end it_behaves_like 'a truncated document' end end context 'when run outside of a FaaS environment' do context 'when a container is present' do local_env 'KUBERNETES_SERVICE_HOST' => 'something' it 'includes the :env key in the client document' do expect(app_metadata.client_document.key?(:env)).to be true end end context 'when no container is present' do it 'excludes the :env key from the client document' do expect(app_metadata.client_document.key?(:env)).to be false end end end context 'when run inside of a FaaS environment' do context 'when the environment is invalid' do # invalid, because it is missing the other required fields local_env('AWS_EXECUTION_ENV' => 'AWS_Lambda_ruby2.7') it 'excludes the :env key from the client document' do expect(app_metadata.client_document.key?(:env)).to be false end end context 'when the environment is valid' do # valid, because Azure requires only the one field local_env('FUNCTIONS_WORKER_RUNTIME' => 'ruby') it 'includes the :env key in the client document' do expect(app_metadata.client_document.key?(:env)).to be true expect(app_metadata.client_document[:env][:name]).to be == 'azure.func' end end end end describe '#document' do let(:document) do app_metadata.send(:document) end context 'when user is given and auth_mech is not given' do let(:app_metadata) do described_class.new(user: 'foo') end it 'includes saslSupportedMechs' do expect(document[:saslSupportedMechs]).to eq('admin.foo') end end it_behaves_like 'app metadata document' end describe '#validated_document' do it 'raises with too long app name' do app_name = 'app' * 500 expect { described_class.new(app_name: app_name).validated_document } .to raise_error(Mongo::Error::InvalidApplicationName) end it 'does not raise with correct app name' do app_name = 'app' expect { described_class.new(app_name: app_name).validated_document } .not_to raise_error end end end mongo-ruby-driver-2.21.3/spec/mongo/server/connection_auth_spec.rb000066400000000000000000000076041505113246500252750ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' # these tests fail intermittently in evergreen describe Mongo::Server::Connection do retry_test let(:address) do Mongo::Address.new(SpecConfig.instance.addresses.first) end let(:monitoring) do Mongo::Monitoring.new(monitoring: false) end let(:listeners) do Mongo::Event::Listeners.new end let(:app_metadata) do Mongo::Server::AppMetadata.new(SpecConfig.instance.test_options) end let(:cluster) do double('cluster').tap do |cl| allow(cl).to 
receive(:topology).and_return(topology) allow(cl).to receive(:app_metadata).and_return(app_metadata) allow(cl).to receive(:options).and_return({}) allow(cl).to receive(:cluster_time).and_return(nil) allow(cl).to receive(:update_cluster_time) allow(cl).to receive(:run_sdam_flow) pool = double('pool') allow(pool).to receive(:disconnect!) allow(cl).to receive(:pool).and_return(pool) end end declare_topology_double let(:server) do register_server( Mongo::Server.new(address, cluster, monitoring, listeners, SpecConfig.instance.test_options.merge(monitoring_io: false)) ) end before(:all) do ClientRegistry.instance.close_all_clients end describe '#auth_mechanism' do require_no_external_user let(:connection) do described_class.new(server, server.options) end context 'when the hello response includes saslSupportedMechs' do min_server_fcv '4.0' let(:server_options) do SpecConfig.instance.test_options.merge( user: SpecConfig.instance.test_user.name, password: SpecConfig.instance.test_user.password, auth_source: 'admin', ) end let(:app_metadata) do Mongo::Server::AppMetadata.new(server_options) end before do client = authorized_client.with(database: 'admin') info = client.database.users.info(SpecConfig.instance.test_user.name) expect(info.length).to eq(1) # this before block may have made 2 or 3 clients ClientRegistry.instance.close_all_clients end it 'uses scram256' do connection RSpec::Mocks.with_temporary_scope do pending_conn = nil Mongo::Server::PendingConnection.should receive(:new).and_wrap_original do |m, *args| pending_conn = m.call(*args) end connection.connect! expect(pending_conn.send(:default_mechanism)).to eq(:scram256) end end end context 'when the hello response indicates the auth mechanism is :scram' do require_no_external_user let(:features) do Mongo::Server::Description::Features.new(0..7) end it 'uses scram' do connection RSpec::Mocks.with_temporary_scope do expect(Mongo::Server::Description::Features).to receive(:new).and_return(features) pending_conn = nil Mongo::Server::PendingConnection.should receive(:new).and_wrap_original do |m, *args| pending_conn = m.call(*args) end connection.connect! expect(pending_conn.send(:default_mechanism)).to eq(:scram) end end end context 'when the hello response indicates the auth mechanism is :mongodb_cr' do let(:features) do Mongo::Server::Description::Features.new(0..2) end it 'uses mongodb_cr' do connection RSpec::Mocks.with_temporary_scope do expect(Mongo::Server::Description::Features).to receive(:new).and_return(features) pending_conn = nil Mongo::Server::PendingConnection.should receive(:new).and_wrap_original do |m, *args| pending_conn = m.call(*args) end connection.connect! 
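# Negotiation summary exercised by the three contexts above (a hedged
# paraphrase of the behavior under test, not a verbatim implementation):
# wire versions 0..2 imply MONGODB-CR, 0..7 imply SCRAM-SHA-1, and a hello
# response advertising SCRAM-SHA-256 in saslSupportedMechs selects :scram256.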
expect(pending_conn.send(:default_mechanism)).to eq(:mongodb_cr) end end end end end mongo-ruby-driver-2.21.3/spec/mongo/server/connection_common_spec.rb000066400000000000000000000037101505113246500256160ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'lite_spec_helper' describe Mongo::Server::ConnectionCommon do let(:subject) { described_class.new } let(:metadata) do Mongo::Server::AppMetadata.new({}) end describe '#handshake_document' do let(:document) do subject.handshake_document(metadata) end context 'with api version' do let(:metadata) do Mongo::Server::AppMetadata.new({ server_api: { version: '1' } }) end it 'returns hello document with API version' do expect(document['hello']).to eq(1) end end context 'without api version' do it 'returns legacy hello document without API version' do expect(document['isMaster']).to eq(1) end end context 'when connecting to load balancer' do let(:document) do subject.handshake_document(metadata, load_balancer: true) end it 'includes loadBalanced: true' do document['loadBalanced'].should be true end end end describe '#handshake_command' do let(:document) do subject.handshake_document(metadata, load_balancer: load_balancer) end let(:load_balancer) { false } context 'with api version' do let(:metadata) do Mongo::Server::AppMetadata.new({ server_api: { version: '1' } }) end it 'returns OP_MSG command' do expect( subject.handshake_command(document) ).to be_a(Mongo::Protocol::Msg) end end context 'with loadBalanced=true' do let(:load_balancer) { true } it 'returns OP_MSG command' do expect( subject.handshake_command(document) ).to be_a(Mongo::Protocol::Msg) end end context 'without api version' do it 'returns OP_QUERY command' do expect( subject.handshake_command(document) ).to be_a(Mongo::Protocol::Query) end end end end mongo-ruby-driver-2.21.3/spec/mongo/server/connection_pool/000077500000000000000000000000001505113246500237375ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/mongo/server/connection_pool/generation_manager_spec.rb000066400000000000000000000014701505113246500311250ustar00rootroot00000000000000# frozen_string_literal: true require 'spec_helper' describe Mongo::Server::ConnectionPool::GenerationManager do describe '#close_all_pipes' do let(:service_id) { 'test_service_id' } let(:server) { instance_double(Mongo::Server) } let(:manager) { described_class.new(server: server) } before do manager.pipe_fds(service_id: service_id) end it 'closes all pipes and removes them from the map' do expect(manager.pipe_fds(service_id: service_id).size).to eq(2) manager.instance_variable_get(:@pipe_fds)[service_id].each do |_gen, (r, w)| expect(r).to receive(:close).and_call_original expect(w).to receive(:close).and_call_original end manager.close_all_pipes expect(manager.instance_variable_get(:@pipe_fds)).to be_empty end end end mongo-ruby-driver-2.21.3/spec/mongo/server/connection_pool/populator_spec.rb000066400000000000000000000056331505113246500273320ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe Mongo::Server::ConnectionPool::Populator do require_no_linting let(:options) { {} } let(:client) do authorized_client.with(options) end let(:server) do client.cluster.next_primary end let(:pool) do server.pool end let(:populator) do register_background_thread_object( described_class.new(pool, pool.options) ) end before do # We create our own populator to test; disable pool's background populator # and clear the pool, so ours can run pool.disconnect! 
pool.stop_populator end describe '#log_warn' do it 'works' do expect do populator.log_warn('test warning') end.not_to raise_error end end describe '#run!' do context 'when the min_pool_size is zero' do let(:options) { {min_pool_size: 0} } it 'calls populate on pool once' do expect(pool).to receive(:populate).once.and_call_original populator.run! sleep 1 expect(populator.running?).to be true end end context 'when the min_pool_size is greater than zero' do let(:options) { {min_pool_size: 2, max_pool_size: 3} } it 'calls populate on the pool multiple times' do expect(pool).to receive(:populate).at_least(:once).and_call_original populator.run! sleep 1 expect(populator.running?).to be true end it 'populates the pool up to min_size' do pool.instance_variable_set(:@ready, true) populator.run! ::Utils.wait_for_condition(3) do pool.size >= 2 end expect(pool.size).to eq 2 expect(populator.running?).to be true end end context 'when populate raises a non socket related error' do it 'does not terminate the thread' do expect(pool).to receive(:populate).once.and_raise(Mongo::Auth::InvalidMechanism.new("")) populator.run! sleep 0.5 expect(populator.running?).to be true end end context 'when populate raises a socket related error' do it 'does not terminate the thread' do expect(pool).to receive(:populate).once.and_raise(Mongo::Error::SocketError) populator.run! sleep 0.5 expect(populator.running?).to be true end end context "when clearing the pool" do it "the populator is run one extra time" do expect(pool).to receive(:populate).twice populator.run! sleep 0.5 pool.disconnect! sleep 0.5 expect(populator.running?).to be true end end end describe '#stop' do it 'stops calling populate on pool and terminates the thread' do populator.run! # let populator do work and wait on semaphore sleep 0.5 expect(pool).not_to receive(:populate) populator.stop! 
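# stop! is expected to both halt the populate loop and terminate the
# background thread, so running? must report false immediately afterwards.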
expect(populator.running?).to be false end end end mongo-ruby-driver-2.21.3/spec/mongo/server/connection_pool_spec.rb000066400000000000000000001230351505113246500253020ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe Mongo::Server::ConnectionPool do let(:options) { {} } let(:server_options) do Mongo::Utils.shallow_symbolize_keys(Mongo::Client.canonicalize_ruby_options( SpecConfig.instance.all_test_options, )).tap do |opts| opts.delete(:min_pool_size) opts.delete(:max_pool_size) opts.delete(:wait_queue_timeout) end.update(options) end let(:address) do Mongo::Address.new(SpecConfig.instance.addresses.first) end let(:monitoring) do Mongo::Monitoring.new(monitoring: false) end let(:listeners) do Mongo::Event::Listeners.new end declare_topology_double let(:app_metadata) do Mongo::Server::AppMetadata.new(server_options) end let(:cluster) do double('cluster').tap do |cl| allow(cl).to receive(:topology).and_return(topology) allow(cl).to receive(:app_metadata).and_return(app_metadata) allow(cl).to receive(:options).and_return({}) allow(cl).to receive(:update_cluster_time) allow(cl).to receive(:cluster_time).and_return(nil) allow(cl).to receive(:run_sdam_flow) end end let(:server) do register_server( Mongo::Server.new(address, cluster, monitoring, listeners, {monitoring_io: false}.update(server_options) ).tap do |server| allow(server).to receive(:description).and_return(ClusterConfig.instance.primary_description) end ) end let(:pool) do register_pool(described_class.new(server, server_options)).tap do |pool| pool.ready end end let(:populate_semaphore) do pool.instance_variable_get(:@populate_semaphore) end let(:populator) do pool.instance_variable_get(:@populator) end describe '#initialize' do context 'when a min size is provided' do let (:options) do { min_pool_size: 2 } end it 'creates the pool with min size connections' do # Allow background thread to populate pool pool sleep 1 expect(pool.size).to eq(2) expect(pool.available_count).to eq(2) end it 'does not use the same objects in the pool' do expect(pool.check_out).to_not equal(pool.check_out) end end context 'when min size exceeds default max size' do let (:options) do { min_pool_size: 50 } end it 'sets max size to equal provided min size' do expect(pool.max_size).to eq(50) end end context 'when min size is provided and max size is zero (unlimited)' do let (:options) do { min_size: 10, max_size: 0 } end it 'sets max size to zero (unlimited)' do expect(pool.max_size).to eq(0) end end context 'when no min size is provided' do it 'creates the pool with no connections' do expect(pool.size).to eq(0) expect(pool.available_count).to eq(0) end it "starts the populator" do expect(populator).to be_running end end context 'sizes given as min_size and max_size' do let (:options) do { min_size: 3, max_size: 7 } end it 'sets sizes correctly' do expect(pool.min_size).to eq(3) expect(pool.max_size).to eq(7) end end context 'sizes given as min_pool_size and max_pool_size' do let (:options) do { min_pool_size: 3, max_pool_size: 7 } end it 'sets sizes correctly' do expect(pool.min_size).to eq(3) expect(pool.max_size).to eq(7) end end context 'timeout given as wait_timeout' do let (:options) do { wait_timeout: 4 } end it 'sets wait timeout correctly' do expect(pool.wait_timeout).to eq(4) end end context 'timeout given as wait_queue_timeout' do let (:options) do { wait_queue_timeout: 4 } end it 'sets wait timeout correctly' do expect(pool.wait_timeout).to eq(4) end end end describe '#max_size' do context 
'when a max pool size option is provided' do let (:options) do { max_pool_size: 3 } end it 'returns the max size' do expect(pool.max_size).to eq(3) end end context 'when no pool size option is provided' do it 'returns the default size' do expect(pool.max_size).to eq(20) end end context 'when pool is closed' do before do pool.close end it 'returns max size' do expect(pool.max_size).to eq(20) end end end describe '#wait_timeout' do context 'when the wait timeout option is provided' do let (:options) do { wait_queue_timeout: 3 } end it 'returns the wait timeout' do expect(pool.wait_timeout).to eq(3) end end context 'when the wait timeout option is not provided' do it 'returns the default wait timeout' do expect(pool.wait_timeout).to eq(10) end end end describe '#size' do context 'pool without connections' do it 'is 0' do expect(pool.size).to eq(0) end end context 'pool with a checked out connection' do before do pool.check_out end it 'is 1' do expect(pool.size).to eq(1) end end context 'pool with an available connection' do before do connection = pool.check_out pool.check_in(connection) end it 'is 1' do expect(pool.size).to eq(1) end end context 'when pool is closed' do before do pool.close end it 'raises PoolClosedError' do expect do pool.size end.to raise_error(Mongo::Error::PoolClosedError) end end end describe '#available_count' do context 'pool without connections' do it 'is 0' do expect(pool.available_count).to eq(0) end end context 'pool with a checked out connection' do before do pool.check_out end it 'is 0' do expect(pool.available_count).to eq(0) end end context 'pool with an available connection' do before do connection = pool.check_out pool.check_in(connection) end it 'is 1' do expect(pool.available_count).to eq(1) end end context 'when pool is closed' do before do pool.close end it 'raises PoolClosedError' do expect do pool.available_count end.to raise_error(Mongo::Error::PoolClosedError) end end end describe '#closed?' do context 'pool is not closed' do it 'is false' do expect(pool.closed?).to be false end end context 'pool is closed' do before do pool.close end it 'is true' do expect(pool.closed?).to be true end it "stops the populator" do expect(populator).to_not be_running end end end describe '#check_in' do let!(:pool) do server.pool end after do server.close end let(:options) do { max_pool_size: 2 } end let(:connection) do pool.check_out end context 'when a connection is checked out on the thread' do before do pool.check_in(connection) end it 'returns the connection to the pool' do expect(pool.size).to eq(1) end end shared_examples 'does not add connection to pool' do it 'disconnects connection and does not add connection to pool' do # connection was checked out expect(pool.available_count).to eq(0) expect(pool.size).to eq(1) expect(connection).to receive(:disconnect!) 
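# A stale or interrupted connection should be torn down on check-in rather
# than returned to the available queue.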
pool.check_in(connection) # connection is not added to the pool, and no replacement # connection has been created at this point expect(pool.available_count).to eq(0) expect(pool.size).to eq(0) expect(pool.check_out).not_to eq(connection) end end shared_examples 'adds connection to the pool' do it 'adds the connection to the pool' do # connection is checked out expect(pool.available_count).to eq(0) expect(pool.size).to eq(1) pool.check_in(connection) # now connection is in the queue expect(pool.available_count).to eq(1) expect(pool.size).to eq(1) expect(pool.check_out).to eq(connection) end end context 'connection of the same generation as pool' do # These tests are also applicable to load balancers, but # require different setup and assertions because load balancers # do not have a single global generation. require_topology :single, :replica_set, :sharded before do expect(pool.generation).to eq(connection.generation) end it_behaves_like 'adds connection to the pool' end context 'connection of earlier generation than pool' do # These tests are also applicable to load balancers, but # require different setup and assertions because load balancers # do not have a single global generation. require_topology :single, :replica_set, :sharded context 'when connection is not pinned' do let(:connection) do pool.check_out.tap do |connection| expect(connection).to receive(:generation).at_least(:once).and_return(0) expect(connection).not_to receive(:record_checkin!) end end before do expect(connection.generation).to be < pool.generation end it_behaves_like 'does not add connection to pool' end context 'when connection is pinned' do let(:connection) do pool.check_out.tap do |connection| allow(connection).to receive(:pinned?).and_return(true) expect(connection).to receive(:generation).at_least(:once).and_return(0) expect(connection).to receive(:record_checkin!) end end before do expect(connection.generation).to be < pool.generation end it_behaves_like 'adds connection to the pool' end end context 'connection of later generation than pool' do # These tests are also applicable to load balancers, but # require different setup and assertions because load balancers # do not have a single global generation. require_topology :single, :replica_set, :sharded let(:connection) do pool.check_out.tap do |connection| expect(connection).to receive(:generation).at_least(:once).and_return(7) expect(connection).not_to receive(:record_checkin!) end end before do expect(connection.generation > pool.generation).to be true end it_behaves_like 'does not add connection to pool' end context 'interrupted connection' do let!(:connection) do pool.check_out.tap do |connection| expect(connection).to receive(:interrupted?).at_least(:once).and_return(true) expect(connection).not_to receive(:record_checkin!) end end it_behaves_like 'does not add connection to pool' end context 'closed and interrupted connection' do let!(:connection) do pool.check_out.tap do |connection| expect(connection).to receive(:interrupted?).exactly(:once).and_return(true) expect(connection).to receive(:closed?).exactly(:once).and_return(true) expect(connection).not_to receive(:record_checkin!) expect(connection).not_to receive(:disconnect!) 
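# An already-closed connection must not be disconnected a second time;
# check-in is expected to drop it without touching the socket.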
end end it "returns immediately" do expect(pool.check_in(connection)).to be_nil end end context 'when pool is closed' do before do connection pool.close end it 'closes connection' do expect(connection.closed?).to be false expect(pool.instance_variable_get('@available_connections').length).to eq(0) pool.check_in(connection) expect(connection.closed?).to be true expect(pool.instance_variable_get('@available_connections').length).to eq(0) end end context 'when connection is checked in twice' do it 'raises an ArgumentError and does not change pool state' do pool.check_in(connection) expect do pool.check_in(connection) end.to raise_error(ArgumentError, /Trying to check in a connection which is not currently checked out by this pool.*/) expect(pool.size).to eq(1) expect(pool.check_out).to eq(connection) end end context 'when connection is checked in to a different pool' do it 'raises an ArgumentError and does not change the state of either pool' do pool_other = register_pool(described_class.new(server)) expect do pool_other.check_in(connection) end.to raise_error(ArgumentError, /Trying to check in a connection which was not checked out by this pool.*/) expect(pool.size).to eq(1) expect(pool_other.size).to eq(0) end end end describe '#check_out' do let!(:pool) do server.pool end context 'when max_size is zero (unlimited)' do let(:options) do { max_size: 0 } end it 'checks out a connection' do expect do pool.check_out end.not_to raise_error end end context 'when a connection is checked out on a different thread' do let!(:connection) do Thread.new { pool.check_out }.join end it 'returns a new connection' do expect(pool.check_out.address).to eq(server.address) end it 'does not return the same connection instance' do expect(pool.check_out).to_not eql(connection) end end context 'when connections are checked out and checked back in' do it 'pulls the connection from the front of the queue' do first = pool.check_out second = pool.check_out pool.check_in(second) pool.check_in(first) expect(pool.check_out).to be(first) end end context 'when there is an available connection which is stale' do # These tests are also applicable to load balancers, but # require different setup and assertions because load balancers # do not have a single global generation. require_topology :single, :replica_set, :sharded let(:options) do { max_pool_size: 2, max_idle_time: 0.1 } end context 'when connection is not pinned' do let(:connection) do pool.check_out.tap do |connection| allow(connection).to receive(:generation).and_return(pool.generation) allow(connection).to receive(:record_checkin!).and_return(connection) expect(connection).to receive(:last_checkin).at_least(:once).and_return(Time.now - 10) end end before do pool.check_in(connection) end it 'closes stale connection and creates a new one' do expect(connection).to receive(:disconnect!) expect(Mongo::Server::Connection).to receive(:new).and_call_original pool.check_out end end context 'when connection is pinned' do let(:connection) do pool.check_out.tap do |connection| allow(connection).to receive(:generation).and_return(pool.generation) allow(connection).to receive(:record_checkin!).and_return(connection) expect(connection).to receive(:pinned?).and_return(true) end end before do pool.check_in(connection) end it 'does not close stale connection' do expect(connection).not_to receive(:disconnect!) 
pool.check_out end end end context 'when there are no available connections' do let(:options) do { max_pool_size: 1, min_pool_size: 0 } end context 'when the max size is not reached' do it 'creates a new connection' do expect(Mongo::Server::Connection).to receive(:new).once.and_call_original expect(pool.check_out).to be_a(Mongo::Server::Connection) expect(pool.size).to eq(1) end end context 'when the max size is reached' do context 'without service_id' do it 'raises a timeout error' do expect(Mongo::Server::Connection).to receive(:new).once.and_call_original pool.check_out expect { pool.check_out }.to raise_error(::Timeout::Error) expect(pool.size).to eq(1) end end context 'with connection_global_id' do require_topology :load_balanced let(:connection_global_id) do pool.with_connection do |connection| connection.global_id.should_not be nil connection.global_id end end it 'raises a timeout error' do expect(Mongo::Server::Connection).to receive(:new).once.and_call_original connection_global_id pool.check_out(connection_global_id: connection_global_id) expect { pool.check_out(connection_global_id: connection_global_id) }.to raise_error(Mongo::Error::ConnectionCheckOutTimeout) expect(pool.size).to eq(1) end it 'waits for the timeout' do expect(Mongo::Server::Connection).to receive(:new).once.and_call_original connection_global_id pool.check_out(connection_global_id: connection_global_id) start_time = Mongo::Utils.monotonic_time expect { pool.check_out(connection_global_id: connection_global_id) }.to raise_error(Mongo::Error::ConnectionCheckOutTimeout) elapsed_time = Mongo::Utils.monotonic_time - start_time elapsed_time.should > 1 end end end end context 'when waiting for a connection to be checked in' do let!(:connection) { pool.check_out } before do allow(connection).to receive(:record_checkin!).and_return(connection) Thread.new do sleep(0.5) pool.check_in(connection) end.join end it 'returns the checked in connection' do expect(pool.check_out).to eq(connection) end end context 'when pool is closed' do before do pool.close end it 'raises PoolClosedError' do expect do pool.check_out end.to raise_error(Mongo::Error::PoolClosedError) end end context 'when connection set up throws an error during check out' do let(:client) do authorized_client end let(:pool) do client.cluster.next_primary.pool end before do pool.ready end it 'raises an error and emits ConnectionCheckOutFailedEvent' do pool subscriber = Mrss::EventSubscriber.new client.subscribe(Mongo::Monitoring::CONNECTION_POOL, subscriber) subscriber.clear_events! 
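# Mongo::Auth.get is stubbed to raise below, so connection establishment
# fails during the authentication step of connection set-up; CMAP requires
# check_out to surface this as a ConnectionCheckOutFailed event with
# reason :connection_error, which is what this example asserts.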
expect(Mongo::Auth).to receive(:get).at_least(:once).and_raise(Mongo::Error) expect { pool.check_out }.to raise_error(Mongo::Error) expect(pool.size).to eq(0) checkout_failed_events = subscriber.published_events.select do |event| event.is_a?(Mongo::Monitoring::Event::Cmap::ConnectionCheckOutFailed) end expect(checkout_failed_events.size).to eq(1) expect(checkout_failed_events.first.reason).to be(:connection_error) end context "when the error is caused by close" do let(:pool) { server.pool } let(:options) { { max_size: 1 } } it "raises an error and returns immediately" do expect(pool.max_size).to eq(1) Timeout::timeout(1) do c1 = pool.check_out thread = Thread.new do c2 = pool.check_out end sleep 0.1 expect do pool.close thread.join end.to raise_error(Mongo::Error::PoolClosedError) end end end end context "when the pool is paused" do require_no_linting before do pool.pause end it "raises a PoolPausedError" do expect do pool.check_out end.to raise_error(Mongo::Error::PoolPausedError) end end end describe "#ready" do require_no_linting let(:pool) do register_pool(described_class.new(server, server_options)) end context "when the pool is closed" do before do pool.close end it "raises an error" do expect do pool.ready end.to raise_error(Mongo::Error::PoolClosedError) end end context "when readying an initialized pool" do before do pool.ready end it "starts the populator" do expect(populator).to be_running end it "readies the pool" do expect(pool).to be_ready end end context "when readying a paused pool" do before do pool.ready pool.pause end it "readies the pool" do pool.ready expect(pool).to be_ready end it "signals the populate semaphore" do RSpec::Mocks.with_temporary_scope do expect(populate_semaphore).to receive(:signal).and_wrap_original do |m, *args| m.call(*args) end pool.ready end end end end describe "#ready?" do require_no_linting let(:pool) do register_pool(described_class.new(server, server_options)) end shared_examples "pool is ready" do it "is ready" do expect(pool).to be_ready end end shared_examples "pool is not ready" do it "is not ready" do expect(pool).to_not be_ready end end context "before readying the pool" do it_behaves_like "pool is not ready" end context "after readying the pool" do before do pool.ready end it_behaves_like "pool is ready" end context "after readying and pausing the pool" do before do pool.ready pool.pause end it_behaves_like "pool is not ready" end context "after readying, pausing, and readying the pool" do before do pool.ready pool.pause pool.ready end it_behaves_like "pool is ready" end context "after closing the pool" do before do pool.ready pool.close end it_behaves_like "pool is not ready" end end describe "#pause" do require_no_linting let(:pool) do register_pool(described_class.new(server, server_options)) end context "when the pool is closed" do before do pool.close end it "raises an error" do expect do pool.pause end.to raise_error(Mongo::Error::PoolClosedError) end end context "when the pool is paused" do before do pool.ready pool.pause end it "is still paused" do expect(pool).to be_paused pool.pause expect(pool).to be_paused end end context "when the pool is ready" do before do pool.ready end it "is still paused" do expect(pool).to be_ready pool.pause expect(pool).to be_paused end it "does not stop the populator" do expect(populator).to be_running end end end describe "#paused?" 
do require_no_linting let(:pool) do register_pool(described_class.new(server, server_options)) end shared_examples "pool is paused" do it "is paused" do expect(pool).to be_paused end end shared_examples "pool is not paused" do it "is not paused" do expect(pool).to_not be_paused end end context "before readying the pool" do it_behaves_like "pool is paused" end context "after readying the pool" do before do pool.ready end it_behaves_like "pool is not paused" end context "after readying and pausing the pool" do before do pool.ready pool.pause end it_behaves_like "pool is paused" end context "after readying, pausing, and readying the pool" do before do pool.ready pool.pause pool.ready end it_behaves_like "pool is not paused" end context "after closing the pool" do before do pool.ready pool.close end it "raises an error" do expect do pool.paused? end.to raise_error(Mongo::Error::PoolClosedError) end end end describe "#closed?" do require_no_linting let(:pool) do register_pool(described_class.new(server, server_options)) end shared_examples "pool is closed" do it "is closed" do expect(pool).to be_closed end end shared_examples "pool is not closed" do it "is not closed" do expect(pool).to_not be_closed end end context "before readying the pool" do it_behaves_like "pool is not closed" end context "after readying the pool" do before do pool.ready end it_behaves_like "pool is not closed" end context "after readying and pausing the pool" do before do pool.ready pool.pause end it_behaves_like "pool is not closed" end context "after closing the pool" do before do pool.ready pool.close end it_behaves_like "pool is closed" end end describe '#disconnect!' do context 'when pool is closed' do before do pool.close end it 'does nothing' do expect do pool.disconnect! end.not_to raise_error end end end describe '#clear' do let(:checked_out_connections) { pool.instance_variable_get(:@checked_out_connections) } let(:available_connections) { pool.instance_variable_get(:@available_connections) } let(:pending_connections) { pool.instance_variable_get(:@pending_connections) } let(:interrupt_connections) { pool.instance_variable_get(:@interrupt_connections) } def create_pool(min_pool_size) opts = SpecConfig.instance.test_options.merge(max_pool_size: 3, min_pool_size: min_pool_size) described_class.new(server, opts).tap do |pool| pool.ready # kill background thread to test disconnect behavior pool.stop_populator expect(pool.instance_variable_get('@populator').running?).to be false # make pool be of size 2 so that it has enqueued connections # when told to disconnect c1 = pool.check_out c2 = pool.check_out allow(c1).to receive(:record_checkin!).and_return(c1) allow(c2).to receive(:record_checkin!).and_return(c2) pool.check_in(c1) pool.check_in(c2) expect(pool.size).to eq(2) expect(pool.available_count).to eq(2) end end shared_examples_for 'disconnects and removes all connections in the pool and bumps generation' do # These tests are also applicable to load balancers, but # require different setup and assertions because load balancers # do not have a single global generation. require_topology :single, :replica_set, :sharded require_no_linting it 'disconnects and removes and bumps' do old_connections = [] pool.instance_variable_get('@available_connections').each do |connection| expect(connection).to receive(:disconnect!) 
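# Remember the pre-clear connections so we can assert below that the
# repopulated pool hands out a brand-new connection from generation 2.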
old_connections << connection end expect(pool.size).to eq(2) expect(pool.available_count).to eq(2) RSpec::Mocks.with_temporary_scope do allow(pool.server).to receive(:unknown?).and_return(true) pool.disconnect! end expect(pool.size).to eq(0) expect(pool.available_count).to eq(0) expect(pool).to be_paused pool.ready new_connection = pool.check_out expect(old_connections).not_to include(new_connection) expect(new_connection.generation).to eq(2) end end context 'min size is 0' do let(:pool) do register_pool(create_pool(0)) end it_behaves_like 'disconnects and removes all connections in the pool and bumps generation' end context 'min size is not 0' do let(:pool) do register_pool(create_pool(1)) end it_behaves_like 'disconnects and removes all connections in the pool and bumps generation' end context 'when pool is closed' do before do pool.close end it 'raises PoolClosedError' do expect do pool.clear end.to raise_error(Mongo::Error::PoolClosedError) end end context "when interrupting in use connections" do context "when there's checked out connections" do require_topology :single, :replica_set, :sharded require_no_linting before do 3.times { pool.check_out } connection = pool.check_out pool.check_in(connection) expect(checked_out_connections.length).to eq(3) expect(available_connections.length).to eq(1) pending_connections << pool.send(:create_connection) end it "interrupts the connections" do expect(pool).to receive(:populate).exactly(3).and_call_original RSpec::Mocks.with_temporary_scope do allow(pool.server).to receive(:unknown?).and_return(true) pool.clear(lazy: true, interrupt_in_use_connections: true) end ::Utils.wait_for_condition(3) do pool.size == 0 end expect(pool.size).to eq(0) expect(available_connections).to be_empty expect(checked_out_connections).to be_empty expect(pending_connections).to be_empty end end end context "when in load-balanced mode" do require_topology :load_balanced it "does not pause the pool" do allow(pool.server).to receive(:unknown?).and_return(true) pool.clear expect(pool).to_not be_paused end end end describe '#close' do context 'when pool is not closed' do it 'closes the pool' do expect(pool).not_to be_closed pool.close expect(pool).to be_closed end end context 'when pool is closed' do before do pool.close end it 'is a no-op' do pool.close expect(pool).to be_closed end end it 'closes all pipes' do expect(pool.generation_manager).to receive(:close_all_pipes).and_call_original pool.close end end describe '#inspect' do let(:options) do { min_pool_size: 3, max_pool_size: 7, wait_timeout: 9, wait_queue_timeout: 9 } end let!(:pool) do server.pool end after do server.close pool.close # this will no longer be needed after server close kills bg thread end it 'includes the object id' do expect(pool.inspect).to include(pool.object_id.to_s) end it 'includes the min size' do expect(pool.inspect).to include('min_size=3') end it 'includes the max size' do expect(pool.inspect).to include('max_size=7') end it 'includes the wait timeout' do expect(pool.inspect).to include('wait_timeout=9') end it 'includes the current size' do expect(pool.inspect).to include('current_size=') end =begin obsolete it 'includes the queue inspection' do expect(pool.inspect).to include(pool.__send__(:queue).inspect) end =end it 'indicates the pool is not closed' do expect(pool.inspect).not_to include('closed') end context 'when pool is closed' do before do pool.close end it 'returns inspection string' do expect(pool.inspect).to include('min_size=') end it 'indicates the pool is closed' do 
expect(pool.inspect).to include('closed') end end end describe '#with_connection' do let!(:pool) do server.pool end context 'when a connection cannot be checked out' do it 'does not add the connection to the pool' do # fails because with_connection raises the SocketError which is not caught anywhere allow(pool).to receive(:check_out).and_raise(Mongo::Error::SocketError) expect do pool.with_connection { |c| c } end.to raise_error(Mongo::Error::SocketError) expect(pool.size).to eq(0) end end context 'when pool is closed' do before do pool.close end it 'raises PoolClosedError' do expect do pool.with_connection { |c| c } end.to raise_error(Mongo::Error::PoolClosedError) end end end describe '#close_idle_sockets' do let!(:pool) do server.pool end context 'when there is a max_idle_time specified' do let(:options) do { max_pool_size: 2, max_idle_time: 0.5 } end after do Timecop.return end =begin obsolete context 'when the connections have not been checked out' do before do queue.each do |conn| expect(conn).not_to receive(:disconnect!) end sleep(0.5) pool.close_idle_sockets end it 'does not close any sockets' do expect(queue.none? { |c| c.connected? }).to be(true) end end =end context 'when connections have been checked out and returned to the pool' do context 'when min size is 0' do let(:options) do { max_pool_size: 2, min_pool_size: 0, max_idle_time: 0.5 } end before do c1 = pool.check_out c2 = pool.check_out pool.check_in(c1) pool.check_in(c2) sleep(0.5) expect(c1).to receive(:disconnect!).and_call_original expect(c2).to receive(:disconnect!).and_call_original pool.close_idle_sockets end it 'closes all idle sockets' do expect(pool.size).to be(0) end end context 'when min size is > 0' do before do # Kill background thread to test close_idle_socket behavior pool.stop_populator expect(pool.instance_variable_get('@populator').running?).to be false end context 'when more than the number of min_size are checked out' do let(:options) do { max_pool_size: 5, min_pool_size: 3, max_idle_time: 0.5 } end it 'closes and removes connections with idle sockets and does not connect new ones' do first = pool.check_out second = pool.check_out third = pool.check_out fourth = pool.check_out fifth = pool.check_out pool.check_in(fifth) expect(fifth).to receive(:disconnect!).and_call_original expect(fifth).not_to receive(:connect!) Timecop.travel(Time.now + 1) expect(pool.size).to be(5) expect(pool.available_count).to be(1) pool.close_idle_sockets expect(pool.size).to be(4) expect(pool.available_count).to be(0) expect(fifth.connected?).to be(false) end end context 'when between 0 and min_size number of connections are checked out' do let(:options) do { max_pool_size: 5, min_pool_size: 3, max_idle_time: 0.5 } end it 'closes and removes connections with idle sockets and does not connect new ones' do first = pool.check_out second = pool.check_out third = pool.check_out fourth = pool.check_out fifth = pool.check_out pool.check_in(third) pool.check_in(fourth) pool.check_in(fifth) expect(third).to receive(:disconnect!).and_call_original expect(third).not_to receive(:connect!) expect(fourth).to receive(:disconnect!).and_call_original expect(fourth).not_to receive(:connect!) 
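# With the populator stopped in the before block above, nothing replaces
# the closed connections, so the pool is allowed to fall below
# min_pool_size here (it ends at size 2 with min_pool_size: 3).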
              expect(fifth).to receive(:disconnect!).and_call_original
              # A negative message expectation cannot call the original
              # method, so unlike disconnect! above there is no
              # .and_call_original here.
              expect(fifth).not_to receive(:connect!)

              Timecop.travel(Time.now + 1)
              expect(pool.size).to be(5)
              expect(pool.available_count).to be(3)

              pool.close_idle_sockets

              expect(pool.size).to be(2)
              expect(pool.available_count).to be(0)

              expect(third.connected?).to be(false)
              expect(fourth.connected?).to be(false)
              expect(fifth.connected?).to be(false)
            end
          end
        end
      end
    end

    context 'when available connections include idle and non-idle ones' do
      let(:options) do
        { max_pool_size: 2, max_idle_time: 0.5 }
      end

      let(:connection) do
        pool.check_out.tap do |con|
          allow(con).to receive(:disconnect!)
        end
      end

      it 'disconnects all expired and only expired connections' do
        # Since per-test cleanup will close the pool and disconnect
        # the connection, we need to explicitly define the scope for the
        # assertions
        RSpec::Mocks.with_temporary_scope do
          c1 = pool.check_out
          expect(c1).to receive(:disconnect!)
          c2 = pool.check_out
          expect(c2).not_to receive(:disconnect!)

          pool.check_in(c1)
          Timecop.travel(Time.now + 1)
          pool.check_in(c2)

          expect(pool.size).to eq(2)
          expect(pool.available_count).to eq(2)

          expect(c1).not_to receive(:connect!)
          expect(c2).not_to receive(:connect!)

          pool.close_idle_sockets

          expect(pool.size).to eq(1)
          expect(pool.available_count).to eq(1)
        end
      end
    end

    context 'when there is no max_idle_time specified' do
      let(:connection) do
        conn = pool.check_out
        conn.connect!
        pool.check_in(conn)
        conn
      end

      it 'does not close any sockets' do
        # Since per-test cleanup will close the pool and disconnect
        # the connection, we need to explicitly define the scope for the
        # assertions
        RSpec::Mocks.with_temporary_scope do
          expect(connection).not_to receive(:disconnect!)
          pool.close_idle_sockets
          expect(connection.connected?).to be(true)
        end
      end
    end
  end

  describe '#populate' do
    require_no_linting

    before do
      # Disable the populator and clear the pool to isolate populate behavior
      pool.stop_populator
      pool.disconnect!
      # Manually mark the pool ready.
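      # (@ready is normally flipped by Pool#ready; setting the ivar
      # directly avoids restarting the populator thread we just stopped.)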
      pool.instance_variable_set('@ready', true)
    end

    let(:options) { {min_pool_size: 2, max_pool_size: 3} }

    context 'when pool size is at least min_pool_size' do
      before do
        first_connection = pool.check_out
        second_connection = pool.check_out
        expect(pool.size).to eq 2
        expect(pool.available_count).to eq 0
      end

      it 'does not create a connection and returns false' do
        expect(pool.populate).to be false
        expect(pool.size).to eq 2
        expect(pool.available_count).to eq 0
      end
    end

    context 'when pool size is less than min_pool_size' do
      before do
        first_connection = pool.check_out
        expect(pool.size).to eq 1
        expect(pool.available_count).to eq 0
      end

      it 'creates one connection, connects it, and returns true' do
        expect(pool.populate).to be true
        expect(pool.size).to eq 2
        expect(pool.available_count).to eq 1
      end
    end

    context 'when pool is closed' do
      before do
        pool.close
      end

      it 'does not create a connection and returns false' do
        expect(pool.populate).to be false

        # Can't just check pool size; size errors when pool is closed
        expect(pool.instance_variable_get('@available_connections').length).to eq(0)
        expect(pool.instance_variable_get('@checked_out_connections').length).to eq(0)
        expect(pool.instance_variable_get('@pending_connections').length).to eq(0)
      end
    end

    context 'when connect fails with socket related error once' do
      before do
        i = 0
        expect(pool).to receive(:connect_connection).exactly(:twice).and_wrap_original { |m, *args|
          i += 1
          if i == 1
            raise Mongo::Error::SocketError
          else
            m.call(*args)
          end
        }
        expect(pool.size).to eq 0
      end

      it 'retries then succeeds in creating a connection' do
        expect(pool.populate).to be true
        expect(pool.size).to eq 1
        expect(pool.available_count).to eq 1
      end
    end

    context 'when connect fails with socket related error twice' do
      before do
        expect(pool).to receive(:connect_connection).exactly(:twice).and_raise(Mongo::Error::SocketError)
        expect(pool.size).to eq 0
      end

      it 'retries, raises the second error, and fails to create a connection' do
        expect { pool.populate }.to raise_error(Mongo::Error::SocketError)
        expect(pool.size).to eq 0
      end
    end

    context 'when connect fails with non socket related error' do
      before do
        expect(pool).to receive(:connect_connection).once.and_raise(Mongo::Auth::InvalidMechanism.new(""))
        expect(pool.size).to eq 0
      end

      it 'does not retry, raises the error, and fails to create a connection' do
        expect { pool.populate }.to raise_error(Mongo::Auth::InvalidMechanism)
        expect(pool.size).to eq 0
      end
    end
  end
end

# ==== mongo-ruby-driver-2.21.3/spec/mongo/server/connection_spec.rb ====

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

# fails intermittently in evergreen
describe Mongo::Server::Connection do
  class ConnectionSpecTestException < Exception; end

  clean_slate_for_all

  let(:generation_manager) do
    Mongo::Server::ConnectionPool::GenerationManager.new(server: server)
  end

  let!(:address) do
    default_address
  end

  let(:monitoring) do
    Mongo::Monitoring.new(monitoring: false)
  end

  let(:listeners) do
    Mongo::Event::Listeners.new
  end

  let(:app_metadata) do
    Mongo::Server::AppMetadata.new(authorized_client.cluster.options)
  end

  let(:cluster) do
    double('cluster').tap do |cl|
      allow(cl).to receive(:topology).and_return(topology)
      allow(cl).to receive(:app_metadata).and_return(app_metadata)
      allow(cl).to receive(:options).and_return({})
      allow(cl).to receive(:cluster_time).and_return(nil)
      allow(cl).to receive(:update_cluster_time)
      allow(cl).to receive(:run_sdam_flow)
    end
  end

  declare_topology_double

  let(:server_options) {
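    # monitoring_io: false keeps the server from spawning real monitor
    # threads, which keeps these connection-level tests deterministic.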
SpecConfig.instance.test_options.merge(monitoring_io: false) } let(:server) do register_server( Mongo::Server.new(address, cluster, monitoring, listeners, server_options.merge( # Normally the load_balancer option is set by the cluster load_balancer: ClusterConfig.instance.topology == :load_balanced, )) ) end let(:monitored_server) do register_server( Mongo::Server.new(address, cluster, monitoring, listeners, SpecConfig.instance.test_options.merge(monitoring_io: false) ).tap do |server| allow(server).to receive(:description).and_return(ClusterConfig.instance.primary_description) expect(server).not_to be_unknown end ) end let(:pool) do double('pool').tap do |pool| allow(pool).to receive(:close) allow(pool).to receive(:generation_manager).and_return(generation_manager) end end describe '#connect!' do shared_examples_for 'keeps server type and topology' do it 'does not mark server unknown' do expect(server).not_to receive(:unknown!) error end end shared_examples_for 'marks server unknown' do it 'marks server unknown' do expect(server).to receive(:unknown!) error end end context 'when no socket exists' do let(:connection) do described_class.new(server, server.options.merge(connection_pool: pool)) end let(:result) do connection.connect! end let(:socket) do connection.send(:socket) end it 'returns true' do expect(result).to be true end it 'creates a socket' do result expect(socket).to_not be_nil end it 'connects the socket' do result expect(socket).to be_alive end shared_examples_for 'failing connection' do it 'raises an exception' do expect(error).to be_a(Exception) end it 'clears socket' do error expect(connection.send(:socket)).to be nil end context 'when connection fails' do let(:description) do double('description').tap do |description| allow(description).to receive(:arbiter?).and_return(false) end end let(:first_pending_connection) do double('pending connection 1').tap do |conn| conn.should receive(:handshake_and_authenticate!).and_raise(exception) end end let(:second_pending_connection) do double('pending connection 2').tap do |conn| conn.should receive(:handshake_and_authenticate!).and_raise(ConnectionSpecTestException) end end it 'attempts to reconnect if asked to connect again' do RSpec::Mocks.with_temporary_scope do Mongo::Server::PendingConnection.should receive(:new).ordered.and_return(first_pending_connection) Mongo::Server::PendingConnection.should receive(:new).ordered.and_return(second_pending_connection) expect do connection.connect! end.to raise_error(exception) expect do connection.connect! end.to raise_error(ConnectionSpecTestException) end end end end shared_examples_for 'failing connection with server diagnostics' do it_behaves_like 'failing connection' it 'adds server diagnostics' do error.message.should =~ /on #{connection.address}/ end end shared_examples_for 'logs a warning' do require_warning_clean it 'logs a warning' do messages = [] expect(Mongo::Logger.logger).to receive(:warn) do |msg| messages << msg end expect(error).not_to be nil messages.any? { |msg| msg.include?(expected_message) }.should be true end end shared_examples_for 'adds server diagnostics' do require_warning_clean it 'adds server diagnostics' do messages = [] expect(Mongo::Logger.logger).to receive(:warn) do |msg| messages << msg end expect(error).not_to be nil messages.any? { |msg| msg =~ /on #{connection.address}/ }.should be true end end context 'when #handshake! 
dependency raises a non-network exception' do let(:exception) do Mongo::Error::OperationFailure.new end let(:error) do # The exception is mutated when notes are added to it expect_any_instance_of(Mongo::Socket).to receive(:write).and_raise(exception.dup) begin connection.connect! rescue Exception => e e else nil end end let(:expected_message) do "MONGODB | Failed to handshake with #{address}: #{error.class}: #{error}" end # The server diagnostics only apply to network exceptions. # If non-network exceptions can be legitimately raised during # handshake, and it makes sense to indicate which server the # corresponding request was sent to, we should apply server # diagnostics to non-network errors also. it_behaves_like 'failing connection' it_behaves_like 'keeps server type and topology' it_behaves_like 'logs a warning' end context 'when #handshake! dependency raises a network exception' do let(:exception) do Mongo::Error::SocketError.new.tap do |exc| allow(exc).to receive(:service_id).and_return('fake') end end let(:error) do # The exception is mutated when notes are added to it expect_any_instance_of(Mongo::Socket).to receive(:write).and_raise(exception) allow(connection).to receive(:service_id).and_return('fake') begin connection.connect! rescue Exception => e e else nil end end let(:expected_message) do "MONGODB | Failed to handshake with #{address}: #{error.class}: #{error}" end it_behaves_like 'failing connection with server diagnostics' it_behaves_like 'marks server unknown' it_behaves_like 'logs a warning' it_behaves_like 'adds server diagnostics' end context 'when #authenticate! raises an exception' do require_auth # because the mock/stub flow here doesn't cover the flow used by # the X.509 authentication mechanism... forbid_x509_auth let(:server_options) do Mongo::Client.canonicalize_ruby_options( SpecConfig.instance.all_test_options, ).update(monitoring_io: false) end let(:exception) do Mongo::Error::OperationFailure.new end let(:error) do # Speculative auth - would be reported as handshake failure expect(Mongo::Auth).to receive(:get).ordered.and_call_original # The actual authentication call expect(Mongo::Auth).to receive(:get).ordered.and_raise(exception) expect(connection.send(:socket)).to be nil begin connection.connect! rescue Exception => e e else nil end end let(:expected_message) do "MONGODB | Failed to authenticate to #{address}: #{error.class}: #{error}" end it_behaves_like 'failing connection' it_behaves_like 'logs a warning' end context 'when a non-Mongo exception is raised' do let(:exception) do SystemExit.new end let(:error) do expect_any_instance_of(Mongo::Server::PendingConnection).to receive(:authenticate!).and_raise(exception) begin connection.connect! 
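        # The begin/rescue-else idiom below returns the raised exception
        # (or nil) so examples can assert on the error object without
        # aborting the example itself.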
rescue Exception => e e else nil end end it_behaves_like 'failing connection' end end context 'when a socket exists' do let(:connection) do described_class.new(server, server.options.merge(connection_pool: pool)) end let(:socket) do connection.send(:socket) end it 'keeps the socket alive' do expect(connection.connect!).to be true expect(connection.connect!).to be true expect(socket).to be_alive end it 'retains socket object' do expect(connection.connect!).to be true socket_id = connection.send(:socket).object_id expect(connection.connect!).to be true new_socket_id = connection.send(:socket).object_id expect(new_socket_id).to eq(socket_id) end end =begin These assertions require a working cluster with working SDAM flow, which the tests do not configure shared_examples_for 'does not disconnect connection pool' do it 'does not disconnect non-monitoring sockets' do allow(server).to receive(:pool).and_return(pool) expect(pool).not_to receive(:disconnect!) error end end shared_examples_for 'disconnects connection pool' do it 'disconnects non-monitoring sockets' do expect(server).to receive(:pool).at_least(:once).and_return(pool) expect(pool).to receive(:disconnect!).and_return(true) error end end =end let(:auth_mechanism) do if ClusterConfig.instance.server_version >= '3' Mongo::Auth::Scram else Mongo::Auth::CR end end context 'when user credentials exist' do require_no_external_user let(:server) { monitored_server } context 'when the user is not authorized' do let(:connection) do described_class.new( server, SpecConfig.instance.test_options.merge( user: 'notauser', password: 'password', database: SpecConfig.instance.test_db, heartbeat_frequency: 30, connection_pool: pool, ) ) end let(:error) do begin connection.send(:connect!) rescue => ex ex else nil end end context 'not checking pool disconnection' do before do allow(cluster).to receive(:pool).with(server).and_return(pool) allow(pool).to receive(:disconnect!).and_return(true) end it 'raises an error' do expect(error).to be_a(Mongo::Auth::Unauthorized) end #it_behaves_like 'disconnects connection pool' it_behaves_like 'marks server unknown' end # need a separate context here, otherwise disconnect expectation # is ignored due to allowing disconnects in the other context context 'checking pool disconnection' do #it_behaves_like 'disconnects connection pool' end end context 'socket timeout during auth' do let(:connection) do described_class.new( server, SpecConfig.instance.test_options.merge( :user => SpecConfig.instance.test_user.name, :password => SpecConfig.instance.test_user.password, :database => SpecConfig.instance.test_user.database ) ) end let(:error) do expect_any_instance_of(auth_mechanism).to receive(:login).and_raise(Mongo::Error::SocketTimeoutError) begin connection.send(:connect!) rescue => ex ex else nil end end it 'propagates the error' do expect(error).to be_a(Mongo::Error::SocketTimeoutError) end #it_behaves_like 'does not disconnect connection pool' it_behaves_like 'keeps server type and topology' end context 'non-timeout socket exception during auth' do let(:connection) do described_class.new( server, SpecConfig.instance.test_options.merge( :user => SpecConfig.instance.test_user.name, :password => SpecConfig.instance.test_user.password, :database => SpecConfig.instance.test_user.database ) ) end let(:exception) do Mongo::Error::SocketError.new.tap do |exc| if server.load_balancer? 
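        # Only load-balanced deployments attach a service_id to errors;
        # faking one here lets the per-service pool-clearing paths run
        # without a real load-balanced deployment.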
allow(exc).to receive(:service_id).and_return('fake') end end end let(:error) do expect_any_instance_of(auth_mechanism).to receive(:login).and_raise(exception) begin connection.send(:connect!) rescue => ex ex else nil end end it 'propagates the error' do expect(error).to be_a(Mongo::Error::SocketError) end #it_behaves_like 'disconnects connection pool' it_behaves_like 'marks server unknown' end describe 'when the user is authorized' do let(:connection) do described_class.new( server, SpecConfig.instance.test_options.merge( user: SpecConfig.instance.test_user.name, password: SpecConfig.instance.test_user.password, database: SpecConfig.instance.test_user.database, connection_pool: pool, ) ) end before do connection.connect! end it 'sets the connection as connected' do expect(connection).to be_connected end end end context 'connecting to arbiter' do require_topology :replica_set before(:all) do unless ENV['HAVE_ARBITER'] skip 'Test requires an arbiter in the deployment' end end let(:arbiter_server) do authorized_client.cluster.servers_list.each do |server| server.scan! end server = authorized_client.cluster.servers_list.detect do |server| server.arbiter? end.tap do |server| raise 'No arbiter in the deployment' unless server end end shared_examples_for 'does not authenticate' do let(:client) do new_local_client([address], SpecConfig.instance.test_options.merge( :user => 'bogus', :password => 'bogus', :database => 'bogus' ).merge(connect: :direct), ) end let(:connection) do described_class.new( server, ) end let(:ping) do client.database.command(ping: 1) end it 'does not authenticate' do ClientRegistry.instance.close_all_clients expect_any_instance_of(Mongo::Server::Connection).not_to receive(:authenticate!) expect(ping.documents.first['ok']).to eq(1) rescue nil end end context 'without me mismatch' do let(:address) do arbiter_server.address.to_s end it_behaves_like 'does not authenticate' end context 'with me mismatch' do let(:address) do "#{ClusterConfig.instance.alternate_address.host}:#{arbiter_server.address.port}" end it_behaves_like 'does not authenticate' end end context 'when the server returns unknown saslSupportedMechs' do min_server_version '4.0' let(:connection) do described_class.new(server, server.options.merge(connection_pool: pool)) end before do expect_any_instance_of(Mongo::Server::PendingConnection).to receive(:get_handshake_response).and_wrap_original do |original_method, *args| original_method.call(*args).tap do |result| if result.documents.first.fetch('saslSupportedMechs', nil).is_a?(Array) result.documents.first['saslSupportedMechs'].append('unknownMechanism') end end end end it 'does not raise an error' do expect { connection.connect! }.not_to raise_error end end end describe '#disconnect!' do context 'when a socket is not connected' do let(:connection) do described_class.new(server, server.options.merge(connection_pool: pool)) end it 'does not raise an error' do expect(connection.disconnect!).to be true end end context 'when a socket is connected' do let(:connection) do described_class.new(server, server.options.merge(connection_pool: pool)) end before do connection.connect! connection.disconnect! 
end it 'disconnects the socket' do expect(connection.send(:socket)).to be_nil end end end describe '#dispatch' do require_no_required_api_version let(:server) { monitored_server } let(:context) { Mongo::Operation::Context.new } let!(:connection) do described_class.new( server, SpecConfig.instance.test_options.merge( database: SpecConfig.instance.test_user.database, connection_pool: pool, ).merge(Mongo::Utils.shallow_symbolize_keys(Mongo::Client.canonicalize_ruby_options( SpecConfig.instance.credentials_or_external_user( user: SpecConfig.instance.test_user.name, password: SpecConfig.instance.test_user.password, ), ))) ).tap do |connection| connection.connect! end end (0..2).each do |i| let("msg#{i}".to_sym) do Mongo::Protocol::Msg.new( [], {}, {ping: 1, :$db => SpecConfig.instance.test_db} ) end end context 'when providing a single message' do let(:reply) do connection.dispatch([ msg0 ], context) end it 'it dispatches the message to the socket' do expect(reply.payload['reply']['ok']).to eq(1.0) end end context 'when providing multiple messages' do let(:reply) do connection.dispatch([ msg0, msg1 ], context) end it 'raises ArgumentError' do expect do reply end.to raise_error(ArgumentError, 'Can only dispatch one message at a time') end end context 'when the response_to does not match the request_id' do before do connection.dispatch([ msg0 ], context) # Fake a query for which we did not read the response. See RUBY-1117 allow(msg1).to receive(:replyable?) { false } connection.dispatch([ msg1 ], context) end it 'raises an UnexpectedResponse error' do expect { connection.dispatch([ msg0 ], context) }.to raise_error(Mongo::Error::UnexpectedResponse, /Got response for request ID \d+ but expected response for request ID \d+/) end it 'marks connection perished' do expect { connection.dispatch([ msg0 ], context) }.to raise_error(Mongo::Error::UnexpectedResponse) connection.should be_error end it 'makes the connection no longer usable' do expect { connection.dispatch([ msg0 ], context) }.to raise_error(Mongo::Error::UnexpectedResponse) expect { connection.dispatch([ msg0 ], context) }.to raise_error(Mongo::Error::ConnectionPerished) end end context 'when a request is interrupted (Thread.kill)' do require_no_required_api_version before do authorized_collection.delete_many connection.dispatch([ msg0 ], context) end it 'closes the socket and does not use it for subsequent requests' do t = Thread.new { # Kill the thread just before the reply is read allow(Mongo::Protocol::Reply).to receive(:deserialize_header) { t.kill && !t.alive? } connection.dispatch([ msg1 ], context) } t.join allow(Mongo::Protocol::Message).to receive(:deserialize_header).and_call_original resp = connection.dispatch([ msg2 ], context) expect(resp.payload['reply']['ok']).to eq(1.0) end end context 'when the message exceeds the max size' do require_no_linting let(:command) do Mongo::Protocol::Msg.new( [], {}, {ping: 1, padding: 'x'*16384, :$db => SpecConfig.instance.test_db} ) end let(:reply) do connection.dispatch([ command ], context) end it 'checks the size against the max bson size' do # 100 works for non-x509 auth. # 10 is needed for x509 auth due to smaller payloads, apparently. 
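      # Hedged sketch of the guard this stub exercises (illustrative only;
      # bson_document_size is not the driver's actual internal name, and
      # the real check happens in the message serialization path):
      #
      #   if bson_document_size > description.max_bson_object_size
      #     raise Mongo::Error::MaxBSONSize
      #   end
      #
      # Returning 10 from max_bson_object_size makes any real command
      # document exceed the limit.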
expect_any_instance_of(Mongo::Server::Description).to receive( :max_bson_object_size).at_least(:once).and_return(10) expect do reply end.to raise_exception(Mongo::Error::MaxBSONSize) end end context 'when a network error occurs' do let(:server) do authorized_client.cluster.next_primary.tap do |server| # to ensure the server stays in unknown state for the duration # of the test, i.e. to avoid racing with the monitor thread # which may put the server back into non-unknown state before # we can verify that the server was marked unknown, kill off # the monitor thread. unless ClusterConfig.instance.topology == :load_balanced server.monitor.instance_variable_get('@thread').kill end end end let(:socket) do connection.connect! connection.instance_variable_get(:@socket) end context 'when a non-timeout socket error occurs' do before do expect(socket).to receive(:write).and_raise(Mongo::Error::SocketError) end let(:result) do expect do connection.dispatch([ msg0 ], context) end.to raise_error(Mongo::Error::SocketError) end it 'marks connection perished' do result expect(connection).to be_error end context 'in load-balanced topology' do require_topology :load_balanced it 'disconnects connection pool for service id' do connection.global_id.should_not be nil RSpec::Mocks.with_temporary_scope do expect(server.pool).to receive(:disconnect!).with( service_id: connection.service_id ) result end end it 'does not mark server unknown' do expect(server).not_to be_unknown result expect(server).not_to be_unknown end end context 'in non-lb topologies' do require_topology :single, :replica_set, :sharded it 'disconnects connection pool' do expect(server.pool).to receive(:disconnect!) result end it 'marks server unknown' do expect(server).not_to be_unknown result expect(server).to be_unknown end end it 'does not request server scan' do expect(server.scan_semaphore).not_to receive(:signal) result end end context 'when a socket timeout occurs' do before do expect(socket).to receive(:write).and_raise(Mongo::Error::SocketTimeoutError) end let(:result) do expect do connection.dispatch([ msg0 ], context) end.to raise_error(Mongo::Error::SocketTimeoutError) end it 'marks connection perished' do result expect(connection).to be_error end =begin These assertions require a working cluster with working SDAM flow, which the tests do not configure it 'does not disconnect connection pool' do expect(server.pool).not_to receive(:disconnect!) result end =end it 'does not mark server unknown' do expect(server).not_to be_unknown result expect(server).not_to be_unknown end end end context 'when a socket timeout is set on client' do let(:connection) do described_class.new(server, socket_timeout: 10) end it 'is propagated to connection timeout' do expect(connection.timeout).to eq(10) end end context 'when an operation never completes' do let(:client) do authorized_client.with(socket_timeout: 1.5, # Read retries would cause the reads to be attempted twice, # thus making the find take twice as long to time out. 
retry_reads: false, max_read_retries: 0) end before do authorized_collection.insert_one(test: 1) client.cluster.next_primary end it 'times out and raises SocketTimeoutError' do start = Mongo::Utils.monotonic_time begin Timeout::timeout(1.5 + 15) do client[authorized_collection.name].find("$where" => "sleep(2000) || true").first end rescue => ex end_time = Mongo::Utils.monotonic_time expect(ex).to be_a(Mongo::Error::SocketTimeoutError) expect(ex.message).to match(/Took more than 1.5 seconds to receive data/) else fail 'Expected a timeout' end # allow 1.5 seconds +- 0.5 seconds expect(end_time - start).to be_within(1).of(2) end context 'when the socket_timeout is negative' do let(:connection) do described_class.new(server, server.options.merge(connection_pool: pool)).tap do |connection| connection.connect! end end before do expect(msg0).to receive(:replyable?) { false } connection.send(:deliver, msg0, context) connection.send(:socket).instance_variable_set(:@timeout, -(Time.now.to_i)) end let(:reply) do Mongo::Protocol::Message.deserialize(connection.send(:socket), 16*1024*1024, msg0.request_id) end it 'raises a timeout error' do expect { reply }.to raise_exception(Mongo::Error::SocketTimeoutError) end end end end describe '#initialize' do context 'when host and port are provided' do let(:connection) do described_class.new(server, server.options.merge(connection_pool: pool)) end it 'sets the address' do expect(connection.address).to eq(server.address) end it 'sets id' do expect(connection.id).to eq(1) end context 'multiple connections' do it 'use incrementing ids' do expect(connection.id).to eq(1) second_connection = described_class.new(server, server.options.merge(connection_pool: pool)) expect(second_connection.id).to eq(2) end end context 'two pools for different servers' do let(:server2) do register_server( Mongo::Server.new(address, cluster, monitoring, listeners, server_options.merge( load_balancer: ClusterConfig.instance.topology == :load_balanced, ) ) ) end before do allow(server).to receive(:unknown?).and_return(false) allow(server2).to receive(:unknown?).and_return(false) end it 'ids do not share namespace' do server.pool.with_connection do |conn| expect(conn.id).to eq(1) end server2.pool.with_connection do |conn| expect(conn.id).to eq(1) end end end it 'sets the socket to nil' do expect(connection.send(:socket)).to be_nil end context 'when timeout is not set in client options' do let(:server_options) do SpecConfig.instance.test_options.merge(monitoring_io: false, socket_timeout: nil) end it 'does not set the timeout to the default' do expect(connection.timeout).to be_nil end end end context 'when timeout options are provided' do let(:connection) do described_class.new(server, socket_timeout: 10) end it 'sets the timeout' do expect(connection.timeout).to eq(10) end end context 'when ssl options are provided' do let(:ssl_options) do { :ssl => true, :ssl_key => 'file', :ssl_key_pass_phrase => 'iamaphrase' } end let(:connection) do described_class.new(server, ssl_options) end it 'sets the ssl options' do expect(connection.send(:ssl_options)).to eq(ssl_options) end end context 'when ssl is false' do context 'when ssl options are provided' do let(:ssl_options) do { :ssl => false, :ssl_key => 'file', :ssl_key_pass_phrase => 'iamaphrase' } end let(:connection) do described_class.new(server, ssl_options) end it 'does not set the ssl options' do expect(connection.send(:ssl_options)).to eq(ssl: false) end end context 'when ssl options are not provided' do let(:ssl_options) do { :ssl => false 
} end let(:connection) do described_class.new(server, ssl_options) end it 'does not set the ssl options' do expect(connection.send(:ssl_options)).to eq(ssl: false) end end end context 'when authentication options are provided' do require_no_external_user let(:connection) do described_class.new( server, user: SpecConfig.instance.test_user.name, password: SpecConfig.instance.test_user.password, database: SpecConfig.instance.test_db, auth_mech: :mongodb_cr, connection_pool: pool, ) end let(:user) do Mongo::Auth::User.new( database: SpecConfig.instance.test_db, user: SpecConfig.instance.test_user.name, password: SpecConfig.instance.test_user.password ) end it 'sets the auth options' do expect(connection.options[:user]).to eq(user.name) end end end context 'when different timeout options are set' do let(:client) do authorized_client.with(options) end let(:server) do client.cluster.next_primary end let(:address) do server.address end let(:connection) do described_class.new(server, server.options.merge(connection_pool: pool)) end context 'when a connect_timeout is in the options' do context 'when a socket_timeout is in the options' do let(:options) do SpecConfig.instance.test_options.merge(connect_timeout: 3, socket_timeout: 5) end before do connection.connect! end it 'uses the connect_timeout for the address' do expect(connection.address.options[:connect_timeout]).to eq(3) end it 'uses the socket_timeout as the socket_timeout' do expect(connection.send(:socket).timeout).to eq(5) end end context 'when a socket_timeout is not in the options' do let(:options) do SpecConfig.instance.test_options.merge(connect_timeout: 3, socket_timeout: nil) end before do connection.connect! end it 'uses the connect_timeout for the address' do expect(connection.address.options[:connect_timeout]).to eq(3) end it 'does not use a socket_timeout' do expect(connection.send(:socket).timeout).to be(nil) end end end context 'when a connect_timeout is not in the options' do context 'when a socket_timeout is in the options' do let(:options) do SpecConfig.instance.test_options.merge(connect_timeout: nil, socket_timeout: 5) end before do connection.connect! end it 'does not specify connect_timeout for the address' do expect(connection.address.options[:connect_timeout]).to be nil end it 'uses the socket_timeout' do expect(connection.send(:socket).timeout).to eq(5) end end context 'when a socket_timeout is not in the options' do let(:options) do SpecConfig.instance.test_options.merge(connect_timeout: nil, socket_timeout: nil) end before do connection.connect! 
        end

        it 'does not specify connect_timeout for the address' do
          expect(connection.address.options[:connect_timeout]).to be nil
        end

        it 'does not use a socket_timeout' do
          expect(connection.send(:socket).timeout).to be(nil)
        end
      end
    end
  end

  describe '#app_metadata' do
    context 'when all options are identical to server' do
      let(:connection) do
        described_class.new(server, server.options.merge(connection_pool: pool))
      end

      it 'is the same object as server app_metadata' do
        expect(connection.app_metadata).not_to be nil
        expect(connection.app_metadata).to be server.app_metadata
      end
    end

    context 'when auth options are identical to server' do
      let(:connection) do
        described_class.new(server, server.options.merge(socket_timeout: 2, connection_pool: pool))
      end

      it 'is the same object as server app_metadata' do
        expect(connection.app_metadata).not_to be nil
        expect(connection.app_metadata).to be server.app_metadata
      end
    end

    context 'when auth options differ from server' do
      require_no_external_user

      let(:connection) do
        described_class.new(server, server.options.merge(user: 'foo', connection_pool: pool))
      end

      it 'is different object from server app_metadata' do
        expect(connection.app_metadata).not_to be nil
        expect(connection.app_metadata).not_to be server.app_metadata
      end

      it 'includes request auth mechanism' do
        document = connection.app_metadata.send(:document)
        expect(document[:saslSupportedMechs]).to eq('admin.foo')
      end
    end
  end

  describe '#generation' do
    context 'non-lb' do
      require_topology :single, :replica_set, :sharded

      before do
        allow(server).to receive(:unknown?).and_return(false)
      end

      it 'is set' do
        server.with_connection do |conn|
          conn.service_id.should be nil
          conn.generation.should be_a(Integer)
        end
      end

      context 'clean slate' do
        clean_slate

        before do
          allow(server).to receive(:unknown?).and_return(false)
        end

        it 'starts from 1' do
          server.with_connection do |conn|
            conn.service_id.should be nil
            conn.generation.should == 1
          end
        end
      end
    end

    context 'lb' do
      require_topology :load_balanced

      it 'is set' do
        server.with_connection do |conn|
          conn.service_id.should_not be nil
          conn.generation.should be_a(Integer)
        end
      end

      context 'clean slate' do
        clean_slate

        it 'starts from 1' do
          server.with_connection do |conn|
            conn.service_id.should_not be nil
            conn.generation.should == 1
          end
        end
      end
    end
  end
end

# ==== mongo-ruby-driver-2.21.3/spec/mongo/server/description/features_spec.rb ====

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe Mongo::Server::Description::Features do
  let(:features) do
    described_class.new(wire_versions, default_address)
  end

  describe '#initialize' do
    context 'when the server wire version range is the same' do
      let(:wire_versions) do
        0..3
      end

      it 'sets the server wire version range' do
        expect(features.server_wire_versions).to eq(0..3)
      end
    end

    context 'when the server wire version range min is higher' do
      let(:wire_versions) do
        described_class::DRIVER_WIRE_VERSIONS.max+1..described_class::DRIVER_WIRE_VERSIONS.max+2
      end

      it 'raises an exception' do
        expect {
          features.check_driver_support!
}.to raise_error(Mongo::Error::UnsupportedFeatures) end end context 'when the server wire version range max is higher' do let(:wire_versions) do 0..4 end it 'sets the server wire version range' do expect(features.server_wire_versions).to eq(0..4) end end context 'when the server wire version range max is lower' do let(:wire_versions) do described_class::DRIVER_WIRE_VERSIONS.min-2..described_class::DRIVER_WIRE_VERSIONS.min-1 end it 'raises an exception' do expect { features.check_driver_support! }.to raise_error(Mongo::Error::UnsupportedFeatures) end end context 'when the server wire version range max is lower' do let(:wire_versions) do 0..2 end it 'sets the server wire version range' do expect(features.server_wire_versions).to eq(0..2) end end end describe '#collation_enabled?' do context 'when the wire range includes 5' do let(:wire_versions) do 0..5 end it 'returns true' do expect(features).to be_collation_enabled end end context 'when the wire range does not include 5' do let(:wire_versions) do 0..2 end it 'returns false' do expect(features).to_not be_collation_enabled end end end describe '#max_staleness_enabled?' do context 'when the wire range includes 5' do let(:wire_versions) do 0..5 end it 'returns true' do expect(features).to be_max_staleness_enabled end end context 'when the wire range does not include 5' do let(:wire_versions) do 0..2 end it 'returns false' do expect(features).to_not be_max_staleness_enabled end end end describe '#find_command_enabled?' do context 'when the wire range includes 4' do let(:wire_versions) do 0..4 end it 'returns true' do expect(features).to be_find_command_enabled end end context 'when the wire range does not include 4' do let(:wire_versions) do 0..2 end it 'returns false' do expect(features).to_not be_find_command_enabled end end end describe '#list_collections_enabled?' do context 'when the wire range includes 3' do let(:wire_versions) do 0..3 end it 'returns true' do expect(features).to be_list_collections_enabled end end context 'when the wire range does not include 3' do let(:wire_versions) do 0..2 end it 'returns false' do expect(features).to_not be_list_collections_enabled end end end describe '#list_indexes_enabled?' do context 'when the wire range includes 3' do let(:wire_versions) do 0..3 end it 'returns true' do expect(features).to be_list_indexes_enabled end end context 'when the wire range does not include 3' do let(:wire_versions) do 0..2 end it 'returns false' do expect(features).to_not be_list_indexes_enabled end end end describe '#write_command_enabled?' do context 'when the wire range includes 2' do let(:wire_versions) do 0..3 end it 'returns true' do expect(features).to be_write_command_enabled end end context 'when the wire range does not include 2' do let(:wire_versions) do 0..1 end it 'returns false' do expect { features.check_driver_support! }.to raise_exception(Mongo::Error::UnsupportedFeatures) end end end describe '#scram_sha_1_enabled?' do context 'when the wire range includes 3' do let(:wire_versions) do 0..3 end it 'returns true' do expect(features).to be_scram_sha_1_enabled end end context 'when the wire range does not include 3' do let(:wire_versions) do 0..2 end it 'returns false' do expect(features).to_not be_scram_sha_1_enabled end end end describe '#get_more_comment_enabled?' 
do
    context 'when the wire range includes 9' do
      let(:wire_versions) do
        0..9
      end

      it 'returns true' do
        expect(features).to be_get_more_comment_enabled
      end
    end

    context 'when the wire range does not include 9' do
      let(:wire_versions) do
        0..8
      end

      it 'returns false' do
        expect(features).to_not be_get_more_comment_enabled
      end
    end
  end
end

# ==== mongo-ruby-driver-2.21.3/spec/mongo/server/description_query_methods_spec.rb ====

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

# For conciseness these tests are arranged by description types
# rather than by methods being tested, as is customary
describe Mongo::Server::Description do
  let(:address) do
    Mongo::Address.new(authorized_primary.address.to_s)
  end

  let(:desc_options) { {} }
  let(:ok) { 1 }
  let(:description) { described_class.new(address, desc_options) }

  shared_examples_for 'is unknown' do
    it 'is unknown' do
      expect(description).to be_unknown
    end

    %w( arbiter ghost hidden mongos passive primary secondary standalone other ).each do |type|
      it "is not #{type}" do
        expect(description.send("#{type}?")).to be false
      end
    end

    it 'is not data-bearing' do
      expect(description.data_bearing?).to be false
    end
  end

  context 'unknown' do
    context 'empty description' do
      it_behaves_like 'is unknown'
    end
  end

  context 'ghost' do
    let(:desc_options) { {'isreplicaset' => true, 'minWireVersion' => 2, 'maxWireVersion' => 8, 'ok' => ok} }

    it 'is ghost' do
      expect(description).to be_ghost
    end

    %w( arbiter hidden mongos passive primary secondary standalone other unknown ).each do |type|
      it "is not #{type}" do
        expect(description.send("#{type}?")).to be false
      end
    end

    it 'is not data-bearing' do
      expect(description.data_bearing?).to be false
    end

    context 'ok: 0' do
      let(:ok) { 0 }

      it_behaves_like 'is unknown'
    end
  end

  context 'mongos' do
    let(:desc_options) { {'msg' => 'isdbgrid', 'minWireVersion' => 2, 'maxWireVersion' => 8, 'ok' => ok} }

    it 'is mongos' do
      expect(description).to be_mongos
    end

    %w( arbiter hidden passive primary secondary standalone other unknown ghost ).each do |type|
      it "is not #{type}" do
        expect(description.send("#{type}?")).to be false
      end
    end

    it 'is data-bearing' do
      expect(description.data_bearing?).to be true
    end

    context 'ok: 0' do
      let(:ok) { 0 }

      it_behaves_like 'is unknown'
    end
  end

  context 'primary' do
    let(:desc_options) { {'isWritablePrimary' => true, 'minWireVersion' => 2, 'maxWireVersion' => 8, 'setName' => 'foo', 'ok' => ok} }

    it 'is primary' do
      expect(description).to be_primary
    end

    %w( arbiter hidden passive mongos secondary standalone other unknown ghost ).each do |type|
      it "is not #{type}" do
        expect(description.send("#{type}?")).to be false
      end
    end

    it 'is data-bearing' do
      expect(description.data_bearing?).to be true
    end

    context 'ok: 0' do
      let(:ok) { 0 }

      it_behaves_like 'is unknown'
    end
  end

  context 'secondary' do
    let(:desc_options) { {'secondary' => true, 'minWireVersion' => 2, 'maxWireVersion' => 8, 'setName' => 'foo', 'ok' => ok} }

    it 'is secondary' do
      expect(description).to be_secondary
    end

    %w( arbiter hidden passive mongos primary standalone other unknown ghost ).each do |type|
      it "is not #{type}" do
        expect(description.send("#{type}?")).to be false
      end
    end

    it 'is data-bearing' do
      expect(description.data_bearing?).to be true
    end

    context 'ok: 0' do
      let(:ok) { 0 }

      it_behaves_like 'is unknown'
    end

    it 'is not passive' do
      expect(description).not_to be_passive
    end

    context 'passive' do
      let(:desc_options) { {'secondary' => true, 'minWireVersion' => 2, 'maxWireVersion' => 8, 'setName' => 'foo', 'passive' => true, 'ok' => ok} }

      it 'is passive' do
        expect(description).to be_passive
      end

      it 'is data-bearing' do
        expect(description.data_bearing?).to be true
      end

      context 'ok: 0' do
        let(:ok) { 0 }

        it_behaves_like 'is unknown'

        it 'is not passive' do
          expect(description).not_to be_passive
        end
      end
    end
  end

  context 'arbiter' do
    let(:desc_options) { {'arbiterOnly' => true, 'minWireVersion' => 2, 'maxWireVersion' => 8, 'setName' => 'foo', 'ok' => ok} }

    it 'is arbiter' do
      expect(description).to be_arbiter
    end

    %w( secondary hidden passive mongos primary standalone other unknown ghost ).each do |type|
      it "is not #{type}" do
        expect(description.send("#{type}?")).to be false
      end
    end

    it 'is not data-bearing' do
      expect(description.data_bearing?).to be false
    end

    context 'ok: 0' do
      let(:ok) { 0 }

      it_behaves_like 'is unknown'
    end
  end

  context 'standalone' do
    let(:desc_options) { {'minWireVersion' => 2, 'maxWireVersion' => 8, 'ok' => ok} }

    it 'is standalone' do
      expect(description).to be_standalone
    end

    %w( secondary hidden passive mongos primary arbiter other unknown ghost ).each do |type|
      it "is not #{type}" do
        expect(description.send("#{type}?")).to be false
      end
    end

    it 'is data-bearing' do
      expect(description.data_bearing?).to be true
    end

    context 'ok: 0' do
      let(:ok) { 0 }

      it_behaves_like 'is unknown'
    end
  end

  context 'other' do
    shared_examples_for 'is other' do
      it 'is other' do
        expect(description).to be_other
      end

      %w( secondary passive mongos primary arbiter standalone unknown ghost ).each do |type|
        it "is not #{type}" do
          expect(description.send("#{type}?")).to be false
        end
      end

      it 'is not data-bearing' do
        expect(description.data_bearing?).to be false
      end

      context 'ok: 0' do
        let(:ok) { 0 }

        it_behaves_like 'is unknown'
      end
    end

    context 'hidden: true' do
      let(:desc_options) { {'setName' => 'foo', 'minWireVersion' => 2, 'maxWireVersion' => 8, 'hidden' => true, 'ok' => ok} }

      it_behaves_like 'is other'

      it 'is hidden' do
        expect(description).to be_hidden
      end
    end

    context 'not hidden: true' do
      let(:desc_options) { {'setName' => 'foo', 'minWireVersion' => 2, 'maxWireVersion' => 8, 'ok' => ok} }

      it_behaves_like 'is other'

      it 'is not hidden' do
        expect(description).not_to be_hidden
      end
    end
  end
end

# ==== mongo-ruby-driver-2.21.3/spec/mongo/server/description_spec.rb ====

# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe Mongo::Server::Description do
  %w[ismaster isWritablePrimary].each do |primary_param|
    context "#{primary_param} as primary parameter" do
      let(:replica) do
        {
          'setName' => 'mongodb_set',
          primary_param => true,
          'secondary' => false,
          'hosts' => [
            '127.0.0.1:27018',
            '127.0.0.1:27019'
          ],
          'arbiters' => [
            '127.0.0.1:27120'
          ],
          'primary' => authorized_primary.address.to_s,
          'tags' => { 'rack' => 'a' },
          'me' => '127.0.0.1:27019',
          'maxBsonObjectSize' => 16777216,
          'maxMessageSizeBytes' => 48000000,
          'maxWriteBatchSize' => 1000,
          'maxWireVersion' => 2,
          'minWireVersion' => 1,
          'localTime' => Time.now,
          'lastWrite' => { 'lastWriteDate' => Time.now },
          'logicalSessionTimeoutMinutes' => 7,
          'operationTime' => 1,
          '$clusterTime' => 1,
          'connectionId' => 11,
          'ok' => 1
        }
      end

      let(:address) do
        Mongo::Address.new(authorized_primary.address.to_s)
      end

      let(:monitoring) do
        Mongo::Monitoring.new(monitoring: false)
      end

      declare_topology_double

      let(:cluster) do
        double('cluster').tap do |cl|
          allow(cl).to receive(:topology).and_return(topology)
          allow(cl).to receive(:app_metadata).and_return(app_metadata)
          allow(cl).to receive(:options).and_return({})
end end describe '#arbiters' do context 'when the replica set has arbiters' do let(:description) do described_class.new(address, replica) end it 'returns the arbiters' do expect(description.arbiters).to eq([ '127.0.0.1:27120' ]) end end context 'when the replica set has no arbiters' do let(:description) do described_class.new(address, {}) end it 'returns an empty array' do expect(description.arbiters).to be_empty end end context 'when the addresses are not lowercase' do let(:config) do replica.merge( { 'arbiters' => [ 'SERVER:27017' ], } ) end let(:description) do described_class.new(address, config) end it 'normalizes the addresses to lowercase' do expect(description.arbiters).to eq(['server:27017']) end end end describe '#hosts' do let(:description) do described_class.new(address, replica) end it 'returns all the hosts in the replica set' do expect(description.hosts).to eq([ '127.0.0.1:27018', '127.0.0.1:27019' ]) end context 'when the addresses are not lowercase' do let(:config) do replica.merge( { 'hosts' => [ 'SERVER:27017' ], } ) end let(:description) do described_class.new(address, config) end it 'normalizes the addresses to lowercase' do expect(description.hosts).to eq(['server:27017']) end end end describe '#max_bson_object_size' do let(:description) do described_class.new(address, replica) end it 'returns the value' do expect(description.max_bson_object_size).to eq(16777216) end end describe '#max_message_size' do let(:description) do described_class.new(address, replica) end it 'returns the value' do expect(description.max_message_size).to eq(48000000) end end describe '#max_write_batch_size' do let(:description) do described_class.new(address, replica) end it 'returns the value' do expect(description.max_write_batch_size).to eq(1000) end end describe '#max_wire_version' do context 'when the max wire version is provided' do let(:description) do described_class.new(address, replica) end it 'returns the value' do expect(description.max_wire_version).to eq(2) end end context 'when the max wire version is not provided' do let(:description) do described_class.new(address, {}) end it 'returns the default' do expect(description.max_wire_version).to eq(0) end end end describe '#min_wire_version' do context 'when the min wire version is provided' do let(:description) do described_class.new(address, replica) end it 'returns the value' do expect(description.min_wire_version).to eq(1) end end context 'when the min wire version is not provided' do let(:description) do described_class.new(address, {}) end it 'returns the default' do expect(description.min_wire_version).to eq(0) end end end describe '#tags' do context 'when the server has tags' do let(:description) do described_class.new(address, replica) end it 'returns the tags' do expect(description.tags).to eq(replica['tags']) end end context 'when the server does not have tags' do let(:config) do { primary_param => true } end let(:description) do described_class.new(address, config) end it 'returns an empty hash' do expect(description.tags).to eq({}) end end end describe '#passives' do context 'when passive servers exists' do let(:description) do described_class.new(address, { 'passives' => [ '127.0.0.1:27025' ] }) end it 'returns a list of the passives' do expect(description.passives).to eq([ '127.0.0.1:27025' ]) end end context 'when no passive servers exist' do let(:description) do described_class.new(address, replica) end it 'returns an empty array' do expect(description.passives).to be_empty end end context 'when the addresses are 
not lowercase' do let(:config) do replica.merge( { 'passives' => [ 'SERVER:27017' ], } ) end let(:description) do described_class.new(address, config) end it 'normalizes the addresses to lowercase' do expect(description.passives).to eq(['server:27017']) end end end describe '#primary?' do context 'when the server is a primary' do context 'when the hostname contains no capital letters' do let(:description) do described_class.new(address, replica) end it 'returns true' do expect(description).to be_primary end end context 'when the hostname contains capital letters' do let(:description) do described_class.new('localhost:27017', { primary_param => true, 'ok' => 1, 'minWireVersion' => 2, 'maxWireVersion' => 3, 'primary' => 'LOCALHOST:27017', 'setName' => 'itsASet!'}) end it 'returns true' do expect(description).to be_primary end end end end describe '#average_round_trip_time' do let(:description) do described_class.new(address, { 'secondary' => false }, average_round_trip_time: 4.5) end it 'defaults to nil' do expect(described_class.new(address).average_round_trip_time).to be nil end it 'can be set via the constructor' do expect(description.average_round_trip_time).to eq(4.5) end end describe '#replica_set_name' do context 'when the server is in a replica set' do let(:description) do described_class.new(address, replica) end it 'returns the replica set name' do expect(description.replica_set_name).to eq('mongodb_set') end end context 'when the server is not in a replica set' do let(:description) do described_class.new(address, {}) end it 'returns nil' do expect(description.replica_set_name).to be_nil end end end describe '#servers' do let(:config) do replica.merge({ 'passives' => [ '127.0.0.1:27025' ]}) end let(:description) do described_class.new(address, config) end it 'returns the hosts + arbiters + passives' do expect(description.servers).to eq( [ '127.0.0.1:27018', '127.0.0.1:27019', '127.0.0.1:27120', '127.0.0.1:27025' ] ) end end describe '#server_type' do context 'when the server is an arbiter' do let(:description) do described_class.new(address, { 'arbiterOnly' => true, 'minWireVersion' => 2, 'maxWireVersion' => 3, 'setName' => 'test', 'ok' => 1 }) end it 'returns :arbiter' do expect(description.server_type).to eq(:arbiter) end end context 'when the server is a ghost' do let(:description) do described_class.new(address, { 'isreplicaset' => true, 'minWireVersion' => 2, 'maxWireVersion' => 3, 'ok' => 1 }) end it 'returns :ghost' do expect(description.server_type).to eq(:ghost) end end context 'when the server is a mongos' do let(:config) do { 'msg' => 'isdbgrid', primary_param => true, 'minWireVersion' => 2, 'maxWireVersion' => 3, 'ok' => 1 } end let(:description) do described_class.new(address, config) end it 'returns :sharded' do expect(description.server_type).to eq(:sharded) end context 'when client and server addresses are different' do let(:config) do { 'msg' => 'isdbgrid', primary_param => true, 'minWireVersion' => 2, 'maxWireVersion' => 3, 'ok' => 1, 'me' => '127.0.0.1', } end let(:address) do Mongo::Address.new('localhost') end it 'returns :sharded' do expect(description.server_type).to eq(:sharded) end end end context 'when the server is a primary' do let(:description) do described_class.new(address, replica) end it 'returns :primary' do expect(description.server_type).to eq(:primary) end end context 'when the server is a secondary' do let(:description) do described_class.new(address, { 'secondary' => true, 'minWireVersion' =>
2, 'maxWireVersion' => 3, 'setName' => 'test', 'ok' => 1 }) end it 'returns :secondary' do expect(description.server_type).to eq(:secondary) end end context 'when the server is standalone' do let(:description) do described_class.new(address, { primary_param => true, 'minWireVersion' => 2, 'maxWireVersion' => 3, 'ok' => 1 }) end it 'returns :standalone' do expect(description.server_type).to eq(:standalone) end end context 'when the server is hidden' do let(:description) do described_class.new(address, { primary_param => false, 'minWireVersion' => 2, 'maxWireVersion' => 3, 'setName' => 'test', 'hidden' => true, 'ok' => 1 }) end it 'returns :other' do expect(description.server_type).to eq(:other) end end context 'when the server is other' do let(:description) do described_class.new(address, { primary_param => false, 'minWireVersion' => 2, 'maxWireVersion' => 3, 'setName' => 'test', 'ok' => 1 }) end it 'returns :other' do expect(description.server_type).to eq(:other) end end context 'when the description has no configuration' do let(:description) do described_class.new(address) end it 'returns :unknown' do expect(description.server_type).to eq(:unknown) end end end describe '#is_server?' do let(:listeners) do Mongo::Event::Listeners.new end let(:server) do Mongo::Server.new(address, cluster, monitoring, listeners, monitoring_io: false) end let(:description) do described_class.new(address, {}) end context 'when the server address matches the description address' do it 'returns true' do expect(description.is_server?(server)).to be(true) end end context 'when the server address does not match the description address' do let(:other_address) do Mongo::Address.new('127.0.0.1:27020') end let(:server) do Mongo::Server.new(other_address, cluster, monitoring, listeners, monitoring_io: false) end it 'returns false' do expect(description.is_server?(server)).to be(false) end end end describe '#me_mismatch?' do let(:description) do described_class.new(address, config) end context 'when the server address matches the me field' do let(:config) do replica.merge('me' => address.to_s) end it 'returns false' do expect(description.me_mismatch?).to be(false) end end context 'when the server address does not match the me field' do let(:config) do replica.merge('me' => 'localhost:27020') end it 'returns true' do expect(description.me_mismatch?).to be(true) end end context 'when there is no me field' do let(:config) do replica.tap do |r| r.delete('me') end end it 'returns false' do expect(description.me_mismatch?).to be(false) end end end describe '#lists_server?' do let(:description) do described_class.new(address, replica) end let(:server_address) do Mongo::Address.new('127.0.0.1:27018') end let(:listeners) do Mongo::Event::Listeners.new end let(:server) do Mongo::Server.new(server_address, cluster, monitoring, listeners, monitoring_io: false) end context 'when the server is included in the description hosts list' do it 'returns true' do expect(description.lists_server?(server)).to be(true) end end context 'when the server is not included in the description hosts list' do let(:server_address) do Mongo::Address.new('127.0.0.1:27017') end it 'returns false' do expect(description.lists_server?(server)).to be(false) end end end describe '#replica_set_member?' 
do context 'when the description is from a mongos' do let(:config) do { 'msg' => 'isdbgrid', primary_param => true } end let(:description) do described_class.new(address, config) end it 'returns false' do expect(description.replica_set_member?).to be(false) end end context 'when the description is from a standalone' do let(:description) do described_class.new(address, { primary_param => true, 'minWireVersion' => 2, 'maxWireVersion' => 3, 'ok' => 1 }) end it 'returns false' do expect(description.replica_set_member?).to be(false) end end context 'when the description is from a replica set member' do let(:description) do described_class.new(address, replica) end it 'returns true' do expect(description.replica_set_member?).to be(true) end end end describe '#logical_session_timeout_minutes' do context 'when a logical session timeout value is in the config' do let(:description) do described_class.new(address, replica) end it 'returns the logical session timeout value' do expect(description.logical_session_timeout).to eq(7) end end context 'when a logical session timeout value is not in the config' do let(:description) do described_class.new(address, { primary_param => true, 'minWireVersion' => 2, 'maxWireVersion' => 3, 'ok' => 1 }) end it 'returns nil' do expect(description.logical_session_timeout).to be(nil) end end end describe '#==' do let(:description) do described_class.new(address, replica) end let(:other) do described_class.new(address, replica.merge( 'localTime' => 1, 'lastWrite' => { 'lastWriteDate' => 1 }, 'operationTime' => 2, '$clusterTime' => 2 )) end it 'excludes certain fields' do expect(description == other).to be(true) end context 'when the classes do not match' do let(:description) do described_class.new(address, replica) end it 'returns false' do expect(description == Array.new).to be(false) end end context 'when the configs match' do let(:description) do described_class.new(address, replica) end let(:other) do described_class.new(address, replica) end it 'returns true' do expect(description == other).to be(true) end end context 'when the configs match, but have different connectionId values' do let(:description) do described_class.new(address, replica) end let(:other) do described_class.new(address, replica.merge( 'connectionId' => 12 )) end it 'returns true' do expect(description == other).to be(true) end end context 'when the configs do not match' do let(:description) do described_class.new(address, replica) end let(:other) do described_class.new(address, { primary_param => true, 'minWireVersion' => 2, 'maxWireVersion' => 3, 'ok' => 1 }) end it 'returns false' do expect(description == other).to be(false) end end context 'when one config is a subset of the other' do let(:one) do described_class.new(address, { primary_param => true, 'minWireVersion' => 2, 'maxWireVersion' => 3, 'ok' => 1 }) end let(:two) do described_class.new(address, { primary_param => true, 'minWireVersion' => 2, 'maxWireVersion' => 3, 'ok' => 1, 'setName' => 'mongodb_set' }) end it 'returns false when first config is the receiver' do expect(one == two).to be false end it 'returns false when second config is the receiver' do expect(two == one).to be false end end end describe '#last_update_time' do context 'stub description' do let(:description) { described_class.new(address) } it 'is present' do expect(description.last_update_time).to be_a(Time) end end context 'filled out description' do let(:description) { described_class.new(address, replica) } it 'is present' do expect(description.last_update_time).to 
be_a(Time) end end end describe '#last_update_monotime' do context 'stub description' do let(:description) { described_class.new(address) } it 'is present' do expect(description.last_update_monotime).to be_a(Float) end end context 'filled out description' do let(:description) { described_class.new(address, replica) } it 'is present' do expect(description.last_update_monotime).to be_a(Float) end end end end end end mongo-ruby-driver-2.21.3/spec/mongo/server/monitor/000077500000000000000000000000001505113246500222365ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/mongo/server/monitor/app_metadata_spec.rb000066400000000000000000000010431505113246500262130ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'lite_spec_helper' describe Mongo::Server::Monitor::AppMetadata do describe '#document' do let(:document) do app_metadata.send(:document) end context 'when user is given and auth_mech is not given' do let(:app_metadata) do described_class.new(user: 'foo') end it 'does not include saslSupportedMechs' do expect(document).not_to have_key(:saslSupportedMechs) end end it_behaves_like 'app metadata document' end end mongo-ruby-driver-2.21.3/spec/mongo/server/monitor/connection_spec.rb000066400000000000000000000137331505113246500257430ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe Mongo::Server::Monitor::Connection do clean_slate let(:address) do Mongo::Address.new(ClusterConfig.instance.primary_address_str, options) end declare_topology_double let(:monitor_app_metadata) do Mongo::Server::Monitor::AppMetadata.new( server_api: SpecConfig.instance.ruby_options[:server_api], ) end let(:cluster) do double('cluster').tap do |cluster| allow(cluster).to receive(:topology).and_return(topology) allow(cluster).to receive(:app_metadata).and_return(Mongo::Server::Monitor::AppMetadata.new({})) allow(cluster).to receive(:options).and_return({}) allow(cluster).to receive(:monitor_app_metadata).and_return(monitor_app_metadata) allow(cluster).to receive(:push_monitor_app_metadata).and_return(monitor_app_metadata) allow(cluster).to receive(:heartbeat_interval).and_return(1000) allow(cluster).to receive(:run_sdam_flow) end end let(:server) do Mongo::Server.new(address, cluster, Mongo::Monitoring.new, Mongo::Event::Listeners.new, {monitoring_io: false}.update(options)) end let(:monitor) do metadata = Mongo::Server::Monitor::AppMetadata.new(options) register_background_thread_object( Mongo::Server::Monitor.new(server, server.event_listeners, server.monitoring, { app_metadata: metadata, push_monitor_app_metadata: metadata, }.update(options)) ).tap do |monitor| monitor.scan! end end let(:connection) do # NB this connection is set up in the background thread, # when the :scan option to client is changed to default to false # we must wait here for the connection to be established. # Do not call connect! on this connection as then the main thread # will be racing the monitoring thread to connect. 
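# The wait below polls against a monotonic-clock deadline rather than
# wall-clock time, so system clock adjustments cannot stretch or shrink
# the five-second window. The idiom, in outline:
#
#   deadline = Mongo::Utils.monotonic_time + 5
#   until connection.send(:socket) || Mongo::Utils.monotonic_time >= deadline
#     sleep 0.1
#   end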
monitor.connection.tap do |connection| expect(connection).not_to be nil deadline = Mongo::Utils.monotonic_time + 5 while Mongo::Utils.monotonic_time < deadline if connection.send(:socket) break end sleep 0.1 end expect(connection.send(:socket)).not_to be nil end end context 'when a connect_timeout is in the options' do context 'when a socket_timeout is in the options' do let(:options) do SpecConfig.instance.test_options.merge(connect_timeout: 3, socket_timeout: 5) end it 'uses the connect_timeout for the address' do expect(connection.address.options[:connect_timeout]).to eq(3) end it 'uses the connect_timeout as the socket_timeout' do expect(connection.send(:socket).timeout).to eq(3) end end context 'when a socket_timeout is not in the options' do let(:options) do SpecConfig.instance.test_options.merge(connect_timeout: 3, socket_timeout: nil) end it 'uses the connect_timeout for the address' do expect(connection.address.options[:connect_timeout]).to eq(3) end it 'uses the connect_timeout as the socket_timeout' do expect(connection.send(:socket).timeout).to eq(3) end end end context 'when a connect_timeout is not in the options' do context 'when a socket_timeout is in the options' do let(:options) do SpecConfig.instance.test_options.merge(connect_timeout: nil, socket_timeout: 5) end it 'does not specify connect_timeout for the address' do expect(connection.address.options[:connect_timeout]).to be nil end it 'uses the connect_timeout as the socket_timeout' do expect(connection.send(:socket).timeout).to eq(10) end end context 'when a socket_timeout is not in the options' do let(:options) do SpecConfig.instance.test_options.merge(connect_timeout: nil, socket_timeout: nil) end it 'does not specify connect_timeout for the address' do expect(connection.address.options[:connect_timeout]).to be nil end it 'uses the connect_timeout as the socket_timeout' do expect(connection.send(:socket).timeout).to eq(10) end end end describe '#connect!' do let(:options) do SpecConfig.instance.test_options.merge( app_metadata: monitor_app_metadata, ) end context 'when address resolution fails' do let(:connection) { described_class.new(server.address, options) } it 'propagates the exception' do connection expect(Socket).to receive(:getaddrinfo).and_raise(SocketError.new('Test exception')) lambda do connection.connect! 
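# Socket.getaddrinfo is stubbed above to raise, so this connect!
# attempt fails deterministically without touching the network; the
# assertion below checks that the raw SocketError propagates to the
# caller as-is.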
end.should raise_error(SocketError, 'Test exception') end end end describe '#check_document' do context 'with API version' do let(:meta) do Mongo::Server::AppMetadata.new({ server_api: { version: '1' } }) end [false, true].each do |hello_ok| it "returns hello document if server #{ if hello_ok then 'supports' else 'does not support' end } hello" do subject = described_class.new(double("address"), app_metadata: meta) expect(subject).to receive(:hello_ok?).and_return(hello_ok) document = subject.check_document expect(document['hello']).to eq(1) end end end context 'without API version' do let(:meta) { Mongo::Server::AppMetadata.new({}) } it 'returns legacy hello document' do subject = described_class.new(double("address"), app_metadata: meta) expect(subject).to receive(:hello_ok?).and_return(false) document = subject.check_document expect(document['isMaster']).to eq(1) end it 'returns hello document when server responded with helloOk' do subject = described_class.new(double("address"), app_metadata: meta) expect(subject).to receive(:hello_ok?).and_return(true) document = subject.check_document expect(document['hello']).to eq(1) end end end end mongo-ruby-driver-2.21.3/spec/mongo/server/monitor_spec.rb000066400000000000000000000177541505113246500236130ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe Mongo::Server::Monitor do before(:all) do ClientRegistry.instance.close_all_clients end let(:address) do default_address end let(:listeners) do Mongo::Event::Listeners.new end let(:monitor_options) do {} end let(:monitor_app_metadata) do Mongo::Server::Monitor::AppMetadata.new( server_api: SpecConfig.instance.ruby_options[:server_api], ) end let(:cluster) do double('cluster').tap do |cluster| allow(cluster).to receive(:run_sdam_flow) allow(cluster).to receive(:heartbeat_interval).and_return(1000) end end let(:server) do Mongo::Server.new(address, cluster, Mongo::Monitoring.new, listeners, monitoring_io: false) end let(:monitor) do register_background_thread_object( described_class.new(server, listeners, Mongo::Monitoring.new, SpecConfig.instance.test_options.merge(cluster: cluster).merge(monitor_options).update( app_metadata: monitor_app_metadata, push_monitor_app_metadata: monitor_app_metadata)) ) end describe '#scan!' do context 'when calling multiple times in succession' do it 'throttles the scans to minimum 500ms' do start = Mongo::Utils.monotonic_time monitor.scan! monitor.scan! expect(Mongo::Utils.monotonic_time - start).to be >= 0.5 end end context 'when the hello fails the first time' do let(:monitor_options) do {monitoring_io: false} end it 'runs sdam flow on unknown description' do expect(monitor).to receive(:check).once.and_raise(Mongo::Error::SocketError) expect(cluster).to receive(:run_sdam_flow) monitor.scan! end end context 'when the hello command succeeds' do it 'invokes sdam flow' do server.unknown! expect(server.description).to be_unknown updated_desc = nil expect(cluster).to receive(:run_sdam_flow) do |prev_desc, _updated_desc| updated_desc = _updated_desc end monitor.scan! expect(updated_desc).not_to be_unknown end end context 'when the hello command fails' do context 'when no server is running on the address' do let(:address) do Mongo::Address.new('127.0.0.1:27050') end before do server.unknown! expect(server.description).to be_unknown monitor.scan! 
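# scan! rescues the connection failure internally and feeds an unknown
# description through the SDAM flow, so the examples below can assert
# that the server simply remains unknown instead of raising.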
end it 'keeps the server unknown' do expect(server.description).to be_unknown end end context 'when the socket gets an exception' do let(:address) do default_address end before do server.unknown! expect(server.description).to be_unknown expect(monitor).to receive(:check).and_raise(Mongo::Error::SocketError) monitor.scan! end it 'keeps the server unknown' do expect(server.description).to be_unknown end it 'disconnects the connection' do expect(monitor.connection).to be nil end end end end =begin heartbeat interval is now taken out of cluster, monitor has no useful options describe '#heartbeat_frequency' do context 'when an option is provided' do let(:monitor_options) do {:heartbeat_frequency => 5} end it 'returns the option' do expect(monitor.heartbeat_frequency).to eq(5) end end context 'when no option is provided' do let(:monitor_options) do {:heartbeat_frequency => nil} end it 'defaults to 10' do expect(monitor.heartbeat_frequency).to eq(10) end end end =end describe '#run!' do let!(:thread) do monitor.run! end context 'when the monitor is already running' do it 'does not create a new thread' do expect(monitor.restart!).to be(thread) end end context 'when the monitor is not already running' do before do monitor.stop! sleep(1) end it 'creates a new thread' do expect(monitor.restart!).not_to be(thread) end end context 'when running after a stop' do it 'starts the thread' do ClientRegistry.instance.close_all_clients sleep 1 thread sleep 1 RSpec::Mocks.with_temporary_scope do expect(monitor.connection).to receive(:disconnect!).and_call_original monitor.stop! sleep 1 expect(thread.alive?).to be false new_thread = monitor.run! sleep 1 expect(new_thread.alive?).to be(true) end end end end describe '#stop' do let(:thread) do monitor.run! end it 'kills the monitor thread' do ClientRegistry.instance.close_all_clients thread sleep 0.5 RSpec::Mocks.with_temporary_scope do expect(monitor.connection).to receive(:disconnect!).and_call_original monitor.stop! expect(thread.alive?).to be(false) end end end describe '#connection' do context 'when there is a connect_timeout option set' do let(:connect_timeout) do 1 end let(:monitor_options) do {connect_timeout: connect_timeout} end it 'sets the value as the timeout on the connection' do monitor.scan! expect(monitor.connection.socket_timeout).to eq(connect_timeout) end it 'set the value as the timeout on the socket' do monitor.scan! expect(monitor.connection.send(:socket).timeout).to eq(connect_timeout) end end end describe '#log_warn' do it 'works' do expect do monitor.log_warn('test warning') end.not_to raise_error end end describe '#do_scan' do let(:result) { monitor.send(:do_scan) } it 'returns a hash' do expect(result).to be_a(Hash) end it 'is successful' do expect(result['ok']).to eq(1.0) end context 'network error during check' do let(:result) do expect(monitor).to receive(:check).and_raise(IOError) # The retry is done on a new socket instance. #expect(socket).to receive(:write).and_call_original monitor.send(:do_scan) end it 'adds server diagnostics' do expect(Mongo::Logger.logger).to receive(:warn) do |msg| # The "on
" and "for
" bits are in different parts # of the message. expect(msg).to match(/#{server.address}/) end expect do result end.to raise_error(IOError) end end context 'network error during connection' do let(:options) { SpecConfig.instance.test_options } let(:expected_message) { "MONGODB | Failed to handshake with #{address}: Mongo::Error::SocketError: test error" } before do monitor.connection.should be nil end it 'logs a warning' do # Note: the mock call below could mock do_write and raise IOError. # It is correct in raising Error::SocketError if mocking write # which performs exception mapping. expect_any_instance_of(Mongo::Socket).to receive(:write).and_raise(Mongo::Error::SocketError, 'test error') messages = [] expect(Mongo::Logger.logger).to receive(:warn).at_least(:once) do |msg| messages << msg end monitor.scan!.should be_unknown messages.any? { |msg| msg.include?(expected_message) }.should be true end it 'adds server diagnostics' do # Note: the mock call below could mock do_write and raise IOError. # It is correct in raising Error::SocketError if mocking write # which performs exception mapping. expect_any_instance_of(Mongo::Socket).to receive(:write).and_raise(Mongo::Error::SocketError, 'test error') expect do monitor.send(:check) end.to raise_error(Mongo::Error::SocketError, /#{server.address}/) end end end end mongo-ruby-driver-2.21.3/spec/mongo/server/push_monitor_spec.rb000066400000000000000000000043331505113246500246370ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe Mongo::Server::PushMonitor do before(:all) do ClientRegistry.instance.close_all_clients end let(:address) do default_address end let(:listeners) do Mongo::Event::Listeners.new end let(:monitor_options) do {} end let(:monitor_app_metadata) do Mongo::Server::Monitor::AppMetadata.new( server_api: SpecConfig.instance.ruby_options[:server_api], ) end let(:cluster) do double('cluster').tap do |cluster| allow(cluster).to receive(:run_sdam_flow) allow(cluster).to receive(:heartbeat_interval).and_return(1000) end end let(:server) do Mongo::Server.new(address, cluster, Mongo::Monitoring.new, listeners, monitoring_io: false) end let(:monitor) do register_background_thread_object( Mongo::Server::Monitor.new(server, listeners, Mongo::Monitoring.new, SpecConfig.instance.test_options.merge(cluster: cluster).merge(monitor_options).update( app_metadata: monitor_app_metadata, push_monitor_app_metadata: monitor_app_metadata)) ) end let(:topology_version) do Mongo::TopologyVersion.new('processId' => BSON::ObjectId.new, 'counter' => 1) end let(:check_document) do {hello: 1} end let(:push_monitor) do described_class.new(monitor, topology_version, monitor.monitoring, **monitor.options.merge(check_document: check_document)) end describe '#do_work' do it 'works' do lambda do push_monitor.do_work end.should_not raise_error end context 'network error during check' do it 'does not propagate the exception' do push_monitor expect(Socket).to receive(:getaddrinfo).and_raise(SocketError.new('Test exception')) lambda do push_monitor.do_work end.should_not raise_error end it 'stops the monitoring' do push_monitor start = Mongo::Utils.monotonic_time expect(Socket).to receive(:getaddrinfo).and_raise(SocketError.new('Test exception')) lambda do push_monitor.do_work end.should_not raise_error push_monitor.running?.should be false end end end end mongo-ruby-driver-2.21.3/spec/mongo/server/round_trip_time_calculator_spec.rb000066400000000000000000000067651505113246500275400ustar00rootroot00000000000000# 
frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe Mongo::Server::RoundTripTimeCalculator do let(:calculator) { Mongo::Server::RoundTripTimeCalculator.new } describe '#update_average_round_trip_time' do context 'no existing average rtt' do it 'updates average rtt' do calculator.instance_variable_set('@last_round_trip_time', 5) calculator.update_average_round_trip_time expect(calculator.average_round_trip_time).to eq(5) end end context 'with existing average rtt' do it 'averages with existing average rtt' do calculator.instance_variable_set('@last_round_trip_time', 5) calculator.instance_variable_set('@average_round_trip_time', 10) calculator.update_average_round_trip_time expect(calculator.average_round_trip_time).to eq(9) end end end describe '#update_minimum_round_trip_time' do context 'with no samples' do it 'sets minimum_round_trip_time to zero' do calculator.update_minimum_round_trip_time expect(calculator.minimum_round_trip_time).to eq(0) end end context 'with one sample' do before do calculator.instance_variable_set('@last_round_trip_time', 5) end it 'sets minimum_round_trip_time to zero' do calculator.update_minimum_round_trip_time expect(calculator.minimum_round_trip_time).to eq(0) end end context 'with two samples' do before do calculator.instance_variable_set('@last_round_trip_time', 10) calculator.instance_variable_set('@rtts', [5]) end it 'sets minimum_round_trip_time to zero' do calculator.update_minimum_round_trip_time expect(calculator.minimum_round_trip_time).to eq(0) end end context 'with samples less than maximum' do before do calculator.instance_variable_set('@last_round_trip_time', 10) calculator.instance_variable_set('@rtts', [5, 4, 120]) end it 'properly sets minimum_round_trip_time' do calculator.update_minimum_round_trip_time expect(calculator.minimum_round_trip_time).to eq(4) end end context 'with more than maximum samples' do before do calculator.instance_variable_set('@last_round_trip_time', 2) calculator.instance_variable_set('@rtts', [1, 20, 15, 4, 5, 6, 7, 39, 8, 4]) end it 'properly sets minimum_round_trip_time' do calculator.update_minimum_round_trip_time expect(calculator.minimum_round_trip_time).to eq(2) end end end describe '#measure' do context 'block does not raise' do it 'updates average rtt' do expect(calculator).to receive(:update_average_round_trip_time) calculator.measure do end end it 'updates minimum rtt' do expect(calculator).to receive(:update_minimum_round_trip_time) calculator.measure do end end end context 'block raises' do it 'does not update average rtt' do expect(calculator).not_to receive(:update_average_round_trip_time) expect do calculator.measure do raise "Problem" end end.to raise_error(/Problem/) end it 'does not update minimum rtt' do expect(calculator).not_to receive(:update_minimum_round_trip_time) expect do calculator.measure do raise "Problem" end end.to raise_error(/Problem/) end end end end mongo-ruby-driver-2.21.3/spec/mongo/server_selector/000077500000000000000000000000001505113246500224475ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/mongo/server_selector/nearest_spec.rb000066400000000000000000000227651505113246500254630ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'lite_spec_helper' require 'support/shared/server_selector' describe Mongo::ServerSelector::Nearest do let(:name) { :nearest } include_context 'server selector' let(:default_address) { 'test.host' } it_behaves_like 'a server selector mode' do let(:secondary_ok) { true } end 
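# secondary_ok is true because nearest reads may be served by any
# data-bearing member, primary or secondary. Eligibility is decided by
# round-trip time: only candidates within the local threshold (15 ms by
# default) of the fastest candidate survive. Roughly, as a sketch
# rather than the driver's exact code:
#
#   min_rtt = candidates.map(&:average_round_trip_time).min
#   candidates.select { |s| s.average_round_trip_time <= min_rtt + threshold }
#
# This is why, in the 'high latency servers' contexts below, servers at
# 0.113 and 0.114 are both kept when no closer candidate exists.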
it_behaves_like 'a server selector accepting tag sets' it_behaves_like 'a server selector accepting hedge' it_behaves_like 'a server selector with sensitive data in its options' describe '#initialize' do context 'when max_staleness is provided' do let(:options) do { max_staleness: 95 } end it 'sets the max_staleness option' do expect(selector.max_staleness).to eq(options[:max_staleness]) end end end describe '#==' do context 'when max staleness is the same' do let(:options) do { max_staleness: 95 } end let(:other) do described_class.new(options) end it 'returns true' do expect(selector).to eq(other) end end context 'when max staleness is different' do let(:other_options) do { max_staleness: 100 } end let(:other) do described_class.new(other_options) end it 'returns false' do expect(selector).not_to eq(other) end end end describe '#to_mongos' do context 'tag set not provided' do let(:expected) do { :mode => 'nearest' } end it 'returns a read preference formatted for mongos' do expect(selector.to_mongos).to eq(expected) end end context 'tag set provided' do let(:tag_sets) do [tag_set] end it 'returns a read preference formatted for mongos' do expect(selector.to_mongos).to eq( { :mode => 'nearest', :tags => tag_sets } ) end end context 'max staleness not provided' do let(:expected) do { :mode => 'nearest' } end it 'returns a read preference formatted for mongos' do expect(selector.to_mongos).to eq(expected) end end context 'max staleness provided' do let(:max_staleness) do 100 end let(:expected) do { :mode => 'nearest', maxStalenessSeconds: 100 } end it 'returns a read preference formatted for mongos' do expect(selector.to_mongos).to eq(expected) end end end describe '#select_in_replica_set' do context 'no candidates' do let(:candidates) { [] } it 'returns an empty array' do expect(selector.send(:select_in_replica_set, candidates)).to be_empty end end context 'single primary candidates' do let(:candidates) { [primary] } it 'returns an array with the primary' do expect(selector.send(:select_in_replica_set, candidates)).to eq([primary]) end end context 'single secondary candidate' do let(:candidates) { [secondary] } it 'returns an array with the secondary' do expect(selector.send(:select_in_replica_set, candidates)).to eq([secondary]) end end context 'primary and secondary candidates' do let(:candidates) { [primary, secondary] } it 'returns an array with the primary and secondary' do expect(selector.send(:select_in_replica_set, candidates)).to match_array([primary, secondary]) end end context 'multiple secondary candidates' do let(:candidates) { [secondary, secondary] } it 'returns an array with the secondaries' do expect(selector.send(:select_in_replica_set, candidates)).to match_array([secondary, secondary]) end end context 'tag sets provided' do let(:tag_sets) { [tag_set] } let(:matching_primary) do make_server(:primary, :tags => server_tags, address: default_address) end let(:matching_secondary) do make_server(:secondary, :tags => server_tags, address: default_address) end context 'single candidate' do context 'primary' do let(:candidates) { [primary] } it 'returns an empty array' do expect(selector.send(:select_in_replica_set, candidates)).to be_empty end end context 'matching primary' do let(:candidates) { [matching_primary] } it 'returns an array with the primary' do expect(selector.send(:select_in_replica_set, candidates)).to eq([matching_primary]) end end context 'secondary' do let(:candidates) { [secondary] } it 'returns an empty array' do expect(selector.send(:select_in_replica_set, 
candidates)).to be_empty end end context 'matching secondary' do let(:candidates) { [matching_secondary] } it 'returns an array with the matching secondary' do expect(selector.send(:select_in_replica_set, candidates)).to eq([matching_secondary]) end end end context 'multiple candidates' do context 'no matching servers' do let(:candidates) { [primary, secondary, secondary] } it 'returns an empty array' do expect(selector.send(:select_in_replica_set, candidates)).to be_empty end end context 'one matching primary' do let(:candidates) { [matching_primary, secondary, secondary] } it 'returns an array with the matching primary' do expect(selector.send(:select_in_replica_set, candidates)).to eq([matching_primary]) end end context 'one matching secondary' do let(:candidates) { [primary, matching_secondary, secondary] } it 'returns an array with the matching secondary' do expect(selector.send(:select_in_replica_set, candidates)).to eq([matching_secondary]) end end context 'two matching secondaries' do let(:candidates) { [primary, matching_secondary, matching_secondary] } let(:expected) { [matching_secondary, matching_secondary] } it 'returns an array with the matching secondaries' do expect(selector.send(:select_in_replica_set, candidates)).to eq(expected) end end context 'one matching primary and one matching secondary' do let(:candidates) { [matching_primary, matching_secondary, secondary] } let(:expected) { [matching_primary, matching_secondary] } it 'returns an array with the matching primary and secondary' do expect(selector.send(:select_in_replica_set, candidates)).to match_array(expected) end end end end context 'high latency servers' do let(:far_primary) { make_server(:primary, :average_round_trip_time => 0.113, address: default_address) } let(:far_secondary) { make_server(:secondary, :average_round_trip_time => 0.114, address: default_address) } context 'single candidate' do context 'far primary' do let(:candidates) { [far_primary] } it 'returns array with far primary' do expect(selector.send(:select_in_replica_set, candidates)).to eq([far_primary]) end end context 'far secondary' do let(:candidates) { [far_secondary] } it 'returns array with far secondary' do expect(selector.send(:select_in_replica_set, candidates)).to eq([far_secondary]) end end end context 'multiple candidates' do context 'local primary, local secondary' do let(:candidates) { [primary, secondary] } it 'returns array with primary and secondary' do expect(selector.send(:select_in_replica_set, candidates)).to match_array( [primary, secondary] ) end end context 'local primary, far secondary' do let(:candidates) { [primary, far_secondary] } it 'returns array with local primary' do expect(selector.send(:select_in_replica_set, candidates)).to eq([primary]) end end context 'far primary, local secondary' do let(:candidates) { [far_primary, secondary] } it 'returns array with local secondary' do expect(selector.send(:select_in_replica_set, candidates)).to eq([secondary]) end end context 'far primary, far secondary' do let(:candidates) { [far_primary, far_secondary] } let(:expected) { [far_primary, far_secondary] } it 'returns array with both servers' do expect(selector.send(:select_in_replica_set, candidates)).to match_array(expected) end end context 'two local servers, one far server' do context 'local primary, local secondary, far secondary' do let(:candidates) { [primary, secondary, far_secondary] } let(:expected) { [primary, secondary] } it 'returns array with local primary and local secondary' do expect(selector.send(:select_in_replica_set,
candidates)).to match_array(expected) end end context 'two near secondaries' do let(:candidates) { [far_primary, secondary, secondary] } let(:expected) { [secondary, secondary] } it 'returns array with the two local secondaries' do expect(selector.send(:select_in_replica_set, candidates)).to match_array(expected) end end end end end end end mongo-ruby-driver-2.21.3/spec/mongo/server_selector/primary_preferred_spec.rb000066400000000000000000000250671505113246500275370ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'lite_spec_helper' require 'support/shared/server_selector' describe Mongo::ServerSelector::PrimaryPreferred do let(:name) { :primary_preferred } include_context 'server selector' let(:default_address) { 'test.host' } it_behaves_like 'a server selector mode' do let(:secondary_ok) { true } end it_behaves_like 'a server selector accepting tag sets' it_behaves_like 'a server selector accepting hedge' it_behaves_like 'a server selector with sensitive data in its options' describe '#initialize' do context 'when max_staleness is provided' do let(:options) do { max_staleness: 95 } end it 'sets the max_staleness option' do expect(selector.max_staleness).to eq(options[:max_staleness]) end end end describe '#==' do context 'when max staleness is the same' do let(:options) do { max_staleness: 95 } end let(:other) do described_class.new(options) end it 'returns true' do expect(selector).to eq(other) end end context 'when max staleness is different' do let(:other_options) do { max_staleness: 100 } end let(:other) do described_class.new(other_options) end it 'returns false' do expect(selector).not_to eq(other) end end end describe '#to_mongos' do context 'tag sets not provided' do it 'returns a read preference formatted for mongos' do expect(selector.to_mongos).to eq({ :mode => 'primaryPreferred' }) end end context 'tag set provided' do let(:tag_sets) { [tag_set] } it 'returns a read preference formatted for mongos' do expect(selector.to_mongos).to eq( { :mode => 'primaryPreferred', :tags => tag_sets} ) end end context 'max staleness not provided' do let(:expected) do { :mode => 'primaryPreferred' } end it 'returns a read preference formatted for mongos' do expect(selector.to_mongos).to eq(expected) end end context 'max staleness provided' do let(:max_staleness) do 100 end let(:expected) do { :mode => 'primaryPreferred', maxStalenessSeconds: 100 } end it 'returns a read preference formatted for mongos' do expect(selector.to_mongos).to eq(expected) end end end describe '#select_in_replica_set' do context 'no candidates' do let(:candidates) { [] } it 'returns an empty array' do expect(selector.send(:select_in_replica_set, candidates)).to be_empty end end context 'single primary candidate' do let(:candidates) { [primary] } it 'returns an array with the primary' do expect(selector.send(:select_in_replica_set, candidates)).to eq( [primary] ) end end context 'single secondary candidate' do let(:candidates) { [secondary] } it 'returns an array with the secondary' do expect(selector.send(:select_in_replica_set, candidates)).to eq( [secondary] ) end end context 'primary and secondary candidates' do let(:candidates) { [primary, secondary] } let(:expected) { [primary] } it 'returns an array with the primary' do expect(selector.send(:select_in_replica_set, candidates)).to eq(expected) end end context 'secondary and primary candidates' do let(:candidates) { [secondary, primary] } let(:expected) { [primary] } it 'returns an array with the primary' do
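# Candidate order is irrelevant to primaryPreferred: this context and
# the previous one differ only in ordering, and both select the primary.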
expect(selector.send(:select_in_replica_set, candidates)).to eq(expected) end end context 'tag sets provided' do let(:tag_sets) { [tag_set] } let(:matching_primary) do make_server(:primary, :tags => server_tags, address: default_address ) end let(:matching_secondary) do make_server(:secondary, :tags => server_tags, address: default_address ) end context 'single candidate' do context 'primary' do let(:candidates) { [primary] } it 'returns array with primary' do expect(selector.send(:select_in_replica_set, candidates)).to eq([primary]) end end context 'matching_primary' do let(:candidates) { [matching_primary] } it 'returns array with matching primary' do expect(selector.send(:select_in_replica_set, candidates)).to eq([matching_primary]) end end context 'matching secondary' do let(:candidates) { [matching_secondary] } it 'returns array with matching secondary' do expect(selector.send(:select_in_replica_set, candidates)).to eq([matching_secondary]) end end context 'secondary' do let(:candidates) { [secondary] } it 'returns an empty array' do expect(selector.send(:select_in_replica_set, candidates)).to be_empty end end end context 'multiple candidates' do context 'no matching secondaries' do let(:candidates) { [primary, secondary, secondary] } it 'returns an array with the primary' do expect(selector.send(:select_in_replica_set, candidates)).to eq([primary]) end end context 'one matching primary' do let(:candidates) { [matching_primary, secondary, secondary] } it 'returns an array of the primary' do expect(selector.send(:select_in_replica_set, candidates)).to eq([matching_primary]) end end context 'one matching secondary' do let(:candidates) { [primary, matching_secondary, secondary] } let(:expected) { [primary] } it 'returns an array of the primary' do expect(selector.send(:select_in_replica_set, candidates)).to eq(expected) end end context 'two matching secondaries' do let(:candidates) { [primary, matching_secondary, matching_secondary] } let(:expected) { [primary] } it 'returns an array of the primary' do expect(selector.send(:select_in_replica_set, candidates)).to eq(expected) end end context 'one matching primary, one matching secondary' do let(:candidates) { [primary, matching_secondary, secondary] } let(:expected) { [primary] } it 'returns an array of the primary' do expect(selector.send(:select_in_replica_set, candidates)).to eq(expected) end end end end context 'high latency servers' do let(:far_primary) { make_server(:primary, :average_round_trip_time => 0.100, address: default_address) } let(:far_secondary) { make_server(:secondary, :average_round_trip_time => 0.113, address: default_address) } context 'single candidate' do context 'far primary' do let(:candidates) { [far_primary] } it 'returns array with far primary' do expect(selector.send(:select_in_replica_set, candidates)).to eq([far_primary]) end end context 'far secondary' do let(:candidates) { [far_secondary] } it 'returns array with far secondary' do expect(selector.send(:select_in_replica_set, candidates)).to eq([far_secondary]) end end end context 'multiple candidates' do context 'primary available' do context 'local primary, local secondary' do let(:candidates) { [primary, secondary] } let(:expected) { [primary] } it 'returns an array of the primary' do
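# Latency does not matter when a primary is available: primaryPreferred
# only falls back to secondaries when no primary exists, as the
# 'primary not available' contexts further down demonstrate.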
expect(selector.send(:select_in_replica_set, candidates)).to eq(expected) end end context 'far primary, local secondary' do let(:candidates) { [far_primary, secondary] } let(:expected) { [far_primary] } it 'returns an array of the far primary' do expect(selector.send(:select_in_replica_set, candidates)).to eq(expected) end end context 'far primary, far secondary' do let(:candidates) { [far_primary, far_secondary] } let(:expected) { [far_primary] } it 'returns an array of the far primary' do expect(selector.send(:select_in_replica_set, candidates)).to eq(expected) end end context 'two local servers, one far server' do context 'local primary, local secondary, far secondary' do let(:candidates) { [primary, secondary, far_secondary] } let(:expected) { [primary] } it 'returns an array of the primary' do expect(selector.send(:select_in_replica_set, candidates)).to eq(expected) end end context 'two local secondaries' do let(:candidates) { [far_primary, secondary, secondary] } let(:expected) { [far_primary] } it 'returns an array with the far primary' do expect(selector.send(:select_in_replica_set, candidates)).to eq(expected) end end end end context 'primary not available' do context 'one secondary' do let(:candidates) { [secondary] } let(:expected) { [secondary] } it 'returns an array with the secondary' do expect(selector.send(:select_in_replica_set, candidates)).to eq(expected) end end context 'one local secondary, one far secondary' do let(:candidates) { [secondary, far_secondary] } let(:expected) { [secondary] } it 'returns an array of the secondary' do expect(selector.send(:select_in_replica_set, candidates)).to eq(expected) end end context 'two local secondaries, one far secondary' do let(:candidates) { [secondary, secondary, far_secondary] } let(:expected) { [secondary, secondary] } it 'returns an array of the secondary' do expect(selector.send(:select_in_replica_set, candidates)).to eq(expected) end end end end end end end mongo-ruby-driver-2.21.3/spec/mongo/server_selector/primary_spec.rb000066400000000000000000000104271505113246500254750ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'lite_spec_helper' require 'support/shared/server_selector' describe Mongo::ServerSelector::Primary do let(:name) { :primary } include_context 'server selector' let(:default_address) { 'test.host' } it_behaves_like 'a server selector mode' do let(:secondary_ok) { false } end it_behaves_like 'a server selector with sensitive data in its options' describe '#initialize' do context 'when max_staleness is provided' do let(:options) do { max_staleness: 100 } end it 'raises an exception' do expect { selector }.to raise_exception(Mongo::Error::InvalidServerPreference) end end end describe '#tag_sets' do context 'tags not provided' do it 'returns an empty array' do expect(selector.tag_sets).to be_empty end end context 'tag sets provided' do let(:tag_sets) do [ tag_set ] end it 'raises an error' do expect { selector.tag_sets }.to raise_error(Mongo::Error::InvalidServerPreference) end end end describe '#hedge' do context 'hedge not provided' do it 'returns nil' do expect(selector.hedge).to be_nil end end context 'hedge provided' do let(:hedge) { { enabled: true } } it 'raises an error' do expect { selector.hedge }.to raise_error(Mongo::Error::InvalidServerPreference) end end end describe '#to_mongos' do it 'returns nil' do expect(selector.to_mongos).to be_nil end context 'max staleness not provided' do it 'returns nil' do expect(selector.to_mongos).to be_nil end end context
'max staleness provided' do let(:max_staleness) do 100 end it 'raises an error' do expect { selector }.to raise_exception(Mongo::Error::InvalidServerPreference) end end end describe '#select_in_replica_set' do context 'no candidates' do let(:candidates) { [] } it 'returns an empty array' do expect(selector.send(:select_in_replica_set, candidates)).to be_empty end end context 'secondary candidates' do let(:candidates) { [secondary] } it 'returns an empty array' do expect(selector.send(:select_in_replica_set, candidates)).to be_empty end end context 'primary candidate' do let(:candidates) { [primary] } it 'returns an array with the primary' do expect(selector.send(:select_in_replica_set, candidates)).to eq([primary]) end end context 'primary and secondary candidates' do let(:candidates) { [secondary, primary] } it 'returns an array with the primary' do expect(selector.send(:select_in_replica_set, candidates)).to eq([primary]) end end context 'high latency candidates' do let(:far_primary) { make_server(:primary, :average_round_trip_time => 0.100, address: default_address) } let(:far_secondary) { make_server(:secondary, :average_round_trip_time => 0.120, address: default_address) } context 'single candidate' do context 'far primary' do let(:candidates) { [far_primary] } it 'returns array with the primary' do expect(selector.send(:select_in_replica_set, candidates)).to eq([far_primary]) end end context 'far secondary' do let(:candidates) { [far_secondary] } it 'returns empty array' do expect(selector.send(:select_in_replica_set, candidates)).to be_empty end end end context 'multiple candidates' do context 'far primary, far secondary' do let(:candidates) { [far_primary, far_secondary] } it 'returns an array with the primary' do expect(selector.send(:select_in_replica_set, candidates)).to eq([far_primary]) end end context 'far primary, local secondary' do let(:candidates) { [far_primary, secondary] } it 'returns an array with the primary' do expect(selector.send(:select_in_replica_set, candidates)).to eq([far_primary]) end end end end end end mongo-ruby-driver-2.21.3/spec/mongo/server_selector/secondary_preferred_spec.rb000066400000000000000000000234701505113246500300410ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'lite_spec_helper' require 'support/shared/server_selector' describe Mongo::ServerSelector::SecondaryPreferred do let(:name) { :secondary_preferred } include_context 'server selector' let(:default_address) { 'test.host' } it_behaves_like 'a server selector mode' do let(:secondary_ok) { true } end it_behaves_like 'a server selector with sensitive data in its options' it_behaves_like 'a server selector accepting tag sets' it_behaves_like 'a server selector accepting hedge' describe '#initialize' do context 'when max_staleness is provided' do let(:options) do { max_staleness: 95 } end it 'sets the max_staleness option' do expect(selector.max_staleness).to eq(options[:max_staleness]) end end end describe '#==' do context 'when max staleness is the same' do let(:options) do { max_staleness: 90 } end let(:other) do described_class.new(options) end it 'returns true' do expect(selector).to eq(other) end end context 'when max staleness is different' do let(:other_options) do { max_staleness: 100 } end let(:other) do described_class.new(other_options) end it 'returns false' do expect(selector).not_to eq(other) end end end describe '#to_mongos' do context 'tag sets provided' do let(:tag_sets) do [ tag_set ] end it 'returns a read preference formatted
for mongos' do expect(selector.to_mongos).to eq( { :mode => 'secondaryPreferred', :tags => tag_sets } ) end end context 'tag sets not provided' do it 'returns secondaryPreferred' do selector.to_mongos.should == {mode: 'secondaryPreferred'} end end context 'max staleness not provided' do let(:expected) do { :mode => 'secondaryPreferred' } end it 'returns secondaryPreferred' do selector.to_mongos.should == {mode: 'secondaryPreferred'} end end context 'max staleness provided' do let(:max_staleness) do 60 end let(:expected) do { :mode => 'secondaryPreferred', maxStalenessSeconds: 60 } end it 'returns a read preference formatted for mongos' do expect(selector.to_mongos).to eq(expected) end end context 'hedge provided' do let(:hedge) { { enabled: true } } it 'returns a formatted read preference' do expect(selector.to_mongos).to eq({ mode: 'secondaryPreferred', hedge: { enabled: true } }) end end context 'hedge not provided' do let(:hedge) { nil } it 'returns secondaryPreferred' do selector.to_mongos.should == {mode: 'secondaryPreferred'} end end end describe '#select_in_replica_set' do context 'no candidates' do let(:candidates) { [] } it 'returns an empty array' do expect(selector.send(:select_in_replica_set, candidates)).to be_empty end end context 'single primary candidates' do let(:candidates) { [primary] } it 'returns array with primary' do expect(selector.send(:select_in_replica_set, candidates)).to eq([primary]) end end context 'single secondary candidate' do let(:candidates) { [secondary] } it 'returns array with secondary' do expect(selector.send(:select_in_replica_set, candidates)).to eq([secondary]) end end context 'primary and secondary candidates' do let(:candidates) { [primary, secondary] } let(:expected) { [secondary, primary] } it 'returns array with secondary first, then primary' do expect(selector.send(:select_in_replica_set, candidates)).to eq(expected) end end context 'secondary and primary candidates' do let(:candidates) { [secondary, primary] } let(:expected) { [secondary, primary] } it 'returns array with secondary and primary' do expect(selector.send(:select_in_replica_set, candidates)).to eq(expected) end end context 'tag sets provided' do let(:tag_sets) do [ tag_set ] end let(:matching_primary) do make_server(:primary, :tags => server_tags, address: default_address) end let(:matching_secondary) do make_server(:secondary, :tags => server_tags, address: default_address) end context 'single candidate' do context 'primary' do let(:candidates) { [primary] } it 'returns array with primary' do expect(selector.send(:select_in_replica_set, candidates)).to eq([primary]) end end context 'matching_primary' do let(:candidates) { [matching_primary] } it 'returns array with matching primary' do expect(selector.send(:select_in_replica_set, candidates)).to eq([matching_primary]) end end context 'matching secondary' do let(:candidates) { [matching_secondary] } it 'returns array with matching secondary' do expect(selector.send(:select_in_replica_set, candidates)).to eq([matching_secondary]) end end context 'secondary' do let(:candidates) { [secondary] } it 'returns an empty array' do expect(selector.send(:select_in_replica_set, candidates)).to be_empty end end end context 'multiple candidates' do context 'no matching secondaries' do let(:candidates) { [primary, secondary, secondary] } it 'returns an array with the primary' do expect(selector.send(:select_in_replica_set, candidates)).to eq([primary]) end end context 'one matching secondary' do let(:candidates) { [primary, 
matching_secondary] } it 'returns an array of the matching secondary, then primary' do expect(selector.send(:select_in_replica_set, candidates)).to eq( [matching_secondary, primary] ) end end context 'two matching secondaries' do let(:candidates) { [primary, matching_secondary, matching_secondary] } let(:expected) { [matching_secondary, matching_secondary, primary] } it 'returns an array of the matching secondaries, then primary' do expect(selector.send(:select_in_replica_set, candidates)).to eq(expected) end end context 'one matching secondary and one matching primary' do let(:candidates) { [matching_primary, matching_secondary] } let(:expected) { [matching_secondary, matching_primary] } it 'returns an array of the matching secondary, then the primary' do expect(selector.send(:select_in_replica_set, candidates)).to eq(expected) end end end end context 'high latency servers' do let(:far_primary) { make_server(:primary, :average_round_trip_time => 0.100, address: default_address) } let(:far_secondary) { make_server(:secondary, :average_round_trip_time => 0.113, address: default_address) } context 'single candidate' do context 'far primary' do let(:candidates) { [far_primary] } it 'returns array with primary' do expect(selector.send(:select_in_replica_set, candidates)).to eq([far_primary]) end end context 'far secondary' do let(:candidates) { [far_secondary] } it 'returns an array with the secondary' do expect(selector.send(:select_in_replica_set, candidates)).to eq([far_secondary]) end end end context 'multiple candidates' do context 'local primary, local secondary' do let(:candidates) { [primary, secondary] } it 'returns an array with secondary, then primary' do expect(selector.send(:select_in_replica_set, candidates)).to eq([secondary, primary]) end end context 'local primary, far secondary' do let(:candidates) { [primary, far_secondary] } it 'returns an array with the secondary, then primary' do expect(selector.send(:select_in_replica_set, candidates)).to eq([far_secondary, primary]) end end context 'far primary, local secondary' do let(:candidates) { [far_primary, secondary] } let(:expected) { [secondary, far_primary] } it 'returns an array with secondary, then primary' do expect(selector.send(:select_in_replica_set, candidates)).to eq(expected) end end context 'far primary, far secondary' do let(:candidates) { [far_primary, far_secondary] } let(:expected) { [far_secondary, far_primary] } it 'returns an array with secondary, then primary' do expect(selector.send(:select_in_replica_set, candidates)).to eq(expected) end end context 'two near servers, one far server' do context 'near primary, near secondary, far secondary' do let(:candidates) { [primary, secondary, far_secondary] } let(:expected) { [secondary, primary] } it 'returns an array with near secondary, then primary' do expect(selector.send(:select_in_replica_set, candidates)).to eq(expected) end end context 'two near secondaries, one far primary' do let(:candidates) { [far_primary, secondary, secondary] } let(:expected) { [secondary, secondary, far_primary] } it 'returns an array with secondaries, then primary' do expect(selector.send(:select_in_replica_set, candidates)).to eq(expected) end end end end end end end mongo-ruby-driver-2.21.3/spec/mongo/server_selector/secondary_spec.rb # frozen_string_literal: true # rubocop:todo all require 'lite_spec_helper' require 'support/shared/server_selector' describe Mongo::ServerSelector::Secondary do let(:name) { 
:secondary } include_context 'server selector' let(:default_address) { 'test.host' } it_behaves_like 'a server selector mode' do let(:secondary_ok) { true } end it_behaves_like 'a server selector with sensitive data in its options' it_behaves_like 'a server selector accepting tag sets' it_behaves_like 'a server selector accepting hedge' describe '#initialize' do context 'when max_staleness is provided' do let(:options) do { max_staleness: 100 } end it 'sets the max_staleness option' do expect(selector.max_staleness).to eq(options[:max_staleness]) end end end describe '#==' do context 'when max staleness is the same' do let(:options) do { max_staleness: 90 } end let(:other) do described_class.new(options) end it 'returns true' do expect(selector).to eq(other) end end context 'when max staleness is different' do let(:other_options) do { max_staleness: 95 } end let(:other) do described_class.new(other_options) end it 'returns false' do expect(selector).not_to eq(other) end end end describe '#to_mongos' do it 'returns read preference formatted for mongos' do expect(selector.to_mongos).to eq( { :mode => 'secondary' } ) end context 'tag sets provided' do let(:tag_sets) { [tag_set] } it 'returns read preference formatted for mongos with tag sets' do expect(selector.to_mongos).to eq( { :mode => 'secondary', :tags => tag_sets} ) end end context 'max staleness not provided' do let(:expected) do { :mode => 'secondary' } end it 'returns a read preference formatted for mongos' do expect(selector.to_mongos).to eq(expected) end end context 'max staleness provided' do let(:max_staleness) do 60 end let(:expected) do { :mode => 'secondary', maxStalenessSeconds: 60 } end it 'returns a read preference formatted for mongos' do expect(selector.to_mongos).to eq(expected) end end end describe '#select_in_replica_set' do context 'no candidates' do let(:candidates) { [] } it 'returns an empty array' do expect(selector.send(:select_in_replica_set, candidates)).to be_empty end end context 'single primary candidate' do let(:candidates) { [primary] } it 'returns an empty array' do expect(selector.send(:select_in_replica_set, candidates)).to be_empty end end context 'single secondary candidate' do let(:candidates) { [secondary] } it 'returns array with secondary' do expect(selector.send(:select_in_replica_set, candidates)).to eq([secondary]) end end context 'primary and secondary candidates' do let(:candidates) { [primary, secondary] } it 'returns array with secondary' do expect(selector.send(:select_in_replica_set, candidates)).to eq([secondary]) end end context 'multiple secondary candidates' do let(:candidates) { [secondary, secondary, primary] } it 'returns array with all secondaries' do expect(selector.send(:select_in_replica_set, candidates)).to eq([secondary, secondary]) end end context 'tag sets provided' do let(:tag_sets) { [tag_set] } let(:matching_secondary) { make_server(:secondary, :tags => server_tags, address: default_address) } context 'single candidate' do context 'primary' do let(:candidates) { [primary] } it 'returns an empty array' do expect(selector.send(:select_in_replica_set, candidates)).to be_empty end end context 'secondary' do let(:candidates) { [secondary] } it 'returns an empty array' do expect(selector.send(:select_in_replica_set, candidates)).to be_empty end end context 'matching secondary' do let(:candidates) { [matching_secondary] } it 'returns an array with matching secondary' do expect(selector.send(:select_in_replica_set, candidates)).to eq([matching_secondary]) end end end context 
'multiple candidates' do context 'no matching candidates' do let(:candidates) { [primary, secondary, secondary] } it 'returns an empty array' do expect(selector.send(:select_in_replica_set, candidates)).to be_empty end end context 'one matching secondary' do let(:candidates) { [secondary, matching_secondary] } it 'returns array with matching secondary' do expect(selector.send(:select_in_replica_set, candidates)).to eq([matching_secondary]) end end context 'two matching secondaries' do let(:candidates) { [matching_secondary, matching_secondary] } it 'returns an array with both matching secondaries' do expect(selector.send(:select_in_replica_set, candidates)).to eq([matching_secondary, matching_secondary]) end end end end context 'high latency servers' do let(:far_primary) { make_server(:primary, :average_round_trip_time => 0.100, address: default_address) } let(:far_secondary) { make_server(:secondary, :average_round_trip_time => 0.113, address: default_address) } context 'single candidate' do context 'far primary' do let(:candidates) { [far_primary] } it 'returns an empty array' do expect(selector.send(:select_in_replica_set, candidates)).to be_empty end end context 'far secondary' do let(:candidates) { [far_secondary] } it 'returns an array with the secondary' do expect(selector.send(:select_in_replica_set, candidates)).to eq([far_secondary]) end end end context 'multiple candidates' do context 'local primary, far secondary' do let(:candidates) { [primary, far_secondary] } it 'returns an array with the secondary' do expect(selector.send(:select_in_replica_set, candidates)).to eq([far_secondary]) end end context 'far primary, far secondary' do let(:candidates) { [far_primary, far_secondary] } it 'returns an array with the secondary' do expect(selector.send(:select_in_replica_set, candidates)).to eq([far_secondary]) end end context 'two near servers, one far server' do context 'near primary, near and far secondaries' do let(:candidates) { [primary, secondary, far_secondary] } it 'returns an array with near secondary' do expect(selector.send(:select_in_replica_set, candidates)).to eq([secondary]) end end context 'far primary and two near secondaries' do let(:candidates) { [far_primary, secondary, secondary] } it 'returns an array with two secondaries' do expect(selector.send(:select_in_replica_set, candidates)).to eq([secondary, secondary]) end end end end end end end mongo-ruby-driver-2.21.3/spec/mongo/server_selector_spec.rb # frozen_string_literal: true # rubocop:todo all require 'lite_spec_helper' require 'support/shared/server_selector' describe Mongo::ServerSelector do include_context 'server selector' describe '.get' do let(:selector) do described_class.get(:mode => name, :tag_sets => tag_sets) end context 'when a server selector object is passed' do let(:name) do :primary end it 'returns the object' do expect(described_class.get(selector)).to be(selector) end end context 'when the mode is primary' do let(:name) do :primary end it 'returns a read preference of class Primary' do expect(selector).to be_a(Mongo::ServerSelector::Primary) end context 'when the mode is a string' do let(:name) do 'primary' end it 'returns a read preference of class Primary' do expect(selector).to be_a(Mongo::ServerSelector::Primary) end end end context 'when the mode is primary_preferred' do let(:name) do :primary_preferred end it 'returns a read preference of class PrimaryPreferred' do expect(selector).to 
be_a(Mongo::ServerSelector::PrimaryPreferred) end context 'when the mode is a string' do let(:name) do 'primary_preferred' end it 'returns a read preference of class PrimaryPreferred' do expect(selector).to be_a(Mongo::ServerSelector::PrimaryPreferred) end end end context 'when the mode is secondary' do let(:name) do :secondary end it 'returns a read preference of class Secondary' do expect(selector).to be_a(Mongo::ServerSelector::Secondary) end context 'when the mode is a string' do let(:name) do 'secondary' end it 'returns a read preference of class Secondary' do expect(selector).to be_a(Mongo::ServerSelector::Secondary) end end end context 'when the mode is secondary_preferred' do let(:name) do :secondary_preferred end it 'returns a read preference of class SecondaryPreferred' do expect(selector).to be_a(Mongo::ServerSelector::SecondaryPreferred) end context 'when the mode is a string' do let(:name) do 'secondary_preferred' end it 'returns a read preference of class SecondaryPreferred' do expect(selector).to be_a(Mongo::ServerSelector::SecondaryPreferred) end end end context 'when the mode is nearest' do let(:name) do :nearest end it 'returns a read preference of class Nearest' do expect(selector).to be_a(Mongo::ServerSelector::Nearest) end context 'when the mode is a string' do let(:name) do 'nearest' end it 'returns a read preference of class Nearest' do expect(selector).to be_a(Mongo::ServerSelector::Nearest) end end end context 'when a mode is not provided' do let(:selector) { described_class.get } it 'returns a read preference of class Primary' do expect(selector).to be_a(Mongo::ServerSelector::Primary) end end context 'when tag sets are provided' do let(:selector) do described_class.get(:mode => :secondary, :tag_sets => tag_sets) end let(:tag_sets) do [{ 'test' => 'tag' }] end it 'sets tag sets on the read preference object' do expect(selector.tag_sets).to eq(tag_sets) end end context 'when server_selection_timeout is specified' do let(:selector) do described_class.get(:mode => :secondary, :server_selection_timeout => 1) end it 'sets server selection timeout on the read preference object' do expect(selector.server_selection_timeout).to eq(1) end end context 'when server_selection_timeout is not specified' do let(:selector) do described_class.get(:mode => :secondary) end it 'sets server selection timeout to the default' do expect(selector.server_selection_timeout).to eq(Mongo::ServerSelector::SERVER_SELECTION_TIMEOUT) end end context 'when local_threshold is specified' do let(:selector) do described_class.get(:mode => :secondary, :local_threshold => 0.010) end it 'sets local_threshold on the read preference object' do expect(selector.local_threshold).to eq(0.010) end end context 'when local_threshold is not specified' do let(:selector) do described_class.get(:mode => :secondary) end it 'sets local threshold to the default' do expect(selector.local_threshold).to eq(Mongo::ServerSelector::LOCAL_THRESHOLD) end end end describe "#select_server" do require_no_linting context 'replica set topology' do let(:cluster) do double('cluster').tap do |c| allow(c).to receive(:connected?).and_return(true) allow(c).to receive(:summary) allow(c).to receive(:topology).and_return(topology) allow(c).to receive(:servers).and_return(servers) allow(c).to receive(:servers_list).and_return(servers) allow(c).to receive(:addresses).and_return(servers.map(&:address)) allow(c).to receive(:replica_set?).and_return(true) allow(c).to receive(:single?).and_return(false) allow(c).to 
receive(:sharded?).and_return(false) allow(c).to receive(:unknown?).and_return(false) allow(c).to receive(:scan!).and_return(true) allow(c).to receive(:options).and_return(server_selection_timeout: 0.1) allow(c).to receive(:server_selection_semaphore).and_return(nil) allow(topology).to receive(:compatible?).and_return(true) end end let(:primary) do make_server(:primary).tap do |server| allow(server).to receive(:features).and_return(double("primary features")) end end let(:secondary) do make_server(:secondary).tap do |server| allow(server).to receive(:features).and_return(double("secondary features")) end end context "when #select_in_replica_set returns a list of nils" do let(:servers) do [ primary ] end let(:read_pref) do described_class.get(mode: :primary).tap do |pref| allow(pref).to receive(:select_in_replica_set).and_return([ nil, nil ]) end end it 'raises a NoServerAvailable error' do expect do read_pref.select_server(cluster) end.to raise_exception(Mongo::Error::NoServerAvailable) end end context "write_aggregation is true" do before do # It does not matter for this context whether primary supports secondary writes or not, # but we need to mock out this call. allow(primary.features).to receive(:merge_out_on_secondary_enabled?).and_return(false) end context "read preference is primary" do let(:selector) { Mongo::ServerSelector::Primary.new } let(:servers) do [ primary, secondary ] end [true, false].each do |secondary_support_writes| context "secondary #{secondary_support_writes ? 'supports' : 'does not support' } writes" do it "selects a primary" do allow(secondary.features).to receive(:merge_out_on_secondary_enabled?).and_return(secondary_support_writes) expect(selector.select_server(cluster, write_aggregation: true)).to eq(primary) end end end end context "read preference is primary preferred" do let(:selector) { Mongo::ServerSelector::PrimaryPreferred.new } let(:servers) do [ primary, secondary ] end [true, false].each do |secondary_support_writes| context "secondary #{secondary_support_writes ? 
'supports' : 'does not support' } writes" do it "selects a primary" do allow(secondary.features).to receive(:merge_out_on_secondary_enabled?).and_return(secondary_support_writes) expect(selector.select_server(cluster, write_aggregation: true)).to eq(primary) end end end end context "read preference is secondary preferred" do let(:selector) { Mongo::ServerSelector::SecondaryPreferred.new } let(:servers) do [ primary, secondary ] end context "secondary supports writes" do it "selects a secondary" do allow(secondary.features).to receive(:merge_out_on_secondary_enabled?).and_return(true) expect(selector.select_server(cluster, write_aggregation: true)).to eq(secondary) end end context "secondary does not support writes" do it "selects a primary" do allow(secondary.features).to receive(:merge_out_on_secondary_enabled?).and_return(false) expect(selector.select_server(cluster, write_aggregation: true)).to eq(primary) end end end context "read preference is secondary" do let(:selector) { Mongo::ServerSelector::Secondary.new } let(:servers) do [ primary, secondary ] end context "secondary supports writes" do it "selects a secondary" do allow(secondary.features).to receive(:merge_out_on_secondary_enabled?).and_return(true) expect(selector.select_server(cluster, write_aggregation: true)).to eq(secondary) end end context "secondary does not support writes" do it "selects a primary" do allow(secondary.features).to receive(:merge_out_on_secondary_enabled?).and_return(false) expect(selector.select_server(cluster, write_aggregation: true)).to eq(primary) end end context "no secondaries in cluster" do let(:servers) do [ primary ] end it "selects a primary" do expect(selector.select_server(cluster, write_aggregation: true)).to eq(primary) end end end end end context 'when the cluster has a server_selection_timeout set' do let(:servers) do [ make_server(:secondary), make_server(:primary) ] end let(:cluster) do double('cluster').tap do |c| allow(c).to receive(:connected?).and_return(true) allow(c).to receive(:summary) allow(c).to receive(:topology).and_return(topology) allow(c).to receive(:servers).and_return(servers) allow(c).to receive(:servers_list).and_return(servers) allow(c).to receive(:addresses).and_return(servers.map(&:address)) allow(c).to receive(:replica_set?).and_return(true) allow(c).to receive(:single?).and_return(false) allow(c).to receive(:sharded?).and_return(false) allow(c).to receive(:unknown?).and_return(false) allow(c).to receive(:scan!).and_return(true) allow(c).to receive(:options).and_return(server_selection_timeout: 0) end end let(:read_pref) do described_class.get(mode: :nearest) end it 'uses the server_selection_timeout of the cluster' do expect{ read_pref.select_server(cluster) }.to raise_exception(Mongo::Error::NoServerAvailable) end end context 'when the cluster has a local_threshold set' do let(:near_server) do make_server(:secondary).tap do |s| allow(s).to receive(:connectable?).and_return(true) allow(s).to receive(:average_round_trip_time).and_return(100) allow(s).to receive(:check_driver_support!).and_return(true) end end let(:far_server) do make_server(:secondary).tap do |s| allow(s).to receive(:connectable?).and_return(true) allow(s).to receive(:average_round_trip_time).and_return(200) allow(s).to receive(:check_driver_support!).and_return(true) end end let(:servers) do [ near_server, far_server ] end let(:cluster) do double('cluster').tap do |c| allow(c).to receive(:connected?).and_return(true) allow(c).to receive(:summary) allow(c).to 
receive(:topology).and_return(topology) allow(c).to receive(:servers).and_return(servers) allow(c).to receive(:addresses).and_return(servers.map(&:address)) allow(c).to receive(:replica_set?).and_return(true) allow(c).to receive(:single?).and_return(false) allow(c).to receive(:sharded?).and_return(false) allow(c).to receive(:unknown?).and_return(false) allow(c).to receive(:scan!).and_return(true) allow(c).to receive(:options).and_return(local_threshold: 0.050) end end let(:read_pref) do described_class.get(mode: :nearest) end it 'uses the local_threshold of the cluster' do expect(topology).to receive(:compatible?).and_return(true) expect(read_pref.select_server(cluster)).to eq(near_server) end end context 'when topology is incompatible' do let(:server) { make_server(:primary) } let(:cluster) do double('cluster').tap do |c| allow(c).to receive(:connected?).and_return(true) allow(c).to receive(:summary) allow(c).to receive(:topology).and_return(topology) allow(c).to receive(:servers).and_return([server]) allow(c).to receive(:addresses).and_return([server.address]) allow(c).to receive(:replica_set?).and_return(true) allow(c).to receive(:single?).and_return(false) allow(c).to receive(:sharded?).and_return(false) allow(c).to receive(:unknown?).and_return(false) allow(c).to receive(:scan!).and_return(true) allow(c).to receive(:options).and_return(local_threshold: 0.050) end end let(:compatibility_error) do Mongo::Error::UnsupportedFeatures.new('Test UnsupportedFeatures') end let(:selector) { described_class.primary } it 'raises Error::UnsupportedFeatures' do expect(topology).to receive(:compatible?).and_return(false) expect(topology).to receive(:compatibility_error).and_return(compatibility_error) expect do selector.select_server(cluster) end.to raise_error(Mongo::Error::UnsupportedFeatures, 'Test UnsupportedFeatures') end end context 'sharded topology' do let(:cluster) do double('cluster').tap do |c| allow(c).to receive(:connected?).and_return(true) allow(c).to receive(:summary) allow(c).to receive(:topology).and_return(topology) allow(c).to receive(:servers).and_return(servers) allow(c).to receive(:addresses).and_return(servers.map(&:address)) allow(c).to receive(:replica_set?).and_return(false) allow(c).to receive(:single?).and_return(false) allow(c).to receive(:sharded?).and_return(true) allow(c).to receive(:unknown?).and_return(false) allow(c).to receive(:scan!) 
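# Note: unlike the replica-set doubles above, scan! here is stubbed with no
# return value; for these sharded-topology cases server selection only needs
# the method to be callable.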
allow(c).to receive(:options).and_return(local_threshold: 0.050) allow(topology).to receive(:compatible?).and_return(true) allow(topology).to receive(:single?).and_return(false) end end context 'unknown and mongos' do let(:mongos) { make_server(:mongos, address: Mongo::Address.new('localhost')) } let(:unknown) { make_server(:unknown, address: Mongo::Address.new('localhost')) } let(:servers) { [unknown, mongos] } let(:selector) { described_class.primary } [true, false].each do |write_aggregation| context "write_aggregation is #{write_aggregation}" do it 'returns the mongos' do expect(selector.select_server(cluster, write_aggregation: write_aggregation)).to eq(mongos) end end end end end end shared_context 'a ServerSelector' do context 'when cluster#servers is empty' do let(:servers) do [] end let(:cluster) do double('cluster').tap do |c| allow(c).to receive(:connected?).and_return(true) allow(c).to receive(:summary) allow(c).to receive(:topology).and_return(topology) allow(c).to receive(:servers).and_return(servers) allow(c).to receive(:addresses).and_return([]) allow(c).to receive(:replica_set?).and_return(!single && !sharded) allow(c).to receive(:single?).and_return(single) allow(c).to receive(:sharded?).and_return(sharded) allow(c).to receive(:unknown?).and_return(false) allow(c).to receive(:scan!).and_return(true) allow(c).to receive(:options).and_return(server_selection_timeout: 0.1) end end let(:read_pref) do described_class.primary end it 'raises a NoServerAvailable error' do expect do read_pref.select_server(cluster) end.to raise_exception(Mongo::Error::NoServerAvailable) end end end context 'when the cluster has a Single topology' do let(:single) { true } let(:sharded) { false } it_behaves_like 'a ServerSelector' end context 'when the cluster has a ReplicaSet topology' do let(:single) { false } let(:sharded) { false } it_behaves_like 'a ServerSelector' end context 'when the cluster has a Sharded topology' do let(:single) { false } let(:sharded) { true } it_behaves_like 'a ServerSelector' end describe '#inspect' do let(:options) do {} end let(:read_pref) do described_class.get({ mode: mode }.merge(options)) end context 'when the mode is primary' do let(:mode) do :primary end it 'includes the mode in the inspect string' do expect(read_pref.inspect).to match(/#{mode.to_s}/i) end end context 'when there are tag sets' do let(:mode) do :secondary end let(:options) do { tag_sets: [{ 'data_center' => 'nyc' }] } end it 'includes the tag sets in the inspect string' do expect(read_pref.inspect).to include(options[:tag_sets].inspect) end end context 'when there is a max staleness set' do let(:mode) do :secondary end let(:options) do { max_staleness: 123 } end it 'includes staleness in the inspect string' do expect(read_pref.inspect).to match(/max_staleness/i) expect(read_pref.inspect).to match(/123/) end end end describe '#filter_stale_servers' do require_no_linting include_context 'server selector' let(:name) do :secondary end let(:selector) { Mongo::ServerSelector::Secondary.new( mode: name, max_staleness: max_staleness) } def make_server_with_staleness(last_write_date) make_server(:secondary).tap do |server| allow(server.description.features).to receive(:max_staleness_enabled?).and_return(true) allow(server).to receive(:last_scan).and_return(Time.now) allow(server).to receive(:last_write_date).and_return(last_write_date) end end shared_context 'staleness filter' do let(:servers) do [recent_server, stale_server] end context 'when max staleness is not set' do let(:max_staleness) { nil } it 
'filters correctly' do result = selector.send(:filter_stale_servers, servers, primary) expect(result).to eq([recent_server, stale_server]) end end context 'when max staleness is set' do let(:max_staleness) { 100 } it 'filters correctly' do result = selector.send(:filter_stale_servers, servers, primary) expect(result).to eq([recent_server]) end end end context 'primary is given' do let(:primary) do make_server(:primary).tap do |server| allow(server).to receive(:last_scan).and_return(Time.now) allow(server).to receive(:last_write_date).and_return(Time.now-100) end end # staleness is relative to primary, which itself is 100 seconds stale let(:recent_server) { make_server_with_staleness(Time.now-110) } let(:stale_server) { make_server_with_staleness(Time.now-210) } it_behaves_like 'staleness filter' end context 'primary is not given' do let(:primary) { nil } let(:recent_server) { make_server_with_staleness(Time.now-1) } let(:stale_server) { make_server_with_staleness(Time.now-110) } it_behaves_like 'staleness filter' end end describe '#suitable_servers' do let(:selector) { Mongo::ServerSelector::Primary.new(options) } let(:cluster) { double('cluster') } let(:options) { {} } context 'sharded' do let(:servers) do [make_server(:mongos)] end before do allow(cluster).to receive(:single?).and_return(false) allow(cluster).to receive(:sharded?).and_return(true) allow(cluster).to receive(:options).and_return({}) allow(cluster).to receive(:servers).and_return(servers) end it 'returns the servers' do expect(selector.candidates(cluster)).to eq(servers) end context 'with local threshold' do let(:options) do {local_threshold: 1} end it 'returns the servers' do expect(selector.candidates(cluster)).to eq(servers) end context 'when servers become unknown' do let(:servers) do [make_server(:unknown)] end it 'returns an empty list' do expect(selector.suitable_servers(cluster)).to eq([]) end end end end end end mongo-ruby-driver-2.21.3/spec/mongo/server_spec.rb000066400000000000000000000253721505113246500221170ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe Mongo::Server do fails_on_jruby declare_topology_double let(:cluster) do double('cluster').tap do |cl| allow(cl).to receive(:topology).and_return(topology) allow(cl).to receive(:app_metadata).and_return(app_metadata) allow(cl).to receive(:options).and_return({}) end end let(:listeners) do Mongo::Event::Listeners.new end let(:monitoring) do Mongo::Monitoring.new(monitoring: false) end let(:address) do default_address end let(:pool) do server.pool end let(:server_options) do {} end let(:server) do register_server( described_class.new(address, cluster, monitoring, listeners, SpecConfig.instance.test_options.merge(monitoring_io: false).merge(server_options)) ) end let(:monitor_app_metadata) do Mongo::Server::Monitor::AppMetadata.new( server_api: SpecConfig.instance.ruby_options[:server_api], ) end shared_context 'with monitoring io' do let(:server_options) do {monitoring_io: true} end before do allow(cluster).to receive(:monitor_app_metadata).and_return(monitor_app_metadata) allow(cluster).to receive(:push_monitor_app_metadata).and_return(monitor_app_metadata) allow(cluster).to receive(:heartbeat_interval).and_return(1000) end end describe '#==' do context 'when the other is not a server' do let(:other) do false end it 'returns false' do expect(server).to_not eq(other) end end context 'when the other is a server' do context 'when the addresses match' do let(:other) do register_server( 
described_class.new(address, cluster, monitoring, listeners, SpecConfig.instance.test_options.merge(monitoring_io: false)) ) end it 'returns true' do expect(server).to eq(other) end end context 'when the addresses dont match' do let(:other_address) do Mongo::Address.new('127.0.0.1:27018') end let(:other) do register_server( described_class.new(other_address, cluster, monitoring, listeners, SpecConfig.instance.test_options.merge(monitoring_io: false)) ) end it 'returns false' do expect(server).to_not eq(other) end end end end describe '#disconnect!' do context 'with monitoring io' do include_context 'with monitoring io' it 'stops the monitor instance' do expect(server.instance_variable_get(:@monitor)).to receive(:stop!).and_call_original server.disconnect! end end context 'when server has a pool' do before do allow(server).to receive(:unknown?).and_return(false) allow(cluster).to receive(:run_sdam_flow) server.pool.ready end it 'pauses and clears the connection pool' do expect(server.pool_internal).to receive(:close).once.and_call_original RSpec::Mocks.with_temporary_scope do # you can't disconnect from a known server, since this pauses the # pool and we only want to pause the pools of unknown servers. server.unknown! allow(server).to receive(:unknown?).and_return(true) server.close end end end context 'when server reconnects' do before do allow(server).to receive(:unknown?).and_return(false) allow(cluster).to receive(:run_sdam_flow) server.pool.ready end it 'keeps the same pool' do pool = server.pool RSpec::Mocks.with_temporary_scope do # you can't disconnect from a known server, since this pauses the # pool and we only want to pause the pools of unknown servers. # server.unknown! allow(server).to receive(:unknown?).and_return(true) server.disconnect! end server.reconnect! allow(server).to receive(:unknown?).and_return(false) expect(server.pool).to eq(pool) end end end describe '#initialize' do include_context 'with monitoring io' before do allow(cluster).to receive(:run_sdam_flow) end it 'sets the address host' do expect(server.address.host).to eq(default_address.host) end it 'sets the address port' do expect(server.address.port).to eq(default_address.port) end it 'sets the options' do expect(server.options[:monitoring_io]).to be true end context 'with monitoring app metadata option' do require_no_required_api_version it 'creates monitor with monitoring app metadata' do server.monitor.scan! expect(server.monitor.connection.options[:app_metadata]).to be monitor_app_metadata end end context 'monitoring_io: false' do let(:server_options) do {monitoring_io: false} end it 'does not create monitoring thread' do expect(server.monitor.instance_variable_get('@thread')).to be nil end end context 'monitoring_io: true' do include_context 'with monitoring io' it 'creates monitoring thread' do expect(server.monitor.instance_variable_get('@thread')).to be_a(Thread) end end end describe '#scan!' do clean_slate include_context 'with monitoring io' before do # We are invoking scan! on the monitor manually, stop the background # thread to avoid it interfering with our assertions. server.monitor.stop! end it 'delegates scan to the monitor' do expect(server.monitor).to receive(:scan!) server.scan! end it 'invokes sdam flow eventually' do expect(cluster).to receive(:run_sdam_flow) server.scan! end end describe '#reconnect!' 
do include_context 'with monitoring io' before do expect(server.monitor).to receive(:restart!).and_call_original end it 'restarts the monitor and returns true' do expect(server.reconnect!).to be(true) end end describe '#retry_writes?' do before do allow(server).to receive(:features).and_return(features) end context 'when the server version is less than 3.6' do let(:features) do double('features', sessions_enabled?: false) end context 'when the server has a logical_session_timeout value' do before do allow(server).to receive(:logical_session_timeout).and_return(true) end it 'returns false' do expect(server.retry_writes?).to be(false) end end context 'when the server does not have a logical_session_timeout value' do before do allow(server).to receive(:logical_session_timeout).and_return(nil) end it 'returns false' do expect(server.retry_writes?).to be(false) end end end context 'when the server version is at least 3.6' do let(:features) do double('features', sessions_enabled?: true) end context 'when the server has a logical_session_timeout value' do before do allow(server).to receive(:logical_session_timeout).and_return(true) end context 'when the server is a standalone' do before do allow(server).to receive(:standalone?).and_return(true) end it 'returns false' do expect(server.retry_writes?).to be(false) end end context 'when the server is not a standalone' do before do allow(server).to receive(:standalone?).and_return(false) end it 'returns true' do expect(server.retry_writes?).to be(true) end end end context 'when the server does not have a logical_session_timeout value' do before do allow(server).to receive(:logical_session_timeout).and_return(nil) end it 'returns false' do expect(server.retry_writes?).to be(false) end end end end describe '#summary' do context 'server is primary' do let(:server) do make_server(:primary) end before do expect(server).to be_primary end it 'includes its status' do expect(server.summary).to match(/PRIMARY/) end it 'includes replica set name' do expect(server.summary).to match(/replica_set=mongodb_set/) end end context 'server is secondary' do let(:server) do make_server(:secondary) end before do expect(server).to be_secondary end it 'includes its status' do expect(server.summary).to match(/SECONDARY/) end it 'includes replica set name' do expect(server.summary).to match(/replica_set=mongodb_set/) end end context 'server is arbiter' do let(:server) do make_server(:arbiter) end before do expect(server).to be_arbiter end it 'includes its status' do expect(server.summary).to match(/ARBITER/) end it 'includes replica set name' do expect(server.summary).to match(/replica_set=mongodb_set/) end end context 'server is ghost' do let(:server) do make_server(:ghost) end before do expect(server).to be_ghost end it 'includes its status' do expect(server.summary).to match(/GHOST/) end it 'does not include replica set name' do expect(server.summary).not_to include('replica_set') end end context 'server is other' do let(:server) do make_server(:other) end before do expect(server).to be_other end it 'includes its status' do expect(server.summary).to match(/OTHER/) end it 'includes replica set name' do expect(server.summary).to match(/replica_set=mongodb_set/) end end context 'server is unknown' do let(:server_options) do {monitoring_io: false} end before do expect(server).to be_unknown end it 'includes unknown status' do expect(server.summary).to match(/UNKNOWN/) end it 'does not include replica set name' do expect(server.summary).not_to include('replica_set') end end 
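# For reference, a summary string looks roughly like
#   #<Server address=127.0.0.1:27017 PRIMARY replica_set=mongodb_set>
# (illustrative only; the exact format is owned by Server#summary, which is
# why these examples assert on substrings rather than on the whole string).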
context 'server is a mongos' do let(:server) do make_server(:mongos) end before do expect(server).to be_mongos end it 'specifies the server is a mongos' do expect(server.summary).to match(/MONGOS/) end end end describe '#log_warn' do it 'works' do expect do server.log_warn('test warning') end.not_to raise_error end end end mongo-ruby-driver-2.21.3/spec/mongo/session/000077500000000000000000000000001505113246500207245ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/mongo/session/server_session_spec.rb000066400000000000000000000031021505113246500253300ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe Mongo::Session::ServerSession do describe '#initialize' do it 'sets the last use variable to the current time' do expect(described_class.new.last_use).to be_within(0.2).of(Time.now) end it 'sets a UUID as the session id' do expect(described_class.new.instance_variable_get(:@session_id)).to be_a(BSON::Document) expect(described_class.new.session_id).to be_a(BSON::Document) expect(described_class.new.session_id[:id]).to be_a(BSON::Binary) end end describe '#next_txn_number' do it 'advances and returns the next transaction number' do expect(described_class.new.next_txn_num).to be(1) end context 'when the method is called multiple times' do let(:server_session) do described_class.new end before do server_session.next_txn_num server_session.next_txn_num end it 'advances and returns the next transaction number' do expect(server_session.next_txn_num).to be(3) end end end describe '#inspect' do let(:session) do described_class.new end it 'includes the Ruby object_id in the formatted string' do expect(session.inspect).to include(session.object_id.to_s) end it 'includes the session_id in the formatted string' do expect(session.inspect).to include(session.session_id.to_s) end it 'includes the last_use in the formatted string' do expect(session.inspect).to include(session.last_use.to_s) end end end mongo-ruby-driver-2.21.3/spec/mongo/session/session_pool_spec.rb000066400000000000000000000163701505113246500250060ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe Mongo::Session::SessionPool do min_server_fcv '3.6' require_topology :replica_set, :sharded, :load_balanced clean_slate_for_all let(:cluster) do authorized_client.cluster.tap do |cluster| # Cluster time assertions can fail if there are background operations # that cause cluster time to be updated. This also necessitates clean # state requirement. 
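# Closing the client below is intentional: it shuts down the client's
# background monitoring so that nothing advances the cluster time while
# these examples run.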
authorized_client.close end end describe '#initialize' do let(:pool) do described_class.new(cluster) end it 'sets the cluster' do expect(pool.instance_variable_get(:@cluster)).to be(authorized_client.cluster) end end describe '#inspect' do let(:pool) do described_class.new(cluster) end before do s = pool.checkout pool.checkin(s) end it 'includes the Ruby object_id in the formatted string' do expect(pool.inspect).to include(pool.object_id.to_s) end it 'includes the pool size in the formatted string' do expect(pool.inspect).to include('current_size=1') end end describe 'checkout' do let(:pool) do described_class.new(cluster) end context 'when a session is checked out' do let!(:session_a) do pool.checkout end let!(:session_b) do pool.checkout end before do pool.checkin(session_a) pool.checkin(session_b) end it 'is returned to the front of the queue' do expect(pool.checkout).to be(session_b) expect(pool.checkout).to be(session_a) end end context 'when there are sessions about to expire in the queue' do let(:old_session_a) do pool.checkout end let(:old_session_b) do pool.checkout end before do pool.checkin(old_session_a) pool.checkin(old_session_b) allow(old_session_a).to receive(:last_use).and_return(Time.now - 1800) allow(old_session_b).to receive(:last_use).and_return(Time.now - 1800) end context 'when a session is checked out' do let(:checked_out_session) do pool.checkout end context "in non load-balanced topology" do require_topology :replica_set, :sharded it 'disposes of the old session and returns a new one' do old_sessions = [old_session_a, old_session_b] expect(old_sessions).not_to include(pool.checkout) expect(old_sessions).not_to include(pool.checkout) expect(pool.instance_variable_get(:@queue)).to be_empty end end context "in load-balanced topology" do require_topology :load_balanced it 'does not dispose of the old session' do old_sessions = [old_session_a, old_session_b] expect(old_sessions).to include(checked_out_session) expect(old_sessions).to include(checked_out_session) expect(pool.instance_variable_get(:@queue)).to be_empty end end end end context 'when sessions that are about to expire are checked in' do let(:old_session_a) do pool.checkout end let(:old_session_b) do pool.checkout end before do allow(old_session_a).to receive(:last_use).and_return(Time.now - 1800) allow(old_session_b).to receive(:last_use).and_return(Time.now - 1800) pool.checkin(old_session_a) pool.checkin(old_session_b) end context "in non load-balanced topology" do require_topology :replica_set, :sharded it 'disposes of the old sessions instead of adding them to the pool' do old_sessions = [old_session_a, old_session_b] expect(old_sessions).not_to include(pool.checkout) expect(old_sessions).not_to include(pool.checkout) expect(pool.instance_variable_get(:@queue)).to be_empty end end context "in load-balanced topology" do require_topology :load_balanced it 'does not dispose of the old sessions' do old_sessions = [old_session_a, old_session_b] expect(old_sessions).to include(pool.checkout) expect(old_sessions).to include(pool.checkout) expect(pool.instance_variable_get(:@queue)).to be_empty end end end end describe '#end_sessions' do let(:pool) do client.cluster.session_pool end let!(:session_a) do pool.checkout end let!(:session_b) do pool.checkout end let(:subscriber) { Mrss::EventSubscriber.new } let(:client) do authorized_client.tap do |client| client.subscribe(Mongo::Monitoring::COMMAND, subscriber) end end context 'when the number of ids is not larger than 10,000' do before do 
client.database.command(ping: 1) pool.checkin(session_a) pool.checkin(session_b) end let!(:cluster_time) do client.cluster.cluster_time end let(:end_sessions_command) do pool.end_sessions subscriber.started_events.find { |c| c.command_name == 'endSessions'} end it 'sends the endSessions command with all the session ids' do end_sessions_command expect(end_sessions_command.command[:endSessions]).to include(BSON::Document.new(session_a.session_id)) expect(end_sessions_command.command[:endSessions]).to include(BSON::Document.new(session_b.session_id)) end context 'when talking to a replica set or mongos' do it 'sends the endSessions command with all the session ids and cluster time' do start_time = client.cluster.cluster_time end_sessions_command end_time = client.cluster.cluster_time expect(end_sessions_command.command[:endSessions]).to include(BSON::Document.new(session_a.session_id)) expect(end_sessions_command.command[:endSessions]).to include(BSON::Document.new(session_b.session_id)) # cluster time may have been advanced due to background operations actual_cluster_time = Mongo::ClusterTime.new(end_sessions_command.command[:$clusterTime]) expect(actual_cluster_time).to be >= start_time expect(actual_cluster_time).to be <= end_time end end end context 'when the number of ids is larger than 10_000' do let(:ids) do 10_001.times.map do |i| bytes = [SecureRandom.uuid.gsub(/\-/, '')].pack('H*') BSON::Document.new(id: BSON::Binary.new(bytes, :uuid)) end end before do queue = [] ids.each do |id| queue << double('session', session_id: id) end pool.instance_variable_set(:@queue, queue) expect(Mongo::Operation::Command).to receive(:new).at_least(:twice).and_call_original end let(:end_sessions_commands) do subscriber.started_events.select { |c| c.command_name == 'endSessions'} end it 'sends the command more than once' do pool.end_sessions expect(end_sessions_commands.size).to eq(2) expect(end_sessions_commands[0].command[:endSessions]).to eq(ids[0...10_000]) expect(end_sessions_commands[1].command[:endSessions]).to eq([ids[10_000]]) end end end end mongo-ruby-driver-2.21.3/spec/mongo/session_spec.rb000066400000000000000000000214411505113246500222650ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe Mongo::Session do min_server_fcv '3.6' require_topology :replica_set, :sharded let(:session) do authorized_client.start_session(options) end let(:options) do {} end describe '#initialize' do context 'when options are provided' do it 'duplicates and freezes the options' do expect(session.options).not_to be(options) expect(session.options.frozen?).to be(true) end end it 'sets a server session with an id' do expect(session.session_id).to be_a(BSON::Document) end it 'sets the cluster time to nil' do expect(session.cluster_time).to be(nil) end it 'sets the cluster' do expect(session.cluster).to be(authorized_client.cluster) end end describe '#inspect' do it 'includes the Ruby object_id in the formatted string' do expect(session.inspect).to include(session.object_id.to_s) end it 'includes the session_id in the formatted string' do expect(session.inspect).to include(session.session_id.to_s) end context 'when options are provided' do let(:options) do { causal_consistency: true } end it 'includes the options in the formatted string' do expect(session.inspect).to include({ implicit: false, causal_consistency: true }.to_s) end end end describe '#advance_cluster_time' do let(:new_cluster_time) do { 'clusterTime' => BSON::Timestamp.new(0, 5) } end context 'when 
the session does not have a cluster time' do before do session.advance_cluster_time(new_cluster_time) end it 'sets the new cluster time' do expect(session.cluster_time).to eq(new_cluster_time) end end context 'when the session already has a cluster time' do context 'when the original cluster time is less than the new cluster time' do let(:original_cluster_time) do Mongo::ClusterTime.new('clusterTime' => BSON::Timestamp.new(0, 1)) end before do session.instance_variable_set(:@cluster_time, original_cluster_time) session.advance_cluster_time(new_cluster_time) end it 'sets the new cluster time' do expect(session.cluster_time).to eq(new_cluster_time) end end context 'when the original cluster time is equal or greater than the new cluster time' do let(:original_cluster_time) do Mongo::ClusterTime.new('clusterTime' => BSON::Timestamp.new(0, 6)) end before do session.instance_variable_set(:@cluster_time, original_cluster_time) session.advance_cluster_time(new_cluster_time) end it 'does not update the cluster time' do expect(session.cluster_time).to eq(original_cluster_time) end end end end describe '#advance_operation_time' do let(:new_operation_time) do BSON::Timestamp.new(0, 5) end context 'when the session does not have an operation time' do before do session.advance_operation_time(new_operation_time) end it 'sets the new operation time' do expect(session.operation_time).to eq(new_operation_time) end end context 'when the session already has an operation time' do context 'when the original operation time is less than the new operation time' do let(:original_operation_time) do BSON::Timestamp.new(0, 1) end before do session.instance_variable_set(:@operation_time, original_operation_time) session.advance_operation_time(new_operation_time) end it 'sets the new operation time' do expect(session.operation_time).to eq(new_operation_time) end end context 'when the original operation time is equal or greater than the new operation time' do let(:original_operation_time) do BSON::Timestamp.new(0, 6) end before do session.instance_variable_set(:@operation_time, original_operation_time) session.advance_operation_time(new_operation_time) end it 'does not update the operation time' do expect(session.operation_time).to eq(original_operation_time) end end end end describe 'ended?' do context 'when the session has not been ended' do it 'returns false' do expect(session.ended?).to be(false) end end context 'when the session has been ended' do before do session.end_session end it 'returns true' do expect(session.ended?).to be(true) end end end describe 'end_session' do let!(:server_session) do session.instance_variable_get(:@server_session) end let(:cluster_session_pool) do session.cluster.session_pool end it 'returns the server session to the cluster session pool' do session.end_session expect(cluster_session_pool.instance_variable_get(:@queue)).to include(server_session) end context 'when #end_session is called multiple times' do before do session.end_session end it 'returns nil' do expect(session.end_session).to be_nil end end end describe '#retry_writes?' 
do context 'when the option is set to true' do let(:client) do authorized_client_with_retry_writes end it 'returns true' do expect(client.start_session.retry_writes?).to be(true) end end context 'when the option is set to false' do let(:client) do authorized_client.with(retry_writes: false) end it 'returns false' do expect(client.start_session.retry_writes?).to be(false) end end context 'when the option is not defined' do require_no_retry_writes let(:client) do authorized_client end it 'returns false' do expect(client.start_session.retry_writes?).to be(false) end end end describe '#session_id' do it 'returns a BSON::Document' do expect(session.session_id).to be_a(BSON::Document) end context 'ended session' do before do session.end_session end it 'raises SessionEnded' do expect do session.session_id end.to raise_error(Mongo::Error::SessionEnded) end end context "when the session is not materialized" do let(:session) { authorized_client.get_session(implicit: true) } before do expect(session.materialized?).to be false end it "raises SessionNotMaterialized" do expect do session.session_id end.to raise_error(Mongo::Error::SessionNotMaterialized) end end end describe '#txn_num' do it 'returns an integer' do expect(session.txn_num).to be_a(Integer) end context 'ended session' do before do session.end_session end it 'raises SessionEnded' do expect do session.txn_num end.to raise_error(Mongo::Error::SessionEnded) end end end describe '#next_txn_num' do it 'returns an integer' do expect(session.next_txn_num).to be_a(Integer) end it 'increments transaction number on each call' do expect(session.next_txn_num).to eq(1) expect(session.next_txn_num).to eq(2) end context 'ended session' do before do session.end_session end it 'raises SessionEnded' do expect do session.next_txn_num end.to raise_error(Mongo::Error::SessionEnded) end end end describe '#start_session' do context 'when block doesn\'t raise an error' do it 'closes the session after the block' do block_session = nil authorized_client.start_session do |session| expect(session.ended?).to be false block_session = session end expect(block_session.ended?).to be true end end context 'when block raises an error' do it 'closes the session after the block' do block_session = nil expect do authorized_client.start_session do |session| block_session = session raise 'This is an error!' 
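# The exception raised above should escape start_session, which must
# nevertheless end the session (asserted below via block_session.ended?).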
end end.to raise_error(StandardError, 'This is an error!') expect(block_session.ended?).to be true end end context 'when block returns value' do it 'is returned by the function' do res = authorized_client.start_session do |session| 4 end expect(res).to be 4 end end it 'returns a session with session id' do session = authorized_client.start_session session.session_id.should be_a(BSON::Document) end end end mongo-ruby-driver-2.21.3/spec/mongo/session_transaction_spec.rb000066400000000000000000000154551505113246500247020ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' class SessionTransactionSpecError < StandardError; end describe Mongo::Session do require_wired_tiger min_server_fcv '4.0' require_topology :replica_set, :sharded let(:session) do authorized_client.start_session(session_options) end let(:session_options) do {} end let(:collection) do authorized_client['session-transaction-test'] end before do collection.delete_many end describe 'start_transaction' do context 'when topology is sharded and server is < 4.2' do max_server_fcv '4.1' require_topology :sharded it 'raises an error' do expect { session.start_transaction }.to raise_error(Mongo::Error::TransactionsNotSupported, /sharded transactions require server version/) end end end describe '#abort_transaction' do require_topology :replica_set context 'when a non-Mongo error is raised' do before do collection.insert_one({foo: 1}) end it 'propagates the exception and sets state to transaction aborted' do session.start_transaction collection.insert_one({foo: 1}, session: session) expect(session).to receive(:write_with_retry).and_raise(SessionTransactionSpecError) expect do session.abort_transaction end.to raise_error(SessionTransactionSpecError) expect(session.send(:within_states?, Mongo::Session::TRANSACTION_ABORTED_STATE)).to be true # Since we failed abort_transaction call, the transaction is still # outstanding. It will cause subsequent tests to stall until it times # out on the server side. End the session to force the server # to close the transaction. kill_all_server_sessions end end context 'when a Mongo error is raised' do before do collection.insert_one({foo: 1}) end it 'swallows the exception and sets state to transaction aborted' do session.start_transaction collection.insert_one({foo: 1}, session: session) expect(session).to receive(:write_with_retry).and_raise(Mongo::Error::SocketError) expect do session.abort_transaction end.not_to raise_error expect(session.send(:within_states?, Mongo::Session::TRANSACTION_ABORTED_STATE)).to be true # Since we failed abort_transaction call, the transaction is still # outstanding. It will cause subsequent tests to stall until it times # out on the server side. End the session to force the server # to close the transaction. 
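# (kill_all_server_sessions is a spec helper from the test suite; it is
# assumed here to issue the server's killAllSessions command against the
# test deployment.)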
kill_all_server_sessions end end end describe '#with_transaction' do require_topology :replica_set context 'callback successful' do it 'commits' do session.with_transaction do collection.insert_one(a: 1) end result = collection.find(a: 1).first expect(result[:a]).to eq(1) end it 'propagates callback\'s return value' do rv = session.with_transaction do 42 end expect(rv).to eq(42) end end context 'callback raises' do it 'propagates the exception' do expect do session.with_transaction do raise SessionTransactionSpecError, 'test error' end end.to raise_error(SessionTransactionSpecError, 'test error') end end context 'callback aborts transaction' do it 'does not raise exceptions and propagates callback\'s return value' do rv = session.with_transaction do session.abort_transaction 42 end expect(rv).to eq(42) end end context 'timeout with callback raising TransientTransactionError' do max_example_run_time 7 it 'times out' do start = Mongo::Utils.monotonic_time expect(Mongo::Utils).to receive(:monotonic_time).ordered.and_return(start) expect(Mongo::Utils).to receive(:monotonic_time).ordered.and_return(start + 1) expect(Mongo::Utils).to receive(:monotonic_time).ordered.and_return(start + 2) expect(Mongo::Utils).to receive(:monotonic_time).ordered.and_return(start + 200) allow(session).to receive('check_transactions_supported!').and_return true expect do session.with_transaction do exc = Mongo::Error::OperationFailure.new('timeout test') exc.add_label('TransientTransactionError') raise exc end end.to raise_error(Mongo::Error::OperationFailure, 'timeout test') end end %w(UnknownTransactionCommitResult TransientTransactionError).each do |label| context "timeout with commit raising with #{label}" do max_example_run_time 7 # JRuby seems to burn through the monotonic time expectations # very quickly and the retries of the transaction get the original # time which causes the transaction to be stuck there. 
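# The monotonic_time stubbing below sketches how the 120-second
# with_transaction retry deadline is exercised: a run of in-budget readings
# followed by one reading far past the limit (start + 200) forces the retry
# loop to give up and surface the labeled error.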
fails_on_jruby before do # create collection if it does not exist collection.insert_one(a: 1) end retry_test it 'times out' do start = Mongo::Utils.monotonic_time 11.times do |i| expect(Mongo::Utils).to receive(:monotonic_time).ordered.and_return(start + i) end expect(Mongo::Utils).to receive(:monotonic_time).ordered.and_return(start + 200) allow(session).to receive('check_transactions_supported!').and_return true exc = Mongo::Error::OperationFailure.new('timeout test') exc.add_label(label) expect(session).to receive(:commit_transaction).and_raise(exc).at_least(:once) expect do session.with_transaction do collection.insert_one(a: 2) end end.to raise_error(Mongo::Error::OperationFailure, 'timeout test') end end end context 'callback breaks out of with_tx loop' do it 'aborts transaction' do expect(session).to receive(:start_transaction).and_call_original expect(session).to receive(:abort_transaction).and_call_original expect(session).to receive(:log_warn).and_call_original session.with_transaction do break end end end context 'application timeout around with_tx' do it 'keeps session in a working state' do session collection.insert_one(a: 1) expect do Timeout.timeout(1, SessionTransactionSpecError) do session.with_transaction do sleep 2 end end end.to raise_error(SessionTransactionSpecError) session.with_transaction do collection.insert_one(timeout_around_with_tx: 2) end expect(collection.find(timeout_around_with_tx: 2).first).not_to be nil end end end end mongo-ruby-driver-2.21.3/spec/mongo/socket/000077500000000000000000000000001505113246500205315ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/mongo/socket/ssl_spec.rb000066400000000000000000000555041505113246500227020ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' # this test performs direct network connections without retries. # In case of intermittent network issues, retry the entire failing test. describe Mongo::Socket::SSL do retry_test clean_slate_for_all require_tls let(:host_name) { 'localhost' } let(:socket) do described_class.new('127.0.0.1', default_address.port, host_name, 1, :INET, ssl_options.merge( connect_timeout: 2.4)) end let(:ssl_options) do SpecConfig.instance.ssl_options end let (:key_string) do File.read(SpecConfig.instance.local_client_key_path) end let (:cert_string) do File.read(SpecConfig.instance.local_client_cert_path) end let (:ca_cert_string) do File.read(SpecConfig.instance.local_ca_cert_path) end let(:key_encrypted_string) do File.read(SpecConfig.instance.client_encrypted_key_path) end let(:cert_object) do OpenSSL::X509::Certificate.new(cert_string) end let(:key_object) do OpenSSL::PKey.read(key_string) end describe '#human_address' do it 'returns the address and tls indicator' do addr = socket.instance_variable_get(:@tcp_socket).remote_address expect(socket.send(:human_address)).to eq("#{addr.ip_address}:#{addr.ip_port} (#{default_address}, TLS)") end end describe '#connect!' do context 'when TLS context hooks are provided' do # https://github.com/jruby/jruby-openssl/issues/221 fails_on_jruby let(:proc) do Proc.new do |context| if BSON::Environment.jruby? 
context.ciphers = ["AES256-SHA256"] else context.ciphers = ["AES256-SHA"] end end end before do Mongo.tls_context_hooks = [ proc ] end after do Mongo.tls_context_hooks.clear end it 'runs the TLS context hook before connecting' do if ENV['OCSP_ALGORITHM'] skip "OCSP configurations use different certificates which this test does not handle" end expect(proc).to receive(:call).and_call_original socket # Even though we are requesting a single cipher in the hook, # there may be multiple ciphers available in the context. # All of the ciphers should match the requested one (using # OpenSSL's idea of what "match" means). socket.context.ciphers.each do |cipher| unless cipher.first =~ /SHA256/ || cipher.last == 256 raise "Unexpected cipher #{cipher} after requesting SHA-256" end end end end context 'when a certificate is provided' do context 'when connecting the tcp socket is successful' do it 'connects to the server' do expect(socket).to be_alive end end end context 'when a certificate and key are provided as strings' do let(:ssl_options) do { :ssl => true, :ssl_cert_string => cert_string, :ssl_key_string => key_string, :ssl_verify => false } end it 'connects to the server' do expect(socket).to be_alive end end context 'when certificate and an encrypted key are provided as strings' do require_local_tls let(:ssl_options) do { :ssl => true, :ssl_cert_string => cert_string, :ssl_key_string => key_encrypted_string, :ssl_key_pass_phrase => SpecConfig.instance.client_encrypted_key_passphrase, :ssl_verify => false } end it 'connects to the server' do expect(socket).to be_alive end end context 'when a certificate and key are provided as objects' do let(:ssl_options) do { :ssl => true, :ssl_cert_object => cert_object, :ssl_key_object => key_object, :ssl_verify => false } end it 'connects to the server' do expect(socket).to be_alive end end context 'when the certificate is specified using both a file and a PEM-encoded string' do let(:ssl_options) do super().merge( :ssl_cert_string => 'This is a random string, not a PEM-encoded certificate' ) end # since the lower priority option is clearly invalid we verify priority by checking that it connects it 'discards the value of :ssl_cert_string' do expect(socket).to be_alive end end context 'when the certificate is specified using both a file and an object' do let(:ssl_options) do super().merge( :ssl_cert_object => 'This is a string, not a certificate' ) end # since the lower priority option is clearly invalid we verify priority by checking that it connects it 'discards the value of :ssl_cert_object' do expect(socket).to be_alive end end context 'when the certificate is specified using both a PEM-encoded string and an object' do let(:ssl_options) do { :ssl => true, :ssl_cert_string => cert_string, :ssl_cert_object => 'This is a string, not a Certificate', :ssl_key => SpecConfig.instance.client_key_path, :ssl_verify => false } end # since the lower priority option is clearly invalid we verify priority by checking that it connects it 'discards the value of :ssl_cert_object' do expect(socket).to be_alive end end context 'when the key is specified using both a file and a PEM-encoded string' do let(:ssl_options) do super().merge( :ssl_key_string => 'This is a normal string, not a PEM-encoded key' ) end # since the lower priority option is clearly invalid we verify priority by checking that it connects it 'discards the value of :ssl_key_string' do expect(socket).to be_alive end end context 'when the key is specified using both a file and an object' do let(:ssl_options) do 
super().merge( :ssl_key_object => 'This is a string, not a key' ) end # since the lower priority option is clearly invalid we verify priority by checking that it connects it 'discards the value of :ssl_key_object' do expect(socket).to be_alive end end context 'when the key is specified using both a PEM-encoded string and an object' do let(:ssl_options) do { :ssl => true, :ssl_cert => SpecConfig.instance.client_cert_path, :ssl_key_string => key_string, :ssl_key_object => 'This is a string, not a PKey', :ssl_verify => false } end # since the lower priority option is clearly invalid we verify priority by checking that it connects it 'discards the value of :ssl_key_object' do expect(socket).to be_alive end end context 'when a certificate is passed, but it is not of the right type' do let(:ssl_options) do cert = "This is a string, not an X.509 Certificate" { :ssl => true, :ssl_cert_object => cert, :ssl_key => SpecConfig.instance.local_client_key_path, :ssl_verify => false } end it 'raises a TypeError' do expect do socket end.to raise_exception(TypeError) end end context 'when the hostname is incorrect' do let(:host_name) do 'incorrect_hostname' end context 'when the hostname is verified' do let(:ssl_options) do SpecConfig.instance.ssl_options.merge(ssl_verify: false, ssl_verify_hostname: true) end it 'raises an error' do lambda do socket end.should raise_error(Mongo::Error::SocketError, /TLS handshake failed due to a hostname mismatch/) end end context 'when the hostname is not verified' do let(:ssl_options) do SpecConfig.instance.ssl_options.merge(ssl_verify: false, ssl_verify_hostname: false) end it 'does not raise an error' do lambda do socket end.should_not raise_error end end end # Note that as of MRI 2.4, creating a socket with the wrong key type raises # a NoMethodError because #private? ends up being called on the key. # In jruby 9.2 a TypeError is raised. # In jruby 9.1 an OpenSSL::PKey::PKeyError is raised. context 'when a key is passed, but it is not of the right type' do let(:ssl_options) do key = "This is a string, not a key" { :ssl => true, :ssl_key_object => key, :ssl_cert => SpecConfig.instance.client_cert_path, :ssl_verify => false } end let(:expected_exception) do if SpecConfig.instance.jruby?
if RUBY_VERSION >= '2.5.0' # jruby 9.2 TypeError else # jruby 9.1 OpenSSL::OpenSSLError end else # MRI if RUBY_VERSION >= '3.1.0' TypeError else NoMethodError end end end it 'raises the expected exception' do expect do socket end.to raise_exception(expected_exception) end end context 'when a bad certificate/key is provided' do shared_examples_for 'raises an exception' do it 'raises an exception' do expect do socket end.to raise_exception(*expected_exception) end end context 'mri' do require_mri context 'when a bad certificate is provided' do let(:expected_exception) do if RUBY_VERSION >= '3.1.0' # OpenSSL::X509::CertificateError: PEM_read_bio_X509: no start line OpenSSL::X509::CertificateError else # OpenSSL::X509::CertificateError: nested asn1 error [OpenSSL::OpenSSLError, /asn1 error/i] end end let(:ssl_options) do super().merge( :ssl_cert => CRUD_TESTS.first, :ssl_key => nil, ) end it_behaves_like 'raises an exception' end context 'when a bad key is provided' do let(:expected_exception) do # OpenSSL::PKey::PKeyError: Could not parse PKey: no start line [OpenSSL::OpenSSLError, /Could not parse PKey/] end let(:ssl_options) do super().merge( :ssl_cert => nil, :ssl_key => CRUD_TESTS.first, ) end it_behaves_like 'raises an exception' end end context 'jruby' do require_jruby # On JRuby the key does not appear to be parsed, therefore only # specifying the bad certificate produces an error. context 'when a bad certificate is provided' do let(:ssl_options) do super().merge( :ssl_cert => CRUD_TESTS.first, :ssl_key => nil, ) end let(:expected_exception) do # java.lang.ClassCastException: org.bouncycastle.asn1.DERApplicationSpecific cannot be cast to org.bouncycastle.asn1.ASN1Sequence # OpenSSL::X509::CertificateError: parsing issue: malformed PEM data: no header found [OpenSSL::OpenSSLError, /malformed pem data/i] end it_behaves_like 'raises an exception' end end end context 'when a CA certificate is provided' do require_local_tls context 'as a path to a file' do let(:ssl_options) do super().merge( :ssl_ca_cert => SpecConfig.instance.local_ca_cert_path, :ssl_verify => true ) end it 'connects to the server' do expect(socket).to be_alive end end context 'as a string containing the PEM-encoded certificate' do let(:ssl_options) do super().merge( :ssl_ca_cert_string => ca_cert_string, :ssl_verify => true ) end it 'connects to the server' do expect(socket).to be_alive end end context 'as an array of Certificate objects' do let(:ssl_options) do cert = [OpenSSL::X509::Certificate.new(ca_cert_string)] super().merge( :ssl_ca_cert_object => cert, :ssl_verify => true ) end it 'connects to the server' do expect(socket).to be_alive end end context 'both as a file and a PEM-encoded parameter' do let(:ssl_options) do super().merge( :ssl_ca_cert => SpecConfig.instance.local_ca_cert_path, :ssl_ca_cert_string => 'This is a string, not a certificate', :ssl_verify => true ) end # since the lower priority option is clearly invalid we verify priority by checking that it connects it 'discards the value of :ssl_ca_cert_string' do expect(socket).to be_alive end end context 'both as a file and as an object parameter' do let(:ssl_options) do super().merge( :ssl_ca_cert => SpecConfig.instance.local_ca_cert_path, :ssl_ca_cert_object => 'This is a string, not an array of certificates', :ssl_verify => true ) end it 'discards the value of :ssl_ca_cert_object' do expect(socket).to be_alive end end context 'both as a PEM-encoded string and as an object parameter' do let(:ssl_options) do cert = File.read(SpecConfig.instance.local_ca_cert_path)
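# The PEM-encoded CA string read above has higher priority than the bogus
# :ssl_ca_cert_object merged in below.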
super().merge( :ssl_ca_cert_string => cert, :ssl_ca_cert_object => 'This is a string, not an array of certificates', :ssl_verify => true ) end it 'discards the value of :ssl_ca_cert_object' do expect(socket).to be_alive end end end context 'when CA certificate file is not what server cert is signed with' do require_local_tls let(:server) do ClientRegistry.instance.global_client('authorized').cluster.next_primary end let(:connection) do Mongo::Server::Connection.new(server, ssl_options.merge(socket_timeout: 2)) end context 'as a file' do let(:ssl_options) do SpecConfig.instance.test_options.merge( ssl: true, ssl_cert: SpecConfig.instance.client_cert_path, ssl_key: SpecConfig.instance.client_key_path, ssl_ca_cert: SpecConfig.instance.ssl_certs_dir.join('python-ca.crt').to_s, ssl_verify: true, ) end it 'fails' do connection expect do connection.connect! end.to raise_error(Mongo::Error::SocketError, /SSLError/) end end end context 'when CA certificate file contains multiple certificates' do require_local_tls let(:server) do ClientRegistry.instance.global_client('authorized').cluster.next_primary end let(:connection) do Mongo::Server::Connection.new(server, ssl_options.merge(socket_timeout: 2)) end context 'as a file' do let(:ssl_options) do SpecConfig.instance.test_options.merge( ssl: true, ssl_cert: SpecConfig.instance.client_cert_path, ssl_key: SpecConfig.instance.client_key_path, ssl_ca_cert: SpecConfig.instance.multi_ca_path, ssl_verify: true, ) end it 'succeeds' do connection expect do connection.connect! end.not_to raise_error end end end context 'when a CA certificate is not provided' do require_local_tls let(:ssl_options) do super().merge( :ssl_verify => true ) end local_env do { 'SSL_CERT_FILE' => SpecConfig.instance.local_ca_cert_path } end it 'uses the default cert store' do expect(socket).to be_alive end end context 'when the client certificate uses an intermediate certificate' do require_local_tls let(:server) do ClientRegistry.instance.global_client('authorized').cluster.next_primary end let(:connection) do Mongo::Server::Connection.new(server, ssl_options.merge(socket_timeout: 2)) end context 'as a path to a file' do context 'standalone' do let(:ssl_options) do SpecConfig.instance.test_options.merge( ssl_cert: SpecConfig.instance.second_level_cert_path, ssl_key: SpecConfig.instance.second_level_key_path, ssl_ca_cert: SpecConfig.instance.local_ca_cert_path, ssl_verify: true, ) end it 'fails' do # This test provides a second level client certificate to the # server *without* providing the intermediate certificate. # If the server performs certificate verification, it will # reject the connection (seen from the driver as a SocketError) # and the test will succeed. If the server does not perform # certificate verification, it will accept the connection, # no SocketError will be raised and the test will fail. connection expect do connection.connect! end.to raise_error(Mongo::Error::SocketError) end end context 'bundled with intermediate cert' do # https://github.com/jruby/jruby-openssl/issues/181 require_mri let(:ssl_options) do SpecConfig.instance.test_options.merge( ssl: true, ssl_cert: SpecConfig.instance.second_level_cert_bundle_path, ssl_key: SpecConfig.instance.second_level_key_path, ssl_ca_cert: SpecConfig.instance.local_ca_cert_path, ssl_verify: true, ) end it 'succeeds' do connection expect do connection.connect! 
end.not_to raise_error end end end context 'as a string' do context 'standalone' do let(:ssl_options) do SpecConfig.instance.test_options.merge( ssl_cert: nil, ssl_cert_string: File.read(SpecConfig.instance.second_level_cert_path), ssl_key: nil, ssl_key_string: File.read(SpecConfig.instance.second_level_key_path), ssl_ca_cert: SpecConfig.instance.local_ca_cert_path, ssl_verify: true, ) end it 'fails' do connection expect do connection.connect! end.to raise_error(Mongo::Error::SocketError) end end context 'bundled with intermediate cert' do # https://github.com/jruby/jruby-openssl/issues/181 require_mri let(:ssl_options) do SpecConfig.instance.test_options.merge( ssl: true, ssl_cert: nil, ssl_cert_string: File.read(SpecConfig.instance.second_level_cert_bundle_path), ssl_key: nil, ssl_key_string: File.read(SpecConfig.instance.second_level_key_path), ssl_ca_cert: SpecConfig.instance.local_ca_cert_path, ssl_verify: true, ) end it 'succeeds' do connection expect do connection.connect! end.not_to raise_error end end end end context 'when client certificate and private key are bundled in a pem file' do require_local_tls let(:server) do ClientRegistry.instance.global_client('authorized').cluster.next_primary end let(:connection) do Mongo::Server::Connection.new(server, ssl_options.merge(socket_timeout: 2)) end let(:ssl_options) do SpecConfig.instance.ssl_options.merge( ssl: true, ssl_cert: SpecConfig.instance.client_pem_path, ssl_key: SpecConfig.instance.client_pem_path, ssl_ca_cert: SpecConfig.instance.local_ca_cert_path, ssl_verify: true, ) end it 'succeeds' do connection expect do connection.connect! end.not_to raise_error end end context 'when ssl_verify is not specified' do require_local_tls let(:ssl_options) do super().merge( :ssl_ca_cert => SpecConfig.instance.local_ca_cert_path ).tap { |options| options.delete(:ssl_verify) } end it 'verifies the server certificate' do expect(socket).to be_alive end end context 'when ssl_verify is true' do require_local_tls let(:ssl_options) do super().merge( :ssl_ca_cert => SpecConfig.instance.local_ca_cert_path, :ssl_verify => true ) end it 'verifies the server certificate' do expect(socket).to be_alive end end context 'when ssl_verify is false' do let(:ssl_options) do super().merge( :ssl_ca_cert => 'invalid', :ssl_verify => false ) end it 'does not verify the server certificate' do expect(socket).to be_alive end end context 'when OpenSSL allows disabling renegotiation' do before do unless OpenSSL::SSL.const_defined?(:OP_NO_RENEGOTIATION) skip 'OpenSSL::SSL::OP_NO_RENEGOTIATION is not defined' end end it 'disables TLS renegotiation' do expect(socket.context.options & OpenSSL::SSL::OP_NO_RENEGOTIATION).to eq(OpenSSL::SSL::OP_NO_RENEGOTIATION) end end end describe '#readbyte' do before do allow_message_expectations_on_nil allow(socket.socket).to receive(:read) do |length| socket_content[0, length] end end context 'with the socket providing "abc"' do let(:socket_content) { "abc" } it 'should return 97 (the byte for "a")' do expect(socket.readbyte).to eq(97) end end context 'with the socket providing "\x00" (NULL_BYTE)' do let(:socket_content) { "\x00" } it 'should return 0' do expect(socket.readbyte).to eq(0) end end context 'with the socket providing no data' do let(:socket_content) { "" } let(:remote_address) { socket.instance_variable_get(:@tcp_socket).remote_address } let(:address_str) { "#{remote_address.ip_address}:#{remote_address.ip_port} (#{default_address}, TLS)" } it 'should raise EOFError' do expect do socket.readbyte end.to
raise_error(Mongo::Error::SocketError).with_message("EOFError: EOFError (for #{address_str})") end end end end mongo-ruby-driver-2.21.3/spec/mongo/socket/tcp_spec.rb000066400000000000000000000007131505113246500226570ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe Mongo::Socket::TCP do let(:socket) do described_class.new('127.0.0.1', SpecConfig.instance.any_port, 5, Socket::AF_INET) end describe '#human_address' do it 'returns the address and tls indicator' do addr = socket.send(:socket).remote_address expect(socket.send(:human_address)).to eq("#{addr.ip_address}:#{addr.ip_port} (no TLS)") end end end mongo-ruby-driver-2.21.3/spec/mongo/socket/unix_spec.rb000066400000000000000000000016631505113246500230610ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe Mongo::Socket::Unix do require_unix_socket let(:path) { "/tmp/mongodb-#{SpecConfig.instance.any_port}.sock" } let(:socket) do described_class.new(path, 5) end describe '#human_address' do it 'returns the path' do expect(socket.send(:human_address)).to eq(path) end end describe '#connect!' do after do socket.close end it 'connects to the server' do expect(socket).to be_alive end end describe '#alive?' do context 'when the socket is connected' do after do socket.close end it 'returns true' do expect(socket).to be_alive end end context 'when the socket is not connected' do before do socket.close end it 'raises error' do expect { socket.alive? }.to raise_error(IOError) end end end end mongo-ruby-driver-2.21.3/spec/mongo/socket_spec.rb000066400000000000000000000112661505113246500220760ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe Mongo::Socket do let(:socket) do described_class.new(0, {}) end describe '#human_address' do it 'raises NotImplementedError' do expect do socket.send(:human_address) end.to raise_error(NotImplementedError) end end describe '#map_exceptions' do before do expect(socket).to receive(:human_address).and_return('fake-address') end it 'maps timeout exception' do expect do socket.send(:map_exceptions) do raise Errno::ETIMEDOUT end end.to raise_error(Mongo::Error::SocketTimeoutError) end it 'maps SystemCallError and preserves message' do expect do socket.send(:map_exceptions) do raise SystemCallError.new('Test error', Errno::ENFILE::Errno) end end.to raise_error(Mongo::Error::SocketError, 'Errno::ENFILE: Too many open files in system - Test error (for fake-address)') end it 'maps IOError and preserves message' do expect do socket.send(:map_exceptions) do raise IOError.new('Test error') end end.to raise_error(Mongo::Error::SocketError, 'IOError: Test error (for fake-address)') end it 'maps SSLError and preserves message' do expect do socket.send(:map_exceptions) do raise OpenSSL::SSL::SSLError.new('Test error') end end.to raise_error(Mongo::Error::SocketError, 'OpenSSL::SSL::SSLError: Test error (for fake-address)') end end describe '#read' do let(:target_host) do host = ClusterConfig.instance.primary_address_host # Take ipv4 address Socket.getaddrinfo(host, 0).detect { |ai| ai.first == 'AF_INET' }[3] end let(:socket) do Mongo::Socket::TCP.new(target_host, ClusterConfig.instance.primary_address_port, 1, Socket::PF_INET) end let(:raw_socket) { socket.instance_variable_get('@socket') } context 'timeout' do clean_slate_for_all shared_examples_for 'times out' do it 'times out' do expect(socket).to receive(:timeout).at_least(:once).and_return(0.2) # When 
we raise WaitWritable, the socket object is ready for # writing which makes the read method invoke read_nonblock many times expect(raw_socket).to receive(:read_nonblock).at_least(:once) do |len, buf| sleep 0.01 raise exception_class end expect do socket.read(10) end.to raise_error(Mongo::Error::SocketTimeoutError, /Took more than .* seconds to receive data.*\(for /) end end context 'with WaitReadable' do let(:exception_class) do Class.new(Exception) do include IO::WaitReadable end end it_behaves_like 'times out' end context 'with WaitWritable' do let(:exception_class) do Class.new(Exception) do include IO::WaitWritable end end it_behaves_like 'times out' end end end describe '#write' do let(:target_host) do host = ClusterConfig.instance.primary_address_host # Take ipv4 address Socket.getaddrinfo(host, 0).detect { |ai| ai.first == 'AF_INET' }[3] end let(:socket) do Mongo::Socket::TCP.new(target_host, ClusterConfig.instance.primary_address_port, 1, Socket::PF_INET) end let(:raw_socket) { socket.instance_variable_get('@socket') } context 'with timeout' do let(:timeout) { 5_000 } context 'data is less than WRITE_CHUNK_SIZE' do let(:data) { "a" * 1024 } context 'when a partial write occurs' do before do expect(raw_socket) .to receive(:write_nonblock) .twice .and_return(data.length / 2) end it 'eventually writes everything' do expect(socket.write(data, timeout: timeout)). to be === data.length end end end context 'data is greater than WRITE_CHUNK_SIZE' do let(:data) { "a" * (2 * Mongo::Socket::WRITE_CHUNK_SIZE + 256) } context 'when a partial write occurs' do before do expect(raw_socket) .to receive(:write_nonblock) .exactly(4).times .and_return(Mongo::Socket::WRITE_CHUNK_SIZE, 128, Mongo::Socket::WRITE_CHUNK_SIZE - 128, 256) end it 'eventually writes everything' do expect(socket.write(data, timeout: timeout)). to be === data.length end end end end end end mongo-ruby-driver-2.21.3/spec/mongo/srv/000077500000000000000000000000001505113246500200535ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/mongo/srv/monitor_spec.rb000066400000000000000000000162671505113246500231150ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'lite_spec_helper' describe Mongo::Srv::Monitor do describe '#scan!' do let(:hostname) do 'test1.test.build.10gen.cc' end let(:hosts) do [ 'localhost.test.build.10gen.cc:27017', 'localhost.test.build.10gen.cc:27018', ] end let(:result) do double('result').tap do |result| allow(result).to receive(:hostname).and_return(hostname) allow(result).to receive(:address_strs).and_return(hosts) allow(result).to receive(:empty?).and_return(false) allow(result).to receive(:min_ttl).and_return(nil) end end let(:uri_resolver) do double('uri resolver').tap do |resolver| expect(resolver).to receive(:get_records).and_return(result) end end let(:srv_uri) do Mongo::URI.get("mongodb+srv://this.is.not.used") end let(:cluster) do Mongo::Cluster.new(hosts, Mongo::Monitoring.new, monitoring_io: false) end let(:monitor) do described_class.new(cluster, srv_uri: srv_uri) end before do # monitor instantiation triggers cluster instantiation which # performs real SRV lookups for the hostname. # The next lookup (the one performed when cluster is already set up) # is using our doubles. 
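# The ordered :new expectations below return uri_resolver for that initial
# lookup and resolver (defined in each context) for the scan under test.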
RSpec::Mocks.with_temporary_scope do allow(uri_resolver).to receive(:get_txt_options_string) expect(Mongo::Srv::Resolver).to receive(:new).ordered.and_return(uri_resolver) allow(resolver).to receive(:get_txt_options_string) expect(Mongo::Srv::Resolver).to receive(:new).ordered.and_return(resolver) monitor.send(:scan!) end end context 'when a new DNS record is added' do let(:new_hosts) do hosts + ['localhost.test.build.10gen.cc:27019'] end let(:new_result) do double('result').tap do |result| allow(result).to receive(:hostname).and_return(hostname) allow(result).to receive(:address_strs).and_return(new_hosts) allow(result).to receive(:empty?).and_return(false) allow(result).to receive(:min_ttl).and_return(nil) end end let(:resolver) do double('monitor resolver').tap do |resolver| expect(resolver).to receive(:get_records).and_return(new_result) end end it 'adds the new host to the cluster' do expect(cluster.servers_list.map(&:address).map(&:to_s).sort).to eq(new_hosts.sort) end end context 'when a DNS record is removed' do let(:new_hosts) do hosts - ['localhost.test.build.10gen.cc:27018'] end let(:new_result) do double('result').tap do |result| allow(result).to receive(:hostname).and_return(hostname) allow(result).to receive(:address_strs).and_return(new_hosts) allow(result).to receive(:empty?).and_return(false) allow(result).to receive(:min_ttl).and_return(nil) end end let(:resolver) do double('resolver').tap do |resolver| allow(resolver).to receive(:get_records).and_return(new_result) end end it 'removes the host from the cluster' do expect(cluster.addresses.map(&:to_s).sort).to eq(new_hosts.sort) end end context 'when a single DNS record is replaced' do let(:new_hosts) do hosts - ['localhost.test.build.10gen.cc:27018'] + ['localhost.test.build.10gen.cc:27019'] end let(:new_result) do double('result').tap do |result| allow(result).to receive(:hostname).and_return(hostname) allow(result).to receive(:address_strs).and_return(new_hosts) allow(result).to receive(:empty?).and_return(false) allow(result).to receive(:min_ttl).and_return(nil) end end let(:resolver) do double('resolver').tap do |resolver| allow(resolver).to receive(:get_records).and_return(new_result) end end it 'replaces the host in the cluster' do expect(cluster.addresses.map(&:to_s).sort).to eq(new_hosts.sort) end end context 'when all DNS records are replaced with a single record' do let(:new_hosts) do ['test1.test.build.10gen.cc:27019'] end let(:new_result) do double('result').tap do |result| allow(result).to receive(:hostname).and_return(hostname) allow(result).to receive(:address_strs).and_return(new_hosts) allow(result).to receive(:empty?).and_return(false) allow(result).to receive(:min_ttl).and_return(nil) end end let(:resolver) do double('resolver').tap do |resolver| expect(resolver).to receive(:get_records).and_return(new_result) end end it 'replaces the hosts in the cluster' do expect(cluster.addresses.map(&:to_s).sort).to eq(new_hosts.sort) end end context 'when all DNS records are replaced with multiple records' do let(:new_hosts) do [ 'test1.test.build.10gen.cc:27019', 'test1.test.build.10gen.cc:27020', ] end let(:new_result) do double('result').tap do |result| allow(result).to receive(:hostname).and_return(hostname) allow(result).to receive(:address_strs).and_return(new_hosts) allow(result).to receive(:empty?).and_return(false) allow(result).to receive(:min_ttl).and_return(nil) end end let(:resolver) do double('resolver').tap do |resolver| allow(resolver).to receive(:get_records).and_return(new_result) end end it 'replaces the hosts in the cluster' do expect(cluster.addresses.map(&:to_s).sort).to eq(new_hosts.sort) end end context 'when the DNS lookup times out' do let(:resolver) do double('resolver').tap do |resolver| expect(resolver).to receive(:get_records).and_raise(Resolv::ResolvTimeout) end end it 'does not add or remove any hosts from the cluster' do expect(cluster.addresses.map(&:to_s).sort).to eq(hosts.sort) end end context 'when the DNS lookup is unable to resolve the hostname' do let(:resolver) do double('resolver').tap do |resolver| allow(resolver).to receive(:get_records).and_raise(Resolv::ResolvError) end end it 'does not add or remove any hosts from the cluster' do expect(cluster.addresses.map(&:to_s).sort).to eq(hosts.sort) end end context 'when no DNS records are returned' do let(:new_result) do double('result').tap do |result| allow(result).to receive(:hostname).and_return(hostname) allow(result).to receive(:address_strs).and_return([]) allow(result).to receive(:empty?).and_return(true) allow(result).to receive(:min_ttl).and_return(nil) end end let(:resolver) do double('resolver').tap do |resolver| allow(resolver).to receive(:get_records).and_return(new_result) end end it 'does not add or remove any hosts from the cluster' do expect(cluster.addresses.map(&:to_s).sort).to eq(hosts.sort) end end end end mongo-ruby-driver-2.21.3/spec/mongo/srv/result_spec.rb000066400000000000000000000025271505113246500227360ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'lite_spec_helper' describe Mongo::Srv::Result do let(:result) do described_class.new('bar.com') end describe '#add_record' do context 'when incoming hostname is in mixed case' do let(:record) do double('record').tap do |record| allow(record).to receive(:target).and_return('FOO.bar.COM') allow(record).to receive(:port).and_return(42) allow(record).to receive(:ttl).and_return(1) end end it 'stores hostname in lower case' do result.add_record(record) expect(result.address_strs).to eq(['foo.bar.com:42']) end end end describe '#normalize_hostname' do let(:actual) do result.send(:normalize_hostname, hostname) end context 'when hostname is in mixed case' do let(:hostname) { 'FOO.bar.COM' } it 'converts to lower case' do expect(actual).to eq('foo.bar.com') end end context 'when hostname has one trailing dot' do let(:hostname) { 'foo.' } it 'removes the trailing dot' do expect(actual).to eq('foo') end end context 'when hostname has multiple trailing dots' do let(:hostname) { 'foo..'
} it 'returns hostname unchanged' do expect(actual).to eq('foo..') end end end end mongo-ruby-driver-2.21.3/spec/mongo/timeout_spec.rb000066400000000000000000000022631505113246500222710ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'lite_spec_helper' describe Mongo::Timeout do describe '#timeout' do let(:default_error_message) { 'execution expired' } let(:custom_error_class) { Mongo::Error::SocketTimeoutError } let(:custom_error_message) { 'socket timed out' } context 'with time argument' do it 'raises Timeout::Error' do expect do Mongo::Timeout.timeout(0.1) do sleep 1 end end.to raise_error(::Timeout::Error, default_error_message) end end context 'with time and class arguments' do it 'raises the specified error class' do expect do Mongo::Timeout.timeout(0.1, custom_error_class) do sleep 1 end end.to raise_error(custom_error_class, default_error_message) end end context 'with time, class, and message arguments' do it 'raises the specified error class with message' do expect do Mongo::Timeout.timeout(0.1, custom_error_class, custom_error_message) do sleep 1 end end.to raise_error(custom_error_class, custom_error_message) end end end end mongo-ruby-driver-2.21.3/spec/mongo/tls_context_hooks_spec.rb000066400000000000000000000020471505113246500243540ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe Mongo do before do Mongo.tls_context_hooks.clear end describe '#tls_context_hooks' do it 'returns an array' do expect(Mongo.tls_context_hooks).to eq([]) end end describe '#tls_context_hooks=' do context 'when argument is not an array' do it 'raises an ArgumentError' do expect do Mongo.tls_context_hooks = "Hello" end.to raise_error(ArgumentError, /TLS context hooks must be an array of Procs/) end end context 'when argument is an array not containing procs' do it 'raises an ArgumentError' do expect do Mongo.tls_context_hooks = [1, 2, 3] end.to raise_error(ArgumentError, /TLS context hooks must be an array of Procs/) end end it 'saves the provided hooks' do Mongo.tls_context_hooks = [ Proc.new { |x| x ** 2 } ] expect(Mongo.tls_context_hooks.length).to eq(1) expect(Mongo.tls_context_hooks.first).to be_a(Proc) end end end mongo-ruby-driver-2.21.3/spec/mongo/uri/000077500000000000000000000000001505113246500200405ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/mongo/uri/options_mapper_spec.rb000066400000000000000000001007231505113246500244410ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe Mongo::URI::OptionsMapper do let(:options_mapper) { described_class.new } let(:converted) { options_mapper.send(method, name, value) } let(:reverted) { options_mapper.send(method, value) } let(:name) { "name" } describe "#convert_bool" do let(:method) { :convert_bool } context "when providing false" do let(:value) { false } it "returns false" do expect(converted).to be false end end context "when providing true" do let(:value) { true } it "returns true" do expect(converted).to be true end end context "when providing a true string" do let(:value) { "true" } it "returns true" do expect(converted).to be true end end context "when providing a capital true string" do let(:value) { "TRUE" } it "returns true" do expect(converted).to be true end end context "when providing a false string" do let(:value) { "false" } it "returns false" do expect(converted).to be false end end context "when providing a capital false string" do let(:value) { "FALSE" } it "returns false" do
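# Boolean parsing is case-insensitive, so "FALSE" converts the same way as "false".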
expect(converted).to be false end end context "when providing a different string" do let(:value) { "hello" } it "returns nil" do expect(converted).to be nil end end context "when providing a different type" do let(:value) { :hello } it "returns nil" do expect(converted).to be nil end end end describe "#revert_bool" do let(:method) { :revert_bool } context "when passing a boolean" do let(:value) { true } it "returns the boolean" do expect(reverted).to eq(value) end end context "when passing nil" do let(:value) { nil } it "returns nil" do expect(reverted).to be nil end end end describe "#stringify_bool" do let(:method) { :stringify_bool } context "when passing a boolean" do let(:value) { true } it "returns a string" do expect(reverted).to eq("true") end end context "when passing false" do let(:value) { false } it "returns a string" do expect(reverted).to eq("false") end end context "when passing nil" do let(:value) { nil } it "returns nil" do expect(reverted).to be nil end end end describe "#convert_repeated_bool" do let(:method) { :convert_repeated_bool } let(:value) { true } it "wraps the result in an array" do expect(converted).to eq([ true ]) end end describe "#revert_repeated_bool" do let(:method) { :revert_repeated_bool } context "when passing a boolean list" do let(:value) { [ true ] } it "returns the passed value" do expect(reverted).to eq(value) end end context "when passing nil" do let(:value) { nil } it "returns nil" do expect(reverted).to be nil end end end describe "#stringify_repeated_bool" do let(:method) { :stringify_repeated_bool } context "when passing a boolean list" do let(:value) { [ true ] } it "returns a string" do expect(reverted).to eq("true") end end context "when passing a multi boolean list" do let(:value) { [ true, false ] } it "returns a string" do expect(reverted).to eq("true,false") end end context "when passing nil" do let(:value) { nil } it "returns nil" do expect(reverted).to be nil end end end describe "#convert_inverse_bool" do let(:method) { :convert_inverse_bool } context "when providing false" do let(:value) { false } it "returns true" do expect(converted).to be true end end context "when providing true" do let(:value) { true } it "returns false" do expect(converted).to be false end end context "when providing a true string" do let(:value) { "true" } it "returns false" do expect(converted).to be false end end context "when providing a capital true string" do let(:value) { "TRUE" } it "returns false" do expect(converted).to be false end end context "when providing a false string" do let(:value) { "false" } it "returns true" do expect(converted).to be true end end context "when providing a capital false string" do let(:value) { "FALSE" } it "returns true" do expect(converted).to be true end end context "when providing a different string" do let(:value) { "hello" } it "returns nil" do expect(converted).to be nil end end context "when providing a different type" do let(:value) { :hello } it "returns nil" do expect(converted).to be nil end end end describe "#revert_inverse_bool" do let(:method) { :revert_inverse_bool } context "when passing true" do let(:value) { true } it "returns false" do expect(reverted).to be false end end context "when passing false" do let(:value) { false } it "returns true" do expect(reverted).to be true end end context "when passing nil" do let(:value) { nil } it "returns nil" do expect(reverted).to be nil end end end describe "#stringify_inverse_bool" do let(:method) { :stringify_inverse_bool } context "when passing true" do let(:value) {
true } it "returns false string" do expect(reverted).to eq("false") end end context "when passing false" do let(:value) { false } it "returns true string" do expect(reverted).to eq("true") end end context "when passing nil" do let(:value) { nil } it "returns nil" do expect(reverted).to be nil end end end describe "#convert_integer" do let(:method) { :convert_integer } context "when passing an integer" do let(:value) { 1 } it "returns as an integer" do expect(converted).to eq(1) end end context "when passing an integer string" do let(:value) { "42" } it "returns as an integer" do expect(converted).to eq(42) end end context "when passing an invalid string" do let(:value) { "hello" } it "returns nil" do expect(converted).to be nil end end end describe "#revert_integer" do let(:method) { :revert_integer } context "when passing an integer" do let(:value) { 1 } it "returns the passed value" do expect(reverted).to eq(value) end end context "when passing nil" do let(:value) { nil } it "returns nil" do expect(reverted).to be nil end end end describe "#stringify_integer" do let(:method) { :stringify_integer } context "when passing an integer" do let(:value) { 1 } it "returns the passed value as a string" do expect(reverted).to eq("1") end end context "when passing nil" do let(:value) { nil } it "returns nil" do expect(reverted).to be nil end end end describe "#convert_ms" do let(:method) { :convert_ms } context "when passing an integer" do let(:value) { 1000 } it "returns a float divided by 1000" do expect(converted).to eq(1.0) end end context "when passing a negative integer" do let(:value) { -1000 } it "returns a float divided by 1000" do expect(converted).to be nil end end context "when passing an integer string" do let(:value) { "1000" } it "returns a float divided by 1000" do expect(converted).to eq(1.0) end end context "when passing a negative integer string" do let(:value) { "-1000" } it "returns a float divided by 1000" do expect(converted).to be nil end end context "when passing a float string" do let(:value) { "1000.5" } it "returns a float divided by 1000" do expect(converted).to eq(1.0005) end end context "when passing a negative float string" do let(:value) { "-1000.5" } it "returns a float divided by 1000" do expect(converted).to be nil end end context "when passing a float" do let(:value) { 1000.5 } it "returns a float divided by 1000" do expect(converted).to eq(1.0005) end end context "when passing a negative float" do let(:value) { -1000.5 } it "returns a float divided by 1000" do expect(converted).to be nil end end end describe "#revert_ms" do let(:method) { :revert_ms } context "when passing a float" do let(:value) { 1.000005 } it "returns an integer" do expect(reverted).to eq(1000) end end end describe "#stringify_ms" do let(:method) { :stringify_ms } context "when passing a float" do let(:value) { 1.000005 } it "returns a string" do expect(reverted).to eq("1000") end end end describe "#convert_symbol" do let(:method) { :convert_symbol } context "when passing a string" do let(:value) { "hello" } it "returns a symbol" do expect(converted).to eq(:hello) end end context "when passing a symbol" do let(:value) { :hello } it "returns a symbol" do expect(converted).to eq(:hello) end end end describe "#revert_symbol" do let(:method) { :revert_symbol } context "when passing a symbol" do let(:value) { :hello } it "returns it as a string" do expect(reverted).to eq("hello") end end end describe "#stringify_symbol" do let(:method) { :stringify_symbol } context "when passing a symbol" do 
let(:value) { :hello } it "returns it as a string" do expect(reverted).to eq("hello") end end end describe "#convert_array" do let(:method) { :convert_array } context "when passing a string with no commas" do let(:value) { "hello" } it "returns one element" do expect(converted).to eq([ "hello" ]) end end context "when passing a string with commas" do let(:value) { "1,2,3" } it "returns multiple elements" do expect(converted).to eq([ '1', '2', '3' ]) end end end describe "#revert_array" do let(:method) { :revert_array } context "when passing one value" do let(:value) { [ "hello" ] } it "returns the value" do expect(reverted).to eq(value) end end context "when passing multiple value" do let(:value) { [ "1", "2", "3" ] } it "returns the value" do expect(reverted).to eq(value) end end end describe "#stringify_array" do let(:method) { :stringify_array } context "when passing one value" do let(:value) { [ "hello" ] } it "returns a string" do expect(reverted).to eq("hello") end end context "when passing multiple value" do let(:value) { [ "1", "2", "3" ] } it "returns the joined string" do expect(reverted).to eq("1,2,3") end end end describe "#convert_auth_mech" do let(:method) { :convert_auth_mech } context "when passing GSSAPI" do let(:value) { "GSSAPI" } it "returns it as a symbol" do expect(converted).to eq(:gssapi) end end context "when passing MONGODB-AWS" do let(:value) { "MONGODB-AWS" } it "returns it as a symbol" do expect(converted).to eq(:aws) end end context "when passing MONGODB-CR" do let(:value) { "MONGODB-CR" } it "returns it as a symbol" do expect(converted).to eq(:mongodb_cr) end end context "when passing MONGODB-X509" do let(:value) { "MONGODB-X509" } it "returns it as a symbol" do expect(converted).to eq(:mongodb_x509) end end context "when passing PLAIN" do let(:value) { "PLAIN" } it "returns it as a symbol" do expect(converted).to eq(:plain) end end context "when passing SCRAM-SHA-1" do let(:value) { "SCRAM-SHA-1" } it "returns it as a symbol" do expect(converted).to eq(:scram) end end context "when passing SCRAM-SHA-256" do let(:value) { "SCRAM-SHA-256" } it "returns it as a symbol" do expect(converted).to eq(:scram256) end end context "when passing a bogus value" do let(:value) { "hello" } it "returns the value" do expect(converted).to eq("hello") end it "warns" do expect(options_mapper).to receive(:log_warn).once converted end end end describe "#revert_auth_mech" do let(:method) { :revert_auth_mech } context "when passing GSSAPI" do let(:value) { :gssapi } it "returns it as a string" do expect(reverted).to eq("GSSAPI") end end context "when passing MONGODB-AWS" do let(:value) { :aws } it "returns it as a string" do expect(reverted).to eq("MONGODB-AWS") end end context "when passing MONGODB-CR" do let(:value) { :mongodb_cr } it "returns it as a string" do expect(reverted).to eq("MONGODB-CR") end end context "when passing MONGODB-X509" do let(:value) { :mongodb_x509 } it "returns it as a string" do expect(reverted).to eq("MONGODB-X509") end end context "when passing PLAIN" do let(:value) { :plain } it "returns it as a string" do expect(reverted).to eq("PLAIN") end end context "when passing SCRAM-SHA-1" do let(:value) { :scram } it "returns it as a string" do expect(reverted).to eq("SCRAM-SHA-1") end end context "when passing SCRAM-SHA-256" do let(:value) { :scram256 } it "returns it as a string" do expect(reverted).to eq("SCRAM-SHA-256") end end context "when passing a bogus value" do let(:value) { "hello" } it "raises an error" do expect do reverted end.to 
raise_error(ArgumentError, "Unknown auth mechanism hello") end end context "when passing nil" do let(:value) { nil } it "raises an error" do expect do reverted end.to raise_error(ArgumentError, "Unknown auth mechanism #{nil}") end end end describe "#stringify_auth_mech" do let(:method) { :stringify_auth_mech } context "when passing GSSAPI" do let(:value) { :gssapi } it "returns it as a string" do expect(reverted).to eq("GSSAPI") end end context "when passing MONGODB-AWS" do let(:value) { :aws } it "returns it as a string" do expect(reverted).to eq("MONGODB-AWS") end end context "when passing MONGODB-CR" do let(:value) { :mongodb_cr } it "returns it as a string" do expect(reverted).to eq("MONGODB-CR") end end context "when passing MONGODB-X509" do let(:value) { :mongodb_x509 } it "returns it as a string" do expect(reverted).to eq("MONGODB-X509") end end context "when passing PLAIN" do let(:value) { :plain } it "returns it as a string" do expect(reverted).to eq("PLAIN") end end context "when passing SCRAM-SHA-1" do let(:value) { :scram } it "returns it as a string" do expect(reverted).to eq("SCRAM-SHA-1") end end context "when passing SCRAM-SHA-256" do let(:value) { :scram256 } it "returns it as a string" do expect(reverted).to eq("SCRAM-SHA-256") end end context "when passing a bogus value" do let(:value) { "hello" } it "returns nil" do expect(reverted).to be nil end end context "when passing nil" do let(:value) { nil } it "returns nil" do expect(reverted).to be nil end end end describe "#convert_auth_mech_props" do let(:method) { :convert_auth_mech_props } context "when including one item" do let(:value) { "key:value" } it "returns a one element hash" do expect(converted).to eq(key: "value") end end context "when including multiple items" do let(:value) { "k1:v1,k2:v2" } it "returns a multiple element hash" do expect(converted).to eq(k1: "v1", k2: "v2") end end context "when including items without a colon" do let(:value) { "k1:v1,k2,v2" } it "drops those items" do expect(converted).to eq(k1: "v1") end it "warns" do expect(options_mapper).to receive(:log_warn).twice converted end end context "when giving the empty string" do let(:value) { "" } it "returns nil" do expect(converted).to be nil end end context "when giving no valid options" do let(:value) { "k1,k2" } it "returns nil" do expect(converted).to be nil end end context "when passing CANONICALIZE_HOST_NAME" do context "when passing true" do let(:value) { "CANONICALIZE_HOST_NAME:true" } it "returns true as a boolean" do expect(converted).to eq(CANONICALIZE_HOST_NAME: true) end end context "when passing uppercase true" do let(:value) { "CANONICALIZE_HOST_NAME:TRUE" } it "returns true as a boolean" do expect(converted).to eq(CANONICALIZE_HOST_NAME: true) end end context "when passing false" do let(:value) { "CANONICALIZE_HOST_NAME:false" } it "returns false as a boolean" do expect(converted).to eq(CANONICALIZE_HOST_NAME: false) end end context "when passing bogus" do let(:value) { "CANONICALIZE_HOST_NAME:bogus" } it "returns false as a boolean" do expect(converted).to eq(CANONICALIZE_HOST_NAME: false) end end end end describe "#revert_auth_mech_props" do let(:method) { :revert_auth_mech_props } context "when including one item" do let(:value) { { key: "value" } } it "returns a one element hash" do expect(reverted).to eq(value) end end context "when including multiple items" do let(:value) { { k1: "v1", k2: "v2" } } it "returns a multiple element hash" do expect(reverted).to eq(value) end end context "when passing nil" do let(:value) { 
nil } it "returns nil" do expect(reverted).to be nil end end end describe "#stringify_auth_mech_props" do let(:method) { :stringify_auth_mech_props } context "when including one item" do let(:value) { { key: "value" } } it "returns a string" do expect(reverted).to eq("key:value") end end context "when including multiple items" do let(:value) { { k1: "v1", k2: "v2" } } it "returns a string" do expect(reverted).to eq("k1:v1,k2:v2") end end context "when passing nil" do let(:value) { nil } it "returns nil" do expect(reverted).to be nil end end end describe "#convert_max_staleness" do let(:method) { :convert_max_staleness } context "when passing a string" do context "when passing a positive integer" do let(:value) { "100" } it "returns an integer" do expect(converted).to eq(100) end end context "when passing a negative integer" do let(:value) { "-100" } it "returns an integer" do expect(converted).to be nil end it "warns" do expect(options_mapper).to receive(:log_warn).once converted end end context "when passing a bogus value" do let(:value) { "hello" } it "returns an integer" do expect(converted).to be nil end end end context "when passing an integer" do context "when passing a positive integer" do let(:value) { 100 } it "returns an integer" do expect(converted).to eq(100) end end context "when passing a negative integer" do let(:value) { -100 } it "returns an integer" do expect(converted).to be nil end it "warns" do expect(options_mapper).to receive(:log_warn).once converted end end context "when passing negative 1" do let(:value) { -1 } it "returns an integer" do expect(converted).to be nil end it "doesn't warn" do expect(options_mapper).to receive(:log_warn).never converted end end context "when passing 0" do let(:value) { 0 } it "returns 0" do expect(converted).to eq(0) end it "doesn't warn" do expect(options_mapper).to receive(:log_warn).never converted end end context "when passing a number less than 90" do let(:value) { 50 } it "returns nil" do expect(converted).to be nil end end end context "when passing a bogus value" do let(:value) { :hello } it "returns nil" do expect(converted).to be nil end end end describe "#revert_max_staleness" do let(:method) { :revert_max_staleness } context "when passing an integer" do let(:value) { 1 } it "returns the integer" do expect(reverted).to eq(1) end end context "when passing nil" do let(:value) { nil } it "returns nil" do expect(reverted).to be nil end end end describe "#stringify_max_staleness" do let(:method) { :stringify_max_staleness } context "when passing an integer" do let(:value) { 1 } it "returns the integer string" do expect(reverted).to eq('1') end end context "when passing nil" do let(:value) { nil } it "returns nil" do expect(reverted).to be nil end end end describe "#convert_read_mode" do let(:method) { :convert_read_mode } context "when passing primary" do let(:value) { "primary" } it "returns it as a symbol" do expect(converted).to eq(:primary) end end context "when passing primarypreferred" do let(:value) { "primarypreferred" } it "returns it as a symbol" do expect(converted).to eq(:primary_preferred) end end context "when passing secondary" do let(:value) { "secondary" } it "returns it as a symbol" do expect(converted).to eq(:secondary) end end context "when passing secondarypreferred" do let(:value) { "secondarypreferred" } it "returns it as a symbol" do expect(converted).to eq(:secondary_preferred) end end context "when passing nearest" do let(:value) { "nearest" } it "returns it as a symbol" do expect(converted).to 
eq(:nearest) end end context "when passing capitalized primary" do let(:value) { "Primary" } it "returns it as a symbol" do expect(converted).to eq(:primary) end end context "when passing a bogus string" do let(:value) { "hello" } it "returns the string" do expect(converted).to eq(value) end end end describe "#revert_read_mode" do let(:method) { :revert_read_mode } context "when passing primary" do let(:value) { :primary } it "returns it as a string" do expect(reverted).to eq("primary") end end context "when passing primarypreferred" do let(:value) { :primary_preferred } it "returns it as a string" do expect(reverted).to eq("primaryPreferred") end end context "when passing secondary" do let(:value) { :secondary } it "returns it as a string" do expect(reverted).to eq("secondary") end end context "when passing secondarypreferred" do let(:value) { :secondary_preferred } it "returns it as a string" do expect(reverted).to eq("secondaryPreferred") end end context "when passing nearest" do let(:value) { :nearest } it "returns it as a string" do expect(reverted).to eq("nearest") end end context "when passing a bogus string" do let(:value) { "hello" } it "returns the string" do expect(reverted).to eq("hello") end end end describe "#stringify_read_mode" do let(:method) { :stringify_read_mode } context "when passing primary" do let(:value) { :primary } it "returns it as a string" do expect(reverted).to eq("primary") end end context "when passing primarypreferred" do let(:value) { :primary_preferred } it "returns it as a string" do expect(reverted).to eq("primaryPreferred") end end context "when passing secondary" do let(:value) { :secondary } it "returns it as a string" do expect(reverted).to eq("secondary") end end context "when passing secondarypreferred" do let(:value) { :secondary_preferred } it "returns it as a string" do expect(reverted).to eq("secondaryPreferred") end end context "when passing nearest" do let(:value) { :nearest } it "returns it as a string" do expect(reverted).to eq("nearest") end end context "when passing a bogus string" do let(:value) { "hello" } it "returns the string" do expect(reverted).to eq("hello") end end end describe "#convert_read_tags" do let(:method) { :convert_read_tags } context "when including one item" do let(:value) { "key:value" } it "returns a one element hash" do expect(converted).to eq([{ key: "value" }]) end end context "when including multiple items" do let(:value) { "k1:v1,k2:v2" } it "returns a multiple element hash" do expect(converted).to eq([{ k1: "v1", k2: "v2" }]) end end context "when including items without a colon" do let(:value) { "k1:v1,k2,v2" } it "drops those items" do expect(converted).to eq([{ k1: "v1" }]) end it "warns" do expect(options_mapper).to receive(:log_warn).twice converted end end context "when giving the empty string" do let(:value) { "" } it "returns nil" do expect(converted).to be nil end end context "when giving no valid options" do let(:value) { "k1,k2" } it "returns nil" do expect(converted).to be nil end end end describe "#revert_read_tags" do let(:method) { :revert_read_tags } context "when including one item" do let(:value) { [ { key: "value" } ] } it "returns the passed value" do expect(reverted).to eq(value) end end context "when including multiple items" do let(:value) { [ { k1: "v1", k2: "v2" } ] } it "returns the passed value" do expect(reverted).to eq(value) end end context "when including multiple hashes" do let(:value) { [ { k1: "v1", k2: "v2" }, { k3: "v3", k4: "v4" } ] } it "returns the passed value" do 
expect(reverted).to eq(value) end end context "when passing nil" do let(:value) { nil } it "returns nil" do expect(reverted).to be nil end end end describe "#stringify_read_tags" do let(:method) { :stringify_read_tags } context "when including one item" do let(:value) { [ { key: "value" } ] } it "returns a one element string list" do expect(reverted).to eq([ "key:value" ]) end end context "when including multiple items" do let(:value) { [ { k1: "v1", k2: "v2" } ] } it "returns a one element string list" do expect(reverted).to eq([ "k1:v1,k2:v2" ]) end end context "when including multiple hashes" do let(:value) { [ { k1: "v1", k2: "v2" }, { k3: "v3", k4: "v4" } ] } it "returns a multiple element string list" do expect(reverted).to eq([ "k1:v1,k2:v2", "k3:v3,k4:v4" ]) end end context "when passing nil" do let(:value) { nil } it "returns nil" do expect(reverted).to be nil end end end describe "#convert_w" do let(:method) { :convert_w } context "when passing majority" do let(:value) { 'majority' } it "returns it as a symbol" do expect(converted).to eq(:majority) end end context "when passing an integer string" do let(:value) { '42' } it "returns it as an integer" do expect(converted).to eq(42) end end context "when passing a bogus string" do let(:value) { 'hello' } it "returns the string" do expect(converted).to eq(value) end end end describe "#revert_w" do let(:method) { :revert_w } context "when passing an integer" do let(:value) { 1 } it "returns an integer" do expect(reverted).to eq(1) end end context "when passing a symbol" do let(:value) { :majority } it "returns a string" do expect(reverted).to eq("majority") end end context "when passing a string" do let(:value) { "hello" } it "returns a string" do expect(reverted).to eq(value) end end end describe "#stringify_w" do let(:method) { :stringify_w } context "when passing an integer" do let(:value) { 1 } it "returns a string" do expect(reverted).to eq('1') end end context "when passing a symbol" do let(:value) { :majority } it "returns a string" do expect(reverted).to eq("majority") end end context "when passing a string" do let(:value) { "hello" } it "returns a string" do expect(reverted).to eq(value) end end end describe "#convert_zlib_compression_level" do let(:method) { :convert_zlib_compression_level } context "when passing an integer string" do let(:value) { "1" } it "returns it as an integer" do expect(converted).to eq(1) end end context "when passing a negative integer string" do let(:value) { "-1" } it "returns it as an integer" do expect(converted).to eq(-1) end end context "when passing a bogus string" do let(:value) { "hello" } it "returns nil" do expect(converted).to be nil end end context "when passing an integer" do let(:value) { 1 } it "returns the integer" do expect(converted).to eq(value) end end context "when passing a negative integer" do let(:value) { -1 } it "returns the integer" do expect(converted).to eq(value) end end context "when passing an out-of-range integer" do let(:value) { 10 } it "returns nil" do expect(converted).to be nil end it "warns" do expect(options_mapper).to receive(:log_warn).once converted end end context "when passing an out-of-range negative integer" do let(:value) { -2 } it "returns nil" do expect(converted).to be nil end it "warns" do expect(options_mapper).to receive(:log_warn).once converted end end end describe "#revert_zlib_compression_level" do let(:method) { :revert_zlib_compression_level } context "when passing an integer" do let(:value) { 1 } it "returns an integer" do
expect(reverted).to eq(1) end end context "when passing nil" do let(:value) { nil } it "returns nil" do expect(reverted).to be nil end end end describe "#stringify_zlib_compression_level" do let(:method) { :stringify_zlib_compression_level } context "when passing an integer" do let(:value) { 1 } it "returns a string" do expect(reverted).to eq('1') end end context "when passing nil" do let(:value) { nil } it "returns nil" do expect(reverted).to be nil end end end end

mongo-ruby-driver-2.21.3/spec/mongo/uri/srv_protocol_spec.rb

# frozen_string_literal: true # rubocop:todo all require 'lite_spec_helper' require 'support/recording_logger' describe Mongo::URI::SRVProtocol do require_external_connectivity clean_slate_for_all_if_possible retry_test let(:scheme) { 'mongodb+srv://' } let(:uri) { described_class.new(string) } let(:client) do new_local_client_nmio(string) end shared_examples "roundtrips string" do it "returns the correct string for the uri" do expect(uri.to_s).to eq(URI::DEFAULT_PARSER.unescape(string)) end end describe 'logging' do let(:logger) { RecordingLogger.new } let(:uri) { described_class.new(string, logger: logger) } let(:host) { 'test5.test.build.10gen.cc' } let(:string) { "#{scheme}#{host}" } it 'logs when resolving the address' do expect { uri }.not_to raise_error expect(logger.contents).to include("attempting to resolve #{host}") end end describe 'invalid uris' do context 'when there is more than one hostname' do let(:string) { "#{scheme}#{hosts}" } let(:hosts) { 'test5.test.build.10gen.cc,test6.test.build.10gen.cc' } it 'raises an error' do expect { uri }.to raise_error(Mongo::Error::InvalidURI) end end context 'when the hostname has a port' do let(:string) { "#{scheme}#{hosts}" } let(:hosts) { 'test5.test.build.10gen.cc:8123' } it 'raises an error' do expect { uri }.to raise_error(Mongo::Error::InvalidURI) end end context 'when the host in URI does not have {hostname}, {domainname} and {tld}' do let(:string) { "#{scheme}#{hosts}" } let(:hosts) { '10gen.cc/' } it 'raises an error' do expect { uri }.to raise_error(Mongo::Error::InvalidURI) end end context 'when the {tld} is empty' do let(:string) { "#{scheme}#{hosts}" } let(:hosts) { '10gen.cc./' } it 'raises an error' do expect { uri }.to raise_error(Mongo::Error::InvalidURI) end end context 'string is not uri' do let(:string) { 'tyler' } it 'raises an error' do expect { uri }.to raise_error(Mongo::Error::InvalidURI) end end context 'empty string' do let(:string) { '' } it 'raises an error' do expect { uri }.to raise_error(Mongo::Error::InvalidURI) end end context 'mongodb+srv://' do let(:string) { "#{scheme}" } it 'raises an error' do expect { uri }.to raise_error(Mongo::Error::InvalidURI) end end context 'mongodb+srv://localhost::27017/' do let(:string) { "#{scheme}localhost::27017/" } it 'raises an error' do expect { uri }.to raise_error(Mongo::Error::InvalidURI) end end context 'mongodb+srv://::' do let(:string) { "#{scheme}::" } it 'raises an error' do expect { uri }.to raise_error(Mongo::Error::InvalidURI) end end context 'mongodb+srv://localhost,localhost::' do let(:string) { "#{scheme}localhost,localhost::" } it 'raises an error' do expect { uri }.to raise_error(Mongo::Error::InvalidURI) end end context 'mongodb+srv://localhost::27017,abc' do let(:string) { "#{scheme}localhost::27017,abc" } it 'raises an error' do expect { uri }.to raise_error(Mongo::Error::InvalidURI) end end
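# As these contexts illustrate, an SRV URI must name exactly one hostname and
# may not carry a port (the port comes from the SRV record). A hypothetical
# contrast, using the test hostnames from this file:
#
#   Mongo::URI::SRVProtocol.new("mongodb+srv://test5.test.build.10gen.cc")        # resolves via SRV lookup
#   Mongo::URI::SRVProtocol.new("mongodb+srv://test5.test.build.10gen.cc:27017")  # raises Mongo::Error::InvalidURI
context 'mongodb+srv://localhost:-1' do let(:string) {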
"#{scheme}localhost:-1" } it 'raises an error' do expect { uri }.to raise_error(Mongo::Error::InvalidURI) end end context 'mongodb+srv://localhost:0/' do let(:string) { "#{scheme}localhost:0/" } it 'raises an error' do expect { uri }.to raise_error(Mongo::Error::InvalidURI) end end context 'mongodb+srv://localhost:65536' do let(:string) { "#{scheme}localhost:65536" } it 'raises an error' do expect { uri }.to raise_error(Mongo::Error::InvalidURI) end end context 'mongodb+srv://localhost:foo' do let(:string) { "#{scheme}localhost:foo" } it 'raises an error' do expect { uri }.to raise_error(Mongo::Error::InvalidURI) end end context 'mongodb+srv://mongodb://[::1]:-1' do let(:string) { "#{scheme}mongodb://[::1]:-1" } it 'raises an error' do expect { uri }.to raise_error(Mongo::Error::InvalidURI) end end context 'mongodb+srv://[::1]:0/' do let(:string) { "#{scheme}[::1]:0/" } it 'raises an error' do expect { uri }.to raise_error(Mongo::Error::InvalidURI) end end context 'mongodb+srv://[::1]:65536' do let(:string) { "#{scheme}[::1]:65536" } it 'raises an error' do expect { uri }.to raise_error(Mongo::Error::InvalidURI) end end context 'mongodb+srv://[::1]:65536/' do let(:string) { "#{scheme}[::1]:65536/" } it 'raises an error' do expect { uri }.to raise_error(Mongo::Error::InvalidURI) end end context 'mongodb+srv://[::1]:foo' do let(:string) { "#{scheme}[::1]:foo" } it 'raises an error' do expect { uri }.to raise_error(Mongo::Error::InvalidURI) end end context 'mongodb+srv://example.com?w=1' do let(:string) { "#{scheme}example.com?w=1" } it 'raises an error' do expect { uri }.to raise_error(Mongo::Error::InvalidURI) end end context 'mongodb+srv://example.com/?w' do let(:string) { "#{scheme}example.com/?w" } it 'raises an error' do expect { uri }.to raise_error(Mongo::Error::InvalidURI) end end end describe 'valid uris' do describe 'invalid query results' do context 'when there are too many TXT records' do let(:string) { "#{scheme}test6.test.build.10gen.cc/" } it 'raises an error' do expect { uri }.to raise_exception(Mongo::Error::InvalidTXTRecord) end end context 'when the TXT has an invalid option' do let(:string) { "#{scheme}test10.test.build.10gen.cc" } it 'raises an error' do expect { uri }.to raise_exception(Mongo::Error::InvalidTXTRecord) end end context 'when the SRV records domain does not match hostname used for the query' do let(:string) { "#{scheme}test12.test.build.10gen.cc" } it 'raises an error' do expect { uri }.to raise_exception(Mongo::Error::MismatchedDomain) end end context 'when the query returns no SRV records' do let(:string) { "#{scheme}test4.test.build.10gen.cc" } it 'raises an error' do expect { uri }.to raise_exception(Mongo::Error::NoSRVRecords) end end end describe '#servers' do let(:string) { "#{scheme}#{servers}#{options}" } let(:servers) { 'test1.test.build.10gen.cc' } let(:options) { '' } context 'single server' do let(:servers) { 'test5.test.build.10gen.cc' } it 'returns an array with the parsed server' do expect(uri.servers).to eq(['localhost.test.build.10gen.cc:27017']) end include_examples "roundtrips string" end context 'multiple servers' do let(:hosts) { ['localhost.test.build.10gen.cc:27017', 'localhost.test.build.10gen.cc:27018'] } context 'without srvMaxHosts' do it 'returns an array with the parsed servers' do expect(uri.servers.length).to eq 2 uri.servers.should =~ hosts end include_examples "roundtrips string" end context 'with srvMaxHosts' do let(:options) { '/?srvMaxHosts=1' } it 'returns an array with only one of the parsed servers' do 
expect(uri.servers.length).to eq 1 expect(hosts.include?(uri.servers.first)).to be true end include_examples "roundtrips string" end context 'with srvMaxHosts > total hosts' do let(:options) { '/?srvMaxHosts=3' } it 'returns an array with all of the parsed servers' do expect(uri.servers.length).to eq 2 uri.servers.should =~ hosts end include_examples "roundtrips string" end context 'with srvMaxHosts == total hosts' do let(:options) { '/?srvMaxHosts=2' } it 'returns an array with all of the parsed servers' do expect(uri.servers.length).to eq 2 uri.servers.should =~ hosts end include_examples "roundtrips string" end context 'with srvMaxHosts=0' do let(:options) { '/?srvMaxHosts=0' } it 'returns an array with all of the parsed servers' do expect(uri.servers.length).to eq 2 uri.servers.should =~ hosts end include_examples "roundtrips string" end context 'when setting the srvServiceName' do let(:servers) { 'test22.test.build.10gen.cc' } let(:options) { '/?srvServiceName=customname' } it 'returns an array with the parsed servers' do uri.servers.should =~ hosts end include_examples "roundtrips string" end end end describe '#client_options' do let(:db) { SpecConfig.instance.test_db } let(:servers) { 'test5.test.build.10gen.cc' } let(:string) { "#{scheme}#{credentials}@#{servers}/#{db}" } let(:user) { 'tyler' } let(:password) { 's3kr4t' } let(:credentials) { "#{user}:#{password}" } let(:options) do uri.client_options end it 'includes the database in the options' do expect(options[:database]).to eq(SpecConfig.instance.test_db) end it 'includes the user in the options' do expect(options[:user]).to eq(user) end it 'includes the password in the options' do expect(options[:password]).to eq(password) end it 'sets ssl to true' do expect(options[:ssl]).to eq(true) end include_examples "roundtrips string" end describe '#credentials' do let(:servers) { 'test5.test.build.10gen.cc' } let(:string) { "#{scheme}#{credentials}@#{servers}" } let(:user) { 'tyler' } context 'username provided' do let(:credentials) { "#{user}:" } it 'returns the username' do expect(uri.credentials[:user]).to eq(user) end it "drops the colon in to_s" do expect(uri.to_s).to eq("mongodb+srv://tyler@test5.test.build.10gen.cc") end end context 'username and password provided' do let(:password) { 's3kr4t' } let(:credentials) { "#{user}:#{password}" } it 'returns the username' do expect(uri.credentials[:user]).to eq(user) end it 'returns the password' do expect(uri.credentials[:password]).to eq(password) end include_examples "roundtrips string" end end describe '#database' do let(:servers) { 'test5.test.build.10gen.cc' } let(:string) { "#{scheme}#{servers}/#{db}" } let(:db) { 'auth-db' } context 'database provided' do it 'returns the database name' do expect(uri.database).to eq(db) end include_examples "roundtrips string" end end describe '#uri_options' do let(:servers) { 'test5.test.build.10gen.cc' } let(:string) { "#{scheme}#{servers}/?#{options}" } context 'when no options were provided' do let(:string) { "#{scheme}#{servers}" } it 'returns an empty hash' do expect(uri.uri_options).to be_empty end include_examples "roundtrips string" end
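# A commented sketch of how write concern URI options map to driver options,
# matching the expectations in the contexts below (the URI itself is
# hypothetical):
#
#   uri = Mongo::URI.get("mongodb+srv://test5.test.build.10gen.cc/?w=2&wtimeoutMS=1234")
#   uri.uri_options[:write_concern]  # => Mongo::Options::Redacted.new(w: 2, wtimeout: 1234)
#   uri.to_s                         # => "mongodb+srv://test5.test.build.10gen.cc/?w=2&wTimeoutMS=1234"
context 'write concern options provided' do context 'numerical w value' do let(:options) { 'w=1' } let(:concern) { Mongo::Options::Redacted.new(:w => 1)} it 'sets the write concern options' do expect(uri.uri_options[:write_concern]).to eq(concern) end it 'sets the options on a client created with the uri' do expect(client.options[:write_concern]).to eq(concern) end include_examples 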
"roundtrips string" end context 'w=majority' do let(:options) { 'w=majority' } let(:concern) { Mongo::Options::Redacted.new(:w => :majority) } it 'sets the write concern options' do expect(uri.uri_options[:write_concern]).to eq(concern) end it 'sets the options on a client created with the uri' do expect(client.options[:write_concern]).to eq(concern) end include_examples "roundtrips string" end context 'journal' do let(:options) { 'journal=true' } let(:concern) { Mongo::Options::Redacted.new(:j => true) } it 'sets the write concern options' do expect(uri.uri_options[:write_concern]).to eq(concern) end it 'sets the options on a client created with the uri' do expect(client.options[:write_concern]).to eq(concern) end include_examples "roundtrips string" end context 'fsync' do let(:options) { 'fsync=true' } let(:concern) { Mongo::Options::Redacted.new(:fsync => true) } it 'sets the write concern options' do expect(uri.uri_options[:write_concern]).to eq(concern) end it 'sets the options on a client created with the uri' do expect(client.options[:write_concern]).to eq(concern) end include_examples "roundtrips string" end context 'wtimeoutMS' do let(:timeout) { 1234 } let(:options) { "w=2&wtimeoutMS=#{timeout}" } let(:concern) { Mongo::Options::Redacted.new(:w => 2, :wtimeout => timeout) } it 'sets the write concern options' do expect(uri.uri_options[:write_concern]).to eq(concern) end it 'sets the options on a client created with the uri' do expect(client.options[:write_concern]).to eq(concern) end it "roundtrips the string with camelCase" do expect(uri.to_s).to eq("mongodb+srv://test5.test.build.10gen.cc/?w=2&wTimeoutMS=1234") end end end context 'read preference option provided' do let(:options) { "readPreference=#{mode}" } context 'primary' do let(:mode) { 'primary' } let(:read) { Mongo::Options::Redacted.new(:mode => :primary) } it 'sets the read preference' do expect(uri.uri_options[:read]).to eq(read) end it 'sets the options on a client created with the uri' do expect(client.options[:read]).to eq(read) end include_examples "roundtrips string" end context 'primaryPreferred' do let(:mode) { 'primaryPreferred' } let(:read) { Mongo::Options::Redacted.new(:mode => :primary_preferred) } it 'sets the read preference' do expect(uri.uri_options[:read]).to eq(read) end it 'sets the options on a client created with the uri' do expect(client.options[:read]).to eq(read) end include_examples "roundtrips string" end context 'secondary' do let(:mode) { 'secondary' } let(:read) { Mongo::Options::Redacted.new(:mode => :secondary) } it 'sets the read preference' do expect(uri.uri_options[:read]).to eq(read) end it 'sets the options on a client created with the uri' do expect(client.options[:read]).to eq(read) end include_examples "roundtrips string" end context 'secondaryPreferred' do let(:mode) { 'secondaryPreferred' } let(:read) { Mongo::Options::Redacted.new(:mode => :secondary_preferred) } it 'sets the read preference' do expect(uri.uri_options[:read]).to eq(read) end it 'sets the options on a client created with the uri' do expect(client.options[:read]).to eq(read) end include_examples "roundtrips string" end context 'nearest' do let(:mode) { 'nearest' } let(:read) { Mongo::Options::Redacted.new(:mode => :nearest) } it 'sets the read preference' do expect(uri.uri_options[:read]).to eq(read) end it 'sets the options on a client created with the uri' do expect(client.options[:read]).to eq(read) end include_examples "roundtrips string" end end context 'read preference tags provided' do context 'single 
read preference tag set' do let(:options) do 'readPreferenceTags=dc:ny,rack:1' end let(:read) do Mongo::Options::Redacted.new(:tag_sets => [{ 'dc' => 'ny', 'rack' => '1' }]) end it 'sets the read preference tag set' do expect(uri.uri_options[:read]).to eq(read) end it 'sets the options on a client created with the uri' do expect(client.options[:read]).to eq(read) end include_examples "roundtrips string" end context 'multiple read preference tag sets' do let(:options) do 'readPreferenceTags=dc:ny&readPreferenceTags=dc:bos' end let(:read) do Mongo::Options::Redacted.new(:tag_sets => [{ 'dc' => 'ny' }, { 'dc' => 'bos' }]) end it 'sets the read preference tag sets' do expect(uri.uri_options[:read]).to eq(read) end it 'sets the options on a client created with the uri' do expect(client.options[:read]).to eq(read) end include_examples "roundtrips string" end end context 'read preference max staleness option provided' do let(:options) do 'readPreference=Secondary&maxStalenessSeconds=120' end let(:read) do Mongo::Options::Redacted.new(mode: :secondary, :max_staleness => 120) end it 'sets the read preference max staleness in seconds' do expect(uri.uri_options[:read]).to eq(read) end it 'sets the options on a client created with the uri' do expect(client.options[:read]).to eq(read) end it "roundtrips the string with lowercase values" do expect(uri.to_s).to eq("mongodb+srv://test5.test.build.10gen.cc/?readPreference=secondary&maxStalenessSeconds=120") end context 'when the read preference and max staleness combination is invalid' do context 'when max staleness is combined with read preference mode primary' do let(:options) do 'readPreference=primary&maxStalenessSeconds=120' end it 'raises an exception when read preference is accessed on the client' do expect { client.server_selector }.to raise_exception(Mongo::Error::InvalidServerPreference) end end context 'when the max staleness value is too small' do let(:options) do 'readPreference=secondary&maxStalenessSeconds=89' end it 'does not raise an exception and is omitted' do expect(client.read_preference).to eq(BSON::Document.new(mode: :secondary)) end it "drops maxStalenessSeconds in to_s" do expect(uri.to_s).to eq("mongodb+srv://test5.test.build.10gen.cc/?readPreference=secondary") end end end end context 'replica set option provided' do let(:rs_name) { 'test-rs-name' } let(:options) { "replicaSet=#{rs_name}" } it 'sets the replica set option' do expect(uri.uri_options[:replica_set]).to eq(rs_name) end it 'sets the options on a client created with the uri' do expect(client.options[:replica_set]).to eq(rs_name) end include_examples "roundtrips string" end context 'auth mechanism provided' do let(:options) { "authMechanism=#{mechanism}" } let(:string) { "#{scheme}#{credentials}@#{servers}/?#{options}" } let(:user) { 'tyler' } let(:password) { 's3kr4t' } let(:credentials) { "#{user}:#{password}" } context 'plain' do let(:mechanism) { 'PLAIN' } let(:expected) { :plain } it 'sets the auth mechanism to :plain' do expect(uri.uri_options[:auth_mech]).to eq(expected) end it 'sets the options on a client created with the uri' do expect(client.options[:auth_mech]).to eq(expected) end it 'is case-insensitive' do client = new_local_client_nmio(string.downcase) expect(client.options[:auth_mech]).to eq(expected) end include_examples "roundtrips string" end
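# The authMechanism values accepted below map to driver symbols as follows
# (taken from the expectations in these contexts):
#
#   'PLAIN'        => :plain
#   'MONGODB-CR'   => :mongodb_cr
#   'GSSAPI'       => :gssapi
#   'SCRAM-SHA-1'  => :scram
#   'MONGODB-X509' => :mongodb_x509
context 'mongodb-cr' do let(:mechanism) { 'MONGODB-CR' } let(:expected) { :mongodb_cr } it 'sets the auth mechanism to :mongodb_cr' do expect(uri.uri_options[:auth_mech]).to eq(expected) end it 'sets the options 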
on a client created with the uri' do expect(client.options[:auth_mech]).to eq(expected) end it 'is case-insensitive' do client = new_local_client_nmio(string.downcase) expect(client.options[:auth_mech]).to eq(expected) end include_examples "roundtrips string" end context 'gssapi' do require_mongo_kerberos let(:mechanism) { 'GSSAPI' } let(:expected) { :gssapi } let(:options) { "authMechanism=#{mechanism}&authSource=$external" } it 'sets the auth mechanism to :gssapi' do expect(uri.uri_options[:auth_mech]).to eq(expected) end it 'sets the options on a client created with the uri' do expect(client.options[:auth_mech]).to eq(expected) end it 'is case-insensitive' do client = new_local_client_nmio(string.downcase) expect(client.options[:auth_mech]).to eq(expected) end it "roundtrips the string" do expect(uri.to_s).to eq("mongodb+srv://tyler:s3kr4t@test5.test.build.10gen.cc/?authSource=$external&authMechanism=GSSAPI") end end context 'scram-sha-1' do let(:mechanism) { 'SCRAM-SHA-1' } let(:expected) { :scram } it 'sets the auth mechanism to :scram' do expect(uri.uri_options[:auth_mech]).to eq(expected) end it 'sets the options on a client created with the uri' do expect(client.options[:auth_mech]).to eq(expected) end it 'is case-insensitive' do client = new_local_client_nmio(string.downcase) expect(client.options[:auth_mech]).to eq(expected) end include_examples "roundtrips string" end context 'mongodb-x509' do let(:options) { "authMechanism=#{mechanism}&authSource=$external" } let(:mechanism) { 'MONGODB-X509' } let(:expected) { :mongodb_x509 } let(:credentials) { user } it 'sets the auth mechanism to :mongodb_x509' do expect(uri.uri_options[:auth_mech]).to eq(expected) end it 'sets the options on a client created with the uri' do expect(client.options[:auth_mech]).to eq(expected) end it 'is case-insensitive' do client = new_local_client_nmio(string.downcase) expect(client.options[:auth_mech]).to eq(expected) end it "roundtrips the string" do expect(uri.to_s).to eq("mongodb+srv://tyler@test5.test.build.10gen.cc/?authSource=$external&authMechanism=MONGODB-X509") end context 'when a username is not provided' do let(:string) { "#{scheme}#{servers}/?#{options}" } it 'recognizes the mechanism with no username' do client = new_local_client_nmio(string.downcase) expect(client.options[:auth_mech]).to eq(expected) expect(client.options[:user]).to be_nil end it "roundtrips the string" do expect(uri.to_s).to eq("mongodb+srv://test5.test.build.10gen.cc/?authSource=$external&authMechanism=MONGODB-X509") end end end end context 'auth source provided' do let(:options) { "authSource=#{source}" } context 'regular db' do let(:source) { 'foo' } it 'sets the auth source to the database' do expect(uri.uri_options[:auth_source]).to eq(source) end it 'sets the options on a client created with the uri' do expect(client.options[:auth_source]).to eq(source) end include_examples "roundtrips string" end end # This context exactly duplicates the same one in uri_spec.rb context 'auth mechanism properties provided' do shared_examples 'sets options in the expected manner' do it 'preserves case in auth mechanism properties returned from URI' do expect(uri.uri_options[:auth_mech_properties]).to eq(expected_uri_options) end it 'downcases auth mechanism properties keys in client options' do client = new_local_client_nmio(string) expect(client.options[:auth_mech_properties]).to eq(expected_client_options) end end context 'service_name' do let(:options) do "authMechanismProperties=SERVICE_name:#{service_name}" end let(:service_name) 
{ 'foo' } let(:expected_uri_options) do Mongo::Options::Redacted.new( SERVICE_name: service_name, ) end let(:expected_client_options) do Mongo::Options::Redacted.new( service_name: service_name, ) end include_examples 'sets options in the expected manner' include_examples "roundtrips string" end context 'canonicalize_host_name' do let(:options) do "authMechanismProperties=CANONICALIZE_HOST_name:#{canonicalize_host_name}" end let(:canonicalize_host_name) { 'true' } let(:expected_uri_options) do Mongo::Options::Redacted.new( CANONICALIZE_HOST_name: true, ) end let(:expected_client_options) do Mongo::Options::Redacted.new( canonicalize_host_name: true, ) end include_examples 'sets options in the expected manner' include_examples "roundtrips string" end context 'service_realm' do let(:options) do "authMechanismProperties=SERVICE_realm:#{service_realm}" end let(:service_realm) { 'dumdum' } let(:expected_uri_options) do Mongo::Options::Redacted.new( SERVICE_realm: service_realm, ) end let(:expected_client_options) do Mongo::Options::Redacted.new( service_realm: service_realm, ) end include_examples 'sets options in the expected manner' include_examples "roundtrips string" end context 'multiple properties' do let(:options) do "authMechanismProperties=SERVICE_realm:#{service_realm}," + "CANONICALIZE_HOST_name:#{canonicalize_host_name}," + "SERVICE_name:#{service_name}" end let(:service_name) { 'foo' } let(:canonicalize_host_name) { 'true' } let(:service_realm) { 'dumdum' } let(:expected_uri_options) do Mongo::Options::Redacted.new( SERVICE_name: service_name, CANONICALIZE_HOST_name: true, SERVICE_realm: service_realm, ) end let(:expected_client_options) do Mongo::Options::Redacted.new( service_name: service_name, canonicalize_host_name: true, service_realm: service_realm, ) end include_examples 'sets options in the expected manner' include_examples "roundtrips string" end end context 'connectTimeoutMS' do let(:options) { "connectTimeoutMS=4567" } it 'sets the connect timeout' do expect(uri.uri_options[:connect_timeout]).to eq(4.567) end include_examples "roundtrips string" end context 'socketTimeoutMS' do let(:options) { "socketTimeoutMS=8910" } it 'sets the socket timeout' do expect(uri.uri_options[:socket_timeout]).to eq(8.910) end include_examples "roundtrips string" end context 'when providing serverSelectionTimeoutMS' do let(:options) { "serverSelectionTimeoutMS=3561" } it 'sets the server selection timeout' do expect(uri.uri_options[:server_selection_timeout]).to eq(3.561) end include_examples "roundtrips string" end context 'when providing localThresholdMS' do let(:options) { "localThresholdMS=3561" } it 'sets the local threshold' do expect(uri.uri_options[:local_threshold]).to eq(3.561) end include_examples "roundtrips string" end context 'when providing maxConnecting' do let(:max_connecting) { 10 } let(:options) { "maxConnecting=#{max_connecting}" } it 'sets the max connecting option' do expect(uri.uri_options[:max_connecting]).to eq(max_connecting) end include_examples "roundtrips string" end context 'when providing maxPoolSize' do let(:max_pool_size) { 10 } let(:options) { "maxPoolSize=#{max_pool_size}" } it 'sets the max pool size option' do expect(uri.uri_options[:max_pool_size]).to eq(max_pool_size) end include_examples "roundtrips string" end
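# Timeouts given in milliseconds in the URI surface as seconds (floats) on
# uri_options, while counts stay integers; e.g., per the contexts above:
#
#   connectTimeoutMS=4567 -> uri.uri_options[:connect_timeout] == 4.567
#   maxPoolSize=10        -> uri.uri_options[:max_pool_size]   == 10
context 'when providing minPoolSize' do let(:min_pool_size) { 5 } let(:options) { "minPoolSize=#{min_pool_size}" } it 'sets the min pool size option' do expect(uri.uri_options[:min_pool_size]).to eq(min_pool_size) end include_examples 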
"roundtrips string" end context 'when providing waitQueueTimeoutMS' do let(:wait_queue_timeout) { 500 } let(:options) { "waitQueueTimeoutMS=#{wait_queue_timeout}" } it 'sets the wait queue timeout option' do expect(uri.uri_options[:wait_queue_timeout]).to eq(0.5) end include_examples "roundtrips string" end context 'when providing srvMaxHosts' do let(:srv_max_hosts) { 1 } let(:options) { "srvMaxHosts=#{srv_max_hosts}" } it 'sets the srv max hosts option' do expect(uri.uri_options[:srv_max_hosts]).to eq(srv_max_hosts) end include_examples "roundtrips string" end context 'when providing srvMaxHosts as 0' do let(:srv_max_hosts) { 0 } let(:options) { "srvMaxHosts=#{srv_max_hosts}" } it 'doesn\'t set the srv max hosts option' do expect(uri.uri_options[:srv_max_hosts]).to eq(srv_max_hosts) end include_examples "roundtrips string" end context 'when providing invalid integer to srvMaxHosts' do let(:srv_max_hosts) { -1 } let(:options) { "srvMaxHosts=#{srv_max_hosts}" } it 'does not set the srv max hosts option' do expect(uri.uri_options).to_not have_key(:srv_max_hosts) end it "drops srvMaxHosts in to_s" do expect(uri.to_s).to eq("mongodb+srv://test5.test.build.10gen.cc") end end context 'when providing invalid type to srvMaxHosts' do let(:srv_max_hosts) { "foo" } let(:options) { "srvMaxHosts=#{srv_max_hosts}" } it 'does not set the srv max hosts option' do expect(uri.uri_options).to_not have_key(:srv_max_hosts) end it "drops srvMaxHosts in to_s" do expect(uri.to_s).to eq("mongodb+srv://test5.test.build.10gen.cc") end end context 'when providing srvServiceName' do let(:srv_service_name) { "mongodb" } let(:options) { "srvServiceName=#{srv_service_name}" } it 'sets the srv service name option' do expect(uri.uri_options[:srv_service_name]).to eq(srv_service_name) end include_examples "roundtrips string" end context 'ssl' do let(:options) { "ssl=#{ssl}" } context 'true' do let(:ssl) { true } it 'sets the ssl option to true' do expect(uri.uri_options[:ssl]).to be true end it "uses tls in to_s" do expect(uri.to_s).to eq("mongodb+srv://test5.test.build.10gen.cc/?tls=true") end end context 'false' do let(:ssl) { false } it 'sets the ssl option to false' do expect(uri.uri_options[:ssl]).to be false end it "uses tls in to_s" do expect(uri.to_s).to eq("mongodb+srv://test5.test.build.10gen.cc/?tls=false") end end end context 'grouped and non-grouped options provided' do let(:options) { 'w=1&ssl=true' } it 'do not overshadow top level options' do expect(uri.uri_options).not_to be_empty end it "uses tls in to_s" do expect(uri.to_s).to eq("mongodb+srv://test5.test.build.10gen.cc/?w=1&tls=true") end end context 'when an invalid option is provided' do let(:options) { 'invalidOption=10' } let(:uri_options) do uri.uri_options end it 'does not raise an exception' do expect(uri_options).to be_empty end it "drops the invalid option in to_s" do expect(uri.to_s).to eq("mongodb+srv://test5.test.build.10gen.cc") end context 'when an invalid option is combined with valid options' do let(:options) { 'invalidOption=10&waitQueueTimeoutMS=500&ssl=true' } it 'does not raise an exception' do expect(uri_options).not_to be_empty end it 'sets the valid options' do expect(uri_options[:wait_queue_timeout]).to eq(0.5) expect(uri_options[:ssl]).to be true end it "drops the invalid option in to_s" do expect(uri.to_s).to eq("mongodb+srv://test5.test.build.10gen.cc/?waitQueueTimeoutMS=500&tls=true") end end end context 'when an app name option is provided' do let(:options) { "appName=srv_test" } it 'sets the app name on the client' do 
expect(client.options[:app_name]).to eq('srv_test') end include_examples "roundtrips string" end context 'when a supported compressors option is provided' do let(:options) { "compressors=zlib" } it 'sets the compressors as an array on the client' do expect(client.options[:compressors]).to eq(['zlib']) end include_examples "roundtrips string" end context 'when a non-supported compressors option is provided' do let(:options) { "compressors=snoopy" } it 'sets no compressors on the client and warns' do expect(Mongo::Logger.logger).to receive(:warn) expect(client.options[:compressors]).to be_nil end include_examples "roundtrips string" end context 'when a zlibCompressionLevel option is provided' do let(:options) { "zlibCompressionLevel=6" } it 'sets the zlib compression level on the client' do expect(client.options[:zlib_compression_level]).to eq(6) end include_examples "roundtrips string" end end end describe '#validate_srv_hostname' do let(:dummy_uri) do Mongo::URI::SRVProtocol.new("mongodb+srv://test1.test.build.10gen.cc/") end let(:validate) do dummy_uri.send(:validate_srv_hostname, hostname) end context 'when the hostname is valid' do let(:hostname) do 'a.b.c' end it 'does not raise an error' do expect { validate }.not_to raise_error end end context 'when the hostname has a trailing dot' do let(:hostname) do "a.b.c." end it 'raises an error' do expect { validate }.to raise_error(Mongo::Error::InvalidURI, /Hostname cannot end with a dot: a\.b\.c\./) end end context 'when the hostname is empty' do let(:hostname) do '' end it 'raises an error' do expect { validate }.to raise_error(Mongo::Error::InvalidURI) end end context 'when the hostname has only one part' do let(:hostname) do 'a' end it 'raises an error' do expect { validate }.to raise_error(Mongo::Error::InvalidURI) end end context 'when the hostname has only two parts' do let(:hostname) do 'a.b' end it 'raises an error' do expect { validate }.to raise_error(Mongo::Error::InvalidURI) end end context 'when the hostname has an empty last part' do let(:hostname) do 'a.b.' end it 'raises an error' do expect { validate }.to raise_error(Mongo::Error::InvalidURI) end end context 'when multiple hostnames are specified' do it 'raises an error' do expect do Mongo::URI::SRVProtocol.new("mongodb+srv://a.b.c,d.e.f/") end.to raise_error(Mongo::Error::InvalidURI, /One and only one host is required/) end end context 'when the hostname contains a colon' do let(:hostname) do 'a.b.c:27017' end it 'raises an error' do expect { validate }.to raise_error(Mongo::Error::InvalidURI) end end context 'when the hostname starts with a dot' do let(:hostname) do '.a.b.c' end it 'raises an error' do expect { validate }.to raise_error(Mongo::Error::InvalidURI) end end
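# The validation rules exercised here: an SRV hostname must have at least
# three non-empty, dot-separated parts, with no port and no leading or
# trailing dot. Invoked via send, as these specs do:
#
#   dummy_uri.send(:validate_srv_hostname, 'a.b.c')   # passes
#   dummy_uri.send(:validate_srv_hostname, 'a.b')     # raises Mongo::Error::InvalidURI
#   dummy_uri.send(:validate_srv_hostname, 'a.b.c.')  # raises Mongo::Error::InvalidURI (trailing dot)
context 'when the hostname ends with consecutive dots' do let(:hostname) do 'a.b.c..'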
end it 'raises an error' do expect { validate }.to raise_error(Mongo::Error::InvalidURI) end end context 'when the hostname contains consecutive dots in the middle' do let(:hostname) do 'a..b.c' end it 'raises an error' do expect { validate }.to raise_error(Mongo::Error::InvalidURI) end end end end

mongo-ruby-driver-2.21.3/spec/mongo/uri_option_parsing_spec.rb

# frozen_string_literal: true # rubocop:todo all require 'lite_spec_helper' describe Mongo::URI do let(:uri) { described_class.new(string) } shared_examples_for 'parses successfully' do it 'returns a Mongo::URI object' do expect(uri).to be_a(Mongo::URI) end end shared_examples_for 'raises parse error' do it 'raises InvalidURI' do expect do uri end.to raise_error(Mongo::Error::InvalidURI) end end shared_examples_for 'a millisecond option' do let(:string) { "mongodb://example.com/?#{uri_option}=123" } it_behaves_like 'parses successfully' it 'is a float' do expect(uri.uri_options[ruby_option]).to eq(0.123) end context 'a multiple of 1 second' do let(:string) { "mongodb://example.com/?#{uri_option}=123000" } it_behaves_like 'parses successfully' it 'is a float' do expect(uri.uri_options[ruby_option]).to be_a(Float) expect(uri.uri_options[ruby_option]).to eq(123) end end end shared_examples_for 'an integer option' do let(:string) { "mongodb://example.com/?#{uri_option}=123" } it_behaves_like 'parses successfully' it 'is an integer' do expect(uri.uri_options[ruby_option]).to eq(123) end context 'URL encoded value' do let(:string) { "mongodb://example.com/?#{uri_option}=%31%32%33" } it_behaves_like 'parses successfully' it 'is an integer' do expect(uri.uri_options[ruby_option]).to eq(123) end end end shared_examples_for 'a boolean option' do context 'is true' do let(:string) { "mongodb://example.com/?#{uri_option}=true" } it_behaves_like 'parses successfully' it 'is a boolean' do expect(uri.uri_options[ruby_option]).to be true end end context 'is TRUE' do let(:string) { "mongodb://example.com/?#{uri_option}=TRUE" } it_behaves_like 'parses successfully' it 'is a boolean' do expect(uri.uri_options[ruby_option]).to be true end end context 'is false' do let(:string) { "mongodb://example.com/?#{uri_option}=false" } it_behaves_like 'parses successfully' it 'is a boolean' do expect(uri.uri_options[ruby_option]).to be false end end context 'is FALSE' do let(:string) { "mongodb://example.com/?#{uri_option}=FALSE" } it_behaves_like 'parses successfully' it 'is a boolean' do expect(uri.uri_options[ruby_option]).to be false end end end shared_examples_for 'an inverted boolean option' do let(:string) { "mongodb://example.com/?#{uri_option}=true" } it_behaves_like 'parses successfully' it 'is a boolean' do expect(uri.uri_options[ruby_option]).to be false end end
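# The 'a millisecond option' shared example pins down the ms-to-seconds
# conversion used throughout: URI values are milliseconds, driver options are
# seconds as floats. A hypothetical illustration:
#
#   Mongo::URI.new('mongodb://example.com/?connectTimeoutMS=123').uri_options[:connect_timeout]     # => 0.123
#   Mongo::URI.new('mongodb://example.com/?connectTimeoutMS=123000').uri_options[:connect_timeout]  # => 123.0
shared_examples_for 'a string option' do let(:string) { "mongodb://example.com/?#{uri_option}=foo" } it_behaves_like 'parses successfully' it 'is a string' 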
do expect(uri.uri_options[ruby_option]).to eq('foo') end context 'it is escaped in URI' do let(:string) { "mongodb://example.com/?#{uri_option}=foo%2f" } it 'is unescaped' do expect(uri.uri_options[ruby_option]).to eq('foo/') end end context 'it is escaped twice in URI' do let(:string) { "mongodb://example.com/?#{uri_option}=foo%252f" } it 'is unescaped once' do expect(uri.uri_options[ruby_option]).to eq('foo%2f') end end context 'value is a number' do let(:string) { "mongodb://example.com/?#{uri_option}=1" } it_behaves_like 'parses successfully' it 'is a string' do expect(uri.uri_options[ruby_option]).to eq('1') end end end context 'appName' do let(:uri_option) { 'appName' } let(:ruby_option) { :app_name } it_behaves_like 'a string option' end context 'authMechanism' do let(:string) { 'mongodb://example.com/?authMechanism=SCRAM-SHA-256' } it_behaves_like 'parses successfully' it 'is a symbol' do expect(uri.uri_options[:auth_mech]).to eq(:scram256) end context 'lowercase value' do let(:string) { 'mongodb://example.com/?authMechanism=scram-sha-256' } it_behaves_like 'parses successfully' it 'is mapped to auth mechanism' do expect(uri.uri_options[:auth_mech]).to eq(:scram256) end end context 'unrecognized value' do let(:string) { 'mongodb://example.com/?authMechanism=foobar' } it_behaves_like 'parses successfully' it 'is mapped to auth mechanism' do expect(uri.uri_options[:auth_mech]).to eq('foobar') end end end context 'authMechanismProperties' do let(:string) { 'mongodb://example.com/?authmechanismproperties=SERVICE_realm:foo,CANONICALIZE_HOST_name:TRUE' } it_behaves_like 'parses successfully' it 'parses correctly' do expect(uri.uri_options[:auth_mech_properties]).to eq(BSON::Document.new( SERVICE_realm: 'foo', CANONICALIZE_HOST_name: true, )) end context 'canonicalize host name is false' do let(:string) { 'mongodb://example.com/?authmechanismproperties=SERVICE_realm:foo,CANONICALIZE_HOST_name:false' } it 'parses correctly' do expect(uri.uri_options[:auth_mech_properties]).to eq(BSON::Document.new( SERVICE_realm: 'foo', CANONICALIZE_HOST_name: false, )) end end context 'canonicalize host name is true in mixed case' do let(:string) { 'mongodb://example.com/?authmechanismproperties=SERVICE_realm:foo,CANONICALIZE_HOST_name:TrUe' } it 'parses correctly' do expect(uri.uri_options[:auth_mech_properties]).to eq(BSON::Document.new( SERVICE_realm: 'foo', CANONICALIZE_HOST_name: true, )) end end end context 'authSource' do let(:uri_option) { 'authSource' } let(:ruby_option) { :auth_source } it_behaves_like 'a string option' context 'empty' do let(:string) { "mongodb://example.com/?#{uri_option}=" } it 'is mapped to the empty string' do expect(uri.uri_options[ruby_option]).to eq('') end end end context 'compressors' do let(:string) { 'mongodb://example.com/?compressors=snappy,zlib' } it_behaves_like 'parses successfully' it 'is an array of strings' do expect(uri.uri_options[:compressors]).to eq(['snappy', 'zlib']) end end context 'connect' do let(:client) { new_local_client_nmio(string) } shared_examples 'raises an error when client is created' do it 'raises an error when client is created' do lambda do client end.should raise_error(ArgumentError, /Invalid :connect option value/) end end %i(direct sharded replica_set load_balanced).each do |value| context "#{value}" do let(:string) { "mongodb://example.com/?connect=#{value}" } it_behaves_like 'parses successfully' it 'is a symbol' do expect(uri.uri_options[:connect]).to eq(value) end end end %i(replica-set load-balanced).each do |value| context "#{value}" do let(:string) { "mongodb://example.com/?connect=#{value}" } it_behaves_like 'parses successfully' it 'is a symbol' do expect(uri.uri_options[:connect]).to eq(value) end include_examples 'raises an error when client is created' end end context 'invalid value' do let(:string) { 'mongodb://example.com/?connect=bogus' } it_behaves_like 'parses successfully' it 'is a symbol' do expect(uri.uri_options[:connect]).to eq(:bogus) end include_examples 'raises an error when client is created' end end
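# As the connect contexts show, parsing is permissive (any value becomes a
# symbol) and validation happens at client construction. A hypothetical
# illustration:
#
#   Mongo::URI.new('mongodb://example.com/?connect=direct').uri_options[:connect]  # => :direct
#   Mongo::URI.new('mongodb://example.com/?connect=bogus').uri_options[:connect]   # => :bogus
#   Mongo::Client.new('mongodb://example.com/?connect=bogus')                      # raises ArgumentError (/Invalid :connect option value/)
context 'connectTimeoutMS' do let(:uri_option) {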
'connectTimeoutMS' } let(:ruby_option) { :connect_timeout } it_behaves_like 'a millisecond option' end context 'fsync' do let(:string) { 'mongodb://example.com/?fsync=true' } it_behaves_like 'parses successfully' it 'is a boolean' do expect(uri.uri_options[:write_concern]).to eq(BSON::Document.new(fsync: true)) end end context 'heartbeatFrequencyMS' do let(:uri_option) { 'heartbeatFrequencyMS' } let(:ruby_option) { :heartbeat_frequency } it_behaves_like 'a millisecond option' end context 'journal' do let(:string) { 'mongodb://example.com/?journal=true' } it_behaves_like 'parses successfully' it 'is a boolean' do expect(uri.uri_options[:write_concern]).to eq(BSON::Document.new(j: true)) end end context 'localThresholdMS' do let(:uri_option) { 'localThresholdMS' } let(:ruby_option) { :local_threshold } it_behaves_like 'a millisecond option' end context 'maxIdleTimeMS' do let(:uri_option) { 'maxIdleTimeMS' } let(:ruby_option) { :max_idle_time } it_behaves_like 'a millisecond option' end context 'maxStalenessSeconds' do let(:string) { "mongodb://example.com/?maxStalenessSeconds=123" } it_behaves_like 'parses successfully' it 'is an integer' do expect(uri.uri_options[:read][:max_staleness]).to be_a(Integer) expect(uri.uri_options[:read][:max_staleness]).to eq(123) end context '-1 as value' do let(:string) { "mongodb://example.com/?maxStalenessSeconds=-1" } it_behaves_like 'parses successfully' it 'is omitted' do uri.uri_options.should_not have_key(:read) end end end context 'maxPoolSize' do let(:uri_option) { 'maxPoolSize' } let(:ruby_option) { :max_pool_size } it_behaves_like 'an integer option' end context 'minPoolSize' do let(:uri_option) { 'minPoolSize' } let(:ruby_option) { :min_pool_size } it_behaves_like 'an integer option' end context 'readConcernLevel' do let(:string) { 'mongodb://example.com/?readConcernLevel=snapshot' } it_behaves_like 'parses successfully' it 'is a symbol' do expect(uri.uri_options[:read_concern]).to eq(BSON::Document.new(level: :snapshot)) end end context 'readPreference' do let(:string) { "mongodb://example.com/?readPreference=nearest" } it_behaves_like 'parses successfully' it 'is mapped to the mode symbol' do expect(uri.uri_options[:read]).to eq(BSON::Document.new(mode: :nearest)) end context 'an unknown value' do let(:string) { "mongodb://example.com/?readPreference=foobar" } it 'is unchanged' do expect(uri.uri_options[:read]).to eq(BSON::Document.new(mode: 'foobar')) end end end context 'readPreferenceTags' do let(:string) { "mongodb://example.com/?readPreferenceTags=dc:ny,rack:1" } it_behaves_like 'parses successfully' it 'parses correctly' do expect(uri.uri_options[:read]).to eq(BSON::Document.new( tag_sets: [{'dc' => 'ny', 'rack' => '1'}])) end context 'with double escaped keys and values' do let(:string) { "mongodb://example.com/?readPreferenceTags=dc%252f:ny,rack:1%252f" } it 'unescapes once' do expect(uri.uri_options[:read]).to eq(BSON::Document.new( tag_sets: [{'dc%2f' => 'ny', 'rack' => '1%2f'}])) end end end context 'replicaSet' do let(:uri_option) { 'replicaSet' } let(:ruby_option) { :replica_set } it_behaves_like 'a string option' end context 'retryWrites' do let(:uri_option) { 'retryWrites' } let(:ruby_option) { :retry_writes } it_behaves_like 'a boolean option' end context 'serverSelectionTimeoutMS' do let(:uri_option) { 'serverSelectionTimeoutMS' } let(:ruby_option) { :server_selection_timeout } it_behaves_like 'a millisecond option' end context 'socketTimeoutMS' do let(:uri_option) { 'socketTimeoutMS' } let(:ruby_option) { :socket_timeout } it_behaves_like 'a 
millisecond option' end context 'ssl' do let(:uri_option) { 'ssl' } let(:ruby_option) { :ssl } it_behaves_like 'a boolean option' end context 'tls' do let(:uri_option) { 'tls' } let(:ruby_option) { :ssl } it_behaves_like 'a boolean option' end context 'tlsAllowInvalidCertificates' do let(:uri_option) { 'tlsAllowInvalidCertificates' } let(:ruby_option) { :ssl_verify_certificate } it_behaves_like 'an inverted boolean option' end context 'tlsAllowInvalidHostnames' do let(:uri_option) { 'tlsAllowInvalidHostnames' } let(:ruby_option) { :ssl_verify_hostname } it_behaves_like 'an inverted boolean option' end context 'tlsCAFile' do let(:uri_option) { 'tlsCAFile' } let(:ruby_option) { :ssl_ca_cert } it_behaves_like 'a string option' end context 'tlsCertificateKeyFile' do let(:uri_option) { 'tlsCertificateKeyFile' } let(:ruby_option) { :ssl_cert } it_behaves_like 'a string option' end context 'tlsCertificateKeyFilePassword' do let(:uri_option) { 'tlsCertificateKeyFilePassword' } let(:ruby_option) { :ssl_key_pass_phrase } it_behaves_like 'a string option' end context 'tlsInsecure' do let(:uri_option) { 'tlsInsecure' } let(:ruby_option) { :ssl_verify } it_behaves_like 'an inverted boolean option' end context 'w' do context 'integer value' do let(:string) { "mongodb://example.com/?w=1" } it_behaves_like 'parses successfully' it 'is an integer' do expect(uri.uri_options[:write_concern]).to eq(BSON::Document.new(w: 1)) end end context 'string value' do let(:string) { "mongodb://example.com/?w=foo" } it_behaves_like 'parses successfully' it 'is a string' do expect(uri.uri_options[:write_concern]).to eq(BSON::Document.new(w: 'foo')) end end context 'majority' do let(:string) { "mongodb://example.com/?w=majority" } it_behaves_like 'parses successfully' it 'is a symbol' do expect(uri.uri_options[:write_concern]).to eq(BSON::Document.new(w: :majority)) end end end context 'waitQueueTimeoutMS' do let(:uri_option) { 'waitQueueTimeoutMS' } let(:ruby_option) { :wait_queue_timeout } it_behaves_like 'a millisecond option' end context 'wtimeoutMS' do let(:string) { "mongodb://example.com/?wtimeoutMS=100" } it_behaves_like 'parses successfully' it 'is an integer' do expect(uri.uri_options[:write_concern]).to eq(BSON::Document.new(wtimeout: 100)) end end context 'zlibCompressionLevel' do let(:uri_option) { 'zlibCompressionLevel' } let(:ruby_option) { :zlib_compression_level } let(:string) { "mongodb://example.com/?#{uri_option}=7" } it_behaves_like 'parses successfully' it 'is an integer' do expect(uri.uri_options[ruby_option]).to eq(7) end end end

mongo-ruby-driver-2.21.3/spec/mongo/uri_spec.rb

# frozen_string_literal: true # rubocop:todo all require 'lite_spec_helper' describe Mongo::URI do shared_examples "roundtrips string" do it "returns the correct string for the uri" do expect(uri.to_s).to eq(URI::DEFAULT_PARSER.unescape(string)) end end describe '.get' do let(:uri) { described_class.get(string) } describe 'invalid uris' do context 'string is not uri' do let(:string) { 'tyler' } it 'raises an error' do expect { uri }.to raise_error(Mongo::Error::InvalidURI) end end context 'nil' do let(:string) { nil } it 'raises an error' do expect do uri end.to raise_error(Mongo::Error::InvalidURI, /URI must be a string, not nil/) end end context 'empty string' do let(:string) { '' } it 'raises an error' do expect do uri end.to raise_error(Mongo::Error::InvalidURI, /Cannot parse an empty URI/) end end end
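# Mongo::URI.get dispatches on the scheme, per the contexts below:
#
#   Mongo::URI.get('mongodb://localhost:27017')                # => a Mongo::URI
#   Mongo::URI.get('mongodb+srv://test5.test.build.10gen.cc')  # => a Mongo::URI::SRVProtocol
#   Mongo::URI.get('mongo://localhost:27017')                  # raises Mongo::Error::InvalidURI
context 'when the scheme is mongodb://' do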
let(:string) do 'mongodb://localhost:27017' end it 'returns a Mongo::URI object' do expect(uri).to be_a(Mongo::URI) end end context 'when the scheme is mongodb+srv://' do require_external_connectivity let(:string) do 'mongodb+srv://test5.test.build.10gen.cc' end it 'returns a Mongo::URI::SRVProtocol object' do expect(uri).to be_a(Mongo::URI::SRVProtocol) end include_examples "roundtrips string" end context 'when the scheme is invalid' do let(:string) do 'mongo://localhost:27017' end it 'raises an exception' do expect { uri }.to raise_error(Mongo::Error::InvalidURI) end end end let(:scheme) { 'mongodb://' } let(:uri) { described_class.new(string) } describe 'invalid uris' do context 'string is not uri' do let(:string) { 'tyler' } it 'raises an error' do expect { uri }.to raise_error(Mongo::Error::InvalidURI) end end context 'nil' do let(:string) { nil } it 'raises an error' do expect do uri end.to raise_error(Mongo::Error::InvalidURI, /URI must be a string, not nil/) end end context 'empty string' do let(:string) { '' } it 'raises an error' do expect do uri end.to raise_error(Mongo::Error::InvalidURI, /Cannot parse an empty URI/) end end context 'mongo://localhost:27017' do let(:string) { 'mongo://localhost:27017' } it 'raises an error' do expect { uri }.to raise_error(Mongo::Error::InvalidURI) end end context 'mongodb://' do let(:string) { 'mongodb://' } it 'raises an error' do expect { uri }.to raise_error(Mongo::Error::InvalidURI) end end context 'mongodb://localhost::27017' do let(:string) { 'mongodb://localhost::27017' } it 'raises an error' do expect { uri }.to raise_error(Mongo::Error::InvalidURI) end end context 'mongodb://localhost::27017/' do let(:string) { 'mongodb://localhost::27017/' } it 'raises an error' do expect { uri }.to raise_error(Mongo::Error::InvalidURI) end end context 'mongodb://::' do let(:string) { 'mongodb://::' } it 'raises an error' do expect { uri }.to raise_error(Mongo::Error::InvalidURI) end end context 'mongodb://localhost,localhost::' do let(:string) { 'mongodb://localhost,localhost::' } it 'raises an error' do expect { uri }.to raise_error(Mongo::Error::InvalidURI) end end context 'mongodb://localhost::27017,abc' do let(:string) { 'mongodb://localhost::27017,abc' } it 'raises an error' do expect { uri }.to raise_error(Mongo::Error::InvalidURI) end end context 'mongodb://localhost:-1' do let(:string) { 'mongodb://localhost:-1' } it 'raises an error' do expect { uri }.to raise_error(Mongo::Error::InvalidURI) end end context 'mongodb://localhost:0/' do let(:string) { 'mongodb://localhost:0/' } it 'raises an error' do expect { uri }.to raise_error(Mongo::Error::InvalidURI) end end context 'mongodb://localhost:65536' do let(:string) { 'mongodb://localhost:65536' } it 'raises an error' do expect { uri }.to raise_error(Mongo::Error::InvalidURI) end end context 'mongodb://localhost:foo' do let(:string) { 'mongodb://localhost:foo' } it 'raises an error' do expect { uri }.to raise_error(Mongo::Error::InvalidURI) end end context 'mongodb://[::1]:-1' do let(:string) { 'mongodb://[::1]:-1' } it 'raises an error' do expect { uri }.to raise_error(Mongo::Error::InvalidURI) end end context 'mongodb://[::1]:0/' do let(:string) { 'mongodb://[::1]:0/' } it 'raises an error' do expect { uri }.to raise_error(Mongo::Error::InvalidURI) end end context 'mongodb://[::1]:65536' do let(:string) { 'mongodb://[::1]:65536' } it 'raises an error' do expect { uri }.to raise_error(Mongo::Error::InvalidURI) end end context 'mongodb://[::1]:65536/' do let(:string) { 'mongodb://[::1]:65536/' 
} it 'raises an error' do expect { uri }.to raise_error(Mongo::Error::InvalidURI) end end context 'mongodb://[::1]:foo' do let(:string) { 'mongodb://[::1]:foo' } it 'raises an error' do expect { uri }.to raise_error(Mongo::Error::InvalidURI) end end context 'mongodb://example.com/?w' do let(:string) { 'mongodb://example.com/?w' } it 'raises an error' do expect { uri }.to raise_error(Mongo::Error::InvalidURI, /Option w has no value/) end end context 'equal sign in option value' do let(:string) { 'mongodb://example.com/?authmechanismproperties=foo:a=b&appname=test' } it 'is allowed' do expect(uri.uri_options[:auth_mech_properties]).to eq('foo' => 'a=b') end end context 'slash in option value' do let(:string) { 'mongodb://example.com/?tlsCAFile=a/b' } it 'returns a Mongo::URI object' do expect(uri).to be_a(Mongo::URI) end it 'parses correctly' do expect(uri.servers).to eq(['example.com']) expect(uri.uri_options[:ssl_ca_cert]).to eq('a/b') end end context 'numeric value in a string option' do let(:string) { 'mongodb://example.com/?appName=1' } it 'returns a Mongo::URI object' do expect(uri).to be_a(Mongo::URI) end it 'sets option to the string value' do expect(uri.uri_options[:app_name]).to eq('1') end end context 'options start with ampersand' do let(:string) { 'mongodb://example.com/?&appName=foo' } it 'returns a Mongo::URI object' do expect(uri).to be_a(Mongo::URI) end it 'parses the options' do expect(uri.uri_options[:app_name]).to eq('foo') end end context 'mongodb://alice:foo:bar@127.0.0.1' do let(:string) { 'mongodb://alice:foo:bar@127.0.0.1' } it 'raises an error' do expect { uri }.to raise_error(Mongo::Error::InvalidURI) end end context 'mongodb://alice@@127.0.0.1' do let(:string) { 'mongodb://alice@@127.0.0.1' } it 'raises an error' do expect { uri }.to raise_error(Mongo::Error::InvalidURI) end end context 'mongodb://alice@foo:bar@127.0.0.1' do let(:string) { 'mongodb://alice@foo:bar@127.0.0.1' } it 'raises an error' do expect { uri }.to raise_error(Mongo::Error::InvalidURI) end end end describe '#initialize' do context 'string is not uri' do let(:string) { 'tyler' } it 'raises an error' do expect { uri }.to raise_error(Mongo::Error::InvalidURI) end end end describe "#to_s" do context "string is a uri" do let(:string) { 'mongodb://localhost:27017' } it "returns the original string" do expect(uri.to_s).to eq(string) end end end describe '#servers' do let(:string) { "#{scheme}#{servers}" } context 'single server' do let(:servers) { 'localhost' } it 'returns an array with the parsed server' do expect(uri.servers).to eq([servers]) end include_examples "roundtrips string" end context 'single server with port' do let(:servers) { 'localhost:27017' } it 'returns an array with the parsed server' do expect(uri.servers).to eq([servers]) end include_examples "roundtrips string" end context 'numerical ipv4 server' do let(:servers) { '127.0.0.1' } it 'returns an array with the parsed server' do expect(uri.servers).to eq([servers]) end include_examples "roundtrips string" end context 'numerical ipv6 server' do let(:servers) { '[::1]:27107' } it 'returns an array with the parsed server' do expect(uri.servers).to eq([servers]) end include_examples "roundtrips string" end context 'unix socket server' do let(:servers) { '%2Ftmp%2Fmongodb-27017.sock' } it 'returns an array with the parsed server' do expect(uri.servers).to eq([URI::DEFAULT_PARSER.unescape(servers)]) end include_examples "roundtrips string" end context 'multiple servers' do let(:servers) { 'localhost,127.0.0.1' } it 'returns an array with 
the parsed servers' do expect(uri.servers).to eq(servers.split(',')) end include_examples "roundtrips string" end context 'multiple servers with ports' do let(:servers) { '127.0.0.1:27107,localhost:27018' } it 'returns an array with the parsed servers' do expect(uri.servers).to eq(servers.split(',')) end include_examples "roundtrips string" end end describe '#client_options' do let(:db) { 'dummy_db' } let(:servers) { 'localhost' } let(:string) { "#{scheme}#{credentials}@#{servers}/#{db}" } let(:user) { 'tyler' } let(:password) { 's3kr4t' } let(:credentials) { "#{user}:#{password}" } let(:options) do uri.client_options end it 'includes the database in the options' do expect(options[:database]).to eq('dummy_db') end it 'includes the user in the options' do expect(options[:user]).to eq(user) end it 'includes the password in the options' do expect(options[:password]).to eq(password) end include_examples "roundtrips string" end describe '#credentials' do let(:servers) { 'localhost' } let(:string) { "#{scheme}#{credentials}@#{servers}" } let(:user) { 'tyler' } context 'username provided' do let(:credentials) { "#{user}:" } it 'returns the username' do expect(uri.credentials[:user]).to eq(user) end it "roundtrips string without the colon" do expect(uri.to_s).to eq("mongodb://tyler@localhost") end end context 'username and password provided' do let(:password) { 's3kr4t' } let(:credentials) { "#{user}:#{password}" } it 'returns the username' do expect(uri.credentials[:user]).to eq(user) end it 'returns the password' do expect(uri.credentials[:password]).to eq(password) end include_examples "roundtrips string" end end describe '#database' do let(:servers) { 'localhost' } let(:string) { "#{scheme}#{servers}/#{db}" } let(:db) { 'auth-db' } context 'database provided' do it 'returns the database name' do expect(uri.database).to eq(db) end include_examples "roundtrips string" end end describe '#uri_options' do let(:servers) { 'localhost' } let(:string) { "#{scheme}#{servers}/?#{options}" } context 'when no options were provided' do let(:string) { "#{scheme}#{servers}" } it 'returns an empty hash' do expect(uri.uri_options).to be_empty end include_examples "roundtrips string" end context 'write concern options provided' do context 'numerical w value' do let(:options) { 'w=1' } let(:concern) { Mongo::Options::Redacted.new(:w => 1)} it 'sets the write concern options' do expect(uri.uri_options[:write_concern]).to eq(concern) end it 'sets the options on a client created with the uri' do client = new_local_client_nmio(string) expect(client.options[:write_concern]).to eq(concern) end include_examples "roundtrips string" end context 'w=majority' do let(:options) { 'w=majority' } let(:concern) { Mongo::Options::Redacted.new(:w => :majority) } it 'sets the write concern options' do expect(uri.uri_options[:write_concern]).to eq(concern) end it 'sets the options on a client created with the uri' do client = new_local_client_nmio(string) expect(client.options[:write_concern]).to eq(concern) end include_examples "roundtrips string" end context 'journal' do let(:options) { 'journal=true' } let(:concern) { Mongo::Options::Redacted.new(:j => true) } it 'sets the write concern options' do expect(uri.uri_options[:write_concern]).to eq(concern) end it 'sets the options on a client created with the uri' do client = new_local_client_nmio(string) expect(client.options[:write_concern]).to eq(concern) end include_examples "roundtrips string" end context 'fsync' do let(:options) { 'fsync=true' } let(:concern) { 
Mongo::Options::Redacted.new(:fsync => true) } it 'sets the write concern options' do expect(uri.uri_options[:write_concern]).to eq(concern) end it 'sets the options on a client created with the uri' do client = new_local_client_nmio(string) expect(client.options[:write_concern]).to eq(concern) end include_examples "roundtrips string" end context 'wtimeoutMS' do let(:timeout) { 1234 } let(:options) { "w=2&wtimeoutMS=#{timeout}" } let(:concern) { Mongo::Options::Redacted.new(:w => 2, :wtimeout => timeout) } it 'sets the write concern options' do expect(uri.uri_options[:write_concern]).to eq(concern) end it 'sets the options on a client created with the uri' do client = new_local_client_nmio(string) expect(client.options[:write_concern]).to eq(concern) end it "roundtrips the string with camelCase" do expect(uri.to_s).to eq("mongodb://localhost/?w=2&wTimeoutMS=1234") end end end context 'read preference option provided' do let(:options) { "readPreference=#{mode}" } context 'primary' do let(:mode) { 'primary' } let(:read) { Mongo::Options::Redacted.new(:mode => :primary) } it 'sets the read preference' do expect(uri.uri_options[:read]).to eq(read) end it 'sets the options on a client created with the uri' do client = new_local_client_nmio(string) expect(client.options[:read]).to eq(read) end include_examples "roundtrips string" end context 'primaryPreferred' do let(:mode) { 'primaryPreferred' } let(:read) { Mongo::Options::Redacted.new(:mode => :primary_preferred) } it 'sets the read preference' do expect(uri.uri_options[:read]).to eq(read) end it 'sets the options on a client created with the uri' do client = new_local_client_nmio(string) expect(client.options[:read]).to eq(read) end include_examples "roundtrips string" end context 'secondary' do let(:mode) { 'secondary' } let(:read) { Mongo::Options::Redacted.new(:mode => :secondary) } it 'sets the read preference' do expect(uri.uri_options[:read]).to eq(read) end it 'sets the options on a client created with the uri' do client = new_local_client_nmio(string) expect(client.options[:read]).to eq(read) end include_examples "roundtrips string" end context 'secondaryPreferred' do let(:mode) { 'secondaryPreferred' } let(:read) { Mongo::Options::Redacted.new(:mode => :secondary_preferred) } it 'sets the read preference' do expect(uri.uri_options[:read]).to eq(read) end it 'sets the options on a client created with the uri' do client = new_local_client_nmio(string) expect(client.options[:read]).to eq(read) end include_examples "roundtrips string" end context 'nearest' do let(:mode) { 'nearest' } let(:read) { Mongo::Options::Redacted.new(:mode => :nearest) } it 'sets the read preference' do expect(uri.uri_options[:read]).to eq(read) end it 'sets the options on a client created with the uri' do client = new_local_client_nmio(string) expect(client.options[:read]).to eq(read) end include_examples "roundtrips string" end end context 'read preference tags provided' do context 'single read preference tag set' do let(:options) do 'readPreferenceTags=dc:ny,rack:1' end let(:read) do Mongo::Options::Redacted.new(:tag_sets => [{ 'dc' => 'ny', 'rack' => '1' }]) end it 'sets the read preference tag set' do expect(uri.uri_options[:read]).to eq(read) end it 'sets the options on a client created with the uri' do client = new_local_client_nmio(string) expect(client.options[:read]).to eq(read) end include_examples "roundtrips string" end context 'multiple read preference tag sets' do let(:options) do 'readPreferenceTags=dc:ny&readPreferenceTags=dc:bos' end let(:read) 
do Mongo::Options::Redacted.new(:tag_sets => [{ 'dc' => 'ny' }, { 'dc' => 'bos' }]) end it 'sets the read preference tag sets' do expect(uri.uri_options[:read]).to eq(read) end it 'sets the options on a client created with the uri' do client = new_local_client_nmio(string) expect(client.options[:read]).to eq(read) end include_examples "roundtrips string" end end context 'read preference max staleness option provided' do let(:options) do 'readPreference=Secondary&maxStalenessSeconds=120' end let(:read) do Mongo::Options::Redacted.new(mode: :secondary, :max_staleness => 120) end it 'sets the read preference max staleness in seconds' do expect(uri.uri_options[:read]).to eq(read) end it 'sets the options on a client created with the uri' do client = new_local_client_nmio(string) expect(client.options[:read]).to eq(read) end context 'when the read preference and max staleness combination is invalid' do context 'when max staleness is combined with read preference mode primary' do let(:options) do 'readPreference=primary&maxStalenessSeconds=120' end it 'raises an exception when read preference is accessed on the client' do client = new_local_client_nmio(string) expect { client.server_selector }.to raise_exception(Mongo::Error::InvalidServerPreference) end end context 'when the max staleness value is too small' do let(:options) do 'readPreference=secondary&maxStalenessSeconds=89' end it 'does not raise an exception and drops the option' do client = new_local_client_nmio(string) expect(client.read_preference).to eq(BSON::Document.new(mode: :secondary)) end it "returns the string without the dropped option" do expect(uri.to_s).to eq("mongodb://localhost/?readPreference=secondary") end end end end context 'replica set option provided' do let(:rs_name) { 'dummy_rs' } let(:options) { "replicaSet=#{rs_name}" } it 'sets the replica set option' do expect(uri.uri_options[:replica_set]).to eq(rs_name) end it 'sets the options on a client created with the uri' do client = new_local_client_nmio(string) expect(client.options[:replica_set]).to eq(rs_name) end include_examples "roundtrips string" end context 'auth mechanism provided' do let(:string) { "#{scheme}#{credentials}@#{servers}/?#{options}" } let(:user) { 'tyler' } let(:password) { 's3kr4t' } let(:credentials) { "#{user}:#{password}" } let(:options) { "authMechanism=#{mechanism}" } context 'plain' do let(:mechanism) { 'PLAIN' } let(:expected) { :plain } it 'sets the auth mechanism to :plain' do expect(uri.uri_options[:auth_mech]).to eq(expected) end it 'sets the options on a client created with the uri' do client = new_local_client_nmio(string) expect(client.options[:auth_mech]).to eq(expected) end it 'is case-insensitive' do client = new_local_client_nmio(string.downcase) expect(client.options[:auth_mech]).to eq(expected) end include_examples "roundtrips string" context 'when mechanism_properties are provided' do let(:options) { "authMechanism=#{mechanism}&authMechanismProperties=CANONICALIZE_HOST_NAME:true" } it 'does not allow a client to be created' do expect { new_local_client_nmio(string) }.to raise_error(Mongo::Auth::InvalidConfiguration, /mechanism_properties are not supported/) end end end context 'mongodb-cr' do let(:mechanism) { 'MONGODB-CR' } let(:expected) { :mongodb_cr } it 'sets the auth mechanism to :mongodb_cr' do expect(uri.uri_options[:auth_mech]).to eq(expected) end it 'sets the options on a client created with the uri' do client = new_local_client_nmio(string) expect(client.options[:auth_mech]).to eq(expected) end it 'is 
case-insensitive' do client = new_local_client_nmio(string.downcase) expect(client.options[:auth_mech]).to eq(expected) end include_examples "roundtrips string" context 'when mechanism_properties are provided' do let(:options) { "authMechanism=#{mechanism}&authMechanismProperties=CANONICALIZE_HOST_NAME:true" } it 'does not allow a client to be created' do expect { new_local_client_nmio(string) }.to raise_error(Mongo::Auth::InvalidConfiguration, /mechanism_properties are not supported/) end end end context 'gssapi' do require_mongo_kerberos let(:mechanism) { 'GSSAPI' } let(:expected) { :gssapi } let(:client) { new_local_client_nmio(string) } it 'sets the auth mechanism to :gssapi' do expect(uri.uri_options[:auth_mech]).to eq(expected) end it 'sets the options on a client created with the uri' do expect(client.options[:auth_mech]).to eq(expected) end it 'is case-insensitive' do client = new_local_client_nmio(string.downcase) expect(client.options[:auth_mech]).to eq(expected) end include_examples "roundtrips string" context 'when auth source is invalid' do let(:options) { "authMechanism=#{mechanism}&authSource=foo" } it 'does not allow a client to be created' do expect { client }.to raise_error(Mongo::Auth::InvalidConfiguration, /invalid auth source/) end end context 'when mechanism_properties are provided' do let(:options) { "authMechanism=#{mechanism}&authMechanismProperties=SERVICE_NAME:other,CANONICALIZE_HOST_NAME:true" } it 'sets the options on a client created with the uri' do expect(client.options[:auth_mech_properties]).to eq({ 'canonicalize_host_name' => true, 'service_name' => 'other' }) end include_examples "roundtrips string" context 'when a mapping value is missing' do let(:options) { "authMechanism=#{mechanism}&authMechanismProperties=SERVICE_NAME:,CANONICALIZE_HOST_NAME:" } it 'sets the options to defaults' do expect(client.options[:auth_mech_properties]).to eq({ 'service_name' => 'mongodb' }) end it "roundtrips the string" do expect(uri.to_s).to eq("mongodb://tyler:s3kr4t@localhost/?authMechanism=GSSAPI") end end context 'when a mapping value is missing but another is present' do let(:options) { "authMechanism=#{mechanism}&authMechanismProperties=SERVICE_NAME:foo,CANONICALIZE_HOST_NAME:" } it 'only sets the present value' do expect(client.options[:auth_mech_properties]).to eq({ 'service_name' => 'foo' }) end it "roundtrips the string" do expect(uri.to_s).to eq("mongodb://tyler:s3kr4t@localhost/?authMechanism=GSSAPI&authMechanismProperties=SERVICE_NAME:foo") end end end end context 'scram-sha-1' do let(:mechanism) { 'SCRAM-SHA-1' } let(:expected) { :scram } it 'sets the auth mechanism to :scram' do expect(uri.uri_options[:auth_mech]).to eq(expected) end it 'sets the options on a client created with the uri' do client = new_local_client_nmio(string) expect(client.options[:auth_mech]).to eq(expected) end it 'is case-insensitive' do client = new_local_client_nmio(string.downcase) expect(client.options[:auth_mech]).to eq(expected) end include_examples "roundtrips string" context 'when mechanism_properties are provided' do let(:options) { "authMechanism=#{mechanism}&authMechanismProperties=CANONICALIZE_HOST_NAME:true" } it 'does not allow a client to be created' do expect { new_local_client_nmio(string) }.to raise_error(Mongo::Auth::InvalidConfiguration, /mechanism_properties are not supported/) end end end context 'mongodb-x509' do let(:mechanism) { 'MONGODB-X509' } let(:expected) { :mongodb_x509 } let(:credentials) { user } it 'sets the auth mechanism to :mongodb_x509' do 
expect(uri.uri_options[:auth_mech]).to eq(expected) end it 'sets the options on a client created with the uri' do client = new_local_client_nmio(string) expect(client.options[:auth_mech]).to eq(expected) end it 'is case-insensitive' do client = new_local_client_nmio(string.downcase) expect(client.options[:auth_mech]).to eq(expected) end include_examples "roundtrips string" context 'when auth source is invalid' do let(:options) { "authMechanism=#{mechanism}&authSource=foo" } it 'does not allow a client to be created' do expect { new_local_client_nmio(string) }.to raise_error(Mongo::Auth::InvalidConfiguration, /invalid auth source/) end end context 'when a username is not provided' do let(:string) { "#{scheme}#{servers}/?#{options}" } it 'recognizes the mechanism with no username' do client = new_local_client_nmio(string.downcase) expect(client.options[:auth_mech]).to eq(expected) expect(client.options[:user]).to be_nil end include_examples "roundtrips string" end context 'when a password is provided' do let(:credentials) { "#{user}:#{password}"} let(:password) { 's3kr4t' } it 'does not allow a client to be created' do expect do new_local_client_nmio(string) end.to raise_error(Mongo::Auth::InvalidConfiguration, /Password is not supported/) end end context 'when mechanism_properties are provided' do let(:options) { "authMechanism=#{mechanism}&authMechanismProperties=CANONICALIZE_HOST_NAME:true" } it 'does not allow a client to be created' do expect { new_local_client_nmio(string) }.to raise_error(Mongo::Auth::InvalidConfiguration, /mechanism_properties are not supported/) end end end end context 'auth mechanism is not provided' do let(:string) { "#{scheme}#{credentials}@#{servers}/" } context 'with no credentials' do let(:string) { "#{scheme}#{servers}" } it 'sets user and password as nil' do expect(uri.credentials[:user]).to be_nil expect(uri.credentials[:password]).to be_nil end it 'sets the options on a client created with the uri' do client = new_local_client_nmio(string) expect(client.options[:user]).to be_nil expect(client.options[:password]).to be_nil end include_examples "roundtrips string" end context 'with empty credentials' do let(:credentials) { '' } it 'sets user as an empty string and password as nil' do expect(uri.credentials[:user]).to eq('') expect(uri.credentials[:password]).to be_nil end it 'does not allow a client to be created with default auth mechanism' do expect do new_local_client_nmio(string) end.to raise_error(Mongo::Auth::InvalidConfiguration, /Empty username is not supported/) end end end context 'auth source provided' do let(:options) { "authSource=#{source}" } context 'regular db' do let(:source) { 'foo' } it 'sets the auth source to the database' do expect(uri.uri_options[:auth_source]).to eq(source) end it 'sets the options on a client created with the uri' do client = new_local_client_nmio(string) expect(client.options[:auth_source]).to eq(source) end include_examples "roundtrips string" end end context 'auth mechanism properties provided' do shared_examples 'sets options in the expected manner' do it 'preserves case in auth mechanism properties returned from URI' do expect(uri.uri_options[:auth_mech_properties]).to eq(expected_uri_options) end it 'downcases auth mechanism properties keys in client options' do client = new_local_client_nmio(string) expect(client.options[:auth_mech_properties]).to eq(expected_client_options) end end context 'service_name' do let(:options) do "authMechanismProperties=SERVICE_name:#{service_name}" end let(:service_name) { 'foo' 
      }

      let(:expected_uri_options) do
        Mongo::Options::Redacted.new(
          SERVICE_name: service_name,
        )
      end

      let(:expected_client_options) do
        Mongo::Options::Redacted.new(
          service_name: service_name,
        )
      end

      include_examples 'sets options in the expected manner'
      include_examples "roundtrips string"
    end

    context 'canonicalize_host_name' do
      let(:options) do
        "authMechanismProperties=CANONICALIZE_HOST_name:#{canonicalize_host_name}"
      end

      let(:canonicalize_host_name) { 'true' }

      let(:expected_uri_options) do
        Mongo::Options::Redacted.new(
          CANONICALIZE_HOST_name: true,
        )
      end

      let(:expected_client_options) do
        Mongo::Options::Redacted.new(
          canonicalize_host_name: true,
        )
      end

      include_examples 'sets options in the expected manner'
      include_examples "roundtrips string"
    end

    context 'service_realm' do
      let(:options) do
        "authMechanismProperties=SERVICE_realm:#{service_realm}"
      end

      let(:service_realm) { 'dumdum' }

      let(:expected_uri_options) do
        Mongo::Options::Redacted.new(
          SERVICE_realm: service_realm,
        )
      end

      let(:expected_client_options) do
        Mongo::Options::Redacted.new(
          service_realm: service_realm,
        )
      end

      include_examples 'sets options in the expected manner'
      include_examples "roundtrips string"
    end

    context 'multiple properties' do
      let(:options) do
        "authMechanismProperties=SERVICE_realm:#{service_realm}," +
          "CANONICALIZE_HOST_name:#{canonicalize_host_name}," +
          "SERVICE_name:#{service_name}"
      end

      let(:service_name) { 'foo' }
      let(:canonicalize_host_name) { 'true' }
      let(:service_realm) { 'dumdum' }

      let(:expected_uri_options) do
        Mongo::Options::Redacted.new(
          SERVICE_name: service_name,
          CANONICALIZE_HOST_name: true,
          SERVICE_realm: service_realm,
        )
      end

      let(:expected_client_options) do
        Mongo::Options::Redacted.new(
          service_name: service_name,
          canonicalize_host_name: true,
          service_realm: service_realm,
        )
      end

      include_examples 'sets options in the expected manner'
      include_examples "roundtrips string"
    end
  end

  context 'connectTimeoutMS' do
    let(:options) { "connectTimeoutMS=4567" }

    it 'sets the connect timeout' do
      expect(uri.uri_options[:connect_timeout]).to eq(4.567)
    end

    include_examples "roundtrips string"
  end

  context 'socketTimeoutMS' do
    let(:options) { "socketTimeoutMS=8910" }

    it 'sets the socket timeout' do
      expect(uri.uri_options[:socket_timeout]).to eq(8.910)
    end

    include_examples "roundtrips string"
  end

  context 'when providing serverSelectionTimeoutMS' do
    let(:options) { "serverSelectionTimeoutMS=3561" }

    it 'sets the server selection timeout' do
      expect(uri.uri_options[:server_selection_timeout]).to eq(3.561)
    end

    include_examples "roundtrips string"
  end

  context 'when providing localThresholdMS' do
    let(:options) { "localThresholdMS=3561" }

    it 'sets the local threshold' do
      expect(uri.uri_options[:local_threshold]).to eq(3.561)
    end

    include_examples "roundtrips string"
  end

  context 'when providing maxPoolSize' do
    let(:max_pool_size) { 10 }
    let(:options) { "maxPoolSize=#{max_pool_size}" }

    it 'sets the max pool size option' do
      expect(uri.uri_options[:max_pool_size]).to eq(max_pool_size)
    end

    include_examples "roundtrips string"
  end

  context 'when providing minPoolSize' do
    let(:min_pool_size) { 5 }
    let(:options) { "minPoolSize=#{min_pool_size}" }

    it 'sets the min pool size option' do
      expect(uri.uri_options[:min_pool_size]).to eq(min_pool_size)
    end

    include_examples "roundtrips string"
  end

  context 'when providing srvMaxHosts with non-SRV URI' do
    let(:srv_max_hosts) { 5 }
    let(:options) { "srvMaxHosts=#{srv_max_hosts}" }

    it 'raises an error' do
      lambda do
        uri
      end.should raise_error(Mongo::Error::InvalidURI)
    end
  end
  context 'when providing srvServiceName with non-SRV URI' do
    let(:srv_service_name) { "customname" }
    let(:options) { "srvServiceName=#{srv_service_name}" }

    it 'raises an error' do
      lambda do
        uri
      end.should raise_error(Mongo::Error::InvalidURI)
    end
  end

  context 'when providing waitQueueTimeoutMS' do
    let(:wait_queue_timeout) { 500 }
    let(:options) { "waitQueueTimeoutMS=#{wait_queue_timeout}" }

    it 'sets the wait queue timeout option' do
      expect(uri.uri_options[:wait_queue_timeout]).to eq(0.5)
    end

    include_examples "roundtrips string"
  end

  context 'ssl' do
    let(:options) { "ssl=#{ssl}" }

    context 'true' do
      let(:ssl) { true }

      it 'sets the ssl option to true' do
        expect(uri.uri_options[:ssl]).to be true
      end

      it "returns the ssl as tls from to_s" do
        expect(uri.to_s).to eq("mongodb://localhost/?tls=true")
      end
    end

    context 'false' do
      let(:ssl) { false }

      it 'sets the ssl option to false' do
        expect(uri.uri_options[:ssl]).to be false
      end

      it "returns the ssl as tls from to_s" do
        expect(uri.to_s).to eq("mongodb://localhost/?tls=false")
      end
    end
  end

  context 'grouped and non-grouped options provided' do
    let(:options) { 'w=1&ssl=true' }

    it 'does not overshadow top level options' do
      expect(uri.uri_options).not_to be_empty
    end

    it "returns the ssl as tls from to_s" do
      expect(uri.to_s).to eq("mongodb://localhost/?w=1&tls=true")
    end
  end

  context 'when an invalid option is provided' do
    let(:options) { 'invalidOption=10' }

    let(:uri_options) do
      uri.uri_options
    end

    it 'does not raise an exception' do
      expect(uri_options).to be_empty
    end

    context 'when an invalid option is combined with valid options' do
      let(:options) { 'invalidOption=10&waitQueueTimeoutMS=500&ssl=true' }

      it 'does not raise an exception' do
        expect(uri_options).not_to be_empty
      end

      it 'sets the valid options' do
        expect(uri_options[:wait_queue_timeout]).to eq(0.5)
        expect(uri_options[:ssl]).to be true
      end
    end
  end

  context 'when an app name option is provided' do
    let(:options) { "appname=uri_test" }

    it 'sets the app name on the client' do
      client = new_local_client_nmio(string)
      expect(client.options[:app_name]).to eq('uri_test')
    end

    it "roundtrips the string with camelCase" do
      expect(uri.to_s).to eq("mongodb://localhost/?appName=uri_test")
    end
  end

  context 'when a supported compressors option is provided' do
    let(:options) { "compressors=zlib" }

    it 'sets the compressors as an array on the client' do
      client = new_local_client_nmio(string)
      expect(client.options[:compressors]).to eq(['zlib'])
    end

    include_examples "roundtrips string"
  end

  context 'when a non-supported compressors option is provided' do
    let(:options) { "compressors=snoopy" }

    let(:client) do
      new_local_client_nmio(string)
    end

    it 'sets no compressors on the client and warns' do
      expect(Mongo::Logger.logger).to receive(:warn)
      expect(client.options[:compressors]).to be_nil
    end

    include_examples "roundtrips string"
  end

  context 'when a zlibCompressionLevel option is provided' do
    let(:options) { "zlibCompressionLevel=6" }

    it 'sets the zlib compression level on the client' do
      client = new_local_client_nmio(string)
      expect(client.options[:zlib_compression_level]).to eq(6)
    end

    include_examples "roundtrips string"
  end
  end
end
mongo-ruby-driver-2.21.3/spec/mongo/utils_spec.rb000066400000000000000000000016271505113246500217460ustar00rootroot00000000000000
# frozen_string_literal: true
# rubocop:todo all

require 'lite_spec_helper'

describe Mongo::Utils do
  describe '#shallow_symbolize_keys' do
    it 'symbolizes' do
      described_class.shallow_symbolize_keys(
        'foo' => 'bar',
        'aKey' => 'aValue',
        'a_key' => 'a_value',
        key: :value,
      ).should == {
        foo: 'bar',
        aKey: 'aValue',
        a_key: 'a_value',
        key: :value,
      }
    end
  end

  describe '#shallow_camelize_keys' do
    it 'camelizes' do
      described_class.shallow_camelize_keys(
        'foo' => 'bar',
        'aKey' => 'aValue',
        'aa_key' => 'a_value',
        key: :value,
        sKey: :sValue,
        us_key: :us_value,
      ).should == {
        'foo' => 'bar',
        'aKey' => 'aValue',
        'aaKey' => 'a_value',
        'key' => :value,
        'sKey' => :sValue,
        'usKey' => :us_value,
      }
    end
  end
end
mongo-ruby-driver-2.21.3/spec/mongo/write_concern/000077500000000000000000000000001505113246500221025ustar00rootroot00000000000000
mongo-ruby-driver-2.21.3/spec/mongo/write_concern/acknowledged_spec.rb000066400000000000000000000022521505113246500260710ustar00rootroot00000000000000
# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe Mongo::WriteConcern::Acknowledged do

  describe '#acknowledged?' do

    let(:concern) do
      described_class.new(:w => :majority)
    end

    it 'returns true' do
      expect(concern.acknowledged?).to be(true)
    end
  end

  describe '#get_last_error' do

    let(:get_last_error) do
      concern.get_last_error
    end

    context 'when the options are symbols' do

      let(:concern) do
        described_class.new(:w => :majority)
      end

      it 'converts the values to strings' do
        expect(get_last_error).to eq(:getlasterror => 1, :w => 'majority')
      end
    end

    context 'when the options are strings' do

      let(:concern) do
        described_class.new(:w => 'majority')
      end

      it 'keeps the values as strings' do
        expect(get_last_error).to eq(:getlasterror => 1, :w => 'majority')
      end
    end

    context 'when the options are numbers' do

      let(:concern) do
        described_class.new(:w => 3)
      end

      it 'keeps the values as numbers' do
        expect(get_last_error).to eq(:getlasterror => 1, :w => 3)
      end
    end
  end
end
mongo-ruby-driver-2.21.3/spec/mongo/write_concern/unacknowledged_spec.rb000066400000000000000000000007521505113246500264370ustar00rootroot00000000000000
# frozen_string_literal: true
# rubocop:todo all

require 'spec_helper'

describe Mongo::WriteConcern::Unacknowledged do

  let(:concern) do
    described_class.new(:w => 0)
  end

  describe '#get_last_error' do

    it 'returns nil' do
      expect(concern.get_last_error).to be_nil
    end
  end

  describe '#acknowledged?' do
    let(:concern) do
      described_class.new(:w => 0)
    end

    it 'returns false' do
      expect(concern.acknowledged?).to be(false)
    end
  end
end
mongo-ruby-driver-2.21.3/spec/mongo/write_concern_spec.rb000066400000000000000000000126051505113246500234450ustar00rootroot00000000000000
# frozen_string_literal: true
# rubocop:todo all

require 'lite_spec_helper'

describe Mongo::WriteConcern do

  describe '#get' do

    let(:wc) { Mongo::WriteConcern.get(options) }

    context 'when no options are set' do

      let(:options) do
        { }
      end

      it 'returns an Acknowledged write concern object' do
        expect(Mongo::WriteConcern.get(options)).to be_a(Mongo::WriteConcern::Acknowledged)
      end
    end

    context 'when the value is a WriteConcern object' do

      let(:value) do
        Mongo::WriteConcern.get({})
      end

      it 'returns the object' do
        expect(Mongo::WriteConcern.get(value)).to be(value)
      end
    end

    context 'when the value is nil' do

      it 'returns nil' do
        expect(Mongo::WriteConcern.get(nil)).to be(nil)
      end
    end

    context 'when w is 0' do

      context 'when no other options are provided' do

        let(:options) do
          { w: 0 }
        end

        it 'returns an Unacknowledged write concern object' do
          expect(Mongo::WriteConcern.get(options)).to be_a(Mongo::WriteConcern::Unacknowledged)
        end
      end

      context 'when j is also provided' do

        context 'when j is false' do

          let(:options) do
            { w: 0, j: false }
          end

          it 'returns an Unacknowledged write concern object' do
            expect(Mongo::WriteConcern.get(options)).to be_a(Mongo::WriteConcern::Unacknowledged)
          end
        end

        context 'when j is true' do

          let(:options) do
            { w: 0, j: true }
          end

          it 'raises an exception' do
            expect { Mongo::WriteConcern.get(options) }.to raise_error(Mongo::Error::InvalidWriteConcern)
          end

          context 'when j is given as a string' do

            let(:options) do
              { w: 0, 'j' => true }
            end

            it 'raises an exception' do
              expect { Mongo::WriteConcern.get(options) }.to raise_error(Mongo::Error::InvalidWriteConcern)
            end
          end
        end

        context 'when fsync is true' do

          let(:options) do
            { w: 0, fsync: true }
          end

          it 'raises an exception' do
            expect { Mongo::WriteConcern.get(options) }.to raise_error(Mongo::Error::InvalidWriteConcern)
          end
        end
      end

      context 'when wtimeout is also provided' do

        let(:options) do
          { w: 0, wtimeout: 100 }
        end

        it 'returns an Unacknowledged write concern object' do
          expect(Mongo::WriteConcern.get(options)).to be_a(Mongo::WriteConcern::Unacknowledged)
        end
      end
    end

    context 'when w is less than 0' do

      let(:options) do
        { w: -1 }
      end

      it 'raises an exception' do
        expect { Mongo::WriteConcern.get(options) }.to raise_error(Mongo::Error::InvalidWriteConcern)
      end
    end

    context 'when w is greater than 0' do

      let(:options) do
        { w: 2, j: true }
      end

      it 'returns an Acknowledged write concern object' do
        expect(Mongo::WriteConcern.get(options)).to be_a(Mongo::WriteConcern::Acknowledged)
      end

      it 'sets the options' do
        expect(Mongo::WriteConcern.get(options).options).to eq(options)
      end
    end

    context 'when w is a string' do

      let(:options) do
        { w: 'majority', j: true }
      end

      it 'returns an Acknowledged write concern object' do
        expect(Mongo::WriteConcern.get(options)).to be_a(Mongo::WriteConcern::Acknowledged)
      end

      it 'sets the options' do
        expect(Mongo::WriteConcern.get(options).options).to eq(options)
      end
    end

    context 'when w is a symbol' do

      let(:options) do
        { w: :majority, j: true }
      end

      it 'returns an Acknowledged write concern object' do
        expect(Mongo::WriteConcern.get(options)).to be_a(Mongo::WriteConcern::Acknowledged)
      end

      it 'sets w to a string' do
        expect(Mongo::WriteConcern.get(options).options[:w]).to eq('majority')
      end
    end

    context 'when options are provided with string keys' do

      context 'acknowledged write concern' do

        let(:options) do
          { 'w' => 2, 'j' => true }
        end

        it 'converts keys to symbols' do
          expect(wc).to be_a(Mongo::WriteConcern::Acknowledged)
          expect(wc.options[:w]).to eq(2)
          expect(wc.options[:j]).to be true
        end
      end

      context 'unacknowledged write concern' do

        let(:options) do
          { 'w' => 0 }
        end

        it 'converts keys to symbols' do
          expect(wc).to be_a(Mongo::WriteConcern::Unacknowledged)
          expect(wc.options[:w]).to eq(0)
        end

        context 'and j is true' do

          let(:options) do
            { 'w' => 0, j: true }
          end

          it 'raises an exception' do
            expect do
              wc
            end.to raise_error(Mongo::Error::InvalidWriteConcern, /:j cannot be true when :w is 0/)
          end
        end
      end
    end

    context 'when :journal option is given' do

      let(:options) do
        { 'w' => 1, journal: true }
      end

      it 'raises an exception' do
        expect do
          wc
        end.to raise_error(Mongo::Error::InvalidWriteConcern, /use :j for journal/)
      end
    end
  end
end
mongo-ruby-driver-2.21.3/spec/runners/000077500000000000000000000000001505113246500176165ustar00rootroot00000000000000
mongo-ruby-driver-2.21.3/spec/runners/auth.rb000066400000000000000000000076101505113246500211100ustar00rootroot00000000000000
# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2014-2020 MongoDB Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

RSpec::Matchers.define :have_blank_credentials do
  match do |client|
    # The "null credential" definition in auth spec tests readme at
    # https://github.com/mongodb/specifications/blob/master/source/auth/tests/README.md
    # is as follows:
    #
    # credential: If null, the credential must not be considered configured
    # for the purpose of deciding if the driver should authenticate to the
    # topology.
    #
    # Ruby driver authenticates if :user or :auth_mech client options are set.
    #
    # Note that this is a different test from "no auth-related options are
    # set on the client". Options like password or auth source are preserved
    # by the client if set, but do not trigger authentication.
    %i(auth_mech user).all? do |key|
      client.options[key].nil?
    end
  end

  failure_message do |client|
    "Expected client to have blank credentials, but got the following credentials: \n\n" +
      client.options.inspect
  end
end

module Mongo
  module Auth
    class Spec

      attr_reader :description
      attr_reader :tests

      def initialize(test_path)
        @spec = ::Utils.load_spec_yaml_file(test_path)
        @description = File.basename(test_path)
      end

      def tests
        @tests ||= @spec['tests'].collect do |spec|
          Test.new(spec)
        end
      end
    end

    class Test
      attr_reader :description
      attr_reader :uri_string

      def initialize(spec)
        @spec = spec
        @description = @spec['description']
        @uri_string = @spec['uri']
      end

      def valid?
@spec['valid'] end def credential @spec['credential'] end def client @client ||= ClientRegistry.instance.new_local_client(@spec['uri'], monitoring_io: false) end def expected_credential expected_credential = { 'auth_source' => credential['source'] } if credential['username'] expected_credential['user'] = credential['username'] end if credential['password'] expected_credential['password'] = credential['password'] end if credential['mechanism'] expected_credential['auth_mech'] = expected_auth_mech end if credential['mechanism_properties'] props = Hash[credential['mechanism_properties'].map do |k, v| [k.downcase, v] end] expected_credential['auth_mech_properties'] = props end expected_credential end def actual_client_options client.options.select do |k, _| %w(auth_mech auth_mech_properties auth_source password user).include?(k) end end def actual_user_attributes user = Mongo::Auth::User.new(client.options) attrs = {} { auth_mech_properties: 'auth_mech_properties', auth_source: 'auth_source', name: 'user', password: 'password', mechanism: 'auth_mech', }.each do |attr, field| value = user.send(attr) unless value.nil? || attr == :auth_mech_properties && value == {} attrs[field] = value end end attrs end private def expected_auth_mech Mongo::URI::AUTH_MECH_MAP[credential['mechanism']] end end end end mongo-ruby-driver-2.21.3/spec/runners/change_streams/000077500000000000000000000000001505113246500226015ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/runners/change_streams/outcome.rb000066400000000000000000000024371505113246500246070ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module ChangeStreams class Outcome def initialize(spec) if spec.nil? raise ArgumentError, 'Outcome specification cannot be nil' end if spec.keys.length != 1 raise ArgumentError, 'Outcome must have exactly one key: success or error' end if spec['success'] @documents = spec['success'] elsif spec['error'] @error = spec['error'] else raise ArgumentError, 'Outcome must have exactly one key: success or error' end end attr_reader :documents attr_reader :error def error? !!error end end end end mongo-ruby-driver-2.21.3/spec/runners/change_streams/spec.rb000066400000000000000000000034621505113246500240650ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
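# Usage sketch: runner classes like the one below are typically driven from an
# RSpec spec file. A minimal, hypothetical harness -- the glob path is an
# assumption for illustration -- would look roughly like:
#
#   Dir.glob('spec/spec_tests/data/change_streams/**/*.yml').sort.each do |file|
#     spec = Mongo::ChangeStreams::Spec.new(file)
#     spec.tests.each do |test|
#       # each ChangeStreamsTest exposes #setup_test, #run and #teardown_test
#     end
#   end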
require 'runners/change_streams/test' module Mongo module ChangeStreams class Spec # @return [ String ] description The spec description. # # @since 2.6.0 attr_reader :description # Instantiate the new spec. # # @param [ String ] test_path The path to the file. # # @since 2.6.0 def initialize(test_path) @spec = ::Utils.load_spec_yaml_file(test_path) @description = File.basename(test_path) @spec_tests = @spec['tests'] @collection_name = @spec['collection_name'] @collection2_name = @spec['collection2_name'] @database_name = @spec['database_name'] @database2_name = @spec['database2_name'] end # Get a list of ChangeStreamsTests for each test definition. # # @example Get the list of ChangeStreamsTests. # spec.tests # # @return [ Array ] The list of ChangeStreamsTests. # # @since 2.0.0 def tests @spec_tests.map do |test| ChangeStreamsTest.new(self, test, @collection_name, @collection2_name, @database_name, @database2_name) end end end end end mongo-ruby-driver-2.21.3/spec/runners/change_streams/test.rb000066400000000000000000000155671505113246500241230ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'runners/crud/operation' require 'runners/crud/test_base' require 'runners/change_streams/outcome' module Mongo module ChangeStreams class ChangeStreamsTest < Mongo::CRUD::CRUDTestBase def initialize(crud_spec, test, collection_name, collection2_name, database_name, database2_name) @spec = crud_spec @description = test['description'] @fail_point_command = test['failPoint'] @min_server_version = test['minServerVersion'] @max_server_version = test['maxServerVersion'] @target_type = test['target'] @topologies = test['topology'].map do |topology| {'single' => :single, 'replicaset' => :replica_set, 'sharded' => :sharded}[topology] end @pipeline = test['changeStreamPipeline'] || [] @options = test['changeStreamOptions'] || {} @operations = test['operations'].map do |op| Mongo::CRUD::Operation.new(self, op) end @expectations = test['expectations'] && BSON::ExtJSON.parse_obj(test['expectations'], mode: :bson) @result = BSON::ExtJSON.parse_obj(test['result'], mode: :bson) @collection_name = collection_name @collection2_name = collection2_name @database_name = database_name @database2_name = database2_name @outcome = Outcome.new(test.fetch('result')) end attr_reader :topologies attr_reader :outcome attr_reader :result def setup_test clear_fail_point(global_client) @database = global_client.use(@database_name).database.tap(&:drop) if @database2_name @database2 = global_client.use(@database2_name).database.tap(&:drop) end # Work around https://jira.mongodb.org/browse/SERVER-17397 if ClusterConfig.instance.server_version < '4.4' && global_client.cluster.servers.length > 1 then ::Utils.mongos_each_direct_client do |client| client.database.command(flushRouterConfig: 1) end end @database[@collection_name].create if @collection2_name @database2[@collection2_name].create end client = 
ClientRegistry.instance.global_client('root_authorized').with( database: @database_name, app_name: 'this is used solely to force the new client to create its own cluster') setup_fail_point(client) @subscriber = Mrss::EventSubscriber.new client.subscribe(Mongo::Monitoring::COMMAND, @subscriber) @target = case @target_type when 'client' client when 'database' client.database when 'collection' client[@collection_name] end end def teardown_test if @fail_point_command clear_fail_point(global_client) end end def run change_stream = begin @target.watch(@pipeline, ::Utils.snakeize_hash(@options)) rescue Mongo::Error::OperationFailure::Family => e return { result: { error: { code: e.code, labels: e.labels, }, }, events: events, } end # JRuby must iterate the same object, not switch from # enum to change stream enum = change_stream.to_enum @operations.each do |op| db = case op.spec['database'] when @database_name @database when @database2_name @database2 else raise "Unknown database name #{op.spec['database']}" end collection = db[op.spec['collection']] op.execute(collection) end changes = [] # attempt first next call (catch NonResumableChangeStreamError errors) begin change = enum.next changes << change rescue Mongo::Error::OperationFailure::Family => e return { result: { error: { code: e.code, labels: e.labels, }, }, events: events, } end # continue until changeStream has received as many changes as there # are in result.success if @result['success'] && changes.length < @result['success'].length while changes.length < @result['success'].length changes << enum.next end end change_stream.close { result: { 'success' => changes }, events: events, } end def server_version_satisfied?(client) lower_bound_satisfied?(client) && upper_bound_satisfied?(client) end private IGNORE_COMMANDS = %w(saslStart saslContinue killCursors) def global_client @global_client ||= ClientRegistry.instance.global_client('root_authorized').use('admin') end def events @subscriber.started_events.reduce([]) do |evs, e| next evs if IGNORE_COMMANDS.include?(e.command_name) command = e.command.dup if command['aggregate'] && command['pipeline'] command['pipeline'] = command['pipeline'].map do |stage| if stage['$changeStream'] cs = stage['$changeStream'].dup cs.delete('resumeAfter') stage.merge('$changeStream' => cs) else stage end end end evs << { 'command_started_event' => { 'command' => command, 'command_name' => e.command_name.to_s, 'database_name' => e.database_name, } } end end def server_version(client) @server_version ||= client.database.command(buildInfo: 1).first['version'] end def upper_bound_satisfied?(client) return true unless @max_server_version ClusterConfig.instance.server_version <= @max_server_version end def lower_bound_satisfied?(client) return true unless @min_server_version #@min_server_version <= server_version(client) @min_server_version <= ClusterConfig.instance.fcv_ish end end end end mongo-ruby-driver-2.21.3/spec/runners/cmap.rb000066400000000000000000000404171505113246500210710ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'runners/cmap/verifier' module Mongo module Cmap # Represents a specification. class Spec # @return [ String ] description The spec description. attr_reader :description # @return [ Hash ] pool_options The options for the created pools. attr_reader :pool_options # @return [ Array ] spec_ops The spec operations. attr_reader :spec_ops # @return [ Error | nil ] error The expected error. 
attr_reader :expected_error # @return [ Array ] events The events expected to occur. attr_reader :expected_events # @return [ Array ] events The names of events to ignore. attr_reader :ignore_events # @return [ Mongo::ConnectionPool ] pool The connection pool to use for operations. attr_reader :pool # @return [ Mrss::EventSubscriber ] subscriber The subscriber receiving the CMAP events. attr_reader :subscriber # Instantiate the new spec. # # @param [ String ] test_path The path to the file. def initialize(test_path) @test = ::Utils.load_spec_yaml_file(test_path) @description = @test['description'] @pool_options = process_options(@test['poolOptions']) @spec_ops = @test['operations'].map { |o| Operation.new(self, o) } @expected_error = @test['error'] @expected_events = @test['events'] @ignore_events = @test['ignore'] || [] @fail_point_command = @test['failPoint'] @threads = Set.new process_run_on end attr_reader :pool def setup(server, client, subscriber) @subscriber = subscriber @client = client # The driver always creates pools for known servers. # There is a test which creates and destroys a pool and it only expects # those two events, not the ready event. # This situation cannot happen in normal driver operation, but to # support this test, create the pool manually here. @pool = Mongo::Server::ConnectionPool.new(server, server.options) server.instance_variable_set(:@pool, @pool) configure_fail_point end def run state = {} {}.tap do |result| spec_ops.each do |op| err = op.run(pool, state) if err result['error'] = err break elsif op.name == 'start' @threads << state[op.target] end end result['error'] ||= nil result['events'] = subscriber.published_events.each_with_object([]) do |event, events| next events unless event.is_a?(Mongo::Monitoring::Event::Cmap::Base) event = case event when Mongo::Monitoring::Event::Cmap::PoolCreated { 'type' => 'ConnectionPoolCreated', 'address' => event.address, 'options' => normalize_options(event.options), } when Mongo::Monitoring::Event::Cmap::PoolClosed { 'type' => 'ConnectionPoolClosed', 'address' => event.address, } when Mongo::Monitoring::Event::Cmap::ConnectionCreated { 'type' => 'ConnectionCreated', 'connectionId' => event.connection_id, 'address' => event.address, } when Mongo::Monitoring::Event::Cmap::ConnectionReady { 'type' => 'ConnectionReady', 'connectionId' => event.connection_id, 'address' => event.address, } when Mongo::Monitoring::Event::Cmap::ConnectionClosed { 'type' => 'ConnectionClosed', 'connectionId' => event.connection_id, 'reason' => event.reason, 'address' => event.address, } when Mongo::Monitoring::Event::Cmap::ConnectionCheckOutStarted { 'type' => 'ConnectionCheckOutStarted', 'address' => event.address, } when Mongo::Monitoring::Event::Cmap::ConnectionCheckOutFailed { 'type' => 'ConnectionCheckOutFailed', 'reason' => event.reason, 'address' => event.address, } when Mongo::Monitoring::Event::Cmap::ConnectionCheckedOut { 'type' => 'ConnectionCheckedOut', 'connectionId' => event.connection_id, 'address' => event.address, } when Mongo::Monitoring::Event::Cmap::ConnectionCheckedIn { 'type' => 'ConnectionCheckedIn', 'connectionId' => event.connection_id, 'address' => event.address, } when Mongo::Monitoring::Event::Cmap::PoolCleared { 'type' => 'ConnectionPoolCleared', 'address' => event.address, 'interruptInUseConnections' => event.options[:interrupt_in_use_connections] } when Mongo::Monitoring::Event::Cmap::PoolReady { 'type' => 'ConnectionPoolReady', 'address' => event.address, } else raise "Unhandled event: #{event}" end events << 
event unless @ignore_events.include?(event.fetch('type')) end end ensure disable_fail_points kill_remaining_threads end def disable_fail_points if @fail_point_command @client.command( configureFailPoint: @fail_point_command['configureFailPoint'], mode: 'off' ) end end def kill_remaining_threads @threads.each(&:kill) end def satisfied? cc = ClusterConfig.instance ok = true if @min_server_version ok &&= Gem::Version.new(cc.fcv_ish) >= Gem::Version.new(@min_server_version) end if @max_server_version ok &&= Gem::Version.new(cc.server_version) <= Gem::Version.new(@max_server_version) end if @topologies ok &&= @topologies.include?(cc.topology) end if @oses ok &&= @oses.any? { |os| SpecConfig.instance.send("#{os.to_s}?")} end ok end private # Converts the options used by the Ruby driver to the spec test format. def normalize_options(options) (options || {}).reduce({}) do |opts, kv| case kv.first when :max_idle_time opts['maxIdleTimeMS'] = (kv.last * 1000.0).to_i when :max_size opts['maxPoolSize'] = kv.last when :min_size opts['minPoolSize'] = kv.last when :wait_queue_size opts['waitQueueSize'] = kv.last when :wait_timeout opts['waitQueueTimeoutMS'] = (kv.last * 1000.0).to_i end opts end end # Converts the options given by the spec to the Ruby driver format. # # This method only handles options used by spec tests at the time when # this method was written. Other options are silently dropped. def process_options(options) (options || {}).each_with_object({}) do |kv, opts| case kv.first when 'maxIdleTimeMS' opts[:max_idle_time] = kv.last / 1000.0 when 'maxPoolSize' opts[:max_pool_size] = kv.last when 'minPoolSize' opts[:min_pool_size] = kv.last when 'waitQueueSize' opts[:wait_queue_size] = kv.last when 'waitQueueTimeoutMS' opts[:wait_queue_timeout] = kv.last / 1000.0 when 'backgroundThreadIntervalMS' # The populator busy loops, this option doesn't apply to our driver. when 'maxConnecting' opts[:max_connecting] = kv.last when 'appName' opts[:app_name] = kv.last else raise "Unknown option #{kv.first}" end end end def process_run_on if run_on = @test['runOn'] @min_server_version = run_on.detect do |doc| doc.keys.first == 'minServerVersion' end&.values&.first @max_server_version = run_on.detect do |doc| doc.keys.first == 'maxServerVersion' end&.values&.first @topologies = if topologies = run_on.detect { |doc| doc.keys.first == 'topology' } (topologies['topology'] || {}).map do |topology| { 'replicaset' => :replica_set, 'single' => :single, 'sharded' => :sharded, 'sharded-replicaset' => :sharded, 'load-balanced' => :load_balanced, }[topology].tap do |v| unless v raise "Unknown topology #{topology}" end end end end @oses = if oses = run_on.detect { |doc| doc.keys.first == 'requireOs' } (oses['requireOs'] || {}).map do |os| { 'macos' => :macos, 'linux' => :linux, 'windows' => :windows, }[os].tap do |v| unless v raise "Unknown os #{os}" end end end end end end def configure_fail_point @client.database.command(@fail_point_command) if @fail_point_command end end # Represents an operation in the spec. Operations are sequential. class Operation include RSpec::Mocks::ExampleMethods # @return [ String ] command The name of the operation to run. attr_reader :name # @return [ String | nil ] thread The identifier of the thread to run the operation on (`nil` # signifying the default thread.) attr_reader :thread # @return [ String | nil ] target The name of the started thread. attr_reader :target # @return [ Integer | nil ] ms The number of milliseconds to sleep. 
attr_reader :ms # @return [ String | nil ] label The label for the returned connection. attr_reader :label # @return [ true | false ] interrupt_in_use_connections Whether or not # all connections should be closed on pool clear. attr_reader :interrupt_in_use_connections # @return [ String | nil ] The binding for the connection which should run the operation. attr_reader :connection # @return [ Mongo::ConnectionPool ] pool The connection pool to use for the operation. attr_reader :pool # Create the new Operation. # # @param [ Spec ] spec The Spec object. # @param [ Hash ] operation The operation hash. def initialize(spec, operation) @spec = spec @name = operation['name'] @thread = operation['thread'] @target = operation['target'] @ms = operation['ms'] @label = operation['label'] @connection = operation['connection'] @event = operation['event'] @count = operation['count'] @interrupt_in_use_connections = !!operation['interruptInUseConnections'] end def run(pool, state, main_thread = true) return run_on_thread(state) if thread && main_thread @pool = pool case name when 'start' run_start_op(state) when 'ready' run_ready_op(state) when 'wait' run_wait_op(state) when 'waitForThread' run_wait_for_thread_op(state) when 'waitForEvent' run_wait_for_event_op(state) when 'checkOut' run_checkout_op(state) when 'checkIn' run_checkin_op(state) when 'clear' run_clear_op(state) when 'close' run_close_op(state) else raise "invalid operation: #{name}" end nil # We hard-code the error messages because ours contain information like the address and the # connection ID. rescue Error::PoolClosedError raise unless main_thread { 'type' => 'PoolClosedError', 'message' => 'Attempted to check out a connection from closed connection pool', } rescue Error::ConnectionCheckOutTimeout raise unless main_thread { 'type' => 'WaitQueueTimeoutError', 'message' => 'Timed out while checking out a connection from connection pool', } end private def run_start_op(state) thread_context = ThreadContext.new thread = Thread.start do loop do begin op = thread_context.operations.pop(true) op.run(pool, state, false) rescue ThreadError # Queue is empty end if thread_context.stop? break else sleep 0.1 end end end class << thread attr_accessor :context end thread.context = thread_context state[target] = thread # Allow the thread to begin running. sleep 0.1 # Since we expect exceptions to occur in some cases, we disable the printing of error # messages from the thread if the Ruby version supports it. if state[target].respond_to?(:report_on_exception) state[target].report_on_exception = false end end def run_wait_op(_state) sleep(ms / 1000.0) end def run_wait_for_thread_op(state) if thread = state[target] thread.context.signal_stop thread.join else raise "Expected thread for '#{thread}' but none exists." 
end nil end def run_wait_for_event_op(state) subscriber = @spec.subscriber looped = 0 deadline = Utils.monotonic_time + 3 loop do actual_events = @spec.subscriber.published_events.select do |e| e.class.name.sub(/.*::/, '').sub(/^ConnectionPool/, 'Pool') == @event.sub(/^ConnectionPool/, 'Pool') end if actual_events.length >= @count break end if looped == 1 puts("Waiting for #{@count} #{@event} events (have #{actual_events.length}): #{@spec.description}") end if Utils.monotonic_time > deadline raise "Did not receive #{@count} #{@event} events in time (have #{actual_events.length}): #{@spec.description}" end looped += 1 sleep 0.1 end end def run_checkout_op(state) conn = pool.check_out state[label] = conn if label end def run_checkin_op(state) until state[connection] sleep(0.2) end pool.check_in(state[connection]) end def run_clear_op(state) RSpec::Mocks.with_temporary_scope do allow(pool.server).to receive(:unknown?).and_return(true) pool.clear(lazy: true, interrupt_in_use_connections: interrupt_in_use_connections) end end def run_close_op(state) pool.close end def run_ready_op(state) pool.ready end def run_on_thread(state) if thd = state[thread] thd.context.operations << self # Sleep to allow the other thread to execute the new command. sleep 0.1 else raise "Expected thread for '#{thread}' but none exists." end nil end end class ThreadContext def initialize @operations = Queue.new end def stop? !!@stop end def signal_stop @stop = true end attr_reader :operations end end end mongo-ruby-driver-2.21.3/spec/runners/cmap/000077500000000000000000000000001505113246500205365ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/runners/cmap/verifier.rb000066400000000000000000000025321505113246500227000ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2019-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Cmap class Verifier include RSpec::Matchers def initialize(test_instance) @test_instance = test_instance end attr_reader :test_instance def verify_hashes(actual, expected) expect(expected).to be_a(Hash) expect(actual).to be_a(Hash) actual_modified = actual.dup if actual['reason'] actual_modified['reason'] = actual['reason'].to_s.gsub(/_[a-z]/) { |m| m[1].upcase } end actual.each do |k, v| if expected.key?(k) && expected[k] == 42 && v actual_modified[k] = 42 end end expect(actual_modified.slice(*expected.keys)).to eq(expected) end end end end mongo-ruby-driver-2.21.3/spec/runners/command_monitoring.rb000066400000000000000000000217031505113246500240310ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # RSpec::Matchers.define :match_command_name do |expectation| match do |event| expect(event.command_name.to_s).to eq(expectation.command_name.to_s) end end RSpec::Matchers.define :match_database_name do |expectation| match do |event| expect(event.database_name.to_s).to eq(expectation.database_name.to_s) end end RSpec::Matchers.define :generate_request_id do |expectation| match do |event| expect(event.request_id).to be > 0 end end RSpec::Matchers.define :generate_operation_id do |expectation| match do |event| expect(event.request_id).to be > 0 end end RSpec::Matchers.define :match_command do |expectation| include Mongo::CommandMonitoring::Matchable match do |event| data_matches?(event.command, expectation.event_data['command']) end end RSpec::Matchers.define :match_reply do |expectation| include Mongo::CommandMonitoring::Matchable match do |event| data_matches?(event.reply, expectation.event_data['reply']) end end RSpec::Matchers.define :match_command_started_event do |expectation| match do |event| expect(event).to match_command_name(expectation) expect(event).to match_database_name(expectation) expect(event).to generate_operation_id expect(event).to generate_request_id expect(event).to match_command(expectation) end end RSpec::Matchers.define :match_command_succeeded_event do |expectation| match do |event| expect(event).to match_command_name(expectation) expect(event).to generate_operation_id expect(event).to generate_request_id expect(event).to match_reply(expectation) end end RSpec::Matchers.define :match_command_failed_event do |expectation| match do |event| expect(event).to match_command_name(expectation) expect(event).to generate_operation_id expect(event).to generate_request_id end end module Mongo module CommandMonitoring # Matchers common behavior. # # @since 2.1.0 module Matchable # Determine if the data matches. # # @example Does the data match? # matchable.data_matches?(actual, expected) # # @param [ Object ] actual The actual data. # @param [ Object ] expected The expected data. # # @return [ true, false ] If the data matches. # # @since 2.1.0 def data_matches?(actual, expected) case expected when ::Hash, BSON::Document then hash_matches?(actual, expected) when ::Array array_matches?(actual, expected) else value_matches?(actual, expected) end end # Determine if the hash matches. # # @example Does the hash match? # matchable.hash_matches?(actual, expected) # # @param [ Hash ] actual The actual hash. # @param [ Hash ] expected The expected hash. # # @return [ true, false ] If the hash matches. # # @since 2.1.0 def hash_matches?(actual, expected) if expected['writeConcern'] expected['writeConcern'] = Options::Mapper.transform_keys_to_symbols(expected['writeConcern']) end if expected.keys.first == '$numberLong' converted = expected.values.first.to_i if actual.is_a?(BSON::Int64) actual = ::Utils.int64_value(actual) elsif actual.is_a?(BSON::Int32) return false end (actual == converted) || actual >= 0 else expected.each do |key, value| return false unless data_matches?(actual[key], value) end end end # Determine if an array matches. # # @example Does the array match? 
# matchable.array_matches?(actual, expected) # # @param [ Array ] actual The actual array. # @param [ Array ] expected The expected array. # # @return [ true, false ] If the array matches. # # @since 2.1.0 def array_matches?(actual, expected) expected.each_with_index do |value, i| # @todo: Durran: fix for kill cursors replies if actual return false unless data_matches?(actual[i], value) end end end # Check if a value matches. # # @example Does a value match. # matchable.value_matches?(actual, expected) # # @param [ Object ] actual The actual value. # @param [ Object ] expected The expected object. # # @return [ true, false ] If the value matches. # # @since 2.1.0 def value_matches?(actual, expected) case expected when '42', 42 then actual > 0 when '' then !actual.nil? else actual == expected end end end # Represents a command monitoring spec in its entirety. # # @since 2.1.0 class Spec # Create the spec. # # @param [ String ] test_path The yaml test path. # # @since 2.1.0 def initialize(test_path) @spec = ::Utils.load_spec_yaml_file(test_path) @data = @spec['data'] @tests = @spec['tests'] end # Get all the tests in the spec. # # @example Get all the tests. # spec.tests # # @return [ Array ] The tests. def tests @tests.map do |test| Test.new(@data, test) end end end # Represents an individual command monitoring test. # # @since 2.1.0 class Test # @return [ String ] description The test description. attr_reader :description # @return [ Array ] The expectations. attr_reader :expectations attr_reader :min_server_fcv attr_reader :max_server_version # Create the new test. # # @example Create the test. # Test.new(data, test) # # @param [ Array ] data The test data. # @param [ Hash ] The test itself. # # @since 2.1.0 def initialize(data, test) @data = data @description = test['description'] @max_server_version = test['ignore_if_server_version_greater_than'] @min_server_fcv = test['ignore_if_server_version_less_than'] @operation = Mongo::CRUD::Operation.new(self, test['operation']) @expectations = test['expectations'].map{ |e| Expectation.new(e) } end # Run the test against the provided collection. # # @example Run the test. # test.run(collection) # # @param [ Mongo::Collection ] collection The collection. # # @since 2.1.0 def run(collection, subscriber) collection.insert_many(@data) subscriber.clear_events! @operation.execute(collection) end end # Encapsulates expectation behavior. # # @since 2.1.0 class Expectation # @return [ String ] event_type The type of expected event. attr_reader :event_type # @return [ Hash ] event_data The event data. attr_reader :event_data # Get the expected command name. # # @example Get the expected command name. # expectation.command_name # # @return [ String ] The command name. # # @since 2.1.0 def command_name @event_data['command_name'] end # Get the expected database name. # # @example Get the expected database name. # expectation.database_name # # @return [ String ] The database name. # # @since 2.1.0 def database_name @event_data['database_name'] end # Get a readable event name. # # @example Get the event name. # expectation.event_name # # @return [ String ] The event name. # # @since 2.1.0 def event_name event_type.gsub('_', ' ') end # Create the new expectation. # # @example Create the new expectation. # Expectation.new(expectation) # # @param [ Hash ] expectation The expectation. # # @since 2.1.0 def initialize(expectation) @event_type = expectation.keys.first @event_data = expectation[@event_type] end # Get the name of the matcher. 
# # @example Get the matcher name. # expectation.matcher # # @return [ String ] The matcher name. # # @since 2.1.0 def matcher "match_#{event_type}" end end end end mongo-ruby-driver-2.21.3/spec/runners/connection_string.rb000066400000000000000000000252131505113246500236730ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. RSpec::Matchers.define :have_hosts do |test, hosts| match do |cl| def find_server(client, host) client.cluster.servers_list.detect do |s| if host.port s.address.host == host.host && s.address.port == host.port else s.address.host == host.host end end end def match_host?(server, host) server.address.host == host.host end def match_port?(server, host) server.address.port == host.port || !host.port end def match_address_family?(server, host) address_family(server) == host.address_family end def address_family(server) server.address.socket(2) server.address.instance_variable_get(:@resolver).class end hosts.all? do |host| server = find_server(cl, host) server && match_host?(server, host) && match_port?(server, host) #&& #match_address_family?(server, host) end end failure_message do |client| "With URI: #{test.uri_string}\n" + "Expected client hosts: #{client.cluster.instance_variable_get(:@servers)} " + "to match #{hosts}" end end RSpec::Matchers.define :match_auth do |test| def match_database?(client, auth) client.options[:database] == auth.database || !auth.database end def match_password?(client, auth) client.options[:password] == auth.password || client.options[:password].nil? && auth.password == '' end match do |client| auth = test.auth return true unless auth client.options[:user] == auth.username && match_password?(client, auth) && match_database?(client, auth) end failure_message do |client| "With URI: #{test.uri_string}\n" + "Expected that test auth: #{test.auth} would match client auth: #{client.options}" end end module Mongo module ConnectionString class Spec attr_reader :description # Instantiate the new spec. # # @param [ String ] test_path The path to the file. # # @since 2.0.0 def initialize(test_path) @spec = ::Utils.load_spec_yaml_file(test_path) @description = File.basename(test_path) end def tests @tests ||= @spec['tests'].collect do |spec| Test.new(spec) end end end class Test include RSpec::Core::Pending attr_reader :description attr_reader :uri_string def initialize(spec) @spec = spec @description = @spec['description'] @uri_string = @spec['uri'] end def valid? @spec['valid'] end def warn? 
@spec['warning'] end def hosts @hosts ||= (@spec['hosts'] || []).collect do |host| Host.new(host) end end def seeds if @spec['seeds'] @seeds ||= (@spec['seeds'] || []).collect do |host| Host.new(host) end else nil end end def expected_options @spec['options'] end def non_uri_options @spec['parsed_options'] end def client @client ||= ClientRegistry.instance.new_local_client(@spec['uri'], monitoring_io: false) rescue Mongo::Error::LintError => e if e.message =~ /arbitraryButStillValid/ skip 'Test uses a read concern that fails linter' end end def uri @uri ||= Mongo::URI.get(@spec['uri']) end def auth @auth ||= Auth.new(@spec['auth']) if @spec['auth'] end def raise_error? @spec['error'] end def read_concern_expectation @spec['readConcern'] end def write_concern_expectation @spec['writeConcern'] end def num_seeds @spec['numSeeds'] end def num_hosts @spec['numHosts'] end end class Host MAPPING = { 'ipv4' => Mongo::Address::IPv4, 'ipv6' => Mongo::Address::IPv6, 'unix' => Mongo::Address::Unix } attr_reader :host attr_reader :port def initialize(spec) if spec.is_a?(Hash) # Connection string spec tests @spec = spec @host = @spec['host'] @port = @spec['port'] else # DNS seed list spec tests address = Mongo::Address.new(spec) @host = address.host @port = address.port end end def address_family MAPPING[@spec['type']] end end class Auth attr_reader :username attr_reader :password attr_reader :database def initialize(spec) @spec = spec @username = @spec['username'] @password = @spec['password'] @database = @spec['db'] end def to_s "username: #{username}, password: #{password}, database: #{database}" end end module_function def adjust_expected_mongo_client_options(options) expected = options.dup.tap do |expected| expected.each do |k, v| # Ruby driver downcases auth mechanism properties when # constructing the client. # # Some tests give options in all lower case. if k.downcase == 'authmechanismproperties' expected[k] = ::Utils.downcase_keys(v) end end # We omit retryReads/retryWrites=true because some tests do not # provide those. %w(retryReads retryWrites).each do |k, v| if expected[k] == true expected.delete(k) end end # Fix appName case. if expected.key?('appname') && !expected.key?('appName') expected['appName'] = expected.delete('appname') end end end end end def define_connection_string_spec_tests(test_paths, spec_cls = Mongo::ConnectionString::Spec, &block) clean_slate_for_all_if_possible test_paths.each do |path| spec = spec_cls.new(path) context(spec.description) do #include Mongo::ConnectionString spec.tests.each_with_index do |test, index| context "when a #{test.description} is provided" do if test.description.downcase.include?("gssapi") require_mongo_kerberos end context 'when the uri is invalid', unless: test.valid? do it 'raises an error' do expect do test.uri end.to raise_exception(Mongo::Error::InvalidURI) end end context 'when the uri should warn', if: test.warn? do before do expect(Mongo::Logger.logger).to receive(:warn) end it 'warns' do expect(test.client).to be_a(Mongo::Client) end end context 'when the uri is valid', if: test.valid? 
do it 'does not raise an exception' do expect(test.uri).to be_a(Mongo::URI) end it 'creates a client with the correct hosts' do expect(test.client).to have_hosts(test, test.hosts) end it 'creates a client with the correct authentication options' do expect(test.client).to match_auth(test) end if test.expected_options it 'creates a client with the correct options' do mapped = Mongo::URI::OptionsMapper.new.ruby_to_smc(test.client.options) # Connection string spec tests do not use canonical URI option names actual = Utils.downcase_keys(mapped) actual.delete('authsource') expected = Mongo::ConnectionString.adjust_expected_mongo_client_options( test.expected_options, ) actual.should == expected end end if test.read_concern_expectation # Tests do not specify a read concern in the input and expect # the read concern to be {}; our non-specified read concern is nil. # (But if a test used nil for the expectation, we wouldn't assert # read concern at all.) if test.read_concern_expectation == {} it 'creates a client with no read concern' do actual = Utils.camelize_hash(test.client.options[:read_concern]) expect(actual).to be nil end else it 'creates a client with the correct read concern' do actual = Utils.camelize_hash(test.client.options[:read_concern]) expect(actual).to eq(test.read_concern_expectation) end end end if test.write_concern_expectation let(:actual_write_concern) do Utils.camelize_hash(test.client.options[:write_concern]) end let(:expected_write_concern) do test.write_concern_expectation.dup.tap do |expected| # Spec tests have expectations on the "driver API" which is # different from what is being sent to the server. In Ruby # the "driver API" matches what we send to the server, thus # these expectations are rather awkward to work with. # Convert them all to expected server fields. j = expected.delete('journal') unless j.nil? expected['j'] = j end wtimeout = expected.delete('wtimeoutMS') unless wtimeout.nil? expected['wtimeout'] = wtimeout end end end if test.write_concern_expectation == {} it 'creates a client with no write concern' do expect(actual_write_concern).to be nil end else it 'creates a client with the correct write concern' do expect(actual_write_concern).to eq(expected_write_concern) end end end end end end end end mongo-ruby-driver-2.21.3/spec/runners/crud.rb000066400000000000000000000163741505113246500211110ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License.
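# Usage sketch (illustrative, not defined in this file): a spec file points
# this runner at a set of YAML test files. The glob and the
# `authorized_client` helper below are assumptions drawn from the test
# suite's conventions rather than definitions made here.
#
#   describe 'CRUD spec tests' do
#     define_crud_spec_tests(Dir.glob('spec/spec_tests/data/crud/**/*.yml').sort) do |spec, req, test|
#       let(:client) { authorized_client }
#     end
#   end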
require 'runners/gridfs' require 'runners/crud/requirement' require 'runners/crud/spec' require 'runners/crud/test_base' require 'runners/crud/test' require 'runners/crud/outcome' require 'runners/crud/context' require 'runners/crud/operation' require 'runners/crud/verifier' def collection_data(collection) collection.find.sort(_id: 1).to_a end def crud_execute_operations(spec, test, num_ops, event_subscriber, expect_error, client ) cache_key = "#{test.object_id}:#{num_ops}" $crud_result_cache ||= {} $crud_result_cache[cache_key] ||= begin if spec.bucket_name client["#{spec.bucket_name}.files"].delete_many client["#{spec.bucket_name}.chunks"].delete_many else client[spec.collection_name].delete_many end test.setup_test(spec, client) event_subscriber.clear_events! result = if expect_error.nil? res = nil begin res = test.run(client, num_ops) rescue Mongo::Error => e res = e end res elsif expect_error error = nil begin test.run(client, num_ops) rescue => e error = e end error else test.run(client, num_ops) end $crud_event_cache ||= {} # It only makes sense to assert on events if all operations succeeded, # but populate our cache in any event for simplicity $crud_event_cache[cache_key] = event_subscriber.started_events.dup last_op = test.operations[num_ops-1] if last_op.outcome && last_op.outcome.collection_data? verify_collection = client[last_op.verify_collection_name] $crud_collection_data_cache ||= {} $crud_collection_data_cache[cache_key] = collection_data(verify_collection) end result ensure test.clear_fail_point(client) end end def define_crud_spec_test_examples(spec, req = nil, &block) spec.tests.each do |test| context(test.description) do if test.description =~ /ListIndexNames/ before do skip "Ruby driver does not implement list_index_names" end end let(:event_subscriber) do Mrss::EventSubscriber.new end let(:verifier) { Mongo::CRUD::Verifier.new(test) } let(:verify_collection) { client[verify_collection_name] } instance_exec(spec, req, test, &block) test.operations.each_with_index do |operation, index| context "operation #{index+1}" do let(:result) do crud_execute_operations(spec, test, index+1, event_subscriber, operation.outcome.error?, client) end let(:verify_collection_name) do if operation.outcome && operation.outcome.collection_name operation.outcome.collection_name else spec.collection_name end end if operation.outcome.error? it 'raises an error' do expect(result).to be_a(Mongo::Error) if operation.outcome.result verifier.verify_operation_result( operation.outcome.result, { 'errorContains' => result.message, 'errorLabels' => result.labels, } ) end end else tested = false if operation.outcome.result tested = true it 'returns the correct result' do result verifier.verify_operation_result(operation.outcome.result, result) end end if operation.outcome.collection_data? 
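# The outcome specifies expected collection contents, so add an example
# comparing the collection contents against it via the verifier.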
tested = true it 'has the correct data in the collection' do result verifier.verify_collection_data( operation.outcome.collection_data, collection_data(verify_collection)) end end unless tested it 'succeeds' do expect do result end.not_to raise_error end end end end end if test.expectations let(:result) do crud_execute_operations(spec, test, test.operations.length, event_subscriber, nil, client) end let(:actual_events) do result Utils.yamlify_command_events($crud_event_cache["#{test.object_id}:#{test.operations.length}"]) end it 'has the correct number of command_started events' do verifier.verify_command_started_event_count(test.expectations, actual_events) end test.expectations.each_with_index do |expectation, i| it "has the correct command_started event #{i+1}" do verifier.verify_command_started_event( test.expectations, actual_events, i) end end end if test.outcome && test.outcome.collection_data? let(:result) do crud_execute_operations(spec, test, test.operations.length, event_subscriber, nil, client) end it 'has the correct data in the collection' do result verifier.verify_collection_data( test.outcome.collection_data, collection_data(client[test.outcome.collection_name || spec.collection_name])) end end end end end def define_spec_tests_with_requirements(spec, &block) if spec.requirements # This block defines the same set of examples multiple times, # once for each requirement specified in the YAML files. # This allows detecting when any of the configurations is # not tested by CI. spec.requirements.each do |req| context(req.description) do if req.min_server_version min_server_fcv req.short_min_server_version end if req.max_server_version max_server_version req.short_max_server_version end if req.topologies require_topology *req.topologies end if SpecConfig.instance.serverless? && req.serverless == :forbid before(:all) do skip "Serverless forbidden" end end if !SpecConfig.instance.serverless? && req.serverless == :require before(:all) do skip "Serverless required" end end instance_exec(req, &block) end end else yield end end def define_crud_spec_tests(test_paths, spec_cls = Mongo::CRUD::Spec, &block) test_paths.each do |path| spec = spec_cls.new(path) context(spec.description) do define_spec_tests_with_requirements(spec) do |req| define_crud_spec_test_examples(spec, req, &block) end end end end mongo-ruby-driver-2.21.3/spec/runners/crud/000077500000000000000000000000001505113246500205535ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/runners/crud/context.rb000066400000000000000000000015121505113246500225630ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2019-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
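# Context carries per-test state (explicit sessions, the SDAM subscriber,
# helper threads, the primary's address) through operation execution. A
# hedged usage sketch, assuming KeywordStruct accepts keyword arguments as
# its name suggests (the values are illustrative):
#
#   context = Mongo::CRUD::Context.new(session0: session0, session1: session1)
#   context.session0 # => the first explicit session
#   context.threads  # => nil unless provided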
require 'support/keyword_struct' module Mongo module CRUD Context = KeywordStruct.new( :session0, :session1, :sdam_subscriber, :threads, :primary_address, ) end end mongo-ruby-driver-2.21.3/spec/runners/crud/operation.rb000066400000000000000000000355541505113246500231140ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module CRUD class Operation # Instantiate the operation. # # @param [ Hash ] spec The operation specification. # @param [ Hash ] outcome_spec The outcome specification. # If not provided, outcome is taken out of operation specification. # # @since 2.0.0 def initialize(crud_test, spec, outcome_spec = nil) @crud_test = crud_test @spec = IceNine.deep_freeze(spec) @name = spec['name'] if spec['arguments'] @arguments = BSON::ExtJSON.parse_obj(spec['arguments'], mode: :bson) else @arguments = {} end @outcome = Outcome.new(outcome_spec || spec) end attr_reader :spec # The operation name. # # @return [ String ] name The operation name. # # @since 2.0.0 attr_reader :name attr_reader :arguments attr_reader :outcome def object @spec['object'] || 'collection' end # Which collection to verify results from. # Returns the collection name specified on the operation, or # the collection name for the entire spec file. def verify_collection_name if outcome && outcome.collection_name outcome.collection_name else @spec['collection_name'] || 'crud_spec_test' end end # Whether the operation is expected to have results. # # @example Whether the operation is expected to have results. # operation.has_results? # # @return [ true, false ] If the operation is expected to have results. # # @since 2.0.0 def has_results? !(name == 'aggregate' && pipeline.find {|op| op.keys.include?('$out') }) end # Execute the operation. # # @example Execute the operation. # operation.execute # # @param [ Collection ] collection The collection to execute the operation on. # # @return [ Result, Array ] The result of executing the operation. 
# # @since 2.0.0 def execute(target) op_name = ::Utils.underscore(name) if target.is_a?(Mongo::Database) op_name = "db_#{op_name}" elsif target.is_a?(Mongo::Client) op_name = "client_#{op_name}" end send(op_name, target, Context.new) end def database_options if opts = @spec['databaseOptions'] ::Utils.convert_operation_options(opts) else nil end end def collection_options ::Utils.convert_operation_options(@spec['collectionOptions']) end private # read operations def aggregate(collection, context) collection.aggregate(arguments['pipeline'], transformed_options(context)).to_a end def db_aggregate(database, context) database.aggregate(arguments['pipeline'], transformed_options(context)).to_a end def count(collection, context) collection.count(arguments['filter'], transformed_options(context)) end def count_documents(collection, context) collection.count_documents(arguments['filter'], transformed_options(context)) end def distinct(collection, context) collection.distinct(arguments['fieldName'], arguments['filter'], transformed_options(context)) end def estimated_document_count(collection, context) collection.estimated_document_count(transformed_options(context)) end def find(collection, context) opts = transformed_options(context) if arguments['modifiers'] opts = opts.merge(modifiers: BSON::Document.new(arguments['modifiers'])) end if read_preference collection = collection.with(read: read_preference) end collection.find(arguments['filter'], opts).to_a end def find_one(collection, context) find(collection, context).first end def watch(collection, context) collection.watch end def db_watch(database, context) database.watch end def client_watch(client, context) client.watch end def download(fs_bucket, context) stream = fs_bucket.open_download_stream(arguments['id']) stream.read end def download_by_name(fs_bucket, context) stream = fs_bucket.open_download_stream_by_name(arguments['filename']) stream.read end def map_reduce(collection, context) view = Mongo::Collection::View.new(collection) mr = Mongo::Collection::View::MapReduce.new(view, arguments['map'].javascript, arguments['reduce'].javascript) mr.to_a end # write operations def bulk_write(collection, context) result = collection.bulk_write(requests, transformed_options(context)) return_doc = {} return_doc['deletedCount'] = result.deleted_count || 0 return_doc['insertedIds'] = result.inserted_ids if result.inserted_ids return_doc['insertedCount'] = result.inserted_count || 0 return_doc['upsertedId'] = result.upserted_id if arguments['upsert'] return_doc['upsertedIds'] = result.upserted_ids if result.upserted_ids return_doc['upsertedCount'] = result.upserted_count || 0 return_doc['matchedCount'] = result.matched_count || 0 return_doc['modifiedCount'] = result.modified_count || 0 return_doc end def delete_many(collection, context) result = collection.delete_many(arguments['filter'], transformed_options(context)) { 'deletedCount' => result.deleted_count } end def delete_one(collection, context) result = collection.delete_one(arguments['filter'], transformed_options(context)) { 'deletedCount' => result.deleted_count } end def insert_many(collection, context) result = collection.insert_many(arguments['documents'], transformed_options(context)) { 'insertedIds' => result.inserted_ids } end def insert_one(collection, context) result = collection.insert_one(arguments['document'], transformed_options(context)) { 'insertedId' => result.inserted_id } end def replace_one(collection, context) result = collection.replace_one(arguments['filter'], 
arguments['replacement'], transformed_options(context)) update_return_doc(result) end def update_many(collection, context) result = collection.update_many(arguments['filter'], arguments['update'], transformed_options(context)) update_return_doc(result) end def update_one(collection, context) result = collection.update_one(arguments['filter'], arguments['update'], transformed_options(context)) update_return_doc(result) end def find_one_and_delete(collection, context) collection.find_one_and_delete(arguments['filter'], transformed_options(context)) end def find_one_and_replace(collection, context) collection.find_one_and_replace(arguments['filter'], arguments['replacement'], transformed_options(context)) end def find_one_and_update(collection, context) collection.find_one_and_update(arguments['filter'], arguments['update'], transformed_options(context)) end # ddl def client_list_databases(client, context) client.list_databases end def client_list_database_names(client, context) client.list_databases({}, true) end def client_list_database_objects(client, context) client.list_mongo_databases end def db_list_collections(database, context) database.list_collections end def db_list_collection_names(database, context) database.collection_names end def db_list_collection_objects(database, context) database.collections end def create_collection(database, context) opts = transformed_options(context) database[arguments.fetch('collection')] .create( { session: opts[:session], encrypted_fields: opts[:encrypted_fields], validator: opts[:validator], }.compact ) end def rename(collection, context) collection.client.use(:admin).command({ renameCollection: "#{collection.database.name}.#{collection.name}", to: "#{collection.database.name}.#{arguments['to']}" }) end def drop(collection, context) opts = transformed_options(context) collection.drop(encrypted_fields: opts[:encrypted_fields]) end def drop_collection(database, context) opts = transformed_options(context) database[arguments.fetch('collection')].drop(encrypted_fields: opts[:encrypted_fields]) end def create_index(collection, context) # The Ruby driver method uses `key` while the createIndexes server # command and the test specification use `keys`.
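# For example, spec arguments like { 'keys' => { 'x' => 1 }, 'name' => 'x_1' }
# (hypothetical values) are rewritten below to { key: { 'x' => 1 }, name: 'x_1' }
# before being handed to Indexes#create_many.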
opts = BSON::Document.new(options) if opts.key?(:keys) opts[:key] = opts.delete(:keys) end session = opts.delete(:session) collection.indexes(session: session && context.send(session)).create_many([opts]) end def drop_index(collection, context) unless options.keys == %i(name) raise "Only name is allowed when dropping the index" end name = options[:name] collection.indexes.drop_one(name) end def list_indexes(collection, context) collection.indexes.to_a end # special def assert_collection_exists(client, context) c = client.use(dn = arguments.fetch('database')) unless c.database.collection_names.include?(cn = arguments.fetch('collection')) raise "Collection #{cn} does not exist in database #{dn}, but must" end end def assert_collection_not_exists(client, context) c = client.use(dn = arguments.fetch('database')) if c.database.collection_names.include?(cn = arguments.fetch('collection')) raise "Collection #{cn} exists in database #{dn}, but must not" end end def assert_index_exists(client, context) c = client.use(dn = arguments.fetch('database')) coll = c[cn = arguments.fetch('collection')] unless coll.indexes.map { |doc| doc['name'] }.include?(ixn = arguments.fetch('index')) raise "Index #{ixn} does not exist in collection #{cn} in database #{dn}, but must" end end def assert_index_not_exists(client, context) c = client.use(dn = arguments.fetch('database')) coll = c[cn = arguments.fetch('collection')] begin if coll.indexes.map { |doc| doc['name'] }.include?(ixn = arguments.fetch('index')) raise "Index #{ixn} exists in collection #{cn} in database #{dn}, but must not" end rescue Mongo::Error::OperationFailure::Family => e if e.to_s =~ /ns does not exist/ # Success. else raise end end end def configure_fail_point(client, context) fp = arguments.fetch('failPoint') $disable_fail_points ||= [] $disable_fail_points << [ fp, ClusterConfig.instance.primary_address, ] client.use('admin').database.command(fp) end # options & arguments def options out = {} # Most tests have an "arguments" key which is a hash of options to # be provided to the operation. The command monitoring unacknowledged # bulk write test is an exception in that it has an "options" key # with the options. 
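# For example, a spec key like 'returnDocument' becomes :return_document (and
# its value is then converted by transform_return_document), while
# 'showRecordId' maps onto the driver's :show_disk_loc option.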
arguments.merge(arguments['options'] || {}).each do |spec_k, v| ruby_k = ::Utils.underscore(spec_k).to_sym ruby_k = { min: :min_value, max: :max_value, show_record_id: :show_disk_loc }[ruby_k] || ruby_k if respond_to?("transform_#{ruby_k}", true) v = send("transform_#{ruby_k}", v) end out[ruby_k] = v end out end def requests arguments['requests'].map do |request| case request.keys.first when 'insertOne' then { insert_one: request['insertOne']['document'] } when 'updateOne' then update = request['updateOne'] { update_one: { filter: update['filter'], update: update['update'] } } when 'name' then bulk_request(request) end end end def bulk_request(request) op_name = ::Utils.underscore(request['name']) args = ::Utils.shallow_snakeize_hash(request['arguments']) if args[:document] unless args.keys == [:document] raise "If :document is given, it must be the only key" end args = args[:document] end { op_name => args } end def upsert arguments['upsert'] end def transform_return_document(v) ::Utils.underscore(v).to_sym end def update arguments['update'] end def transform_read_preference(v) ::Utils.snakeize_hash(v) end def read_preference transform_read_preference(@spec['read_preference']) end def update_return_doc(result) return_doc = {} return_doc['upsertedId'] = result.upserted_id if arguments['upsert'] return_doc['upsertedCount'] = result.upserted_count return_doc['matchedCount'] = result.matched_count return_doc['modifiedCount'] = result.modified_count if result.modified_count return_doc end def transformed_options(context) opts = options.dup if opts[:session] opts[:session] = case opts[:session] when 'session0' unless context.session0 raise "Trying to use session0 but it is not in context" end context.session0 when 'session1' unless context.session1 raise "Trying to use session1 but it is not in context" end context.session1 else raise "Invalid session name '#{opts[:session]}'" end end opts end end end end mongo-ruby-driver-2.21.3/spec/runners/crud/outcome.rb000066400000000000000000000030051505113246500225510ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module CRUD class Outcome def initialize(spec) if spec.nil? raise ArgumentError, 'Outcome specification cannot be nil' end @result = spec['result'] @collection = spec['collection'] @error = spec['error'] end def error? !!@error end def collection_data? !!collection_data end # The expected data in the collection as an outcome after running an # operation. # # @return [ Array ] The list of documents expected to be in the collection. def collection_data @collection && @collection['data'] end def collection_name @collection && @collection['name'] end # The expected result of running an operation. # # @return [ Array ] The expected result. 
attr_reader :result end end end mongo-ruby-driver-2.21.3/spec/runners/crud/requirement.rb000066400000000000000000000101101505113246500234310ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all module Mongo module CRUD class Requirement YAML_KEYS = %w(auth minServerVersion maxServerVersion topology topologies serverParameters serverless csfle).freeze def initialize(spec) spec = spec.dup # Legacy tests have the requirements mixed with other test fields spec.delete('data') spec.delete('tests') unless (unhandled_keys = spec.keys - YAML_KEYS).empty? raise "Unhandled requirement specification keys: #{unhandled_keys}" end @min_server_version = spec['minServerVersion'] @max_server_version = spec['maxServerVersion'] # topologies is for unified test format. # topology is for legacy tests. @topologies = if topologies = spec['topology'] || spec['topologies'] topologies.map do |topology| { 'replicaset' => :replica_set, 'single' => :single, 'sharded' => :sharded, 'sharded-replicaset' => :sharded, 'load-balanced' => :load_balanced, }[topology].tap do |v| unless v raise "Unknown topology #{topology}" end end end else nil end @server_parameters = spec['serverParameters'] @serverless = if serverless = spec['serverless'] case spec['serverless'] when 'require' then :require when 'forbid' then :forbid when 'allow' then :allow else raise "Unknown serverless requirement: #{serverless}" end else nil end @auth = spec['auth'] @csfle = !!spec['csfle'] if spec['csfle'] end attr_reader :min_server_version attr_reader :max_server_version attr_reader :topologies attr_reader :serverless def short_min_server_version if min_server_version min_server_version.split('.')[0..1].join('.') else nil end end def short_max_server_version if max_server_version max_server_version.split('.')[0..1].join('.') else nil end end def satisfied? cc = ClusterConfig.instance ok = true if min_server_version ok &&= Gem::Version.new(cc.fcv_ish) >= Gem::Version.new(min_server_version) end if max_server_version ok &&= Gem::Version.new(cc.server_version) <= Gem::Version.new(max_server_version) end if topologies ok &&= topologies.include?(cc.topology) end if @server_parameters @server_parameters.each do |k, required_v| actual_v = cc.server_parameters[k] if actual_v.nil? && !required_v.nil? ok = false elsif actual_v != required_v if Numeric === actual_v && Numeric === required_v if actual_v.to_f != required_v.to_f ok = false end else ok = false end end end end if @serverless if SpecConfig.instance.serverless? ok = ok && [:allow, :require].include?(serverless) else ok = ok && [:allow, :forbid].include?(serverless) end end if @auth == true ok &&= SpecConfig.instance.auth? elsif @auth == false ok &&= !SpecConfig.instance.auth? end if @csfle ok &&= !!(ENV['LIBMONGOCRYPT_PATH'] || ENV['FLE']) ok &&= Gem::Version.new(cc.fcv_ish) >= Gem::Version.new('4.2.0') end ok end def description versions = [min_server_version, max_server_version].compact if versions.any? versions = versions.join('-') else versions = nil end topologies = if self.topologies self.topologies.map(&:to_s).join(',') else nil end [versions, topologies].compact.join('/') end end end end mongo-ruby-driver-2.21.3/spec/runners/crud/spec.rb000066400000000000000000000041611505113246500220340ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all module Mongo module CRUD # Represents a CRUD specification test. class Spec # Instantiate the new spec. # # @param [ String ] test_path The path to the file. 
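# @example Load a spec file (hypothetical path).
#   spec = Mongo::CRUD::Spec.new('spec_tests/data/crud/find.yml')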
# # @since 2.0.0 def initialize(test_path) @spec = ::Utils.load_spec_yaml_file(test_path) @description = File.basename(test_path) @data = BSON::ExtJSON.parse_obj(@spec['data']) @tests = @spec['tests'] # Introduced with Client-Side Encryption tests @json_schema = BSON::ExtJSON.parse_obj(@spec['json_schema']) @key_vault_data = BSON::ExtJSON.parse_obj(@spec['key_vault_data']) @encrypted_fields = BSON::ExtJSON.parse_obj(@spec['encrypted_fields'], mode: :bson) @requirements = if run_on = @spec['runOn'] run_on.map do |spec| Requirement.new(spec) end elsif Requirement::YAML_KEYS.any? { |key| @spec.key?(key) } [Requirement.new(@spec)] else nil end end # @return [ String ] description The spec description. # # @since 2.0.0 attr_reader :description attr_reader :requirements # @return [ Hash ] The jsonSchema collection validator. attr_reader :json_schema # @return [ Array ] Data to insert into the key vault before # running each test. attr_reader :key_vault_data # @return [ Hash ] An encryptedFields option that should be set on the # collection (using createCollection) before each test run. attr_reader :encrypted_fields def collection_name # Older spec tests do not specify a collection name, thus # we provide a default here @spec['collection_name'] || 'crud_spec_test' end def bucket_name @spec['bucket_name'] end def database_name @spec['database_name'] end # Get a list of Test instances, one for each test definition. def tests @tests.map do |test| Mongo::CRUD::CRUDTest.new(self, @data, test) end end end end end mongo-ruby-driver-2.21.3/spec/runners/crud/test.rb000066400000000000000000000072141505113246500220630ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all module Mongo module CRUD # Represents a single CRUD test. # # @since 2.0.0 class CRUDTest < CRUDTestBase # Spec tests have configureFailPoint as a string, make it a string here too FAIL_POINT_BASE_COMMAND = BSON::Document.new( 'configureFailPoint' => "onPrimaryTransactionalWrite", ).freeze # Instantiate the new CRUDTest. # # data can be an array of hashes, with each hash corresponding to a # document to be inserted into the collection whose name is given in # collection_name as configured in the YAML file. Alternatively data # can be a map of collection names to arrays of hashes. # # @param [ Crud::Spec ] crud_spec The top level YAML specification object. # @param [ Hash | Array ] data The documents the collection # must have before the test runs. # @param [ Hash ] test The test specification. # # @since 2.0.0 def initialize(crud_spec, data, test) @spec = crud_spec @data = data @description = test['description'] @client_options = ::Utils.convert_client_options(test['clientOptions'] || {}) if test['failPoint'] @fail_point_command = FAIL_POINT_BASE_COMMAND.merge(test['failPoint']) end if test['operations'] @operations = test['operations'].map do |op_spec| Operation.new(self, op_spec) end else @operations = [Operation.new(self, test['operation'], test['outcome'])] end @expectations = BSON::ExtJSON.parse_obj(test['expectations'], mode: :bson) if test['outcome'] @outcome = Mongo::CRUD::Outcome.new(BSON::ExtJSON.parse_obj(test['outcome'], mode: :bson)) end end attr_reader :client_options # Operations to be performed by the test. # # For CRUD tests, there is one operation for test. For retryable writes, # there are multiple operations for each test. In either case we build # an array of operations. attr_reader :operations attr_reader :outcome # Run the test. 
# # The specified number of operations are executed, so that the # test can assert on the outcome of each specified operation in turn. # # @param [ Client ] client The client the test # should be run with. # @param [ Integer ] num_ops Number of operations to run. # # @return [ Result, Array ] The result(s) of running the test. # # @since 2.0.0 def run(client, num_ops) result = nil 1.upto(num_ops) do |i| operation = @operations[i-1] target = resolve_target(client, operation) result = operation.execute(target) end result end class DataConverter include Mongo::GridFS::Convertible end def setup_test(spec, client) clear_fail_point(client) if @data.nil? # nothing to do elsif @data.is_a?(Array) collection = client[spec.collection_name, write_concern: {w: :majority}] collection.delete_many collection.insert_many(@data) unless @data.empty? elsif @data.is_a?(Hash) converter = DataConverter.new @data.each do |collection_name, data| collection = client[collection_name] collection.delete_many data = converter.transform_docs(data) collection.insert_many(data) end else raise "Unknown type of data: #{@data}" end setup_fail_point(client) end end end end mongo-ruby-driver-2.21.3/spec/runners/crud/test_base.rb000066400000000000000000000026321505113246500230540ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all module Mongo module CRUD class CRUDTestBase # The test description. # # @return [ String ] description The test description. attr_reader :description # The expected command monitoring events. attr_reader :expectations def setup_fail_point(client) if @fail_point_command client.use(:admin).command(@fail_point_command) end end def clear_fail_point(client) if @fail_point_command ClientRegistry.instance.global_client('root_authorized').use(:admin).command(BSON::Document.new(@fail_point_command).merge(mode: "off")) end end private def resolve_target(client, operation) if operation.database_options # Some CRUD spec tests specify "database options". In Ruby there is # no facility to specify options on a database, hence these are # lifted to the client. client = client.with(operation.database_options) end case operation.object when 'collection' client[@spec.collection_name].with(operation.collection_options) when 'database' client.database when 'client' client when 'gridfsbucket' client.database.fs else raise "Unknown target #{operation.object}" end end end end end mongo-ruby-driver-2.21.3/spec/runners/crud/verifier.rb000066400000000000000000000174121505113246500227200ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2019-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module CRUD class Verifier include RSpec::Matchers def initialize(test_instance) @test_instance = test_instance end attr_reader :test_instance # Compare the existing collection data and the expected collection data. # # Uses RSpec matchers and raises expectation failures if there is a # mismatch. 
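# @example Verify the expected documents (illustrative values).
#   verifier.verify_collection_data([{ '_id' => 1 }], collection.find.to_a)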
def verify_collection_data(expected_collection_data, actual_collection_data) if expected_collection_data.nil? expect(actual_collection_data).to be nil elsif expected_collection_data.empty? expect(actual_collection_data).to be_empty else expect(actual_collection_data).not_to be nil expect(actual_collection_data).to match_with_type(expected_collection_data) end end # Compare the actual operation result to the expected operation result. # # Uses RSpec matchers and raises expectation failures if there is a # mismatch. def verify_operation_result(expected, actual) if expected.is_a?(Array) if expected.empty? expect(actual).to be_empty else expected.each_with_index do |expected_elt, i| # If the YAML spec test does not define a result, # do not assert the operation's result - the operation may # have produced a result, the test just does not care what it is if expected_elt verify_result(expected_elt, actual[i]) end end end else verify_result(expected, actual) end end def verify_command_started_event_count(expected_events, actual_events) if actual_events.length != expected_events.length raise RSpec::Expectations::ExpectationNotMetError.new, <<-EOT Expected #{expected_events.length} events, got #{actual_events.length} events. Expected events: #{expected_events.pretty_inspect} Actual events: #{actual_events.pretty_inspect} EOT end end # This variant used by change stream tests which provide the first N # events rather than all of them. def verify_command_started_event_min_count(expected_events, actual_events) if actual_events.length < expected_events.length raise RSpec::Expectations::ExpectationNotMetError.new, <<-EOT Expected at least #{expected_events.length} events, got #{actual_events.length} events. Expected events: #{expected_events.pretty_inspect} Actual events: #{actual_events.pretty_inspect} EOT end end def verify_command_started_event(expected_events, actual_events, i) expect(expected_events.length).to be > i expect(actual_events.length).to be > i expectation = expected_events[i] actual_event = actual_events[i]['command_started_event'].dup expect(expectation.keys).to eq(%w(command_started_event)) expected_event = expectation['command_started_event'].dup # Retryable reads tests' YAML assertions omit some of the keys # that are included in the actual command events. # Transactions and transactions API tests specify all keys # in YAML that are present in actual command events. actual_event.keys.each do |key| unless expected_event.key?(key) actual_event.delete(key) end end expect(actual_event).not_to be nil expect(actual_event.keys.sort).to eq(expected_event.keys.sort) expected_command = expected_event.delete('command') actual_command = actual_event.delete('command') expected_presence = expected_command.compact expected_absence = expected_command.select { |k, v| v.nil? } expected_presence.each do |k, v| expect(actual_command[k]).to match_with_type(v) end expected_absence.each do |k, v| expect(actual_command).not_to have_key(k) end expect(actual_event).to match_with_type(expected_event) end private def verify_result(expected, actual) case expected when nil expect(actual).to be nil when 42, '42' expect(actual).not_to be nil when Hash if actual.is_a?(Hash) && actual['error'] && !expected.keys.any? 
{ |key| key.start_with?('error') || key == 'isTimeoutError' } then raise RSpec::Expectations::ExpectationNotMetError.new, "Expected operation not to fail but it failed: #{actual.inspect}" end expect(actual).to be_a(Hash) expected.each do |k, v| case k when 'isTimeoutError' expect(actual['errorContains']).to eq('Mongo::Error::TimeoutError') when 'errorContains' expect(actual['errorContains'].downcase).to include(v.downcase) when 'errorLabelsContain' v.each do |label| expect(actual['errorLabels']).to include(label) end when 'errorLabelsOmit' v.each do |label| if actual['errorLabels'] expect(actual['errorLabels']).not_to include(label) end end else verify_hash_items_equal(expected, actual, k) end end when Array expect(actual).to be_a(Array) expect(actual.size).to eq(expected.size) expected.zip(actual).each do |pair| verify_result(pair.first, pair.last) end else expect(actual).to eq(expected) end end def verify_hash_items_equal(expected, actual, k) expect(actual).to be_a(Hash) if expected[k] == actual[k] return end if [42, '42'].include?(expected[k]) && actual[k] return end if %w(deletedCount matchedCount modifiedCount upsertedCount).include?(k) # Some tests assert that some of these counts are zero. # The driver may omit the respective key, which is fine. if expected[k] == 0 expect([0, nil]).to include(actual[k]) return end end if %w(insertedIds upsertedIds).include?(k) if expected[k] == {} # Like with the counts, allow a response to not specify the # ids in question if the expectation is for an empty id map. expect([nil, []]).to include(actual[k]) else expect(actual[k]).to eq(expected[k].values) end return end if k == 'updateDescription' # Change stream result - verify subset, not exact match expected.fetch(k).each do |sub_k, sub_v| {sub_k => sub_v}.should == {sub_k => actual.fetch(k).fetch(sub_k)} end return end if expected[k].is_a?(Time) expect(k => actual[k].utc.to_s).to eq(k => expected[k].utc.to_s) else # This should produce a meaningful error message, # even though we do not actually require that expected[k] == actual[k] expect(k => actual[k]).to eq(k => expected[k]) end end end end end mongo-ruby-driver-2.21.3/spec/runners/gridfs.rb000066400000000000000000000425161505113246500214310ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # Matcher for determining whether the operation completed successfully. # # @since 2.1.0 RSpec::Matchers.define :completes_successfully do |test| match do |actual| actual == test.expected_result || test.expected_result.nil? end end # Matcher for determining whether the actual chunks collection matches # the expected chunks collection. # # @since 2.1.0 RSpec::Matchers.define :match_chunks_collection do |expected| match do |actual| return true if expected.nil? if expected.find.to_a.empty? actual.find.to_a.empty? else actual.find.all? do |doc| if matching_doc = expected.find(files_id: doc['files_id'], n: doc['n']).first matching_doc.all? 
do |k, v| doc[k] == v || k == '_id' end else false end end end end # Matcher for determining whether the actual files collection matches # the expected files collection. # # @since 2.1.0 RSpec::Matchers.define :match_files_collection do |expected| match do |actual| return true if expected.nil? actual.find.all? do |doc| if matching_doc = expected.find(_id: doc['_id']).first matching_doc.all? do |k, v| doc[k] == v end else false end end end end # Matcher for determining whether the operation raised the correct error. # # @since 2.1.0 RSpec::Matchers.define :match_error do |error| match do |actual| Mongo::GridFS::Test::ERROR_MAPPING[error] == actual.class end end module Mongo module GridFS # Represents a GridFS specification test. # # @since 2.1.0 class Spec # @return [ String ] description The spec description. # # @since 2.1.0 attr_reader :description # Instantiate the new spec. # # @param [ String ] test_path The path to the file. # # @since 2.1.0 def initialize(test_path) @spec = ::Utils.load_spec_yaml_file(test_path) @description = File.basename(test_path) @data = @spec['data'] end # Get a list of Tests for each test definition. # # @example Get the list of Tests. # spec.tests # # @return [ Array ] The list of Tests. # # @since 2.1.0 def tests @tests ||= @spec['tests'].collect do |test| Test.new(@data, test) end end end # Contains shared helper functions for converting YAML test values to Ruby objects. # # @since 2.1.0 module Convertible # Convert an integer to the corresponding CRUD method suffix. # # @param [ Integer ] int The limit. # # @return [ String ] The CRUD method suffix. # # @since 2.1.0 def limit(int) int == 0 ? 'many' : 'one' end # Convert an id value to a BSON::ObjectId. # # @param [ Object ] v The value to convert. # @param [ Hash ] opts The options. # # @option opts [ BSON::ObjectId ] :id The id override. # # @return [ BSON::ObjectId ] The object id. # # @since 2.1.0 def convert__id(v, opts = {}) to_oid(v, opts[:id]) end # Convert a value to a date. # # @param [ Object ] v The value to convert. # @param [ Hash ] opts The options. # # @return [ Time ] The upload date time value. # # @since 2.1.0 def convert_uploadDate(v, opts = {}) v.is_a?(Time) ? v : v['$date'] ? Time.parse(v['$date']) : upload_date end # Convert a file id value to a BSON::ObjectId. # # @param [ Object ] v The value to convert. # @param [ Hash ] opts The options. # # @option opts [ BSON::ObjectId ] :files_id The files id override. # # @return [ BSON::ObjectId ] The object id. # # @since 2.1.0 def convert_files_id(v, opts = {}) to_oid(v, opts[:files_id]) end # Convert a value to BSON::Binary data. # # @param [ Object ] v The value to convert. # @param [ Hash ] opts The options. # # @return [ BSON::Binary ] The converted data. # # @since 2.1.0 def convert_data(v, opts = {}) v.is_a?(BSON::Binary) ? v : BSON::Binary.new(to_hex(v['$hex'], opts), :generic) end # Transform documents to have the correct object types for serialization. # # @param [ Array ] docs The documents to transform. # @param [ Hash ] opts The options. # # @return [ Array ] The transformed documents. # # @since 2.1.0 def transform_docs(docs, opts = {}) docs.collect do |doc| doc.each do |k, v| doc[k] = send("convert_#{k}", v, opts) if respond_to?("convert_#{k}") end doc end end # Convert a hex string to its packed binary representation. # # @param [ String ] string The hex string to convert. # @param [ Hash ] opts The options. # # @return [ String ] The binary string.
# # @since 2.1.0 def to_hex(string, opts = {}) [ string ].pack('H*') end # Convert an object id represented in json to a BSON::ObjectId. # A new BSON::ObjectId is returned if the json document is empty. # # @param [ Object ] value The value to convert. # @param [ Object ] id The id override. # # @return [ BSON::ObjectId ] The object id. # # @since 2.1.0 def to_oid(value, id = nil) if id id elsif value.is_a?(BSON::ObjectId) value elsif value['$oid'] BSON::ObjectId.from_string(value['$oid']) else BSON::ObjectId.new end end # Convert options. # # @return [ Hash ] The options. # # @since 2.1.0 def options @act['arguments']['options'].reduce({}) do |opts, (k, v)| opts.merge!(chunk_size: v) if k == "chunkSizeBytes" opts.merge!(upload_date: upload_date) opts.merge!(content_type: v) if k == "contentType" opts.merge!(metadata: v) if k == "metadata" opts end end end # Represents a single GridFS test. # # @since 2.1.0 class Test include Convertible extend Forwardable def_delegators :@operation, :expected_files_collection, :expected_chunks_collection, :result, :expected_error, :expected_result, :error? # The test description. # # @return [ String ] The test description. # # @since 2.1.0 attr_reader :description # The upload date to use in the test. # # @return [ Time ] The upload date. # # @since 2.1.0 attr_reader :upload_date # Mapping of test error strings to driver classes. # # @since 2.1.0 ERROR_MAPPING = { 'FileNotFound' => Mongo::Error::FileNotFound, 'ChunkIsMissing' => Mongo::Error::MissingFileChunk, 'ChunkIsWrongSize' => Mongo::Error::UnexpectedChunkLength, 'ExtraChunk' => Mongo::Error::ExtraFileChunk, 'RevisionNotFound' => Mongo::Error::InvalidFileRevision } # Instantiate the new GridFS::Test. # # @example Create the test. # Test.new(data, test) # # @param [ Array ] data The documents the files and chunks # collections must have before the test runs. # @param [ Hash ] test The test specification. # # @since 2.1.0 def initialize(data, test) @pre_data = data @description = test['description'] @upload_date = Time.now if test['assert']['error'] @operation = UnsuccessfulOp.new(self, test) else @operation = SuccessfulOp.new(self, test) end @result = nil end # Whether the expected and actual collections should be compared after the test runs. # # @return [ true, false ] Whether the actual and expected collections should be compared. # # @since 2.1.0 def assert_data? @operation.assert['data'] end # Run the test. # # @example Run the test # test.run(fs) # # @param [ Mongo::Grid::FSBucket ] fs The Grid::FSBucket to use in the test. # # @since 2.1.0 def run(fs) clear_collections(fs) setup(fs) @operation.run(fs) end # Clear the files and chunks collection in the FSBucket and other collections used in the test. # # @example Clear the test collections # test.clear_collections(fs) # # @param [ Mongo::Grid::FSBucket ] fs The Grid::FSBucket whose collections should be cleared. # # @since 2.1.0 def clear_collections(fs) fs.files_collection.delete_many fs.files_collection.indexes.drop_all rescue nil fs.chunks_collection.delete_many fs.chunks_collection.indexes.drop_all rescue nil #@operation.clear_collections(fs) end private def setup(fs) insert_pre_data(fs) @operation.arrange(fs) end def files_data @files_data ||= transform_docs(@pre_data['files']) end def chunks_data @chunks_data ||= transform_docs(@pre_data['chunks']) end def insert_pre_files_data(fs) fs.files_collection.insert_many(files_data) fs.database['expected.files'].insert_many(files_data) if assert_data? 
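# The copy kept in 'expected.files' is the baseline that the
# match_files_collection matcher compares against once the operation has run.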
end def insert_pre_chunks_data(fs) fs.chunks_collection.insert_many(chunks_data) fs.database['expected.chunks'].insert_many(chunks_data) if assert_data? end def insert_pre_data(fs) insert_pre_files_data(fs) unless files_data.empty? insert_pre_chunks_data(fs) unless chunks_data.empty? end # Contains logic and helper methods shared between a successful and # non-successful GridFS test operation. # # @since 2.1.0 module Operable extend Forwardable def_delegators :@test, :upload_date # The test operation name. # # @return [ String ] The operation name. # # @since 2.1.0 attr_reader :op # The test assertion. # # @return [ Hash ] The test assertion definition. # # @since 2.1.0 attr_reader :assert # The operation result. # # @return [ Object ] The operation result. # # @since 2.1.0 attr_reader :result # The collection containing the expected files. # # @return [ Mongo::Collection ] The expected files collection. # # @since 2.1.0 attr_reader :expected_files_collection # The collection containing the expected chunks. # # @return [ Mongo::Collection ] The expected chunks collection. # # @since 2.1.0 attr_reader :expected_chunks_collection # Instantiate the new test operation. # # @example Create the test operation. # Test.new(data, test) # # @param [ Test ] test The test. # @param [ Hash ] spec The test specification. # # @since 2.1.0 def initialize(test, spec) @test = test @arrange = spec['arrange'] @act = spec['act'] @op = @act['operation'] @arguments = @act['arguments'] @assert = spec['assert'] end # Arrange the data before running the operation. # This sets up the correct scenario for the test. # # @example Arrange the data. # operation.arrange(fs) # # @param [ Grid::FSBucket ] fs The FSBucket used in the test. # # @since 2.1.0 def arrange(fs) if @arrange @arrange['data'].each do |data| send("#{data.keys.first}_exp_data", fs, data) end end end # Run the test operation. # # @example Execute the operation. # operation.run(fs) # # @param [ Grid::FSBucket ] fs The FSBucket used in the test. # # @result [ Object ] The operation result. # # @since 2.1.0 def run(fs) @expected_files_collection = fs.database['expected.files'] @expected_chunks_collection = fs.database['expected.chunks'] act(fs) prepare_expected_collections(fs) result end private def prepare_expected_collections(fs) if @test.assert_data? 
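# Each assert data document is keyed by an operation name; the matching
# *_exp_data helper (insert/delete/update) applies it to the expected.*
# collections.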
@assert['data'].each do |data| op = "#{data.keys.first}_exp_data" send(op, fs, data) end end end def insert_exp_data(fs, data) coll = fs.database[data['insert']] if coll.name =~ /.files/ opts = { id: @result } else opts = { files_id: @result } end coll.insert_many(transform_docs(data['documents'], opts)) end def delete_exp_data(fs, data) coll = fs.database[data['delete']] data['deletes'].each do |del| id = del['q'].keys.first coll.find(id => to_oid(del['q'][id])).send("delete_#{limit(del['limit'])}") end end def update_exp_data(fs, data) coll = fs.database[data['update']] data['updates'].each do |update| sel = update['q'].merge('files_id' => to_oid(update['q']['files_id'])) data = BSON::Binary.new(to_hex(update['u']['$set']['data']['$hex']), :generic) u = update['u'].merge('$set' => { 'data' => data }) coll.find(sel).update_one(u) end end def upload(fs) io = StringIO.new(to_hex(@arguments['source']['$hex'])) fs.upload_from_stream(@arguments['filename'], io, options) end def download(fs) io = StringIO.new.set_encoding(BSON::BINARY) fs.download_to_stream(to_oid(@arguments['id']), io) io.string end def download_by_name(fs) io = StringIO.new.set_encoding(BSON::BINARY) if @arguments['options'] fs.download_to_stream_by_name(@arguments['filename'], io, revision: @arguments['options']['revision']) else fs.download_to_stream_by_name(@arguments['filename'], io) end io.string end def delete(fs) fs.delete(to_oid(@arguments['id'])) end end # A GridFS test operation that is expected to succeed. # # @since 2.1.0 class SuccessfulOp include Convertible include Test::Operable # The expected result of executing the operation. # # @example Get the expected result. # operation.expected_result # # @result [ Object ] The operation result. # # @since 2.1.0 def expected_result if @assert['result'] == '&result' @result elsif @assert['result'] != 'void' to_hex(@assert['result']['$hex']) end end # Execute the operation. # # @example Execute the operation. # operation.act(fs) # # @param [ Grid::FSBucket ] fs The FSBucket used in the test. # # @result [ Object ] The operation result. # # @since 2.1.0 def act(fs) @result = send(op, fs) end # Whether this operation is expected to raise an error. # # @return [ false ] The operation is expected to succeed. # # @since 2.1.0 def error? false end end class UnsuccessfulOp include Convertible include Test::Operable # Whether this operation is expected to raise an error. # # @return [ true ] The operation is expected to fail. # # @since 2.1.0 def error? true end # The expected error. # # @example Execute the operation. # operation.expected_error # # @return [ String ] The expected error name. # # @since 2.1.0 def expected_error @assert['error'] end # Execute the operation. # # @example Execute the operation. # operation.act(fs) # # @param [ Grid::FSBucket ] fs The FSBucket used in the test. # # @result [ Mongo::Error ] The error encountered. # # @since 2.1.0 def act(fs) begin send(op, fs) rescue => ex @result = ex end end end end end end mongo-ruby-driver-2.21.3/spec/runners/read_write_concern_document.rb000066400000000000000000000030571505113246500257020ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all module ReadWriteConcernDocument class Spec attr_reader :description # Instantiate the new spec. # # @param [ String ] test_path The path to the file. 
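# @example Load a spec file (hypothetical path).
#   ReadWriteConcernDocument::Spec.new('spec_tests/data/read_write_concern/document/write-concern.yml')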
    #
    # @since 2.0.0
    def initialize(test_path)
      @spec = ::Utils.load_spec_yaml_file(test_path)
      @description = File.basename(test_path)
    end

    def tests
      @tests ||= @spec['tests'].collect do |spec|
        Test.new(spec)
      end
    end
  end

  class Test
    def initialize(spec)
      @spec = spec
      @description = @spec['description']
      @uri_string = @spec['uri']
    end

    attr_reader :description

    def valid?
      !!@spec['valid']
    end

    def input_document
      (@spec['readConcern'] || @spec['writeConcern']).tap do |concern|
        # The documented Ruby API matches the server API: the driver
        # prohibits the journal key used in the spec tests and expects
        # j instead...
        if concern.key?('journal')
          concern['j'] = concern.delete('journal')
        end
        # ... and uses wtimeout instead of wtimeoutMS
        if concern.key?('wtimeoutMS')
          concern['wtimeout'] = concern.delete('wtimeoutMS')
        end
      end
    end

    def server_document
      @spec['readConcernDocument'] || @spec['writeConcernDocument']
    end

    # Returns true, false or nil
    def server_default?
      # Do not convert to boolean
      @spec['isServerDefault']
    end

    # Returns true, false or nil
    def acknowledged?
      # Do not convert to boolean
      @spec['isAcknowledged']
    end
  end
end
mongo-ruby-driver-2.21.3/spec/runners/sdam.rb000066400000000000000000000164261505113246500210760ustar00rootroot00000000000000# frozen_string_literal: true
# rubocop:todo all

# Matcher for determining if the server is of the expected type according to
# the test.
#
# @since 2.0.0
RSpec::Matchers.define :be_server_type do |expected|
  match do |actual|
    Mongo::SDAM.server_of_type?(actual, expected)
  end
end

# Matcher for determining if the cluster topology is the expected type.
#
# @since 2.0.0
RSpec::Matchers.define :be_topology do |expected|
  match do |actual|
    actual.topology.class.name.sub(/.*::/, '') == expected
  end
end

module Mongo
  module SDAM
    module UniversalMethods
      def server_of_type?(server, type)
        case type
        when 'Standalone' then server.standalone?
        when 'RSPrimary' then server.primary?
        when 'RSSecondary' then server.secondary?
        when 'RSArbiter' then server.arbiter?
        when 'Mongos' then server.mongos?
        when 'Unknown' then server.unknown?
        when 'PossiblePrimary' then server.unknown?
        when 'RSGhost' then server.ghost?
        when 'RSOther' then server.other?
        when 'LoadBalancer' then server.load_balancer?
        else
          raise "Unknown type #{type}"
        end
      end
    end

    include UniversalMethods
    extend UniversalMethods

    # Convenience helper to find a server by its address.
    #
    # @since 2.0.0
    def find_server(client, address_str)
      client.cluster.servers_list.detect{ |s| s.address.to_s == address_str }
    end

    # Represents a specification.
    #
    # @since 2.0.0
    class Spec

      # @return [ String ] description The spec description.
      attr_reader :description

      # @return [ Array<Phase> ] phases The spec phases.
      attr_reader :phases

      # @return [ Mongo::URI ] uri The URI object.
      attr_reader :uri

      # @return [ String ] uri_string The passed uri string.
      attr_reader :uri_string

      # Instantiate the new spec.
      #
      # @param [ String ] test_path The path to the file.
      #
      # @since 2.0.0
      def initialize(test_path)
        @test = ::Utils.load_spec_yaml_file(test_path)
        @description = @test['description']
        @uri_string = @test['uri']
        @uri = URI.new(uri_string)
        @phases = @test['phases'].map{ |phase| Phase.new(phase, uri) }
      end
    end

    # Represents a phase in the spec. Phases are sequential.
    #
    # @since 2.0.0
    class Phase

      # @return [ Outcome ] outcome The phase outcome.
      attr_reader :outcome

      # @return [ Array<Response> ] responses The responses for each server in
      #   the phase.
      attr_reader :responses

      attr_reader :application_errors

      # Create the new phase.
      #
      # @example Create the new phase.
      #   Phase.new(phase, uri)
      #
      # @param [ Hash ] phase The phase hash.
      # @param [ Mongo::URI ] uri The URI.
      #
      # @since 2.0.0
      def initialize(phase, uri)
        @phase = phase
        @responses = @phase['responses']&.map{ |response| Response.new(response, uri) }
        @application_errors = @phase['applicationErrors']&.map{ |error_spec| ApplicationError.new(error_spec) }
        @outcome = Outcome.new(BSON::ExtJSON.parse_obj(@phase['outcome']))
      end
    end

    # Represents a server response during a phase.
    #
    # @since 2.0.0
    class Response

      # @return [ String ] address The server address.
      attr_reader :address

      # @return [ Hash ] hello The hello response.
      attr_reader :hello

      # Create the new response.
      #
      # @example Create the response.
      #   Response.new(response, uri)
      #
      # @param [ Hash ] response The response value.
      # @param [ Mongo::URI ] uri The URI.
      #
      # @since 2.0.0
      def initialize(response, uri)
        @uri = uri
        @address = response[0]
        @hello = BSON::ExtJSON.parse_obj(response[1])
      end
    end

    class ApplicationError
      def initialize(spec)
        @spec = spec
      end

      def address_str
        @spec.fetch('address')
      end

      def when
        ::Utils.underscore(@spec.fetch('when'))
      end

      def max_wire_version
        @spec['max_wire_version']
      end

      def generation
        @spec['generation']
      end

      def type
        ::Utils.underscore(@spec.fetch('type'))
      end

      def result
        msg = Mongo::Protocol::Msg.new([], {}, BSON::ExtJSON.parse_obj(@spec['response']))
        Mongo::Operation::Result.new([msg])
      end
    end

    # Get the outcome or expectations from the phase.
    #
    # @since 2.0.0
    class Outcome

      # @return [ Array<Event> ] events The expected events.
      attr_reader :events

      # @return [ Hash ] servers The expectations for
      #   server states.
      attr_reader :servers

      # @return [ String ] set_name The expected RS set name.
      attr_reader :set_name

      # @return [ String ] topology_type The expected cluster topology type.
      attr_reader :topology_type

      # @return [ Integer, nil ] logical_session_timeout The expected logical session timeout.
      attr_reader :logical_session_timeout

      attr_reader :max_election_id
      attr_reader :max_set_version

      # Create the new outcome.
      #
      # @example Create the new outcome.
      #   Outcome.new(outcome)
      #
      # @param [ Hash ] outcome The outcome object.
      #
      # @since 2.0.0
      def initialize(outcome)
        @servers = outcome['servers'] if outcome['servers']
        @set_name = outcome['setName']
        @topology_type = outcome['topologyType']
        @logical_session_timeout = outcome['logicalSessionTimeoutMinutes']
        @events = map_events(outcome['events']) if outcome['events']
        @compatible = outcome['compatible']
        if outcome['maxElectionId']
          @max_election_id = outcome['maxElectionId']
        end
        @max_set_version = outcome['maxSetVersion']
      end

      # Whether the server responses indicate that their versions are supported by the driver.
      #
      # @example Do the server responses indicate that their versions are supported by the driver.
      #   outcome.compatible?
      #
      # @return [ true, false ] Whether the server versions are compatible with the driver.
      #
      # @since 2.5.1
      def compatible?
        @compatible.nil? || !!@compatible
      end

      def compatible_specified?
        !@compatible.nil?
end private def map_events(events) events.map do |event| Event.new(event.keys.first, event.values.first) end end end class Event MAPPINGS = { 'server_closed_event' => Mongo::Monitoring::Event::ServerClosed, 'server_description_changed_event' => Mongo::Monitoring::Event::ServerDescriptionChanged, 'server_opening_event' => Mongo::Monitoring::Event::ServerOpening, 'topology_description_changed_event' => Mongo::Monitoring::Event::TopologyChanged, 'topology_opening_event' => Mongo::Monitoring::Event::TopologyOpening }.freeze attr_reader :name attr_reader :data def initialize(name, data) @name = name @data = data end def expected MAPPINGS.fetch(name) end end end end class SdamSpecEventPublisher include Mongo::Event::Publisher def initialize(event_listeners) @event_listeners = event_listeners end end mongo-ruby-driver-2.21.3/spec/runners/sdam/000077500000000000000000000000001505113246500205425ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/runners/sdam/verifier.rb000066400000000000000000000075101505113246500227050ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all module Sdam class Verifier include RSpec::Matchers def verify_sdam_event(expected_events, actual_events, i) expect(expected_events.length).to be > i expect(actual_events.length).to be > i expected_event = expected_events[i] actual_event = actual_events[i] actual_event_name = Utils.underscore(actual_event.class.name.sub(/.*::/, '')) actual_event_name = actual_event_name.to_s.sub('topology_changed', 'topology_description_changed') + '_event' expect(actual_event_name).to eq(expected_event.name) send("verify_#{expected_event.name}", expected_event, actual_event) end def verify_topology_opening_event(expected, actual) expect(actual.topology).not_to be nil end def verify_topology_description_changed_event(expected, actual) verify_topology_matches(expected.data['previousDescription'], actual.previous_topology) verify_topology_matches(expected.data['newDescription'], actual.new_topology) end def verify_topology_matches(expected, actual) expected_type = ::Mongo::Cluster::Topology.const_get(expected['topologyType']) expect(actual).to be_a(expected_type) expect(actual.replica_set_name).to eq(expected['setName']) expected['servers'].each do |server| desc = actual.server_descriptions[server['address'].to_s] expect(desc).not_to be nil verify_description_matches(server, desc) end # Verify actual topology has no servers not also present in the # expected topology description. 
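      # For example, if the expected phase lists only 'a:27017' but the
      # actual topology still tracks 'b:27017', the check below fails
      # (addresses are illustrative).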
expected_addresses = expected['servers'].map do |server| server['address'] end actual.server_descriptions.keys.each do |address_str| expect(expected_addresses).to include(address_str) end end def verify_server_opening_event(expected, actual) expect(actual.address.to_s).to eq(expected.data['address']) end def verify_server_description_changed_event(expected, actual) verify_description_matches(expected.data['previousDescription'], actual.previous_description) verify_description_matches(expected.data['newDescription'], actual.new_description) end def verify_description_matches(server_spec, actual) case server_spec['type'] when 'Standalone' expect(actual).to be_standalone when 'RSPrimary' expect(actual).to be_primary when 'RSSecondary' expect(actual).to be_secondary when 'RSArbiter' expect(actual).to be_arbiter when 'Mongos' expect(actual).to be_mongos when 'Unknown', 'PossiblePrimary' expect(actual).to be_unknown when 'RSGhost' expect(actual).to be_ghost when 'RSOther' expect(actual).to be_other end if server_spec['arbiters'] expect(actual.arbiters).to eq(server_spec['arbiters']) end if server_spec['hosts'] expect(actual.hosts).to eq(server_spec['hosts']) end if server_spec['passives'] expect(actual.passives).to eq(server_spec['passives']) end if server_spec['primary'] expect(actual.primary_host).to eq(server_spec['primary']) end expect(actual.replica_set_name).to eq(server_spec['setName']) if server_spec['topologyVersion'] # In the Ruby TopologyVersion object, the counter is a # Ruby integer. It would serialize to BSON int. # The expected topology version specifies counter as a # BSON long. # Parse expected value as extended json and compare # Ruby objects. expected_tv = server_spec['topologyVersion'] expect(actual.topology_version).to eq(expected_tv) end end def verify_server_closed_event(expected, actual) expect(actual.address.to_s).to eq(expected.data['address']) end end end mongo-ruby-driver-2.21.3/spec/runners/server_selection.rb000066400000000000000000000316601505113246500235240ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all module Mongo module ServerSelection module Read # Represents a Server Selection specification test. # # @since 2.0.0 class Spec # Mapping of read preference modes. # # @since 2.0.0 READ_PREFERENCES = { 'Primary' => :primary, 'Secondary' => :secondary, 'PrimaryPreferred' => :primary_preferred, 'SecondaryPreferred' => :secondary_preferred, 'Nearest' => :nearest, } # @return [ String ] description The spec description. # # @since 2.0.0 attr_reader :description # @return [ Hash ] read_preference The read preference to be used for selection. # # @since 2.0.0 attr_reader :read_preference # @return [ Integer ] heartbeat_frequency The heartbeat frequency to be set on the client. # # @since 2.4.0 attr_reader :heartbeat_frequency # @return [ Integer ] max_staleness The max_staleness. # # @since 2.4.0 attr_reader :max_staleness # @return [ Array ] eligible_servers The eligible servers before the latency # window is taken into account. # # @since 2.0.0 attr_reader :eligible_servers # @return [ Array ] suitable_servers The set of servers matching all server # selection logic. May be a subset of eligible_servers and/or candidate_servers. # # @since 2.0.0 attr_reader :suitable_servers # @return [ Mongo::Cluster::Topology ] type The topology type. # # @since 2.0.0 attr_reader :type # Instantiate the new spec. # # @param [ String ] test_path The path to the file. 
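        # @example Instantiate the spec (path is illustrative).
        #   Spec.new('path/to/server_selection_test.yml')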
# # @since 2.0.0 def initialize(test_path) @test = ::Utils.load_spec_yaml_file(test_path) @description = "#{@test['topology_description']['type']}: #{File.basename(test_path)}" @heartbeat_frequency = @test['heartbeatFrequencyMS'] / 1000 if @test['heartbeatFrequencyMS'] @read_preference = @test['read_preference'] @read_preference['mode'] = READ_PREFERENCES[@read_preference['mode']] @max_staleness = @read_preference['maxStalenessSeconds'] @candidate_servers = @test['topology_description']['servers'] @suitable_servers = @test['suitable_servers'] || [] @in_latency_window = @test['in_latency_window'] || [] @type = Mongo::Cluster::Topology.const_get(@test['topology_description']['type']) end # Does this spec expect a server to be found. # # @example Will a server be found with this spec. # spec.server_available? # # @return [true, false] If a server will be found with this spec. # # @since 2.0.0 def server_available? !in_latency_window.empty? end # Whether the test requires an error to be raised during server selection. # # @return [ true, false ] Whether the test expects an error. def error? @test['error'] end # The subset of suitable servers that falls within the allowable latency # window. # We have to correct for our server selection algorithm that adds the primary # to the end of the list for SecondaryPreferred read preference mode. # # @example Get the list of suitable servers within the latency window. # spec.in_latency_window # # @return [ Array ] The servers within the latency window. # # @since 2.0.0 def in_latency_window @in_latency_window end # The servers a topology would return as candidates for selection. # # @return [ Array ] candidate_servers The candidate servers. # # @since 2.0.0 def candidate_servers @candidate_servers end end end end end def define_server_selection_spec_tests(test_paths) # Linter insists that a server selection semaphore is present when # performing server selection. require_no_linting test_paths.each do |file| spec = Mongo::ServerSelection::Read::Spec.new(file) context(spec.description) do # Cluster needs a topology and topology needs a cluster... # This temporary cluster is used for topology construction. let(:temp_cluster) do double('temp cluster').tap do |cluster| allow(cluster).to receive(:servers_list).and_return([]) end end let(:topology) do options = if spec.type <= Mongo::Cluster::Topology::ReplicaSetNoPrimary {replica_set_name: 'foo'} else {} end spec.type.new(options, monitoring, temp_cluster) end let(:monitoring) do Mongo::Monitoring.new(monitoring: false) end let(:listeners) do Mongo::Event::Listeners.new end let(:options) do if spec.heartbeat_frequency {server_selection_timeout: 0.1, heartbeat_frequency: spec.heartbeat_frequency} else {server_selection_timeout: 0.1} end end let(:cluster) do double('cluster').tap do |c| allow(c).to receive(:server_selection_semaphore) allow(c).to receive(:connected?).and_return(true) allow(c).to receive(:summary) allow(c).to receive(:topology).and_return(topology) allow(c).to receive(:single?).and_return(topology.single?) allow(c).to receive(:sharded?).and_return(topology.sharded?) allow(c).to receive(:replica_set?).and_return(topology.replica_set?) allow(c).to receive(:unknown?).and_return(topology.unknown?) 
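      # The stubs below cover the remainder of the Cluster API that
      # server selection exercises (options, scanning, app metadata).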
allow(c).to receive(:options).and_return(options) allow(c).to receive(:scan!).and_return(true) allow(c).to receive(:app_metadata).and_return(app_metadata) allow(c).to receive(:heartbeat_interval).and_return( spec.heartbeat_frequency || Mongo::Server::Monitor::DEFAULT_HEARTBEAT_INTERVAL) end end # One of the spec test assertions is on the set of servers that are # eligible for selection without taking latency into account. # In the driver, latency is taken into account at various points during # server selection, hence there isn't a method that can be called to # retrieve the list of servers without accounting for latency. # Work around this by executing server selection with all servers set # to zero latency, when evaluating the candidate server set. let(:ignore_latency) { false } let(:candidate_servers) do spec.candidate_servers.collect do |server| features = double('features').tap do |feat| allow(feat).to receive(:max_staleness_enabled?).and_return(server['maxWireVersion'] && server['maxWireVersion'] >= 5) allow(feat).to receive(:check_driver_support!).and_return(true) end address = Mongo::Address.new(server['address']) Mongo::Server.new(address, cluster, monitoring, listeners, {monitoring_io: false}.update(options) ).tap do |s| allow(s).to receive(:average_round_trip_time) do if ignore_latency 0 elsif server['avg_rtt_ms'] server['avg_rtt_ms'] / 1000.0 end end allow(s).to receive(:tags).and_return(server['tags']) allow(s).to receive(:secondary?).and_return(server['type'] == 'RSSecondary') allow(s).to receive(:primary?).and_return(server['type'] == 'RSPrimary') allow(s).to receive(:mongos?).and_return(server['type'] == 'Mongos') allow(s).to receive(:standalone?).and_return(server['type'] == 'Standalone') allow(s).to receive(:unknown?).and_return(server['type'] == 'Unknown') allow(s).to receive(:connectable?).and_return(true) allow(s).to receive(:last_write_date).and_return( Time.at(server['lastWrite']['lastWriteDate']['$numberLong'].to_f / 1000)) if server['lastWrite'] allow(s).to receive(:last_scan).and_return( Time.at(server['lastUpdateTime'].to_f / 1000)) allow(s).to receive(:features).and_return(features) allow(s).to receive(:replica_set_name).and_return('foo') end end end let(:suitable_servers) do spec.suitable_servers.collect do |server| Mongo::Server.new(Mongo::Address.new(server['address']), cluster, monitoring, listeners, options.merge(monitoring_io: false)) end end let(:in_latency_window) do spec.in_latency_window.collect do |server| Mongo::Server.new(Mongo::Address.new(server['address']), cluster, monitoring, listeners, options.merge(monitoring_io: false)) end end let(:server_selector_definition) do { mode: spec.read_preference['mode'] }.tap do |definition| definition[:tag_sets] = spec.read_preference['tag_sets'] definition[:max_staleness] = spec.max_staleness if spec.max_staleness end end let(:server_selector) do Mongo::ServerSelector.get(server_selector_definition) end let(:app_metadata) do Mongo::Server::AppMetadata.new({}) end before do allow(cluster).to receive(:servers_list).and_return(candidate_servers) allow(cluster).to receive(:servers) do # Copy Cluster#servers definition because clusters is a double cluster.topology.servers(cluster.servers_list) end allow(cluster).to receive(:addresses).and_return(candidate_servers.map(&:address)) end if spec.error? it 'Raises an InvalidServerPreference exception' do expect do server_selector.select_server(cluster) end.to raise_exception(Mongo::Error::InvalidServerPreference) end else if spec.server_available? 
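        # A server is expected to be selected: check the suitable set
        # first, then the final in-latency-window selection.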
it 'has non-empty suitable servers' do spec.suitable_servers.should be_a(Array) spec.suitable_servers.should_not be_empty end if spec.in_latency_window.length == 1 it 'selects the expected server' do [server_selector.select_server(cluster)].should == in_latency_window end else it 'selects a server in the suitable list' do in_latency_window.should include(server_selector.select_server(cluster)) end let(:expected_addresses) do in_latency_window.map(&:address).map(&:seed).sort end let(:actual_addresses) do server_selector.suitable_servers(cluster).map(&:address).map(&:seed).sort end it 'identifies expected suitable servers' do actual_addresses.should == expected_addresses end end context 'candidate servers without taking latency into account' do let(:ignore_latency) { true } let(:expected_addresses) do suitable_servers.map(&:address).map(&:seed).sort end let(:actual_addresses) do servers = server_selector.send(:suitable_servers, cluster) # The tests expect that only secondaries are "suitable" for # server selection with secondary preferred read preference. # In actuality, primaries are also suitable, and the driver # returns the primaries also. Remove primaries from the # actual set when read preference is secondary preferred. # HOWEVER, if a test ends up selecting a primary, then it # includes that primary into its suitable servers. Therefore # only remove primaries when the number of suitable servers # is greater than 1. servers.delete_if do |server| server_selector.is_a?(Mongo::ServerSelector::SecondaryPreferred) && server.primary? && servers.length > 1 end # Since we remove the latency requirement, the servers # may be returned in arbitrary order. servers.map(&:address).map(&:seed).sort end it 'identifies expected suitable servers' do actual_addresses.should == expected_addresses end end else # Runner does not handle non-empty suitable servers with # no servers in latency window. it 'has empty suitable servers' do expect(spec.suitable_servers).to eq([]) end it 'Raises a NoServerAvailable Exception' do expect do server_selector.select_server(cluster) end.to raise_exception(Mongo::Error::NoServerAvailable) end end end end end end mongo-ruby-driver-2.21.3/spec/runners/server_selection_rtt.rb000066400000000000000000000025011505113246500244050ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all module Mongo module ServerSelection module RTT # Represents a specification. # # @since 2.0.0 class Spec # @return [ String ] description The spec description. attr_reader :description # @return [ Float ] average_rtt The starting average round trip time, in seconds. attr_reader :average_rtt # @return [ Float ] new_rtt The new round trip time for hello, in seconds. attr_reader :new_rtt # @return [ Float ] new_average_rtt The newly calculated moving average round trip time, in seconds. attr_reader :new_average_rtt # Instantiate the new spec. # # @param [ String ] test_path The path to the file. # # @since 2.0.0 def initialize(test_path) @test = ::Utils.load_spec_yaml_file(test_path) @description = "#{File.basename(test_path)}: avg_rtt_ms: #{@test['avg_rtt_ms']}, new_rtt_ms: #{@test['new_rtt_ms']}," + " new_avg_rtt: #{@test['new_avg_rtt']}" @average_rtt = @test['avg_rtt_ms'] == 'NULL' ? 
nil : @test['avg_rtt_ms'].to_f / 1000 @new_rtt = @test['new_rtt_ms'].to_f / 1000 @new_average_rtt = @test['new_avg_rtt'].to_f / 1000 end end end end end mongo-ruby-driver-2.21.3/spec/runners/transactions.rb000066400000000000000000000066651505113246500226700ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. require 'runners/transactions/operation' require 'runners/transactions/spec' require 'runners/transactions/test' def define_transactions_spec_tests(test_paths, expectations_bson_types: true) config_override :validate_update_replace, true test_paths.each do |file| spec = Mongo::Transactions::Spec.new(file) context(spec.description) do define_spec_tests_with_requirements(spec) do |req| spec.tests(expectations_bson_types: expectations_bson_types).each do |test| context(test.description) do before(:all) do if ClusterConfig.instance.topology == :sharded if test.multiple_mongoses? && SpecConfig.instance.addresses.length == 1 skip "Test requires multiple mongoses" elsif !test.multiple_mongoses? && SpecConfig.instance.addresses.length > 1 # Many transaction spec tests that do not specifically deal with # sharded transactions fail when run against a multi-mongos cluster skip "Test does not specify multiple mongoses" end end end if test.skip_reason before(:all) do skip test.skip_reason end end unless req.satisfied? before(:all) do skip "Requirements not satisfied" end end before(:all) do test.setup_test end after(:all) do test.teardown_test end let(:results) do $tx_spec_results_cache ||= {} $tx_spec_results_cache[test.object_id] ||= test.run end let(:verifier) { Mongo::CRUD::Verifier.new(test) } it 'returns the correct results' do verifier.verify_operation_result(test.expected_results, results[:results]) end if test.outcome && test.outcome.collection_data? it 'has the correct data in the collection' do results verifier.verify_collection_data( test.outcome.collection_data, results[:contents]) end end if test.expectations it 'has the correct number of command_started events' do verifier.verify_command_started_event_count( test.expectations, results[:events]) end test.expectations.each_with_index do |expectation, i| it "has the correct command_started event #{i}" do verifier.verify_command_started_event( test.expectations, results[:events], i) end end end end end end end end end mongo-ruby-driver-2.21.3/spec/runners/transactions/000077500000000000000000000000001505113246500223265ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/runners/transactions/operation.rb000066400000000000000000000250621505113246500246600ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Transactions class Operation < Mongo::CRUD::Operation include RSpec::Matchers def needs_session? arguments && arguments['session'] || object =~ /session/ end def execute(target, context) op_name = ::Utils.underscore(name).to_sym if op_name == :with_transaction args = [target] else args = [] end if op_name.nil? raise "Unknown operation #{name}" end result = send(op_name, target, context, *args) if result if result.is_a?(Hash) result = result.dup result['error'] = false end end result rescue Mongo::Error::OperationFailure::Family => e raise "OperationFailure had nil result: #{e}" if e.result.nil? err_doc = e.result.send(:first_document) error_code_name = err_doc['codeName'] || err_doc['writeConcernError'] && err_doc['writeConcernError']['codeName'] if error_code_name.nil? # Sometimes the server does not return the error code name, # but does return the error code (or we can parse the error code # out of the message). # https://jira.mongodb.org/browse/SERVER-39706 warn "Error without error code name: #{e.code}" end { 'errorCode' => e.code, 'errorCodeName' => e.code_name, 'errorContains' => e.message, 'errorLabels' => e.labels, 'exception' => e, 'error' => true, } rescue Mongo::Error => e { 'errorContains' => e.message, 'errorLabels' => e.labels, 'exception' => e, 'error' => true, } rescue bson_error => e { 'exception' => e, 'clientError' => true, 'error' => true, } end private # operations def run_command(database, context) # Convert the first key (i.e. the command name) to a symbol. 
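        # For example, a spec command of { 'ping' => 1, 'comment' => 'x' }
        # becomes { :ping => 1, 'comment' => 'x' } (values are illustrative).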
cmd = arguments['command'].dup command_name = cmd.first.first command_value = cmd.delete(command_name) cmd = { command_name.to_sym => command_value }.merge(cmd) opts = ::Utils.snakeize_hash(transformed_options(context)).dup opts[:read] = opts.delete(:read_preference) database.command(cmd, opts).documents.first end def start_transaction(session, context) session.start_transaction(::Utils.convert_operation_options(arguments['options'])) nil end def commit_transaction(session, context) session.commit_transaction nil end def abort_transaction(session, context) session.abort_transaction nil end def with_transaction(session, context, collection) unless callback = arguments['callback'] raise ArgumentError, 'with_transaction requires a callback to be present' end if arguments['options'] options = ::Utils.snakeize_hash(arguments['options']) else options = nil end session.with_transaction(options) do callback['operations'].each do |op_spec| op = Operation.new(@crud_test, op_spec) target = @crud_test.resolve_target(@crud_test.test_client, op) rv = op.execute(target, context) if rv && rv['exception'] raise rv['exception'] end end end end def assert_session_transaction_state(collection, context) session = context.send(arguments['session']) actual_state = session.instance_variable_get('@state').to_s.sub(/^transaction_|_transaction$/, '').sub(/^no$/, 'none') expect(actual_state).to eq(arguments['state']) end def targeted_fail_point(collection, context) args = transformed_options(context) session = args[:session] unless session.pinned_server raise ArgumentError, 'Targeted fail point requires session to be pinned to a server' end client = ClusterTools.instance.direct_client(session.pinned_server.address, database: 'admin') client.command(arguments['failPoint']) $disable_fail_points ||= [] $disable_fail_points << [ arguments['failPoint'], session.pinned_server.address, ] end def assert_session_pinned(collection, context) args = transformed_options(context) session = args[:session] unless session.pinned_server raise ArgumentError, 'Expected session to be pinned' end end def assert_session_unpinned(collection, context) args = transformed_options(context) session = args[:session] if session.pinned_server raise ArgumentError, 'Expected session to not be pinned' end end def wait_for_event(client, context) deadline = Utils.monotonic_time + 5 loop do events = _select_events(context) if events.length >= arguments['count'] break end if Utils.monotonic_time >= deadline raise "Did not receive an event matching #{arguments} in 5 seconds; received #{events.length} but expected #{arguments['count']} events" else sleep 0.1 end end end def assert_event_count(client, context) events = _select_events(context) if %w(ServerMarkedUnknownEvent PoolClearedEvent).include?(arguments['event']) # We publish SDAM events from both regular and push monitors. # This means sometimes there are two ServerMarkedUnknownEvent # events published for the same server transition. # Allow actual event count to be at least the expected event count # in case there are multiple transitions in a single test. 
          unless events.length >= arguments['count']
            raise "Expected #{arguments['count']} #{arguments['event']} events, but have #{events.length}"
          end
        else
          unless events.length == arguments['count']
            raise "Expected #{arguments['count']} #{arguments['event']} events, but have #{events.length}"
          end
        end
      end

      def _select_events(context)
        case arguments['event']
        when 'ServerMarkedUnknownEvent'
          context.sdam_subscriber.all_events.select do |event|
            event.is_a?(Mongo::Monitoring::Event::ServerDescriptionChanged) &&
              event.new_description.unknown?
          end
        else
          context.sdam_subscriber.all_events.select do |event|
            event.class.name.sub(/.*::/, '') == arguments['event'].sub(/Event$/, '')
          end
        end
      end

      class ThreadContext
        def initialize
          @operations = Queue.new
          @unexpected_operation_results = []
        end

        def stop?
          !!@stop
        end

        def signal_stop
          @stop = true
        end

        attr_reader :operations
        attr_reader :unexpected_operation_results
      end

      def start_thread(client, context)
        thread_context = ThreadContext.new
        thread = Thread.new do
          loop do
            begin
              op_spec = thread_context.operations.pop(true)
              op = Operation.new(@crud_test, op_spec)
              target = @crud_test.resolve_target(@crud_test.test_client, op)
              result = op.execute(target, context)
              if op_spec['error']
                unless result['error']
                  thread_context.unexpected_operation_results << result
                end
              else
                if result['error']
                  thread_context.unexpected_operation_results << result
                end
              end
            rescue ThreadError
              # Queue is empty
            end
            if thread_context.stop?
              break
            else
              sleep 1
            end
          end
        end
        class << thread
          attr_accessor :context
        end
        thread.context = thread_context
        unless context.threads
          context.threads ||= {}
        end
        context.threads[arguments['name']] = thread
      end

      def run_on_thread(client, context)
        thread = context.threads.fetch(arguments['name'])
        thread.context.operations << arguments['operation']
      end

      def wait_for_thread(client, context)
        thread = context.threads.fetch(arguments['name'])
        thread.context.signal_stop
        thread.join
        unless thread.context.unexpected_operation_results.empty?
          raise "Thread #{arguments['name']} had #{thread.context.unexpected_operation_results.length} unexpected operation results"
        end
      end

      def wait(client, context)
        sleep arguments['ms'] / 1000.0
      end

      def record_primary(client, context)
        context.primary_address = client.cluster.next_primary.address
      end

      def run_admin_command(support_client, context)
        support_client.use('admin').database.command(arguments['command'])
      end

      def wait_for_primary_change(client, context)
        timeout = if arguments['timeoutMS']
          arguments['timeoutMS'] / 1000.0
        else
          10
        end
        deadline = Utils.monotonic_time + timeout
        loop do
          client.cluster.scan!
          if client.cluster.next_primary.address != context.primary_address
            break
          end
          if Utils.monotonic_time >= deadline
            raise "Failed to change primary in #{timeout} seconds"
          end
        end
      end

      # The error to rescue BSON tests for. If we still define
      # BSON::String::IllegalKey then we should rescue that particular error,
      # otherwise, rescue an arbitrary BSON::Error
      def bson_error
        BSON::String.const_defined?(:IllegalKey) ?
          BSON::String.const_get(:IllegalKey) :
          BSON::Error
      end
    end
  end
end
mongo-ruby-driver-2.21.3/spec/runners/transactions/spec.rb000066400000000000000000000016561505113246500236150ustar00rootroot00000000000000# frozen_string_literal: true
# rubocop:todo all

# Copyright (C) 2014-2020 MongoDB Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Transactions class Spec < Mongo::CRUD::Spec def tests(expectations_bson_types: true) @tests.map do |test| Mongo::Transactions::TransactionsTest.new(self, @data, test, expectations_bson_types: expectations_bson_types) end end end end end mongo-ruby-driver-2.21.3/spec/runners/transactions/test.rb000066400000000000000000000315341505113246500236400ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2014-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. module Mongo module Transactions # Represents a single transaction test. # # @since 2.6.0 class TransactionsTest < CRUD::CRUDTestBase include MongosMacros attr_reader :expected_results attr_reader :skip_reason attr_reader :results # @return [ Crud::Spec ] the top-level YAML specification object attr_reader :spec # Instantiate the new CRUDTest. # # @example Create the test. # TransactionTest.new(data, test) # # @param [ Crud::Spec ] crud_spec The top level YAML specification object. # @param [ Array ] data The documents the collection # must have before the test runs. # @param [ Hash ] test The test specification. # @param [ true | false | Proc ] expectations_bson_types Whether bson # types should be expected. If a Proc is given, it is invoked with the # test as its argument, and should return true or false. # # @since 2.6.0 def initialize(crud_spec, data, test, expectations_bson_types: true) test = IceNine.deep_freeze(test) @spec = crud_spec @data = data || [] @description = test['description'] @client_options = { # Disable legacy read & write retries, so that when spec tests # disable modern retries we do not retry at all instead of using # legacy retries which is contrary to what the tests want. 
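        # (These are Ruby-driver-specific knobs; modern retries are driven
        # by the retryReads/retryWrites values in the test's clientOptions.)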
max_read_retries: 0, max_write_retries: 0, app_name: 'Tx spec - test client', }.update(::Utils.convert_client_options(test['clientOptions'] || {})) @fail_point_command = test['failPoint'] @session_options = if opts = test['sessionOptions'] Hash[opts.map do |session_name, options| [session_name.to_sym, ::Utils.convert_operation_options(options)] end] else {} end @skip_reason = test['skipReason'] @multiple_mongoses = test['useMultipleMongoses'] operations = test['operations'] @operations = operations.map do |op| Operation.new(self, op) end if expectations_bson_types.respond_to?(:call) expectations_bson_types = expectations_bson_types[self] end mode = if expectations_bson_types then :bson else nil end @expectations = BSON::ExtJSON.parse_obj(test['expectations'], mode: mode) if test['outcome'] @outcome = Mongo::CRUD::Outcome.new(BSON::ExtJSON.parse_obj(test['outcome'], mode: mode)) end @expected_results = operations.map do |o| o = BSON::ExtJSON.parse_obj(o, mode: :bson) # We check both o.key('error') and o['error'] to provide a better # error message in case error: false is ever needed in the tests if o.key?('error') if o['error'] {'error' => true} else raise "Unsupported error value #{o['error']}" end else result = o['result'] next result unless result.class == Hash # Change maps of result ids to arrays of ids result.dup.tap do |r| r.each do |k, v| next unless ['insertedIds', 'upsertedIds'].include?(k) r[k] = v.to_a.sort_by(&:first).map(&:last) end end end end end attr_reader :outcome def multiple_mongoses? @multiple_mongoses end def support_client @support_client ||= ClientRegistry.instance.global_client('root_authorized').use(@spec.database_name) end def admin_support_client @admin_support_client ||= support_client.use('admin') end def test_client @test_client ||= begin sdam_proc = lambda do |test_client| test_client.subscribe(Mongo::Monitoring::COMMAND, command_subscriber) test_client.subscribe(Mongo::Monitoring::TOPOLOGY_OPENING, sdam_subscriber) test_client.subscribe(Mongo::Monitoring::SERVER_OPENING, sdam_subscriber) test_client.subscribe(Mongo::Monitoring::SERVER_DESCRIPTION_CHANGED, sdam_subscriber) test_client.subscribe(Mongo::Monitoring::TOPOLOGY_CHANGED, sdam_subscriber) test_client.subscribe(Mongo::Monitoring::SERVER_CLOSED, sdam_subscriber) test_client.subscribe(Mongo::Monitoring::TOPOLOGY_CLOSED, sdam_subscriber) test_client.subscribe(Mongo::Monitoring::CONNECTION_POOL, sdam_subscriber) end if kms_providers = @client_options.dig(:auto_encryption_options, :kms_providers) @client_options[:auto_encryption_options][:kms_providers] = kms_providers.map do |provider, opts| case provider when :aws_temporary [ :aws, { access_key_id: SpecConfig.instance.fle_aws_temp_key, secret_access_key: SpecConfig.instance.fle_aws_temp_secret, session_token: SpecConfig.instance.fle_aws_temp_session_token, } ] when :aws_temporary_no_session_token [ :aws, { access_key_id: SpecConfig.instance.fle_aws_temp_key, secret_access_key: SpecConfig.instance.fle_aws_temp_secret, } ] else [provider, opts] end end.to_h end if @client_options[:auto_encryption_options] && SpecConfig.instance.crypt_shared_lib_path @client_options[:auto_encryption_options][:extra_options] ||= {} @client_options[:auto_encryption_options][:extra_options][:crypt_shared_lib_path] = SpecConfig.instance.crypt_shared_lib_path end ClientRegistry.instance.new_local_client( SpecConfig.instance.addresses, SpecConfig.instance.authorized_test_options.merge( database: @spec.database_name, auth_source: SpecConfig.instance.auth_options[:auth_source] || 
'admin', sdam_proc: sdam_proc, ).merge(@client_options)) end end def command_subscriber @command_subscriber ||= Mrss::EventSubscriber.new end def sdam_subscriber @sdam_subscriber ||= Mrss::EventSubscriber.new(name: 'sdam subscriber') end # Run the test. # # @example Run the test. # test.run # # @return [ Result ] The result of running the test. # # @since 2.6.0 def run @threads = {} results = @operations.map do |op| target = resolve_target(test_client, op) if op.needs_session? context = CRUD::Context.new( session0: session0, session1: session1, sdam_subscriber: sdam_subscriber, threads: @threads, primary_address: @primary_address, ) else # Hack to support write concern operations tests, which are # defined to use transactions format but target pre-3.6 servers # that do not support sessions target ||= support_client context = CRUD::Context.new( sdam_subscriber: sdam_subscriber, threads: @threads, primary_address: @primary_address, ) end op.execute(target, context).tap do @threads = context.threads @primary_address = context.primary_address end end session0_id = @session0&.session_id session1_id = @session1&.session_id @session0&.end_session @session1&.end_session actual_events = ::Utils.yamlify_command_events(command_subscriber.started_events) actual_events = actual_events.reject do |event| event['command_started_event']['command']['endSessions'] end actual_events.each do |e| # Replace the session id placeholders with the actual session ids. payload = e['command_started_event'] if @session0 payload['command']['lsid'] = 'session0' if payload['command']['lsid'] == session0_id end if @session1 payload['command']['lsid'] = 'session1' if payload['command']['lsid'] == session1_id end end @results = { results: results, contents: @result_collection.with( read: {mode: 'primary'}, read_concern: { level: 'local' }, ).find.sort(_id: 1).to_a, events: actual_events, } end def setup_test begin admin_support_client.command(killAllSessions: []) rescue Mongo::Error end if ClusterConfig.instance.fcv_ish >= '4.2' ::Utils.mongos_each_direct_client do |direct_client| direct_client.command(configureFailPoint: 'failCommand', mode: 'off') end end key_vault_coll = support_client .use(:keyvault)[:datakeys] .with(write: { w: :majority }) key_vault_coll.drop # Insert data into the key vault collection if required to do so by # the tests. if @spec.key_vault_data && !@spec.key_vault_data.empty? key_vault_coll.insert_many(@spec.key_vault_data) end encrypted_fields = @spec.encrypted_fields if @spec.encrypted_fields coll = support_client[@spec.collection_name].with(write: { w: :majority }) coll.drop(encrypted_fields: encrypted_fields) # Place a jsonSchema validator on the collection if required to do so # by the tests. collection_validator = if @spec.json_schema { '$jsonSchema' => @spec.json_schema } else {} end create_collection_spec = { create: @spec.collection_name, validator: collection_validator, writeConcern: { w: 'majority' } } create_collection_spec[:encryptedFields] = encrypted_fields if encrypted_fields support_client.command(create_collection_spec) coll.insert_many(@data) unless @data.empty? if description =~ /distinct/ || @operations.any? { |op| op.name == 'distinct' } run_mongos_distincts(@spec.database_name, 'test') end admin_support_client.command(@fail_point_command) if @fail_point_command @collection = test_client[@spec.collection_name] # Client-side encryption tests require the use of a separate client # without auto_encryption_options for querying results. 
result_collection_name = outcome&.collection_name || @spec.collection_name @result_collection = support_client.use(@spec.database_name)[result_collection_name] # DRIVERS-2816, adjusted for legacy spec runner @cluster_time = support_client.command(ping: 1).cluster_time end def teardown_test if @fail_point_command admin_support_client.command(configureFailPoint: 'failCommand', mode: 'off') end if $disable_fail_points $disable_fail_points.each do |(fail_point_command, address)| client = ClusterTools.instance.direct_client(address, database: 'admin') client.command(configureFailPoint: fail_point_command['configureFailPoint'], mode: 'off') end $disable_fail_points = nil end if @test_client @test_client.cluster.session_pool.end_sessions end end def resolve_target(client, operation) case operation.object when 'session0' session0 when 'session1' session1 when 'testRunner' # We don't actually use this target in any way. nil else super end end def new_session(options) test_client.start_session(options || {}).tap do |s| # DRIVERS-2816, adjusted for legacy spec runner s.advance_cluster_time(@cluster_time) end end def session0 @session0 ||= new_session(@session_options[:session0]) end def session1 @session1 ||= new_session(@session_options[:session1]) end end end end mongo-ruby-driver-2.21.3/spec/runners/unified.rb000066400000000000000000000075551505113246500216020ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'support/using_hash' require 'runners/unified/error' require 'runners/unified/entity_map' require 'runners/unified/event_subscriber' require 'runners/unified/test' require 'runners/unified/test_group' def define_unified_spec_tests(base_path, paths, expect_failure: false) config_override :validate_update_replace, true paths.each do |path| basename = path[base_path.length+1...path.length] context basename do group = Unified::TestGroup.new(path) if basename =~ /retryable|transaction/ require_wired_tiger end group.tests.each do |test| context test.description do if test.skip? before do skip test.skip_reason end end before(:all) do if SpecConfig.instance.retry_reads == false skip "Tests are not applicable when legacy read retries are used" end if SpecConfig.instance.retry_writes == false skip "Tests are not applicable when legacy write retries are used" end if ClusterConfig.instance.topology == :sharded if test.require_multiple_mongoses? && SpecConfig.instance.addresses.length == 1 skip "Test requires multiple mongoses" elsif test.require_single_mongos? && SpecConfig.instance.addresses.length > 1 # Many transaction spec tests that do not specifically deal with # sharded transactions fail when run against a multi-mongos cluster skip "Test requires single mongos" end end end if test.retry? retry_test tries: 3 end if expect_failure it 'fails as expected' do if test.group_reqs unless test.group_reqs.any? { |r| r.satisfied? } skip "Group requirements not satisfied" end end if test.reqs unless test.reqs.any? { |r| r.satisfied? } skip "Requirements not satisfied" end end begin test.create_spec_entities test.set_initial_data begin test.run test.assert_outcome test.assert_events # HACK: other errors are possible and likely will need to # be added here later as the tests evolve. 
rescue Mongo::Error::OperationFailure::Family, Unified::Error::UnsupportedOperation, UsingHash::UsingHashKeyError, Unified::Error::EntityMissing rescue => e fail "Expected to raise Mongo::Error::OperationFailure or Unified::Error::UnsupportedOperation or UsingHash::UsingHashKeyError or Unified::Error::EntityMissing, got #{e.class}: #{e}" else fail "Expected to raise Mongo::Error::OperationFailure or Unified::Error::UnsupportedOperation or UsingHash::UsingHashKeyError or Unified::Error::EntityMissing, but no error was raised" end ensure test.cleanup end end else it 'passes' do if test.group_reqs unless test.group_reqs.any? { |r| r.satisfied? } skip "Group requirements not satisfied" end end if test.reqs unless test.reqs.any? { |r| r.satisfied? } skip "Requirements not satisfied" end end test.create_spec_entities test.set_initial_data test.run test.assert_outcome test.assert_events test.cleanup end end end end end end end mongo-ruby-driver-2.21.3/spec/runners/unified/000077500000000000000000000000001505113246500212415ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/runners/unified/ambiguous_operations.rb000066400000000000000000000004311505113246500260220ustar00rootroot00000000000000# frozen_string_literal: true module Unified module AmbiguousOperations def find(op) entities.get(:collection, op['object']) crud_find(op) rescue Unified::Error::EntityMissing entities.get(:bucket, op['object']) gridfs_find(op) end end end mongo-ruby-driver-2.21.3/spec/runners/unified/assertions.rb000066400000000000000000000354211505113246500237650ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all module Unified module Assertions include RSpec::Matchers def assert_result_matches(actual, expected) if Hash === expected if expected.keys == ["$$unsetOrMatches"] assert_result_matches(actual, UsingHash[expected.values.first]) else use_all(expected, 'expected result', expected) do |expected| %w(deleted inserted matched modified upserted).each do |k| if count = expected.use("#{k}Count") if Hash === count || count > 0 actual_count = case actual when Mongo::BulkWrite::Result, Mongo::Operation::Delete::Result, Mongo::Operation::Update::Result actual.send("#{k}_count") else actual["n_#{k}"] end assert_value_matches(actual_count, count, "#{k} count") end end end %w(inserted upserted).each do |k| expected_v = expected.use("#{k}Ids") next unless expected_v actual_v = case actual when Mongo::BulkWrite::Result, Mongo::Operation::Update::Result # Ruby driver returns inserted ids as an array of ids. # The yaml file specifies them as a map from operation. if Hash === expected_v && expected_v.keys == %w($$unsetOrMatches) expected_v = expected_v.values.first.values elsif Hash === expected_v expected_v = expected_v.values end actual.send("#{k}_ids") else actual["#{k}_ids"] end if expected_v if expected_v.empty? if actual_v && !actual_v.empty? raise Error::ResultMismatch, "Actual not empty" end else if actual_v != expected_v raise Error::ResultMismatch, "Mismatch: actual #{actual_v}, expected #{expected_v}" end end end end %w(acknowledged).each do |k| expected_v = expected.use(k) next unless expected_v actual_v = case actual when Mongo::BulkWrite::Result, Mongo::Operation::Result if Hash === expected_v && expected_v.keys == %w($$unsetOrMatches) expected_v = expected_v.values.first end actual.send("#{k}?") else actual[k] end if expected_v if expected_v.empty? if actual_v && !actual_v.empty? 
raise Error::ResultMismatch, "Actual not empty" end else if actual_v != expected_v raise Error::ResultMismatch, "Mismatch: actual #{actual_v}, expected #{expected_v}" end end end end %w(bulkWriteResult).each do |k| expected_v = expected.use(k) next unless expected_v actual_v = case actual when Mongo::Crypt::RewrapManyDataKeyResult actual.send(Utils.underscore(k)) else raise Error::ResultMismatch, "Mismatch: actual #{actual_v}, expected #{expected_v}" end if expected_v if expected_v.empty? if actual_v && !actual_v.empty? raise Error::ResultMismatch, "Actual not empty" end else %w(deleted inserted matched modified upserted).each do |k| if count = expected_v.use("#{k}Count") if Hash === count || count > 0 actual_count = actual_v.send("#{k}_count") assert_value_matches(actual_count, count, "#{k} count") end end end end end end assert_matches(actual, expected, 'result') expected.clear end end else assert_matches(actual, expected, 'result') end end def assert_outcome return unless outcome client = ClientRegistry.instance.global_client('root_authorized') outcome.each do |spec| spec = UsingHash[spec] collection = client.use(spec.use!('databaseName'))[spec.use!('collectionName')] expected_docs = spec.use!('documents') actual_docs = collection.find({}, sort: { _id: 1 }).to_a assert_documents_match(actual_docs, expected_docs) unless spec.empty? raise NotImplementedError, "Unhandled keys: #{spec}" end end end def assert_documents_match(actual, expected) unless actual.length == expected.length raise Error::ResultMismatch, "Unexpected number of documents: expected #{expected.length}, actual #{actual.length}" end actual.each_with_index do |document, index| assert_matches(document, expected[index], "document ##{index}") end end def assert_document_matches(actual, expected, msg) unless actual == expected raise Error::ResultMismatch, "#{msg} does not match" end end def assert_events return unless @expected_events @expected_events.each do |spec| spec = UsingHash[spec] client_id = spec.use!('client') client = entities.get(:client, client_id) subscriber = @subscribers.fetch(client) expected_events = spec.use!('events') ignore_extra_events = if ignore = spec.use('ignoreExtraEvents') # Ruby treats 0 as truthy, whereas the spec tests use it as falsy. ignore == 0 ? false : ignore else false end actual_events = subscriber.wanted_events(@observe_sensitive[client_id]) case spec.use('eventType') when nil, 'command' actual_events.select! do |event| event.class.name.sub(/.*::/, '') =~ /^Command/ end when 'cmap' actual_events.select! do |event| event.class.name.sub(/.*::/, '') =~ /^(?:Pool|Connection)/ end end if (!ignore_extra_events && actual_events.length != expected_events.length) || (ignore_extra_events && actual_events.length < expected_events.length) raise Error::ResultMismatch, "Event count mismatch: expected #{expected_events.length}, actual #{actual_events.length}\nExpected: #{expected_events}\nActual: #{actual_events}" end expected_events.each_with_index do |event, i| assert_event_matches(actual_events[i], event) end unless spec.empty? 
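          # Any keys still present were not consumed by the checks above.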
raise NotImplementedError, "Unhandled keys: #{spec}" end end end def assert_event_matches(actual, expected) assert_eq(expected.keys.length, 1, "Expected event must have one key: #{expected}") expected_name, spec = expected.first spec = UsingHash[spec] expected_name = expected_name.sub(/Event$/, '').sub(/^(.)/) { $1.upcase } assert_eq(actual.class.name.sub(/.*::/, ''), expected_name, 'Event name does not match') if spec.use('hasServiceId') actual.service_id.should_not be nil end if spec.use('hasServerConnectionId') actual.server_connection_id.should_not be nil end if db_name = spec.use('databaseName') assert_eq(actual.database_name, db_name, 'Database names differ') end if command_name = spec.use('commandName') assert_eq(actual.command_name, command_name, 'Command names differ') end if command = spec.use('command') assert_matches(actual.command, command, 'Commands differ') end if reply = spec.use('reply') assert_matches(actual.reply, reply, 'Command reply does not match expectation') end if interrupt_in_use_connections = spec.use('interruptInUseConnections') assert_matches(actual.options[:interrupt_in_use_connections], interrupt_in_use_connections, 'Command interrupt_in_use_connections does not match expectation') end unless spec.empty? raise NotImplementedError, "Unhandled keys: #{spec}" end end def assert_eq(actual, expected, msg) unless expected == actual raise Error::ResultMismatch, "#{msg}: expected #{expected}, actual #{actual}" end end def assert_gte(actual, expected, msg) unless actual >= expected raise Error::ResultMismatch, "#{msg}: expected #{expected}, actual #{actual}" end end def assert_matches(actual, expected, msg) if actual.nil? if expected.is_a?(Hash) && expected.keys == ["$$unsetOrMatches"] return elsif !expected.nil? raise Error::ResultMismatch, "#{msg}: expected #{expected} but got nil" end end case expected when Array unless Array === actual raise Error::ResultMismatch, "Expected an array, found #{actual}" end unless actual.length == expected.length raise Error::ResultMismatch, "Expected array of length #{expected.length}, found array of length #{actual.length}: #{actual}" end expected.each_with_index do |v, i| assert_matches(actual[i], v, "#{msg}: index #{i}") end when Hash if expected.keys == %w($$unsetOrMatches) && expected.values.first.keys == %w(insertedId) actual_v = get_actual_value(actual, 'inserted_id') expected_v = expected.values.first.values.first assert_value_matches(actual_v, expected_v, 'inserted_id') elsif expected.keys == %w(insertedId) actual_v = get_actual_value(actual, 'inserted_id') expected_v = expected.values.first assert_value_matches(actual_v, expected_v, 'inserted_id') else if expected.empty? # This needs to be a match assertion. Check type only # and allow BulkWriteResult and generic operation result. 
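            # For example, an expected value of {} matches any hash or
            # result-like object (illustrative).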
unless Hash === actual || Mongo::BulkWrite::Result === actual || Mongo::Operation::Result === actual || Mongo::Crypt::RewrapManyDataKeyResult === actual raise Error::ResultMismatch, "#{msg}: expected #{expected}, actual #{actual}" end else expected.each do |k, expected_v| if k.start_with?('$$') assert_value_matches(actual, expected, k) else actual_v = get_actual_value(actual, k) if Hash === expected_v && expected_v.length == 1 && expected_v.keys.first.start_with?('$$') assert_value_matches(actual_v, expected_v, k) else assert_matches(actual_v, expected_v, "#{msg}: key #{k}") end end end end end else if Integer === expected && BSON::Int64 === actual actual = actual.value end unless actual == expected raise Error::ResultMismatch, "#{msg}: expected #{expected}, actual #{actual}" end end end # The actual value may be of different types depending on the operation. # In order to avoid having to write a lot of code to handle the different # types, we use this method to get the actual value. def get_actual_value(actual, key) if Hash === actual actual[key] elsif Mongo::Operation::Result === actual && !actual.respond_to?(key.to_sym) actual.documents.first[key] else actual.send(key) end end def assert_type(object, type) ok = [*type].reduce(false) { |acc, x| acc || type_matches?(object, x) } unless ok raise Error::ResultMismatch, "Object #{object} is not of type #{type}" end end def type_matches?(object, type) ok = case type when 'object' Hash === object when 'int', 'long' Integer === object || BSON::Int32 === object || BSON::Int64 === object when 'objectId' BSON::ObjectId === object when 'date' Time === object when 'double' Float === object when 'string' String === object when 'binData' BSON::Binary === object when 'array' Array === object else raise NotImplementedError, "Unhandled type #{type}" end end def assert_value_matches(actual, expected, msg) if Hash === expected && expected.keys.length == 1 && (operator = expected.keys.first).start_with?('$$') then expected_v = expected.values.first case operator when '$$unsetOrMatches' if actual if Mongo::BulkWrite::Result === actual || Mongo::Operation::Result === actual assert_result_matches(actual, UsingHash[expected_v]) else assert_matches(actual, expected_v, msg) end end when '$$matchesHexBytes' expected_data = decode_hex_bytes(expected_v) unless actual == expected_data raise Error::ResultMismatch, "Hex bytes do not match" end when '$$exists' case expected_v when true if actual.nil? raise Error::ResultMismatch, "#{msg}: wanted value to exist, but it did not" end when false if actual raise Error::ResultMismatch, "#{msg}: wanted value to not exist, but it did" end else raise NotImplementedError, "Bogus value #{expected_v}" end when '$$sessionLsid' expected_session = entities.get(:session, expected_v) # TODO - sessions do not expose server sessions after being ended #unless actual_v == {'id' => expected_session.server_session.session_id.to_bson} # raise Error::ResultMismatch, "Session does not match: wanted #{expected_session}, have #{actual_v}" #end when '$$type' assert_type(actual, expected_v) when '$$matchesEntity' result = entities.get(:result, expected_v) unless actual == result raise Error::ResultMismatch, "Actual value #{actual} does not match entity #{expected_v} with value #{result}" end when '$$lte' if actual.nil? 
|| actual > expected_v raise Error::ResultMismatch, "Actual value #{actual} should be less than or equal to #{expected_v}" end else raise NotImplementedError, "Unknown operator #{operator}" end else if actual != expected raise Error::ResultMismatch, "Mismatch for #{msg}: expected #{expected}, have #{actual}" end end end end end mongo-ruby-driver-2.21.3/spec/runners/unified/change_stream_operations.rb000066400000000000000000000022041505113246500266270ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all module Unified module ChangeStreamOperations def create_change_stream(op) object_id = op.use!('object') object = entities.get_any(object_id) use_arguments(op) do |args| pipeline = args.use!('pipeline') opts = extract_options(args, 'batchSize', 'comment', 'fullDocument', 'fullDocumentBeforeChange', 'showExpandedEvents', 'timeoutMS', 'maxAwaitTimeMS') cs = object.watch(pipeline, **opts) if name = op.use('saveResultAsEntity') entities.set(:change_stream, name, cs) end end end def iterate_until_document_or_error(op) object_id = op.use!('object') object = entities.get_any(object_id) object.try_next end def iterate_once(op) stream_id = op.use!('object') stream = entities.get_any(stream_id) stream.try_next end def close(op) object_id = op.use!('object') opts = op.key?('arguments') ? extract_options(op.use!('arguments'), 'timeoutMS') : {} object = entities.get_any(object_id) object.close(opts) end end end mongo-ruby-driver-2.21.3/spec/runners/unified/client_side_encryption_operations.rb000066400000000000000000000046201505113246500305670ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all module Unified module ClientSideEncryptionOperations def create_data_key(op) client_encryption = entities.get(:clientEncryption, op.use!('object')) use_arguments(op) do |args| opts = Utils.shallow_snakeize_hash(args.use('opts')) || {} opts[:master_key] = Utils.shallow_snakeize_hash(opts[:master_key]) if opts[:master_key] opts[:key_material] = opts[:key_material].data if opts[:key_material] client_encryption.create_data_key( args.use!('kmsProvider'), opts, ) end end def add_key_alt_name(op) client_encryption = entities.get(:clientEncryption, op.use!('object')) use_arguments(op) do |args| client_encryption.add_key_alt_name( args.use!('id'), args.use!('keyAltName') ) end end def delete_key(op) client_encryption = entities.get(:clientEncryption, op.use!('object')) use_arguments(op) do |args| client_encryption.delete_key( args.use!('id') ) end end def get_key(op) client_encryption = entities.get(:clientEncryption, op.use!('object')) use_arguments(op) do |args| client_encryption.get_key( args.use!('id') ) end end def get_key_by_alt_name(op) client_encryption = entities.get(:clientEncryption, op.use!('object')) use_arguments(op) do |args| client_encryption.get_key_by_alt_name( args.use!('keyAltName') ) end end def get_keys(op) client_encryption = entities.get(:clientEncryption, op.use!('object')) client_encryption.get_keys.to_a end def remove_key_alt_name(op) client_encryption = entities.get(:clientEncryption, op.use!('object')) use_arguments(op) do |args| client_encryption.remove_key_alt_name( args.use!('id'), args.use!('keyAltName') ) end end def rewrap_many_data_key(op) client_encryption = entities.get(:clientEncryption, op.use!('object')) use_arguments(op) do |args| opts = Utils.shallow_snakeize_hash(args.use('opts')) || {} opts[:master_key] = Utils.shallow_snakeize_hash(opts[:master_key]) if opts[:master_key] client_encryption.rewrap_many_data_key( args.use!('filter'), opts ) end
end end end mongo-ruby-driver-2.21.3/spec/runners/unified/crud_operations.rb000066400000000000000000000264041505113246500247740ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all module Unified module CrudOperations def crud_find(op) get_find_view(op).to_a end def find_one(op) get_find_view(op).first end def get_find_view(op) collection = entities.get(:collection, op.use!('object')) use_arguments(op) do |args| filter = args.use!('filter') session = args.use('session') opts = extract_options(args, 'let', 'comment', 'allowDiskUse', 'returnKey', 'projection', 'skip', 'hint', 'maxTimeMS', 'timeoutMS', 'collation', 'noCursorTimeout', 'oplogReplay', 'allowPartialResults', 'timeoutMode', 'maxAwaitTimeMS', 'cursorType', { 'showRecordId' => :show_disk_loc, 'max' => :max_value, 'min' => :min_value }, allow_extra: true) symbolize_options!(opts, :timeout_mode, :cursor_type) opts[:session] = entities.get(:session, session) if session req = collection.find(filter, **opts) if batch_size = args.use('batchSize') req = req.batch_size(batch_size) end if sort = args.use('sort') req = req.sort(sort) end if limit = args.use('limit') req = req.limit(limit) end if projection = args.use('projection') req = req.projection(projection) end req end end def count(op) collection = entities.get(:collection, op.use!('object')) use_arguments(op) do |args| opts = extract_options(args, 'comment', 'timeoutMS', 'maxTimeMS', allow_extra: true) if session = args.use('session') opts[:session] = entities.get(:session, session) end collection.count(args.use!('filter'), **opts) end end def count_documents(op) collection = entities.get(:collection, op.use!('object')) use_arguments(op) do |args| opts = extract_options(args, 'comment', 'timeoutMS', 'maxTimeMS', allow_extra: true) if session = args.use('session') opts[:session] = entities.get(:session, session) end collection.find(args.use!('filter')).count_documents(**opts) end end def estimated_document_count(op) collection = entities.get(:collection, op.use!('object')) use_arguments(op) do |args| opts = extract_options(args, 'comment', 'timeoutMS', 'maxTimeMS', allow_extra: true) if session = args.use('session') opts[:session] = entities.get(:session, session) end collection.estimated_document_count(**opts) end end def distinct(op) collection = entities.get(:collection, op.use!('object')) use_arguments(op) do |args| opts = extract_options(args, 'comment', 'timeoutMS', 'maxTimeMS', allow_extra: true) if session = args.use('session') opts[:session] = entities.get(:session, session) end collection.find(args.use!('filter'), **opts).distinct(args.use!('fieldName'), **opts).to_a end end def find_one_and_update(op) collection = entities.get(:collection, op.use!('object')) use_arguments(op) do |args| filter = args.use!('filter') update = args.use!('update') opts = { let: args.use('let'), comment: args.use('comment'), hint: args.use('hint'), upsert: args.use('upsert'), timeout_ms: args.use('timeoutMS'), max_time_ms: args.use('maxTimeMS') } if return_document = args.use('returnDocument') opts[:return_document] = return_document.downcase.to_sym end if session = args.use('session') opts[:session] = entities.get(:session, session) end collection.find_one_and_update(filter, update, **opts) end end def find_one_and_replace(op) collection = entities.get(:collection, op.use!('object')) use_arguments(op) do |args| filter = args.use!('filter') update = args.use!('replacement') opts = { let: args.use('let'), comment: args.use('comment'), hint:
args.use('hint'), timeout_ms: args.use('timeoutMS'), max_time_ms: args.use('maxTimeMS') } if session = args.use('session') opts[:session] = entities.get(:session, session) end collection.find_one_and_replace(filter, update, **opts) end end def find_one_and_delete(op) collection = entities.get(:collection, op.use!('object')) use_arguments(op) do |args| filter = args.use!('filter') opts = { let: args.use('let'), comment: args.use('comment'), hint: args.use('hint'), timeout_ms: args.use('timeoutMS'), max_time_ms: args.use('maxTimeMS') } if session = args.use('session') opts[:session] = entities.get(:session, session) end collection.find_one_and_delete(filter, **opts) end end def insert_one(op) collection = entities.get(:collection, op.use!('object')) use_arguments(op) do |args| opts = { comment: args.use('comment'), timeout_ms: args.use('timeoutMS'), max_time_ms: args.use('maxTimeMS') } if session = args.use('session') opts[:session] = entities.get(:session, session) end collection.insert_one(args.use!('document'), **opts) end end def insert_many(op) collection = entities.get(:collection, op.use!('object')) use_arguments(op) do |args| opts = { comment: args.use('comment'), timeout_ms: args.use('timeoutMS'), max_time_ms: args.use('maxTimeMS') } unless (ordered = args.use('ordered')).nil? opts[:ordered] = ordered end if session = args.use('session') opts[:session] = entities.get(:session, session) end collection.insert_many(args.use!('documents'), **opts) end end def update_one(op) collection = entities.get(:collection, op.use!('object')) use_arguments(op) do |args| opts = { let: args.use('let'), comment: args.use('comment'), hint: args.use('hint'), upsert: args.use('upsert'), timeout_ms: args.use('timeoutMS'), max_time_ms: args.use('maxTimeMS') } if session = args.use('session') opts[:session] = entities.get(:session, session) end collection.update_one(args.use!('filter'), args.use!('update'), **opts) end end def update_many(op) collection = entities.get(:collection, op.use!('object')) use_arguments(op) do |args| opts = { let: args.use('let'), comment: args.use('comment'), hint: args.use('hint'), timeout_ms: args.use('timeoutMS'), max_time_ms: args.use('maxTimeMS') } collection.update_many(args.use!('filter'), args.use!('update'), **opts) end end def replace_one(op) collection = entities.get(:collection, op.use!('object')) use_arguments(op) do |args| collection.replace_one( args.use!('filter'), args.use!('replacement'), comment: args.use('comment'), upsert: args.use('upsert'), let: args.use('let'), hint: args.use('hint'), timeout_ms: args.use('timeoutMS'), max_time_ms: args.use('maxTimeMS') ) end end def delete_one(op) collection = entities.get(:collection, op.use!('object')) use_arguments(op) do |args| opts = { let: args.use('let'), comment: args.use('comment'), hint: args.use('hint'), timeout_ms: args.use('timeoutMS'), max_time_ms: args.use('maxTimeMS') } if session = args.use('session') opts[:session] = entities.get(:session, session) end collection.delete_one(args.use!('filter'), **opts) end end def delete_many(op) collection = entities.get(:collection, op.use!('object')) use_arguments(op) do |args| opts = { let: args.use('let'), comment: args.use('comment'), hint: args.use('hint'), timeout_ms: args.use('timeoutMS'), max_time_ms: args.use('maxTimeMS') } collection.delete_many(args.use!('filter'), **opts) end end def bulk_write(op) collection = entities.get(:collection, op.use!('object')) use_arguments(op) do |args| requests = args.use!('requests').map do |req| convert_bulk_write_spec(req) 
end opts = {} if args.key?('ordered') opts[:ordered] = args.use!('ordered') end if comment = args.use('comment') opts[:comment] = comment end if let = args.use('let') opts[:let] = let end if timeout_ms = args.use('timeoutMS') opts[:timeout_ms] = timeout_ms end if max_time_ms = args.use('maxTimeMS') opts[:max_time_ms] = max_time_ms end collection.bulk_write(requests, **opts) end end def aggregate(op) obj = entities.get_any(op.use!('object')) args = op.use!('arguments') pipeline = args.use!('pipeline') opts = extract_options(args, 'let', 'comment', 'batchSize', 'maxTimeMS', 'allowDiskUse', 'timeoutMode', 'timeoutMS', allow_extra: true) symbolize_options!(opts, :timeout_mode) if session = args.use('session') opts[:session] = entities.get(:session, session) end unless args.empty? raise NotImplementedError, "Unhandled spec keys: #{args} in #{test_spec}" end obj.aggregate(pipeline, **opts).to_a end def create_find_cursor(op) obj = entities.get_any(op.use!('object')) args = op.use!('arguments') filter = args.use('filter') opts = extract_options(args, 'batchSize', 'timeoutMS', 'cursorType', 'maxAwaitTimeMS') symbolize_options!(opts, :cursor_type) view = obj.find(filter, opts) view.each # to initialize the cursor view.cursor end private def convert_bulk_write_spec(spec) unless spec.keys.length == 1 raise NotImplementedError, "Must have exactly one item" end op, spec = spec.first spec = UsingHash[spec] out = case op when 'insertOne' spec.use!('document') when 'updateOne', 'updateMany' { filter: spec.use('filter'), update: spec.use('update'), upsert: spec.use('upsert'), array_filters: spec.use('arrayFilters'), hint: spec.use('hint'), } when 'replaceOne' { filter: spec.use('filter'), replacement: spec.use('replacement'), upsert: spec.use('upsert'), hint: spec.use('hint'), } when 'deleteOne', 'deleteMany' { filter: spec.use('filter'), hint: spec.use('hint'), } else raise NotImplementedError, "Unknown operation #{op}" end unless spec.empty?
raise NotImplementedError, "Unhandled keys: #{spec}" end {Utils.underscore(op) =>out} end end end mongo-ruby-driver-2.21.3/spec/runners/unified/ddl_operations.rb000066400000000000000000000213741505113246500246030ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all module Unified module DdlOperations def list_databases(op) list_dbs(op, name_only: false) end def list_database_names(op) list_dbs(op, name_only: false) end def list_dbs(op, name_only: false) client = entities.get(:client, op.use!('object')) use_arguments(op) do |args| opts = {} if session = args.use('session') opts[:session] = entities.get(:session, session) end if timeout_ms = args.use('timeoutMS') opts[:timeout_ms] = timeout_ms end client.list_databases(args.use('filter') || {}, name_only, **opts) end end def create_collection(op) database = entities.get(:database, op.use!('object')) use_arguments(op) do |args| opts = {} if session = args.use('session') opts[:session] = entities.get(:session, session) end collection_opts = {} if timeseries = args.use('timeseries') collection_opts[:time_series] = timeseries end if expire_after_seconds = args.use('expireAfterSeconds') collection_opts[:expire_after] = expire_after_seconds end if clustered_index = args.use('clusteredIndex') collection_opts[:clustered_index] = clustered_index end if change_stream_pre_and_post_images = args.use('changeStreamPreAndPostImages') collection_opts[:change_stream_pre_and_post_images] = change_stream_pre_and_post_images end if view_on = args.use('viewOn') collection_opts[:view_on] = view_on end if pipeline = args.use('pipeline') collection_opts[:pipeline] = pipeline end if capped = args.use('capped') collection_opts[:capped] = capped end if size = args.use('size') collection_opts[:size] = size end if max = args.use('max') collection_opts[:max] = max end database[args.use!('collection'), collection_opts].create(**opts) end end def list_collections(op) list_colls(op, name_only: false) end def list_collection_names(op) list_colls(op, name_only: true) end def list_colls(op, name_only: false) database = entities.get(:database, op.use!('object')) use_arguments(op) do |args| opts = extract_options(args, 'filter', 'timeoutMode', allow_extra: true) symbolize_options!(opts, :timeout_mode) if session = args.use('session') opts[:session] = entities.get(:session, session) end if timeout_ms = args.use('timeoutMS') opts[:timeout_ms] = timeout_ms end database.list_collections(**opts.merge(name_only: name_only)) end end def drop_collection(op) database = entities.get(:database, op.use!('object')) use_arguments(op) do |args| collection = database[args.use!('collection')] collection.drop end end def rename(op) collection = entities.get(:collection, op.use!('object')) use_arguments(op) do |args| to = args.use!('to') cmd = { renameCollection: "#{collection.database.name}.#{collection.name}", to: "#{collection.database.name}.#{to}" } if args.key?("dropTarget") cmd[:dropTarget] = args.use("dropTarget") end collection.client.use(:admin).command(**cmd) end end def assert_collection_exists(op, state = true) consume_test_runner(op) use_arguments(op) do |args| client = ClientRegistry.instance.global_client('authorized') database = client.use(args.use!('databaseName')).database collection_name = args.use!('collectionName') if state unless database.collection_names.include?(collection_name) raise Error::ResultMismatch, "Expected collection #{collection_name} to exist, but it does not" end else if database.collection_names.include?(collection_name) raise 
Error::ResultMismatch, "Expected collection #{collection_name} to not exist, but it does" end end end end def assert_collection_not_exists(op) assert_collection_exists(op, false) end def list_indexes(op) collection = entities.get(:collection, op.use!('object')) use_arguments(op) do |args| opts = extract_options(args, 'timeoutMode', allow_extra: true) if session = args.use('session') opts[:session] = entities.get(:session, session) end if timeout_ms = args.use('timeoutMS') opts[:timeout_ms] = timeout_ms end collection.indexes(**opts).to_a end end def drop_indexes(op) collection = entities.get(:collection, op.use!('object')) use_arguments(op) do |args| opts = extract_options(args, 'maxTimeMS', 'timeoutMS', allow_extra: true) collection.indexes.drop_all(**opts) end end def create_index(op) collection = entities.get(:collection, op.use!('object')) use_arguments(op) do |args| opts = {} if session = args.use('session') opts[:session] = entities.get(:session, session) end if args.key?('unique') opts[:unique] = args.use('unique') end if timeout_ms = args.use('timeoutMS') opts[:timeout_ms] = timeout_ms end if max_time_ms = args.use('maxTimeMS') opts[:max_time_ms] = max_time_ms end collection.indexes.create_one( args.use!('keys'), name: args.use('name'), **opts, ) end end def drop_index(op) collection = entities.get(:collection, op.use!('object')) use_arguments(op) do |args| opts = extract_options(args, 'maxTimeMS', 'timeoutMS', allow_extra: true) if session = args.use('session') opts[:session] = entities.get(:session, session) end collection.indexes.drop_one( args.use!('name'), **opts, ) end end def assert_index_exists(op) consume_test_runner(op) use_arguments(op) do |args| client = ClientRegistry.instance.global_client('authorized') database = client.use(args.use!('databaseName')) collection = database[args.use!('collectionName')] index = collection.indexes.get(args.use!('indexName')) end end def assert_index_not_exists(op) consume_test_runner(op) use_arguments(op) do |args| client = ClientRegistry.instance.global_client('authorized') database = client.use(args.use!('databaseName')) collection = database[args.use!('collectionName')] begin index = collection.indexes.get(args.use!('indexName')) raise Error::ResultMismatch, "Index found" rescue Mongo::Error::OperationFailure::Family => e if e.code == 26 # OK else raise end end end end def create_entities(op) consume_test_runner(op) use_arguments(op) do |args| generate_entities(args.use!('entities')) end end def record_topology_description(op) consume_test_runner(op) use_arguments(op) do |args| client = entities.get(:client, args.use!('client')) entities.set(:topology, args.use!('id'), client.cluster.topology) end end def assert_topology_type(op) consume_test_runner(op) use_arguments(op) do |args| topology = entities.get(:topology, args.use!('topologyDescription')) type = args.use!('topologyType') unless topology.display_name == type raise Error::ResultMismatch, "Expected topology type to be #{type}, but got #{topology.class}" end end end def retrieve_primary(topology) topology.server_descriptions.detect { |k, desc| desc.primary? }&.first end def wait_for_primary_change(op) consume_test_runner(op) use_arguments(op) do |args| client = entities.get(:client, args.use!('client')) topology = entities.get(:topology, args.use!('priorTopologyDescription')) timeout_ms = args.use('timeoutMS') || 10000 old_primary = retrieve_primary(topology) deadline = Mongo::Utils.monotonic_time + timeout_ms / 1000.0 loop do client.cluster.scan! 
new_primary = client.cluster.next_primary.address if new_primary && old_primary != new_primary break end if Mongo::Utils.monotonic_time >= deadline raise "Did not receive a change in primary from #{old_primary} in 10 seconds" else sleep 0.1 end end end end end end mongo-ruby-driver-2.21.3/spec/runners/unified/entity_map.rb000066400000000000000000000015641505113246500237450ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all module Unified class EntityMap extend Forwardable def initialize @map = {} end def set(type, id, value) @map[type] ||= {} if @map[type][id] raise Error::EntityMapOverwriteAttempt, "Cannot set #{type} #{id} because it is already defined" end @map[type][id] = value end def get(type, id) unless @map[type] raise Error::EntityMissing, "There are no #{type} entities known" end unless v = @map[type][id] raise Error::EntityMissing, "There is no #{type} #{id} known" end v end def get_any(id) @map.each do |type, sub| if sub[id] return sub[id] end end raise Error::EntityMissing, "There is no #{id} known" end def_delegators :@map, :[], :fetch end end mongo-ruby-driver-2.21.3/spec/runners/unified/error.rb000066400000000000000000000006341505113246500227220ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all module Unified class Error < StandardError class ResultMismatch < Error end class ErrorMismatch < Error end class UnhandledField < Error end class EntityMapOverwriteAttempt < Error end class EntityMissing < Error end class InvalidTest < Error end class UnsupportedOperation < Error end end end mongo-ruby-driver-2.21.3/spec/runners/unified/event_subscriber.rb000066400000000000000000000064141505113246500251370ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'mrss/event_subscriber' module Unified class EventSubscriber < Mrss::EventSubscriber def ignore_commands(command_names) @ignore_commands = command_names end def wanted_events(observe_sensitive = false) events = all_events.select do |event| kind = event.class.name.sub(/.*::/, '').sub('Command', '').gsub(/([A-Z])/) { "_#{$1}" }.sub(/^_/, '').downcase.to_sym @wanted_events[kind] end.select do |event| if event.respond_to?(:command_name) event.command_name != 'configureFailPoint' && if @ignore_commands !@ignore_commands.include?(event.command_name) else true end else true end end if observe_sensitive events else events.reject do |event| if event.respond_to?(:command_name) # event could be a command started event or command succeeded event command = event.respond_to?(:command) ? event.command : event.started_event.command %w(authenticate getnonce saslStart saslContinue).include?(event.command_name) || # if the command is empty that means we used speculativeAuth and we should # reject the event. (%w(hello ismaster isMaster).include?(event.command_name) && command.empty?) 
end end end end def add_wanted_events(kind) @wanted_events ||= {} @wanted_events[kind] = true end end class StoringEventSubscriber def initialize(&block) @handler = block end def started(event) @handler.call( 'name' => event.class.name.sub(/.*::/, '') + 'Event', 'commandName' => event.command_name, 'databaseName' => event.database_name, 'observedAt' => Time.now.to_f, 'address' => event.address.seed, 'requestId' => event.request_id, 'operationId' => event.operation_id, 'connectionId' => event.connection_id, ) end def succeeded(event) @handler.call( 'name' => event.class.name.sub(/.*::/, '') + 'Event', 'commandName' => event.command_name, 'duration' => event.duration, 'observedAt' => Time.now.to_f, 'address' => event.address.seed, 'requestId' => event.request_id, 'operationId' => event.operation_id, ) end def failed(event) @handler.call( 'name' => event.class.name.sub(/.*::/, '') + 'Event', 'commandName' => event.command_name, 'duration' => event.duration, 'failure' => event.failure, 'observedAt' => Time.now.to_f, 'address' => event.address.seed, 'requestId' => event.request_id, 'operationId' => event.operation_id, ) end def published(event) payload = { 'name' => event.class.name.sub(/.*::/, '') + 'Event', 'observedAt' => Time.now.to_f, 'address' => event.address.seed, }.tap do |entry| if event.respond_to?(:connection_id) entry['connectionId'] = event.connection_id end if event.respond_to?(:reason) entry['reason'] = event.reason end end @handler.call(payload) end end end mongo-ruby-driver-2.21.3/spec/runners/unified/exceptions.rb000066400000000000000000000004571505113246500237550ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all module Unified class Error < StandardError end class ResultMismatch < Error end class ErrorMismatch < Error end class EntityMapOverwriteAttempt < Error end class EntityMissing < Error end class InvalidTest < Error end end mongo-ruby-driver-2.21.3/spec/runners/unified/grid_fs_operations.rb000066400000000000000000000057131505113246500254540ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all module Unified module GridFsOperations def gridfs_find(op) bucket = entities.get(:bucket, op.use!('object')) use_arguments(op) do |args| filter = args.use!('filter') opts = extract_options(args, 'allowDiskUse', 'skip', 'hint', 'timeoutMS', 'noCursorTimeout', 'sort', 'limit') bucket.find(filter, opts).to_a end end def delete(op) bucket = entities.get(:bucket, op.use!('object')) use_arguments(op) do |args| opts = {} if timeout_ms = args.use('timeoutMS') opts[:timeout_ms] = timeout_ms end bucket.delete(args.use!('id'), opts) end end def download(op) bucket = entities.get(:bucket, op.use!('object')) use_arguments(op) do |args| opts = {} if timeout_ms = args.use('timeoutMS') opts[:timeout_ms] = timeout_ms end stream = bucket.open_download_stream(args.use!('id'), opts) stream.read end end def download_by_name(op) bucket = entities.get(:bucket, op.use!('object')) use_arguments(op) do |args| opts = {} if revision = args.use('revision') opts[:revision] = revision end stream = bucket.open_download_stream_by_name(args.use!('filename'), opts) stream.read end end def upload(op) bucket = entities.get(:bucket, op.use!('object')) use_arguments(op) do |args| opts = {} if chunk_size = args.use('chunkSizeBytes') opts[:chunk_size] = chunk_size end if metadata = args.use('metadata') opts[:metadata] = metadata end if content_type = args.use('contentType') opts[:content_type] = content_type end if disable_md5 = args.use('disableMD5') opts[:disable_md5] =
disable_md5 end if timeout_ms = args.use('timeoutMS') opts[:timeout_ms] = timeout_ms end contents = transform_contents(args.use!('source')) file_id = nil bucket.open_upload_stream(args.use!('filename'), **opts) do |stream| stream.write(contents) file_id = stream.file_id end file_id end end def drop(op) bucket = entities.get(:bucket, op.use!('object')) use_arguments(op) do |args| opts = {} if timeout_ms = args.use('timeoutMS') opts[:timeout_ms] = timeout_ms end bucket.drop(opts) end end private def transform_contents(contents) if Hash === contents if contents.length != 1 raise NotImplementedError, "Wanted hash with one element" end if contents.keys.first != '$$hexBytes' raise NotImplementedError, "$$hexBytes is the only key supported" end decode_hex_bytes(contents.values.first) else contents end end end end mongo-ruby-driver-2.21.3/spec/runners/unified/search_index_operations.rb000066400000000000000000000033131505113246500264650ustar00rootroot00000000000000# frozen_string_literal: true module Unified # The definitions of available search index operations, as used by the # unified tests. module SearchIndexOperations def create_search_index(op) collection = entities.get(:collection, op.use!('object')) use_arguments(op) do |args| model = args.use('model') name = model.use('name') definition = model.use('definition') type = model.use('type') collection.search_indexes.create_one(definition, name: name, type: type) end end def create_search_indexes(op) collection = entities.get(:collection, op.use!('object')) use_arguments(op) do |args| models = args.use('models') collection.search_indexes.create_many(models) end end def drop_search_index(op) collection = entities.get(:collection, op.use!('object')) use_arguments(op) do |args| collection.search_indexes.drop_one( id: args.use('id'), name: args.use('name') ) end end def list_search_indexes(op) collection = entities.get(:collection, op.use!('object')) use_arguments(op) do |args| agg_opts = args.use('aggregationOptions') || {} collection.search_indexes( id: args.use('id'), name: args.use('name'), aggregate: ::Utils.underscore_hash(agg_opts) ).to_a end end def update_search_index(op) collection = entities.get(:collection, op.use!('object')) use_arguments(op) do |args| collection.search_indexes.update_one( args.use('definition'), id: args.use('id'), name: args.use('name') ) end end end end mongo-ruby-driver-2.21.3/spec/runners/unified/support_operations.rb000066400000000000000000000271751505113246500255610ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all module Unified module SupportOperations def run_command(op) database = entities.get(:database, op.use!('object')) use_arguments(op) do |args| args.use!('commandName') cmd = args.use!('command') opts = {} if session = args.use('session') opts[:session] = entities.get(:session, session) end if read_preference = args.use('readPreference') opts[:read] = ::Utils.snakeize_hash(read_preference) end if timeout_ms = args.use('timeoutMS') opts[:timeout_ms] = timeout_ms end database.command(cmd, **opts) end end def fail_point(op) consume_test_runner(op) use_arguments(op) do |args| client = entities.get(:client, args.use!('client')) client.command(fp = args.use('failPoint')) $disable_fail_points ||= [] $disable_fail_points << [ fp, ClusterConfig.instance.primary_address, ] end end def targeted_fail_point(op) consume_test_runner(op) use_arguments(op) do |args| session = args.use!('session') session = entities.get(:session, session) unless session.pinned_server raise ArgumentError, 
'Targeted fail point requires session to be pinned to a server' end client = ClusterTools.instance.direct_client(session.pinned_server.address, database: 'admin') client.command(fp = args.use!('failPoint')) args.clear $disable_fail_points ||= [] $disable_fail_points << [ fp, session.pinned_server.address, ] end end def end_session(op) session = entities.get(:session, op.use!('object')) session.end_session end def assert_session_dirty(op) consume_test_runner(op) use_arguments(op) do |args| session = entities.get(:session, args.use!('session')) session.dirty? || raise(Error::ResultMismatch, 'expected session to be dirty') end end def assert_session_not_dirty(op) consume_test_runner(op) use_arguments(op) do |args| session = entities.get(:session, args.use!('session')) session.dirty? && raise(Error::ResultMismatch, 'expected session to be not dirty') end end def assert_same_lsid_on_last_two_commands(op, expected: true) consume_test_runner(op) use_arguments(op) do |args| client = entities.get(:client, args.use!('client')) subscriber = @subscribers.fetch(client) unless subscriber.started_events.length >= 2 raise Error::ResultMismatch, "Must have at least 2 events, have #{subscriber.started_events.length}" end lsids = subscriber.started_events[-2..-1].map do |cmd| cmd.command.fetch('lsid') end if expected unless lsids.first == lsids.last raise Error::ResultMismatch, "lsids differ but they were expected to be the same" end else if lsids.first == lsids.last raise Error::ResultMismatch, "lsids are the same but they were expected to be different" end end end end def assert_different_lsid_on_last_two_commands(op) assert_same_lsid_on_last_two_commands(op, expected: false) end def start_transaction(op) $kill_transactions = true session = entities.get(:session, op.use!('object')) assert_no_arguments(op) session.start_transaction end def assert_session_transaction_state(op) consume_test_runner(op) use_arguments(op) do |args| session = entities.get(:session, args.use!('session')) state = args.use!('state') unless session.send("#{state}_transaction?") raise Error::ResultMismatch, "Expected session to have state #{state}" end end end def commit_transaction(op) session = entities.get(:session, op.use!('object')) opts = {} use_arguments(op) do |args| opts[:timeout_ms] = args.use('timeoutMS') end session.commit_transaction(opts.compact) end def abort_transaction(op) session = entities.get(:session, op.use!('object')) opts = {} use_arguments(op) do |args| opts[:timeout_ms] = args.use('timeoutMS') end session.abort_transaction(opts.compact) end def with_transaction(op) $kill_transactions = true session = entities.get(:session, op.use!('object')) use_arguments(op) do |args| ops = args.use!('callback') if args.empty?
opts = {} else opts = ::Utils.underscore_hash(args) if value = opts[:read_concern]&.[](:level) opts[:read_concern][:level] = value.to_sym end args.clear end session.with_transaction(**opts) do execute_operations(ops) end end end def assert_session_pinned(op, state = true) consume_test_runner(op) use_arguments(op) do |args| session = entities.get(:session, args.use!('session')) if state unless session.pinned_server raise Error::ResultMismatch, 'Expected session to be pinned but it is not' end else if session.pinned_server raise Error::ResultMismatch, 'Expected session to be not pinned but it is' end end end end def assert_session_unpinned(op) assert_session_pinned(op, false) end def _loop(op) consume_test_runner(op) use_arguments(op) do |args| ops = args.use!('operations') if store_errors = args.use('storeErrorsAsEntity') entities.set(:error_list, store_errors, []) end if store_failures = args.use('storeFailuresAsEntity') entities.set(:failure_list, store_failures, []) end store_iterations = args.use('storeIterationsAsEntity') iterations = 0 store_successes = args.use('storeSuccessesAsEntity') successes = 0 loop do break if stop? begin ops.map(&:dup).each do |op| execute_operation(op) successes += 1 end rescue Unified::Error::ResultMismatch => e if store_failures STDERR.puts "Failure: #{e.class}: #{e}" entities.get(:failure_list, store_failures) << { error: "#{e.class}: #{e}", time: Time.now.to_f, } elsif store_errors STDERR.puts "Failure: #{e.class}: #{e} (reporting as error)" entities.get(:error_list, store_errors) << { error: "#{e.class}: #{e}", time: Time.now.to_f, } else raise end rescue Interrupt raise rescue => e if store_failures STDERR.puts "Error: #{e.class}: #{e} (reporting as failure)" entities.get(:failure_list, store_failures) << { error: "#{e.class}: #{e}", time: Time.now.to_f, } elsif store_errors STDERR.puts "Error: #{e.class}: #{e}" entities.get(:error_list, store_errors) << { error: "#{e.class}: #{e}", time: Time.now.to_f, } else raise end end iterations += 1 end if store_iterations entities.set(:iteration_count, store_iterations, iterations) end if store_successes entities.set(:success_count, store_successes, successes) end end end def assert_event_count(op) consume_test_runner(op) use_arguments(op) do |args| client = entities.get(:client, args.use!('client')) subscriber = @subscribers.fetch(client) event = args.use!('event') assert_eq(event.keys.length, 1, "Expected event must have one key: #{event}") count = args.use!('count') events = select_events(subscriber, event) if %w(serverDescriptionChangedEvent poolClearedEvent).include?(event.keys.first) # We publish SDAM events from both regular and push monitors. # This means sometimes there are two ServerMarkedUnknownEvent # events published for the same server transition. # Allow actual event count to be at least the expected event count # in case there are multiple transitions in a single test. 
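# For example, a test expecting count: 1 for serverDescriptionChangedEvent
# may legitimately observe 2 events when the regular and push monitors
# both report the same transition.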
assert_gte(events.length, count, "Expected event #{event} to occur #{count} times but received it #{events.length} times.") else assert_eq(events.length, count, "Expected event #{event} to occur #{count} times but received it #{events.length} times.") end end end def select_events(subscriber, event) expected_name, opts = event.first expected_name = expected_name.sub(/Event$/, '').sub(/^(.)/) { $1.upcase } subscriber.wanted_events.select do |wevent| if wevent.class.name.sub(/.*::/, '') == expected_name spec = UsingHash[opts] result = true if new_desc = spec.use('newDescription') if type = new_desc.use('type') result &&= wevent.new_description.server_type == type.downcase.to_sym end end unless spec.empty? raise NotImplementedError, "Unhandled keys: #{spec}" end result end end end def assert_number_connections_checked_out(op) consume_test_runner(op) use_arguments(op) do |args| client = entities.get(:client, args.use!('client')) connections = args.use!('connections') actual_c = client.cluster.servers.map(&:pool_internal).compact.sum do |p| p.instance_variable_get(:@checked_out_connections).length end assert_eq(actual_c, connections, "Expected client #{client} to have #{connections} checked out connections but there are #{actual_c}.") end end private # @param [ UsingHash ] args the arguments to extract options from # @param [ Array ] keys an array of strings and Hashes, # where Hashes represent a mapping from the MDB key to the corresponding # Ruby key. For Strings, the Ruby key is assumed to be a simple conversion # of the MDB key, from camel-case to snake-case. # @param [ true | false ] allow_extra whether or not extra keys are allowed # to exist in the args hash, beyond those listed. def extract_options(args, *keys, allow_extra: false) {}.tap do |opts| keys.each do |key| Array(key).each do |mdb_key, ruby_key| value = args.use(mdb_key) opts[ruby_key || mdb_name_to_ruby(mdb_key)] = value unless value.nil? end end raise NotImplementedError, "unhandled keys: #{args}" if !allow_extra && !args.empty?
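# At this point every recognized key has been consumed from args and
# remapped; e.g. (a hypothetical input, not from a spec file)
# 'batchSize' => 10 would appear here as { batch_size: 10 }
# via mdb_name_to_ruby below.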
end end def symbolize_options!(opts, *keys) keys.each do |key| opts[key] = mdb_name_to_ruby(opts[key]) if opts[key] end end def mdb_name_to_ruby(name) name.to_s.gsub(/([a-z])([A-Z])/) { "#{$1}_#{$2}" }.downcase.to_sym end def assert_no_arguments(op) if op.key?('arguments') raise NotImplementedError, "Arguments are not allowed" end end def consume_test_runner(op) v = op.use!('object') unless v == 'testRunner' raise NotImplementedError, 'Expected object to be testRunner' end end def decode_hex_bytes(value) value.scan(/../).map { |hex| hex.to_i(16).chr }.join end end end mongo-ruby-driver-2.21.3/spec/runners/unified/test.rb000066400000000000000000000515541505113246500225570ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'runners/crud/requirement' require 'runners/unified/ambiguous_operations' require 'runners/unified/client_side_encryption_operations' require 'runners/unified/crud_operations' require 'runners/unified/grid_fs_operations' require 'runners/unified/ddl_operations' require 'runners/unified/change_stream_operations' require 'runners/unified/support_operations' require 'runners/unified/thread_operations' require 'runners/unified/search_index_operations' require 'runners/unified/assertions' require 'support/utils' require 'support/crypt' module Unified class Test include AmbiguousOperations include ClientSideEncryptionOperations include CrudOperations include GridFsOperations include DdlOperations include ChangeStreamOperations include SupportOperations include ThreadOperations include SearchIndexOperations include Assertions include RSpec::Core::Pending def initialize(spec, **opts) @spec = spec @entities = EntityMap.new @test_spec = UsingHash[@spec.fetch('test')] @description = @test_spec.use('description') @outcome = @test_spec.use('outcome') @expected_events = @test_spec.use('expectEvents') @skip_reason = @test_spec.use('skipReason') if req = @test_spec.use('runOnRequirements') @reqs = req.map { |r| Mongo::CRUD::Requirement.new(r) } end if req = @spec['group_runOnRequirements'] @group_reqs = req.map { |r| Mongo::CRUD::Requirement.new(r) } end if @spec['createEntities'] mongoses = @spec['createEntities'].select do |spec| spec['client'] end.map do |spec| spec['client']['useMultipleMongoses'] end.compact.uniq @multiple_mongoses = mongoses.any? { |v| v } end @test_spec.freeze @subscribers = {} @observe_sensitive = {} @options = opts end attr_reader :test_spec attr_reader :description attr_reader :outcome attr_reader :skip_reason attr_reader :reqs, :group_reqs attr_reader :options def retry? @description =~ /KMS/i end def skip? !!@skip_reason end def require_multiple_mongoses? @multiple_mongoses == true end def require_single_mongos? @multiple_mongoses == false end attr_reader :entities def create_spec_entities return if @entities_created generate_entities(@spec['createEntities']) end def generate_entities(es) return if es.nil? es.each do |entity_spec| unless entity_spec.keys.length == 1 raise NotImplementedError, "Entity must have exactly one key" end type, spec = entity_spec.first spec = UsingHash[spec] id = spec.use!('id') entity = case type when 'client' if smc_opts = spec.use('uriOptions') opts = Mongo::URI::OptionsMapper.new.smc_to_ruby(smc_opts) else opts = {} end # max_pool_size gets automatically set to 3 if not explicitly set by # the test; therefore, if min_pool_size is set, make sure to set the # max_pool_size as well to something greater.
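# For example (hypothetical): min_pool_size: 2 with no explicit
# max_pool_size yields max_pool_size: 5 below.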
if !opts.key?(:max_pool_size) && min_pool_size = opts[:min_pool_size] opts[:max_pool_size] = min_pool_size + 3 end if spec.use('useMultipleMongoses') if ClusterConfig.instance.topology == :sharded unless SpecConfig.instance.addresses.length > 1 raise "useMultipleMongoses requires more than one address in MONGODB_URI" end end else # If useMultipleMongoses isn't true, truncate the address # list to the first address. # This works OK in replica sets because the driver will discover # the other set members, in standalone deployments because # there is only one server, but changes behavior in # sharded clusters compared to how the test suite is configured. options[:single_address] = true end if store_events = spec.use('storeEventsAsEntities') store_event_names = {} store_events.each do |spec| entity_name = spec['id'] event_names = spec['events'] event_names.each do |event_name| store_event_names[event_name] = entity_name end end store_event_names.values.uniq.each do |entity_name| entities.set(:event_list, entity_name, []) end subscriber = StoringEventSubscriber.new do |payload| if entity_name = store_event_names[payload['name']] entities.get(:event_list, entity_name) << payload end end opts[:sdam_proc] = lambda do |client| client.subscribe(Mongo::Monitoring::COMMAND, subscriber) client.subscribe(Mongo::Monitoring::CONNECTION_POOL, subscriber) end end if server_api = spec.use('serverApi') server_api = ::Utils.underscore_hash(server_api) opts[:server_api] = server_api end observe_events = spec.use('observeEvents') subscriber = EventSubscriber.new current_proc = opts[:sdam_proc] opts[:sdam_proc] = lambda do |client| current_proc.call(client) if current_proc if oe = observe_events oe.each do |event| case event when 'commandStartedEvent', 'commandSucceededEvent', 'commandFailedEvent' unless client.send(:monitoring).subscribers[Mongo::Monitoring::COMMAND].include?(subscriber) client.subscribe(Mongo::Monitoring::COMMAND, subscriber) end kind = event.sub('command', '').sub('Event', '').downcase.to_sym subscriber.add_wanted_events(kind) if ignore_events = spec.use('ignoreCommandMonitoringEvents') subscriber.ignore_commands(ignore_events) end when /\A(?:pool|connection)/ unless client.send(:monitoring).subscribers[Mongo::Monitoring::CONNECTION_POOL]&.include?(subscriber) client.subscribe(Mongo::Monitoring::CONNECTION_POOL, subscriber) end kind = event.sub('Event', '').gsub(/([A-Z])/) { "_#{$1}" }.sub('pool', 'Pool').downcase.to_sym subscriber.add_wanted_events(kind) when 'serverDescriptionChangedEvent' unless client.send(:monitoring).subscribers[Mongo::Monitoring::SERVER_DESCRIPTION_CHANGED]&.include?(subscriber) client.subscribe(Mongo::Monitoring::SERVER_DESCRIPTION_CHANGED, subscriber) end kind = event.sub('Event', '').gsub(/([A-Z])/) { "_#{$1}" }.downcase.to_sym subscriber.add_wanted_events(kind) else raise NotImplementedError, "Unknown event #{event}" end end end end create_client(**opts).tap do |client| @observe_sensitive[id] = spec.use('observeSensitiveCommands') @subscribers[client] ||= subscriber end when 'database' client = entities.get(:client, spec.use!('client')) opts = Utils.snakeize_hash(spec.use('databaseOptions') || {}) .merge(database: spec.use!('databaseName')) if opts.key?(:read_preference) opts[:read] = opts.delete(:read_preference) if opts[:read].key?(:max_staleness_seconds) opts[:read][:max_staleness] = opts[:read].delete(:max_staleness_seconds) end end client.with(opts).database when 'collection' database = entities.get(:database, spec.use!('database')) # TODO verify opts =
Utils.snakeize_hash(spec.use('collectionOptions') || {}) if opts.key?(:read_preference) opts[:read] = opts.delete(:read_preference) if opts[:read].key?(:max_staleness_seconds) opts[:read][:max_staleness] = opts[:read].delete(:max_staleness_seconds) end end database[spec.use!('collectionName'), opts] when 'bucket' database = entities.get(:database, spec.use!('database')) database.fs when 'session' client = entities.get(:client, spec.use!('client')) if smc_opts = spec.use('sessionOptions') opts = ::Utils.underscore_hash(smc_opts) else opts = {} end client.start_session(**opts).tap do |session| session.advance_cluster_time(@cluster_time) end when 'clientEncryption' client_encryption_opts = spec.use!('clientEncryptionOpts') key_vault_client = entities.get(:client, client_encryption_opts['keyVaultClient']) opts = { key_vault_namespace: client_encryption_opts['keyVaultNamespace'], kms_providers: Utils.snakeize_hash(client_encryption_opts['kmsProviders']), kms_tls_options: { kmip: { ssl_cert: SpecConfig.instance.fle_kmip_tls_certificate_key_file, ssl_key: SpecConfig.instance.fle_kmip_tls_certificate_key_file, ssl_ca_cert: SpecConfig.instance.fle_kmip_tls_ca_file } } } opts[:kms_providers] = opts[:kms_providers].map do |provider, options| converted_options = options.map do |key, value| converted_value = if value == { '$$placeholder'.to_sym => 1 } case provider when :aws case key when :access_key_id then SpecConfig.instance.fle_aws_key when :secret_access_key then SpecConfig.instance.fle_aws_secret end when :azure case key when :tenant_id then SpecConfig.instance.fle_azure_tenant_id when :client_id then SpecConfig.instance.fle_azure_client_id when :client_secret then SpecConfig.instance.fle_azure_client_secret end when :gcp case key when :email then SpecConfig.instance.fle_gcp_email when :private_key then SpecConfig.instance.fle_gcp_private_key end when :kmip case key when :endpoint then SpecConfig.instance.fle_kmip_endpoint end when :local case key when :key then Crypt::LOCAL_MASTER_KEY end end else value end [key, converted_value] end.to_h [provider, converted_options] end.to_h Mongo::ClientEncryption.new( key_vault_client, opts ) when 'thread' thread_context = ThreadContext.new thread = Thread.new do loop do begin op_spec = thread_context.operations.pop(true) execute_operation(op_spec) rescue ThreadError # Queue is empty end if thread_context.stop? break else sleep 0.1 end end end class << thread attr_accessor :context end thread.context = thread_context thread else raise NotImplementedError, "Unknown type #{type}" end unless spec.empty? raise NotImplementedError, "Unhandled spec keys: #{spec}" end entities.set(type.to_sym, id, entity) end @entities_created = true end def set_initial_data @spec['initialData']&.each do |entity_spec| spec = UsingHash[entity_spec] collection = root_authorized_client.with(write_concern: {w: :majority}). use(spec.use!('databaseName'))[spec.use!('collectionName')] collection.drop create_options = spec.use('createOptions') || {} docs = spec.use!('documents') begin collection.create(create_options) rescue Mongo::Error => e if Mongo::Error::OperationFailure::Family === e && ( e.code == 48 || e.message =~ /collection already exists/ ) # Already exists else raise end end if docs.any? collection.insert_many(docs) end unless spec.empty? raise NotImplementedError, "Unhandled spec keys: #{spec}" end end # the cluster time is used to advance the cluster time of any # sessions created during this test. 
# -> see DRIVERS-2816 @cluster_time = root_authorized_client.command(ping: 1).cluster_time end def run kill_sessions test_spec = UsingHash[self.test_spec] ops = test_spec.use!('operations') execute_operations(ops) unless test_spec.empty? raise NotImplementedError, "Unhandled spec keys: #{test_spec}" end ensure disable_fail_points end def stop! @stop = true end def stop? !!@stop end def cleanup if $kill_transactions || true kill_sessions $kill_transactions = nil end entities[:client]&.each do |id, client| client.close end end private def execute_operations(ops) ops.each do |op| execute_operation(op) end end def execute_operation(op) use_all(op, 'operation', op) do |op| name = Utils.underscore(op.use!('name')) method_name = name if name.to_s == 'loop' method_name = "_#{name}" end if ["modify_collection", "list_index_names"].include?(name.to_s) skip "Mongo Ruby Driver does not support #{name.to_s}" end if expected_error = op.use('expectError') begin unless respond_to?(method_name) raise Error::UnsupportedOperation, "Mongo Ruby Driver does not support #{name.to_s}" end public_send(method_name, op) rescue Mongo::Error, bson_error, Mongo::Auth::Unauthorized, ArgumentError => e if expected_error.use('isTimeoutError') unless Mongo::Error::TimeoutError === e raise Error::ErrorMismatch, "Expected TimeoutError (\"isTimeoutError\") but got #{e}" end end if expected_error.use('isClientError') # isClientError doesn't actually mean a client error. # It means anything other than OperationFailure. DRIVERS-1799 if Mongo::Error::OperationFailure::Family === e raise Error::ErrorMismatch, "Expected not OperationFailure (\"isClientError\") but got #{e}" end end if code = expected_error.use('errorCode') unless e.code == code raise Error::ErrorMismatch, "Expected #{code} code but had #{e.code}" end end if code_name = expected_error.use('errorCodeName') unless e.code_name == code_name raise Error::ErrorMismatch, "Expected #{code_name} code name but had #{e.code_name}" end end if text = expected_error.use('errorContains') unless e.to_s.include?(text) raise Error::ErrorMismatch, "Expected #{text} in the message but had #{e}" end end if labels = expected_error.use('errorLabelsContain') labels.each do |label| unless e.label?(label) raise Error::ErrorMismatch, "Expected error to contain label #{label} but it did not" end end end if omit_labels = expected_error.use('errorLabelsOmit') omit_labels.each do |label| if e.label?(label) raise Error::ErrorMismatch, "Expected error to not contain label #{label} but it did" end end end if error_response = expected_error.use("errorResponse") assert_result_matches(e.document, error_response) end if expected_result = expected_error.use('expectResult') assert_result_matches(e.result, expected_result) # Important: this must be the last branch. elsif expected_error.use('isError') # Nothing but we consume the key. end unless expected_error.empty? raise NotImplementedError, "Unhandled keys: #{expected_error}" end else raise Error::ErrorMismatch, "Expected exception but none was raised" end elsif op.use('ignoreResultAndError') unless respond_to?(method_name) raise Error::UnsupportedOperation, "Mongo Ruby Driver does not support #{name.to_s}" end begin send(method_name, op) # We can possibly rescue more errors here, add as needed. rescue Mongo::Error end else unless respond_to?(method_name, true) raise Error::UnsupportedOperation, "Mongo Ruby Driver does not support #{name.to_s}" end result = send(method_name, op) if expected_result = op.use('expectResult') if result.nil?
&& expected_result.keys == ["$$unsetOrMatches"] return elsif result.nil? && !expected_result.empty? raise Error::ResultMismatch, "expected #{expected_result} but got nil" elsif Array === expected_result assert_documents_match(result, expected_result) else assert_result_matches(result, expected_result) end #expected_result.clear end if save_entity = op.use('saveResultAsEntity') entities.set(:result, save_entity, result) end end end end def use_sub(hash, key, &block) v = hash.use!(key) use_all(hash, key, v, &block) end def use_all(hash, key, v) orig_v = v.dup (yield v).tap do unless v.empty? raise NotImplementedError, "Unconsumed items for #{key}: #{v}\nOriginal hash: #{orig_v}" end end end def use_arguments(op, &block) if op.key?('arguments') use_sub(op, 'arguments', &block) else yield UsingHash.new end end def disable_fail_points if $disable_fail_points $disable_fail_points.each do |(fail_point_command, address)| client = ClusterTools.instance.direct_client(address, database: 'admin') client.command(configureFailPoint: fail_point_command['configureFailPoint'], mode: 'off') end $disable_fail_points = nil end end def kill_sessions begin root_authorized_client.command( killAllSessions: [], ) rescue Mongo::Error::OperationFailure::Family => e if e.code == 11601 # operation was interrupted, ignore. SERVER-38335 elsif e.code == 13 # Unauthorized - e.g. when running in Atlas as part of # drivers-atlas-testing, ignore. SERVER-54216 elsif e.code == 59 # no such command (old server), ignore elsif e.code == 8000 # CMD_NOT_ALLOWED: killAllSessions - running against a serverless instance else raise end end end def root_authorized_client @root_authorized_client ||= ClientRegistry.instance.global_client('root_authorized') end def create_client(**opts) args = case v = options[:client_args] when Array unless v.length == 2 raise NotImplementedError, 'Client args array must have two elements' end [v.first, v.last.dup] when String [v, {}] else addresses = SpecConfig.instance.addresses if options[:single_address] addresses = [addresses.first] end [ addresses, SpecConfig.instance.all_test_options, ] end args.last.update( max_read_retries: 0, max_write_retries: 0, ).update(opts) Mongo::Client.new(*args) end # The error to rescue BSON tests for. If we still define # BSON::String::IllegalKey then we should rescue that particular error, # otherwise, rescue an arbitrary BSON::Error def bson_error BSON::String.const_defined?(:IllegalKey) ? BSON::String.const_get(:IllegalKey) : BSON::Error end end end mongo-ruby-driver-2.21.3/spec/runners/unified/test_group.rb000066400000000000000000000011471505113246500237640ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all module Unified class TestGroup def initialize(path, **opts) if String === path data = ::Utils.load_spec_yaml_file(path) else data = path end @spec = BSON::ExtJSON.parse_obj(data) @options = opts end attr_reader :options def tests reqs = @spec['runOnRequirements'] @spec.fetch('tests').map do |test| sub = @spec.dup sub.delete('tests') sub['test'] = test sub['group_runOnRequirements'] = reqs Test.new(sub, **options) end end end end mongo-ruby-driver-2.21.3/spec/runners/unified/thread_operations.rb000066400000000000000000000033421505113246500253020ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all module Unified module ThreadOperations class ThreadContext def initialize @operations = Queue.new end def stop? 
!!@stop end def signal_stop @stop = true end attr_reader :operations end def wait(op) consume_test_runner(op) use_arguments(op) do |args| sleep args.use!('ms') / 1000.0 end end def wait_for_event(op) consume_test_runner(op) use_arguments(op) do |args| client = entities.get(:client, args.use!('client')) subscriber = @subscribers.fetch(client) event = args.use!('event') assert_eq(event.keys.length, 1, "Expected event must have one key: #{event}") count = args.use!('count') deadline = Mongo::Utils.monotonic_time + 10 loop do events = select_events(subscriber, event) if events.length >= count break end if Mongo::Utils.monotonic_time >= deadline raise "Did not receive an event matching #{event} in 10 seconds; received #{events.length} but expected #{count} events" else sleep 0.1 end end end end def run_on_thread(op) consume_test_runner(op) use_arguments(op) do |args| thread = entities.get(:thread, args.use!('thread')) operation = args.use!('operation') thread.context.operations << operation end end def wait_for_thread(op) consume_test_runner(op) use_arguments(op) do |args| thread = entities.get(:thread, args.use!('thread')) thread.context.signal_stop thread.join end end end end mongo-ruby-driver-2.21.3/spec/shared/000077500000000000000000000000001505113246500173705ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/solo/000077500000000000000000000000001505113246500170765ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/solo/clean_exit_spec.rb000066400000000000000000000006651505113246500225570ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'mongo' require 'lite_spec_helper' describe 'Clean exit' do require_external_connectivity require_solo context 'with SRV URI' do let(:uri) do 'mongodb+srv://test1.test.build.10gen.cc/?tls=false' end it 'exits cleanly' do client = Mongo::Client.new(uri) client.database.collection_names.to_a ensure client.close end end end mongo-ruby-driver-2.21.3/spec/spec_helper.rb000066400000000000000000000014431505113246500207420ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'lite_spec_helper' require 'mrss/constraints' require 'mrss/cluster_config' ClusterConfig = Mrss::ClusterConfig require 'support/constraints' require 'support/authorization' require 'support/primary_socket' require 'support/cluster_tools' require 'support/monitoring_ext' RSpec.configure do |config| config.include(Authorization) config.extend(Mrss::Constraints) config.extend(Constraints) config.before(:all) do if SpecConfig.instance.kill_all_server_sessions? kill_all_server_sessions end end config.after do LocalResourceRegistry.instance.close_all ClientRegistry.instance.close_local_clients end end # require all shared examples Dir['./spec/support/shared/*.rb'].sort.each { |file| require file } mongo-ruby-driver-2.21.3/spec/spec_tests/000077500000000000000000000000001505113246500202765ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/auth_spec.rb000066400000000000000000000030061505113246500225750ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'lite_spec_helper' require 'runners/auth' describe 'Auth' do include Mongo::Auth AUTH_TESTS.each do |file| spec = Mongo::Auth::Spec.new(file) context(spec.description) do spec.tests.each_with_index do |test, index| context test.description do if test.description.downcase.include?("gssapi") require_mongo_kerberos end if test.valid? 
context 'the auth configuration is valid' do if test.credential it 'creates a client with options matching the credential' do expect(test.actual_client_options).to eq(test.expected_credential) end it 'creates a user with attributes matching the credential' do expect(test.actual_user_attributes).to eq(test.expected_credential) end else context 'with empty credentials' do it 'creates a client with no credential information' do expect(test.client).to have_blank_credentials end end end end else context 'the auth configuration is invalid' do it 'raises an error' do expect do test.client end.to raise_error(Mongo::Auth::InvalidConfiguration) end end end end end end end end mongo-ruby-driver-2.21.3/spec/spec_tests/change_streams_unified_spec.rb000066400000000000000000000005611505113246500263250ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' require 'runners/unified' base = "#{CURRENT_PATH}/spec_tests/data/change_streams_unified" CHANGE_STREAM_UNIFIED_TESTS = Dir.glob("#{base}/**/*.yml").sort describe 'Change stream unified spec tests' do require_no_multi_mongos define_unified_spec_tests(base, CHANGE_STREAM_UNIFIED_TESTS) end mongo-ruby-driver-2.21.3/spec/spec_tests/client_side_encryption_spec.rb000066400000000000000000000023011505113246500263650ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' require 'runners/crud' require 'runners/transactions' SPECS_IGNORING_BSON_TYPES = %w[ fle2v2-CreateCollection.yml ] # expect bson types for all specs EXCEPT those mentioned in # SPECS_IGNORING_BSON_TYPES EXPECTATIONS_BSON_TYPES = -> (test) { !SPECS_IGNORING_BSON_TYPES.include?(test.spec.description) } describe 'Client-Side Encryption' do require_libmongocrypt require_enterprise min_libmongocrypt_version '1.8.0' context 'with mongocryptd' do SpecConfig.instance.without_crypt_shared_lib_path do define_transactions_spec_tests(CLIENT_SIDE_ENCRYPTION_TESTS, expectations_bson_types: EXPECTATIONS_BSON_TYPES) end end context 'with crypt_shared' do # Under JRuby+Evergreen, these specs complain about the crypt_shared # library not loading; however, crypt_shared appears to load for other # specs that require it (see the client_side_encryption_unified_spec and # mongocryptd_prose_spec tests). 
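    # The fails_on_jruby constraint below excludes these examples when the
    # suite runs under JRuby.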
fails_on_jruby SpecConfig.instance.require_crypt_shared do define_transactions_spec_tests(CLIENT_SIDE_ENCRYPTION_TESTS, expectations_bson_types: EXPECTATIONS_BSON_TYPES) end end end mongo-ruby-driver-2.21.3/spec/spec_tests/client_side_encryption_unified_spec.rb000066400000000000000000000013011505113246500300670ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' require 'runners/unified' base = "#{CURRENT_PATH}/spec_tests/data/client_side_encryption" CLIENT_SIDE_ENCRYPTION_UNIFIED_TESTS = Dir.glob("#{base}/unified/**/*.yml").sort describe 'Client side encryption spec tests - unified' do require_libmongocrypt require_enterprise context 'with mongocryptd' do SpecConfig.instance.without_crypt_shared_lib_path do define_unified_spec_tests(base, CLIENT_SIDE_ENCRYPTION_UNIFIED_TESTS) end end context 'with crypt_shared' do SpecConfig.instance.require_crypt_shared do define_unified_spec_tests(base, CLIENT_SIDE_ENCRYPTION_UNIFIED_TESTS) end end end mongo-ruby-driver-2.21.3/spec/spec_tests/client_side_operations_timeout_spec.rb000066400000000000000000000007151505113246500301330ustar00rootroot00000000000000# frozen_string_literal: true require 'spec_helper' require 'runners/unified' base = "#{CURRENT_PATH}/spec_tests/data/client_side_operations_timeout" CSOT_TESTS = Dir.glob("#{base}/**/*.yml").sort describe 'CSOT unified spec tests' do if [ 1, '1', 'yes', 'true' ].include?(ENV['CSOT_SPEC_TESTS']) define_unified_spec_tests(base, CSOT_TESTS) else skip 'CSOT spec tests are disabled. To enable them set env variable CSOT_SPEC_TESTS to 1' end end mongo-ruby-driver-2.21.3/spec/spec_tests/cmap_spec.rb000066400000000000000000000076661505113246500225740ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' require 'runners/cmap' # Temporary scopes in all of the tests are needed to exclude endSessions # commands being sent during cleanup from interfering with assertions. describe 'Cmap' do clean_slate declare_topology_double let(:cluster) do double('cluster').tap do |cl| allow(cl).to receive(:topology).and_return(topology) allow(cl).to receive(:options).and_return({}) allow(cl).to receive(:app_metadata).and_return(Mongo::Server::AppMetadata.new({})) allow(cl).to receive(:run_sdam_flow) allow(cl).to receive(:update_cluster_time) allow(cl).to receive(:cluster_time).and_return(nil) end end let(:options) do Mongo::Utils.shallow_symbolize_keys(Mongo::Client.canonicalize_ruby_options( SpecConfig.instance.all_test_options, )).update(monitoring_io: false, populator_io: true).tap do |options| # We have a wait queue timeout set in the test suite options, but having # this option set interferes with assertions in the cmap spec tests. options.delete(:wait_queue_timeout) end end CMAP_TESTS.each do |file| spec = Mongo::Cmap::Spec.new(file) context("#{spec.description} (#{file.sub(%r'.*/data/cmap/', '')})") do unless spec.satisfied? 
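      # If the spec file's requirements (server version, topology, etc.) are
      # not met, skip every example in this group.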
before(:all) do skip "Requirements not satisfied" end end before do subscriber = Mrss::EventSubscriber.new monitoring = Mongo::Monitoring.new(monitoring: false) monitoring.subscribe(Mongo::Monitoring::CONNECTION_POOL, subscriber) @server = register_server( Mongo::Server.new( ClusterConfig.instance.primary_address, cluster, monitoring, Mongo::Event::Listeners.new, options.merge(spec.pool_options) ).tap do |server| allow(server).to receive(:description).and_return(ClusterConfig.instance.primary_description) # Since we use a mock for the cluster, run_sdam_flow does not clear # the pool or mark the server unknown. Manually clear the pool and # mock the server as unknown. allow(server).to receive(:unknown!).and_wrap_original do |m, *args| m.call(*args) RSpec::Mocks.with_temporary_scope do allow(server).to receive(:unknown?).and_return(true) server.pool_internal&.clear(lazy: true) end end end ) if app_name = spec.pool_options[:app_name] allow(cluster).to receive(:app_metadata).and_return(Mongo::Server::AppMetadata.new({ app_name: app_name })) end @client = ClusterTools.instance.direct_client(ClusterConfig.instance.primary_address, database: 'admin') spec.setup(@server, @client, subscriber) end after do if pool = @server&.pool_internal pool.disconnect! end spec.pool&.close end let!(:result) do if @server.load_balancer? allow_any_instance_of(Mongo::Server::Connection).to receive(:service_id).and_return('very fake') end spec.run end let(:verifier) do Mongo::Cmap::Verifier.new(spec) end it 'raises the correct error' do RSpec::Mocks.with_temporary_scope do expect(result['error']).to eq(spec.expected_error) end end let(:actual_events) { result['events'].freeze } it 'emits the correct number of events' do RSpec::Mocks.with_temporary_scope do expect(actual_events.length).to eq(spec.expected_events.length) end end spec.expected_events.each_with_index do |expected_event, index| it "emits correct event #{index+1}" do RSpec::Mocks.with_temporary_scope do verifier.verify_hashes(actual_events[index], expected_event) end end end end end end mongo-ruby-driver-2.21.3/spec/spec_tests/collection_management_spec.rb000066400000000000000000000005261505113246500261670ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' require 'runners/unified' base = "#{CURRENT_PATH}/spec_tests/data/collection_management" COLLECTION_MANAGEMENT_TESTS = Dir.glob("#{base}/**/*.yml").sort describe 'Collection management spec tests' do define_unified_spec_tests(base, COLLECTION_MANAGEMENT_TESTS) end mongo-ruby-driver-2.21.3/spec/spec_tests/command_monitoring_unified_spec.rb000066400000000000000000000005521505113246500272250ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' require 'runners/unified' base = "#{CURRENT_PATH}/spec_tests/data/command_monitoring_unified" COMMAND_MONITORING_UNIFIED_TESTS = Dir.glob("#{base}/**/*.yml").sort describe 'Command monitoring unified spec tests' do define_unified_spec_tests(base, COMMAND_MONITORING_UNIFIED_TESTS) end mongo-ruby-driver-2.21.3/spec/spec_tests/connection_string_spec.rb000066400000000000000000000003261505113246500253630ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'lite_spec_helper' require 'runners/connection_string' describe 'Connection String' do define_connection_string_spec_tests(CONNECTION_STRING_TESTS) end mongo-ruby-driver-2.21.3/spec/spec_tests/crud_spec.rb000066400000000000000000000003551505113246500225750ustar00rootroot00000000000000# 
frozen_string_literal: true # rubocop:todo all require 'spec_helper' require 'runners/crud' describe 'CRUD v1 spec tests' do define_crud_spec_tests(CRUD_TESTS) do |spec, req, test| let(:client) { authorized_client } end end mongo-ruby-driver-2.21.3/spec/spec_tests/crud_unified_spec.rb000066400000000000000000000004621505113246500242770ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' require 'runners/unified' base = "#{CURRENT_PATH}/spec_tests/data/crud_unified" CRUD_UNIFIED_TESTS = Dir.glob("#{base}/**/*.yml").sort describe 'CRUD unified spec tests' do define_unified_spec_tests(base, CRUD_UNIFIED_TESTS) end mongo-ruby-driver-2.21.3/spec/spec_tests/data/000077500000000000000000000000001505113246500212075ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/auth/000077500000000000000000000000001505113246500221505ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/auth/connection-string.yml000066400000000000000000000337221505113246500263450ustar00rootroot00000000000000tests: - description: "should use the default source and mechanism" uri: "mongodb://user:password@localhost" valid: true credential: username: "user" password: "password" source: "admin" mechanism: ~ mechanism_properties: ~ - description: "should use the database when no authSource is specified" uri: "mongodb://user:password@localhost/foo" valid: true credential: username: "user" password: "password" source: "foo" mechanism: ~ mechanism_properties: ~ - description: "should use the authSource when specified" uri: "mongodb://user:password@localhost/foo?authSource=bar" valid: true credential: username: "user" password: "password" source: "bar" mechanism: ~ mechanism_properties: ~ - description: "should recognise the mechanism (GSSAPI)" uri: "mongodb://user%40DOMAIN.COM@localhost/?authMechanism=GSSAPI" valid: true credential: username: "user@DOMAIN.COM" password: ~ source: "$external" mechanism: "GSSAPI" mechanism_properties: SERVICE_NAME: "mongodb" - description: "should ignore the database (GSSAPI)" uri: "mongodb://user%40DOMAIN.COM@localhost/foo?authMechanism=GSSAPI" valid: true credential: username: "user@DOMAIN.COM" password: ~ source: "$external" mechanism: "GSSAPI" mechanism_properties: SERVICE_NAME: "mongodb" - description: "should accept valid authSource (GSSAPI)" uri: "mongodb://user%40DOMAIN.COM@localhost/?authMechanism=GSSAPI&authSource=$external" valid: true credential: username: "user@DOMAIN.COM" password: ~ source: "$external" mechanism: "GSSAPI" mechanism_properties: SERVICE_NAME: "mongodb" - description: "should accept generic mechanism property (GSSAPI)" uri: "mongodb://user%40DOMAIN.COM@localhost/?authMechanism=GSSAPI&authMechanismProperties=SERVICE_NAME:other,CANONICALIZE_HOST_NAME:true" valid: true credential: username: "user@DOMAIN.COM" password: ~ source: "$external" mechanism: "GSSAPI" mechanism_properties: SERVICE_NAME: "other" CANONICALIZE_HOST_NAME: true - description: "should accept the password (GSSAPI)" uri: "mongodb://user%40DOMAIN.COM:password@localhost/?authMechanism=GSSAPI&authSource=$external" valid: true credential: username: "user@DOMAIN.COM" password: "password" source: "$external" mechanism: "GSSAPI" mechanism_properties: SERVICE_NAME: "mongodb" - description: "must raise an error when the authSource is empty" uri: "mongodb://user:password@localhost/foo?authSource=" valid: false - description: "must raise an error when the authSource is empty without credentials" uri: 
"mongodb://localhost/admin?authSource=" valid: false - description: "should throw an exception if authSource is invalid (GSSAPI)" uri: "mongodb://user%40DOMAIN.COM@localhost/?authMechanism=GSSAPI&authSource=foo" valid: false - description: "should throw an exception if no username (GSSAPI)" uri: "mongodb://localhost/?authMechanism=GSSAPI" valid: false - description: "should recognize the mechanism (MONGODB-CR)" uri: "mongodb://user:password@localhost/?authMechanism=MONGODB-CR" valid: true credential: username: "user" password: "password" source: "admin" mechanism: "MONGODB-CR" mechanism_properties: ~ - description: "should use the database when no authSource is specified (MONGODB-CR)" uri: "mongodb://user:password@localhost/foo?authMechanism=MONGODB-CR" valid: true credential: username: "user" password: "password" source: "foo" mechanism: "MONGODB-CR" mechanism_properties: ~ - description: "should use the authSource when specified (MONGODB-CR)" uri: "mongodb://user:password@localhost/foo?authMechanism=MONGODB-CR&authSource=bar" valid: true credential: username: "user" password: "password" source: "bar" mechanism: "MONGODB-CR" mechanism_properties: ~ - description: "should throw an exception if no username is supplied (MONGODB-CR)" uri: "mongodb://localhost/?authMechanism=MONGODB-CR" valid: false - description: "should recognize the mechanism (MONGODB-X509)" uri: "mongodb://CN%3DmyName%2COU%3DmyOrgUnit%2CO%3DmyOrg%2CL%3DmyLocality%2CST%3DmyState%2CC%3DmyCountry@localhost/?authMechanism=MONGODB-X509" valid: true credential: username: "CN=myName,OU=myOrgUnit,O=myOrg,L=myLocality,ST=myState,C=myCountry" password: ~ source: "$external" mechanism: "MONGODB-X509" mechanism_properties: ~ - description: "should ignore the database (MONGODB-X509)" uri: "mongodb://CN%3DmyName%2COU%3DmyOrgUnit%2CO%3DmyOrg%2CL%3DmyLocality%2CST%3DmyState%2CC%3DmyCountry@localhost/foo?authMechanism=MONGODB-X509" valid: true credential: username: "CN=myName,OU=myOrgUnit,O=myOrg,L=myLocality,ST=myState,C=myCountry" password: ~ source: "$external" mechanism: "MONGODB-X509" mechanism_properties: ~ - description: "should accept valid authSource (MONGODB-X509)" uri: "mongodb://CN%3DmyName%2COU%3DmyOrgUnit%2CO%3DmyOrg%2CL%3DmyLocality%2CST%3DmyState%2CC%3DmyCountry@localhost/?authMechanism=MONGODB-X509&authSource=$external" valid: true credential: username: "CN=myName,OU=myOrgUnit,O=myOrg,L=myLocality,ST=myState,C=myCountry" password: ~ source: "$external" mechanism: "MONGODB-X509" mechanism_properties: ~ - description: "should recognize the mechanism with no username (MONGODB-X509)" uri: "mongodb://localhost/?authMechanism=MONGODB-X509" valid: true credential: username: ~ password: ~ source: "$external" mechanism: "MONGODB-X509" mechanism_properties: ~ - description: "should recognize the mechanism with no username when auth source is explicitly specified (MONGODB-X509)" uri: "mongodb://localhost/?authMechanism=MONGODB-X509&authSource=$external" valid: true credential: username: ~ password: ~ source: "$external" mechanism: "MONGODB-X509" mechanism_properties: ~ - description: "should throw an exception if supplied a password (MONGODB-X509)" uri: "mongodb://user:password@localhost/?authMechanism=MONGODB-X509" valid: false - description: "should throw an exception if authSource is invalid (MONGODB-X509)" uri: "mongodb://CN%3DmyName%2COU%3DmyOrgUnit%2CO%3DmyOrg%2CL%3DmyLocality%2CST%3DmyState%2CC%3DmyCountry@localhost/foo?authMechanism=MONGODB-X509&authSource=bar" valid: false - description: "should recognize the mechanism 
(PLAIN)" uri: "mongodb://user:password@localhost/?authMechanism=PLAIN" valid: true credential: username: "user" password: "password" source: "$external" mechanism: "PLAIN" mechanism_properties: ~ - description: "should use the database when no authSource is specified (PLAIN)" uri: "mongodb://user:password@localhost/foo?authMechanism=PLAIN" valid: true credential: username: "user" password: "password" source: "foo" mechanism: "PLAIN" mechanism_properties: ~ - description: "should use the authSource when specified (PLAIN)" uri: "mongodb://user:password@localhost/foo?authMechanism=PLAIN&authSource=bar" valid: true credential: username: "user" password: "password" source: "bar" mechanism: "PLAIN" mechanism_properties: ~ - description: "should throw an exception if no username (PLAIN)" uri: "mongodb://localhost/?authMechanism=PLAIN" valid: false - description: "should recognize the mechanism (SCRAM-SHA-1)" uri: "mongodb://user:password@localhost/?authMechanism=SCRAM-SHA-1" valid: true credential: username: "user" password: "password" source: "admin" mechanism: "SCRAM-SHA-1" mechanism_properties: ~ - description: "should use the database when no authSource is specified (SCRAM-SHA-1)" uri: "mongodb://user:password@localhost/foo?authMechanism=SCRAM-SHA-1" valid: true credential: username: "user" password: "password" source: "foo" mechanism: "SCRAM-SHA-1" mechanism_properties: ~ - description: "should accept valid authSource (SCRAM-SHA-1)" uri: "mongodb://user:password@localhost/foo?authMechanism=SCRAM-SHA-1&authSource=bar" valid: true credential: username: "user" password: "password" source: "bar" mechanism: "SCRAM-SHA-1" mechanism_properties: ~ - description: "should throw an exception if no username (SCRAM-SHA-1)" uri: "mongodb://localhost/?authMechanism=SCRAM-SHA-1" valid: false - description: "should recognize the mechanism (SCRAM-SHA-256)" uri: "mongodb://user:password@localhost/?authMechanism=SCRAM-SHA-256" valid: true credential: username: "user" password: "password" source: "admin" mechanism: "SCRAM-SHA-256" mechanism_properties: ~ - description: "should use the database when no authSource is specified (SCRAM-SHA-256)" uri: "mongodb://user:password@localhost/foo?authMechanism=SCRAM-SHA-256" valid: true credential: username: "user" password: "password" source: "foo" mechanism: "SCRAM-SHA-256" mechanism_properties: ~ - description: "should accept valid authSource (SCRAM-SHA-256)" uri: "mongodb://user:password@localhost/foo?authMechanism=SCRAM-SHA-256&authSource=bar" valid: true credential: username: "user" password: "password" source: "bar" mechanism: "SCRAM-SHA-256" mechanism_properties: ~ - description: "should throw an exception if no username (SCRAM-SHA-256)" uri: "mongodb://localhost/?authMechanism=SCRAM-SHA-256" valid: false - description: "URI with no auth-related info doesn't create credential" uri: "mongodb://localhost/" valid: true credential: ~ - description: "database in URI path doesn't create credentials" uri: "mongodb://localhost/foo" valid: true credential: ~ - description: "authSource without username doesn't create credential (default mechanism)" uri: "mongodb://localhost/?authSource=foo" valid: true credential: ~ - description: "should throw an exception if no username provided (userinfo implies default mechanism)" uri: "mongodb://@localhost.com/" valid: false - description: "should throw an exception if no username/password provided (userinfo implies default mechanism)" uri: "mongodb://:@localhost.com/" valid: false - description: "should recognise the mechanism 
(MONGODB-AWS)" uri: "mongodb://localhost/?authMechanism=MONGODB-AWS" valid: true credential: username: ~ password: ~ source: "$external" mechanism: "MONGODB-AWS" mechanism_properties: ~ - description: "should recognise the mechanism when auth source is explicitly specified (MONGODB-AWS)" uri: "mongodb://localhost/?authMechanism=MONGODB-AWS&authSource=$external" valid: true credential: username: ~ password: ~ source: "$external" mechanism: "MONGODB-AWS" mechanism_properties: ~ - description: "should throw an exception if username and no password (MONGODB-AWS)" uri: "mongodb://user@localhost/?authMechanism=MONGODB-AWS" valid: false credential: ~ - description: "should use username and password if specified (MONGODB-AWS)" uri: "mongodb://user%21%40%23%24%25%5E%26%2A%28%29_%2B:pass%21%40%23%24%25%5E%26%2A%28%29_%2B@localhost/?authMechanism=MONGODB-AWS" valid: true credential: username: "user!@#$%^&*()_+" password: "pass!@#$%^&*()_+" source: "$external" mechanism: "MONGODB-AWS" mechanism_properties: ~ - description: "should use username, password and session token if specified (MONGODB-AWS)" uri: "mongodb://user:password@localhost/?authMechanism=MONGODB-AWS&authMechanismProperties=AWS_SESSION_TOKEN:token%21%40%23%24%25%5E%26%2A%28%29_%2B" valid: true credential: username: "user" password: "password" source: "$external" mechanism: "MONGODB-AWS" mechanism_properties: AWS_SESSION_TOKEN: "token!@#$%^&*()_+" mongo-ruby-driver-2.21.3/spec/spec_tests/data/change_streams_unified/000077500000000000000000000000001505113246500256755ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/change_streams_unified/change-streams-clusterTime.yml000066400000000000000000000023031505113246500336150ustar00rootroot00000000000000description: "change-streams-clusterTime" schemaVersion: "1.4" createEntities: - client: id: &client0 client0 useMultipleMongoses: false - database: id: &database0 database0 client: *client0 databaseName: *database0 - collection: id: &collection0 collection0 database: *database0 collectionName: *collection0 runOnRequirements: - minServerVersion: "4.0.0" # TODO(DRIVERS-2323): Run all possible tests against sharded clusters once we know the # cause of unexpected command monitoring events. topologies: [ replicaset ] serverless: forbid initialData: - collectionName: *collection0 databaseName: *database0 documents: [] tests: - description: "clusterTime is present" operations: - name: createChangeStream object: *collection0 arguments: { pipeline: [] } saveResultAsEntity: &changeStream0 changeStream0 - name: insertOne object: *collection0 arguments: document: { _id: 1 } - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: ns: { db: *database0, coll: *collection0 } clusterTime: { $$exists: true } change-streams-disambiguatedPaths.yml000066400000000000000000000051131505113246500350420ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/change_streams_unifieddescription: "disambiguatedPaths" schemaVersion: "1.4" createEntities: - client: id: &client0 client0 useMultipleMongoses: false - database: id: &database0 database0 client: *client0 databaseName: *database0 - collection: id: &collection0 collection0 database: *database0 collectionName: *collection0 runOnRequirements: - minServerVersion: "6.1.0" # TODO(DRIVERS-2323): Run all possible tests against sharded clusters once we know the # cause of unexpected command monitoring events. 
topologies: [ replicaset ] serverless: forbid initialData: - collectionName: *collection0 databaseName: *database0 documents: [] tests: - description: "disambiguatedPaths is present on updateDescription when an ambiguous path is present" operations: - name: insertOne object: *collection0 arguments: document: { _id: 1, 'a': { '1': 1 } } - name: createChangeStream object: *collection0 arguments: { pipeline: [], showExpandedEvents: true } saveResultAsEntity: &changeStream0 changeStream0 - name: updateOne object: *collection0 arguments: filter: { _id: 1 } update: { $set: { 'a.1': 2 } } - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: operationType: "update" ns: { db: *database0, coll: *collection0 } updateDescription: updatedFields: { $$exists: true } removedFields: { $$exists: true } truncatedArrays: { $$exists: true } disambiguatedPaths: { 'a.1': ['a', '1'] } - description: "disambiguatedPaths returns array indices as integers" operations: - name: insertOne object: *collection0 arguments: document: { _id: 1, 'a': [{'1': 1 }] } - name: createChangeStream object: *collection0 arguments: { pipeline: [], showExpandedEvents: true } saveResultAsEntity: &changeStream0 changeStream0 - name: updateOne object: *collection0 arguments: filter: { _id: 1 } update: { $set: { 'a.0.1': 2 } } - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: operationType: "update" ns: { db: *database0, coll: *collection0 } updateDescription: updatedFields: { $$exists: true } removedFields: { $$exists: true } truncatedArrays: { $$exists: true } disambiguatedPaths: { 'a.0.1': ['a', { $$type: 'int' }, '1'] } mongo-ruby-driver-2.21.3/spec/spec_tests/data/change_streams_unified/change-streams-errors.yml000066400000000000000000000072531505113246500326420ustar00rootroot00000000000000description: "change-streams-errors" schemaVersion: "1.7" runOnRequirements: # TODO(DRIVERS-2323): Run all possible tests against sharded clusters once we know the # cause of unexpected command monitoring events. 
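  # Each test asserts that a non-resumable error surfaces to the caller
  # rather than triggering a resume attempt.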
- topologies: [ replicaset ] createEntities: - client: id: &client0 client0 observeEvents: [ commandStartedEvent ] ignoreCommandMonitoringEvents: [ killCursors ] useMultipleMongoses: false - client: id: &globalClient globalClient useMultipleMongoses: false - database: id: &database0 database0 client: *client0 databaseName: *database0 - collection: id: &collection0 collection0 database: *database0 collectionName: *collection0 - database: id: &globalDatabase0 globalDatabase0 client: *globalClient databaseName: *database0 - collection: id: &globalCollection0 globalCollection0 database: *globalDatabase0 collectionName: *collection0 initialData: - collectionName: *collection0 databaseName: *database0 documents: [] tests: - description: "The watch helper must not throw a custom exception when executed against a single server topology, but instead depend on a server error" runOnRequirements: - minServerVersion: "3.6.0" topologies: [ single ] operations: - name: createChangeStream object: *collection0 arguments: { pipeline: [] } expectError: { errorCode: 40573 } - description: Change Stream should error when an invalid aggregation stage is passed in runOnRequirements: - minServerVersion: "3.6.0" topologies: [ replicaset ] operations: - name: createChangeStream object: *collection0 arguments: pipeline: [ { $unsupported: foo } ] expectError: { errorCode: 40324 } expectEvents: - client: *client0 ignoreExtraEvents: true events: - commandStartedEvent: command: aggregate: *collection0 cursor: {} pipeline: - $changeStream: {} - $unsupported: foo commandName: aggregate databaseName: *database0 - description: Change Stream should error when _id is projected out runOnRequirements: - minServerVersion: "4.1.11" topologies: [ replicaset, sharded, load-balanced ] operations: - name: createChangeStream object: *collection0 arguments: pipeline: - $project: { _id: 0 } saveResultAsEntity: &changeStream0 changeStream0 - name: insertOne object: *globalCollection0 arguments: document: { z: 3 } - name: iterateUntilDocumentOrError object: *changeStream0 expectError: { errorCode: 280 } - description: change stream errors on ElectionInProgress runOnRequirements: - minServerVersion: "4.2" topologies: [ replicaset, sharded, load-balanced ] operations: - name: failPoint object: testRunner arguments: client: *globalClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: [ getMore ] errorCode: 216 closeConnection: false - name: createChangeStream object: *collection0 arguments: pipeline: [] saveResultAsEntity: &changeStream0 changeStream0 - name: insertOne object: *globalCollection0 arguments: document: { z: 3 } - name: iterateUntilDocumentOrError object: *changeStream0 expectError: { errorCode: 216 } change-streams-pre_and_post_images.yml000066400000000000000000000265011505113246500352460ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/change_streams_unifieddescription: "change-streams-pre_and_post_images" schemaVersion: "1.4" runOnRequirements: - minServerVersion: "6.0.0" topologies: [ replicaset ] serverless: forbid createEntities: - client: id: &client0 client0 observeEvents: [ commandStartedEvent ] ignoreCommandMonitoringEvents: [ collMod, insert, update, getMore, killCursors ] - database: id: &database0 database0 client: *client0 databaseName: &database0Name change-stream-tests - collection: id: &collection0 collection0 database: *database0 collectionName: &collection0Name test initialData: - collectionName: *collection0Name databaseName: *database0Name 
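    # A single seeded document; each test's updateOne below produces the
    # change event being asserted on.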
documents: - { _id: 1 } tests: - description: "fullDocument:whenAvailable with changeStreamPreAndPostImages enabled" operations: - name: runCommand object: *database0 arguments: &enablePreAndPostImages commandName: collMod command: collMod: *collection0Name changeStreamPreAndPostImages: { enabled: true } - name: createChangeStream object: *collection0 arguments: pipeline: [] fullDocument: "whenAvailable" saveResultAsEntity: &changeStream0 changeStream0 - name: updateOne object: *collection0 arguments: filter: { _id: 1 } update: { $set: { x: 1 }} - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: operationType: "update" ns: { db: *database0Name, coll: *collection0Name } updateDescription: { $$type: "object" } fullDocument: { _id: 1, x: 1 } expectEvents: - client: *client0 events: - commandStartedEvent: command: aggregate: *collection0Name pipeline: - $changeStream: { fullDocument: "whenAvailable" } - description: "fullDocument:whenAvailable with changeStreamPreAndPostImages disabled" operations: - name: runCommand object: *database0 arguments: &disablePreAndPostImages commandName: collMod command: collMod: *collection0Name changeStreamPreAndPostImages: { enabled: false } - name: createChangeStream object: *collection0 arguments: pipeline: [] fullDocument: "whenAvailable" saveResultAsEntity: &changeStream0 changeStream0 - name: updateOne object: *collection0 arguments: filter: { _id: 1 } update: { $set: { x: 1 }} - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: operationType: "update" ns: { db: *database0Name, coll: *collection0Name } updateDescription: { $$type: "object" } fullDocument: null expectEvents: - client: *client0 events: - commandStartedEvent: command: aggregate: *collection0Name pipeline: - $changeStream: { fullDocument: "whenAvailable" } - description: "fullDocument:required with changeStreamPreAndPostImages enabled" operations: - name: runCommand object: *database0 arguments: *enablePreAndPostImages - name: createChangeStream object: *collection0 arguments: pipeline: [] fullDocument: "required" saveResultAsEntity: &changeStream0 changeStream0 - name: updateOne object: *collection0 arguments: filter: { _id: 1 } update: { $set: { x: 1 }} - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: operationType: "update" ns: { db: *database0Name, coll: *collection0Name } updateDescription: { $$type: "object" } fullDocument: { _id: 1, x: 1 } expectEvents: - client: *client0 events: - commandStartedEvent: command: aggregate: *collection0Name pipeline: - $changeStream: { fullDocument: "required" } - description: "fullDocument:required with changeStreamPreAndPostImages disabled" operations: - name: runCommand object: *database0 arguments: *disablePreAndPostImages - name: createChangeStream object: *collection0 arguments: pipeline: [] fullDocument: "required" saveResultAsEntity: &changeStream0 changeStream0 - name: updateOne object: *collection0 arguments: filter: { _id: 1 } update: { $set: { x: 1 }} - name: iterateUntilDocumentOrError object: *changeStream0 expectError: isClientError: false expectEvents: - client: *client0 events: - commandStartedEvent: command: aggregate: *collection0Name pipeline: - $changeStream: { fullDocument: "required" } - description: "fullDocumentBeforeChange:whenAvailable with changeStreamPreAndPostImages enabled" operations: - name: runCommand object: *database0 arguments: *enablePreAndPostImages - name: createChangeStream object: *collection0 arguments: pipeline: [] fullDocumentBeforeChange: 
"whenAvailable" saveResultAsEntity: &changeStream0 changeStream0 - name: updateOne object: *collection0 arguments: filter: { _id: 1 } update: { $set: { x: 1 }} - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: operationType: "update" ns: { db: *database0Name, coll: *collection0Name } updateDescription: { $$type: "object" } fullDocumentBeforeChange: { _id: 1 } expectEvents: - client: *client0 events: - commandStartedEvent: command: aggregate: *collection0Name pipeline: - $changeStream: { fullDocumentBeforeChange: "whenAvailable" } - description: "fullDocumentBeforeChange:whenAvailable with changeStreamPreAndPostImages disabled" operations: - name: runCommand object: *database0 arguments: *disablePreAndPostImages - name: createChangeStream object: *collection0 arguments: pipeline: [] fullDocumentBeforeChange: "whenAvailable" saveResultAsEntity: &changeStream0 changeStream0 - name: updateOne object: *collection0 arguments: filter: { _id: 1 } update: { $set: { x: 1 }} - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: operationType: "update" ns: { db: *database0Name, coll: *collection0Name } updateDescription: { $$type: "object" } fullDocumentBeforeChange: null expectEvents: - client: *client0 events: - commandStartedEvent: command: aggregate: *collection0Name pipeline: - $changeStream: { fullDocumentBeforeChange: "whenAvailable" } - description: "fullDocumentBeforeChange:required with changeStreamPreAndPostImages enabled" operations: - name: runCommand object: *database0 arguments: *enablePreAndPostImages - name: createChangeStream object: *collection0 arguments: pipeline: [] fullDocumentBeforeChange: "required" saveResultAsEntity: &changeStream0 changeStream0 - name: updateOne object: *collection0 arguments: filter: { _id: 1 } update: { $set: { x: 1 }} - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: operationType: "update" ns: { db: *database0Name, coll: *collection0Name } updateDescription: { $$type: "object" } fullDocumentBeforeChange: { _id: 1 } expectEvents: - client: *client0 events: - commandStartedEvent: command: aggregate: *collection0Name pipeline: - $changeStream: { fullDocumentBeforeChange: "required" } - description: "fullDocumentBeforeChange:required with changeStreamPreAndPostImages disabled" operations: - name: runCommand object: *database0 arguments: *disablePreAndPostImages - name: createChangeStream object: *collection0 arguments: pipeline: [] fullDocumentBeforeChange: "required" saveResultAsEntity: &changeStream0 changeStream0 - name: updateOne object: *collection0 arguments: filter: { _id: 1 } update: { $set: { x: 1 }} - name: iterateUntilDocumentOrError object: *changeStream0 expectError: isClientError: false expectEvents: - client: *client0 events: - commandStartedEvent: command: aggregate: *collection0Name pipeline: - $changeStream: { fullDocumentBeforeChange: "required" } - description: "fullDocumentBeforeChange:off with changeStreamPreAndPostImages enabled" operations: - name: runCommand object: *database0 arguments: *enablePreAndPostImages - name: createChangeStream object: *collection0 arguments: pipeline: [] fullDocumentBeforeChange: "off" saveResultAsEntity: &changeStream0 changeStream0 - name: updateOne object: *collection0 arguments: filter: { _id: 1 } update: { $set: { x: 1 }} - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: operationType: "update" ns: { db: *database0Name, coll: *collection0Name } updateDescription: { $$type: "object" } fullDocumentBeforeChange: { 
$$exists: false } expectEvents: - client: *client0 events: - commandStartedEvent: command: aggregate: *collection0Name pipeline: - $changeStream: { fullDocumentBeforeChange: "off" } - description: "fullDocumentBeforeChange:off with changeStreamPreAndPostImages disabled" operations: - name: runCommand object: *database0 arguments: *disablePreAndPostImages - name: createChangeStream object: *collection0 arguments: pipeline: [] fullDocumentBeforeChange: "off" saveResultAsEntity: &changeStream0 changeStream0 - name: updateOne object: *collection0 arguments: filter: { _id: 1 } update: { $set: { x: 1 }} - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: operationType: "update" ns: { db: *database0Name, coll: *collection0Name } updateDescription: { $$type: "object" } fullDocumentBeforeChange: { $$exists: false } expectEvents: - client: *client0 events: - commandStartedEvent: command: aggregate: *collection0Name pipeline: - $changeStream: { fullDocumentBeforeChange: "off" } change-streams-resume-allowlist.yml000066400000000000000000001103011505113246500345440ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/change_streams_unified# Tests for resume behavior on server versions that do not support the ResumableChangeStreamError label description: "change-streams-resume-allowlist" schemaVersion: "1.7" runOnRequirements: - minServerVersion: "3.6" topologies: [ replicaset ] serverless: forbid createEntities: - client: id: &client0 client0 observeEvents: [ commandStartedEvent ] ignoreCommandMonitoringEvents: [ killCursors ] useMultipleMongoses: false - client: id: &globalClient globalClient useMultipleMongoses: false - database: id: &database0 database0 client: *client0 databaseName: *database0 - collection: id: &collection0 collection0 database: *database0 collectionName: *collection0 - database: id: &globalDatabase0 globalDatabase0 client: *globalClient databaseName: *database0 - collection: id: &globalCollection0 globalCollection0 database: *globalDatabase0 collectionName: *collection0 tests: - description: change stream resumes after a network error runOnRequirements: - minServerVersion: "4.2" operations: - name: failPoint object: testRunner arguments: client: *globalClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: [ getMore ] closeConnection: true - name: createChangeStream object: *collection0 arguments: { pipeline: [] } saveResultAsEntity: &changeStream0 changeStream0 - name: insertOne object: *globalCollection0 arguments: document: { x: 1 } - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: _id: { $$exists: true } documentKey: { $$exists: true } operationType: insert ns: db: *database0 coll: *collection0 fullDocument: x: 1 _id: { $$exists: true } expectEvents: - client: *client0 ignoreExtraEvents: true events: - commandStartedEvent: command: aggregate: *collection0 cursor: {} pipeline: [ { $changeStream: {} } ] commandName: aggregate databaseName: *database0 - commandStartedEvent: command: getMore: { $$exists: true } collection: *collection0 commandName: getMore databaseName: *database0 - commandStartedEvent: command: aggregate: *collection0 cursor: {} pipeline: - $changeStream: resumeAfter: { $$unsetOrMatches: { $$exists: true } } commandName: aggregate databaseName: *database0 - description: change stream resumes after HostUnreachable runOnRequirements: - minServerVersion: "4.2" maxServerVersion: "4.2.99" operations: - name: failPoint object: testRunner arguments: client: *globalClient 
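        # Closes the connection on the next getMore, forcing the driver to
        # transparently resume the change stream.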
failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: [ getMore ] errorCode: 6 closeConnection: false - name: createChangeStream object: *collection0 arguments: { pipeline: [] } saveResultAsEntity: &changeStream0 changeStream0 - name: insertOne object: *globalCollection0 arguments: document: { x: 1 } - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: _id: { $$exists: true } documentKey: { $$exists: true } operationType: insert ns: db: *database0 coll: *collection0 fullDocument: x: 1 _id: { $$exists: true } expectEvents: - client: *client0 ignoreExtraEvents: true events: - commandStartedEvent: command: aggregate: *collection0 cursor: {} pipeline: [ { $changeStream: {} } ] commandName: aggregate databaseName: *database0 - commandStartedEvent: command: getMore: { $$exists: true } collection: *collection0 commandName: getMore databaseName: *database0 - commandStartedEvent: command: aggregate: *collection0 cursor: {} pipeline: - $changeStream: resumeAfter: { $$unsetOrMatches: { $$exists: true } } commandName: aggregate databaseName: *database0 - description: change stream resumes after HostNotFound runOnRequirements: - minServerVersion: "4.2" maxServerVersion: "4.2.99" operations: - name: failPoint object: testRunner arguments: client: *globalClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: [ getMore ] errorCode: 7 closeConnection: false - name: createChangeStream object: *collection0 arguments: { pipeline: [] } saveResultAsEntity: &changeStream0 changeStream0 - name: insertOne object: *globalCollection0 arguments: document: { x: 1 } - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: _id: { $$exists: true } documentKey: { $$exists: true } operationType: insert ns: db: *database0 coll: *collection0 fullDocument: x: 1 _id: { $$exists: true } expectEvents: - client: *client0 ignoreExtraEvents: true events: - commandStartedEvent: command: aggregate: *collection0 cursor: {} pipeline: [ { $changeStream: {} } ] commandName: aggregate databaseName: *database0 - commandStartedEvent: command: getMore: { $$exists: true } collection: *collection0 commandName: getMore databaseName: *database0 - commandStartedEvent: command: aggregate: *collection0 cursor: {} pipeline: - $changeStream: resumeAfter: { $$unsetOrMatches: { $$exists: true } } commandName: aggregate databaseName: *database0 - description: change stream resumes after NetworkTimeout runOnRequirements: - minServerVersion: "4.2" maxServerVersion: "4.2.99" operations: - name: failPoint object: testRunner arguments: client: *globalClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: [ getMore ] errorCode: 89 closeConnection: false - name: createChangeStream object: *collection0 arguments: { pipeline: [] } saveResultAsEntity: &changeStream0 changeStream0 - name: insertOne object: *globalCollection0 arguments: document: { x: 1 } - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: _id: { $$exists: true } documentKey: { $$exists: true } operationType: insert ns: db: *database0 coll: *collection0 fullDocument: x: 1 _id: { $$exists: true } expectEvents: - client: *client0 ignoreExtraEvents: true events: - commandStartedEvent: command: aggregate: *collection0 cursor: {} pipeline: [ { $changeStream: {} } ] commandName: aggregate databaseName: *database0 - commandStartedEvent: command: getMore: { $$exists: true } collection: *collection0 commandName: getMore databaseName: *database0 - 
commandStartedEvent: command: aggregate: *collection0 cursor: {} pipeline: - $changeStream: resumeAfter: { $$unsetOrMatches: { $$exists: true } } commandName: aggregate databaseName: *database0 - description: change stream resumes after ShutdownInProgress runOnRequirements: - minServerVersion: "4.2" maxServerVersion: "4.2.99" operations: - name: failPoint object: testRunner arguments: client: *globalClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: [ getMore ] errorCode: 91 closeConnection: false - name: createChangeStream object: *collection0 arguments: { pipeline: [] } saveResultAsEntity: &changeStream0 changeStream0 - name: insertOne object: *globalCollection0 arguments: document: { x: 1 } - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: _id: { $$exists: true } documentKey: { $$exists: true } operationType: insert ns: db: *database0 coll: *collection0 fullDocument: x: 1 _id: { $$exists: true } expectEvents: - client: *client0 ignoreExtraEvents: true events: - commandStartedEvent: command: aggregate: *collection0 cursor: {} pipeline: [ { $changeStream: {} } ] commandName: aggregate databaseName: *database0 - commandStartedEvent: command: getMore: { $$exists: true } collection: *collection0 commandName: getMore databaseName: *database0 - commandStartedEvent: command: aggregate: *collection0 cursor: {} pipeline: - $changeStream: resumeAfter: { $$unsetOrMatches: { $$exists: true } } commandName: aggregate databaseName: *database0 - description: change stream resumes after PrimarySteppedDown runOnRequirements: - minServerVersion: "4.2" maxServerVersion: "4.2.99" operations: - name: failPoint object: testRunner arguments: client: *globalClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: [ getMore ] errorCode: 189 closeConnection: false - name: createChangeStream object: *collection0 arguments: { pipeline: [] } saveResultAsEntity: &changeStream0 changeStream0 - name: insertOne object: *globalCollection0 arguments: document: { x: 1 } - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: _id: { $$exists: true } documentKey: { $$exists: true } operationType: insert ns: db: *database0 coll: *collection0 fullDocument: x: 1 _id: { $$exists: true } expectEvents: - client: *client0 ignoreExtraEvents: true events: - commandStartedEvent: command: aggregate: *collection0 cursor: {} pipeline: [ { $changeStream: {} } ] commandName: aggregate databaseName: *database0 - commandStartedEvent: command: getMore: { $$exists: true } collection: *collection0 commandName: getMore databaseName: *database0 - commandStartedEvent: command: aggregate: *collection0 cursor: {} pipeline: - $changeStream: resumeAfter: { $$unsetOrMatches: { $$exists: true } } commandName: aggregate databaseName: *database0 - description: change stream resumes after ExceededTimeLimit runOnRequirements: - minServerVersion: "4.2" maxServerVersion: "4.2.99" operations: - name: failPoint object: testRunner arguments: client: *globalClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: [ getMore ] errorCode: 262 closeConnection: false - name: createChangeStream object: *collection0 arguments: { pipeline: [] } saveResultAsEntity: &changeStream0 changeStream0 - name: insertOne object: *globalCollection0 arguments: document: { x: 1 } - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: _id: { $$exists: true } documentKey: { $$exists: true } operationType: insert ns: db: 
*database0 coll: *collection0 fullDocument: x: 1 _id: { $$exists: true } expectEvents: - client: *client0 ignoreExtraEvents: true events: - commandStartedEvent: command: aggregate: *collection0 cursor: {} pipeline: [ { $changeStream: {} } ] commandName: aggregate databaseName: *database0 - commandStartedEvent: command: getMore: { $$exists: true } collection: *collection0 commandName: getMore databaseName: *database0 - commandStartedEvent: command: aggregate: *collection0 cursor: {} pipeline: - $changeStream: resumeAfter: { $$unsetOrMatches: { $$exists: true } } commandName: aggregate databaseName: *database0 - description: change stream resumes after SocketException runOnRequirements: - minServerVersion: "4.2" maxServerVersion: "4.2.99" operations: - name: failPoint object: testRunner arguments: client: *globalClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: [ getMore ] errorCode: 9001 closeConnection: false - name: createChangeStream object: *collection0 arguments: { pipeline: [] } saveResultAsEntity: &changeStream0 changeStream0 - name: insertOne object: *globalCollection0 arguments: document: { x: 1 } - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: _id: { $$exists: true } documentKey: { $$exists: true } operationType: insert ns: db: *database0 coll: *collection0 fullDocument: x: 1 _id: { $$exists: true } expectEvents: - client: *client0 ignoreExtraEvents: true events: - commandStartedEvent: command: aggregate: *collection0 cursor: {} pipeline: [ { $changeStream: {} } ] commandName: aggregate databaseName: *database0 - commandStartedEvent: command: getMore: { $$exists: true } collection: *collection0 commandName: getMore databaseName: *database0 - commandStartedEvent: command: aggregate: *collection0 cursor: {} pipeline: - $changeStream: resumeAfter: { $$unsetOrMatches: { $$exists: true } } commandName: aggregate databaseName: *database0 - description: change stream resumes after NotWritablePrimary runOnRequirements: - minServerVersion: "4.2" maxServerVersion: "4.2.99" operations: - name: failPoint object: testRunner arguments: client: *globalClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: [ getMore ] errorCode: 10107 closeConnection: false - name: createChangeStream object: *collection0 arguments: { pipeline: [] } saveResultAsEntity: &changeStream0 changeStream0 - name: insertOne object: *globalCollection0 arguments: document: { x: 1 } - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: _id: { $$exists: true } documentKey: { $$exists: true } operationType: insert ns: db: *database0 coll: *collection0 fullDocument: x: 1 _id: { $$exists: true } expectEvents: - client: *client0 ignoreExtraEvents: true events: - commandStartedEvent: command: aggregate: *collection0 cursor: {} pipeline: [ { $changeStream: {} } ] commandName: aggregate databaseName: *database0 - commandStartedEvent: command: getMore: { $$exists: true } collection: *collection0 commandName: getMore databaseName: *database0 - commandStartedEvent: command: aggregate: *collection0 cursor: {} pipeline: - $changeStream: resumeAfter: { $$unsetOrMatches: { $$exists: true } } commandName: aggregate databaseName: *database0 - description: change stream resumes after InterruptedAtShutdown runOnRequirements: - minServerVersion: "4.2" maxServerVersion: "4.2.99" operations: - name: failPoint object: testRunner arguments: client: *globalClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: 
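            # 11600 is InterruptedAtShutdown, one of the allowlisted resumable
            # error codes on servers without the ResumableChangeStreamError label.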
failCommands: [ getMore ] errorCode: 11600 closeConnection: false - name: createChangeStream object: *collection0 arguments: { pipeline: [] } saveResultAsEntity: &changeStream0 changeStream0 - name: insertOne object: *globalCollection0 arguments: document: { x: 1 } - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: _id: { $$exists: true } documentKey: { $$exists: true } operationType: insert ns: db: *database0 coll: *collection0 fullDocument: x: 1 _id: { $$exists: true } expectEvents: - client: *client0 ignoreExtraEvents: true events: - commandStartedEvent: command: aggregate: *collection0 cursor: {} pipeline: [ { $changeStream: {} } ] commandName: aggregate databaseName: *database0 - commandStartedEvent: command: getMore: { $$exists: true } collection: *collection0 commandName: getMore databaseName: *database0 - commandStartedEvent: command: aggregate: *collection0 cursor: {} pipeline: - $changeStream: resumeAfter: { $$unsetOrMatches: { $$exists: true } } commandName: aggregate databaseName: *database0 - description: change stream resumes after InterruptedDueToReplStateChange runOnRequirements: - minServerVersion: "4.2" maxServerVersion: "4.2.99" operations: - name: failPoint object: testRunner arguments: client: *globalClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: [ getMore ] errorCode: 11602 closeConnection: false - name: createChangeStream object: *collection0 arguments: { pipeline: [] } saveResultAsEntity: &changeStream0 changeStream0 - name: insertOne object: *globalCollection0 arguments: document: { x: 1 } - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: _id: { $$exists: true } documentKey: { $$exists: true } operationType: insert ns: db: *database0 coll: *collection0 fullDocument: x: 1 _id: { $$exists: true } expectEvents: - client: *client0 ignoreExtraEvents: true events: - commandStartedEvent: command: aggregate: *collection0 cursor: {} pipeline: [ { $changeStream: {} } ] commandName: aggregate databaseName: *database0 - commandStartedEvent: command: getMore: { $$exists: true } collection: *collection0 commandName: getMore databaseName: *database0 - commandStartedEvent: command: aggregate: *collection0 cursor: {} pipeline: - $changeStream: resumeAfter: { $$unsetOrMatches: { $$exists: true } } commandName: aggregate databaseName: *database0 - description: change stream resumes after NotPrimaryNoSecondaryOk runOnRequirements: - minServerVersion: "4.2" maxServerVersion: "4.2.99" operations: - name: failPoint object: testRunner arguments: client: *globalClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: [ getMore ] errorCode: 13435 closeConnection: false - name: createChangeStream object: *collection0 arguments: { pipeline: [] } saveResultAsEntity: &changeStream0 changeStream0 - name: insertOne object: *globalCollection0 arguments: document: { x: 1 } - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: _id: { $$exists: true } documentKey: { $$exists: true } operationType: insert ns: db: *database0 coll: *collection0 fullDocument: x: 1 _id: { $$exists: true } expectEvents: - client: *client0 ignoreExtraEvents: true events: - commandStartedEvent: command: aggregate: *collection0 cursor: {} pipeline: [ { $changeStream: {} } ] commandName: aggregate databaseName: *database0 - commandStartedEvent: command: getMore: { $$exists: true } collection: *collection0 commandName: getMore databaseName: *database0 - commandStartedEvent: command: 
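              # $$unsetOrMatches lets resumeAfter be either absent or present
              # with any value when the driver resumes.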
aggregate: *collection0 cursor: {} pipeline: - $changeStream: resumeAfter: { $$unsetOrMatches: { $$exists: true } } commandName: aggregate databaseName: *database0 - description: change stream resumes after NotPrimaryOrSecondary runOnRequirements: - minServerVersion: "4.2" maxServerVersion: "4.2.99" operations: - name: failPoint object: testRunner arguments: client: *globalClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: [ getMore ] errorCode: 13436 closeConnection: false - name: createChangeStream object: *collection0 arguments: { pipeline: [] } saveResultAsEntity: &changeStream0 changeStream0 - name: insertOne object: *globalCollection0 arguments: document: { x: 1 } - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: _id: { $$exists: true } documentKey: { $$exists: true } operationType: insert ns: db: *database0 coll: *collection0 fullDocument: x: 1 _id: { $$exists: true } expectEvents: - client: *client0 ignoreExtraEvents: true events: - commandStartedEvent: command: aggregate: *collection0 cursor: {} pipeline: [ { $changeStream: {} } ] commandName: aggregate databaseName: *database0 - commandStartedEvent: command: getMore: { $$exists: true } collection: *collection0 commandName: getMore databaseName: *database0 - commandStartedEvent: command: aggregate: *collection0 cursor: {} pipeline: - $changeStream: resumeAfter: { $$unsetOrMatches: { $$exists: true } } commandName: aggregate databaseName: *database0 - description: change stream resumes after StaleShardVersion runOnRequirements: - minServerVersion: "4.2" maxServerVersion: "4.2.99" operations: - name: failPoint object: testRunner arguments: client: *globalClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: [ getMore ] errorCode: 63 closeConnection: false - name: createChangeStream object: *collection0 arguments: { pipeline: [] } saveResultAsEntity: &changeStream0 changeStream0 - name: insertOne object: *globalCollection0 arguments: document: { x: 1 } - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: _id: { $$exists: true } documentKey: { $$exists: true } operationType: insert ns: db: *database0 coll: *collection0 fullDocument: x: 1 _id: { $$exists: true } expectEvents: - client: *client0 ignoreExtraEvents: true events: - commandStartedEvent: command: aggregate: *collection0 cursor: {} pipeline: [ { $changeStream: {} } ] commandName: aggregate databaseName: *database0 - commandStartedEvent: command: getMore: { $$exists: true } collection: *collection0 commandName: getMore databaseName: *database0 - commandStartedEvent: command: aggregate: *collection0 cursor: {} pipeline: - $changeStream: resumeAfter: { $$unsetOrMatches: { $$exists: true } } commandName: aggregate databaseName: *database0 - description: change stream resumes after StaleEpoch runOnRequirements: - minServerVersion: "4.2" maxServerVersion: "4.2.99" operations: - name: failPoint object: testRunner arguments: client: *globalClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: [ getMore ] errorCode: 150 closeConnection: false - name: createChangeStream object: *collection0 arguments: { pipeline: [] } saveResultAsEntity: &changeStream0 changeStream0 - name: insertOne object: *globalCollection0 arguments: document: { x: 1 } - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: _id: { $$exists: true } documentKey: { $$exists: true } operationType: insert ns: db: *database0 coll: *collection0 
fullDocument: x: 1 _id: { $$exists: true } expectEvents: - client: *client0 ignoreExtraEvents: true events: - commandStartedEvent: command: aggregate: *collection0 cursor: {} pipeline: [ { $changeStream: {} } ] commandName: aggregate databaseName: *database0 - commandStartedEvent: command: getMore: { $$exists: true } collection: *collection0 commandName: getMore databaseName: *database0 - commandStartedEvent: command: aggregate: *collection0 cursor: {} pipeline: - $changeStream: resumeAfter: { $$unsetOrMatches: { $$exists: true } } commandName: aggregate databaseName: *database0 - description: change stream resumes after RetryChangeStream runOnRequirements: - minServerVersion: "4.2" maxServerVersion: "4.2.99" operations: - name: failPoint object: testRunner arguments: client: *globalClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: [ getMore ] errorCode: 234 closeConnection: false - name: createChangeStream object: *collection0 arguments: { pipeline: [] } saveResultAsEntity: &changeStream0 changeStream0 - name: insertOne object: *globalCollection0 arguments: document: { x: 1 } - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: _id: { $$exists: true } documentKey: { $$exists: true } operationType: insert ns: db: *database0 coll: *collection0 fullDocument: x: 1 _id: { $$exists: true } expectEvents: - client: *client0 ignoreExtraEvents: true events: - commandStartedEvent: command: aggregate: *collection0 cursor: {} pipeline: [ { $changeStream: {} } ] commandName: aggregate databaseName: *database0 - commandStartedEvent: command: getMore: { $$exists: true } collection: *collection0 commandName: getMore databaseName: *database0 - commandStartedEvent: command: aggregate: *collection0 cursor: {} pipeline: - $changeStream: resumeAfter: { $$unsetOrMatches: { $$exists: true } } commandName: aggregate databaseName: *database0 - description: change stream resumes after FailedToSatisfyReadPreference runOnRequirements: - minServerVersion: "4.2" maxServerVersion: "4.2.99" operations: - name: failPoint object: testRunner arguments: client: *globalClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: [ getMore ] errorCode: 133 closeConnection: false - name: createChangeStream object: *collection0 arguments: { pipeline: [] } saveResultAsEntity: &changeStream0 changeStream0 - name: insertOne object: *globalCollection0 arguments: document: { x: 1 } - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: _id: { $$exists: true } documentKey: { $$exists: true } operationType: insert ns: db: *database0 coll: *collection0 fullDocument: x: 1 _id: { $$exists: true } expectEvents: - client: *client0 ignoreExtraEvents: true events: - commandStartedEvent: command: aggregate: *collection0 cursor: {} pipeline: [ { $changeStream: {} } ] commandName: aggregate databaseName: *database0 - commandStartedEvent: command: getMore: { $$exists: true } collection: *collection0 commandName: getMore databaseName: *database0 - commandStartedEvent: command: aggregate: *collection0 cursor: {} pipeline: - $changeStream: resumeAfter: { $$unsetOrMatches: { $$exists: true } } commandName: aggregate databaseName: *database0 # CursorNotFound is special-cased to be resumable regardless of server versions or error labels, so this test has # no maxWireVersion. 
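  # The failPoint below injects error code 43 (CursorNotFound) on getMore.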
  - description: change stream resumes after CursorNotFound
    runOnRequirements:
      - minServerVersion: "4.2"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *globalClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: [ getMore ]
              errorCode: 43
              closeConnection: false
      - name: createChangeStream
        object: *collection0
        arguments: { pipeline: [] }
        saveResultAsEntity: &changeStream0 changeStream0
      - name: insertOne
        object: *globalCollection0
        arguments:
          document: { x: 1 }
      - name: iterateUntilDocumentOrError
        object: *changeStream0
        expectResult:
          _id: { $$exists: true }
          documentKey: { $$exists: true }
          operationType: insert
          ns:
            db: *database0
            coll: *collection0
          fullDocument:
            x: 1
            _id: { $$exists: true }
    expectEvents:
      - client: *client0
        ignoreExtraEvents: true
        events:
          - commandStartedEvent:
              command:
                aggregate: *collection0
                cursor: {}
                pipeline: [ { $changeStream: {} } ]
              commandName: aggregate
              databaseName: *database0
          - commandStartedEvent:
              command:
                getMore: { $$exists: true }
                collection: *collection0
              commandName: getMore
              databaseName: *database0
          - commandStartedEvent:
              command:
                aggregate: *collection0
                cursor: {}
                pipeline:
                  - $changeStream:
                      resumeAfter: { $$unsetOrMatches: { $$exists: true } }
              commandName: aggregate
              databaseName: *database0

# ---- mongo-ruby-driver-2.21.3/spec/spec_tests/data/change_streams_unified/change-streams-resume-errorLabels.yml ----

# Tests for resume behavior on server versions that support the ResumableChangeStreamError label
description: "change-streams-resume-errorlabels"
schemaVersion: "1.7"
runOnRequirements:
  - minServerVersion: "4.3.1"
    topologies: [ replicaset ]
    serverless: forbid
createEntities:
  - client:
      id: &client0 client0
      observeEvents: [ commandStartedEvent ]
      ignoreCommandMonitoringEvents: [ killCursors ]
      useMultipleMongoses: false
  - client:
      id: &globalClient globalClient
      useMultipleMongoses: false
  - database:
      id: &database0 database0
      client: *client0
      databaseName: *database0
  - collection:
      id: &collection0 collection0
      database: *database0
      collectionName: *collection0
  - database:
      id: &globalDatabase0 globalDatabase0
      client: *globalClient
      databaseName: *database0
  - collection:
      id: &globalCollection0 globalCollection0
      database: *globalDatabase0
      collectionName: *collection0
tests:
  - description: change stream resumes after HostUnreachable
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *globalClient
          failPoint:
            configureFailPoint: failGetMoreAfterCursorCheckout # SERVER-46091 explains why a new failpoint was needed
            mode: { times: 1 }
            data:
              errorCode: 6
              closeConnection: false
      - name: createChangeStream
        object: *collection0
        arguments: { pipeline: [] }
        saveResultAsEntity: &changeStream0 changeStream0
      - name: insertOne
        object: *globalCollection0
        arguments:
          document: { x: 1 }
      - name: iterateUntilDocumentOrError
        object: *changeStream0
        expectResult:
          _id: { $$exists: true }
          documentKey: { $$exists: true }
          operationType: insert
          ns:
            db: *database0
            coll: *collection0
          fullDocument:
            x: 1
            _id: { $$exists: true }
    expectEvents:
      - client: *client0
        ignoreExtraEvents: true
        events:
          - commandStartedEvent:
              command:
                aggregate: *collection0
                cursor: {}
                pipeline: [ { $changeStream: {} } ]
              commandName: aggregate
              databaseName: *database0
          - commandStartedEvent:
              command:
                getMore: { $$exists: true }
                collection: *collection0
              commandName: getMore
              databaseName: *database0
          - commandStartedEvent:
              command:
                aggregate: *collection0
                cursor: {}
                pipeline:
                  - $changeStream:
                      resumeAfter:
{ $$unsetOrMatches: { $$exists: true } } commandName: aggregate databaseName: *database0 - description: change stream resumes after HostNotFound operations: - name: failPoint object: testRunner arguments: client: *globalClient failPoint: configureFailPoint: failGetMoreAfterCursorCheckout mode: { times: 1 } data: errorCode: 7 closeConnection: false - name: createChangeStream object: *collection0 arguments: { pipeline: [] } saveResultAsEntity: &changeStream0 changeStream0 - name: insertOne object: *globalCollection0 arguments: document: { x: 1 } - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: _id: { $$exists: true } documentKey: { $$exists: true } operationType: insert ns: db: *database0 coll: *collection0 fullDocument: x: 1 _id: { $$exists: true } expectEvents: - client: *client0 ignoreExtraEvents: true events: - commandStartedEvent: command: aggregate: *collection0 cursor: {} pipeline: [ { $changeStream: {} } ] commandName: aggregate databaseName: *database0 - commandStartedEvent: command: getMore: { $$exists: true } collection: *collection0 commandName: getMore databaseName: *database0 - commandStartedEvent: command: aggregate: *collection0 cursor: {} pipeline: - $changeStream: resumeAfter: { $$unsetOrMatches: { $$exists: true } } commandName: aggregate databaseName: *database0 - description: change stream resumes after NetworkTimeout operations: - name: failPoint object: testRunner arguments: client: *globalClient failPoint: configureFailPoint: failGetMoreAfterCursorCheckout mode: { times: 1 } data: errorCode: 89 closeConnection: false - name: createChangeStream object: *collection0 arguments: { pipeline: [] } saveResultAsEntity: &changeStream0 changeStream0 - name: insertOne object: *globalCollection0 arguments: document: { x: 1 } - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: _id: { $$exists: true } documentKey: { $$exists: true } operationType: insert ns: db: *database0 coll: *collection0 fullDocument: x: 1 _id: { $$exists: true } expectEvents: - client: *client0 ignoreExtraEvents: true events: - commandStartedEvent: command: aggregate: *collection0 cursor: {} pipeline: [ { $changeStream: {} } ] commandName: aggregate databaseName: *database0 - commandStartedEvent: command: getMore: { $$exists: true } collection: *collection0 commandName: getMore databaseName: *database0 - commandStartedEvent: command: aggregate: *collection0 cursor: {} pipeline: - $changeStream: resumeAfter: { $$unsetOrMatches: { $$exists: true } } commandName: aggregate - description: change stream resumes after ShutdownInProgress operations: - name: failPoint object: testRunner arguments: client: *globalClient failPoint: configureFailPoint: failGetMoreAfterCursorCheckout mode: { times: 1 } data: errorCode: 91 closeConnection: false - name: createChangeStream object: *collection0 arguments: { pipeline: [] } saveResultAsEntity: &changeStream0 changeStream0 - name: insertOne object: *globalCollection0 arguments: document: { x: 1 } - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: _id: { $$exists: true } documentKey: { $$exists: true } operationType: insert ns: db: *database0 coll: *collection0 fullDocument: x: 1 _id: { $$exists: true } expectEvents: - client: *client0 ignoreExtraEvents: true events: - commandStartedEvent: command: aggregate: *collection0 cursor: {} pipeline: [ { $changeStream: {} } ] commandName: aggregate databaseName: *database0 - commandStartedEvent: command: getMore: { $$exists: true } collection: *collection0 commandName: 
getMore databaseName: *database0 - commandStartedEvent: command: aggregate: *collection0 cursor: {} pipeline: - $changeStream: resumeAfter: { $$unsetOrMatches: { $$exists: true } } commandName: aggregate databaseName: *database0 - description: change stream resumes after PrimarySteppedDown operations: - name: failPoint object: testRunner arguments: client: *globalClient failPoint: configureFailPoint: failGetMoreAfterCursorCheckout mode: { times: 1 } data: errorCode: 189 closeConnection: false - name: createChangeStream object: *collection0 arguments: { pipeline: [] } saveResultAsEntity: &changeStream0 changeStream0 - name: insertOne object: *globalCollection0 arguments: document: { x: 1 } - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: _id: { $$exists: true } documentKey: { $$exists: true } operationType: insert ns: db: *database0 coll: *collection0 fullDocument: x: 1 _id: { $$exists: true } expectEvents: - client: *client0 ignoreExtraEvents: true events: - commandStartedEvent: command: aggregate: *collection0 cursor: {} pipeline: [ { $changeStream: {} } ] commandName: aggregate databaseName: *database0 - commandStartedEvent: command: getMore: { $$exists: true } collection: *collection0 commandName: getMore databaseName: *database0 - commandStartedEvent: command: aggregate: *collection0 cursor: {} pipeline: - $changeStream: resumeAfter: { $$unsetOrMatches: { $$exists: true } } commandName: aggregate databaseName: *database0 - description: change stream resumes after ExceededTimeLimit operations: - name: failPoint object: testRunner arguments: client: *globalClient failPoint: configureFailPoint: failGetMoreAfterCursorCheckout mode: { times: 1 } data: errorCode: 262 closeConnection: false - name: createChangeStream object: *collection0 arguments: { pipeline: [] } saveResultAsEntity: &changeStream0 changeStream0 - name: insertOne object: *globalCollection0 arguments: document: { x: 1 } - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: _id: { $$exists: true } documentKey: { $$exists: true } operationType: insert ns: db: *database0 coll: *collection0 fullDocument: x: 1 _id: { $$exists: true } expectEvents: - client: *client0 ignoreExtraEvents: true events: - commandStartedEvent: command: aggregate: *collection0 cursor: {} pipeline: [ { $changeStream: {} } ] commandName: aggregate databaseName: *database0 - commandStartedEvent: command: getMore: { $$exists: true } collection: *collection0 commandName: getMore databaseName: *database0 - commandStartedEvent: command: aggregate: *collection0 cursor: {} pipeline: - $changeStream: resumeAfter: { $$unsetOrMatches: { $$exists: true } } commandName: aggregate databaseName: *database0 - description: change stream resumes after SocketException operations: - name: failPoint object: testRunner arguments: client: *globalClient failPoint: configureFailPoint: failGetMoreAfterCursorCheckout mode: { times: 1 } data: errorCode: 9001 closeConnection: false - name: createChangeStream object: *collection0 arguments: { pipeline: [] } saveResultAsEntity: &changeStream0 changeStream0 - name: insertOne object: *globalCollection0 arguments: document: { x: 1 } - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: _id: { $$exists: true } documentKey: { $$exists: true } operationType: insert ns: db: *database0 coll: *collection0 fullDocument: x: 1 _id: { $$exists: true } expectEvents: - client: *client0 ignoreExtraEvents: true events: - commandStartedEvent: command: aggregate: *collection0 cursor: {} 
pipeline: [ { $changeStream: {} } ] commandName: aggregate databaseName: *database0 - commandStartedEvent: command: getMore: { $$exists: true } collection: *collection0 commandName: getMore databaseName: *database0 - commandStartedEvent: command: aggregate: *collection0 cursor: {} pipeline: - $changeStream: resumeAfter: { $$unsetOrMatches: { $$exists: true } } commandName: aggregate databaseName: *database0 - description: change stream resumes after NotWritablePrimary operations: - name: failPoint object: testRunner arguments: client: *globalClient failPoint: configureFailPoint: failGetMoreAfterCursorCheckout mode: { times: 1 } data: errorCode: 10107 closeConnection: false - name: createChangeStream object: *collection0 arguments: { pipeline: [] } saveResultAsEntity: &changeStream0 changeStream0 - name: insertOne object: *globalCollection0 arguments: document: { x: 1 } - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: _id: { $$exists: true } documentKey: { $$exists: true } operationType: insert ns: db: *database0 coll: *collection0 fullDocument: x: 1 _id: { $$exists: true } expectEvents: - client: *client0 ignoreExtraEvents: true events: - commandStartedEvent: command: aggregate: *collection0 cursor: {} pipeline: [ { $changeStream: {} } ] commandName: aggregate databaseName: *database0 - commandStartedEvent: command: getMore: { $$exists: true } collection: *collection0 commandName: getMore databaseName: *database0 - commandStartedEvent: command: aggregate: *collection0 cursor: {} pipeline: - $changeStream: resumeAfter: { $$unsetOrMatches: { $$exists: true } } commandName: aggregate databaseName: *database0 - description: change stream resumes after InterruptedAtShutdown operations: - name: failPoint object: testRunner arguments: client: *globalClient failPoint: configureFailPoint: failGetMoreAfterCursorCheckout mode: { times: 1 } data: errorCode: 11600 closeConnection: false - name: createChangeStream object: *collection0 arguments: { pipeline: [] } saveResultAsEntity: &changeStream0 changeStream0 - name: insertOne object: *globalCollection0 arguments: document: { x: 1 } - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: _id: { $$exists: true } documentKey: { $$exists: true } operationType: insert ns: db: *database0 coll: *collection0 fullDocument: x: 1 _id: { $$exists: true } expectEvents: - client: *client0 ignoreExtraEvents: true events: - commandStartedEvent: command: aggregate: *collection0 cursor: {} pipeline: [ { $changeStream: {} } ] commandName: aggregate databaseName: *database0 - commandStartedEvent: command: getMore: { $$exists: true } collection: *collection0 commandName: getMore databaseName: *database0 - commandStartedEvent: command: aggregate: *collection0 cursor: {} pipeline: - $changeStream: resumeAfter: { $$unsetOrMatches: { $$exists: true } } commandName: aggregate databaseName: *database0 - description: change stream resumes after InterruptedDueToReplStateChange operations: - name: failPoint object: testRunner arguments: client: *globalClient failPoint: configureFailPoint: failGetMoreAfterCursorCheckout mode: { times: 1 } data: errorCode: 11602 closeConnection: false - name: createChangeStream object: *collection0 arguments: { pipeline: [] } saveResultAsEntity: &changeStream0 changeStream0 - name: insertOne object: *globalCollection0 arguments: document: { x: 1 } - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: _id: { $$exists: true } documentKey: { $$exists: true } operationType: insert ns: db: 
*database0 coll: *collection0 fullDocument: x: 1 _id: { $$exists: true } expectEvents: - client: *client0 ignoreExtraEvents: true events: - commandStartedEvent: command: aggregate: *collection0 cursor: {} pipeline: [ { $changeStream: {} } ] commandName: aggregate databaseName: *database0 - commandStartedEvent: command: getMore: { $$exists: true } collection: *collection0 commandName: getMore databaseName: *database0 - commandStartedEvent: command: aggregate: *collection0 cursor: {} pipeline: - $changeStream: resumeAfter: { $$unsetOrMatches: { $$exists: true } } commandName: aggregate databaseName: *database0 - description: change stream resumes after NotPrimaryNoSecondaryOk operations: - name: failPoint object: testRunner arguments: client: *globalClient failPoint: configureFailPoint: failGetMoreAfterCursorCheckout mode: { times: 1 } data: errorCode: 13435 closeConnection: false - name: createChangeStream object: *collection0 arguments: { pipeline: [] } saveResultAsEntity: &changeStream0 changeStream0 - name: insertOne object: *globalCollection0 arguments: document: { x: 1 } - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: _id: { $$exists: true } documentKey: { $$exists: true } operationType: insert ns: db: *database0 coll: *collection0 fullDocument: x: 1 _id: { $$exists: true } expectEvents: - client: *client0 ignoreExtraEvents: true events: - commandStartedEvent: command: aggregate: *collection0 cursor: {} pipeline: [ { $changeStream: {} } ] commandName: aggregate databaseName: *database0 - commandStartedEvent: command: getMore: { $$exists: true } collection: *collection0 commandName: getMore databaseName: *database0 - commandStartedEvent: command: aggregate: *collection0 cursor: {} pipeline: - $changeStream: resumeAfter: { $$unsetOrMatches: { $$exists: true } } commandName: aggregate databaseName: *database0 - description: change stream resumes after NotPrimaryOrSecondary operations: - name: failPoint object: testRunner arguments: client: *globalClient failPoint: configureFailPoint: failGetMoreAfterCursorCheckout mode: { times: 1 } data: errorCode: 13436 closeConnection: false - name: createChangeStream object: *collection0 arguments: { pipeline: [] } saveResultAsEntity: &changeStream0 changeStream0 - name: insertOne object: *globalCollection0 arguments: document: { x: 1 } - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: _id: { $$exists: true } documentKey: { $$exists: true } operationType: insert ns: db: *database0 coll: *collection0 fullDocument: x: 1 _id: { $$exists: true } expectEvents: - client: *client0 ignoreExtraEvents: true events: - commandStartedEvent: command: aggregate: *collection0 cursor: {} pipeline: [ { $changeStream: {} } ] commandName: aggregate databaseName: *database0 - commandStartedEvent: command: getMore: { $$exists: true } collection: *collection0 commandName: getMore databaseName: *database0 - commandStartedEvent: command: aggregate: *collection0 cursor: {} pipeline: - $changeStream: resumeAfter: { $$unsetOrMatches: { $$exists: true } } commandName: aggregate databaseName: *database0 - description: change stream resumes after StaleShardVersion runOnRequirements: # StaleShardVersion is obsolete as of 6.1 and is no longer marked as resumable. 
- maxServerVersion: "6.0.99" operations: - name: failPoint object: testRunner arguments: client: *globalClient failPoint: configureFailPoint: failGetMoreAfterCursorCheckout mode: { times: 1 } data: errorCode: 63 closeConnection: false - name: createChangeStream object: *collection0 arguments: { pipeline: [] } saveResultAsEntity: &changeStream0 changeStream0 - name: insertOne object: *globalCollection0 arguments: document: { x: 1 } - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: _id: { $$exists: true } documentKey: { $$exists: true } operationType: insert ns: db: *database0 coll: *collection0 fullDocument: x: 1 _id: { $$exists: true } expectEvents: - client: *client0 ignoreExtraEvents: true events: - commandStartedEvent: command: aggregate: *collection0 cursor: {} pipeline: [ { $changeStream: {} } ] commandName: aggregate databaseName: *database0 - commandStartedEvent: command: getMore: { $$exists: true } collection: *collection0 commandName: getMore databaseName: *database0 - commandStartedEvent: command: aggregate: *collection0 cursor: {} pipeline: - $changeStream: resumeAfter: { $$unsetOrMatches: { $$exists: true } } commandName: aggregate databaseName: *database0 - description: change stream resumes after StaleEpoch operations: - name: failPoint object: testRunner arguments: client: *globalClient failPoint: configureFailPoint: failGetMoreAfterCursorCheckout mode: { times: 1 } data: errorCode: 150 closeConnection: false - name: createChangeStream object: *collection0 arguments: { pipeline: [] } saveResultAsEntity: &changeStream0 changeStream0 - name: insertOne object: *globalCollection0 arguments: document: { x: 1 } - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: _id: { $$exists: true } documentKey: { $$exists: true } operationType: insert ns: db: *database0 coll: *collection0 fullDocument: x: 1 _id: { $$exists: true } expectEvents: - client: *client0 ignoreExtraEvents: true events: - commandStartedEvent: command: aggregate: *collection0 cursor: {} pipeline: [ { $changeStream: {} } ] commandName: aggregate databaseName: *database0 - commandStartedEvent: command: getMore: { $$exists: true } collection: *collection0 commandName: getMore databaseName: *database0 - commandStartedEvent: command: aggregate: *collection0 cursor: {} pipeline: - $changeStream: resumeAfter: { $$unsetOrMatches: { $$exists: true } } commandName: aggregate databaseName: *database0 - description: change stream resumes after RetryChangeStream operations: - name: failPoint object: testRunner arguments: client: *globalClient failPoint: configureFailPoint: failGetMoreAfterCursorCheckout mode: { times: 1 } data: errorCode: 234 closeConnection: false - name: createChangeStream object: *collection0 arguments: { pipeline: [] } saveResultAsEntity: &changeStream0 changeStream0 - name: insertOne object: *globalCollection0 arguments: document: { x: 1 } - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: _id: { $$exists: true } documentKey: { $$exists: true } operationType: insert ns: db: *database0 coll: *collection0 fullDocument: x: 1 _id: { $$exists: true } expectEvents: - client: *client0 ignoreExtraEvents: true events: - commandStartedEvent: command: aggregate: *collection0 cursor: {} pipeline: [ { $changeStream: {} } ] commandName: aggregate databaseName: *database0 - commandStartedEvent: command: getMore: { $$exists: true } collection: *collection0 commandName: getMore databaseName: *database0 - commandStartedEvent: command: aggregate: *collection0 
cursor: {} pipeline: - $changeStream: resumeAfter: { $$unsetOrMatches: { $$exists: true } } commandName: aggregate databaseName: *database0 - description: change stream resumes after FailedToSatisfyReadPreference operations: - name: failPoint object: testRunner arguments: client: *globalClient failPoint: configureFailPoint: failGetMoreAfterCursorCheckout mode: { times: 1 } data: errorCode: 133 closeConnection: false - name: createChangeStream object: *collection0 arguments: { pipeline: [] } saveResultAsEntity: &changeStream0 changeStream0 - name: insertOne object: *globalCollection0 arguments: document: { x: 1 } - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: _id: { $$exists: true } documentKey: { $$exists: true } operationType: insert ns: db: *database0 coll: *collection0 fullDocument: x: 1 _id: { $$exists: true } expectEvents: - client: *client0 ignoreExtraEvents: true events: - commandStartedEvent: command: aggregate: *collection0 cursor: {} pipeline: [ { $changeStream: {} } ] commandName: aggregate databaseName: *database0 - commandStartedEvent: command: getMore: { $$exists: true } collection: *collection0 commandName: getMore databaseName: *database0 - commandStartedEvent: command: aggregate: *collection0 cursor: {} pipeline: - $changeStream: resumeAfter: { $$unsetOrMatches: { $$exists: true } } commandName: aggregate databaseName: *database0 # The next two tests ensure that the driver only uses the error label, not the allow list. - description: change stream resumes if error contains ResumableChangeStreamError operations: - name: failPoint object: testRunner arguments: client: *globalClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: [ getMore ] errorCode: 50 # Use an error code that does not have the allow list label by default closeConnection: false errorLabels: [ ResumableChangeStreamError ] - name: createChangeStream object: *collection0 arguments: { pipeline: [] } saveResultAsEntity: &changeStream0 changeStream0 - name: insertOne object: *globalCollection0 arguments: document: { x: 1 } - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: _id: { $$exists: true } documentKey: { $$exists: true } operationType: insert ns: db: *database0 coll: *collection0 fullDocument: x: 1 _id: { $$exists: true } expectEvents: - client: *client0 ignoreExtraEvents: true events: - commandStartedEvent: command: aggregate: *collection0 cursor: {} pipeline: [ { $changeStream: {} } ] commandName: aggregate databaseName: *database0 - commandStartedEvent: command: getMore: { $$exists: true } collection: *collection0 commandName: getMore databaseName: *database0 - commandStartedEvent: command: aggregate: *collection0 cursor: {} pipeline: - $changeStream: resumeAfter: { $$unsetOrMatches: { $$exists: true } } commandName: aggregate databaseName: *database0 - description: change stream does not resume if error does not contain ResumableChangeStreamError operations: - name: failPoint object: testRunner arguments: client: *globalClient failPoint: configureFailPoint: failCommand # failCommand will not add the allow list error label mode: { times: 1 } data: failCommands: [ getMore ] errorCode: 6 # Use an error code that is on the allow list closeConnection: false - name: createChangeStream object: *collection0 arguments: { pipeline: [] } saveResultAsEntity: &changeStream0 changeStream0 - name: insertOne object: *globalCollection0 arguments: document: { x: 1 } - name: iterateUntilDocumentOrError object: *changeStream0 expectError: { 
errorCode: 6 }

# ---- mongo-ruby-driver-2.21.3/spec/spec_tests/data/change_streams_unified/change-streams-showExpandedEvents.yml ----

description: "change-streams-showExpandedEvents"
schemaVersion: "1.7"
runOnRequirements:
  - minServerVersion: "6.0.0"
    topologies: [ replicaset ]
    serverless: forbid
createEntities:
  - client:
      id: &client0 client0
      observeEvents: [ commandStartedEvent ]
      ignoreCommandMonitoringEvents: [ killCursors ]
      useMultipleMongoses: false
  - database:
      id: &database0 database0
      client: *client0
      databaseName: *database0
  - collection:
      id: &collection0 collection0
      database: *database0
      collectionName: *collection0
  - database:
      id: &database1 database1
      client: *client0
      databaseName: *database1
  - collection:
      id: &collection1 collection1
      database: *database1
      collectionName: *collection1
  - database:
      id: &shardedDb shardedDb
      client: *client0
      databaseName: *shardedDb
  - database:
      id: &adminDb adminDb
      client: *client0
      databaseName: admin
  - collection:
      id: &shardedCollection shardedCollection
      database: *shardedDb
      collectionName: *shardedCollection
initialData:
  - collectionName: *collection0
    databaseName: *database0
    documents: []
tests:
  - description: "when provided, showExpandedEvents is sent as a part of the aggregate command"
    operations:
      - name: createChangeStream
        object: *collection0
        arguments:
          pipeline: []
          showExpandedEvents: true
        saveResultAsEntity: &changeStream0 changeStream0
    expectEvents:
      - client: *client0
        ignoreExtraEvents: true
        events:
          - commandStartedEvent:
              command:
                aggregate: *collection0
                cursor: {}
                pipeline:
                  - $changeStream:
                      showExpandedEvents: true
              commandName: aggregate
              databaseName: *database0
  - description: "when omitted, showExpandedEvents is not sent as a part of the aggregate command"
    operations:
      - name: createChangeStream
        object: *collection0
        arguments:
          pipeline: []
        saveResultAsEntity: &changeStream0 changeStream0
    expectEvents:
      - client: *client0
        ignoreExtraEvents: true
        events:
          - commandStartedEvent:
              command:
                aggregate: *collection0
                cursor: {}
                pipeline:
                  - $changeStream:
                      showExpandedEvents:
                        $$exists: false
              commandName: aggregate
              databaseName: *database0
  - description: "when showExpandedEvents is true, new fields on change stream events are handled appropriately"
    operations:
      - name: dropCollection
        object: *database0
        arguments:
          collection: &existing-collection foo
      - name: createCollection
        object: *database0
        arguments:
          collection: *existing-collection
      - name: createChangeStream
        object: *collection0
        arguments:
          pipeline: []
          showExpandedEvents: true
        saveResultAsEntity: &changeStream0 changeStream0
      - name: insertOne
        object: *collection0
        arguments:
          document:
            a: 1
      - name: createIndex
        object: *collection0
        arguments:
          keys:
            x: 1
          name: x_1
      - name: rename
        object: *collection0
        arguments:
          to: *existing-collection
          dropTarget: true
      - name: iterateUntilDocumentOrError
        object: *changeStream0
        expectResult:
          operationType: insert
          ns:
            db: *database0
            coll: *collection0
          collectionUUID:
            $$exists: true
      - name: iterateUntilDocumentOrError
        object: *changeStream0
        expectResult:
          operationType: createIndexes
          ns:
            db: *database0
            coll: *collection0
          operationDescription:
            $$exists: true
      - name: iterateUntilDocumentOrError
        object: *changeStream0
        expectResult:
          operationType: rename
          ns:
            db: *database0
            coll: *collection0
          to:
            db: *database0
            coll: *existing-collection
          operationDescription:
            dropTarget:
              $$exists: true
            to:
              db: *database0
              coll: *existing-collection
  - description: "when showExpandedEvents is true, createIndex events are reported"
operations: - name: createChangeStream object: *collection0 arguments: pipeline: # On sharded clusters, the create command run when loading initial # data sometimes is still reported in the change stream. To avoid # this, we exclude the create command when creating the change # stream, but specifically don't exclude other events to still catch # driver errors. - $match: operationType: $ne: create showExpandedEvents: true saveResultAsEntity: &changeStream0 changeStream0 - name: createIndex object: *collection0 arguments: keys: x: 1 name: x_1 - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: operationType: createIndexes - description: "when showExpandedEvents is true, dropIndexes events are reported" operations: - name: createIndex object: *collection0 arguments: keys: x: 1 name: &index1 x_1 - name: createChangeStream object: *collection0 arguments: pipeline: [] showExpandedEvents: true saveResultAsEntity: &changeStream0 changeStream0 - name: dropIndex object: *collection0 arguments: name: *index1 - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: operationType: dropIndexes - description: "when showExpandedEvents is true, create events are reported" operations: - name: dropCollection object: *database0 arguments: collection: &collection1 foo - name: createChangeStream object: *database0 arguments: pipeline: [] showExpandedEvents: true saveResultAsEntity: &changeStream0 changeStream0 - name: createCollection object: *database0 arguments: collection: *collection1 - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: operationType: create - description: "when showExpandedEvents is true, create events on views are reported" operations: - name: dropCollection object: *database0 arguments: collection: &collection1 foo - name: createChangeStream object: *database0 arguments: pipeline: [] showExpandedEvents: true saveResultAsEntity: &changeStream0 changeStream0 - name: createCollection object: *database0 arguments: collection: *collection1 viewOn: testName - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: operationType: create - description: "when showExpandedEvents is true, modify events are reported" operations: - name: createIndex object: *collection0 arguments: keys: x: 1 name: &index2 x_2 - name: createChangeStream object: *collection0 arguments: pipeline: [] showExpandedEvents: true saveResultAsEntity: &changeStream0 changeStream0 - name: runCommand object: *database0 arguments: command: collMod: *collection0 commandName: collMod - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: operationType: modify - description: "when showExpandedEvents is true, shardCollection events are reported" runOnRequirements: # Note: minServerVersion is specified in top-level runOnRequirements - topologies: [ sharded ] operations: - name: dropCollection object: *shardedDb arguments: collection: *shardedCollection - name: createCollection object: *shardedDb arguments: collection: *shardedCollection - name: createChangeStream object: *shardedCollection arguments: pipeline: [] showExpandedEvents: true saveResultAsEntity: &changeStream0 changeStream0 - name: runCommand object: *adminDb arguments: command: shardCollection: shardedDb.shardedCollection key: _id: 1 commandName: shardCollection - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: operationType: shardCollection 
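# End of the showExpandedEvents spec. In the Ruby driver these tests surface
# as an option on the watch helpers; a minimal sketch, assuming the
# :show_expanded_events option name from the driver's change stream options:
#
#     require 'mongo'
#
#     client = Mongo::Client.new('mongodb://localhost:27017/?replicaSet=rs0')
#     collection = client.use('database0')['collection0']
#
#     # Passing the option adds { showExpandedEvents: true } to the
#     # $changeStream stage of the underlying aggregate command.
#     collection.watch([], show_expanded_events: true).each do |event|
#       # On 6.0+ servers this also yields DDL events such as createIndexes,
#       # dropIndexes, create, modify and shardCollection.
#       puts "#{event['operationType']} on #{event['ns']}"
#     end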
# ---- mongo-ruby-driver-2.21.3/spec/spec_tests/data/change_streams_unified/change-streams.yml ----

description: "change-streams"
schemaVersion: "1.7"
runOnRequirements:
  - minServerVersion: "3.6"
    # TODO(DRIVERS-2323): Run all possible tests against sharded clusters once we know the
    # cause of unexpected command monitoring events.
    topologies: [ replicaset ]
    serverless: forbid
createEntities:
  - client:
      id: &client0 client0
      observeEvents: [ commandStartedEvent ]
      ignoreCommandMonitoringEvents: [ killCursors ]
      useMultipleMongoses: false
  - client:
      id: &globalClient globalClient
      useMultipleMongoses: false
  - database:
      id: &database0 database0
      client: *client0
      databaseName: *database0
  - collection:
      id: &collection0 collection0
      database: *database0
      collectionName: *collection0
  - database:
      id: &database1 database1
      client: *client0
      databaseName: *database1
  - collection:
      id: &collection1 collection1
      database: *database1
      collectionName: *collection1
  - database:
      id: &globalDatabase0 globalDatabase0
      client: *globalClient
      databaseName: *database0
  - collection:
      id: &globalCollection0 globalCollection0
      database: *globalDatabase0
      collectionName: *collection0
  - database:
      id: &globalDatabase1 globalDatabase1
      client: *globalClient
      databaseName: *database1
  - collection:
      id: &globalCollection1 globalCollection1
      database: *globalDatabase1
      collectionName: *collection1
  # Some tests run operations against db1.coll0 or db0.coll1
  - collection:
      id: &globalDb1Collection0 globalDb1Collection0
      database: *globalDatabase1
      collectionName: *collection0
  - collection:
      id: &globalDb0Collection1 globalDb0Collection1
      database: *globalDatabase0
      collectionName: *collection1
initialData:
  - collectionName: *collection0
    databaseName: *database0
    documents: []
tests:
  - description: "Test array truncation"
    runOnRequirements:
      - minServerVersion: "4.7"
    operations:
      - name: insertOne
        object: *collection0
        arguments:
          document: { "_id": 1, "a": 1, "array": ["foo", {"a": "bar"}, 1, 2, 3] }
      - name: createChangeStream
        object: *collection0
        arguments: { pipeline: [] }
        saveResultAsEntity: &changeStream0 changeStream0
      - name: updateOne
        object: *collection0
        arguments:
          filter: { "_id": 1 }
          update: [ { "$set": { "array": ["foo", {"a": "bar"}] } } ]
      - name: iterateUntilDocumentOrError
        object: *changeStream0
        expectResult: {
          "operationType": "update",
          "ns": { "db": "database0", "coll": "collection0" },
          # It is up to the MongoDB server to decide how to report a change.
          # This expectation is based on the current MongoDB server behavior.
          # Alternatively, we could have used a set of possible expectations of which only one
          # must be satisfied, but the unified test format does not support this.
"updateDescription": { "updatedFields": {}, "removedFields": [], "truncatedArrays": [ { "field": "array", "newSize": 2 } ] } } - description: "Test with document comment" runOnRequirements: - minServerVersion: "4.4" operations: - name: createChangeStream object: *collection0 arguments: pipeline: [] comment: &comment0 { name: "test1" } saveResultAsEntity: &changeStream0 changeStream0 expectEvents: - client: *client0 events: - commandStartedEvent: command: aggregate: *collection0 pipeline: - $changeStream: {} comment: *comment0 - description: "Test with document comment - pre 4.4" runOnRequirements: - maxServerVersion: "4.2.99" operations: - name: createChangeStream object: *collection0 arguments: pipeline: [] comment: &comment0 { name: "test1" } expectError: isClientError: false expectEvents: - client: *client0 events: - commandStartedEvent: command: aggregate: *collection0 pipeline: - $changeStream: {} comment: *comment0 - description: "Test with string comment" operations: - name: createChangeStream object: *collection0 arguments: pipeline: [] comment: "comment" saveResultAsEntity: &changeStream0 changeStream0 expectEvents: - client: *client0 events: - commandStartedEvent: command: aggregate: *collection0 pipeline: - $changeStream: {} comment: "comment" - description: "Test that comment is set on getMore" runOnRequirements: - minServerVersion: "4.4.0" operations: - name: createChangeStream object: *collection0 arguments: pipeline: [] comment: &commentDoc key: "value" saveResultAsEntity: &changeStream0 changeStream0 - name: insertOne object: *collection0 arguments: document: &new_document _id: 1 a: 1 - name: iterateUntilDocumentOrError object: *changeStream0 expectEvents: - client: *client0 events: - commandStartedEvent: command: aggregate: *collection0 pipeline: - $changeStream: {} comment: *commentDoc - commandStartedEvent: command: insert: *collection0 documents: - *new_document - commandStartedEvent: command: getMore: { $$type: [ int, long ] } collection: *collection0 comment: *commentDoc commandName: getMore databaseName: *database0 - description: "Test that comment is not set on getMore - pre 4.4" runOnRequirements: - maxServerVersion: "4.3.99" operations: - name: createChangeStream object: *collection0 arguments: pipeline: [] comment: "comment" saveResultAsEntity: &changeStream0 changeStream0 - name: insertOne object: *collection0 arguments: document: &new_document _id: 1 a: 1 - name: iterateUntilDocumentOrError object: *changeStream0 expectEvents: - client: *client0 events: - commandStartedEvent: command: aggregate: *collection0 pipeline: - $changeStream: {} comment: "comment" - commandStartedEvent: command: insert: *collection0 documents: - *new_document - commandStartedEvent: command: getMore: { $$type: [ int, long ] } collection: *collection0 comment: { $$exists: false } commandName: getMore databaseName: *database0 - description: "to field is set in a rename change event" runOnRequirements: - minServerVersion: "4.0.1" operations: - name: createChangeStream object: *collection0 arguments: { pipeline: [] } saveResultAsEntity: &changeStream0 changeStream0 - name: dropCollection object: *database0 arguments: collection: &collection1 collection1 - name: rename object: *collection0 arguments: to: *collection1 - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: operationType: rename ns: db: *database0 coll: *collection0 to: db: *database0 coll: *collection1 - description: "Test unknown operationType MUST NOT err" operations: - name: createChangeStream object: 
*collection0 arguments: # using $project to simulate future changes to ChangeStreamDocument structure pipeline: [ { $project: { operationType: "addedInFutureMongoDBVersion", ns: 1 } } ] saveResultAsEntity: &changeStream0 changeStream0 - name: insertOne object: *collection0 arguments: document: { "_id": 1, "a": 1 } - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: operationType: "addedInFutureMongoDBVersion" ns: db: *database0 coll: *collection0 - description: "Test newField added in response MUST NOT err" operations: - name: createChangeStream object: *collection0 arguments: # using $project to simulate future changes to ChangeStreamDocument structure pipeline: [ { $project: { operationType: 1, ns: 1, newField: "newFieldValue" } } ] saveResultAsEntity: &changeStream0 changeStream0 - name: insertOne object: *collection0 arguments: document: { "_id": 1, "a": 1 } - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: operationType: "insert" ns: db: *database0 coll: *collection0 newField: "newFieldValue" - description: "Test new structure in ns document MUST NOT err" runOnRequirements: - minServerVersion: "3.6" maxServerVersion: "5.2.99" - minServerVersion: "6.0" operations: - name: createChangeStream object: *collection0 arguments: # using $project to simulate future changes to ChangeStreamDocument structure pipeline: [ { $project: { operationType: "insert", "ns.viewOn": "db.coll" } } ] saveResultAsEntity: &changeStream0 changeStream0 - name: insertOne object: *collection0 arguments: document: { "_id": 1, "a": 1 } - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: operationType: "insert" ns: viewOn: "db.coll" - description: "Test modified structure in ns document MUST NOT err" operations: - name: createChangeStream object: *collection0 arguments: # using $project to simulate future changes to ChangeStreamDocument structure pipeline: [ { $project: { operationType: "insert", ns: { db: "$ns.db", coll: "$ns.coll", viewOn: "db.coll" } } } ] saveResultAsEntity: &changeStream0 changeStream0 - name: insertOne object: *collection0 arguments: document: { "_id": 1, "a": 1 } - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: operationType: "insert" ns: db: *database0 coll: *collection0 viewOn: "db.coll" - description: "Test server error on projecting out _id" runOnRequirements: - minServerVersion: "4.2" # Server returns an error if _id is modified on versions 4.2 and higher operations: - name: createChangeStream object: *collection0 arguments: pipeline: [ { $project: { _id: 0 } } ] saveResultAsEntity: &changeStream0 changeStream0 - name: insertOne object: *collection0 arguments: document: { "_id": 1, "a": 1 } - name: iterateUntilDocumentOrError object: *changeStream0 expectError: errorCode: 280 errorCodeName: "ChangeStreamFatalError" errorLabelsContain: [ "NonResumableChangeStreamError" ] - description: "Test projection in change stream returns expected fields" operations: - name: createChangeStream object: *collection0 arguments: pipeline: [ { $project: { optype: "$operationType", ns: 1, newField: "value" } } ] saveResultAsEntity: &changeStream0 changeStream0 - name: insertOne object: *collection0 arguments: document: { "_id": 1, "a": 1 } - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: optype: "insert" ns: db: *database0 coll: *collection0 newField: "value" - description: $changeStream must be the first stage in a change stream pipeline sent to the server runOnRequirements: - minServerVersion: 
"3.6.0" operations: - name: createChangeStream object: *collection0 arguments: { pipeline: [] } saveResultAsEntity: &changeStream0 changeStream0 - name: insertOne object: *globalCollection0 arguments: document: { x: 1 } - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: _id: { $$exists: true } documentKey: { $$exists: true } operationType: insert ns: db: *database0 coll: *collection0 fullDocument: x: 1 _id: { $$exists: true } expectEvents: - client: *client0 ignoreExtraEvents: true events: - commandStartedEvent: command: aggregate: *collection0 cursor: {} pipeline: [ { $changeStream: {} } ] commandName: aggregate databaseName: *database0 - description: The server returns change stream responses in the specified server response format runOnRequirements: - minServerVersion: "3.6.0" operations: - name: createChangeStream object: *collection0 arguments: { pipeline: [] } saveResultAsEntity: &changeStream0 changeStream0 - name: insertOne object: *globalCollection0 arguments: document: { x: 1 } - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: _id: { $$exists: true } documentKey: { $$exists: true } operationType: insert ns: db: *database0 coll: *collection0 fullDocument: x: 1 _id: { $$exists: true } - description: Executing a watch helper on a Collection results in notifications for changes to the specified collection runOnRequirements: - minServerVersion: "3.6.0" operations: - name: createChangeStream object: *collection0 arguments: { pipeline: [] } saveResultAsEntity: &changeStream0 changeStream0 - name: insertOne object: *globalDb0Collection1 arguments: document: { x: 1 } - name: insertOne object: *globalDb1Collection0 arguments: document: { y: 2 } - name: insertOne object: *globalCollection0 arguments: document: { z: 3 } - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: operationType: insert ns: db: *database0 coll: *collection0 fullDocument: z: 3 _id: { $$exists: true } expectEvents: - client: *client0 ignoreExtraEvents: true events: - commandStartedEvent: command: aggregate: *collection0 cursor: {} pipeline: [ { $changeStream: {} } ] commandName: aggregate databaseName: *database0 - description: Change Stream should allow valid aggregate pipeline stages runOnRequirements: - minServerVersion: "3.6.0" operations: - name: createChangeStream object: *collection0 arguments: pipeline: - $match: fullDocument.z: 3 saveResultAsEntity: &changeStream0 changeStream0 - name: insertOne object: *globalCollection0 arguments: document: { y: 2 } - name: insertOne object: *globalCollection0 arguments: document: { z: 3 } - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: operationType: insert ns: db: *database0 coll: *collection0 fullDocument: z: 3 _id: { $$exists: true } expectEvents: - client: *client0 ignoreExtraEvents: true events: - commandStartedEvent: command: aggregate: *collection0 cursor: {} pipeline: - $changeStream: {} - $match: fullDocument.z: 3 commandName: aggregate databaseName: *database0 - description: Executing a watch helper on a Database results in notifications for changes to all collections in the specified database. 
runOnRequirements: - minServerVersion: "3.8.0" operations: - name: createChangeStream object: *database0 arguments: { pipeline: [] } saveResultAsEntity: &changeStream0 changeStream0 - name: insertOne object: *globalDb0Collection1 arguments: document: { x: 1 } - name: insertOne object: *globalDb1Collection0 arguments: document: { y: 2 } - name: insertOne object: *globalCollection0 arguments: document: { z: 3 } - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: operationType: insert ns: db: *database0 coll: *collection1 fullDocument: x: 1 _id: { $$exists: true } - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: operationType: insert ns: db: *database0 coll: *collection0 fullDocument: z: 3 _id: { $$exists: true } expectEvents: - client: *client0 ignoreExtraEvents: true events: - commandStartedEvent: command: aggregate: 1 cursor: {} pipeline: [ { $changeStream: {} } ] commandName: aggregate databaseName: *database0 - description: Executing a watch helper on a MongoClient results in notifications for changes to all collections in all databases in the cluster. runOnRequirements: - minServerVersion: "3.8.0" operations: - name: createChangeStream object: *client0 arguments: { pipeline: [] } saveResultAsEntity: &changeStream0 changeStream0 - name: insertOne object: *globalDb0Collection1 arguments: document: { x: 1 } - name: insertOne object: *globalDb1Collection0 arguments: document: { y: 2 } - name: insertOne object: *globalCollection0 arguments: document: { z: 3 } - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: operationType: insert ns: db: *database0 coll: *collection1 fullDocument: x: 1 _id: { $$exists: true } - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: operationType: insert ns: db: *database1 coll: *collection0 fullDocument: y: 2 _id: { $$exists: true } - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: operationType: insert ns: db: *database0 coll: *collection0 fullDocument: z: 3 _id: { $$exists: true } expectEvents: - client: *client0 ignoreExtraEvents: true events: - commandStartedEvent: command: aggregate: 1 cursor: {} pipeline: - $changeStream: { allChangesForCluster: true } commandName: aggregate databaseName: admin - description: "Test insert, update, replace, and delete event types" runOnRequirements: - minServerVersion: "3.6.0" operations: - name: createChangeStream object: *collection0 arguments: { pipeline: [] } saveResultAsEntity: &changeStream0 changeStream0 - name: insertOne object: *globalCollection0 arguments: document: { x: 1 } - name: updateOne object: *globalCollection0 arguments: filter: { x: 1 } update: $set: { x: 2 } - name: replaceOne object: *globalCollection0 arguments: filter: { x: 2 } replacement: { x: 3 } - name: deleteOne object: *globalCollection0 arguments: filter: { x: 3 } - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: operationType: insert ns: db: *database0 coll: *collection0 fullDocument: x: 1 _id: { $$exists: true } - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: operationType: update ns: db: *database0 coll: *collection0 updateDescription: updatedFields: { x: 2 } removedFields: [] truncatedArrays: { $$unsetOrMatches: { $$exists: true } } - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: operationType: replace ns: db: *database0 coll: *collection0 fullDocument: x: 3 _id: { $$exists: true } - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: 
operationType: delete ns: db: *database0 coll: *collection0 expectEvents: - client: *client0 ignoreExtraEvents: true events: - commandStartedEvent: command: aggregate: *collection0 cursor: {} pipeline: [ { $changeStream: {} } ] commandName: aggregate databaseName: *database0 - description: Test rename and invalidate event types runOnRequirements: - minServerVersion: "4.0.1" operations: - name: createChangeStream object: *collection0 arguments: { pipeline: [] } saveResultAsEntity: &changeStream0 changeStream0 - name: dropCollection object: *database0 arguments: collection: *collection1 - name: rename object: *globalCollection0 arguments: to: *collection1 - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: operationType: rename ns: db: *database0 coll: *collection0 to: db: *database0 coll: *collection1 - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: operationType: invalidate expectEvents: - client: *client0 ignoreExtraEvents: true events: - commandStartedEvent: command: aggregate: *collection0 cursor: {} pipeline: [ { $changeStream: {} } ] commandName: aggregate databaseName: *database0 - description: Test drop and invalidate event types runOnRequirements: - minServerVersion: "4.0.1" operations: - name: createChangeStream object: *collection0 arguments: { pipeline: [] } saveResultAsEntity: &changeStream0 changeStream0 - name: dropCollection object: *database0 arguments: collection: *collection0 - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: operationType: drop ns: db: *database0 coll: *collection0 - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: operationType: invalidate expectEvents: - client: *client0 ignoreExtraEvents: true events: - commandStartedEvent: command: aggregate: *collection0 cursor: {} pipeline: [ { $changeStream: {} } ] commandName: aggregate databaseName: *database0 # Test that resume logic works correctly even after consecutive retryable failures of a getMore command, # with no intervening events. 
This is ensured by setting the batch size of the change stream to 1, - description: Test consecutive resume runOnRequirements: - minServerVersion: "4.1.7" operations: - name: failPoint object: testRunner arguments: client: *globalClient failPoint: configureFailPoint: failCommand mode: { times: 2 } data: failCommands: [ getMore ] closeConnection: true - name: createChangeStream object: *collection0 arguments: pipeline: [] batchSize: 1 saveResultAsEntity: &changeStream0 changeStream0 - name: insertOne object: *globalCollection0 arguments: document: { x: 1 } - name: insertOne object: *globalCollection0 arguments: document: { x: 2 } - name: insertOne object: *globalCollection0 arguments: document: { x: 3 } - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: operationType: insert ns: db: *database0 coll: *collection0 fullDocument: x: 1 _id: { $$exists: true } - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: operationType: insert ns: db: *database0 coll: *collection0 fullDocument: x: 2 _id: { $$exists: true } - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: operationType: insert ns: db: *database0 coll: *collection0 fullDocument: x: 3 _id: { $$exists: true } expectEvents: - client: *client0 ignoreExtraEvents: true events: - commandStartedEvent: command: aggregate: *collection0 cursor: batchSize: 1 pipeline: [ { $changeStream: {} } ] commandName: aggregate databaseName: *database0 - description: "Test wallTime field is set in a change event" runOnRequirements: - minServerVersion: "6.0.0" operations: - name: createChangeStream object: *collection0 arguments: { pipeline: [] } saveResultAsEntity: &changeStream0 changeStream0 - name: insertOne object: *collection0 arguments: document: { "_id": 1, "a": 1 } - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: operationType: "insert" ns: db: *database0 coll: *collection0 wallTime: { $$exists: true } mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_encryption/000077500000000000000000000000001505113246500257435ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_encryption/aggregate.yml000066400000000000000000000133221505113246500304150ustar00rootroot00000000000000runOn: - minServerVersion: "4.1.10" database_name: &database_name "default" collection_name: &collection_name "default" data: - &doc0_encrypted { _id: 1, encrypted_string: {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAACwj+3zkv2VM+aTfk60RqhXq6a/77WlLwu/BxXFkL7EppGsju/m8f0x5kBDD3EZTtGALGXlym5jnpZAoSIkswHoA==', 'subType': '06'}} } json_schema: {'properties': {'encrypted_w_altname': {'encrypt': {'keyId': '/altname', 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Random'}}, 'encrypted_string': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}, 'random': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Random'}}, 'encrypted_string_equivalent': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}}, 'bsonType': 'object'} key_vault_data: [{'status': 1, '_id': {'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}, 'masterKey': {'provider': 'aws', 'key': 
'arn:aws:kms:us-east-1:579766882180:key/89fcc2c4-08b0-4bd9-9f25-e30687b580d0', 'region': 'us-east-1'}, 'updateDate': {'$date': {'$numberLong': '1552949630483'}}, 'keyMaterial': {'$binary': {'base64': 'AQICAHhQNmWG2CzOm1dq3kWLM+iDUZhEqnhJwH9wZVpuZ94A8gEqnsxXlR51T5EbEVezUqqKAAAAwjCBvwYJKoZIhvcNAQcGoIGxMIGuAgEAMIGoBgkqhkiG9w0BBwEwHgYJYIZIAWUDBAEuMBEEDHa4jo6yp0Z18KgbUgIBEIB74sKxWtV8/YHje5lv5THTl0HIbhSwM6EqRlmBiFFatmEWaeMk4tO4xBX65eq670I5TWPSLMzpp8ncGHMmvHqRajNBnmFtbYxN3E3/WjxmdbOOe+OXpnGJPcGsftc7cB2shRfA4lICPnE26+oVNXT6p0Lo20nY5XC7jyCO', 'subType': '00'}}, 'creationDate': {'$date': {'$numberLong': '1552949630483'}}, 'keyAltNames': ['altname', 'another_altname']}] tests: - description: "Aggregate with deterministic encryption" skipReason: "SERVER-39395" clientOptions: autoEncryptOpts: kmsProviders: aws: {} # Credentials filled in from environment. operations: - name: aggregate arguments: pipeline: - { $match: { encrypted_string: "457-55-5642" } } result: - &doc0 { _id: 1, encrypted_string: "string0" } expectations: # Auto encryption will request the collection info. - command_started_event: command: listCollections: 1 filter: name: *collection_name command_name: listCollections # Then key is fetched from the key vault. - command_started_event: command: find: datakeys filter: {"$or": [{"_id": {"$in": [ {'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}} ] }}, {"keyAltNames": {"$in": []}}]} $db: keyvault readConcern: { level: "majority" } command_name: find - command_started_event: command: aggregate: *collection_name pipeline: - { $match: { encrypted_string: "457-55-5642" } } command_name: aggregate outcome: collection: # Outcome is checked using a separate MongoClient without auto encryption. data: - *doc0_encrypted - description: "Aggregate with empty pipeline" skipReason: "SERVER-40829 hides agg support behind enableTestCommands flag." clientOptions: autoEncryptOpts: kmsProviders: aws: {} # Credentials filled in from environment. operations: - name: aggregate arguments: pipeline: [] result: - { _id: 1, encrypted_string: "string0" } expectations: # Auto encryption will request the collection info. - command_started_event: command: listCollections: 1 filter: name: *collection_name command_name: listCollections - command_started_event: command: aggregate: *collection_name pipeline: [] cursor: {} command_name: aggregate # Needs to fetch key when decrypting results # Then key is fetched from the key vault. - command_started_event: command: find: datakeys filter: {"$or": [{"_id": {"$in": [ {'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}} ] }}, {"keyAltNames": {"$in": []}}]} $db: keyvault readConcern: { level: "majority" } command_name: find outcome: collection: # Outcome is checked using a separate MongoClient without auto encryption. data: - *doc0_encrypted - description: "Aggregate should fail with random encryption" skipReason: "SERVER-39395" clientOptions: autoEncryptOpts: kmsProviders: aws: {} # Credentials filled in from environment. operations: - name: aggregate arguments: pipeline: - { $match: { random: "abc" } } result: errorContains: "Cannot query on fields encrypted with the randomized encryption" - description: "Database aggregate should fail" clientOptions: autoEncryptOpts: kmsProviders: aws: {} # Credentials filled in from environment. 
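# The autoEncryptOpts blocks in these specs correspond to the Ruby client's
# auto_encryption_options. A minimal sketch wiring in the AWS credentials via
# the environment variables used elsewhere in this repository (the schema_map
# value is the json_schema defined above, keyed by namespace):
#
#     require 'mongo'
#
#     client = Mongo::Client.new(
#       'mongodb://localhost:27017',
#       auto_encryption_options: {
#         key_vault_namespace: 'keyvault.datakeys',
#         kms_providers: {
#           aws: {
#             access_key_id: ENV['MONGO_RUBY_DRIVER_AWS_KEY'],
#             secret_access_key: ENV['MONGO_RUBY_DRIVER_AWS_SECRET'],
#           },
#         },
#         schema_map: { 'default.default' => json_schema },
#       },
#     )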
    operations:
      - name: aggregate
        object: database
        arguments:
          pipeline:
            - $currentOp: { allUsers: false, idleConnections: false, localOps: true }
            - $match: { command.aggregate: { $eq: 1 } }
            - $project: { command: 1 }
            - $project: { command.lsid: 0 }
        result:
          errorContains: "non-collection command not supported for auto encryption: aggregate"

mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_encryption/awsTemporary.yml

runOn:
  - minServerVersion: "4.1.10"
database_name: &database_name "default"
collection_name: &collection_name "default"
data: []
json_schema: {'properties': {'encrypted_w_altname': {'encrypt': {'keyId': '/altname', 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Random'}}, 'encrypted_string': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}, 'random': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Random'}}, 'encrypted_string_equivalent': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}}, 'bsonType': 'object'}
key_vault_data: [{'status': 1, '_id': {'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}, 'masterKey': {'provider': 'aws', 'key': 'arn:aws:kms:us-east-1:579766882180:key/89fcc2c4-08b0-4bd9-9f25-e30687b580d0', 'region': 'us-east-1'}, 'updateDate': {'$date': {'$numberLong': '1552949630483'}}, 'keyMaterial': {'$binary': {'base64': 'AQICAHhQNmWG2CzOm1dq3kWLM+iDUZhEqnhJwH9wZVpuZ94A8gEqnsxXlR51T5EbEVezUqqKAAAAwjCBvwYJKoZIhvcNAQcGoIGxMIGuAgEAMIGoBgkqhkiG9w0BBwEwHgYJYIZIAWUDBAEuMBEEDHa4jo6yp0Z18KgbUgIBEIB74sKxWtV8/YHje5lv5THTl0HIbhSwM6EqRlmBiFFatmEWaeMk4tO4xBX65eq670I5TWPSLMzpp8ncGHMmvHqRajNBnmFtbYxN3E3/WjxmdbOOe+OXpnGJPcGsftc7cB2shRfA4lICPnE26+oVNXT6p0Lo20nY5XC7jyCO', 'subType': '00'}}, 'creationDate': {'$date': {'$numberLong': '1552949630483'}}, 'keyAltNames': ['altname', 'another_altname']}]

tests:
  - description: "Insert a document with auto encryption using the AWS provider with temporary credentials"
    clientOptions:
      autoEncryptOpts:
        kmsProviders:
          awsTemporary: {}
    operations:
      - name: insertOne
        arguments:
          document: &doc0 { _id: 1, encrypted_string: "string0" }
    expectations:
      # Auto encryption will request the collection info.
      - command_started_event:
          command:
            listCollections: 1
            filter:
              name: *collection_name
          command_name: listCollections
      # Then key is fetched from the key vault.
      - command_started_event:
          command:
            find: datakeys
            filter: { $or: [ { _id: { $in: [ {'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}} ] } }, { keyAltNames: { $in: [] } } ] }
            $db: keyvault
          command_name: find
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - &doc0_encrypted { _id: 1, encrypted_string: {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAACwj+3zkv2VM+aTfk60RqhXq6a/77WlLwu/BxXFkL7EppGsju/m8f0x5kBDD3EZTtGALGXlym5jnpZAoSIkswHoA==', 'subType': '06'}} }
            ordered: true
          command_name: insert
    outcome:
      collection:
        # Outcome is checked using a separate MongoClient without auto encryption.
        data:
          - *doc0_encrypted

  - description: "Insert with invalid temporary credentials"
    clientOptions:
      autoEncryptOpts:
        kmsProviders:
          awsTemporaryNoSessionToken: {}
    operations:
      - name: insertOne
        arguments:
          document: *doc0
        result:
          errorContains: "security token"

mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_encryption/azureKMS.yml

runOn:
  - minServerVersion: "4.1.10"
database_name: &database_name "default"
collection_name: &collection_name "default"
data: []
json_schema: {'properties': {'encrypted_string_aws': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}, 'encrypted_string_azure': {'encrypt': {'keyId': [{'$binary': {'base64': 'AZURE+AAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}, 'encrypted_string_gcp': {'encrypt': {'keyId': [{'$binary': {'base64': 'GCP+AAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}, 'encrypted_string_local': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}, 'encrypted_string_kmip': {'encrypt': {'keyId': [{'$binary': {'base64': 'dBHpr8aITfeBQ15grpbLpQ==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}}, 'bsonType': 'object'}
key_vault_data: [{'_id': {'$binary': {'base64': 'AZURE+AAAAAAAAAAAAAAAA==', 'subType': '04'}}, 'keyMaterial': {'$binary': {'base64': 'n+HWZ0ZSVOYA3cvQgP7inN4JSXfOH85IngmeQxRpQHjCCcqT3IFqEWNlrsVHiz3AELimHhX4HKqOLWMUeSIT6emUDDoQX9BAv8DR1+E1w4nGs/NyEneac78EYFkK3JysrFDOgl2ypCCTKAypkn9CkAx1if4cfgQE93LW4kczcyHdGiH36CIxrCDGv1UzAvERN5Qa47DVwsM6a+hWsF2AAAJVnF0wYLLJU07TuRHdMrrphPWXZsFgyV+lRqJ7DDpReKNO8nMPLV/mHqHBHGPGQiRdb9NoJo8CvokGz4+KE8oLwzKf6V24dtwZmRkrsDV4iOhvROAzz+Euo1ypSkL3mw==', 'subType': '00'}}, 'creationDate': {'$date': {'$numberLong': '1601573901680'}}, 'updateDate': {'$date': {'$numberLong': '1601573901680'}}, 'status': {'$numberInt': '0'}, 'masterKey': {'provider': 'azure', 'keyVaultEndpoint': 'key-vault-csfle.vault.azure.net', 'keyName': 'key-name-csfle'}, 'keyAltNames': ['altname', 'azure_altname']}]

tests:
  - description: "Insert a document with auto encryption using Azure KMS provider"
    clientOptions:
      autoEncryptOpts:
        kmsProviders:
          azure: {}
    operations:
      - name: insertOne
        arguments:
          document: &doc0 { _id: 1, encrypted_string_azure: "string0" }
    expectations:
      # Auto encryption will request the collection info.
      - command_started_event:
          command:
            listCollections: 1
            filter:
              name: *collection_name
          command_name: listCollections
      # Then key is fetched from the key vault.
      - command_started_event:
          command:
            find: datakeys
            filter: { $or: [ { _id: { $in: [ {'$binary': {'base64': 'AZURE+AAAAAAAAAAAAAAAA==', 'subType': '04'}} ] } }, { keyAltNames: { $in: [] } } ] }
            $db: keyvault
          command_name: find
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - &doc0_encrypted { _id: 1, encrypted_string_azure: {'$binary': {'base64': 'AQGVERPgAAAAAAAAAAAAAAAC5DbBSwPwfSlBrDtRuglvNvCXD1KzDuCKY2P+4bRFtHDjpTOE2XuytPAUaAbXf1orsPq59PVZmsbTZbt2CB8qaQ==', 'subType': '06'}} }
            ordered: true
          command_name: insert
    outcome:
      collection:
        # Outcome is checked using a separate MongoClient without auto encryption.
        data:
          - *doc0_encrypted

mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_encryption/badQueries.yml

runOn:
  - minServerVersion: "4.1.10"
    topology: [ "replicaset", "sharded" ]
database_name: &database_name "default"
collection_name: &collection_name "default"
data:
  - &doc0_encrypted { _id: 1, encrypted_string: {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAACwj+3zkv2VM+aTfk60RqhXq6a/77WlLwu/BxXFkL7EppGsju/m8f0x5kBDD3EZTtGALGXlym5jnpZAoSIkswHoA==', 'subType': '06'}} }
  - &doc1_encrypted { _id: 2, encrypted_string: {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAACDdw4KFz3ZLquhsbt7RmDjD0N67n0uSXx7IGnQNCLeIKvot6s/ouI21Eo84IOtb6lhwUNPlSEBNY0/hbszWAKJg==', 'subType': '06'}} }
json_schema: {'properties': {'encrypted_w_altname': {'encrypt': {'keyId': '/altname', 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Random'}}, 'encrypted_string': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}, 'random': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Random'}}, 'encrypted_string_equivalent': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}}, 'bsonType': 'object'}
key_vault_data: [{'status': 1, '_id': {'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}, 'masterKey': {'provider': 'aws', 'key': 'arn:aws:kms:us-east-1:579766882180:key/89fcc2c4-08b0-4bd9-9f25-e30687b580d0', 'region': 'us-east-1'}, 'updateDate': {'$date': {'$numberLong': '1552949630483'}}, 'keyMaterial': {'$binary': {'base64': 'AQICAHhQNmWG2CzOm1dq3kWLM+iDUZhEqnhJwH9wZVpuZ94A8gEqnsxXlR51T5EbEVezUqqKAAAAwjCBvwYJKoZIhvcNAQcGoIGxMIGuAgEAMIGoBgkqhkiG9w0BBwEwHgYJYIZIAWUDBAEuMBEEDHa4jo6yp0Z18KgbUgIBEIB74sKxWtV8/YHje5lv5THTl0HIbhSwM6EqRlmBiFFatmEWaeMk4tO4xBX65eq670I5TWPSLMzpp8ncGHMmvHqRajNBnmFtbYxN3E3/WjxmdbOOe+OXpnGJPcGsftc7cB2shRfA4lICPnE26+oVNXT6p0Lo20nY5XC7jyCO', 'subType': '00'}}, 'creationDate': {'$date': {'$numberLong': '1552949630483'}}, 'keyAltNames': ['altname', 'another_altname']}]

# TODO: I could see an argument against having these tests of mongocryptd as part
# of driver tests. When mongocryptd introduces support for these operators, these
# tests will fail. But it's also easy enough to remove these tests when that happens.
tests:
  - description: "$text unconditionally fails"
    clientOptions:
      autoEncryptOpts:
        kmsProviders:
          aws: {} # Credentials filled in from environment.
    operations:
      - name: find
        arguments: { filter: { $text: { $search: "search text" } } }
        result: { errorContains: "Unsupported match expression operator for encryption" }

  - description: "$where unconditionally fails"
    clientOptions:
      autoEncryptOpts:
        kmsProviders:
          aws: {} # Credentials filled in from environment.
    operations:
      - name: find
        arguments: { filter: { $where: { $code: "function() { return true }" } } }
        result: { errorContains: "Unsupported match expression operator for encryption" }

  - description: "$bit operators succeed on unencrypted, error on encrypted"
    clientOptions:
      autoEncryptOpts:
        kmsProviders:
          aws: {} # Credentials filled in from environment.
    operations:
      - name: find
        arguments: { filter: { unencrypted: { $bitsAllClear: 35 } } }
        result: []
      - name: find
        arguments: { filter: { encrypted_string: { $bitsAllClear: 35 } } }
        result: { errorContains: "Invalid match expression operator on encrypted field" }
      - name: find
        arguments: { filter: { unencrypted: { $bitsAllSet: 35 } } }
        result: []
      - name: find
        arguments: { filter: { encrypted_string: { $bitsAllSet: 35 } } }
        result: { errorContains: "Invalid match expression operator on encrypted field" }
      - name: find
        arguments: { filter: { unencrypted: { $bitsAnyClear: 35 } } }
        result: []
      - name: find
        arguments: { filter: { encrypted_string: { $bitsAnyClear: 35 } } }
        result: { errorContains: "Invalid match expression operator on encrypted field" }
      - name: find
        arguments: { filter: { unencrypted: { $bitsAnySet: 35 } } }
        result: []
      - name: find
        arguments: { filter: { encrypted_string: { $bitsAnySet: 35 } } }
        result: { errorContains: "Invalid match expression operator on encrypted field" }

  - description: "geo operators succeed on unencrypted, error on encrypted"
    clientOptions:
      autoEncryptOpts:
        kmsProviders:
          aws: {} # Credentials filled in from environment.
    operations:
      - name: find
        arguments: { filter: { unencrypted: { $near: [0,0] } } }
        result:
          # Still an error because no geo index, but from mongod - not mongocryptd.
          errorContains: "unable to find index"
      - name: find
        arguments: { filter: { encrypted_string: { $near: [0,0] } } }
        result: { errorContains: "Invalid match expression operator on encrypted field" }
      - name: find
        arguments: { filter: { unencrypted: { $nearSphere: [0,0] } } }
        result:
          # Still an error because no geo index, but from mongod - not mongocryptd.
          errorContains: "unable to find index"
      - name: find
        arguments: { filter: { encrypted_string: { $nearSphere: [0,0] } } }
        result: { errorContains: "Invalid match expression operator on encrypted field" }
      - name: find
        arguments: { filter: { unencrypted: { $geoIntersects: { $geometry: { type: "Polygon", coordinates: [[ [0,0], [1,0], [1,1], [0,0] ]] } } } } }
        result: []
      - name: find
        arguments: { filter: { encrypted_string: { $geoIntersects: { $geometry: { type: "Polygon", coordinates: [[ [0,0], [1,0], [1,1], [0,0] ]] } } } } }
        result: { errorContains: "Invalid match expression operator on encrypted field" }
      - name: find
        arguments: { filter: { unencrypted: { $geoWithin: { $geometry: { type: "Polygon", coordinates: [[ [0,0], [1,0], [1,1], [0,0] ]] } } } } }
        result: []
      - name: find
        arguments: { filter: { encrypted_string: { $geoWithin: { $geometry: { type: "Polygon", coordinates: [[ [0,0], [1,0], [1,1], [0,0] ]] } } } } }
        result: { errorContains: "Invalid match expression operator on encrypted field" }

  - description: "inequality operators succeed on unencrypted, error on encrypted"
    clientOptions:
      autoEncryptOpts:
        kmsProviders:
          aws: {} # Credentials filled in from environment.
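    # Deterministic encryption preserves only equality: ciphertext for a given
    # value is stable, so $eq and $in can be rewritten against it, but ordering
    # is not preserved, which is why the range operators below must error on
    # the encrypted field while passing through on the unencrypted one.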
    operations:
      - name: find
        arguments: { filter: { unencrypted: { $gt: 1 } } }
        result: []
      - name: find
        arguments: { filter: { encrypted_string: { $gt: 1 } } }
        result: { errorContains: "Invalid match expression operator on encrypted field" }
      - name: find
        arguments: { filter: { unencrypted: { $lt: 1 } } }
        result: []
      - name: find
        arguments: { filter: { encrypted_string: { $lt: 1 } } }
        result: { errorContains: "Invalid match expression operator on encrypted field" }
      - name: find
        arguments: { filter: { unencrypted: { $gte: 1 } } }
        result: []
      - name: find
        arguments: { filter: { encrypted_string: { $gte: 1 } } }
        result: { errorContains: "Invalid match expression operator on encrypted field" }
      - name: find
        arguments: { filter: { unencrypted: { $lte: 1 } } }
        result: []
      - name: find
        arguments: { filter: { encrypted_string: { $lte: 1 } } }
        result: { errorContains: "Invalid match expression operator on encrypted field" }

  - description: "other misc operators succeed on unencrypted, error on encrypted"
    clientOptions:
      autoEncryptOpts:
        kmsProviders:
          aws: {} # Credentials filled in from environment.
    operations:
      - name: find
        arguments: { filter: { unencrypted: { $mod: [3, 1] } } }
        result: []
      - name: find
        arguments: { filter: { encrypted_string: { $mod: [3, 1] } } }
        result: { errorContains: "Invalid match expression operator on encrypted field" }
      - name: find
        arguments: { filter: { unencrypted: { $regex: "pattern", $options: "" } } }
        result: []
      - name: find
        arguments: { filter: { encrypted_string: { $regex: "pattern", $options: "" } } }
        result: { errorContains: "Invalid match expression operator on encrypted field" }
      - name: find
        arguments: { filter: { unencrypted: { $size: 2 } } }
        result: []
      - name: find
        arguments: { filter: { encrypted_string: { $size: 2 } } }
        result: { errorContains: "Invalid match expression operator on encrypted field" }
      - name: find
        arguments: { filter: { unencrypted: { $type: 2 } } }
        result: []
      - name: find
        arguments: { filter: { encrypted_string: { $type: 2 } } }
        result: { errorContains: "Invalid match expression operator on encrypted field" }
      - name: find
        arguments: { filter: { unencrypted: { $eq: null } } }
        result:
          - &doc0 { _id: 1, encrypted_string: "string0" }
          - &doc1 { _id: 2, encrypted_string: "string1" }
      - name: find
        arguments: { filter: { encrypted_string: { $eq: null } } }
        result: { errorContains: "Illegal equality to null predicate for encrypted field" }
      - name: find
        arguments: { filter: { unencrypted: { $in: [null] } } }
        result:
          - *doc0
          - *doc1
      - name: find
        arguments: { filter: { encrypted_string: { $in: [null] } } }
        result: { errorContains: "Illegal equality to null inside $in against an encrypted field" }

  - description: "$addToSet succeeds on unencrypted, error on encrypted"
    clientOptions:
      autoEncryptOpts:
        kmsProviders:
          aws: {} # Credentials filled in from environment.
    operations:
      - name: updateOne
        arguments:
          filter: { }
          update: { $addToSet: { "unencrypted": ["a"] } }
        result: { matchedCount: 1, modifiedCount: 1, upsertedCount: 0 }
      - name: updateOne
        arguments:
          filter: { }
          update: { $addToSet: { "encrypted_string": ["a"] } }
        result: { errorContains: "$addToSet not allowed on encrypted values" }

  - description: "$inc succeeds on unencrypted, error on encrypted"
    clientOptions:
      autoEncryptOpts:
        kmsProviders:
          aws: {} # Credentials filled in from environment.
    operations:
      - name: updateOne
        arguments:
          filter: { }
          update: { $inc: { "unencrypted": 1 } }
        result: { matchedCount: 1, modifiedCount: 1, upsertedCount: 0 }
      - name: updateOne
        arguments:
          filter: { }
          update: { $inc: { "encrypted_string": 1 } }
        result: { errorContains: "$inc and $mul not allowed on encrypted values" }

  - description: "$mul succeeds on unencrypted, error on encrypted"
    clientOptions:
      autoEncryptOpts:
        kmsProviders:
          aws: {} # Credentials filled in from environment.
    operations:
      - name: updateOne
        arguments:
          filter: { }
          update: { $mul: { "unencrypted": 1 } }
        result: { matchedCount: 1, modifiedCount: 1, upsertedCount: 0 }
      - name: updateOne
        arguments:
          filter: { }
          update: { $mul: { "encrypted_string": 1 } }
        result: { errorContains: "$inc and $mul not allowed on encrypted values" }

  - description: "$max succeeds on unencrypted, error on encrypted"
    clientOptions:
      autoEncryptOpts:
        kmsProviders:
          aws: {} # Credentials filled in from environment.
    operations:
      - name: updateOne
        arguments:
          filter: { }
          update: { $max: { "unencrypted": 1 } }
        result: { matchedCount: 1, modifiedCount: 1, upsertedCount: 0 }
      - name: updateOne
        arguments:
          filter: { }
          update: { $max: { "encrypted_string": 1 } }
        result: { errorContains: "$max and $min not allowed on encrypted values" }

  - description: "$min succeeds on unencrypted, error on encrypted"
    clientOptions:
      autoEncryptOpts:
        kmsProviders:
          aws: {} # Credentials filled in from environment.
    operations:
      - name: updateOne
        arguments:
          filter: { }
          update: { $min: { "unencrypted": 1 } }
        result: { matchedCount: 1, modifiedCount: 1, upsertedCount: 0 }
      - name: updateOne
        arguments:
          filter: { }
          update: { $min: { "encrypted_string": 1 } }
        result: { errorContains: "$max and $min not allowed on encrypted values" }

  - description: "$currentDate succeeds on unencrypted, error on encrypted"
    clientOptions:
      autoEncryptOpts:
        kmsProviders:
          aws: {} # Credentials filled in from environment.
    operations:
      - name: updateOne
        arguments:
          filter: { }
          update: { $currentDate: { "unencrypted": true } }
        result: { matchedCount: 1, modifiedCount: 1, upsertedCount: 0 }
      - name: updateOne
        arguments:
          filter: { }
          update: { $currentDate: { "encrypted_string": true } }
        result: { errorContains: "$currentDate not allowed on encrypted values" }

  - description: "$pop succeeds on unencrypted, error on encrypted"
    clientOptions:
      autoEncryptOpts:
        kmsProviders:
          aws: {} # Credentials filled in from environment.
    operations:
      - name: updateOne
        arguments:
          filter: { }
          update: { $pop: { "unencrypted": 1 } }
        result: { matchedCount: 1, modifiedCount: 0, upsertedCount: 0 }
      - name: updateOne
        arguments:
          filter: { }
          update: { $pop: { "encrypted_string": 1 } }
        result: { errorContains: "$pop not allowed on encrypted values" }

  - description: "$pull succeeds on unencrypted, error on encrypted"
    clientOptions:
      autoEncryptOpts:
        kmsProviders:
          aws: {} # Credentials filled in from environment.
    operations:
      - name: updateOne
        arguments:
          filter: { }
          update: { $pull: { "unencrypted": 1 } }
        result: { matchedCount: 1, modifiedCount: 0, upsertedCount: 0 }
      - name: updateOne
        arguments:
          filter: { }
          update: { $pull: { "encrypted_string": 1 } }
        result: { errorContains: "$pull not allowed on encrypted values" }

  - description: "$pullAll succeeds on unencrypted, error on encrypted"
    clientOptions:
      autoEncryptOpts:
        kmsProviders:
          aws: {} # Credentials filled in from environment.
    operations:
      - name: updateOne
        arguments:
          filter: { }
          update: { $pullAll: { "unencrypted": [1] } }
        result: { matchedCount: 1, modifiedCount: 0, upsertedCount: 0 }
      - name: updateOne
        arguments:
          filter: { }
          update: { $pullAll: { "encrypted_string": [1] } }
        result: { errorContains: "$pullAll not allowed on encrypted values" }

  - description: "$push succeeds on unencrypted, error on encrypted"
    clientOptions:
      autoEncryptOpts:
        kmsProviders:
          aws: {} # Credentials filled in from environment.
    operations:
      - name: updateOne
        arguments:
          filter: { }
          update: { $push: { "unencrypted": 1 } }
        result: { matchedCount: 1, modifiedCount: 1, upsertedCount: 0 }
      - name: updateOne
        arguments:
          filter: { }
          update: { $push: { "encrypted_string": 1 } }
        result: { errorContains: "$push not allowed on encrypted values" }

  - description: "array filters on encrypted fields does not error in mongocryptd, but errors in mongod"
    clientOptions:
      autoEncryptOpts:
        kmsProviders:
          aws: {} # Credentials filled in from environment.
    operations:
      - name: updateOne
        arguments:
          filter: { }
          update: { $set: { "encrypted_string.$[i].x": 1 } }
          arrayFilters: [{ i.x: 1 }]
        result: { errorContains: "Array update operations not allowed on encrypted values" }

  - description: "positional operator succeeds on unencrypted, errors on encrypted"
    clientOptions:
      autoEncryptOpts:
        kmsProviders:
          aws: {} # Credentials filled in from environment.
    operations:
      - name: updateOne
        arguments:
          filter: { "unencrypted": 1 }
          update: { $set: { "unencrypted.$": 1 } }
        result: { matchedCount: 0, modifiedCount: 0, upsertedCount: 0 }
      - name: updateOne
        arguments:
          filter: { "encrypted_string": "abc" }
          update: { $set: { "encrypted_string.$": "abc" } }
        result: { errorContains: "Cannot encrypt fields below '$' positional update operator" }

  - description: "an update that would produce an array on an encrypted field errors"
    clientOptions:
      autoEncryptOpts:
        kmsProviders:
          aws: {} # Credentials filled in from environment.
    operations:
      - name: updateOne
        arguments:
          filter: { }
          update: { $set: { "encrypted_string": [1,2] } }
        result:
          # DRIVERS-2272: The expected error message changed in mongocryptd 6.0. Before it was:
          # "Cannot encrypt element of type array because schema requires that type is one of: [ string ]"
          # After it is:
          # "Cannot encrypt element of type: array"
          # Only check for the common prefix.
          errorContains: "Cannot encrypt element of type"

  - description: "an insert with encrypted field on _id errors"
    clientOptions:
      autoEncryptOpts:
        kmsProviders:
          aws: {} # Credentials filled in from environment.
        schemaMap:
          "default.default": {'properties': {'_id': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}}}
    operations:
      - name: insertOne
        arguments:
          document: { _id: 1 }
        result: { errorContains: "Invalid schema containing the 'encrypt' keyword." }

  - description: "an insert with an array value for an encrypted field fails"
    clientOptions:
      autoEncryptOpts:
        kmsProviders:
          aws: {} # Credentials filled in from environment.
    operations:
      - name: insertOne
        arguments:
          document: { encrypted_string: [ "123", "456"] }
        result:
          # DRIVERS-2272: The expected error message changed in mongocryptd 6.0. Before it was:
          # "Cannot encrypt element of type array because schema requires that type is one of: [ string ]"
          # After it is:
          # "Cannot encrypt element of type: array"
          # Only check for the common prefix.
errorContains: "Cannot encrypt element of type" - description: "an insert with a Timestamp(0,0) value in the top-level fails" clientOptions: autoEncryptOpts: kmsProviders: aws: {} # Credentials filled in from environment. operations: - name: insertOne arguments: document: { random: {"$timestamp": {"t": 0, "i": 0 }} } result: errorContains: "A command that inserts cannot supply Timestamp(0, 0) for an encrypted" - description: "distinct with the key referring to a field where the keyID is a JSON Pointer errors" clientOptions: autoEncryptOpts: kmsProviders: aws: {} # Credentials filled in from environment. operations: - name: distinct arguments: filter: {} fieldName: "encrypted_w_altname" result: errorContains: "The distinct key is not allowed to be marked for encryption with a non-UUID keyId" mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_encryption/badSchema.yml000066400000000000000000000070471505113246500303450ustar00rootroot00000000000000runOn: - minServerVersion: "4.1.10" database_name: &database_name "default" collection_name: &collection_name "default" data: [] key_vault_data: [{'status': 1, '_id': {'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}, 'masterKey': {'provider': 'aws', 'key': 'arn:aws:kms:us-east-1:579766882180:key/89fcc2c4-08b0-4bd9-9f25-e30687b580d0', 'region': 'us-east-1'}, 'updateDate': {'$date': {'$numberLong': '1552949630483'}}, 'keyMaterial': {'$binary': {'base64': 'AQICAHhQNmWG2CzOm1dq3kWLM+iDUZhEqnhJwH9wZVpuZ94A8gEqnsxXlR51T5EbEVezUqqKAAAAwjCBvwYJKoZIhvcNAQcGoIGxMIGuAgEAMIGoBgkqhkiG9w0BBwEwHgYJYIZIAWUDBAEuMBEEDHa4jo6yp0Z18KgbUgIBEIB74sKxWtV8/YHje5lv5THTl0HIbhSwM6EqRlmBiFFatmEWaeMk4tO4xBX65eq670I5TWPSLMzpp8ncGHMmvHqRajNBnmFtbYxN3E3/WjxmdbOOe+OXpnGJPcGsftc7cB2shRfA4lICPnE26+oVNXT6p0Lo20nY5XC7jyCO', 'subType': '00'}}, 'creationDate': {'$date': {'$numberLong': '1552949630483'}}, 'keyAltNames': ['altname', 'another_altname']}] tests: - description: "Schema with an encrypted field in an array" clientOptions: autoEncryptOpts: schemaMap: "default.default": {'properties': {'encrypted_string': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Random'}}}, 'bsonType': 'array'} kmsProviders: aws: {} # Credentials filled in from environment. operations: - name: insertOne arguments: document: &doc0 { _id: 1, encrypted_string: "string0" } result: errorContains: "Invalid schema" outcome: collection: data: [] - description: "Schema without specifying parent object types" clientOptions: autoEncryptOpts: schemaMap: "default.default": {'properties': {'foo': {'properties': {'bar': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Random'}}}}}} kmsProviders: aws: {} # Credentials filled in from environment. operations: - name: insertOne arguments: document: *doc0 result: errorContains: "Invalid schema" outcome: collection: data: [] - description: "Schema with siblings of encrypt document" clientOptions: autoEncryptOpts: schemaMap: "default.default": {'properties': {'encrypted_string': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Random'}, 'bsonType': 'object'}}} kmsProviders: aws: {} # Credentials filled in from environment. 
    operations:
      - name: insertOne
        arguments:
          document: *doc0
        result: { errorContains: "'encrypt' cannot be used in conjunction with 'bsonType'" }
    outcome:
      collection:
        data: []

  - description: "Schema with logical keywords"
    clientOptions:
      autoEncryptOpts:
        schemaMap:
          "default.default": {'anyOf': [{'properties': {'encrypted_string': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Random'}}}}]}
        kmsProviders:
          aws: {} # Credentials filled in from environment.
    operations:
      - name: insertOne
        arguments:
          document: *doc0
        result: { errorContains: "Invalid schema" }
    outcome:
      collection:
        data: []

mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_encryption/basic.yml

runOn:
  - minServerVersion: "4.1.10"
database_name: &database_name "default"
collection_name: &collection_name "default"
data: []
json_schema: {'properties': {'encrypted_w_altname': {'encrypt': {'keyId': '/altname', 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Random'}}, 'encrypted_string': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}, 'random': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Random'}}, 'encrypted_string_equivalent': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}}, 'bsonType': 'object'}
key_vault_data: [{'status': 1, '_id': {'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}, 'masterKey': {'provider': 'aws', 'key': 'arn:aws:kms:us-east-1:579766882180:key/89fcc2c4-08b0-4bd9-9f25-e30687b580d0', 'region': 'us-east-1'}, 'updateDate': {'$date': {'$numberLong': '1552949630483'}}, 'keyMaterial': {'$binary': {'base64': 'AQICAHhQNmWG2CzOm1dq3kWLM+iDUZhEqnhJwH9wZVpuZ94A8gEqnsxXlR51T5EbEVezUqqKAAAAwjCBvwYJKoZIhvcNAQcGoIGxMIGuAgEAMIGoBgkqhkiG9w0BBwEwHgYJYIZIAWUDBAEuMBEEDHa4jo6yp0Z18KgbUgIBEIB74sKxWtV8/YHje5lv5THTl0HIbhSwM6EqRlmBiFFatmEWaeMk4tO4xBX65eq670I5TWPSLMzpp8ncGHMmvHqRajNBnmFtbYxN3E3/WjxmdbOOe+OXpnGJPcGsftc7cB2shRfA4lICPnE26+oVNXT6p0Lo20nY5XC7jyCO', 'subType': '00'}}, 'creationDate': {'$date': {'$numberLong': '1552949630483'}}, 'keyAltNames': ['altname', 'another_altname']}]

tests:
  - description: "Insert with deterministic encryption, then find it"
    clientOptions:
      autoEncryptOpts:
        kmsProviders:
          aws: {} # Credentials filled in from environment.
    operations:
      - name: insertOne
        arguments:
          document: &doc0 { _id: 1, encrypted_string: "string0" }
      - name: find
        arguments:
          filter: { _id: 1 }
        result: [*doc0]
    expectations:
      # Auto encryption will request the collection info.
      - command_started_event:
          command:
            listCollections: 1
            filter:
              name: *collection_name
          command_name: listCollections
      # Then key is fetched from the key vault.
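      # The key vault query matches the data key either by _id (taken from the
      # schema's keyId) or by keyAltNames; with no alt names in play the
      # $in: [] clause matches nothing. The majority read concern ensures the
      # driver only encrypts with a durably committed data key.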
      - command_started_event:
          command:
            find: datakeys
            filter: {"$or": [{"_id": {"$in": [ {'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}} ] }}, {"keyAltNames": {"$in": []}}]}
            $db: keyvault
            readConcern: { level: "majority" }
          command_name: find
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - &doc0_encrypted { _id: 1, encrypted_string: {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAACwj+3zkv2VM+aTfk60RqhXq6a/77WlLwu/BxXFkL7EppGsju/m8f0x5kBDD3EZTtGALGXlym5jnpZAoSIkswHoA==', 'subType': '06'}} }
            ordered: true
          command_name: insert
      - command_started_event:
          command:
            find: *collection_name
            filter: { _id: 1 }
          command_name: find
    outcome:
      collection:
        # Outcome is checked using a separate MongoClient without auto encryption.
        data:
          - *doc0_encrypted

  - description: "Insert with randomized encryption, then find it"
    clientOptions:
      autoEncryptOpts:
        kmsProviders:
          aws: {} # Credentials filled in from environment.
    operations:
      - name: insertOne
        arguments:
          document: &doc1 { _id: 1, random: "123" }
      - name: find
        arguments:
          filter: { _id: 1 }
        result: [*doc1]
    expectations:
      # Auto encryption will request the collection info.
      - command_started_event:
          command:
            listCollections: 1
            filter:
              name: *collection_name
          command_name: listCollections
      # Then key is fetched from the key vault.
      - command_started_event:
          command:
            find: datakeys
            filter: {"$or": [{"_id": {"$in": [ {'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}} ] }}, {"keyAltNames": {"$in": []}}]}
            $db: keyvault
            readConcern: { level: "majority" }
          command_name: find
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - { _id: 1, random: { $$type: "binData" } }
            ordered: true
          command_name: insert
      - command_started_event:
          command:
            find: *collection_name
            filter: { _id: 1 }
          command_name: find
    outcome:
      collection:
        # Outcome is checked using a separate MongoClient without auto encryption.
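        # The outcome check is equivalent to reading back with a plain client,
        # e.g. (illustrative Ruby sketch, not part of this spec file):
        #
        #   plain = Mongo::Client.new('mongodb://localhost:27017/default')
        #   plain['default'].find.to_a
        #   # => the document below, with `random` still opaque BinData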
        data:
          - { _id: 1, random: { $$type: "binData" } }

mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_encryption/bulk.yml

runOn:
  - minServerVersion: "4.1.10"
database_name: &database_name "default"
collection_name: &collection_name "default"
data: []
json_schema: {'properties': {'encrypted_w_altname': {'encrypt': {'keyId': '/altname', 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Random'}}, 'encrypted_string': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}, 'random': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Random'}}, 'encrypted_string_equivalent': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}}, 'bsonType': 'object'}
key_vault_data: [{'status': 1, '_id': {'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}, 'masterKey': {'provider': 'aws', 'key': 'arn:aws:kms:us-east-1:579766882180:key/89fcc2c4-08b0-4bd9-9f25-e30687b580d0', 'region': 'us-east-1'}, 'updateDate': {'$date': {'$numberLong': '1552949630483'}}, 'keyMaterial': {'$binary': {'base64': 'AQICAHhQNmWG2CzOm1dq3kWLM+iDUZhEqnhJwH9wZVpuZ94A8gEqnsxXlR51T5EbEVezUqqKAAAAwjCBvwYJKoZIhvcNAQcGoIGxMIGuAgEAMIGoBgkqhkiG9w0BBwEwHgYJYIZIAWUDBAEuMBEEDHa4jo6yp0Z18KgbUgIBEIB74sKxWtV8/YHje5lv5THTl0HIbhSwM6EqRlmBiFFatmEWaeMk4tO4xBX65eq670I5TWPSLMzpp8ncGHMmvHqRajNBnmFtbYxN3E3/WjxmdbOOe+OXpnGJPcGsftc7cB2shRfA4lICPnE26+oVNXT6p0Lo20nY5XC7jyCO', 'subType': '00'}}, 'creationDate': {'$date': {'$numberLong': '1552949630483'}}, 'keyAltNames': ['altname', 'another_altname']}]

tests:
  - description: "Bulk write with encryption"
    clientOptions:
      autoEncryptOpts:
        kmsProviders:
          aws: {} # Credentials filled in from environment.
    operations:
      - name: bulkWrite
        arguments:
          requests:
            - name: insertOne
              arguments:
                document: &doc0 { _id: 1, encrypted_string: "string0", random: "abc" }
            - name: insertOne
              arguments:
                document: &doc1 { _id: 2, encrypted_string: "string1" }
            - name: updateOne
              arguments:
                filter: { encrypted_string: "string0" }
                update: { $set: { encrypted_string: "string1" } }
            - name: deleteOne
              arguments:
                filter: { $and: [{ encrypted_string: "string1" }, { _id: 2 }]}
          options: { ordered: true }
    expectations:
      # Auto encryption will request the collection info.
      - command_started_event:
          command:
            listCollections: 1
            filter:
              name: *collection_name
          command_name: listCollections
      # Then key is fetched from the key vault.
      - command_started_event:
          command:
            find: datakeys
            filter: {"$or": [{"_id": {"$in": [ {'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}} ] }}, {"keyAltNames": {"$in": []}}]}
            $db: keyvault
            readConcern: { level: "majority" }
          command_name: find
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - &doc0_encrypted { _id: 1, encrypted_string: {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAACwj+3zkv2VM+aTfk60RqhXq6a/77WlLwu/BxXFkL7EppGsju/m8f0x5kBDD3EZTtGALGXlym5jnpZAoSIkswHoA==', 'subType': '06'}}, random: { $$type: "binData" } }
              - &doc1_encrypted { _id: 2, encrypted_string: {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAACDdw4KFz3ZLquhsbt7RmDjD0N67n0uSXx7IGnQNCLeIKvot6s/ouI21Eo84IOtb6lhwUNPlSEBNY0/hbszWAKJg==', 'subType': '06'}} }
            ordered: true
          command_name: insert
      - command_started_event:
          command:
            update: *collection_name
            updates:
              - q: { encrypted_string: { $eq: {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAACwj+3zkv2VM+aTfk60RqhXq6a/77WlLwu/BxXFkL7EppGsju/m8f0x5kBDD3EZTtGALGXlym5jnpZAoSIkswHoA==', 'subType': '06'}} }}
                u: {$set: { encrypted_string: {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAACDdw4KFz3ZLquhsbt7RmDjD0N67n0uSXx7IGnQNCLeIKvot6s/ouI21Eo84IOtb6lhwUNPlSEBNY0/hbszWAKJg==', 'subType': '06'}} }}
                # DRIVERS-976: mongocryptd adds upsert and multi fields to all update commands, so these fields should be added to spec tests
                upsert: false
                multi: false
            ordered: true
          command_name: update
      - command_started_event:
          command:
            delete: *collection_name
            deletes:
              - q: { "$and": [ { "encrypted_string": { "$eq": {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAACDdw4KFz3ZLquhsbt7RmDjD0N67n0uSXx7IGnQNCLeIKvot6s/ouI21Eo84IOtb6lhwUNPlSEBNY0/hbszWAKJg==', 'subType': '06'}} }}, { "_id": { "$eq": 2 }} ] }
                limit: 1
            ordered: true
          command_name: delete
    outcome:
      collection:
        # Outcome is checked using a separate MongoClient without auto encryption.
        data:
          - { _id: 1, encrypted_string: {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAACDdw4KFz3ZLquhsbt7RmDjD0N67n0uSXx7IGnQNCLeIKvot6s/ouI21Eo84IOtb6lhwUNPlSEBNY0/hbszWAKJg==', 'subType': '06'}}, random: { $$type: "binData" } }

mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_encryption/bypassAutoEncryption.yml

runOn:
  - minServerVersion: "4.1.10"
database_name: &database_name "default"
collection_name: &collection_name "default"
data:
  - { _id: 1, encrypted_string: {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAACwj+3zkv2VM+aTfk60RqhXq6a/77WlLwu/BxXFkL7EppGsju/m8f0x5kBDD3EZTtGALGXlym5jnpZAoSIkswHoA==', 'subType': '06'}} }
json_schema: {'properties': {'encrypted_w_altname': {'encrypt': {'keyId': '/altname', 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Random'}}, 'encrypted_string': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}, 'random': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Random'}}, 'encrypted_string_equivalent': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}}, 'bsonType': 'object'}
key_vault_data: [{'status': 1, '_id': {'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}, 'masterKey': {'provider': 'aws', 'key': 'arn:aws:kms:us-east-1:579766882180:key/89fcc2c4-08b0-4bd9-9f25-e30687b580d0', 'region': 'us-east-1'}, 'updateDate': {'$date': {'$numberLong': '1552949630483'}}, 'keyMaterial': {'$binary': {'base64': 'AQICAHhQNmWG2CzOm1dq3kWLM+iDUZhEqnhJwH9wZVpuZ94A8gEqnsxXlR51T5EbEVezUqqKAAAAwjCBvwYJKoZIhvcNAQcGoIGxMIGuAgEAMIGoBgkqhkiG9w0BBwEwHgYJYIZIAWUDBAEuMBEEDHa4jo6yp0Z18KgbUgIBEIB74sKxWtV8/YHje5lv5THTl0HIbhSwM6EqRlmBiFFatmEWaeMk4tO4xBX65eq670I5TWPSLMzpp8ncGHMmvHqRajNBnmFtbYxN3E3/WjxmdbOOe+OXpnGJPcGsftc7cB2shRfA4lICPnE26+oVNXT6p0Lo20nY5XC7jyCO', 'subType': '00'}}, 'creationDate': {'$date': {'$numberLong': '1552949630483'}}, 'keyAltNames': ['altname', 'another_altname']}]

tests:
  - description: "Insert with bypassAutoEncryption"
    clientOptions:
      autoEncryptOpts:
        bypassAutoEncryption: true
        kmsProviders:
          aws: {} # Credentials filled in from environment.
    operations:
      - name: insertOne
        arguments:
          document: { _id: 2, encrypted_string: "string0" }
          bypassDocumentValidation: true
      - name: find
        arguments:
          filter: { }
        result:
          - { _id: 1, encrypted_string: "string0" }
          - { _id: 2, encrypted_string: "string0" }
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              # No encryption.
              - { _id: 2, encrypted_string: "string0" }
            ordered: true
          command_name: insert
      - command_started_event:
          command:
            find: *collection_name
            filter: { }
          command_name: find
      - command_started_event:
          command:
            find: datakeys
            filter: {"$or": [{"_id": {"$in": [ {'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}} ] }}, {"keyAltNames": {"$in": []}}]}
            $db: keyvault
            readConcern: { level: "majority" }
          command_name: find
    outcome:
      collection:
        # Outcome is checked using a separate MongoClient without auto encryption.
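        # With bypassAutoEncryption the insert above reaches the server
        # unencrypted, but automatic *decryption* remains enabled; that is why
        # the preceding find still consults the key vault and returns plaintext
        # for the pre-existing ciphertext document.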
        data:
          - { _id: 1, encrypted_string: {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAACwj+3zkv2VM+aTfk60RqhXq6a/77WlLwu/BxXFkL7EppGsju/m8f0x5kBDD3EZTtGALGXlym5jnpZAoSIkswHoA==', 'subType': '06'}} }
          - { _id: 2, encrypted_string: "string0" }

  - description: "Insert with bypassAutoEncryption for local schema"
    clientOptions:
      autoEncryptOpts:
        schemaMap:
          "default.default": {'properties': {'encrypted_w_altname': {'encrypt': {'keyId': '/altname', 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Random'}}, 'encrypted_string': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}, 'random': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Random'}}, 'encrypted_string_equivalent': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}}, 'bsonType': 'object'}
        bypassAutoEncryption: true
        kmsProviders:
          aws: {} # Credentials filled in from environment.
    operations:
      - name: insertOne
        arguments:
          document: { _id: 2, encrypted_string: "string0" }
          bypassDocumentValidation: true
      - name: find
        arguments:
          filter: { }
        result:
          - { _id: 1, encrypted_string: "string0" }
          - { _id: 2, encrypted_string: "string0" }
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              # No encryption.
              - { _id: 2, encrypted_string: "string0" }
            ordered: true
          command_name: insert
      - command_started_event:
          command:
            find: *collection_name
            filter: { }
          command_name: find
      - command_started_event:
          command:
            find: datakeys
            filter: {"$or": [{"_id": {"$in": [ {'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}} ] }}, {"keyAltNames": {"$in": []}}]}
            $db: keyvault
            readConcern: { level: "majority" }
          command_name: find
    outcome:
      collection:
        # Outcome is checked using a separate MongoClient without auto encryption.
        data:
          - { _id: 1, encrypted_string: {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAACwj+3zkv2VM+aTfk60RqhXq6a/77WlLwu/BxXFkL7EppGsju/m8f0x5kBDD3EZTtGALGXlym5jnpZAoSIkswHoA==', 'subType': '06'}} }
          - { _id: 2, encrypted_string: "string0" }

mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_encryption/bypassedCommand.yml

runOn:
  - minServerVersion: "4.1.10"
database_name: &database_name "default"
collection_name: &collection_name "default"
data: []
json_schema: {}
key_vault_data: [{'status': 1, '_id': {'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}, 'masterKey': {'provider': 'aws', 'key': 'arn:aws:kms:us-east-1:579766882180:key/89fcc2c4-08b0-4bd9-9f25-e30687b580d0', 'region': 'us-east-1'}, 'updateDate': {'$date': {'$numberLong': '1552949630483'}}, 'keyMaterial': {'$binary': {'base64': 'AQICAHhQNmWG2CzOm1dq3kWLM+iDUZhEqnhJwH9wZVpuZ94A8gEqnsxXlR51T5EbEVezUqqKAAAAwjCBvwYJKoZIhvcNAQcGoIGxMIGuAgEAMIGoBgkqhkiG9w0BBwEwHgYJYIZIAWUDBAEuMBEEDHa4jo6yp0Z18KgbUgIBEIB74sKxWtV8/YHje5lv5THTl0HIbhSwM6EqRlmBiFFatmEWaeMk4tO4xBX65eq670I5TWPSLMzpp8ncGHMmvHqRajNBnmFtbYxN3E3/WjxmdbOOe+OXpnGJPcGsftc7cB2shRfA4lICPnE26+oVNXT6p0Lo20nY5XC7jyCO', 'subType': '00'}}, 'creationDate': {'$date': {'$numberLong': '1552949630483'}}, 'keyAltNames': ['altname', 'another_altname']}]

tests:
  - description: "ping is bypassed"
    clientOptions:
      autoEncryptOpts:
        kmsProviders:
          aws: {} # Credentials filled in from environment.
    operations:
      - name: runCommand
        object: database
        command_name: ping
        arguments:
          command:
            ping: 1
    expectations:
      # No listCollections, no mongocryptd command, just the ping.
      - command_started_event:
          command:
            ping: 1
          command_name: ping

  - description: "kill op is not bypassed"
    clientOptions:
      autoEncryptOpts:
        kmsProviders:
          aws: {} # Credentials filled in from environment.
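    # Only a small allow-list of commands (ping among them) bypasses the
    # encryption machinery entirely; killOp is not on that list and is not a
    # supported auto-encryption command either, so it fails outright.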
    operations:
      - name: runCommand
        object: database
        command_name: killOp
        arguments:
          command:
            killOp: 1
            op: 1234
        result:
          errorContains: "command not supported for auto encryption: killOp"

mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_encryption/count.yml

runOn:
  - minServerVersion: "4.1.10"
database_name: &database_name "default"
collection_name: &collection_name "default"
data:
  - &doc0_encrypted { _id: 1, encrypted_string: {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAACwj+3zkv2VM+aTfk60RqhXq6a/77WlLwu/BxXFkL7EppGsju/m8f0x5kBDD3EZTtGALGXlym5jnpZAoSIkswHoA==', 'subType': '06'}} }
  - &doc1_encrypted { _id: 2, encrypted_string: {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAACwj+3zkv2VM+aTfk60RqhXq6a/77WlLwu/BxXFkL7EppGsju/m8f0x5kBDD3EZTtGALGXlym5jnpZAoSIkswHoA==', 'subType': '06'}} }
json_schema: {'properties': {'encrypted_w_altname': {'encrypt': {'keyId': '/altname', 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Random'}}, 'encrypted_string': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}, 'random': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Random'}}, 'encrypted_string_equivalent': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}}, 'bsonType': 'object'}
key_vault_data: [{'status': 1, '_id': {'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}, 'masterKey': {'provider': 'aws', 'key': 'arn:aws:kms:us-east-1:579766882180:key/89fcc2c4-08b0-4bd9-9f25-e30687b580d0', 'region': 'us-east-1'}, 'updateDate': {'$date': {'$numberLong': '1552949630483'}}, 'keyMaterial': {'$binary': {'base64': 'AQICAHhQNmWG2CzOm1dq3kWLM+iDUZhEqnhJwH9wZVpuZ94A8gEqnsxXlR51T5EbEVezUqqKAAAAwjCBvwYJKoZIhvcNAQcGoIGxMIGuAgEAMIGoBgkqhkiG9w0BBwEwHgYJYIZIAWUDBAEuMBEEDHa4jo6yp0Z18KgbUgIBEIB74sKxWtV8/YHje5lv5THTl0HIbhSwM6EqRlmBiFFatmEWaeMk4tO4xBX65eq670I5TWPSLMzpp8ncGHMmvHqRajNBnmFtbYxN3E3/WjxmdbOOe+OXpnGJPcGsftc7cB2shRfA4lICPnE26+oVNXT6p0Lo20nY5XC7jyCO', 'subType': '00'}}, 'creationDate': {'$date': {'$numberLong': '1552949630483'}}, 'keyAltNames': ['altname', 'another_altname']}]

tests:
  - description: "Count with deterministic encryption"
    clientOptions:
      autoEncryptOpts:
        kmsProviders:
          aws: {} # Credentials filled in from environment.
    operations:
      - name: count
        arguments:
          filter: { encrypted_string: "string0" }
        result: 2
    expectations:
      # Auto encryption will request the collection info.
      - command_started_event:
          command:
            listCollections: 1
            filter:
              name: *collection_name
          command_name: listCollections
      # Then key is fetched from the key vault.
      - command_started_event:
          command:
            find: datakeys
            filter: {"$or": [{"_id": {"$in": [ {'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}} ] }}, {"keyAltNames": {"$in": []}}]}
            $db: keyvault
            readConcern: { level: "majority" }
          command_name: find
      - command_started_event:
          command:
            count: *collection_name
            query: { encrypted_string: { $eq: {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAACwj+3zkv2VM+aTfk60RqhXq6a/77WlLwu/BxXFkL7EppGsju/m8f0x5kBDD3EZTtGALGXlym5jnpZAoSIkswHoA==', 'subType': '06'}} } }
          command_name: count

  - description: "Count fails when filtering on a random encrypted field"
    clientOptions:
      autoEncryptOpts:
        kmsProviders:
          aws: {} # Credentials filled in from environment
    operations:
      - name: count
        arguments:
          filter: { random: "abc" }
        result:
          errorContains: "Cannot query on fields encrypted with the randomized encryption"

mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_encryption/countDocuments.yml

runOn:
  - minServerVersion: "4.1.10"
database_name: &database_name "default"
collection_name: &collection_name "default"
data:
  - &doc0_encrypted { _id: 1, encrypted_string: {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAACwj+3zkv2VM+aTfk60RqhXq6a/77WlLwu/BxXFkL7EppGsju/m8f0x5kBDD3EZTtGALGXlym5jnpZAoSIkswHoA==', 'subType': '06'}} }
  - &doc1_encrypted { _id: 2, encrypted_string: {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAACDdw4KFz3ZLquhsbt7RmDjD0N67n0uSXx7IGnQNCLeIKvot6s/ouI21Eo84IOtb6lhwUNPlSEBNY0/hbszWAKJg==', 'subType': '06'}} }
json_schema: {'properties': {'encrypted_w_altname': {'encrypt': {'keyId': '/altname', 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Random'}}, 'encrypted_string': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}, 'random': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Random'}}, 'encrypted_string_equivalent': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}}, 'bsonType': 'object'}
key_vault_data: [{'status': 1, '_id': {'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}, 'masterKey': {'provider': 'aws', 'key': 'arn:aws:kms:us-east-1:579766882180:key/89fcc2c4-08b0-4bd9-9f25-e30687b580d0', 'region': 'us-east-1'}, 'updateDate': {'$date': {'$numberLong': '1552949630483'}}, 'keyMaterial': {'$binary': {'base64': 'AQICAHhQNmWG2CzOm1dq3kWLM+iDUZhEqnhJwH9wZVpuZ94A8gEqnsxXlR51T5EbEVezUqqKAAAAwjCBvwYJKoZIhvcNAQcGoIGxMIGuAgEAMIGoBgkqhkiG9w0BBwEwHgYJYIZIAWUDBAEuMBEEDHa4jo6yp0Z18KgbUgIBEIB74sKxWtV8/YHje5lv5THTl0HIbhSwM6EqRlmBiFFatmEWaeMk4tO4xBX65eq670I5TWPSLMzpp8ncGHMmvHqRajNBnmFtbYxN3E3/WjxmdbOOe+OXpnGJPcGsftc7cB2shRfA4lICPnE26+oVNXT6p0Lo20nY5XC7jyCO', 'subType': '00'}}, 'creationDate': {'$date': {'$numberLong': '1552949630483'}}, 'keyAltNames': ['altname', 'another_altname']}]

tests:
  - description: "countDocuments with deterministic encryption"
    skipReason: "waiting on SERVER-39395"
    clientOptions:
      autoEncryptOpts:
        kmsProviders:
          aws: {} # Credentials filled in from environment.
    operations:
      - name: countDocuments
        arguments:
          filter: { encrypted_string: "string0" }
        result: 1
    expectations:
      # Auto encryption will request the collection info.
      - command_started_event:
          command:
            listCollections: 1
            filter:
              name: *collection_name
          command_name: listCollections
      # Then key is fetched from the key vault.
      - command_started_event:
          command:
            find: datakeys
            filter: {"$or": [{"_id": {"$in": [ {'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}} ] }}, {"keyAltNames": {"$in": []}}]}
            $db: keyvault
            readConcern: { level: "majority" }
          command_name: find
      - command_started_event:
          command:
            aggregate: *collection_name
            pipeline:
              - { $match: { encrypted_string: {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAACwj+3zkv2VM+aTfk60RqhXq6a/77WlLwu/BxXFkL7EppGsju/m8f0x5kBDD3EZTtGALGXlym5jnpZAoSIkswHoA==', 'subType': '06'}} }}
              - { $group: { _id: 1, n: { $sum: 1 }}}
          command_name: aggregate
    outcome:
      collection:
        # Outcome is checked using a separate MongoClient without auto encryption.
        data:
          - *doc0_encrypted
          - *doc1_encrypted

mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_encryption/create-and-createIndexes.yml

runOn:
  - minServerVersion: "4.1.10"
database_name: &database_name "default"
collection_name: &collection_name "default"
data: []

tests:
  - description: "create is OK"
    clientOptions:
      autoEncryptOpts:
        kmsProviders:
          local: {'key': {'$binary': {'base64': 'Mng0NCt4ZHVUYUJCa1kxNkVyNUR1QURhZ2h2UzR2d2RrZzh0cFBwM3R6NmdWMDFBMUN3YkQ5aXRRMkhGRGdQV09wOGVNYUMxT2k3NjZKelhaQmRCZGJkTXVyZG9uSjFk', 'subType': '00'}}}
    operations:
      # Drop to remove a collection that may exist from previous test runs.
      - name: dropCollection
        object: database
        arguments:
          collection: "unencryptedCollection"
      - name: createCollection
        object: database
        arguments:
          collection: "unencryptedCollection"
          validator:
            unencrypted_string: "foo"
      - name: assertCollectionExists
        object: testRunner
        arguments:
          database: *database_name
          collection: "unencryptedCollection"

  - description: "createIndexes is OK"
    clientOptions:
      autoEncryptOpts:
        kmsProviders:
          local: {'key': {'$binary': {'base64': 'Mng0NCt4ZHVUYUJCa1kxNkVyNUR1QURhZ2h2UzR2d2RrZzh0cFBwM3R6NmdWMDFBMUN3YkQ5aXRRMkhGRGdQV09wOGVNYUMxT2k3NjZKelhaQmRCZGJkTXVyZG9uSjFk', 'subType': '00'}}}
    operations:
      # Drop to remove a collection that may exist from previous test runs.
      - name: dropCollection
        object: database
        arguments:
          collection: "unencryptedCollection"
      - name: createCollection
        object: database
        arguments:
          collection: "unencryptedCollection"
      - name: runCommand
        object: database
        arguments:
          command:
            createIndexes: "unencryptedCollection"
            indexes:
              - name: "name"
                key: { name: 1 }
      - name: assertIndexExists
        object: testRunner
        arguments:
          database: *database_name
          collection: "unencryptedCollection"
          index: name

mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_encryption/delete.yml

runOn:
  - minServerVersion: "4.1.10"
database_name: &database_name "default"
collection_name: &collection_name "default"
data:
  - &doc0_encrypted { _id: 1, encrypted_string: {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAACwj+3zkv2VM+aTfk60RqhXq6a/77WlLwu/BxXFkL7EppGsju/m8f0x5kBDD3EZTtGALGXlym5jnpZAoSIkswHoA==', 'subType': '06'}} }
  - &doc1_encrypted { _id: 2, encrypted_string: {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAACDdw4KFz3ZLquhsbt7RmDjD0N67n0uSXx7IGnQNCLeIKvot6s/ouI21Eo84IOtb6lhwUNPlSEBNY0/hbszWAKJg==', 'subType': '06'}} }
json_schema: {'properties': {'encrypted_w_altname': {'encrypt': {'keyId': '/altname', 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Random'}}, 'encrypted_string': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}, 'random': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Random'}}, 'encrypted_string_equivalent': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}}, 'bsonType': 'object'}
key_vault_data: [{'status': 1, '_id': {'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}, 'masterKey': {'provider': 'aws', 'key': 'arn:aws:kms:us-east-1:579766882180:key/89fcc2c4-08b0-4bd9-9f25-e30687b580d0', 'region': 'us-east-1'}, 'updateDate': {'$date': {'$numberLong': '1552949630483'}}, 'keyMaterial': {'$binary': {'base64': 'AQICAHhQNmWG2CzOm1dq3kWLM+iDUZhEqnhJwH9wZVpuZ94A8gEqnsxXlR51T5EbEVezUqqKAAAAwjCBvwYJKoZIhvcNAQcGoIGxMIGuAgEAMIGoBgkqhkiG9w0BBwEwHgYJYIZIAWUDBAEuMBEEDHa4jo6yp0Z18KgbUgIBEIB74sKxWtV8/YHje5lv5THTl0HIbhSwM6EqRlmBiFFatmEWaeMk4tO4xBX65eq670I5TWPSLMzpp8ncGHMmvHqRajNBnmFtbYxN3E3/WjxmdbOOe+OXpnGJPcGsftc7cB2shRfA4lICPnE26+oVNXT6p0Lo20nY5XC7jyCO', 'subType': '00'}}, 'creationDate': {'$date': {'$numberLong': '1552949630483'}}, 'keyAltNames': ['altname', 'another_altname']}]

tests:
  - description: "deleteOne with deterministic encryption"
    clientOptions:
      autoEncryptOpts:
        kmsProviders:
          aws: {} # Credentials filled in from environment.
    operations:
      - name: deleteOne
        arguments:
          filter: { encrypted_string: "string0" }
        result:
          deletedCount: 1
    expectations:
      # Auto encryption will request the collection info.
      - command_started_event:
          command:
            listCollections: 1
            filter:
              name: *collection_name
          command_name: listCollections
      # Then key is fetched from the key vault.
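      # Because encrypted_string uses the deterministic algorithm, the driver
      # can rewrite the plaintext equality filter into an equality on the
      # ciphertext; the delete command asserted below therefore carries the
      # encrypted value rather than "string0".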
      - command_started_event:
          command:
            find: datakeys
            filter: {"$or": [{"_id": {"$in": [ {'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}} ] }}, {"keyAltNames": {"$in": []}}]}
            $db: keyvault
            readConcern: { level: "majority" }
          command_name: find
      - command_started_event:
          command:
            delete: *collection_name
            deletes:
              - q: { encrypted_string: { $eq: {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAACwj+3zkv2VM+aTfk60RqhXq6a/77WlLwu/BxXFkL7EppGsju/m8f0x5kBDD3EZTtGALGXlym5jnpZAoSIkswHoA==', 'subType': '06'}} } }
                limit: 1
            ordered: true
          command_name: delete
    outcome:
      collection:
        # Outcome is checked using a separate MongoClient without auto encryption.
        data:
          - *doc1_encrypted

  - description: "deleteMany with deterministic encryption"
    clientOptions:
      autoEncryptOpts:
        kmsProviders:
          aws: {} # Credentials filled in from environment.
    operations:
      - name: deleteMany
        arguments:
          filter: { encrypted_string: { $in: [ "string0", "string1" ] } }
        result:
          deletedCount: 2
    expectations:
      # Auto encryption will request the collection info.
      - command_started_event:
          command:
            listCollections: 1
            filter:
              name: *collection_name
          command_name: listCollections
      # Then key is fetched from the key vault.
      - command_started_event:
          command:
            find: datakeys
            filter: {"$or": [{"_id": {"$in": [ {'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}} ] }}, {"keyAltNames": {"$in": []}}]}
            $db: keyvault
            readConcern: { level: "majority" }
          command_name: find
      - command_started_event:
          command:
            delete: *collection_name
            deletes:
              - q: { encrypted_string: { $in: [ {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAACwj+3zkv2VM+aTfk60RqhXq6a/77WlLwu/BxXFkL7EppGsju/m8f0x5kBDD3EZTtGALGXlym5jnpZAoSIkswHoA==', 'subType': '06'}}, {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAACDdw4KFz3ZLquhsbt7RmDjD0N67n0uSXx7IGnQNCLeIKvot6s/ouI21Eo84IOtb6lhwUNPlSEBNY0/hbszWAKJg==', 'subType': '06'}} ] } }
                limit: 0
            ordered: true
          command_name: delete
    outcome:
      collection:
        # Outcome is checked using a separate MongoClient without auto encryption.
data: []

mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_encryption/distinct.yml

runOn: - minServerVersion: "4.1.10" database_name: &database_name "default" collection_name: &collection_name "default" data: - &doc0_encrypted { _id: 1, encrypted_string: {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAACwj+3zkv2VM+aTfk60RqhXq6a/77WlLwu/BxXFkL7EppGsju/m8f0x5kBDD3EZTtGALGXlym5jnpZAoSIkswHoA==', 'subType': '06'}} } - &doc1_encrypted { _id: 2, encrypted_string: {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAACwj+3zkv2VM+aTfk60RqhXq6a/77WlLwu/BxXFkL7EppGsju/m8f0x5kBDD3EZTtGALGXlym5jnpZAoSIkswHoA==', 'subType': '06'}} } - &doc2_encrypted { _id: 3, encrypted_string: {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAACDdw4KFz3ZLquhsbt7RmDjD0N67n0uSXx7IGnQNCLeIKvot6s/ouI21Eo84IOtb6lhwUNPlSEBNY0/hbszWAKJg==', 'subType': '06'}} } json_schema: {'properties': {'encrypted_w_altname': {'encrypt': {'keyId': '/altname', 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Random'}}, 'encrypted_string': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}, 'random': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Random'}}, 'encrypted_string_equivalent': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}}, 'bsonType': 'object'} key_vault_data: [{'status': 1, '_id': {'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}, 'masterKey': {'provider': 'aws', 'key': 'arn:aws:kms:us-east-1:579766882180:key/89fcc2c4-08b0-4bd9-9f25-e30687b580d0', 'region': 'us-east-1'}, 'updateDate': {'$date': {'$numberLong': '1552949630483'}}, 'keyMaterial': {'$binary': {'base64': 'AQICAHhQNmWG2CzOm1dq3kWLM+iDUZhEqnhJwH9wZVpuZ94A8gEqnsxXlR51T5EbEVezUqqKAAAAwjCBvwYJKoZIhvcNAQcGoIGxMIGuAgEAMIGoBgkqhkiG9w0BBwEwHgYJYIZIAWUDBAEuMBEEDHa4jo6yp0Z18KgbUgIBEIB74sKxWtV8/YHje5lv5THTl0HIbhSwM6EqRlmBiFFatmEWaeMk4tO4xBX65eq670I5TWPSLMzpp8ncGHMmvHqRajNBnmFtbYxN3E3/WjxmdbOOe+OXpnGJPcGsftc7cB2shRfA4lICPnE26+oVNXT6p0Lo20nY5XC7jyCO', 'subType': '00'}}, 'creationDate': {'$date': {'$numberLong': '1552949630483'}}, 'keyAltNames': ['altname', 'another_altname']}] tests: - description: "distinct with deterministic encryption" clientOptions: autoEncryptOpts: kmsProviders: aws: {} # Credentials filled in from environment. operations: - name: distinct arguments: filter: { encrypted_string: "string0" } fieldName: "encrypted_string" result: - "string0" expectations: # Auto encryption will request the collection info. - command_started_event: command: listCollections: 1 filter: name: *collection_name command_name: listCollections # Then key is fetched from the key vault.
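# Deterministic encryption produces the same ciphertext for the same
# plaintext and key, so the driver can rewrite the equality filter below
# into a ciphertext comparison; randomized fields cannot be queried this
# way (see the failing test at the end of this file).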
- command_started_event: command: find: datakeys filter: {"$or": [{"_id": {"$in": [ {'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}} ] }}, {"keyAltNames": {"$in": []}}]} $db: keyvault readConcern: { level: "majority" } command_name: find - command_started_event: command: distinct: *collection_name key: encrypted_string query: { encrypted_string: {$eq: {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAACwj+3zkv2VM+aTfk60RqhXq6a/77WlLwu/BxXFkL7EppGsju/m8f0x5kBDD3EZTtGALGXlym5jnpZAoSIkswHoA==', 'subType': '06'}} } } command_name: distinct outcome: collection: # Outcome is checked using a separate MongoClient without auto encryption. data: - *doc0_encrypted - *doc1_encrypted - *doc2_encrypted - description: "Distinct fails when filtering on a random encrypted field" clientOptions: autoEncryptOpts: kmsProviders: aws: {} # Credentials filled in from environment operations: - name: distinct arguments: filter: { random: "abc" } fieldName: "encrypted_string" result: errorContains: "Cannot query on fields encrypted with the randomized encryption"

mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_encryption/explain.yml

runOn: - minServerVersion: "7.0.0" database_name: &database_name "default" collection_name: &collection_name "default" data: - &doc0_encrypted { _id: 1, encrypted_string: {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAACwj+3zkv2VM+aTfk60RqhXq6a/77WlLwu/BxXFkL7EppGsju/m8f0x5kBDD3EZTtGALGXlym5jnpZAoSIkswHoA==', 'subType': '06'}} } - &doc1_encrypted { _id: 2, encrypted_string: {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAACDdw4KFz3ZLquhsbt7RmDjD0N67n0uSXx7IGnQNCLeIKvot6s/ouI21Eo84IOtb6lhwUNPlSEBNY0/hbszWAKJg==', 'subType': '06'}} } json_schema: {'properties': {'encrypted_w_altname': {'encrypt': {'keyId': '/altname', 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Random'}}, 'encrypted_string': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}, 'random': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Random'}}, 'encrypted_string_equivalent': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}}, 'bsonType': 'object'} key_vault_data: [{'status': 1, '_id': {'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}, 'masterKey': {'provider': 'aws', 'key': 'arn:aws:kms:us-east-1:579766882180:key/89fcc2c4-08b0-4bd9-9f25-e30687b580d0', 'region': 'us-east-1'}, 'updateDate': {'$date': {'$numberLong': '1552949630483'}}, 'keyMaterial': {'$binary': {'base64': 'AQICAHhQNmWG2CzOm1dq3kWLM+iDUZhEqnhJwH9wZVpuZ94A8gEqnsxXlR51T5EbEVezUqqKAAAAwjCBvwYJKoZIhvcNAQcGoIGxMIGuAgEAMIGoBgkqhkiG9w0BBwEwHgYJYIZIAWUDBAEuMBEEDHa4jo6yp0Z18KgbUgIBEIB74sKxWtV8/YHje5lv5THTl0HIbhSwM6EqRlmBiFFatmEWaeMk4tO4xBX65eq670I5TWPSLMzpp8ncGHMmvHqRajNBnmFtbYxN3E3/WjxmdbOOe+OXpnGJPcGsftc7cB2shRfA4lICPnE26+oVNXT6p0Lo20nY5XC7jyCO', 'subType': '00'}}, 'creationDate': {'$date': {'$numberLong': '1552949630483'}}, 'keyAltNames': ['altname', 'another_altname']}] tests: - description: "Explain a find with deterministic encryption" clientOptions: autoEncryptOpts: kmsProviders: aws: {} # Credentials filled in from environment.
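# The clientOptions above map onto the driver's auto-encryption client
# options. As a rough sketch (not part of this spec file; host, database,
# and env var names are assumed), a Ruby client under test might be built
# along these lines:
#
#   client = Mongo::Client.new(['localhost:27017'],
#     database: 'default',
#     auto_encryption_options: {
#       key_vault_namespace: 'keyvault.datakeys',
#       kms_providers: {
#         aws: {
#           access_key_id: ENV['AWS_ACCESS_KEY_ID'],         # assumed name
#           secret_access_key: ENV['AWS_SECRET_ACCESS_KEY'], # assumed name
#         }
#       }
#     })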
operations: - name: runCommand object: database command_name: explain arguments: command: explain: find: *collection_name filter: { encrypted_string : "string1" } expectations: # Auto encryption will request the collection info. - command_started_event: command: listCollections: 1 filter: name: *collection_name command_name: listCollections # Then key is fetched from the key vault. - command_started_event: command: find: datakeys filter: {"$or": [{"_id": {"$in": [ {'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}} ] }}, {"keyAltNames": {"$in": []}}]} $db: keyvault readConcern: { level: "majority" } command_name: find - command_started_event: command: explain: find: *collection_name filter: { encrypted_string: { $eq: {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAACDdw4KFz3ZLquhsbt7RmDjD0N67n0uSXx7IGnQNCLeIKvot6s/ouI21Eo84IOtb6lhwUNPlSEBNY0/hbszWAKJg==', 'subType': '06'}} } } verbosity: "allPlansExecution" command_name: explain outcome: collection: # Outcome is checked using a separate MongoClient without auto encryption. data: - *doc0_encrypted - *doc1_encrypted

mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_encryption/find.yml

runOn: - minServerVersion: "4.1.10" database_name: &database_name "default" collection_name: &collection_name "default" data: - &doc0_encrypted { _id: 1, encrypted_string: {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAACwj+3zkv2VM+aTfk60RqhXq6a/77WlLwu/BxXFkL7EppGsju/m8f0x5kBDD3EZTtGALGXlym5jnpZAoSIkswHoA==', 'subType': '06'}} } - &doc1_encrypted { _id: 2, encrypted_string: {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAACDdw4KFz3ZLquhsbt7RmDjD0N67n0uSXx7IGnQNCLeIKvot6s/ouI21Eo84IOtb6lhwUNPlSEBNY0/hbszWAKJg==', 'subType': '06'}} , random: {'$binary': {'base64': 'AgAAAAAAAAAAAAAAAAAAAAACyfp+lXvKOi7f5vh6ZsCijLEaXFKq1X06RmyS98ZvmMQGixTw8HM1f/bGxZjGwvYwjXOkIEb7Exgb8p2KCDI5TQ==', 'subType': '06'}} } json_schema: {'properties': {'encrypted_w_altname': {'encrypt': {'keyId': '/altname', 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Random'}}, 'encrypted_string': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}, 'random': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Random'}}, 'encrypted_string_equivalent': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}}, 'bsonType': 'object'} key_vault_data: [{'status': 1, '_id': {'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}, 'masterKey': {'provider': 'aws', 'key': 'arn:aws:kms:us-east-1:579766882180:key/89fcc2c4-08b0-4bd9-9f25-e30687b580d0', 'region': 'us-east-1'}, 'updateDate': {'$date': {'$numberLong': '1552949630483'}}, 'keyMaterial': {'$binary': {'base64': 'AQICAHhQNmWG2CzOm1dq3kWLM+iDUZhEqnhJwH9wZVpuZ94A8gEqnsxXlR51T5EbEVezUqqKAAAAwjCBvwYJKoZIhvcNAQcGoIGxMIGuAgEAMIGoBgkqhkiG9w0BBwEwHgYJYIZIAWUDBAEuMBEEDHa4jo6yp0Z18KgbUgIBEIB74sKxWtV8/YHje5lv5THTl0HIbhSwM6EqRlmBiFFatmEWaeMk4tO4xBX65eq670I5TWPSLMzpp8ncGHMmvHqRajNBnmFtbYxN3E3/WjxmdbOOe+OXpnGJPcGsftc7cB2shRfA4lICPnE26+oVNXT6p0Lo20nY5XC7jyCO', 'subType': '00'}}, 'creationDate': {'$date': {'$numberLong': '1552949630483'}}, 'keyAltNames': ['altname', 'another_altname']}] tests: -
description: "Find with deterministic encryption" clientOptions: autoEncryptOpts: kmsProviders: aws: {} # Credentials filled in from environment. operations: - name: find arguments: filter: { encrypted_string: "string0" } result: - &doc0 { _id: 1, encrypted_string: "string0" } expectations: # Auto encryption will request the collection info. - command_started_event: command: listCollections: 1 filter: name: *collection_name command_name: listCollections # Then key is fetched from the key vault. - command_started_event: command: find: datakeys filter: {"$or": [{"_id": {"$in": [ {'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}} ] }}, {"keyAltNames": {"$in": []}}]} $db: keyvault readConcern: { level: "majority" } command_name: find - command_started_event: command: find: *collection_name filter: { encrypted_string: { $eq: {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAACwj+3zkv2VM+aTfk60RqhXq6a/77WlLwu/BxXFkL7EppGsju/m8f0x5kBDD3EZTtGALGXlym5jnpZAoSIkswHoA==', 'subType': '06'}} } } command_name: find outcome: collection: # Outcome is checked using a separate MongoClient without auto encryption. data: - *doc0_encrypted - *doc1_encrypted - description: "Find with $in with deterministic encryption" clientOptions: autoEncryptOpts: kmsProviders: aws: {} # Credentials filled in from environment. operations: - name: find arguments: filter: { encrypted_string: { $in: [ "string0", "string1" ] } } result: - { _id: 1, encrypted_string: "string0" } - &doc1 { _id: 2, encrypted_string: "string1", random: "abc" } expectations: # Auto encryption will request the collection info. - command_started_event: command: listCollections: 1 filter: name: *collection_name command_name: listCollections # Then key is fetched from the key vault. - command_started_event: command: find: datakeys filter: {"$or": [{"_id": {"$in": [ {'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}} ] }}, {"keyAltNames": {"$in": []}}]} $db: keyvault readConcern: { level: "majority" } command_name: find - command_started_event: command: find: *collection_name filter: # Note, the values are re-ordered, but this is logically equivalent. { encrypted_string: { $in: [ {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAACwj+3zkv2VM+aTfk60RqhXq6a/77WlLwu/BxXFkL7EppGsju/m8f0x5kBDD3EZTtGALGXlym5jnpZAoSIkswHoA==', 'subType': '06'}}, {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAACDdw4KFz3ZLquhsbt7RmDjD0N67n0uSXx7IGnQNCLeIKvot6s/ouI21Eo84IOtb6lhwUNPlSEBNY0/hbszWAKJg==', 'subType': '06'}} ] } } command_name: find outcome: collection: # Outcome is checked using a separate MongoClient without auto encryption. 
data: - *doc0_encrypted - *doc1_encrypted - description: "Find fails when filtering on a random encrypted field" clientOptions: autoEncryptOpts: kmsProviders: aws: {} # Credentials filled in from environment operations: - name: find arguments: filter: { random: "abc" } result: errorContains: "Cannot query on fields encrypted with the randomized encryption"

mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_encryption/findOneAndDelete.yml

runOn: - minServerVersion: "4.1.10" database_name: &database_name "default" collection_name: &collection_name "default" data: - &doc0_encrypted { _id: 1, encrypted_string: {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAACwj+3zkv2VM+aTfk60RqhXq6a/77WlLwu/BxXFkL7EppGsju/m8f0x5kBDD3EZTtGALGXlym5jnpZAoSIkswHoA==', 'subType': '06'}} } - &doc1_encrypted { _id: 2, encrypted_string: {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAACDdw4KFz3ZLquhsbt7RmDjD0N67n0uSXx7IGnQNCLeIKvot6s/ouI21Eo84IOtb6lhwUNPlSEBNY0/hbszWAKJg==', 'subType': '06'}} } json_schema: {'properties': {'encrypted_w_altname': {'encrypt': {'keyId': '/altname', 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Random'}}, 'encrypted_string': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}, 'random': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Random'}}, 'encrypted_string_equivalent': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}}, 'bsonType': 'object'} key_vault_data: [{'status': 1, '_id': {'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}, 'masterKey': {'provider': 'aws', 'key': 'arn:aws:kms:us-east-1:579766882180:key/89fcc2c4-08b0-4bd9-9f25-e30687b580d0', 'region': 'us-east-1'}, 'updateDate': {'$date': {'$numberLong': '1552949630483'}}, 'keyMaterial': {'$binary': {'base64': 'AQICAHhQNmWG2CzOm1dq3kWLM+iDUZhEqnhJwH9wZVpuZ94A8gEqnsxXlR51T5EbEVezUqqKAAAAwjCBvwYJKoZIhvcNAQcGoIGxMIGuAgEAMIGoBgkqhkiG9w0BBwEwHgYJYIZIAWUDBAEuMBEEDHa4jo6yp0Z18KgbUgIBEIB74sKxWtV8/YHje5lv5THTl0HIbhSwM6EqRlmBiFFatmEWaeMk4tO4xBX65eq670I5TWPSLMzpp8ncGHMmvHqRajNBnmFtbYxN3E3/WjxmdbOOe+OXpnGJPcGsftc7cB2shRfA4lICPnE26+oVNXT6p0Lo20nY5XC7jyCO', 'subType': '00'}}, 'creationDate': {'$date': {'$numberLong': '1552949630483'}}, 'keyAltNames': ['altname', 'another_altname']}] tests: - description: "findOneAndDelete with deterministic encryption" clientOptions: autoEncryptOpts: kmsProviders: aws: {} # Credentials filled in from environment. operations: - name: findOneAndDelete arguments: filter: { encrypted_string: "string0" } expectations: # Auto encryption will request the collection info. - command_started_event: command: listCollections: 1 filter: name: *collection_name command_name: listCollections # Then key is fetched from the key vault.
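# findOneAndDelete has no dedicated server command; as the expectations
# below show, it is expressed as findAndModify with remove: true, with the
# encrypted equality filter rewritten to ciphertext as usual.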
- command_started_event: command: find: datakeys filter: {"$or": [{"_id": {"$in": [ {'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}} ] }}, {"keyAltNames": {"$in": []}}]} $db: keyvault readConcern: { level: "majority" } command_name: find - command_started_event: command: findAndModify: *collection_name query: { encrypted_string: { $eq: {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAACwj+3zkv2VM+aTfk60RqhXq6a/77WlLwu/BxXFkL7EppGsju/m8f0x5kBDD3EZTtGALGXlym5jnpZAoSIkswHoA==', 'subType': '06'}} } } remove: true command_name: findAndModify outcome: collection: # Outcome is checked using a separate MongoClient without auto encryption. data: - *doc1_encrypted

mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_encryption/findOneAndReplace.yml

runOn: - minServerVersion: "4.1.10" database_name: &database_name "default" collection_name: &collection_name "default" data: - &doc0_encrypted { _id: 1, encrypted_string: {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAACwj+3zkv2VM+aTfk60RqhXq6a/77WlLwu/BxXFkL7EppGsju/m8f0x5kBDD3EZTtGALGXlym5jnpZAoSIkswHoA==', 'subType': '06'}} } json_schema: {'properties': {'encrypted_w_altname': {'encrypt': {'keyId': '/altname', 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Random'}}, 'encrypted_string': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}, 'random': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Random'}}, 'encrypted_string_equivalent': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}}, 'bsonType': 'object'} key_vault_data: [{'status': 1, '_id': {'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}, 'masterKey': {'provider': 'aws', 'key': 'arn:aws:kms:us-east-1:579766882180:key/89fcc2c4-08b0-4bd9-9f25-e30687b580d0', 'region': 'us-east-1'}, 'updateDate': {'$date': {'$numberLong': '1552949630483'}}, 'keyMaterial': {'$binary': {'base64': 'AQICAHhQNmWG2CzOm1dq3kWLM+iDUZhEqnhJwH9wZVpuZ94A8gEqnsxXlR51T5EbEVezUqqKAAAAwjCBvwYJKoZIhvcNAQcGoIGxMIGuAgEAMIGoBgkqhkiG9w0BBwEwHgYJYIZIAWUDBAEuMBEEDHa4jo6yp0Z18KgbUgIBEIB74sKxWtV8/YHje5lv5THTl0HIbhSwM6EqRlmBiFFatmEWaeMk4tO4xBX65eq670I5TWPSLMzpp8ncGHMmvHqRajNBnmFtbYxN3E3/WjxmdbOOe+OXpnGJPcGsftc7cB2shRfA4lICPnE26+oVNXT6p0Lo20nY5XC7jyCO', 'subType': '00'}}, 'creationDate': {'$date': {'$numberLong': '1552949630483'}}, 'keyAltNames': ['altname', 'another_altname']}] tests: - description: "findOneAndReplace with deterministic encryption" clientOptions: autoEncryptOpts: kmsProviders: aws: {} # Credentials filled in from environment. operations: - name: findOneAndReplace arguments: filter: { encrypted_string: "string0" } replacement: { encrypted_string: "string1" } returnDocument: Before result: { _id: 1, encrypted_string: "string0" } expectations: # Auto encryption will request the collection info. - command_started_event: command: listCollections: 1 filter: name: *collection_name command_name: listCollections # Then key is fetched from the key vault.
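# For findOneAndReplace, the replacement document is itself encrypted
# before being sent: the expected findAndModify below carries ciphertext in
# both the query and the update fields.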
- command_started_event: command: find: datakeys filter: {"$or": [{"_id": {"$in": [ {'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}} ] }}, {"keyAltNames": {"$in": []}}]} $db: keyvault readConcern: { level: "majority" } command_name: find - command_started_event: command: findAndModify: *collection_name query: { encrypted_string: { $eq: {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAACwj+3zkv2VM+aTfk60RqhXq6a/77WlLwu/BxXFkL7EppGsju/m8f0x5kBDD3EZTtGALGXlym5jnpZAoSIkswHoA==', 'subType': '06'}} } } update: { encrypted_string: {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAACDdw4KFz3ZLquhsbt7RmDjD0N67n0uSXx7IGnQNCLeIKvot6s/ouI21Eo84IOtb6lhwUNPlSEBNY0/hbszWAKJg==', 'subType': '06'}} } command_name: findAndModify outcome: collection: # Outcome is checked using a separate MongoClient without auto encryption. data: - { _id: 1, encrypted_string: {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAACDdw4KFz3ZLquhsbt7RmDjD0N67n0uSXx7IGnQNCLeIKvot6s/ouI21Eo84IOtb6lhwUNPlSEBNY0/hbszWAKJg==', 'subType': '06'}} }

mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_encryption/findOneAndUpdate.yml

runOn: - minServerVersion: "4.1.10" database_name: &database_name "default" collection_name: &collection_name "default" data: - &doc0_encrypted { _id: 1, encrypted_string: {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAACwj+3zkv2VM+aTfk60RqhXq6a/77WlLwu/BxXFkL7EppGsju/m8f0x5kBDD3EZTtGALGXlym5jnpZAoSIkswHoA==', 'subType': '06'}} } json_schema: {'properties': {'encrypted_w_altname': {'encrypt': {'keyId': '/altname', 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Random'}}, 'encrypted_string': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}, 'random': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Random'}}, 'encrypted_string_equivalent': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}}, 'bsonType': 'object'} key_vault_data: [{'status': 1, '_id': {'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}, 'masterKey': {'provider': 'aws', 'key': 'arn:aws:kms:us-east-1:579766882180:key/89fcc2c4-08b0-4bd9-9f25-e30687b580d0', 'region': 'us-east-1'}, 'updateDate': {'$date': {'$numberLong': '1552949630483'}}, 'keyMaterial': {'$binary': {'base64': 'AQICAHhQNmWG2CzOm1dq3kWLM+iDUZhEqnhJwH9wZVpuZ94A8gEqnsxXlR51T5EbEVezUqqKAAAAwjCBvwYJKoZIhvcNAQcGoIGxMIGuAgEAMIGoBgkqhkiG9w0BBwEwHgYJYIZIAWUDBAEuMBEEDHa4jo6yp0Z18KgbUgIBEIB74sKxWtV8/YHje5lv5THTl0HIbhSwM6EqRlmBiFFatmEWaeMk4tO4xBX65eq670I5TWPSLMzpp8ncGHMmvHqRajNBnmFtbYxN3E3/WjxmdbOOe+OXpnGJPcGsftc7cB2shRfA4lICPnE26+oVNXT6p0Lo20nY5XC7jyCO', 'subType': '00'}}, 'creationDate': {'$date': {'$numberLong': '1552949630483'}}, 'keyAltNames': ['altname', 'another_altname']}] tests: - description: "findOneAndUpdate with deterministic encryption" clientOptions: autoEncryptOpts: kmsProviders: aws: {} # Credentials filled in from environment.
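# returnDocument: Before means the pre-update document is returned (and
# decrypted) to the caller, as the result assertion below verifies.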
operations: - name: findOneAndUpdate arguments: filter: { encrypted_string: "string0" } update: { $set: { encrypted_string: "string1" } } returnDocument: Before result: { _id: 1, encrypted_string: "string0" } expectations: # Auto encryption will request the collection info. - command_started_event: command: listCollections: 1 filter: name: *collection_name command_name: listCollections # Then key is fetched from the key vault. - command_started_event: command: find: datakeys filter: {"$or": [{"_id": {"$in": [ {'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}} ] }}, {"keyAltNames": {"$in": []}}]} $db: keyvault readConcern: { level: "majority" } command_name: find - command_started_event: command: findAndModify: *collection_name query: { encrypted_string: { $eq: {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAACwj+3zkv2VM+aTfk60RqhXq6a/77WlLwu/BxXFkL7EppGsju/m8f0x5kBDD3EZTtGALGXlym5jnpZAoSIkswHoA==', 'subType': '06'}} } } update: { $set: { encrypted_string: {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAACDdw4KFz3ZLquhsbt7RmDjD0N67n0uSXx7IGnQNCLeIKvot6s/ouI21Eo84IOtb6lhwUNPlSEBNY0/hbszWAKJg==', 'subType': '06'}} } } command_name: findAndModify outcome: collection: # Outcome is checked using a separate MongoClient without auto encryption. data: - { _id: 1, encrypted_string: {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAACDdw4KFz3ZLquhsbt7RmDjD0N67n0uSXx7IGnQNCLeIKvot6s/ouI21Eo84IOtb6lhwUNPlSEBNY0/hbszWAKJg==', 'subType': '06'}} }

mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_encryption/fle2v2-BypassQueryAnalysis.yml

# Requires libmongocrypt 1.8.0. runOn: - minServerVersion: "7.0.0" maxServerVersion: "7.99.99" # Skip QEv2 (also referred to as FLE2v2) tests on Serverless. Unskip once Serverless enables the QEv2 protocol. # FLE 2 Encrypted collections are not supported on standalone. topology: [ "replicaset", "sharded", "load-balanced" ] database_name: &database_name "default" collection_name: &collection_name "default" data: [] encrypted_fields: &encrypted_fields {'fields': [{'keyId': {'$binary': {'base64': 'EjRWeBI0mHYSNBI0VniQEg==', 'subType': '04'}}, 'path': 'encryptedIndexed', 'bsonType': 'string', 'queries': {'queryType': 'equality', 'contention': {'$numberLong': '0'}}}, {'keyId': {'$binary': {'base64': 'q83vqxI0mHYSNBI0VniQEg==', 'subType': '04'}}, 'path': 'encryptedUnindexed', 'bsonType': 'string'}]} key_vault_data: [{'_id': {'$binary': {'base64': 'EjRWeBI0mHYSNBI0VniQEg==', 'subType': '04'}}, 'keyMaterial': {'$binary': {'base64': 'sHe0kz57YW7v8g9VP9sf/+K1ex4JqKc5rf/URX3n3p8XdZ6+15uXPaSayC6adWbNxkFskuMCOifDoTT+rkqMtFkDclOy884RuGGtUysq3X7zkAWYTKi8QAfKkajvVbZl2y23UqgVasdQu3OVBQCrH/xY00nNAs/52e958nVjBuzQkSb1T8pKJAyjZsHJ60+FtnfafDZSTAIBJYn7UWBCwQ==', 'subType': '00'}}, 'creationDate': {'$date': {'$numberLong': '1648914851981'}}, 'updateDate': {'$date': {'$numberLong': '1648914851981'}}, 'status': {'$numberInt': '0'}, 'masterKey': {'provider': 'local'}} ] tests: - description: "BypassQueryAnalysis decrypts" clientOptions: autoEncryptOpts: kmsProviders: local: {'key': {'$binary': {'base64': 'Mng0NCt4ZHVUYUJCa1kxNkVyNUR1QURhZ2h2UzR2d2RrZzh0cFBwM3R6NmdWMDFBMUN3YkQ5aXRRMkhGRGdQV09wOGVNYUMxT2k3NjZKelhaQmRCZGJkTXVyZG9uSjFk', 'subType': '00'}}} bypassQueryAnalysis: true operations: - name: insertOne arguments: document: &doc0_encrypted { "_id": 1, "encryptedIndexed": { "$binary": { # Payload has an IndexKey of key1 and UserKey of key1.
"base64": "C18BAAAFZAAgAAAAANnt+eLTkv4GdDPl8IAfJOvTzArOgFJQ2S/DcLza4W0DBXMAIAAAAAD2u+omZme3P2gBPehMQyQHQ153tPN1+z7bksYA9jKTpAVwADAAAAAAUnCOQqIvmR65YKyYnsiVfVrg9hwUVO3RhhKExo3RWOzgaS0QdsBL5xKFS0JhZSoWBXUAEAAAAAQSNFZ4EjSYdhI0EjRWeJASEHQAAgAAAAV2AFAAAAAAEjRWeBI0mHYSNBI0VniQEpQbp/ZJpWBKeDtKLiXb0P2E9wvc0g3f373jnYQYlJquOrlPOoEy3ngsHPJuSUijvWDsrQzqYa349K7G/66qaXEFZQAgAAAAAOuac/eRLYakKX6B0vZ1r3QodOQFfjqJD+xlGiPu4/PsBWwAIAAAAACkm0o9bj6j0HuADKc0svbqO2UHj6GrlNdF6yKNxh63xRJrAAAAAAAAAAAAAA==", "subType": "06" } } } - name: find arguments: filter: { "_id": 1 } result: [{"_id": 1, "encryptedIndexed": "123" }] expectations: - command_started_event: command: listCollections: 1 filter: name: *collection_name command_name: listCollections - command_started_event: command: insert: *collection_name documents: - *doc0_encrypted ordered: true encryptionInformation: type: 1 schema: "default.default": # libmongocrypt applies escCollection and ecocCollection to outgoing command. escCollection: "enxcol_.default.esc" ecocCollection: "enxcol_.default.ecoc" <<: *encrypted_fields command_name: insert - command_started_event: command: find: *collection_name filter: { "_id": 1 } command_name: find - command_started_event: command: find: datakeys filter: { "$or": [ { "_id": { "$in": [ {'$binary': {'base64': 'EjRWeBI0mHYSNBI0VniQEg==', 'subType': '04'}} ] } }, { "keyAltNames": { "$in": [] } } ] } $db: keyvault readConcern: { level: "majority" } command_name: find outcome: collection: data: - {"_id": 1, "encryptedIndexed": { "$$type": "binData" }, "__safeContent__": [{ "$binary" : { "base64" : "31eCYlbQoVboc5zwC8IoyJVSkag9PxREka8dkmbXJeY=", "subType" : "00" } }] }mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_encryption/fle2v2-Compact.yml000066400000000000000000000105711505113246500311560ustar00rootroot00000000000000# Requires libmongocrypt 1.8.0. runOn: - minServerVersion: "7.0.0" maxServerVersion: "7.99.99" # Skip QEv2 (also referred to as FLE2v2) tests on Serverless. Unskip once Serverless enables the QEv2 protocol. # FLE 2 Encrypted collections are not supported on standalone. 
topology: [ "replicaset", "sharded", "load-balanced" ] database_name: &database_name "default" collection_name: &collection_name "default" data: [] encrypted_fields: &encrypted_fields {'fields': [{'keyId': {'$binary': {'base64': 'EjRWeBI0mHYSNBI0VniQEg==', 'subType': '04'}}, 'path': 'encryptedIndexed', 'bsonType': 'string', 'queries': {'queryType': 'equality', 'contention': {'$numberLong': '0'}}}, {'keyId': {'$binary': {'base64': 'q83vqxI0mHYSNBI0VniQEg==', 'subType': '04'}}, 'path': 'encryptedUnindexed', 'bsonType': 'string'}]} key_vault_data: [ {'_id': {'$binary': {'base64': 'EjRWeBI0mHYSNBI0VniQEg==', 'subType': '04'}}, 'keyMaterial': {'$binary': {'base64': 'sHe0kz57YW7v8g9VP9sf/+K1ex4JqKc5rf/URX3n3p8XdZ6+15uXPaSayC6adWbNxkFskuMCOifDoTT+rkqMtFkDclOy884RuGGtUysq3X7zkAWYTKi8QAfKkajvVbZl2y23UqgVasdQu3OVBQCrH/xY00nNAs/52e958nVjBuzQkSb1T8pKJAyjZsHJ60+FtnfafDZSTAIBJYn7UWBCwQ==', 'subType': '00'}}, 'creationDate': {'$date': {'$numberLong': '1648914851981'}}, 'updateDate': {'$date': {'$numberLong': '1648914851981'}}, 'status': {'$numberInt': '0'}, 'masterKey': {'provider': 'local'}}, {'_id': {'$binary': {'base64': 'q83vqxI0mHYSNBI0VniQEg==', 'subType': '04'}}, 'keyMaterial': {'$binary': {'base64': 'HBk9BWihXExNDvTp1lUxOuxuZK2Pe2ZdVdlsxPEBkiO1bS4mG5NNDsQ7zVxJAH8BtdOYp72Ku4Y3nwc0BUpIKsvAKX4eYXtlhv5zUQxWdeNFhg9qK7qb8nqhnnLeT0f25jFSqzWJoT379hfwDeu0bebJHr35QrJ8myZdPMTEDYF08QYQ48ShRBli0S+QzBHHAQiM2iJNr4svg2WR8JSeWQ==', 'subType': '00'}}, 'creationDate': {'$date': {'$numberLong': '1648914851981'}}, 'updateDate': {'$date': {'$numberLong': '1648914851981'}}, 'status': {'$numberInt': '0'}, 'masterKey': {'provider': 'local'}} ] tests: - description: "Compact works" clientOptions: autoEncryptOpts: kmsProviders: local: {'key': {'$binary': {'base64': 'Mng0NCt4ZHVUYUJCa1kxNkVyNUR1QURhZ2h2UzR2d2RrZzh0cFBwM3R6NmdWMDFBMUN3YkQ5aXRRMkhGRGdQV09wOGVNYUMxT2k3NjZKelhaQmRCZGJkTXVyZG9uSjFk', 'subType': '00'}}} operations: - name: runCommand object: database command_name: compactStructuredEncryptionData arguments: command: compactStructuredEncryptionData: *collection_name expectations: - command_started_event: command: listCollections: 1 filter: name: *collection_name command_name: listCollections - command_started_event: command: find: datakeys filter: { "$or": [ { "_id": { "$in": [ {'$binary': {'base64': 'EjRWeBI0mHYSNBI0VniQEg==', 'subType': '04'}}, {'$binary': {'base64': 'q83vqxI0mHYSNBI0VniQEg==', 'subType': '04'}} ] } }, { "keyAltNames": { "$in": [] } } ] } $db: keyvault readConcern: { level: "majority" } command_name: find - command_started_event: command: compactStructuredEncryptionData: *collection_name compactionTokens: { "encryptedIndexed": { "$binary": { "base64": "noN+05JsuO1oDg59yypIGj45i+eFH6HOTXOPpeZ//Mk=", "subType": "00" } }, "encryptedUnindexed": { "$binary": { "base64": "SWO8WEoZ2r2Kx/muQKb7+COizy85nIIUFiHh4K9kcvA=", "subType": "00" } } } command_name: compactStructuredEncryptionData - description: "Compact errors on an unencrypted client" operations: - name: runCommand object: database command_name: compactStructuredEncryptionData arguments: command: compactStructuredEncryptionData: *collection_name result: errorContains: "'compactStructuredEncryptionData.compactionTokens' is missing"fle2v2-CreateCollection-OldServer.yml000066400000000000000000000043301505113246500346270ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_encryption# Requires libmongocrypt 1.8.0. 
runOn: - minServerVersion: "6.0.0" maxServerVersion: "6.3.99" # FLE 2 Encrypted collections are not supported on standalone. topology: [ "replicaset", "sharded", "load-balanced" ] database_name: &database_name "default" collection_name: &collection_name "default" tests: - description: "driver returns an error if creating a QEv2 collection on unsupported server" clientOptions: autoEncryptOpts: kmsProviders: aws: {} # Credentials filled in from environment. encryptedFieldsMap: default.encryptedCollection: { "fields": [ { "path": "firstName", "bsonType": "string", "keyId": { "$binary": { "base64": "AAAAAAAAAAAAAAAAAAAAAA==", "subType": "04" }} } ] } operations: # Do an initial drop to remove collections that may exist from previous test runs. - name: dropCollection object: database arguments: collection: "encryptedCollection" - name: createCollection object: database arguments: collection: "encryptedCollection" result: errorContains: "Driver support of Queryable Encryption is incompatible with server. Upgrade server to use Queryable Encryption." # Assert no collections were created. - name: assertCollectionNotExists object: testRunner arguments: database: *database_name collection: &esc_collection_name "enxcol_.encryptedCollection.esc" # ecc collection is no longer created for QEv2 - name: assertCollectionNotExists object: testRunner arguments: database: *database_name collection: &ecc_collection_name "enxcol_.encryptedCollection.ecc" - name: assertCollectionNotExists object: testRunner arguments: database: *database_name collection: &ecoc_collection_name "enxcol_.encryptedCollection.ecoc" - name: assertCollectionNotExists object: testRunner arguments: database: *database_name collection: encryptedCollection mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_encryption/fle2v2-CreateCollection.yml000066400000000000000000001022441505113246500330060ustar00rootroot00000000000000# Requires libmongocrypt 1.8.0. runOn: - minServerVersion: "7.0.0" maxServerVersion: "7.99.99" # Skip QEv2 (also referred to as FLE2v2) tests on Serverless. Unskip once Serverless enables the QEv2 protocol. # FLE 2 Encrypted collections are not supported on standalone. topology: [ "replicaset", "sharded", "load-balanced" ] database_name: &database_name "default" collection_name: &collection_name "default" tests: - description: "state collections and index are created" clientOptions: autoEncryptOpts: kmsProviders: aws: {} # Credentials filled in from environment. encryptedFieldsMap: default.encryptedCollection: &encrypted_fields { "fields": [ { "path": "firstName", "bsonType": "string", "keyId": { "$binary": { "subType": "04", "base64": "AAAAAAAAAAAAAAAAAAAAAA==" }} } ] } operations: # Do an initial drop to remove collections that may exist from previous test runs. 
- name: dropCollection object: database arguments: collection: &encrypted_collection_name "encryptedCollection" - name: createCollection object: database arguments: collection: *encrypted_collection_name - name: assertCollectionExists object: testRunner arguments: database: *database_name collection: &esc_collection_name "enxcol_.encryptedCollection.esc" # ecc collection is no longer created for QEv2 - name: assertCollectionNotExists object: testRunner arguments: database: *database_name collection: &ecc_collection_name "enxcol_.encryptedCollection.ecc" - name: assertCollectionExists object: testRunner arguments: database: *database_name collection: &ecoc_collection_name "enxcol_.encryptedCollection.ecoc" - name: assertCollectionExists object: testRunner arguments: database: *database_name collection: *encrypted_collection_name - name: assertIndexExists object: testRunner arguments: database: *database_name collection: *encrypted_collection_name index: __safeContent___1 expectations: # events from dropCollection ... begin - command_started_event: command: drop: *esc_collection_name command_name: drop database_name: *database_name - command_started_event: command: drop: *ecoc_collection_name command_name: drop database_name: *database_name - command_started_event: command: drop: *encrypted_collection_name command_name: drop database_name: *database_name # events from dropCollection ... end # events from createCollection ... begin # State collections are created first. - command_started_event: command: create: *esc_collection_name clusteredIndex: {key: {_id: 1}, unique: true} command_name: create database_name: *database_name - command_started_event: command: create: *ecoc_collection_name clusteredIndex: {key: {_id: 1}, unique: true} command_name: create database_name: *database_name # Data collection is created after. - command_started_event: command: create: *encrypted_collection_name encryptedFields: &encrypted_fields_expectation { "fields": [ { "path": "firstName", "bsonType": "string", "keyId": { "$binary": { "subType": "04", "base64": "AAAAAAAAAAAAAAAAAAAAAA==" }} } ] } command_name: create database_name: *database_name # Index on __safeContents__ is then created. - command_started_event: command: createIndexes: *encrypted_collection_name indexes: - name: __safeContent___1 key: { __safeContent__: 1 } command_name: createIndexes database_name: *database_name # events from createCollection ... end - description: "default state collection names are applied" clientOptions: autoEncryptOpts: kmsProviders: aws: {} # Credentials filled in from environment. encryptedFieldsMap: default.encryptedCollection: *encrypted_fields operations: # Do an initial drop to remove collections that may exist from previous test runs. 
- name: dropCollection object: database arguments: collection: *encrypted_collection_name - name: createCollection object: database arguments: collection: *encrypted_collection_name - name: assertCollectionExists object: testRunner arguments: database: *database_name collection: *esc_collection_name # ecc collection is no longer created for QEv2 - name: assertCollectionNotExists object: testRunner arguments: database: *database_name collection: *ecc_collection_name - name: assertCollectionExists object: testRunner arguments: database: *database_name collection: *ecoc_collection_name - name: assertCollectionExists object: testRunner arguments: database: *database_name collection: *encrypted_collection_name - name: assertIndexExists object: testRunner arguments: database: *database_name collection: *encrypted_collection_name index: __safeContent___1 expectations: # events from dropCollection ... begin - command_started_event: command: drop: *esc_collection_name command_name: drop database_name: *database_name - command_started_event: command: drop: *ecoc_collection_name command_name: drop database_name: *database_name - command_started_event: command: drop: *encrypted_collection_name command_name: drop database_name: *database_name # events from dropCollection ... end # events from createCollection ... begin # State collections are created first. - command_started_event: command: create: *esc_collection_name clusteredIndex: {key: {_id: 1}, unique: true} command_name: create database_name: *database_name - command_started_event: command: create: *ecoc_collection_name clusteredIndex: {key: {_id: 1}, unique: true} command_name: create database_name: *database_name # Data collection is created after. - command_started_event: command: create: *encrypted_collection_name encryptedFields: *encrypted_fields_expectation command_name: create database_name: *database_name # Index on __safeContents__ is then created. - command_started_event: command: createIndexes: *encrypted_collection_name indexes: - name: __safeContent___1 key: { __safeContent__: 1 } command_name: createIndexes database_name: *database_name # events from createCollection ... end - description: "drop removes all state collections" clientOptions: autoEncryptOpts: kmsProviders: aws: {} # Credentials filled in from environment. encryptedFieldsMap: default.encryptedCollection: *encrypted_fields operations: # Do an initial drop to remove collections that may exist from previous test runs. 
- name: dropCollection object: database arguments: collection: *encrypted_collection_name - name: createCollection object: database arguments: collection: *encrypted_collection_name - name: assertCollectionExists object: testRunner arguments: database: *database_name collection: *esc_collection_name # ecc collection is no longer created for QEv2 - name: assertCollectionNotExists object: testRunner arguments: database: *database_name collection: *ecc_collection_name - name: assertCollectionExists object: testRunner arguments: database: *database_name collection: *ecoc_collection_name - name: assertCollectionExists object: testRunner arguments: database: *database_name collection: *encrypted_collection_name - name: assertIndexExists object: testRunner arguments: database: *database_name collection: *encrypted_collection_name index: __safeContent___1 - name: dropCollection object: database arguments: collection: *encrypted_collection_name # ecc collection is no longer created for QEv2 - name: assertCollectionNotExists object: testRunner arguments: database: *database_name collection: *ecoc_collection_name # ecc collection is no longer created for QEv2 - name: assertCollectionNotExists object: testRunner arguments: database: *database_name collection: *encrypted_collection_name - name: assertIndexNotExists object: testRunner arguments: database: *database_name collection: *encrypted_collection_name index: __safeContent___1 expectations: # events from dropCollection ... begin - command_started_event: command: drop: *esc_collection_name command_name: drop database_name: *database_name - command_started_event: command: drop: *ecoc_collection_name command_name: drop database_name: *database_name - command_started_event: command: drop: *encrypted_collection_name command_name: drop database_name: *database_name # events from dropCollection ... end # events from createCollection ... begin # State collections are created first. - command_started_event: command: create: *esc_collection_name clusteredIndex: {key: {_id: 1}, unique: true} command_name: create database_name: *database_name - command_started_event: command: create: *ecoc_collection_name clusteredIndex: {key: {_id: 1}, unique: true} command_name: create database_name: *database_name # Data collection is created after. - command_started_event: command: create: *encrypted_collection_name encryptedFields: *encrypted_fields command_name: create database_name: *database_name # Index on __safeContents__ is then created. - command_started_event: command: createIndexes: *encrypted_collection_name indexes: - name: __safeContent___1 key: { __safeContent__: 1 } command_name: createIndexes database_name: *database_name # events from createCollection ... end # events from dropCollection ... begin - command_started_event: command: drop: *esc_collection_name command_name: drop database_name: *database_name - command_started_event: command: drop: *ecoc_collection_name command_name: drop database_name: *database_name - command_started_event: command: drop: *encrypted_collection_name command_name: drop database_name: *database_name # events from dropCollection ... end - description: "CreateCollection without encryptedFields." clientOptions: autoEncryptOpts: kmsProviders: aws: {} # Credentials filled in from environment. encryptedFieldsMap: default.encryptedCollection: *encrypted_fields operations: # Do an initial drop to remove collections that may exist from previous test runs. 
- name: dropCollection object: database arguments: collection: "plaintextCollection" - name: createCollection object: database arguments: collection: "plaintextCollection" - name: assertCollectionExists object: testRunner arguments: database: *database_name collection: "plaintextCollection" expectations: # events from dropCollection ... begin # expect listCollections to be sent on drop to check for remote encryptedFields. - command_started_event: command: listCollections: 1 filter: { name: "plaintextCollection" } command_name: listCollections database_name: *database_name - command_started_event: command: drop: "plaintextCollection" command_name: drop database_name: *database_name # events from dropCollection ... end - command_started_event: command: create: "plaintextCollection" command_name: create database_name: *database_name - description: "CreateCollection from encryptedFieldsMap." clientOptions: autoEncryptOpts: kmsProviders: aws: {} # Credentials filled in from environment. encryptedFieldsMap: default.encryptedCollection: *encrypted_fields operations: # Do an initial drop to remove collections that may exist from previous test runs. - name: dropCollection object: database arguments: collection: *encrypted_collection_name - name: createCollection object: database arguments: collection: *encrypted_collection_name - name: assertCollectionExists object: testRunner arguments: database: *database_name collection: *esc_collection_name # ecc collection is no longer created for QEv2 - name: assertCollectionNotExists object: testRunner arguments: database: *database_name collection: *ecc_collection_name - name: assertCollectionExists object: testRunner arguments: database: *database_name collection: *ecoc_collection_name - name: assertCollectionExists object: testRunner arguments: database: *database_name collection: *encrypted_collection_name - name: assertIndexExists object: testRunner arguments: database: *database_name collection: *encrypted_collection_name index: __safeContent___1 expectations: # events from dropCollection ... begin - command_started_event: command: drop: *esc_collection_name command_name: drop database_name: *database_name - command_started_event: command: drop: *ecoc_collection_name command_name: drop database_name: *database_name - command_started_event: command: drop: *encrypted_collection_name command_name: drop database_name: *database_name # events from dropCollection ... end # events from createCollection ... begin # State collections are created first. - command_started_event: command: create: *esc_collection_name clusteredIndex: {key: {_id: 1}, unique: true} command_name: create database_name: *database_name - command_started_event: command: create: *ecoc_collection_name clusteredIndex: {key: {_id: 1}, unique: true} command_name: create database_name: *database_name # Data collection is created after. - command_started_event: command: create: *encrypted_collection_name encryptedFields: *encrypted_fields_expectation command_name: create database_name: *database_name # Index on __safeContents__ is then created. - command_started_event: command: createIndexes: *encrypted_collection_name indexes: - name: __safeContent___1 key: { __safeContent__: 1 } command_name: createIndexes database_name: *database_name # events from createCollection ... end - description: "CreateCollection from encryptedFields." clientOptions: autoEncryptOpts: kmsProviders: aws: {} # Credentials filled in from environment. 
operations: # Do initial drops to remove collections that may exist from previous test runs. - name: dropCollection object: database arguments: collection: *encrypted_collection_name encryptedFields: *encrypted_fields - name: createCollection object: database arguments: collection: *encrypted_collection_name encryptedFields: *encrypted_fields - name: assertCollectionExists object: testRunner arguments: database: *database_name collection: *esc_collection_name # ecc collection is no longer created for QEv2 - name: assertCollectionNotExists object: testRunner arguments: database: *database_name collection: *ecc_collection_name - name: assertCollectionExists object: testRunner arguments: database: *database_name collection: *ecoc_collection_name - name: assertCollectionExists object: testRunner arguments: database: *database_name collection: *encrypted_collection_name - name: assertIndexExists object: testRunner arguments: database: *database_name collection: *encrypted_collection_name index: __safeContent___1 expectations: # events from dropCollection ... begin - command_started_event: command: drop: *esc_collection_name command_name: drop database_name: *database_name - command_started_event: command: drop: *ecoc_collection_name command_name: drop database_name: *database_name - command_started_event: command: drop: *encrypted_collection_name command_name: drop database_name: *database_name # events from dropCollection ... end # events from createCollection ... begin # State collections are created first. - command_started_event: command: create: *esc_collection_name clusteredIndex: {key: {_id: 1}, unique: true} command_name: create database_name: *database_name - command_started_event: command: create: *ecoc_collection_name clusteredIndex: {key: {_id: 1}, unique: true} command_name: create database_name: *database_name # Data collection is created after. - command_started_event: command: create: *encrypted_collection_name encryptedFields: *encrypted_fields_expectation command_name: create database_name: *database_name # libmongocrypt requests listCollections to get a schema for the "createIndexes" command. - command_started_event: command: listCollections: 1 filter: { name: *encrypted_collection_name } command_name: listCollections database_name: *database_name # Index on __safeContents__ is then created. - command_started_event: command: createIndexes: *encrypted_collection_name indexes: - name: __safeContent___1 key: { __safeContent__: 1 } command_name: createIndexes database_name: *database_name # events from createCollection ... end - description: "DropCollection from encryptedFieldsMap" clientOptions: autoEncryptOpts: kmsProviders: aws: {} # Credentials filled in from environment. encryptedFieldsMap: default.encryptedCollection: *encrypted_fields operations: - name: dropCollection object: database arguments: collection: *encrypted_collection_name expectations: # events from dropCollection ... begin - command_started_event: command: drop: *esc_collection_name command_name: drop database_name: *database_name - command_started_event: command: drop: *ecoc_collection_name command_name: drop database_name: *database_name - command_started_event: command: drop: *encrypted_collection_name command_name: drop database_name: *database_name # events from dropCollection ... end - description: "DropCollection from encryptedFields" clientOptions: autoEncryptOpts: kmsProviders: aws: {} # Credentials filled in from environment. 
encryptedFieldsMap: {} operations: # Do initial drops to remove collections that may exist from previous test runs. - name: dropCollection object: database arguments: collection: *encrypted_collection_name encryptedFields: *encrypted_fields - name: createCollection object: database arguments: collection: *encrypted_collection_name encryptedFields: *encrypted_fields - name: assertCollectionExists object: testRunner arguments: database: *database_name collection: *esc_collection_name # ecc collection is no longer created for QEv2 - name: assertCollectionNotExists object: testRunner arguments: database: *database_name collection: *ecc_collection_name - name: assertCollectionExists object: testRunner arguments: database: *database_name collection: *ecoc_collection_name - name: assertCollectionExists object: testRunner arguments: database: *database_name collection: *encrypted_collection_name - name: assertIndexExists object: testRunner arguments: database: *database_name collection: *encrypted_collection_name index: __safeContent___1 - name: dropCollection object: database arguments: collection: *encrypted_collection_name encryptedFields: *encrypted_fields # ecc collection is no longer created for QEv2 - name: assertCollectionNotExists object: testRunner arguments: database: *database_name collection: *esc_collection_name # ecc collection is no longer created for QEv2 - name: assertCollectionNotExists object: testRunner arguments: database: *database_name collection: *ecoc_collection_name # ecc collection is no longer created for QEv2 - name: assertCollectionNotExists object: testRunner arguments: database: *database_name collection: *encrypted_collection_name expectations: # events from dropCollection ... begin - command_started_event: command: drop: *esc_collection_name command_name: drop database_name: *database_name - command_started_event: command: drop: *ecoc_collection_name command_name: drop database_name: *database_name - command_started_event: command: drop: *encrypted_collection_name command_name: drop database_name: *database_name # events from dropCollection ... end # events from createCollection ... begin - command_started_event: command: create: *esc_collection_name clusteredIndex: {key: {_id: 1}, unique: true} command_name: create database_name: *database_name - command_started_event: command: create: *ecoc_collection_name clusteredIndex: {key: {_id: 1}, unique: true} command_name: create database_name: *database_name - command_started_event: command: create: *encrypted_collection_name encryptedFields: *encrypted_fields_expectation command_name: create database_name: *database_name # libmongocrypt requests listCollections to get a schema for the "createIndexes" command. - command_started_event: command: listCollections: 1 filter: { name: *encrypted_collection_name } command_name: listCollections database_name: *database_name # Index on __safeContents__ is then created. - command_started_event: command: createIndexes: *encrypted_collection_name indexes: - name: __safeContent___1 key: { __safeContent__: 1 } command_name: createIndexes database_name: *database_name # events from createCollection ... end # events from dropCollection ... 
begin - command_started_event: command: drop: *esc_collection_name command_name: drop database_name: *database_name - command_started_event: command: drop: *ecoc_collection_name command_name: drop database_name: *database_name - command_started_event: command: drop: *encrypted_collection_name command_name: drop database_name: *database_name # events from dropCollection ... end - description: "DropCollection from remote encryptedFields" clientOptions: autoEncryptOpts: kmsProviders: aws: {} # Credentials filled in from environment. encryptedFieldsMap: {} operations: # Do initial drops to remove collections that may exist from previous test runs. - name: dropCollection object: database arguments: collection: *encrypted_collection_name encryptedFields: *encrypted_fields - name: createCollection object: database arguments: collection: *encrypted_collection_name encryptedFields: *encrypted_fields - name: assertCollectionExists object: testRunner arguments: database: *database_name collection: *esc_collection_name # ecc collection is no longer created for QEv2 - name: assertCollectionNotExists object: testRunner arguments: database: *database_name collection: *ecc_collection_name - name: assertCollectionExists object: testRunner arguments: database: *database_name collection: *ecoc_collection_name - name: assertCollectionExists object: testRunner arguments: database: *database_name collection: *encrypted_collection_name - name: assertIndexExists object: testRunner arguments: database: *database_name collection: *encrypted_collection_name index: __safeContent___1 - name: dropCollection object: database arguments: collection: *encrypted_collection_name # ecc collection is no longer created for QEv2 - name: assertCollectionNotExists object: testRunner arguments: database: *database_name collection: *esc_collection_name # ecc collection is no longer created for QEv2 - name: assertCollectionNotExists object: testRunner arguments: database: *database_name collection: *ecoc_collection_name # ecc collection is no longer created for QEv2 - name: assertCollectionNotExists object: testRunner arguments: database: *database_name collection: *encrypted_collection_name expectations: # events from dropCollection ... begin - command_started_event: command: drop: *esc_collection_name command_name: drop database_name: *database_name - command_started_event: command: drop: *ecoc_collection_name command_name: drop database_name: *database_name - command_started_event: command: drop: *encrypted_collection_name command_name: drop database_name: *database_name # events from dropCollection ... end # events from createCollection ... begin - command_started_event: command: create: *esc_collection_name clusteredIndex: {key: {_id: 1}, unique: true} command_name: create database_name: *database_name - command_started_event: command: create: *ecoc_collection_name clusteredIndex: {key: {_id: 1}, unique: true} command_name: create database_name: *database_name - command_started_event: command: create: *encrypted_collection_name encryptedFields: *encrypted_fields_expectation command_name: create database_name: *database_name # libmongocrypt requests listCollections to get a schema for the "createIndexes" command. - command_started_event: command: listCollections: 1 filter: { name: *encrypted_collection_name } command_name: listCollections database_name: *database_name # Index on __safeContents__ is then created. 
- command_started_event: command: createIndexes: *encrypted_collection_name indexes: - name: __safeContent___1 key: { __safeContent__: 1 } command_name: createIndexes database_name: *database_name # events from createCollection ... end # events from dropCollection ... begin - command_started_event: command: listCollections: 1 filter: { name: *encrypted_collection_name } command_name: listCollections database_name: *database_name - command_started_event: command: drop: *esc_collection_name command_name: drop database_name: *database_name - command_started_event: command: drop: *ecoc_collection_name command_name: drop database_name: *database_name - command_started_event: command: drop: *encrypted_collection_name command_name: drop database_name: *database_name # events from dropCollection ... end - description: "encryptedFields are consulted for metadata collection names" clientOptions: autoEncryptOpts: kmsProviders: aws: {} # Credentials filled in from environment. encryptedFieldsMap: default.encryptedCollection: { "escCollection": "invalid_esc_name", "ecocCollection": "invalid_ecoc_name", "fields": [ { "path": "firstName", "bsonType": "string", "keyId": { "$binary": { "subType": "04", "base64": "AAAAAAAAAAAAAAAAAAAAAA==" }} } ] } operations: # Do an initial drop to remove collections that may exist from previous test runs. - name: dropCollection object: database arguments: collection: *encrypted_collection_name - name: createCollection object: database arguments: collection: *encrypted_collection_name result: # Expect error due to server constraints added in SERVER-74069 errorContains: "Encrypted State Collection name should follow"

mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_encryption/fle2v2-DecryptExistingData.yml

# Requires libmongocrypt 1.8.0. runOn: - minServerVersion: "7.0.0" maxServerVersion: "7.99.99" # Skip QEv2 (also referred to as FLE2v2) tests on Serverless. Unskip once Serverless enables the QEv2 protocol. # FLE 2 Encrypted collections are not supported on standalone.
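# Decrypting pre-existing QE data needs only the key vault: the expected
# find below carries no encryptionInformation, and a single datakeys
# lookup is enough to return the plaintext value.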
topology: [ "replicaset", "sharded", "load-balanced" ] database_name: &database_name "default" collection_name: &collection_name "default" data: [ &doc0 { "_id": 1, "encryptedUnindexed": { "$binary": { "base64": "BqvN76sSNJh2EjQSNFZ4kBICTQaVZPWgXp41I7mPV1rLFTtw1tXzjcdSEyxpKKqujlko5TeizkB9hHQ009dVY1+fgIiDcefh+eQrm3CkhQ==", "subType": "06" } } } ] key_vault_data: [ {'_id': {'$binary': {'base64': 'q83vqxI0mHYSNBI0VniQEg==', 'subType': '04'}}, 'keyMaterial': {'$binary': {'base64': 'HBk9BWihXExNDvTp1lUxOuxuZK2Pe2ZdVdlsxPEBkiO1bS4mG5NNDsQ7zVxJAH8BtdOYp72Ku4Y3nwc0BUpIKsvAKX4eYXtlhv5zUQxWdeNFhg9qK7qb8nqhnnLeT0f25jFSqzWJoT379hfwDeu0bebJHr35QrJ8myZdPMTEDYF08QYQ48ShRBli0S+QzBHHAQiM2iJNr4svg2WR8JSeWQ==', 'subType': '00'}}, 'creationDate': {'$date': {'$numberLong': '1648914851981'}}, 'updateDate': {'$date': {'$numberLong': '1648914851981'}}, 'status': {'$numberInt': '0'}, 'masterKey': {'provider': 'local'}}] tests: - description: "FLE2 decrypt of existing data succeeds" clientOptions: autoEncryptOpts: kmsProviders: local: {'key': {'$binary': {'base64': 'Mng0NCt4ZHVUYUJCa1kxNkVyNUR1QURhZ2h2UzR2d2RrZzh0cFBwM3R6NmdWMDFBMUN3YkQ5aXRRMkhGRGdQV09wOGVNYUMxT2k3NjZKelhaQmRCZGJkTXVyZG9uSjFk', 'subType': '00'}}} operations: - name: find arguments: filter: { _id: 1 } result: [{ "_id": 1, "encryptedUnindexed": "value123" }] expectations: - command_started_event: command: listCollections: 1 filter: name: *collection_name command_name: listCollections - command_started_event: command: find: *collection_name filter: { "_id": 1 } command_name: find - command_started_event: command: find: datakeys filter: { "$or": [ { "_id": { "$in": [ {'$binary': {'base64': 'q83vqxI0mHYSNBI0VniQEg==', 'subType': '04'}} ] } }, { "keyAltNames": { "$in": [] } } ] } $db: keyvault readConcern: { level: "majority" } command_name: findmongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_encryption/fle2v2-Delete.yml000066400000000000000000000110361505113246500307670ustar00rootroot00000000000000# Requires libmongocrypt 1.8.0. runOn: - minServerVersion: "7.0.0" maxServerVersion: "7.99.99" # Skip QEv2 (also referred to as FLE2v2) tests on Serverless. Unskip once Serverless enables the QEv2 protocol. # FLE 2 Encrypted collections are not supported on standalone. 
topology: [ "replicaset", "sharded", "load-balanced" ] database_name: &database_name "default" collection_name: &collection_name "default" data: [] encrypted_fields: &encrypted_fields {'fields': [{'keyId': {'$binary': {'base64': 'EjRWeBI0mHYSNBI0VniQEg==', 'subType': '04'}}, 'path': 'encryptedIndexed', 'bsonType': 'string', 'queries': {'queryType': 'equality', 'contention': {'$numberLong': '0'}}}, {'keyId': {'$binary': {'base64': 'q83vqxI0mHYSNBI0VniQEg==', 'subType': '04'}}, 'path': 'encryptedUnindexed', 'bsonType': 'string'}]} key_vault_data: [ {'_id': {'$binary': {'base64': 'EjRWeBI0mHYSNBI0VniQEg==', 'subType': '04'}}, 'keyMaterial': {'$binary': {'base64': 'sHe0kz57YW7v8g9VP9sf/+K1ex4JqKc5rf/URX3n3p8XdZ6+15uXPaSayC6adWbNxkFskuMCOifDoTT+rkqMtFkDclOy884RuGGtUysq3X7zkAWYTKi8QAfKkajvVbZl2y23UqgVasdQu3OVBQCrH/xY00nNAs/52e958nVjBuzQkSb1T8pKJAyjZsHJ60+FtnfafDZSTAIBJYn7UWBCwQ==', 'subType': '00'}}, 'creationDate': {'$date': {'$numberLong': '1648914851981'}}, 'updateDate': {'$date': {'$numberLong': '1648914851981'}}, 'status': {'$numberInt': '0'}, 'masterKey': {'provider': 'local'}} ] tests: - description: "Delete can query an FLE2 indexed field" clientOptions: autoEncryptOpts: kmsProviders: local: {'key': {'$binary': {'base64': 'Mng0NCt4ZHVUYUJCa1kxNkVyNUR1QURhZ2h2UzR2d2RrZzh0cFBwM3R6NmdWMDFBMUN3YkQ5aXRRMkhGRGdQV09wOGVNYUMxT2k3NjZKelhaQmRCZGJkTXVyZG9uSjFk', 'subType': '00'}}} operations: - name: insertOne arguments: document: &doc0 {"_id": 1, "encryptedIndexed": "value123" } - name: deleteOne arguments: filter: { "encryptedIndexed": "value123" } result: deletedCount: 1 expectations: - command_started_event: command: listCollections: 1 filter: name: *collection_name command_name: listCollections - command_started_event: command: find: datakeys filter: { "$or": [ { "_id": { "$in": [ {'$binary': {'base64': 'EjRWeBI0mHYSNBI0VniQEg==', 'subType': '04'}} ] } }, { "keyAltNames": { "$in": [] } } ] } $db: keyvault readConcern: { level: "majority" } command_name: find - command_started_event: command: insert: *collection_name documents: - { "_id": 1, "encryptedIndexed": { $$type: "binData" } } ordered: true encryptionInformation: type: 1 schema: "default.default": # libmongocrypt applies escCollection and ecocCollection to outgoing command. escCollection: "enxcol_.default.esc" ecocCollection: "enxcol_.default.ecoc" <<: *encrypted_fields command_name: insert - command_started_event: command: delete: *collection_name deletes: - { "q": { "encryptedIndexed": { "$eq": { "$binary": { "base64": "DIkAAAAFZAAgAAAAAPtVteJQAlgb2YMa/+7YWH00sbQPyt7L6Rb8OwBdMmL2BXMAIAAAAAAd44hgVKnEnTFlwNVC14oyc9OZOTspeymusqkRQj57nAVsACAAAAAAaZ9s3G+4znfxStxeOZwcZy1OhzjMGc5hjmdMN+b/w6kSY20AAAAAAAAAAAAA", "subType": "06" } } } }, "limit": 1 } ordered: true encryptionInformation: type: 1 schema: "default.default": # libmongocrypt applies escCollection and ecocCollection to outgoing command. escCollection: "enxcol_.default.esc" ecocCollection: "enxcol_.default.ecoc" <<: *encrypted_fields command_name: delete outcome: collection: data: []fle2v2-EncryptedFields-vs-EncryptedFieldsMap.yml000066400000000000000000000070521505113246500367430ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_encryption# Requires libmongocrypt 1.8.0. runOn: - minServerVersion: "7.0.0" maxServerVersion: "7.99.99" # Skip QEv2 (also referred to as FLE2v2) tests on Serverless. Unskip once Serverless enables the QEv2 protocol. # FLE 2 Encrypted collections are not supported on standalone. 
topology: [ "replicaset", "sharded", "load-balanced" ] database_name: &database_name "default" collection_name: &collection_name "default" data: [] encrypted_fields: {'fields': [{'keyId': {'$binary': {'base64': 'EjRWeBI0mHYSNBI0VniQEg==', 'subType': '04'}}, 'path': 'encryptedIndexed', 'bsonType': 'string', 'queries': {'queryType': 'equality', 'contention': {'$numberLong': '0'}}}, {'keyId': {'$binary': {'base64': 'q83vqxI0mHYSNBI0VniQEg==', 'subType': '04'}}, 'path': 'encryptedUnindexed', 'bsonType': 'string'}]} key_vault_data: [ {'_id': {'$binary': {'base64': 'q83vqxI0mHYSNBI0VniQEg==', 'subType': '04'}}, 'keyMaterial': {'$binary': {'base64': 'HBk9BWihXExNDvTp1lUxOuxuZK2Pe2ZdVdlsxPEBkiO1bS4mG5NNDsQ7zVxJAH8BtdOYp72Ku4Y3nwc0BUpIKsvAKX4eYXtlhv5zUQxWdeNFhg9qK7qb8nqhnnLeT0f25jFSqzWJoT379hfwDeu0bebJHr35QrJ8myZdPMTEDYF08QYQ48ShRBli0S+QzBHHAQiM2iJNr4svg2WR8JSeWQ==', 'subType': '00'}}, 'creationDate': {'$date': {'$numberLong': '1648914851981'}}, 'updateDate': {'$date': {'$numberLong': '1648914851981'}}, 'status': {'$numberInt': '0'}, 'masterKey': {'provider': 'local'}}] tests: - description: "encryptedFieldsMap is preferred over remote encryptedFields" clientOptions: autoEncryptOpts: kmsProviders: local: {'key': {'$binary': {'base64': 'Mng0NCt4ZHVUYUJCa1kxNkVyNUR1QURhZ2h2UzR2d2RrZzh0cFBwM3R6NmdWMDFBMUN3YkQ5aXRRMkhGRGdQV09wOGVNYUMxT2k3NjZKelhaQmRCZGJkTXVyZG9uSjFk', 'subType': '00'}}} encryptedFieldsMap: { "default.default": { "fields": [] } } operations: # EncryptedFieldsMap overrides remote encryptedFields. # Automatic encryption does not occur on encryptedUnindexed. The value is validated on the server. - name: insertOne arguments: document: &doc0 { _id: 1, encryptedUnindexed: { "$binary": { "base64": "BqvN76sSNJh2EjQSNFZ4kBICTQaVZPWgXp41I7mPV1rLFTtw1tXzjcdSEyxpKKqujlko5TeizkB9hHQ009dVY1+fgIiDcefh+eQrm3CkhQ==", "subType": "06" } } } - name: find arguments: filter: { "_id": 1 } result: [{"_id": 1, "encryptedUnindexed": "value123" }] expectations: - command_started_event: command: insert: *collection_name documents: - *doc0 ordered: true command_name: insert - command_started_event: command: find: *collection_name filter: { "_id": 1} command_name: find - command_started_event: command: find: datakeys filter: { "$or": [ { "_id": { "$in": [ {'$binary': {'base64': 'q83vqxI0mHYSNBI0VniQEg==', 'subType': '04'}} ] } }, { "keyAltNames": { "$in": [] } } ] } $db: keyvault readConcern: { level: "majority" } command_name: find outcome: collection: data: - *doc0fle2v2-EncryptedFields-vs-jsonSchema.yml000066400000000000000000000112711505113246500353110ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_encryption# Requires libmongocrypt 1.8.0. runOn: - minServerVersion: "7.0.0" maxServerVersion: "7.99.99" # Skip QEv2 (also referred to as FLE2v2) tests on Serverless. Unskip once Serverless enables the QEv2 protocol. # FLE 2 Encrypted collections are not supported on standalone. 
topology: [ "replicaset", "sharded", "load-balanced" ] database_name: &database_name "default" collection_name: &collection_name "default" data: [] json_schema: { "properties": {}, "bsonType": "object" } encrypted_fields: &encrypted_fields {'fields': [{'keyId': {'$binary': {'base64': 'EjRWeBI0mHYSNBI0VniQEg==', 'subType': '04'}}, 'path': 'encryptedIndexed', 'bsonType': 'string', 'queries': {'queryType': 'equality', 'contention': {'$numberLong': '0'}}}, {'keyId': {'$binary': {'base64': 'q83vqxI0mHYSNBI0VniQEg==', 'subType': '04'}}, 'path': 'encryptedUnindexed', 'bsonType': 'string'}]} key_vault_data: [ {'_id': {'$binary': {'base64': 'EjRWeBI0mHYSNBI0VniQEg==', 'subType': '04'}}, 'keyMaterial': {'$binary': {'base64': 'sHe0kz57YW7v8g9VP9sf/+K1ex4JqKc5rf/URX3n3p8XdZ6+15uXPaSayC6adWbNxkFskuMCOifDoTT+rkqMtFkDclOy884RuGGtUysq3X7zkAWYTKi8QAfKkajvVbZl2y23UqgVasdQu3OVBQCrH/xY00nNAs/52e958nVjBuzQkSb1T8pKJAyjZsHJ60+FtnfafDZSTAIBJYn7UWBCwQ==', 'subType': '00'}}, 'creationDate': {'$date': {'$numberLong': '1648914851981'}}, 'updateDate': {'$date': {'$numberLong': '1648914851981'}}, 'status': {'$numberInt': '0'}, 'masterKey': {'provider': 'local'}} ] tests: - description: "encryptedFields is preferred over jsonSchema" clientOptions: autoEncryptOpts: kmsProviders: local: {'key': {'$binary': {'base64': 'Mng0NCt4ZHVUYUJCa1kxNkVyNUR1QURhZ2h2UzR2d2RrZzh0cFBwM3R6NmdWMDFBMUN3YkQ5aXRRMkhGRGdQV09wOGVNYUMxT2k3NjZKelhaQmRCZGJkTXVyZG9uSjFk', 'subType': '00'}}} operations: - name: insertOne arguments: document: &doc0 { _id: 1, encryptedIndexed: "123" } - name: find arguments: filter: { encryptedIndexed: "123" } result: [*doc0] expectations: - command_started_event: command: listCollections: 1 filter: name: *collection_name command_name: listCollections - command_started_event: command: find: datakeys filter: { "$or": [ { "_id": { "$in": [ {'$binary': {'base64': 'EjRWeBI0mHYSNBI0VniQEg==', 'subType': '04'}} ] } }, { "keyAltNames": { "$in": [] } } ] } $db: keyvault readConcern: { level: "majority" } command_name: find - command_started_event: command: insert: *collection_name documents: - &doc0_encrypted { "_id": 1, "encryptedIndexed": { $$type: "binData" } } ordered: true encryptionInformation: type: 1 schema: "default.default": # libmongocrypt applies escCollection and ecocCollection to outgoing command. escCollection: "enxcol_.default.esc" ecocCollection: "enxcol_.default.ecoc" <<: *encrypted_fields command_name: insert - command_started_event: command: find: *collection_name filter: { "encryptedIndexed": { "$eq": { "$binary": { "base64": "DIkAAAAFZAAgAAAAAPGmZcUzdE/FPILvRSyAScGvZparGI2y9rJ/vSBxgCujBXMAIAAAAACi1RjmndKqgnXy7xb22RzUbnZl1sOZRXPOC0KcJkAxmQVsACAAAAAApJtKPW4+o9B7gAynNLL26jtlB4+hq5TXResijcYet8USY20AAAAAAAAAAAAA", "subType": "06" } } } } encryptionInformation: type: 1 schema: "default.default": # libmongocrypt applies escCollection and ecocCollection to outgoing command. escCollection: "enxcol_.default.esc" ecocCollection: "enxcol_.default.ecoc" <<: *encrypted_fields command_name: find outcome: collection: # Outcome is checked using a separate MongoClient without auto encryption. data: - { "_id": 1, "encryptedIndexed": { $$type: "binData" }, "__safeContent__": [{ "$binary" : { "base64" : "31eCYlbQoVboc5zwC8IoyJVSkag9PxREka8dkmbXJeY=", "subType" : "00" } }] }fle2v2-EncryptedFieldsMap-defaults.yml000066400000000000000000000041271505113246500350400ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_encryption# Requires libmongocrypt 1.8.0. 
runOn: - minServerVersion: "7.0.0" maxServerVersion: "7.99.99" # Skip QEv2 (also referred to as FLE2v2) tests on Serverless. Unskip once Serverless enables the QEv2 protocol. # FLE 2 Encrypted collections are not supported on standalone. topology: [ "replicaset", "sharded", "load-balanced" ] database_name: &database_name "default" collection_name: &collection_name "default" data: [] key_vault_data: [] tests: - description: "default state collections are applied to encryptionInformation" clientOptions: autoEncryptOpts: kmsProviders: local: {'key': {'$binary': {'base64': 'Mng0NCt4ZHVUYUJCa1kxNkVyNUR1QURhZ2h2UzR2d2RrZzh0cFBwM3R6NmdWMDFBMUN3YkQ5aXRRMkhGRGdQV09wOGVNYUMxT2k3NjZKelhaQmRCZGJkTXVyZG9uSjFk', 'subType': '00'}}} encryptedFieldsMap: &efm { "default.default": { "fields": [] } } operations: - name: insertOne arguments: document: &doc0 { _id: 1, # Include a FLE2FindEncryptedPayload for 'encryptionInformation' to be appended. foo: { "$binary": { "base64": "BYkAAAAFZAAgAAAAAE8KGPgq7h3n9nH5lfHcia8wtOTLwGkZNLBesb6PULqbBXMAIAAAAACq0558QyD3c3jkR5k0Zc9UpQK8ByhXhtn2d1xVQnuJ3AVjACAAAAAA1003zUWGwD4zVZ0KeihnZOthS3V6CEHUfnJZcIYHefISY20AAAAAAAAAAAAA", "subType": "06" } } } expectations: - command_started_event: command: insert: *collection_name documents: - *doc0 encryptionInformation: { "type": { "$numberInt": "1" }, "schema": { "default.default": { "escCollection": "enxcol_.default.esc", "ecocCollection": "enxcol_.default.ecoc", "fields": [] } } } ordered: true command_name: insert outcome: collection: data: - *doc0mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_encryption/fle2v2-FindOneAndUpdate.yml000066400000000000000000000210011505113246500326660ustar00rootroot00000000000000# Requires libmongocrypt 1.8.0. runOn: - minServerVersion: "7.0.0" maxServerVersion: "7.99.99" # Skip QEv2 (also referred to as FLE2v2) tests on Serverless. Unskip once Serverless enables the QEv2 protocol. # FLE 2 Encrypted collections are not supported on standalone. 
topology: [ "replicaset", "sharded", "load-balanced" ] database_name: &database_name "default" collection_name: &collection_name "default" data: [] encrypted_fields: &encrypted_fields {'fields': [{'keyId': {'$binary': {'base64': 'EjRWeBI0mHYSNBI0VniQEg==', 'subType': '04'}}, 'path': 'encryptedIndexed', 'bsonType': 'string', 'queries': {'queryType': 'equality', 'contention': {'$numberLong': '0'}}}, {'keyId': {'$binary': {'base64': 'q83vqxI0mHYSNBI0VniQEg==', 'subType': '04'}}, 'path': 'encryptedUnindexed', 'bsonType': 'string'}]} key_vault_data: [ {'_id': {'$binary': {'base64': 'EjRWeBI0mHYSNBI0VniQEg==', 'subType': '04'}}, 'keyMaterial': {'$binary': {'base64': 'sHe0kz57YW7v8g9VP9sf/+K1ex4JqKc5rf/URX3n3p8XdZ6+15uXPaSayC6adWbNxkFskuMCOifDoTT+rkqMtFkDclOy884RuGGtUysq3X7zkAWYTKi8QAfKkajvVbZl2y23UqgVasdQu3OVBQCrH/xY00nNAs/52e958nVjBuzQkSb1T8pKJAyjZsHJ60+FtnfafDZSTAIBJYn7UWBCwQ==', 'subType': '00'}}, 'creationDate': {'$date': {'$numberLong': '1648914851981'}}, 'updateDate': {'$date': {'$numberLong': '1648914851981'}}, 'status': {'$numberInt': '0'}, 'masterKey': {'provider': 'local'}} ] tests: - description: "findOneAndUpdate can query an FLE2 indexed field" clientOptions: autoEncryptOpts: kmsProviders: local: {'key': {'$binary': {'base64': 'Mng0NCt4ZHVUYUJCa1kxNkVyNUR1QURhZ2h2UzR2d2RrZzh0cFBwM3R6NmdWMDFBMUN3YkQ5aXRRMkhGRGdQV09wOGVNYUMxT2k3NjZKelhaQmRCZGJkTXVyZG9uSjFk', 'subType': '00'}}} operations: - name: insertOne arguments: document: {"_id": 1, "encryptedIndexed": "value123" } - name: findOneAndUpdate arguments: filter: { "encryptedIndexed": "value123" } update: { "$set": { "foo": "bar"}} returnDocument: Before result: { "_id": 1, "encryptedIndexed": "value123" } expectations: - command_started_event: command: listCollections: 1 filter: name: *collection_name command_name: listCollections - command_started_event: command: find: datakeys filter: { "$or": [ { "_id": { "$in": [ {'$binary': {'base64': 'EjRWeBI0mHYSNBI0VniQEg==', 'subType': '04'}} ] } }, { "keyAltNames": { "$in": [] } } ] } $db: keyvault readConcern: { level: "majority" } command_name: find - command_started_event: command: insert: *collection_name documents: - { "_id": 1, "encryptedIndexed": { $$type: "binData" } } ordered: true encryptionInformation: type: 1 schema: "default.default": # libmongocrypt applies escCollection and ecocCollection to outgoing command. escCollection: "enxcol_.default.esc" ecocCollection: "enxcol_.default.ecoc" <<: *encrypted_fields command_name: insert - command_started_event: command: findAndModify: *collection_name query: { "encryptedIndexed": { "$eq": { "$binary": { "base64": "DIkAAAAFZAAgAAAAAPtVteJQAlgb2YMa/+7YWH00sbQPyt7L6Rb8OwBdMmL2BXMAIAAAAAAd44hgVKnEnTFlwNVC14oyc9OZOTspeymusqkRQj57nAVsACAAAAAAaZ9s3G+4znfxStxeOZwcZy1OhzjMGc5hjmdMN+b/w6kSY20AAAAAAAAAAAAA", "subType": "06" } } } } update: { "$set": { "foo": "bar"} } encryptionInformation: type: 1 schema: "default.default": # libmongocrypt applies escCollection and ecocCollection to outgoing command. 
escCollection: "enxcol_.default.esc" ecocCollection: "enxcol_.default.ecoc" <<: *encrypted_fields command_name: findAndModify outcome: collection: data: - { "_id": 1, "encryptedIndexed": { "$$type": "binData" }, "foo": "bar", "__safeContent__": [{ "$binary" : { "base64" : "ThpoKfQ8AkOzkFfNC1+9PF0pY2nIzfXvRdxQgjkNbBw=", "subType" : "00" } }] } - description: "findOneAndUpdate can modify an FLE2 indexed field" clientOptions: autoEncryptOpts: kmsProviders: local: {'key': {'$binary': {'base64': 'Mng0NCt4ZHVUYUJCa1kxNkVyNUR1QURhZ2h2UzR2d2RrZzh0cFBwM3R6NmdWMDFBMUN3YkQ5aXRRMkhGRGdQV09wOGVNYUMxT2k3NjZKelhaQmRCZGJkTXVyZG9uSjFk', 'subType': '00'}}} operations: - name: insertOne arguments: document: {"_id": 1, "encryptedIndexed": "value123" } - name: findOneAndUpdate arguments: filter: { "encryptedIndexed": "value123" } update: { "$set": { "encryptedIndexed": "value456"}} returnDocument: Before result: { "_id": 1, "encryptedIndexed": "value123" } - name: find arguments: filter: { "_id": 1} result: [ "encryptedIndexed": "value456" ] expectations: - command_started_event: command: listCollections: 1 filter: name: *collection_name command_name: listCollections - command_started_event: command: find: datakeys filter: { "$or": [ { "_id": { "$in": [ {'$binary': {'base64': 'EjRWeBI0mHYSNBI0VniQEg==', 'subType': '04'}} ] } }, { "keyAltNames": { "$in": [] } } ] } $db: keyvault readConcern: { level: "majority" } command_name: find - command_started_event: command: insert: *collection_name documents: - { "_id": 1, "encryptedIndexed": { $$type: "binData" } } ordered: true encryptionInformation: type: 1 schema: "default.default": # libmongocrypt applies escCollection and ecocCollection to outgoing command. escCollection: "enxcol_.default.esc" ecocCollection: "enxcol_.default.ecoc" <<: *encrypted_fields command_name: insert - command_started_event: command: findAndModify: *collection_name query: { "encryptedIndexed": { "$eq": { "$binary": { "base64": "DIkAAAAFZAAgAAAAAPtVteJQAlgb2YMa/+7YWH00sbQPyt7L6Rb8OwBdMmL2BXMAIAAAAAAd44hgVKnEnTFlwNVC14oyc9OZOTspeymusqkRQj57nAVsACAAAAAAaZ9s3G+4znfxStxeOZwcZy1OhzjMGc5hjmdMN+b/w6kSY20AAAAAAAAAAAAA", "subType": "06" } } } } update: { "$set": { "encryptedIndexed": { "$$type": "binData" }} } encryptionInformation: type: 1 schema: "default.default": # libmongocrypt applies escCollection and ecocCollection to outgoing command. escCollection: "enxcol_.default.esc" ecocCollection: "enxcol_.default.ecoc" <<: *encrypted_fields command_name: findAndModify - command_started_event: command: find: *collection_name filter: { "_id": { "$eq": 1 }} command_name: find outcome: collection: data: - { "_id": 1, "encryptedIndexed": { "$$type": "binData" }, "__safeContent__": [{ "$binary" : { "base64" : "rhe7/w8Ob8Unl44rGr/moScx6m5VODQnscDhF4Nkn6g=", "subType" : "00" } }] }mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_encryption/fle2v2-InsertFind-Indexed.yml000066400000000000000000000111311505113246500332040ustar00rootroot00000000000000# Requires libmongocrypt 1.8.0. runOn: - minServerVersion: "7.0.0" maxServerVersion: "7.99.99" # Skip QEv2 (also referred to as FLE2v2) tests on Serverless. Unskip once Serverless enables the QEv2 protocol. # FLE 2 Encrypted collections are not supported on standalone. 
topology: [ "replicaset", "sharded", "load-balanced" ] database_name: &database_name "default" collection_name: &collection_name "default" data: [] encrypted_fields: &encrypted_fields {'fields': [{'keyId': {'$binary': {'base64': 'EjRWeBI0mHYSNBI0VniQEg==', 'subType': '04'}}, 'path': 'encryptedIndexed', 'bsonType': 'string', 'queries': {'queryType': 'equality', 'contention': {'$numberLong': '0'}}}, {'keyId': {'$binary': {'base64': 'q83vqxI0mHYSNBI0VniQEg==', 'subType': '04'}}, 'path': 'encryptedUnindexed', 'bsonType': 'string'}]} key_vault_data: [ {'_id': {'$binary': {'base64': 'EjRWeBI0mHYSNBI0VniQEg==', 'subType': '04'}}, 'keyMaterial': {'$binary': {'base64': 'sHe0kz57YW7v8g9VP9sf/+K1ex4JqKc5rf/URX3n3p8XdZ6+15uXPaSayC6adWbNxkFskuMCOifDoTT+rkqMtFkDclOy884RuGGtUysq3X7zkAWYTKi8QAfKkajvVbZl2y23UqgVasdQu3OVBQCrH/xY00nNAs/52e958nVjBuzQkSb1T8pKJAyjZsHJ60+FtnfafDZSTAIBJYn7UWBCwQ==', 'subType': '00'}}, 'creationDate': {'$date': {'$numberLong': '1648914851981'}}, 'updateDate': {'$date': {'$numberLong': '1648914851981'}}, 'status': {'$numberInt': '0'}, 'masterKey': {'provider': 'local'}} ] tests: - description: "Insert and find FLE2 indexed field" clientOptions: autoEncryptOpts: kmsProviders: local: {'key': {'$binary': {'base64': 'Mng0NCt4ZHVUYUJCa1kxNkVyNUR1QURhZ2h2UzR2d2RrZzh0cFBwM3R6NmdWMDFBMUN3YkQ5aXRRMkhGRGdQV09wOGVNYUMxT2k3NjZKelhaQmRCZGJkTXVyZG9uSjFk', 'subType': '00'}}} operations: - name: insertOne arguments: document: &doc0 { _id: 1, encryptedIndexed: "123" } - name: find arguments: filter: { encryptedIndexed: "123" } result: [*doc0] expectations: - command_started_event: command: listCollections: 1 filter: name: *collection_name command_name: listCollections - command_started_event: command: find: datakeys filter: { "$or": [ { "_id": { "$in": [ {'$binary': {'base64': 'EjRWeBI0mHYSNBI0VniQEg==', 'subType': '04'}} ] } }, { "keyAltNames": { "$in": [] } } ] } $db: keyvault readConcern: { level: "majority" } command_name: find - command_started_event: command: insert: *collection_name documents: - &doc0_encrypted { "_id": 1, "encryptedIndexed": { $$type: "binData" } } ordered: true encryptionInformation: type: 1 schema: default.default: # libmongocrypt applies escCollection and ecocCollection to outgoing command. escCollection: "enxcol_.default.esc" ecocCollection: "enxcol_.default.ecoc" <<: *encrypted_fields command_name: insert - command_started_event: command: find: *collection_name filter: { "encryptedIndexed": { "$eq": { "$binary": { "base64": "DIkAAAAFZAAgAAAAAPGmZcUzdE/FPILvRSyAScGvZparGI2y9rJ/vSBxgCujBXMAIAAAAACi1RjmndKqgnXy7xb22RzUbnZl1sOZRXPOC0KcJkAxmQVsACAAAAAApJtKPW4+o9B7gAynNLL26jtlB4+hq5TXResijcYet8USY20AAAAAAAAAAAAA", "subType": "06" } } } } encryptionInformation: type: 1 schema: default.default: # libmongocrypt applies escCollection and ecocCollection to outgoing command. escCollection: "enxcol_.default.esc" ecocCollection: "enxcol_.default.ecoc" <<: *encrypted_fields command_name: find outcome: collection: # Outcome is checked using a separate MongoClient without auto encryption. data: - { "_id": 1, "encryptedIndexed": { $$type: "binData" }, "__safeContent__": [{ "$binary" : { "base64" : "31eCYlbQoVboc5zwC8IoyJVSkag9PxREka8dkmbXJeY=", "subType" : "00" } }] }mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_encryption/fle2v2-InsertFind-Unindexed.yml000066400000000000000000000104501505113246500335520ustar00rootroot00000000000000# Requires libmongocrypt 1.8.0. 
runOn: - minServerVersion: "7.0.0" maxServerVersion: "7.99.99" # Skip QEv2 (also referred to as FLE2v2) tests on Serverless. Unskip once Serverless enables the QEv2 protocol. # FLE 2 Encrypted collections are not supported on standalone. topology: [ "replicaset", "sharded", "load-balanced" ] database_name: &database_name "default" collection_name: &collection_name "default" data: [] encrypted_fields: {'fields': [{'keyId': {'$binary': {'base64': 'EjRWeBI0mHYSNBI0VniQEg==', 'subType': '04'}}, 'path': 'encryptedIndexed', 'bsonType': 'string', 'queries': {'queryType': 'equality', 'contention': {'$numberLong': '0'}}}, {'keyId': {'$binary': {'base64': 'q83vqxI0mHYSNBI0VniQEg==', 'subType': '04'}}, 'path': 'encryptedUnindexed', 'bsonType': 'string'}]} key_vault_data: [ {'_id': {'$binary': {'base64': 'q83vqxI0mHYSNBI0VniQEg==', 'subType': '04'}}, 'keyMaterial': {'$binary': {'base64': 'HBk9BWihXExNDvTp1lUxOuxuZK2Pe2ZdVdlsxPEBkiO1bS4mG5NNDsQ7zVxJAH8BtdOYp72Ku4Y3nwc0BUpIKsvAKX4eYXtlhv5zUQxWdeNFhg9qK7qb8nqhnnLeT0f25jFSqzWJoT379hfwDeu0bebJHr35QrJ8myZdPMTEDYF08QYQ48ShRBli0S+QzBHHAQiM2iJNr4svg2WR8JSeWQ==', 'subType': '00'}}, 'creationDate': {'$date': {'$numberLong': '1648914851981'}}, 'updateDate': {'$date': {'$numberLong': '1648914851981'}}, 'status': {'$numberInt': '0'}, 'masterKey': {'provider': 'local'}} ] tests: - description: "Insert and find FLE2 unindexed field" clientOptions: autoEncryptOpts: kmsProviders: local: {'key': {'$binary': {'base64': 'Mng0NCt4ZHVUYUJCa1kxNkVyNUR1QURhZ2h2UzR2d2RrZzh0cFBwM3R6NmdWMDFBMUN3YkQ5aXRRMkhGRGdQV09wOGVNYUMxT2k3NjZKelhaQmRCZGJkTXVyZG9uSjFk', 'subType': '00'}}} operations: - name: insertOne arguments: document: &doc0 { _id: 1, encryptedUnindexed: "value123" } - name: find arguments: filter: { _id: 1 } result: [*doc0] expectations: - command_started_event: command: listCollections: 1 filter: name: *collection_name command_name: listCollections - command_started_event: command: find: datakeys filter: { "$or": [ { "_id": { "$in": [ {'$binary': {'base64': 'q83vqxI0mHYSNBI0VniQEg==', 'subType': '04'}} ] } }, { "keyAltNames": { "$in": [] } } ] } $db: keyvault readConcern: { level: "majority" } command_name: find - command_started_event: command: insert: *collection_name documents: - &doc0_encrypted { "_id": 1, "encryptedUnindexed": { $$type: "binData" } } ordered: true command_name: insert - command_started_event: command: find: *collection_name filter: { "_id": { "$eq": 1 }} command_name: find outcome: collection: data: - { "_id": 1, "encryptedUnindexed": { $$type: "binData" } } - description: "Query with an unindexed field fails" clientOptions: autoEncryptOpts: kmsProviders: local: {'key': {'$binary': {'base64': 'Mng0NCt4ZHVUYUJCa1kxNkVyNUR1QURhZ2h2UzR2d2RrZzh0cFBwM3R6NmdWMDFBMUN3YkQ5aXRRMkhGRGdQV09wOGVNYUMxT2k3NjZKelhaQmRCZGJkTXVyZG9uSjFk', 'subType': '00'}}} operations: - name: insertOne arguments: document: { _id: 1, encryptedUnindexed: "value123" } - name: find arguments: filter: { encryptedUnindexed: "value123" } result: # Expected error message changed in https://github.com/10gen/mongo-enterprise-modules/commit/212b584d4f7a44bed41c826a180a4aff00923d7a#diff-5f12b55e8d5c52c2f62853ec595dc2c1e2e5cb4fdbf7a32739a8e3acb3c6f818 # Before the message was "cannot query non-indexed fields with the randomized encryption algorithm" # After: "can only execute encrypted equality queries with an encrypted equality index" # Use a small common substring. 
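# Matching only the short substring "encrypt" keeps the assertion below stable across both of the server error messages quoted above.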
errorContains: "encrypt"mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_encryption/fle2v2-MissingKey.yml000066400000000000000000000040031505113246500316430ustar00rootroot00000000000000# Requires libmongocrypt 1.8.0. runOn: - minServerVersion: "7.0.0" maxServerVersion: "7.99.99" # Skip QEv2 (also referred to as FLE2v2) tests on Serverless. Unskip once Serverless enables the QEv2 protocol. # FLE 2 Encrypted collections are not supported on standalone. topology: [ "replicaset", "sharded", "load-balanced" ] database_name: &database_name "default" collection_name: &collection_name "default" data: [ &doc0 { "encryptedUnindexed": { "$binary": { "base64": "BqvN76sSNJh2EjQSNFZ4kBICTQaVZPWgXp41I7mPV1rLFTtw1tXzjcdSEyxpKKqujlko5TeizkB9hHQ009dVY1+fgIiDcefh+eQrm3CkhQ==", "subType": "06" } } } ] encrypted_fields: {'fields': [{'keyId': {'$binary': {'base64': 'EjRWeBI0mHYSNBI0VniQEg==', 'subType': '04'}}, 'path': 'encryptedIndexed', 'bsonType': 'string', 'queries': {'queryType': 'equality', 'contention': {'$numberLong': '0'}}}, {'keyId': {'$binary': {'base64': 'q83vqxI0mHYSNBI0VniQEg==', 'subType': '04'}}, 'path': 'encryptedUnindexed', 'bsonType': 'string'}]} key_vault_data: [] tests: - description: "FLE2 encrypt fails with mising key" clientOptions: autoEncryptOpts: kmsProviders: local: {'key': {'$binary': {'base64': 'Mng0NCt4ZHVUYUJCa1kxNkVyNUR1QURhZ2h2UzR2d2RrZzh0cFBwM3R6NmdWMDFBMUN3YkQ5aXRRMkhGRGdQV09wOGVNYUMxT2k3NjZKelhaQmRCZGJkTXVyZG9uSjFk', 'subType': '00'}}} operations: - name: insertOne arguments: document: { _id: 1, encryptedIndexed: "123" } result: errorContains: "not all keys requested were satisfied" - description: "FLE2 decrypt fails with mising key" clientOptions: autoEncryptOpts: kmsProviders: local: {'key': {'$binary': {'base64': 'Mng0NCt4ZHVUYUJCa1kxNkVyNUR1QURhZ2h2UzR2d2RrZzh0cFBwM3R6NmdWMDFBMUN3YkQ5aXRRMkhGRGdQV09wOGVNYUMxT2k3NjZKelhaQmRCZGJkTXVyZG9uSjFk', 'subType': '00'}}} operations: - name: find arguments: filter: { } result: errorContains: "not all keys requested were satisfied"mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_encryption/fle2v2-NoEncryption.yml000066400000000000000000000025751505113246500322240ustar00rootroot00000000000000# Requires libmongocrypt 1.8.0. runOn: - minServerVersion: "7.0.0" maxServerVersion: "7.99.99" # Skip QEv2 (also referred to as FLE2v2) tests on Serverless. Unskip once Serverless enables the QEv2 protocol. # FLE 2 Encrypted collections are not supported on standalone. topology: [ "replicaset", "sharded", "load-balanced" ] database_name: &database_name "default" collection_name: &collection_name "default" data: [] key_vault_data: [] encrypted_fields: { "fields": [] } tests: - description: "insert with no encryption succeeds" clientOptions: autoEncryptOpts: kmsProviders: local: {'key': {'$binary': {'base64': 'Mng0NCt4ZHVUYUJCa1kxNkVyNUR1QURhZ2h2UzR2d2RrZzh0cFBwM3R6NmdWMDFBMUN3YkQ5aXRRMkhGRGdQV09wOGVNYUMxT2k3NjZKelhaQmRCZGJkTXVyZG9uSjFk', 'subType': '00'}}} operations: - name: insertOne arguments: document: &doc0 { _id: 1, foo: "bar" } expectations: - command_started_event: command: listCollections: 1 filter: name: *collection_name command_name: listCollections - command_started_event: command: insert: *collection_name documents: - *doc0 ordered: true command_name: insert outcome: collection: data: - *doc0mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_encryption/fle2v2-Update.yml000066400000000000000000000211531505113246500310100ustar00rootroot00000000000000# Requires libmongocrypt 1.8.0. 
runOn: - minServerVersion: "7.0.0" maxServerVersion: "7.99.99" # Skip QEv2 (also referred to as FLE2v2) tests on Serverless. Unskip once Serverless enables the QEv2 protocol. # FLE 2 Encrypted collections are not supported on standalone. topology: [ "replicaset", "sharded", "load-balanced" ] database_name: &database_name "default" collection_name: &collection_name "default" data: [] encrypted_fields: &encrypted_fields {'fields': [{'keyId': {'$binary': {'base64': 'EjRWeBI0mHYSNBI0VniQEg==', 'subType': '04'}}, 'path': 'encryptedIndexed', 'bsonType': 'string', 'queries': {'queryType': 'equality', 'contention': {'$numberLong': '0'}}}, {'keyId': {'$binary': {'base64': 'q83vqxI0mHYSNBI0VniQEg==', 'subType': '04'}}, 'path': 'encryptedUnindexed', 'bsonType': 'string'}]} key_vault_data: [ {'_id': {'$binary': {'base64': 'EjRWeBI0mHYSNBI0VniQEg==', 'subType': '04'}}, 'keyMaterial': {'$binary': {'base64': 'sHe0kz57YW7v8g9VP9sf/+K1ex4JqKc5rf/URX3n3p8XdZ6+15uXPaSayC6adWbNxkFskuMCOifDoTT+rkqMtFkDclOy884RuGGtUysq3X7zkAWYTKi8QAfKkajvVbZl2y23UqgVasdQu3OVBQCrH/xY00nNAs/52e958nVjBuzQkSb1T8pKJAyjZsHJ60+FtnfafDZSTAIBJYn7UWBCwQ==', 'subType': '00'}}, 'creationDate': {'$date': {'$numberLong': '1648914851981'}}, 'updateDate': {'$date': {'$numberLong': '1648914851981'}}, 'status': {'$numberInt': '0'}, 'masterKey': {'provider': 'local'}} ] tests: - description: "Update can query an FLE2 indexed field" clientOptions: autoEncryptOpts: kmsProviders: local: {'key': {'$binary': {'base64': 'Mng0NCt4ZHVUYUJCa1kxNkVyNUR1QURhZ2h2UzR2d2RrZzh0cFBwM3R6NmdWMDFBMUN3YkQ5aXRRMkhGRGdQV09wOGVNYUMxT2k3NjZKelhaQmRCZGJkTXVyZG9uSjFk', 'subType': '00'}}} operations: - name: insertOne arguments: document: {"_id": 1, "encryptedIndexed": "value123" } - name: updateOne arguments: filter: { "encryptedIndexed": "value123" } update: { "$set": { "foo": "bar"}} result: matchedCount: 1 modifiedCount: 1 upsertedCount: 0 expectations: - command_started_event: command: listCollections: 1 filter: name: *collection_name command_name: listCollections - command_started_event: command: find: datakeys filter: { "$or": [ { "_id": { "$in": [ {'$binary': {'base64': 'EjRWeBI0mHYSNBI0VniQEg==', 'subType': '04'}} ] } }, { "keyAltNames": { "$in": [] } } ] } $db: keyvault readConcern: { level: "majority" } command_name: find - command_started_event: command: insert: *collection_name documents: - { "_id": 1, "encryptedIndexed": { $$type: "binData" } } ordered: true encryptionInformation: type: 1 schema: "default.default": # libmongocrypt applies escCollection and ecocCollection to outgoing command. escCollection: "enxcol_.default.esc" ecocCollection: "enxcol_.default.ecoc" <<: *encrypted_fields command_name: insert - command_started_event: command: update: *collection_name updates: - { "q": { "encryptedIndexed": { "$eq": { "$binary": { "base64": "DIkAAAAFZAAgAAAAAPtVteJQAlgb2YMa/+7YWH00sbQPyt7L6Rb8OwBdMmL2BXMAIAAAAAAd44hgVKnEnTFlwNVC14oyc9OZOTspeymusqkRQj57nAVsACAAAAAAaZ9s3G+4znfxStxeOZwcZy1OhzjMGc5hjmdMN+b/w6kSY20AAAAAAAAAAAAA", "subType": "06" } } } }, "u": { "$set": { "foo": "bar"} } } ordered: true encryptionInformation: type: 1 schema: "default.default": # libmongocrypt applies escCollection and ecocCollection to outgoing command. 
escCollection: "enxcol_.default.esc" ecocCollection: "enxcol_.default.ecoc" <<: *encrypted_fields command_name: update outcome: collection: data: - { "_id": 1, "encryptedIndexed": { "$$type": "binData" }, "foo": "bar", "__safeContent__": [{ "$binary" : { "base64" : "ThpoKfQ8AkOzkFfNC1+9PF0pY2nIzfXvRdxQgjkNbBw=", "subType" : "00" } }] } - description: "Update can modify an FLE2 indexed field" clientOptions: autoEncryptOpts: kmsProviders: local: {'key': {'$binary': {'base64': 'Mng0NCt4ZHVUYUJCa1kxNkVyNUR1QURhZ2h2UzR2d2RrZzh0cFBwM3R6NmdWMDFBMUN3YkQ5aXRRMkhGRGdQV09wOGVNYUMxT2k3NjZKelhaQmRCZGJkTXVyZG9uSjFk', 'subType': '00'}}} operations: - name: insertOne arguments: document: {"_id": 1, "encryptedIndexed": "value123" } - name: updateOne arguments: filter: { "encryptedIndexed": "value123" } update: { "$set": { "encryptedIndexed": "value456"}} result: matchedCount: 1 modifiedCount: 1 upsertedCount: 0 - name: find arguments: filter: { "_id": 1} result: [ "encryptedIndexed": "value456" ] expectations: - command_started_event: command: listCollections: 1 filter: name: *collection_name command_name: listCollections - command_started_event: command: find: datakeys filter: { "$or": [ { "_id": { "$in": [ {'$binary': {'base64': 'EjRWeBI0mHYSNBI0VniQEg==', 'subType': '04'}} ] } }, { "keyAltNames": { "$in": [] } } ] } $db: keyvault readConcern: { level: "majority" } command_name: find - command_started_event: command: insert: *collection_name documents: - { "_id": 1, "encryptedIndexed": { $$type: "binData" } } ordered: true encryptionInformation: type: 1 schema: "default.default": # libmongocrypt applies escCollection and ecocCollection to outgoing command. escCollection: "enxcol_.default.esc" ecocCollection: "enxcol_.default.ecoc" <<: *encrypted_fields command_name: insert - command_started_event: command: update: *collection_name updates: - { "q": { "encryptedIndexed": { "$eq": { "$binary": { "base64": "DIkAAAAFZAAgAAAAAPtVteJQAlgb2YMa/+7YWH00sbQPyt7L6Rb8OwBdMmL2BXMAIAAAAAAd44hgVKnEnTFlwNVC14oyc9OZOTspeymusqkRQj57nAVsACAAAAAAaZ9s3G+4znfxStxeOZwcZy1OhzjMGc5hjmdMN+b/w6kSY20AAAAAAAAAAAAA", "subType": "06" } } } }, "u": { "$set": { "encryptedIndexed": { "$$type": "binData" }} } } ordered: true encryptionInformation: type: 1 schema: "default.default": # libmongocrypt applies escCollection and ecocCollection to outgoing command. escCollection: "enxcol_.default.esc" ecocCollection: "enxcol_.default.ecoc" <<: *encrypted_fields command_name: update - command_started_event: command: find: *collection_name filter: { "_id": { "$eq": 1 }} command_name: find outcome: collection: data: - { "_id": 1, "encryptedIndexed": { "$$type": "binData" }, "__safeContent__": [{ "$binary" : { "base64" : "rhe7/w8Ob8Unl44rGr/moScx6m5VODQnscDhF4Nkn6g=", "subType" : "00" } }] }fle2v2-validatorAndPartialFieldExpression.yml000066400000000000000000000206111505113246500364560ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_encryption# Requires libmongocrypt 1.8.0. runOn: # Require server version 6.0.0 to get behavior added in SERVER-64911. - minServerVersion: "7.0.0" maxServerVersion: "7.99.99" # Skip QEv2 (also referred to as FLE2v2) tests on Serverless. Unskip once Serverless enables the QEv2 protocol. # FLE 2 Encrypted collections are not supported on standalone. 
topology: [ "replicaset", "sharded", "load-balanced" ] database_name: &database_name "default" collection_name: &collection_name "default" data: [] tests: - description: "create with a validator on an unencrypted field is OK" clientOptions: autoEncryptOpts: kmsProviders: local: {'key': {'$binary': {'base64': 'Mng0NCt4ZHVUYUJCa1kxNkVyNUR1QURhZ2h2UzR2d2RrZzh0cFBwM3R6NmdWMDFBMUN3YkQ5aXRRMkhGRGdQV09wOGVNYUMxT2k3NjZKelhaQmRCZGJkTXVyZG9uSjFk', 'subType': '00'}}} encryptedFieldsMap: "default.encryptedCollection": {'fields': [{'keyId': {'$binary': {'base64': 'EjRWeBI0mHYSNBI0VniQEg==', 'subType': '04'}}, 'path': 'encryptedIndexed', 'bsonType': 'string', 'queries': {'queryType': 'equality', 'contention': {'$numberLong': '0'}}}, {'keyId': {'$binary': {'base64': 'q83vqxI0mHYSNBI0VniQEg==', 'subType': '04'}}, 'path': 'encryptedUnindexed', 'bsonType': 'string'}]} operations: # Drop to remove a collection that may exist from previous test runs. - name: dropCollection object: database arguments: collection: "encryptedCollection" - name: createCollection object: database arguments: collection: "encryptedCollection" validator: unencrypted_string: "foo" - name: assertCollectionExists object: testRunner arguments: database: *database_name collection: "encryptedCollection" - description: "create with a validator on an encrypted field is an error" clientOptions: autoEncryptOpts: kmsProviders: local: {'key': {'$binary': {'base64': 'Mng0NCt4ZHVUYUJCa1kxNkVyNUR1QURhZ2h2UzR2d2RrZzh0cFBwM3R6NmdWMDFBMUN3YkQ5aXRRMkhGRGdQV09wOGVNYUMxT2k3NjZKelhaQmRCZGJkTXVyZG9uSjFk', 'subType': '00'}}} encryptedFieldsMap: "default.encryptedCollection": {'fields': [{'keyId': {'$binary': {'base64': 'EjRWeBI0mHYSNBI0VniQEg==', 'subType': '04'}}, 'path': 'encryptedIndexed', 'bsonType': 'string', 'queries': {'queryType': 'equality', 'contention': {'$numberLong': '0'}}}, {'keyId': {'$binary': {'base64': 'q83vqxI0mHYSNBI0VniQEg==', 'subType': '04'}}, 'path': 'encryptedUnindexed', 'bsonType': 'string'}]} operations: # Drop to remove a collection that may exist from previous test runs. - name: dropCollection object: database arguments: collection: "encryptedCollection" - name: createCollection object: database arguments: collection: "encryptedCollection" validator: encryptedIndexed: "foo" result: errorContains: "Comparison to encrypted fields not supported" - description: "collMod with a validator on an unencrypted field is OK" clientOptions: autoEncryptOpts: kmsProviders: local: {'key': {'$binary': {'base64': 'Mng0NCt4ZHVUYUJCa1kxNkVyNUR1QURhZ2h2UzR2d2RrZzh0cFBwM3R6NmdWMDFBMUN3YkQ5aXRRMkhGRGdQV09wOGVNYUMxT2k3NjZKelhaQmRCZGJkTXVyZG9uSjFk', 'subType': '00'}}} encryptedFieldsMap: "default.encryptedCollection": {'fields': [{'keyId': {'$binary': {'base64': 'EjRWeBI0mHYSNBI0VniQEg==', 'subType': '04'}}, 'path': 'encryptedIndexed', 'bsonType': 'string', 'queries': {'queryType': 'equality', 'contention': {'$numberLong': '0'}}}, {'keyId': {'$binary': {'base64': 'q83vqxI0mHYSNBI0VniQEg==', 'subType': '04'}}, 'path': 'encryptedUnindexed', 'bsonType': 'string'}]} operations: # Drop to remove a collection that may exist from previous test runs. 
- name: dropCollection object: database arguments: collection: "encryptedCollection" - name: createCollection object: database arguments: collection: "encryptedCollection" - name: runCommand object: database arguments: command: collMod: "encryptedCollection" validator: unencrypted_string: "foo" - description: "collMod with a validator on an encrypted field is an error" clientOptions: autoEncryptOpts: kmsProviders: local: {'key': {'$binary': {'base64': 'Mng0NCt4ZHVUYUJCa1kxNkVyNUR1QURhZ2h2UzR2d2RrZzh0cFBwM3R6NmdWMDFBMUN3YkQ5aXRRMkhGRGdQV09wOGVNYUMxT2k3NjZKelhaQmRCZGJkTXVyZG9uSjFk', 'subType': '00'}}} encryptedFieldsMap: "default.encryptedCollection": {'fields': [{'keyId': {'$binary': {'base64': 'EjRWeBI0mHYSNBI0VniQEg==', 'subType': '04'}}, 'path': 'encryptedIndexed', 'bsonType': 'string', 'queries': {'queryType': 'equality', 'contention': {'$numberLong': '0'}}}, {'keyId': {'$binary': {'base64': 'q83vqxI0mHYSNBI0VniQEg==', 'subType': '04'}}, 'path': 'encryptedUnindexed', 'bsonType': 'string'}]} operations: # Drop to remove a collection that may exist from previous test runs. - name: dropCollection object: database arguments: collection: "encryptedCollection" - name: createCollection object: database arguments: collection: "encryptedCollection" - name: runCommand object: database arguments: command: collMod: "encryptedCollection" validator: encryptedIndexed: "foo" result: errorContains: "Comparison to encrypted fields not supported" - description: "createIndexes with a partialFilterExpression on an unencrypted field is OK" clientOptions: autoEncryptOpts: kmsProviders: local: {'key': {'$binary': {'base64': 'Mng0NCt4ZHVUYUJCa1kxNkVyNUR1QURhZ2h2UzR2d2RrZzh0cFBwM3R6NmdWMDFBMUN3YkQ5aXRRMkhGRGdQV09wOGVNYUMxT2k3NjZKelhaQmRCZGJkTXVyZG9uSjFk', 'subType': '00'}}} encryptedFieldsMap: "default.encryptedCollection": {'fields': [{'keyId': {'$binary': {'base64': 'EjRWeBI0mHYSNBI0VniQEg==', 'subType': '04'}}, 'path': 'encryptedIndexed', 'bsonType': 'string', 'queries': {'queryType': 'equality', 'contention': {'$numberLong': '0'}}}, {'keyId': {'$binary': {'base64': 'q83vqxI0mHYSNBI0VniQEg==', 'subType': '04'}}, 'path': 'encryptedUnindexed', 'bsonType': 'string'}]} operations: # Drop to remove a collection that may exist from previous test runs. 
- name: dropCollection object: database arguments: collection: "encryptedCollection" - name: createCollection object: database arguments: collection: "encryptedCollection" - name: runCommand object: database arguments: command: createIndexes: "encryptedCollection" indexes: - name: "name" key: { name: 1 } partialFilterExpression: unencrypted_string: "foo" - name: assertIndexExists object: testRunner arguments: database: *database_name collection: "encryptedCollection" index: name - description: "createIndexes with a partialFilterExpression on an encrypted field is an error" clientOptions: autoEncryptOpts: kmsProviders: local: {'key': {'$binary': {'base64': 'Mng0NCt4ZHVUYUJCa1kxNkVyNUR1QURhZ2h2UzR2d2RrZzh0cFBwM3R6NmdWMDFBMUN3YkQ5aXRRMkhGRGdQV09wOGVNYUMxT2k3NjZKelhaQmRCZGJkTXVyZG9uSjFk', 'subType': '00'}}} encryptedFieldsMap: "default.encryptedCollection": {'fields': [{'keyId': {'$binary': {'base64': 'EjRWeBI0mHYSNBI0VniQEg==', 'subType': '04'}}, 'path': 'encryptedIndexed', 'bsonType': 'string', 'queries': {'queryType': 'equality', 'contention': {'$numberLong': '0'}}}, {'keyId': {'$binary': {'base64': 'q83vqxI0mHYSNBI0VniQEg==', 'subType': '04'}}, 'path': 'encryptedUnindexed', 'bsonType': 'string'}]} operations: # Drop to remove a collection that may exist from previous test runs. - name: dropCollection object: database arguments: collection: "encryptedCollection" - name: createCollection object: database arguments: collection: "encryptedCollection" - name: runCommand object: database arguments: command: createIndexes: "encryptedCollection" indexes: - name: "name" key: { name: 1 } partialFilterExpression: encryptedIndexed: "foo" result: errorContains: "Comparison to encrypted fields not supported"
mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_encryption/gcpKMS.yml
runOn: - minServerVersion: "4.1.10" database_name: &database_name "default" collection_name: &collection_name "default" data: [] json_schema: {'properties': {'encrypted_string_aws': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}, 'encrypted_string_azure': {'encrypt': {'keyId': [{'$binary': {'base64': 'AZURE+AAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}, 'encrypted_string_gcp': {'encrypt': {'keyId': [{'$binary': {'base64': 'GCP+AAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}, 'encrypted_string_local': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}, 'encrypted_string_kmip': {'encrypt': {'keyId': [{'$binary': {'base64': 'dBHpr8aITfeBQ15grpbLpQ==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}}, 'bsonType': 'object'} key_vault_data: [{'_id': {'$binary': {'base64': 'GCP+AAAAAAAAAAAAAAAAAA==', 'subType': '04'}}, 'keyMaterial': {'$binary': {'base64': 'CiQAIgLj0WyktnB4dfYHo5SLZ41K4ASQrjJUaSzl5vvVH0G12G0SiQEAjlV8XPlbnHDEDFbdTO4QIe8ER2/172U1ouLazG0ysDtFFIlSvWX5ZnZUrRMmp/R2aJkzLXEt/zf8Mn4Lfm+itnjgo5R9K4pmPNvvPKNZX5C16lrPT+aA+rd+zXFSmlMg3i5jnxvTdLHhg3G7Q/Uv1ZIJskKt95bzLoe0tUVzRWMYXLIEcohnQg==', 'subType': '00'}}, 'creationDate': {'$date': {'$numberLong': '1601574333107'}}, 'updateDate':
{'$date': {'$numberLong': '1601574333107'}}, 'status': {'$numberInt': '0'}, 'masterKey': {'provider': 'gcp', 'projectId': 'devprod-drivers', 'location': 'global', 'keyRing': 'key-ring-csfle', 'keyName': 'key-name-csfle'}, 'keyAltNames': ['altname', 'gcp_altname']}] tests: - description: "Insert a document with auto encryption using GCP KMS provider" clientOptions: autoEncryptOpts: kmsProviders: gcp: {} operations: - name: insertOne arguments: document: &doc0 { _id: 1, encrypted_string_gcp: "string0" } expectations: # Auto encryption will request the collection info. - command_started_event: command: listCollections: 1 filter: name: *collection_name command_name: listCollections # Then key is fetched from the key vault. - command_started_event: command: find: datakeys filter: { $or: [ { _id: { $in: [ {'$binary': {'base64': 'GCP+AAAAAAAAAAAAAAAAAA==', 'subType': '04'}} ] } }, { keyAltNames: { $in: [] } } ] } $db: keyvault command_name: find - command_started_event: command: insert: *collection_name documents: - &doc0_encrypted { _id: 1, encrypted_string_gcp: {'$binary': {'base64': 'ARgj/gAAAAAAAAAAAAAAAAACwFd+Y5Ojw45GUXNvbcIpN9YkRdoHDHkR4kssdn0tIMKlDQOLFkWFY9X07IRlXsxPD8DcTiKnl6XINK28vhcGlg==', 'subType': '06'}} } ordered: true command_name: insert outcome: collection: # Outcome is checked using a separate MongoClient without auto encryption. data: - *doc0_encrypted
mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_encryption/getMore.yml
runOn: - minServerVersion: "4.1.10" database_name: &database_name "default" collection_name: &collection_name "default" data: - &doc0_encrypted { _id: 1, encrypted_string: {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAACwj+3zkv2VM+aTfk60RqhXq6a/77WlLwu/BxXFkL7EppGsju/m8f0x5kBDD3EZTtGALGXlym5jnpZAoSIkswHoA==', 'subType': '06'}} } - &doc1_encrypted { _id: 2, encrypted_string: {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAACDdw4KFz3ZLquhsbt7RmDjD0N67n0uSXx7IGnQNCLeIKvot6s/ouI21Eo84IOtb6lhwUNPlSEBNY0/hbszWAKJg==', 'subType': '06'}} } - &doc2_encrypted { _id: 3, encrypted_string: {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAACQ76HWOut3DZtQuV90hp1aaCpZn95vZIaWmn+wrBehcEtcFwyJlBdlyzDzZTWPZCPgiFq72Wvh6Y7VbpU9NAp3A==', 'subType': '06'}} } json_schema: {'properties': {'encrypted_w_altname': {'encrypt': {'keyId': '/altname', 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Random'}}, 'encrypted_string': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}, 'random': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Random'}}, 'encrypted_string_equivalent': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}}, 'bsonType': 'object'} key_vault_data: [{'status': 1, '_id': {'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}, 'masterKey': {'provider': 'aws', 'key': 'arn:aws:kms:us-east-1:579766882180:key/89fcc2c4-08b0-4bd9-9f25-e30687b580d0', 'region': 'us-east-1'}, 'updateDate': {'$date': {'$numberLong': '1552949630483'}}, 'keyMaterial': {'$binary': {'base64':
'AQICAHhQNmWG2CzOm1dq3kWLM+iDUZhEqnhJwH9wZVpuZ94A8gEqnsxXlR51T5EbEVezUqqKAAAAwjCBvwYJKoZIhvcNAQcGoIGxMIGuAgEAMIGoBgkqhkiG9w0BBwEwHgYJYIZIAWUDBAEuMBEEDHa4jo6yp0Z18KgbUgIBEIB74sKxWtV8/YHje5lv5THTl0HIbhSwM6EqRlmBiFFatmEWaeMk4tO4xBX65eq670I5TWPSLMzpp8ncGHMmvHqRajNBnmFtbYxN3E3/WjxmdbOOe+OXpnGJPcGsftc7cB2shRfA4lICPnE26+oVNXT6p0Lo20nY5XC7jyCO', 'subType': '00'}}, 'creationDate': {'$date': {'$numberLong': '1552949630483'}}, 'keyAltNames': ['altname', 'another_altname']}] tests: - description: "getMore with encryption" clientOptions: autoEncryptOpts: kmsProviders: aws: {} # Credentials filled in from environment. operations: - name: find arguments: batchSize: 2 filter: {} result: - { _id: 1, encrypted_string: "string0" } - { _id: 2, encrypted_string: "string1" } - { _id: 3, encrypted_string: "string2" } expectations: # Auto encryption will request the collection info. - command_started_event: command: listCollections: 1 filter: name: *collection_name command_name: listCollections - command_started_event: command: find: *collection_name batchSize: 2 command_name: find # Then key is fetched from the key vault. - command_started_event: command: find: datakeys filter: {"$or": [{"_id": {"$in": [ {'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}} ] }}, {"keyAltNames": {"$in": []}}]} $db: keyvault readConcern: { level: "majority" } command_name: find - command_started_event: command: getMore: { $$type: "long" } collection: *collection_name batchSize: 2 command_name: getMore outcome: collection: # Outcome is checked using a separate MongoClient without auto encryption. data: - *doc0_encrypted - *doc1_encrypted - *doc2_encrypted
mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_encryption/insert.yml
runOn: - minServerVersion: "4.1.10" database_name: &database_name "default" collection_name: &collection_name "default" data: [] json_schema: {'properties': {'encrypted_w_altname': {'encrypt': {'keyId': '/altname', 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Random'}}, 'encrypted_string': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}, 'random': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Random'}}, 'encrypted_string_equivalent': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}}, 'bsonType': 'object'} key_vault_data: [{'status': 1, '_id': {'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}, 'masterKey': {'provider': 'aws', 'key': 'arn:aws:kms:us-east-1:579766882180:key/89fcc2c4-08b0-4bd9-9f25-e30687b580d0', 'region': 'us-east-1'}, 'updateDate': {'$date': {'$numberLong': '1552949630483'}}, 'keyMaterial': {'$binary': {'base64': 'AQICAHhQNmWG2CzOm1dq3kWLM+iDUZhEqnhJwH9wZVpuZ94A8gEqnsxXlR51T5EbEVezUqqKAAAAwjCBvwYJKoZIhvcNAQcGoIGxMIGuAgEAMIGoBgkqhkiG9w0BBwEwHgYJYIZIAWUDBAEuMBEEDHa4jo6yp0Z18KgbUgIBEIB74sKxWtV8/YHje5lv5THTl0HIbhSwM6EqRlmBiFFatmEWaeMk4tO4xBX65eq670I5TWPSLMzpp8ncGHMmvHqRajNBnmFtbYxN3E3/WjxmdbOOe+OXpnGJPcGsftc7cB2shRfA4lICPnE26+oVNXT6p0Lo20nY5XC7jyCO', 'subType': '00'}}, 'creationDate': {'$date': {'$numberLong': '1552949630483'}}, 'keyAltNames': ['altname', 'another_altname']}] tests: -
description: "insertOne with encryption" clientOptions: autoEncryptOpts: kmsProviders: aws: {} # Credentials filled in from environment. operations: - name: insertOne arguments: document: &doc0 { _id: 1, encrypted_string: "string0", random: "abc" } expectations: # Auto encryption will request the collection info. - command_started_event: command: listCollections: 1 filter: name: *collection_name command_name: listCollections # Then key is fetched from the key vault. - command_started_event: command: find: datakeys filter: {"$or": [{"_id": {"$in": [ {'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}} ] }}, {"keyAltNames": {"$in": []}}]} $db: keyvault readConcern: { level: "majority" } command_name: find - command_started_event: command: insert: *collection_name documents: - &doc0_encrypted { _id: 1, encrypted_string: {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAACwj+3zkv2VM+aTfk60RqhXq6a/77WlLwu/BxXFkL7EppGsju/m8f0x5kBDD3EZTtGALGXlym5jnpZAoSIkswHoA==', 'subType': '06'}}, random: { $$type: "binData" } } ordered: true command_name: insert outcome: collection: # Outcome is checked using a separate MongoClient without auto encryption. data: - *doc0_encrypted - description: "insertMany with encryption" clientOptions: autoEncryptOpts: kmsProviders: aws: {} # Credentials filled in from environment. operations: - name: insertMany arguments: documents: - *doc0 - &doc1 { _id: 2, encrypted_string: "string1" } expectations: # Auto encryption will request the collection info. - command_started_event: command: listCollections: 1 filter: name: *collection_name command_name: listCollections # Then key is fetched from the key vault. - command_started_event: command: find: datakeys filter: {"$or": [{"_id": {"$in": [ {'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}} ] }}, {"keyAltNames": {"$in": []}}]} $db: keyvault readConcern: { level: "majority" } command_name: find - command_started_event: command: insert: *collection_name documents: - *doc0_encrypted - &doc1_encrypted { _id: 2, encrypted_string: {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAACDdw4KFz3ZLquhsbt7RmDjD0N67n0uSXx7IGnQNCLeIKvot6s/ouI21Eo84IOtb6lhwUNPlSEBNY0/hbszWAKJg==', 'subType': '06'}} } ordered: true command_name: insert outcome: collection: # Outcome is checked using a separate MongoClient without auto encryption. 
data: - *doc0_encrypted - *doc1_encrypted
mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_encryption/keyAltName.yml
runOn: - minServerVersion: "4.1.10" database_name: &database_name "default" collection_name: &collection_name "default" data: [] json_schema: {'properties': {'encrypted_w_altname': {'encrypt': {'keyId': '/altname', 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Random'}}, 'encrypted_string': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}, 'random': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Random'}}, 'encrypted_string_equivalent': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}}, 'bsonType': 'object'} key_vault_data: [{'status': 1, '_id': {'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}, 'masterKey': {'provider': 'aws', 'key': 'arn:aws:kms:us-east-1:579766882180:key/89fcc2c4-08b0-4bd9-9f25-e30687b580d0', 'region': 'us-east-1'}, 'updateDate': {'$date': {'$numberLong': '1552949630483'}}, 'keyMaterial': {'$binary': {'base64': 'AQICAHhQNmWG2CzOm1dq3kWLM+iDUZhEqnhJwH9wZVpuZ94A8gEqnsxXlR51T5EbEVezUqqKAAAAwjCBvwYJKoZIhvcNAQcGoIGxMIGuAgEAMIGoBgkqhkiG9w0BBwEwHgYJYIZIAWUDBAEuMBEEDHa4jo6yp0Z18KgbUgIBEIB74sKxWtV8/YHje5lv5THTl0HIbhSwM6EqRlmBiFFatmEWaeMk4tO4xBX65eq670I5TWPSLMzpp8ncGHMmvHqRajNBnmFtbYxN3E3/WjxmdbOOe+OXpnGJPcGsftc7cB2shRfA4lICPnE26+oVNXT6p0Lo20nY5XC7jyCO', 'subType': '00'}}, 'creationDate': {'$date': {'$numberLong': '1552949630483'}}, 'keyAltNames': ['altname', 'another_altname']}] tests: - description: "Insert with encryption using key alt name" clientOptions: autoEncryptOpts: kmsProviders: aws: {} # Credentials filled in from environment. operations: - name: insertOne arguments: document: &doc0 { _id: 1, encrypted_w_altname: "string0", altname: "altname" } expectations: # Auto encryption will request the collection info. - command_started_event: command: listCollections: 1 filter: name: *collection_name command_name: listCollections # Then key is fetched from the key vault. - command_started_event: command: find: datakeys filter: {$or: [ { _id: { $in: [] } }, { keyAltNames: { $in: [ "altname" ] } } ] } $db: keyvault readConcern: { level: "majority" } command_name: find - command_started_event: command: insert: *collection_name documents: - &doc0_encrypted { _id: 1, encrypted_w_altname: { $$type: "binData" }, altname: "altname" } ordered: true command_name: insert outcome: collection: # Outcome is checked using a separate MongoClient without auto encryption. data: - *doc0_encrypted - description: "Replace with key alt name fails" clientOptions: autoEncryptOpts: kmsProviders: aws: {} # Credentials filled in from environment. operations: - name: updateOne arguments: filter: {} update: { $set: { encrypted_w_altname: "string0" } } upsert: true result: errorContains: "A non-static (JSONPointer) keyId is not supported" outcome: collection: # Outcome is checked using a separate MongoClient without auto encryption.
# ---- mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_encryption/keyAltName.yml ----

runOn:
  - minServerVersion: "4.1.10"
database_name: &database_name "default"
collection_name: &collection_name "default"

data: []
json_schema: {'properties': {'encrypted_w_altname': {'encrypt': {'keyId': '/altname', 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Random'}}, 'encrypted_string': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}, 'random': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Random'}}, 'encrypted_string_equivalent': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}}, 'bsonType': 'object'}
key_vault_data: [{'status': 1, '_id': {'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}, 'masterKey': {'provider': 'aws', 'key': 'arn:aws:kms:us-east-1:579766882180:key/89fcc2c4-08b0-4bd9-9f25-e30687b580d0', 'region': 'us-east-1'}, 'updateDate': {'$date': {'$numberLong': '1552949630483'}}, 'keyMaterial': {'$binary': {'base64': 'AQICAHhQNmWG2CzOm1dq3kWLM+iDUZhEqnhJwH9wZVpuZ94A8gEqnsxXlR51T5EbEVezUqqKAAAAwjCBvwYJKoZIhvcNAQcGoIGxMIGuAgEAMIGoBgkqhkiG9w0BBwEwHgYJYIZIAWUDBAEuMBEEDHa4jo6yp0Z18KgbUgIBEIB74sKxWtV8/YHje5lv5THTl0HIbhSwM6EqRlmBiFFatmEWaeMk4tO4xBX65eq670I5TWPSLMzpp8ncGHMmvHqRajNBnmFtbYxN3E3/WjxmdbOOe+OXpnGJPcGsftc7cB2shRfA4lICPnE26+oVNXT6p0Lo20nY5XC7jyCO', 'subType': '00'}}, 'creationDate': {'$date': {'$numberLong': '1552949630483'}}, 'keyAltNames': ['altname', 'another_altname']}]

tests:
  - description: "Insert with encryption using key alt name"
    clientOptions:
      autoEncryptOpts:
        kmsProviders:
          aws: {} # Credentials filled in from environment.
    operations:
      - name: insertOne
        arguments:
          document: &doc0 { _id: 1, encrypted_w_altname: "string0", altname: "altname" }
    expectations:
      # Auto encryption will request the collection info.
      - command_started_event:
          command:
            listCollections: 1
            filter:
              name: *collection_name
          command_name: listCollections
      # Then key is fetched from the key vault.
      - command_started_event:
          command:
            find: datakeys
            filter: {$or: [ { _id: { $in: [] } }, { keyAltNames: { $in: [ "altname" ] } } ] }
            $db: keyvault
            readConcern: { level: "majority" }
          command_name: find
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - &doc0_encrypted { _id: 1, encrypted_w_altname: { $$type: "binData" }, altname: "altname" }
            ordered: true
          command_name: insert
    outcome:
      collection:
        # Outcome is checked using a separate MongoClient without auto encryption.
        data:
          - *doc0_encrypted
  - description: "Replace with key alt name fails"
    clientOptions:
      autoEncryptOpts:
        kmsProviders:
          aws: {} # Credentials filled in from environment.
    operations:
      - name: updateOne
        arguments:
          filter: {}
          update: { $set: { encrypted_w_altname: "string0" } }
          upsert: true
        result:
          errorContains: "A non-static (JSONPointer) keyId is not supported"
    outcome:
      collection:
        # Outcome is checked using a separate MongoClient without auto encryption.
        data: []

# ---- mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_encryption/localKMS.yml ----

runOn:
  - minServerVersion: "4.1.10"
database_name: &database_name "default"
collection_name: &collection_name "default"

data: []
json_schema: {'properties': {'encrypted_string': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}, 'random': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Random'}}}, 'bsonType': 'object'}
key_vault_data: [{'_id': {'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}, 'keyMaterial': {'$binary': {'base64': 'Ce9HSz/HKKGkIt4uyy+jDuKGA+rLC2cycykMo6vc8jXxqa1UVDYHWq1r+vZKbnnSRBfB981akzRKZCFpC05CTyFqDhXv6OnMjpG97OZEREGIsHEYiJkBW0jJJvfLLgeLsEpBzsro9FztGGXASxyxFRZFhXvHxyiLOKrdWfs7X1O/iK3pEoHMx6uSNSfUOgbebLfIqW7TO++iQS5g1xovXA==', 'subType': '00'}}, 'creationDate': {'$date': {'$numberLong': '1552949630483'}}, 'updateDate': {'$date': {'$numberLong': '1552949630483'}}, 'status': {'$numberInt': '0'}, 'masterKey': {'provider': 'local'}}]

tests:
  - description: "Insert a document with auto encryption using local KMS provider"
    clientOptions:
      autoEncryptOpts:
        kmsProviders:
          aws: {} # Credentials filled in from environment.
          local: {'key': {'$binary': {'base64': 'Mng0NCt4ZHVUYUJCa1kxNkVyNUR1QURhZ2h2UzR2d2RrZzh0cFBwM3R6NmdWMDFBMUN3YkQ5aXRRMkhGRGdQV09wOGVNYUMxT2k3NjZKelhaQmRCZGJkTXVyZG9uSjFk', 'subType': '00'}}}
    operations:
      - name: insertOne
        arguments:
          document: &doc0 { _id: 1, encrypted_string: "string0", random: "abc" }
    expectations:
      # Auto encryption will request the collection info.
      - command_started_event:
          command:
            listCollections: 1
            filter:
              name: *collection_name
          command_name: listCollections
      # Then key is fetched from the key vault.
      - command_started_event:
          command:
            find: datakeys
            filter: { $or: [ { _id: { $in: [ {'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}} ] } }, { keyAltNames: { $in: [] } } ] }
            $db: keyvault
          command_name: find
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - &doc0_encrypted { _id: 1, encrypted_string: {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAACV/+zJmpqMU47yxS/xIVAviGi7wHDuFwaULAixEAoIh0xHz73UYOM3D8D44gcJn67EROjbz4ITpYzzlCJovDL0Q==', 'subType': '06'}}, random: { $$type: "binData" } }
            ordered: true
          command_name: insert
    outcome:
      collection:
        # Outcome is checked using a separate MongoClient without auto encryption.
        data:
          - *doc0_encrypted
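localKMS.yml swaps the AWS master key for a 96-byte key held by the application itself. A minimal sketch of the corresponding client configuration, assuming a local deployment (the key below is freshly generated for illustration only; a real local KMS key must be stored durably, since data keys wrapped with it are unrecoverable without it):

    require 'mongo'
    require 'securerandom'

    # 96-byte master key for the "local" KMS provider (illustrative).
    local_master_key = SecureRandom.random_bytes(96)

    client = Mongo::Client.new(['localhost:27017'],
      database: 'default',
      auto_encryption_options: {
        key_vault_namespace: 'keyvault.datakeys',
        kms_providers: { local: { key: local_master_key } },
      })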
# ---- mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_encryption/localSchema.yml ----

runOn:
  - minServerVersion: "4.1.10"
database_name: &database_name "default"
collection_name: &collection_name "default"

data: []
# configure an empty schema
json_schema: {}
key_vault_data: [{'status': 1, '_id': {'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}, 'masterKey': {'provider': 'aws', 'key': 'arn:aws:kms:us-east-1:579766882180:key/89fcc2c4-08b0-4bd9-9f25-e30687b580d0', 'region': 'us-east-1'}, 'updateDate': {'$date': {'$numberLong': '1552949630483'}}, 'keyMaterial': {'$binary': {'base64': 'AQICAHhQNmWG2CzOm1dq3kWLM+iDUZhEqnhJwH9wZVpuZ94A8gEqnsxXlR51T5EbEVezUqqKAAAAwjCBvwYJKoZIhvcNAQcGoIGxMIGuAgEAMIGoBgkqhkiG9w0BBwEwHgYJYIZIAWUDBAEuMBEEDHa4jo6yp0Z18KgbUgIBEIB74sKxWtV8/YHje5lv5THTl0HIbhSwM6EqRlmBiFFatmEWaeMk4tO4xBX65eq670I5TWPSLMzpp8ncGHMmvHqRajNBnmFtbYxN3E3/WjxmdbOOe+OXpnGJPcGsftc7cB2shRfA4lICPnE26+oVNXT6p0Lo20nY5XC7jyCO', 'subType': '00'}}, 'creationDate': {'$date': {'$numberLong': '1552949630483'}}, 'keyAltNames': ['altname', 'another_altname']}]

tests:
  - description: "A local schema should override"
    clientOptions:
      autoEncryptOpts:
        schemaMap:
          "default.default": {'properties': {'encrypted_w_altname': {'encrypt': {'keyId': '/altname', 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Random'}}, 'encrypted_string': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}, 'random': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Random'}}, 'encrypted_string_equivalent': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}}, 'bsonType': 'object'}
        kmsProviders:
          aws: {} # Credentials filled in from environment.
    operations:
      - name: insertOne
        arguments:
          document: &doc0 { _id: 1, encrypted_string: "string0" }
      - name: find
        arguments:
          filter: { _id: 1 }
        result: [*doc0]
    expectations:
      # Then key is fetched from the key vault.
      - command_started_event:
          command:
            find: datakeys
            filter: {"$or": [{"_id": {"$in": [ {'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}} ] }}, {"keyAltNames": {"$in": []}}]}
            $db: keyvault
            readConcern: { level: "majority" }
          command_name: find
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - &doc0_encrypted { _id: 1, encrypted_string: {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAACwj+3zkv2VM+aTfk60RqhXq6a/77WlLwu/BxXFkL7EppGsju/m8f0x5kBDD3EZTtGALGXlym5jnpZAoSIkswHoA==', 'subType': '06'}} }
            ordered: true
          command_name: insert
      - command_started_event:
          command:
            find: *collection_name
            filter: { _id: 1 }
          command_name: find
    outcome:
      collection:
        # Outcome is checked using a separate MongoClient without auto encryption.
        data:
          - *doc0_encrypted
  - description: "A local schema with no encryption is an error"
    clientOptions:
      autoEncryptOpts:
        schemaMap:
          "default.default": {'properties': {'test': {'bsonType': 'string'}}, 'bsonType': 'object', 'required': ['test']}
        kmsProviders:
          aws: {} # Credentials filled in from environment.
    operations:
      - name: insertOne
        arguments:
          document: { _id: 1, encrypted_string: "string0" }
        result:
          errorContains: "JSON schema keyword 'required' is only allowed with a remote schema"
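Both localSchema.yml tests hinge on the `schemaMap` client option, which supplies the encryption JSON schema locally instead of reading it from the server's collection validator. In the Ruby driver this corresponds to the `:schema_map` key of `auto_encryption_options`; a minimal sketch, with the schema abbreviated and `local_master_key` assumed to be a 96-byte key as in the earlier sketch:

    require 'mongo'

    schema_map = {
      'default.default' => {
        'properties' => {
          'encrypted_string' => {
            'encrypt' => {
              # 16-byte UUID of an existing data key (all zeroes here, as in the specs).
              'keyId' => [BSON::Binary.new("\x00" * 16, :uuid)],
              'bsonType' => 'string',
              'algorithm' => 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic',
            },
          },
        },
        'bsonType' => 'object',
      },
    }

    client = Mongo::Client.new(['localhost:27017'],
      database: 'default',
      auto_encryption_options: {
        key_vault_namespace: 'keyvault.datakeys',
        kms_providers: { local: { key: local_master_key } },
        schema_map: schema_map,
      })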
# ---- mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_encryption/malformedCiphertext.yml ----

runOn:
  - minServerVersion: "4.1.10"
database_name: &database_name "default"
collection_name: &collection_name "default"

data:
  - &doc0
    _id: 1
    encrypted_string:
      $binary:
        base64: AQAAAAAAAAAAAAAAAAAAAAACDdw4KFz3ZLquhsbt7RmDjD0N67n0uSXx7IGnQNCLeIKvot6s/ouI21Eo84IOtb6lhwUNPlSEBNY0/hbszWAKJg==
        subType: "00"
  - _id: 2
    encrypted_string:
      $binary:
        base64: "AQ=="
        subType: "06"
  - _id: 3
    encrypted_string:
      $binary:
        base64: "AQAAa2V2aW4gYWxiZXJ0c29uCg=="
        subType: "06"
# Since test requires invalid data to be inserted, use a local schema.
key_vault_data: [{'status': 1, '_id': {'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}, 'masterKey': {'provider': 'aws', 'key': 'arn:aws:kms:us-east-1:579766882180:key/89fcc2c4-08b0-4bd9-9f25-e30687b580d0', 'region': 'us-east-1'}, 'updateDate': {'$date': {'$numberLong': '1552949630483'}}, 'keyMaterial': {'$binary': {'base64': 'AQICAHhQNmWG2CzOm1dq3kWLM+iDUZhEqnhJwH9wZVpuZ94A8gEqnsxXlR51T5EbEVezUqqKAAAAwjCBvwYJKoZIhvcNAQcGoIGxMIGuAgEAMIGoBgkqhkiG9w0BBwEwHgYJYIZIAWUDBAEuMBEEDHa4jo6yp0Z18KgbUgIBEIB74sKxWtV8/YHje5lv5THTl0HIbhSwM6EqRlmBiFFatmEWaeMk4tO4xBX65eq670I5TWPSLMzpp8ncGHMmvHqRajNBnmFtbYxN3E3/WjxmdbOOe+OXpnGJPcGsftc7cB2shRfA4lICPnE26+oVNXT6p0Lo20nY5XC7jyCO', 'subType': '00'}}, 'creationDate': {'$date': {'$numberLong': '1552949630483'}}, 'keyAltNames': ['altname', 'another_altname']}]

tests:
  - description: "Wrong subtype"
    clientOptions:
      autoEncryptOpts:
        schemaMap:
          "default.default": {'properties': {'encrypted_w_altname': {'encrypt': {'keyId': '/altname', 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Random'}}, 'encrypted_string': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}, 'random': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Random'}}, 'encrypted_string_equivalent': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}}, 'bsonType': 'object'}
        kmsProviders:
          aws: {} # Credentials filled in from environment.
    operations:
      - name: find
        arguments:
          filter: { _id: 1 }
        result:
          # gets returned without decryption
          - *doc0
  - description: "Empty data"
    clientOptions:
      autoEncryptOpts:
        schemaMap:
          "default.default": {'properties': {'encrypted_w_altname': {'encrypt': {'keyId': '/altname', 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Random'}}, 'encrypted_string': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}, 'random': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Random'}}, 'encrypted_string_equivalent': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}}, 'bsonType': 'object'}
        kmsProviders:
          aws: {} # Credentials filled in from environment.
    operations:
      - name: find
        arguments:
          filter: { _id: 2 }
        result:
          errorContains: "malformed ciphertext"
  - description: "Malformed data"
    clientOptions:
      autoEncryptOpts:
        schemaMap:
          "default.default": {'properties': {'encrypted_w_altname': {'encrypt': {'keyId': '/altname', 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Random'}}, 'encrypted_string': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}, 'random': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Random'}}, 'encrypted_string_equivalent': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}}, 'bsonType': 'object'}
        kmsProviders:
          aws: {} # Credentials filled in from environment.
    operations:
      - name: find
        arguments:
          filter: { _id: 3 }
        result:
          # ciphertext can only validate subtype (which is correct)
          # but takes the 16 byte UUID to look up key. Fails to find.
          errorContains: "not all keys requested were satisfied"

# ---- mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_encryption/maxWireVersion.yml ----

runOn:
  - maxServerVersion: "4.0.99"
database_name: &database_name "default"
collection_name: &collection_name "default"

data: []
key_vault_data: [{'status': 1, '_id': {'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}, 'masterKey': {'provider': 'aws', 'key': 'arn:aws:kms:us-east-1:579766882180:key/89fcc2c4-08b0-4bd9-9f25-e30687b580d0', 'region': 'us-east-1'}, 'updateDate': {'$date': {'$numberLong': '1552949630483'}}, 'keyMaterial': {'$binary': {'base64': 'AQICAHhQNmWG2CzOm1dq3kWLM+iDUZhEqnhJwH9wZVpuZ94A8gEqnsxXlR51T5EbEVezUqqKAAAAwjCBvwYJKoZIhvcNAQcGoIGxMIGuAgEAMIGoBgkqhkiG9w0BBwEwHgYJYIZIAWUDBAEuMBEEDHa4jo6yp0Z18KgbUgIBEIB74sKxWtV8/YHje5lv5THTl0HIbhSwM6EqRlmBiFFatmEWaeMk4tO4xBX65eq670I5TWPSLMzpp8ncGHMmvHqRajNBnmFtbYxN3E3/WjxmdbOOe+OXpnGJPcGsftc7cB2shRfA4lICPnE26+oVNXT6p0Lo20nY5XC7jyCO', 'subType': '00'}}, 'creationDate': {'$date': {'$numberLong': '1552949630483'}}, 'keyAltNames': ['altname', 'another_altname']}]

tests:
  - description: "operation fails with maxWireVersion < 8"
    clientOptions:
      autoEncryptOpts:
        kmsProviders:
          aws: {} # Credentials filled in from environment.
        extraOptions:
          mongocryptdBypassSpawn: true # mongocryptd probably won't be on the path
    operations:
      - name: insertOne
        arguments:
          document: { encrypted_string: "string0" }
        result:
          errorContains: "Auto-encryption requires a minimum MongoDB version of 4.2"

# ---- mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_encryption/missingKey.yml ----

runOn:
  - minServerVersion: "4.1.10"
database_name: &database_name "default"
collection_name: &collection_name "default"

data: []
json_schema: {'properties': {'encrypted_w_altname': {'encrypt': {'keyId': '/altname', 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Random'}}, 'encrypted_string': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}, 'random': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Random'}}, 'encrypted_string_equivalent': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}}, 'bsonType': 'object'}
key_vault_data: [{'status': 1, '_id': {'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}, 'masterKey': {'provider': 'aws', 'key': 'arn:aws:kms:us-east-1:579766882180:key/89fcc2c4-08b0-4bd9-9f25-e30687b580d0', 'region': 'us-east-1'}, 'updateDate': {'$date': {'$numberLong': '1552949630483'}}, 'keyMaterial': {'$binary': {'base64': 'AQICAHhQNmWG2CzOm1dq3kWLM+iDUZhEqnhJwH9wZVpuZ94A8gEqnsxXlR51T5EbEVezUqqKAAAAwjCBvwYJKoZIhvcNAQcGoIGxMIGuAgEAMIGoBgkqhkiG9w0BBwEwHgYJYIZIAWUDBAEuMBEEDHa4jo6yp0Z18KgbUgIBEIB74sKxWtV8/YHje5lv5THTl0HIbhSwM6EqRlmBiFFatmEWaeMk4tO4xBX65eq670I5TWPSLMzpp8ncGHMmvHqRajNBnmFtbYxN3E3/WjxmdbOOe+OXpnGJPcGsftc7cB2shRfA4lICPnE26+oVNXT6p0Lo20nY5XC7jyCO', 'subType': '00'}}, 'creationDate': {'$date': {'$numberLong': '1552949630483'}}, 'keyAltNames': ['altname', 'another_altname']}]

tests:
  - description: "Insert with encryption on a missing key"
    clientOptions:
      autoEncryptOpts:
        keyVaultNamespace: "keyvault.different"
        kmsProviders:
          aws: {} # Credentials filled in from environment.
    operations:
      - name: insertOne
        arguments:
          document: &doc0 { _id: 1, encrypted_string: "string0", random: "abc" }
        result:
          errorContains: "not all keys requested were satisfied"
    outcome:
      collection:
        # Outcome is checked using a separate MongoClient without auto encryption.
        data: []
    expectations:
      # Auto encryption will request the collection info.
      - command_started_event:
          command:
            listCollections: 1
            filter:
              name: *collection_name
          command_name: listCollections
      # Then key is fetched from the key vault.
      - command_started_event:
          command:
            find: different
            filter: {"$or": [{"_id": {"$in": [ {'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}} ] }}, {"keyAltNames": {"$in": []}}]}
            $db: keyvault
            readConcern: { level: "majority" }
          command_name: find
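Two client-side knobs appear in the last two files: maxWireVersion.yml disables spawning mongocryptd (the test expects failure before any encryption is attempted), and missingKey.yml points the key vault at a namespace that lacks the required data key. A sketch of both options on a Ruby client, assuming the driver's snake_case spellings of the spec's camelCase option names and a 96-byte `local_master_key` as before:

    require 'mongo'

    client = Mongo::Client.new(['localhost:27017'],
      database: 'default',
      auto_encryption_options: {
        # Point at a non-default key vault collection, as missingKey.yml does.
        key_vault_namespace: 'keyvault.different',
        kms_providers: { local: { key: local_master_key } },
        extra_options: {
          # Do not spawn mongocryptd automatically; this mirrors
          # maxWireVersion.yml's mongocryptdBypassSpawn: true.
          mongocryptd_bypass_spawn: true,
        },
      })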
# ---- mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_encryption/noSchema.yml ----

# Test auto encryption on a collection with no jsonSchema configured.
# This is a regression test for MONGOCRYPT-378/PYTHON-3188.
runOn:
  - minServerVersion: "4.1.10"
database_name: &database_name "default"
collection_name: &collection_name "unencrypted"

data: []

tests:
  - description: "Insert on an unencrypted collection"
    clientOptions:
      autoEncryptOpts:
        kmsProviders:
          aws: {} # Credentials filled in from environment.
    operations:
      - name: insertOne
        arguments:
          document: &doc0 { _id: 1 }
    expectations:
      # Auto encryption will request the collection info.
      - command_started_event:
          command:
            listCollections: 1
            filter:
              name: *collection_name
          command_name: listCollections
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - *doc0
            ordered: true
          command_name: insert
    outcome:
      collection:
        # Outcome is checked using a separate MongoClient without auto encryption.
        data:
          - *doc0

# ---- mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_encryption/replaceOne.yml ----

runOn:
  - minServerVersion: "4.1.10"
database_name: &database_name "default"
collection_name: &collection_name "default"

data:
  - &doc0_encrypted { _id: 1, encrypted_string: {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAACwj+3zkv2VM+aTfk60RqhXq6a/77WlLwu/BxXFkL7EppGsju/m8f0x5kBDD3EZTtGALGXlym5jnpZAoSIkswHoA==', 'subType': '06'}} }
json_schema: {'properties': {'encrypted_w_altname': {'encrypt': {'keyId': '/altname', 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Random'}}, 'encrypted_string': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}, 'random': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Random'}}, 'encrypted_string_equivalent': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}}, 'bsonType': 'object'}
key_vault_data: [{'status': 1, '_id': {'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}, 'masterKey': {'provider': 'aws', 'key': 'arn:aws:kms:us-east-1:579766882180:key/89fcc2c4-08b0-4bd9-9f25-e30687b580d0', 'region': 'us-east-1'}, 'updateDate': {'$date': {'$numberLong': '1552949630483'}}, 'keyMaterial': {'$binary': {'base64': 'AQICAHhQNmWG2CzOm1dq3kWLM+iDUZhEqnhJwH9wZVpuZ94A8gEqnsxXlR51T5EbEVezUqqKAAAAwjCBvwYJKoZIhvcNAQcGoIGxMIGuAgEAMIGoBgkqhkiG9w0BBwEwHgYJYIZIAWUDBAEuMBEEDHa4jo6yp0Z18KgbUgIBEIB74sKxWtV8/YHje5lv5THTl0HIbhSwM6EqRlmBiFFatmEWaeMk4tO4xBX65eq670I5TWPSLMzpp8ncGHMmvHqRajNBnmFtbYxN3E3/WjxmdbOOe+OXpnGJPcGsftc7cB2shRfA4lICPnE26+oVNXT6p0Lo20nY5XC7jyCO', 'subType': '00'}}, 'creationDate': {'$date': {'$numberLong': '1552949630483'}}, 'keyAltNames': ['altname', 'another_altname']}]

tests:
  - description: "replaceOne with encryption"
    clientOptions:
      autoEncryptOpts:
        kmsProviders:
          aws: {} # Credentials filled in from environment.
    operations:
      - name: replaceOne
        arguments:
          filter: { encrypted_string: "string0" }
          replacement: { encrypted_string: "string1", random: "abc" }
        result:
          matchedCount: 1
          modifiedCount: 1
          upsertedCount: 0
    expectations:
      # Auto encryption will request the collection info.
      - command_started_event:
          command:
            listCollections: 1
            filter:
              name: *collection_name
          command_name: listCollections
      # Then key is fetched from the key vault.
      - command_started_event:
          command:
            find: datakeys
            filter: {"$or": [{"_id": {"$in": [ {'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}} ] }}, {"keyAltNames": {"$in": []}}]}
            $db: keyvault
            readConcern: { level: "majority" }
          command_name: find
      - command_started_event:
          command:
            update: *collection_name
            updates:
              - q: { encrypted_string: { $eq: {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAACwj+3zkv2VM+aTfk60RqhXq6a/77WlLwu/BxXFkL7EppGsju/m8f0x5kBDD3EZTtGALGXlym5jnpZAoSIkswHoA==', 'subType': '06'}} } }
                u: { encrypted_string: {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAACDdw4KFz3ZLquhsbt7RmDjD0N67n0uSXx7IGnQNCLeIKvot6s/ouI21Eo84IOtb6lhwUNPlSEBNY0/hbszWAKJg==', 'subType': '06'}}, random: { $$type: "binData" } }
                # DRIVERS-976: mongocryptd adds upsert and multi fields to all update commands, so these fields should be added to spec tests
                upsert: false
                multi: false
            ordered: true
          command_name: update
    outcome:
      collection:
        # Outcome is checked using a separate MongoClient without auto encryption.
        data:
          - { _id: 1, encrypted_string: {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAACDdw4KFz3ZLquhsbt7RmDjD0N67n0uSXx7IGnQNCLeIKvot6s/ouI21Eo84IOtb6lhwUNPlSEBNY0/hbszWAKJg==', 'subType': '06'}}, random: { $$type: "binData" } }
# ---- mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_encryption/timeoutMS.yml ----

runOn:
  - minServerVersion: "4.4"
database_name: &database_name "cse-timeouts-db"
collection_name: &collection_name "cse-timeouts-coll"

data: []
json_schema: {'properties': {'encrypted_w_altname': {'encrypt': {'keyId': '/altname', 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Random'}}, 'encrypted_string': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}, 'random': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Random'}}, 'encrypted_string_equivalent': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}}, 'bsonType': 'object'}
key_vault_data: [{'status': 1, '_id': {'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}, 'masterKey': {'provider': 'aws', 'key': 'arn:aws:kms:us-east-1:579766882180:key/89fcc2c4-08b0-4bd9-9f25-e30687b580d0', 'region': 'us-east-1'}, 'updateDate': {'$date': {'$numberLong': '1552949630483'}}, 'keyMaterial': {'$binary': {'base64': 'AQICAHhQNmWG2CzOm1dq3kWLM+iDUZhEqnhJwH9wZVpuZ94A8gEqnsxXlR51T5EbEVezUqqKAAAAwjCBvwYJKoZIhvcNAQcGoIGxMIGuAgEAMIGoBgkqhkiG9w0BBwEwHgYJYIZIAWUDBAEuMBEEDHa4jo6yp0Z18KgbUgIBEIB74sKxWtV8/YHje5lv5THTl0HIbhSwM6EqRlmBiFFatmEWaeMk4tO4xBX65eq670I5TWPSLMzpp8ncGHMmvHqRajNBnmFtbYxN3E3/WjxmdbOOe+OXpnGJPcGsftc7cB2shRfA4lICPnE26+oVNXT6p0Lo20nY5XC7jyCO', 'subType': '00'}}, 'creationDate': {'$date': {'$numberLong': '1552949630483'}}, 'keyAltNames': ['altname', 'another_altname']}]

tests:
  - description: "timeoutMS applied to listCollections to get collection schema"
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands: ["listCollections"]
        blockConnection: true
        blockTimeMS: 60
    clientOptions:
      autoEncryptOpts:
        kmsProviders:
          aws: {} # Credentials filled in from environment.
      timeoutMS: 50
    operations:
      - name: insertOne
        arguments:
          document: &doc0 { _id: 1, encrypted_string: "string0", random: "abc" }
        result:
          isTimeoutError: true
    expectations:
      # Auto encryption will request the collection info.
      - command_started_event:
          command:
            listCollections: 1
            filter:
              name: *collection_name
            maxTimeMS: { $$type: ["int", "long"] }
          command_name: listCollections
  # Test that timeoutMS applies to the sum of all operations done for client-side encryption. This is done by blocking
  # listCollections and find for 30ms each and running an insertOne with timeoutMS=50. There should be one
  # listCollections command and one "find" command, so the sum should take more than timeoutMS. A second listCollections
  # event doesn't occur due to the internal MongoClient lacking configured auto encryption, plus libmongocrypt holds the
  # collection schema in cache for a minute.
  #
  # This test does not include command monitoring expectations because the exact command sequence is dependent on the
  # amount of time taken by mongocryptd communication. In slow runs, mongocryptd communication can breach the timeout
  # and result in the final "find" not being sent.
  - description: "remaining timeoutMS applied to find to get keyvault data"
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 2 }
      data:
        failCommands: ["listCollections", "find"]
        blockConnection: true
        blockTimeMS: 30
    clientOptions:
      autoEncryptOpts:
        kmsProviders:
          aws: {} # Credentials filled in from environment.
      timeoutMS: 50
    operations:
      - name: insertOne
        arguments:
          document: *doc0
        result:
          isTimeoutError: true
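The timeoutMS.yml tests rely on the server's `failCommand` fail point to inject latency. Outside the spec runner, the same fail point can be set with a plain admin command; a sketch, assuming a 4.4+ server started with test commands enabled (`--setParameter enableTestCommands=1`):

    require 'mongo'

    admin = Mongo::Client.new(['localhost:27017'], database: 'admin')

    # Block the next listCollections for 60 ms, as the first test above does.
    admin.database.command(
      configureFailPoint: 'failCommand',
      mode: { times: 1 },
      data: {
        failCommands: ['listCollections'],
        blockConnection: true,
        blockTimeMS: 60,
      }
    )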
# ---- mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_encryption/types.yml ----

# Attempt to round trip some BSON types.
# Note: db pointer is excluded since it is deprecated and numberlong is excluded
# due to different driver interpretations of { $numberLong: '123' } in relaxed JSON parsing.

runOn:
  - minServerVersion: "4.1.10"
database_name: &database_name "default"
collection_name: &collection_name "default"

data: []
json_schema: {}
key_vault_data: [{'status': 1, '_id': {'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}, 'masterKey': {'provider': 'aws', 'key': 'arn:aws:kms:us-east-1:579766882180:key/89fcc2c4-08b0-4bd9-9f25-e30687b580d0', 'region': 'us-east-1'}, 'updateDate': {'$date': {'$numberLong': '1552949630483'}}, 'keyMaterial': {'$binary': {'base64': 'AQICAHhQNmWG2CzOm1dq3kWLM+iDUZhEqnhJwH9wZVpuZ94A8gEqnsxXlR51T5EbEVezUqqKAAAAwjCBvwYJKoZIhvcNAQcGoIGxMIGuAgEAMIGoBgkqhkiG9w0BBwEwHgYJYIZIAWUDBAEuMBEEDHa4jo6yp0Z18KgbUgIBEIB74sKxWtV8/YHje5lv5THTl0HIbhSwM6EqRlmBiFFatmEWaeMk4tO4xBX65eq670I5TWPSLMzpp8ncGHMmvHqRajNBnmFtbYxN3E3/WjxmdbOOe+OXpnGJPcGsftc7cB2shRfA4lICPnE26+oVNXT6p0Lo20nY5XC7jyCO', 'subType': '00'}}, 'creationDate': {'$date': {'$numberLong': '1552949630483'}}, 'keyAltNames': ['altname', 'another_altname']}]

tests:
  - description: "type=objectId"
    clientOptions:
      autoEncryptOpts:
        schemaMap:
          "default.default": {'properties': {'encrypted_objectId': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'objectId', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}}, 'bsonType': 'object'}
        kmsProviders:
          aws: {} # Credentials filled in from environment.
    operations:
      - name: insertOne
        arguments:
          document: &doc0 { _id: 1, encrypted_objectId: {"$oid": "AAAAAAAAAAAAAAAAAAAAAAAA"} }
      - name: findOne
        arguments:
          filter: { _id: 1 }
        result: *doc0
    expectations:
      # Then key is fetched from the key vault.
      - command_started_event:
          command:
            find: datakeys
            filter: {"$or": [{"_id": {"$in": [ {'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}} ] }}, {"keyAltNames": {"$in": []}}]}
            $db: keyvault
            readConcern: { level: "majority" }
          command_name: find
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - &doc0_encrypted { _id: 1, encrypted_objectId: {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAAHmkTPqvzfHMWpvS1mEsrjOxVQ2dyihEgIFWD5E0eNEsiMBQsC0GuvjdqYRL5DHLFI1vKuGek7EYYp0Qyii/tHqA==', 'subType': '06'}} }
            ordered: true
          command_name: insert
      - command_started_event:
          command:
            find: *collection_name
            filter: {_id: 1}
          command_name: find
    outcome:
      collection:
        # Outcome is checked using a separate MongoClient without auto encryption.
        data:
          - *doc0_encrypted
  - description: "type=symbol"
    clientOptions:
      autoEncryptOpts:
        schemaMap:
          "default.default": {'properties': {'encrypted_symbol': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'symbol', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}}, 'bsonType': 'object'}
        kmsProviders:
          aws: {} # Credentials filled in from environment.
    operations:
      - name: insertOne
        arguments:
          document: &doc1 { _id: 1, encrypted_symbol: {"$symbol": "test"} }
      - name: findOne
        arguments:
          filter: { _id: 1 }
        result: *doc1
    expectations:
      # Then key is fetched from the key vault.
      - command_started_event:
          command:
            find: datakeys
            filter: {"$or": [{"_id": {"$in": [ {'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}} ] }}, {"keyAltNames": {"$in": []}}]}
            $db: keyvault
            readConcern: { level: "majority" }
          command_name: find
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - &doc1_encrypted { _id: 1, encrypted_symbol: {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAAOOmvDmWjcuKsSCO7U/7t9HJ8eI73B6wduyMbdkvn7n7V4uTJes/j+BTtneSdyG2JHKHGkevWAJSIU2XoO66BSXw==', 'subType': '06'}} }
            ordered: true
          command_name: insert
      - command_started_event:
          command:
            find: *collection_name
            filter: {_id: 1}
          command_name: find
    outcome:
      collection:
        # Outcome is checked using a separate MongoClient without auto encryption.
        data:
          - *doc1_encrypted
  - description: "type=int"
    clientOptions:
      autoEncryptOpts:
        schemaMap:
          "default.default": {'properties': {'encrypted_int': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'int', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}}, 'bsonType': 'object'}
        kmsProviders:
          aws: {} # Credentials filled in from environment.
    operations:
      - name: insertOne
        arguments:
          document: &doc2 { _id: 1, encrypted_int: {"$numberInt": "123"} }
      - name: findOne
        arguments:
          filter: { _id: 1 }
        result: *doc2
    expectations:
      # Then key is fetched from the key vault.
      - command_started_event:
          command:
            find: datakeys
            filter: {"$or": [{"_id": {"$in": [ {'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}} ] }}, {"keyAltNames": {"$in": []}}]}
            $db: keyvault
            readConcern: { level: "majority" }
          command_name: find
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - &doc2_encrypted { _id: 1, encrypted_int: {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAAQPNXJVXMEjGZnftMuf2INKufXCtQIRHdw5wTgn6QYt3ejcoAXyiwI4XIUizkpsob494qpt2in4tWeiO7b9zkA8Q==', 'subType': '06'}} }
            ordered: true
          command_name: insert
      - command_started_event:
          command:
            find: *collection_name
            filter: {_id: 1}
          command_name: find
    outcome:
      collection:
        # Outcome is checked using a separate MongoClient without auto encryption.
        data:
          - *doc2_encrypted
  - description: "type=double"
    clientOptions:
      autoEncryptOpts:
        schemaMap:
          "default.default": {'properties': {'encrypted_double': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'double', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}}, 'bsonType': 'object'}
        kmsProviders:
          aws: {} # Credentials filled in from environment.
    operations:
      - name: insertOne
        arguments:
          document: &doc4 { _id: 1, encrypted_double: {"$numberDouble": "1.23"} }
        result:
          # DRIVERS-2272: The expected error message changed in mongocryptd 6.0. Before it was:
          # "Cannot use deterministic encryption for element of type: double"
          # After it is:
          # "Cannot encrypt element of type: double"
          # Only check for the common suffix.
          errorContains: "element of type: double"
  - description: "type=decimal"
    clientOptions:
      autoEncryptOpts:
        schemaMap:
          "default.default": {'properties': {'encrypted_decimal': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'decimal', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}}, 'bsonType': 'object'}
        kmsProviders:
          aws: {} # Credentials filled in from environment.
    operations:
      - name: insertOne
        arguments:
          document: &doc5 { _id: 1, encrypted_decimal: {"$numberDecimal": "1.23"} }
        result:
          # DRIVERS-2272: The expected error message changed in mongocryptd 6.0. Before it was:
          # "Cannot use deterministic encryption for element of type: decimal"
          # After it is:
          # "Cannot encrypt element of type: decimal"
          # Only check for the common suffix.
          errorContains: "element of type: decimal"
  - description: "type=binData"
    clientOptions:
      autoEncryptOpts:
        schemaMap:
          "default.default": {'properties': {'encrypted_binData': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'binData', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}}, 'bsonType': 'object'}
        kmsProviders:
          aws: {} # Credentials filled in from environment.
    operations:
      - name: insertOne
        arguments:
          document: &doc6 { _id: 1, encrypted_binData: {"$binary": { base64: "AAAA", subType: "00" } } }
      - name: findOne
        arguments:
          filter: { _id: 1 }
        result: *doc6
    expectations:
      # Then key is fetched from the key vault.
      - command_started_event:
          command:
            find: datakeys
            filter: {"$or": [{"_id": {"$in": [ {'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}} ] }}, {"keyAltNames": {"$in": []}}]}
            $db: keyvault
            readConcern: { level: "majority" }
          command_name: find
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - &doc6_encrypted { _id: 1, encrypted_binData: {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAAFB/KHZQHaHHo8fctcl7v6kR+sLkJoTRx2cPSSck9ya+nbGROSeFhdhDRHaCzhV78fDEqnMDSVPNi+ZkbaIh46GQ==', 'subType': '06'}} }
            ordered: true
          command_name: insert
      - command_started_event:
          command:
            find: *collection_name
            filter: {_id: 1}
          command_name: find
    outcome:
      collection:
        # Outcome is checked using a separate MongoClient without auto encryption.
        data:
          - *doc6_encrypted
  - description: "type=javascript"
    clientOptions:
      autoEncryptOpts:
        schemaMap:
          "default.default": {'properties': {'encrypted_javascript': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'javascript', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}}, 'bsonType': 'object'}
        kmsProviders:
          aws: {} # Credentials filled in from environment.
    operations:
      - name: insertOne
        arguments:
          document: &doc7 { _id: 1, encrypted_javascript: {"$code": "var x = 1;" } }
      - name: findOne
        arguments:
          filter: { _id: 1 }
        result: *doc7
    expectations:
      # Then key is fetched from the key vault.
      - command_started_event:
          command:
            find: datakeys
            filter: {"$or": [{"_id": {"$in": [ {'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}} ] }}, {"keyAltNames": {"$in": []}}]}
            $db: keyvault
            readConcern: { level: "majority" }
          command_name: find
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - &doc7_encrypted { _id: 1, encrypted_javascript: {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAANrvMgJkTKWGMc9wt3E2RBR2Hu5gL9p+vIIdHe9FcOm99t1W480/oX1Gnd87ON3B399DuFaxi/aaIiQSo7gTX6Lw==', 'subType': '06'}} }
            ordered: true
          command_name: insert
      - command_started_event:
          command:
            find: *collection_name
            filter: {_id: 1}
          command_name: find
    outcome:
      collection:
        # Outcome is checked using a separate MongoClient without auto encryption.
        data:
          - *doc7_encrypted
  - description: "type=javascriptWithScope"
    clientOptions:
      autoEncryptOpts:
        schemaMap:
          "default.default": {'properties': {'encrypted_javascriptWithScope': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'javascriptWithScope', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}}, 'bsonType': 'object'}
        kmsProviders:
          aws: {} # Credentials filled in from environment.
    operations:
      - name: insertOne
        arguments:
          document: &doc8 { _id: 1, encrypted_javascriptWithScope: {"$code": "var x = 1;", "$scope": {} } }
        result:
          # DRIVERS-2272: The expected error message changed in mongocryptd 6.0. Before it was:
          # "Cannot use deterministic encryption for element of type: javascriptWithScope"
          # After it is:
          # "Cannot encrypt element of type: javascriptWithScope"
          # Only check for the common suffix.
          errorContains: "element of type: javascriptWithScope"
  - description: "type=object"
    clientOptions:
      autoEncryptOpts:
        schemaMap:
          "default.default": {'properties': {'encrypted_object': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'object', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}}, 'bsonType': 'object'}
        kmsProviders:
          aws: {} # Credentials filled in from environment.
    operations:
      - name: insertOne
        arguments:
          document: &doc9 { _id: 1, encrypted_object: {} }
        result:
          # DRIVERS-2272: The expected error message changed in mongocryptd 6.0. Before it was:
          # "Cannot use deterministic encryption for element of type: object"
          # After it is:
          # "Cannot encrypt element of type: object"
          # Only check for the common suffix.
          errorContains: "element of type: object"
  - description: "type=timestamp"
    clientOptions:
      autoEncryptOpts:
        schemaMap:
          "default.default": {'properties': {'encrypted_timestamp': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'timestamp', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}}, 'bsonType': 'object'}
        kmsProviders:
          aws: {} # Credentials filled in from environment.
    operations:
      - name: insertOne
        arguments:
          document: &doc10 { _id: 1, encrypted_timestamp: {$timestamp: {t: 123, i: 456}} }
      - name: findOne
        arguments:
          filter: { _id: 1 }
        result: *doc10
    expectations:
      # Then key is fetched from the key vault.
      - command_started_event:
          command:
            find: datakeys
            filter: {"$or": [{"_id": {"$in": [ {'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}} ] }}, {"keyAltNames": {"$in": []}}]}
            $db: keyvault
            readConcern: { level: "majority" }
          command_name: find
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - &doc10_encrypted { _id: 1, encrypted_timestamp: {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAARJHaM4Gq3MpDTdBasBsEolQaOmxJQU1wsZVaSFAOLpEh1QihDglXI95xemePFMKhg+KNpFg7lw1ChCs2Wn/c26Q==', 'subType': '06'}} }
            ordered: true
          command_name: insert
      - command_started_event:
          command:
            find: *collection_name
            filter: {_id: 1}
          command_name: find
    outcome:
      collection:
        # Outcome is checked using a separate MongoClient without auto encryption.
        data:
          - *doc10_encrypted
  - description: "type=regex"
    clientOptions:
      autoEncryptOpts:
        schemaMap:
          "default.default": {'properties': {'encrypted_regex': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'regex', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}}, 'bsonType': 'object'}
        kmsProviders:
          aws: {} # Credentials filled in from environment.
    operations:
      - name: insertOne
        arguments:
          document: &doc11 { _id: 1, encrypted_regex: {$regularExpression: { pattern: "test", options: ""}} }
      - name: findOne
        arguments:
          filter: { _id: 1 }
        result: *doc11
    expectations:
      # Then key is fetched from the key vault.
      - command_started_event:
          command:
            find: datakeys
            filter: {"$or": [{"_id": {"$in": [ {'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}} ] }}, {"keyAltNames": {"$in": []}}]}
            $db: keyvault
            readConcern: { level: "majority" }
          command_name: find
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - &doc11_encrypted { _id: 1, encrypted_regex: {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAALVnxM4UqGhqf5eXw6nsS08am3YJrTf1EvjKitT8tyyMAbHsICIU3GUjuC7EBofCHbusvgo7pDyaClGostFz44nA==', 'subType': '06'}} }
            ordered: true
          command_name: insert
      - command_started_event:
          command:
            find: *collection_name
            filter: {_id: 1}
          command_name: find
    outcome:
      collection:
        # Outcome is checked using a separate MongoClient without auto encryption.
        data:
          - *doc11_encrypted
  - description: "type=date"
    clientOptions:
      autoEncryptOpts:
        schemaMap:
          "default.default": {'properties': {'encrypted_date': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'date', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}}, 'bsonType': 'object'}
        kmsProviders:
          aws: {} # Credentials filled in from environment.
    operations:
      - name: insertOne
        arguments:
          document: &doc13 { _id: 1, encrypted_date: {$date: { $numberLong: "123" }} }
      - name: findOne
        arguments:
          filter: { _id: 1 }
        result: *doc13
    expectations:
      # Then key is fetched from the key vault.
      - command_started_event:
          command:
            find: datakeys
            filter: {"$or": [{"_id": {"$in": [ {'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}} ] }}, {"keyAltNames": {"$in": []}}]}
            $db: keyvault
            readConcern: { level: "majority" }
          command_name: find
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - &doc13_encrypted { _id: 1, encrypted_date: {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAAJ5sN7u6l97+DswfKTqZAijSTSOo5htinGKQKUD7pHNJYlLXGOkB4glrCu7ibu0g3344RHQ5yUp4YxMEa8GD+Snw==', 'subType': '06'}} }
            ordered: true
          command_name: insert
      - command_started_event:
          command:
            find: *collection_name
            filter: {_id: 1}
          command_name: find
    outcome:
      collection:
        # Outcome is checked using a separate MongoClient without auto encryption.
        data:
          - *doc13_encrypted
  - description: "type=minKey"
    clientOptions:
      autoEncryptOpts:
        schemaMap:
          "default.default": {'properties': {'encrypted_minKey': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'minKey', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}}, 'bsonType': 'object'}
        kmsProviders:
          aws: {} # Credentials filled in from environment.
    operations:
      - name: insertOne
        arguments:
          document: &doc14 { _id: 1, encrypted_minKey: {$minKey: 1} }
        result:
          errorContains: "Cannot encrypt element of type: minKey"
  - description: "type=maxKey"
    clientOptions:
      autoEncryptOpts:
        schemaMap:
          "default.default": {'properties': {'encrypted_maxKey': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'maxKey', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}}, 'bsonType': 'object'}
        kmsProviders:
          aws: {} # Credentials filled in from environment.
    operations:
      - name: insertOne
        arguments:
          document: &doc15 { _id: 1, encrypted_maxKey: {$maxKey: 1} }
        result:
          errorContains: "Cannot encrypt element of type: maxKey"
  - description: "type=undefined"
    clientOptions:
      autoEncryptOpts:
        schemaMap:
          "default.default": {'properties': {'encrypted_undefined': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'undefined', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}}, 'bsonType': 'object'}
        kmsProviders:
          aws: {} # Credentials filled in from environment.
    operations:
      - name: insertOne
        arguments:
          document: &doc16 { _id: 1, encrypted_undefined: {$undefined: true} }
        result:
          errorContains: "Cannot encrypt element of type: undefined"
  - description: "type=array"
    clientOptions:
      autoEncryptOpts:
        schemaMap:
          "default.default": {'properties': {'encrypted_array': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'array', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}}, 'bsonType': 'object'}
        kmsProviders:
          aws: {} # Credentials filled in from environment.
    operations:
      - name: insertOne
        arguments:
          document: &doc17 { _id: 1, encrypted_array: [] }
        result:
          # DRIVERS-2272: The expected error message changed in mongocryptd 6.0. Before it was:
          # "Cannot use deterministic encryption for element of type: array"
          # After it is:
          # "Cannot encrypt element of type: array"
          # Only check for the common suffix.
          errorContains: "element of type: array"
  - description: "type=bool"
    clientOptions:
      autoEncryptOpts:
        schemaMap:
          "default.default": {'properties': {'encrypted_bool': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'bool', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}}, 'bsonType': 'object'}
        kmsProviders:
          aws: {} # Credentials filled in from environment.
    operations:
      - name: insertOne
        arguments:
          document: &doc18 { _id: 1, encrypted_bool: true }
        result:
          # DRIVERS-2272: The expected error message changed in mongocryptd 6.0. Before it was:
          # "Cannot use deterministic encryption for element of type: bool"
          # After it is:
          # "Cannot encrypt element of type: bool"
          # Only check for the common suffix.
          errorContains: "element of type: bool"
  - description: "type=null"
    clientOptions:
      autoEncryptOpts:
        schemaMap:
          "default.default": {'properties': {'encrypted_null': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'null', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}}, 'bsonType': 'object'}
        kmsProviders:
          aws: {} # Credentials filled in from environment.
    operations:
      - name: insertOne
        arguments:
          document: &doc19 { _id: 1, encrypted_null: true }
        result:
          errorContains: "Cannot encrypt element of type: null"
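types.yml is essentially a matrix of which BSON types the deterministic algorithm accepts. The same constraint is visible through the explicit encryption API: deterministically encrypting a double (or bool, object, array, and so on) is rejected, while the random algorithm accepts it. A sketch, assuming `client_encryption` is a configured Mongo::ClientEncryption and `key_id` the BSON::Binary id of an existing data key; the exact error message differs between libmongocrypt and mongocryptd, as the DRIVERS-2272 comments above note:

    # Deterministic encryption supports equality queries but only certain
    # BSON types; a double is rejected (cf. the type=double test above).
    begin
      client_encryption.encrypt(1.23,
        key_id: key_id, algorithm: 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic')
    rescue Mongo::Error => e
      puts e.message # expected to mention the double type being unsupported
    end

    # The random algorithm places no such restriction on the value's type.
    ciphertext = client_encryption.encrypt(1.23,
      key_id: key_id, algorithm: 'AEAD_AES_256_CBC_HMAC_SHA_512-Random')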
id: &non_existent_id { $binary: { base64: AAAjYWxrZXlsb2NhbGtleQ==, subType: "04" } } keyAltName: new_key_alt_name expectResult: { $$unsetOrMatches: null } expectEvents: - client: *client0 events: - commandStartedEvent: databaseName: *database0Name command: findAndModify: *collection0Name query: { _id: *non_existent_id } update: { $addToSet: { keyAltNames: new_key_alt_name } } writeConcern: { w: majority } outcome: - collectionName: *collection0Name databaseName: *database0Name documents: - *local_key_doc - description: add new keyAltName to data key with no keyAltNames operations: - name: addKeyAltName object: *clientEncryption0 arguments: id: *local_key_id keyAltName: local_key expectResult: *local_key_doc - name: find object: *collection0 arguments: filter: {} projection: { _id: 0, keyAltNames: 1 } expectResult: - keyAltNames: [local_key] expectEvents: - client: *client0 events: - commandStartedEvent: databaseName: *database0Name command: findAndModify: *collection0Name query: { _id: *local_key_id } update: { $addToSet: { keyAltNames: local_key } } writeConcern: { w: majority } - commandStartedEvent: { commandName: find } - description: add existing keyAltName to existing data key operations: - name: addKeyAltName object: *clientEncryption0 arguments: id: *local_key_id keyAltName: local_key expectResult: *local_key_doc - name: addKeyAltName # Attempting to add a duplicate keyAltName to the data key should not be an error. object: *clientEncryption0 arguments: id: *local_key_id keyAltName: local_key expectResult: _id: *local_key_id keyAltNames: [local_key] keyMaterial: { $$type: binData } creationDate: { $$type: date } updateDate: { $$type: date } status: 1 masterKey: provider: local - name: find object: *collection0 arguments: filter: {} projection: { _id: 0, keyAltNames: 1 } expectResult: - keyAltNames: [local_key] expectEvents: - client: *client0 events: - commandStartedEvent: databaseName: *database0Name command: findAndModify: *collection0Name query: { _id: *local_key_id } update: { $addToSet: { keyAltNames: local_key } } writeConcern: { w: majority } - commandStartedEvent: databaseName: *database0Name command: findAndModify: *collection0Name query: { _id: *local_key_id } update: { $addToSet: { keyAltNames: local_key } } writeConcern: { w: majority } - commandStartedEvent: { commandName: find } - description: add new keyAltName to data key with keyAltNames operations: - name: addKeyAltName object: *clientEncryption0 arguments: id: *local_key_id keyAltName: local_key expectResult: *local_key_doc - name: addKeyAltName object: *clientEncryption0 arguments: id: *local_key_id keyAltName: another_name expectResult: _id: *local_key_id keyAltNames: [local_key] keyMaterial: { $$type: binData } creationDate: { $$type: date } updateDate: { $$type: date } status: 1 masterKey: provider: local - name: aggregate object: *collection0 arguments: pipeline: # Ensure keyAltNames are in deterministically sorted order. 
- $project: { _id: 0, keyAltNames: $keyAltNames } - $unwind: $keyAltNames - $sort: { keyAltNames: 1 } expectResult: - keyAltNames: another_name - keyAltNames: local_key expectEvents: - client: *client0 events: - commandStartedEvent: databaseName: *database0Name command: findAndModify: *collection0Name query: { _id: *local_key_id } update: { $addToSet: { keyAltNames: local_key } } writeConcern: { w: majority } - commandStartedEvent: databaseName: *database0Name command: findAndModify: *collection0Name query: { _id: *local_key_id } update: { $addToSet: { keyAltNames: another_name } } writeConcern: { w: majority } - commandStartedEvent: { commandName: aggregate } createDataKey-kms_providers-invalid.yml000066400000000000000000000033241505113246500370530ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_encryption/unifieddescription: createDataKey-kms_providers-invalid schemaVersion: "1.8" runOnRequirements: - csfle: true createEntities: - client: id: &client0 client0 observeEvents: - commandStartedEvent - commandSucceededEvent - commandFailedEvent - clientEncryption: id: &clientEncryption0 clientEncryption0 clientEncryptionOpts: keyVaultClient: *client0 keyVaultNamespace: keyvault.datakeys kmsProviders: aws: { accessKeyId: { $$placeholder: 1 }, secretAccessKey: { $$placeholder: 1 } } tests: - description: create data key without required master key fields operations: - name: createDataKey object: *clientEncryption0 arguments: kmsProvider: aws opts: masterKey: {} expectError: isClientError: true expectEvents: - client: *client0 events: [] - description: create data key with invalid master key field operations: - name: createDataKey object: *clientEncryption0 arguments: kmsProvider: local opts: masterKey: invalid: 1 expectError: isClientError: true expectEvents: - client: *client0 events: [] - description: create data key with invalid master key operations: - name: createDataKey object: *clientEncryption0 arguments: kmsProvider: aws opts: masterKey: key: arn:aws:kms:us-east-1:579766882180:key/89fcc2c4-08b0-4bd9-9f25-e30687b580d0 region: invalid expectError: isClientError: true expectEvents: - client: *client0 events: [] mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_encryption/unified/createDataKey.yml000066400000000000000000000252271505113246500326270ustar00rootroot00000000000000description: createDataKey schemaVersion: "1.8" runOnRequirements: - csfle: true createEntities: - client: id: &client0 client0 observeEvents: - commandStartedEvent - clientEncryption: id: &clientEncryption0 clientEncryption0 clientEncryptionOpts: keyVaultClient: *client0 keyVaultNamespace: keyvault.datakeys kmsProviders: aws: { accessKeyId: { $$placeholder: 1 }, secretAccessKey: { $$placeholder: 1 } } azure: { tenantId: { $$placeholder: 1 }, clientId: { $$placeholder: 1 }, clientSecret: { $$placeholder: 1 } } gcp: { email: { $$placeholder: 1 }, privateKey: { $$placeholder: 1 } } kmip: { endpoint: { $$placeholder: 1 } } local: { key: { $$placeholder: 1 } } - database: id: &database0 database0 client: *client0 databaseName: &database0Name keyvault - collection: id: &collection0 collection0 database: *database0 collectionName: &collection0Name datakeys initialData: - databaseName: *database0Name collectionName: *collection0Name documents: [] tests: - description: create data key with AWS KMS provider operations: - name: createDataKey object: *clientEncryption0 arguments: kmsProvider: aws opts: masterKey: &new_aws_masterkey key: 
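The invalid-kms_providers tests above exercise `createDataKey` argument validation: a malformed master key is rejected client-side, which is why every expected event list is empty. For reference, a handle like the `clientEncryption0` entity might be constructed as follows in Ruby; this is a sketch, with credentials drawn from the environment variables listed earlier in this README:

    require 'mongo'

    key_vault_client = Mongo::Client.new(['localhost:27017'])

    client_encryption = Mongo::ClientEncryption.new(
      key_vault_client,
      key_vault_namespace: 'keyvault.datakeys',
      kms_providers: {
        aws: {
          access_key_id: ENV.fetch('MONGO_RUBY_DRIVER_AWS_KEY'),
          secret_access_key: ENV.fetch('MONGO_RUBY_DRIVER_AWS_SECRET'),
        },
      }
    )

    # An incomplete master key is expected to fail before any command is
    # sent to the server, mirroring the empty event lists in these tests.
    client_encryption.create_data_key('aws', master_key: {})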
arn:aws:kms:us-east-1:579766882180:key/89fcc2c4-08b0-4bd9-9f25-e30687b580d0 region: us-east-1 expectResult: { $$type: binData } expectEvents: - client: *client0 events: - commandStartedEvent: databaseName: *database0Name command: insert: *collection0Name documents: - _id: { $$type: binData } keyMaterial: { $$type: binData } creationDate: { $$type: date } updateDate: { $$type: date } status: { $$exists: true } masterKey: provider: aws <<: *new_aws_masterkey writeConcern: { w: majority } - description: create datakey with Azure KMS provider operations: - name: createDataKey object: *clientEncryption0 arguments: kmsProvider: azure opts: masterKey: &new_azure_masterkey keyVaultEndpoint: key-vault-csfle.vault.azure.net keyName: key-name-csfle expectResult: { $$type: binData } expectEvents: - client: *client0 events: - commandStartedEvent: databaseName: *database0Name command: insert: *collection0Name documents: - _id: { $$type: binData } keyMaterial: { $$type: binData } creationDate: { $$type: date } updateDate: { $$type: date } status: { $$exists: true } masterKey: provider: azure <<: *new_azure_masterkey writeConcern: { w: majority } - description: create datakey with GCP KMS provider operations: - name: createDataKey object: *clientEncryption0 arguments: kmsProvider: gcp opts: masterKey: &new_gcp_masterkey projectId: devprod-drivers location: global keyRing: key-ring-csfle keyName: key-name-csfle expectResult: { $$type: binData } expectEvents: - client: *client0 events: - commandStartedEvent: databaseName: *database0Name command: insert: *collection0Name documents: - _id: { $$type: binData } keyMaterial: { $$type: binData } creationDate: { $$type: date } updateDate: { $$type: date } status: { $$exists: true } masterKey: provider: gcp <<: *new_gcp_masterkey writeConcern: { w: majority } - description: create datakey with KMIP KMS provider operations: - name: createDataKey object: *clientEncryption0 arguments: kmsProvider: kmip expectResult: { $$type: binData } expectEvents: - client: *client0 events: - commandStartedEvent: databaseName: *database0Name command: insert: *collection0Name documents: - _id: { $$type: binData } keyMaterial: { $$type: binData } creationDate: { $$type: date } updateDate: { $$type: date } status: { $$exists: true } masterKey: provider: kmip keyId: { $$type: string } writeConcern: { w: majority } - description: create datakey with local KMS provider operations: - name: createDataKey object: *clientEncryption0 arguments: kmsProvider: local expectResult: { $$type: binData } expectEvents: - client: *client0 events: - commandStartedEvent: databaseName: *database0Name command: insert: *collection0Name documents: - _id: { $$type: binData } keyMaterial: { $$type: binData } creationDate: { $$type: date } updateDate: { $$type: date } status: { $$exists: true } masterKey: provider: local writeConcern: { w: majority } - description: create datakey with no keyAltName operations: - name: createDataKey object: *clientEncryption0 arguments: kmsProvider: local opts: keyAltNames: [] expectResult: { $$type: binData } expectEvents: - client: *client0 events: - commandStartedEvent: databaseName: *database0Name command: insert: *collection0Name documents: - _id: { $$type: binData } # keyAltNames field should not exist if no keyAltNames are given. 
keyAltNames: { $$exists: false } keyMaterial: { $$type: binData } creationDate: { $$type: date } updateDate: { $$type: date } status: { $$type: int } masterKey: { $$type: object } writeConcern: { w: majority } - description: create datakey with single keyAltName operations: - name: createDataKey object: *clientEncryption0 arguments: kmsProvider: local opts: keyAltNames: ["local_key"] expectResult: { $$type: binData } expectEvents: - client: *client0 events: - commandStartedEvent: databaseName: *database0Name command: insert: *collection0Name documents: - _id: { $$type: binData } keyAltNames: [local_key] keyMaterial: { $$type: binData } creationDate: { $$type: date } updateDate: { $$type: date } status: { $$type: int } masterKey: { $$type: object } writeConcern: { w: majority } - description: create datakey with multiple keyAltNames operations: - name: createDataKey object: *clientEncryption0 arguments: kmsProvider: local opts: keyAltNames: ["abc", "def"] expectResult: { $$type: binData } - name: aggregate object: *collection0 arguments: # Need to use pipeline to sort keyAltNames for deterministic matching # because keyAltNames is not required to be sorted. pipeline: - $project: { _id: 0, keyAltNames: 1 } - $unwind: $keyAltNames - $sort: { keyAltNames: 1 } expectResult: - keyAltNames: abc - keyAltNames: def expectEvents: - client: *client0 events: - commandStartedEvent: databaseName: *database0Name command: insert: *collection0Name documents: - _id: { $$type: binData } keyAltNames: { $$type: array } keyMaterial: { $$type: binData } creationDate: { $$type: date } updateDate: { $$type: date } status: { $$type: int } masterKey: { $$type: object } writeConcern: { w: majority } - commandStartedEvent: { commandName: aggregate } - description: create datakey with custom key material operations: - name: createDataKey object: *clientEncryption0 arguments: kmsProvider: local opts: # "key_material" repeated 8 times. keyMaterial: &custom_key_material { $binary: { base64: a2V5X21hdGVyaWFsa2V5X21hdGVyaWFsa2V5X21hdGVyaWFsa2V5X21hdGVyaWFsa2V5X21hdGVyaWFsa2V5X21hdGVyaWFsa2V5X21hdGVyaWFsa2V5X21hdGVyaWFs, subType: "00" } } expectResult: { $$type: binData } expectEvents: - client: *client0 events: - commandStartedEvent: databaseName: *database0Name command: insert: *collection0Name documents: - _id: { $$type: binData } # Cannot match exact value of encrypted key material. keyMaterial: { $$type: binData } creationDate: { $$type: date } updateDate: { $$type: date } status: { $$type: int } masterKey: { $$type: object } writeConcern: { w: majority } - description: create datakey with invalid custom key material (too short) operations: - name: createDataKey object: *clientEncryption0 arguments: kmsProvider: local opts: # "key_material" repeated only 7 times (key material length == 84). 
            keyMaterial: { $binary: { base64: a2V5X21hdGVyaWFsa2V5X21hdGVyaWFsa2V5X21hdGVyaWFsa2V5X21hdGVyaWFsa2V5X21hdGVyaWFsa2V5X21hdGVyaWFsa2V5X21hdGVyaWFs, subType: "00" } }
        expectError:
          isClientError: true
    expectEvents:
      - client: *client0
        events: []

mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_encryption/unified/deleteKey.yml

description: deleteKey

schemaVersion: "1.8"

runOnRequirements:
  - csfle: true

createEntities:
  - client:
      id: &client0 client0
      observeEvents:
        - commandStartedEvent
  - clientEncryption:
      id: &clientEncryption0 clientEncryption0
      clientEncryptionOpts:
        keyVaultClient: *client0
        keyVaultNamespace: keyvault.datakeys
        kmsProviders:
          local: { key: { $$placeholder: 1 } }
  - database:
      id: &database0 database0
      client: *client0
      databaseName: &database0Name keyvault
  - collection:
      id: &collection0 collection0
      database: *database0
      collectionName: &collection0Name datakeys

initialData:
  - databaseName: *database0Name
    collectionName: *collection0Name
    documents:
      - &aws_key_doc
        _id: &aws_key_id { $binary: { base64: YXdzYXdzYXdzYXdzYXdzYQ==, subType: "04" } }
        keyAltNames: ["aws_key"]
        keyMaterial: { $binary: { base64: AQICAHhQNmWG2CzOm1dq3kWLM+iDUZhEqnhJwH9wZVpuZ94A8gFXJqbF0Fy872MD7xl56D/2AAAAwjCBvwYJKoZIhvcNAQcGoIGxMIGuAgEAMIGoBgkqhkiG9w0BBwEwHgYJYIZIAWUDBAEuMBEEDO7HPisPUlGzaio9vgIBEIB7/Qow46PMh/8JbEUbdXgTGhLfXPE+KIVW7T8s6YEMlGiRvMu7TV0QCIUJlSHPKZxzlJ2iwuz5yXeOag+EdY+eIQ0RKrsJ3b8UTisZYzGjfzZnxUKLzLoeXremtRCm3x47wCuHKd1dhh6FBbYt5TL2tDaj+vL2GBrKat2L, subType: "00" } }
        creationDate: { $date: { $numberLong: "1641024000000" } }
        updateDate: { $date: { $numberLong: "1641024000000" } }
        status: 1
        masterKey:
          provider: aws
          key: arn:aws:kms:us-east-1:579766882180:key/89fcc2c4-08b0-4bd9-9f25-e30687b580d0
          region: us-east-1
      - &local_key_doc
        _id: &local_key_id { $binary: { base64: bG9jYWxrZXlsb2NhbGtleQ==, subType: "04" } }
        keyAltNames: ["local_key"]
        keyMaterial: { $binary: { base64: ABKBldDEoDW323yejOnIRk6YQmlD9d3eQthd16scKL75nz2LjNL9fgPDZWrFFOlqlhMCFaSrNJfGrFUjYk5JFDO7soG5Syb50k1niJoKg4ilsj0L4mpimFUtTpOr2nzZOeQtvAksEXc7gsFgq8gV7t/U3lsaXPY7I0t42DfSE8EGlPdxRjFdHnxh+OR8h7U9b8Qs5K5UuhgyeyxaBZ1Hgw==, subType: "00" } }
        creationDate: { $date: { $numberLong: "1641024000000" } }
        updateDate: { $date: { $numberLong: "1641024000000" } }
        status: 1
        masterKey:
          provider: local

tests:
  - description: delete non-existent data key
    operations:
      - name: deleteKey
        object: *clientEncryption0
        arguments:
          # *aws_key_id with first three letters replaced with 'A' (value: "3awsawsawsawsa").
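# (deleteKey issues a single-document delete against the key vault with write
# concern majority; a miss is not an error, it just reports deletedCount: 0.)
# Hedged sketch, assuming the same hypothetical client_encryption object:
#
#   result = client_encryption.delete_key(non_existent_id)
#   result.deleted_count # => 0 (assuming the returned result exposes deleted_count)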
id: &non_existent_id { $binary: { base64: AAAzYXdzYXdzYXdzYXdzYQ==, subType: "04" } } expectResult: deletedCount: 0 expectEvents: - client: *client0 events: - commandStartedEvent: databaseName: *database0Name command: delete: *collection0Name deletes: [{ q: { _id: *non_existent_id }, limit: 1 }] writeConcern: { w: majority } outcome: - collectionName: *collection0Name databaseName: *database0Name documents: - *aws_key_doc - *local_key_doc - description: delete existing AWS data key operations: - name: deleteKey object: *clientEncryption0 arguments: id: *aws_key_id expectResult: deletedCount: 1 expectEvents: - client: *client0 events: - commandStartedEvent: databaseName: *database0Name command: delete: *collection0Name deletes: [{ q: { _id: *aws_key_id }, limit: 1 }] writeConcern: { w: majority } outcome: - collectionName: *collection0Name databaseName: *database0Name documents: - *local_key_doc - description: delete existing local data key operations: - name: deleteKey object: *clientEncryption0 arguments: id: *local_key_id expectResult: deletedCount: 1 expectEvents: - client: *client0 events: - commandStartedEvent: databaseName: *database0Name command: delete: *collection0Name deletes: [{ q: { _id: *local_key_id }, limit: 1 }] writeConcern: { w: majority } outcome: - collectionName: *collection0Name databaseName: *database0Name documents: - *aws_key_doc - description: delete existing data key twice operations: - name: deleteKey object: *clientEncryption0 arguments: id: *aws_key_id expectResult: deletedCount: 1 - name: deleteKey object: *clientEncryption0 arguments: id: *aws_key_id expectResult: deletedCount: 0 expectEvents: - client: *client0 events: - commandStartedEvent: databaseName: *database0Name command: delete: *collection0Name deletes: [{ q: { _id: *aws_key_id }, limit: 1 }] writeConcern: { w: majority } - commandStartedEvent: databaseName: *database0Name command: delete: *collection0Name deletes: [{ q: { _id: *aws_key_id }, limit: 1 }] writeConcern: { w: majority } outcome: - collectionName: *collection0Name databaseName: *database0Name documents: - *local_key_doc mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_encryption/unified/getKey.yml000066400000000000000000000074721505113246500313530ustar00rootroot00000000000000description: getKey schemaVersion: "1.8" runOnRequirements: - csfle: true createEntities: - client: id: &client0 client0 observeEvents: - commandStartedEvent - clientEncryption: id: &clientEncryption0 clientEncryption0 clientEncryptionOpts: keyVaultClient: *client0 keyVaultNamespace: keyvault.datakeys kmsProviders: local: { key: { $$placeholder: 1 } } - database: id: &database0 database0 client: *client0 databaseName: &database0Name keyvault - collection: id: &collection0 collection0 database: *database0 collectionName: &collection0Name datakeys initialData: - databaseName: *database0Name collectionName: *collection0Name documents: - &aws_key_doc _id: &aws_key_id { $binary: { base64: YXdzYXdzYXdzYXdzYXdzYQ==, subType: "04" } } keyAltNames: ["aws_key"] keyMaterial: { $binary: { base64: AQICAHhQNmWG2CzOm1dq3kWLM+iDUZhEqnhJwH9wZVpuZ94A8gFXJqbF0Fy872MD7xl56D/2AAAAwjCBvwYJKoZIhvcNAQcGoIGxMIGuAgEAMIGoBgkqhkiG9w0BBwEwHgYJYIZIAWUDBAEuMBEEDO7HPisPUlGzaio9vgIBEIB7/Qow46PMh/8JbEUbdXgTGhLfXPE+KIVW7T8s6YEMlGiRvMu7TV0QCIUJlSHPKZxzlJ2iwuz5yXeOag+EdY+eIQ0RKrsJ3b8UTisZYzGjfzZnxUKLzLoeXremtRCm3x47wCuHKd1dhh6FBbYt5TL2tDaj+vL2GBrKat2L, subType: "00" } } creationDate: { $date: { $numberLong: "1641024000000" } } updateDate: { $date: { $numberLong: "1641024000000" } } status: 1 
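# (masterKey records which KMS provider wrapped this key's material. getKey
# itself is a plain find on the key vault with readConcern level majority, as
# the expected events in the tests below assert.)
# Illustrative call, assuming the hypothetical client_encryption object:
#
#   doc = client_encryption.get_key(aws_key_id) # => key document, or nil on a miss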
masterKey: provider: aws key: arn:aws:kms:us-east-1:579766882180:key/89fcc2c4-08b0-4bd9-9f25-e30687b580d0 region: us-east-1 - &local_key_doc _id: &local_key_id { $binary: { base64: bG9jYWxrZXlsb2NhbGtleQ==, subType: "04" } } keyAltNames: ["local_key"] keyMaterial: { $binary: { base64: ABKBldDEoDW323yejOnIRk6YQmlD9d3eQthd16scKL75nz2LjNL9fgPDZWrFFOlqlhMCFaSrNJfGrFUjYk5JFDO7soG5Syb50k1niJoKg4ilsj0L4mpimFUtTpOr2nzZOeQtvAksEXc7gsFgq8gV7t/U3lsaXPY7I0t42DfSE8EGlPdxRjFdHnxh+OR8h7U9b8Qs5K5UuhgyeyxaBZ1Hgw==, subType: "00" } } creationDate: { $date: { $numberLong: "1641024000000" } } updateDate: { $date: { $numberLong: "1641024000000" } } status: 1 masterKey: provider: local tests: - description: get non-existent data key operations: - name: getKey object: *clientEncryption0 arguments: # *aws_key_id with first three letters replaced with 'A' (value: "3awsawsawsawsa"). id: &non_existent_id { $binary: { base64: AAAzYXdzYXdzYXdzYXdzYQ==, subType: "04" } } expectResult: { $$unsetOrMatches: null } expectEvents: - client: *client0 events: - commandStartedEvent: databaseName: *database0Name command: find: *collection0Name filter: { _id: *non_existent_id } readConcern: { level: majority } - description: get existing AWS data key operations: - name: getKey object: *clientEncryption0 arguments: id: *aws_key_id expectResult: *aws_key_doc expectEvents: - client: *client0 events: - commandStartedEvent: databaseName: *database0Name command: find: *collection0Name filter: { _id: *aws_key_id } readConcern: { level: majority } - description: get existing local data key operations: - name: getKey object: *clientEncryption0 arguments: id: *local_key_id expectResult: *local_key_doc expectEvents: - client: *client0 events: - commandStartedEvent: databaseName: *database0Name command: find: *collection0Name filter: { _id: *local_key_id } readConcern: { level: majority } mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_encryption/unified/getKeyByAltName.yml000066400000000000000000000073011505113246500330770ustar00rootroot00000000000000description: getKeyByAltName schemaVersion: "1.8" runOnRequirements: - csfle: true createEntities: - client: id: &client0 client0 observeEvents: - commandStartedEvent - clientEncryption: id: &clientEncryption0 clientEncryption0 clientEncryptionOpts: keyVaultClient: *client0 keyVaultNamespace: keyvault.datakeys kmsProviders: local: { key: { $$placeholder: 1 } } - database: id: &database0 database0 client: *client0 databaseName: &database0Name keyvault - collection: id: &collection0 collection0 database: *database0 collectionName: &collection0Name datakeys initialData: - databaseName: *database0Name collectionName: *collection0Name documents: - &aws_key_doc _id: { $binary: { base64: YXdzYXdzYXdzYXdzYXdzYQ==, subType: "04" } } keyAltNames: ["aws_key"] keyMaterial: { $binary: { base64: AQICAHhQNmWG2CzOm1dq3kWLM+iDUZhEqnhJwH9wZVpuZ94A8gFXJqbF0Fy872MD7xl56D/2AAAAwjCBvwYJKoZIhvcNAQcGoIGxMIGuAgEAMIGoBgkqhkiG9w0BBwEwHgYJYIZIAWUDBAEuMBEEDO7HPisPUlGzaio9vgIBEIB7/Qow46PMh/8JbEUbdXgTGhLfXPE+KIVW7T8s6YEMlGiRvMu7TV0QCIUJlSHPKZxzlJ2iwuz5yXeOag+EdY+eIQ0RKrsJ3b8UTisZYzGjfzZnxUKLzLoeXremtRCm3x47wCuHKd1dhh6FBbYt5TL2tDaj+vL2GBrKat2L, subType: "00" } } creationDate: { $date: { $numberLong: "1641024000000" } } updateDate: { $date: { $numberLong: "1641024000000" } } status: 1 masterKey: provider: aws key: arn:aws:kms:us-east-1:579766882180:key/89fcc2c4-08b0-4bd9-9f25-e30687b580d0 region: us-east-1 - &local_key_doc _id: { $binary: { base64: bG9jYWxrZXlsb2NhbGtleQ==, subType: "04" } } keyAltNames: 
["local_key"] keyMaterial: { $binary: { base64: ABKBldDEoDW323yejOnIRk6YQmlD9d3eQthd16scKL75nz2LjNL9fgPDZWrFFOlqlhMCFaSrNJfGrFUjYk5JFDO7soG5Syb50k1niJoKg4ilsj0L4mpimFUtTpOr2nzZOeQtvAksEXc7gsFgq8gV7t/U3lsaXPY7I0t42DfSE8EGlPdxRjFdHnxh+OR8h7U9b8Qs5K5UuhgyeyxaBZ1Hgw==, subType: "00" } } creationDate: { $date: { $numberLong: "1641024000000" } } updateDate: { $date: { $numberLong: "1641024000000" } } status: 1 masterKey: provider: local tests: - description: get non-existent data key operations: - name: getKeyByAltName object: *clientEncryption0 arguments: keyAltName: does_not_exist expectResult: { $$unsetOrMatches: null } expectEvents: - client: *client0 events: - commandStartedEvent: databaseName: *database0Name command: find: *collection0Name filter: { keyAltNames: does_not_exist } readConcern: { level: majority } - description: get existing AWS data key operations: - name: getKeyByAltName object: *clientEncryption0 arguments: keyAltName: aws_key expectResult: *aws_key_doc expectEvents: - client: *client0 events: - commandStartedEvent: databaseName: *database0Name command: find: *collection0Name filter: { keyAltNames: aws_key } readConcern: { level: majority } - description: get existing local data key operations: - name: getKeyByAltName object: *clientEncryption0 arguments: keyAltName: local_key expectResult: *local_key_doc expectEvents: - client: *client0 events: - commandStartedEvent: databaseName: *database0Name command: find: *collection0Name filter: { keyAltNames: local_key } readConcern: { level: majority } mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_encryption/unified/getKeys.yml000066400000000000000000000070701505113246500315300ustar00rootroot00000000000000description: getKeys schemaVersion: "1.8" runOnRequirements: - csfle: true createEntities: - client: id: &client0 client0 observeEvents: - commandStartedEvent - clientEncryption: id: &clientEncryption0 clientEncryption0 clientEncryptionOpts: keyVaultClient: *client0 keyVaultNamespace: keyvault.datakeys kmsProviders: local: { key: { $$placeholder: 1 } } - database: id: &database0 database0 client: *client0 databaseName: &database0Name keyvault - collection: id: &collection0 collection0 database: *database0 collectionName: &collection0Name datakeys initialData: - databaseName: *database0Name collectionName: *collection0Name documents: [] tests: - description: getKeys with zero key documents operations: - name: getKeys object: *clientEncryption0 expectResult: [] expectEvents: - client: *client0 events: - commandStartedEvent: databaseName: *database0Name command: find: *collection0Name filter: {} readConcern: { level: majority } - description: getKeys with single key documents operations: - name: createDataKey object: *clientEncryption0 arguments: kmsProvider: local opts: keyAltNames: ["abc"] expectResult: { $$type: binData } - name: getKeys object: *clientEncryption0 expectResult: - _id: { $$type: binData } keyAltNames: ["abc"] keyMaterial: { $$type: binData } creationDate: { $$type: date } updateDate: { $$type: date } status: { $$type: int } masterKey: { $$type: object } expectEvents: - client: *client0 events: - commandStartedEvent: commandName: insert - commandStartedEvent: databaseName: *database0Name command: find: *collection0Name filter: {} readConcern: { level: majority } - description: getKeys with many key documents operations: - name: createDataKey object: *clientEncryption0 arguments: kmsProvider: local expectResult: { $$type: binData } - name: createDataKey object: *clientEncryption0 arguments: kmsProvider: 
local expectResult: { $$type: binData } - name: getKeys object: *clientEncryption0 expectResult: # Cannot expect deterministic order of results, so only assert that # exactly two key documents are returned. - _id: { $$type: binData } keyMaterial: { $$type: binData } creationDate: { $$type: date } updateDate: { $$type: date } status: { $$type: int } masterKey: { $$type: object } - _id: { $$type: binData } keyMaterial: { $$type: binData } creationDate: { $$type: date } updateDate: { $$type: date } status: { $$type: int } masterKey: { $$type: object } expectEvents: - client: *client0 events: - commandStartedEvent: commandName: insert - commandStartedEvent: commandName: insert - commandStartedEvent: databaseName: *database0Name command: find: *collection0Name filter: {} readConcern: { level: majority } mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_encryption/unified/removeKeyAltName.yml000066400000000000000000000135461505113246500333320ustar00rootroot00000000000000description: removeKeyAltName schemaVersion: "1.8" runOnRequirements: - csfle: true createEntities: - client: id: &client0 client0 observeEvents: - commandStartedEvent - clientEncryption: id: &clientEncryption0 clientEncryption0 clientEncryptionOpts: keyVaultClient: *client0 keyVaultNamespace: keyvault.datakeys kmsProviders: local: { key: { $$placeholder: 1 } } - database: id: &database0 database0 client: *client0 databaseName: &database0Name keyvault - collection: id: &collection0 collection0 database: *database0 collectionName: &collection0Name datakeys initialData: - databaseName: *database0Name collectionName: *collection0Name documents: - &local_key_doc _id: &local_key_id { $binary: { base64: bG9jYWxrZXlsb2NhbGtleQ==, subType: "04" } } keyAltNames: [alternate_name, local_key] keyMaterial: { $binary: { base64: ABKBldDEoDW323yejOnIRk6YQmlD9d3eQthd16scKL75nz2LjNL9fgPDZWrFFOlqlhMCFaSrNJfGrFUjYk5JFDO7soG5Syb50k1niJoKg4ilsj0L4mpimFUtTpOr2nzZOeQtvAksEXc7gsFgq8gV7t/U3lsaXPY7I0t42DfSE8EGlPdxRjFdHnxh+OR8h7U9b8Qs5K5UuhgyeyxaBZ1Hgw==, subType: "00" } } creationDate: { $date: { $numberLong: "1641024000000" } } updateDate: { $date: { $numberLong: "1641024000000" } } status: 1 masterKey: provider: local tests: - description: remove keyAltName from non-existent data key operations: - name: removeKeyAltName object: *clientEncryption0 arguments: # First 3 letters of local_key_id replaced with 'A' (value: "#alkeylocalkey"). 
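# (removeKeyAltName is specified as a findAndModify with a pipeline update:
# $filter strips the given name, and $$REMOVE unsets keyAltNames entirely when
# the last name is removed; the expected events below spell the pipeline out.)
# Hedged sketch, assuming the driver exposes the key management helper:
#
#   client_encryption.remove_key_alt_name(non_existent_id, 'does_not_exist')
#   # => nil for a non-existent key id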
id: &non_existent_id { $binary: { base64: AAAjYWxrZXlsb2NhbGtleQ==, subType: "04" } } keyAltName: does_not_exist expectResult: { $$unsetOrMatches: null } expectEvents: - client: *client0 events: - commandStartedEvent: databaseName: *database0Name command: findAndModify: *collection0Name query: { _id: *non_existent_id } update: [{ $set: { keyAltNames: { $cond: [{ $eq: [$keyAltNames, [does_not_exist]] }, $$REMOVE, { $filter: { input: $keyAltNames, cond: { $ne: [$$this, does_not_exist] } } }] } } }] writeConcern: { w: majority } outcome: - collectionName: *collection0Name databaseName: *database0Name documents: - *local_key_doc - description: remove non-existent keyAltName from existing data key operations: - name: removeKeyAltName object: *clientEncryption0 arguments: id: *local_key_id keyAltName: does_not_exist expectResult: *local_key_doc expectEvents: - client: *client0 events: - commandStartedEvent: databaseName: *database0Name command: findAndModify: *collection0Name query: { _id: *local_key_id } update: [{ $set: { keyAltNames: { $cond: [{ $eq: [$keyAltNames, [does_not_exist]] }, $$REMOVE, { $filter: { input: $keyAltNames, cond: { $ne: [$$this, does_not_exist] } } }] } } }] writeConcern: { w: majority } outcome: - collectionName: *collection0Name databaseName: *database0Name documents: - *local_key_doc - description: remove an existing keyAltName from an existing data key operations: - name: removeKeyAltName object: *clientEncryption0 arguments: id: *local_key_id keyAltName: alternate_name expectResult: *local_key_doc - name: find object: *collection0 arguments: filter: {} projection: { _id: 0, keyAltNames: 1 } expectResult: - keyAltNames: [local_key] expectEvents: - client: *client0 events: - commandStartedEvent: databaseName: *database0Name command: findAndModify: *collection0Name query: { _id: *local_key_id } update: [{ $set: { keyAltNames: { $cond: [{ $eq: [$keyAltNames, [alternate_name]] }, $$REMOVE, { $filter: { input: $keyAltNames, cond: { $ne: [$$this, alternate_name] } } }] } } }] writeConcern: { w: majority } - commandStartedEvent: { commandName: find } - description: remove the last keyAltName from an existing data key operations: - name: removeKeyAltName object: *clientEncryption0 arguments: id: *local_key_id keyAltName: alternate_name expectResult: *local_key_doc - name: removeKeyAltName object: *clientEncryption0 arguments: id: *local_key_id keyAltName: local_key expectResult: _id: *local_key_id keyAltNames: [local_key] keyMaterial: { $$type: binData } creationDate: { $$type: date } updateDate: { $$type: date } status: 1 masterKey: provider: local expectEvents: - client: *client0 events: - commandStartedEvent: databaseName: *database0Name command: findAndModify: *collection0Name query: { _id: *local_key_id } update: [{ $set: { keyAltNames: { $cond: [{ $eq: [$keyAltNames, [alternate_name]] }, $$REMOVE, { $filter: { input: $keyAltNames, cond: { $ne: [$$this, alternate_name] } } }] } } }] writeConcern: { w: majority } - commandStartedEvent: databaseName: *database0Name command: findAndModify: *collection0Name query: { _id: *local_key_id } update: [{ $set: { keyAltNames: { $cond: [{ $eq: [$keyAltNames, [local_key]] }, $$REMOVE, { $filter: { input: $keyAltNames, cond: { $ne: [$$this, local_key] } } }] } } }] rewrapManyDataKey-decrypt_failure.yml000066400000000000000000000051271505113246500366060ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_encryption/unifieddescription: rewrapManyDataKey-decrypt_failure schemaVersion: "1.8" 
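# This file's initialData deliberately corrupts the stored masterKey (the ARN
# and region point at us-east-2), so the KMS decrypt phase of rewrapManyDataKey
# fails before any update is attempted; only the initial find is expected.
# Minimal sketch of the failing call, assuming a configured client_encryption:
#
#   client_encryption.rewrap_many_data_key({}, provider: 'local')
#   # => expected to raise a client-side (KMS decryption) error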
runOnRequirements: - csfle: true createEntities: - client: id: &client0 client0 observeEvents: - commandStartedEvent - clientEncryption: id: &clientEncryption0 clientEncryption0 clientEncryptionOpts: keyVaultClient: *client0 keyVaultNamespace: keyvault.datakeys kmsProviders: aws: { accessKeyId: { $$placeholder: 1 }, secretAccessKey: { $$placeholder: 1 } } azure: { tenantId: { $$placeholder: 1 }, clientId: { $$placeholder: 1 }, clientSecret: { $$placeholder: 1 } } gcp: { email: { $$placeholder: 1 }, privateKey: { $$placeholder: 1 } } kmip: { endpoint: { $$placeholder: 1 } } local: { key: { $$placeholder: 1 } } - database: id: &database0 database0 client: *client0 databaseName: &database0Name keyvault - collection: id: &collection0 collection0 database: *database0 collectionName: &collection0Name datakeys initialData: - databaseName: *database0Name collectionName: *collection0Name documents: - _id: { $binary: { base64: YXdzYXdzYXdzYXdzYXdzYQ==, subType: "04" } } keyAltNames: ["aws_key"] keyMaterial: { $binary: { base64: AQICAHhQNmWG2CzOm1dq3kWLM+iDUZhEqnhJwH9wZVpuZ94A8gFXJqbF0Fy872MD7xl56D/2AAAAwjCBvwYJKoZIhvcNAQcGoIGxMIGuAgEAMIGoBgkqhkiG9w0BBwEwHgYJYIZIAWUDBAEuMBEEDO7HPisPUlGzaio9vgIBEIB7/Qow46PMh/8JbEUbdXgTGhLfXPE+KIVW7T8s6YEMlGiRvMu7TV0QCIUJlSHPKZxzlJ2iwuz5yXeOag+EdY+eIQ0RKrsJ3b8UTisZYzGjfzZnxUKLzLoeXremtRCm3x47wCuHKd1dhh6FBbYt5TL2tDaj+vL2GBrKat2L, subType: "00" } } creationDate: { $date: { $numberLong: "1641024000000" } } updateDate: { $date: { $numberLong: "1641024000000" } } status: 1 masterKey: provider: aws # "us-east-1" changed to "us-east-2" in both key and region. key: arn:aws:kms:us-east-2:579766882180:key/89fcc2c4-08b0-4bd9-9f25-e30687b580d0 region: us-east-2 tests: - description: "rewrap data key that fails during decryption due to invalid masterKey" operations: - name: rewrapManyDataKey object: *clientEncryption0 arguments: filter: {} opts: provider: local expectError: isClientError: true expectEvents: - client: *client0 events: - commandStartedEvent: commandName: find databaseName: *database0Name command: find: *collection0Name filter: {} readConcern: { level: majority } rewrapManyDataKey-encrypt_failure.yml000066400000000000000000000101461505113246500366150ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_encryption/unifieddescription: rewrapManyDataKey-encrypt_failure schemaVersion: "1.8" runOnRequirements: - csfle: true createEntities: - client: id: &client0 client0 observeEvents: - commandStartedEvent - clientEncryption: id: &clientEncryption0 clientEncryption0 clientEncryptionOpts: keyVaultClient: *client0 keyVaultNamespace: keyvault.datakeys kmsProviders: aws: { accessKeyId: { $$placeholder: 1 }, secretAccessKey: { $$placeholder: 1 } } azure: { tenantId: { $$placeholder: 1 }, clientId: { $$placeholder: 1 }, clientSecret: { $$placeholder: 1 } } gcp: { email: { $$placeholder: 1 }, privateKey: { $$placeholder: 1 } } kmip: { endpoint: { $$placeholder: 1 } } local: { key: { $$placeholder: 1 } } - database: id: &database0 database0 client: *client0 databaseName: &database0Name keyvault - collection: id: &collection0 collection0 database: *database0 collectionName: &collection0Name datakeys initialData: - databaseName: *database0Name collectionName: *collection0Name documents: - _id: { $binary: { base64: bG9jYWxrZXlsb2NhbGtleQ==, subType: "04" } } keyAltNames: ["local_key"] keyMaterial: { $binary: { base64: 
ABKBldDEoDW323yejOnIRk6YQmlD9d3eQthd16scKL75nz2LjNL9fgPDZWrFFOlqlhMCFaSrNJfGrFUjYk5JFDO7soG5Syb50k1niJoKg4ilsj0L4mpimFUtTpOr2nzZOeQtvAksEXc7gsFgq8gV7t/U3lsaXPY7I0t42DfSE8EGlPdxRjFdHnxh+OR8h7U9b8Qs5K5UuhgyeyxaBZ1Hgw==, subType: "00" } } creationDate: { $date: { $numberLong: "1641024000000" } } updateDate: { $date: { $numberLong: "1641024000000" } } status: 1 masterKey: provider: local tests: - description: "rewrap with invalid masterKey for AWS KMS provider" operations: - name: rewrapManyDataKey object: *clientEncryption0 arguments: filter: {} opts: provider: aws masterKey: # "us-east-1" changed to "us-east-2" in both key and region. key: arn:aws:kms:us-east-2:579766882180:key/89fcc2c4-08b0-4bd9-9f25-e30687b580d0 region: us-east-2 expectError: isClientError: true expectEvents: - client: *client0 events: - commandStartedEvent: commandName: find databaseName: *database0Name command: find: *collection0Name filter: {} readConcern: { level: majority } - description: "rewrap with invalid masterKey for Azure KMS provider" operations: - name: rewrapManyDataKey object: *clientEncryption0 arguments: filter: {} opts: provider: azure masterKey: # "key" changed to "invalid" in both keyVaultEndpoint and keyName. keyVaultEndpoint: invalid-vault-csfle.vault.azure.net keyName: invalid-name-csfle expectError: isClientError: true expectEvents: - client: *client0 events: - commandStartedEvent: commandName: find databaseName: *database0Name command: find: *collection0Name filter: {} readConcern: { level: majority } - description: "rewrap with invalid masterKey for GCP KMS provider" operations: - name: rewrapManyDataKey object: *clientEncryption0 arguments: filter: {} opts: provider: gcp masterKey: # "key" changed to "invalid" in both keyRing and keyName. projectId: devprod-drivers location: global keyRing: invalid-ring-csfle keyName: invalid-name-csfle expectError: isClientError: true expectEvents: - client: *client0 events: - commandStartedEvent: commandName: find databaseName: *database0Name command: find: *collection0Name filter: {} readConcern: { level: majority } mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_encryption/unified/rewrapManyDataKey.yml000066400000000000000000000505521505113246500335100ustar00rootroot00000000000000# To ensure consistent ordering for expectResult matching purposes, find # commands sort the resulting documents in ascending order by the single-element # keyAltNames array to ensure alphabetic order by original KMS provider as # defined in initialData. 
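# rewrapManyDataKey runs in two phases: a find (readConcern majority) that
# fetches every matching key document, then one ordered bulk update that swaps
# in the re-encrypted keyMaterial and new masterKey and bumps updateDate via
# $currentDate. Hedged sketch of the call being exercised, assuming a
# configured client_encryption and eliding the full ARN:
#
#   client_encryption.rewrap_many_data_key(
#     { keyAltNames: { '$ne' => 'aws_key' } },
#     provider: 'aws',
#     master_key: { region: 'us-east-1', key: 'arn:aws:kms:...' }
#   )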
description: rewrapManyDataKey schemaVersion: "1.8" runOnRequirements: - csfle: true createEntities: - client: id: &client0 client0 observeEvents: - commandStartedEvent - clientEncryption: id: &clientEncryption0 clientEncryption0 clientEncryptionOpts: keyVaultClient: *client0 keyVaultNamespace: keyvault.datakeys kmsProviders: aws: { accessKeyId: { $$placeholder: 1 }, secretAccessKey: { $$placeholder: 1 } } azure: { tenantId: { $$placeholder: 1 }, clientId: { $$placeholder: 1 }, clientSecret: { $$placeholder: 1 } } gcp: { email: { $$placeholder: 1 }, privateKey: { $$placeholder: 1 } } kmip: { endpoint: { $$placeholder: 1 } } local: { key: { $$placeholder: 1 } } - database: id: &database0 database0 client: *client0 databaseName: &database0Name keyvault - collection: id: &collection0 collection0 database: *database0 collectionName: &collection0Name datakeys initialData: - databaseName: *database0Name collectionName: *collection0Name documents: - _id: &aws_key_id { $binary: { base64: YXdzYXdzYXdzYXdzYXdzYQ==, subType: "04" } } keyAltNames: ["aws_key"] keyMaterial: { $binary: { base64: AQICAHhQNmWG2CzOm1dq3kWLM+iDUZhEqnhJwH9wZVpuZ94A8gFXJqbF0Fy872MD7xl56D/2AAAAwjCBvwYJKoZIhvcNAQcGoIGxMIGuAgEAMIGoBgkqhkiG9w0BBwEwHgYJYIZIAWUDBAEuMBEEDO7HPisPUlGzaio9vgIBEIB7/Qow46PMh/8JbEUbdXgTGhLfXPE+KIVW7T8s6YEMlGiRvMu7TV0QCIUJlSHPKZxzlJ2iwuz5yXeOag+EdY+eIQ0RKrsJ3b8UTisZYzGjfzZnxUKLzLoeXremtRCm3x47wCuHKd1dhh6FBbYt5TL2tDaj+vL2GBrKat2L, subType: "00" } } creationDate: { $date: { $numberLong: "1641024000000" } } updateDate: { $date: { $numberLong: "1641024000000" } } status: 1 masterKey: &aws_masterkey provider: aws key: arn:aws:kms:us-east-1:579766882180:key/89fcc2c4-08b0-4bd9-9f25-e30687b580d0 region: us-east-1 - _id: &azure_key_id { $binary: { base64: YXp1cmVhenVyZWF6dXJlYQ==, subType: "04" } } keyAltNames: ["azure_key"] keyMaterial: { $binary: { base64: pr01l7qDygUkFE/0peFwpnNlv3iIy8zrQK38Q9i12UCN2jwZHDmfyx8wokiIKMb9kAleeY+vnt3Cf1MKu9kcDmI+KxbNDd+V3ytAAGzOVLDJr77CiWjF9f8ntkXRHrAY9WwnVDANYkDwXlyU0Y2GQFTiW65jiQhUtYLYH63Tk48SsJuQvnWw1Q+PzY8ga+QeVec8wbcThwtm+r2IHsCFnc72Gv73qq7weISw+O4mN08z3wOp5FOS2ZM3MK7tBGmPdBcktW7F8ODGsOQ1FU53OrWUnyX2aTi2ftFFFMWVHqQo7EYuBZHru8RRODNKMyQk0BFfKovAeTAVRv9WH9QU7g==, subType: "00" } } creationDate: { $date: { $numberLong: "1641024000000" } } updateDate: { $date: { $numberLong: "1641024000000" } } status: 1 masterKey: &azure_masterkey provider: azure keyVaultEndpoint: key-vault-csfle.vault.azure.net keyName: key-name-csfle - _id: &gcp_key_id { $binary: { base64: Z2NwZ2NwZ2NwZ2NwZ2NwZw==, subType: "04" } } keyAltNames: ["gcp_key"] keyMaterial: { $binary: { base64: CiQAIgLj0USbQtof/pYRLQO96yg/JEtZbD1UxKueaC37yzT5tTkSiQEAhClWB5ZCSgzHgxv8raWjNB4r7e8ePGdsmSuYTYmLC5oHHS/BdQisConzNKFaobEQZHamTCjyhy5NotKF8MWoo+dyfQApwI29+vAGyrUIQCXzKwRnNdNQ+lb3vJtS5bqvLTvSxKHpVca2kqyC9nhonV+u4qru5Q2bAqUgVFc8fL4pBuvlowZFTQ==, subType: "00" } } creationDate: { $date: { $numberLong: "1641024000000" } } updateDate: { $date: { $numberLong: "1641024000000" } } status: 1 masterKey: &gcp_masterkey provider: gcp projectId: devprod-drivers location: global keyRing: key-ring-csfle keyName: key-name-csfle - _id: &kmip_key_id { $binary: { base64: a21pcGttaXBrbWlwa21pcA==, subType: "04" } } keyAltNames: ["kmip_key"] keyMaterial: { $binary: { base64: CklVctHzke4mcytd0TxGqvepkdkQN8NUF4+jV7aZQITAKdz6WjdDpq3lMt9nSzWGG2vAEfvRb3mFEVjV57qqGqxjq2751gmiMRHXz0btStbIK3mQ5xbY9kdye4tsixlCryEwQONr96gwlwKKI9Nubl9/8+uRF6tgYjje7Q7OjauEf1SrJwKcoQ3WwnjZmEqAug0kImCpJ/irhdqPzivRiA==, subType: "00" } } creationDate: { $date: { $numberLong: 
"1641024000000" } } updateDate: { $date: { $numberLong: "1641024000000" } } status: 1 masterKey: &kmip_masterkey provider: kmip keyId: "1" - _id: &local_key_id { $binary: { base64: bG9jYWxrZXlsb2NhbGtleQ==, subType: "04" } } keyAltNames: ["local_key"] keyMaterial: { $binary: { base64: ABKBldDEoDW323yejOnIRk6YQmlD9d3eQthd16scKL75nz2LjNL9fgPDZWrFFOlqlhMCFaSrNJfGrFUjYk5JFDO7soG5Syb50k1niJoKg4ilsj0L4mpimFUtTpOr2nzZOeQtvAksEXc7gsFgq8gV7t/U3lsaXPY7I0t42DfSE8EGlPdxRjFdHnxh+OR8h7U9b8Qs5K5UuhgyeyxaBZ1Hgw==, subType: "00" } } creationDate: { $date: { $numberLong: "1641024000000" } } updateDate: { $date: { $numberLong: "1641024000000" } } status: 1 masterKey: &local_masterkey provider: local tests: - description: "no keys to rewrap due to no filter matches" operations: - name: rewrapManyDataKey object: *clientEncryption0 arguments: filter: { keyAltNames: no_matching_keys } opts: provider: local expectResult: # If no bulk write operation, then no bulk write result. bulkWriteResult: { $$exists: false } expectEvents: - client: *client0 events: - commandStartedEvent: databaseName: *database0Name command: find: *collection0Name filter: { keyAltNames: no_matching_keys } readConcern: { level: majority } - description: "rewrap with new AWS KMS provider" operations: - name: rewrapManyDataKey object: *clientEncryption0 arguments: filter: { keyAltNames: { $ne: aws_key } } opts: provider: aws # Different key: 89fcc2c4-08b0-4bd9-9f25-e30687b580d0 -> 061334ae-07a8-4ceb-a813-8135540e837d. masterKey: &new_aws_masterkey key: arn:aws:kms:us-east-1:579766882180:key/061334ae-07a8-4ceb-a813-8135540e837d region: us-east-1 expectResult: bulkWriteResult: insertedCount: 0 matchedCount: 4 modifiedCount: 4 deletedCount: 0 upsertedCount: 0 upsertedIds: {} expectEvents: - client: *client0 events: - commandStartedEvent: databaseName: *database0Name command: find: *collection0Name filter: { keyAltNames: { $ne: aws_key } } readConcern: { level: majority } - commandStartedEvent: databaseName: *database0Name command: update: *collection0Name ordered: true updates: - q: { _id: { $$type: binData } } u: { $set: { masterKey: { provider: aws, <<: *new_aws_masterkey }, keyMaterial: { $$type: binData } }, $currentDate: { updateDate: true } } multi: { $$unsetOrMatches: false } upsert: { $$unsetOrMatches: false } - q: { _id: { $$type: binData } } u: { $set: { masterKey: { provider: aws, <<: *new_aws_masterkey }, keyMaterial: { $$type: binData } }, $currentDate: { updateDate: true } } multi: { $$unsetOrMatches: false } upsert: { $$unsetOrMatches: false } - q: { _id: { $$type: binData } } u: { $set: { masterKey: { provider: aws, <<: *new_aws_masterkey }, keyMaterial: { $$type: binData } }, $currentDate: { updateDate: true } } multi: { $$unsetOrMatches: false } upsert: { $$unsetOrMatches: false } - q: { _id: { $$type: binData } } u: { $set: { masterKey: { provider: aws, <<: *new_aws_masterkey }, keyMaterial: { $$type: binData } }, $currentDate: { updateDate: true } } multi: { $$unsetOrMatches: false } upsert: { $$unsetOrMatches: false } writeConcern: { w: majority } - description: "rewrap with new Azure KMS provider" operations: - name: rewrapManyDataKey object: *clientEncryption0 arguments: filter: { keyAltNames: { $ne: azure_key } } opts: provider: azure masterKey: &new_azure_masterkey keyVaultEndpoint: key-vault-csfle.vault.azure.net keyName: key-name-csfle expectResult: bulkWriteResult: insertedCount: 0 matchedCount: 4 modifiedCount: 4 deletedCount: 0 upsertedCount: 0 upsertedIds: {} expectEvents: - client: *client0 events: - 
commandStartedEvent: databaseName: *database0Name command: find: *collection0Name filter: { keyAltNames: { $ne: azure_key } } readConcern: { level: majority } - commandStartedEvent: databaseName: *database0Name command: update: *collection0Name ordered: true updates: - q: { _id: { $$type: binData } } u: { $set: { masterKey: { provider: azure, <<: *new_azure_masterkey }, keyMaterial: { $$type: binData } }, $currentDate: { updateDate: true } } multi: { $$unsetOrMatches: false } upsert: { $$unsetOrMatches: false } - q: { _id: { $$type: binData } } u: { $set: { masterKey: { provider: azure, <<: *new_azure_masterkey }, keyMaterial: { $$type: binData } }, $currentDate: { updateDate: true } } multi: { $$unsetOrMatches: false } upsert: { $$unsetOrMatches: false } - q: { _id: { $$type: binData } } u: { $set: { masterKey: { provider: azure, <<: *new_azure_masterkey }, keyMaterial: { $$type: binData } }, $currentDate: { updateDate: true } } multi: { $$unsetOrMatches: false } upsert: { $$unsetOrMatches: false } - q: { _id: { $$type: binData } } u: { $set: { masterKey: { provider: azure, <<: *new_azure_masterkey }, keyMaterial: { $$type: binData } }, $currentDate: { updateDate: true } } multi: { $$unsetOrMatches: false } upsert: { $$unsetOrMatches: false } writeConcern: { w: majority } - description: "rewrap with new GCP KMS provider" operations: - name: rewrapManyDataKey object: *clientEncryption0 arguments: filter: { keyAltNames: { $ne: gcp_key } } opts: provider: gcp masterKey: &new_gcp_masterkey projectId: devprod-drivers location: global keyRing: key-ring-csfle keyName: key-name-csfle expectResult: bulkWriteResult: insertedCount: 0 matchedCount: 4 modifiedCount: 4 deletedCount: 0 upsertedCount: 0 upsertedIds: {} expectEvents: - client: *client0 events: - commandStartedEvent: databaseName: *database0Name command: find: *collection0Name filter: { keyAltNames: { $ne: gcp_key } } readConcern: { level: majority } - commandStartedEvent: databaseName: *database0Name command: update: *collection0Name ordered: true updates: - q: { _id: { $$type: binData } } u: { $set: { masterKey: { provider: gcp, <<: *new_gcp_masterkey }, keyMaterial: { $$type: binData } }, $currentDate: { updateDate: true } } multi: { $$unsetOrMatches: false } upsert: { $$unsetOrMatches: false } - q: { _id: { $$type: binData } } u: { $set: { masterKey: { provider: gcp, <<: *new_gcp_masterkey }, keyMaterial: { $$type: binData } }, $currentDate: { updateDate: true } } multi: { $$unsetOrMatches: false } upsert: { $$unsetOrMatches: false } - q: { _id: { $$type: binData } } u: { $set: { masterKey: { provider: gcp, <<: *new_gcp_masterkey }, keyMaterial: { $$type: binData } }, $currentDate: { updateDate: true } } multi: { $$unsetOrMatches: false } upsert: { $$unsetOrMatches: false } - q: { _id: { $$type: binData } } u: { $set: { masterKey: { provider: gcp, <<: *new_gcp_masterkey }, keyMaterial: { $$type: binData } }, $currentDate: { updateDate: true } } multi: { $$unsetOrMatches: false } upsert: { $$unsetOrMatches: false } writeConcern: { w: majority } - description: "rewrap with new KMIP KMS provider" operations: - name: rewrapManyDataKey object: *clientEncryption0 arguments: filter: { keyAltNames: { $ne: kmip_key } } opts: provider: kmip expectResult: bulkWriteResult: insertedCount: 0 matchedCount: 4 modifiedCount: 4 deletedCount: 0 upsertedCount: 0 upsertedIds: {} expectEvents: - client: *client0 events: - commandStartedEvent: databaseName: *database0Name command: find: *collection0Name filter: { keyAltNames: { $ne: kmip_key } } readConcern: 
{ level: majority } - commandStartedEvent: databaseName: *database0Name command: update: *collection0Name ordered: true updates: - q: { _id: { $$type: binData } } u: { $set: { masterKey: { provider: kmip, keyId: { $$type: string } }, keyMaterial: { $$type: binData } }, $currentDate: { updateDate: true } } multi: { $$unsetOrMatches: false } upsert: { $$unsetOrMatches: false } - q: { _id: { $$type: binData } } u: { $set: { masterKey: { provider: kmip, keyId: { $$type: string } }, keyMaterial: { $$type: binData } }, $currentDate: { updateDate: true } } multi: { $$unsetOrMatches: false } upsert: { $$unsetOrMatches: false } - q: { _id: { $$type: binData } } u: { $set: { masterKey: { provider: kmip, keyId: { $$type: string } }, keyMaterial: { $$type: binData } }, $currentDate: { updateDate: true } } multi: { $$unsetOrMatches: false } upsert: { $$unsetOrMatches: false } - q: { _id: { $$type: binData } } u: { $set: { masterKey: { provider: kmip, keyId: { $$type: string } }, keyMaterial: { $$type: binData } }, $currentDate: { updateDate: true } } multi: { $$unsetOrMatches: false } upsert: { $$unsetOrMatches: false } writeConcern: { w: majority } - description: "rewrap with new local KMS provider" operations: - name: rewrapManyDataKey object: *clientEncryption0 arguments: filter: { keyAltNames: { $ne: local_key } } opts: provider: local expectResult: bulkWriteResult: insertedCount: 0 matchedCount: 4 modifiedCount: 4 deletedCount: 0 upsertedCount: 0 upsertedIds: {} expectEvents: - client: *client0 events: - commandStartedEvent: databaseName: *database0Name command: find: *collection0Name filter: { keyAltNames: { $ne: local_key } } readConcern: { level: majority } - commandStartedEvent: databaseName: *database0Name command: update: *collection0Name ordered: true updates: - q: { _id: { $$type: binData } } u: { $set: { masterKey: { provider: local }, keyMaterial: { $$type: binData } }, $currentDate: { updateDate: true } } multi: { $$unsetOrMatches: false } upsert: { $$unsetOrMatches: false } - q: { _id: { $$type: binData } } u: { $set: { masterKey: { provider: local }, keyMaterial: { $$type: binData } }, $currentDate: { updateDate: true } } multi: { $$unsetOrMatches: false } upsert: { $$unsetOrMatches: false } - q: { _id: { $$type: binData } } u: { $set: { masterKey: { provider: local }, keyMaterial: { $$type: binData } }, $currentDate: { updateDate: true } } multi: { $$unsetOrMatches: false } upsert: { $$unsetOrMatches: false } - q: { _id: { $$type: binData } } u: { $set: { masterKey: { provider: local }, keyMaterial: { $$type: binData } }, $currentDate: { updateDate: true } } multi: { $$unsetOrMatches: false } upsert: { $$unsetOrMatches: false } writeConcern: { w: majority } - description: "rewrap with current KMS provider" operations: - name: rewrapManyDataKey object: *clientEncryption0 arguments: filter: {} expectResult: bulkWriteResult: insertedCount: 0 matchedCount: 5 modifiedCount: 5 deletedCount: 0 upsertedCount: 0 upsertedIds: {} - name: find object: *collection0 arguments: filter: {} projection: { masterKey: 1 } sort: { keyAltNames: 1 } expectResult: - { _id: *aws_key_id, masterKey: *aws_masterkey } - { _id: *azure_key_id, masterKey: *azure_masterkey } - { _id: *gcp_key_id, masterKey: *gcp_masterkey } - { _id: *kmip_key_id, masterKey: *kmip_masterkey } - { _id: *local_key_id, masterKey: *local_masterkey } expectEvents: - client: *client0 events: - commandStartedEvent: databaseName: *database0Name command: find: *collection0Name filter: {} readConcern: { level: majority } - commandStartedEvent: 
databaseName: *database0Name command: update: *collection0Name ordered: true updates: - q: { _id: { $$type: binData } } u: { $set: { masterKey: { $$type: object }, keyMaterial: { $$type: binData } }, $currentDate: { updateDate: true } } multi: { $$unsetOrMatches: false } upsert: { $$unsetOrMatches: false } - q: { _id: { $$type: binData } } u: { $set: { masterKey: { $$type: object }, keyMaterial: { $$type: binData } }, $currentDate: { updateDate: true } } multi: { $$unsetOrMatches: false } upsert: { $$unsetOrMatches: false } - q: { _id: { $$type: binData } } u: { $set: { masterKey: { $$type: object }, keyMaterial: { $$type: binData } }, $currentDate: { updateDate: true } } multi: { $$unsetOrMatches: false } upsert: { $$unsetOrMatches: false } - q: { _id: { $$type: binData } } u: { $set: { masterKey: { $$type: object }, keyMaterial: { $$type: binData } }, $currentDate: { updateDate: true } } multi: { $$unsetOrMatches: false } upsert: { $$unsetOrMatches: false } - q: { _id: { $$type: binData } } u: { $set: { masterKey: { $$type: object }, keyMaterial: { $$type: binData } }, $currentDate: { updateDate: true } } multi: { $$unsetOrMatches: false } upsert: { $$unsetOrMatches: false } writeConcern: { w: majority } - commandStartedEvent: { commandName: find } mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_encryption/unsupportedCommand.yml000066400000000000000000000052241505113246500323600ustar00rootroot00000000000000runOn: - minServerVersion: "4.1.10" database_name: &database_name "default" collection_name: &collection_name "default" data: - &doc0_encrypted { _id: 1, x: 1, encrypted_string: {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAACwj+3zkv2VM+aTfk60RqhXq6a/77WlLwu/BxXFkL7EppGsju/m8f0x5kBDD3EZTtGALGXlym5jnpZAoSIkswHoA==', 'subType': '06'}} } - &doc1_encrypted { _id: 2, x: 2, encrypted_string: {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAACDdw4KFz3ZLquhsbt7RmDjD0N67n0uSXx7IGnQNCLeIKvot6s/ouI21Eo84IOtb6lhwUNPlSEBNY0/hbszWAKJg==', 'subType': '06'}} } json_schema: {'properties': {'encrypted_w_altname': {'encrypt': {'keyId': '/altname', 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Random'}}, 'encrypted_string': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}, 'random': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Random'}}, 'encrypted_string_equivalent': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}}, 'bsonType': 'object'} key_vault_data: [{'status': 1, '_id': {'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}, 'masterKey': {'provider': 'aws', 'key': 'arn:aws:kms:us-east-1:579766882180:key/89fcc2c4-08b0-4bd9-9f25-e30687b580d0', 'region': 'us-east-1'}, 'updateDate': {'$date': {'$numberLong': '1552949630483'}}, 'keyMaterial': {'$binary': {'base64': 'AQICAHhQNmWG2CzOm1dq3kWLM+iDUZhEqnhJwH9wZVpuZ94A8gEqnsxXlR51T5EbEVezUqqKAAAAwjCBvwYJKoZIhvcNAQcGoIGxMIGuAgEAMIGoBgkqhkiG9w0BBwEwHgYJYIZIAWUDBAEuMBEEDHa4jo6yp0Z18KgbUgIBEIB74sKxWtV8/YHje5lv5THTl0HIbhSwM6EqRlmBiFFatmEWaeMk4tO4xBX65eq670I5TWPSLMzpp8ncGHMmvHqRajNBnmFtbYxN3E3/WjxmdbOOe+OXpnGJPcGsftc7cB2shRfA4lICPnE26+oVNXT6p0Lo20nY5XC7jyCO', 'subType': '00'}}, 'creationDate': {'$date': {'$numberLong': '1552949630483'}}, 'keyAltNames': 
['altname', 'another_altname']}] tests: - description: "mapReduce deterministic encryption (unsupported)" clientOptions: autoEncryptOpts: kmsProviders: aws: {} # Credentials filled in from environment. operations: - name: mapReduce arguments: map: { $code: "function inc() { return emit(0, this.x + 1) }" } reduce: { $code: "function sum(key, values) { return values.reduce((acc, x) => acc + x); }" } out: { inline: 1 } result: errorContains: "command not supported for auto encryption: mapReduce" mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_encryption/updateMany.yml000066400000000000000000000120221505113246500305720ustar00rootroot00000000000000runOn: - minServerVersion: "4.1.10" database_name: &database_name "default" collection_name: &collection_name "default" data: - &doc0_encrypted { _id: 1, encrypted_string: {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAACwj+3zkv2VM+aTfk60RqhXq6a/77WlLwu/BxXFkL7EppGsju/m8f0x5kBDD3EZTtGALGXlym5jnpZAoSIkswHoA==', 'subType': '06'}} } - &doc1_encrypted { _id: 2, encrypted_string: {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAACDdw4KFz3ZLquhsbt7RmDjD0N67n0uSXx7IGnQNCLeIKvot6s/ouI21Eo84IOtb6lhwUNPlSEBNY0/hbszWAKJg==', 'subType': '06'}} } json_schema: {'properties': {'encrypted_w_altname': {'encrypt': {'keyId': '/altname', 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Random'}}, 'encrypted_string': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}, 'random': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Random'}}, 'encrypted_string_equivalent': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}}, 'bsonType': 'object'} key_vault_data: [{'status': 1, '_id': {'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}, 'masterKey': {'provider': 'aws', 'key': 'arn:aws:kms:us-east-1:579766882180:key/89fcc2c4-08b0-4bd9-9f25-e30687b580d0', 'region': 'us-east-1'}, 'updateDate': {'$date': {'$numberLong': '1552949630483'}}, 'keyMaterial': {'$binary': {'base64': 'AQICAHhQNmWG2CzOm1dq3kWLM+iDUZhEqnhJwH9wZVpuZ94A8gEqnsxXlR51T5EbEVezUqqKAAAAwjCBvwYJKoZIhvcNAQcGoIGxMIGuAgEAMIGoBgkqhkiG9w0BBwEwHgYJYIZIAWUDBAEuMBEEDHa4jo6yp0Z18KgbUgIBEIB74sKxWtV8/YHje5lv5THTl0HIbhSwM6EqRlmBiFFatmEWaeMk4tO4xBX65eq670I5TWPSLMzpp8ncGHMmvHqRajNBnmFtbYxN3E3/WjxmdbOOe+OXpnGJPcGsftc7cB2shRfA4lICPnE26+oVNXT6p0Lo20nY5XC7jyCO', 'subType': '00'}}, 'creationDate': {'$date': {'$numberLong': '1552949630483'}}, 'keyAltNames': ['altname', 'another_altname']}] tests: - description: "updateMany with deterministic encryption" clientOptions: autoEncryptOpts: kmsProviders: aws: {} # Credentials filled in from environment. operations: - name: updateMany arguments: filter: { encrypted_string: { $in: [ "string0", "string1" ] } } update: { $set: { encrypted_string: "string2", random: "abc" } } result: matchedCount: 2 modifiedCount: 2 upsertedCount: 0 expectations: # Auto encryption will request the collection info. - command_started_event: command: listCollections: 1 filter: name: *collection_name command_name: listCollections # Then key is fetched from the key vault. 
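# (With auto encryption, the driver resolves the collection's JSON schema via
# listCollections, fetches the referenced data keys from keyvault.datakeys,
# and only then rewrites the command. Deterministic ciphertext is stable,
# which is what lets the $in equality filter below be matched server-side.)
# Sketch of the client configuration these tests assume (credentials elided):
#
#   Mongo::Client.new(uri, auto_encryption_options: {
#     key_vault_namespace: 'keyvault.datakeys',
#     kms_providers: { aws: { access_key_id: '...', secret_access_key: '...' } }
#   })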
- command_started_event: command: find: datakeys filter: {"$or": [{"_id": {"$in": [ {'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}} ] }}, {"keyAltNames": {"$in": []}}]} $db: keyvault readConcern: { level: "majority" } command_name: find - command_started_event: command: update: *collection_name updates: - q: { encrypted_string: { $in: [ {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAACwj+3zkv2VM+aTfk60RqhXq6a/77WlLwu/BxXFkL7EppGsju/m8f0x5kBDD3EZTtGALGXlym5jnpZAoSIkswHoA==', 'subType': '06'}}, {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAACDdw4KFz3ZLquhsbt7RmDjD0N67n0uSXx7IGnQNCLeIKvot6s/ouI21Eo84IOtb6lhwUNPlSEBNY0/hbszWAKJg==', 'subType': '06'}} ] } } u: { $set: { encrypted_string: {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAACQ76HWOut3DZtQuV90hp1aaCpZn95vZIaWmn+wrBehcEtcFwyJlBdlyzDzZTWPZCPgiFq72Wvh6Y7VbpU9NAp3A==', 'subType': '06'}}, random: { $$type: "binData" } } } multi: true upsert: false ordered: true command_name: update outcome: collection: # Outcome is checked using a separate MongoClient without auto encryption. data: - { _id: 1, encrypted_string: {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAACQ76HWOut3DZtQuV90hp1aaCpZn95vZIaWmn+wrBehcEtcFwyJlBdlyzDzZTWPZCPgiFq72Wvh6Y7VbpU9NAp3A==', 'subType': '06'}}, random: { $$type: "binData" } } - { _id: 2, encrypted_string: {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAACQ76HWOut3DZtQuV90hp1aaCpZn95vZIaWmn+wrBehcEtcFwyJlBdlyzDzZTWPZCPgiFq72Wvh6Y7VbpU9NAp3A==', 'subType': '06'}}, random: { $$type: "binData" } } - description: "updateMany fails when filtering on a random field" clientOptions: autoEncryptOpts: kmsProviders: aws: {} # Credentials filled in from environment. operations: - name: updateMany arguments: filter: { random: "abc" } update: { $set: { encrypted_string: "string1" } } result: errorContains: "Cannot query on fields encrypted with the randomized encryption"mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_encryption/updateOne.yml000066400000000000000000000175511505113246500304230ustar00rootroot00000000000000runOn: - minServerVersion: "4.1.10" database_name: &database_name "default" collection_name: &collection_name "default" data: - &doc0_encrypted { _id: 1, encrypted_string: {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAACwj+3zkv2VM+aTfk60RqhXq6a/77WlLwu/BxXFkL7EppGsju/m8f0x5kBDD3EZTtGALGXlym5jnpZAoSIkswHoA==', 'subType': '06'}} } json_schema: {'properties': {'encrypted_w_altname': {'encrypt': {'keyId': '/altname', 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Random'}}, 'encrypted_string': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}, 'random': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Random'}}, 'encrypted_string_equivalent': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}}, 'bsonType': 'object'} key_vault_data: [{'status': 1, '_id': {'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}, 'masterKey': {'provider': 'aws', 'key': 'arn:aws:kms:us-east-1:579766882180:key/89fcc2c4-08b0-4bd9-9f25-e30687b580d0', 'region': 'us-east-1'}, 'updateDate': {'$date': {'$numberLong': '1552949630483'}}, 'keyMaterial': {'$binary': {'base64': 
'AQICAHhQNmWG2CzOm1dq3kWLM+iDUZhEqnhJwH9wZVpuZ94A8gEqnsxXlR51T5EbEVezUqqKAAAAwjCBvwYJKoZIhvcNAQcGoIGxMIGuAgEAMIGoBgkqhkiG9w0BBwEwHgYJYIZIAWUDBAEuMBEEDHa4jo6yp0Z18KgbUgIBEIB74sKxWtV8/YHje5lv5THTl0HIbhSwM6EqRlmBiFFatmEWaeMk4tO4xBX65eq670I5TWPSLMzpp8ncGHMmvHqRajNBnmFtbYxN3E3/WjxmdbOOe+OXpnGJPcGsftc7cB2shRfA4lICPnE26+oVNXT6p0Lo20nY5XC7jyCO', 'subType': '00'}}, 'creationDate': {'$date': {'$numberLong': '1552949630483'}}, 'keyAltNames': ['altname', 'another_altname']}] tests: - description: "updateOne with deterministic encryption" clientOptions: autoEncryptOpts: kmsProviders: aws: {} # Credentials filled in from environment. operations: - name: updateOne arguments: filter: { encrypted_string: "string0" } update: { $set: { encrypted_string: "string1", random: "abc" } } result: matchedCount: 1 modifiedCount: 1 upsertedCount: 0 expectations: # Auto encryption will request the collection info. - command_started_event: command: listCollections: 1 filter: name: *collection_name command_name: listCollections # Then key is fetched from the key vault. - command_started_event: command: find: datakeys filter: {"$or": [{"_id": {"$in": [ {'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}} ] }}, {"keyAltNames": {"$in": []}}]} $db: keyvault readConcern: { level: "majority" } command_name: find - command_started_event: command: update: *collection_name updates: - q: { encrypted_string: { $eq: {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAACwj+3zkv2VM+aTfk60RqhXq6a/77WlLwu/BxXFkL7EppGsju/m8f0x5kBDD3EZTtGALGXlym5jnpZAoSIkswHoA==', 'subType': '06'}} } } u: { $set: {encrypted_string: {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAACDdw4KFz3ZLquhsbt7RmDjD0N67n0uSXx7IGnQNCLeIKvot6s/ouI21Eo84IOtb6lhwUNPlSEBNY0/hbszWAKJg==', 'subType': '06'}}, random: { $$type: "binData" } } } # DRIVERS-976: mongocryptd adds upsert and multi fields to all update commands, so these fields should be added to spec tests upsert: false multi: false ordered: true command_name: update outcome: collection: # Outcome is checked using a separate MongoClient without auto encryption. data: - { _id: 1, encrypted_string: {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAACDdw4KFz3ZLquhsbt7RmDjD0N67n0uSXx7IGnQNCLeIKvot6s/ouI21Eo84IOtb6lhwUNPlSEBNY0/hbszWAKJg==', 'subType': '06'}}, random: { $$type: "binData"} } - description: "updateOne fails when filtering on a random field" clientOptions: autoEncryptOpts: kmsProviders: aws: {} # Credentials filled in from environment. operations: - name: updateOne arguments: filter: { random: "abc" } update: { $set: { encrypted_string: "string1" } } result: errorContains: "Cannot query on fields encrypted with the randomized encryption" - description: "$unset works with an encrypted field" clientOptions: autoEncryptOpts: kmsProviders: aws: {} # Credentials filled in from environment. operations: - name: updateOne arguments: filter: { } update: { $unset: { encrypted_string: "" } } result: matchedCount: 1 modifiedCount: 1 upsertedCount: 0 expectations: # Auto encryption will request the collection info. - command_started_event: command: listCollections: 1 filter: name: *collection_name command_name: listCollections - command_started_event: command: update: *collection_name updates: - q: { } u: { $unset: {encrypted_string: "" } } ordered: true command_name: update outcome: collection: # Outcome is checked using a separate MongoClient without auto encryption. 
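# (The outcome is read with a plain client on purpose: an auto-encrypting
# client would transparently decrypt on reads and mask whether the stored
# values are really ciphertext.)
# Illustrative read-back, assuming a hypothetical uri:
#
#   plain = Mongo::Client.new(uri) # no auto_encryption_options
#   plain.use('default')['default'].find.to_a # raw documents as stored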
data: - { _id: 1 } - description: "$rename works if target value has same encryption options" clientOptions: autoEncryptOpts: kmsProviders: aws: {} # Credentials filled in from environment. operations: - name: updateOne arguments: filter: { } update: { $rename: { encrypted_string: "encrypted_string_equivalent" } } result: matchedCount: 1 modifiedCount: 1 upsertedCount: 0 expectations: # Auto encryption will request the collection info. - command_started_event: command: listCollections: 1 filter: name: *collection_name command_name: listCollections - command_started_event: command: update: *collection_name updates: - q: { } u: { $rename: {encrypted_string: "encrypted_string_equivalent" } } ordered: true command_name: update outcome: collection: # Outcome is checked using a separate MongoClient without auto encryption. data: - { _id: 1, encrypted_string_equivalent: {'$binary': {'base64': 'AQAAAAAAAAAAAAAAAAAAAAACwj+3zkv2VM+aTfk60RqhXq6a/77WlLwu/BxXFkL7EppGsju/m8f0x5kBDD3EZTtGALGXlym5jnpZAoSIkswHoA==', 'subType': '06'}} } - description: "$rename fails if target value has different encryption options" clientOptions: autoEncryptOpts: kmsProviders: aws: {} # Credentials filled in from environment. operations: - name: updateOne arguments: filter: { } update: { $rename: { encrypted_string: "random" } } result: errorContains: "$rename between two encrypted fields must have the same metadata or both be unencrypted" - description: "an invalid update (no $ operators) is validated and errors" skipReason: "The Ruby Driver supports this kind of update command" clientOptions: autoEncryptOpts: kmsProviders: aws: {} # Credentials filled in from environment. operations: - name: updateOne arguments: filter: { } update: { encrypted_string: "random" } result: errorContains: "" # Note, drivers differ in the error message. Just ensure an error is thrown. validatorAndPartialFieldExpression.yml000066400000000000000000000247411505113246500353700ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_encryption# This test requires libmongocrypt 1.5.0-alpha2. runOn: # Require server version 6.0.0 to get behavior added in SERVER-64911. - minServerVersion: "6.0.0" database_name: &database_name "default" collection_name: &collection_name "default" data: [] tests: - description: "create with a validator on an unencrypted field is OK" clientOptions: autoEncryptOpts: kmsProviders: local: {'key': {'$binary': {'base64': 'Mng0NCt4ZHVUYUJCa1kxNkVyNUR1QURhZ2h2UzR2d2RrZzh0cFBwM3R6NmdWMDFBMUN3YkQ5aXRRMkhGRGdQV09wOGVNYUMxT2k3NjZKelhaQmRCZGJkTXVyZG9uSjFk', 'subType': '00'}}} schemaMap: "default.encryptedCollection": {'properties': {'encrypted_w_altname': {'encrypt': {'keyId': '/altname', 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Random'}}, 'encrypted_string': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}, 'random': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Random'}}, 'encrypted_string_equivalent': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}}, 'bsonType': 'object'} operations: # Drop to remove a collection that may exist from previous test runs. 
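# (Query analysis rejects comparisons against encrypted fields, so a validator
# may only reference unencrypted fields; on 6.0+ the server applies the same
# check to create, collMod, and createIndexes per SERVER-64911.)
# Sketch of the passing case, assuming database wraps the "default" database:
#
#   database.command(create: 'encryptedCollection',
#                    validator: { unencrypted_string: 'foo' })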
- name: dropCollection object: database arguments: collection: "encryptedCollection" - name: createCollection object: database arguments: collection: "encryptedCollection" validator: unencrypted_string: "foo" - name: assertCollectionExists object: testRunner arguments: database: *database_name collection: "encryptedCollection" - description: "create with a validator on an encrypted field is an error" clientOptions: autoEncryptOpts: kmsProviders: local: {'key': {'$binary': {'base64': 'Mng0NCt4ZHVUYUJCa1kxNkVyNUR1QURhZ2h2UzR2d2RrZzh0cFBwM3R6NmdWMDFBMUN3YkQ5aXRRMkhGRGdQV09wOGVNYUMxT2k3NjZKelhaQmRCZGJkTXVyZG9uSjFk', 'subType': '00'}}} schemaMap: "default.encryptedCollection": {'properties': {'encrypted_w_altname': {'encrypt': {'keyId': '/altname', 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Random'}}, 'encrypted_string': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}, 'random': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Random'}}, 'encrypted_string_equivalent': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}}, 'bsonType': 'object'} operations: # Drop to remove a collection that may exist from previous test runs. - name: dropCollection object: database arguments: collection: "encryptedCollection" - name: createCollection object: database arguments: collection: "encryptedCollection" validator: encrypted_string: "foo" result: errorContains: "Comparison to encrypted fields not supported" - description: "collMod with a validator on an unencrypted field is OK" clientOptions: autoEncryptOpts: kmsProviders: local: {'key': {'$binary': {'base64': 'Mng0NCt4ZHVUYUJCa1kxNkVyNUR1QURhZ2h2UzR2d2RrZzh0cFBwM3R6NmdWMDFBMUN3YkQ5aXRRMkhGRGdQV09wOGVNYUMxT2k3NjZKelhaQmRCZGJkTXVyZG9uSjFk', 'subType': '00'}}} schemaMap: "default.encryptedCollection": {'properties': {'encrypted_w_altname': {'encrypt': {'keyId': '/altname', 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Random'}}, 'encrypted_string': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}, 'random': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Random'}}, 'encrypted_string_equivalent': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}}, 'bsonType': 'object'} operations: # Drop to remove a collection that may exist from previous test runs. 
- name: dropCollection object: database arguments: collection: "encryptedCollection" - name: createCollection object: database arguments: collection: "encryptedCollection" - name: runCommand object: database arguments: command: collMod: "encryptedCollection" validator: unencrypted_string: "foo" - description: "collMod with a validator on an encrypted field is an error" clientOptions: autoEncryptOpts: kmsProviders: local: {'key': {'$binary': {'base64': 'Mng0NCt4ZHVUYUJCa1kxNkVyNUR1QURhZ2h2UzR2d2RrZzh0cFBwM3R6NmdWMDFBMUN3YkQ5aXRRMkhGRGdQV09wOGVNYUMxT2k3NjZKelhaQmRCZGJkTXVyZG9uSjFk', 'subType': '00'}}} schemaMap: "default.encryptedCollection": {'properties': {'encrypted_w_altname': {'encrypt': {'keyId': '/altname', 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Random'}}, 'encrypted_string': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}, 'random': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Random'}}, 'encrypted_string_equivalent': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}}, 'bsonType': 'object'} operations: # Drop to remove a collection that may exist from previous test runs. - name: dropCollection object: database arguments: collection: "encryptedCollection" - name: createCollection object: database arguments: collection: "encryptedCollection" - name: runCommand object: database arguments: command: collMod: "encryptedCollection" validator: encrypted_string: "foo" result: errorContains: "Comparison to encrypted fields not supported" - description: "createIndexes with a partialFilterExpression on an unencrypted field is OK" clientOptions: autoEncryptOpts: kmsProviders: local: {'key': {'$binary': {'base64': 'Mng0NCt4ZHVUYUJCa1kxNkVyNUR1QURhZ2h2UzR2d2RrZzh0cFBwM3R6NmdWMDFBMUN3YkQ5aXRRMkhGRGdQV09wOGVNYUMxT2k3NjZKelhaQmRCZGJkTXVyZG9uSjFk', 'subType': '00'}}} schemaMap: "default.encryptedCollection": {'properties': {'encrypted_w_altname': {'encrypt': {'keyId': '/altname', 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Random'}}, 'encrypted_string': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}, 'random': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Random'}}, 'encrypted_string_equivalent': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}}, 'bsonType': 'object'} operations: # Drop to remove a collection that may exist from previous test runs. 
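#
# The collMod operations above are plain database commands; sketched with the
# Ruby driver's Database#command helper (names and error text as in this
# file):
#
#   client.database.command(collMod: 'encryptedCollection',
#                           validator: { unencrypted_string: 'foo' })  # OK
#   client.database.command(collMod: 'encryptedCollection',
#                           validator: { encrypted_string: 'foo' })
#   # => raises "Comparison to encrypted fields not supported"
#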
- name: dropCollection object: database arguments: collection: "encryptedCollection" - name: createCollection object: database arguments: collection: "encryptedCollection" - name: runCommand object: database arguments: command: createIndexes: "encryptedCollection" indexes: - name: "name" key: { name: 1 } partialFilterExpression: unencrypted_string: "foo" - name: assertIndexExists object: testRunner arguments: database: *database_name collection: "encryptedCollection" index: name - description: "createIndexes with a partialFilterExpression on an encrypted field is an error" clientOptions: autoEncryptOpts: kmsProviders: local: {'key': {'$binary': {'base64': 'Mng0NCt4ZHVUYUJCa1kxNkVyNUR1QURhZ2h2UzR2d2RrZzh0cFBwM3R6NmdWMDFBMUN3YkQ5aXRRMkhGRGdQV09wOGVNYUMxT2k3NjZKelhaQmRCZGJkTXVyZG9uSjFk', 'subType': '00'}}} schemaMap: "default.encryptedCollection": {'properties': {'encrypted_w_altname': {'encrypt': {'keyId': '/altname', 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Random'}}, 'encrypted_string': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}, 'random': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Random'}}, 'encrypted_string_equivalent': {'encrypt': {'keyId': [{'$binary': {'base64': 'AAAAAAAAAAAAAAAAAAAAAA==', 'subType': '04'}}], 'bsonType': 'string', 'algorithm': 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic'}}}, 'bsonType': 'object'} operations: # Drop to remove a collection that may exist from previous test runs. - name: dropCollection object: database arguments: collection: "encryptedCollection" - name: createCollection object: database arguments: collection: "encryptedCollection" - name: runCommand object: database arguments: command: createIndexes: "encryptedCollection" indexes: - name: "name" key: { name: 1 } partialFilterExpression: encrypted_string: "foo" result: errorContains: "Comparison to encrypted fields not supported"mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_operations_timeout/000077500000000000000000000000001505113246500275025ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_operations_timeout/bulkWrite.yml000066400000000000000000000051561505113246500322040ustar00rootroot00000000000000description: "timeoutMS behaves correctly for bulkWrite operations" schemaVersion: "1.9" runOnRequirements: - minServerVersion: "4.4" createEntities: - client: id: &failPointClient failPointClient useMultipleMongoses: false - client: id: &client client useMultipleMongoses: false observeEvents: - commandStartedEvent uriOptions: # Used to speed up the test w: 1 - database: id: &database database client: *client databaseName: &databaseName test - collection: id: &collection collection database: *database collectionName: &collectionName coll initialData: - collectionName: *collectionName databaseName: *databaseName documents: [] tests: # Test that drivers do not refresh timeoutMS between commands. This is done by running a bulkWrite that will require # two commands with timeoutMS=200 and blocking each command for 120ms. The server should take over 200ms total, so the # bulkWrite should fail with a timeout error. - description: "timeoutMS applied to entire bulkWrite, not individual commands" operations: # Do an operation without a timeout to ensure the servers are discovered. 
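#
# What the bulkWrite test above asserts, sketched against the Ruby driver API.
# The operation-level :timeout_ms option is the CSOT knob these specs
# exercise; exact option support depends on the driver version:
#
#   collection.bulk_write(
#     [
#       { insert_one: { _id: 1 } },
#       { replace_one: { filter: { _id: 1 }, replacement: { x: 1 } } }
#     ],
#     timeout_ms: 200
#   )
#   # With each backing command delayed 120ms by the fail point, the two
#   # commands exceed the single 200ms budget, so this raises a timeout
#   # error rather than refreshing the timeout per command.
#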
- name: insertOne object: *collection arguments: document: {} - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 2 } data: failCommands: ["insert", "update"] blockConnection: true blockTimeMS: 120 - name: bulkWrite object: *collection arguments: requests: - insertOne: document: { _id: 1 } - replaceOne: filter: { _id: 1 } replacement: { x: 1 } timeoutMS: 200 expectError: isTimeoutError: true expectEvents: - client: *client events: - commandStartedEvent: commandName: insert databaseName: *databaseName command: insert: *collectionName - commandStartedEvent: commandName: insert databaseName: *databaseName command: insert: *collectionName maxTimeMS: { $$type: ["int", "long"] } - commandStartedEvent: commandName: update databaseName: *databaseName command: update: *collectionName maxTimeMS: { $$type: ["int", "long"] } mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_operations_timeout/change-streams.yml000066400000000000000000000326331505113246500331350ustar00rootroot00000000000000description: "timeoutMS behaves correctly for change streams" schemaVersion: "1.9" runOnRequirements: - minServerVersion: "4.4" topologies: ["replicaset", "sharded"] createEntities: - client: id: &failPointClient failPointClient useMultipleMongoses: false - client: id: &client client useMultipleMongoses: false observeEvents: - commandStartedEvent # Drivers are not required to execute killCursors during resume attempts, so it should be ignored for command # monitoring assertions. ignoreCommandMonitoringEvents: ["killCursors"] - database: id: &database database client: *client databaseName: &databaseName test - collection: id: &collection collection database: *database collectionName: &collectionName coll initialData: - collectionName: *collectionName databaseName: *databaseName documents: [] tests: - description: "error if maxAwaitTimeMS is greater than timeoutMS" operations: - name: createChangeStream object: *collection arguments: pipeline: [] timeoutMS: 5 maxAwaitTimeMS: 10 expectError: isClientError: true - description: "error if maxAwaitTimeMS is equal to timeoutMS" operations: - name: createChangeStream object: *collection arguments: pipeline: [] timeoutMS: 5 maxAwaitTimeMS: 5 expectError: isClientError: true - description: "timeoutMS applied to initial aggregate" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["aggregate"] blockConnection: true blockTimeMS: 55 - name: createChangeStream object: *collection arguments: pipeline: [] timeoutMS: 50 expectError: isTimeoutError: true expectEvents: - client: *client events: - commandStartedEvent: commandName: aggregate databaseName: *databaseName command: aggregate: *collectionName maxTimeMS: { $$type: ["int", "long"] } # If maxAwaitTimeMS is not set, timeoutMS should be refreshed for the getMore and the getMore should not have a # maxTimeMS field. This test requires a high timeout because the server applies a default 1000ms maxAwaitTime. To # ensure that the driver is refreshing the timeout between commands, the test blocks aggregate and getMore commands # for 30ms each and creates/iterates a change stream with timeoutMS=1050. The initial aggregate will block for 30ms # and the getMore will block for 1030ms. 
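#
# The change stream options involved here, sketched in Ruby driver terms
# (:max_await_time_ms is the standard watch option; :timeout_ms hedged as
# above):
#
#   stream = collection.watch([], timeout_ms: 1050)
#   # timeoutMS is refreshed for each getMore, so iteration succeeds.
#
#   collection.watch([], timeout_ms: 5, max_await_time_ms: 10)
#   # => client-side error: maxAwaitTimeMS must be less than timeoutMS,
#   #    per the validation tests earlier in this file.
#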
- description: "timeoutMS is refreshed for getMore if maxAwaitTimeMS is not set" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 2 } data: failCommands: ["aggregate", "getMore"] blockConnection: true blockTimeMS: 30 - name: createChangeStream object: *collection arguments: pipeline: [] timeoutMS: 1050 saveResultAsEntity: &changeStream changeStream - name: iterateOnce object: *changeStream expectEvents: - client: *client events: - commandStartedEvent: commandName: aggregate databaseName: *databaseName command: aggregate: *collectionName maxTimeMS: { $$type: ["int", "long"] } - commandStartedEvent: commandName: getMore databaseName: *databaseName command: getMore: { $$type: ["int", "long"] } collection: *collectionName maxTimeMS: { $$exists: false } # If maxAwaitTimeMS is set, timeoutMS should still be refreshed for the getMore and the getMore command should have a # maxTimeMS field. - description: "timeoutMS is refreshed for getMore if maxAwaitTimeMS is set" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 2 } data: failCommands: ["aggregate", "getMore"] blockConnection: true # was 15, changed to 30 to account for jruby driver latency. blockTimeMS: 30 - name: createChangeStream object: *collection arguments: pipeline: [] # was 20, changed to 29 to account for native ruby driver latency. # Changed again to 59 to account for additional jruby driver latency. # The idea for this test is that each operation is delayed by 15ms # (by failpoint). the timeout for each operation is set to (originally) # 20ms, because if timeoutMS was not refreshed for getMore, it would timeout. # However, we're tickling the 20ms timeout because the driver itself # is taking more than 5ms to do its thing. # # Changing the blockTimeMS in the failpoint to 30ms, and then bumping # the timeout to almost twice that (59ms) should give us the same # effect in the test. timeoutMS: 59 batchSize: 2 maxAwaitTimeMS: 1 saveResultAsEntity: &changeStream changeStream - name: iterateOnce object: *changeStream expectEvents: - client: *client events: - commandStartedEvent: commandName: aggregate databaseName: *databaseName command: aggregate: *collectionName maxTimeMS: { $$type: ["int", "long"] } - commandStartedEvent: commandName: getMore databaseName: *databaseName command: getMore: { $$type: ["int", "long"] } collection: *collectionName maxTimeMS: 1 # The timeout should be applied to the entire resume attempt, not individually to each command. The test creates a # change stream with timeoutMS=20 which returns an empty initial batch and then sets a fail point to block both # getMore and aggregate for 12ms each and fail with a resumable error. When the resume attempt happens, the getMore # and aggregate block for longer than 20ms total, so it times out. - description: "timeoutMS applies to full resume attempt in a next call" operations: - name: createChangeStream object: *collection arguments: pipeline: [] # Originally set to 20, but the Ruby driver was too-often taking # that much time, and causing the timing of the test to fail. Instead, # bumped the timout to 23ms, which is just less than twice the # blockTimeMS for the failpoint. It still failed on jruby, so the # timeout (and blockTimeMS) were drastically increased to accomodate # JRuby. This tests the same thing, but gives the driver a bit more # breathing space. 
timeoutMS: 99 saveResultAsEntity: &changeStream changeStream - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 2 } data: failCommands: ["getMore", "aggregate"] blockConnection: true # Originally 12, bumped it to 50 to give the jruby driver a bit # more breathing room. blockTimeMS: 50 errorCode: 7 # HostNotFound - resumable but does not require an SDAM state change. # failCommand doesn't correctly add the ResumableChangeStreamError by default. It needs to be specified # manually here so the error is considered resumable. The failGetMoreAfterCursorCheckout fail point # would add the label without this, but it does not support blockConnection functionality. errorLabels: ["ResumableChangeStreamError"] - name: iterateUntilDocumentOrError object: *changeStream expectError: isTimeoutError: true expectEvents: - client: *client events: - commandStartedEvent: commandName: aggregate databaseName: *databaseName command: aggregate: *collectionName maxTimeMS: { $$type: ["int", "long"] } - commandStartedEvent: commandName: getMore databaseName: *databaseName command: getMore: { $$type: ["int", "long"] } collection: *collectionName maxTimeMS: { $$exists: false } - commandStartedEvent: commandName: aggregate databaseName: *databaseName command: aggregate: *collectionName maxTimeMS: { $$type: ["int", "long"] } - description: "change stream can be iterated again if previous iteration times out" operations: - name: createChangeStream object: *collection arguments: pipeline: [] # Specify a short maxAwaitTimeMS because otherwise the getMore on the new cursor will wait for 1000ms and # time out. maxAwaitTimeMS: 1 timeoutMS: 100 saveResultAsEntity: &changeStream changeStream # Block getMore for 150ms to force the next() call to time out. - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["getMore"] blockConnection: true blockTimeMS: 150 # The original aggregate didn't return any events so this should do a getMore and return a timeout error. - name: iterateUntilDocumentOrError object: *changeStream expectError: isTimeoutError: true # The previous iteration attempt timed out so this should re-create the change stream. We use iterateOnce rather # than iterateUntilDocumentOrError because there haven't been any events and we only want to assert that the # cursor was re-created. - name: iterateOnce object: *changeStream expectEvents: - client: *client events: - commandStartedEvent: commandName: aggregate databaseName: *databaseName command: aggregate: *collectionName maxTimeMS: { $$type: ["int", "long"] } # The iterateUntilDocumentOrError operation should send a getMore. - commandStartedEvent: commandName: getMore databaseName: *databaseName command: getMore: { $$type: ["int", "long"] } collection: *collectionName # The iterateOnce operation should re-create the cursor via an aggregate and then send a getMore to iterate # the new cursor. - commandStartedEvent: commandName: aggregate databaseName: *databaseName command: aggregate: *collectionName maxTimeMS: { $$type: ["int", "long"] } - commandStartedEvent: commandName: getMore databaseName: *databaseName command: getMore: { $$type: ["int", "long"] } collection: *collectionName # The timeoutMS value should be refreshed for getMore's. This is a failure test. 
The createChangeStream operation # sets timeoutMS=10 and the getMore blocks for 15ms, causing iteration to fail with a timeout error. - description: "timeoutMS is refreshed for getMore - failure" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["getMore"] blockConnection: true # blockTimeMS: 15 # Increase timeout blockTimeMS: 30 - name: createChangeStream object: *collection arguments: pipeline: [] # timeoutMS: 10 # Increase timeout timeoutMS: 20 saveResultAsEntity: &changeStream changeStream # The first iteration should do a getMore - name: iterateUntilDocumentOrError object: *changeStream expectError: isTimeoutError: true expectEvents: - client: *client events: - commandStartedEvent: commandName: aggregate databaseName: *databaseName command: aggregate: *collectionName maxTimeMS: { $$type: ["int", "long"] } # The iterateUntilDocumentOrError operation should send a getMore. - commandStartedEvent: commandName: getMore databaseName: *databaseName command: getMore: { $$type: ["int", "long"] } collection: *collectionName mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_operations_timeout/close-cursors.yml000066400000000000000000000073761505113246500330450ustar00rootroot00000000000000description: "timeoutMS behaves correctly when closing cursors" schemaVersion: "1.9" runOnRequirements: - minServerVersion: "4.4" createEntities: - client: id: &failPointClient failPointClient useMultipleMongoses: false - client: id: &client client useMultipleMongoses: false observeEvents: - commandStartedEvent - commandSucceededEvent - commandFailedEvent - database: id: &database database client: *client databaseName: &databaseName test - collection: id: &collection collection database: *database collectionName: &collectionName coll initialData: - collectionName: *collectionName databaseName: *databaseName documents: - { _id: 0 } - { _id: 1 } - { _id: 2 } tests: - description: "timeoutMS is refreshed for close" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["getMore"] blockConnection: true blockTimeMS: 50 - name: createFindCursor object: *collection arguments: filter: {} batchSize: 2 timeoutMS: 20 saveResultAsEntity: &cursor cursor # Iterate the cursor three times. The third should do a getMore, which should fail with a timeout error. - name: iterateUntilDocumentOrError object: *cursor - name: iterateUntilDocumentOrError object: *cursor - name: iterateUntilDocumentOrError object: *cursor expectError: isTimeoutError: true # All errors from close() are ignored, so we close the cursor here but assert that killCursors was executed # successfully via command monitoring expectations below. - name: close object: *cursor expectEvents: - client: *client events: - commandStartedEvent: commandName: find - commandSucceededEvent: commandName: find - commandStartedEvent: commandName: getMore - commandFailedEvent: commandName: getMore - commandStartedEvent: command: killCursors: *collectionName # The close() operation should inherit timeoutMS from the initial find(). 
maxTimeMS: { $$type: ["int", "long"] } commandName: killCursors - commandSucceededEvent: commandName: killCursors - description: "timeoutMS can be overridden for close" operations: - name: failPoint object: testRunner arguments: client: *client failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["killCursors"] blockConnection: true blockTimeMS: 30 - name: createFindCursor object: *collection arguments: filter: {} batchSize: 2 timeoutMS: 20 saveResultAsEntity: &cursor cursor - name: close object: *cursor arguments: # timeoutMS: 40 # Increase timeout timeoutMS: 50 expectEvents: - client: *client events: - commandStartedEvent: commandName: find - commandSucceededEvent: commandName: find - commandStartedEvent: command: killCursors: *collectionName maxTimeMS: { $$type: ["int", "long"] } commandName: killCursors - commandSucceededEvent: commandName: killCursors mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_operations_timeout/command-execution.yml000066400000000000000000000221721505113246500336500ustar00rootroot00000000000000description: "timeoutMS behaves correctly during command execution" schemaVersion: "1.9" runOnRequirements: # Require SERVER-49336 for failCommand + appName on the initial handshake. - minServerVersion: "4.4.7" # Skip load-balanced and serverless which do not support RTT measurements. topologies: [ single, replicaset, sharded ] serverless: forbid createEntities: - client: id: &failPointClient failPointClient useMultipleMongoses: false initialData: # The corresponding entities for the collections defined here are created in test-level createEntities operations. # This is done so that tests can set fail points that will affect all of the handshakes and heartbeats done by a # client. The collection and database names are listed here so that the collections will be dropped and re-created at # the beginning of each test. - collectionName: ®ularCollectionName coll databaseName: &databaseName test documents: [] - collectionName: &timeoutCollectionName timeoutColl databaseName: &databaseName test documents: [] tests: - description: "maxTimeMS value in the command is less than timeoutMS" operations: # Artificially increase the server RTT to ~50ms. - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: "alwaysOn" data: failCommands: ["hello", "isMaster"] appName: &appName reduceMaxTimeMSTest blockConnection: true blockTimeMS: 50 # Create a client with the app name specified in the fail point and timeoutMS higher than blockTimeMS. # Also create database and collection entities derived from the new client. - name: createEntities object: testRunner arguments: entities: - client: id: &client client useMultipleMongoses: false uriOptions: appName: *appName w: 1 # Override server's w:majority default to speed up the test. timeoutMS: 500 heartbeatFrequencyMS: 500 observeEvents: - commandStartedEvent - database: id: &database database client: *client databaseName: *databaseName - collection: id: &timeoutCollection timeoutCollection database: *database collectionName: *timeoutCollectionName # Do an operation with a large timeout to ensure the servers are discovered. - name: insertOne object: *timeoutCollection arguments: document: { _id: 1 } timeoutMS: 100000 # Wait until short-circuiting has been enabled (at least 2 RTT measurements). - name: wait object: testRunner arguments: ms: 1000 # Do an operation with timeoutCollection so the event will include a maxTimeMS field. 
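#
# The arithmetic behind the assertion that follows: the fail point adds
# roughly 50ms of round-trip time, and the client is created with
# timeoutMS=500. Per the CSOT design exercised here, the maxTimeMS sent to
# the server is derived from the remaining timeout budget less the minimum
# observed RTT, approximately
#
#   maxTimeMS <= timeoutMS - minRTT = 500 - 50 = 450
#
# which is why the expectation below asserts maxTimeMS: { $$lte: 450 }.
#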
- name: insertOne object: *timeoutCollection arguments: document: { _id: 2 } expectEvents: - client: *client events: - commandStartedEvent: commandName: insert databaseName: *databaseName command: insert: *timeoutCollectionName - commandStartedEvent: commandName: insert databaseName: *databaseName command: insert: *timeoutCollectionName maxTimeMS: { $$lte: 450 } - description: "command is not sent if RTT is greater than timeoutMS" operations: # Artificially increase the server RTT to ~50ms. - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: "alwaysOn" data: failCommands: ["hello", "isMaster"] appName: &appName rttTooHighTest blockConnection: true blockTimeMS: 50 # Create a client with the app name specified in the fail point. Also create database and collection entities # derived from the new client. There is one collection entity with no timeoutMS and another with a timeoutMS # that's lower than the fail point's blockTimeMS value. - name: createEntities object: testRunner arguments: entities: - client: id: &client client useMultipleMongoses: false uriOptions: appName: *appName w: 1 # Override server's w:majority default to speed up the test. timeoutMS: 10 heartbeatFrequencyMS: 500 observeEvents: - commandStartedEvent - database: id: &database database client: *client databaseName: *databaseName - collection: id: &timeoutCollection timeoutCollection database: *database collectionName: *timeoutCollectionName # Do an operation with a large timeout to ensure the servers are discovered. - name: insertOne object: *timeoutCollection arguments: document: { _id: 1 } timeoutMS: 100000 # Wait until short-circuiting has been enabled (at least 2 RTT measurements). - name: wait object: testRunner arguments: ms: 1000 # Do an operation with timeoutCollection which will error. - name: insertOne object: *timeoutCollection arguments: document: { _id: 2 } expectError: isTimeoutError: true # Do an operation with timeoutCollection which will error. - name: insertOne object: *timeoutCollection arguments: document: { _id: 3 } expectError: isTimeoutError: true # Do an operation with timeoutCollection which will error. - name: insertOne object: *timeoutCollection arguments: document: { _id: 4 } expectError: isTimeoutError: true expectEvents: # There should only be one event, which corresponds to the first # insertOne call. For the subsequent insertOne calls, drivers should # fail client-side. - client: *client events: - commandStartedEvent: commandName: insert databaseName: *databaseName command: insert: *timeoutCollectionName - description: "short-circuit is not enabled with only 1 RTT measurement" operations: # Artificially increase the server RTT to ~300ms. - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: "alwaysOn" data: failCommands: ["hello", "isMaster"] appName: &appName reduceMaxTimeMSTest blockConnection: true blockTimeMS: 100 # Create a client with the app name specified in the fail point and timeoutMS lower than blockTimeMS. # Also create database and collection entities derived from the new client. - name: createEntities object: testRunner arguments: entities: - client: id: &client client useMultipleMongoses: false uriOptions: appName: *appName w: 1 # Override server's w:majority default to speed up the test. timeoutMS: 90 heartbeatFrequencyMS: 100000 # Override heartbeatFrequencyMS to ensure only 1 RTT is recorded. 
observeEvents: - commandStartedEvent - database: id: &database database client: *client databaseName: *databaseName - collection: id: &timeoutCollection timeoutCollection database: *database collectionName: *timeoutCollectionName # Do an operation with a large timeout to ensure the servers are discovered. - name: insertOne object: *timeoutCollection arguments: document: { _id: 1 } timeoutMS: 100000 # Do an operation with timeoutCollection which will succeed. If this # fails it indicates the driver mistakenly used the min RTT even though # there has only been one sample. - name: insertOne object: *timeoutCollection arguments: document: { _id: 2 } expectEvents: - client: *client events: - commandStartedEvent: commandName: insert databaseName: *databaseName command: insert: *timeoutCollectionName - commandStartedEvent: commandName: insert databaseName: *databaseName command: insert: *timeoutCollectionName maxTimeMS: { $$lte: 450 } convenient-transactions.yml000066400000000000000000000072241505113246500350310ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_operations_timeoutdescription: "timeoutMS behaves correctly for the withTransaction API" schemaVersion: "1.9" runOnRequirements: - minServerVersion: "4.4" topologies: ["replicaset", "sharded"] createEntities: - client: id: &failPointClient failPointClient useMultipleMongoses: false - client: id: &client client uriOptions: timeoutMS: 50 useMultipleMongoses: false observeEvents: - commandStartedEvent - database: id: &database database client: *client databaseName: &databaseName test - collection: id: &collection collection database: *database collectionName: &collectionName coll - session: id: &session session client: *client initialData: - collectionName: *collectionName databaseName: *databaseName documents: [] tests: - description: "withTransaction raises a client-side error if timeoutMS is overridden inside the callback" operations: - name: withTransaction object: *session arguments: callback: - name: insertOne object: *collection arguments: document: { _id: 1 } session: *session timeoutMS: 100 expectError: isClientError: true expectEvents: # The only operation run fails with a client-side error, so there should be no events for the client. - client: *client events: [] - description: "timeoutMS is not refreshed for each operation in the callback" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 2 } data: failCommands: ["insert"] blockConnection: true # Was 30, but JRuby was taking too long in preparing and issuing # the operation. We now specify the timeoutMS below, and set this # value to just more than half of it (so that two inserts will # exceed the timeout, but one won't--or shouldn't). blockTimeMS: 51 - name: withTransaction object: *session arguments: # Was originally not specified here, inheriting the client value of 50ms. # That wasn't giving JRuby enough time, so we specify a larger value # here. timeoutMS: 100 callback: - name: insertOne object: *collection arguments: document: { _id: 1 } session: *session - name: insertOne object: *collection arguments: document: { _id: 2 } session: *session expectError: isTimeoutError: true expectError: isTimeoutError: true expectEvents: - client: *client events: # Because the second insert expects an error and gets an error, it technically succeeds, so withTransaction # will try to run commitTransaction. 
This will fail client-side, though, because the timeout has already # expired, so no command is sent. - commandStartedEvent: commandName: insert databaseName: *databaseName command: insert: *collectionName maxTimeMS: { $$type: ["int", "long"] } - commandStartedEvent: commandName: insert databaseName: *databaseName command: insert: *collectionName maxTimeMS: { $$type: ["int", "long"] } mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_operations_timeout/cursors.yml000066400000000000000000000035201505113246500317250ustar00rootroot00000000000000description: "tests for timeoutMS behavior that applies to all cursor types" schemaVersion: "1.0" createEntities: - client: id: &client client - database: id: &database database client: *client databaseName: &databaseName test - collection: id: &collection collection database: *database collectionName: &collectionName coll initialData: - collectionName: *collectionName databaseName: *databaseName documents: [] tests: - description: "find errors if timeoutMode is set and timeoutMS is not" operations: - name: find object: *collection arguments: filter: {} timeoutMode: cursorLifetime expectError: isClientError: true - description: "collection aggregate errors if timeoutMode is set and timeoutMS is not" operations: - name: aggregate object: *collection arguments: pipeline: [] timeoutMode: cursorLifetime expectError: isClientError: true - description: "database aggregate errors if timeoutMode is set and timeoutMS is not" operations: - name: aggregate object: *database arguments: pipeline: [] timeoutMode: cursorLifetime expectError: isClientError: true - description: "listCollections errors if timeoutMode is set and timeoutMS is not" operations: - name: listCollections object: *database arguments: filter: {} timeoutMode: cursorLifetime expectError: isClientError: true - description: "listIndexes errors if timeoutMode is set and timeoutMS is not" operations: - name: listIndexes object: *collection arguments: timeoutMode: cursorLifetime expectError: isClientError: true mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_operations_timeout/deprecated-options.yml000066400000000000000000003612261505113246500340300ustar00rootroot00000000000000description: "operations ignore deprecated timeout options if timeoutMS is set" schemaVersion: "1.9" # Most tests in this file can be executed against any server version, but some tests execute operations that are only # available on higher server versions (e.g. abortTransaction). To avoid too many special cases in templated tests, the # min server version is set to 4.2 for all. runOnRequirements: - minServerVersion: "4.2" topologies: ["replicaset", "sharded"] createEntities: - client: id: &failPointClient failPointClient useMultipleMongoses: false initialData: - collectionName: &collectionName coll databaseName: &databaseName test documents: [] tests: # For each operation, run these tests: # # 1. socketTimeoutMS is ignored if timeoutMS is set. The test creates a client with socketTimeoutMS=1, configures and # a failpoint to block the operation for 5ms, runs the operation with timeoutMS=10000, and expects it to succeed. # # 2. wTimeoutMS is ignored if timeoutMS is set. The test creates a client with wTimeoutMS=1, runs the operation with # timeoutMS=10000, expects the operation to succeed, and uses command monitoring expectations to assert that the # command sent to the server does not contain a writeConcern field. # # 3. If the operation supports maxTimeMS, it ignores maxTimeMS if timeoutMS is set. 
The test executes the operation # with timeoutMS=1000 and maxTimeMS=5000. It expects the operation to succeed and uses command monitoring expectations # to assert that the actual maxTimeMS value sent was less than or equal to 100, thereby asserting that it was # actually derived from timeoutMS. # Tests for commitTransaction. These are not included in the operations loop because the tests need to execute # additional "startTransaction" and "insertOne" operations to establish a server-side transaction. There is also one # additional test to assert that maxCommitTimeMS is ignored if timeoutMS is set. - description: "commitTransaction ignores socketTimeoutMS if timeoutMS is set" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client uriOptions: # This test uses 20 instead of 1 like other tests because socketTimeoutMS also applies to the # operation done to start the server-side transaction and it needs time to succeed. socketTimeoutMS: 20 useMultipleMongoses: false observeEvents: - commandStartedEvent ignoreCommandMonitoringEvents: ["aggregate"] - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - session: id: &session session client: *client - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["commitTransaction"] blockConnection: true blockTimeMS: 5 - name: startTransaction object: *session - name: countDocuments object: *collection arguments: filter: {} session: *session - name: commitTransaction object: *session arguments: timeoutMS: 10000 expectEvents: - client: *client events: - commandStartedEvent: commandName: commitTransaction databaseName: admin command: commitTransaction: 1 writeConcern: { $$exists: false } maxTimeMS: { $$type: ["int", "long"] } - description: "commitTransaction ignores wTimeoutMS if timeoutMS is set" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client uriOptions: wTimeoutMS: 1 observeEvents: - commandStartedEvent ignoreCommandMonitoringEvents: ["aggregate"] - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - session: id: &session session client: *client - name: startTransaction object: *session - name: countDocuments object: *collection arguments: filter: {} session: *session - name: commitTransaction object: *session arguments: timeoutMS: 10000 expectEvents: - client: *client events: - commandStartedEvent: commandName: commitTransaction databaseName: admin command: commitTransaction: 1 writeConcern: { $$exists: false } maxTimeMS: { $$type: ["int", "long"] } - description: "commitTransaction ignores maxCommitTimeMS if timeoutMS is set" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client useMultipleMongoses: false observeEvents: - commandStartedEvent ignoreCommandMonitoringEvents: ["aggregate"] - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - session: id: &session session client: *client sessionOptions: defaultTransactionOptions: maxCommitTimeMS: 5000 - name: startTransaction object: *session - name: countDocuments object: *collection arguments: filter: 
{} session: *session - name: commitTransaction object: *session arguments: timeoutMS: &timeoutMS 1000 expectEvents: - client: *client events: - commandStartedEvent: commandName: commitTransaction databaseName: admin command: commitTransaction: 1 # Assert that the final maxTimeMS field is derived from timeoutMS, not maxCommitTimeMS. maxTimeMS: { $$lte: *timeoutMS } # Tests for abortTransaction. These are not included in the operations loop because the tests need to execute # additional "startTransaction" and "insertOne" operations to establish a server-side transaction. - description: "abortTransaction ignores socketTimeoutMS if timeoutMS is set" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client uriOptions: # This test uses 20 instead of 1 like other tests because socketTimeoutMS also applies to the # operation done to start the server-side transaction and it needs time to succeed. socketTimeoutMS: 20 useMultipleMongoses: false observeEvents: - commandStartedEvent ignoreCommandMonitoringEvents: ["aggregate"] - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - session: id: &session session client: *client - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["abortTransaction"] blockConnection: true blockTimeMS: 5 - name: startTransaction object: *session - name: countDocuments object: *collection arguments: filter: {} session: *session - name: abortTransaction object: *session arguments: timeoutMS: 10000 expectEvents: - client: *client events: - commandStartedEvent: commandName: abortTransaction databaseName: admin command: abortTransaction: 1 writeConcern: { $$exists: false } maxTimeMS: { $$type: ["int", "long"] } - description: "abortTransaction ignores wTimeoutMS if timeoutMS is set" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client uriOptions: wTimeoutMS: 1 observeEvents: - commandStartedEvent ignoreCommandMonitoringEvents: ["aggregate"] - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - session: id: &session session client: *client - name: startTransaction object: *session - name: countDocuments object: *collection arguments: filter: {} session: *session - name: abortTransaction object: *session arguments: timeoutMS: 10000 expectEvents: - client: *client events: - commandStartedEvent: commandName: abortTransaction databaseName: admin command: abortTransaction: 1 writeConcern: { $$exists: false } maxTimeMS: { $$type: ["int", "long"] } # Tests for withTransaction. These are not included in the operations loop because the command monitoring # expectations contain multiple commands. There is also one additional test to assert that maxCommitTimeMS is ignored # if timeoutMS is set. - description: "withTransaction ignores socketTimeoutMS if timeoutMS is set" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client uriOptions: # This test uses 20 instead of 1 like other tests because socketTimeoutMS also applies to the # operation done to start the server-side transaction and it needs time to succeed. 
socketTimeoutMS: 20 useMultipleMongoses: false observeEvents: - commandStartedEvent - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - session: id: &session session client: *client - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["commitTransaction"] blockConnection: true blockTimeMS: 5 - name: withTransaction object: *session arguments: timeoutMS: 10000 callback: - name: countDocuments object: *collection arguments: filter: {} session: *session expectEvents: - client: *client events: - commandStartedEvent: commandName: aggregate databaseName: *databaseName command: aggregate: *collectionName maxTimeMS: { $$type: ["int", "long"] } - commandStartedEvent: commandName: commitTransaction databaseName: admin command: commitTransaction: 1 writeConcern: { $$exists: false } maxTimeMS: { $$type: ["int", "long"] } - description: "withTransaction ignores wTimeoutMS if timeoutMS is set" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client uriOptions: wTimeoutMS: 1 observeEvents: - commandStartedEvent - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - session: id: &session session client: *client - name: withTransaction object: *session arguments: timeoutMS: 10000 callback: - name: countDocuments object: *collection arguments: filter: {} session: *session expectEvents: - client: *client events: - commandStartedEvent: commandName: aggregate databaseName: *databaseName command: aggregate: *collectionName writeConcern: { $$exists: false } maxTimeMS: { $$type: ["int", "long"] } - commandStartedEvent: commandName: commitTransaction databaseName: admin command: commitTransaction: 1 writeConcern: { $$exists: false } maxTimeMS: { $$type: ["int", "long"] } - description: "withTransaction ignores maxCommitTimeMS if timeoutMS is set" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client useMultipleMongoses: false observeEvents: - commandStartedEvent - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - session: id: &session session client: *client sessionOptions: defaultTransactionOptions: maxCommitTimeMS: 5000 - name: withTransaction object: *session arguments: timeoutMS: &timeoutMS 1000 callback: - name: countDocuments object: *collection arguments: filter: {} session: *session expectEvents: - client: *client events: - commandStartedEvent: commandName: aggregate databaseName: *databaseName command: aggregate: *collectionName writeConcern: { $$exists: false } maxTimeMS: { $$type: ["int", "long"] } - commandStartedEvent: commandName: commitTransaction databaseName: admin command: commitTransaction: 1 # Assert that the final maxTimeMS field is derived from timeoutMS, not maxCommitTimeMS. maxTimeMS: { $$lte: *timeoutMS } # Tests for operations that can be generated. 
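#
# The generated tests below all follow one shape: build a client with a
# deprecated timeout (socketTimeoutMS or wTimeoutMS, set low enough to fail
# on its own), then run the operation with timeoutMS set, which must win.
# Sketched in Ruby driver terms (:socket_timeout is in seconds; the
# operation-level :timeout_ms option is hedged as the CSOT knob these specs
# exercise):
#
#   client = Mongo::Client.new(
#     ['localhost:27017'],
#     socket_timeout: 0.001,                # legacy socket timeout, 1ms
#     write_concern: { w: 1, wtimeout: 1 }  # legacy write concern timeout
#   )
#   client[:coll].count_documents({}, timeout_ms: 100_000)
#   # succeeds: timeoutMS overrides both legacy timeouts, and the command
#   # is sent without a writeConcern field.
#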
- description: "socketTimeoutMS is ignored if timeoutMS is set - listDatabases on client" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client uriOptions: socketTimeoutMS: 1 useMultipleMongoses: false - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - bucket: id: &bucket bucket database: *database - session: id: &session session client: *client - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["listDatabases"] blockConnection: true blockTimeMS: 5 - name: listDatabases object: *client arguments: timeoutMS: 100000 filter: {} - description: "wTimeoutMS is ignored if timeoutMS is set - listDatabases on client" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client uriOptions: wTimeoutMS: 1 observeEvents: - commandStartedEvent ignoreCommandMonitoringEvents: - killCursors - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - bucket: id: &bucket bucket database: *database - session: id: &session session client: *client - name: listDatabases object: *client arguments: timeoutMS: 100000 filter: {} expectEvents: - client: *client events: - commandStartedEvent: commandName: listDatabases databaseName: admin command: listDatabases: 1 writeConcern: { $$exists: false } maxTimeMS: { $$type: ["int", "long"] } - description: "socketTimeoutMS is ignored if timeoutMS is set - listDatabaseNames on client" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client uriOptions: socketTimeoutMS: 1 useMultipleMongoses: false - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - bucket: id: &bucket bucket database: *database - session: id: &session session client: *client - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["listDatabases"] blockConnection: true blockTimeMS: 5 - name: listDatabaseNames object: *client arguments: timeoutMS: 100000 - description: "wTimeoutMS is ignored if timeoutMS is set - listDatabaseNames on client" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client uriOptions: wTimeoutMS: 1 observeEvents: - commandStartedEvent ignoreCommandMonitoringEvents: - killCursors - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - bucket: id: &bucket bucket database: *database - session: id: &session session client: *client - name: listDatabaseNames object: *client arguments: timeoutMS: 100000 expectEvents: - client: *client events: - commandStartedEvent: commandName: listDatabases databaseName: admin command: listDatabases: 1 writeConcern: { $$exists: false } maxTimeMS: { $$type: ["int", "long"] } - description: "socketTimeoutMS is ignored if timeoutMS is set - createChangeStream on client" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client uriOptions: socketTimeoutMS: 1 
useMultipleMongoses: false - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - bucket: id: &bucket bucket database: *database - session: id: &session session client: *client - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["aggregate"] blockConnection: true blockTimeMS: 5 - name: createChangeStream object: *client arguments: timeoutMS: 100000 pipeline: [] - description: "wTimeoutMS is ignored if timeoutMS is set - createChangeStream on client" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client uriOptions: wTimeoutMS: 1 observeEvents: - commandStartedEvent ignoreCommandMonitoringEvents: - killCursors - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - bucket: id: &bucket bucket database: *database - session: id: &session session client: *client - name: createChangeStream object: *client arguments: timeoutMS: 100000 pipeline: [] expectEvents: - client: *client events: - commandStartedEvent: commandName: aggregate databaseName: admin command: aggregate: 1 writeConcern: { $$exists: false } maxTimeMS: { $$type: ["int", "long"] } - description: "socketTimeoutMS is ignored if timeoutMS is set - aggregate on database" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client uriOptions: socketTimeoutMS: 1 useMultipleMongoses: false - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - bucket: id: &bucket bucket database: *database - session: id: &session session client: *client - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["aggregate"] blockConnection: true blockTimeMS: 5 - name: aggregate object: *database arguments: timeoutMS: 100000 pipeline: [ { $listLocalSessions: {} }, { $limit: 1 } ] - description: "wTimeoutMS is ignored if timeoutMS is set - aggregate on database" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client uriOptions: wTimeoutMS: 1 observeEvents: - commandStartedEvent ignoreCommandMonitoringEvents: - killCursors - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - bucket: id: &bucket bucket database: *database - session: id: &session session client: *client - name: aggregate object: *database arguments: timeoutMS: 100000 pipeline: [ { $listLocalSessions: {} }, { $limit: 1 } ] expectEvents: - client: *client events: - commandStartedEvent: commandName: aggregate databaseName: *databaseName command: aggregate: 1 writeConcern: { $$exists: false } maxTimeMS: { $$type: ["int", "long"] } - description: "maxTimeMS is ignored if timeoutMS is set - aggregate on database" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client useMultipleMongoses: false observeEvents: - commandStartedEvent ignoreCommandMonitoringEvents: - killCursors - database: id: &database database client: *client databaseName: 
*databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - bucket: id: &bucket bucket database: *database - session: id: &session session client: *client - name: aggregate object: *database arguments: timeoutMS: &timeoutMS 1000 maxTimeMS: 5000 pipeline: [ { $listLocalSessions: {} }, { $limit: 1 } ] expectEvents: - client: *client events: - commandStartedEvent: commandName: aggregate databaseName: *databaseName command: aggregate: 1 maxTimeMS: { $$lte: *timeoutMS } - description: "socketTimeoutMS is ignored if timeoutMS is set - listCollections on database" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client uriOptions: socketTimeoutMS: 1 useMultipleMongoses: false - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - bucket: id: &bucket bucket database: *database - session: id: &session session client: *client - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["listCollections"] blockConnection: true blockTimeMS: 5 - name: listCollections object: *database arguments: timeoutMS: 100000 filter: {} - description: "wTimeoutMS is ignored if timeoutMS is set - listCollections on database" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client uriOptions: wTimeoutMS: 1 observeEvents: - commandStartedEvent ignoreCommandMonitoringEvents: - killCursors - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - bucket: id: &bucket bucket database: *database - session: id: &session session client: *client - name: listCollections object: *database arguments: timeoutMS: 100000 filter: {} expectEvents: - client: *client events: - commandStartedEvent: commandName: listCollections databaseName: *databaseName command: listCollections: 1 writeConcern: { $$exists: false } maxTimeMS: { $$type: ["int", "long"] } - description: "socketTimeoutMS is ignored if timeoutMS is set - listCollectionNames on database" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client uriOptions: socketTimeoutMS: 1 useMultipleMongoses: false - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - bucket: id: &bucket bucket database: *database - session: id: &session session client: *client - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["listCollections"] blockConnection: true blockTimeMS: 5 - name: listCollectionNames object: *database arguments: timeoutMS: 100000 filter: {} - description: "wTimeoutMS is ignored if timeoutMS is set - listCollectionNames on database" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client uriOptions: wTimeoutMS: 1 observeEvents: - commandStartedEvent ignoreCommandMonitoringEvents: - killCursors - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - bucket: id: &bucket bucket database: 
*database - session: id: &session session client: *client - name: listCollectionNames object: *database arguments: timeoutMS: 100000 filter: {} expectEvents: - client: *client events: - commandStartedEvent: commandName: listCollections databaseName: *databaseName command: listCollections: 1 writeConcern: { $$exists: false } maxTimeMS: { $$type: ["int", "long"] } - description: "socketTimeoutMS is ignored if timeoutMS is set - runCommand on database" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client uriOptions: socketTimeoutMS: 1 useMultipleMongoses: false - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - bucket: id: &bucket bucket database: *database - session: id: &session session client: *client - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["ping"] blockConnection: true blockTimeMS: 5 - name: runCommand object: *database arguments: timeoutMS: 100000 command: { ping: 1 } commandName: ping - description: "wTimeoutMS is ignored if timeoutMS is set - runCommand on database" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client uriOptions: wTimeoutMS: 1 observeEvents: - commandStartedEvent ignoreCommandMonitoringEvents: - killCursors - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - bucket: id: &bucket bucket database: *database - session: id: &session session client: *client - name: runCommand object: *database arguments: timeoutMS: 100000 command: { ping: 1 } commandName: ping expectEvents: - client: *client events: - commandStartedEvent: commandName: ping databaseName: *databaseName command: ping: 1 writeConcern: { $$exists: false } maxTimeMS: { $$type: ["int", "long"] } - description: "socketTimeoutMS is ignored if timeoutMS is set - createChangeStream on database" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client uriOptions: socketTimeoutMS: 1 useMultipleMongoses: false - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - bucket: id: &bucket bucket database: *database - session: id: &session session client: *client - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["aggregate"] blockConnection: true blockTimeMS: 5 - name: createChangeStream object: *database arguments: timeoutMS: 100000 pipeline: [] - description: "wTimeoutMS is ignored if timeoutMS is set - createChangeStream on database" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client uriOptions: wTimeoutMS: 1 observeEvents: - commandStartedEvent ignoreCommandMonitoringEvents: - killCursors - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - bucket: id: &bucket bucket database: *database - session: id: &session session client: *client - name: createChangeStream object: *database arguments: timeoutMS: 100000 pipeline: [] expectEvents: 
- client: *client events: - commandStartedEvent: commandName: aggregate databaseName: *databaseName command: aggregate: 1 writeConcern: { $$exists: false } maxTimeMS: { $$type: ["int", "long"] } - description: "socketTimeoutMS is ignored if timeoutMS is set - aggregate on collection" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client uriOptions: socketTimeoutMS: 1 useMultipleMongoses: false - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - bucket: id: &bucket bucket database: *database - session: id: &session session client: *client - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["aggregate"] blockConnection: true blockTimeMS: 5 - name: aggregate object: *collection arguments: timeoutMS: 100000 pipeline: [] - description: "wTimeoutMS is ignored if timeoutMS is set - aggregate on collection" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client uriOptions: wTimeoutMS: 1 observeEvents: - commandStartedEvent ignoreCommandMonitoringEvents: - killCursors - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - bucket: id: &bucket bucket database: *database - session: id: &session session client: *client - name: aggregate object: *collection arguments: timeoutMS: 100000 pipeline: [] expectEvents: - client: *client events: - commandStartedEvent: commandName: aggregate databaseName: *databaseName command: aggregate: *collectionName writeConcern: { $$exists: false } maxTimeMS: { $$type: ["int", "long"] } - description: "maxTimeMS is ignored if timeoutMS is set - aggregate on collection" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client useMultipleMongoses: false observeEvents: - commandStartedEvent ignoreCommandMonitoringEvents: - killCursors - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - bucket: id: &bucket bucket database: *database - session: id: &session session client: *client - name: aggregate object: *collection arguments: timeoutMS: &timeoutMS 1000 maxTimeMS: 5000 pipeline: [] expectEvents: - client: *client events: - commandStartedEvent: commandName: aggregate databaseName: *databaseName command: aggregate: *collectionName maxTimeMS: { $$lte: *timeoutMS } - description: "socketTimeoutMS is ignored if timeoutMS is set - count on collection" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client uriOptions: socketTimeoutMS: 1 useMultipleMongoses: false - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - bucket: id: &bucket bucket database: *database - session: id: &session session client: *client - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["count"] blockConnection: true blockTimeMS: 5 - name: count object: *collection arguments: timeoutMS: 100000 filter: {} - description: "wTimeoutMS is 
ignored if timeoutMS is set - count on collection" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client uriOptions: wTimeoutMS: 1 observeEvents: - commandStartedEvent ignoreCommandMonitoringEvents: - killCursors - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - bucket: id: &bucket bucket database: *database - session: id: &session session client: *client - name: count object: *collection arguments: timeoutMS: 100000 filter: {} expectEvents: - client: *client events: - commandStartedEvent: commandName: count databaseName: *databaseName command: count: *collectionName writeConcern: { $$exists: false } maxTimeMS: { $$type: ["int", "long"] } - description: "maxTimeMS is ignored if timeoutMS is set - count on collection" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client useMultipleMongoses: false observeEvents: - commandStartedEvent ignoreCommandMonitoringEvents: - killCursors - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - bucket: id: &bucket bucket database: *database - session: id: &session session client: *client - name: count object: *collection arguments: timeoutMS: &timeoutMS 1000 maxTimeMS: 5000 filter: {} expectEvents: - client: *client events: - commandStartedEvent: commandName: count databaseName: *databaseName command: count: *collectionName maxTimeMS: { $$lte: *timeoutMS } - description: "socketTimeoutMS is ignored if timeoutMS is set - countDocuments on collection" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client uriOptions: socketTimeoutMS: 1 useMultipleMongoses: false - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - bucket: id: &bucket bucket database: *database - session: id: &session session client: *client - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["aggregate"] blockConnection: true blockTimeMS: 5 - name: countDocuments object: *collection arguments: timeoutMS: 100000 filter: {} - description: "wTimeoutMS is ignored if timeoutMS is set - countDocuments on collection" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client uriOptions: wTimeoutMS: 1 observeEvents: - commandStartedEvent ignoreCommandMonitoringEvents: - killCursors - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - bucket: id: &bucket bucket database: *database - session: id: &session session client: *client - name: countDocuments object: *collection arguments: timeoutMS: 100000 filter: {} expectEvents: - client: *client events: - commandStartedEvent: commandName: aggregate databaseName: *databaseName command: aggregate: *collectionName writeConcern: { $$exists: false } maxTimeMS: { $$type: ["int", "long"] } - description: "socketTimeoutMS is ignored if timeoutMS is set - estimatedDocumentCount on collection" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client 
client uriOptions: socketTimeoutMS: 1 useMultipleMongoses: false - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - bucket: id: &bucket bucket database: *database - session: id: &session session client: *client - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["count"] blockConnection: true blockTimeMS: 5 - name: estimatedDocumentCount object: *collection arguments: timeoutMS: 100000 - description: "wTimeoutMS is ignored if timeoutMS is set - estimatedDocumentCount on collection" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client uriOptions: wTimeoutMS: 1 observeEvents: - commandStartedEvent ignoreCommandMonitoringEvents: - killCursors - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - bucket: id: &bucket bucket database: *database - session: id: &session session client: *client - name: estimatedDocumentCount object: *collection arguments: timeoutMS: 100000 expectEvents: - client: *client events: - commandStartedEvent: commandName: count databaseName: *databaseName command: count: *collectionName writeConcern: { $$exists: false } maxTimeMS: { $$type: ["int", "long"] } - description: "maxTimeMS is ignored if timeoutMS is set - estimatedDocumentCount on collection" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client useMultipleMongoses: false observeEvents: - commandStartedEvent ignoreCommandMonitoringEvents: - killCursors - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - bucket: id: &bucket bucket database: *database - session: id: &session session client: *client - name: estimatedDocumentCount object: *collection arguments: timeoutMS: &timeoutMS 1000 maxTimeMS: 5000 expectEvents: - client: *client events: - commandStartedEvent: commandName: count databaseName: *databaseName command: count: *collectionName maxTimeMS: { $$lte: *timeoutMS } - description: "socketTimeoutMS is ignored if timeoutMS is set - distinct on collection" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client uriOptions: socketTimeoutMS: 1 useMultipleMongoses: false - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - bucket: id: &bucket bucket database: *database - session: id: &session session client: *client - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["distinct"] blockConnection: true blockTimeMS: 5 - name: distinct object: *collection arguments: timeoutMS: 100000 fieldName: x filter: {} - description: "wTimeoutMS is ignored if timeoutMS is set - distinct on collection" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client uriOptions: wTimeoutMS: 1 observeEvents: - commandStartedEvent ignoreCommandMonitoringEvents: - killCursors - database: id: &database database client: *client databaseName: *databaseName - collection: 
id: &collection collection database: *database collectionName: *collectionName - bucket: id: &bucket bucket database: *database - session: id: &session session client: *client - name: distinct object: *collection arguments: timeoutMS: 100000 fieldName: x filter: {} expectEvents: - client: *client events: - commandStartedEvent: commandName: distinct databaseName: *databaseName command: distinct: *collectionName writeConcern: { $$exists: false } maxTimeMS: { $$type: ["int", "long"] } - description: "maxTimeMS is ignored if timeoutMS is set - distinct on collection" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client useMultipleMongoses: false observeEvents: - commandStartedEvent ignoreCommandMonitoringEvents: - killCursors - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - bucket: id: &bucket bucket database: *database - session: id: &session session client: *client - name: distinct object: *collection arguments: timeoutMS: &timeoutMS 1000 maxTimeMS: 5000 fieldName: x filter: {} expectEvents: - client: *client events: - commandStartedEvent: commandName: distinct databaseName: *databaseName command: distinct: *collectionName maxTimeMS: { $$lte: *timeoutMS } - description: "socketTimeoutMS is ignored if timeoutMS is set - find on collection" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client uriOptions: socketTimeoutMS: 1 useMultipleMongoses: false - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - bucket: id: &bucket bucket database: *database - session: id: &session session client: *client - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["find"] blockConnection: true blockTimeMS: 5 - name: find object: *collection arguments: timeoutMS: 100000 filter: {} - description: "wTimeoutMS is ignored if timeoutMS is set - find on collection" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client uriOptions: wTimeoutMS: 1 observeEvents: - commandStartedEvent ignoreCommandMonitoringEvents: - killCursors - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - bucket: id: &bucket bucket database: *database - session: id: &session session client: *client - name: find object: *collection arguments: timeoutMS: 100000 filter: {} expectEvents: - client: *client events: - commandStartedEvent: commandName: find databaseName: *databaseName command: find: *collectionName writeConcern: { $$exists: false } maxTimeMS: { $$type: ["int", "long"] } - description: "maxTimeMS is ignored if timeoutMS is set - find on collection" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client useMultipleMongoses: false observeEvents: - commandStartedEvent ignoreCommandMonitoringEvents: - killCursors - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - bucket: id: &bucket bucket database: *database - session: id: &session session client: *client - name: 
find object: *collection arguments: timeoutMS: &timeoutMS 1000 maxTimeMS: 5000 filter: {} expectEvents: - client: *client events: - commandStartedEvent: commandName: find databaseName: *databaseName command: find: *collectionName maxTimeMS: { $$lte: *timeoutMS } - description: "socketTimeoutMS is ignored if timeoutMS is set - findOne on collection" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client uriOptions: socketTimeoutMS: 1 useMultipleMongoses: false - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - bucket: id: &bucket bucket database: *database - session: id: &session session client: *client - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["find"] blockConnection: true blockTimeMS: 5 - name: findOne object: *collection arguments: timeoutMS: 100000 filter: {} - description: "wTimeoutMS is ignored if timeoutMS is set - findOne on collection" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client uriOptions: wTimeoutMS: 1 observeEvents: - commandStartedEvent ignoreCommandMonitoringEvents: - killCursors - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - bucket: id: &bucket bucket database: *database - session: id: &session session client: *client - name: findOne object: *collection arguments: timeoutMS: 100000 filter: {} expectEvents: - client: *client events: - commandStartedEvent: commandName: find databaseName: *databaseName command: find: *collectionName writeConcern: { $$exists: false } maxTimeMS: { $$type: ["int", "long"] } - description: "maxTimeMS is ignored if timeoutMS is set - findOne on collection" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client useMultipleMongoses: false observeEvents: - commandStartedEvent ignoreCommandMonitoringEvents: - killCursors - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - bucket: id: &bucket bucket database: *database - session: id: &session session client: *client - name: findOne object: *collection arguments: timeoutMS: &timeoutMS 1000 maxTimeMS: 5000 filter: {} expectEvents: - client: *client events: - commandStartedEvent: commandName: find databaseName: *databaseName command: find: *collectionName maxTimeMS: { $$lte: *timeoutMS } - description: "socketTimeoutMS is ignored if timeoutMS is set - listIndexes on collection" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client uriOptions: socketTimeoutMS: 1 useMultipleMongoses: false - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - bucket: id: &bucket bucket database: *database - session: id: &session session client: *client - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["listIndexes"] blockConnection: true blockTimeMS: 5 - name: listIndexes object: *collection arguments: timeoutMS: 100000 - 
description: "wTimeoutMS is ignored if timeoutMS is set - listIndexes on collection" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client uriOptions: wTimeoutMS: 1 observeEvents: - commandStartedEvent ignoreCommandMonitoringEvents: - killCursors - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - bucket: id: &bucket bucket database: *database - session: id: &session session client: *client - name: listIndexes object: *collection arguments: timeoutMS: 100000 expectEvents: - client: *client events: - commandStartedEvent: commandName: listIndexes databaseName: *databaseName command: listIndexes: *collectionName writeConcern: { $$exists: false } maxTimeMS: { $$type: ["int", "long"] } - description: "socketTimeoutMS is ignored if timeoutMS is set - listIndexNames on collection" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client uriOptions: socketTimeoutMS: 1 useMultipleMongoses: false - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - bucket: id: &bucket bucket database: *database - session: id: &session session client: *client - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["listIndexes"] blockConnection: true blockTimeMS: 5 - name: listIndexNames object: *collection arguments: timeoutMS: 100000 - description: "wTimeoutMS is ignored if timeoutMS is set - listIndexNames on collection" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client uriOptions: wTimeoutMS: 1 observeEvents: - commandStartedEvent ignoreCommandMonitoringEvents: - killCursors - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - bucket: id: &bucket bucket database: *database - session: id: &session session client: *client - name: listIndexNames object: *collection arguments: timeoutMS: 100000 expectEvents: - client: *client events: - commandStartedEvent: commandName: listIndexes databaseName: *databaseName command: listIndexes: *collectionName writeConcern: { $$exists: false } maxTimeMS: { $$type: ["int", "long"] } - description: "socketTimeoutMS is ignored if timeoutMS is set - createChangeStream on collection" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client uriOptions: socketTimeoutMS: 1 useMultipleMongoses: false - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - bucket: id: &bucket bucket database: *database - session: id: &session session client: *client - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["aggregate"] blockConnection: true blockTimeMS: 5 - name: createChangeStream object: *collection arguments: timeoutMS: 100000 pipeline: [] - description: "wTimeoutMS is ignored if timeoutMS is set - createChangeStream on collection" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client 
client uriOptions: wTimeoutMS: 1 observeEvents: - commandStartedEvent ignoreCommandMonitoringEvents: - killCursors - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - bucket: id: &bucket bucket database: *database - session: id: &session session client: *client - name: createChangeStream object: *collection arguments: timeoutMS: 100000 pipeline: [] expectEvents: - client: *client events: - commandStartedEvent: commandName: aggregate databaseName: *databaseName command: aggregate: *collectionName writeConcern: { $$exists: false } maxTimeMS: { $$type: ["int", "long"] } - description: "socketTimeoutMS is ignored if timeoutMS is set - insertOne on collection" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client uriOptions: socketTimeoutMS: 1 useMultipleMongoses: false - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - bucket: id: &bucket bucket database: *database - session: id: &session session client: *client - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["insert"] blockConnection: true blockTimeMS: 5 - name: insertOne object: *collection arguments: timeoutMS: 100000 document: { x: 1 } - description: "wTimeoutMS is ignored if timeoutMS is set - insertOne on collection" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client uriOptions: wTimeoutMS: 1 observeEvents: - commandStartedEvent ignoreCommandMonitoringEvents: - killCursors - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - bucket: id: &bucket bucket database: *database - session: id: &session session client: *client - name: insertOne object: *collection arguments: timeoutMS: 100000 document: { x: 1 } expectEvents: - client: *client events: - commandStartedEvent: commandName: insert databaseName: *databaseName command: insert: *collectionName writeConcern: { $$exists: false } maxTimeMS: { $$type: ["int", "long"] } - description: "socketTimeoutMS is ignored if timeoutMS is set - insertMany on collection" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client uriOptions: socketTimeoutMS: 1 useMultipleMongoses: false - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - bucket: id: &bucket bucket database: *database - session: id: &session session client: *client - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["insert"] blockConnection: true blockTimeMS: 5 - name: insertMany object: *collection arguments: timeoutMS: 100000 documents: - { x: 1 } - description: "wTimeoutMS is ignored if timeoutMS is set - insertMany on collection" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client uriOptions: wTimeoutMS: 1 observeEvents: - commandStartedEvent ignoreCommandMonitoringEvents: - killCursors - database: id: &database database client: *client databaseName: 
*databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - bucket: id: &bucket bucket database: *database - session: id: &session session client: *client - name: insertMany object: *collection arguments: timeoutMS: 100000 documents: - { x: 1 } expectEvents: - client: *client events: - commandStartedEvent: commandName: insert databaseName: *databaseName command: insert: *collectionName writeConcern: { $$exists: false } maxTimeMS: { $$type: ["int", "long"] } - description: "socketTimeoutMS is ignored if timeoutMS is set - deleteOne on collection" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client uriOptions: socketTimeoutMS: 1 useMultipleMongoses: false - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - bucket: id: &bucket bucket database: *database - session: id: &session session client: *client - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["delete"] blockConnection: true blockTimeMS: 5 - name: deleteOne object: *collection arguments: timeoutMS: 100000 filter: {} - description: "wTimeoutMS is ignored if timeoutMS is set - deleteOne on collection" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client uriOptions: wTimeoutMS: 1 observeEvents: - commandStartedEvent ignoreCommandMonitoringEvents: - killCursors - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - bucket: id: &bucket bucket database: *database - session: id: &session session client: *client - name: deleteOne object: *collection arguments: timeoutMS: 100000 filter: {} expectEvents: - client: *client events: - commandStartedEvent: commandName: delete databaseName: *databaseName command: delete: *collectionName writeConcern: { $$exists: false } maxTimeMS: { $$type: ["int", "long"] } - description: "socketTimeoutMS is ignored if timeoutMS is set - deleteMany on collection" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client uriOptions: socketTimeoutMS: 1 useMultipleMongoses: false - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - bucket: id: &bucket bucket database: *database - session: id: &session session client: *client - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["delete"] blockConnection: true blockTimeMS: 5 - name: deleteMany object: *collection arguments: timeoutMS: 100000 filter: {} - description: "wTimeoutMS is ignored if timeoutMS is set - deleteMany on collection" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client uriOptions: wTimeoutMS: 1 observeEvents: - commandStartedEvent ignoreCommandMonitoringEvents: - killCursors - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - bucket: id: &bucket bucket database: *database - session: id: &session session client: *client - name: 
deleteMany object: *collection arguments: timeoutMS: 100000 filter: {} expectEvents: - client: *client events: - commandStartedEvent: commandName: delete databaseName: *databaseName command: delete: *collectionName writeConcern: { $$exists: false } maxTimeMS: { $$type: ["int", "long"] } - description: "socketTimeoutMS is ignored if timeoutMS is set - replaceOne on collection" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client uriOptions: socketTimeoutMS: 1 useMultipleMongoses: false - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - bucket: id: &bucket bucket database: *database - session: id: &session session client: *client - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["update"] blockConnection: true blockTimeMS: 5 - name: replaceOne object: *collection arguments: timeoutMS: 100000 filter: {} replacement: { x: 1 } - description: "wTimeoutMS is ignored if timeoutMS is set - replaceOne on collection" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client uriOptions: wTimeoutMS: 1 observeEvents: - commandStartedEvent ignoreCommandMonitoringEvents: - killCursors - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - bucket: id: &bucket bucket database: *database - session: id: &session session client: *client - name: replaceOne object: *collection arguments: timeoutMS: 100000 filter: {} replacement: { x: 1 } expectEvents: - client: *client events: - commandStartedEvent: commandName: update databaseName: *databaseName command: update: *collectionName writeConcern: { $$exists: false } maxTimeMS: { $$type: ["int", "long"] } - description: "socketTimeoutMS is ignored if timeoutMS is set - updateOne on collection" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client uriOptions: socketTimeoutMS: 1 useMultipleMongoses: false - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - bucket: id: &bucket bucket database: *database - session: id: &session session client: *client - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["update"] blockConnection: true blockTimeMS: 5 - name: updateOne object: *collection arguments: timeoutMS: 100000 filter: {} update: { $set: { x: 1 } } - description: "wTimeoutMS is ignored if timeoutMS is set - updateOne on collection" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client uriOptions: wTimeoutMS: 1 observeEvents: - commandStartedEvent ignoreCommandMonitoringEvents: - killCursors - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - bucket: id: &bucket bucket database: *database - session: id: &session session client: *client - name: updateOne object: *collection arguments: timeoutMS: 100000 filter: {} update: { $set: { x: 1 } } expectEvents: - client: *client events: - 
commandStartedEvent: commandName: update databaseName: *databaseName command: update: *collectionName writeConcern: { $$exists: false } maxTimeMS: { $$type: ["int", "long"] } - description: "socketTimeoutMS is ignored if timeoutMS is set - updateMany on collection" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client uriOptions: socketTimeoutMS: 1 useMultipleMongoses: false - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - bucket: id: &bucket bucket database: *database - session: id: &session session client: *client - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["update"] blockConnection: true blockTimeMS: 5 - name: updateMany object: *collection arguments: timeoutMS: 100000 filter: {} update: { $set: { x: 1 } } - description: "wTimeoutMS is ignored if timeoutMS is set - updateMany on collection" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client uriOptions: wTimeoutMS: 1 observeEvents: - commandStartedEvent ignoreCommandMonitoringEvents: - killCursors - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - bucket: id: &bucket bucket database: *database - session: id: &session session client: *client - name: updateMany object: *collection arguments: timeoutMS: 100000 filter: {} update: { $set: { x: 1 } } expectEvents: - client: *client events: - commandStartedEvent: commandName: update databaseName: *databaseName command: update: *collectionName writeConcern: { $$exists: false } maxTimeMS: { $$type: ["int", "long"] } - description: "socketTimeoutMS is ignored if timeoutMS is set - findOneAndDelete on collection" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client uriOptions: socketTimeoutMS: 1 useMultipleMongoses: false - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - bucket: id: &bucket bucket database: *database - session: id: &session session client: *client - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["findAndModify"] blockConnection: true blockTimeMS: 5 - name: findOneAndDelete object: *collection arguments: timeoutMS: 100000 filter: {} - description: "wTimeoutMS is ignored if timeoutMS is set - findOneAndDelete on collection" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client uriOptions: wTimeoutMS: 1 observeEvents: - commandStartedEvent ignoreCommandMonitoringEvents: - killCursors - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - bucket: id: &bucket bucket database: *database - session: id: &session session client: *client - name: findOneAndDelete object: *collection arguments: timeoutMS: 100000 filter: {} expectEvents: - client: *client events: - commandStartedEvent: commandName: findAndModify databaseName: *databaseName command: findAndModify: *collectionName writeConcern: { 
$$exists: false } maxTimeMS: { $$type: ["int", "long"] } - description: "maxTimeMS is ignored if timeoutMS is set - findOneAndDelete on collection" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client useMultipleMongoses: false observeEvents: - commandStartedEvent ignoreCommandMonitoringEvents: - killCursors - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - bucket: id: &bucket bucket database: *database - session: id: &session session client: *client - name: findOneAndDelete object: *collection arguments: timeoutMS: &timeoutMS 1000 maxTimeMS: 5000 filter: {} expectEvents: - client: *client events: - commandStartedEvent: commandName: findAndModify databaseName: *databaseName command: findAndModify: *collectionName maxTimeMS: { $$lte: *timeoutMS } - description: "socketTimeoutMS is ignored if timeoutMS is set - findOneAndReplace on collection" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client uriOptions: socketTimeoutMS: 1 useMultipleMongoses: false - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - bucket: id: &bucket bucket database: *database - session: id: &session session client: *client - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["findAndModify"] blockConnection: true blockTimeMS: 5 - name: findOneAndReplace object: *collection arguments: timeoutMS: 100000 filter: {} replacement: { x: 1 } - description: "wTimeoutMS is ignored if timeoutMS is set - findOneAndReplace on collection" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client uriOptions: wTimeoutMS: 1 observeEvents: - commandStartedEvent ignoreCommandMonitoringEvents: - killCursors - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - bucket: id: &bucket bucket database: *database - session: id: &session session client: *client - name: findOneAndReplace object: *collection arguments: timeoutMS: 100000 filter: {} replacement: { x: 1 } expectEvents: - client: *client events: - commandStartedEvent: commandName: findAndModify databaseName: *databaseName command: findAndModify: *collectionName writeConcern: { $$exists: false } maxTimeMS: { $$type: ["int", "long"] } - description: "maxTimeMS is ignored if timeoutMS is set - findOneAndReplace on collection" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client useMultipleMongoses: false observeEvents: - commandStartedEvent ignoreCommandMonitoringEvents: - killCursors - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - bucket: id: &bucket bucket database: *database - session: id: &session session client: *client - name: findOneAndReplace object: *collection arguments: timeoutMS: &timeoutMS 1000 maxTimeMS: 5000 filter: {} replacement: { x: 1 } expectEvents: - client: *client events: - commandStartedEvent: commandName: findAndModify databaseName: *databaseName command: findAndModify: 
*collectionName maxTimeMS: { $$lte: *timeoutMS } - description: "socketTimeoutMS is ignored if timeoutMS is set - findOneAndUpdate on collection" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client uriOptions: socketTimeoutMS: 1 useMultipleMongoses: false - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - bucket: id: &bucket bucket database: *database - session: id: &session session client: *client - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["findAndModify"] blockConnection: true blockTimeMS: 5 - name: findOneAndUpdate object: *collection arguments: timeoutMS: 100000 filter: {} update: { $set: { x: 1 } } - description: "wTimeoutMS is ignored if timeoutMS is set - findOneAndUpdate on collection" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client uriOptions: wTimeoutMS: 1 observeEvents: - commandStartedEvent ignoreCommandMonitoringEvents: - killCursors - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - bucket: id: &bucket bucket database: *database - session: id: &session session client: *client - name: findOneAndUpdate object: *collection arguments: timeoutMS: 100000 filter: {} update: { $set: { x: 1 } } expectEvents: - client: *client events: - commandStartedEvent: commandName: findAndModify databaseName: *databaseName command: findAndModify: *collectionName writeConcern: { $$exists: false } maxTimeMS: { $$type: ["int", "long"] } - description: "maxTimeMS is ignored if timeoutMS is set - findOneAndUpdate on collection" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client useMultipleMongoses: false observeEvents: - commandStartedEvent ignoreCommandMonitoringEvents: - killCursors - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - bucket: id: &bucket bucket database: *database - session: id: &session session client: *client - name: findOneAndUpdate object: *collection arguments: timeoutMS: &timeoutMS 1000 maxTimeMS: 5000 filter: {} update: { $set: { x: 1 } } expectEvents: - client: *client events: - commandStartedEvent: commandName: findAndModify databaseName: *databaseName command: findAndModify: *collectionName maxTimeMS: { $$lte: *timeoutMS } - description: "socketTimeoutMS is ignored if timeoutMS is set - bulkWrite on collection" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client uriOptions: socketTimeoutMS: 1 useMultipleMongoses: false - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - bucket: id: &bucket bucket database: *database - session: id: &session session client: *client - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["insert"] blockConnection: true blockTimeMS: 5 - name: bulkWrite object: *collection arguments: timeoutMS: 100000 requests: - insertOne: document: { _id: 1 } - 
description: "wTimeoutMS is ignored if timeoutMS is set - bulkWrite on collection" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client uriOptions: wTimeoutMS: 1 observeEvents: - commandStartedEvent ignoreCommandMonitoringEvents: - killCursors - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - bucket: id: &bucket bucket database: *database - session: id: &session session client: *client - name: bulkWrite object: *collection arguments: timeoutMS: 100000 requests: - insertOne: document: { _id: 1 } expectEvents: - client: *client events: - commandStartedEvent: commandName: insert databaseName: *databaseName command: insert: *collectionName writeConcern: { $$exists: false } maxTimeMS: { $$type: ["int", "long"] } - description: "socketTimeoutMS is ignored if timeoutMS is set - createIndex on collection" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client uriOptions: socketTimeoutMS: 1 useMultipleMongoses: false - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - bucket: id: &bucket bucket database: *database - session: id: &session session client: *client - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["createIndexes"] blockConnection: true blockTimeMS: 5 - name: createIndex object: *collection arguments: timeoutMS: 100000 keys: { x: 1 } name: "x_1" - description: "wTimeoutMS is ignored if timeoutMS is set - createIndex on collection" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client uriOptions: wTimeoutMS: 1 observeEvents: - commandStartedEvent ignoreCommandMonitoringEvents: - killCursors - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - bucket: id: &bucket bucket database: *database - session: id: &session session client: *client - name: createIndex object: *collection arguments: timeoutMS: 100000 keys: { x: 1 } name: "x_1" expectEvents: - client: *client events: - commandStartedEvent: commandName: createIndexes databaseName: *databaseName command: createIndexes: *collectionName writeConcern: { $$exists: false } maxTimeMS: { $$type: ["int", "long"] } - description: "maxTimeMS is ignored if timeoutMS is set - createIndex on collection" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client useMultipleMongoses: false observeEvents: - commandStartedEvent ignoreCommandMonitoringEvents: - killCursors - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - bucket: id: &bucket bucket database: *database - session: id: &session session client: *client - name: createIndex object: *collection arguments: timeoutMS: &timeoutMS 1000 maxTimeMS: 5000 keys: { x: 1 } name: "x_1" expectEvents: - client: *client events: - commandStartedEvent: commandName: createIndexes databaseName: *databaseName command: createIndexes: *collectionName maxTimeMS: { $$lte: *timeoutMS } - description: "socketTimeoutMS is ignored if timeoutMS is 
set - dropIndex on collection" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client uriOptions: socketTimeoutMS: 1 useMultipleMongoses: false - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - bucket: id: &bucket bucket database: *database - session: id: &session session client: *client - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["dropIndexes"] blockConnection: true blockTimeMS: 5 - name: dropIndex object: *collection arguments: timeoutMS: 100000 name: "x_1" expectError: isClientError: false isTimeoutError: false - description: "wTimeoutMS is ignored if timeoutMS is set - dropIndex on collection" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client uriOptions: wTimeoutMS: 1 observeEvents: - commandStartedEvent ignoreCommandMonitoringEvents: - killCursors - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - bucket: id: &bucket bucket database: *database - session: id: &session session client: *client - name: dropIndex object: *collection arguments: timeoutMS: 100000 name: "x_1" expectError: isClientError: false isTimeoutError: false expectEvents: - client: *client events: - commandStartedEvent: commandName: dropIndexes databaseName: *databaseName command: dropIndexes: *collectionName writeConcern: { $$exists: false } maxTimeMS: { $$type: ["int", "long"] } - description: "maxTimeMS is ignored if timeoutMS is set - dropIndex on collection" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client useMultipleMongoses: false observeEvents: - commandStartedEvent ignoreCommandMonitoringEvents: - killCursors - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - bucket: id: &bucket bucket database: *database - session: id: &session session client: *client - name: dropIndex object: *collection arguments: timeoutMS: &timeoutMS 1000 maxTimeMS: 5000 name: "x_1" expectError: isClientError: false isTimeoutError: false expectEvents: - client: *client events: - commandStartedEvent: commandName: dropIndexes databaseName: *databaseName command: dropIndexes: *collectionName maxTimeMS: { $$lte: *timeoutMS } - description: "socketTimeoutMS is ignored if timeoutMS is set - dropIndexes on collection" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client uriOptions: socketTimeoutMS: 1 useMultipleMongoses: false - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - bucket: id: &bucket bucket database: *database - session: id: &session session client: *client - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["dropIndexes"] blockConnection: true blockTimeMS: 5 - name: dropIndexes object: *collection arguments: timeoutMS: 100000 - description: "wTimeoutMS is ignored if timeoutMS is set - dropIndexes on collection" operations: - name: 
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - client:
                id: &client client
                uriOptions:
                  wTimeoutMS: 1
                observeEvents:
                  - commandStartedEvent
                ignoreCommandMonitoringEvents:
                  - killCursors
            - database:
                id: &database database
                client: *client
                databaseName: *databaseName
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
            - bucket:
                id: &bucket bucket
                database: *database
            - session:
                id: &session session
                client: *client
      - name: dropIndexes
        object: *collection
        arguments:
          timeoutMS: 100000
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: dropIndexes
              databaseName: *databaseName
              command:
                dropIndexes: *collectionName
                writeConcern: { $$exists: false }
                maxTimeMS: { $$type: ["int", "long"] }

  - description: "maxTimeMS is ignored if timeoutMS is set - dropIndexes on collection"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - client:
                id: &client client
                useMultipleMongoses: false
                observeEvents:
                  - commandStartedEvent
                ignoreCommandMonitoringEvents:
                  - killCursors
            - database:
                id: &database database
                client: *client
                databaseName: *databaseName
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
            - bucket:
                id: &bucket bucket
                database: *database
            - session:
                id: &session session
                client: *client
      - name: dropIndexes
        object: *collection
        arguments:
          timeoutMS: &timeoutMS 1000
          maxTimeMS: 5000
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: dropIndexes
              databaseName: *databaseName
              command:
                dropIndexes: *collectionName
                maxTimeMS: { $$lte: *timeoutMS }

# ---------------------------------------------------------------------------
# mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_operations_timeout/error-transformations.yml
# ---------------------------------------------------------------------------

description: "MaxTimeMSExpired server errors are transformed into a custom timeout error"

schemaVersion: "1.9"

# failCommand is available on 4.0 for replica sets and 4.2 for sharded clusters.
runOnRequirements:
  - minServerVersion: "4.0"
    topologies: ["replicaset"]
  - minServerVersion: "4.2"
    topologies: ["sharded"]

createEntities:
  - client:
      id: &failPointClient failPointClient
      useMultipleMongoses: false
  - client:
      id: &client client
      uriOptions:
        timeoutMS: 250
      useMultipleMongoses: false
      observeEvents:
        - commandStartedEvent
  - database:
      id: &database database
      client: *client
      databaseName: &databaseName test
  - collection:
      id: &collection collection
      database: *database
      collectionName: &collectionName coll

initialData:
  - collectionName: *collectionName
    databaseName: *databaseName
    documents: []

tests:
  # A server response like {ok: 0, code: 50, ...} is transformed.
  - description: "basic MaxTimeMSExpired error is transformed"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["insert"]
              errorCode: 50
      - name: insertOne
        object: *collection
        arguments:
          document: { _id: 1 }
        expectError:
          isTimeoutError: true
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: insert
              databaseName: *databaseName
              command:
                insert: *collectionName
                maxTimeMS: { $$type: ["int", "long"] }

  # A server response like {ok: 1, writeConcernError: {code: 50, ...}} is transformed.
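  # Illustrative sketch (not part of the upstream spec test): the two server
  # response shapes referenced in the comments above might look roughly like
  #
  #   { ok: 0, code: 50, codeName: "MaxTimeMSExpired", errmsg: "..." }
  #   { ok: 1, writeConcernError: { code: 50, errmsg: "maxTimeMS expired" } }
  #
  # Code 50 is the server's MaxTimeMSExpired error code; in both cases these
  # tests expect the driver to surface a timeout error (isTimeoutError: true)
  # rather than a generic server error.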
- description: "write concern error MaxTimeMSExpired is transformed" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["insert"] writeConcernError: code: 50 errmsg: "maxTimeMS expired" - name: insertOne object: *collection arguments: document: { _id: 1 } expectError: isTimeoutError: true expectEvents: - client: *client events: - commandStartedEvent: commandName: insert databaseName: *databaseName command: insert: *collectionName maxTimeMS: { $$type: ["int", "long"] } mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_operations_timeout/global-timeoutMS.yml000066400000000000000000003157601505113246500334250ustar00rootroot00000000000000# Tests in this file are generated from global-timeoutMS.yml.template. description: "timeoutMS can be configured on a MongoClient" schemaVersion: "1.9" runOnRequirements: - minServerVersion: "4.4" topologies: ["replicaset", "sharded"] createEntities: - client: id: &failPointClient failPointClient useMultipleMongoses: false initialData: - collectionName: &collectionName coll databaseName: &databaseName test documents: [] tests: # For each operation, we execute two tests: # # 1. timeoutMS can be configured to a non-zero value on a MongoClient and is inherited by the operation. Each test # constructs a client entity with timeoutMS=250 and configures a fail point to block the operation for 350ms so # execution results in a timeout error. # # 2. timeoutMS can be set to 0 for a MongoClient. Each test constructs a client entity with timeoutMS=0 and # configures a fail point to block the operation for 15ms. The tests expect the operation to succeed and the command # sent to not contain a maxTimeMS field. - description: "timeoutMS can be configured on a MongoClient - listDatabases on client" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client uriOptions: timeoutMS: 250 useMultipleMongoses: false observeEvents: - commandStartedEvent - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand # Use "times: 2" to workaround a quirk in Python on Windows where # socket I/O can timeout ~20ms earlier than expected. With # "times: 1" the retry would succeed within the remaining ~20ms. 
            mode: { times: 2 }
            data:
              failCommands: ["listDatabases"]
              blockConnection: true
              blockTimeMS: 350
      - name: listDatabases
        object: *client
        arguments:
          filter: {}
        expectError:
          isTimeoutError: true
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: listDatabases
              databaseName: admin
              command:
                listDatabases: 1
                maxTimeMS: { $$type: ["int", "long"] }

  - description: "timeoutMS can be set to 0 on a MongoClient - listDatabases on client"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - client:
                id: &client client
                uriOptions:
                  timeoutMS: 0
                useMultipleMongoses: false
                observeEvents:
                  - commandStartedEvent
                ignoreCommandMonitoringEvents:
                  - killCursors
            - database:
                id: &database database
                client: *client
                databaseName: *databaseName
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["listDatabases"]
              blockConnection: true
              blockTimeMS: 15
      - name: listDatabases
        object: *client
        arguments:
          filter: {}
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: listDatabases
              databaseName: admin
              command:
                listDatabases: 1
                maxTimeMS: { $$exists: false }

  - description: "timeoutMS can be configured on a MongoClient - listDatabaseNames on client"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - client:
                id: &client client
                uriOptions:
                  timeoutMS: 250
                useMultipleMongoses: false
                observeEvents:
                  - commandStartedEvent
            - database:
                id: &database database
                client: *client
                databaseName: *databaseName
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            # Use "times: 2" to workaround a quirk in Python on Windows where
            # socket I/O can timeout ~20ms earlier than expected. With
            # "times: 1" the retry would succeed within the remaining ~20ms.
            mode: { times: 2 }
            data:
              failCommands: ["listDatabases"]
              blockConnection: true
              blockTimeMS: 350
      - name: listDatabaseNames
        object: *client
        expectError:
          isTimeoutError: true
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: listDatabases
              databaseName: admin
              command:
                listDatabases: 1
                maxTimeMS: { $$type: ["int", "long"] }

  - description: "timeoutMS can be set to 0 on a MongoClient - listDatabaseNames on client"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - client:
                id: &client client
                uriOptions:
                  timeoutMS: 0
                useMultipleMongoses: false
                observeEvents:
                  - commandStartedEvent
                ignoreCommandMonitoringEvents:
                  - killCursors
            - database:
                id: &database database
                client: *client
                databaseName: *databaseName
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["listDatabases"]
              blockConnection: true
              blockTimeMS: 15
      - name: listDatabaseNames
        object: *client
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: listDatabases
              databaseName: admin
              command:
                listDatabases: 1
                maxTimeMS: { $$exists: false }

  - description: "timeoutMS can be configured on a MongoClient - createChangeStream on client"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - client:
                id: &client client
                uriOptions:
                  timeoutMS: 250
                useMultipleMongoses: false
                observeEvents:
                  - commandStartedEvent
            - database:
                id: &database database
                client: *client
                databaseName: *databaseName
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            # Use "times: 2" to workaround a quirk in Python on Windows where
            # socket I/O can timeout ~20ms earlier than expected. With
            # "times: 1" the retry would succeed within the remaining ~20ms.
            mode: { times: 2 }
            data:
              failCommands: ["aggregate"]
              blockConnection: true
              blockTimeMS: 350
      - name: createChangeStream
        object: *client
        arguments:
          pipeline: []
        expectError:
          isTimeoutError: true
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: aggregate
              databaseName: admin
              command:
                aggregate: 1
                maxTimeMS: { $$type: ["int", "long"] }

  - description: "timeoutMS can be set to 0 on a MongoClient - createChangeStream on client"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - client:
                id: &client client
                uriOptions:
                  timeoutMS: 0
                useMultipleMongoses: false
                observeEvents:
                  - commandStartedEvent
                ignoreCommandMonitoringEvents:
                  - killCursors
            - database:
                id: &database database
                client: *client
                databaseName: *databaseName
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["aggregate"]
              blockConnection: true
              blockTimeMS: 15
      - name: createChangeStream
        object: *client
        arguments:
          pipeline: []
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: aggregate
              databaseName: admin
              command:
                aggregate: 1
                maxTimeMS: { $$exists: false }

  - description: "timeoutMS can be configured on a MongoClient - aggregate on database"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - client:
                id: &client client
                uriOptions:
                  timeoutMS: 250
                useMultipleMongoses: false
                observeEvents:
                  - commandStartedEvent
            - database:
                id: &database database
                client: *client
                databaseName: *databaseName
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            # Use "times: 2" to workaround a quirk in Python on Windows where
            # socket I/O can timeout ~20ms earlier than expected. With
            # "times: 1" the retry would succeed within the remaining ~20ms.
            mode: { times: 2 }
            data:
              failCommands: ["aggregate"]
              blockConnection: true
              blockTimeMS: 350
      - name: aggregate
        object: *database
        arguments:
          pipeline: [ { $listLocalSessions: {} }, { $limit: 1 } ]
        expectError:
          isTimeoutError: true
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: aggregate
              databaseName: *databaseName
              command:
                aggregate: 1
                maxTimeMS: { $$type: ["int", "long"] }

  - description: "timeoutMS can be set to 0 on a MongoClient - aggregate on database"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - client:
                id: &client client
                uriOptions:
                  timeoutMS: 0
                useMultipleMongoses: false
                observeEvents:
                  - commandStartedEvent
                ignoreCommandMonitoringEvents:
                  - killCursors
            - database:
                id: &database database
                client: *client
                databaseName: *databaseName
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["aggregate"]
              blockConnection: true
              blockTimeMS: 15
      - name: aggregate
        object: *database
        arguments:
          pipeline: [ { $listLocalSessions: {} }, { $limit: 1 } ]
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: aggregate
              databaseName: *databaseName
              command:
                aggregate: 1
                maxTimeMS: { $$exists: false }

  - description: "timeoutMS can be configured on a MongoClient - listCollections on database"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - client:
                id: &client client
                uriOptions:
                  timeoutMS: 250
                useMultipleMongoses: false
                observeEvents:
                  - commandStartedEvent
            - database:
                id: &database database
                client: *client
                databaseName: *databaseName
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            # Use "times: 2" to workaround a quirk in Python on Windows where
            # socket I/O can timeout ~20ms earlier than expected. With
            # "times: 1" the retry would succeed within the remaining ~20ms.
            mode: { times: 2 }
            data:
              failCommands: ["listCollections"]
              blockConnection: true
              blockTimeMS: 350
      - name: listCollections
        object: *database
        arguments:
          filter: {}
        expectError:
          isTimeoutError: true
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: listCollections
              databaseName: *databaseName
              command:
                listCollections: 1
                maxTimeMS: { $$type: ["int", "long"] }

  - description: "timeoutMS can be set to 0 on a MongoClient - listCollections on database"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - client:
                id: &client client
                uriOptions:
                  timeoutMS: 0
                useMultipleMongoses: false
                observeEvents:
                  - commandStartedEvent
                ignoreCommandMonitoringEvents:
                  - killCursors
            - database:
                id: &database database
                client: *client
                databaseName: *databaseName
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["listCollections"]
              blockConnection: true
              blockTimeMS: 15
      - name: listCollections
        object: *database
        arguments:
          filter: {}
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: listCollections
              databaseName: *databaseName
              command:
                listCollections: 1
                maxTimeMS: { $$exists: false }

  - description: "timeoutMS can be configured on a MongoClient - listCollectionNames on database"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - client:
                id: &client client
                uriOptions:
                  timeoutMS: 250
                useMultipleMongoses: false
                observeEvents:
                  - commandStartedEvent
            - database:
                id: &database database
                client: *client
                databaseName: *databaseName
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            # Use "times: 2" to workaround a quirk in Python on Windows where
            # socket I/O can timeout ~20ms earlier than expected. With
            # "times: 1" the retry would succeed within the remaining ~20ms.
            mode: { times: 2 }
            data:
              failCommands: ["listCollections"]
              blockConnection: true
              blockTimeMS: 350
      - name: listCollectionNames
        object: *database
        arguments:
          filter: {}
        expectError:
          isTimeoutError: true
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: listCollections
              databaseName: *databaseName
              command:
                listCollections: 1
                maxTimeMS: { $$type: ["int", "long"] }

  - description: "timeoutMS can be set to 0 on a MongoClient - listCollectionNames on database"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - client:
                id: &client client
                uriOptions:
                  timeoutMS: 0
                useMultipleMongoses: false
                observeEvents:
                  - commandStartedEvent
                ignoreCommandMonitoringEvents:
                  - killCursors
            - database:
                id: &database database
                client: *client
                databaseName: *databaseName
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["listCollections"]
              blockConnection: true
              blockTimeMS: 15
      - name: listCollectionNames
        object: *database
        arguments:
          filter: {}
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: listCollections
              databaseName: *databaseName
              command:
                listCollections: 1
                maxTimeMS: { $$exists: false }

  - description: "timeoutMS can be configured on a MongoClient - runCommand on database"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - client:
                id: &client client
                uriOptions:
                  timeoutMS: 250
                useMultipleMongoses: false
                observeEvents:
                  - commandStartedEvent
            - database:
                id: &database database
                client: *client
                databaseName: *databaseName
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            # Use "times: 2" to workaround a quirk in Python on Windows where
            # socket I/O can timeout ~20ms earlier than expected. With
            # "times: 1" the retry would succeed within the remaining ~20ms.
            mode: { times: 2 }
            data:
              failCommands: ["ping"]
              blockConnection: true
              blockTimeMS: 350
      - name: runCommand
        object: *database
        arguments:
          command: { ping: 1 }
          commandName: ping
        expectError:
          isTimeoutError: true
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: ping
              databaseName: *databaseName
              command:
                ping: 1
                maxTimeMS: { $$type: ["int", "long"] }

  - description: "timeoutMS can be set to 0 on a MongoClient - runCommand on database"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - client:
                id: &client client
                uriOptions:
                  timeoutMS: 0
                useMultipleMongoses: false
                observeEvents:
                  - commandStartedEvent
                ignoreCommandMonitoringEvents:
                  - killCursors
            - database:
                id: &database database
                client: *client
                databaseName: *databaseName
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["ping"]
              blockConnection: true
              blockTimeMS: 15
      - name: runCommand
        object: *database
        arguments:
          command: { ping: 1 }
          commandName: ping
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: ping
              databaseName: *databaseName
              command:
                ping: 1
                maxTimeMS: { $$exists: false }

  - description: "timeoutMS can be configured on a MongoClient - createChangeStream on database"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - client:
                id: &client client
                uriOptions:
                  timeoutMS: 250
                useMultipleMongoses: false
                observeEvents:
                  - commandStartedEvent
            - database:
                id: &database database
                client: *client
                databaseName: *databaseName
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            # Use "times: 2" to workaround a quirk in Python on Windows where
            # socket I/O can timeout ~20ms earlier than expected. With
            # "times: 1" the retry would succeed within the remaining ~20ms.
            mode: { times: 2 }
            data:
              failCommands: ["aggregate"]
              blockConnection: true
              blockTimeMS: 350
      - name: createChangeStream
        object: *database
        arguments:
          pipeline: []
        expectError:
          isTimeoutError: true
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: aggregate
              databaseName: *databaseName
              command:
                aggregate: 1
                maxTimeMS: { $$type: ["int", "long"] }

  - description: "timeoutMS can be set to 0 on a MongoClient - createChangeStream on database"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - client:
                id: &client client
                uriOptions:
                  timeoutMS: 0
                useMultipleMongoses: false
                observeEvents:
                  - commandStartedEvent
                ignoreCommandMonitoringEvents:
                  - killCursors
            - database:
                id: &database database
                client: *client
                databaseName: *databaseName
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["aggregate"]
              blockConnection: true
              blockTimeMS: 15
      - name: createChangeStream
        object: *database
        arguments:
          pipeline: []
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: aggregate
              databaseName: *databaseName
              command:
                aggregate: 1
                maxTimeMS: { $$exists: false }

  - description: "timeoutMS can be configured on a MongoClient - aggregate on collection"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - client:
                id: &client client
                uriOptions:
                  timeoutMS: 250
                useMultipleMongoses: false
                observeEvents:
                  - commandStartedEvent
            - database:
                id: &database database
                client: *client
                databaseName: *databaseName
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            # Use "times: 2" to workaround a quirk in Python on Windows where
            # socket I/O can timeout ~20ms earlier than expected. With
            # "times: 1" the retry would succeed within the remaining ~20ms.
            mode: { times: 2 }
            data:
              failCommands: ["aggregate"]
              blockConnection: true
              blockTimeMS: 350
      - name: aggregate
        object: *collection
        arguments:
          pipeline: []
        expectError:
          isTimeoutError: true
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: aggregate
              databaseName: *databaseName
              command:
                aggregate: *collectionName
                maxTimeMS: { $$type: ["int", "long"] }

  - description: "timeoutMS can be set to 0 on a MongoClient - aggregate on collection"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - client:
                id: &client client
                uriOptions:
                  timeoutMS: 0
                useMultipleMongoses: false
                observeEvents:
                  - commandStartedEvent
                ignoreCommandMonitoringEvents:
                  - killCursors
            - database:
                id: &database database
                client: *client
                databaseName: *databaseName
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["aggregate"]
              blockConnection: true
              blockTimeMS: 15
      - name: aggregate
        object: *collection
        arguments:
          pipeline: []
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: aggregate
              databaseName: *databaseName
              command:
                aggregate: *collectionName
                maxTimeMS: { $$exists: false }

  - description: "timeoutMS can be configured on a MongoClient - count on collection"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - client:
                id: &client client
                uriOptions:
                  timeoutMS: 250
                useMultipleMongoses: false
                observeEvents:
                  - commandStartedEvent
            - database:
                id: &database database
                client: *client
                databaseName: *databaseName
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            # Use "times: 2" to workaround a quirk in Python on Windows where
            # socket I/O can timeout ~20ms earlier than expected. With
            # "times: 1" the retry would succeed within the remaining ~20ms.
            mode: { times: 2 }
            data:
              failCommands: ["count"]
              blockConnection: true
              blockTimeMS: 350
      - name: count
        object: *collection
        arguments:
          filter: {}
        expectError:
          isTimeoutError: true
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: count
              databaseName: *databaseName
              command:
                count: *collectionName
                maxTimeMS: { $$type: ["int", "long"] }

  - description: "timeoutMS can be set to 0 on a MongoClient - count on collection"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - client:
                id: &client client
                uriOptions:
                  timeoutMS: 0
                useMultipleMongoses: false
                observeEvents:
                  - commandStartedEvent
                ignoreCommandMonitoringEvents:
                  - killCursors
            - database:
                id: &database database
                client: *client
                databaseName: *databaseName
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["count"]
              blockConnection: true
              blockTimeMS: 15
      - name: count
        object: *collection
        arguments:
          filter: {}
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: count
              databaseName: *databaseName
              command:
                count: *collectionName
                maxTimeMS: { $$exists: false }

  - description: "timeoutMS can be configured on a MongoClient - countDocuments on collection"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - client:
                id: &client client
                uriOptions:
                  timeoutMS: 250
                useMultipleMongoses: false
                observeEvents:
                  - commandStartedEvent
            - database:
                id: &database database
                client: *client
                databaseName: *databaseName
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            # Use "times: 2" to workaround a quirk in Python on Windows where
            # socket I/O can timeout ~20ms earlier than expected. With
            # "times: 1" the retry would succeed within the remaining ~20ms.
            mode: { times: 2 }
            data:
              failCommands: ["aggregate"]
              blockConnection: true
              blockTimeMS: 350
      - name: countDocuments
        object: *collection
        arguments:
          filter: {}
        expectError:
          isTimeoutError: true
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: aggregate
              databaseName: *databaseName
              command:
                aggregate: *collectionName
                maxTimeMS: { $$type: ["int", "long"] }

  - description: "timeoutMS can be set to 0 on a MongoClient - countDocuments on collection"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - client:
                id: &client client
                uriOptions:
                  timeoutMS: 0
                useMultipleMongoses: false
                observeEvents:
                  - commandStartedEvent
                ignoreCommandMonitoringEvents:
                  - killCursors
            - database:
                id: &database database
                client: *client
                databaseName: *databaseName
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["aggregate"]
              blockConnection: true
              blockTimeMS: 15
      - name: countDocuments
        object: *collection
        arguments:
          filter: {}
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: aggregate
              databaseName: *databaseName
              command:
                aggregate: *collectionName
                maxTimeMS: { $$exists: false }

  - description: "timeoutMS can be configured on a MongoClient - estimatedDocumentCount on collection"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - client:
                id: &client client
                uriOptions:
                  timeoutMS: 250
                useMultipleMongoses: false
                observeEvents:
                  - commandStartedEvent
            - database:
                id: &database database
                client: *client
                databaseName: *databaseName
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            # Use "times: 2" to workaround a quirk in Python on Windows where
            # socket I/O can timeout ~20ms earlier than expected. With
            # "times: 1" the retry would succeed within the remaining ~20ms.
            mode: { times: 2 }
            data:
              failCommands: ["count"]
              blockConnection: true
              blockTimeMS: 350
      - name: estimatedDocumentCount
        object: *collection
        expectError:
          isTimeoutError: true
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: count
              databaseName: *databaseName
              command:
                count: *collectionName
                maxTimeMS: { $$type: ["int", "long"] }

  - description: "timeoutMS can be set to 0 on a MongoClient - estimatedDocumentCount on collection"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - client:
                id: &client client
                uriOptions:
                  timeoutMS: 0
                useMultipleMongoses: false
                observeEvents:
                  - commandStartedEvent
                ignoreCommandMonitoringEvents:
                  - killCursors
            - database:
                id: &database database
                client: *client
                databaseName: *databaseName
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["count"]
              blockConnection: true
              blockTimeMS: 15
      - name: estimatedDocumentCount
        object: *collection
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: count
              databaseName: *databaseName
              command:
                count: *collectionName
                maxTimeMS: { $$exists: false }

  - description: "timeoutMS can be configured on a MongoClient - distinct on collection"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - client:
                id: &client client
                uriOptions:
                  timeoutMS: 250
                useMultipleMongoses: false
                observeEvents:
                  - commandStartedEvent
            - database:
                id: &database database
                client: *client
                databaseName: *databaseName
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            # Use "times: 2" to workaround a quirk in Python on Windows where
            # socket I/O can timeout ~20ms earlier than expected. With
            # "times: 1" the retry would succeed within the remaining ~20ms.
            mode: { times: 2 }
            data:
              failCommands: ["distinct"]
              blockConnection: true
              blockTimeMS: 350
      - name: distinct
        object: *collection
        arguments:
          fieldName: x
          filter: {}
        expectError:
          isTimeoutError: true
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: distinct
              databaseName: *databaseName
              command:
                distinct: *collectionName
                maxTimeMS: { $$type: ["int", "long"] }

  - description: "timeoutMS can be set to 0 on a MongoClient - distinct on collection"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - client:
                id: &client client
                uriOptions:
                  timeoutMS: 0
                useMultipleMongoses: false
                observeEvents:
                  - commandStartedEvent
                ignoreCommandMonitoringEvents:
                  - killCursors
            - database:
                id: &database database
                client: *client
                databaseName: *databaseName
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["distinct"]
              blockConnection: true
              blockTimeMS: 15
      - name: distinct
        object: *collection
        arguments:
          fieldName: x
          filter: {}
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: distinct
              databaseName: *databaseName
              command:
                distinct: *collectionName
                maxTimeMS: { $$exists: false }

  - description: "timeoutMS can be configured on a MongoClient - find on collection"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - client:
                id: &client client
                uriOptions:
                  timeoutMS: 250
                useMultipleMongoses: false
                observeEvents:
                  - commandStartedEvent
            - database:
                id: &database database
                client: *client
                databaseName: *databaseName
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            # Use "times: 2" to workaround a quirk in Python on Windows where
            # socket I/O can timeout ~20ms earlier than expected. With
            # "times: 1" the retry would succeed within the remaining ~20ms.
            mode: { times: 2 }
            data:
              failCommands: ["find"]
              blockConnection: true
              blockTimeMS: 350
      - name: find
        object: *collection
        arguments:
          filter: {}
        expectError:
          isTimeoutError: true
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: find
              databaseName: *databaseName
              command:
                find: *collectionName
                maxTimeMS: { $$type: ["int", "long"] }

  - description: "timeoutMS can be set to 0 on a MongoClient - find on collection"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - client:
                id: &client client
                uriOptions:
                  timeoutMS: 0
                useMultipleMongoses: false
                observeEvents:
                  - commandStartedEvent
                ignoreCommandMonitoringEvents:
                  - killCursors
            - database:
                id: &database database
                client: *client
                databaseName: *databaseName
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["find"]
              blockConnection: true
              blockTimeMS: 15
      - name: find
        object: *collection
        arguments:
          filter: {}
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: find
              databaseName: *databaseName
              command:
                find: *collectionName
                maxTimeMS: { $$exists: false }

  - description: "timeoutMS can be configured on a MongoClient - findOne on collection"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - client:
                id: &client client
                uriOptions:
                  timeoutMS: 250
                useMultipleMongoses: false
                observeEvents:
                  - commandStartedEvent
            - database:
                id: &database database
                client: *client
                databaseName: *databaseName
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            # Use "times: 2" to workaround a quirk in Python on Windows where
            # socket I/O can timeout ~20ms earlier than expected. With
            # "times: 1" the retry would succeed within the remaining ~20ms.
            mode: { times: 2 }
            data:
              failCommands: ["find"]
              blockConnection: true
              blockTimeMS: 350
      - name: findOne
        object: *collection
        arguments:
          filter: {}
        expectError:
          isTimeoutError: true
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: find
              databaseName: *databaseName
              command:
                find: *collectionName
                maxTimeMS: { $$type: ["int", "long"] }

  - description: "timeoutMS can be set to 0 on a MongoClient - findOne on collection"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - client:
                id: &client client
                uriOptions:
                  timeoutMS: 0
                useMultipleMongoses: false
                observeEvents:
                  - commandStartedEvent
                ignoreCommandMonitoringEvents:
                  - killCursors
            - database:
                id: &database database
                client: *client
                databaseName: *databaseName
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["find"]
              blockConnection: true
              blockTimeMS: 15
      - name: findOne
        object: *collection
        arguments:
          filter: {}
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: find
              databaseName: *databaseName
              command:
                find: *collectionName
                maxTimeMS: { $$exists: false }

  - description: "timeoutMS can be configured on a MongoClient - listIndexes on collection"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - client:
                id: &client client
                uriOptions:
                  timeoutMS: 250
                useMultipleMongoses: false
                observeEvents:
                  - commandStartedEvent
            - database:
                id: &database database
                client: *client
                databaseName: *databaseName
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            # Use "times: 2" to workaround a quirk in Python on Windows where
            # socket I/O can timeout ~20ms earlier than expected. With
            # "times: 1" the retry would succeed within the remaining ~20ms.
            mode: { times: 2 }
            data:
              failCommands: ["listIndexes"]
              blockConnection: true
              blockTimeMS: 350
      - name: listIndexes
        object: *collection
        expectError:
          isTimeoutError: true
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: listIndexes
              databaseName: *databaseName
              command:
                listIndexes: *collectionName
                maxTimeMS: { $$type: ["int", "long"] }

  - description: "timeoutMS can be set to 0 on a MongoClient - listIndexes on collection"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - client:
                id: &client client
                uriOptions:
                  timeoutMS: 0
                useMultipleMongoses: false
                observeEvents:
                  - commandStartedEvent
                ignoreCommandMonitoringEvents:
                  - killCursors
            - database:
                id: &database database
                client: *client
                databaseName: *databaseName
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["listIndexes"]
              blockConnection: true
              blockTimeMS: 15
      - name: listIndexes
        object: *collection
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: listIndexes
              databaseName: *databaseName
              command:
                listIndexes: *collectionName
                maxTimeMS: { $$exists: false }

  - description: "timeoutMS can be configured on a MongoClient - listIndexNames on collection"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - client:
                id: &client client
                uriOptions:
                  timeoutMS: 250
                useMultipleMongoses: false
                observeEvents:
                  - commandStartedEvent
            - database:
                id: &database database
                client: *client
                databaseName: *databaseName
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            # Use "times: 2" to workaround a quirk in Python on Windows where
            # socket I/O can timeout ~20ms earlier than expected. With
            # "times: 1" the retry would succeed within the remaining ~20ms.
            mode: { times: 2 }
            data:
              failCommands: ["listIndexes"]
              blockConnection: true
              blockTimeMS: 350
      - name: listIndexNames
        object: *collection
        expectError:
          isTimeoutError: true
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: listIndexes
              databaseName: *databaseName
              command:
                listIndexes: *collectionName
                maxTimeMS: { $$type: ["int", "long"] }

  - description: "timeoutMS can be set to 0 on a MongoClient - listIndexNames on collection"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - client:
                id: &client client
                uriOptions:
                  timeoutMS: 0
                useMultipleMongoses: false
                observeEvents:
                  - commandStartedEvent
                ignoreCommandMonitoringEvents:
                  - killCursors
            - database:
                id: &database database
                client: *client
                databaseName: *databaseName
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["listIndexes"]
              blockConnection: true
              blockTimeMS: 15
      - name: listIndexNames
        object: *collection
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: listIndexes
              databaseName: *databaseName
              command:
                listIndexes: *collectionName
                maxTimeMS: { $$exists: false }

  - description: "timeoutMS can be configured on a MongoClient - createChangeStream on collection"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - client:
                id: &client client
                uriOptions:
                  timeoutMS: 250
                useMultipleMongoses: false
                observeEvents:
                  - commandStartedEvent
            - database:
                id: &database database
                client: *client
                databaseName: *databaseName
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            # Use "times: 2" to workaround a quirk in Python on Windows where
            # socket I/O can timeout ~20ms earlier than expected. With
            # "times: 1" the retry would succeed within the remaining ~20ms.
            mode: { times: 2 }
            data:
              failCommands: ["aggregate"]
              blockConnection: true
              blockTimeMS: 350
      - name: createChangeStream
        object: *collection
        arguments:
          pipeline: []
        expectError:
          isTimeoutError: true
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: aggregate
              databaseName: *databaseName
              command:
                aggregate: *collectionName
                maxTimeMS: { $$type: ["int", "long"] }

  - description: "timeoutMS can be set to 0 on a MongoClient - createChangeStream on collection"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - client:
                id: &client client
                uriOptions:
                  timeoutMS: 0
                useMultipleMongoses: false
                observeEvents:
                  - commandStartedEvent
                ignoreCommandMonitoringEvents:
                  - killCursors
            - database:
                id: &database database
                client: *client
                databaseName: *databaseName
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["aggregate"]
              blockConnection: true
              blockTimeMS: 15
      - name: createChangeStream
        object: *collection
        arguments:
          pipeline: []
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: aggregate
              databaseName: *databaseName
              command:
                aggregate: *collectionName
                maxTimeMS: { $$exists: false }

  - description: "timeoutMS can be configured on a MongoClient - insertOne on collection"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - client:
                id: &client client
                uriOptions:
                  timeoutMS: 250
                useMultipleMongoses: false
                observeEvents:
                  - commandStartedEvent
            - database:
                id: &database database
                client: *client
                databaseName: *databaseName
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            # Use "times: 2" to workaround a quirk in Python on Windows where
            # socket I/O can timeout ~20ms earlier than expected. With
            # "times: 1" the retry would succeed within the remaining ~20ms.
            mode: { times: 2 }
            data:
              failCommands: ["insert"]
              blockConnection: true
              blockTimeMS: 350
      - name: insertOne
        object: *collection
        arguments:
          document: { x: 1 }
        expectError:
          isTimeoutError: true
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: insert
              databaseName: *databaseName
              command:
                insert: *collectionName
                maxTimeMS: { $$type: ["int", "long"] }

  - description: "timeoutMS can be set to 0 on a MongoClient - insertOne on collection"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - client:
                id: &client client
                uriOptions:
                  timeoutMS: 0
                useMultipleMongoses: false
                observeEvents:
                  - commandStartedEvent
                ignoreCommandMonitoringEvents:
                  - killCursors
            - database:
                id: &database database
                client: *client
                databaseName: *databaseName
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["insert"]
              blockConnection: true
              blockTimeMS: 15
      - name: insertOne
        object: *collection
        arguments:
          document: { x: 1 }
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: insert
              databaseName: *databaseName
              command:
                insert: *collectionName
                maxTimeMS: { $$exists: false }

  - description: "timeoutMS can be configured on a MongoClient - insertMany on collection"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - client:
                id: &client client
                uriOptions:
                  timeoutMS: 250
                useMultipleMongoses: false
                observeEvents:
                  - commandStartedEvent
            - database:
                id: &database database
                client: *client
                databaseName: *databaseName
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            # Use "times: 2" to workaround a quirk in Python on Windows where
            # socket I/O can timeout ~20ms earlier than expected. With
            # "times: 1" the retry would succeed within the remaining ~20ms.
            mode: { times: 2 }
            data:
              failCommands: ["insert"]
              blockConnection: true
              blockTimeMS: 350
      - name: insertMany
        object: *collection
        arguments:
          documents:
            - { x: 1 }
        expectError:
          isTimeoutError: true
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: insert
              databaseName: *databaseName
              command:
                insert: *collectionName
                maxTimeMS: { $$type: ["int", "long"] }

  - description: "timeoutMS can be set to 0 on a MongoClient - insertMany on collection"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - client:
                id: &client client
                uriOptions:
                  timeoutMS: 0
                useMultipleMongoses: false
                observeEvents:
                  - commandStartedEvent
                ignoreCommandMonitoringEvents:
                  - killCursors
            - database:
                id: &database database
                client: *client
                databaseName: *databaseName
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["insert"]
              blockConnection: true
              blockTimeMS: 15
      - name: insertMany
        object: *collection
        arguments:
          documents:
            - { x: 1 }
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: insert
              databaseName: *databaseName
              command:
                insert: *collectionName
                maxTimeMS: { $$exists: false }

  - description: "timeoutMS can be configured on a MongoClient - deleteOne on collection"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - client:
                id: &client client
                uriOptions:
                  timeoutMS: 250
                useMultipleMongoses: false
                observeEvents:
                  - commandStartedEvent
            - database:
                id: &database database
                client: *client
                databaseName: *databaseName
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            # Use "times: 2" to workaround a quirk in Python on Windows where
            # socket I/O can timeout ~20ms earlier than expected. With
            # "times: 1" the retry would succeed within the remaining ~20ms.
            mode: { times: 2 }
            data:
              failCommands: ["delete"]
              blockConnection: true
              blockTimeMS: 350
      - name: deleteOne
        object: *collection
        arguments:
          filter: {}
        expectError:
          isTimeoutError: true
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: delete
              databaseName: *databaseName
              command:
                delete: *collectionName
                maxTimeMS: { $$type: ["int", "long"] }

  - description: "timeoutMS can be set to 0 on a MongoClient - deleteOne on collection"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - client:
                id: &client client
                uriOptions:
                  timeoutMS: 0
                useMultipleMongoses: false
                observeEvents:
                  - commandStartedEvent
                ignoreCommandMonitoringEvents:
                  - killCursors
            - database:
                id: &database database
                client: *client
                databaseName: *databaseName
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["delete"]
              blockConnection: true
              blockTimeMS: 15
      - name: deleteOne
        object: *collection
        arguments:
          filter: {}
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: delete
              databaseName: *databaseName
              command:
                delete: *collectionName
                maxTimeMS: { $$exists: false }

  - description: "timeoutMS can be configured on a MongoClient - deleteMany on collection"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - client:
                id: &client client
                uriOptions:
                  timeoutMS: 250
                useMultipleMongoses: false
                observeEvents:
                  - commandStartedEvent
            - database:
                id: &database database
                client: *client
                databaseName: *databaseName
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            # Use "times: 2" to workaround a quirk in Python on Windows where
            # socket I/O can timeout ~20ms earlier than expected. With
            # "times: 1" the retry would succeed within the remaining ~20ms.
            mode: { times: 2 }
            data:
              failCommands: ["delete"]
              blockConnection: true
              blockTimeMS: 350
      - name: deleteMany
        object: *collection
        arguments:
          filter: {}
        expectError:
          isTimeoutError: true
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: delete
              databaseName: *databaseName
              command:
                delete: *collectionName
                maxTimeMS: { $$type: ["int", "long"] }

  - description: "timeoutMS can be set to 0 on a MongoClient - deleteMany on collection"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - client:
                id: &client client
                uriOptions:
                  timeoutMS: 0
                useMultipleMongoses: false
                observeEvents:
                  - commandStartedEvent
                ignoreCommandMonitoringEvents:
                  - killCursors
            - database:
                id: &database database
                client: *client
                databaseName: *databaseName
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["delete"]
              blockConnection: true
              blockTimeMS: 15
      - name: deleteMany
        object: *collection
        arguments:
          filter: {}
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: delete
              databaseName: *databaseName
              command:
                delete: *collectionName
                maxTimeMS: { $$exists: false }

  - description: "timeoutMS can be configured on a MongoClient - replaceOne on collection"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - client:
                id: &client client
                uriOptions:
                  timeoutMS: 250
                useMultipleMongoses: false
                observeEvents:
                  - commandStartedEvent
            - database:
                id: &database database
                client: *client
                databaseName: *databaseName
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            # Use "times: 2" to workaround a quirk in Python on Windows where
            # socket I/O can timeout ~20ms earlier than expected. With
            # "times: 1" the retry would succeed within the remaining ~20ms.
            mode: { times: 2 }
            data:
              failCommands: ["update"]
              blockConnection: true
              blockTimeMS: 350
      - name: replaceOne
        object: *collection
        arguments:
          filter: {}
          replacement: { x: 1 }
        expectError:
          isTimeoutError: true
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: update
              databaseName: *databaseName
              command:
                update: *collectionName
                maxTimeMS: { $$type: ["int", "long"] }

  - description: "timeoutMS can be set to 0 on a MongoClient - replaceOne on collection"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - client:
                id: &client client
                uriOptions:
                  timeoutMS: 0
                useMultipleMongoses: false
                observeEvents:
                  - commandStartedEvent
                ignoreCommandMonitoringEvents:
                  - killCursors
            - database:
                id: &database database
                client: *client
                databaseName: *databaseName
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["update"]
              blockConnection: true
              blockTimeMS: 15
      - name: replaceOne
        object: *collection
        arguments:
          filter: {}
          replacement: { x: 1 }
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: update
              databaseName: *databaseName
              command:
                update: *collectionName
                maxTimeMS: { $$exists: false }

  - description: "timeoutMS can be configured on a MongoClient - updateOne on collection"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - client:
                id: &client client
                uriOptions:
                  timeoutMS: 250
                useMultipleMongoses: false
                observeEvents:
                  - commandStartedEvent
            - database:
                id: &database database
                client: *client
                databaseName: *databaseName
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            # Use "times: 2" to workaround a quirk in Python on Windows where
            # socket I/O can timeout ~20ms earlier than expected. With
            # "times: 1" the retry would succeed within the remaining ~20ms.
            mode: { times: 2 }
            data:
              failCommands: ["update"]
              blockConnection: true
              blockTimeMS: 350
      - name: updateOne
        object: *collection
        arguments:
          filter: {}
          update: { $set: { x: 1 } }
        expectError:
          isTimeoutError: true
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: update
              databaseName: *databaseName
              command:
                update: *collectionName
                maxTimeMS: { $$type: ["int", "long"] }

  - description: "timeoutMS can be set to 0 on a MongoClient - updateOne on collection"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - client:
                id: &client client
                uriOptions:
                  timeoutMS: 0
                useMultipleMongoses: false
                observeEvents:
                  - commandStartedEvent
                ignoreCommandMonitoringEvents:
                  - killCursors
            - database:
                id: &database database
                client: *client
                databaseName: *databaseName
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["update"]
              blockConnection: true
              blockTimeMS: 15
      - name: updateOne
        object: *collection
        arguments:
          filter: {}
          update: { $set: { x: 1 } }
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: update
              databaseName: *databaseName
              command:
                update: *collectionName
                maxTimeMS: { $$exists: false }

  - description: "timeoutMS can be configured on a MongoClient - updateMany on collection"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - client:
                id: &client client
                uriOptions:
                  timeoutMS: 250
                useMultipleMongoses: false
                observeEvents:
                  - commandStartedEvent
            - database:
                id: &database database
                client: *client
                databaseName: *databaseName
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            # Use "times: 2" to workaround a quirk in Python on Windows where
            # socket I/O can timeout ~20ms earlier than expected. With
            # "times: 1" the retry would succeed within the remaining ~20ms.
            mode: { times: 2 }
            data:
              failCommands: ["update"]
              blockConnection: true
              blockTimeMS: 350
      - name: updateMany
        object: *collection
        arguments:
          filter: {}
          update: { $set: { x: 1 } }
        expectError:
          isTimeoutError: true
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: update
              databaseName: *databaseName
              command:
                update: *collectionName
                maxTimeMS: { $$type: ["int", "long"] }

  - description: "timeoutMS can be set to 0 on a MongoClient - updateMany on collection"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - client:
                id: &client client
                uriOptions:
                  timeoutMS: 0
                useMultipleMongoses: false
                observeEvents:
                  - commandStartedEvent
                ignoreCommandMonitoringEvents:
                  - killCursors
            - database:
                id: &database database
                client: *client
                databaseName: *databaseName
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["update"]
              blockConnection: true
              blockTimeMS: 15
      - name: updateMany
        object: *collection
        arguments:
          filter: {}
          update: { $set: { x: 1 } }
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: update
              databaseName: *databaseName
              command:
                update: *collectionName
                maxTimeMS: { $$exists: false }

  - description: "timeoutMS can be configured on a MongoClient - findOneAndDelete on collection"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - client:
                id: &client client
                uriOptions:
                  timeoutMS: 250
                useMultipleMongoses: false
                observeEvents:
                  - commandStartedEvent
            - database:
                id: &database database
                client: *client
                databaseName: *databaseName
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            # Use "times: 2" to workaround a quirk in Python on Windows where
            # socket I/O can timeout ~20ms earlier than expected. With
            # "times: 1" the retry would succeed within the remaining ~20ms.
            mode: { times: 2 }
            data:
              failCommands: ["findAndModify"]
              blockConnection: true
              blockTimeMS: 350
      - name: findOneAndDelete
        object: *collection
        arguments:
          filter: {}
        expectError:
          isTimeoutError: true
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: findAndModify
              databaseName: *databaseName
              command:
                findAndModify: *collectionName
                maxTimeMS: { $$type: ["int", "long"] }

  - description: "timeoutMS can be set to 0 on a MongoClient - findOneAndDelete on collection"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - client:
                id: &client client
                uriOptions:
                  timeoutMS: 0
                useMultipleMongoses: false
                observeEvents:
                  - commandStartedEvent
                ignoreCommandMonitoringEvents:
                  - killCursors
            - database:
                id: &database database
                client: *client
                databaseName: *databaseName
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["findAndModify"]
              blockConnection: true
              blockTimeMS: 15
      - name: findOneAndDelete
        object: *collection
        arguments:
          filter: {}
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: findAndModify
              databaseName: *databaseName
              command:
                findAndModify: *collectionName
                maxTimeMS: { $$exists: false }

  - description: "timeoutMS can be configured on a MongoClient - findOneAndReplace on collection"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - client:
                id: &client client
                uriOptions:
                  timeoutMS: 250
                useMultipleMongoses: false
                observeEvents:
                  - commandStartedEvent
            - database:
                id: &database database
                client: *client
                databaseName: *databaseName
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            # Use "times: 2" to workaround a quirk in Python on Windows where
            # socket I/O can timeout ~20ms earlier than expected. With
            # "times: 1" the retry would succeed within the remaining ~20ms.
            mode: { times: 2 }
            data:
              failCommands: ["findAndModify"]
              blockConnection: true
              blockTimeMS: 350
      - name: findOneAndReplace
        object: *collection
        arguments:
          filter: {}
          replacement: { x: 1 }
        expectError:
          isTimeoutError: true
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: findAndModify
              databaseName: *databaseName
              command:
                findAndModify: *collectionName
                maxTimeMS: { $$type: ["int", "long"] }

  - description: "timeoutMS can be set to 0 on a MongoClient - findOneAndReplace on collection"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - client:
                id: &client client
                uriOptions:
                  timeoutMS: 0
                useMultipleMongoses: false
                observeEvents:
                  - commandStartedEvent
                ignoreCommandMonitoringEvents:
                  - killCursors
            - database:
                id: &database database
                client: *client
                databaseName: *databaseName
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["findAndModify"]
              blockConnection: true
              blockTimeMS: 15
      - name: findOneAndReplace
        object: *collection
        arguments:
          filter: {}
          replacement: { x: 1 }
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: findAndModify
              databaseName: *databaseName
              command:
                findAndModify: *collectionName
                maxTimeMS: { $$exists: false }

  - description: "timeoutMS can be configured on a MongoClient - findOneAndUpdate on collection"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - client:
                id: &client client
                uriOptions:
                  timeoutMS: 250
                useMultipleMongoses: false
                observeEvents:
                  - commandStartedEvent
            - database:
                id: &database database
                client: *client
                databaseName: *databaseName
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            # Use "times: 2" to workaround a quirk in Python on Windows where
            # socket I/O can timeout ~20ms earlier than expected. With
            # "times: 1" the retry would succeed within the remaining ~20ms.
            mode: { times: 2 }
            data:
              failCommands: ["findAndModify"]
              blockConnection: true
              blockTimeMS: 350
      - name: findOneAndUpdate
        object: *collection
        arguments:
          filter: {}
          update: { $set: { x: 1 } }
        expectError:
          isTimeoutError: true
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: findAndModify
              databaseName: *databaseName
              command:
                findAndModify: *collectionName
                maxTimeMS: { $$type: ["int", "long"] }

  - description: "timeoutMS can be set to 0 on a MongoClient - findOneAndUpdate on collection"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - client:
                id: &client client
                uriOptions:
                  timeoutMS: 0
                useMultipleMongoses: false
                observeEvents:
                  - commandStartedEvent
                ignoreCommandMonitoringEvents:
                  - killCursors
            - database:
                id: &database database
                client: *client
                databaseName: *databaseName
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["findAndModify"]
              blockConnection: true
              blockTimeMS: 15
      - name: findOneAndUpdate
        object: *collection
        arguments:
          filter: {}
          update: { $set: { x: 1 } }
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: findAndModify
              databaseName: *databaseName
              command:
                findAndModify: *collectionName
                maxTimeMS: { $$exists: false }

  - description: "timeoutMS can be configured on a MongoClient - bulkWrite on collection"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - client:
                id: &client client
                uriOptions:
                  timeoutMS: 250
                useMultipleMongoses: false
                observeEvents:
                  - commandStartedEvent
            - database:
                id: &database database
                client: *client
                databaseName: *databaseName
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            # Use "times: 2" to workaround a quirk in Python on Windows where
            # socket I/O can timeout ~20ms earlier than expected. With
            # "times: 1" the retry would succeed within the remaining ~20ms.
            mode: { times: 2 }
            data:
              failCommands: ["insert"]
              blockConnection: true
              blockTimeMS: 350
      - name: bulkWrite
        object: *collection
        arguments:
          requests:
            - insertOne:
                document: { _id: 1 }
        expectError:
          isTimeoutError: true
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: insert
              databaseName: *databaseName
              command:
                insert: *collectionName
                maxTimeMS: { $$type: ["int", "long"] }

  - description: "timeoutMS can be set to 0 on a MongoClient - bulkWrite on collection"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - client:
                id: &client client
                uriOptions:
                  timeoutMS: 0
                useMultipleMongoses: false
                observeEvents:
                  - commandStartedEvent
                ignoreCommandMonitoringEvents:
                  - killCursors
            - database:
                id: &database database
                client: *client
                databaseName: *databaseName
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["insert"]
              blockConnection: true
              blockTimeMS: 15
      - name: bulkWrite
        object: *collection
        arguments:
          requests:
            - insertOne:
                document: { _id: 1 }
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: insert
              databaseName: *databaseName
              command:
                insert: *collectionName
                maxTimeMS: { $$exists: false }

  - description: "timeoutMS can be configured on a MongoClient - createIndex on collection"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - client:
                id: &client client
                uriOptions:
                  timeoutMS: 250
                useMultipleMongoses: false
                observeEvents:
                  - commandStartedEvent
            - database:
                id: &database database
                client: *client
                databaseName: *databaseName
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            # Use "times: 2" to workaround a quirk in Python on Windows where
            # socket I/O can timeout ~20ms earlier than expected. With
            # "times: 1" the retry would succeed within the remaining ~20ms.
mode: { times: 2 } data: failCommands: ["createIndexes"] blockConnection: true blockTimeMS: 350 - name: createIndex object: *collection arguments: keys: { x: 1 } name: "x_1" expectError: isTimeoutError: true expectEvents: - client: *client events: - commandStartedEvent: commandName: createIndexes databaseName: *databaseName command: createIndexes: *collectionName maxTimeMS: { $$type: ["int", "long"] } - description: "timeoutMS can be set to 0 on a MongoClient - createIndex on collection" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client uriOptions: timeoutMS: 0 useMultipleMongoses: false observeEvents: - commandStartedEvent ignoreCommandMonitoringEvents: - killCursors - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["createIndexes"] blockConnection: true blockTimeMS: 15 - name: createIndex object: *collection arguments: keys: { x: 1 } name: "x_1" expectEvents: - client: *client events: - commandStartedEvent: commandName: createIndexes databaseName: *databaseName command: createIndexes: *collectionName maxTimeMS: { $$exists: false } - description: "timeoutMS can be configured on a MongoClient - dropIndex on collection" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client uriOptions: timeoutMS: 250 useMultipleMongoses: false observeEvents: - commandStartedEvent - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand # Use "times: 2" to workaround a quirk in Python on Windows where # socket I/O can timeout ~20ms earlier than expected. With # "times: 1" the retry would succeed within the remaining ~20ms. 
mode: { times: 2 } data: failCommands: ["dropIndexes"] blockConnection: true blockTimeMS: 350 - name: dropIndex object: *collection arguments: name: "x_1" expectError: isTimeoutError: true expectEvents: - client: *client events: - commandStartedEvent: commandName: dropIndexes databaseName: *databaseName command: dropIndexes: *collectionName maxTimeMS: { $$type: ["int", "long"] } - description: "timeoutMS can be set to 0 on a MongoClient - dropIndex on collection" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client uriOptions: timeoutMS: 0 useMultipleMongoses: false observeEvents: - commandStartedEvent ignoreCommandMonitoringEvents: - killCursors - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["dropIndexes"] blockConnection: true blockTimeMS: 15 - name: dropIndex object: *collection arguments: name: "x_1" expectError: isClientError: false isTimeoutError: false expectEvents: - client: *client events: - commandStartedEvent: commandName: dropIndexes databaseName: *databaseName command: dropIndexes: *collectionName maxTimeMS: { $$exists: false } - description: "timeoutMS can be configured on a MongoClient - dropIndexes on collection" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client uriOptions: timeoutMS: 250 useMultipleMongoses: false observeEvents: - commandStartedEvent - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand # Use "times: 2" to workaround a quirk in Python on Windows where # socket I/O can timeout ~20ms earlier than expected. With # "times: 1" the retry would succeed within the remaining ~20ms. 
mode: { times: 2 } data: failCommands: ["dropIndexes"] blockConnection: true blockTimeMS: 350 - name: dropIndexes object: *collection expectError: isTimeoutError: true expectEvents: - client: *client events: - commandStartedEvent: commandName: dropIndexes databaseName: *databaseName command: dropIndexes: *collectionName maxTimeMS: { $$type: ["int", "long"] } - description: "timeoutMS can be set to 0 on a MongoClient - dropIndexes on collection" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client uriOptions: timeoutMS: 0 useMultipleMongoses: false observeEvents: - commandStartedEvent ignoreCommandMonitoringEvents: - killCursors - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["dropIndexes"] blockConnection: true blockTimeMS: 15 - name: dropIndexes object: *collection expectEvents: - client: *client events: - commandStartedEvent: commandName: dropIndexes databaseName: *databaseName command: dropIndexes: *collectionName maxTimeMS: { $$exists: false } mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_operations_timeout/gridfs-advanced.yml000066400000000000000000000144461505113246500332570ustar00rootroot00000000000000description: "timeoutMS behaves correctly for advanced GridFS API operations" schemaVersion: "1.9" runOnRequirements: - minServerVersion: "4.4" serverless: forbid # GridFS ops can be slow on serverless. createEntities: - client: id: &failPointClient failPointClient useMultipleMongoses: false - client: id: &client client uriOptions: timeoutMS: 75 useMultipleMongoses: false observeEvents: - commandStartedEvent - database: id: &database database client: *client databaseName: &databaseName test - bucket: id: &bucket bucket database: *database - collection: id: &filesCollection filesCollection database: *database collectionName: &filesCollectionName fs.files - collection: id: &chunksCollection chunksCollection database: *database collectionName: &chunksCollectionName fs.chunks initialData: - collectionName: *filesCollectionName databaseName: *databaseName documents: - _id: &fileDocumentId { $oid: "000000000000000000000005" } length: 8 chunkSize: 4 uploadDate: { $date: "1970-01-01T00:00:00.000Z" } filename: "length-8" contentType: "application/octet-stream" aliases: [] metadata: {} - collectionName: *chunksCollectionName databaseName: *databaseName documents: - _id: { $oid: "000000000000000000000005" } files_id: *fileDocumentId n: 0 data: { $binary: { base64: "ESIzRA==", subType: "00" } } # hex: 11223344 - _id: { $oid: "000000000000000000000006" } files_id: *fileDocumentId n: 1 data: { $binary: { base64: "ESIzRA==", subType: "00" } } # hex: 11223344 tests: # Tests for the "rename" operation. # Ruby driver does not support rename for GridFS bucket # - description: "timeoutMS can be overridden for a rename" # operations: # - name: failPoint # object: testRunner # arguments: # client: *failPointClient # failPoint: # configureFailPoint: failCommand # mode: { times: 1 } # data: # failCommands: ["update"] # blockConnection: true # blockTimeMS: 100 # - name: rename # object: *bucket # arguments: # id: *fileDocumentId # newFilename: "foo" # timeoutMS: 2000 # The client timeoutMS is 75ms and the operation blocks for 100ms, so 2000ms should let it succeed. 
# expectEvents: # - client: *client # events: # - commandStartedEvent: # commandName: update # databaseName: *databaseName # command: # update: *filesCollectionName # maxTimeMS: { $$type: ["int", "long"] } # - description: "timeoutMS applied to update during a rename" # operations: # - name: failPoint # object: testRunner # arguments: # client: *failPointClient # failPoint: # configureFailPoint: failCommand # mode: { times: 1 } # data: # failCommands: ["update"] # blockConnection: true # blockTimeMS: 100 # - name: rename # object: *bucket # arguments: # id: *fileDocumentId # newFilename: "foo" # expectError: # isTimeoutError: true # expectEvents: # - client: *client # events: # - commandStartedEvent: # commandName: update # databaseName: *databaseName # command: # update: *filesCollectionName # maxTimeMS: { $$type: ["int", "long"] } # Tests for the "drop" operation. Any tests that might result in multiple commands being sent do not have expectEvents # assertions as these assertions reduce test robustness and can cause flaky failures. - description: "timeoutMS can be overridden for drop" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["drop"] blockConnection: true blockTimeMS: 100 - name: drop object: *bucket arguments: timeoutMS: 2000 # The client timeoutMS is 75ms and the operation blocks for 100ms, so 2000ms should let it succeed. - description: "timeoutMS applied to files collection drop" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["drop"] blockConnection: true blockTimeMS: 100 - name: drop object: *bucket expectError: isTimeoutError: true expectEvents: - client: *client events: - commandStartedEvent: commandName: drop databaseName: *databaseName command: drop: *filesCollectionName maxTimeMS: { $$type: ["int", "long"] } - description: "timeoutMS applied to chunks collection drop" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: # Skip the drop for the files collection. skip: 1 data: failCommands: ["drop"] blockConnection: true blockTimeMS: 100 - name: drop object: *bucket expectError: isTimeoutError: true - description: "timeoutMS applied to drop as a whole, not individual parts" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 2 } data: failCommands: ["drop"] blockConnection: true blockTimeMS: 50 - name: drop object: *bucket expectError: isTimeoutError: true mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_operations_timeout/gridfs-delete.yml000066400000000000000000000111711505113246500327440ustar00rootroot00000000000000description: "timeoutMS behaves correctly for GridFS delete operations" schemaVersion: "1.9" runOnRequirements: - minServerVersion: "4.4" serverless: forbid # GridFS ops can be slow on serverless. 
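# For orientation: a GridFS delete is not a single command. The driver first
# deletes the file's document from fs.files and then its chunks from
# fs.chunks, and under CSOT both commands must fit into one shared timeout
# budget. A minimal sketch of the equivalent driver call (assuming a Ruby
# driver release with CSOT support, where :timeout_ms is accepted as a
# client option):
#
#   client = Mongo::Client.new(['localhost:27017'], database: 'test', timeout_ms: 75)
#   bucket = client.database.fs
#   bucket.delete(file_id)  # delete on fs.files, then delete on fs.chunks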
createEntities: - client: id: &failPointClient failPointClient useMultipleMongoses: false - client: id: &client client uriOptions: timeoutMS: 75 useMultipleMongoses: false observeEvents: - commandStartedEvent - database: id: &database database client: *client databaseName: &databaseName test - bucket: id: &bucket bucket database: *database - collection: id: &filesCollection filesCollection database: *database collectionName: &filesCollectionName fs.files - collection: id: &chunksCollection chunksCollection database: *database collectionName: &chunksCollectionName fs.chunks initialData: - collectionName: *filesCollectionName databaseName: *databaseName documents: - _id: &fileDocumentId { $oid: "000000000000000000000005" } length: 8 chunkSize: 4 uploadDate: { $date: "1970-01-01T00:00:00.000Z" } filename: "length-8" contentType: "application/octet-stream" aliases: [] metadata: {} - collectionName: *chunksCollectionName databaseName: *databaseName documents: - _id: { $oid: "000000000000000000000005" } files_id: *fileDocumentId n: 0 data: { $binary: { base64: "ESIzRA==", subType: "00" } } # hex: 11223344 - _id: { $oid: "000000000000000000000006" } files_id: *fileDocumentId n: 1 data: { $binary: { base64: "ESIzRA==", subType: "00" } } # hex: 11223344 tests: - description: "timeoutMS can be overridden for delete" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["delete"] blockConnection: true blockTimeMS: 100 - name: delete object: *bucket arguments: id: *fileDocumentId timeoutMS: 1000 # The client timeoutMS is 75ms and the operation blocks for 100ms, so 1000ms should let it succeed. - description: "timeoutMS applied to delete against the files collection" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["delete"] blockConnection: true blockTimeMS: 100 - name: delete object: *bucket arguments: id: *fileDocumentId expectError: isTimeoutError: true expectEvents: - client: *client events: - commandStartedEvent: commandName: delete databaseName: *databaseName command: delete: *filesCollectionName maxTimeMS: { $$type: ["int", "long"] } - description: "timeoutMS applied to delete against the chunks collection" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: # The first "delete" will be against the files collection, so we skip it. skip: 1 data: failCommands: ["delete"] blockConnection: true blockTimeMS: 100 - name: delete object: *bucket arguments: id: *fileDocumentId expectError: isTimeoutError: true # Test that drivers are not refreshing the timeout between commands. We test this by blocking both "delete" commands # for 50ms each. The delete should inherit timeoutMS=75 from the client/database and the server takes over 75ms # total, so the operation should fail. 
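# Worked out with the numbers used below: the shared budget is timeoutMS=75
# and the fail point holds each of the two "delete" commands for 50ms, so the
# operation needs at least 50 + 50 = 100ms. Because the budget is not
# refreshed between the files and chunks deletes, 100ms > 75ms and the whole
# operation must fail with a timeout error.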
- description: "timeoutMS applied to entire delete, not individual parts" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 2 } data: failCommands: ["delete"] blockConnection: true blockTimeMS: 50 - name: delete object: *bucket arguments: id: *fileDocumentId expectError: isTimeoutError: true mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_operations_timeout/gridfs-download.yml000066400000000000000000000131441505113246500333130ustar00rootroot00000000000000description: "timeoutMS behaves correctly for GridFS download operations" schemaVersion: "1.9" runOnRequirements: - minServerVersion: "4.4" serverless: forbid # GridFS ops can be slow on serverless. createEntities: - client: id: &failPointClient failPointClient useMultipleMongoses: false - client: id: &client client uriOptions: timeoutMS: 75 useMultipleMongoses: false observeEvents: - commandStartedEvent - database: id: &database database client: *client databaseName: &databaseName test - bucket: id: &bucket bucket database: *database - collection: id: &filesCollection filesCollection database: *database collectionName: &filesCollectionName fs.files - collection: id: &chunksCollection chunksCollection database: *database collectionName: &chunksCollectionName fs.chunks initialData: - collectionName: *filesCollectionName databaseName: *databaseName documents: - _id: &fileDocumentId { $oid: "000000000000000000000005" } length: 8 chunkSize: 4 uploadDate: { $date: "1970-01-01T00:00:00.000Z" } filename: "length-8" contentType: "application/octet-stream" aliases: [] metadata: {} - collectionName: *chunksCollectionName databaseName: *databaseName documents: - _id: { $oid: "000000000000000000000005" } files_id: *fileDocumentId n: 0 data: { $binary: { base64: "ESIzRA==", subType: "00" } } # hex: 11223344 - _id: { $oid: "000000000000000000000006" } files_id: *fileDocumentId n: 1 data: { $binary: { base64: "ESIzRA==", subType: "00" } } # hex: 11223344 tests: - description: "timeoutMS can be overridden for download" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["find"] blockConnection: true blockTimeMS: 100 - name: download object: *bucket arguments: id: *fileDocumentId timeoutMS: 1000 # The client timeoutMS is 75ms and the operation blocks for 100ms, so 1000ms should let it succeed. - description: "timeoutMS applied to find to get files document" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["find"] blockConnection: true blockTimeMS: 100 - name: download object: *bucket arguments: id: *fileDocumentId expectError: isTimeoutError: true expectEvents: - client: *client events: - commandStartedEvent: commandName: find databaseName: *databaseName command: find: *filesCollectionName maxTimeMS: { $$type: ["int", "long"] } - description: "timeoutMS applied to find to get chunks" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: # The first "find" will be against the files collection, so we skip it. 
skip: 1 data: failCommands: ["find"] blockConnection: true blockTimeMS: 100 - name: download object: *bucket arguments: id: *fileDocumentId expectError: isTimeoutError: true expectEvents: - client: *client events: - commandStartedEvent: commandName: find databaseName: *databaseName command: find: *filesCollectionName maxTimeMS: { $$type: ["int", "long"] } - commandStartedEvent: commandName: find databaseName: *databaseName command: find: *chunksCollectionName maxTimeMS: { $$type: ["int", "long"] } # Test that drivers are not refreshing the timeout between commands. We test this by blocking both "find" commands # for 50ms each. The download should inherit timeoutMS=75 from the client/database and the server takes over 75ms # total, so the operation should fail. - description: "timeoutMS applied to entire download, not individual parts" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 2 } data: failCommands: ["find"] blockConnection: true blockTimeMS: 50 - name: download object: *bucket arguments: id: *fileDocumentId expectError: isTimeoutError: true expectEvents: - client: *client events: - commandStartedEvent: commandName: find databaseName: *databaseName command: find: *filesCollectionName maxTimeMS: { $$type: ["int", "long"] } - commandStartedEvent: commandName: find databaseName: *databaseName command: find: *chunksCollectionName maxTimeMS: { $$type: ["int", "long"] } mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_operations_timeout/gridfs-find.yml000066400000000000000000000054121505113246500324230ustar00rootroot00000000000000description: "timeoutMS behaves correctly for GridFS find operations" schemaVersion: "1.9" runOnRequirements: - minServerVersion: "4.4" serverless: forbid # GridFS ops can be slow on serverless. createEntities: - client: id: &failPointClient failPointClient useMultipleMongoses: false - client: id: &client client uriOptions: timeoutMS: 75 useMultipleMongoses: false observeEvents: - commandStartedEvent - database: id: &database database client: *client databaseName: &databaseName test - bucket: id: &bucket bucket database: *database - collection: id: &filesCollection filesCollection database: *database collectionName: &filesCollectionName fs.files - collection: id: &chunksCollection chunksCollection database: *database collectionName: &chunksCollectionName fs.chunks initialData: - collectionName: *filesCollectionName databaseName: *databaseName documents: [] - collectionName: *chunksCollectionName databaseName: *databaseName documents: [] tests: - description: "timeoutMS can be overridden for a find" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["find"] blockConnection: true blockTimeMS: 100 - name: find object: *bucket arguments: filter: {} timeoutMS: 1000 # The client timeoutMS is 75ms and the operation blocks for 100ms, so 1000ms should let it succeed. 
expectEvents: - client: *client events: - commandStartedEvent: commandName: find databaseName: *databaseName command: find: *filesCollectionName maxTimeMS: { $$type: ["int", "long"] } - description: "timeoutMS applied to find command" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["find"] blockConnection: true blockTimeMS: 100 - name: find object: *bucket arguments: filter: {} expectError: isTimeoutError: true expectEvents: - client: *client events: - commandStartedEvent: commandName: find databaseName: *databaseName command: find: *filesCollectionName maxTimeMS: { $$type: ["int", "long"] } mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_operations_timeout/gridfs-upload.yml000066400000000000000000000177011505113246500327730ustar00rootroot00000000000000description: "timeoutMS behaves correctly for GridFS upload operations" schemaVersion: "1.9" runOnRequirements: - minServerVersion: "4.4" serverless: forbid # GridFS ops can be slow on serverless. createEntities: - client: id: &failPointClient failPointClient useMultipleMongoses: false - client: id: &client client uriOptions: timeoutMS: 75 useMultipleMongoses: false - database: id: &database database client: *client databaseName: &databaseName test - bucket: id: &bucket bucket database: *database - collection: id: &filesCollection filesCollection database: *database collectionName: &filesCollectionName fs.files - collection: id: &chunksCollection chunksCollection database: *database collectionName: &chunksCollectionName fs.chunks initialData: - collectionName: *filesCollectionName databaseName: *databaseName documents: [] - collectionName: *chunksCollectionName databaseName: *databaseName documents: [] tests: # Many tests in this file do not specify command monitoring expectations because GridFS uploads internally do a # number of operations, so expecting an exact set of commands can cause flaky failures. - description: "timeoutMS can be overridden for upload" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["find"] blockConnection: true blockTimeMS: 100 - name: upload object: *bucket arguments: filename: filename source: { $$hexBytes: "1122334455" } timeoutMS: 1000 # On the first write to the bucket, drivers check if the files collection is empty to see if indexes need to be # created. - description: "timeoutMS applied to initial find on files collection" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["find"] blockConnection: true blockTimeMS: 100 - name: upload object: *bucket arguments: filename: filename source: { $$hexBytes: "1122334455" } expectError: isTimeoutError: true # On the first write to the bucket, drivers check if the files collection has the correct indexes. 
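# As a reference for the listIndexes/createIndexes tests below, the indexes a
# driver checks for (and creates when the bucket is empty) are the ones
# required by the GridFS spec:
#
#   fs.files:  { filename: 1, uploadDate: 1 }
#   fs.chunks: { files_id: 1, n: 1 }  (unique)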
- description: "timeoutMS applied to listIndexes on files collection" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["listIndexes"] blockConnection: true blockTimeMS: 100 - name: upload object: *bucket arguments: filename: filename source: { $$hexBytes: "1122334455" } expectError: isTimeoutError: true # If the files collection is empty when the first write to the bucket occurs, drivers attempt to create an index # on the bucket's files collection. - description: "timeoutMS applied to index creation for files collection" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["createIndexes"] blockConnection: true blockTimeMS: 100 - name: upload object: *bucket arguments: filename: filename source: { $$hexBytes: "1122334455" } expectError: isTimeoutError: true # On the first write to the bucket, drivers check if the chunks collection has the correct indexes. - description: "timeoutMS applied to listIndexes on chunks collection" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand # The first listIndexes will be on the files collection, so we skip it. mode: { skip: 1 } data: failCommands: ["listIndexes"] blockConnection: true blockTimeMS: 100 - name: upload object: *bucket arguments: filename: filename source: { $$hexBytes: "1122334455" } expectError: isTimeoutError: true # If the files collection is empty when the first write to the bucket occurs, drivers attempt to create an index # on the bucket's chunks collection. - description: "timeoutMS applied to index creation for chunks collection" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand # This index is created after the one on the files collection, so we skip the first createIndexes command # and target the second. mode: { skip: 1 } data: failCommands: ["createIndexes"] blockConnection: true blockTimeMS: 100 - name: upload object: *bucket arguments: filename: filename source: { $$hexBytes: "1122334455" } expectError: isTimeoutError: true - description: "timeoutMS applied to chunk insertion" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["insert"] blockConnection: true blockTimeMS: 100 - name: upload object: *bucket arguments: filename: filename source: { $$hexBytes: "1122334455" } expectError: isTimeoutError: true - description: "timeoutMS applied to creation of files document" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand # Skip the insert to upload the chunk. Because the whole file fits into one chunk, the second insert will # be the files document upload. mode: { skip: 1 } data: failCommands: ["insert"] blockConnection: true blockTimeMS: 100 - name: upload object: *bucket arguments: filename: filename source: { $$hexBytes: "1122334455" } expectError: isTimeoutError: true # Test that drivers apply timeoutMS to the entire upload rather than refreshing it between individual commands. We # test this by blocking the "find" and "listIndexes" commands for 50ms each and performing an upload. 
The upload # should inherit timeoutMS=75 from the client/database and the server takes over 75ms total, so the operation should # fail. - description: "timeoutMS applied to upload as a whole, not individual parts" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 2 } data: failCommands: ["find", "listIndexes"] blockConnection: true blockTimeMS: 50 - name: upload object: *bucket arguments: filename: filename source: { $$hexBytes: "1122334455" } expectError: isTimeoutError: true mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_operations_timeout/legacy-timeouts.yml000066400000000000000000000147611505113246500333510ustar00rootroot00000000000000description: "legacy timeouts continue to work if timeoutMS is not set" schemaVersion: "1.0" runOnRequirements: - minServerVersion: "4.4" initialData: - collectionName: &collectionName coll databaseName: &databaseName test documents: [] tests: - description: "socketTimeoutMS is not used to derive a maxTimeMS command field" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client observeEvents: - commandStartedEvent uriOptions: socketTimeoutMS: 50000 - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - name: insertOne object: *collection arguments: document: { x: 1 } expectEvents: - client: *client events: - commandStartedEvent: commandName: insert databaseName: *databaseName command: insert: *collectionName maxTimeMS: { $$exists: false } - description: "waitQueueTimeoutMS is not used to derive a maxTimeMS command field" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client observeEvents: - commandStartedEvent uriOptions: waitQueueTimeoutMS: 50000 - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - name: insertOne object: *collection arguments: document: { x: 1 } expectEvents: - client: *client events: - commandStartedEvent: commandName: insert databaseName: *databaseName command: insert: *collectionName maxTimeMS: { $$exists: false } - description: "wTimeoutMS is not used to derive a maxTimeMS command field" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client observeEvents: - commandStartedEvent uriOptions: wTimeoutMS: &wTimeoutMS 50000 - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - name: insertOne object: *collection arguments: document: { x: 1 } expectEvents: - client: *client events: - commandStartedEvent: commandName: insert databaseName: *databaseName command: insert: *collectionName maxTimeMS: { $$exists: false } writeConcern: wtimeout: *wTimeoutMS # If the maxTimeMS option is set for a specific command, it should be used as the maxTimeMS command field without any # modifications. This is different from timeoutMS because in that case, drivers subtract the target server's min # RTT from the remaining timeout to derive a maxTimeMS field. 
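# A small worked example of that derivation (illustrative figures, not taken
# from any test in this file): with timeoutMS=100 remaining and a target
# server whose measured minimum RTT is 10ms, a driver would send
# maxTimeMS = 100 - 10 = 90 so the server gives up early enough for its reply
# to reach the client before the deadline. The explicit maxTimeMS option
# below, by contrast, is forwarded to the server verbatim.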
- description: "maxTimeMS option is used directly as the maxTimeMS field on a command" operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client observeEvents: - commandStartedEvent - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - name: estimatedDocumentCount object: *collection arguments: maxTimeMS: &maxTimeMS 50000 expectEvents: - client: *client events: - commandStartedEvent: commandName: count databaseName: *databaseName command: count: *collectionName maxTimeMS: *maxTimeMS # Same test as above but with the maxCommitTimeMS option. - description: "maxCommitTimeMS option is used directly as the maxTimeMS field on a commitTransaction command" runOnRequirements: # Note: minServerVersion is specified in top-level runOnRequirements - topologies: ["replicaset", "sharded"] operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client observeEvents: - commandStartedEvent - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - session: id: &session session client: *client sessionOptions: defaultTransactionOptions: maxCommitTimeMS: &maxCommitTimeMS 1000 - name: startTransaction object: *session - name: insertOne object: *collection arguments: document: { _id: 1 } session: *session - name: commitTransaction object: *session expectEvents: - client: *client events: - commandStartedEvent: commandName: insert databaseName: *databaseName command: insert: *collectionName maxTimeMS: { $$exists: false } - commandStartedEvent: commandName: commitTransaction databaseName: admin command: commitTransaction: 1 maxTimeMS: *maxCommitTimeMS non-tailable-cursors.yml000066400000000000000000000236111505113246500342140ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_operations_timeoutdescription: "timeoutMS behaves correctly for non-tailable cursors" schemaVersion: "1.9" runOnRequirements: - minServerVersion: "4.4" createEntities: - client: id: &failPointClient failPointClient useMultipleMongoses: false - client: id: &client client uriOptions: timeoutMS: 10 useMultipleMongoses: false observeEvents: - commandStartedEvent ignoreCommandMonitoringEvents: - killCursors - database: id: &database database client: *client databaseName: &databaseName test - collection: id: &collection collection database: *database collectionName: &collectionName coll initialData: - collectionName: *collectionName databaseName: *databaseName documents: - { _id: 0 } - { _id: 1 } - { _id: 2 } - collectionName: &aggregateOutputCollectionName aggregateOutputColl databaseName: *databaseName documents: [] tests: # If timeoutMode is explicitly set to CURSOR_LIFETIME, the timeout should apply to the initial command. # This should also be the case if timeoutMode is unset, but this is already tested in global-timeoutMS.yml. 
- description: "timeoutMS applied to find if timeoutMode is cursor_lifetime" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["find"] blockConnection: true # changed to 30ms to accommodate jruby latencies blockTimeMS: 30 - name: find object: *collection arguments: filter: {} # added as a 25ms timeout to accommodate jruby latencies timeoutMS: 25 timeoutMode: cursorLifetime expectError: isTimeoutError: true expectEvents: - client: *client events: - commandStartedEvent: commandName: find databaseName: *databaseName command: find: *collectionName maxTimeMS: { $$type: ["int", "long"] } # If timeoutMode is unset, it should default to CURSOR_LIFETIME and the time remaining after the find succeeds should # be applied to the getMore. - description: "remaining timeoutMS applied to getMore if timeoutMode is unset" operations: # Block find/getMore for 15ms. - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 2 } data: failCommands: ["find", "getMore"] blockConnection: true # bumped to 50 to accommodate jruby latencies blockTimeMS: 50 # Run a find with timeoutMS=39 and batchSize=1 to force two batches, which will cause a find and a getMore to be # sent. Both will block for 20ms so together they will go over the timeout. - name: find object: *collection arguments: filter: {} # bumped to 99 to accommodate jruby latencies timeoutMS: 99 batchSize: 2 expectError: isTimeoutError: true expectEvents: - client: *client events: - commandStartedEvent: commandName: find databaseName: *databaseName command: find: *collectionName maxTimeMS: { $$type: ["int", "long"] } - commandStartedEvent: commandName: getMore databaseName: *databaseName command: getMore: { $$type: ["int", "long"] } collection: *collectionName maxTimeMS: { $$exists: false } # Same test as above, but with timeoutMode explicitly set to CURSOR_LIFETIME. - description: "remaining timeoutMS applied to getMore if timeoutMode is cursor_lifetime" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 2 } data: failCommands: ["find", "getMore"] blockConnection: true blockTimeMS: 20 - name: find object: *collection arguments: filter: {} timeoutMode: cursorLifetime timeoutMS: 39 batchSize: 2 expectError: isTimeoutError: true expectEvents: - client: *client events: - commandStartedEvent: commandName: find databaseName: *databaseName command: find: *collectionName maxTimeMS: { $$type: ["int", "long"] } - commandStartedEvent: commandName: getMore databaseName: *databaseName command: getMore: { $$type: ["int", "long"] } collection: *collectionName maxTimeMS: { $$exists: false } # If timeoutMode=ITERATION, timeoutMS should apply to the initial find command and the command shouldn't have a # maxTimeMS field. 
- description: "timeoutMS applied to find if timeoutMode is iteration" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["find"] blockConnection: true blockTimeMS: 15 - name: find object: *collection arguments: filter: {} timeoutMode: iteration expectError: isTimeoutError: true expectEvents: - client: *client events: - commandStartedEvent: commandName: find databaseName: *databaseName command: find: *collectionName maxTimeMS: { $$exists: false } # If timeoutMode=ITERATION, timeoutMS applies separately to the initial find and the getMore on the cursor. Neither # command should have a maxTimeMS field. This is a success test. The "find" is executed with timeoutMS=29 and both # "find" and "getMore" commands are blocked for 15ms each. Neither exceeds the timeout, so iteration succeeds. - description: "timeoutMS is refreshed for getMore if timeoutMode is iteration - success" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 2 } data: failCommands: ["find", "getMore"] blockConnection: true # blockTimeMS: 15 # Increase timeout blockTimeMS: 20 - name: find object: *collection arguments: filter: {} timeoutMode: iteration # timeoutMS: 29 # Increase timeout timeoutMS: 39 batchSize: 2 expectEvents: - client: *client events: - commandStartedEvent: commandName: find databaseName: *databaseName command: find: *collectionName maxTimeMS: { $$exists: false } - commandStartedEvent: commandName: getMore databaseName: *databaseName command: getMore: { $$type: ["int", "long"] } collection: *collectionName maxTimeMS: { $$exists: false } # If timeoutMode=ITERATION, timeoutMS applies separately to the initial find and the getMore on the cursor. Neither # command should have a maxTimeMS field. This is a failure test. The "find" inherits timeoutMS=10 and "getMore" # commands are blocked for 15ms, causing iteration to fail with a timeout error. 
- description: "timeoutMS is refreshed for getMore if timeoutMode is iteration - failure" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["getMore"] blockConnection: true blockTimeMS: 15 - name: find object: *collection arguments: filter: {} timeoutMode: iteration batchSize: 2 expectError: isTimeoutError: true expectEvents: - client: *client events: - commandStartedEvent: commandName: find databaseName: *databaseName command: find: *collectionName maxTimeMS: { $$exists: false } - commandStartedEvent: commandName: getMore databaseName: *databaseName command: getMore: { $$type: ["int", "long"] } collection: *collectionName maxTimeMS: { $$exists: false } - description: "aggregate with $out errors if timeoutMode is iteration" operations: - name: aggregate object: *collection arguments: pipeline: - $out: *aggregateOutputCollectionName timeoutMS: 100 timeoutMode: iteration expectError: isClientError: true expectEvents: - client: *client events: [] - description: "aggregate with $merge errors if timeoutMode is iteration" operations: - name: aggregate object: *collection arguments: pipeline: - $merge: *aggregateOutputCollectionName timeoutMS: 100 timeoutMode: iteration expectError: isClientError: true expectEvents: - client: *client events: [] override-collection-timeoutMS.yml000066400000000000000000001576401505113246500360570ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_operations_timeout# Tests in this file are generated from override-collection-timeoutMS.yml.template. description: "timeoutMS can be overridden for a MongoCollection" schemaVersion: "1.9" runOnRequirements: - minServerVersion: "4.4" topologies: ["replicaset", "sharded"] createEntities: - client: id: &failPointClient failPointClient useMultipleMongoses: false - client: id: &client client uriOptions: timeoutMS: 10 useMultipleMongoses: false observeEvents: - commandStartedEvent ignoreCommandMonitoringEvents: - killCursors - database: id: &database database client: *client databaseName: &databaseName test initialData: - collectionName: &collectionName coll databaseName: *databaseName documents: [] tests: # For each collection-level operation, we execute two tests: # # 1. timeoutMS can be overridden to a non-zero value for a MongoCollection. Each test uses the client entity defined # above to construct a collection entity with timeoutMS=1000 and configures a fail point to block the operation for # 15ms so the operation succeeds. # # 2. timeoutMS can be overridden to 0 for a MongoCollection. Each test constructs a collection entity with # timeoutMS=0 using the global client entity and configures a fail point to block the operation for 15ms. The # operation should succeed and the command sent to the server should not contain a maxTimeMS field. 
- description: "timeoutMS can be configured on a MongoCollection - aggregate on collection" operations: - name: createEntities object: testRunner arguments: entities: - collection: id: &collection collection database: *database collectionName: *collectionName collectionOptions: timeoutMS: 1000 - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["aggregate"] blockConnection: true blockTimeMS: 15 - name: aggregate object: *collection arguments: pipeline: [] expectEvents: - client: *client events: - commandStartedEvent: commandName: aggregate databaseName: *databaseName command: aggregate: *collectionName maxTimeMS: { $$type: ["int", "long"] } - description: "timeoutMS can be set to 0 on a MongoCollection - aggregate on collection" operations: - name: createEntities object: testRunner arguments: entities: - collection: id: &collection collection database: *database collectionName: *collectionName collectionOptions: timeoutMS: 0 - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["aggregate"] blockConnection: true blockTimeMS: 15 - name: aggregate object: *collection arguments: pipeline: [] expectEvents: - client: *client events: - commandStartedEvent: commandName: aggregate databaseName: *databaseName command: aggregate: *collectionName maxTimeMS: { $$exists: false } - description: "timeoutMS can be configured on a MongoCollection - count on collection" operations: - name: createEntities object: testRunner arguments: entities: - collection: id: &collection collection database: *database collectionName: *collectionName collectionOptions: timeoutMS: 1000 - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["count"] blockConnection: true blockTimeMS: 15 - name: count object: *collection arguments: filter: {} expectEvents: - client: *client events: - commandStartedEvent: commandName: count databaseName: *databaseName command: count: *collectionName maxTimeMS: { $$type: ["int", "long"] } - description: "timeoutMS can be set to 0 on a MongoCollection - count on collection" operations: - name: createEntities object: testRunner arguments: entities: - collection: id: &collection collection database: *database collectionName: *collectionName collectionOptions: timeoutMS: 0 - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["count"] blockConnection: true blockTimeMS: 15 - name: count object: *collection arguments: filter: {} expectEvents: - client: *client events: - commandStartedEvent: commandName: count databaseName: *databaseName command: count: *collectionName maxTimeMS: { $$exists: false } - description: "timeoutMS can be configured on a MongoCollection - countDocuments on collection" operations: - name: createEntities object: testRunner arguments: entities: - collection: id: &collection collection database: *database collectionName: *collectionName collectionOptions: timeoutMS: 1000 - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["aggregate"] blockConnection: true blockTimeMS: 15 - name: countDocuments object: *collection arguments: filter: {} expectEvents: - client: *client events: 
- commandStartedEvent: commandName: aggregate databaseName: *databaseName command: aggregate: *collectionName maxTimeMS: { $$type: ["int", "long"] } - description: "timeoutMS can be set to 0 on a MongoCollection - countDocuments on collection" operations: - name: createEntities object: testRunner arguments: entities: - collection: id: &collection collection database: *database collectionName: *collectionName collectionOptions: timeoutMS: 0 - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["aggregate"] blockConnection: true blockTimeMS: 15 - name: countDocuments object: *collection arguments: filter: {} expectEvents: - client: *client events: - commandStartedEvent: commandName: aggregate databaseName: *databaseName command: aggregate: *collectionName maxTimeMS: { $$exists: false } - description: "timeoutMS can be configured on a MongoCollection - estimatedDocumentCount on collection" operations: - name: createEntities object: testRunner arguments: entities: - collection: id: &collection collection database: *database collectionName: *collectionName collectionOptions: timeoutMS: 1000 - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["count"] blockConnection: true blockTimeMS: 15 - name: estimatedDocumentCount object: *collection expectEvents: - client: *client events: - commandStartedEvent: commandName: count databaseName: *databaseName command: count: *collectionName maxTimeMS: { $$type: ["int", "long"] } - description: "timeoutMS can be set to 0 on a MongoCollection - estimatedDocumentCount on collection" operations: - name: createEntities object: testRunner arguments: entities: - collection: id: &collection collection database: *database collectionName: *collectionName collectionOptions: timeoutMS: 0 - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["count"] blockConnection: true blockTimeMS: 15 - name: estimatedDocumentCount object: *collection expectEvents: - client: *client events: - commandStartedEvent: commandName: count databaseName: *databaseName command: count: *collectionName maxTimeMS: { $$exists: false } - description: "timeoutMS can be configured on a MongoCollection - distinct on collection" operations: - name: createEntities object: testRunner arguments: entities: - collection: id: &collection collection database: *database collectionName: *collectionName collectionOptions: timeoutMS: 1000 - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["distinct"] blockConnection: true blockTimeMS: 15 - name: distinct object: *collection arguments: fieldName: x filter: {} expectEvents: - client: *client events: - commandStartedEvent: commandName: distinct databaseName: *databaseName command: distinct: *collectionName maxTimeMS: { $$type: ["int", "long"] } - description: "timeoutMS can be set to 0 on a MongoCollection - distinct on collection" operations: - name: createEntities object: testRunner arguments: entities: - collection: id: &collection collection database: *database collectionName: *collectionName collectionOptions: timeoutMS: 0 - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } 
data: failCommands: ["distinct"] blockConnection: true blockTimeMS: 15 - name: distinct object: *collection arguments: fieldName: x filter: {} expectEvents: - client: *client events: - commandStartedEvent: commandName: distinct databaseName: *databaseName command: distinct: *collectionName maxTimeMS: { $$exists: false } - description: "timeoutMS can be configured on a MongoCollection - find on collection" operations: - name: createEntities object: testRunner arguments: entities: - collection: id: &collection collection database: *database collectionName: *collectionName collectionOptions: timeoutMS: 1000 - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["find"] blockConnection: true blockTimeMS: 15 - name: find object: *collection arguments: filter: {} expectEvents: - client: *client events: - commandStartedEvent: commandName: find databaseName: *databaseName command: find: *collectionName maxTimeMS: { $$type: ["int", "long"] } - description: "timeoutMS can be set to 0 on a MongoCollection - find on collection" operations: - name: createEntities object: testRunner arguments: entities: - collection: id: &collection collection database: *database collectionName: *collectionName collectionOptions: timeoutMS: 0 - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["find"] blockConnection: true blockTimeMS: 15 - name: find object: *collection arguments: filter: {} expectEvents: - client: *client events: - commandStartedEvent: commandName: find databaseName: *databaseName command: find: *collectionName maxTimeMS: { $$exists: false } - description: "timeoutMS can be configured on a MongoCollection - findOne on collection" operations: - name: createEntities object: testRunner arguments: entities: - collection: id: &collection collection database: *database collectionName: *collectionName collectionOptions: timeoutMS: 1000 - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["find"] blockConnection: true blockTimeMS: 15 - name: findOne object: *collection arguments: filter: {} expectEvents: - client: *client events: - commandStartedEvent: commandName: find databaseName: *databaseName command: find: *collectionName maxTimeMS: { $$type: ["int", "long"] } - description: "timeoutMS can be set to 0 on a MongoCollection - findOne on collection" operations: - name: createEntities object: testRunner arguments: entities: - collection: id: &collection collection database: *database collectionName: *collectionName collectionOptions: timeoutMS: 0 - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["find"] blockConnection: true blockTimeMS: 15 - name: findOne object: *collection arguments: filter: {} expectEvents: - client: *client events: - commandStartedEvent: commandName: find databaseName: *databaseName command: find: *collectionName maxTimeMS: { $$exists: false } - description: "timeoutMS can be configured on a MongoCollection - listIndexes on collection" operations: - name: createEntities object: testRunner arguments: entities: - collection: id: &collection collection database: *database collectionName: *collectionName collectionOptions: timeoutMS: 1000 - name: failPoint object: testRunner 
arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["listIndexes"] blockConnection: true blockTimeMS: 15 - name: listIndexes object: *collection expectEvents: - client: *client events: - commandStartedEvent: commandName: listIndexes databaseName: *databaseName command: listIndexes: *collectionName maxTimeMS: { $$type: ["int", "long"] } - description: "timeoutMS can be set to 0 on a MongoCollection - listIndexes on collection" operations: - name: createEntities object: testRunner arguments: entities: - collection: id: &collection collection database: *database collectionName: *collectionName collectionOptions: timeoutMS: 0 - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["listIndexes"] blockConnection: true blockTimeMS: 15 - name: listIndexes object: *collection expectEvents: - client: *client events: - commandStartedEvent: commandName: listIndexes databaseName: *databaseName command: listIndexes: *collectionName maxTimeMS: { $$exists: false } - description: "timeoutMS can be configured on a MongoCollection - listIndexNames on collection" operations: - name: createEntities object: testRunner arguments: entities: - collection: id: &collection collection database: *database collectionName: *collectionName collectionOptions: timeoutMS: 1000 - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["listIndexes"] blockConnection: true blockTimeMS: 15 - name: listIndexNames object: *collection expectEvents: - client: *client events: - commandStartedEvent: commandName: listIndexes databaseName: *databaseName command: listIndexes: *collectionName maxTimeMS: { $$type: ["int", "long"] } - description: "timeoutMS can be set to 0 on a MongoCollection - listIndexNames on collection" operations: - name: createEntities object: testRunner arguments: entities: - collection: id: &collection collection database: *database collectionName: *collectionName collectionOptions: timeoutMS: 0 - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["listIndexes"] blockConnection: true blockTimeMS: 15 - name: listIndexNames object: *collection expectEvents: - client: *client events: - commandStartedEvent: commandName: listIndexes databaseName: *databaseName command: listIndexes: *collectionName maxTimeMS: { $$exists: false } - description: "timeoutMS can be configured on a MongoCollection - createChangeStream on collection" operations: - name: createEntities object: testRunner arguments: entities: - collection: id: &collection collection database: *database collectionName: *collectionName collectionOptions: timeoutMS: 1000 - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["aggregate"] blockConnection: true blockTimeMS: 15 - name: createChangeStream object: *collection arguments: pipeline: [] expectEvents: - client: *client events: - commandStartedEvent: commandName: aggregate databaseName: *databaseName command: aggregate: *collectionName maxTimeMS: { $$type: ["int", "long"] } - description: "timeoutMS can be set to 0 on a MongoCollection - createChangeStream on collection" operations: - name: createEntities object: testRunner arguments: entities: - 
collection: id: &collection collection database: *database collectionName: *collectionName collectionOptions: timeoutMS: 0 - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["aggregate"] blockConnection: true blockTimeMS: 15 - name: createChangeStream object: *collection arguments: pipeline: [] expectEvents: - client: *client events: - commandStartedEvent: commandName: aggregate databaseName: *databaseName command: aggregate: *collectionName maxTimeMS: { $$exists: false } - description: "timeoutMS can be configured on a MongoCollection - insertOne on collection" operations: - name: createEntities object: testRunner arguments: entities: - collection: id: &collection collection database: *database collectionName: *collectionName collectionOptions: timeoutMS: 1000 - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["insert"] blockConnection: true blockTimeMS: 15 - name: insertOne object: *collection arguments: document: { x: 1 } expectEvents: - client: *client events: - commandStartedEvent: commandName: insert databaseName: *databaseName command: insert: *collectionName maxTimeMS: { $$type: ["int", "long"] } - description: "timeoutMS can be set to 0 on a MongoCollection - insertOne on collection" operations: - name: createEntities object: testRunner arguments: entities: - collection: id: &collection collection database: *database collectionName: *collectionName collectionOptions: timeoutMS: 0 - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["insert"] blockConnection: true blockTimeMS: 15 - name: insertOne object: *collection arguments: document: { x: 1 } expectEvents: - client: *client events: - commandStartedEvent: commandName: insert databaseName: *databaseName command: insert: *collectionName maxTimeMS: { $$exists: false } - description: "timeoutMS can be configured on a MongoCollection - insertMany on collection" operations: - name: createEntities object: testRunner arguments: entities: - collection: id: &collection collection database: *database collectionName: *collectionName collectionOptions: timeoutMS: 1000 - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["insert"] blockConnection: true blockTimeMS: 15 - name: insertMany object: *collection arguments: documents: - { x: 1 } expectEvents: - client: *client events: - commandStartedEvent: commandName: insert databaseName: *databaseName command: insert: *collectionName maxTimeMS: { $$type: ["int", "long"] } - description: "timeoutMS can be set to 0 on a MongoCollection - insertMany on collection" operations: - name: createEntities object: testRunner arguments: entities: - collection: id: &collection collection database: *database collectionName: *collectionName collectionOptions: timeoutMS: 0 - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["insert"] blockConnection: true blockTimeMS: 15 - name: insertMany object: *collection arguments: documents: - { x: 1 } expectEvents: - client: *client events: - commandStartedEvent: commandName: insert databaseName: *databaseName command: insert: *collectionName maxTimeMS: { 

  - description: "timeoutMS can be configured on a MongoCollection - deleteOne on collection"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
                collectionOptions:
                  timeoutMS: 1000
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["delete"]
              blockConnection: true
              blockTimeMS: 15
      - name: deleteOne
        object: *collection
        arguments:
          filter: {}
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: delete
              databaseName: *databaseName
              command:
                delete: *collectionName
                maxTimeMS: { $$type: ["int", "long"] }

  - description: "timeoutMS can be set to 0 on a MongoCollection - deleteOne on collection"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
                collectionOptions:
                  timeoutMS: 0
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["delete"]
              blockConnection: true
              blockTimeMS: 15
      - name: deleteOne
        object: *collection
        arguments:
          filter: {}
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: delete
              databaseName: *databaseName
              command:
                delete: *collectionName
                maxTimeMS: { $$exists: false }

  - description: "timeoutMS can be configured on a MongoCollection - deleteMany on collection"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
                collectionOptions:
                  timeoutMS: 1000
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["delete"]
              blockConnection: true
              blockTimeMS: 15
      - name: deleteMany
        object: *collection
        arguments:
          filter: {}
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: delete
              databaseName: *databaseName
              command:
                delete: *collectionName
                maxTimeMS: { $$type: ["int", "long"] }

  - description: "timeoutMS can be set to 0 on a MongoCollection - deleteMany on collection"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
                collectionOptions:
                  timeoutMS: 0
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["delete"]
              blockConnection: true
              blockTimeMS: 15
      - name: deleteMany
        object: *collection
        arguments:
          filter: {}
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: delete
              databaseName: *databaseName
              command:
                delete: *collectionName
                maxTimeMS: { $$exists: false }

  - description: "timeoutMS can be configured on a MongoCollection - replaceOne on collection"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
                collectionOptions:
                  timeoutMS: 1000
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["update"]
              blockConnection: true
              blockTimeMS: 15
      - name: replaceOne
        object: *collection
        arguments:
          filter: {}
          replacement: { x: 1 }
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: update
              databaseName: *databaseName
              command:
                update: *collectionName
                maxTimeMS: { $$type: ["int", "long"] }

  - description: "timeoutMS can be set to 0 on a MongoCollection - replaceOne on collection"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
                collectionOptions:
                  timeoutMS: 0
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["update"]
              blockConnection: true
              blockTimeMS: 15
      - name: replaceOne
        object: *collection
        arguments:
          filter: {}
          replacement: { x: 1 }
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: update
              databaseName: *databaseName
              command:
                update: *collectionName
                maxTimeMS: { $$exists: false }

  - description: "timeoutMS can be configured on a MongoCollection - updateOne on collection"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
                collectionOptions:
                  timeoutMS: 1000
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["update"]
              blockConnection: true
              blockTimeMS: 15
      - name: updateOne
        object: *collection
        arguments:
          filter: {}
          update: { $set: { x: 1 } }
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: update
              databaseName: *databaseName
              command:
                update: *collectionName
                maxTimeMS: { $$type: ["int", "long"] }

  - description: "timeoutMS can be set to 0 on a MongoCollection - updateOne on collection"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
                collectionOptions:
                  timeoutMS: 0
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["update"]
              blockConnection: true
              blockTimeMS: 15
      - name: updateOne
        object: *collection
        arguments:
          filter: {}
          update: { $set: { x: 1 } }
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: update
              databaseName: *databaseName
              command:
                update: *collectionName
                maxTimeMS: { $$exists: false }

  - description: "timeoutMS can be configured on a MongoCollection - updateMany on collection"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
                collectionOptions:
                  timeoutMS: 1000
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["update"]
              blockConnection: true
              blockTimeMS: 15
      - name: updateMany
        object: *collection
        arguments:
          filter: {}
          update: { $set: { x: 1 } }
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: update
              databaseName: *databaseName
              command:
                update: *collectionName
                maxTimeMS: { $$type: ["int", "long"] }

  - description: "timeoutMS can be set to 0 on a MongoCollection - updateMany on collection"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
                collectionOptions:
                  timeoutMS: 0
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["update"]
              blockConnection: true
              blockTimeMS: 15
      - name: updateMany
        object: *collection
        arguments:
          filter: {}
          update: { $set: { x: 1 } }
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: update
              databaseName: *databaseName
              command:
                update: *collectionName
                maxTimeMS: { $$exists: false }

  - description: "timeoutMS can be configured on a MongoCollection - findOneAndDelete on collection"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
                collectionOptions:
                  timeoutMS: 1000
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["findAndModify"]
              blockConnection: true
              blockTimeMS: 15
      - name: findOneAndDelete
        object: *collection
        arguments:
          filter: {}
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: findAndModify
              databaseName: *databaseName
              command:
                findAndModify: *collectionName
                maxTimeMS: { $$type: ["int", "long"] }

  - description: "timeoutMS can be set to 0 on a MongoCollection - findOneAndDelete on collection"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
                collectionOptions:
                  timeoutMS: 0
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["findAndModify"]
              blockConnection: true
              blockTimeMS: 15
      - name: findOneAndDelete
        object: *collection
        arguments:
          filter: {}
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: findAndModify
              databaseName: *databaseName
              command:
                findAndModify: *collectionName
                maxTimeMS: { $$exists: false }

  - description: "timeoutMS can be configured on a MongoCollection - findOneAndReplace on collection"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
                collectionOptions:
                  timeoutMS: 1000
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["findAndModify"]
              blockConnection: true
              blockTimeMS: 15
      - name: findOneAndReplace
        object: *collection
        arguments:
          filter: {}
          replacement: { x: 1 }
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: findAndModify
              databaseName: *databaseName
              command:
                findAndModify: *collectionName
                maxTimeMS: { $$type: ["int", "long"] }

  - description: "timeoutMS can be set to 0 on a MongoCollection - findOneAndReplace on collection"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
                collectionOptions:
                  timeoutMS: 0
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["findAndModify"]
              blockConnection: true
              blockTimeMS: 15
      - name: findOneAndReplace
        object: *collection
        arguments:
          filter: {}
          replacement: { x: 1 }
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: findAndModify
              databaseName: *databaseName
              command:
                findAndModify: *collectionName
                maxTimeMS: { $$exists: false }

  - description: "timeoutMS can be configured on a MongoCollection - findOneAndUpdate on collection"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
                collectionOptions:
                  timeoutMS: 1000
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["findAndModify"]
              blockConnection: true
              blockTimeMS: 15
      - name: findOneAndUpdate
        object: *collection
        arguments:
          filter: {}
          update: { $set: { x: 1 } }
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: findAndModify
              databaseName: *databaseName
              command:
                findAndModify: *collectionName
                maxTimeMS: { $$type: ["int", "long"] }

  - description: "timeoutMS can be set to 0 on a MongoCollection - findOneAndUpdate on collection"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
                collectionOptions:
                  timeoutMS: 0
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["findAndModify"]
              blockConnection: true
              blockTimeMS: 15
      - name: findOneAndUpdate
        object: *collection
        arguments:
          filter: {}
          update: { $set: { x: 1 } }
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: findAndModify
              databaseName: *databaseName
              command:
                findAndModify: *collectionName
                maxTimeMS: { $$exists: false }

  - description: "timeoutMS can be configured on a MongoCollection - bulkWrite on collection"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
                collectionOptions:
                  timeoutMS: 1000
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["insert"]
              blockConnection: true
              blockTimeMS: 15
      - name: bulkWrite
        object: *collection
        arguments:
          requests:
            - insertOne:
                document: { _id: 1 }
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: insert
              databaseName: *databaseName
              command:
                insert: *collectionName
                maxTimeMS: { $$type: ["int", "long"] }

  - description: "timeoutMS can be set to 0 on a MongoCollection - bulkWrite on collection"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
                collectionOptions:
                  timeoutMS: 0
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["insert"]
              blockConnection: true
              blockTimeMS: 15
      - name: bulkWrite
        object: *collection
        arguments:
          requests:
            - insertOne:
                document: { _id: 1 }
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: insert
              databaseName: *databaseName
              command:
                insert: *collectionName
                maxTimeMS: { $$exists: false }

  - description: "timeoutMS can be configured on a MongoCollection - createIndex on collection"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
                collectionOptions:
                  timeoutMS: 1000
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["createIndexes"]
              blockConnection: true
              blockTimeMS: 15
      - name: createIndex
        object: *collection
        arguments:
          keys: { x: 1 }
          name: "x_1"
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: createIndexes
              databaseName: *databaseName
              command:
                createIndexes: *collectionName
                maxTimeMS: { $$type: ["int", "long"] }

  - description: "timeoutMS can be set to 0 on a MongoCollection - createIndex on collection"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
                collectionOptions:
                  timeoutMS: 0
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["createIndexes"]
              blockConnection: true
              blockTimeMS: 15
      - name: createIndex
        object: *collection
        arguments:
          keys: { x: 1 }
          name: "x_1"
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: createIndexes
              databaseName: *databaseName
              command:
                createIndexes: *collectionName
                maxTimeMS: { $$exists: false }

  - description: "timeoutMS can be configured on a MongoCollection - dropIndex on collection"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
                collectionOptions:
                  timeoutMS: 1000
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["dropIndexes"]
              blockConnection: true
              blockTimeMS: 15
      - name: dropIndex
        object: *collection
        arguments:
          name: "x_1"
        expectError:
          isClientError: false
          isTimeoutError: false
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: dropIndexes
              databaseName: *databaseName
              command:
                dropIndexes: *collectionName
                maxTimeMS: { $$type: ["int", "long"] }

  - description: "timeoutMS can be set to 0 on a MongoCollection - dropIndex on collection"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
                collectionOptions:
                  timeoutMS: 0
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["dropIndexes"]
              blockConnection: true
              blockTimeMS: 15
      - name: dropIndex
        object: *collection
        arguments:
          name: "x_1"
        expectError:
          isClientError: false
          isTimeoutError: false
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: dropIndexes
              databaseName: *databaseName
              command:
                dropIndexes: *collectionName
                maxTimeMS: { $$exists: false }

  - description: "timeoutMS can be configured on a MongoCollection - dropIndexes on collection"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
                collectionOptions:
                  timeoutMS: 1000
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["dropIndexes"]
              blockConnection: true
              blockTimeMS: 15
      - name: dropIndexes
        object: *collection
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: dropIndexes
              databaseName: *databaseName
              command:
                dropIndexes: *collectionName
                maxTimeMS: { $$type: ["int", "long"] }

  - description: "timeoutMS can be set to 0 on a MongoCollection - dropIndexes on collection"
    operations:
      - name: createEntities
        object: testRunner
        arguments:
          entities:
            - collection:
                id: &collection collection
                database: *database
                collectionName: *collectionName
                collectionOptions:
                  timeoutMS: 0
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["dropIndexes"]
              blockConnection: true
              blockTimeMS: 15
      - name: dropIndexes
        object: *collection
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: dropIndexes
              databaseName: *databaseName
              command:
                dropIndexes: *collectionName
                maxTimeMS: { $$exists: false }

# ===== mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_operations_timeout/override-operation-timeoutMS.yml =====

# Tests in this file are generated from override-operation-timeoutMS.yml.template.

description: "timeoutMS can be overridden for an operation"

schemaVersion: "1.9"

runOnRequirements:
  - minServerVersion: "4.4"
    topologies: ["replicaset", "sharded"]

createEntities:
  - client:
      id: &failPointClient failPointClient
      useMultipleMongoses: false
  - client:
      id: &client client
      uriOptions:
        timeoutMS: 10
      useMultipleMongoses: false
      observeEvents:
        - commandStartedEvent
      ignoreCommandMonitoringEvents:
        - killCursors
  - database:
      id: &database database
      client: *client
      databaseName: &databaseName test
  - collection:
      id: &collection collection
      database: *database
      collectionName: &collectionName coll

initialData:
  - collectionName: *collectionName
    databaseName: *databaseName
    documents: []

tests:
  # For each operation, we execute two tests:
  #
  # 1. timeoutMS can be overridden to a non-zero value for an operation. Each test executes an
  # operation using one of the entities defined above with an overridden timeoutMS=1000 and
  # configures a fail point to block the operation for 15ms so the operation succeeds.
  #
  # 2. timeoutMS can be overridden to 0 for an operation. Each test executes an operation using
  # the entities defined above with an overridden timeoutMS=0 so the operation succeeds.
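  #
  # Editorial sketch (not a generated test): the per-operation override exercised below maps to
  # passing a timeout option to a single driver call. In Ruby driver terms this is approximately
  # the following; the :timeout_ms operation option spelling is an assumption here, shown only
  # for orientation:
  #
  #   coll.insert_one({ x: 1 }, timeout_ms: 1000) # command is sent with maxTimeMS attached
  #   coll.insert_one({ x: 1 }, timeout_ms: 0)    # timeout disabled; no maxTimeMS on the command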
- description: "timeoutMS can be configured for an operation - listDatabases on client" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["listDatabases"] blockConnection: true blockTimeMS: 15 - name: listDatabases object: *client arguments: timeoutMS: 1000 filter: {} expectEvents: - client: *client events: - commandStartedEvent: commandName: listDatabases databaseName: admin command: listDatabases: 1 maxTimeMS: { $$type: ["int", "long"] } - description: "timeoutMS can be set to 0 for an operation - listDatabases on client" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["listDatabases"] blockConnection: true blockTimeMS: 15 - name: listDatabases object: *client arguments: timeoutMS: 0 filter: {} expectEvents: - client: *client events: - commandStartedEvent: commandName: listDatabases databaseName: admin command: listDatabases: 1 maxTimeMS: { $$exists: false } - description: "timeoutMS can be configured for an operation - listDatabaseNames on client" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["listDatabases"] blockConnection: true blockTimeMS: 15 - name: listDatabaseNames object: *client arguments: timeoutMS: 1000 expectEvents: - client: *client events: - commandStartedEvent: commandName: listDatabases databaseName: admin command: listDatabases: 1 maxTimeMS: { $$type: ["int", "long"] } - description: "timeoutMS can be set to 0 for an operation - listDatabaseNames on client" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["listDatabases"] blockConnection: true blockTimeMS: 15 - name: listDatabaseNames object: *client arguments: timeoutMS: 0 expectEvents: - client: *client events: - commandStartedEvent: commandName: listDatabases databaseName: admin command: listDatabases: 1 maxTimeMS: { $$exists: false } - description: "timeoutMS can be configured for an operation - createChangeStream on client" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["aggregate"] blockConnection: true blockTimeMS: 15 - name: createChangeStream object: *client arguments: timeoutMS: 1000 pipeline: [] expectEvents: - client: *client events: - commandStartedEvent: commandName: aggregate databaseName: admin command: aggregate: 1 maxTimeMS: { $$type: ["int", "long"] } - description: "timeoutMS can be set to 0 for an operation - createChangeStream on client" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["aggregate"] blockConnection: true blockTimeMS: 15 - name: createChangeStream object: *client arguments: timeoutMS: 0 pipeline: [] expectEvents: - client: *client events: - commandStartedEvent: commandName: aggregate databaseName: admin command: aggregate: 1 maxTimeMS: { $$exists: false } - description: "timeoutMS can be configured for an operation - aggregate on database" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { 
            data:
              failCommands: ["aggregate"]
              blockConnection: true
              blockTimeMS: 15
      - name: aggregate
        object: *database
        arguments:
          timeoutMS: 1000
          pipeline: [ { $listLocalSessions: {} }, { $limit: 1 } ]
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: aggregate
              databaseName: *databaseName
              command:
                aggregate: 1
                maxTimeMS: { $$type: ["int", "long"] }

  - description: "timeoutMS can be set to 0 for an operation - aggregate on database"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["aggregate"]
              blockConnection: true
              blockTimeMS: 15
      - name: aggregate
        object: *database
        arguments:
          timeoutMS: 0
          pipeline: [ { $listLocalSessions: {} }, { $limit: 1 } ]
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: aggregate
              databaseName: *databaseName
              command:
                aggregate: 1
                maxTimeMS: { $$exists: false }

  - description: "timeoutMS can be configured for an operation - listCollections on database"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["listCollections"]
              blockConnection: true
              blockTimeMS: 15
      - name: listCollections
        object: *database
        arguments:
          timeoutMS: 1000
          filter: {}
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: listCollections
              databaseName: *databaseName
              command:
                listCollections: 1
                maxTimeMS: { $$type: ["int", "long"] }

  - description: "timeoutMS can be set to 0 for an operation - listCollections on database"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["listCollections"]
              blockConnection: true
              blockTimeMS: 15
      - name: listCollections
        object: *database
        arguments:
          timeoutMS: 0
          filter: {}
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: listCollections
              databaseName: *databaseName
              command:
                listCollections: 1
                maxTimeMS: { $$exists: false }

  - description: "timeoutMS can be configured for an operation - listCollectionNames on database"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["listCollections"]
              blockConnection: true
              blockTimeMS: 15
      - name: listCollectionNames
        object: *database
        arguments:
          timeoutMS: 1000
          filter: {}
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: listCollections
              databaseName: *databaseName
              command:
                listCollections: 1
                maxTimeMS: { $$type: ["int", "long"] }

  - description: "timeoutMS can be set to 0 for an operation - listCollectionNames on database"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["listCollections"]
              blockConnection: true
              blockTimeMS: 15
      - name: listCollectionNames
        object: *database
        arguments:
          timeoutMS: 0
          filter: {}
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: listCollections
              databaseName: *databaseName
              command:
                listCollections: 1
                maxTimeMS: { $$exists: false }

  - description: "timeoutMS can be configured for an operation - runCommand on database"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["ping"]
              blockConnection: true
              blockTimeMS: 15
      - name: runCommand
        object: *database
        arguments:
          timeoutMS: 1000
          command: { ping: 1 }
          commandName: ping
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: ping
              databaseName: *databaseName
              command:
                ping: 1
                maxTimeMS: { $$type: ["int", "long"] }

  - description: "timeoutMS can be set to 0 for an operation - runCommand on database"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["ping"]
              blockConnection: true
              blockTimeMS: 15
      - name: runCommand
        object: *database
        arguments:
          timeoutMS: 0
          command: { ping: 1 }
          commandName: ping
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: ping
              databaseName: *databaseName
              command:
                ping: 1
                maxTimeMS: { $$exists: false }

  - description: "timeoutMS can be configured for an operation - createChangeStream on database"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["aggregate"]
              blockConnection: true
              blockTimeMS: 15
      - name: createChangeStream
        object: *database
        arguments:
          timeoutMS: 1000
          pipeline: []
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: aggregate
              databaseName: *databaseName
              command:
                aggregate: 1
                maxTimeMS: { $$type: ["int", "long"] }

  - description: "timeoutMS can be set to 0 for an operation - createChangeStream on database"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["aggregate"]
              blockConnection: true
              blockTimeMS: 15
      - name: createChangeStream
        object: *database
        arguments:
          timeoutMS: 0
          pipeline: []
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: aggregate
              databaseName: *databaseName
              command:
                aggregate: 1
                maxTimeMS: { $$exists: false }

  - description: "timeoutMS can be configured for an operation - aggregate on collection"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["aggregate"]
              blockConnection: true
              blockTimeMS: 15
      - name: aggregate
        object: *collection
        arguments:
          timeoutMS: 1000
          pipeline: []
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: aggregate
              databaseName: *databaseName
              command:
                aggregate: *collectionName
                maxTimeMS: { $$type: ["int", "long"] }

  - description: "timeoutMS can be set to 0 for an operation - aggregate on collection"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["aggregate"]
              blockConnection: true
              blockTimeMS: 15
      - name: aggregate
        object: *collection
        arguments:
          timeoutMS: 0
          pipeline: []
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: aggregate
              databaseName: *databaseName
              command:
                aggregate: *collectionName
                maxTimeMS: { $$exists: false }

  - description: "timeoutMS can be configured for an operation - count on collection"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["count"]
              blockConnection: true
              blockTimeMS: 15
      - name: count
        object: *collection
        arguments:
          timeoutMS: 1000
          filter: {}
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: count
              databaseName: *databaseName
              command:
                count: *collectionName
                maxTimeMS: { $$type: ["int", "long"] }

  - description: "timeoutMS can be set to 0 for an operation - count on collection"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["count"]
              blockConnection: true
              blockTimeMS: 15
      - name: count
        object: *collection
        arguments:
          timeoutMS: 0
          filter: {}
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: count
              databaseName: *databaseName
              command:
                count: *collectionName
                maxTimeMS: { $$exists: false }

  - description: "timeoutMS can be configured for an operation - countDocuments on collection"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["aggregate"]
              blockConnection: true
              blockTimeMS: 15
      - name: countDocuments
        object: *collection
        arguments:
          timeoutMS: 1000
          filter: {}
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: aggregate
              databaseName: *databaseName
              command:
                aggregate: *collectionName
                maxTimeMS: { $$type: ["int", "long"] }

  - description: "timeoutMS can be set to 0 for an operation - countDocuments on collection"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["aggregate"]
              blockConnection: true
              blockTimeMS: 15
      - name: countDocuments
        object: *collection
        arguments:
          timeoutMS: 0
          filter: {}
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: aggregate
              databaseName: *databaseName
              command:
                aggregate: *collectionName
                maxTimeMS: { $$exists: false }

  - description: "timeoutMS can be configured for an operation - estimatedDocumentCount on collection"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["count"]
              blockConnection: true
              blockTimeMS: 15
      - name: estimatedDocumentCount
        object: *collection
        arguments:
          timeoutMS: 1000
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: count
              databaseName: *databaseName
              command:
                count: *collectionName
                maxTimeMS: { $$type: ["int", "long"] }

  - description: "timeoutMS can be set to 0 for an operation - estimatedDocumentCount on collection"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["count"]
              blockConnection: true
              blockTimeMS: 15
      - name: estimatedDocumentCount
        object: *collection
        arguments:
          timeoutMS: 0
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: count
              databaseName: *databaseName
              command:
                count: *collectionName
                maxTimeMS: { $$exists: false }

  - description: "timeoutMS can be configured for an operation - distinct on collection"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["distinct"]
              blockConnection: true
              blockTimeMS: 15
      - name: distinct
        object: *collection
        arguments:
          timeoutMS: 1000
          fieldName: x
          filter: {}
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: distinct
              databaseName: *databaseName
              command:
                distinct: *collectionName
                maxTimeMS: { $$type: ["int", "long"] }
description: "timeoutMS can be set to 0 for an operation - distinct on collection" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["distinct"] blockConnection: true blockTimeMS: 15 - name: distinct object: *collection arguments: timeoutMS: 0 fieldName: x filter: {} expectEvents: - client: *client events: - commandStartedEvent: commandName: distinct databaseName: *databaseName command: distinct: *collectionName maxTimeMS: { $$exists: false } - description: "timeoutMS can be configured for an operation - find on collection" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["find"] blockConnection: true blockTimeMS: 15 - name: find object: *collection arguments: timeoutMS: 1000 filter: {} expectEvents: - client: *client events: - commandStartedEvent: commandName: find databaseName: *databaseName command: find: *collectionName maxTimeMS: { $$type: ["int", "long"] } - description: "timeoutMS can be set to 0 for an operation - find on collection" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["find"] blockConnection: true blockTimeMS: 15 - name: find object: *collection arguments: timeoutMS: 0 filter: {} expectEvents: - client: *client events: - commandStartedEvent: commandName: find databaseName: *databaseName command: find: *collectionName maxTimeMS: { $$exists: false } - description: "timeoutMS can be configured for an operation - findOne on collection" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["find"] blockConnection: true blockTimeMS: 15 - name: findOne object: *collection arguments: timeoutMS: 1000 filter: {} expectEvents: - client: *client events: - commandStartedEvent: commandName: find databaseName: *databaseName command: find: *collectionName maxTimeMS: { $$type: ["int", "long"] } - description: "timeoutMS can be set to 0 for an operation - findOne on collection" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["find"] blockConnection: true blockTimeMS: 15 - name: findOne object: *collection arguments: timeoutMS: 0 filter: {} expectEvents: - client: *client events: - commandStartedEvent: commandName: find databaseName: *databaseName command: find: *collectionName maxTimeMS: { $$exists: false } - description: "timeoutMS can be configured for an operation - listIndexes on collection" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["listIndexes"] blockConnection: true blockTimeMS: 15 - name: listIndexes object: *collection arguments: timeoutMS: 1000 expectEvents: - client: *client events: - commandStartedEvent: commandName: listIndexes databaseName: *databaseName command: listIndexes: *collectionName maxTimeMS: { $$type: ["int", "long"] } - description: "timeoutMS can be set to 0 for an operation - listIndexes on collection" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: 
failCommands: ["listIndexes"] blockConnection: true blockTimeMS: 15 - name: listIndexes object: *collection arguments: timeoutMS: 0 expectEvents: - client: *client events: - commandStartedEvent: commandName: listIndexes databaseName: *databaseName command: listIndexes: *collectionName maxTimeMS: { $$exists: false } - description: "timeoutMS can be configured for an operation - listIndexNames on collection" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["listIndexes"] blockConnection: true blockTimeMS: 15 - name: listIndexNames object: *collection arguments: timeoutMS: 1000 expectEvents: - client: *client events: - commandStartedEvent: commandName: listIndexes databaseName: *databaseName command: listIndexes: *collectionName maxTimeMS: { $$type: ["int", "long"] } - description: "timeoutMS can be set to 0 for an operation - listIndexNames on collection" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["listIndexes"] blockConnection: true blockTimeMS: 15 - name: listIndexNames object: *collection arguments: timeoutMS: 0 expectEvents: - client: *client events: - commandStartedEvent: commandName: listIndexes databaseName: *databaseName command: listIndexes: *collectionName maxTimeMS: { $$exists: false } - description: "timeoutMS can be configured for an operation - createChangeStream on collection" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["aggregate"] blockConnection: true blockTimeMS: 15 - name: createChangeStream object: *collection arguments: timeoutMS: 1000 pipeline: [] expectEvents: - client: *client events: - commandStartedEvent: commandName: aggregate databaseName: *databaseName command: aggregate: *collectionName maxTimeMS: { $$type: ["int", "long"] } - description: "timeoutMS can be set to 0 for an operation - createChangeStream on collection" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["aggregate"] blockConnection: true blockTimeMS: 15 - name: createChangeStream object: *collection arguments: timeoutMS: 0 pipeline: [] expectEvents: - client: *client events: - commandStartedEvent: commandName: aggregate databaseName: *databaseName command: aggregate: *collectionName maxTimeMS: { $$exists: false } - description: "timeoutMS can be configured for an operation - insertOne on collection" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["insert"] blockConnection: true blockTimeMS: 15 - name: insertOne object: *collection arguments: timeoutMS: 1000 document: { x: 1 } expectEvents: - client: *client events: - commandStartedEvent: commandName: insert databaseName: *databaseName command: insert: *collectionName maxTimeMS: { $$type: ["int", "long"] } - description: "timeoutMS can be set to 0 for an operation - insertOne on collection" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["insert"] blockConnection: true blockTimeMS: 15 - name: insertOne object: *collection arguments: 
          timeoutMS: 0
          document: { x: 1 }
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: insert
              databaseName: *databaseName
              command:
                insert: *collectionName
                maxTimeMS: { $$exists: false }

  - description: "timeoutMS can be configured for an operation - insertMany on collection"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["insert"]
              blockConnection: true
              blockTimeMS: 15
      - name: insertMany
        object: *collection
        arguments:
          timeoutMS: 1000
          documents:
            - { x: 1 }
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: insert
              databaseName: *databaseName
              command:
                insert: *collectionName
                maxTimeMS: { $$type: ["int", "long"] }

  - description: "timeoutMS can be set to 0 for an operation - insertMany on collection"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["insert"]
              blockConnection: true
              blockTimeMS: 15
      - name: insertMany
        object: *collection
        arguments:
          timeoutMS: 0
          documents:
            - { x: 1 }
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: insert
              databaseName: *databaseName
              command:
                insert: *collectionName
                maxTimeMS: { $$exists: false }

  - description: "timeoutMS can be configured for an operation - deleteOne on collection"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["delete"]
              blockConnection: true
              blockTimeMS: 15
      - name: deleteOne
        object: *collection
        arguments:
          timeoutMS: 1000
          filter: {}
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: delete
              databaseName: *databaseName
              command:
                delete: *collectionName
                maxTimeMS: { $$type: ["int", "long"] }

  - description: "timeoutMS can be set to 0 for an operation - deleteOne on collection"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["delete"]
              blockConnection: true
              blockTimeMS: 15
      - name: deleteOne
        object: *collection
        arguments:
          timeoutMS: 0
          filter: {}
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: delete
              databaseName: *databaseName
              command:
                delete: *collectionName
                maxTimeMS: { $$exists: false }

  - description: "timeoutMS can be configured for an operation - deleteMany on collection"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["delete"]
              blockConnection: true
              blockTimeMS: 15
      - name: deleteMany
        object: *collection
        arguments:
          timeoutMS: 1000
          filter: {}
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: delete
              databaseName: *databaseName
              command:
                delete: *collectionName
                maxTimeMS: { $$type: ["int", "long"] }

  - description: "timeoutMS can be set to 0 for an operation - deleteMany on collection"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["delete"]
              blockConnection: true
              blockTimeMS: 15
      - name: deleteMany
        object: *collection
        arguments:
          timeoutMS: 0
          filter: {}
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: delete
              databaseName: *databaseName
              command:
                delete: *collectionName
                maxTimeMS: { $$exists: false }

  - description: "timeoutMS can be configured for an operation - replaceOne on collection"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["update"]
              blockConnection: true
              blockTimeMS: 15
      - name: replaceOne
        object: *collection
        arguments:
          timeoutMS: 1000
          filter: {}
          replacement: { x: 1 }
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: update
              databaseName: *databaseName
              command:
                update: *collectionName
                maxTimeMS: { $$type: ["int", "long"] }

  - description: "timeoutMS can be set to 0 for an operation - replaceOne on collection"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["update"]
              blockConnection: true
              blockTimeMS: 15
      - name: replaceOne
        object: *collection
        arguments:
          timeoutMS: 0
          filter: {}
          replacement: { x: 1 }
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: update
              databaseName: *databaseName
              command:
                update: *collectionName
                maxTimeMS: { $$exists: false }

  - description: "timeoutMS can be configured for an operation - updateOne on collection"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["update"]
              blockConnection: true
              blockTimeMS: 15
      - name: updateOne
        object: *collection
        arguments:
          timeoutMS: 1000
          filter: {}
          update: { $set: { x: 1 } }
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: update
              databaseName: *databaseName
              command:
                update: *collectionName
                maxTimeMS: { $$type: ["int", "long"] }

  - description: "timeoutMS can be set to 0 for an operation - updateOne on collection"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["update"]
              blockConnection: true
              blockTimeMS: 15
      - name: updateOne
        object: *collection
        arguments:
          timeoutMS: 0
          filter: {}
          update: { $set: { x: 1 } }
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: update
              databaseName: *databaseName
              command:
                update: *collectionName
                maxTimeMS: { $$exists: false }

  - description: "timeoutMS can be configured for an operation - updateMany on collection"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["update"]
              blockConnection: true
              blockTimeMS: 15
      - name: updateMany
        object: *collection
        arguments:
          timeoutMS: 1000
          filter: {}
          update: { $set: { x: 1 } }
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: update
              databaseName: *databaseName
              command:
                update: *collectionName
                maxTimeMS: { $$type: ["int", "long"] }

  - description: "timeoutMS can be set to 0 for an operation - updateMany on collection"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["update"]
              blockConnection: true
              blockTimeMS: 15
      - name: updateMany
        object: *collection
        arguments:
          timeoutMS: 0
          filter: {}
          update: { $set: { x: 1 } }
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: update
              databaseName: *databaseName
              command:
                update: *collectionName
                maxTimeMS: { $$exists: false }
"timeoutMS can be configured for an operation - findOneAndDelete on collection" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["findAndModify"] blockConnection: true blockTimeMS: 15 - name: findOneAndDelete object: *collection arguments: timeoutMS: 1000 filter: {} expectEvents: - client: *client events: - commandStartedEvent: commandName: findAndModify databaseName: *databaseName command: findAndModify: *collectionName maxTimeMS: { $$type: ["int", "long"] } - description: "timeoutMS can be set to 0 for an operation - findOneAndDelete on collection" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["findAndModify"] blockConnection: true blockTimeMS: 15 - name: findOneAndDelete object: *collection arguments: timeoutMS: 0 filter: {} expectEvents: - client: *client events: - commandStartedEvent: commandName: findAndModify databaseName: *databaseName command: findAndModify: *collectionName maxTimeMS: { $$exists: false } - description: "timeoutMS can be configured for an operation - findOneAndReplace on collection" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["findAndModify"] blockConnection: true blockTimeMS: 15 - name: findOneAndReplace object: *collection arguments: timeoutMS: 1000 filter: {} replacement: { x: 1 } expectEvents: - client: *client events: - commandStartedEvent: commandName: findAndModify databaseName: *databaseName command: findAndModify: *collectionName maxTimeMS: { $$type: ["int", "long"] } - description: "timeoutMS can be set to 0 for an operation - findOneAndReplace on collection" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["findAndModify"] blockConnection: true blockTimeMS: 15 - name: findOneAndReplace object: *collection arguments: timeoutMS: 0 filter: {} replacement: { x: 1 } expectEvents: - client: *client events: - commandStartedEvent: commandName: findAndModify databaseName: *databaseName command: findAndModify: *collectionName maxTimeMS: { $$exists: false } - description: "timeoutMS can be configured for an operation - findOneAndUpdate on collection" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["findAndModify"] blockConnection: true blockTimeMS: 15 - name: findOneAndUpdate object: *collection arguments: timeoutMS: 1000 filter: {} update: { $set: { x: 1 } } expectEvents: - client: *client events: - commandStartedEvent: commandName: findAndModify databaseName: *databaseName command: findAndModify: *collectionName maxTimeMS: { $$type: ["int", "long"] } - description: "timeoutMS can be set to 0 for an operation - findOneAndUpdate on collection" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["findAndModify"] blockConnection: true blockTimeMS: 15 - name: findOneAndUpdate object: *collection arguments: timeoutMS: 0 filter: {} update: { $set: { x: 1 } } expectEvents: - client: *client events: - commandStartedEvent: commandName: findAndModify databaseName: 
              command:
                findAndModify: *collectionName
                maxTimeMS: { $$exists: false }

  - description: "timeoutMS can be configured for an operation - bulkWrite on collection"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["insert"]
              blockConnection: true
              blockTimeMS: 15
      - name: bulkWrite
        object: *collection
        arguments:
          timeoutMS: 1000
          requests:
            - insertOne:
                document: { _id: 1 }
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: insert
              databaseName: *databaseName
              command:
                insert: *collectionName
                maxTimeMS: { $$type: ["int", "long"] }

  - description: "timeoutMS can be set to 0 for an operation - bulkWrite on collection"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["insert"]
              blockConnection: true
              blockTimeMS: 15
      - name: bulkWrite
        object: *collection
        arguments:
          timeoutMS: 0
          requests:
            - insertOne:
                document: { _id: 1 }
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: insert
              databaseName: *databaseName
              command:
                insert: *collectionName
                maxTimeMS: { $$exists: false }

  - description: "timeoutMS can be configured for an operation - createIndex on collection"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["createIndexes"]
              blockConnection: true
              blockTimeMS: 15
      - name: createIndex
        object: *collection
        arguments:
          timeoutMS: 1000
          keys: { x: 1 }
          name: "x_1"
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: createIndexes
              databaseName: *databaseName
              command:
                createIndexes: *collectionName
                maxTimeMS: { $$type: ["int", "long"] }

  - description: "timeoutMS can be set to 0 for an operation - createIndex on collection"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["createIndexes"]
              blockConnection: true
              blockTimeMS: 15
      - name: createIndex
        object: *collection
        arguments:
          timeoutMS: 0
          keys: { x: 1 }
          name: "x_1"
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: createIndexes
              databaseName: *databaseName
              command:
                createIndexes: *collectionName
                maxTimeMS: { $$exists: false }

  - description: "timeoutMS can be configured for an operation - dropIndex on collection"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["dropIndexes"]
              blockConnection: true
              blockTimeMS: 15
      - name: dropIndex
        object: *collection
        arguments:
          timeoutMS: 1000
          name: "x_1"
        expectError:
          isTimeoutError: false # IndexNotFound
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: dropIndexes
              databaseName: *databaseName
              command:
                dropIndexes: *collectionName
                maxTimeMS: { $$type: ["int", "long"] }

  - description: "timeoutMS can be set to 0 for an operation - dropIndex on collection"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["dropIndexes"]
              blockConnection: true
              blockTimeMS: 15
      - name: dropIndex
        object: *collection
        arguments:
          timeoutMS: 0
          name: "x_1"
        expectError:
          isTimeoutError: false # IndexNotFound
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: dropIndexes
              databaseName: *databaseName
              command:
                dropIndexes: *collectionName
                maxTimeMS: { $$exists: false }

  - description: "timeoutMS can be configured for an operation - dropIndexes on collection"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["dropIndexes"]
              blockConnection: true
              blockTimeMS: 15
      - name: dropIndexes
        object: *collection
        arguments:
          timeoutMS: 1000
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: dropIndexes
              databaseName: *databaseName
              command:
                dropIndexes: *collectionName
                maxTimeMS: { $$type: ["int", "long"] }

  - description: "timeoutMS can be set to 0 for an operation - dropIndexes on collection"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["dropIndexes"]
              blockConnection: true
              blockTimeMS: 15
      - name: dropIndexes
        object: *collection
        arguments:
          timeoutMS: 0
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: dropIndexes
              databaseName: *databaseName
              command:
                dropIndexes: *collectionName
                maxTimeMS: { $$exists: false }

# ===== mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_operations_timeout/retryability-legacy-timeouts.yml =====

# Tests in this file are generated from retryability-legacy-timeouts.yml.template.

description: "legacy timeouts behave correctly for retryable operations"

schemaVersion: "1.9"

runOnRequirements:
  - minServerVersion: "4.4"
    topologies: ["replicaset", "sharded"]

createEntities:
  - client:
      id: &failPointClient failPointClient
      useMultipleMongoses: false
  - client:
      id: &client client
      uriOptions:
        socketTimeoutMS: 100
      useMultipleMongoses: false
      observeEvents:
        - commandStartedEvent
      ignoreCommandMonitoringEvents:
        - killCursors
  - database:
      id: &database database
      client: *client
      databaseName: &databaseName test
  - collection:
      id: &collection collection
      database: *database
      collectionName: &collectionName coll

initialData:
  - collectionName: *collectionName
    databaseName: *databaseName
    documents: []

tests:
  # For each retryable operation, run two tests:
  #
  # 1. Socket timeouts are retried once - Each test constructs a client entity with
  # socketTimeoutMS=100, configures a fail point to block the operation once for 125ms, and
  # expects the operation to succeed.
  #
  # 2. Operations fail after two consecutive socket timeouts - Same as (1) but the fail point is
  # configured to block the operation twice and the test expects the operation to fail.
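  #
  # Editorial note on the timing arithmetic: socketTimeoutMS=100 is less than blockTimeMS=125,
  # so every blocked attempt trips the socket timeout. With mode: { times: 1 } only the first
  # attempt is blocked and the single automatic retry succeeds; with mode: { times: 2 } the
  # retry is blocked as well and a client-side network error surfaces. A rough Ruby equivalent
  # of the client entity under test (a sketch only; :socket_timeout is expressed in seconds):
  #
  #   client = Mongo::Client.new(uri, socket_timeout: 0.1, retry_writes: true)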
- description: "operation succeeds after one socket timeout - insertOne on collection" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["insert"] blockConnection: true blockTimeMS: 125 - name: insertOne object: *collection arguments: document: { x: 1 } expectEvents: - client: *client events: - commandStartedEvent: commandName: insert databaseName: *databaseName command: insert: *collectionName - commandStartedEvent: commandName: insert databaseName: *databaseName command: insert: *collectionName - description: "operation fails after two consecutive socket timeouts - insertOne on collection" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 2 } data: failCommands: ["insert"] blockConnection: true blockTimeMS: 125 - name: insertOne object: *collection arguments: document: { x: 1 } expectError: # Network errors are considered client errors by the unified test format spec. isClientError: true expectEvents: - client: *client events: - commandStartedEvent: commandName: insert databaseName: *databaseName command: insert: *collectionName - commandStartedEvent: commandName: insert databaseName: *databaseName command: insert: *collectionName - description: "operation succeeds after one socket timeout - insertMany on collection" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["insert"] blockConnection: true blockTimeMS: 125 - name: insertMany object: *collection arguments: documents: - { x: 1 } expectEvents: - client: *client events: - commandStartedEvent: commandName: insert databaseName: *databaseName command: insert: *collectionName - commandStartedEvent: commandName: insert databaseName: *databaseName command: insert: *collectionName - description: "operation fails after two consecutive socket timeouts - insertMany on collection" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 2 } data: failCommands: ["insert"] blockConnection: true blockTimeMS: 125 - name: insertMany object: *collection arguments: documents: - { x: 1 } expectError: # Network errors are considered client errors by the unified test format spec. 
isClientError: true expectEvents: - client: *client events: - commandStartedEvent: commandName: insert databaseName: *databaseName command: insert: *collectionName - commandStartedEvent: commandName: insert databaseName: *databaseName command: insert: *collectionName - description: "operation succeeds after one socket timeout - deleteOne on collection" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["delete"] blockConnection: true blockTimeMS: 125 - name: deleteOne object: *collection arguments: filter: {} expectEvents: - client: *client events: - commandStartedEvent: commandName: delete databaseName: *databaseName command: delete: *collectionName - commandStartedEvent: commandName: delete databaseName: *databaseName command: delete: *collectionName - description: "operation fails after two consecutive socket timeouts - deleteOne on collection" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 2 } data: failCommands: ["delete"] blockConnection: true blockTimeMS: 125 - name: deleteOne object: *collection arguments: filter: {} expectError: # Network errors are considered client errors by the unified test format spec. isClientError: true expectEvents: - client: *client events: - commandStartedEvent: commandName: delete databaseName: *databaseName command: delete: *collectionName - commandStartedEvent: commandName: delete databaseName: *databaseName command: delete: *collectionName - description: "operation succeeds after one socket timeout - replaceOne on collection" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["update"] blockConnection: true blockTimeMS: 125 - name: replaceOne object: *collection arguments: filter: {} replacement: { x: 1 } expectEvents: - client: *client events: - commandStartedEvent: commandName: update databaseName: *databaseName command: update: *collectionName - commandStartedEvent: commandName: update databaseName: *databaseName command: update: *collectionName - description: "operation fails after two consecutive socket timeouts - replaceOne on collection" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 2 } data: failCommands: ["update"] blockConnection: true blockTimeMS: 125 - name: replaceOne object: *collection arguments: filter: {} replacement: { x: 1 } expectError: # Network errors are considered client errors by the unified test format spec. 
isClientError: true expectEvents: - client: *client events: - commandStartedEvent: commandName: update databaseName: *databaseName command: update: *collectionName - commandStartedEvent: commandName: update databaseName: *databaseName command: update: *collectionName - description: "operation succeeds after one socket timeout - updateOne on collection" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["update"] blockConnection: true blockTimeMS: 125 - name: updateOne object: *collection arguments: filter: {} update: { $set: { x: 1 } } expectEvents: - client: *client events: - commandStartedEvent: commandName: update databaseName: *databaseName command: update: *collectionName - commandStartedEvent: commandName: update databaseName: *databaseName command: update: *collectionName - description: "operation fails after two consecutive socket timeouts - updateOne on collection" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 2 } data: failCommands: ["update"] blockConnection: true blockTimeMS: 125 - name: updateOne object: *collection arguments: filter: {} update: { $set: { x: 1 } } expectError: # Network errors are considered client errors by the unified test format spec. isClientError: true expectEvents: - client: *client events: - commandStartedEvent: commandName: update databaseName: *databaseName command: update: *collectionName - commandStartedEvent: commandName: update databaseName: *databaseName command: update: *collectionName - description: "operation succeeds after one socket timeout - findOneAndDelete on collection" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["findAndModify"] blockConnection: true blockTimeMS: 125 - name: findOneAndDelete object: *collection arguments: filter: {} expectEvents: - client: *client events: - commandStartedEvent: commandName: findAndModify databaseName: *databaseName command: findAndModify: *collectionName - commandStartedEvent: commandName: findAndModify databaseName: *databaseName command: findAndModify: *collectionName - description: "operation fails after two consecutive socket timeouts - findOneAndDelete on collection" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 2 } data: failCommands: ["findAndModify"] blockConnection: true blockTimeMS: 125 - name: findOneAndDelete object: *collection arguments: filter: {} expectError: # Network errors are considered client errors by the unified test format spec. 
isClientError: true expectEvents: - client: *client events: - commandStartedEvent: commandName: findAndModify databaseName: *databaseName command: findAndModify: *collectionName - commandStartedEvent: commandName: findAndModify databaseName: *databaseName command: findAndModify: *collectionName - description: "operation succeeds after one socket timeout - findOneAndReplace on collection" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["findAndModify"] blockConnection: true blockTimeMS: 125 - name: findOneAndReplace object: *collection arguments: filter: {} replacement: { x: 1 } expectEvents: - client: *client events: - commandStartedEvent: commandName: findAndModify databaseName: *databaseName command: findAndModify: *collectionName - commandStartedEvent: commandName: findAndModify databaseName: *databaseName command: findAndModify: *collectionName - description: "operation fails after two consecutive socket timeouts - findOneAndReplace on collection" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 2 } data: failCommands: ["findAndModify"] blockConnection: true blockTimeMS: 125 - name: findOneAndReplace object: *collection arguments: filter: {} replacement: { x: 1 } expectError: # Network errors are considered client errors by the unified test format spec. isClientError: true expectEvents: - client: *client events: - commandStartedEvent: commandName: findAndModify databaseName: *databaseName command: findAndModify: *collectionName - commandStartedEvent: commandName: findAndModify databaseName: *databaseName command: findAndModify: *collectionName - description: "operation succeeds after one socket timeout - findOneAndUpdate on collection" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["findAndModify"] blockConnection: true blockTimeMS: 125 - name: findOneAndUpdate object: *collection arguments: filter: {} update: { $set: { x: 1 } } expectEvents: - client: *client events: - commandStartedEvent: commandName: findAndModify databaseName: *databaseName command: findAndModify: *collectionName - commandStartedEvent: commandName: findAndModify databaseName: *databaseName command: findAndModify: *collectionName - description: "operation fails after two consecutive socket timeouts - findOneAndUpdate on collection" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 2 } data: failCommands: ["findAndModify"] blockConnection: true blockTimeMS: 125 - name: findOneAndUpdate object: *collection arguments: filter: {} update: { $set: { x: 1 } } expectError: # Network errors are considered client errors by the unified test format spec. 
isClientError: true expectEvents: - client: *client events: - commandStartedEvent: commandName: findAndModify databaseName: *databaseName command: findAndModify: *collectionName - commandStartedEvent: commandName: findAndModify databaseName: *databaseName command: findAndModify: *collectionName - description: "operation succeeds after one socket timeout - bulkWrite on collection" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["insert"] blockConnection: true blockTimeMS: 125 - name: bulkWrite object: *collection arguments: requests: - insertOne: document: { _id: 1 } expectEvents: - client: *client events: - commandStartedEvent: commandName: insert databaseName: *databaseName command: insert: *collectionName - commandStartedEvent: commandName: insert databaseName: *databaseName command: insert: *collectionName - description: "operation fails after two consecutive socket timeouts - bulkWrite on collection" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 2 } data: failCommands: ["insert"] blockConnection: true blockTimeMS: 125 - name: bulkWrite object: *collection arguments: requests: - insertOne: document: { _id: 1 } expectError: # Network errors are considered client errors by the unified test format spec. isClientError: true expectEvents: - client: *client events: - commandStartedEvent: commandName: insert databaseName: *databaseName command: insert: *collectionName - commandStartedEvent: commandName: insert databaseName: *databaseName command: insert: *collectionName - description: "operation succeeds after one socket timeout - listDatabases on client" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["listDatabases"] blockConnection: true blockTimeMS: 125 - name: listDatabases object: *client arguments: filter: {} expectEvents: - client: *client events: - commandStartedEvent: commandName: listDatabases databaseName: admin command: listDatabases: 1 - commandStartedEvent: commandName: listDatabases databaseName: admin command: listDatabases: 1 - description: "operation fails after two consecutive socket timeouts - listDatabases on client" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 2 } data: failCommands: ["listDatabases"] blockConnection: true blockTimeMS: 125 - name: listDatabases object: *client arguments: filter: {} expectError: # Network errors are considered client errors by the unified test format spec. 
isClientError: true expectEvents: - client: *client events: - commandStartedEvent: commandName: listDatabases databaseName: admin command: listDatabases: 1 - commandStartedEvent: commandName: listDatabases databaseName: admin command: listDatabases: 1 - description: "operation succeeds after one socket timeout - listDatabaseNames on client" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["listDatabases"] blockConnection: true blockTimeMS: 125 - name: listDatabaseNames object: *client expectEvents: - client: *client events: - commandStartedEvent: commandName: listDatabases databaseName: admin command: listDatabases: 1 - commandStartedEvent: commandName: listDatabases databaseName: admin command: listDatabases: 1 - description: "operation fails after two consecutive socket timeouts - listDatabaseNames on client" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 2 } data: failCommands: ["listDatabases"] blockConnection: true blockTimeMS: 125 - name: listDatabaseNames object: *client expectError: # Network errors are considered client errors by the unified test format spec. isClientError: true expectEvents: - client: *client events: - commandStartedEvent: commandName: listDatabases databaseName: admin command: listDatabases: 1 - commandStartedEvent: commandName: listDatabases databaseName: admin command: listDatabases: 1 - description: "operation succeeds after one socket timeout - createChangeStream on client" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["aggregate"] blockConnection: true blockTimeMS: 125 - name: createChangeStream object: *client arguments: pipeline: [] expectEvents: - client: *client events: - commandStartedEvent: commandName: aggregate databaseName: admin command: aggregate: 1 - commandStartedEvent: commandName: aggregate databaseName: admin command: aggregate: 1 - description: "operation fails after two consecutive socket timeouts - createChangeStream on client" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 2 } data: failCommands: ["aggregate"] blockConnection: true blockTimeMS: 125 - name: createChangeStream object: *client arguments: pipeline: [] expectError: # Network errors are considered client errors by the unified test format spec. 
isClientError: true expectEvents: - client: *client events: - commandStartedEvent: commandName: aggregate databaseName: admin command: aggregate: 1 - commandStartedEvent: commandName: aggregate databaseName: admin command: aggregate: 1 - description: "operation succeeds after one socket timeout - aggregate on database" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["aggregate"] blockConnection: true blockTimeMS: 125 - name: aggregate object: *database arguments: pipeline: [ { $listLocalSessions: {} }, { $limit: 1 } ] expectEvents: - client: *client events: - commandStartedEvent: commandName: aggregate databaseName: *databaseName command: aggregate: 1 - commandStartedEvent: commandName: aggregate databaseName: *databaseName command: aggregate: 1 - description: "operation fails after two consecutive socket timeouts - aggregate on database" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 2 } data: failCommands: ["aggregate"] blockConnection: true blockTimeMS: 125 - name: aggregate object: *database arguments: pipeline: [ { $listLocalSessions: {} }, { $limit: 1 } ] expectError: # Network errors are considered client errors by the unified test format spec. isClientError: true expectEvents: - client: *client events: - commandStartedEvent: commandName: aggregate databaseName: *databaseName command: aggregate: 1 - commandStartedEvent: commandName: aggregate databaseName: *databaseName command: aggregate: 1 - description: "operation succeeds after one socket timeout - listCollections on database" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["listCollections"] blockConnection: true blockTimeMS: 125 - name: listCollections object: *database arguments: filter: {} expectEvents: - client: *client events: - commandStartedEvent: commandName: listCollections databaseName: *databaseName command: listCollections: 1 - commandStartedEvent: commandName: listCollections databaseName: *databaseName command: listCollections: 1 - description: "operation fails after two consecutive socket timeouts - listCollections on database" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 2 } data: failCommands: ["listCollections"] blockConnection: true blockTimeMS: 125 - name: listCollections object: *database arguments: filter: {} expectError: # Network errors are considered client errors by the unified test format spec. 
isClientError: true expectEvents: - client: *client events: - commandStartedEvent: commandName: listCollections databaseName: *databaseName command: listCollections: 1 - commandStartedEvent: commandName: listCollections databaseName: *databaseName command: listCollections: 1 - description: "operation succeeds after one socket timeout - listCollectionNames on database" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["listCollections"] blockConnection: true blockTimeMS: 125 - name: listCollectionNames object: *database arguments: filter: {} expectEvents: - client: *client events: - commandStartedEvent: commandName: listCollections databaseName: *databaseName command: listCollections: 1 - commandStartedEvent: commandName: listCollections databaseName: *databaseName command: listCollections: 1 - description: "operation fails after two consecutive socket timeouts - listCollectionNames on database" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 2 } data: failCommands: ["listCollections"] blockConnection: true blockTimeMS: 125 - name: listCollectionNames object: *database arguments: filter: {} expectError: # Network errors are considered client errors by the unified test format spec. isClientError: true expectEvents: - client: *client events: - commandStartedEvent: commandName: listCollections databaseName: *databaseName command: listCollections: 1 - commandStartedEvent: commandName: listCollections databaseName: *databaseName command: listCollections: 1 - description: "operation succeeds after one socket timeout - createChangeStream on database" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["aggregate"] blockConnection: true blockTimeMS: 125 - name: createChangeStream object: *database arguments: pipeline: [] expectEvents: - client: *client events: - commandStartedEvent: commandName: aggregate databaseName: *databaseName command: aggregate: 1 - commandStartedEvent: commandName: aggregate databaseName: *databaseName command: aggregate: 1 - description: "operation fails after two consecutive socket timeouts - createChangeStream on database" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 2 } data: failCommands: ["aggregate"] blockConnection: true blockTimeMS: 125 - name: createChangeStream object: *database arguments: pipeline: [] expectError: # Network errors are considered client errors by the unified test format spec. 
isClientError: true expectEvents: - client: *client events: - commandStartedEvent: commandName: aggregate databaseName: *databaseName command: aggregate: 1 - commandStartedEvent: commandName: aggregate databaseName: *databaseName command: aggregate: 1 - description: "operation succeeds after one socket timeout - aggregate on collection" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["aggregate"] blockConnection: true blockTimeMS: 125 - name: aggregate object: *collection arguments: pipeline: [] expectEvents: - client: *client events: - commandStartedEvent: commandName: aggregate databaseName: *databaseName command: aggregate: *collectionName - commandStartedEvent: commandName: aggregate databaseName: *databaseName command: aggregate: *collectionName - description: "operation fails after two consecutive socket timeouts - aggregate on collection" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 2 } data: failCommands: ["aggregate"] blockConnection: true blockTimeMS: 125 - name: aggregate object: *collection arguments: pipeline: [] expectError: # Network errors are considered client errors by the unified test format spec. isClientError: true expectEvents: - client: *client events: - commandStartedEvent: commandName: aggregate databaseName: *databaseName command: aggregate: *collectionName - commandStartedEvent: commandName: aggregate databaseName: *databaseName command: aggregate: *collectionName - description: "operation succeeds after one socket timeout - count on collection" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["count"] blockConnection: true blockTimeMS: 125 - name: count object: *collection arguments: filter: {} expectEvents: - client: *client events: - commandStartedEvent: commandName: count databaseName: *databaseName command: count: *collectionName - commandStartedEvent: commandName: count databaseName: *databaseName command: count: *collectionName - description: "operation fails after two consecutive socket timeouts - count on collection" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 2 } data: failCommands: ["count"] blockConnection: true blockTimeMS: 125 - name: count object: *collection arguments: filter: {} expectError: # Network errors are considered client errors by the unified test format spec. 
isClientError: true expectEvents: - client: *client events: - commandStartedEvent: commandName: count databaseName: *databaseName command: count: *collectionName - commandStartedEvent: commandName: count databaseName: *databaseName command: count: *collectionName - description: "operation succeeds after one socket timeout - countDocuments on collection" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["aggregate"] blockConnection: true blockTimeMS: 125 - name: countDocuments object: *collection arguments: filter: {} expectEvents: - client: *client events: - commandStartedEvent: commandName: aggregate databaseName: *databaseName command: aggregate: *collectionName - commandStartedEvent: commandName: aggregate databaseName: *databaseName command: aggregate: *collectionName - description: "operation fails after two consecutive socket timeouts - countDocuments on collection" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 2 } data: failCommands: ["aggregate"] blockConnection: true blockTimeMS: 125 - name: countDocuments object: *collection arguments: filter: {} expectError: # Network errors are considered client errors by the unified test format spec. isClientError: true expectEvents: - client: *client events: - commandStartedEvent: commandName: aggregate databaseName: *databaseName command: aggregate: *collectionName - commandStartedEvent: commandName: aggregate databaseName: *databaseName command: aggregate: *collectionName - description: "operation succeeds after one socket timeout - estimatedDocumentCount on collection" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["count"] blockConnection: true blockTimeMS: 125 - name: estimatedDocumentCount object: *collection expectEvents: - client: *client events: - commandStartedEvent: commandName: count databaseName: *databaseName command: count: *collectionName - commandStartedEvent: commandName: count databaseName: *databaseName command: count: *collectionName - description: "operation fails after two consecutive socket timeouts - estimatedDocumentCount on collection" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 2 } data: failCommands: ["count"] blockConnection: true blockTimeMS: 125 - name: estimatedDocumentCount object: *collection expectError: # Network errors are considered client errors by the unified test format spec. 
isClientError: true expectEvents: - client: *client events: - commandStartedEvent: commandName: count databaseName: *databaseName command: count: *collectionName - commandStartedEvent: commandName: count databaseName: *databaseName command: count: *collectionName - description: "operation succeeds after one socket timeout - distinct on collection" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["distinct"] blockConnection: true blockTimeMS: 125 - name: distinct object: *collection arguments: fieldName: x filter: {} expectEvents: - client: *client events: - commandStartedEvent: commandName: distinct databaseName: *databaseName command: distinct: *collectionName - commandStartedEvent: commandName: distinct databaseName: *databaseName command: distinct: *collectionName - description: "operation fails after two consecutive socket timeouts - distinct on collection" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 2 } data: failCommands: ["distinct"] blockConnection: true blockTimeMS: 125 - name: distinct object: *collection arguments: fieldName: x filter: {} expectError: # Network errors are considered client errors by the unified test format spec. isClientError: true expectEvents: - client: *client events: - commandStartedEvent: commandName: distinct databaseName: *databaseName command: distinct: *collectionName - commandStartedEvent: commandName: distinct databaseName: *databaseName command: distinct: *collectionName - description: "operation succeeds after one socket timeout - find on collection" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["find"] blockConnection: true blockTimeMS: 125 - name: find object: *collection arguments: filter: {} expectEvents: - client: *client events: - commandStartedEvent: commandName: find databaseName: *databaseName command: find: *collectionName - commandStartedEvent: commandName: find databaseName: *databaseName command: find: *collectionName - description: "operation fails after two consecutive socket timeouts - find on collection" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 2 } data: failCommands: ["find"] blockConnection: true blockTimeMS: 125 - name: find object: *collection arguments: filter: {} expectError: # Network errors are considered client errors by the unified test format spec. 
isClientError: true expectEvents: - client: *client events: - commandStartedEvent: commandName: find databaseName: *databaseName command: find: *collectionName - commandStartedEvent: commandName: find databaseName: *databaseName command: find: *collectionName - description: "operation succeeds after one socket timeout - findOne on collection" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["find"] blockConnection: true blockTimeMS: 125 - name: findOne object: *collection arguments: filter: {} expectEvents: - client: *client events: - commandStartedEvent: commandName: find databaseName: *databaseName command: find: *collectionName - commandStartedEvent: commandName: find databaseName: *databaseName command: find: *collectionName - description: "operation fails after two consecutive socket timeouts - findOne on collection" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 2 } data: failCommands: ["find"] blockConnection: true blockTimeMS: 125 - name: findOne object: *collection arguments: filter: {} expectError: # Network errors are considered client errors by the unified test format spec. isClientError: true expectEvents: - client: *client events: - commandStartedEvent: commandName: find databaseName: *databaseName command: find: *collectionName - commandStartedEvent: commandName: find databaseName: *databaseName command: find: *collectionName - description: "operation succeeds after one socket timeout - listIndexes on collection" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["listIndexes"] blockConnection: true blockTimeMS: 125 - name: listIndexes object: *collection expectEvents: - client: *client events: - commandStartedEvent: commandName: listIndexes databaseName: *databaseName command: listIndexes: *collectionName - commandStartedEvent: commandName: listIndexes databaseName: *databaseName command: listIndexes: *collectionName - description: "operation fails after two consecutive socket timeouts - listIndexes on collection" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 2 } data: failCommands: ["listIndexes"] blockConnection: true blockTimeMS: 125 - name: listIndexes object: *collection expectError: # Network errors are considered client errors by the unified test format spec. 
isClientError: true expectEvents: - client: *client events: - commandStartedEvent: commandName: listIndexes databaseName: *databaseName command: listIndexes: *collectionName - commandStartedEvent: commandName: listIndexes databaseName: *databaseName command: listIndexes: *collectionName - description: "operation succeeds after one socket timeout - createChangeStream on collection" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["aggregate"] blockConnection: true blockTimeMS: 125 - name: createChangeStream object: *collection arguments: pipeline: [] expectEvents: - client: *client events: - commandStartedEvent: commandName: aggregate databaseName: *databaseName command: aggregate: *collectionName - commandStartedEvent: commandName: aggregate databaseName: *databaseName command: aggregate: *collectionName - description: "operation fails after two consecutive socket timeouts - createChangeStream on collection" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 2 } data: failCommands: ["aggregate"] blockConnection: true blockTimeMS: 125 - name: createChangeStream object: *collection arguments: pipeline: [] expectError: # Network errors are considered client errors by the unified test format spec. isClientError: true expectEvents: - client: *client events: - commandStartedEvent: commandName: aggregate databaseName: *databaseName command: aggregate: *collectionName - commandStartedEvent: commandName: aggregate databaseName: *databaseName command: aggregate: *collectionName

mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_operations_timeout/retryability-timeoutMS.yml

# Tests in this file are generated from retryability-timeoutMS.yml.template.
description: "timeoutMS behaves correctly for retryable operations"

schemaVersion: "1.9"

# failCommand is available on 4.0+ replica sets and 4.2+ sharded clusters.
runOnRequirements:
  - minServerVersion: "4.0"
    topologies: ["replicaset"]
  - minServerVersion: "4.2"
    topologies: ["sharded"]

createEntities:
  - client:
      id: &failPointClient failPointClient
      useMultipleMongoses: false
  - client:
      id: &client client
      uriOptions:
        timeoutMS: 100
      useMultipleMongoses: false
      observeEvents:
        - commandStartedEvent
      ignoreCommandMonitoringEvents:
        - killCursors
  - database:
      id: &database database
      client: *client
      databaseName: &databaseName test
  - collection:
      id: &collection collection
      database: *database
      collectionName: &collectionName coll

initialData:
  - collectionName: *collectionName
    databaseName: *databaseName
    documents: []

tests:
  # For each retryable operation, run three tests:
  #
  # 1. timeoutMS applies to the whole operation, not to individual attempts - Client timeoutMS=100 and the operation
  #    fails with a retryable error after being blocked server-side for 60ms. The operation should fail with a
  #    timeout error because the second attempt should take it over the 100ms limit. This test only runs on 4.4+
  #    because it uses the blockConnection option in failCommand.
  #
  # 2. operation is retried multiple times if timeoutMS is set to a non-zero value - Client timeoutMS=100 and the
  #    operation fails with a retryable error twice. Drivers should send the original operation and two retries, the
  #    second of which should succeed.
  #
  # 3. operation is retried multiple times if timeoutMS is set to zero - Override timeoutMS to zero for the operation
  #    and set a fail point to force a retryable error twice. Drivers should send the original operation and two
  #    retries, the second of which should succeed.
  #
  # The fail points in these tests use error code 7 (HostNotFound) because it is a retryable error but does not
  # trigger an SDAM state change, so we don't lose any time to server rediscovery. The tests also explicitly specify
  # an errorLabels array in the fail point to avoid behavioral differences among server types and ensure that the
  # error will be considered retryable.
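  # The retryable-error shape described above can likewise be reproduced by hand.
  # A minimal sketch from the Ruby driver (not part of the generated tests; the
  # host, port, and enableTestCommands server parameter are assumptions, and
  # timeout_ms is the driver's spelling of the timeoutMS option):
  #
  #   require 'mongo'
  #
  #   client = Mongo::Client.new(['localhost:27017'], timeout_ms: 100)
  #
  #   # Fail the next two "insert" commands with HostNotFound (code 7), labeled
  #   # retryable, so the third attempt - the second retry - succeeds.
  #   client.use(:admin).database.command(
  #     configureFailPoint: 'failCommand',
  #     mode: { times: 2 },
  #     data: {
  #       failCommands: ['insert'],
  #       errorCode: 7,
  #       errorLabels: ['RetryableWriteError']
  #     }
  #   )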
- description: "timeoutMS applies to whole operation, not individual attempts - insertOne on collection" runOnRequirements: - minServerVersion: "4.4" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 4 } data: failCommands: ["insert"] blockConnection: true blockTimeMS: 60 errorCode: 7 errorLabels: ["RetryableWriteError"] - name: insertOne object: *collection arguments: document: { x: 1 } expectError: isTimeoutError: true - description: "operation is retried multiple times for non-zero timeoutMS - insertOne on collection" runOnRequirements: - minServerVersion: "4.3.1" # failCommand errorLabels option operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 2 } data: failCommands: ["insert"] errorCode: 7 closeConnection: false errorLabels: ["RetryableWriteError"] - name: insertOne object: *collection arguments: timeoutMS: 1000 document: { x: 1 } expectEvents: - client: *client events: - commandStartedEvent: commandName: insert databaseName: *databaseName command: insert: *collectionName maxTimeMS: { $$type: ["int", "long"] } - commandStartedEvent: commandName: insert databaseName: *databaseName command: insert: *collectionName maxTimeMS: { $$type: ["int", "long"] } - commandStartedEvent: commandName: insert databaseName: *databaseName command: insert: *collectionName maxTimeMS: { $$type: ["int", "long"] } - description: "operation is retried multiple times if timeoutMS is zero - insertOne on collection" runOnRequirements: - minServerVersion: "4.3.1" # failCommand errorLabels option operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 2 } data: failCommands: ["insert"] errorCode: 7 closeConnection: false errorLabels: ["RetryableWriteError"] - name: insertOne object: *collection arguments: timeoutMS: 0 document: { x: 1 } expectEvents: - client: *client events: - commandStartedEvent: commandName: insert databaseName: *databaseName command: insert: *collectionName maxTimeMS: { $$exists: false } - commandStartedEvent: commandName: insert databaseName: *databaseName command: insert: *collectionName maxTimeMS: { $$exists: false } - commandStartedEvent: commandName: insert databaseName: *databaseName command: insert: *collectionName maxTimeMS: { $$exists: false } - description: "timeoutMS applies to whole operation, not individual attempts - insertMany on collection" runOnRequirements: - minServerVersion: "4.4" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 4 } data: failCommands: ["insert"] blockConnection: true blockTimeMS: 60 errorCode: 7 errorLabels: ["RetryableWriteError"] - name: insertMany object:
*collection arguments: documents: - { x: 1 } expectError: isTimeoutError: true - description: "operation is retried multiple times for non-zero timeoutMS - insertMany on collection" runOnRequirements: - minServerVersion: "4.3.1" # failCommand errorLabels option operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 2 } data: failCommands: ["insert"] errorCode: 7 closeConnection: false errorLabels: ["RetryableWriteError"] - name: insertMany object: *collection arguments: timeoutMS: 1000 documents: - { x: 1 } expectEvents: - client: *client events: - commandStartedEvent: commandName: insert databaseName: *databaseName command: insert: *collectionName maxTimeMS: { $$type: ["int", "long"] } - commandStartedEvent: commandName: insert databaseName: *databaseName command: insert: *collectionName maxTimeMS: { $$type: ["int", "long"] } - commandStartedEvent: commandName: insert databaseName: *databaseName command: insert: *collectionName maxTimeMS: { $$type: ["int", "long"] } - description: "operation is retried multiple times if timeoutMS is zero - insertMany on collection" runOnRequirements: - minServerVersion: "4.3.1" # failCommand errorLabels option operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 2 } data: failCommands: ["insert"] errorCode: 7 closeConnection: false errorLabels: ["RetryableWriteError"] - name: insertMany object: *collection arguments: timeoutMS: 0 documents: - { x: 1 } expectEvents: - client: *client events: - commandStartedEvent: commandName: insert databaseName: *databaseName command: insert: *collectionName maxTimeMS: { $$exists: false } - commandStartedEvent: commandName: insert databaseName: *databaseName command: insert: *collectionName maxTimeMS: { $$exists: false } - commandStartedEvent: commandName: insert databaseName: *databaseName command: insert: *collectionName maxTimeMS: { $$exists: false } - description: "timeoutMS applies to whole operation, not individual attempts - deleteOne on collection" runOnRequirements: - minServerVersion: "4.4" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 4 } data: failCommands: ["delete"] blockConnection: true blockTimeMS: 60 errorCode: 7 errorLabels: ["RetryableWriteError"] - name: deleteOne object: *collection arguments: filter: {} expectError: isTimeoutError: true - description: "operation is retried multiple times for non-zero timeoutMS - deleteOne on collection" runOnRequirements: - minServerVersion: "4.3.1" # failCommand errorLabels option operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 2 } data: failCommands: ["delete"] errorCode: 7 closeConnection: false errorLabels: ["RetryableWriteError"] - name: deleteOne object: *collection arguments: timeoutMS: 1000 filter: {} expectEvents: - client: *client events: - commandStartedEvent: commandName: delete databaseName: *databaseName command: delete: *collectionName maxTimeMS: { $$type: ["int", "long"] } - commandStartedEvent: commandName: delete databaseName: *databaseName command: delete: *collectionName maxTimeMS: { $$type: ["int", "long"] } - commandStartedEvent: commandName: delete databaseName: *databaseName command: delete: *collectionName maxTimeMS: { $$type: ["int", "long"] } - description: "operation is 
retried multiple times if timeoutMS is zero - deleteOne on collection" runOnRequirements: - minServerVersion: "4.3.1" # failCommand errorLabels option operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 2 } data: failCommands: ["delete"] errorCode: 7 closeConnection: false errorLabels: ["RetryableWriteError"] - name: deleteOne object: *collection arguments: timeoutMS: 0 filter: {} expectEvents: - client: *client events: - commandStartedEvent: commandName: delete databaseName: *databaseName command: delete: *collectionName maxTimeMS: { $$exists: false } - commandStartedEvent: commandName: delete databaseName: *databaseName command: delete: *collectionName maxTimeMS: { $$exists: false } - commandStartedEvent: commandName: delete databaseName: *databaseName command: delete: *collectionName maxTimeMS: { $$exists: false } - description: "timeoutMS applies to whole operation, not individual attempts - replaceOne on collection" runOnRequirements: - minServerVersion: "4.4" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 4 } data: failCommands: ["update"] blockConnection: true blockTimeMS: 60 errorCode: 7 errorLabels: ["RetryableWriteError"] - name: replaceOne object: *collection arguments: filter: {} replacement: { x: 1 } expectError: isTimeoutError: true - description: "operation is retried multiple times for non-zero timeoutMS - replaceOne on collection" runOnRequirements: - minServerVersion: "4.3.1" # failCommand errorLabels option operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 2 } data: failCommands: ["update"] errorCode: 7 closeConnection: false errorLabels: ["RetryableWriteError"] - name: replaceOne object: *collection arguments: timeoutMS: 1000 filter: {} replacement: { x: 1 } expectEvents: - client: *client events: - commandStartedEvent: commandName: update databaseName: *databaseName command: update: *collectionName maxTimeMS: { $$type: ["int", "long"] } - commandStartedEvent: commandName: update databaseName: *databaseName command: update: *collectionName maxTimeMS: { $$type: ["int", "long"] } - commandStartedEvent: commandName: update databaseName: *databaseName command: update: *collectionName maxTimeMS: { $$type: ["int", "long"] } - description: "operation is retried multiple times if timeoutMS is zero - replaceOne on collection" runOnRequirements: - minServerVersion: "4.3.1" # failCommand errorLabels option operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 2 } data: failCommands: ["update"] errorCode: 7 closeConnection: false errorLabels: ["RetryableWriteError"] - name: replaceOne object: *collection arguments: timeoutMS: 0 filter: {} replacement: { x: 1 } expectEvents: - client: *client events: - commandStartedEvent: commandName: update databaseName: *databaseName command: update: *collectionName maxTimeMS: { $$exists: false } - commandStartedEvent: commandName: update databaseName: *databaseName command: update: *collectionName maxTimeMS: { $$exists: false } - commandStartedEvent: commandName: update databaseName: *databaseName command: update: *collectionName maxTimeMS: { $$exists: false } - description: "timeoutMS applies to whole operation, not individual attempts - updateOne on collection" runOnRequirements: - 
minServerVersion: "4.4" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 4 } data: failCommands: ["update"] blockConnection: true blockTimeMS: 60 errorCode: 7 errorLabels: ["RetryableWriteError"] - name: updateOne object: *collection arguments: filter: {} update: { $set: { x: 1 } } expectError: isTimeoutError: true - description: "operation is retried multiple times for non-zero timeoutMS - updateOne on collection" runOnRequirements: - minServerVersion: "4.3.1" # failCommand errorLabels option operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 2 } data: failCommands: ["update"] errorCode: 7 closeConnection: false errorLabels: ["RetryableWriteError"] - name: updateOne object: *collection arguments: timeoutMS: 1000 filter: {} update: { $set: { x: 1 } } expectEvents: - client: *client events: - commandStartedEvent: commandName: update databaseName: *databaseName command: update: *collectionName maxTimeMS: { $$type: ["int", "long"] } - commandStartedEvent: commandName: update databaseName: *databaseName command: update: *collectionName maxTimeMS: { $$type: ["int", "long"] } - commandStartedEvent: commandName: update databaseName: *databaseName command: update: *collectionName maxTimeMS: { $$type: ["int", "long"] } - description: "operation is retried multiple times if timeoutMS is zero - updateOne on collection" runOnRequirements: - minServerVersion: "4.3.1" # failCommand errorLabels option operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 2 } data: failCommands: ["update"] errorCode: 7 closeConnection: false errorLabels: ["RetryableWriteError"] - name: updateOne object: *collection arguments: timeoutMS: 0 filter: {} update: { $set: { x: 1 } } expectEvents: - client: *client events: - commandStartedEvent: commandName: update databaseName: *databaseName command: update: *collectionName maxTimeMS: { $$exists: false } - commandStartedEvent: commandName: update databaseName: *databaseName command: update: *collectionName maxTimeMS: { $$exists: false } - commandStartedEvent: commandName: update databaseName: *databaseName command: update: *collectionName maxTimeMS: { $$exists: false } - description: "timeoutMS applies to whole operation, not individual attempts - findOneAndDelete on collection" runOnRequirements: - minServerVersion: "4.4" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 4 } data: failCommands: ["findAndModify"] blockConnection: true blockTimeMS: 60 errorCode: 7 errorLabels: ["RetryableWriteError"] - name: findOneAndDelete object: *collection arguments: filter: {} expectError: isTimeoutError: true - description: "operation is retried multiple times for non-zero timeoutMS - findOneAndDelete on collection" runOnRequirements: - minServerVersion: "4.3.1" # failCommand errorLabels option operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 2 } data: failCommands: ["findAndModify"] errorCode: 7 closeConnection: false errorLabels: ["RetryableWriteError"] - name: findOneAndDelete object: *collection arguments: timeoutMS: 1000 filter: {} expectEvents: - client: *client events: - commandStartedEvent: commandName: findAndModify 
databaseName: *databaseName command: findAndModify: *collectionName maxTimeMS: { $$type: ["int", "long"] } - commandStartedEvent: commandName: findAndModify databaseName: *databaseName command: findAndModify: *collectionName maxTimeMS: { $$type: ["int", "long"] } - commandStartedEvent: commandName: findAndModify databaseName: *databaseName command: findAndModify: *collectionName maxTimeMS: { $$type: ["int", "long"] } - description: "operation is retried multiple times if timeoutMS is zero - findOneAndDelete on collection" runOnRequirements: - minServerVersion: "4.3.1" # failCommand errorLabels option operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 2 } data: failCommands: ["findAndModify"] errorCode: 7 closeConnection: false errorLabels: ["RetryableWriteError"] - name: findOneAndDelete object: *collection arguments: timeoutMS: 0 filter: {} expectEvents: - client: *client events: - commandStartedEvent: commandName: findAndModify databaseName: *databaseName command: findAndModify: *collectionName maxTimeMS: { $$exists: false } - commandStartedEvent: commandName: findAndModify databaseName: *databaseName command: findAndModify: *collectionName maxTimeMS: { $$exists: false } - commandStartedEvent: commandName: findAndModify databaseName: *databaseName command: findAndModify: *collectionName maxTimeMS: { $$exists: false } - description: "timeoutMS applies to whole operation, not individual attempts - findOneAndReplace on collection" runOnRequirements: - minServerVersion: "4.4" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 4 } data: failCommands: ["findAndModify"] blockConnection: true blockTimeMS: 60 errorCode: 7 errorLabels: ["RetryableWriteError"] - name: findOneAndReplace object: *collection arguments: filter: {} replacement: { x: 1 } expectError: isTimeoutError: true - description: "operation is retried multiple times for non-zero timeoutMS - findOneAndReplace on collection" runOnRequirements: - minServerVersion: "4.3.1" # failCommand errorLabels option operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 2 } data: failCommands: ["findAndModify"] errorCode: 7 closeConnection: false errorLabels: ["RetryableWriteError"] - name: findOneAndReplace object: *collection arguments: timeoutMS: 1000 filter: {} replacement: { x: 1 } expectEvents: - client: *client events: - commandStartedEvent: commandName: findAndModify databaseName: *databaseName command: findAndModify: *collectionName maxTimeMS: { $$type: ["int", "long"] } - commandStartedEvent: commandName: findAndModify databaseName: *databaseName command: findAndModify: *collectionName maxTimeMS: { $$type: ["int", "long"] } - commandStartedEvent: commandName: findAndModify databaseName: *databaseName command: findAndModify: *collectionName maxTimeMS: { $$type: ["int", "long"] } - description: "operation is retried multiple times if timeoutMS is zero - findOneAndReplace on collection" runOnRequirements: - minServerVersion: "4.3.1" # failCommand errorLabels option operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 2 } data: failCommands: ["findAndModify"] errorCode: 7 closeConnection: false errorLabels: ["RetryableWriteError"] - name: findOneAndReplace object: *collection 
        arguments:
          timeoutMS: 0
          filter: {}
          replacement: { x: 1 }
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent: { commandName: findAndModify, databaseName: *databaseName, command: { findAndModify: *collectionName, maxTimeMS: { $$exists: false } } }
          - commandStartedEvent: { commandName: findAndModify, databaseName: *databaseName, command: { findAndModify: *collectionName, maxTimeMS: { $$exists: false } } }
          - commandStartedEvent: { commandName: findAndModify, databaseName: *databaseName, command: { findAndModify: *collectionName, maxTimeMS: { $$exists: false } } }

  - description: "timeoutMS applies to whole operation, not individual attempts - findOneAndUpdate on collection"
    runOnRequirements:
      - minServerVersion: "4.4"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 4 }
            data: { failCommands: ["findAndModify"], blockConnection: true, blockTimeMS: 60, errorCode: 7, errorLabels: ["RetryableWriteError"] }
      - name: findOneAndUpdate
        object: *collection
        arguments:
          filter: {}
          update: { $set: { x: 1 } }
        expectError:
          isTimeoutError: true

  - description: "operation is retried multiple times for non-zero timeoutMS - findOneAndUpdate on collection"
    runOnRequirements:
      - minServerVersion: "4.3.1" # failCommand errorLabels option
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data: { failCommands: ["findAndModify"], errorCode: 7, closeConnection: false, errorLabels: ["RetryableWriteError"] }
      - name: findOneAndUpdate
        object: *collection
        arguments:
          timeoutMS: 1000
          filter: {}
          update: { $set: { x: 1 } }
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent: { commandName: findAndModify, databaseName: *databaseName, command: { findAndModify: *collectionName, maxTimeMS: { $$type: ["int", "long"] } } }
          - commandStartedEvent: { commandName: findAndModify, databaseName: *databaseName, command: { findAndModify: *collectionName, maxTimeMS: { $$type: ["int", "long"] } } }
          - commandStartedEvent: { commandName: findAndModify, databaseName: *databaseName, command: { findAndModify: *collectionName, maxTimeMS: { $$type: ["int", "long"] } } }

  - description: "operation is retried multiple times if timeoutMS is zero - findOneAndUpdate on collection"
    runOnRequirements:
      - minServerVersion: "4.3.1" # failCommand errorLabels option
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data: { failCommands: ["findAndModify"], errorCode: 7, closeConnection: false, errorLabels: ["RetryableWriteError"] }
      - name: findOneAndUpdate
        object: *collection
        arguments:
          timeoutMS: 0
          filter: {}
          update: { $set: { x: 1 } }
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent: { commandName: findAndModify, databaseName: *databaseName, command: { findAndModify: *collectionName, maxTimeMS: { $$exists: false } } }
          - commandStartedEvent: { commandName: findAndModify, databaseName: *databaseName, command: { findAndModify: *collectionName, maxTimeMS: { $$exists: false } } }
          - commandStartedEvent: { commandName: findAndModify, databaseName: *databaseName, command: { findAndModify: *collectionName, maxTimeMS: { $$exists: false } } }

  - description: "timeoutMS applies to whole operation, not individual attempts - bulkWrite on collection"
    runOnRequirements:
      - minServerVersion: "4.4"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 4 }
            data: { failCommands: ["insert"], blockConnection: true, blockTimeMS: 60, errorCode: 7, errorLabels: ["RetryableWriteError"] }
      - name: bulkWrite
        object: *collection
        arguments:
          requests:
            - insertOne:
                document: { _id: 1 }
        expectError:
          isTimeoutError: true

  - description: "operation is retried multiple times for non-zero timeoutMS - bulkWrite on collection"
    runOnRequirements:
      - minServerVersion: "4.3.1" # failCommand errorLabels option
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data: { failCommands: ["insert"], errorCode: 7, closeConnection: false, errorLabels: ["RetryableWriteError"] }
      - name: bulkWrite
        object: *collection
        arguments:
          timeoutMS: 1000
          requests:
            - insertOne:
                document: { _id: 1 }
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent: { commandName: insert, databaseName: *databaseName, command: { insert: *collectionName, maxTimeMS: { $$type: ["int", "long"] } } }
          - commandStartedEvent: { commandName: insert, databaseName: *databaseName, command: { insert: *collectionName, maxTimeMS: { $$type: ["int", "long"] } } }
          - commandStartedEvent: { commandName: insert, databaseName: *databaseName, command: { insert: *collectionName, maxTimeMS: { $$type: ["int", "long"] } } }

  - description: "operation is retried multiple times if timeoutMS is zero - bulkWrite on collection"
    runOnRequirements:
      - minServerVersion: "4.3.1" # failCommand errorLabels option
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data: { failCommands: ["insert"], errorCode: 7, closeConnection: false, errorLabels: ["RetryableWriteError"] }
      - name: bulkWrite
        object: *collection
        arguments:
          timeoutMS: 0
          requests:
            - insertOne:
                document: { _id: 1 }
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent: { commandName: insert, databaseName: *databaseName, command: { insert: *collectionName, maxTimeMS: { $$exists: false } } }
          - commandStartedEvent: { commandName: insert, databaseName: *databaseName, command: { insert: *collectionName, maxTimeMS: { $$exists: false } } }
          - commandStartedEvent: { commandName: insert, databaseName: *databaseName, command: { insert: *collectionName, maxTimeMS: { $$exists: false } } }

  - description: "timeoutMS applies to whole operation, not individual attempts - listDatabases on client"
    runOnRequirements:
      - minServerVersion: "4.4"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 4 }
            data: { failCommands: ["listDatabases"], blockConnection: true, blockTimeMS: 60, errorCode: 7, errorLabels: ["RetryableWriteError"] }
      - name: listDatabases
        object: *client
        arguments:
          filter: {}
        expectError:
          isTimeoutError: true

  - description: "operation is retried multiple times for non-zero timeoutMS - listDatabases on client"
    runOnRequirements:
      - minServerVersion: "4.3.1" # failCommand errorLabels option
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data: { failCommands: ["listDatabases"], errorCode: 7, closeConnection: false, errorLabels: ["RetryableWriteError"] }
      - name: listDatabases
        object: *client
        arguments:
          timeoutMS: 1000
          filter: {}
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent: { commandName: listDatabases, databaseName: admin, command: { listDatabases: 1, maxTimeMS: { $$type: ["int", "long"] } } }
          - commandStartedEvent: { commandName: listDatabases, databaseName: admin, command: { listDatabases: 1, maxTimeMS: { $$type: ["int", "long"] } } }
          - commandStartedEvent: { commandName: listDatabases, databaseName: admin, command: { listDatabases: 1, maxTimeMS: { $$type: ["int", "long"] } } }

  - description: "operation is retried multiple times if timeoutMS is zero - listDatabases on client"
    runOnRequirements:
      - minServerVersion: "4.3.1" # failCommand errorLabels option
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data: { failCommands: ["listDatabases"], errorCode: 7, closeConnection: false, errorLabels: ["RetryableWriteError"] }
      - name: listDatabases
        object: *client
        arguments:
          timeoutMS: 0
          filter: {}
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent: { commandName: listDatabases, databaseName: admin, command: { listDatabases: 1, maxTimeMS: { $$exists: false } } }
          - commandStartedEvent: { commandName: listDatabases, databaseName: admin, command: { listDatabases: 1, maxTimeMS: { $$exists: false } } }
          - commandStartedEvent: { commandName: listDatabases, databaseName: admin, command: { listDatabases: 1, maxTimeMS: { $$exists: false } } }

  - description: "timeoutMS applies to whole operation, not individual attempts - listDatabaseNames on client"
    runOnRequirements:
      - minServerVersion: "4.4"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 4 }
            data: { failCommands: ["listDatabases"], blockConnection: true, blockTimeMS: 60, errorCode: 7, errorLabels: ["RetryableWriteError"] }
      - name: listDatabaseNames
        object: *client
        expectError:
          isTimeoutError: true

  - description: "operation is retried multiple times for non-zero timeoutMS - listDatabaseNames on client"
    runOnRequirements:
      - minServerVersion: "4.3.1" # failCommand errorLabels option
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data: { failCommands: ["listDatabases"], errorCode: 7, closeConnection: false, errorLabels: ["RetryableWriteError"] }
      - name: listDatabaseNames
        object: *client
        arguments:
          timeoutMS: 1000
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent: { commandName: listDatabases, databaseName: admin, command: { listDatabases: 1, maxTimeMS: { $$type: ["int", "long"] } } }
          - commandStartedEvent: { commandName: listDatabases, databaseName: admin, command: { listDatabases: 1, maxTimeMS: { $$type: ["int", "long"] } } }
          - commandStartedEvent: { commandName: listDatabases, databaseName: admin, command: { listDatabases: 1, maxTimeMS: { $$type: ["int", "long"] } } }

  - description: "operation is retried multiple times if timeoutMS is zero - listDatabaseNames on client"
    runOnRequirements:
      - minServerVersion: "4.3.1" # failCommand errorLabels option
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data: { failCommands: ["listDatabases"], errorCode: 7, closeConnection: false, errorLabels: ["RetryableWriteError"] }
      - name: listDatabaseNames
        object: *client
        arguments:
          timeoutMS: 0
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent: { commandName: listDatabases, databaseName: admin, command: { listDatabases: 1, maxTimeMS: { $$exists: false } } }
          - commandStartedEvent: { commandName: listDatabases, databaseName: admin, command: { listDatabases: 1, maxTimeMS: { $$exists: false } } }
          - commandStartedEvent: { commandName: listDatabases, databaseName: admin, command: { listDatabases: 1, maxTimeMS: { $$exists: false } } }

  - description: "timeoutMS applies to whole operation, not individual attempts - createChangeStream on client"
    runOnRequirements:
      - minServerVersion: "4.4"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 4 }
            data: { failCommands: ["aggregate"], blockConnection: true, blockTimeMS: 60, errorCode: 7, errorLabels: ["RetryableWriteError"] }
      - name: createChangeStream
        object: *client
        arguments:
          pipeline: []
        expectError:
          isTimeoutError: true

  - description: "operation is retried multiple times for non-zero timeoutMS - createChangeStream on client"
    runOnRequirements:
      - minServerVersion: "4.3.1" # failCommand errorLabels option
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data: { failCommands: ["aggregate"], errorCode: 7, closeConnection: false, errorLabels: ["RetryableWriteError"] }
      - name: createChangeStream
        object: *client
        arguments:
          timeoutMS: 1000
          pipeline: []
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent: { commandName: aggregate, databaseName: admin, command: { aggregate: 1, maxTimeMS: { $$type: ["int", "long"] } } }
          - commandStartedEvent: { commandName: aggregate, databaseName: admin, command: { aggregate: 1, maxTimeMS: { $$type: ["int", "long"] } } }
          - commandStartedEvent: { commandName: aggregate, databaseName: admin, command: { aggregate: 1, maxTimeMS: { $$type: ["int", "long"] } } }

  - description: "operation is retried multiple times if timeoutMS is zero - createChangeStream on client"
    runOnRequirements:
      - minServerVersion: "4.3.1" # failCommand errorLabels option
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data: { failCommands: ["aggregate"], errorCode: 7, closeConnection: false, errorLabels: ["RetryableWriteError"] }
      - name: createChangeStream
        object: *client
        arguments:
          timeoutMS: 0
          pipeline: []
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent: { commandName: aggregate, databaseName: admin, command: { aggregate: 1, maxTimeMS: { $$exists: false } } }
          - commandStartedEvent: { commandName: aggregate, databaseName: admin, command: { aggregate: 1, maxTimeMS: { $$exists: false } } }
          - commandStartedEvent: { commandName: aggregate, databaseName: admin, command: { aggregate: 1, maxTimeMS: { $$exists: false } } }

  - description: "timeoutMS applies to whole operation, not individual attempts - aggregate on database"
    runOnRequirements:
      - minServerVersion: "4.4"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 4 }
            data: { failCommands: ["aggregate"], blockConnection: true, blockTimeMS: 60, errorCode: 7, errorLabels: ["RetryableWriteError"] }
      - name: aggregate
        object: *database
        arguments:
          pipeline: [ { $listLocalSessions: {} }, { $limit: 1 } ]
        expectError:
          isTimeoutError: true

  - description: "operation is retried multiple times for non-zero timeoutMS - aggregate on database"
    runOnRequirements:
      - minServerVersion: "4.3.1" # failCommand errorLabels option
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data: { failCommands: ["aggregate"], errorCode: 7, closeConnection: false, errorLabels: ["RetryableWriteError"] }
      - name: aggregate
        object: *database
        arguments:
          timeoutMS: 1000
          pipeline: [ { $listLocalSessions: {} }, { $limit: 1 } ]
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent: { commandName: aggregate, databaseName: *databaseName, command: { aggregate: 1, maxTimeMS: { $$type: ["int", "long"] } } }
          - commandStartedEvent: { commandName: aggregate, databaseName: *databaseName, command: { aggregate: 1, maxTimeMS: { $$type: ["int", "long"] } } }
          - commandStartedEvent: { commandName: aggregate, databaseName: *databaseName, command: { aggregate: 1, maxTimeMS: { $$type: ["int", "long"] } } }

  - description: "operation is retried multiple times if timeoutMS is zero - aggregate on database"
    runOnRequirements:
      - minServerVersion: "4.3.1" # failCommand errorLabels option
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data: { failCommands: ["aggregate"], errorCode: 7, closeConnection: false, errorLabels: ["RetryableWriteError"] }
      - name: aggregate
        object: *database
        arguments:
          timeoutMS: 0
          pipeline: [ { $listLocalSessions: {} }, { $limit: 1 } ]
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent: { commandName: aggregate, databaseName: *databaseName, command: { aggregate: 1, maxTimeMS: { $$exists: false } } }
          - commandStartedEvent: { commandName: aggregate, databaseName: *databaseName, command: { aggregate: 1, maxTimeMS: { $$exists: false } } }
          - commandStartedEvent: { commandName: aggregate, databaseName: *databaseName, command: { aggregate: 1, maxTimeMS: { $$exists: false } } }

  - description: "timeoutMS applies to whole operation, not individual attempts - listCollections on database"
    runOnRequirements:
      - minServerVersion: "4.4"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 4 }
            data: { failCommands: ["listCollections"], blockConnection: true, blockTimeMS: 60, errorCode: 7, errorLabels: ["RetryableWriteError"] }
      - name: listCollections
        object: *database
        arguments:
          filter: {}
        expectError:
          isTimeoutError: true

  - description: "operation is retried multiple times for non-zero timeoutMS - listCollections on database"
    runOnRequirements:
      - minServerVersion: "4.3.1" # failCommand errorLabels option
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data: { failCommands: ["listCollections"], errorCode: 7, closeConnection: false, errorLabels: ["RetryableWriteError"] }
      - name: listCollections
        object: *database
        arguments:
          timeoutMS: 1000
          filter: {}
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent: { commandName: listCollections, databaseName: *databaseName, command: { listCollections: 1, maxTimeMS: { $$type: ["int", "long"] } } }
          - commandStartedEvent: { commandName: listCollections, databaseName: *databaseName, command: { listCollections: 1, maxTimeMS: { $$type: ["int", "long"] } } }
          - commandStartedEvent: { commandName: listCollections, databaseName: *databaseName, command: { listCollections: 1, maxTimeMS: { $$type: ["int", "long"] } } }

  - description: "operation is retried multiple times if timeoutMS is zero - listCollections on database"
    runOnRequirements:
      - minServerVersion: "4.3.1" # failCommand errorLabels option
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data: { failCommands: ["listCollections"], errorCode: 7, closeConnection: false, errorLabels: ["RetryableWriteError"] }
      - name: listCollections
        object: *database
        arguments:
          timeoutMS: 0
          filter: {}
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent: { commandName: listCollections, databaseName: *databaseName, command: { listCollections: 1, maxTimeMS: { $$exists: false } } }
          - commandStartedEvent: { commandName: listCollections, databaseName: *databaseName, command: { listCollections: 1, maxTimeMS: { $$exists: false } } }
          - commandStartedEvent: { commandName: listCollections, databaseName: *databaseName, command: { listCollections: 1, maxTimeMS: { $$exists: false } } }

  - description: "timeoutMS applies to whole operation, not individual attempts - listCollectionNames on database"
    runOnRequirements:
      - minServerVersion: "4.4"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 4 }
            data: { failCommands: ["listCollections"], blockConnection: true, blockTimeMS: 60, errorCode: 7, errorLabels: ["RetryableWriteError"] }
      - name: listCollectionNames
        object: *database
        arguments:
          filter: {}
        expectError:
          isTimeoutError: true

  - description: "operation is retried multiple times for non-zero timeoutMS - listCollectionNames on database"
    runOnRequirements:
      - minServerVersion: "4.3.1" # failCommand errorLabels option
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data: { failCommands: ["listCollections"], errorCode: 7, closeConnection: false, errorLabels: ["RetryableWriteError"] }
      - name: listCollectionNames
        object: *database
        arguments:
          timeoutMS: 1000
          filter: {}
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent: { commandName: listCollections, databaseName: *databaseName, command: { listCollections: 1, maxTimeMS: { $$type: ["int", "long"] } } }
          - commandStartedEvent: { commandName: listCollections, databaseName: *databaseName, command: { listCollections: 1, maxTimeMS: { $$type: ["int", "long"] } } }
          - commandStartedEvent: { commandName: listCollections, databaseName: *databaseName, command: { listCollections: 1, maxTimeMS: { $$type: ["int", "long"] } } }

  - description: "operation is retried multiple times if timeoutMS is zero - listCollectionNames on database"
    runOnRequirements:
      - minServerVersion: "4.3.1" # failCommand errorLabels option
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data: { failCommands: ["listCollections"], errorCode: 7, closeConnection: false, errorLabels: ["RetryableWriteError"] }
      - name: listCollectionNames
        object: *database
        arguments:
          timeoutMS: 0
          filter: {}
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent: { commandName: listCollections, databaseName: *databaseName, command: { listCollections: 1, maxTimeMS: { $$exists: false } } }
          - commandStartedEvent: { commandName: listCollections, databaseName: *databaseName, command: { listCollections: 1, maxTimeMS: { $$exists: false } } }
          - commandStartedEvent: { commandName: listCollections, databaseName: *databaseName, command: { listCollections: 1, maxTimeMS: { $$exists: false } } }

  - description: "timeoutMS applies to whole operation, not individual attempts - createChangeStream on database"
    runOnRequirements:
      - minServerVersion: "4.4"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 4 }
            data: { failCommands: ["aggregate"], blockConnection: true, blockTimeMS: 60, errorCode: 7, errorLabels: ["RetryableWriteError"] }
      - name: createChangeStream
        object: *database
        arguments:
          pipeline: []
        expectError:
          isTimeoutError: true

  - description: "operation is retried multiple times for non-zero timeoutMS - createChangeStream on database"
    runOnRequirements:
      - minServerVersion: "4.3.1" # failCommand errorLabels option
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data: { failCommands: ["aggregate"], errorCode: 7, closeConnection: false, errorLabels: ["RetryableWriteError"] }
      - name: createChangeStream
        object: *database
        arguments:
          timeoutMS: 1000
          pipeline: []
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent: { commandName: aggregate, databaseName: *databaseName, command: { aggregate: 1, maxTimeMS: { $$type: ["int", "long"] } } }
          - commandStartedEvent: { commandName: aggregate, databaseName: *databaseName, command: { aggregate: 1, maxTimeMS: { $$type: ["int", "long"] } } }
          - commandStartedEvent: { commandName: aggregate, databaseName: *databaseName, command: { aggregate: 1, maxTimeMS: { $$type: ["int", "long"] } } }

  - description: "operation is retried multiple times if timeoutMS is zero - createChangeStream on database"
    runOnRequirements:
      - minServerVersion: "4.3.1" # failCommand errorLabels option
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data: { failCommands: ["aggregate"], errorCode: 7, closeConnection: false, errorLabels: ["RetryableWriteError"] }
      - name: createChangeStream
        object: *database
        arguments:
          timeoutMS: 0
          pipeline: []
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent: { commandName: aggregate, databaseName: *databaseName, command: { aggregate: 1, maxTimeMS: { $$exists: false } } }
          - commandStartedEvent: { commandName: aggregate, databaseName: *databaseName, command: { aggregate: 1, maxTimeMS: { $$exists: false } } }
          - commandStartedEvent: { commandName: aggregate, databaseName: *databaseName, command: { aggregate: 1, maxTimeMS: { $$exists: false } } }

  - description: "timeoutMS applies to whole operation, not individual attempts - aggregate on collection"
    runOnRequirements:
      - minServerVersion: "4.4"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 4 }
            data: { failCommands: ["aggregate"], blockConnection: true, blockTimeMS: 60, errorCode: 7, errorLabels: ["RetryableWriteError"] }
      - name: aggregate
        object: *collection
        arguments:
          pipeline: []
        expectError:
          isTimeoutError: true

  - description: "operation is retried multiple times for non-zero timeoutMS - aggregate on collection"
    runOnRequirements:
      - minServerVersion: "4.3.1" # failCommand errorLabels option
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data: { failCommands: ["aggregate"], errorCode: 7, closeConnection: false, errorLabels: ["RetryableWriteError"] }
      - name: aggregate
        object: *collection
        arguments:
          timeoutMS: 1000
          pipeline: []
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent: { commandName: aggregate, databaseName: *databaseName, command: { aggregate: *collectionName, maxTimeMS: { $$type: ["int", "long"] } } }
          - commandStartedEvent: { commandName: aggregate, databaseName: *databaseName, command: { aggregate: *collectionName, maxTimeMS: { $$type: ["int", "long"] } } }
          - commandStartedEvent: { commandName: aggregate, databaseName: *databaseName, command: { aggregate: *collectionName, maxTimeMS: { $$type: ["int", "long"] } } }

  - description: "operation is retried multiple times if timeoutMS is zero - aggregate on collection"
    runOnRequirements:
      - minServerVersion: "4.3.1" # failCommand errorLabels option
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data: { failCommands: ["aggregate"], errorCode: 7, closeConnection: false, errorLabels: ["RetryableWriteError"] }
      - name: aggregate
        object: *collection
        arguments:
          timeoutMS: 0
          pipeline: []
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent: { commandName: aggregate, databaseName: *databaseName, command: { aggregate: *collectionName, maxTimeMS: { $$exists: false } } }
          - commandStartedEvent: { commandName: aggregate, databaseName: *databaseName, command: { aggregate: *collectionName, maxTimeMS: { $$exists: false } } }
          - commandStartedEvent: { commandName: aggregate, databaseName: *databaseName, command: { aggregate: *collectionName, maxTimeMS: { $$exists: false } } }

  - description: "timeoutMS applies to whole operation, not individual attempts - count on collection"
    runOnRequirements:
      - minServerVersion: "4.4"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 4 }
            data: { failCommands: ["count"], blockConnection: true, blockTimeMS: 60, errorCode: 7, errorLabels: ["RetryableWriteError"] }
      - name: count
        object: *collection
        arguments:
          filter: {}
        expectError:
          isTimeoutError: true

  - description: "operation is retried multiple times for non-zero timeoutMS - count on collection"
    runOnRequirements:
      - minServerVersion: "4.3.1" # failCommand errorLabels option
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data: { failCommands: ["count"], errorCode: 7, closeConnection: false, errorLabels: ["RetryableWriteError"] }
      - name: count
        object: *collection
        arguments:
          timeoutMS: 1000
          filter: {}
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent: { commandName: count, databaseName: *databaseName, command: { count: *collectionName, maxTimeMS: { $$type: ["int", "long"] } } }
          - commandStartedEvent: { commandName: count, databaseName: *databaseName, command: { count: *collectionName, maxTimeMS: { $$type: ["int", "long"] } } }
          - commandStartedEvent: { commandName: count, databaseName: *databaseName, command: { count: *collectionName, maxTimeMS: { $$type: ["int", "long"] } } }

  - description: "operation is retried multiple times if timeoutMS is zero - count on collection"
    runOnRequirements:
      - minServerVersion: "4.3.1" # failCommand errorLabels option
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data: { failCommands: ["count"], errorCode: 7, closeConnection: false, errorLabels: ["RetryableWriteError"] }
      - name: count
        object: *collection
        arguments:
          timeoutMS: 0
          filter: {}
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent: { commandName: count, databaseName: *databaseName, command: { count: *collectionName, maxTimeMS: { $$exists: false } } }
          - commandStartedEvent: { commandName: count, databaseName: *databaseName, command: { count: *collectionName, maxTimeMS: { $$exists: false } } }
          - commandStartedEvent: { commandName: count, databaseName: *databaseName, command: { count: *collectionName, maxTimeMS: { $$exists: false } } }

  - description: "timeoutMS applies to whole operation, not individual attempts - countDocuments on collection"
    runOnRequirements:
      - minServerVersion: "4.4"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 4 }
            data: { failCommands: ["aggregate"], blockConnection: true, blockTimeMS: 60, errorCode: 7, errorLabels: ["RetryableWriteError"] }
      - name: countDocuments
        object: *collection
        arguments:
          filter: {}
        expectError:
          isTimeoutError: true

  - description: "operation is retried multiple times for non-zero timeoutMS - countDocuments on collection"
    runOnRequirements:
      - minServerVersion: "4.3.1" # failCommand errorLabels option
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data: { failCommands: ["aggregate"], errorCode: 7, closeConnection: false, errorLabels: ["RetryableWriteError"] }
      - name: countDocuments
        object: *collection
        arguments:
          timeoutMS: 1000
          filter: {}
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent: { commandName: aggregate, databaseName: *databaseName, command: { aggregate: *collectionName, maxTimeMS: { $$type: ["int", "long"] } } }
          - commandStartedEvent: { commandName: aggregate, databaseName: *databaseName, command: { aggregate: *collectionName, maxTimeMS: { $$type: ["int", "long"] } } }
          - commandStartedEvent: { commandName: aggregate, databaseName: *databaseName, command: { aggregate: *collectionName, maxTimeMS: { $$type: ["int", "long"] } } }

  - description: "operation is retried multiple times if timeoutMS is zero - countDocuments on collection"
    runOnRequirements:
      - minServerVersion: "4.3.1" # failCommand errorLabels option
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data: { failCommands: ["aggregate"], errorCode: 7, closeConnection: false, errorLabels: ["RetryableWriteError"] }
      - name: countDocuments
        object: *collection
        arguments:
          timeoutMS: 0
          filter: {}
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent: { commandName: aggregate, databaseName: *databaseName, command: { aggregate: *collectionName, maxTimeMS: { $$exists: false } } }
          - commandStartedEvent: { commandName: aggregate, databaseName: *databaseName, command: { aggregate: *collectionName, maxTimeMS: { $$exists: false } } }
          - commandStartedEvent: { commandName: aggregate, databaseName: *databaseName, command: { aggregate: *collectionName, maxTimeMS: { $$exists: false } } }

  - description: "timeoutMS applies to whole operation, not individual attempts - estimatedDocumentCount on collection"
    runOnRequirements:
      - minServerVersion: "4.4"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 4 }
            data: { failCommands: ["count"], blockConnection: true, blockTimeMS: 60, errorCode: 7, errorLabels: ["RetryableWriteError"] }
      - name: estimatedDocumentCount
        object: *collection
        expectError:
          isTimeoutError: true

  - description: "operation is retried multiple times for non-zero timeoutMS - estimatedDocumentCount on collection"
    runOnRequirements:
      - minServerVersion: "4.3.1" # failCommand errorLabels option
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data: { failCommands: ["count"], errorCode: 7, closeConnection: false, errorLabels: ["RetryableWriteError"] }
      - name: estimatedDocumentCount
        object: *collection
        arguments:
          timeoutMS: 1000
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent: { commandName: count, databaseName: *databaseName, command: { count: *collectionName, maxTimeMS: { $$type: ["int", "long"] } } }
          - commandStartedEvent: { commandName: count, databaseName: *databaseName, command: { count: *collectionName, maxTimeMS: { $$type: ["int", "long"] } } }
          - commandStartedEvent: { commandName: count, databaseName: *databaseName, command: { count: *collectionName, maxTimeMS: { $$type: ["int", "long"] } } }

  - description: "operation is retried multiple times if timeoutMS is zero - estimatedDocumentCount on collection"
    runOnRequirements:
      - minServerVersion: "4.3.1" # failCommand errorLabels option
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data: { failCommands: ["count"], errorCode: 7, closeConnection: false, errorLabels: ["RetryableWriteError"] }
      - name: estimatedDocumentCount
        object: *collection
        arguments:
          timeoutMS: 0
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent: { commandName: count, databaseName: *databaseName, command: { count: *collectionName, maxTimeMS: { $$exists: false } } }
          - commandStartedEvent: { commandName: count, databaseName: *databaseName, command: { count: *collectionName, maxTimeMS: { $$exists: false } } }
          - commandStartedEvent: { commandName: count, databaseName: *databaseName, command: { count: *collectionName, maxTimeMS: { $$exists: false } } }

  - description: "timeoutMS applies to whole operation, not individual attempts - distinct on collection"
    runOnRequirements:
      - minServerVersion: "4.4"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 4 }
            data: { failCommands: ["distinct"], blockConnection: true, blockTimeMS: 60, errorCode: 7, errorLabels: ["RetryableWriteError"] }
      - name: distinct
        object: *collection
        arguments:
          fieldName: x
          filter: {}
        expectError:
          isTimeoutError: true

  - description: "operation is retried multiple times for non-zero timeoutMS - distinct on collection"
    runOnRequirements:
      - minServerVersion: "4.3.1" # failCommand errorLabels option
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data: { failCommands: ["distinct"], errorCode: 7, closeConnection: false, errorLabels: ["RetryableWriteError"] }
      - name: distinct
        object: *collection
        arguments:
          timeoutMS: 1000
          fieldName: x
          filter: {}
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent: { commandName: distinct, databaseName: *databaseName, command: { distinct: *collectionName, maxTimeMS: { $$type: ["int", "long"] } } }
          - commandStartedEvent: { commandName: distinct, databaseName: *databaseName, command: { distinct: *collectionName, maxTimeMS: { $$type: ["int", "long"] } } }
          - commandStartedEvent: { commandName: distinct, databaseName: *databaseName, command: { distinct: *collectionName, maxTimeMS: { $$type: ["int", "long"] } } }

  - description: "operation is retried multiple times if timeoutMS is zero - distinct on collection"
    runOnRequirements:
      - minServerVersion: "4.3.1" # failCommand errorLabels option
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data: { failCommands: ["distinct"], errorCode: 7, closeConnection: false, errorLabels: ["RetryableWriteError"] }
      - name: distinct
        object: *collection
        arguments:
          timeoutMS: 0
          fieldName: x
          filter: {}
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent: { commandName: distinct, databaseName: *databaseName, command: { distinct: *collectionName, maxTimeMS: { $$exists: false } } }
          - commandStartedEvent: { commandName: distinct, databaseName: *databaseName, command: { distinct: *collectionName, maxTimeMS: { $$exists: false } } }
          - commandStartedEvent: { commandName: distinct, databaseName: *databaseName, command: { distinct: *collectionName, maxTimeMS: { $$exists: false } } }

  - description: "timeoutMS applies to whole operation, not individual attempts - find on collection"
    runOnRequirements:
      - minServerVersion: "4.4"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 4 }
            data: { failCommands: ["find"], blockConnection: true, blockTimeMS: 60, errorCode: 7, errorLabels: ["RetryableWriteError"] }
      - name: find
        object: *collection
        arguments:
          filter: {}
        expectError:
          isTimeoutError: true

  - description: "operation is retried multiple times for non-zero timeoutMS - find on collection"
    runOnRequirements:
      - minServerVersion: "4.3.1" # failCommand errorLabels option
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data: { failCommands: ["find"], errorCode: 7, closeConnection: false, errorLabels: ["RetryableWriteError"] }
      - name: find
        object: *collection
        arguments:
          timeoutMS: 1000
          filter: {}
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent: { commandName: find, databaseName: *databaseName, command: { find: *collectionName, maxTimeMS: { $$type: ["int", "long"] } } }
          - commandStartedEvent: { commandName: find, databaseName: *databaseName, command: { find: *collectionName, maxTimeMS: { $$type: ["int", "long"] } } }
          - commandStartedEvent: { commandName: find, databaseName: *databaseName, command: { find: *collectionName, maxTimeMS: { $$type: ["int", "long"] } } }

  - description: "operation is retried multiple times if timeoutMS is zero - find on collection"
    runOnRequirements:
      - minServerVersion: "4.3.1" # failCommand errorLabels option
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data: { failCommands: ["find"], errorCode: 7, closeConnection: false, errorLabels: ["RetryableWriteError"] }
      - name: find
        object: *collection
        arguments:
          timeoutMS: 0
          filter: {}
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent: { commandName: find, databaseName: *databaseName, command: { find: *collectionName, maxTimeMS: { $$exists: false } } }
          - commandStartedEvent: { commandName: find, databaseName: *databaseName, command: { find: *collectionName, maxTimeMS: { $$exists: false } } }
          - commandStartedEvent: { commandName: find, databaseName: *databaseName, command: { find: *collectionName, maxTimeMS: { $$exists: false } } }

  - description: "timeoutMS applies to whole operation, not individual attempts - findOne on collection"
    runOnRequirements:
      - minServerVersion: "4.4"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 4 }
            data: { failCommands: ["find"], blockConnection: true, blockTimeMS: 60, errorCode: 7, errorLabels: ["RetryableWriteError"] }
      - name: findOne
        object: *collection
        arguments:
          filter: {}
        expectError:
          isTimeoutError: true

  - description: "operation is retried multiple times for non-zero timeoutMS - findOne on collection"
    runOnRequirements:
      - minServerVersion: "4.3.1" # failCommand errorLabels option
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data: { failCommands: ["find"], errorCode: 7, closeConnection: false, errorLabels: ["RetryableWriteError"] }
      - name: findOne
        object: *collection
        arguments:
          timeoutMS: 1000
          filter: {}
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent: { commandName: find, databaseName: *databaseName, command: { find: *collectionName, maxTimeMS: { $$type: ["int", "long"] } } }
          - commandStartedEvent: { commandName: find, databaseName: *databaseName, command: { find: *collectionName, maxTimeMS: { $$type: ["int", "long"] } } }
          - commandStartedEvent: { commandName: find, databaseName: *databaseName, command: { find: *collectionName, maxTimeMS: { $$type: ["int", "long"] } } }

  - description: "operation is retried multiple times if timeoutMS is zero - findOne on collection"
    runOnRequirements:
      - minServerVersion: "4.3.1" # failCommand errorLabels option
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data: { failCommands: ["find"], errorCode: 7, closeConnection: false, errorLabels: ["RetryableWriteError"] }
      - name: findOne
        object: *collection
        arguments:
          timeoutMS: 0
          filter: {}
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent: { commandName: find, databaseName: *databaseName, command: { find: *collectionName, maxTimeMS: { $$exists: false } } }
          - commandStartedEvent: { commandName: find, databaseName: *databaseName, command: { find: *collectionName, maxTimeMS: { $$exists: false } } }
          - commandStartedEvent: { commandName: find, databaseName: *databaseName, command: { find: *collectionName, maxTimeMS: { $$exists: false } } }

  - description: "timeoutMS applies to whole operation, not individual attempts - listIndexes on collection"
    runOnRequirements:
      - minServerVersion: "4.4"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 4 }
            data: { failCommands: ["listIndexes"], blockConnection: true, blockTimeMS: 60, errorCode: 7, errorLabels: ["RetryableWriteError"] }
      - name: listIndexes
        object: *collection
        expectError:
          isTimeoutError: true

  - description: "operation is retried multiple times for non-zero timeoutMS - listIndexes on collection"
    runOnRequirements:
      - minServerVersion: "4.3.1" # failCommand errorLabels option
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data: { failCommands: ["listIndexes"], errorCode: 7, closeConnection: false, errorLabels: ["RetryableWriteError"] }
      - name: listIndexes
        object: *collection
        arguments:
          timeoutMS: 1000
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent: { commandName: listIndexes, databaseName: *databaseName, command: { listIndexes: *collectionName, maxTimeMS: { $$type: ["int", "long"] } } }
          - commandStartedEvent: { commandName: listIndexes, databaseName: *databaseName, command: { listIndexes: *collectionName, maxTimeMS: { $$type: ["int", "long"] } } }
          - commandStartedEvent: { commandName: listIndexes, databaseName: *databaseName, command: { listIndexes: *collectionName, maxTimeMS: { $$type: ["int", "long"] } } }

  - description: "operation is retried multiple times if timeoutMS is zero - listIndexes on collection"
    runOnRequirements:
      - minServerVersion: "4.3.1" # failCommand errorLabels option
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data: { failCommands: ["listIndexes"], errorCode: 7, closeConnection: false, errorLabels: ["RetryableWriteError"] }
      - name: listIndexes
        object: *collection
        arguments:
          timeoutMS: 0
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent: { commandName: listIndexes, databaseName: *databaseName, command: { listIndexes: *collectionName, maxTimeMS: { $$exists: false } } }
          - commandStartedEvent: { commandName: listIndexes, databaseName: *databaseName, command: { listIndexes: *collectionName, maxTimeMS: { $$exists: false } } }
          - commandStartedEvent: { commandName: listIndexes, databaseName: *databaseName, command: { listIndexes: *collectionName, maxTimeMS: { $$exists: false } } }

  - description: "timeoutMS applies to whole operation, not individual attempts - createChangeStream on collection"
    runOnRequirements:
      - minServerVersion: "4.4"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 4 }
            data: { failCommands: ["aggregate"], blockConnection: true, blockTimeMS: 60, errorCode: 7, errorLabels: ["RetryableWriteError"] }
      - name: createChangeStream
        object: *collection
        arguments:
          pipeline: []
        expectError:
          isTimeoutError: true

  - description: "operation is retried multiple times for non-zero timeoutMS - createChangeStream on collection"
    runOnRequirements:
      - minServerVersion: "4.3.1" # failCommand errorLabels option
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data: { failCommands: ["aggregate"], errorCode: 7, closeConnection: false, errorLabels: ["RetryableWriteError"] }
      - name: createChangeStream
        object: *collection
        arguments:
          timeoutMS: 1000
          pipeline: []
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent: { commandName: aggregate, databaseName: *databaseName, command: { aggregate: *collectionName, maxTimeMS: { $$type: ["int", "long"] } } }
          - commandStartedEvent: { commandName: aggregate, databaseName: *databaseName, command: { aggregate: *collectionName, maxTimeMS: { $$type: ["int", "long"] } } }
          - commandStartedEvent: { commandName: aggregate, databaseName: *databaseName, command: { aggregate: *collectionName, maxTimeMS: { $$type: ["int", "long"] } } }

  - description: "operation is retried multiple times if timeoutMS is zero - createChangeStream on collection"
    runOnRequirements:
      - minServerVersion: "4.3.1" # failCommand errorLabels option
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data: { failCommands: ["aggregate"], errorCode: 7, closeConnection: false, errorLabels: ["RetryableWriteError"] }
      - name: createChangeStream
        object: *collection
        arguments:
          timeoutMS: 0
          pipeline: []
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent: { commandName: aggregate, databaseName: *databaseName, command: { aggregate: *collectionName, maxTimeMS: { $$exists: false } } }
          - commandStartedEvent: { commandName: aggregate, databaseName: *databaseName, command: { aggregate: *collectionName, maxTimeMS: { $$exists: false } } }
          - commandStartedEvent: { commandName: aggregate, databaseName: *databaseName, command: { aggregate: *collectionName, maxTimeMS: { $$exists: false } } }
# ==== mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_operations_timeout/sessions-inherit-timeoutMS.yml ====
description: "sessions inherit timeoutMS from their parent MongoClient"

schemaVersion: "1.9"

runOnRequirements:
  - minServerVersion: "4.4"
    topologies: ["replicaset", "sharded"]

createEntities:
  - client:
      id: &failPointClient failPointClient
      useMultipleMongoses: false
  - client:
      id: &client client
      uriOptions:
        timeoutMS: 50
      useMultipleMongoses: false
      observeEvents:
        - commandStartedEvent
        - commandSucceededEvent
        - commandFailedEvent
  - database:
      id: &database database
      client: *client
      databaseName: &databaseName test
  - collection:
      id: &collection collection
      database: *database
      collectionName: &collectionName coll
  - session:
      id: &session session
      client: *client

initialData:
  - collectionName: *collectionName
    databaseName: *databaseName
    documents: []

tests:
  # Drivers ignore errors from abortTransaction, so the tests in this file use commandSucceededEvent and
  # commandFailedEvent events to assert success/failure.

  - description: "timeoutMS applied to commitTransaction"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data: { failCommands: ["commitTransaction"], blockConnection: true, blockTimeMS: 60 }
      - name: startTransaction
        object: *session
      - name: insertOne
        object: *collection
        arguments:
          session: *session
          document: { _id: 1 }
      - name: commitTransaction
        object: *session
        expectError:
          isTimeoutError: true
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent: { commandName: insert, databaseName: *databaseName, command: { insert: *collectionName } }
          - commandSucceededEvent: { commandName: insert }
          - commandStartedEvent: { commandName: commitTransaction, databaseName: admin, command: { commitTransaction: 1, maxTimeMS: { $$type: ["int", "long"] } } }
          - commandFailedEvent: { commandName: commitTransaction }

  - description: "timeoutMS applied to abortTransaction"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data: { failCommands: ["abortTransaction"], blockConnection: true, blockTimeMS: 60 }
      - name: startTransaction
        object: *session
      - name: insertOne
        object: *collection
        arguments:
          session: *session
          document: { _id: 1 }
      - name: abortTransaction
        object: *session
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent: { commandName: insert, databaseName: *databaseName, command: { insert: *collectionName } }
          - commandSucceededEvent: { commandName: insert }
          - commandStartedEvent: { commandName: abortTransaction, databaseName: admin, command: { abortTransaction: 1, maxTimeMS: { $$type: ["int", "long"] } } }
          - commandFailedEvent: { commandName: abortTransaction }

  - description: "timeoutMS applied to withTransaction"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data: { failCommands: ["insert"], blockConnection: true, blockTimeMS: 60 }
      - name: withTransaction
        object: *session
        arguments:
          callback:
            - name: insertOne
              object: *collection
              arguments:
                session: *session
                document: { _id: 1 }
              expectError:
                isTimeoutError: true
        expectError:
          isTimeoutError: true
    expectEvents:
      - client: *client
        events:
          # Because the insert expects an error and gets an error, it technically succeeds, so withTransaction will
          # try to run commitTransaction. This will fail client-side, though, because the timeout has already expired,
          # so no command is sent.
          - commandStartedEvent:
              commandName: insert
              databaseName: *databaseName
              command:
                insert: *collectionName
                # withTransaction specifies timeoutMS for each operation in the callback that uses the session, so the
                # insert command should have a maxTimeMS field.
                maxTimeMS: { $$type: ["int", "long"] }
          - commandFailedEvent: { commandName: insert }

# ==== mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_operations_timeout/sessions-override-operation-timeoutMS.yml ====
description: "timeoutMS can be overridden for individual session operations"

schemaVersion: "1.9"

runOnRequirements:
  - minServerVersion: "4.4"
    topologies: ["replicaset", "sharded"]

createEntities:
  - client:
      id: &failPointClient failPointClient
      useMultipleMongoses: false
  - client:
      id: &client client
      useMultipleMongoses: false
      observeEvents:
        - commandStartedEvent
        - commandSucceededEvent
        - commandFailedEvent
  - database:
      id: &database database
      client: *client
      databaseName: &databaseName test
  - collection:
      id: &collection collection
      database: *database
      collectionName: &collectionName coll
  - session:
      id: &session session
      client: *client

initialData:
  - collectionName: *collectionName
    databaseName: *databaseName
    documents: []

tests:
  # Drivers ignore errors from abortTransaction, so the tests in this file use commandSucceededEvent and
  # commandFailedEvent events to assert success/failure.

  - description: "timeoutMS can be overridden for commitTransaction"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data: { failCommands: ["commitTransaction"], blockConnection: true, blockTimeMS: 60 }
      - name: startTransaction
        object: *session
      - name: insertOne
        object: *collection
        arguments:
          session: *session
          document: { _id: 1 }
      - name: commitTransaction
        object: *session
        arguments:
          timeoutMS: 50
        expectError:
          isTimeoutError: true
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent: { commandName: insert, databaseName: *databaseName, command: { insert: *collectionName } }
          - commandSucceededEvent: { commandName: insert }
          - commandStartedEvent: { commandName: commitTransaction, databaseName: admin, command: { commitTransaction: 1, maxTimeMS: { $$type: ["int", "long"] } } }
          - commandFailedEvent: { commandName: commitTransaction }

  - description: "timeoutMS applied to abortTransaction"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data: { failCommands: ["abortTransaction"], blockConnection: true, blockTimeMS: 60 }
      - name: startTransaction
        object: *session
      - name: insertOne
        object: *collection
        arguments:
          session: *session
          document: { _id: 1 }
      - name: abortTransaction
        object: *session
        arguments:
          timeoutMS: 50
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent: { commandName: insert, databaseName: *databaseName, command: { insert: *collectionName } }
          - commandSucceededEvent: { commandName: insert }
          - commandStartedEvent: { commandName: abortTransaction, databaseName: admin, command: { abortTransaction: 1, maxTimeMS: { $$type: ["int", "long"] } } }
          - commandFailedEvent: { commandName: abortTransaction }

  - description: "timeoutMS applied to withTransaction"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data: { failCommands: ["insert"], blockConnection: true, blockTimeMS: 60 }
      - name: withTransaction
        object: *session
        arguments:
          timeoutMS: 50
          callback:
            - name: insertOne
              object: *collection
              arguments:
                session: *session
                document: { _id: 1 }
              expectError:
                isTimeoutError: true
        expectError:
          isTimeoutError: true
    expectEvents:
      - client: *client
        events:
          # Because the insert expects an error and gets an error, it technically succeeds, so withTransaction will
          # try to run commitTransaction. This will fail client-side, though, because the timeout has already expired,
          # so no command is sent.
          - commandStartedEvent:
              commandName: insert
              databaseName: *databaseName
              command:
                insert: *collectionName
                # withTransaction specifies timeoutMS for each operation in the callback that uses the session, so the
                # insert command should have a maxTimeMS field.
                maxTimeMS: { $$type: ["int", "long"] }
          - commandFailedEvent: { commandName: insert }

# ==== mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_operations_timeout/sessions-override-timeoutMS.yml ====
description: "timeoutMS can be overridden at the level of a ClientSession"

schemaVersion: "1.9"

runOnRequirements:
  - minServerVersion: "4.4"
    topologies: ["replicaset", "sharded"]

createEntities:
  - client:
      id: &failPointClient failPointClient
      useMultipleMongoses: false
  - client:
      id: &client client
      useMultipleMongoses: false
      observeEvents:
        - commandStartedEvent
        - commandSucceededEvent
        - commandFailedEvent
  - database:
      id: &database database
      client: *client
      databaseName: &databaseName test
  - collection:
      id: &collection collection
      database: *database
      collectionName: &collectionName coll
  - session:
      id: &session session
      client: *client
      sessionOptions:
        defaultTimeoutMS: 50

initialData:
  - collectionName: *collectionName
    databaseName: *databaseName
    documents: []

tests:
  # Drivers ignore errors from abortTransaction, so the tests in this file use commandSucceededEvent and
  # commandFailedEvent events to assert success/failure.

  - description: "timeoutMS applied to commitTransaction"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data: { failCommands: ["commitTransaction"], blockConnection: true, blockTimeMS: 60 }
      - name: startTransaction
        object: *session
      - name: insertOne
        object: *collection
        arguments:
          session: *session
          document: { _id: 1 }
      - name: commitTransaction
        object: *session
        expectError:
          isTimeoutError: true
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent: { commandName: insert, databaseName: *databaseName, command: { insert: *collectionName } }
          - commandSucceededEvent: { commandName: insert }
          - commandStartedEvent: { commandName: commitTransaction, databaseName: admin, command: { commitTransaction: 1, maxTimeMS: { $$type: ["int", "long"] } } }
          - commandFailedEvent: { commandName: commitTransaction }

  - description: "timeoutMS applied to abortTransaction"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data: { failCommands: ["abortTransaction"], blockConnection: true, blockTimeMS: 60 }
      - name: startTransaction
        object: *session
      - name: insertOne
        object: *collection
        arguments:
          session: *session
          document: { _id: 1 }
      - name: abortTransaction
        object: *session
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent: { commandName: insert, databaseName: *databaseName, command: { insert: *collectionName } }
          - commandSucceededEvent: { commandName: insert }
          - commandStartedEvent: { commandName: abortTransaction, databaseName: admin, command: { abortTransaction: 1, maxTimeMS: { $$type: ["int", "long"] } } }
          - commandFailedEvent: { commandName: abortTransaction }

  - description: "timeoutMS applied to withTransaction"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data: { failCommands: ["insert"], blockConnection: true, blockTimeMS: 60 }
      - name: withTransaction
        object: *session
        arguments:
          callback:
            - name: insertOne
              object: *collection
              arguments:
                session: *session
                document: { _id: 1 }
              expectError:
                isTimeoutError: true
        expectError:
          isTimeoutError: true
    expectEvents:
      - client: *client
        events:
          # Because the insert expects an error and gets an error, it technically succeeds, so withTransaction will
          # try to run commitTransaction. This will fail client-side, though, because the timeout has already expired,
          # so no command is sent.
          - commandStartedEvent:
              commandName: insert
              databaseName: *databaseName
              command:
                insert: *collectionName
                # withTransaction specifies timeoutMS for each operation in the callback that uses the session, so the
                # insert command should have a maxTimeMS field.
                maxTimeMS: { $$type: ["int", "long"] }
          - commandFailedEvent: { commandName: insert }
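# Taken together, the three sessions files encode the CSOT precedence rule: an
# explicit per-operation timeoutMS overrides the session's defaultTimeoutMS,
# which in turn overrides the client's timeoutMS. A rough sketch of the same
# precedence from Ruby; the option names timeout_ms / default_timeout_ms here
# simply mirror the spec's camelCase names and are assumptions, so check the
# released driver API before relying on them:
#
#   require 'mongo'
#
#   client = Mongo::Client.new(['localhost:27017'], database: 'test', timeout_ms: 50)
#   session = client.start_session(default_timeout_ms: 100)  # beats the client's 50 ms
#   client['coll'].insert_one({ _id: 1 }, session: session)  # runs under the 100 ms budget
#   # A per-operation timeoutMS, where exposed, would take precedence over both.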
failCommands: ["insert"] blockConnection: true blockTimeMS: 60 - name: withTransaction object: *session arguments: callback: - name: insertOne object: *collection arguments: session: *session document: { _id: 1 } expectError: isTimeoutError: true expectError: isTimeoutError: true expectEvents: - client: *client events: # Because the insert expects an error and gets an error, it technically succeeds, so withTransaction will # try to run commitTransaction. This will fail client-side, though, because the timeout has already expired, # so no command is sent. - commandStartedEvent: commandName: insert databaseName: *databaseName command: insert: *collectionName # withTransaction specifies timeoutMS for each operation in the callback that uses the session, so the # insert command should have a maxTimeMS field. maxTimeMS: { $$type: ["int", "long"] } - commandFailedEvent: commandName: insert mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_operations_timeout/tailable-awaitData.yml000066400000000000000000000170611505113246500337040ustar00rootroot00000000000000description: "timeoutMS behaves correctly for tailable awaitData cursors" schemaVersion: "1.9" runOnRequirements: - minServerVersion: "4.4" serverless: forbid # Capped collections are not allowed for serverless. createEntities: - client: id: &failPointClient failPointClient useMultipleMongoses: false - client: id: &client client uriOptions: timeoutMS: 200 useMultipleMongoses: false observeEvents: - commandStartedEvent - database: id: &database database client: *client databaseName: &databaseName test - collection: id: &collection collection database: *database collectionName: &collectionName coll initialData: - collectionName: *collectionName databaseName: *databaseName createOptions: capped: true size: 500 documents: - { _id: 0 } - { _id: 1 } tests: - description: "error if timeoutMode is cursor_lifetime" operations: - name: find object: *collection arguments: filter: {} timeoutMode: cursorLifetime cursorType: tailableAwait expectError: isClientError: true - description: "error if maxAwaitTimeMS is greater than timeoutMS" operations: - name: find object: *collection arguments: filter: {} cursorType: tailableAwait timeoutMS: 5 maxAwaitTimeMS: 10 expectError: isClientError: true - description: "error if maxAwaitTimeMS is equal to timeoutMS" operations: - name: find object: *collection arguments: filter: {} cursorType: tailableAwait timeoutMS: 5 maxAwaitTimeMS: 5 expectError: isClientError: true - description: "timeoutMS applied to find" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["find"] blockConnection: true blockTimeMS: 300 - name: find object: *collection arguments: filter: {} cursorType: tailableAwait expectError: isTimeoutError: true expectEvents: - client: *client events: - commandStartedEvent: commandName: find databaseName: *databaseName command: find: *collectionName tailable: true awaitData: true maxTimeMS: { $$exists: true } # If maxAwaitTimeMS is not set, timeoutMS should be refreshed for the getMore and the getMore should not have a # maxTimeMS field. 
- description: "timeoutMS is refreshed for getMore if maxAwaitTimeMS is not set" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 2 } data: failCommands: ["find", "getMore"] blockConnection: true blockTimeMS: 150 - name: createFindCursor object: *collection arguments: filter: {} cursorType: tailableAwait timeoutMS: 250 batchSize: 1 saveResultAsEntity: &tailableCursor tailableCursor # Iterate twice to force a getMore. The first iteration will return the document from the first batch and the # second will do a getMore. - name: iterateUntilDocumentOrError object: *tailableCursor - name: iterateUntilDocumentOrError object: *tailableCursor expectEvents: - client: *client events: - commandStartedEvent: commandName: find databaseName: *databaseName command: find: *collectionName tailable: true awaitData: true maxTimeMS: { $$exists: true } - commandStartedEvent: commandName: getMore databaseName: *databaseName command: getMore: { $$type: ["int", "long"] } collection: *collectionName maxTimeMS: { $$exists: false } # If maxAwaitTimeMS is set for the initial command, timeoutMS should still be refreshed for the getMore and the # getMore command should have a maxTimeMS field. - description: "timeoutMS is refreshed for getMore if maxAwaitTimeMS is set" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 2 } data: failCommands: ["find", "getMore"] blockConnection: true blockTimeMS: 150 - name: createFindCursor object: *collection arguments: filter: {} cursorType: tailableAwait timeoutMS: 250 batchSize: 1 maxAwaitTimeMS: 1 saveResultAsEntity: &tailableCursor tailableCursor # Iterate twice to force a getMore. - name: iterateUntilDocumentOrError object: *tailableCursor - name: iterateUntilDocumentOrError object: *tailableCursor expectEvents: - client: *client events: - commandStartedEvent: commandName: find databaseName: *databaseName command: find: *collectionName tailable: true awaitData: true maxTimeMS: { $$exists: true } - commandStartedEvent: commandName: getMore databaseName: *databaseName command: getMore: { $$type: ["int", "long"] } collection: *collectionName maxTimeMS: 1 # The timeoutMS value should be refreshed for getMore's. This is a failure test. The find inherits timeoutMS=200 from # the collection and the getMore blocks for 250ms, causing iteration to fail with a timeout error. - description: "timeoutMS is refreshed for getMore - failure" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["getMore"] blockConnection: true blockTimeMS: 250 - name: createFindCursor object: *collection arguments: filter: {} cursorType: tailableAwait batchSize: 1 saveResultAsEntity: &tailableCursor tailableCursor # Iterate twice to force a getMore. 
      - name: iterateUntilDocumentOrError
        object: *tailableCursor
      - name: iterateUntilDocumentOrError
        object: *tailableCursor
        expectError:
          isTimeoutError: true
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: find
              databaseName: *databaseName
              command:
                find: *collectionName
                tailable: true
                awaitData: true
                maxTimeMS: { $$exists: true }
          - commandStartedEvent:
              commandName: getMore
              databaseName: *databaseName
              command:
                getMore: { $$type: ["int", "long"] }
                collection: *collectionName

mongo-ruby-driver-2.21.3/spec/spec_tests/data/client_side_operations_timeout/tailable-non-awaitData.yml

description: "timeoutMS behaves correctly for tailable non-awaitData cursors"

schemaVersion: "1.9"

runOnRequirements:
  - minServerVersion: "4.4"

createEntities:
  - client:
      id: &failPointClient failPointClient
      useMultipleMongoses: false
  - client:
      id: &client client
      uriOptions:
        timeoutMS: 10
      useMultipleMongoses: false
      observeEvents:
        - commandStartedEvent
  - database:
      id: &database database
      client: *client
      databaseName: &databaseName test
  - collection:
      id: &collection collection
      database: *database
      collectionName: &collectionName coll

initialData:
  - collectionName: *collectionName
    databaseName: *databaseName
    createOptions:
      capped: true
      size: 500
    documents:
      - { _id: 0 }
      - { _id: 1 }

tests:
  - description: "error if timeoutMode is cursor_lifetime"
    operations:
      - name: find
        object: *collection
        arguments:
          filter: {}
          timeoutMode: cursorLifetime
          cursorType: tailable
        expectError:
          isClientError: true

  - description: "timeoutMS applied to find"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["find"]
              blockConnection: true
              blockTimeMS: 15
      - name: find
        object: *collection
        arguments:
          filter: {}
          cursorType: tailable
        expectError:
          isTimeoutError: true
    expectEvents:
      - client: *client
        events:
          # Due to SERVER-51153, the find command should not contain a maxTimeMS field for tailable non-awaitData
          # cursors because that would cap the lifetime of the created cursor.
          - commandStartedEvent:
              commandName: find
              databaseName: *databaseName
              command:
                find: *collectionName
                tailable: true
                awaitData: { $$exists: false }
                maxTimeMS: { $$exists: false }

  # The timeoutMS option should apply separately to the initial "find" and each getMore. This is a success test. The
  # find is executed with timeoutMS=20 and both find and getMore commands are configured to block for 15ms each.
  # Neither exceeds the timeout so the operation succeeds.
  - description: "timeoutMS is refreshed for getMore - success"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data:
              failCommands: ["find", "getMore"]
              blockConnection: true
              blockTimeMS: 15
      - name: createFindCursor
        object: *collection
        arguments:
          filter: {}
          cursorType: tailable
          timeoutMS: 20
          batchSize: 1
        saveResultAsEntity: &tailableCursor tailableCursor
      # Iterate the cursor twice: the first iteration will return the document from the batch in the find and the
      # second will do a getMore.
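      # Each command blocks for 15ms against a fresh 20ms budget, so neither the find nor the
      # getMore exceeds its per-command timeout and both iterations succeed.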
      - name: iterateUntilDocumentOrError
        object: *tailableCursor
      - name: iterateUntilDocumentOrError
        object: *tailableCursor
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: find
              databaseName: *databaseName
              command:
                find: *collectionName
                tailable: true
                awaitData: { $$exists: false }
                maxTimeMS: { $$exists: false }
          - commandStartedEvent:
              commandName: getMore
              databaseName: *databaseName
              command:
                getMore: { $$type: ["int", "long"] }
                collection: *collectionName
                maxTimeMS: { $$exists: false }

  # The timeoutMS option should apply separately to the initial "find" and each getMore. This is a failure test. The
  # find inherits timeoutMS=10 from the collection and the getMore command blocks for 15ms, causing iteration to fail
  # with a timeout error.
  - description: "timeoutMS is refreshed for getMore - failure"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *failPointClient
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: ["getMore"]
              blockConnection: true
              blockTimeMS: 15
      - name: createFindCursor
        object: *collection
        arguments:
          filter: {}
          cursorType: tailable
          batchSize: 1
        saveResultAsEntity: &tailableCursor tailableCursor
      # Iterate the cursor twice: the first iteration will return the document from the batch in the find and the
      # second will do a getMore.
      - name: iterateUntilDocumentOrError
        object: *tailableCursor
      - name: iterateUntilDocumentOrError
        object: *tailableCursor
        expectError:
          isTimeoutError: true
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              commandName: find
              databaseName: *databaseName
              command:
                find: *collectionName
                tailable: true
                awaitData: { $$exists: false }
                maxTimeMS: { $$exists: false }
          - commandStartedEvent:
              commandName: getMore
              databaseName: *databaseName
              command:
                getMore: { $$type: ["int", "long"] }
                collection: *collectionName
                maxTimeMS: { $$exists: false }

mongo-ruby-driver-2.21.3/spec/spec_tests/data/cmap/connection-must-have-id.yml

version: 1
style: unit
description: must have an ID number associated with it
operations:
  - name: ready
  - name: checkOut
  - name: checkOut
events:
  - type: ConnectionCheckOutStarted
    address: 42
  - type: ConnectionCreated
    connectionId: 42
    address: 42
  - type: ConnectionCheckedOut
    connectionId: 42
    address: 42
  - type: ConnectionCheckOutStarted
    address: 42
  - type: ConnectionCreated
    connectionId: 42
    address: 42
  - type: ConnectionCheckedOut
    connectionId: 42
    address: 42
ignore:
  - ConnectionPoolCreated
  - ConnectionPoolReady
  - ConnectionPoolClosed
  - ConnectionReady

mongo-ruby-driver-2.21.3/spec/spec_tests/data/cmap/connection-must-order-ids.yml

version: 1
style: unit
description: must have IDs assigned in order of creation
operations:
  - name: ready
  - name: checkOut
  - name: checkOut
events:
  - type: ConnectionCheckOutStarted
    address: 42
  - type: ConnectionCreated
    connectionId: 1
    address: 42
  - type: ConnectionCheckedOut
    connectionId: 1
    address: 42
  - type: ConnectionCheckOutStarted
    address: 42
  - type: ConnectionCreated
    connectionId: 2
    address: 42
  - type: ConnectionCheckedOut
    connectionId: 2
    address: 42
ignore:
  - ConnectionPoolCreated
  - ConnectionPoolReady
  - ConnectionPoolClosed
  - ConnectionReady
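# A minimal sketch of the bookkeeping these two tests pin down (illustrative pseudo-Ruby; the names
# are assumptions, not the driver's actual internals):
#
#   @next_id += 1                                        # incremented under the pool lock
#   Connection.new(id: @next_id, address: @address)      # so ids reflect creation order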
mongo-ruby-driver-2.21.3/spec/spec_tests/data/cmap/pool-checkin-destroy-closed.yml000066400000000000000000000011511505113246500301610ustar00rootroot00000000000000version: 1 style: unit description: must destroy checked in connection if pool has been closed operations: - name: ready - name: checkOut label: conn - name: close - name: checkIn connection: conn events: - type: ConnectionCheckedOut connectionId: 1 address: 42 - type: ConnectionPoolClosed address: 42 - type: ConnectionCheckedIn connectionId: 1 address: 42 - type: ConnectionClosed connectionId: 1 reason: poolClosed address: 42 ignore: - ConnectionPoolCreated - ConnectionPoolReady - ConnectionCreated - ConnectionReady - ConnectionCheckOutStarted mongo-ruby-driver-2.21.3/spec/spec_tests/data/cmap/pool-checkin-destroy-stale.yml000066400000000000000000000011341505113246500300210ustar00rootroot00000000000000version: 1 style: unit description: must destroy checked in connection if it is stale operations: - name: ready - name: checkOut label: conn - name: clear - name: checkIn connection: conn events: - type: ConnectionCheckedOut connectionId: 1 address: 42 - type: ConnectionPoolCleared address: 42 - type: ConnectionCheckedIn connectionId: 1 address: 42 - type: ConnectionClosed connectionId: 1 reason: stale address: 42 ignore: - ConnectionPoolCreated - ConnectionPoolReady - ConnectionCreated - ConnectionReady - ConnectionCheckOutStarted mongo-ruby-driver-2.21.3/spec/spec_tests/data/cmap/pool-checkin-make-available.yml000066400000000000000000000010371505113246500300570ustar00rootroot00000000000000version: 1 style: unit description: must make valid checked in connection available operations: - name: ready - name: checkOut label: conn - name: checkIn connection: conn - name: checkOut events: - type: ConnectionCheckedOut connectionId: 1 address: 42 - type: ConnectionCheckedIn connectionId: 1 address: 42 - type: ConnectionCheckedOut connectionId: 1 address: 42 ignore: - ConnectionPoolCreated - ConnectionPoolReady - ConnectionCreated - ConnectionReady - ConnectionCheckOutStarted mongo-ruby-driver-2.21.3/spec/spec_tests/data/cmap/pool-checkin.yml000066400000000000000000000011201505113246500252170ustar00rootroot00000000000000version: 1 style: unit description: must have a method of allowing the driver to check in a connection # Remove the topology runOn requirement when cmap specs are adjusted for lbs runOn: - topology: [ "single", "replicaset", "sharded" ] operations: - name: ready - name: checkOut label: conn - name: checkIn connection: conn events: - type: ConnectionCheckedIn connectionId: 42 address: 42 ignore: - ConnectionPoolCreated - ConnectionPoolReady - ConnectionCreated - ConnectionReady - ConnectionClosed - ConnectionCheckOutStarted - ConnectionCheckedOut mongo-ruby-driver-2.21.3/spec/spec_tests/data/cmap/pool-checkout-connection.yml000066400000000000000000000006611505113246500275660ustar00rootroot00000000000000version: 1 style: unit description: must be able to check out a connection operations: - name: ready - name: checkOut events: - type: ConnectionCheckOutStarted address: 42 - type: ConnectionCreated connectionId: 1 address: 42 - type: ConnectionReady connectionId: 1 address: 42 - type: ConnectionCheckedOut connectionId: 1 address: 42 ignore: - ConnectionPoolReady - ConnectionPoolCreated pool-checkout-custom-maxConnecting-is-enforced.yml000066400000000000000000000025311505113246500336470ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/cmapversion: 1 style: integration description: custom maxConnecting is 
enforced runOn: - minServerVersion: "4.4.0" - topology: [ "single", "replicaset", "sharded" ] failPoint: configureFailPoint: failCommand mode: "alwaysOn" data: failCommands: ["isMaster","hello"] closeConnection: false blockConnection: true blockTimeMS: 500 poolOptions: maxConnecting: 1 # gives opportunity for the checkout in thread2 to establish a new connection, which it must not do until thread1 establishes one maxPoolSize: 2 waitQueueTimeoutMS: 5000 operations: - name: ready # thread1 exists to consume the single permit to open a connection, # so that thread2 would be blocked acquiring a permit, which results in ordering its ConnectionCreated event after # the ConnectionReady event from thread1. - name: start target: thread1 - name: start target: thread2 - name: checkOut thread: thread1 - name: waitForEvent event: ConnectionCreated count: 1 - name: checkOut thread: thread2 - name: waitForEvent event: ConnectionReady count: 2 events: - type: ConnectionCreated - type: ConnectionReady - type: ConnectionCreated - type: ConnectionReady ignore: - ConnectionCheckOutStarted - ConnectionCheckedIn - ConnectionCheckedOut - ConnectionClosed - ConnectionPoolCreated - ConnectionPoolReady mongo-ruby-driver-2.21.3/spec/spec_tests/data/cmap/pool-checkout-error-closed.yml000066400000000000000000000015401505113246500300240ustar00rootroot00000000000000version: 1 style: unit description: must throw error if checkOut is called on a closed pool operations: - name: ready - name: checkOut label: conn1 - name: checkIn connection: conn1 - name: close - name: checkOut error: type: PoolClosedError message: Attempted to check out a connection from closed connection pool events: - type: ConnectionPoolCreated address: 42 options: 42 - type: ConnectionCheckOutStarted address: 42 - type: ConnectionCheckedOut address: 42 connectionId: 42 - type: ConnectionCheckedIn address: 42 connectionId: 42 - type: ConnectionPoolClosed address: 42 - type: ConnectionCheckOutStarted address: 42 - type: ConnectionCheckOutFailed address: 42 reason: poolClosed ignore: - ConnectionPoolReady - ConnectionCreated - ConnectionReady - ConnectionClosed mongo-ruby-driver-2.21.3/spec/spec_tests/data/cmap/pool-checkout-maxConnecting-is-enforced.yml000066400000000000000000000044321505113246500324200ustar00rootroot00000000000000version: 1 style: integration description: maxConnecting is enforced # Remove the topology runOn requirement when cmap specs are adjusted for lbs runOn: - # required for blockConnection in fail point minServerVersion: "4.4.0" - topology: [ "single", "replicaset", "sharded" ] failPoint: configureFailPoint: failCommand # high amount to ensure not interfered with by monitor checks. mode: { times: 50 } data: failCommands: ["isMaster","hello"] closeConnection: false blockConnection: true blockTimeMS: 750 poolOptions: maxPoolSize: 10 waitQueueTimeoutMS: 5000 operations: - name: ready # start 3 threads - name: start target: thread1 - name: start target: thread2 - name: start target: thread3 # start creating a Connection. This will take a while # due to the fail point. - name: checkOut thread: thread1 # wait for thread1 to actually start creating a Connection - name: waitForEvent event: ConnectionCreated count: 1 # wait some more time to ensure thread1 has begun establishing a Connection - name: wait ms: 100 # start 2 check out requests. 
Only one thread should # start creating a Connection and the other one should be # waiting for pendingConnectionCount to be less than maxConnecting, # only starting once thread1 finishes creating its Connection. - name: checkOut thread: thread2 - name: checkOut thread: thread3 # wait until all Connections have been created. - name: waitForEvent event: ConnectionReady count: 3 events: # thread1 creates its connection - type: ConnectionCreated address: 42 connectionId: 1 # either thread2 or thread3 creates its connection # the other thread is stuck waiting for maxConnecting to come down - type: ConnectionCreated address: 42 # thread1 finishes establishing its connection, freeing # up the blocked thread to start establishing - type: ConnectionReady address: 42 connectionId: 1 - type: ConnectionCreated address: 42 # the remaining two Connections finish establishing - type: ConnectionReady address: 42 - type: ConnectionReady address: 42 ignore: - ConnectionCheckOutStarted - ConnectionCheckedIn - ConnectionCheckedOut - ConnectionClosed - ConnectionPoolCreated - ConnectionPoolReady mongo-ruby-driver-2.21.3/spec/spec_tests/data/cmap/pool-checkout-maxConnecting-timeout.yml000066400000000000000000000040221505113246500317030ustar00rootroot00000000000000version: 1 style: integration description: waiting on maxConnecting is limited by WaitQueueTimeoutMS # Remove the topology runOn requirement when cmap specs are adjusted for lbs runOn: - # required for blockConnection in fail point minServerVersion: "4.4.0" - topology: [ "single", "replicaset", "sharded" ] failPoint: configureFailPoint: failCommand # high amount to ensure not interfered with by monitor checks. mode: { times: 50 } data: failCommands: ["isMaster","hello"] closeConnection: false blockConnection: true blockTimeMS: 750 poolOptions: maxPoolSize: 10 # Drivers that limit connection establishment by waitQueueTimeoutMS may skip # this test. While waitQueueTimeoutMS is technically not supposed to limit establishment time, # it will soon be deprecated, so it is easier for those drivers to just skip this test. waitQueueTimeoutMS: 50 operations: - name: ready # start creating two connections simultaneously. 
- name: start target: thread1 - name: checkOut thread: thread1 - name: start target: thread2 - name: checkOut thread: thread2 # wait for other two threads to start establishing - name: waitForEvent event: ConnectionCreated count: 2 # start a third thread that will be blocked waiting for # one of the other two to finish - name: start target: thread3 - name: checkOut thread: thread3 - name: waitForEvent event: ConnectionCheckOutFailed count: 1 # rejoin thread3, should experience error - name: waitForThread target: thread3 error: type: WaitQueueTimeoutError message: Timed out while checking out a connection from connection pool events: - type: ConnectionCheckOutStarted address: 42 - type: ConnectionCheckOutStarted address: 42 - type: ConnectionCheckOutStarted address: 42 - type: ConnectionCheckOutFailed reason: timeout address: 42 ignore: - ConnectionCreated - ConnectionCheckedIn - ConnectionCheckedOut - ConnectionClosed - ConnectionPoolCreated - ConnectionPoolReady pool-checkout-minPoolSize-connection-maxConnecting.yml000066400000000000000000000043371505113246500345540ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/cmapversion: 1 style: integration description: threads blocked by maxConnecting check out minPoolSize connections runOn: - # required for blockConnection in fail point minServerVersion: "4.4.0" - topology: [ "single", "replicaset", "sharded" ] failPoint: configureFailPoint: failCommand mode: "alwaysOn" data: failCommands: ["isMaster","hello"] closeConnection: false blockConnection: true blockTimeMS: 1000 poolOptions: # allows both thread1 and the background thread to start opening connections concurrently minPoolSize: 2 # gives opportunity for the checkout in thread2 to open a new connection, which it must not do nonetheless maxPoolSize: 3 waitQueueTimeoutMS: 5000 operations: - name: ready # thread1 exists to hold on one of the two permits to open a connection (the other one is initially held by the background thread), # so that thread2 would be blocked acquiring a permit, which opens an opportunity for it to grab the connection newly opened # by the background thread instead of opening a third connection. - name: start target: thread1 - name: start target: thread2 # Ideally, thread1 should be holding for its permit to open a connection till the end of the test, but we cannot express that. # This delay emulates the above requirement: # - it is long enough to make sure that the background thread opens a connection before thread1 releases its permit; # - it is short enough to allow thread2 to become blocked acquiring a permit to open a connection, and then grab the connection # opened by the background thread, before the background thread releases its permit. 
- name: wait ms: 200 - name: checkOut thread: thread1 - name: waitForEvent event: ConnectionCreated count: 2 - name: checkOut thread: thread2 - name: waitForEvent event: ConnectionCheckedOut count: 2 events: # exactly 2 connections must be created and checked out - type: ConnectionCreated address: 42 - type: ConnectionCreated address: 42 - type: ConnectionCheckedOut address: 42 - type: ConnectionCheckedOut address: 42 ignore: - ConnectionPoolReady - ConnectionClosed - ConnectionReady - ConnectionPoolCreated - ConnectionCheckOutStarted mongo-ruby-driver-2.21.3/spec/spec_tests/data/cmap/pool-checkout-multiple.yml000066400000000000000000000014731505113246500272640ustar00rootroot00000000000000version: 1 style: unit description: must be able to check out multiple connections at the same time operations: - name: ready - name: start target: thread1 - name: start target: thread2 - name: start target: thread3 - name: checkOut thread: thread1 - name: checkOut thread: thread2 - name: checkOut thread: thread3 - name: waitForThread target: thread1 - name: waitForThread target: thread2 - name: waitForThread target: thread3 events: - type: ConnectionCheckedOut connectionId: 42 address: 42 - type: ConnectionCheckedOut connectionId: 42 address: 42 - type: ConnectionCheckedOut connectionId: 42 address: 42 ignore: - ConnectionCreated - ConnectionPoolReady - ConnectionReady - ConnectionPoolCreated - ConnectionCheckOutStarted mongo-ruby-driver-2.21.3/spec/spec_tests/data/cmap/pool-checkout-no-idle.yml000066400000000000000000000016411505113246500267550ustar00rootroot00000000000000version: 1 style: unit description: must destroy and must not check out an idle connection if found while iterating available connections poolOptions: maxIdleTimeMS: 10 backgroundThreadIntervalMS: -1 operations: - name: ready - name: checkOut label: conn - name: checkIn connection: conn - name: wait ms: 50 - name: checkOut - name: waitForEvent event: ConnectionCheckedOut count: 2 events: - type: ConnectionPoolCreated address: 42 options: 42 - type: ConnectionCheckedOut connectionId: 1 address: 42 - type: ConnectionCheckedIn connectionId: 1 address: 42 # In between these, wait so connection becomes idle - type: ConnectionClosed connectionId: 1 reason: idle address: 42 - type: ConnectionCheckedOut connectionId: 2 address: 42 ignore: - ConnectionReady - ConnectionPoolReady - ConnectionCreated - ConnectionCheckOutStarted mongo-ruby-driver-2.21.3/spec/spec_tests/data/cmap/pool-checkout-no-stale.yml000066400000000000000000000016161505113246500271520ustar00rootroot00000000000000version: 1 style: unit description: must destroy and must not check out a stale connection if found while iterating available connections poolOptions: backgroundThreadIntervalMS: -1 operations: - name: ready - name: checkOut label: conn - name: checkIn connection: conn - name: clear - name: ready - name: checkOut - name: waitForEvent event: ConnectionCheckedOut count: 2 events: - type: ConnectionPoolCreated address: 42 options: 42 - type: ConnectionCheckedOut connectionId: 1 address: 42 - type: ConnectionCheckedIn connectionId: 1 address: 42 - type: ConnectionPoolCleared address: 42 - type: ConnectionClosed connectionId: 1 reason: stale address: 42 - type: ConnectionCheckedOut connectionId: 2 address: 42 ignore: - ConnectionReady - ConnectionPoolReady - ConnectionCreated - ConnectionCheckOutStarted 
pool-checkout-returned-connection-maxConnecting.yml000066400000000000000000000047011505113246500341270ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/cmapversion: 1 style: integration description: threads blocked by maxConnecting check out returned connections # Remove the topology runOn requirement when cmap specs are adjusted for lbs runOn: - # required for blockConnection in fail point minServerVersion: "4.4.0" - topology: [ "single", "replicaset", "sharded" ] failPoint: configureFailPoint: failCommand # high amount to ensure not interfered with by monitor checks. mode: { times: 50 } data: failCommands: ["isMaster","hello"] closeConnection: false blockConnection: true blockTimeMS: 750 poolOptions: maxPoolSize: 10 waitQueueTimeoutMS: 5000 operations: - name: ready # check out a connection and hold on to it. - name: checkOut label: conn0 # then start three threads that all attempt to check out. Two threads # will fill maxConnecting, and the other should be waiting either for # the other two to finish or for the main thread to check its connection # back in. - name: start target: thread1 - name: checkOut thread: thread1 - name: start target: thread2 - name: checkOut thread: thread2 - name: start target: thread3 - name: checkOut thread: thread3 # wait for all three to start checking out and a little longer # for the establishments to begin. - name: waitForEvent event: ConnectionCheckOutStarted count: 4 - name: wait ms: 100 # check original connection back in, so the thread that isn't # currently establishing will become unblocked. Then wait for # all threads to complete. - name: checkIn connection: conn0 - name: waitForEvent event: ConnectionCheckedOut count: 4 events: # main thread checking out a Connection and holding it - type: ConnectionCreated address: 42 connectionId: 1 - type: ConnectionCheckedOut address: 42 # two threads creating their Connections - type: ConnectionCreated address: 42 - type: ConnectionCreated address: 42 # main thread checking its Connection back in - type: ConnectionCheckedIn connectionId: 1 address: 42 # remaining thread checking out the returned Connection - type: ConnectionCheckedOut connectionId: 1 address: 42 # first two threads finishing Connection establishment - type: ConnectionCheckedOut address: 42 - type: ConnectionCheckedOut address: 42 ignore: - ConnectionPoolReady - ConnectionClosed - ConnectionReady - ConnectionPoolCreated - ConnectionCheckOutStarted mongo-ruby-driver-2.21.3/spec/spec_tests/data/cmap/pool-clear-interrupting-pending-connections.yml000066400000000000000000000021111505113246500333740ustar00rootroot00000000000000version: 1 style: integration description: clear with interruptInUseConnections = true closes pending connections # Remove the topology runOn requirement when cmap specs are adjusted for lbs runOn: - minServerVersion: "4.9.0" - topology: [ "single", "replicaset", "sharded" ] failPoint: configureFailPoint: failCommand mode: "alwaysOn" data: failCommands: ["isMaster","hello"] closeConnection: false blockConnection: true blockTimeMS: 1000 poolOptions: minPoolSize: 0 operations: - name: ready - name: start target: thread1 - name: checkOut thread: thread1 - name: waitForEvent event: ConnectionCreated count: 1 - name: clear interruptInUseConnections: true - name: waitForEvent event: ConnectionCheckOutFailed count: 1 events: - type: ConnectionCheckOutStarted - type: ConnectionCreated - type: ConnectionPoolCleared interruptInUseConnections: true - type: ConnectionClosed - type: ConnectionCheckOutFailed 
ignore: - ConnectionCheckedIn - ConnectionCheckedOut - ConnectionPoolCreated - ConnectionPoolReady mongo-ruby-driver-2.21.3/spec/spec_tests/data/cmap/pool-clear-min-size.yml000066400000000000000000000016301505113246500264400ustar00rootroot00000000000000version: 1 style: unit description: pool clear halts background minPoolSize establishments poolOptions: minPoolSize: 1 backgroundThreadIntervalMS: 50 # Remove this runOn requirement when cmap specs are adjusted for lbs runOn: - topology: [ "single", "replicaset", "sharded" ] operations: - name: ready - name: waitForEvent event: ConnectionReady count: 1 - name: clear # ensure no connections created after clear - name: wait ms: 200 - name: ready - name: waitForEvent event: ConnectionReady count: 2 events: - type: ConnectionPoolReady address: 42 - type: ConnectionCreated address: 42 - type: ConnectionReady address: 42 - type: ConnectionPoolCleared address: 42 - type: ConnectionPoolReady address: 42 - type: ConnectionCreated address: 42 - type: ConnectionReady address: 42 ignore: - ConnectionPoolCreated - ConnectionClosed mongo-ruby-driver-2.21.3/spec/spec_tests/data/cmap/pool-clear-paused.yml000066400000000000000000000006471505113246500261750ustar00rootroot00000000000000version: 1 style: unit description: clearing a paused pool emits no events # Remove the topology runOn requirement when cmap specs are adjusted for lbs runOn: - topology: [ "single", "replicaset", "sharded" ] operations: - name: clear - name: ready - name: clear - name: clear events: - type: ConnectionPoolReady address: 42 - type: ConnectionPoolCleared address: 42 ignore: - ConnectionPoolCreated mongo-ruby-driver-2.21.3/spec/spec_tests/data/cmap/pool-clear-ready.yml000066400000000000000000000016341505113246500260150ustar00rootroot00000000000000version: 1 style: unit description: after clear, cannot check out connections until pool ready # Remove the topology runOn requirement when cmap specs are adjusted for lbs runOn: - topology: [ "single", "replicaset", "sharded" ] operations: - name: ready - name: checkOut - name: clear - name: start target: thread1 - name: checkOut thread: thread1 - name: waitForEvent event: ConnectionCheckOutFailed count: 1 - name: ready - name: checkOut events: - type: ConnectionPoolReady address: 42 - type: ConnectionCheckedOut address: 42 connectionId: 42 - type: ConnectionPoolCleared address: 42 - type: ConnectionCheckOutFailed address: 42 reason: connectionError - type: ConnectionPoolReady address: 42 - type: ConnectionCheckedOut address: 42 ignore: - ConnectionPoolCreated - ConnectionReady - ConnectionCheckOutStarted - ConnectionCreated pool-clear-schedule-run-interruptInUseConnections-false.yml000066400000000000000000000021451505113246500355170ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/cmapversion: 1 style: unit description: Pool clear SHOULD schedule the next background thread run immediately (interruptInUseConnections = false) poolOptions: # ensure it's not involved by default backgroundThreadIntervalMS: 10000 operations: - name: ready - name: checkOut - name: checkOut label: conn - name: checkIn connection: conn - name: clear interruptInUseConnections: false - name: waitForEvent event: ConnectionPoolCleared count: 1 timeout: 1000 - name: waitForEvent event: ConnectionClosed count: 1 timeout: 1000 - name: close events: - type: ConnectionCheckedOut connectionId: 1 address: 42 - type: ConnectionCheckedOut connectionId: 2 address: 42 - type: ConnectionCheckedIn connectionId: 2 address: 42 - type: ConnectionPoolCleared 
interruptInUseConnections: false - type: ConnectionClosed connectionId: 2 reason: stale address: 42 - type: ConnectionPoolClosed address: 42 ignore: - ConnectionCreated - ConnectionPoolReady - ConnectionReady - ConnectionCheckOutStarted - ConnectionPoolCreated mongo-ruby-driver-2.21.3/spec/spec_tests/data/cmap/pool-close-destroy-conns.yml000066400000000000000000000012071505113246500275330ustar00rootroot00000000000000version: 1 style: unit description: When a pool is closed, it MUST first destroy all available connections in that pool operations: - name: ready - name: checkOut - name: checkOut label: conn - name: checkOut - name: checkIn connection: conn - name: close events: - type: ConnectionCheckedIn connectionId: 2 address: 42 - type: ConnectionClosed connectionId: 2 reason: poolClosed address: 42 - type: ConnectionPoolClosed address: 42 ignore: - ConnectionCreated - ConnectionPoolReady - ConnectionReady - ConnectionPoolCreated - ConnectionCheckOutStarted - ConnectionCheckedOut mongo-ruby-driver-2.21.3/spec/spec_tests/data/cmap/pool-close.yml000066400000000000000000000003351505113246500247270ustar00rootroot00000000000000version: 1 style: unit description: must be able to manually close a pool operations: - name: close events: - type: ConnectionPoolCreated address: 42 options: 42 - type: ConnectionPoolClosed address: 42 mongo-ruby-driver-2.21.3/spec/spec_tests/data/cmap/pool-create-max-size.yml000066400000000000000000000030511505113246500266160ustar00rootroot00000000000000version: 1 style: unit description: must never exceed maxPoolSize total connections poolOptions: maxPoolSize: 3 operations: - name: ready - name: checkOut label: conn1 - name: checkOut - name: checkOut label: conn2 - name: checkIn connection: conn2 - name: checkOut - name: start target: thread1 - name: checkOut thread: thread1 - name: waitForEvent event: ConnectionCheckOutStarted count: 5 - name: checkIn connection: conn1 - name: waitForThread target: thread1 events: - type: ConnectionPoolCreated address: 42 options: 42 - type: ConnectionCheckOutStarted address: 42 - type: ConnectionCreated connectionId: 42 address: 42 - type: ConnectionCheckedOut connectionId: 42 address: 42 - type: ConnectionCheckOutStarted address: 42 - type: ConnectionCreated connectionId: 42 address: 42 - type: ConnectionCheckedOut connectionId: 42 address: 42 - type: ConnectionCheckOutStarted address: 42 - type: ConnectionCreated connectionId: 42 address: 42 - type: ConnectionCheckedOut connectionId: 42 address: 42 - type: ConnectionCheckedIn connectionId: 42 address: 42 - type: ConnectionCheckOutStarted address: 42 - type: ConnectionCheckedOut connectionId: 42 address: 42 - type: ConnectionCheckOutStarted address: 42 - type: ConnectionCheckedIn connectionId: 42 address: 42 - type: ConnectionCheckedOut connectionId: 42 address: 42 ignore: - ConnectionReady - ConnectionPoolReady mongo-ruby-driver-2.21.3/spec/spec_tests/data/cmap/pool-create-min-size-error.yml000066400000000000000000000022101505113246500277370ustar00rootroot00000000000000version: 1 style: integration description: error during minPoolSize population clears pool runOn: - # required for appName in fail point minServerVersion: "4.9.0" # Remove the topology runOn requirement when cmap specs are adjusted for lbs - topology: [ "single", "replicaset", "sharded" ] failPoint: configureFailPoint: failCommand # high amount to ensure not interfered with by monitor checks. 
mode: { times: 50 } data: failCommands: ["isMaster","hello"] closeConnection: true appName: "poolCreateMinSizeErrorTest" poolOptions: minPoolSize: 1 backgroundThreadIntervalMS: 50 appName: "poolCreateMinSizeErrorTest" operations: - name: ready - name: waitForEvent event: ConnectionPoolCleared count: 1 # ensure pool doesn't start making new connections - name: wait ms: 200 events: - type: ConnectionPoolReady address: 42 - type: ConnectionCreated address: 42 # The ruby driver clears the pool before closing the connection. - type: ConnectionPoolCleared address: 42 - type: ConnectionClosed address: 42 connectionId: 42 reason: error ignore: - ConnectionPoolCreated mongo-ruby-driver-2.21.3/spec/spec_tests/data/cmap/pool-create-min-size.yml000066400000000000000000000020471505113246500266200ustar00rootroot00000000000000version: 1 style: unit description: must be able to start a pool with minPoolSize connections # Remove the topology runOn requirement when cmap specs are adjusted for lbs runOn: - topology: [ "single", "replicaset", "sharded" ] poolOptions: minPoolSize: 3 operations: # ensure no connections are created until this pool is ready - name: wait ms: 200 - name: ready - name: waitForEvent event: ConnectionCreated count: 3 - name: waitForEvent event: ConnectionReady count: 3 - name: checkOut events: - type: ConnectionPoolCreated address: 42 options: 42 - type: ConnectionPoolReady address: 42 - type: ConnectionCreated connectionId: 42 address: 42 - type: ConnectionCreated connectionId: 42 address: 42 - type: ConnectionCreated connectionId: 42 address: 42 # Ensures that by the time pool is closed, there are at least 3 connections - type: ConnectionCheckedOut connectionId: 42 address: 42 ignore: - ConnectionReady - ConnectionClosed - ConnectionCheckOutStarted mongo-ruby-driver-2.21.3/spec/spec_tests/data/cmap/pool-create-with-options.yml000066400000000000000000000006531505113246500275320ustar00rootroot00000000000000version: 1 style: unit description: must be able to start a pool with various options set poolOptions: maxPoolSize: 50 minPoolSize: 5 maxIdleTimeMS: 100 operations: - name: waitForEvent event: ConnectionPoolCreated count: 1 events: - type: ConnectionPoolCreated address: 42 options: maxPoolSize: 50 minPoolSize: 5 maxIdleTimeMS: 100 ignore: - ConnectionCreated - ConnectionReady mongo-ruby-driver-2.21.3/spec/spec_tests/data/cmap/pool-create.yml000066400000000000000000000003421505113246500250630ustar00rootroot00000000000000version: 1 style: unit description: must be able to create a pool operations: - name: waitForEvent event: ConnectionPoolCreated count: 1 events: - type: ConnectionPoolCreated address: 42 options: 42 mongo-ruby-driver-2.21.3/spec/spec_tests/data/cmap/pool-ready-ready.yml000066400000000000000000000010361505113246500260270ustar00rootroot00000000000000version: 1 style: unit description: readying a ready pool emits no events # Remove the topology runOn requirement when cmap specs are adjusted for lbs runOn: - topology: [ "single", "replicaset", "sharded" ] operations: - name: ready - name: ready - name: ready # the first ready after this clear should emit an event - name: clear - name: ready events: - type: ConnectionPoolReady address: 42 - type: ConnectionPoolCleared address: 42 - type: ConnectionPoolReady address: 42 ignore: - ConnectionPoolCreated mongo-ruby-driver-2.21.3/spec/spec_tests/data/cmap/pool-ready.yml000066400000000000000000000012151505113246500247240ustar00rootroot00000000000000version: 1 style: unit description: pool starts as cleared and becomes ready 
operations:
  - name: start
    target: thread1
  - name: checkOut
    thread: thread1
  - name: waitForEvent
    event: ConnectionCheckOutFailed
    count: 1
  - name: ready
  - name: checkOut
events:
  - type: ConnectionCheckOutStarted
    address: 42
  - type: ConnectionCheckOutFailed
    reason: connectionError
    address: 42
  - type: ConnectionPoolReady
    address: 42
  - type: ConnectionCheckOutStarted
    address: 42
  - type: ConnectionCreated
    address: 42
  - type: ConnectionCheckedOut
    address: 42
ignore:
  - ConnectionPoolCreated
  - ConnectionReady

mongo-ruby-driver-2.21.3/spec/spec_tests/data/cmap/wait-queue-fairness.yml

version: 1
style: unit
description: must issue Connections to threads in the order that the threads entered the queue
poolOptions:
  maxPoolSize: 1
  waitQueueTimeoutMS: 5000
operations:
  - name: ready
  # Check out sole connection in pool
  - name: checkOut
    label: conn0
  # Create 4 threads, have them all queue up for connections
  # Note: this might become non-deterministic depending on how you
  # implement your test runner. The goal is for each thread to
  # have started and begun checkOut before the next thread starts.
  # The sleep operations should make this more consistent.
  - name: start
    target: thread1
  - name: checkOut
    thread: thread1
    label: conn1
  - name: waitForEvent
    event: ConnectionCheckOutStarted
    count: 2
  # Give thread1 some time to actually enter the wait queue since the
  # ConnectionCheckOutStarted event is published beforehand.
  - name: wait
    ms: 100
  - name: start
    target: thread2
  - name: checkOut
    thread: thread2
    label: conn2
  - name: waitForEvent
    event: ConnectionCheckOutStarted
    count: 3
  # Give thread2 some time to actually enter the wait queue since the
  # ConnectionCheckOutStarted event is published beforehand.
  - name: wait
    ms: 100
  - name: start
    target: thread3
  - name: checkOut
    thread: thread3
    label: conn3
  - name: waitForEvent
    event: ConnectionCheckOutStarted
    count: 4
  # Give thread3 some time to actually enter the wait queue since the
  # ConnectionCheckOutStarted event is published beforehand.
  - name: wait
    ms: 100
  - name: start
    target: thread4
  - name: checkOut
    thread: thread4
    label: conn4
  - name: waitForEvent
    event: ConnectionCheckOutStarted
    count: 5
  # Give thread4 some time to actually enter the wait queue since the
  # ConnectionCheckOutStarted event is published beforehand.
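  # What "fairness" means operationally here (illustrative pseudo-Ruby, not the driver's actual code):
  #
  #   @wait_queue << waiter                 # waiters are appended in arrival order
  #   @wait_queue.shift.hand_over(conn)     # each check-in wakes the longest-waiting thread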
- name: wait ms: 100 # From main thread, keep checking in connection and then wait for appropriate thread # Test will timeout if threads are not enqueued in proper order - name: checkIn connection: conn0 - name: waitForThread target: thread1 - name: checkIn connection: conn1 - name: waitForThread target: thread2 - name: checkIn connection: conn2 - name: waitForThread target: thread3 - name: checkIn connection: conn3 - name: waitForThread target: thread4 events: - type: ConnectionCheckOutStarted address: 42 - type: ConnectionCheckedOut connectionId: 42 address: 42 - type: ConnectionCheckOutStarted address: 42 - type: ConnectionCheckOutStarted address: 42 - type: ConnectionCheckOutStarted address: 42 - type: ConnectionCheckOutStarted address: 42 - type: ConnectionCheckedIn connectionId: 42 address: 42 - type: ConnectionCheckedOut connectionId: 42 address: 42 - type: ConnectionCheckedIn connectionId: 42 address: 42 - type: ConnectionCheckedOut connectionId: 42 address: 42 - type: ConnectionCheckedIn connectionId: 42 address: 42 - type: ConnectionCheckedOut connectionId: 42 address: 42 - type: ConnectionCheckedIn connectionId: 42 address: 42 - type: ConnectionCheckedOut connectionId: 42 address: 42 ignore: - ConnectionCreated - ConnectionReady - ConnectionClosed - ConnectionPoolReady - ConnectionPoolCreated mongo-ruby-driver-2.21.3/spec/spec_tests/data/cmap/wait-queue-timeout.yml000066400000000000000000000024761505113246500264350ustar00rootroot00000000000000version: 1 style: unit description: must aggressively timeout threads enqueued longer than waitQueueTimeoutMS # Remove the topology runOn requirement when cmap specs are adjusted for lbs runOn: - topology: [ "single", "replicaset", "sharded" ] poolOptions: maxPoolSize: 1 waitQueueTimeoutMS: 50 operations: - name: ready # Check out only possible connection - name: checkOut label: conn0 # Start a thread, have it enter the wait queue - name: start target: thread1 - name: checkOut thread: thread1 # Wait for other thread to time out, then check in connection - name: waitForEvent event: ConnectionCheckOutFailed count: 1 - name: checkIn connection: conn0 # Rejoin thread1, should experience error - name: waitForThread target: thread1 error: type: WaitQueueTimeoutError message: Timed out while checking out a connection from connection pool events: - type: ConnectionCheckOutStarted address: 42 - type: ConnectionCheckedOut connectionId: 42 address: 42 - type: ConnectionCheckOutStarted address: 42 - type: ConnectionCheckOutFailed reason: timeout address: 42 - type: ConnectionCheckedIn connectionId: 42 address: 42 ignore: - ConnectionCreated - ConnectionReady - ConnectionClosed - ConnectionPoolCreated - ConnectionPoolReady mongo-ruby-driver-2.21.3/spec/spec_tests/data/collection_management/000077500000000000000000000000001505113246500255365ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/collection_management/clustered-indexes.yml000066400000000000000000000075471505113246500317250ustar00rootroot00000000000000description: "clustered-indexes" schemaVersion: "1.4" runOnRequirements: - minServerVersion: "5.3" serverless: forbid createEntities: - client: id: &client0 client0 observeEvents: [ commandStartedEvent ] - database: id: &database0 database0 client: *client0 databaseName: &database0Name ci-tests - collection: id: &collection0 collection0 database: *database0 collectionName: &collection0Name test initialData: - collectionName: *collection0Name databaseName: *database0Name documents: [] tests: - description: "createCollection 
with clusteredIndex" operations: - name: dropCollection object: *database0 arguments: collection: *collection0Name - name: createCollection object: *database0 arguments: collection: *collection0Name clusteredIndex: &clusteredIndex key: { _id: 1 } unique: true name: &index0Name "test index" - name: assertCollectionExists object: testRunner arguments: databaseName: *database0Name collectionName: *collection0Name expectEvents: - client: *client0 events: - commandStartedEvent: command: drop: *collection0Name databaseName: *database0Name - commandStartedEvent: command: create: *collection0Name clusteredIndex: *clusteredIndex databaseName: *database0Name - description: "listCollections includes clusteredIndex" operations: - name: dropCollection object: *database0 arguments: collection: *collection0Name - name: createCollection object: *database0 arguments: collection: *collection0Name clusteredIndex: *clusteredIndex - name: listCollections object: *database0 arguments: filter: &filter { name: { $eq: *collection0Name } } expectResult: - name: *collection0Name options: clusteredIndex: key: { _id: 1 } unique: true name: *index0Name v: { $$type: [ int, long ] } expectEvents: - client: *client0 events: - commandStartedEvent: command: drop: *collection0Name databaseName: *database0Name - commandStartedEvent: command: create: *collection0Name clusteredIndex: *clusteredIndex databaseName: *database0Name - commandStartedEvent: command: listCollections: 1 filter: *filter databaseName: *database0Name - description: "listIndexes returns the index" operations: - name: dropCollection object: *database0 arguments: collection: *collection0Name - name: createCollection object: *database0 arguments: collection: *collection0Name clusteredIndex: *clusteredIndex - name: listIndexes object: *collection0 expectResult: - key: { _id: 1 } name: *index0Name clustered: true unique: true v: { $$type: [ int, long ] } expectEvents: - client: *client0 events: - commandStartedEvent: command: drop: *collection0Name databaseName: *database0Name - commandStartedEvent: command: create: *collection0Name clusteredIndex: *clusteredIndex databaseName: *database0Name - commandStartedEvent: command: listIndexes: *collection0Name databaseName: *database0Name createCollection-pre_and_post_images.yml000066400000000000000000000026361505113246500354700ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/collection_managementdescription: "createCollection-pre_and_post_images" schemaVersion: "1.4" runOnRequirements: - minServerVersion: "6.0" serverless: forbid createEntities: - client: id: &client0 client0 observeEvents: [ commandStartedEvent ] - database: id: &database0 database0 client: *client0 databaseName: &database0Name papi-tests - collection: id: &collection0 collection0 database: *database0 collectionName: &collection0Name test tests: - description: "createCollection with changeStreamPreAndPostImages enabled" operations: - name: dropCollection object: *database0 arguments: collection: *collection0Name - name: createCollection object: *database0 arguments: collection: *collection0Name changeStreamPreAndPostImages: { enabled: true } - name: assertCollectionExists object: testRunner arguments: databaseName: *database0Name collectionName: *collection0Name expectEvents: - client: *client0 events: - commandStartedEvent: command: drop: *collection0Name databaseName: *database0Name - commandStartedEvent: command: create: *collection0Name changeStreamPreAndPostImages: { enabled: true } databaseName: *database0Name 
modifyCollection-errorResponse.yml000066400000000000000000000030671505113246500343610ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/collection_managementdescription: "modifyCollection-errorResponse" schemaVersion: "1.12" createEntities: - client: id: &client0 client0 observeEvents: [ commandStartedEvent ] - database: id: &database0 database0 client: *client0 databaseName: &database0Name collMod-tests - collection: id: &collection0 collection0 database: *database0 collectionName: &collection0Name test initialData: &initialData - collectionName: *collection0Name databaseName: *database0Name documents: - { _id: 1, x: 1 } - { _id: 2, x: 1 } tests: - description: "modifyCollection prepareUnique violations are accessible" runOnRequirements: - minServerVersion: "5.2" # SERVER-61158 operations: - name: createIndex object: *collection0 arguments: keys: { x: 1 } - name: modifyCollection object: *database0 arguments: collection: *collection0Name index: keyPattern: { x: 1 } prepareUnique: true - name: insertOne object: *collection0 arguments: document: { _id: 3, x: 1 } expectError: errorCode: 11000 # DuplicateKey - name: modifyCollection object: *database0 arguments: collection: *collection0Name index: keyPattern: { x: 1 } unique: true expectError: isClientError: false errorCode: 359 # CannotConvertIndexToUnique errorResponse: violations: - { ids: [ 1, 2 ] } modifyCollection-pre_and_post_images.yml000066400000000000000000000033031505113246500355040ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/collection_managementdescription: "modifyCollection-pre_and_post_images" schemaVersion: "1.4" runOnRequirements: - minServerVersion: "6.0" serverless: forbid createEntities: - client: id: &client0 client0 observeEvents: [ commandStartedEvent ] - database: id: &database0 database0 client: *client0 databaseName: &database0Name papi-tests - collection: id: &collection0 collection0 database: *database0 collectionName: &collection0Name test tests: - description: "modifyCollection to changeStreamPreAndPostImages enabled" operations: - name: dropCollection object: *database0 arguments: collection: *collection0Name - name: createCollection object: *database0 arguments: collection: *collection0Name changeStreamPreAndPostImages: { enabled: false } - name: assertCollectionExists object: testRunner arguments: databaseName: *database0Name collectionName: *collection0Name - name: modifyCollection object: *database0 arguments: collection: *collection0Name changeStreamPreAndPostImages: { enabled: true } expectEvents: - client: *client0 events: - commandStartedEvent: command: drop: *collection0Name databaseName: *database0Name - commandStartedEvent: command: create: *collection0Name changeStreamPreAndPostImages: { enabled: false } - commandStartedEvent: command: collMod: *collection0Name changeStreamPreAndPostImages: { enabled: true } mongo-ruby-driver-2.21.3/spec/spec_tests/data/collection_management/timeseries-collection.yml000066400000000000000000000113131505113246500325620ustar00rootroot00000000000000description: "timeseries-collection" schemaVersion: "1.0" runOnRequirements: - minServerVersion: "5.0" createEntities: - client: id: &client0 client0 observeEvents: [ commandStartedEvent ] - database: id: &database0 database0 client: *client0 databaseName: &database0Name ts-tests - collection: id: &collection0 collection0 database: *database0 collectionName: &collection0Name test initialData: - collectionName: *collection0Name databaseName: *database0Name documents: [] 
tests: - description: "createCollection with all options" operations: - name: dropCollection object: *database0 arguments: collection: *collection0Name - name: createCollection object: *database0 arguments: collection: *collection0Name # expireAfterSeconds should be an int64 (as it is stored on the server). expireAfterSeconds: 604800 timeseries: ×eries0 timeField: "time" metaField: "meta" granularity: "minutes" - name: assertCollectionExists object: testRunner arguments: databaseName: *database0Name collectionName: *collection0Name expectEvents: - client: *client0 events: - commandStartedEvent: command: drop: *collection0Name databaseName: *database0Name - commandStartedEvent: command: create: *collection0Name expireAfterSeconds: 604800 timeseries: *timeseries0 databaseName: *database0Name # Unlike regular collections, time-series collections allow duplicate ids. - description: "insertMany with duplicate ids" operations: - name: dropCollection object: *database0 arguments: collection: *collection0Name - name: createCollection object: *database0 arguments: collection: *collection0Name # expireAfterSeconds should be an int64 (as it is stored on the server). expireAfterSeconds: 604800 timeseries: *timeseries0 - name: assertCollectionExists object: testRunner arguments: databaseName: *database0Name collectionName: *collection0Name - name: insertMany object: *collection0 arguments: documents: &docs - { _id: 1, time: { $date: { $numberLong: "1552949630482" } } } - { _id: 1, time: { $date: { $numberLong: "1552949630483" } } } - name: find object: *collection0 arguments: filter: {} sort: { time: 1 } expectResult: *docs expectEvents: - client: *client0 events: - commandStartedEvent: command: drop: *collection0Name databaseName: *database0Name - commandStartedEvent: command: create: *collection0Name expireAfterSeconds: 604800 timeseries: *timeseries0 databaseName: *database0Name - commandStartedEvent: command: insert: *collection0Name documents: *docs - commandStartedEvent: command: find: *collection0Name filter: {} sort: { time: 1 } databaseName: *database0Name - description: "createCollection with bucketing options" runOnRequirements: - minServerVersion: "7.0" operations: - name: dropCollection object: *database0 arguments: collection: *collection0Name - name: createCollection object: *database0 arguments: collection: *collection0Name timeseries: ×eries1 timeField: "time" bucketMaxSpanSeconds: 3600 bucketRoundingSeconds: 3600 - name: assertCollectionExists object: testRunner arguments: databaseName: *database0Name collectionName: *collection0Name expectEvents: - client: *client0 events: - commandStartedEvent: command: drop: *collection0Name databaseName: *database0Name - commandStartedEvent: command: create: *collection0Name timeseries: *timeseries1 databaseName: *database0Name mongo-ruby-driver-2.21.3/spec/spec_tests/data/command_monitoring_unified/000077500000000000000000000000001505113246500265755ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/command_monitoring_unified/bulkWrite.yml000066400000000000000000000036001505113246500312670ustar00rootroot00000000000000description: "bulkWrite" schemaVersion: "1.0" createEntities: - client: id: &client client observeEvents: - commandStartedEvent - commandSucceededEvent - commandFailedEvent - database: id: &database database client: *client databaseName: &databaseName command-monitoring-tests - collection: id: &collection collection database: *database collectionName: &collectionName test initialData: - collectionName: 
*collectionName databaseName: *databaseName documents: - { _id: 1, x: 11 } - { _id: 2, x: 22 } - { _id: 3, x: 33 } tests: - description: "A successful mixed bulk write" operations: - name: bulkWrite object: *collection arguments: requests: - insertOne: document: { _id: 4, x: 44 } - updateOne: filter: { _id: 3 } update: { $set: { x: 333 } } expectEvents: - client: *client events: - commandStartedEvent: command: insert: *collectionName documents: - { _id: 4, x: 44 } ordered: true commandName: insert databaseName: *databaseName - commandSucceededEvent: reply: { ok: 1, n: 1 } commandName: insert - commandStartedEvent: command: update: *collectionName updates: - q: {_id: 3 } u: { $set: { x: 333 } } upsert: { $$unsetOrMatches: false } multi: { $$unsetOrMatches: false } ordered: true commandName: update databaseName: *databaseName - commandSucceededEvent: reply: { ok: 1, n: 1 } commandName: update mongo-ruby-driver-2.21.3/spec/spec_tests/data/command_monitoring_unified/command.yml000066400000000000000000000023761505113246500307460ustar00rootroot00000000000000description: "command" schemaVersion: "1.0" createEntities: - client: id: &client client observeEvents: - commandStartedEvent - commandSucceededEvent - commandFailedEvent - database: id: &database database client: *client databaseName: &databaseName command-monitoring-tests - collection: id: &collection collection database: *database collectionName: &collectionName test initialData: - collectionName: *collectionName databaseName: *databaseName documents: - { _id: 1, x: 11 } tests: - description: "A successful command" operations: - name: runCommand object: *database arguments: command: { ping: 1 } commandName: ping expectEvents: - client: *client events: - commandStartedEvent: command: ping: 1 commandName: ping databaseName: *databaseName - commandSucceededEvent: reply: ok: 1 commandName: ping # The legacy "A failed command event" test was removed in the test conversion, as the # behavior when a command fails is already covered by the test "A failed find event" # in find.yml. 
mongo-ruby-driver-2.21.3/spec/spec_tests/data/command_monitoring_unified/deleteMany.yml000066400000000000000000000046241505113246500314150ustar00rootroot00000000000000description: "deleteMany" schemaVersion: "1.0" createEntities: - client: id: &client client observeEvents: - commandStartedEvent - commandSucceededEvent - commandFailedEvent - database: id: &database database client: *client databaseName: &databaseName command-monitoring-tests - collection: id: &collection collection database: *database collectionName: &collectionName test initialData: - collectionName: *collectionName databaseName: *databaseName documents: - { _id: 1, x: 11 } - { _id: 2, x: 22 } - { _id: 3, x: 33 } tests: - description: "A successful deleteMany" operations: - name: deleteMany object: *collection arguments: filter: { _id: { $gt: 1 } } expectEvents: - client: *client events: - commandStartedEvent: command: delete: *collectionName deletes: - { q: { _id: { $gt: 1 }}, limit: 0 } ordered: true commandName: delete databaseName: *databaseName - commandSucceededEvent: reply: { ok: 1, n: 2 } commandName: delete - description: "A successful deleteMany with write errors" operations: - name: deleteMany object: *collection arguments: filter: { _id: { $unsupported: 1 } } expectError: isClientError: false expectEvents: - client: *client events: - commandStartedEvent: command: delete: *collectionName deletes: - { q: { _id: { $unsupported: 1 }}, limit: 0 } ordered: true commandName: delete databaseName: *databaseName - commandSucceededEvent: reply: ok: 1 n: 0 # The legacy version of this test included an assertion that writeErrors contained a single document # with index=0, a "code" value, and a non-empty "errmsg". However, writeErrors can contain extra fields # beyond these, and the unified format currently does not permit allowing extra fields in sub-documents, # so those assertions are not present here. 
writeErrors: { $$type: array } commandName: delete mongo-ruby-driver-2.21.3/spec/spec_tests/data/command_monitoring_unified/deleteOne.yml000066400000000000000000000046171505113246500312340ustar00rootroot00000000000000description: "deleteOne" schemaVersion: "1.0" createEntities: - client: id: &client client observeEvents: - commandStartedEvent - commandSucceededEvent - commandFailedEvent - database: id: &database database client: *client databaseName: &databaseName command-monitoring-tests - collection: id: &collection collection database: *database collectionName: &collectionName test initialData: - collectionName: *collectionName databaseName: *databaseName documents: - { _id: 1, x: 11 } - { _id: 2, x: 22 } - { _id: 3, x: 33 } tests: - description: "A successful deleteOne" operations: - name: deleteOne object: *collection arguments: filter: { _id: { $gt: 1 } } expectEvents: - client: *client events: - commandStartedEvent: command: delete: *collectionName deletes: - { q: { _id: { $gt: 1 }}, limit: 1 } ordered: true commandName: delete databaseName: *databaseName - commandSucceededEvent: reply: { ok: 1, n: 1 } commandName: delete - description: "A successful deleteOne with write errors" operations: - name: deleteOne object: *collection arguments: filter: { _id: { $unsupported: 1 } } expectError: isClientError: false expectEvents: - client: *client events: - commandStartedEvent: command: delete: *collectionName deletes: - { q: { _id: { $unsupported: 1 }}, limit: 1 } ordered: true commandName: delete databaseName: *databaseName - commandSucceededEvent: reply: ok: 1 n: 0 # The legacy version of this test included an assertion that writeErrors contained a single document # with index=0, a "code" value, and a non-empty "errmsg". However, writeErrors can contain extra fields # beyond these, and the unified format currently does not permit allowing extra fields in sub-documents, # so those assertions are not present here. 
writeErrors: { $$type: array } commandName: delete mongo-ruby-driver-2.21.3/spec/spec_tests/data/command_monitoring_unified/find.yml000066400000000000000000000161341505113246500302450ustar00rootroot00000000000000description: "find" schemaVersion: "1.1" createEntities: - client: id: &client client observeEvents: - commandStartedEvent - commandSucceededEvent - commandFailedEvent - database: id: &database database client: *client databaseName: &databaseName command-monitoring-tests - collection: id: &collection collection database: *database collectionName: &collectionName test _yamlAnchors: namespace: &namespace "command-monitoring-tests.test" initialData: - collectionName: *collectionName databaseName: *databaseName documents: - { _id: 1, x: 11 } - { _id: 2, x: 22 } - { _id: 3, x: 33 } - { _id: 4, x: 44 } - { _id: 5, x: 55 } tests: - description: "A successful find with no options" operations: - name: find object: *collection arguments: filter: { _id: 1 } expectEvents: - client: *client events: - commandStartedEvent: command: find: *collectionName filter: { _id: 1 } commandName: find databaseName: *databaseName - commandSucceededEvent: reply: ok: 1 cursor: id: 0 ns: *namespace firstBatch: - { _id: 1, x: 11 } commandName: find - description: "A successful find with options" operations: - name: find object: *collection arguments: filter: { _id: { $gt: 1 } } sort: { x: -1 } projection: { _id: 0, x: 1 } skip: 2 comment: "test" hint: { _id: 1 } max: { _id: 6 } maxTimeMS: 6000 min: { _id: 0 } expectEvents: - client: *client events: - commandStartedEvent: command: find: *collectionName filter: { _id: { $gt: 1 } } sort: { x: -1 } projection: { _id: 0, x: 1 } skip: 2 comment: "test" hint: { _id: 1 } max: { _id: 6 } maxTimeMS: 6000 min: { _id: 0 } commandName: find databaseName: *databaseName - commandSucceededEvent: reply: ok: 1 cursor: id: 0 ns: *namespace firstBatch: - { x: 33 } - { x: 22 } commandName: find - description: "A successful find with showRecordId and returnKey" operations: - name: find object: *collection arguments: filter: { } sort: { _id: 1 } showRecordId: true returnKey: true expectEvents: - client: *client events: - commandStartedEvent: command: find: *collectionName showRecordId: true returnKey: true commandName: find databaseName: *databaseName - commandSucceededEvent: reply: ok: 1 cursor: id: 0 ns: *namespace firstBatch: - { _id: 1 } - { _id: 2 } - { _id: 3 } - { _id: 4 } - { _id: 5 } commandName: find - description: "A successful find with a getMore" operations: - name: find object: *collection arguments: filter: { _id: { $gte: 1 }} sort: { _id: 1 } batchSize: 3 expectEvents: - client: *client events: - commandStartedEvent: command: find: *collectionName filter: { _id: { $gte: 1 }} sort: { _id: 1 } batchSize: 3 commandName: find databaseName: *databaseName - commandSucceededEvent: reply: ok: 1 cursor: id: { $$type: [ int, long ] } ns: *namespace firstBatch: - { _id: 1, x: 11 } - { _id: 2, x: 22 } - { _id: 3, x: 33 } commandName: find - commandStartedEvent: command: getMore: { $$type: [ int, long ] } collection: *collectionName batchSize: 3 commandName: getMore databaseName: *databaseName - commandSucceededEvent: reply: ok: 1 cursor: id: 0 ns: *namespace nextBatch: - { _id: 4, x: 44 } - { _id: 5, x: 55 } commandName: getMore - description: "A successful find event with a getmore and the server kills the cursor (<= 4.4)" runOnRequirements: - minServerVersion: "3.1" maxServerVersion: "4.4.99" topologies: [ single, replicaset ] operations: - name: find object: *collection 
arguments: filter: { _id: { $gte: 1 } } sort: { _id: 1 } batchSize: 3 limit: 4 expectEvents: - client: *client events: - commandStartedEvent: command: find: *collectionName filter: { _id: { $gte: 1 } } sort: { _id: 1 } batchSize: 3 limit: 4 commandName: find databaseName: *databaseName - commandSucceededEvent: reply: ok: 1 cursor: id: { $$type: [ int, long ] } ns: *namespace firstBatch: - { _id: 1, x: 11 } - { _id: 2, x: 22 } - { _id: 3, x: 33 } commandName: find - commandStartedEvent: command: getMore: { $$type: [ int, long ] } collection: *collectionName batchSize: 1 commandName: getMore databaseName: *databaseName - commandSucceededEvent: reply: ok: 1 cursor: id: 0 ns: *namespace nextBatch: - { _id: 4, x: 44 } commandName: getMore - description: "A failed find event" operations: - name: find object: *collection arguments: filter: { $or: true } expectError: isClientError: false expectEvents: - client: *client events: - commandStartedEvent: command: find: *collectionName filter: { $or: true } commandName: find databaseName: *databaseName - commandFailedEvent: commandName: find mongo-ruby-driver-2.21.3/spec/spec_tests/data/command_monitoring_unified/insertMany.yml000066400000000000000000000045141505113246500314550ustar00rootroot00000000000000description: "insertMany" schemaVersion: "1.0" createEntities: - client: id: &client client observeEvents: - commandStartedEvent - commandSucceededEvent - commandFailedEvent - database: id: &database database client: *client databaseName: &databaseName command-monitoring-tests - collection: id: &collection collection database: *database collectionName: &collectionName test initialData: - collectionName: *collectionName databaseName: *databaseName documents: - { _id: 1, x: 11 } tests: - description: "A successful insertMany" operations: - name: insertMany object: *collection arguments: documents: - { _id: 2, x: 22 } expectEvents: - client: *client events: - commandStartedEvent: command: insert: *collectionName documents: - { _id: 2, x: 22 } ordered: true commandName: insert databaseName: *databaseName - commandSucceededEvent: reply: { ok: 1, n: 1 } commandName: insert - description: "A successful insertMany with write errors" operations: - name: insertMany object: *collection arguments: documents: - { _id: 1, x: 11 } expectError: isClientError: false expectEvents: - client: *client events: - commandStartedEvent: command: insert: *collectionName documents: - { _id: 1, x: 11 } ordered: true commandName: insert databaseName: *databaseName - commandSucceededEvent: reply: ok: 1 n: 0 # The legacy version of this test included an assertion that writeErrors contained a single document # with index=0, a "code" value, and a non-empty "errmsg". However, writeErrors can contain extra fields # beyond these, and the unified format currently does not permit allowing extra fields in sub-documents, # so those assertions are not present here. 
writeErrors: { $$type: array } commandName: insert mongo-ruby-driver-2.21.3/spec/spec_tests/data/command_monitoring_unified/insertOne.yml000066400000000000000000000044511505113246500312720ustar00rootroot00000000000000description: "insertOne" schemaVersion: "1.0" createEntities: - client: id: &client client observeEvents: - commandStartedEvent - commandSucceededEvent - commandFailedEvent - database: id: &database database client: *client databaseName: &databaseName command-monitoring-tests - collection: id: &collection collection database: *database collectionName: &collectionName test initialData: - collectionName: *collectionName databaseName: *databaseName documents: - { _id: 1, x: 11 } tests: - description: "A successful insertOne" operations: - name: insertOne object: *collection arguments: document: { _id: 2, x: 22 } expectEvents: - client: *client events: - commandStartedEvent: command: insert: *collectionName documents: - { _id: 2, x: 22 } ordered: true commandName: insert databaseName: *databaseName - commandSucceededEvent: reply: { ok: 1, n: 1 } commandName: insert - description: "A successful insertOne with write errors" operations: - name: insertOne object: *collection arguments: document: { _id: 1, x: 11 } expectError: isClientError: false expectEvents: - client: *client events: - commandStartedEvent: command: insert: *collectionName documents: - { _id: 1, x: 11 } ordered: true commandName: insert databaseName: *databaseName - commandSucceededEvent: reply: ok: 1 n: 0 # The legacy version of this test included an assertion that writeErrors contained a single document # with index=0, a "code" value, and a non-empty "errmsg". However, writeErrors can contain extra fields # beyond these, and the unified format currently does not permit allowing extra fields in sub-documents, # so those assertions are not present here. 
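            # As a rough illustration (not part of the spec data): the "write errors" case
            # above is why drivers distinguish command-level failure from write errors. The
            # insert below duplicates an existing _id, so the server still replies ok: 1
            # (hence a commandSucceededEvent with a writeErrors array) while the Ruby driver
            # surfaces the write error as an exception. Names are illustrative.
            #
            #   begin
            #     client[:test].insert_one(_id: 1, x: 11) # _id 1 already exists in the fixture
            #   rescue Mongo::Error::OperationFailure => e
            #     # e carries the duplicate-key write error; the command itself succeeded
            #     # at the protocol level, which is exactly what this file asserts.
            #   end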
writeErrors: { $$type: array } commandName: insert mongo-ruby-driver-2.21.3/spec/spec_tests/data/command_monitoring_unified/pre-42-server-connection-id.yml000066400000000000000000000026641505113246500343150ustar00rootroot00000000000000description: "pre-42-server-connection-id" schemaVersion: "1.6" runOnRequirements: - maxServerVersion: "4.0.99" createEntities: - client: id: &client client observeEvents: - commandStartedEvent - commandSucceededEvent - commandFailedEvent - database: id: &database database client: *client databaseName: &databaseName server-connection-id-tests - collection: id: &collection collection database: *database collectionName: &collectionName coll initialData: - databaseName: *databaseName collectionName: *collectionName documents: [] tests: - description: "command events do not include server connection id" operations: - name: insertOne object: *collection arguments: document: { x: 1 } - name: find object: *collection arguments: filter: { $or: true } expectError: isError: true expectEvents: - client: *client events: - commandStartedEvent: commandName: insert hasServerConnectionId: false - commandSucceededEvent: commandName: insert hasServerConnectionId: false - commandStartedEvent: commandName: find hasServerConnectionId: false - commandFailedEvent: commandName: find hasServerConnectionId: false mongo-ruby-driver-2.21.3/spec/spec_tests/data/command_monitoring_unified/redacted-commands.yml000066400000000000000000000241621505113246500326770ustar00rootroot00000000000000description: "redacted-commands" schemaVersion: "1.5" runOnRequirements: - minServerVersion: "5.0" auth: false createEntities: - client: id: &client client observeEvents: - commandStartedEvent - commandSucceededEvent observeSensitiveCommands: true - database: id: &database database client: *client databaseName: &databaseName command-monitoring-tests tests: - description: "authenticate" operations: - name: runCommand object: *database arguments: commandName: authenticate command: authenticate: 1 mechanism: "MONGODB-X509" user: "CN=myName,OU=myOrgUnit,O=myOrg,L=myLocality,ST=myState,C=myCountry" db: "$external" # An authentication error is expected, but we want to check that the # CommandStartedEvent is redacted expectError: isError: true expectEvents: - client: *client events: - commandStartedEvent: commandName: authenticate # We cannot simply assert that command is an empty document # because it's at root-level, so we make a best effort to make # sure sensitive fields are redacted.
command: authenticate: { $$exists: false } mechanism: { $$exists: false } user: { $$exists: false } db: { $$exists: false } - description: "saslStart" operations: - name: runCommand object: *database arguments: commandName: saslStart command: saslStart: 1 payload: "definitely-invalid-payload" db: "admin" expectError: isError: true expectEvents: - client: *client events: - commandStartedEvent: commandName: saslStart command: saslStart: { $$exists: false } payload: { $$exists: false } db: { $$exists: false } - description: "saslContinue" operations: - name: runCommand object: *database arguments: commandName: saslContinue command: saslContinue: 1 conversationId: 0 payload: "definitely-invalid-payload" expectError: isError: true expectEvents: - client: *client events: - commandStartedEvent: commandName: saslContinue command: saslContinue: { $$exists: false } conversationId: { $$exists: false } payload: { $$exists: false } - description: "getnonce" runOnRequirements: - maxServerVersion: 6.1.99 # getnonce removed as of 6.2 via SERVER-71007 operations: - name: runCommand object: *database arguments: commandName: getnonce command: getnonce: 1 expectEvents: - client: *client events: - commandStartedEvent: commandName: getnonce command: { getnonce: { $$exists: false } } - commandSucceededEvent: commandName: getnonce reply: ok: { $$exists: false } nonce: { $$exists: false } - description: "createUser" operations: - name: runCommand object: *database arguments: commandName: createUser command: createUser: "private" # Passing an object is prohibited and we want to trigger a command # failure pwd: {} roles: [] expectError: isError: true expectEvents: - client: *client events: - commandStartedEvent: commandName: createUser command: createUser: { $$exists: false } pwd: { $$exists: false } roles: { $$exists: false } - description: "updateUser" operations: - name: runCommand object: *database arguments: commandName: updateUser command: updateUser: "private" pwd: {} roles: [] expectError: isError: true expectEvents: - client: *client events: - commandStartedEvent: commandName: updateUser command: updateUser: { $$exists: false } pwd: { $$exists: false } roles: { $$exists: false } - description: "copydbgetnonce" runOnRequirements: - maxServerVersion: 3.6.99 # copydbgetnonce was removed as of 4.0 via SERVER-32276 operations: - name: runCommand object: *database arguments: commandName: copydbgetnonce command: copydbgetnonce: "private" expectError: isError: true expectEvents: - client: *client events: - commandStartedEvent: commandName: copydbgetnonce command: { copydbgetnonce: { $$exists: false } } - description: "copydbsaslstart" runOnRequirements: - maxServerVersion: 4.0.99 # copydbsaslstart was removed as of 4.2 via SERVER-36211 operations: - name: runCommand object: *database arguments: commandName: copydbsaslstart command: copydbsaslstart: "private" expectError: isError: true expectEvents: - client: *client events: - commandStartedEvent: commandName: copydbsaslstart command: { copydbsaslstart: { $$exists: false } } - description: "copydb" runOnRequirements: - maxServerVersion: 4.0.99 # copydb was removed as of 4.2 via SERVER-36257 operations: - name: runCommand object: *database arguments: commandName: copydb command: copydb: "private" expectError: isError: true expectEvents: - client: *client events: - commandStartedEvent: commandName: copydb command: { copydb: { $$exists: false } } - description: "hello with speculative authenticate" runOnRequirements: - minServerVersion: "4.9" operations: - name: 
runCommand object: *database arguments: commandName: hello command: hello: 1 speculativeAuthenticate: saslStart: 1 expectEvents: - client: *client events: - commandStartedEvent: commandName: hello command: hello: { $$exists: false } speculativeAuthenticate: { $$exists: false } - commandSucceededEvent: commandName: hello reply: # Even though authentication above fails and the reply does not # contain sensitive information, we're expecting the reply to be # redacted as well. isWritablePrimary: { $$exists: false } # This assertion will currently always hold true since we're # not expecting successful authentication, in which case this # field is missing anyways. speculativeAuthenticate: { $$exists: false } - description: "legacy hello with speculative authenticate" operations: - name: runCommand object: *database arguments: commandName: ismaster command: ismaster: 1 speculativeAuthenticate: saslStart: 1 - name: runCommand object: *database arguments: commandName: isMaster command: isMaster: 1 speculativeAuthenticate: saslStart: 1 expectEvents: - client: *client events: - commandStartedEvent: commandName: ismaster command: ismaster: { $$exists: false } speculativeAuthenticate: { $$exists: false } - commandSucceededEvent: commandName: ismaster reply: ismaster: { $$exists: false } speculativeAuthenticate: { $$exists: false } - commandStartedEvent: commandName: isMaster command: isMaster: { $$exists: false } speculativeAuthenticate: { $$exists: false } - commandSucceededEvent: commandName: isMaster reply: ismaster: { $$exists: false } speculativeAuthenticate: { $$exists: false } - description: "hello without speculative authenticate is not redacted" runOnRequirements: - minServerVersion: "4.9" operations: - name: runCommand object: *database arguments: commandName: hello command: hello: 1 expectEvents: - client: *client events: - commandStartedEvent: commandName: hello command: hello: 1 - commandSucceededEvent: commandName: hello reply: isWritablePrimary: { $$exists: true } - description: "legacy hello without speculative authenticate is not redacted" operations: - name: runCommand object: *database arguments: commandName: ismaster command: ismaster: 1 - name: runCommand object: *database arguments: commandName: isMaster command: isMaster: 1 expectEvents: - client: *client events: - commandStartedEvent: commandName: ismaster command: ismaster: 1 - commandSucceededEvent: commandName: ismaster reply: ismaster: { $$exists: true } - commandStartedEvent: commandName: isMaster command: isMaster: 1 - commandSucceededEvent: commandName: isMaster reply: ismaster: { $$exists: true } mongo-ruby-driver-2.21.3/spec/spec_tests/data/command_monitoring_unified/server-connection-id.yml000066400000000000000000000026371505113246500333650ustar00rootroot00000000000000description: "server-connection-id" schemaVersion: "1.6" runOnRequirements: - minServerVersion: "4.2" createEntities: - client: id: &client client observeEvents: - commandStartedEvent - commandSucceededEvent - commandFailedEvent - database: id: &database database client: *client databaseName: &databaseName server-connection-id-tests - collection: id: &collection collection database: *database collectionName: &collectionName coll initialData: - databaseName: *databaseName collectionName: *collectionName documents: [] tests: - description: "command events include server connection id" operations: - name: insertOne object: *collection arguments: document: { x: 1 } - name: find object: *collection arguments: filter: { $or: true } expectError: isError: true 
expectEvents: - client: *client events: - commandStartedEvent: commandName: insert hasServerConnectionId: true - commandSucceededEvent: commandName: insert hasServerConnectionId: true - commandStartedEvent: commandName: find hasServerConnectionId: true - commandFailedEvent: commandName: find hasServerConnectionId: true mongo-ruby-driver-2.21.3/spec/spec_tests/data/command_monitoring_unified/unacknowledgedBulkWrite.yml000066400000000000000000000027551505113246500341540ustar00rootroot00000000000000description: "unacknowledgedBulkWrite" schemaVersion: "1.0" createEntities: - client: id: &client client observeEvents: - commandStartedEvent - commandSucceededEvent - commandFailedEvent - database: id: &database database client: *client databaseName: &databaseName command-monitoring-tests - collection: id: &collection collection database: *database collectionName: &collectionName test collectionOptions: writeConcern: { w: 0 } initialData: - collectionName: *collectionName databaseName: *databaseName documents: - { _id: 1, x: 11 } tests: - description: "A successful unordered bulk write with an unacknowledged write concern" operations: - name: bulkWrite object: *collection arguments: requests: - insertOne: document: { _id: "unorderedBulkWriteInsertW0", x: 44 } ordered: false expectEvents: - client: *client events: - commandStartedEvent: command: insert: *collectionName documents: - { _id: "unorderedBulkWriteInsertW0", x: 44 } ordered: false writeConcern: { w: 0 } commandName: insert databaseName: *databaseName - commandSucceededEvent: reply: ok: 1 n: { $$exists: false } commandName: insert mongo-ruby-driver-2.21.3/spec/spec_tests/data/command_monitoring_unified/updateMany.yml000066400000000000000000000053061505113246500314330ustar00rootroot00000000000000description: "updateMany" schemaVersion: "1.0" createEntities: - client: id: &client client observeEvents: - commandStartedEvent - commandSucceededEvent - commandFailedEvent - database: id: &database database client: *client databaseName: &databaseName command-monitoring-tests - collection: id: &collection collection database: *database collectionName: &collectionName test initialData: - collectionName: *collectionName databaseName: *databaseName documents: - { _id: 1, x: 11 } - { _id: 2, x: 22 } - { _id: 3, x: 33 } tests: - description: "A successful updateMany" operations: - name: updateMany object: *collection arguments: filter: { _id: { $gt: 1 } } update: { $inc: { x: 1 } } expectEvents: - client: *client events: - commandStartedEvent: command: update: *collectionName updates: - q: { _id: { $gt: 1 } } u: { $inc: { x: 1 } } upsert: { $$unsetOrMatches: false } multi: true ordered: true commandName: update databaseName: *databaseName - commandSucceededEvent: reply: { ok: 1, n: 2 } commandName: update - description: "A successful updateMany with write errors" operations: - name: updateMany object: *collection arguments: filter: { _id: { $gt: 1 } } update: { $unsupported: { x: 1 } } expectError: isClientError: false expectEvents: - client: *client events: - commandStartedEvent: command: update: *collectionName updates: - q: { _id: { $gt: 1 } } u: { $unsupported: { x: 1 } } upsert: { $$unsetOrMatches: false } multi: true ordered: true commandName: update databaseName: *databaseName - commandSucceededEvent: reply: ok: 1 n: 0 # The legacy version of this test included an assertion that writeErrors contained a single document # with index=0, a "code" value, and a non-empty "errmsg". 
However, writeErrors can contain extra fields # beyond these, and the unified format currently does not permit allowing extra fields in sub-documents, # so those assertions are not present here. writeErrors: { $$type: array } commandName: update mongo-ruby-driver-2.21.3/spec/spec_tests/data/command_monitoring_unified/updateOne.yml000066400000000000000000000072271505113246500312540ustar00rootroot00000000000000description: "updateOne" schemaVersion: "1.0" createEntities: - client: id: &client client observeEvents: - commandStartedEvent - commandSucceededEvent - commandFailedEvent - database: id: &database database client: *client databaseName: &databaseName command-monitoring-tests - collection: id: &collection collection database: *database collectionName: &collectionName test initialData: - collectionName: *collectionName databaseName: *databaseName documents: - { _id: 1, x: 11 } - { _id: 2, x: 22 } - { _id: 3, x: 33 } tests: - description: "A successful updateOne" operations: - name: updateOne object: *collection arguments: filter: { _id: { $gt: 1 } } update: { $inc: { x: 1 } } expectEvents: - client: *client events: - commandStartedEvent: command: update: *collectionName updates: - q: { _id: { $gt: 1 } } u: { $inc: { x: 1 } } upsert: { $$unsetOrMatches: false } multi: { $$unsetOrMatches: false } ordered: true commandName: update databaseName: *databaseName - commandSucceededEvent: reply: { ok: 1, n: 1 } commandName: update - description: "A successful updateOne with upsert where the upserted id is not an ObjectId" operations: - name: updateOne object: *collection arguments: filter: { _id: 4 } update: { $inc: { x: 1 } } upsert: true expectEvents: - client: *client events: - commandStartedEvent: command: update: *collectionName updates: - q: { _id: 4 } u: { $inc: { x: 1 } } upsert: true multi: { $$unsetOrMatches: false } ordered: true commandName: update databaseName: *databaseName - commandSucceededEvent: reply: ok: 1 n: 1 upserted: - index: 0 _id: 4 commandName: update - description: "A successful updateOne with write errors" operations: - name: updateOne object: *collection arguments: filter: { _id: { $gt: 1 } } update: { $unsupported: { x: 1 } } expectError: isClientError: false expectEvents: - client: *client events: - commandStartedEvent: command: update: *collectionName updates: - q: { _id: { $gt: 1 } } u: { $unsupported: { x: 1 } } upsert: { $$unsetOrMatches: false } multi: { $$unsetOrMatches: false } ordered: true commandName: update databaseName: *databaseName - commandSucceededEvent: reply: ok: 1 n: 0 # The legacy version of this test included an assertion that writeErrors contained a single document # with index=0, a "code" value, and a non-empty "errmsg". However, writeErrors can contain extra fields # beyond these, and the unified format currently does not permit allowing extra fields in sub-documents, # so those assertions are not present here. 
writeErrors: { $$type: array } commandName: update mongo-ruby-driver-2.21.3/spec/spec_tests/data/command_monitoring_unified/writeConcernError.yml000066400000000000000000000043021505113246500327730ustar00rootroot00000000000000description: "writeConcernError" schemaVersion: "1.13" runOnRequirements: - minServerVersion: 4.1.0 topologies: - replicaset serverless: "forbid" createEntities: - client: id: &client client observeEvents: - commandStartedEvent - commandSucceededEvent - commandFailedEvent - database: id: &database database client: *client databaseName: &databaseName command-monitoring-tests - collection: id: &collection collection database: *database collectionName: &collectionName test initialData: - collectionName: *collectionName databaseName: *databaseName documents: - { _id: 1, x: 11 } tests: - description: "A retryable write with write concern errors publishes success event" operations: - name: failPoint object: testRunner arguments: client: *client failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: [ insert ] writeConcernError: code: 91 # ShutdownInProgress errorLabels: [RetryableWriteError] - name: insertOne object: *collection arguments: document: { _id: 2, x: 22 } expectEvents: - client: *client events: - commandStartedEvent: command: insert: *collectionName documents: - { _id: 2, x: 22 } ordered: true commandName: insert databaseName: *databaseName - commandSucceededEvent: reply: ok: 1 n: 1 writeConcernError: { code: 91, errorLabels: [ "RetryableWriteError" ] } commandName: insert - commandStartedEvent: command: insert: *collectionName documents: - { _id: 2, x: 22 } ordered: true commandName: insert databaseName: *databaseName - commandSucceededEvent: reply: ok: 1 n: 1 commandName: insert mongo-ruby-driver-2.21.3/spec/spec_tests/data/connection_string/000077500000000000000000000000001505113246500247345ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/connection_string/invalid-uris.yml000066400000000000000000000145311505113246500300710ustar00rootroot00000000000000tests: - description: "Empty string" uri: "" valid: false warning: ~ hosts: ~ auth: ~ options: ~ - description: "Invalid scheme" uri: "mongo://localhost:27017" valid: false warning: ~ hosts: ~ auth: ~ options: ~ - description: "Missing host" uri: "mongodb://" valid: false warning: ~ hosts: ~ auth: ~ options: ~ - description: "Double colon in host identifier" uri: "mongodb://localhost::27017" valid: false warning: ~ hosts: ~ auth: ~ options: ~ - description: "Double colon in host identifier and trailing slash" uri: "mongodb://localhost::27017/" valid: false warning: ~ hosts: ~ auth: ~ options: ~ - description: "Double colon in host identifier with missing host and port" uri: "mongodb://::" valid: false warning: ~ hosts: ~ auth: ~ options: ~ - description: "Double colon in host identifier with missing port" uri: "mongodb://localhost,localhost::" valid: false warning: ~ hosts: ~ auth: ~ options: ~ - description: "Double colon in host identifier and second host" uri: "mongodb://localhost::27017,abc" valid: false warning: ~ hosts: ~ auth: ~ options: ~ - description: "Invalid port (negative number) with hostname" uri: "mongodb://localhost:-1" valid: false warning: ~ hosts: ~ auth: ~ options: ~ - description: "Invalid port (zero) with hostname" uri: "mongodb://localhost:0/" valid: false warning: ~ hosts: ~ auth: ~ options: ~ - description: "Invalid port (positive number) with hostname" uri: "mongodb://localhost:65536" valid: false warning: ~ hosts: ~ auth: ~ options: ~ 
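  # As a rough illustration (not part of the spec data): in the Ruby driver, URIs that
  # this file marks as invalid are rejected during parsing, which typically surfaces as
  # Mongo::Error::InvalidURI when constructing a client. A sketch:
  #
  #   begin
  #     Mongo::Client.new('mongodb://localhost:65536') # port out of range
  #   rescue Mongo::Error::InvalidURI => e
  #     puts e.message
  #   end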
- description: "Invalid port (positive number) with hostname and trailing slash" uri: "mongodb://localhost:65536/" valid: false warning: ~ hosts: ~ auth: ~ options: ~ - description: "Invalid port (non-numeric string) with hostname" uri: "mongodb://localhost:foo" valid: false warning: ~ hosts: ~ auth: ~ options: ~ - description: "Invalid port (negative number) with IP literal" uri: "mongodb://[::1]:-1" valid: false warning: ~ hosts: ~ auth: ~ options: ~ - description: "Invalid port (zero) with IP literal" uri: "mongodb://[::1]:0/" valid: false warning: ~ hosts: ~ auth: ~ options: ~ - description: "Invalid port (positive number) with IP literal" uri: "mongodb://[::1]:65536" valid: false warning: ~ hosts: ~ auth: ~ options: ~ - description: "Invalid port (positive number) with IP literal and trailing slash" uri: "mongodb://[::1]:65536/" valid: false warning: ~ hosts: ~ auth: ~ options: ~ - description: "Invalid port (non-numeric string) with IP literal" uri: "mongodb://[::1]:foo" valid: false warning: ~ hosts: ~ auth: ~ options: ~ - description: "Incomplete key value pair for option" uri: "mongodb://example.com/?w" valid: false warning: ~ hosts: ~ auth: ~ options: ~ - description: "Username with password containing an unescaped colon" uri: "mongodb://alice:foo:bar@127.0.0.1" valid: false warning: ~ hosts: ~ auth: ~ options: ~ - description: "Username containing an unescaped at-sign" uri: "mongodb://alice@@127.0.0.1" valid: false warning: ~ hosts: ~ auth: ~ options: ~ - description: "Username with password containing an unescaped at-sign" uri: "mongodb://alice@foo:bar@127.0.0.1" valid: false warning: ~ hosts: ~ auth: ~ options: ~ - description: "Username containing an unescaped slash" uri: "mongodb://alice/@localhost/db" valid: false warning: ~ hosts: ~ auth: ~ options: ~ - description: "Username containing unescaped slash with password" uri: "mongodb://alice/bob:foo@localhost/db" valid: false warning: ~ hosts: ~ auth: ~ options: ~ - description: "Username with password containing an unescaped slash" uri: "mongodb://alice:foo/bar@localhost/db" valid: false warning: ~ hosts: ~ auth: ~ options: ~ - description: "Host with unescaped slash" uri: "mongodb:///tmp/mongodb-27017.sock/" valid: false warning: ~ hosts: ~ auth: ~ options: ~ - description: "mongodb+srv with multiple service names" uri: "mongodb+srv://test5.test.mongodb.com,test6.test.mongodb.com" valid: false warning: ~ hosts: ~ auth: ~ options: ~ - description: "mongodb+srv with port number" uri: "mongodb+srv://test7.test.mongodb.com:27018" valid: false warning: ~ hosts: ~ auth: ~ options: ~ - description: "Username with password containing an unescaped percent sign" uri: "mongodb://alice%foo:bar@127.0.0.1" valid: false warning: ~ hosts: ~ auth: ~ options: ~ - description: "Username with password containing an unescaped percent sign and an escaped one" uri: "mongodb://user%20%:password@localhost" valid: false warning: ~ hosts: ~ auth: ~ options: ~ - description: "Username with password containing an unescaped percent sign (non hex digit)" uri: "mongodb://user%w:password@localhost" valid: false warning: ~ hosts: ~ auth: ~ options: ~ mongo-ruby-driver-2.21.3/spec/spec_tests/data/connection_string/valid-auth.yml000066400000000000000000000161611505113246500275220ustar00rootroot00000000000000tests: - description: "User info for single IPv4 host without database" uri: "mongodb://alice:foo@127.0.0.1" valid: true warning: false hosts: - type: "ipv4" host: "127.0.0.1" port: ~ auth: username: "alice" password: "foo" db: ~ options: ~ - description: 
"User info for single IPv4 host with database" uri: "mongodb://alice:foo@127.0.0.1/test" valid: true warning: false hosts: - type: "ipv4" host: "127.0.0.1" port: ~ auth: username: "alice" password: "foo" db: "test" options: ~ - description: "User info for single IP literal host without database" uri: "mongodb://bob:bar@[::1]:27018" valid: true warning: false hosts: - type: "ip_literal" host: "::1" port: 27018 auth: username: "bob" password: "bar" db: ~ options: ~ - description: "User info for single IP literal host with database" uri: "mongodb://bob:bar@[::1]:27018/admin" valid: true warning: false hosts: - type: "ip_literal" host: "::1" port: 27018 auth: username: "bob" password: "bar" db: "admin" options: ~ - description: "User info for single hostname without database" uri: "mongodb://eve:baz@example.com" valid: true warning: false hosts: - type: "hostname" host: "example.com" port: ~ auth: username: "eve" password: "baz" db: ~ options: ~ - description: "User info for single hostname with database" uri: "mongodb://eve:baz@example.com/db2" valid: true warning: false hosts: - type: "hostname" host: "example.com" port: ~ auth: username: "eve" password: "baz" db: "db2" options: ~ - description: "User info for multiple hosts without database" uri: "mongodb://alice:secret@127.0.0.1,example.com:27018" valid: true warning: false hosts: - type: "ipv4" host: "127.0.0.1" port: ~ - type: "hostname" host: "example.com" port: 27018 auth: username: "alice" password: "secret" db: ~ options: ~ - description: "User info for multiple hosts with database" uri: "mongodb://alice:secret@example.com,[::1]:27019/admin" valid: true warning: false hosts: - type: "hostname" host: "example.com" port: ~ - type: "ip_literal" host: "::1" port: 27019 auth: username: "alice" password: "secret" db: "admin" options: ~ - description: "Username without password" uri: "mongodb://alice@127.0.0.1" valid: true warning: false hosts: - type: "ipv4" host: "127.0.0.1" port: ~ auth: username: "alice" password: ~ db: ~ options: ~ - description: "Username with empty password" uri: "mongodb://alice:@127.0.0.1" valid: true warning: false hosts: - type: "ipv4" host: "127.0.0.1" port: ~ auth: username: "alice" password: "" db: ~ options: ~ - description: "Escaped username and database without password" uri: "mongodb://%40l%3Ace%2F%3D@example.com/my%3Ddb" valid: true warning: false hosts: - type: "hostname" host: "example.com" port: ~ auth: username: "@l:ce/=" password: ~ db: "my=db" options: ~ - description: "Escaped user info and database (MONGODB-CR)" uri: "mongodb://%24am:f%3Azzb%40z%2Fz%3D@127.0.0.1/admin%3F?authMechanism=MONGODB-CR" valid: true warning: false hosts: - type: "ipv4" host: "127.0.0.1" port: ~ auth: username: "$am" password: "f:zzb@z/z=" db: "admin?" 
options: authmechanism: "MONGODB-CR" - description: "Subdelimiters in user/pass don't need escaping (MONGODB-CR)" uri: "mongodb://!$&'()*+,;=:!$&'()*+,;=@127.0.0.1/admin?authMechanism=MONGODB-CR" valid: true warning: false hosts: - type: "ipv4" host: "127.0.0.1" port: ~ auth: username: "!$&'()*+,;=" password: "!$&'()*+,;=" db: "admin" options: authmechanism: "MONGODB-CR" - description: "Escaped username (MONGODB-X509)" uri: "mongodb://CN%3DmyName%2COU%3DmyOrgUnit%2CO%3DmyOrg%2CL%3DmyLocality%2CST%3DmyState%2CC%3DmyCountry@localhost/?authMechanism=MONGODB-X509" valid: true warning: false hosts: - type: "hostname" host: "localhost" port: ~ auth: username: "CN=myName,OU=myOrgUnit,O=myOrg,L=myLocality,ST=myState,C=myCountry" password: ~ db: ~ options: authmechanism: "MONGODB-X509" - description: "Escaped username (GSSAPI)" uri: "mongodb://user%40EXAMPLE.COM:secret@localhost/?authMechanismProperties=SERVICE_NAME:other,CANONICALIZE_HOST_NAME:true&authMechanism=GSSAPI" valid: true warning: false hosts: - type: "hostname" host: "localhost" port: ~ auth: username: "user@EXAMPLE.COM" password: "secret" db: ~ options: authmechanism: "GSSAPI" authmechanismproperties: SERVICE_NAME: "other" CANONICALIZE_HOST_NAME: true - description: "At-signs in options aren't part of the userinfo" uri: "mongodb://alice:secret@example.com/admin?replicaset=my@replicaset" valid: true warning: false hosts: - type: "hostname" host: "example.com" port: ~ auth: username: "alice" password: "secret" db: "admin" options: replicaset: "my@replicaset" mongo-ruby-driver-2.21.3/spec/spec_tests/data/connection_string/valid-db-with-dotted-name.yml000066400000000000000000000045051505113246500323150ustar00rootroot00000000000000tests: - description: "Multiple Unix domain sockets and auth DB resembling a socket (relative path)" uri: "mongodb://rel%2Fmongodb-27017.sock,rel%2Fmongodb-27018.sock/admin.sock" valid: true warning: false hosts: - type: "unix" host: "rel/mongodb-27017.sock" port: ~ - type: "unix" host: "rel/mongodb-27018.sock" port: ~ auth: username: ~ password: ~ db: "admin.sock" options: ~ - description: "Multiple Unix domain sockets with auth DB resembling a path (relative path)" uri: "mongodb://rel%2Fmongodb-27017.sock,rel%2Fmongodb-27018.sock/admin.shoe" valid: true warning: false hosts: - type: "unix" host: "rel/mongodb-27017.sock" port: ~ - type: "unix" host: "rel/mongodb-27018.sock" port: ~ auth: username: ~ password: ~ db: "admin.shoe" options: ~ - description: "Multiple Unix domain sockets and auth DB resembling a socket (absolute path)" uri: "mongodb://%2Ftmp%2Fmongodb-27017.sock,%2Ftmp%2Fmongodb-27018.sock/admin.sock" valid: true warning: false hosts: - type: "unix" host: "/tmp/mongodb-27017.sock" port: ~ - type: "unix" host: "/tmp/mongodb-27018.sock" port: ~ auth: username: ~ password: ~ db: "admin.sock" options: ~ - description: "Multiple Unix domain sockets with auth DB resembling a path (absolute path)" uri: "mongodb://%2Ftmp%2Fmongodb-27017.sock,%2Ftmp%2Fmongodb-27018.sock/admin.shoe" valid: true warning: false hosts: - type: "unix" host: "/tmp/mongodb-27017.sock" port: ~ - type: "unix" host: "/tmp/mongodb-27018.sock" port: ~ auth: username: ~ password: ~ db: "admin.shoe" options: ~ mongo-ruby-driver-2.21.3/spec/spec_tests/data/connection_string/valid-host_identifiers.yml000066400000000000000000000057501505113246500321250ustar00rootroot00000000000000tests: - description: "Single IPv4 host without port" uri: "mongodb://127.0.0.1" valid: true warning: false hosts: - type: "ipv4" host: "127.0.0.1" port: ~ auth: ~ 
options: ~ - description: "Single IPv4 host with port" uri: "mongodb://127.0.0.1:27018" valid: true warning: false hosts: - type: "ipv4" host: "127.0.0.1" port: 27018 auth: ~ options: ~ - description: "Single IP literal host without port" uri: "mongodb://[::1]" valid: true warning: false hosts: - type: "ip_literal" host: "::1" port: ~ auth: ~ options: ~ - description: "Single IP literal host with port" uri: "mongodb://[::1]:27019" valid: true warning: false hosts: - type: "ip_literal" host: "::1" port: 27019 auth: ~ options: ~ - description: "Single hostname without port" uri: "mongodb://example.com" valid: true warning: false hosts: - type: "hostname" host: "example.com" port: ~ auth: ~ options: ~ - description: "Single hostname with port" uri: "mongodb://example.com:27020" valid: true warning: false hosts: - type: "hostname" host: "example.com" port: 27020 auth: ~ options: ~ - description: "Single hostname (resembling IPv4) without port" uri: "mongodb://256.0.0.1" valid: true warning: false hosts: - type: "hostname" host: "256.0.0.1" port: ~ auth: ~ options: ~ - description: "Multiple hosts (mixed formats)" uri: "mongodb://127.0.0.1,[::1]:27018,example.com:27019" valid: true warning: false hosts: - type: "ipv4" host: "127.0.0.1" port: ~ - type: "ip_literal" host: "::1" port: 27018 - type: "hostname" host: "example.com" port: 27019 auth: ~ options: ~ - description: "UTF-8 hosts" uri: "mongodb://bücher.example.com,umläut.example.com/" valid: true warning: false hosts: - type: "hostname" host: "bücher.example.com" port: ~ - type: "hostname" host: "umläut.example.com" port: ~ auth: ~ options: ~ mongo-ruby-driver-2.21.3/spec/spec_tests/data/connection_string/valid-options.yml000066400000000000000000000014731505113246500302540ustar00rootroot00000000000000tests: - description: "Option names are normalized to lowercase" uri: "mongodb://alice:secret@example.com/admin?AUTHMechanism=MONGODB-CR" valid: true warning: false hosts: - type: "hostname" host: "example.com" port: ~ auth: username: "alice" password: "secret" db: "admin" options: authmechanism: "MONGODB-CR" - description: "Missing delimiting slash between hosts and options" uri: "mongodb://example.com?tls=true" valid: true warning: false hosts: - type: "hostname" host: "example.com" port: ~ auth: ~ options: tls: true mongo-ruby-driver-2.21.3/spec/spec_tests/data/connection_string/valid-unix_socket-absolute.yml000066400000000000000000000135161505113246500327310ustar00rootroot00000000000000tests: - description: "Unix domain socket (absolute path with trailing slash)" uri: "mongodb://%2Ftmp%2Fmongodb-27017.sock/" valid: true warning: false hosts: - type: "unix" host: "/tmp/mongodb-27017.sock" port: ~ auth: ~ options: ~ - description: "Unix domain socket (absolute path without trailing slash)" uri: "mongodb://%2Ftmp%2Fmongodb-27017.sock" valid: true warning: false hosts: - type: "unix" host: "/tmp/mongodb-27017.sock" port: ~ auth: ~ options: ~ - description: "Unix domain socket (absolute path with spaces in path)" uri: "mongodb://%2Ftmp%2F %2Fmongodb-27017.sock" valid: true warning: false hosts: - type: "unix" host: "/tmp/ /mongodb-27017.sock" port: ~ auth: ~ options: ~ - description: "Multiple Unix domain sockets (absolute paths)" uri: "mongodb://%2Ftmp%2Fmongodb-27017.sock,%2Ftmp%2Fmongodb-27018.sock" valid: true warning: false hosts: - type: "unix" host: "/tmp/mongodb-27017.sock" port: ~ - type: "unix" host: "/tmp/mongodb-27018.sock" port: ~ auth: ~ options: ~ - description: "Multiple hosts (absolute path and ipv4)" uri: 
"mongodb://127.0.0.1:27017,%2Ftmp%2Fmongodb-27017.sock" valid: true warning: false hosts: - type: "ipv4" host: "127.0.0.1" port: 27017 - type: "unix" host: "/tmp/mongodb-27017.sock" port: ~ auth: ~ options: ~ - description: "Multiple hosts (absolute path and hostname resembling relative path)" uri: "mongodb://mongodb-27017.sock,%2Ftmp%2Fmongodb-27018.sock" valid: true warning: false hosts: - type: "hostname" host: "mongodb-27017.sock" port: ~ - type: "unix" host: "/tmp/mongodb-27018.sock" port: ~ auth: ~ options: ~ - description: "Unix domain socket with auth database (absolute path)" uri: "mongodb://alice:foo@%2Ftmp%2Fmongodb-27017.sock/admin" valid: true warning: false hosts: - type: "unix" host: "/tmp/mongodb-27017.sock" port: ~ auth: username: "alice" password: "foo" db: "admin" options: ~ - description: "Unix domain socket with path resembling socket file (absolute path with trailing slash)" uri: "mongodb://%2Ftmp%2Fpath.to.sock%2Fmongodb-27017.sock/" valid: true warning: false hosts: - type: "unix" host: "/tmp/path.to.sock/mongodb-27017.sock" port: ~ auth: ~ options: ~ - description: "Unix domain socket with path resembling socket file (absolute path without trailing slash)" uri: "mongodb://%2Ftmp%2Fpath.to.sock%2Fmongodb-27017.sock" valid: true warning: false hosts: - type: "unix" host: "/tmp/path.to.sock/mongodb-27017.sock" port: ~ auth: ~ options: ~ - description: "Unix domain socket with path resembling socket file and auth (absolute path)" uri: "mongodb://bob:bar@%2Ftmp%2Fpath.to.sock%2Fmongodb-27017.sock/admin" valid: true warning: false hosts: - type: "unix" host: "/tmp/path.to.sock/mongodb-27017.sock" port: ~ auth: username: "bob" password: "bar" db: "admin" options: ~ - description: "Multiple Unix domain sockets and auth DB (absolute path)" uri: "mongodb://%2Ftmp%2Fmongodb-27017.sock,%2Ftmp%2Fmongodb-27018.sock/admin" valid: true warning: false hosts: - type: "unix" host: "/tmp/mongodb-27017.sock" port: ~ - type: "unix" host: "/tmp/mongodb-27018.sock" port: ~ auth: username: ~ password: ~ db: "admin" options: ~ - description: "Multiple Unix domain sockets with auth DB (absolute path)" uri: "mongodb://%2Ftmp%2Fmongodb-27017.sock,%2Ftmp%2Fmongodb-27018.sock/admin" valid: true warning: false hosts: - type: "unix" host: "/tmp/mongodb-27017.sock" port: ~ - type: "unix" host: "/tmp/mongodb-27018.sock" port: ~ auth: username: ~ password: ~ db: "admin" options: ~ - description: "Multiple Unix domain sockets with auth and query string (absolute path)" uri: "mongodb://bob:bar@%2Ftmp%2Fmongodb-27017.sock,%2Ftmp%2Fmongodb-27018.sock/admin?w=1" valid: true warning: false hosts: - type: "unix" host: "/tmp/mongodb-27017.sock" port: ~ - type: "unix" host: "/tmp/mongodb-27018.sock" port: ~ auth: username: "bob" password: "bar" db: "admin" options: w: 1 mongo-ruby-driver-2.21.3/spec/spec_tests/data/connection_string/valid-unix_socket-relative.yml000066400000000000000000000144201505113246500327210ustar00rootroot00000000000000tests: - description: "Unix domain socket (relative path with trailing slash)" uri: "mongodb://rel%2Fmongodb-27017.sock/" valid: true warning: false hosts: - type: "unix" host: "rel/mongodb-27017.sock" port: ~ auth: ~ options: ~ - description: "Unix domain socket (relative path without trailing slash)" uri: "mongodb://rel%2Fmongodb-27017.sock" valid: true warning: false hosts: - type: "unix" host: "rel/mongodb-27017.sock" port: ~ auth: ~ options: ~ - description: "Unix domain socket (relative path with spaces)" uri: "mongodb://rel%2F %2Fmongodb-27017.sock" valid: true 
warning: false hosts: - type: "unix" host: "rel/ /mongodb-27017.sock" port: ~ auth: ~ options: ~ - description: "Multiple Unix domain sockets (relative paths)" uri: "mongodb://rel%2Fmongodb-27017.sock,rel%2Fmongodb-27018.sock" valid: true warning: false hosts: - type: "unix" host: "rel/mongodb-27017.sock" port: ~ - type: "unix" host: "rel/mongodb-27018.sock" port: ~ auth: ~ options: ~ - description: "Multiple Unix domain sockets (relative and absolute paths)" uri: "mongodb://rel%2Fmongodb-27017.sock,%2Ftmp%2Fmongodb-27018.sock" valid: true warning: false hosts: - type: "unix" host: "rel/mongodb-27017.sock" port: ~ - type: "unix" host: "/tmp/mongodb-27018.sock" port: ~ auth: ~ options: ~ - description: "Multiple hosts (relative path and ipv4)" uri: "mongodb://127.0.0.1:27017,rel%2Fmongodb-27017.sock" valid: true warning: false hosts: - type: "ipv4" host: "127.0.0.1" port: 27017 - type: "unix" host: "rel/mongodb-27017.sock" port: ~ auth: ~ options: ~ - description: "Multiple hosts (relative path and hostname resembling relative path)" uri: "mongodb://mongodb-27017.sock,rel%2Fmongodb-27018.sock" valid: true warning: false hosts: - type: "hostname" host: "mongodb-27017.sock" port: ~ - type: "unix" host: "rel/mongodb-27018.sock" port: ~ auth: ~ options: ~ - description: "Unix domain socket with auth database (relative path)" uri: "mongodb://alice:foo@rel%2Fmongodb-27017.sock/admin" valid: true warning: false hosts: - type: "unix" host: "rel/mongodb-27017.sock" port: ~ auth: username: "alice" password: "foo" db: "admin" options: ~ - description: "Unix domain socket with path resembling socket file (relative path with trailing slash)" uri: "mongodb://rel%2Fpath.to.sock%2Fmongodb-27017.sock/" valid: true warning: false hosts: - type: "unix" host: "rel/path.to.sock/mongodb-27017.sock" port: ~ auth: ~ options: ~ - description: "Unix domain socket with path resembling socket file (relative path without trailing slash)" uri: "mongodb://rel%2Fpath.to.sock%2Fmongodb-27017.sock" valid: true warning: false hosts: - type: "unix" host: "rel/path.to.sock/mongodb-27017.sock" port: ~ auth: ~ options: ~ - description: "Unix domain socket with path resembling socket file and auth (relative path)" uri: "mongodb://bob:bar@rel%2Fpath.to.sock%2Fmongodb-27017.sock/admin" valid: true warning: false hosts: - type: "unix" host: "rel/path.to.sock/mongodb-27017.sock" port: ~ auth: username: "bob" password: "bar" db: "admin" options: ~ - description: "Multiple Unix domain sockets and auth DB resembling a socket (relative path)" uri: "mongodb://rel%2Fmongodb-27017.sock,rel%2Fmongodb-27018.sock/admin" valid: true warning: false hosts: - type: "unix" host: "rel/mongodb-27017.sock" port: ~ - type: "unix" host: "rel/mongodb-27018.sock" port: ~ auth: username: ~ password: ~ db: "admin" options: ~ - description: "Multiple Unix domain sockets with auth DB resembling a path (relative path)" uri: "mongodb://rel%2Fmongodb-27017.sock,rel%2Fmongodb-27018.sock/admin" valid: true warning: false hosts: - type: "unix" host: "rel/mongodb-27017.sock" port: ~ - type: "unix" host: "rel/mongodb-27018.sock" port: ~ auth: username: ~ password: ~ db: "admin" options: ~ - description: "Multiple Unix domain sockets with auth and query string (relative path)" uri: "mongodb://bob:bar@rel%2Fmongodb-27017.sock,rel%2Fmongodb-27018.sock/admin?w=1" valid: true warning: false hosts: - type: "unix" host: "rel/mongodb-27017.sock" port: ~ - type: "unix" host: "rel/mongodb-27018.sock" port: ~ auth: username: "bob" password: "bar" db: "admin" options: w: 1 
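The Unix-socket cases above all rely on percent-encoding the socket path, because a literal `/` would otherwise be read as the start of the auth database. A minimal sketch of consuming such a URI with the Ruby driver follows; the socket path is illustrative, and a server must actually be listening on it before any operation succeeds:

    require 'mongo'

    client = Mongo::Client.new('mongodb://bob:bar@%2Ftmp%2Fmongodb-27017.sock/admin?w=1')
    # The parser decodes %2F back to '/', so the cluster address is the socket
    # path itself rather than a hostname/port pair.
    puts client.cluster.addresses.map(&:to_s).inspect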
mongo-ruby-driver-2.21.3/spec/spec_tests/data/connection_string/valid-warnings.yml000066400000000000000000000037141505113246500304110ustar00rootroot00000000000000tests: - description: "Unrecognized option keys are ignored" uri: "mongodb://example.com/?foo=bar" valid: true warning: true hosts: - type: "hostname" host: "example.com" port: ~ auth: ~ options: ~ - description: "Unsupported option values are ignored" uri: "mongodb://example.com/?fsync=ifPossible" valid: true warning: true hosts: - type: "hostname" host: "example.com" port: ~ auth: ~ options: ~ - description: "Repeated option keys" uri: "mongodb://example.com/?replicaSet=test&replicaSet=test" valid: true warning: true hosts: - type: "hostname" host: "example.com" port: ~ auth: ~ options: replicaset: "test" - description: "Deprecated (or unknown) options are ignored if replacement exists" uri: "mongodb://example.com/?wtimeout=5&wtimeoutMS=10" valid: true warning: true hosts: - type: "hostname" host: "example.com" port: ~ auth: ~ options: wtimeoutms: 10 - description: "Empty integer option values are ignored" uri: "mongodb://localhost/?maxIdleTimeMS=" valid: true warning: true hosts: - type: "hostname" host: "localhost" port: ~ auth: ~ options: ~ - description: "Empty boolean option value are ignored" uri: "mongodb://localhost/?journal=" valid: true warning: true hosts: - type: "hostname" host: "localhost" port: ~ auth: ~ options: ~ mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud/000077500000000000000000000000001505113246500221445ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud/read/000077500000000000000000000000001505113246500230575ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud/read/aggregate-collation.yml000066400000000000000000000007741505113246500275220ustar00rootroot00000000000000data: - {_id: 1, x: 'ping'} minServerVersion: '3.4' serverless: 'forbid' tests: - description: "Aggregate with collation" operation: name: aggregate arguments: pipeline: - $match: x: 'PING' collation: { locale: 'en_US', strength: 2 } # https://mongodb.com/docs/manual/reference/collation/#collation-document outcome: result: - {_id: 1, x: 'ping'} mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud/read/aggregate-out.yml000066400000000000000000000022311505113246500263330ustar00rootroot00000000000000data: - {_id: 1, x: 11} - {_id: 2, x: 22} - {_id: 3, x: 33} minServerVersion: '2.6' serverless: 'forbid' tests: - description: "Aggregate with $out" operation: name: aggregate arguments: pipeline: - $sort: {x: 1} - $match: _id: {$gt: 1} - $out: "other_test_collection" batchSize: 2 outcome: collection: name: "other_test_collection" data: - {_id: 2, x: 22} - {_id: 3, x: 33} - description: "Aggregate with $out and batch size of 0" operation: name: aggregate arguments: pipeline: - $sort: {x: 1} - $match: _id: {$gt: 1} - $out: "other_test_collection" batchSize: 0 outcome: collection: name: "other_test_collection" data: - {_id: 2, x: 22} - {_id: 3, x: 33} mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud/read/aggregate.yml000066400000000000000000000007341505113246500255340ustar00rootroot00000000000000data: - {_id: 1, x: 11} - {_id: 2, x: 22} - {_id: 3, x: 33} tests: - description: "Aggregate with multiple stages" operation: name: aggregate arguments: pipeline: - $sort: {x: 1} - $match: _id: {$gt: 1} batchSize: 2 outcome: result: - {_id: 2, x: 22} - {_id: 3, x: 33} 
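The read tests above map directly onto `Collection#aggregate` in the Ruby driver. A minimal sketch mirroring the "Aggregate with multiple stages" case, assuming a `client` connected to the fixture data (`batch_size` is the Ruby driver's snake_case spelling of `batchSize`):

    coll = client[:test]
    docs = coll.aggregate(
      [
        { '$sort'  => { 'x' => 1 } },
        { '$match' => { '_id' => { '$gt' => 1 } } },
      ],
      batch_size: 2
    ).to_a
    # Given the fixture above, docs should contain the documents with _id 2 and 3.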
mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud/read/count-collation.yml000066400000000000000000000012711505113246500267150ustar00rootroot00000000000000data: - {_id: 1, x: 'PING'} minServerVersion: '3.4' serverless: 'forbid' tests: - description: "Count documents with collation" operation: name: countDocuments arguments: filter: { x: 'ping' } collation: { locale: 'en_US', strength: 2 } # https://mongodb.com/docs/manual/reference/collation/#collation-document outcome: result: 1 - description: "Deprecated count with collation" operation: name: count arguments: filter: { x: 'ping' } collation: { locale: 'en_US', strength: 2 } outcome: result: 1 mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud/read/count-empty.yml000066400000000000000000000011701505113246500260650ustar00rootroot00000000000000data: [] tests: - description: "Estimated document count with empty collection" operation: name: estimatedDocumentCount arguments: { } outcome: result: 0 - description: "Count documents with empty collection" operation: name: countDocuments arguments: filter: { } outcome: result: 0 - description: "Deprecated count with empty collection" operation: name: count arguments: filter: { } outcome: result: 0 mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud/read/count.yml000066400000000000000000000031271505113246500247350ustar00rootroot00000000000000data: - {_id: 1, x: 11} - {_id: 2, x: 22} - {_id: 3, x: 33} tests: - description: "Estimated document count" operation: name: estimatedDocumentCount arguments: { } outcome: result: 3 - description: "Count documents without a filter" operation: name: countDocuments arguments: filter: { } outcome: result: 3 - description: "Count documents with a filter" operation: name: countDocuments arguments: filter: _id: {$gt: 1} outcome: result: 2 - description: "Count documents with skip and limit" operation: name: countDocuments arguments: filter: {} skip: 1 limit: 3 outcome: result: 2 - description: "Deprecated count without a filter" operation: name: count arguments: filter: { } outcome: result: 3 - description: "Deprecated count with a filter" operation: name: count arguments: filter: _id: {$gt: 1} outcome: result: 2 - description: "Deprecated count with skip and limit" operation: name: count arguments: filter: {} skip: 1 limit: 3 outcome: result: 2 mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud/read/distinct-collation.yml000066400000000000000000000007361505113246500274130ustar00rootroot00000000000000data: - {_id: 1, string: 'PING'} - {_id: 2, string: 'ping'} minServerVersion: '3.4' serverless: 'forbid' tests: - description: "Distinct with a collation" operation: name: distinct arguments: fieldName: "string" collation: { locale: 'en_US', strength: 2 } # https://mongodb.com/docs/manual/reference/collation/#collation-document outcome: result: - 'PING' mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud/read/distinct.yml000066400000000000000000000012171505113246500254240ustar00rootroot00000000000000data: - {_id: 1, x: 11} - {_id: 2, x: 22} - {_id: 3, x: 33} tests: - description: "Distinct without a filter" operation: name: distinct arguments: fieldName: "x" filter: {} outcome: result: - 11 - 22 - 33 - description: "Distinct with a filter" operation: name: distinct arguments: fieldName: "x" filter: _id: {$gt: 1} outcome: result: - 22 - 33mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud/read/find-collation.yml000066400000000000000000000007001505113246500265010ustar00rootroot00000000000000data: - {_id: 1, x: 'ping'} minServerVersion: '3.4' serverless: 'forbid' tests: - 
description: "Find with a collation" operation: name: "find" arguments: filter: {x: 'PING'} collation: { locale: 'en_US', strength: 2 } # https://mongodb.com/docs/manual/reference/collation/#collation-document outcome: result: - {_id: 1, x: 'ping'} mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud/read/find.yml000066400000000000000000000021301505113246500245160ustar00rootroot00000000000000data: - {_id: 1, x: 11} - {_id: 2, x: 22} - {_id: 3, x: 33} - {_id: 4, x: 44} - {_id: 5, x: 55} tests: - description: "Find with filter" operation: name: "find" arguments: filter: {_id: 1} outcome: result: - {_id: 1, x: 11} - description: "Find with filter, sort, skip, and limit" operation: name: "find" arguments: filter: _id: {$gt: 2} sort: {_id: 1} skip: 2 limit: 2 outcome: result: - {_id: 5, x: 55} - description: "Find with limit, sort, and batchsize" operation: name: "find" arguments: filter: {} sort: {_id: 1} limit: 4 batchSize: 2 outcome: result: - {_id: 1, x: 11} - {_id: 2, x: 22} - {_id: 3, x: 33} - {_id: 4, x: 44} mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud/write/000077500000000000000000000000001505113246500232765ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud/write/bulkWrite-arrayFilters.yml000066400000000000000000000027371505113246500304470ustar00rootroot00000000000000data: - {_id: 1, y: [{b: 3}, {b: 1}]} - {_id: 2, y: [{b: 0}, {b: 1}]} minServerVersion: '3.5.6' tests: - description: "BulkWrite with arrayFilters" operation: name: "bulkWrite" arguments: requests: - # UpdateOne when one document matches arrayFilters name: "updateOne" arguments: filter: {} update: $set: {"y.$[i].b": 2} arrayFilters: - {i.b: 3} - # UpdateMany when multiple documents match arrayFilters name: "updateMany" arguments: filter: {} update: $set: {"y.$[i].b": 2} arrayFilters: - {i.b: 1} options: { ordered: true } outcome: result: deletedCount: 0 insertedCount: 0 insertedIds: {} matchedCount: 3 modifiedCount: 3 upsertedCount: 0 upsertedIds: {} collection: data: - {_id: 1, y: [{b: 2}, {b: 2}]} - {_id: 2, y: [{b: 0}, {b: 2}]} mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud/write/bulkWrite-collation.yml000066400000000000000000000075271505113246500277660ustar00rootroot00000000000000data: - {_id: 1, x: 11} - {_id: 2, x: 'ping'} - {_id: 3, x: 'pINg'} - {_id: 4, x: 'pong'} - {_id: 5, x: 'pONg'} minServerVersion: '3.4' serverless: 'forbid' # See: https://mongodb.com/docs/manual/reference/collation/#collation-document tests: - description: "BulkWrite with delete operations and collation" operation: name: "bulkWrite" arguments: requests: - # matches two documents but deletes one name: "deleteOne" arguments: filter: { x: "PING" } collation: { locale: "en_US", strength: 2 } - # matches the remaining document and deletes it name: "deleteOne" arguments: filter: { x: "PING" } collation: { locale: "en_US", strength: 2 } - # matches two documents and deletes them name: "deleteMany" arguments: filter: { x: "PONG" } collation: { locale: "en_US", strength: 2 } options: { ordered: true } outcome: result: deletedCount: 4 insertedCount: 0 insertedIds: {} matchedCount: 0 modifiedCount: 0 upsertedCount: 0 upsertedIds: {} collection: data: - {_id: 1, x: 11 } - description: "BulkWrite with update operations and collation" operation: name: "bulkWrite" arguments: requests: - # matches only one document due to strength and updates name: "updateMany" arguments: filter: { x: "ping" } update: { $set: { x: "PONG" } } collation: { locale: "en_US", strength: 3 } - # matches one document and updates name: "updateOne" 
mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud/write/bulkWrite-collation.yml

data:
  - {_id: 1, x: 11}
  - {_id: 2, x: 'ping'}
  - {_id: 3, x: 'pINg'}
  - {_id: 4, x: 'pong'}
  - {_id: 5, x: 'pONg'}
minServerVersion: '3.4'
serverless: 'forbid'
# See: https://mongodb.com/docs/manual/reference/collation/#collation-document
tests:
  - description: "BulkWrite with delete operations and collation"
    operation:
      name: "bulkWrite"
      arguments:
        requests:
          # matches two documents but deletes one
          - name: "deleteOne"
            arguments:
              filter: { x: "PING" }
              collation: { locale: "en_US", strength: 2 }
          # matches the remaining document and deletes it
          - name: "deleteOne"
            arguments:
              filter: { x: "PING" }
              collation: { locale: "en_US", strength: 2 }
          # matches two documents and deletes them
          - name: "deleteMany"
            arguments:
              filter: { x: "PONG" }
              collation: { locale: "en_US", strength: 2 }
        options: { ordered: true }
    outcome:
      result:
        deletedCount: 4
        insertedCount: 0
        insertedIds: {}
        matchedCount: 0
        modifiedCount: 0
        upsertedCount: 0
        upsertedIds: {}
      collection:
        data:
          - {_id: 1, x: 11 }
  - description: "BulkWrite with update operations and collation"
    operation:
      name: "bulkWrite"
      arguments:
        requests:
          # matches only one document due to strength and updates
          - name: "updateMany"
            arguments:
              filter: { x: "ping" }
              update: { $set: { x: "PONG" } }
              collation: { locale: "en_US", strength: 3 }
          # matches one document and updates
          - name: "updateOne"
            arguments:
              filter: { x: "ping" }
              update: { $set: { x: "PONG" } }
              collation: { locale: "en_US", strength: 2 }
          # matches no document due to strength and upserts
          - name: "replaceOne"
            arguments:
              filter: { x: "ping" }
              replacement: { _id: 6, x: "ping" }
              upsert: true
              collation: { locale: "en_US", strength: 3 }
          # matches two documents and updates
          - name: "updateMany"
            arguments:
              filter: { x: "pong" }
              update: { $set: { x: "PONG" } }
              collation: { locale: "en_US", strength: 2 }
        options: { ordered: true }
    outcome:
      result:
        deletedCount: 0
        insertedCount: 0
        insertedIds: {}
        matchedCount: 6
        modifiedCount: 4
        upsertedCount: 1
        upsertedIds: { 2: 6 }
      collection:
        data:
          - {_id: 1, x: 11 }
          - {_id: 2, x: "PONG" }
          - {_id: 3, x: "PONG" }
          - {_id: 4, x: "PONG" }
          - {_id: 5, x: "PONG" }
          - {_id: 6, x: "ping" }
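# The same case-insensitive matching can be reproduced with Collection#bulk_write
# by passing a :collation option inside each request. A minimal sketch (the
# `coll` collection handle is an illustrative assumption):
#
#   coll.bulk_write(
#     [
#       { delete_one:  { filter: { x: 'PING' },
#                        collation: { locale: 'en_US', strength: 2 } } },
#       { delete_many: { filter: { x: 'PONG' },
#                        collation: { locale: 'en_US', strength: 2 } } }
#     ],
#     ordered: true
#   )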
mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud/write/bulkWrite.yml

data:
  - {_id: 1, x: 11}
  - {_id: 2, x: 22}
minServerVersion: '2.6'
tests:
  - description: "BulkWrite with deleteOne operations"
    operation:
      name: "bulkWrite"
      arguments:
        # Note: as in the "DeleteOne when many documents match" test in
        # deleteOne.yml, we omit a deleteOne operation that might match
        # multiple documents as that would hinder our ability to assert
        # the final state of the collection under test.
        requests:
          # does not match an existing document
          - name: "deleteOne"
            arguments:
              filter: { _id: 3 }
          # deletes the matched document
          - name: "deleteOne"
            arguments:
              filter: { _id: 2 }
        options: { ordered: true }
    outcome:
      result:
        deletedCount: 1
        insertedCount: 0
        insertedIds: {}
        matchedCount: 0
        modifiedCount: 0
        upsertedCount: 0
        upsertedIds: {}
      collection:
        data:
          - {_id: 1, x: 11 }
  - description: "BulkWrite with deleteMany operations"
    operation:
      name: "bulkWrite"
      arguments:
        requests:
          # does not match any existing documents
          - name: "deleteMany"
            arguments:
              filter: { x: { $lt: 11 } }
          # deletes the matched documents
          - name: "deleteMany"
            arguments:
              filter: { x: { $lte: 22 } }
        options: { ordered: true }
    outcome:
      result:
        deletedCount: 2
        insertedCount: 0
        insertedIds: {}
        matchedCount: 0
        modifiedCount: 0
        upsertedCount: 0
        upsertedIds: {}
      collection:
        data: []
  - description: "BulkWrite with insertOne operations"
    operation:
      name: "bulkWrite"
      arguments:
        requests:
          - name: "insertOne"
            arguments:
              document: { _id: 3, x: 33 }
          - name: "insertOne"
            arguments:
              document: { _id: 4, x: 44 }
        options: { ordered: true }
    outcome:
      result:
        deletedCount: 0
        insertedCount: 2
        insertedIds: { 0: 3, 1: 4 }
        matchedCount: 0
        modifiedCount: 0
        upsertedCount: 0
        upsertedIds: {}
      collection:
        data:
          - {_id: 1, x: 11 }
          - {_id: 2, x: 22 }
          - {_id: 3, x: 33 }
          - {_id: 4, x: 44 }
  - description: "BulkWrite with replaceOne operations"
    operation:
      name: "bulkWrite"
      arguments:
        # Note: as in the "ReplaceOne when many documents match" test in
        # replaceOne.yml, we omit a replaceOne operation that might
        # match multiple documents as that would hinder our ability to
        # assert the final state of the collection under test.
        requests:
          # does not match an existing document
          - name: "replaceOne"
            arguments:
              filter: { _id: 3 }
              replacement: { x: 33 }
          # modifies the matched document
          - name: "replaceOne"
            arguments:
              filter: { _id: 1 }
              replacement: { x: 12 }
          # does not match an existing document and upserts
          - name: "replaceOne"
            arguments:
              filter: { _id: 3 }
              replacement: { x: 33 }
              upsert: true
        options: { ordered: true }
    outcome:
      result:
        deletedCount: 0
        insertedCount: 0
        insertedIds: {}
        matchedCount: 1
        modifiedCount: 1
        upsertedCount: 1
        upsertedIds: { 2: 3 }
      collection:
        data:
          - {_id: 1, x: 12 }
          - {_id: 2, x: 22 }
          - {_id: 3, x: 33 }
  - description: "BulkWrite with updateOne operations"
    operation:
      name: "bulkWrite"
      arguments:
        # Note: as in the "UpdateOne when many documents match" test in
        # updateOne.yml, we omit an updateOne operation that might match
        # multiple documents as that would hinder our ability to assert
        # the final state of the collection under test.
        requests:
          # does not match an existing document
          - name: "updateOne"
            arguments:
              filter: { _id: 0 }
              update: { $set: { x: 0 } }
          # does not modify the matched document
          - name: "updateOne"
            arguments:
              filter: { _id: 1 }
              update: { $set: { x: 11 } }
          # modifies the matched document
          - name: "updateOne"
            arguments:
              filter: { _id: 2 }
              update: { $inc: { x: 1 } }
          # does not match an existing document and upserts
          - name: "updateOne"
            arguments:
              filter: { _id: 3 }
              update: { $set: { x: 33 } }
              upsert: true
        options: { ordered: true }
    outcome:
      result:
        deletedCount: 0
        insertedCount: 0
        insertedIds: {}
        matchedCount: 2
        modifiedCount: 1
        upsertedCount: 1
        upsertedIds: { 3: 3 }
      collection:
        data:
          - {_id: 1, x: 11 }
          - {_id: 2, x: 23 }
          - {_id: 3, x: 33 }
  - description: "BulkWrite with updateMany operations"
    operation:
      name: "bulkWrite"
      arguments:
        requests:
          # does not match any existing documents
          - name: "updateMany"
            arguments:
              filter: { x: { $lt: 11 } }
              update: { $set: { x: 0 } }
          # does not modify the matched documents
          - name: "updateMany"
            arguments:
              filter: { x: { $lte: 22 } }
              update: { $unset: { y: 1 } }
          # modifies the matched documents
          - name: "updateMany"
            arguments:
              filter: { x: { $lte: 22 } }
              update: { $inc: { x: 1 } }
          # does not match any existing documents and upserts
          - name: "updateMany"
            arguments:
              filter: { _id: 3 }
              update: { $set: { x: 33 } }
              upsert: true
        options: { ordered: true }
    outcome:
      result:
        deletedCount: 0
        insertedCount: 0
        insertedIds: {}
        matchedCount: 4
        modifiedCount: 2
        upsertedCount: 1
        upsertedIds: { 3: 3 }
      collection:
        data:
          - {_id: 1, x: 12 }
          - {_id: 2, x: 23 }
          - {_id: 3, x: 33 }
  - description: "BulkWrite with mixed ordered operations"
    operation:
      name: "bulkWrite"
      arguments:
        requests:
          - name: "insertOne"
            arguments:
              document: { _id: 3, x: 33 }
          - name: "updateOne"
            arguments:
              filter: { _id: 2 }
              update: { $inc: { x: 1 } }
          - name: "updateMany"
            arguments:
              filter: { _id: { $gt: 1 } }
              update: { $inc: { x: 1 } }
          - name: "insertOne"
            arguments:
              document: { _id: 4, x: 44 }
          - name: "deleteMany"
            arguments:
              filter: { x: { $nin: [ 24, 34 ] } }
          - name: "replaceOne"
            arguments:
              filter: { _id: 4 }
              replacement: { _id: 4, x: 44 }
              upsert: true
        options: { ordered: true }
    outcome:
      result:
        deletedCount: 2
        insertedCount: 2
        insertedIds: { 0: 3, 3: 4 }
        matchedCount: 3
        modifiedCount: 3
        upsertedCount: 1
        upsertedIds: { 5: 4 }
      collection:
        data:
          - {_id: 2, x: 24 }
          - {_id: 3, x: 34 }
          - {_id: 4, x: 44 }
  - description: "BulkWrite with mixed unordered operations"
    operation:
      name: "bulkWrite"
      arguments:
        # We omit inserting multiple documents and updating documents
        # that may not exist at the start of this test as we cannot
        # assume the order in which the operations will execute.
        requests:
          - name: "replaceOne"
            arguments:
              filter: { _id: 3 }
              replacement: { _id: 3, x: 33 }
              upsert: true
          - name: "deleteOne"
            arguments:
              filter: { _id: 1 }
          - name: "updateOne"
            arguments:
              filter: { _id: 2 }
              update: { $inc: { x: 1 } }
        options: { ordered: false }
    outcome:
      result:
        deletedCount: 1
        insertedCount: 0
        insertedIds: {}
        matchedCount: 1
        modifiedCount: 1
        upsertedCount: 1
        upsertedIds: { 0: 3 }
      collection:
        data:
          - {_id: 2, x: 23 }
          - {_id: 3, x: 33 }
  - description: "BulkWrite continue-on-error behavior with unordered (preexisting duplicate key)"
    operation:
      name: "bulkWrite"
      arguments:
        requests:
          - name: "insertOne"
            arguments:
              document: { _id: 2, x: 22 }
          - name: "insertOne"
            arguments:
              document: { _id: 3, x: 33 }
          - name: "insertOne"
            arguments:
              document: { _id: 4, x: 44 }
        options: { ordered: false }
    outcome:
      error: true
      # Driver does not return a complete result in case of an error
      # Therefore, we cannot validate it.
      # result:
      #   deletedCount: 0
      #   insertedCount: 2
      #   # Since the map of insertedIds is generated before execution it
      #   # could indicate inserts that did not actually succeed. We omit
      #   # this field rather than expect drivers to provide an accurate
      #   # map filtered by write errors.
      #   matchedCount: 0
      #   modifiedCount: 0
      #   upsertedCount: 0
      #   upsertedIds: { }
      collection:
        data:
          - { _id: 1, x: 11 }
          - { _id: 2, x: 22 }
          - { _id: 3, x: 33 }
          - { _id: 4, x: 44 }
  - description: "BulkWrite continue-on-error behavior with unordered (duplicate key in requests)"
    operation:
      name: "bulkWrite"
      arguments:
        requests:
          - name: "insertOne"
            arguments:
              document: { _id: 3, x: 33 }
          - name: "insertOne"
            arguments:
              document: { _id: 3, x: 33 }
          - name: "insertOne"
            arguments:
              document: { _id: 4, x: 44 }
        options: { ordered: false }
    outcome:
      error: true
      # Driver does not return a complete result in case of an error
      # Therefore, we cannot validate it.
      # result:
      #   deletedCount: 0
      #   insertedCount: 2
      #   # Since the map of insertedIds is generated before execution it
      #   # could indicate inserts that did not actually succeed. We omit
      #   # this field rather than expect drivers to provide an accurate
      #   # map filtered by write errors.
      #   matchedCount: 0
      #   modifiedCount: 0
      #   upsertedCount: 0
      #   upsertedIds: { }
      collection:
        data:
          - { _id: 1, x: 11 }
          - { _id: 2, x: 22 }
          - { _id: 3, x: 33 }
          - { _id: 4, x: 44 }
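# The two continue-on-error cases above assert that an unordered bulk write
# still applies the writes that do not error before the driver reports the
# failure. With the Ruby driver the failure surfaces as
# Mongo::Error::BulkWriteError; a minimal sketch (the `coll` handle is an
# illustrative assumption, and the exact shape of the error's result document
# may vary by driver version):
#
#   begin
#     coll.bulk_write(
#       [
#         { insert_one: { _id: 3, x: 33 } },
#         { insert_one: { _id: 3, x: 33 } }, # duplicate _id
#         { insert_one: { _id: 4, x: 44 } }
#       ],
#       ordered: false
#     )
#   rescue Mongo::Error::BulkWriteError => e
#     e.result['writeErrors'] # per-write failures; _id 4 was still inserted
#   end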
mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud/write/deleteMany-collation.yml

data:
  - {_id: 1, x: 11}
  - {_id: 2, x: 'ping'}
  - {_id: 3, x: 'pINg'}
minServerVersion: '3.4'
serverless: 'forbid'
tests:
  - description: "DeleteMany when many documents match with collation"
    operation:
      name: "deleteMany"
      arguments:
        filter:
          x: 'PING'
        collation: { locale: 'en_US', strength: 2 } # https://mongodb.com/docs/manual/reference/collation/#collation-document
    outcome:
      result:
        deletedCount: 2
      collection:
        data:
          - {_id: 1, x: 11}

mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud/write/deleteMany.yml

data:
  - {_id: 1, x: 11}
  - {_id: 2, x: 22}
  - {_id: 3, x: 33}
tests:
  - description: "DeleteMany when many documents match"
    operation:
      name: "deleteMany"
      arguments:
        filter:
          _id: {$gt: 1}
    outcome:
      result:
        deletedCount: 2
      collection:
        data:
          - {_id: 1, x: 11}
  - description: "DeleteMany when no document matches"
    operation:
      name: "deleteMany"
      arguments:
        filter: {_id: 4}
    outcome:
      result:
        deletedCount: 0
      collection:
        data:
          - {_id: 1, x: 11}
          - {_id: 2, x: 22}
          - {_id: 3, x: 33}

mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud/write/deleteOne-collation.yml

data:
  - {_id: 1, x: 11}
  - {_id: 2, x: 'ping'}
  - {_id: 3, x: 'pINg'}
minServerVersion: '3.4'
serverless: 'forbid'
tests:
  - description: "DeleteOne when many documents matches with collation"
    operation:
      name: "deleteOne"
      arguments:
        filter: {x: 'PING'}
        collation: { locale: 'en_US', strength: 2 } # https://mongodb.com/docs/manual/reference/collation/#collation-document
    outcome:
      result:
        deletedCount: 1
      collection:
        data:
          - {_id: 1, x: 11}
          - {_id: 3, x: 'pINg'}
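# These delete cases map onto Collection#delete_one and Collection#delete_many,
# both of which accept a :collation option. A minimal sketch (the `coll` handle
# is an illustrative assumption):
#
#   result = coll.delete_many({ x: 'PING' },
#                             collation: { locale: 'en_US', strength: 2 })
#   result.deleted_count # => 2 against the seed data above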
- description: "DeleteOne when one document matches" operation: name: "deleteOne" arguments: filter: {_id: 2} outcome: result: deletedCount: 1 collection: data: - {_id: 1, x: 11} - {_id: 3, x: 33} - description: "DeleteOne when no documents match" operation: name: "deleteOne" arguments: filter: {_id: 4} outcome: result: deletedCount: 0 collection: data: - {_id: 1, x: 11} - {_id: 2, x: 22} - {_id: 3, x: 33} mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud/write/findOneAndDelete-collation.yml000066400000000000000000000013231505113246500311320ustar00rootroot00000000000000data: - {_id: 1, x: 11} - {_id: 2, x: 'ping'} - {_id: 3, x: 'pINg'} minServerVersion: '3.4' serverless: 'forbid' tests: - description: "FindOneAndDelete when one document matches with collation" operation: name: findOneAndDelete arguments: filter: {_id: 2, x: 'PING'} projection: {x: 1, _id: 0} sort: {x: 1} collation: { locale: 'en_US', strength: 2 } # https://mongodb.com/docs/manual/reference/collation/#collation-document outcome: result: {x: 'ping'} collection: data: - {_id: 1, x: 11} - {_id: 3, x: 'pINg'} mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud/write/findOneAndDelete.yml000066400000000000000000000025741505113246500271610ustar00rootroot00000000000000data: - {_id: 1, x: 11} - {_id: 2, x: 22} - {_id: 3, x: 33} tests: - description: "FindOneAndDelete when many documents match" operation: name: findOneAndDelete arguments: filter: _id: {$gt: 1} projection: {x: 1, _id: 0} sort: {x: 1} outcome: result: {x: 22} collection: data: - {_id: 1, x: 11} - {_id: 3, x: 33} - description: "FindOneAndDelete when one document matches" operation: name: findOneAndDelete arguments: filter: {_id: 2} projection: {x: 1, _id: 0} sort: {x: 1} outcome: result: {x: 22} collection: data: - {_id: 1, x: 11} - {_id: 3, x: 33} - description: "FindOneAndDelete when no documents match" operation: name: findOneAndDelete arguments: filter: {_id: 4} projection: {x: 1, _id: 0} sort: {x: 1} outcome: result: null collection: data: - {_id: 1, x: 11} - {_id: 2, x: 22} - {_id: 3, x: 33}mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud/write/findOneAndReplace-collation.yml000066400000000000000000000014541505113246500313100ustar00rootroot00000000000000data: - {_id: 1, x: 11} - {_id: 2, x: 'ping'} minServerVersion: '3.4' serverless: 'forbid' tests: - description: "FindOneAndReplace when one document matches with collation returning the document after modification" operation: name: findOneAndReplace arguments: filter: {x: 'PING'} replacement: {x: 'pong'} projection: {x: 1, _id: 0} returnDocument: After sort: {x: 1} collation: { locale: 'en_US', strength: 2 } # https://mongodb.com/docs/manual/reference/collation/#collation-document outcome: result: {x: 'pong'} collection: data: - {_id: 1, x: 11} - {_id: 2, x: 'pong'} mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud/write/findOneAndReplace-upsert.yml000066400000000000000000000062301505113246500306430ustar00rootroot00000000000000data: - {_id: 1, x: 11} - {_id: 2, x: 22} - {_id: 3, x: 33} minServerVersion: '2.6' tests: - description: "FindOneAndReplace when no documents match without id specified with upsert returning the document before modification" operation: name: findOneAndReplace arguments: filter: {_id: 4} replacement: {x: 44} projection: {x: 1, _id: 0} # Omit the sort option as it has no effect when no documents # match and would only cause an inconsistent return value on # pre-3.0 servers when combined with returnDocument "before" # (see: SERVER-17650). 
mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud/write/findOneAndReplace-upsert.yml

data:
  - {_id: 1, x: 11}
  - {_id: 2, x: 22}
  - {_id: 3, x: 33}
minServerVersion: '2.6'
tests:
  - description: "FindOneAndReplace when no documents match without id specified with upsert returning the document before modification"
    operation:
      name: findOneAndReplace
      arguments:
        filter: {_id: 4}
        replacement: {x: 44}
        projection: {x: 1, _id: 0}
        # Omit the sort option as it has no effect when no documents
        # match and would only cause an inconsistent return value on
        # pre-3.0 servers when combined with returnDocument "before"
        # (see: SERVER-17650).
        upsert: true
    outcome:
      result: null
      collection:
        data:
          - {_id: 1, x: 11}
          - {_id: 2, x: 22}
          - {_id: 3, x: 33}
          - {_id: 4, x: 44}
  - description: "FindOneAndReplace when no documents match without id specified with upsert returning the document after modification"
    operation:
      name: findOneAndReplace
      arguments:
        filter: {_id: 4}
        replacement: {x: 44}
        projection: {x: 1, _id: 0}
        returnDocument: After
        sort: {x: 1}
        upsert: true
    outcome:
      result: {x: 44}
      collection:
        data:
          - {_id: 1, x: 11}
          - {_id: 2, x: 22}
          - {_id: 3, x: 33}
          - {_id: 4, x: 44}
  - description: "FindOneAndReplace when no documents match with id specified with upsert returning the document before modification"
    operation:
      name: findOneAndReplace
      arguments:
        filter: {_id: 4}
        replacement: {_id: 4, x: 44}
        projection: {x: 1, _id: 0}
        # Omit the sort option as it has no effect when no documents
        # match and would only cause an inconsistent return value on
        # pre-3.0 servers when combined with returnDocument "before"
        # (see: SERVER-17650).
        upsert: true
    outcome:
      result: null
      collection:
        data:
          - {_id: 1, x: 11}
          - {_id: 2, x: 22}
          - {_id: 3, x: 33}
          - {_id: 4, x: 44}
  - description: "FindOneAndReplace when no documents match with id specified with upsert returning the document after modification"
    operation:
      name: findOneAndReplace
      arguments:
        filter: {_id: 4}
        replacement: {_id: 4, x: 44}
        projection: {x: 1, _id: 0}
        returnDocument: After
        sort: {x: 1}
        upsert: true
    outcome:
      result: {x: 44}
      collection:
        data:
          - {_id: 1, x: 11}
          - {_id: 2, x: 22}
          - {_id: 3, x: 33}
          - {_id: 4, x: 44}
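# The upsert variants above map to Collection#find_one_and_replace; the spec's
# returnDocument field corresponds to the driver's :return_document option
# (:before or :after). A minimal sketch (the `coll` handle is an illustrative
# assumption):
#
#   coll.find_one_and_replace(
#     { _id: 4 },
#     { x: 44 },
#     upsert: true,
#     return_document: :after,
#     projection: { x: 1, _id: 0 }
#   )
#   # => { 'x' => 44 }; with return_document: :before it returns nil,
#   #    as the "document before modification" tests above expect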
- description: "FindOneAndReplace when no documents match with id specified with upsert returning the document before modification" operation: name: findOneAndReplace arguments: filter: {_id: 4} replacement: {_id: 4, x: 44} projection: {x: 1, _id: 0} # Omit the sort option as it has no effect when no documents # match and would only cause an inconsistent return value on # pre-3.0 servers when combined with returnDocument "before" # (see: SERVER-17650). upsert: true outcome: result: null collection: data: - {_id: 1, x: 11} - {_id: 2, x: 22} - {_id: 3, x: 33} - {_id: 4, x: 44} - description: "FindOneAndReplace when no documents match with id specified with upsert returning the document after modification" operation: name: findOneAndReplace arguments: filter: {_id: 4} replacement: {_id: 4, x: 44} projection: {x: 1, _id: 0} returnDocument: After sort: {x: 1} upsert: true outcome: result: {x: 44} collection: data: - {_id: 1, x: 11} - {_id: 2, x: 22} - {_id: 3, x: 33} - {_id: 4, x: 44} mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud/write/findOneAndReplace.yml000066400000000000000000000066351505113246500273340ustar00rootroot00000000000000data: - {_id: 1, x: 11} - {_id: 2, x: 22} - {_id: 3, x: 33} tests: - description: "FindOneAndReplace when many documents match returning the document before modification" operation: name: findOneAndReplace arguments: filter: _id: {$gt: 1} replacement: {x: 32} projection: {x: 1, _id: 0} sort: {x: 1} outcome: result: {x: 22} collection: data: - {_id: 1, x: 11} - {_id: 2, x: 32} - {_id: 3, x: 33} - description: "FindOneAndReplace when many documents match returning the document after modification" operation: name: findOneAndReplace arguments: filter: _id: {$gt: 1} replacement: {x: 32} projection: {x: 1, _id: 0} returnDocument: After sort: {x: 1} outcome: result: {x: 32} collection: data: - {_id: 1, x: 11} - {_id: 2, x: 32} - {_id: 3, x: 33} - description: "FindOneAndReplace when one document matches returning the document before modification" operation: name: findOneAndReplace arguments: filter: {_id: 2} replacement: {x: 32} projection: {x: 1, _id: 0} sort: {x: 1} outcome: result: {x: 22} collection: data: - {_id: 1, x: 11} - {_id: 2, x: 32} - {_id: 3, x: 33} - description: "FindOneAndReplace when one document matches returning the document after modification" operation: name: findOneAndReplace arguments: filter: {_id: 2} replacement: {x: 32} projection: {x: 1, _id: 0} returnDocument: After sort: {x: 1} outcome: result: {x: 32} collection: data: - {_id: 1, x: 11} - {_id: 2, x: 32} - {_id: 3, x: 33} - description: "FindOneAndReplace when no documents match returning the document before modification" operation: name: findOneAndReplace arguments: filter: {_id: 4} replacement: {x: 44} projection: {x: 1, _id: 0} sort: {x: 1} outcome: result: null collection: data: - {_id: 1, x: 11} - {_id: 2, x: 22} - {_id: 3, x: 33} - description: "FindOneAndReplace when no documents match returning the document after modification" operation: name: findOneAndReplace arguments: filter: {_id: 4} replacement: {x: 44} projection: {x: 1, _id: 0} returnDocument: After sort: {x: 1} outcome: result: null collection: data: - {_id: 1, x: 11} - {_id: 2, x: 22} - {_id: 3, x: 33} mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud/write/findOneAndUpdate-arrayFilters.yml000066400000000000000000000036071505113246500316440ustar00rootroot00000000000000data: - {_id: 1, y: [{b: 3}, {b: 1}]} - {_id: 2, y: [{b: 0}, {b: 1}]} minServerVersion: '3.5.6' tests: - description: "FindOneAndUpdate when no 
mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud/write/findOneAndUpdate-arrayFilters.yml

data:
  - {_id: 1, y: [{b: 3}, {b: 1}]}
  - {_id: 2, y: [{b: 0}, {b: 1}]}
minServerVersion: '3.5.6'
tests:
  - description: "FindOneAndUpdate when no document matches arrayFilters"
    operation:
      name: findOneAndUpdate
      arguments:
        filter: {}
        update:
          $set: {"y.$[i].b": 2}
        arrayFilters:
          - {i.b: 4}
    outcome:
      result:
        _id: 1
        y:
          - {b: 3}
          - {b: 1}
      collection:
        data:
          - {_id: 1, y: [{b: 3}, {b: 1}]}
          - {_id: 2, y: [{b: 0}, {b: 1}]}
  - description: "FindOneAndUpdate when one document matches arrayFilters"
    operation:
      name: findOneAndUpdate
      arguments:
        filter: {}
        update:
          $set: {"y.$[i].b": 2}
        arrayFilters:
          - {i.b: 3}
    outcome:
      result:
        _id: 1
        y:
          - {b: 3}
          - {b: 1}
      collection:
        data:
          - {_id: 1, y: [{b: 2}, {b: 1}]}
          - {_id: 2, y: [{b: 0}, {b: 1}]}
  - description: "FindOneAndUpdate when multiple documents match arrayFilters"
    operation:
      name: findOneAndUpdate
      arguments:
        filter: {}
        update:
          $set: {"y.$[i].b": 2}
        arrayFilters:
          - {i.b: 1}
    outcome:
      result:
        _id: 1
        y:
          - {b: 3}
          - {b: 1}
      collection:
        data:
          - {_id: 1, y: [{b: 3}, {b: 2}]}
          - {_id: 2, y: [{b: 0}, {b: 1}]}

mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud/write/findOneAndUpdate-collation.yml

data:
  - {_id: 1, x: 11}
  - {_id: 2, x: 'ping'}
  - {_id: 3, x: 'pINg'}
minServerVersion: '3.4'
serverless: 'forbid'
tests:
  - description: "FindOneAndUpdate when many documents match with collation returning the document before modification"
    operation:
      name: findOneAndUpdate
      arguments:
        filter:
          x: 'PING'
        update:
          $set: {x: 'pong'}
        projection: {x: 1, _id: 0}
        sort: {_id: 1}
        collation: { locale: 'en_US', strength: 2 } # https://mongodb.com/docs/manual/reference/collation/#collation-document
    outcome:
      result: {x: 'ping'}
      collection:
        data:
          - {_id: 1, x: 11}
          - {_id: 2, x: 'pong'}
          - {_id: 3, x: 'pINg'}
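# findOneAndUpdate maps to Collection#find_one_and_update; the arrayFilters and
# collation fields travel through the :array_filters and :collation options. A
# minimal sketch (the `coll` handle is an illustrative assumption):
#
#   coll.find_one_and_update(
#     {},
#     { '$set' => { 'y.$[i].b' => 2 } },
#     array_filters: [{ 'i.b' => 3 }]
#   )
#   # returns the pre-image (the document before modification) by default,
#   # matching the "result" documents asserted above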
description: "FindOneAndUpdate when no documents match with upsert returning the document before modification" operation: name: findOneAndUpdate arguments: filter: {_id: 4} update: $inc: {x: 1} projection: {x: 1, _id: 0} # Omit the sort option as it has no effect when no documents # match and would only cause an inconsistent return value on # pre-3.0 servers when combined with returnDocument "before" # (see: SERVER-17650). upsert: true outcome: result: null collection: data: - {_id: 1, x: 11} - {_id: 2, x: 22} - {_id: 3, x: 33} - {_id: 4, x: 1} - description: "FindOneAndUpdate when no documents match returning the document after modification" operation: name: findOneAndUpdate arguments: filter: {_id: 4} update: $inc: {x: 1} projection: {x: 1, _id: 0} returnDocument: After sort: {x: 1} outcome: result: null collection: data: - {_id: 1, x: 11} - {_id: 2, x: 22} - {_id: 3, x: 33} - description: "FindOneAndUpdate when no documents match with upsert returning the document after modification" operation: name: findOneAndUpdate arguments: filter: {_id: 4} update: $inc: {x: 1} projection: {x: 1, _id: 0} returnDocument: After sort: {x: 1} upsert: true outcome: result: {x: 1} collection: data: - {_id: 1, x: 11} - {_id: 2, x: 22} - {_id: 3, x: 33} - {_id: 4, x: 1}mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud/write/insertMany.yml000066400000000000000000000056651505113246500261660ustar00rootroot00000000000000data: - {_id: 1, x: 11} tests: - description: "InsertMany with non-existing documents" operation: name: "insertMany" arguments: documents: - {_id: 2, x: 22} - {_id: 3, x: 33} options: { ordered: true } outcome: result: insertedIds: { 0: 2, 1: 3 } collection: data: - {_id: 1, x: 11} - {_id: 2, x: 22} - {_id: 3, x: 33} - description: "InsertMany continue-on-error behavior with unordered (preexisting duplicate key)" operation: name: "insertMany" arguments: documents: - { _id: 1, x: 11 } - { _id: 2, x: 22 } - { _id: 3, x: 33 } options: { ordered: false } outcome: error: true # Driver does not return a complete result in case of an error # Therefore, we cannot validate it. # result: # deletedCount: 0 # insertedCount: 2 # # Since the map of insertedIds is generated before execution it # # could indicate inserts that did not actually succeed. We omit # # this field rather than expect drivers to provide an accurate # # map filtered by write errors. # matchedCount: 0 # modifiedCount: 0 # upsertedCount: 0 # upsertedIds: { } collection: data: - { _id: 1, x: 11 } - { _id: 2, x: 22 } - { _id: 3, x: 33 } - description: "InsertMany continue-on-error behavior with unordered (duplicate key in requests)" operation: name: "insertMany" arguments: documents: - { _id: 2, x: 22 } - { _id: 2, x: 22 } - { _id: 3, x: 33 } options: { ordered: false } outcome: error: true # Driver does not return a complete result in case of an error # Therefore, we cannot validate it. # result: # deletedCount: 0 # insertedCount: 2 # # Since the map of insertedIds is generated before execution it # # could indicate inserts that did not actually succeed. We omit # # this field rather than expect drivers to provide an accurate # # map filtered by write errors. 
mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud/write/insertMany.yml

data:
  - {_id: 1, x: 11}
tests:
  - description: "InsertMany with non-existing documents"
    operation:
      name: "insertMany"
      arguments:
        documents:
          - {_id: 2, x: 22}
          - {_id: 3, x: 33}
        options: { ordered: true }
    outcome:
      result:
        insertedIds: { 0: 2, 1: 3 }
      collection:
        data:
          - {_id: 1, x: 11}
          - {_id: 2, x: 22}
          - {_id: 3, x: 33}
  - description: "InsertMany continue-on-error behavior with unordered (preexisting duplicate key)"
    operation:
      name: "insertMany"
      arguments:
        documents:
          - { _id: 1, x: 11 }
          - { _id: 2, x: 22 }
          - { _id: 3, x: 33 }
        options: { ordered: false }
    outcome:
      error: true
      # Driver does not return a complete result in case of an error
      # Therefore, we cannot validate it.
      # result:
      #   deletedCount: 0
      #   insertedCount: 2
      #   # Since the map of insertedIds is generated before execution it
      #   # could indicate inserts that did not actually succeed. We omit
      #   # this field rather than expect drivers to provide an accurate
      #   # map filtered by write errors.
      #   matchedCount: 0
      #   modifiedCount: 0
      #   upsertedCount: 0
      #   upsertedIds: { }
      collection:
        data:
          - { _id: 1, x: 11 }
          - { _id: 2, x: 22 }
          - { _id: 3, x: 33 }
  - description: "InsertMany continue-on-error behavior with unordered (duplicate key in requests)"
    operation:
      name: "insertMany"
      arguments:
        documents:
          - { _id: 2, x: 22 }
          - { _id: 2, x: 22 }
          - { _id: 3, x: 33 }
        options: { ordered: false }
    outcome:
      error: true
      # Driver does not return a complete result in case of an error
      # Therefore, we cannot validate it.
      # result:
      #   deletedCount: 0
      #   insertedCount: 2
      #   # Since the map of insertedIds is generated before execution it
      #   # could indicate inserts that did not actually succeed. We omit
      #   # this field rather than expect drivers to provide an accurate
      #   # map filtered by write errors.
      #   matchedCount: 0
      #   modifiedCount: 0
      #   upsertedCount: 0
      #   upsertedIds: { }
      collection:
        data:
          - { _id: 1, x: 11 }
          - { _id: 2, x: 22 }
          - { _id: 3, x: 33 }

mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud/write/insertOne.yml

data:
  - {_id: 1, x: 11}
tests:
  - description: "InsertOne with a non-existing document"
    operation:
      name: "insertOne"
      arguments:
        document: {_id: 2, x: 22}
    outcome:
      result:
        insertedId: 2
      collection:
        data:
          - {_id: 1, x: 11}
          - {_id: 2, x: 22}

mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud/write/replaceOne-collation.yml

data:
  - {_id: 1, x: 11}
  - {_id: 2, x: 'ping'}
minServerVersion: '3.4'
serverless: 'forbid'
tests:
  - description: "ReplaceOne when one document matches with collation"
    operation:
      name: "replaceOne"
      arguments:
        filter: {x: 'PING'}
        replacement: {_id: 2, x: 'pong'}
        collation: {locale: 'en_US', strength: 2} # https://mongodb.com/docs/manual/reference/collation/#collation-document
    outcome:
      result:
        matchedCount: 1
        modifiedCount: 1
        upsertedCount: 0
      collection:
        data:
          - {_id: 1, x: 11}
          - {_id: 2, x: 'pong'}
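# insertMany's continue-on-error behavior is observable through
# Collection#insert_many with ordered: false: the duplicate-key write fails,
# the remaining inserts are still applied, and the driver raises. A minimal
# sketch (the `coll` handle is an illustrative assumption):
#
#   begin
#     coll.insert_many(
#       [{ _id: 2, x: 22 }, { _id: 2, x: 22 }, { _id: 3, x: 33 }],
#       ordered: false
#     )
#   rescue Mongo::Error::BulkWriteError
#     # _id 3 was inserted despite the duplicate-key error on the second doc
#   end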
- description: "ReplaceOne when one document matches" operation: name: "replaceOne" arguments: filter: {_id: 1} replacement: {_id: 1, x: 111} outcome: result: matchedCount: 1 upsertedCount: 0 collection: data: - {_id: 1, x: 111} - {_id: 2, x: 22} - {_id: 3, x: 33} - description: "ReplaceOne when no documents match" operation: name: "replaceOne" arguments: filter: {_id: 4} replacement: {_id: 4, x: 1} outcome: result: matchedCount: 0 upsertedCount: 0 collection: data: - {_id: 1, x: 11} - {_id: 2, x: 22} - {_id: 3, x: 33} - description: "ReplaceOne with upsert when no documents match without an id specified" operation: name: "replaceOne" arguments: filter: {_id: 4} replacement: {x: 1} upsert: true outcome: result: matchedCount: 0 upsertedCount: 1 # Can't verify upsertedId or collection data because server versions # before 2.6 do not take the _id from the filter document during an # upsert (see: SERVER-5289) - description: "ReplaceOne with upsert when no documents match with an id specified" operation: name: "replaceOne" arguments: filter: {_id: 4} replacement: {_id: 4, x: 1} upsert: true outcome: result: matchedCount: 0 upsertedCount: 1 upsertedId: 4 collection: data: - {_id: 1, x: 11} - {_id: 2, x: 22} - {_id: 3, x: 33} - {_id: 4, x: 1} mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud/write/replaceOne-upsert.yml000066400000000000000000000025701505113246500274220ustar00rootroot00000000000000data: - {_id: 1, x: 11} - {_id: 2, x: 22} - {_id: 3, x: 33} minServerVersion: '2.6' # See SERVER-5289 for why the collection data is only checked for server versions >= 2.6 tests: - description: "ReplaceOne with upsert when no documents match without an id specified" operation: name: "replaceOne" arguments: filter: {_id: 4} replacement: {x: 1} upsert: true outcome: result: matchedCount: 0 modifiedCount: 0 upsertedId: 4 collection: data: - {_id: 1, x: 11} - {_id: 2, x: 22} - {_id: 3, x: 33} - {_id: 4, x: 1} - description: "ReplaceOne with upsert when no documents match with an id specified" operation: name: "replaceOne" arguments: filter: {_id: 4} replacement: {_id: 4, x: 1} upsert: true outcome: result: matchedCount: 0 modifiedCount: 0 upsertedId: 4 collection: data: - {_id: 1, x: 11} - {_id: 2, x: 22} - {_id: 3, x: 33} - {_id: 4, x: 1} mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud/write/replaceOne.yml000066400000000000000000000055241505113246500261040ustar00rootroot00000000000000data: - {_id: 1, x: 11} - {_id: 2, x: 22} - {_id: 3, x: 33} minServerVersion: '2.6' tests: - description: "ReplaceOne when many documents match" operation: name: "replaceOne" arguments: filter: _id: {$gt: 1} replacement: {x: 111} outcome: result: matchedCount: 1 modifiedCount: 1 upsertedCount: 0 # Can't verify collection data because we don't have a way of # knowing which document gets updated. 
- description: "ReplaceOne when one document matches" operation: name: "replaceOne" arguments: filter: {_id: 1} replacement: {_id: 1, x: 111} outcome: result: matchedCount: 1 modifiedCount: 1 upsertedCount: 0 collection: data: - {_id: 1, x: 111} - {_id: 2, x: 22} - {_id: 3, x: 33} - description: "ReplaceOne when no documents match" operation: name: "replaceOne" arguments: filter: {_id: 4} replacement: {_id: 4, x: 1} outcome: result: matchedCount: 0 modifiedCount: 0 upsertedCount: 0 collection: data: - {_id: 1, x: 11} - {_id: 2, x: 22} - {_id: 3, x: 33} - description: "ReplaceOne with upsert when no documents match without an id specified" operation: name: "replaceOne" arguments: filter: {_id: 4} replacement: {x: 1} upsert: true outcome: result: matchedCount: 0 modifiedCount: 0 upsertedCount: 1 upsertedId: 4 collection: data: - {_id: 1, x: 11} - {_id: 2, x: 22} - {_id: 3, x: 33} - {_id: 4, x: 1} - description: "ReplaceOne with upsert when no documents match with an id specified" operation: name: "replaceOne" arguments: filter: {_id: 4} replacement: {_id: 4, x: 1} upsert: true outcome: result: matchedCount: 0 modifiedCount: 0 upsertedCount: 1 upsertedId: 4 collection: data: - {_id: 1, x: 11} - {_id: 2, x: 22} - {_id: 3, x: 33} - {_id: 4, x: 1} mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud/write/updateMany-arrayFilters.yml000066400000000000000000000035421505113246500306010ustar00rootroot00000000000000data: - {_id: 1, y: [{b: 3}, {b: 1}]} - {_id: 2, y: [{b: 0}, {b: 1}]} minServerVersion: '3.5.6' tests: - description: "UpdateMany when no documents match arrayFilters" operation: name: "updateMany" arguments: filter: {} update: $set: {"y.$[i].b": 2} arrayFilters: - {i.b: 4} outcome: result: matchedCount: 2 modifiedCount: 0 upsertedCount: 0 collection: data: - {_id: 1, y: [{b: 3}, {b: 1}]} - {_id: 2, y: [{b: 0}, {b: 1}]} - description: "UpdateMany when one document matches arrayFilters" operation: name: "updateMany" arguments: filter: {} update: $set: {"y.$[i].b": 2} arrayFilters: - {i.b: 3} outcome: result: matchedCount: 2 modifiedCount: 1 upsertedCount: 0 collection: data: - {_id: 1, y: [{b: 2}, {b: 1}]} - {_id: 2, y: [{b: 0}, {b: 1}]} - description: "UpdateMany when multiple documents match arrayFilters" operation: name: "updateMany" arguments: filter: {} update: $set: {"y.$[i].b": 2} arrayFilters: - {i.b: 1} outcome: result: matchedCount: 2 modifiedCount: 2 upsertedCount: 0 collection: data: - {_id: 1, y: [{b: 3}, {b: 2}]} - {_id: 2, y: [{b: 0}, {b: 2}]} mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud/write/updateMany-collation.yml000066400000000000000000000015111505113246500301100ustar00rootroot00000000000000data: - {_id: 1, x: 11} - {_id: 2, x: 'ping'} - {_id: 3, x: 'pINg'} minServerVersion: '3.4' serverless: 'forbid' tests: - description: "UpdateMany when many documents match with collation" operation: name: "updateMany" arguments: filter: x: 'ping' update: $set: {x: 'pong'} collation: { locale: 'en_US', strength: 2 } # https://mongodb.com/docs/manual/reference/collation/#collation-document outcome: result: matchedCount: 2 modifiedCount: 2 upsertedCount: 0 collection: data: - {_id: 1, x: 11} - {_id: 2, x: 'pong'} - {_id: 3, x: 'pong'} mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud/write/updateMany-pre_2.6.yml000066400000000000000000000046021505113246500273030ustar00rootroot00000000000000data: - {_id: 1, x: 11} - {_id: 2, x: 22} - {_id: 3, x: 33} # This file includes the same test cases as updateMany.yml with some omissions # for pre-2.6 servers. 
We cannot verify the update result's modifiedCount as it # is not available with legacy write operations and getLastError. maxServerVersion: '2.4.99' tests: - description: "UpdateMany when many documents match" operation: name: "updateMany" arguments: filter: _id: {$gt: 1} update: $inc: {x: 1} outcome: result: matchedCount: 2 upsertedCount: 0 collection: data: - {_id: 1, x: 11} - {_id: 2, x: 23} - {_id: 3, x: 34} - description: "UpdateMany when one document matches" operation: name: "updateMany" arguments: filter: {_id: 1} update: $inc: {x: 1} outcome: result: matchedCount: 1 upsertedCount: 0 collection: data: - {_id: 1, x: 12} - {_id: 2, x: 22} - {_id: 3, x: 33} - description: "UpdateMany when no documents match" operation: name: "updateMany" arguments: filter: {_id: 4} update: $inc: {x: 1} outcome: result: matchedCount: 0 upsertedCount: 0 collection: data: - {_id: 1, x: 11} - {_id: 2, x: 22} - {_id: 3, x: 33} - description: "UpdateMany with upsert when no documents match" operation: name: "updateMany" arguments: filter: {_id: 4} update: $inc: {x: 1} upsert: true outcome: result: matchedCount: 0 upsertedCount: 1 upsertedId: 4 collection: data: - {_id: 1, x: 11} - {_id: 2, x: 22} - {_id: 3, x: 33} - {_id: 4, x: 1} mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud/write/updateMany.yml000066400000000000000000000044421505113246500261340ustar00rootroot00000000000000data: - {_id: 1, x: 11} - {_id: 2, x: 22} - {_id: 3, x: 33} minServerVersion: '2.6' tests: - description: "UpdateMany when many documents match" operation: name: "updateMany" arguments: filter: _id: {$gt: 1} update: $inc: {x: 1} outcome: result: matchedCount: 2 modifiedCount: 2 upsertedCount: 0 collection: data: - {_id: 1, x: 11} - {_id: 2, x: 23} - {_id: 3, x: 34} - description: "UpdateMany when one document matches" operation: name: "updateMany" arguments: filter: {_id: 1} update: $inc: {x: 1} outcome: result: matchedCount: 1 modifiedCount: 1 upsertedCount: 0 collection: data: - {_id: 1, x: 12} - {_id: 2, x: 22} - {_id: 3, x: 33} - description: "UpdateMany when no documents match" operation: name: "updateMany" arguments: filter: {_id: 4} update: $inc: {x: 1} outcome: result: matchedCount: 0 modifiedCount: 0 upsertedCount: 0 collection: data: - {_id: 1, x: 11} - {_id: 2, x: 22} - {_id: 3, x: 33} - description: "UpdateMany with upsert when no documents match" operation: name: "updateMany" arguments: filter: {_id: 4} update: $inc: {x: 1} upsert: true outcome: result: matchedCount: 0 modifiedCount: 0 upsertedCount: 1 upsertedId: 4 collection: data: - {_id: 1, x: 11} - {_id: 2, x: 22} - {_id: 3, x: 33} - {_id: 4, x: 1} mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud/write/updateOne-arrayFilters.yml000066400000000000000000000066631505113246500304250ustar00rootroot00000000000000data: - {_id: 1, y: [{b: 3}, {b: 1}]} - {_id: 2, y: [{b: 0}, {b: 1}]} - {_id: 3, y: [{b: 5, c: [{d: 2}, {d: 1}] }]} minServerVersion: '3.5.6' tests: - description: "UpdateOne when no document matches arrayFilters" operation: name: "updateOne" arguments: filter: {} update: $set: {"y.$[i].b": 2} arrayFilters: - {i.b: 4} outcome: result: matchedCount: 1 modifiedCount: 0 upsertedCount: 0 collection: data: - {_id: 1, y: [{b: 3}, {b: 1}]} - {_id: 2, y: [{b: 0}, {b: 1}]} - {_id: 3, y: [{b: 5, c: [{d: 2}, {d: 1}] }]} - description: "UpdateOne when one document matches arrayFilters" operation: name: "updateOne" arguments: filter: {} update: $set: {"y.$[i].b": 2} arrayFilters: - {i.b: 3} outcome: result: matchedCount: 1 modifiedCount: 1 upsertedCount: 0 collection: 
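# updateMany maps to Collection#update_many; :upsert and :collation are passed
# as options. A minimal sketch (the `coll` handle is an illustrative
# assumption):
#
#   result = coll.update_many(
#     { x: { '$lte' => 22 } },
#     { '$inc' => { x: 1 } }
#   )
#   result.matched_count  # => 2
#   result.modified_count # => 2 (not reported by pre-2.6 legacy writes,
#                         #    hence the separate *-pre_2.6.yml file above)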
mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud/write/updateOne-arrayFilters.yml

data:
  - {_id: 1, y: [{b: 3}, {b: 1}]}
  - {_id: 2, y: [{b: 0}, {b: 1}]}
  - {_id: 3, y: [{b: 5, c: [{d: 2}, {d: 1}] }]}
minServerVersion: '3.5.6'
tests:
  - description: "UpdateOne when no document matches arrayFilters"
    operation:
      name: "updateOne"
      arguments:
        filter: {}
        update:
          $set: {"y.$[i].b": 2}
        arrayFilters:
          - {i.b: 4}
    outcome:
      result:
        matchedCount: 1
        modifiedCount: 0
        upsertedCount: 0
      collection:
        data:
          - {_id: 1, y: [{b: 3}, {b: 1}]}
          - {_id: 2, y: [{b: 0}, {b: 1}]}
          - {_id: 3, y: [{b: 5, c: [{d: 2}, {d: 1}] }]}
  - description: "UpdateOne when one document matches arrayFilters"
    operation:
      name: "updateOne"
      arguments:
        filter: {}
        update:
          $set: {"y.$[i].b": 2}
        arrayFilters:
          - {i.b: 3}
    outcome:
      result:
        matchedCount: 1
        modifiedCount: 1
        upsertedCount: 0
      collection:
        data:
          - {_id: 1, y: [{b: 2}, {b: 1}]}
          - {_id: 2, y: [{b: 0}, {b: 1}]}
          - {_id: 3, y: [{b: 5, c: [{d: 2}, {d: 1}] }]}
  - description: "UpdateOne when multiple documents match arrayFilters"
    operation:
      name: "updateOne"
      arguments:
        filter: {}
        update:
          $set: {"y.$[i].b": 2}
        arrayFilters:
          - {i.b: 1}
    outcome:
      result:
        matchedCount: 1
        modifiedCount: 1
        upsertedCount: 0
      collection:
        data:
          - {_id: 1, y: [{b: 3}, {b: 2}]}
          - {_id: 2, y: [{b: 0}, {b: 1}]}
          - {_id: 3, y: [{b: 5, c: [{d: 2}, {d: 1}] }]}
  - description: "UpdateOne when no documents match multiple arrayFilters"
    operation:
      name: "updateOne"
      arguments:
        filter: {_id: 3}
        update:
          $set: {"y.$[i].c.$[j].d": 0}
        arrayFilters:
          - {i.b: 5}
          - {j.d: 3}
    outcome:
      result:
        matchedCount: 1
        modifiedCount: 0
        upsertedCount: 0
      collection:
        data:
          - {_id: 1, y: [{b: 3}, {b: 1}]}
          - {_id: 2, y: [{b: 0}, {b: 1}]}
          - {_id: 3, y: [{b: 5, c: [{d: 2}, {d: 1}] }]}
  - description: "UpdateOne when one document matches multiple arrayFilters"
    operation:
      name: "updateOne"
      arguments:
        filter: {_id: 3}
        update:
          $set: {"y.$[i].c.$[j].d": 0}
        arrayFilters:
          - {i.b: 5}
          - {j.d: 1}
    outcome:
      result:
        matchedCount: 1
        modifiedCount: 1
        upsertedCount: 0
      collection:
        data:
          - {_id: 1, y: [{b: 3}, {b: 1}]}
          - {_id: 2, y: [{b: 0}, {b: 1}]}
          - {_id: 3, y: [{b: 5, c: [{d: 2}, {d: 0}] }]}

mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud/write/updateOne-collation.yml

data:
  - {_id: 1, x: 11}
  - {_id: 2, x: 'ping'}
minServerVersion: '3.4'
serverless: 'forbid'
tests:
  - description: "UpdateOne when one document matches with collation"
    operation:
      name: "updateOne"
      arguments:
        filter: {x: 'PING'}
        update:
          $set: {x: 'pong'}
        collation: { locale: 'en_US', strength: 2} # https://mongodb.com/docs/manual/reference/collation/#collation-document
    outcome:
      result:
        matchedCount: 1
        modifiedCount: 1
        upsertedCount: 0
      collection:
        data:
          - {_id: 1, x: 11}
          - {_id: 2, x: 'pong'}

mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud/write/updateOne-pre_2.6.yml

data:
  - {_id: 1, x: 11}
  - {_id: 2, x: 22}
  - {_id: 3, x: 33}
# This file includes the same test cases as updateOne.yml with some omissions
# for pre-2.6 servers. We cannot verify the update result's modifiedCount as it
# is not available with legacy write operations and getLastError.
maxServerVersion: '2.4.99'
tests:
  - description: "UpdateOne when many documents match"
    operation:
      name: "updateOne"
      arguments:
        filter:
          _id: {$gt: 1}
        update:
          $inc: {x: 1}
    outcome:
      result:
        matchedCount: 1
        upsertedCount: 0
      # Can't verify collection data because we don't have a way of
      # knowing which document gets updated.
  - description: "UpdateOne when one document matches"
    operation:
      name: "updateOne"
      arguments:
        filter: {_id: 1}
        update:
          $inc: {x: 1}
    outcome:
      result:
        matchedCount: 1
        upsertedCount: 0
      collection:
        data:
          - {_id: 1, x: 12}
          - {_id: 2, x: 22}
          - {_id: 3, x: 33}
  - description: "UpdateOne when no documents match"
    operation:
      name: "updateOne"
      arguments:
        filter: {_id: 4}
        update:
          $inc: {x: 1}
    outcome:
      result:
        matchedCount: 0
        upsertedCount: 0
      collection:
        data:
          - {_id: 1, x: 11}
          - {_id: 2, x: 22}
          - {_id: 3, x: 33}
  - description: "UpdateOne with upsert when no documents match"
    operation:
      name: "updateOne"
      arguments:
        filter: {_id: 4}
        update:
          $inc: {x: 1}
        upsert: true
    outcome:
      result:
        matchedCount: 0
        upsertedCount: 1
        upsertedId: 4
      collection:
        data:
          - {_id: 1, x: 11}
          - {_id: 2, x: 22}
          - {_id: 3, x: 33}
          - {_id: 4, x: 1}

mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud/write/updateOne.yml

data:
  - {_id: 1, x: 11}
  - {_id: 2, x: 22}
  - {_id: 3, x: 33}
minServerVersion: '2.6'
tests:
  - description: "UpdateOne when many documents match"
    operation:
      name: "updateOne"
      arguments:
        filter:
          _id: {$gt: 1}
        update:
          $inc: {x: 1}
    outcome:
      result:
        matchedCount: 1
        modifiedCount: 1
        upsertedCount: 0
      # Can't verify collection data because we don't have a way of
      # knowing which document gets updated.
  - description: "UpdateOne when one document matches"
    operation:
      name: "updateOne"
      arguments:
        filter: {_id: 1}
        update:
          $inc: {x: 1}
    outcome:
      result:
        matchedCount: 1
        modifiedCount: 1
        upsertedCount: 0
      collection:
        data:
          - {_id: 1, x: 12}
          - {_id: 2, x: 22}
          - {_id: 3, x: 33}
  - description: "UpdateOne when no documents match"
    operation:
      name: "updateOne"
      arguments:
        filter: {_id: 4}
        update:
          $inc: {x: 1}
    outcome:
      result:
        matchedCount: 0
        modifiedCount: 0
        upsertedCount: 0
      collection:
        data:
          - {_id: 1, x: 11}
          - {_id: 2, x: 22}
          - {_id: 3, x: 33}
  - description: "UpdateOne with upsert when no documents match"
    operation:
      name: "updateOne"
      arguments:
        filter: {_id: 4}
        update:
          $inc: {x: 1}
        upsert: true
    outcome:
      result:
        matchedCount: 0
        modifiedCount: 0
        upsertedCount: 1
        upsertedId: 4
      collection:
        data:
          - {_id: 1, x: 11}
          - {_id: 2, x: 22}
          - {_id: 3, x: 33}
          - {_id: 4, x: 1}

mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/
mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/aggregate-allowdiskuse.yml

description: aggregate-allowdiskuse
schemaVersion: '1.0'
createEntities:
  - client:
      id: &client0 client0
      observeEvents: [ commandStartedEvent ]
  - database:
      id: &database0 database0
      client: *client0
      databaseName: &database0Name crud-tests
  - collection:
      id: &collection0 collection0
      database: *database0
      collectionName: &collection0Name coll0
initialData:
  - collectionName: *collection0Name
    databaseName: *database0Name
    documents: []
tests:
  - description: 'Aggregate does not send allowDiskUse when value is not specified'
    operations:
      - object: *collection0
        name: aggregate
        arguments:
          pipeline: &pipeline [ { $match: {} } ]
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                aggregate: *collection0Name
                pipeline: *pipeline
                allowDiskUse: { $$exists: false }
              commandName: aggregate
              databaseName: *database0Name
  - description: 'Aggregate sends allowDiskUse false when false is specified'
    operations:
      - object: *collection0
        name: aggregate
        arguments:
          pipeline: *pipeline
          allowDiskUse: false
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                aggregate: *collection0Name
                pipeline: *pipeline
                allowDiskUse: false
              commandName: aggregate
              databaseName: *database0Name
  - description: 'Aggregate sends allowDiskUse true when true is specified'
    operations:
      - object: *collection0
        name: aggregate
        arguments:
          pipeline: *pipeline
          allowDiskUse: true
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                aggregate: *collection0Name
                pipeline: *pipeline
                allowDiskUse: true
              commandName: aggregate
              databaseName: *database0Name
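# The allowDiskUse expectations above correspond to the Ruby driver's
# :allow_disk_use aggregation option; when the option is omitted the driver
# must not send the field at all. A minimal sketch (the `coll` handle is an
# illustrative assumption):
#
#   coll.aggregate([{ '$match' => {} }]).to_a                        # field omitted
#   coll.aggregate([{ '$match' => {} }], allow_disk_use: true).to_a  # sends true
#   coll.aggregate([{ '$match' => {} }], allow_disk_use: false).to_a # sends false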
mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/aggregate-let.yml

description: "aggregate-let"
schemaVersion: "1.4"
createEntities:
  - client:
      id: &client0 client0
      observeEvents: [ commandStartedEvent ]
  - database:
      id: &database0 database0
      client: *client0
      databaseName: &database0Name crud-tests
  - collection:
      id: &collection0 collection0
      database: *database0
      collectionName: &collection0Name coll0
  - collection:
      id: &collection1 collection1
      database: *database0
      collectionName: &collection1Name coll1
initialData: &initialData
  - collectionName: *collection0Name
    databaseName: *database0Name
    documents:
      - { _id: 1 }
  - collectionName: *collection1Name
    databaseName: *database0Name
    documents: [ ]
tests:
  - description: "Aggregate with let option"
    runOnRequirements:
      - minServerVersion: "5.0"
    operations:
      - name: aggregate
        object: *collection0
        arguments:
          pipeline: &pipeline0
            # $match takes a query expression, so $expr is necessary to utilize
            # an aggregate expression context and access "let" variables.
            - $match: { $expr: { $eq: ["$_id", "$$id"] } }
            - $project: { _id: 0, x: "$$x", y: "$$y", rand: "$$rand" }
          # Values in "let" must be constant or closed expressions that do not
          # depend on document values. This test demonstrates a basic constant
          # value, a value wrapped with $literal (to avoid expression parsing),
          # and a closed expression (e.g. $rand).
          let: &let0
            id: 1
            x: foo
            y: { $literal: "$bar" }
            rand: { $rand: {} }
        expectResult:
          - { x: "foo", y: "$bar", rand: { $$type: "double" } }
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                aggregate: *collection0Name
                pipeline: *pipeline0
                let: *let0
  - description: "Aggregate with let option unsupported (server-side error)"
    runOnRequirements:
      - minServerVersion: "2.6.0"
        maxServerVersion: "4.4.99"
    operations:
      - name: aggregate
        object: *collection0
        arguments:
          pipeline: &pipeline1
            - $match: { _id: 1 }
          let: &let1
            x: foo
        expectError:
          # Older server versions may not report an error code, but the error
          # message is consistent between 2.6.x and 4.4.x server versions.
          errorContains: "unrecognized field 'let'"
          isClientError: false
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                aggregate: *collection0Name
                pipeline: *pipeline1
                let: *let1
  - description: "Aggregate to collection with let option"
    runOnRequirements:
      - minServerVersion: "5.0"
        serverless: "forbid"
    operations:
      - name: aggregate
        object: *collection0
        arguments:
          pipeline: &pipeline2
            - $match: { $expr: { $eq: ["$_id", "$$id"] } }
            - $project: { _id: 1 }
            - $out: *collection1Name
          let: &let2
            id: 1
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                aggregate: *collection0Name
                pipeline: *pipeline2
                let: *let2
    outcome:
      - collectionName: *collection1Name
        databaseName: *database0Name
        documents:
          - { _id: 1 }
  - description: "Aggregate to collection with let option unsupported (server-side error)"
    runOnRequirements:
      - minServerVersion: "2.6.0"
        maxServerVersion: "4.4.99"
    operations:
      - name: aggregate
        object: *collection0
        arguments:
          pipeline: *pipeline2
          let: *let2
        expectError:
          errorContains: "unrecognized field 'let'"
          isClientError: false
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                aggregate: *collection0Name
                pipeline: *pipeline2
                let: *let2

mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/aggregate-merge-errorResponse.yml

description: "aggregate-merge-errorResponse"
schemaVersion: "1.12"
createEntities:
  - client:
      id: &client0 client0
  - database:
      id: &database0 database0
      client: *client0
      databaseName: &database0Name crud-tests
  - collection:
      id: &collection0 collection0
      database: *database0
      collectionName: &collection0Name test
initialData: &initialData
  - collectionName: *collection0Name
    databaseName: *database0Name
    documents:
      - { _id: 1, x: 1 }
      - { _id: 2, x: 1 }
tests:
  - description: "aggregate $merge DuplicateKey error is accessible"
    runOnRequirements:
      - minServerVersion: "5.1" # SERVER-59097
        # Exclude sharded topologies since the aggregate command fails with
        # IllegalOperation(20) instead of DuplicateKey(11000)
        topologies: [ single, replicaset ]
    operations:
      - name: aggregate
        object: *database0
        arguments:
          pipeline:
            - { $documents: [ { _id: 2, x: 1 } ] }
            - { $merge: { into: *collection0Name, whenMatched: "fail" } }
        expectError:
          errorCode: 11000 # DuplicateKey
          errorResponse:
            keyPattern: { _id: 1 }
            keyValue: { _id: 2 }
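# The let variables exercised above are passed through the driver's aggregate
# helper. A minimal sketch (the `coll` handle is an illustrative assumption;
# the :let option requires a driver version implementing the CRUD "let"
# support and MongoDB 5.0+ on the server, as the unsupported-server tests
# above demonstrate):
#
#   coll.aggregate(
#     [
#       { '$match' => { '$expr' => { '$eq' => ['$_id', '$$id'] } } },
#       { '$project' => { _id: 0, x: '$$x' } }
#     ],
#     let: { id: 1, x: 'foo' }
#   ).to_a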
mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/aggregate-merge.yml

# This file was created automatically using mongodb-spec-converter.
# Please review the generated file, then remove this notice.
description: aggregate-merge
schemaVersion: '1.0'
runOnRequirements:
  - minServerVersion: 4.1.11
createEntities:
  - client:
      id: &client0 client0
      observeEvents:
        - commandStartedEvent
  - database:
      id: &database0 database0
      client: client0
      databaseName: &database_name crud-v2
  - collection:
      id: &collection0 collection0
      database: database0
      collectionName: &collection_name test_aggregate_merge
  - collection:
      id: &collection_readConcern_majority collection_readConcern_majority
      database: database0
      collectionName: *collection_name
      collectionOptions:
        readConcern: { level: "majority" }
  - collection:
      id: &collection_readConcern_local collection_readConcern_local
      database: database0
      collectionName: *collection_name
      collectionOptions:
        readConcern: { level: "local" }
  - collection:
      id: &collection_readConcern_available collection_readConcern_available
      database: database0
      collectionName: *collection_name
      collectionOptions:
        readConcern: { level: "available" }
initialData:
  - collectionName: *collection_name
    databaseName: *database_name
    documents:
      - _id: 1
        x: 11
      - _id: 2
        x: 22
      - _id: 3
        x: 33
tests:
  - description: 'Aggregate with $merge'
    operations:
      - object: *collection0
        name: aggregate
        arguments: &arguments
          pipeline: &pipeline
            - $sort:
                x: 1
            - $match:
                _id:
                  $gt: 1
            - $merge:
                into: &output_collection other_test_collection
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                aggregate: *collection_name
                pipeline: *pipeline
    outcome: &outcome
      - collectionName: *output_collection
        databaseName: *database_name
        documents:
          - _id: 2
            x: 22
          - _id: 3
            x: 33
  - description: 'Aggregate with $merge and batch size of 0'
    operations:
      - object: *collection0
        name: aggregate
        arguments:
          pipeline: &pipeline
            - $sort:
                x: 1
            - $match:
                _id:
                  $gt: 1
            - $merge:
                into: &output_collection other_test_collection
          batchSize: 0
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                aggregate: *collection_name
                pipeline: *pipeline
                cursor: { }
    outcome: *outcome
  - description: 'Aggregate with $merge and majority readConcern'
    operations:
      - object: *collection_readConcern_majority
        name: aggregate
        arguments: *arguments
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                aggregate: *collection_name
                pipeline: *pipeline
                readConcern:
                  level: majority
    outcome: *outcome
  - description: 'Aggregate with $merge and local readConcern'
    operations:
      - object: *collection_readConcern_local
        name: aggregate
        arguments: *arguments
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                aggregate: *collection_name
                pipeline: *pipeline
                readConcern:
                  level: local
    outcome: *outcome
  - description: 'Aggregate with $merge and available readConcern'
    operations:
      - object: *collection_readConcern_available
        name: aggregate
        arguments: *arguments
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                aggregate: *collection_name
                pipeline: *pipeline
                readConcern:
                  level: available
    outcome: *outcome

mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/aggregate-out-readConcern.yml

# This file was created automatically using mongodb-spec-converter.
# Please review the generated file, then remove this notice.
description: aggregate-out-readConcern
schemaVersion: '1.4'
runOnRequirements:
  - minServerVersion: 4.1.0
    topologies:
      - replicaset
      - sharded
    serverless: "forbid"
createEntities:
  - client:
      id: &client0 client0
      observeEvents:
        - commandStartedEvent
  - database:
      id: &database0 database0
      client: client0
      databaseName: &database_name crud-v2
  - collection:
      id: &collection0 collection0
      database: database0
      collectionName: &collection_name test_aggregate_out_readconcern
  - collection:
      id: &collection_readConcern_majority collection_readConcern_majority
      database: database0
      collectionName: *collection_name
      collectionOptions:
        readConcern: { level: "majority" }
  - collection:
      id: &collection_readConcern_local collection_readConcern_local
      database: database0
      collectionName: *collection_name
      collectionOptions:
        readConcern: { level: "local" }
  - collection:
      id: &collection_readConcern_available collection_readConcern_available
      database: database0
      collectionName: *collection_name
      collectionOptions:
        readConcern: { level: "available" }
  - collection:
      id: &collection_readConcern_linearizable collection_readConcern_linearizable
      database: database0
      collectionName: *collection_name
      collectionOptions:
        readConcern: { level: "linearizable" }
initialData:
  - collectionName: *collection_name
    databaseName: *database_name
    documents:
      - _id: 1
        x: 11
      - _id: 2
        x: 22
      - _id: 3
        x: 33
tests:
  - description: 'readConcern majority with out stage'
    operations:
      - object: *collection_readConcern_majority
        name: aggregate
        arguments: &arguments
          pipeline:
            - $sort:
                x: 1
            - $match:
                _id:
                  $gt: 1
            - $out: &output_collection other_test_collection
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                aggregate: *collection_name
                pipeline: &pipeline
                  - { $sort: { x: 1 } }
                  - { $match: { _id: { $gt: 1 } } }
                  - { $out: other_test_collection }
                readConcern:
                  level: majority
    outcome: &outcome
      - collectionName: *output_collection
        databaseName: *database_name
        documents:
          - _id: 2
            x: 22
          - _id: 3
            x: 33
  - description: 'readConcern local with out stage'
    operations:
      - object: *collection_readConcern_local
        name: aggregate
        arguments: *arguments
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                aggregate: *collection_name
                pipeline: *pipeline
                readConcern:
                  level: local
    outcome: *outcome
  - description: 'readConcern available with out stage'
    operations:
      - object: *collection_readConcern_available
        name: aggregate
        arguments: *arguments
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                aggregate: *collection_name
                pipeline: *pipeline
                readConcern:
                  level: available
    outcome: *outcome
  - description: 'readConcern linearizable with out stage'
    operations:
      - object: *collection_readConcern_linearizable
        name: aggregate
        arguments: *arguments
        expectError:
          isError: true
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                aggregate: *collection_name
                pipeline: *pipeline
                readConcern:
                  level: linearizable
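# The per-collection read concerns used above are modeled in the Ruby driver
# with Collection#with, which returns a new collection handle sharing the
# underlying client. A minimal sketch (the `coll` handle is an illustrative
# assumption):
#
#   majority = coll.with(read_concern: { level: :majority })
#   majority.aggregate([
#     { '$sort' => { x: 1 } },
#     { '$match' => { _id: { '$gt' => 1 } } },
#     { '$out' => 'other_test_collection' }
#   ]).to_a # driver sends readConcern: { level: "majority" } with the command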
mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/aggregate-write-readPreference.yml
description: aggregate-write-readPreference
schemaVersion: '1.4'
runOnRequirements:
  # 3.6+ non-standalone is needed to utilize $readPreference in OP_MSG
  - minServerVersion: "3.6"
    # https://jira.mongodb.org/browse/DRIVERS-291
    # SERVER-90047: failures against latest server necessitate capping the
    # server version here as well; "7.99" is the stricter of the two bounds.
    maxServerVersion: "7.99"
    topologies: [ replicaset, sharded, load-balanced ]
_yamlAnchors:
  readConcern: &readConcern
    level: &readConcernLevel "local"
  writeConcern: &writeConcern
    w: &writeConcernW 1
createEntities:
  - client:
      id: &client0 client0
      observeEvents: [ commandStartedEvent ]
      # Used to test that read and write concerns are still inherited
      uriOptions:
        readConcernLevel: *readConcernLevel
        w: *writeConcernW
  - database:
      id: &database0 database0
      client: *client0
      databaseName: &database0Name db0
  - collection:
      id: &collection0 collection0
      database: *database0
      collectionName: &collection0Name coll0
      collectionOptions:
        readPreference: &readPreference
          # secondaryPreferred is specified for compatibility with clusters that
          # may not have a secondary (e.g. each shard is only a primary).
          mode: secondaryPreferred
          # maxStalenessSeconds is specified to ensure that drivers forward the
          # read preference to mongos or a load balancer. That would not be the
          # case with only secondaryPreferred.
          maxStalenessSeconds: 600
  - collection:
      id: &collection1 collection1
      database: *database0
      collectionName: &collection1Name coll1
initialData:
  - collectionName: *collection0Name
    databaseName: *database0Name
    documents:
      - { _id: 1, x: 11 }
      - { _id: 2, x: 22 }
      - { _id: 3, x: 33 }
  - collectionName: *collection1Name
    databaseName: *database0Name
    documents: []
tests:
  - description: "Aggregate with $out includes read preference for 5.0+ server"
    runOnRequirements:
      - minServerVersion: "5.0"
        # https://jira.mongodb.org/browse/RUBY-3539
        maxServerVersion: "7.99"
        serverless: "forbid"
    operations:
      - object: *collection0
        name: aggregate
        arguments:
          pipeline: &outPipeline
            - { $match: { _id: { $gt: 1 } } }
            - { $sort: { x: 1 } }
            - { $out: *collection1Name }
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                aggregate: *collection0Name
                pipeline: *outPipeline
                $readPreference: *readPreference
                readConcern: *readConcern
                writeConcern: *writeConcern
    outcome: &outcome
      - collectionName: *collection1Name
        databaseName: *database0Name
        documents:
          - { _id: 2, x: 22 }
          - { _id: 3, x: 33 }
  - description: "Aggregate with $out omits read preference for pre-5.0 server"
    runOnRequirements:
      # MongoDB 4.2 introduced support for read concerns and write stages.
      # Pre-4.2 servers may allow a "local" read concern anyway, but some
      # drivers may avoid inheriting a client-level read concern for pre-4.2.
- minServerVersion: "4.2" maxServerVersion: "4.4.99" serverless: "forbid" operations: - object: *collection0 name: aggregate arguments: pipeline: *outPipeline expectEvents: - client: *client0 events: - commandStartedEvent: command: aggregate: *collection0Name pipeline: *outPipeline $readPreference: { $$exists: false } readConcern: *readConcern writeConcern: *writeConcern outcome: *outcome - description: "Aggregate with $merge includes read preference for 5.0+ server" runOnRequirements: - minServerVersion: "5.0" operations: - object: *collection0 name: aggregate arguments: pipeline: &mergePipeline - { $match: { _id: { $gt: 1 } } } - { $sort: { x: 1 } } - { $merge: { into: *collection1Name } } expectEvents: - client: *client0 events: - commandStartedEvent: command: aggregate: *collection0Name pipeline: *mergePipeline $readPreference: *readPreference readConcern: *readConcern writeConcern: *writeConcern outcome: *outcome - description: "Aggregate with $merge omits read preference for pre-5.0 server" runOnRequirements: - minServerVersion: "4.2" maxServerVersion: "4.4.99" operations: - object: *collection0 name: aggregate arguments: pipeline: *mergePipeline expectEvents: - client: *client0 events: - commandStartedEvent: command: aggregate: *collection0Name pipeline: *mergePipeline $readPreference: { $$exists: false } readConcern: *readConcern writeConcern: *writeConcern outcome: *outcome mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/aggregate.yml000066400000000000000000000151631505113246500263260ustar00rootroot00000000000000description: "aggregate" schemaVersion: "1.0" createEntities: - client: id: &client0 client0 useMultipleMongoses: true # ensure cursors pin to a single server observeEvents: [ commandStartedEvent ] - database: id: &database0 database0 client: *client0 databaseName: &database0Name aggregate-tests - collection: id: &collection0 collection0 database: *database0 collectionName: &collection0Name coll0 initialData: - collectionName: *collection0Name databaseName: *database0Name documents: - { _id: 1, x: 11 } - { _id: 2, x: 22 } - { _id: 3, x: 33 } - { _id: 4, x: 44 } - { _id: 5, x: 55 } - { _id: 6, x: 66 } tests: - description: "aggregate with multiple batches works" operations: - name: aggregate arguments: pipeline: [ { $match: { _id: { $gt: 1 } }} ] batchSize: 2 object: *collection0 expectResult: - { _id: 2, x: 22 } - { _id: 3, x: 33 } - { _id: 4, x: 44 } - { _id: 5, x: 55 } - { _id: 6, x: 66 } expectEvents: - client: *client0 events: - commandStartedEvent: command: aggregate: *collection0Name pipeline: [ { $match: { _id: { $gt: 1 } }} ] cursor: { batchSize: 2 } commandName: aggregate databaseName: *database0Name - commandStartedEvent: command: getMore: { $$type: [ int, long ] } collection: *collection0Name batchSize: 2 commandName: getMore databaseName: *database0Name - commandStartedEvent: command: getMore: { $$type: [ int, long ] } collection: *collection0Name batchSize: 2 commandName: getMore databaseName: *database0Name - description: "aggregate with a string comment" runOnRequirements: - minServerVersion: "3.6.0" operations: - name: aggregate arguments: pipeline: [ { $match: { _id: { $gt: 1 } }} ] comment: "comment" object: *collection0 expectEvents: - client: *client0 events: - commandStartedEvent: command: aggregate: *collection0Name pipeline: [ { $match: { _id: { $gt: 1 } } } ] comment: "comment" - description: "aggregate with a document comment" runOnRequirements: - minServerVersion: "4.4" operations: - name: aggregate arguments: pipeline: [ { $match: { 
mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/aggregate.yml
description: "aggregate"
schemaVersion: "1.0"
createEntities:
  - client:
      id: &client0 client0
      useMultipleMongoses: true # ensure cursors pin to a single server
      observeEvents: [ commandStartedEvent ]
  - database:
      id: &database0 database0
      client: *client0
      databaseName: &database0Name aggregate-tests
  - collection:
      id: &collection0 collection0
      database: *database0
      collectionName: &collection0Name coll0
initialData:
  - collectionName: *collection0Name
    databaseName: *database0Name
    documents:
      - { _id: 1, x: 11 }
      - { _id: 2, x: 22 }
      - { _id: 3, x: 33 }
      - { _id: 4, x: 44 }
      - { _id: 5, x: 55 }
      - { _id: 6, x: 66 }
tests:
  - description: "aggregate with multiple batches works"
    operations:
      - name: aggregate
        arguments:
          pipeline: [ { $match: { _id: { $gt: 1 } } } ]
          batchSize: 2
        object: *collection0
        expectResult:
          - { _id: 2, x: 22 }
          - { _id: 3, x: 33 }
          - { _id: 4, x: 44 }
          - { _id: 5, x: 55 }
          - { _id: 6, x: 66 }
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                aggregate: *collection0Name
                pipeline: [ { $match: { _id: { $gt: 1 } } } ]
                cursor: { batchSize: 2 }
              commandName: aggregate
              databaseName: *database0Name
          - commandStartedEvent:
              command:
                getMore: { $$type: [ int, long ] }
                collection: *collection0Name
                batchSize: 2
              commandName: getMore
              databaseName: *database0Name
          - commandStartedEvent:
              command:
                getMore: { $$type: [ int, long ] }
                collection: *collection0Name
                batchSize: 2
              commandName: getMore
              databaseName: *database0Name
  - description: "aggregate with a string comment"
    runOnRequirements:
      - minServerVersion: "3.6.0"
    operations:
      - name: aggregate
        arguments:
          pipeline: [ { $match: { _id: { $gt: 1 } } } ]
          comment: "comment"
        object: *collection0
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                aggregate: *collection0Name
                pipeline: [ { $match: { _id: { $gt: 1 } } } ]
                comment: "comment"
  - description: "aggregate with a document comment"
    runOnRequirements:
      - minServerVersion: "4.4"
    operations:
      - name: aggregate
        arguments:
          pipeline: [ { $match: { _id: { $gt: 1 } } } ]
          comment: &comment0 { content: "test" }
        object: *collection0
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                aggregate: *collection0Name
                pipeline: [ { $match: { _id: { $gt: 1 } } } ]
                comment: *comment0
  - description: "aggregate with a document comment - pre 4.4"
    runOnRequirements:
      - minServerVersion: "3.6.0"
        maxServerVersion: "4.2.99"
    operations:
      - name: aggregate
        object: *collection0
        arguments:
          pipeline: [ { $match: { _id: { $gt: 1 } } } ]
          comment: *comment0
        expectError:
          isClientError: false
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                aggregate: *collection0Name
                pipeline: [ { $match: { _id: { $gt: 1 } } } ]
                comment: *comment0
              commandName: aggregate
              databaseName: *database0Name
  - description: "aggregate with comment sets comment on getMore"
    runOnRequirements:
      - minServerVersion: "4.4.0"
    operations:
      - name: aggregate
        arguments:
          pipeline: [ { $match: { _id: { $gt: 1 } } } ]
          batchSize: 2
          comment: *comment0
        object: *collection0
        expectResult:
          - { _id: 2, x: 22 }
          - { _id: 3, x: 33 }
          - { _id: 4, x: 44 }
          - { _id: 5, x: 55 }
          - { _id: 6, x: 66 }
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                aggregate: *collection0Name
                pipeline: [ { $match: { _id: { $gt: 1 } } } ]
                cursor: { batchSize: 2 }
                comment: *comment0
              commandName: aggregate
              databaseName: *database0Name
          - commandStartedEvent:
              command:
                getMore: { $$type: [ int, long ] }
                collection: *collection0Name
                batchSize: 2
                comment: *comment0
              commandName: getMore
              databaseName: *database0Name
          - commandStartedEvent:
              command:
                getMore: { $$type: [ int, long ] }
                collection: *collection0Name
                batchSize: 2
                comment: *comment0
              commandName: getMore
              databaseName: *database0Name
  - description: "aggregate with comment does not set comment on getMore - pre 4.4"
    runOnRequirements:
      - minServerVersion: "3.6.0"
        maxServerVersion: "4.3.99"
    operations:
      - name: aggregate
        arguments:
          pipeline: [ { $match: { _id: { $gt: 1 } } } ]
          batchSize: 2
          comment: "comment"
        object: *collection0
        expectResult:
          - { _id: 2, x: 22 }
          - { _id: 3, x: 33 }
          - { _id: 4, x: 44 }
          - { _id: 5, x: 55 }
          - { _id: 6, x: 66 }
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                aggregate: *collection0Name
                pipeline: [ { $match: { _id: { $gt: 1 } } } ]
                cursor: { batchSize: 2 }
                comment: "comment"
              commandName: aggregate
              databaseName: *database0Name
          - commandStartedEvent:
              command:
                getMore: { $$type: [ int, long ] }
                collection: *collection0Name
                batchSize: 2
                comment: { $$exists: false }
              commandName: getMore
              databaseName: *database0Name
          - commandStartedEvent:
              command:
                getMore: { $$type: [ int, long ] }
                collection: *collection0Name
                batchSize: 2
                comment: { $$exists: false }
              commandName: getMore
              databaseName: *database0Name
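The comment and batchSize arguments map onto aggregate options in the Ruby
driver; a sketch (handles as in the earlier examples):

    coll.aggregate(
      [ { '$match' => { '_id' => { '$gt' => 1 } } } ],
      comment: 'comment',
      batch_size: 2
    ).to_a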
mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/bulkWrite-arrayFilters-clientError.yml
# This file was created automatically using mongodb-spec-converter.
# Please review the generated file, then remove this notice.
description: bulkWrite-arrayFilters-clientError
schemaVersion: '1.0'
runOnRequirements:
  - maxServerVersion: 3.5.5
createEntities:
  - client:
      id: &client0 client0
      observeEvents: [ commandStartedEvent ]
  - database:
      id: &database0 database0
      client: client0
      databaseName: &database_name crud-v2
  - collection:
      id: &collection0 collection0
      database: database0
      collectionName: &collection_name crud-v2
initialData:
  - collectionName: *collection_name
    databaseName: *database_name
    documents:
      - { _id: 1, 'y': [ { b: 3 }, { b: 1 } ] }
      - { _id: 2, 'y': [ { b: 0 }, { b: 1 } ] }
tests:
  - description: 'BulkWrite on server that doesn''t support arrayFilters'
    operations:
      - object: *collection0
        name: bulkWrite
        arguments:
          requests:
            - updateOne:
                filter: { }
                update:
                  $set: { y.0.b: 2 }
                arrayFilters:
                  - { i.b: 1 }
          ordered: true
        expectError:
          isError: true
    expectEvents:
      - client: *client0
        events: []
  - description: 'BulkWrite on server that doesn''t support arrayFilters with arrayFilters on second op'
    operations:
      - object: *collection0
        name: bulkWrite
        arguments:
          requests:
            - updateOne:
                filter: { }
                update:
                  $set: { y.0.b: 2 }
            - updateMany:
                filter: { }
                update:
                  $set: { 'y.$[i].b': 2 }
                arrayFilters:
                  - { i.b: 1 }
          ordered: true
        expectError:
          isError: true
    expectEvents:
      - client: *client0
        events: []
mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/bulkWrite-arrayFilters.yml
# This file was created automatically using mongodb-spec-converter.
# Please review the generated file, then remove this notice.
description: bulkWrite-arrayFilters
schemaVersion: '1.0'
runOnRequirements:
  - minServerVersion: 3.5.6
createEntities:
  - client:
      id: &client0 client0
      observeEvents: [ commandStartedEvent ]
  - database:
      id: &database0 database0
      client: client0
      databaseName: &database_name crud-tests
  - collection:
      id: &collection0 collection0
      database: database0
      collectionName: &collection_name test
initialData:
  - collectionName: *collection_name
    databaseName: *database_name
    documents:
      - { _id: 1, 'y': [ { b: 3 }, { b: 1 } ] }
      - { _id: 2, 'y': [ { b: 0 }, { b: 1 } ] }
tests:
  - description: 'BulkWrite updateOne with arrayFilters'
    operations:
      - object: *collection0
        name: bulkWrite
        arguments:
          requests:
            - updateOne:
                filter: { }
                update:
                  $set: { 'y.$[i].b': 2 }
                arrayFilters:
                  - { i.b: 3 }
          ordered: true
        expectResult:
          deletedCount: 0
          insertedCount: 0
          insertedIds: { $$unsetOrMatches: {} }
          matchedCount: 1
          modifiedCount: 1
          upsertedCount: 0
          upsertedIds: { }
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                update: *collection_name
                updates:
                  - q: { }
                    u: { $set: { 'y.$[i].b': 2 } }
                    multi: { $$unsetOrMatches: false }
                    upsert: { $$unsetOrMatches: false }
                    arrayFilters:
                      - { i.b: 3 }
                ordered: true
              commandName: update
              databaseName: *database_name
    outcome:
      - collectionName: *collection_name
        databaseName: *database_name
        documents:
          - { _id: 1, 'y': [ { b: 2 }, { b: 1 } ] }
          - { _id: 2, 'y': [ { b: 0 }, { b: 1 } ] }
  - description: 'BulkWrite updateMany with arrayFilters'
    operations:
      - object: *collection0
        name: bulkWrite
        arguments:
          requests:
            - updateMany:
                filter: { }
                update:
                  $set: { 'y.$[i].b': 2 }
                arrayFilters:
                  - { i.b: 1 }
          ordered: true
        expectResult:
          deletedCount: 0
          insertedCount: 0
          insertedIds: { $$unsetOrMatches: {} }
          matchedCount: 2
          modifiedCount: 2
          upsertedCount: 0
          upsertedIds: { }
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                update: *collection_name
                updates:
                  - q: { }
                    u: { $set: { 'y.$[i].b': 2 } }
                    multi: true
                    upsert: { $$unsetOrMatches: false }
                    arrayFilters:
                      - { i.b: 1 }
                ordered: true
              commandName: update
              databaseName: *database_name
    outcome:
      - collectionName: *collection_name
        databaseName: *database_name
        documents:
          - { _id: 1, 'y': [ { b: 3 }, { b: 2 } ] }
          - { _id: 2, 'y': [ { b: 0 }, { b: 2 } ] }
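A sketch of a bulk write carrying arrayFilters in the Ruby driver (handles as
in the earlier examples):

    coll.bulk_write([
      { update_one: {
          filter: {},
          update: { '$set' => { 'y.$[i].b' => 2 } },
          array_filters: [ { 'i.b' => 3 } ] } }
    ])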
mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/bulkWrite-comment.yml
description: bulkWrite-comment
schemaVersion: '1.0'
createEntities:
  - client:
      id: &client0 client0
      observeEvents: [ commandStartedEvent ]
  - database:
      id: &database0 database0
      client: client0
      databaseName: &database_name crud-v2
  - collection:
      id: &collection0 collection0
      database: database0
      collectionName: &collection_name BulkWrite_comment
initialData: &initial_data
  - collectionName: *collection_name
    databaseName: *database_name
    documents:
      - { _id: 1, x: 11 }
      - { _id: 2, x: 22 }
      - { _id: 3, x: 33 }
      - { _id: 4, x: 44 }
# Tests in this file differ from those in the specification repo because the
# Ruby driver does not group bulk write operations by command.
# See https://jira.mongodb.org/browse/DRIVERS-2215
tests:
  - description: 'BulkWrite with string comment'
    runOnRequirements:
      - minServerVersion: "4.4"
    operations:
      - object: *collection0
        name: bulkWrite
        arguments:
          requests: &requests
            - insertOne:
                document: &inserted_document { _id: 5, x: "inserted" }
            - replaceOne:
                filter: &replaceOne_filter { _id: 1 }
                replacement: &replacement { _id: 1, x: "replaced" }
            - updateOne:
                filter: &updateOne_filter { _id: 2 }
                update: &update { $set: { x: "updated" } }
            - deleteOne:
                filter: &deleteOne_filter { _id: 3 }
          comment: &string_comment "comment"
        expectResult: &expect_results
          deletedCount: 1
          insertedCount: 1
          insertedIds: { $$unsetOrMatches: { 0: 5 } }
          matchedCount: 2
          modifiedCount: 2
          upsertedCount: 0
          upsertedIds: { }
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                insert: *collection_name
                documents:
                  - *inserted_document
                ordered: true
                comment: *string_comment
          - commandStartedEvent:
              command:
                update: *collection_name
                updates:
                  - q: *replaceOne_filter
                    u: *replacement
                    multi: { $$unsetOrMatches: false }
                    upsert: { $$unsetOrMatches: false }
                ordered: true
                comment: *string_comment
          - commandStartedEvent:
              command:
                update: *collection_name
                updates:
                  - q: *updateOne_filter
                    u: *update
                    multi: { $$unsetOrMatches: false }
                    upsert: { $$unsetOrMatches: false }
                ordered: true
                comment: *string_comment
          - commandStartedEvent:
              command:
                delete: *collection_name
                deletes:
                  - q: *deleteOne_filter
                    limit: 1
                ordered: true
                comment: *string_comment
    outcome: &outcome
      - collectionName: *collection_name
        databaseName: *database_name
        documents:
          - { _id: 1, x: "replaced" }
          - { _id: 2, x: "updated" }
          - { _id: 4, x: 44 }
          - { _id: 5, x: "inserted" }
  - description: 'BulkWrite with document comment'
    runOnRequirements:
      - minServerVersion: "4.4"
    operations:
      - object: *collection0
        name: bulkWrite
        arguments:
          requests: *requests
          comment: &document_comment { key: "value" }
        expectResult: *expect_results
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                insert: *collection_name
                documents:
                  - *inserted_document
                ordered: true
                comment: *document_comment
          - commandStartedEvent:
              command:
                update: *collection_name
                updates:
                  - q: *replaceOne_filter
                    u: *replacement
                    multi: { $$unsetOrMatches: false }
                    upsert: { $$unsetOrMatches: false }
                ordered: true
                comment: *document_comment
          - commandStartedEvent:
              command:
                update: *collection_name
                updates:
                  - q: *updateOne_filter
                    u: *update
                    multi: { $$unsetOrMatches: false }
                    upsert: { $$unsetOrMatches: false }
                ordered: true
                comment: *document_comment
          - commandStartedEvent:
              command:
                delete: *collection_name
                deletes:
                  - q: *deleteOne_filter
                    limit: 1
                ordered: true
                comment: *document_comment
    outcome: *outcome
  - description: 'BulkWrite with comment - pre 4.4'
    runOnRequirements:
      - maxServerVersion: "4.2.99"
    operations:
      - object: *collection0
        name: bulkWrite
        arguments:
          requests: *requests
          comment: "comment"
        expectError:
          isClientError: false
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                insert: *collection_name
                documents:
                  - *inserted_document
                ordered: true
                comment: "comment"
    outcome: *initial_data
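Assuming a driver version that supports the comment option on bulk writes
(the server side requires MongoDB 4.4+), the behavior above might be driven
like this sketch:

    coll.bulk_write(
      [ { insert_one: { _id: 5, x: 'inserted' } },
        { delete_one: { filter: { _id: 3 } } } ],
      comment: 'comment'
    )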
mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/bulkWrite-delete-hint-clientError.yml
# This file was created automatically using mongodb-spec-converter.
# Please review the generated file, then remove this notice.
description: bulkWrite-delete-hint-clientError
schemaVersion: '1.0'
runOnRequirements:
  - maxServerVersion: 3.3.99
createEntities:
  - client:
      id: &client0 client0
      observeEvents: [ commandStartedEvent ]
  - database:
      id: &database0 database0
      client: client0
      databaseName: &database_name crud-v2
  - collection:
      id: &collection0 collection0
      database: database0
      collectionName: &collection_name BulkWrite_delete_hint
initialData:
  - collectionName: *collection_name
    databaseName: *database_name
    documents:
      - { _id: 1, x: 11 }
      - { _id: 2, x: 22 }
      - { _id: 3, x: 33 }
      - { _id: 4, x: 44 }
tests:
  - description: 'BulkWrite deleteOne with hints unsupported (client-side error)'
    operations:
      - object: *collection0
        name: bulkWrite
        arguments:
          requests:
            - deleteOne:
                filter: &deleteOne_filter1 { _id: 1 }
                hint: &hint_string _id_
            - deleteOne:
                filter: &deleteOne_filter2 { _id: 2 }
                hint: &hint_doc { _id: 1 }
          ordered: true
        expectError:
          isError: true
    expectEvents:
      - client: *client0
        events: []
    outcome: &outcome
      - collectionName: *collection_name
        databaseName: *database_name
        documents:
          - { _id: 1, x: 11 }
          - { _id: 2, x: 22 }
          - { _id: 3, x: 33 }
          - { _id: 4, x: 44 }
  - description: 'BulkWrite deleteMany with hints unsupported (client-side error)'
    operations:
      - object: *collection0
        name: bulkWrite
        arguments:
          requests:
            - deleteMany:
                filter: &deleteMany_filter1 { _id: { $lt: 3 } }
                hint: *hint_string
            - deleteMany:
                filter: &deleteMany_filter2 { _id: { $gte: 4 } }
                hint: *hint_doc
          ordered: true
        expectError:
          isError: true
    expectEvents:
      - client: *client0
        events: []
    outcome: *outcome
mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/bulkWrite-delete-hint-serverError.yml
# This file was created automatically using mongodb-spec-converter.
# Please review the generated file, then remove this notice.
description: bulkWrite-delete-hint-serverError
schemaVersion: '1.0'
runOnRequirements:
  - minServerVersion: 3.4.0
    maxServerVersion: 4.3.3
createEntities:
  - client:
      id: &client0 client0
      observeEvents: [ commandStartedEvent ]
  - database:
      id: &database0 database0
      client: client0
      databaseName: &database_name crud-v2
  - collection:
      id: &collection0 collection0
      database: database0
      collectionName: &collection_name BulkWrite_delete_hint
initialData:
  - collectionName: *collection_name
    databaseName: *database_name
    documents:
      - { _id: 1, x: 11 }
      - { _id: 2, x: 22 }
      - { _id: 3, x: 33 }
      - { _id: 4, x: 44 }
tests:
  - description: 'BulkWrite deleteOne with hints unsupported (server-side error)'
    operations:
      - object: *collection0
        name: bulkWrite
        arguments:
          requests:
            - deleteOne:
                filter: &deleteOne_filter1 { _id: 1 }
                hint: &hint_string _id_
            - deleteOne:
                filter: &deleteOne_filter2 { _id: 2 }
                hint: &hint_doc { _id: 1 }
          ordered: true
        expectError:
          isError: true
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                delete: *collection_name
                deletes:
                  - q: *deleteOne_filter1
                    hint: *hint_string
                    limit: 1
                  - q: *deleteOne_filter2
                    hint: *hint_doc
                    limit: 1
                ordered: true
    outcome: &outcome
      - collectionName: *collection_name
        databaseName: *database_name
        documents:
          - { _id: 1, x: 11 }
          - { _id: 2, x: 22 }
          - { _id: 3, x: 33 }
          - { _id: 4, x: 44 }
  - description: 'BulkWrite deleteMany with hints unsupported (server-side error)'
    operations:
      - object: *collection0
        name: bulkWrite
        arguments:
          requests:
            - deleteMany:
                filter: &deleteMany_filter1 { _id: { $lt: 3 } }
                hint: *hint_string
            - deleteMany:
                filter: &deleteMany_filter2 { _id: { $gte: 4 } }
                hint: *hint_doc
          ordered: true
        expectError:
          isError: true
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                delete: *collection_name
                deletes:
                  - q: *deleteMany_filter1
                    hint: *hint_string
                    limit: 0
                  - q: *deleteMany_filter2
                    hint: *hint_doc
                    limit: 0
                ordered: true
    outcome: *outcome
mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/bulkWrite-delete-hint.yml
# This file was created automatically using mongodb-spec-converter.
# Please review the generated file, then remove this notice.
description: bulkWrite-delete-hint
schemaVersion: '1.0'
runOnRequirements:
  - minServerVersion: 4.3.4
createEntities:
  - client:
      id: &client0 client0
      observeEvents: [ commandStartedEvent ]
  - database:
      id: &database0 database0
      client: client0
      databaseName: &database_name crud-v2
  - collection:
      id: &collection0 collection0
      database: database0
      collectionName: &collection_name BulkWrite_delete_hint
initialData:
  - collectionName: *collection_name
    databaseName: *database_name
    documents:
      - { _id: 1, x: 11 }
      - { _id: 2, x: 22 }
      - { _id: 3, x: 33 }
      - { _id: 4, x: 44 }
tests:
  - description: 'BulkWrite deleteOne with hints'
    operations:
      - object: *collection0
        name: bulkWrite
        arguments:
          requests:
            - deleteOne:
                filter: &deleteOne_filter1 { _id: 1 }
                hint: &hint_string _id_
            - deleteOne:
                filter: &deleteOne_filter2 { _id: 2 }
                hint: &hint_doc { _id: 1 }
          ordered: true
        expectResult:
          deletedCount: 2
          insertedCount: 0
          insertedIds: { $$unsetOrMatches: {} }
          matchedCount: 0
          modifiedCount: 0
          upsertedCount: 0
          upsertedIds: { }
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                delete: *collection_name
                deletes:
                  - q: *deleteOne_filter1
                    hint: *hint_string
                    limit: 1
                  - q: *deleteOne_filter2
                    hint: *hint_doc
                    limit: 1
                ordered: true
    outcome:
      - collectionName: *collection_name
        databaseName: *database_name
        documents:
          - { _id: 3, x: 33 }
          - { _id: 4, x: 44 }
  - description: 'BulkWrite deleteMany with hints'
    operations:
      - object: *collection0
        name: bulkWrite
        arguments:
          requests:
            - deleteMany:
                filter: &deleteMany_filter1 { _id: { $lt: 3 } }
                hint: *hint_string
            - deleteMany:
                filter: &deleteMany_filter2 { _id: { $gte: 4 } }
                hint: *hint_doc
          ordered: true
        expectResult:
          deletedCount: 3
          insertedCount: 0
          insertedIds: { $$unsetOrMatches: {} }
          matchedCount: 0
          modifiedCount: 0
          upsertedCount: 0
          upsertedIds: { }
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                delete: *collection_name
                deletes:
                  - q: *deleteMany_filter1
                    hint: *hint_string
                    limit: 0
                  - q: *deleteMany_filter2
                    hint: *hint_doc
                    limit: 0
                ordered: true
    outcome:
      - collectionName: *collection_name
        databaseName: *database_name
        documents:
          - { _id: 3, x: 33 }
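A sketch of bulk deletes with index hints in the Ruby driver (hints on the
delete command require MongoDB 4.4+; older servers error, or the driver
refuses client-side, as the surrounding files assert):

    coll.bulk_write([
      { delete_one:  { filter: { _id: 1 }, hint: '_id_' } },
      { delete_many: { filter: { _id: { '$lt' => 3 } }, hint: { _id: 1 } } }
    ])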
with hint string on 4.4+ server" runOnRequirements: - minServerVersion: "4.4.0" operations: - object: *collection0 name: bulkWrite arguments: requests: - deleteMany: filter: *filter hint: _id_ expectResult: &unacknowledgedResult { $$unsetOrMatches: { acknowledged: { $$unsetOrMatches: false } } } expectEvents: &events - client: *client0 events: - commandStartedEvent: command: delete: *collection0Name deletes: - q: *filter hint: { $$type: [ string, object ]} limit: 0 writeConcern: { w: 0 } - description: "Unacknowledged deleteMany with hint document on 4.4+ server" runOnRequirements: - minServerVersion: "4.4.0" operations: - object: *collection0 name: bulkWrite arguments: requests: - deleteMany: filter: *filter hint: { _id: 1 } expectResult: *unacknowledgedResult expectEvents: *events mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/bulkWrite-deleteMany-let.yml000066400000000000000000000042661505113246500312210ustar00rootroot00000000000000description: "BulkWrite deleteMany-let" schemaVersion: "1.0" createEntities: - client: id: &client0 client0 observeEvents: [ commandStartedEvent ] - database: id: &database0 database0 client: *client0 databaseName: &database0Name crud-tests - collection: id: &collection0 collection0 database: *database0 collectionName: &collection0Name coll0 initialData: - collectionName: *collection0Name databaseName: *database0Name documents: - { _id: 1 } - { _id: 2 } tests: - description: "BulkWrite deleteMany with let option" runOnRequirements: - minServerVersion: "5.0" operations: - object: *collection0 name: bulkWrite arguments: requests: - deleteMany: filter: &filter $expr: $eq: [ "$_id", "$$id" ] let: &let id: 1 expectEvents: - client: *client0 events: - commandStartedEvent: command: delete: *collection0Name deletes: - q: *filter limit: 0 let: *let outcome: - collectionName: *collection0Name databaseName: *database0Name documents: - { _id: 2 } - description: "BulkWrite deleteMany with let option unsupported (server-side error)" runOnRequirements: - minServerVersion: "3.6.0" maxServerVersion: "4.4.99" operations: - object: *collection0 name: bulkWrite arguments: requests: - deleteOne: filter: *filter let: *let expectError: errorContains: "'delete.let' is an unknown field" isClientError: false expectEvents: - client: *client0 events: - commandStartedEvent: command: delete: *collection0Name deletes: - q: *filter limit: 1 let: *let outcome: - collectionName: *collection0Name databaseName: *database0Name documents: - { _id: 1 } - { _id: 2 } bulkWrite-deleteOne-hint-unacknowledged.yml000066400000000000000000000053231505113246500341200ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unifieddescription: bulkWrite-deleteOne-hint-unacknowledged schemaVersion: '1.0' createEntities: - client: id: &client0 client0 observeEvents: - commandStartedEvent - database: id: &database0 database0 client: *client0 databaseName: &database0Name db0 - collection: id: &collection0 collection0 database: *database0 collectionName: &collection0Name coll0 collectionOptions: writeConcern: { w: 0 } initialData: - collectionName: *collection0Name databaseName: *database0Name documents: &documents - { _id: 1, x: 11 } - { _id: 2, x: 22 } tests: - description: "Unacknowledged deleteOne with hint string fails with client-side error on pre-4.4 server" runOnRequirements: - maxServerVersion: "4.2.99" operations: - object: *collection0 name: bulkWrite arguments: requests: - deleteOne: filter: &filter { _id: { $gt: 1 } } hint: _id_ expectError: isClientError: true 
mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/bulkWrite-deleteOne-hint-unacknowledged.yml
description: bulkWrite-deleteOne-hint-unacknowledged
schemaVersion: '1.0'
createEntities:
  - client:
      id: &client0 client0
      observeEvents: [ commandStartedEvent ]
  - database:
      id: &database0 database0
      client: *client0
      databaseName: &database0Name db0
  - collection:
      id: &collection0 collection0
      database: *database0
      collectionName: &collection0Name coll0
      collectionOptions:
        writeConcern: { w: 0 }
initialData:
  - collectionName: *collection0Name
    databaseName: *database0Name
    documents: &documents
      - { _id: 1, x: 11 }
      - { _id: 2, x: 22 }
tests:
  - description: "Unacknowledged deleteOne with hint string fails with client-side error on pre-4.4 server"
    runOnRequirements:
      - maxServerVersion: "4.2.99"
    operations:
      - object: *collection0
        name: bulkWrite
        arguments:
          requests:
            - deleteOne:
                filter: &filter { _id: { $gt: 1 } }
                hint: _id_
        expectError:
          isClientError: true
    expectEvents: &noEvents
      - client: *client0
        events: []
  - description: "Unacknowledged deleteOne with hint document fails with client-side error on pre-4.4 server"
    runOnRequirements:
      - maxServerVersion: "4.2.99"
    operations:
      - object: *collection0
        name: bulkWrite
        arguments:
          requests:
            - deleteOne:
                filter: *filter
                hint: { _id: 1 }
        expectError:
          isClientError: true
    expectEvents: *noEvents
  - description: "Unacknowledged deleteOne with hint string on 4.4+ server"
    runOnRequirements:
      - minServerVersion: "4.4.0"
    operations:
      - object: *collection0
        name: bulkWrite
        arguments:
          requests:
            - deleteOne:
                filter: *filter
                hint: _id_
        expectResult: &unacknowledgedResult { $$unsetOrMatches: { acknowledged: { $$unsetOrMatches: false } } }
    expectEvents: &events
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                delete: *collection0Name
                deletes:
                  - q: *filter
                    hint: { $$type: [ string, object ] }
                    limit: 1
                writeConcern: { w: 0 }
  - description: "Unacknowledged deleteOne with hint document on 4.4+ server"
    runOnRequirements:
      - minServerVersion: "4.4.0"
    operations:
      - object: *collection0
        name: bulkWrite
        arguments:
          requests:
            - deleteOne:
                filter: *filter
                hint: { _id: 1 }
        expectResult: *unacknowledgedResult
    expectEvents: *events
mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/bulkWrite-deleteOne-let.yml
description: "BulkWrite deleteOne-let"
schemaVersion: "1.0"
createEntities:
  - client:
      id: &client0 client0
      observeEvents: [ commandStartedEvent ]
  - database:
      id: &database0 database0
      client: *client0
      databaseName: &database0Name crud-tests
  - collection:
      id: &collection0 collection0
      database: *database0
      collectionName: &collection0Name coll0
initialData:
  - collectionName: *collection0Name
    databaseName: *database0Name
    documents:
      - { _id: 1 }
      - { _id: 2 }
tests:
  - description: "BulkWrite deleteOne with let option"
    runOnRequirements:
      - minServerVersion: "5.0"
    operations:
      - object: *collection0
        name: bulkWrite
        arguments:
          requests:
            - deleteOne:
                filter: &filter
                  $expr:
                    $eq: [ "$_id", "$$id" ]
          let: &let
            id: 1
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                delete: *collection0Name
                deletes:
                  - q: *filter
                    limit: 1
                let: *let
    outcome:
      - collectionName: *collection0Name
        databaseName: *database0Name
        documents:
          - { _id: 2 }
  - description: "BulkWrite deleteOne with let option unsupported (server-side error)"
    runOnRequirements:
      - minServerVersion: "3.6.0"
        maxServerVersion: "4.9"
    operations:
      - object: *collection0
        name: bulkWrite
        arguments:
          requests:
            - deleteOne:
                filter: *filter
          let: *let
        expectError:
          errorContains: "'delete.let' is an unknown field"
          isClientError: false
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                delete: *collection0Name
                deletes:
                  - q: *filter
                    limit: 1
                let: *let
    outcome:
      - collectionName: *collection0Name
        databaseName: *database0Name
        documents:
          - { _id: 1 }
          - { _id: 2 }
mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/bulkWrite-errorResponse.yml
description: "bulkWrite-errorResponse"
schemaVersion: "1.12"
createEntities:
  - client:
      id: &client0 client0
      useMultipleMongoses: false
  - database:
      id: &database0 database0
      client: *client0
      databaseName: &database0Name crud-tests
  - collection:
      id: &collection0 collection0
      database: *database0
      collectionName: &collection0Name test
tests:
  # This test intentionally executes only a single insert operation in the bulk
  # write to make the error code and response assertions less ambiguous. That
  # said, some drivers may still need to skip this test because the CRUD spec
  # does not prescribe how drivers should formulate a BulkWriteException beyond
  # collecting write and write concern errors.
  - description: "bulkWrite operations support errorResponse assertions"
    runOnRequirements:
      - minServerVersion: "4.0.0"
        topologies: [ single, replicaset ]
      - minServerVersion: "4.2.0"
        topologies: [ sharded ]
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *client0
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: [ insert ]
              errorCode: &errorCode 8 # UnknownError
      - name: bulkWrite
        object: *collection0
        arguments:
          requests:
            - insertOne:
                document: { _id: 1 }
        expectError:
          errorCode: *errorCode
          errorResponse:
            code: *errorCode
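The failPoint operation above is handled by the test runner; configured by
hand against a test deployment it would look roughly like this sketch:

    client.use(:admin).database.command(
      configureFailPoint: 'failCommand',
      mode: { times: 1 },
      data: { failCommands: [ 'insert' ], errorCode: 8 }
    )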
mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/bulkWrite-insertOne-dots_and_dollars.yml
description: "bulkWrite-insertOne-dots_and_dollars"
schemaVersion: "1.0"
createEntities:
  - client:
      id: &client0 client0
      observeEvents: [ commandStartedEvent ]
  - database:
      id: &database0 database0
      client: *client0
      databaseName: &database0Name crud-tests
  - collection:
      id: &collection0 collection0
      database: *database0
      collectionName: &collection0Name coll0
initialData: &initialData
  - collectionName: *collection0Name
    databaseName: *database0Name
    documents: []
tests:
  - description: "Inserting document with top-level dollar-prefixed key on 5.0+ server"
    runOnRequirements:
      - minServerVersion: "5.0"
    operations:
      - name: bulkWrite
        object: *collection0
        arguments:
          requests:
            - insertOne:
                document: &dollarPrefixedKey { _id: 1, $a: 1 }
        expectResult: &bulkWriteResult
          deletedCount: 0
          insertedCount: 1
          insertedIds: { $$unsetOrMatches: { 0: 1 } }
          matchedCount: 0
          modifiedCount: 0
          upsertedCount: 0
          upsertedIds: { }
    expectEvents: &expectEventsDollarPrefixedKey
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                insert: *collection0Name
                documents:
                  - *dollarPrefixedKey
    outcome:
      - collectionName: *collection0Name
        databaseName: *database0Name
        documents:
          - *dollarPrefixedKey
  - description: "Inserting document with top-level dollar-prefixed key on pre-5.0 server yields server-side error"
    runOnRequirements:
      - maxServerVersion: "4.99"
    operations:
      - name: bulkWrite
        object: *collection0
        arguments:
          requests:
            - insertOne:
                document: *dollarPrefixedKey
        expectError:
          isClientError: false
    expectEvents: *expectEventsDollarPrefixedKey
    outcome: *initialData
  - description: "Inserting document with top-level dotted key"
    operations:
      - name: bulkWrite
        object: *collection0
        arguments:
          requests:
            - insertOne:
                document: &dottedKey { _id: 1, a.b: 1 }
        expectResult: *bulkWriteResult
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                insert: *collection0Name
                documents:
                  - *dottedKey
    outcome:
      - collectionName: *collection0Name
        databaseName: *database0Name
        documents:
          - *dottedKey
  - description: "Inserting document with dollar-prefixed key in embedded doc"
    operations:
      - name: bulkWrite
        object: *collection0
        arguments:
          requests:
            - insertOne:
                document: &dollarPrefixedKeyInEmbedded { _id: 1, a: { $b: 1 } }
        expectResult: *bulkWriteResult
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                insert: *collection0Name
                documents:
                  - *dollarPrefixedKeyInEmbedded
    outcome:
      - collectionName: *collection0Name
        databaseName: *database0Name
        documents:
          - *dollarPrefixedKeyInEmbedded
  - description: "Inserting document with dotted key in embedded doc"
    operations:
      - name: bulkWrite
        object: *collection0
        arguments:
          requests:
            - insertOne:
                document: &dottedKeyInEmbedded { _id: 1, a: { b.c: 1 } }
        expectResult: *bulkWriteResult
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                insert: *collection0Name
                documents:
                  - *dottedKeyInEmbedded
    outcome:
      - collectionName: *collection0Name
        databaseName: *database0Name
        documents:
          - *dottedKeyInEmbedded
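A sketch of the corresponding plain inserts (MongoDB 5.0+ accepts the
dollar-prefixed forms; earlier servers reject them, as the tests assert):

    coll.insert_one({ '_id' => 1, 'a.b' => 1 })
    coll.insert_one({ '_id' => 2, 'a' => { '$b' => 1 } })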
mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/bulkWrite-replaceOne-dots_and_dollars.yml
description: "bulkWrite-replaceOne-dots_and_dollars"
schemaVersion: "1.0"
createEntities:
  - client:
      id: &client0 client0
      observeEvents: [ commandStartedEvent ]
  - database:
      id: &database0 database0
      client: *client0
      databaseName: &database0Name crud-tests
  - collection:
      id: &collection0 collection0
      database: *database0
      collectionName: &collection0Name coll0
initialData: &initialData
  - collectionName: *collection0Name
    databaseName: *database0Name
    documents:
      - { _id: 1 }
tests:
  - description: "Replacing document with top-level dotted key on 3.6+ server"
    runOnRequirements:
      - minServerVersion: "3.6"
    operations:
      - name: bulkWrite
        object: *collection0
        arguments:
          requests:
            - replaceOne:
                filter: { _id: 1 }
                replacement: &dottedKey { _id: 1, a.b: 1 }
        expectResult: &bulkWriteResult
          deletedCount: 0
          insertedCount: 0
          insertedIds: { $$unsetOrMatches: { } }
          matchedCount: 1
          modifiedCount: 1
          upsertedCount: 0
          upsertedIds: { }
    expectEvents: &expectEventsDottedKey
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                update: *collection0Name
                updates:
                  - q: { _id: 1 }
                    u: *dottedKey
                    multi: { $$unsetOrMatches: false }
                    upsert: { $$unsetOrMatches: false }
    outcome:
      - collectionName: *collection0Name
        databaseName: *database0Name
        documents:
          - *dottedKey
  - description: "Replacing document with top-level dotted key on pre-3.6 server yields server-side error"
    runOnRequirements:
      - maxServerVersion: "3.4.99"
    operations:
      - name: bulkWrite
        object: *collection0
        arguments:
          requests:
            - replaceOne:
                filter: { _id: 1 }
                replacement: *dottedKey
        expectError:
          isClientError: false
    expectEvents: *expectEventsDottedKey
    outcome: *initialData
  - description: "Replacing document with dollar-prefixed key in embedded doc on 5.0+ server"
    runOnRequirements:
      - minServerVersion: "5.0"
    operations:
      - name: bulkWrite
        object: *collection0
        arguments:
          requests:
            - replaceOne:
                filter: { _id: 1 }
                replacement: &dollarPrefixedKeyInEmbedded { _id: 1, a: { $b: 1 } }
        expectResult: *bulkWriteResult
    expectEvents: &expectEventsDollarPrefixedKeyInEmbedded
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                update: *collection0Name
                updates:
                  - q: { _id: 1 }
                    u: *dollarPrefixedKeyInEmbedded
                    multi: { $$unsetOrMatches: false }
                    upsert: { $$unsetOrMatches: false }
    outcome:
      - collectionName: *collection0Name
        databaseName: *database0Name
        documents:
          - *dollarPrefixedKeyInEmbedded
  - description: "Replacing document with dollar-prefixed key in embedded doc on pre-5.0 server yields server-side error"
    runOnRequirements:
      - maxServerVersion: "4.99"
    operations:
      - name: bulkWrite
        object: *collection0
        arguments:
          requests:
            - replaceOne:
                filter: { _id: 1 }
                replacement: *dollarPrefixedKeyInEmbedded
        expectError:
          isClientError: false
    expectEvents: *expectEventsDollarPrefixedKeyInEmbedded
    outcome: *initialData
  - description: "Replacing document with dotted key in embedded doc on 3.6+ server"
    runOnRequirements:
      - minServerVersion: "3.6"
    operations:
      - name: bulkWrite
        object: *collection0
        arguments:
          requests:
            - replaceOne:
                filter: { _id: 1 }
                replacement: &dottedKeyInEmbedded { _id: 1, a: { b.c: 1 } }
        expectResult: *bulkWriteResult
    expectEvents: &expectEventsDottedKeyInEmbedded
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                update: *collection0Name
                updates:
                  - q: { _id: 1 }
                    u: *dottedKeyInEmbedded
                    multi: { $$unsetOrMatches: false }
                    upsert: { $$unsetOrMatches: false }
    outcome:
      - collectionName: *collection0Name
        databaseName: *database0Name
        documents:
          - *dottedKeyInEmbedded
  - description: "Replacing document with dotted key in embedded doc on pre-3.6 server yields server-side error"
    runOnRequirements:
      - maxServerVersion: "3.4.99"
    operations:
      - name: bulkWrite
        object: *collection0
        arguments:
          requests:
            - replaceOne:
                filter: { _id: 1 }
                replacement: *dottedKeyInEmbedded
        expectError:
          isClientError: false
    expectEvents: *expectEventsDottedKeyInEmbedded
    outcome: *initialData
mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/bulkWrite-replaceOne-hint-unacknowledged.yml
description: bulkWrite-replaceOne-hint-unacknowledged
schemaVersion: '1.0'
createEntities:
  - client:
      id: &client0 client0
      observeEvents: [ commandStartedEvent ]
  - database:
      id: &database0 database0
      client: *client0
      databaseName: &database0Name db0
  - collection:
      id: &collection0 collection0
      database: *database0
      collectionName: &collection0Name coll0
      collectionOptions:
        writeConcern: { w: 0 }
initialData:
  - collectionName: *collection0Name
    databaseName: *database0Name
    documents: &documents
      - { _id: 1, x: 11 }
      - { _id: 2, x: 22 }
tests:
  - description: "Unacknowledged replaceOne with hint string fails with client-side error on pre-4.2 server"
    runOnRequirements:
      - maxServerVersion: "4.0.99"
    operations:
      - object: *collection0
        name: bulkWrite
        arguments:
          requests:
            - replaceOne:
                filter: &filter { _id: { $gt: 1 } }
                replacement: &replacement { x: 111 }
                hint: _id_
        expectError:
          isClientError: true
    expectEvents: &noEvents
      - client: *client0
        events: []
  - description: "Unacknowledged replaceOne with hint document fails with client-side error on pre-4.2 server"
    runOnRequirements:
      - maxServerVersion: "4.0.99"
    operations:
      - object: *collection0
        name: bulkWrite
        arguments:
          requests:
            - replaceOne:
                filter: *filter
                replacement: *replacement
                hint: { _id: 1 }
        expectError:
          isClientError: true
    expectEvents: *noEvents
  - description: "Unacknowledged replaceOne with hint string on 4.2+ server"
    runOnRequirements:
      - minServerVersion: "4.2.0"
    operations:
      - object: *collection0
        name: bulkWrite
        arguments:
          requests:
            - replaceOne:
                filter: *filter
                replacement: *replacement
                hint: _id_
        expectResult: &unacknowledgedResult { $$unsetOrMatches: { acknowledged: { $$unsetOrMatches: false } } }
    expectEvents: &events
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                update: *collection0Name
                updates:
                  - q: *filter
                    u: *replacement
                    multi: { $$unsetOrMatches: false }
                    upsert: { $$unsetOrMatches: false }
                    hint: { $$type: [ string, object ] }
                writeConcern: { w: 0 }
  - description: "Unacknowledged replaceOne with hint document on 4.2+ server"
    runOnRequirements:
      - minServerVersion: "4.2.0"
    operations:
      - object: *collection0
        name: bulkWrite
        arguments:
          requests:
            - replaceOne:
                filter: *filter
                replacement: *replacement
                hint: { _id: 1 }
        expectResult: *unacknowledgedResult
    expectEvents: *events
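The unacknowledged variants above use a w:0 collection handle, sketched here
(handles as in the earlier examples):

    unack = coll.with(write: { w: 0 })
    unack.bulk_write([
      { replace_one: { filter: { _id: { '$gt' => 1 } },
                       replacement: { x: 111 },
                       hint: '_id_' } }
    ])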
replaceOne-let" schemaVersion: "1.0" createEntities: - client: id: &client0 client0 observeEvents: [ commandStartedEvent ] - database: id: &database0 database0 client: *client0 databaseName: &database0Name crud-tests - collection: id: &collection0 collection0 database: *database0 collectionName: &collection0Name coll0 initialData: - collectionName: *collection0Name databaseName: *database0Name documents: - { _id: 1 } - { _id: 2 } tests: - description: "BulkWrite replaceOne with let option" runOnRequirements: - minServerVersion: "5.0" operations: - object: *collection0 name: bulkWrite arguments: requests: - replaceOne: filter: &filter $expr: $eq: [ "$_id", "$$id" ] replacement: &replacement {"x": 3} let: &let id: 1 expectEvents: - client: *client0 events: - commandStartedEvent: command: update: *collection0Name updates: - q: *filter u: *replacement multi: { $$unsetOrMatches: false } upsert: { $$unsetOrMatches: false } let: *let outcome: - collectionName: *collection0Name databaseName: *database0Name documents: - { _id: 1, x: 3 } - { _id: 2 } - description: "BulkWrite replaceOne with let option unsupported (server-side error)" runOnRequirements: - minServerVersion: "4.2" maxServerVersion: "4.9" operations: - object: *collection0 name: bulkWrite arguments: requests: - replaceOne: filter: *filter replacement: *replacement let: *let expectError: errorContains: "'update.let' is an unknown field" isClientError: false expectEvents: - client: *client0 events: - commandStartedEvent: command: update: *collection0Name updates: - q: *filter u: *replacement multi: { $$unsetOrMatches: false } upsert: { $$unsetOrMatches: false } let: *let outcome: - collectionName: *collection0Name databaseName: *database0Name documents: - { _id: 1 } - { _id: 2 } mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/bulkWrite-update-hint-clientError.yml000066400000000000000000000066271505113246500330630ustar00rootroot00000000000000# This file was created automatically using mongodb-spec-converter. # Please review the generated file, then remove this notice. 
mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/bulkWrite-update-hint-clientError.yml
# This file was created automatically using mongodb-spec-converter.
# Please review the generated file, then remove this notice.
description: bulkWrite-update-hint-clientError
schemaVersion: '1.0'
runOnRequirements:
  - maxServerVersion: 3.3.99
createEntities:
  - client:
      id: &client0 client0
      observeEvents: [ commandStartedEvent ]
  - database:
      id: &database0 database0
      client: client0
      databaseName: &database_name crud-v2
  - collection:
      id: &collection0 collection0
      database: database0
      collectionName: &collection_name test_bulkwrite_update_hint
initialData:
  - collectionName: *collection_name
    databaseName: *database_name
    documents:
      - { _id: 1, x: 11 }
      - { _id: 2, x: 22 }
      - { _id: 3, x: 33 }
      - { _id: 4, x: 44 }
tests:
  - description: 'BulkWrite updateOne with update hints unsupported (client-side error)'
    operations:
      - object: *collection0
        name: bulkWrite
        arguments:
          requests:
            - updateOne:
                filter: &updateOne_filter { _id: 1 }
                update: &updateOne_update
                  $inc: { x: 1 }
                hint: &hint_string _id_
            - updateOne:
                filter: *updateOne_filter
                update: *updateOne_update
                hint: &hint_doc { _id: 1 }
          ordered: true
        expectError:
          isError: true
    expectEvents:
      - client: *client0
        events: []
    outcome: &outcome
      - collectionName: *collection_name
        databaseName: *database_name
        documents:
          - { _id: 1, x: 11 }
          - { _id: 2, x: 22 }
          - { _id: 3, x: 33 }
          - { _id: 4, x: 44 }
  - description: 'BulkWrite updateMany with update hints unsupported (client-side error)'
    operations:
      - object: *collection0
        name: bulkWrite
        arguments:
          requests:
            - updateMany:
                filter: &updateMany_filter { _id: { $lt: 3 } }
                update: &updateMany_update
                  $inc: { x: 1 }
                hint: *hint_string
            - updateMany:
                filter: *updateMany_filter
                update: *updateMany_update
                hint: *hint_doc
          ordered: true
        expectError:
          isError: true
    expectEvents:
      - client: *client0
        events: []
    outcome: *outcome
  - description: 'BulkWrite replaceOne with update hints unsupported (client-side error)'
    operations:
      - object: *collection0
        name: bulkWrite
        arguments:
          requests:
            - replaceOne:
                filter: { _id: 3 }
                replacement: { x: 333 }
                hint: *hint_string
            - replaceOne:
                filter: { _id: 4 }
                replacement: { x: 444 }
                hint: *hint_doc
          ordered: true
        expectError:
          isError: true
    expectEvents:
      - client: *client0
        events: []
    outcome: *outcome
mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/bulkWrite-update-hint-serverError.yml
# This file was created automatically using mongodb-spec-converter.
# Please review the generated file, then remove this notice.
description: bulkWrite-update-hint-serverError
schemaVersion: '1.0'
runOnRequirements:
  - minServerVersion: 3.4.0
    maxServerVersion: 4.1.9
createEntities:
  - client:
      id: &client0 client0
      observeEvents: [ commandStartedEvent ]
  - database:
      id: &database0 database0
      client: client0
      databaseName: &database_name crud-v2
  - collection:
      id: &collection0 collection0
      database: database0
      collectionName: &collection_name test_bulkwrite_update_hint
initialData:
  - collectionName: *collection_name
    databaseName: *database_name
    documents:
      - { _id: 1, x: 11 }
      - { _id: 2, x: 22 }
      - { _id: 3, x: 33 }
      - { _id: 4, x: 44 }
tests:
  - description: 'BulkWrite updateOne with update hints unsupported (server-side error)'
    operations:
      - object: *collection0
        name: bulkWrite
        arguments:
          requests:
            - updateOne:
                filter: &updateOne_filter { _id: 1 }
                update: &updateOne_update
                  $inc: { x: 1 }
                hint: &hint_string _id_
            - updateOne:
                filter: *updateOne_filter
                update: *updateOne_update
                hint: &hint_doc { _id: 1 }
          ordered: true
        expectError:
          isError: true
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                update: *collection_name
                updates:
                  - q: *updateOne_filter
                    u: *updateOne_update
                    hint: *hint_string
                    multi: { $$unsetOrMatches: false }
                    upsert: { $$unsetOrMatches: false }
                  - q: *updateOne_filter
                    u: *updateOne_update
                    hint: *hint_doc
                    multi: { $$unsetOrMatches: false }
                    upsert: { $$unsetOrMatches: false }
                ordered: true
    outcome:
      - collectionName: *collection_name
        databaseName: *database_name
        documents:
          - { _id: 1, x: 11 }
          - { _id: 2, x: 22 }
          - { _id: 3, x: 33 }
          - { _id: 4, x: 44 }
  - description: 'BulkWrite updateMany with update hints unsupported (server-side error)'
    operations:
      - object: *collection0
        name: bulkWrite
        arguments:
          requests:
            - updateMany:
                filter: &updateMany_filter { _id: { $lt: 3 } }
                update: &updateMany_update
                  $inc: { x: 1 }
                hint: *hint_string
            - updateMany:
                filter: *updateMany_filter
                update: *updateMany_update
                hint: *hint_doc
          ordered: true
        expectError:
          isError: true
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                update: *collection_name
                updates:
                  - q: *updateMany_filter
                    u: *updateMany_update
                    multi: true
                    hint: *hint_string
                    upsert: { $$unsetOrMatches: false }
                  - q: *updateMany_filter
                    u: *updateMany_update
                    multi: true
                    hint: *hint_doc
                    upsert: { $$unsetOrMatches: false }
                ordered: true
    outcome:
      - collectionName: *collection_name
        databaseName: *database_name
        documents:
          - { _id: 1, x: 11 }
          - { _id: 2, x: 22 }
          - { _id: 3, x: 33 }
          - { _id: 4, x: 44 }
  - description: 'BulkWrite replaceOne with update hints unsupported (server-side error)'
    operations:
      - object: *collection0
        name: bulkWrite
        arguments:
          requests:
            - replaceOne:
                filter: { _id: 3 }
                replacement: { x: 333 }
                hint: *hint_string
            - replaceOne:
                filter: { _id: 4 }
                replacement: { x: 444 }
                hint: *hint_doc
          ordered: true
        expectError:
          isError: true
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                update: *collection_name
                updates:
                  - q: { _id: 3 }
                    u: { x: 333 }
                    hint: *hint_string
                    multi: { $$unsetOrMatches: false }
                    upsert: { $$unsetOrMatches: false }
                  - q: { _id: 4 }
                    u: { x: 444 }
                    hint: *hint_doc
                    multi: { $$unsetOrMatches: false }
                    upsert: { $$unsetOrMatches: false }
                ordered: true
    outcome:
      - collectionName: *collection_name
        databaseName: *database_name
        documents:
          - { _id: 1, x: 11 }
          - { _id: 2, x: 22 }
          - { _id: 3, x: 33 }
          - { _id: 4, x: 44 }
mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/bulkWrite-update-hint.yml
# This file was created automatically using mongodb-spec-converter.
# Please review the generated file, then remove this notice.
description: bulkWrite-update-hint
schemaVersion: '1.0'
runOnRequirements:
  - minServerVersion: 4.2.0
createEntities:
  - client:
      id: &client0 client0
      observeEvents: [ commandStartedEvent ]
  - database:
      id: &database0 database0
      client: client0
      databaseName: &database_name crud-v2
  - collection:
      id: &collection0 collection0
      database: database0
      collectionName: &collection_name test_bulkwrite_update_hint
initialData:
  - collectionName: *collection_name
    databaseName: *database_name
    documents:
      - { _id: 1, x: 11 }
      - { _id: 2, x: 22 }
      - { _id: 3, x: 33 }
      - { _id: 4, x: 44 }
tests:
  - description: 'BulkWrite updateOne with update hints'
    operations:
      - object: *collection0
        name: bulkWrite
        arguments:
          requests:
            - updateOne:
                filter: &updateOne_filter { _id: 1 }
                update: &updateOne_update
                  $inc: { x: 1 }
                hint: &hint_string _id_
            - updateOne:
                filter: *updateOne_filter
                update: *updateOne_update
                hint: &hint_doc { _id: 1 }
          ordered: true
        expectResult:
          deletedCount: 0
          insertedCount: 0
          insertedIds: { $$unsetOrMatches: {} }
          matchedCount: 2
          modifiedCount: 2
          upsertedCount: 0
          upsertedIds: { }
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                update: *collection_name
                updates:
                  - q: *updateOne_filter
                    u: *updateOne_update
                    multi: { $$unsetOrMatches: false }
                    upsert: { $$unsetOrMatches: false }
                    hint: *hint_string
                  - q: *updateOne_filter
                    u: *updateOne_update
                    multi: { $$unsetOrMatches: false }
                    upsert: { $$unsetOrMatches: false }
                    hint: *hint_doc
                ordered: true
    outcome:
      - collectionName: *collection_name
        databaseName: *database_name
        documents:
          - { _id: 1, x: 13 }
          - { _id: 2, x: 22 }
          - { _id: 3, x: 33 }
          - { _id: 4, x: 44 }
  - description: 'BulkWrite updateMany with update hints'
    operations:
      - object: *collection0
        name: bulkWrite
        arguments:
          requests:
            - updateMany:
                filter: &updateMany_filter { _id: { $lt: 3 } }
                update: &updateMany_update
                  $inc: { x: 1 }
                hint: *hint_string
            - updateMany:
                filter: *updateMany_filter
                update: *updateMany_update
                hint: *hint_doc
          ordered: true
        expectResult:
          deletedCount: 0
          insertedCount: 0
          insertedIds: { $$unsetOrMatches: {} }
          matchedCount: 4
          modifiedCount: 4
          upsertedCount: 0
          upsertedIds: { }
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                update: *collection_name
                updates:
                  - q: *updateMany_filter
                    u: *updateMany_update
                    multi: true
                    upsert: { $$unsetOrMatches: false }
                    hint: *hint_string
                  - q: *updateMany_filter
                    u: *updateMany_update
                    multi: true
                    upsert: { $$unsetOrMatches: false }
                    hint: *hint_doc
                ordered: true
    outcome:
      - collectionName: *collection_name
        databaseName: *database_name
        documents:
          - { _id: 1, x: 13 }
          - { _id: 2, x: 24 }
          - { _id: 3, x: 33 }
          - { _id: 4, x: 44 }
  - description: 'BulkWrite replaceOne with update hints'
    operations:
      - object: *collection0
        name: bulkWrite
        arguments:
          requests:
            - replaceOne:
                filter: { _id: 3 }
                replacement: { x: 333 }
                hint: *hint_string
            - replaceOne:
                filter: { _id: 4 }
                replacement: { x: 444 }
                hint: *hint_doc
          ordered: true
        expectResult:
          deletedCount: 0
          insertedCount: 0
          insertedIds: { $$unsetOrMatches: {} }
          matchedCount: 2
          modifiedCount: 2
          upsertedCount: 0
          upsertedIds: { }
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                update: *collection_name
                updates:
                  - q: { _id: 3 }
                    u: { x: 333 }
                    multi: { $$unsetOrMatches: false }
                    upsert: { $$unsetOrMatches: false }
                    hint: *hint_string
                  - q: { _id: 4 }
                    u: { x: 444 }
                    multi: { $$unsetOrMatches: false }
                    upsert: { $$unsetOrMatches: false }
                    hint: *hint_doc
                ordered: true
    outcome:
      - collectionName: *collection_name
        databaseName: *database_name
        documents:
          - { _id: 1, x: 11 }
          - { _id: 2, x: 22 }
          - { _id: 3, x: 333 }
          - { _id: 4, x: 444 }
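A sketch of update and replace hints through the Ruby driver's bulk API
(the update command accepts hints on MongoDB 4.2+):

    coll.bulk_write([
      { update_one: { filter: { _id: 1 },
                      update: { '$inc' => { x: 1 } },
                      hint: '_id_' } },
      { replace_one: { filter: { _id: 3 },
                       replacement: { x: 333 },
                       hint: { _id: 1 } } }
    ])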
mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/bulkWrite-update-validation.yml
description: "bulkWrite-update-validation"
schemaVersion: "1.0"
createEntities:
  - client:
      id: &client0 client0
      observeEvents: [ commandStartedEvent ]
  - database:
      id: &database0 database0
      client: *client0
      databaseName: &database0Name crud-tests
  - collection:
      id: &collection0 collection0
      database: *database0
      collectionName: &collection0Name coll0
initialData: &initialData
  - collectionName: *collection0Name
    databaseName: *database0Name
    documents:
      - { _id: 1, x: 11 }
      - { _id: 2, x: 22 }
      - { _id: 3, x: 33 }
tests:
  - description: "BulkWrite replaceOne prohibits atomic modifiers"
    operations:
      - name: bulkWrite
        object: *collection0
        arguments:
          requests:
            - replaceOne:
                filter: { _id: 1 }
                replacement: { $set: { x: 22 } }
        expectError:
          isClientError: true
    expectEvents:
      - client: *client0
        events: []
    outcome: *initialData
  - description: "BulkWrite updateOne requires atomic modifiers"
    operations:
      - name: bulkWrite
        object: *collection0
        arguments:
          requests:
            - updateOne:
                filter: { _id: 1 }
                update: { x: 22 }
        expectError:
          isClientError: true
    expectEvents:
      - client: *client0
        events: []
    outcome: *initialData
  - description: "BulkWrite updateMany requires atomic modifiers"
    operations:
      - name: bulkWrite
        object: *collection0
        arguments:
          requests:
            - updateMany:
                filter: { _id: { $gt: 1 } }
                update: { x: 44 }
        expectError:
          isClientError: true
    expectEvents:
      - client: *client0
        events: []
    outcome: *initialData
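The client-side errors above come from the driver validating update and
replacement documents before anything is sent; a sketch (the exact error
class raised depends on the driver version):

    begin
      coll.replace_one({ _id: 1 }, { '$set' => { x: 22 } })
    rescue Mongo::Error => e
      warn e.message # replacement documents may not contain update operators
    end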
                    multi: true
                    upsert: { $$unsetOrMatches: false }
    outcome:
      - collectionName: *collection0Name
        databaseName: *database0Name
        documents:
          - { _id: 1, foo: {}, a.b: 1 }
  - description: "Updating document to set dollar-prefixed key in embedded doc on 5.0+ server"
    runOnRequirements:
      - minServerVersion: "5.0"
    operations:
      - name: bulkWrite
        object: *collection0
        arguments:
          requests:
            - updateMany:
                filter: { _id: 1 }
                update: &dollarPrefixedKeyInEmbedded
                  - { $set: { foo: { $setField: { field: { $literal: $a }, value: 1, input: $foo } } } }
        expectResult: *bulkWriteResult
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                update: *collection0Name
                updates:
                  - q: { _id: 1 }
                    u: *dollarPrefixedKeyInEmbedded
                    multi: true
                    upsert: { $$unsetOrMatches: false }
    outcome:
      - collectionName: *collection0Name
        databaseName: *database0Name
        documents:
          - { _id: 1, foo: { $a: 1 } }
  - description: "Updating document to set dotted key in embedded doc on 5.0+ server"
    runOnRequirements:
      - minServerVersion: "5.0"
    operations:
      - name: bulkWrite
        object: *collection0
        arguments:
          requests:
            - updateMany:
                filter: { _id: 1 }
                update: &dottedKeyInEmbedded
                  - { $set: { foo: { $setField: { field: { $literal: a.b }, value: 1, input: $foo } } } }
        expectResult: *bulkWriteResult
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                update: *collection0Name
                updates:
                  - q: { _id: 1 }
                    u: *dottedKeyInEmbedded
                    multi: true
                    upsert: { $$unsetOrMatches: false }
    outcome:
      - collectionName: *collection0Name
        databaseName: *database0Name
        documents:
          - { _id: 1, foo: { a.b: 1 } }

mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/bulkWrite-updateMany-hint-unacknowledged.yml

description: bulkWrite-updateMany-hint-unacknowledged

schemaVersion: '1.0'

createEntities:
  - client:
      id: &client0 client0
      observeEvents:
        - commandStartedEvent
  - database:
      id: &database0 database0
      client: *client0
      databaseName: &database0Name db0
  - collection:
      id: &collection0 collection0
      database: *database0
      collectionName: &collection0Name coll0
      collectionOptions:
        writeConcern: { w: 0 }

initialData:
  - collectionName: *collection0Name
    databaseName: *database0Name
    documents: &documents
      - { _id: 1, x: 11 }
      - { _id: 2, x: 22 }
      - { _id: 3, x: 33 }

tests:
  - description: "Unacknowledged updateMany with hint string fails with client-side error on pre-4.2 server"
    runOnRequirements:
      - maxServerVersion: "4.0.99"
    operations:
      - object: *collection0
        name: bulkWrite
        arguments:
          requests:
            - updateMany:
                filter: &filter { _id: { $gt: 1 } }
                update: &update { $inc: { x: 1 } }
                hint: _id_
        expectError:
          isClientError: true
    expectEvents: &noEvents
      - client: *client0
        events: []
  - description: "Unacknowledged updateMany with hint document fails with client-side error on pre-4.2 server"
    runOnRequirements:
      - maxServerVersion: "4.0.99"
    operations:
      - object: *collection0
        name: bulkWrite
        arguments:
          requests:
            - updateMany:
                filter: *filter
                update: *update
                hint: { _id: 1 }
        expectError:
          isClientError: true
    expectEvents: *noEvents
  - description: "Unacknowledged updateMany with hint string on 4.2+ server"
    runOnRequirements:
      - minServerVersion: "4.2.0"
    operations:
      - object: *collection0
        name: bulkWrite
        arguments:
          requests:
            - updateMany:
                filter: *filter
                update: *update
                hint: _id_
        expectResult: &unacknowledgedResult { $$unsetOrMatches: { acknowledged: { $$unsetOrMatches: false } } }
    expectEvents: &events
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                update: *collection0Name
                updates:
                  - q: *filter
                    u: *update
                    multi: true
                    upsert: { $$unsetOrMatches: false }
                    hint: { $$type: [ string, object ]}
                writeConcern: { w: 0 }
  - description: "Unacknowledged updateMany with hint document on 4.2+ server"
    runOnRequirements:
      - minServerVersion: "4.2.0"
    operations:
      - object: *collection0
        name: bulkWrite
        arguments:
          requests:
            - updateMany:
                filter: *filter
                update: *update
                hint: { _id: 1 }
        expectResult: *unacknowledgedResult
    expectEvents: *events

mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/bulkWrite-updateMany-let.yml

description: "BulkWrite updateMany-let"

schemaVersion: "1.0"

createEntities:
  - client:
      id: &client0 client0
      observeEvents: [ commandStartedEvent ]
  - database:
      id: &database0 database0
      client: *client0
      databaseName: &database0Name crud-tests
  - collection:
      id: &collection0 collection0
      database: *database0
      collectionName: &collection0Name coll0

initialData:
  - collectionName: *collection0Name
    databaseName: *database0Name
    documents:
      - { _id: 1, x: 20 }
      - { _id: 2, x: 21 }

tests:
  - description: "BulkWrite updateMany with let option"
    runOnRequirements:
      - minServerVersion: "5.0"
    operations:
      - object: *collection0
        name: bulkWrite
        arguments:
          requests:
            - updateMany:
                filter: &filter
                  $expr:
                    $eq: [ "$_id", "$$id" ]
                update: &update
                  - $set:
                      x: 21
          let: &let
            id: 1
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                update: *collection0Name
                updates:
                  - q: *filter
                    u: *update
                    multi: true
                    upsert: { $$unsetOrMatches: false }
                let: *let
    outcome:
      - collectionName: *collection0Name
        databaseName: *database0Name
        documents:
          - { _id: 1, x: 21 }
          - { _id: 2, x: 21 }
  - description: "BulkWrite updateMany with let option unsupported (server-side error)"
    runOnRequirements:
      - minServerVersion: "4.2.0"
        maxServerVersion: "4.9"
    operations:
      - object: *collection0
        name: bulkWrite
        arguments:
          requests:
            - updateMany:
                filter: *filter
                update: *update
          let: *let
        expectError:
          errorContains: "'update.let' is an unknown field"
          isClientError: false
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                update: *collection0Name
                updates:
                  - q: *filter
                    u: *update
                    multi: true
                    upsert: { $$unsetOrMatches: false }
                let: *let
    outcome:
      - collectionName: *collection0Name
        databaseName: *database0Name
        documents:
          - { _id: 1, x: 20 }
          - { _id: 2, x: 21 }

mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/bulkWrite-updateOne-dots_and_dollars.yml

description: "bulkWrite-updateOne-dots_and_dollars"

schemaVersion: "1.0"

createEntities:
  - client:
      id: &client0 client0
      observeEvents: [ commandStartedEvent ]
  - database:
      id: &database0 database0
      client: *client0
      databaseName: &database0Name crud-tests
  - collection:
      id: &collection0 collection0
      database: *database0
      collectionName: &collection0Name coll0

initialData: &initialData
  - collectionName: *collection0Name
    databaseName: *database0Name
    documents:
      - { _id: 1, foo: {} }

tests:
  - description: "Updating document to set top-level dollar-prefixed key on 5.0+ server"
    runOnRequirements:
      - minServerVersion: "5.0"
    operations:
      - name: bulkWrite
        object: *collection0
        arguments:
          requests:
            - updateOne:
                filter: { _id: 1 }
                update: &dollarPrefixedKey
                  - { $replaceWith: { $setField: { field: { $literal: $a }, value: 1, input: $$ROOT } } }
        expectResult: &bulkWriteResult
          deletedCount: 0
          insertedCount: 0
          insertedIds: { $$unsetOrMatches: { } }
          matchedCount: 1
          modifiedCount: 1
          upsertedCount: 0
          upsertedIds: { }
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                update: *collection0Name
                updates:
                  - q: { _id: 1 }
                    u: *dollarPrefixedKey
                    multi: { $$unsetOrMatches: false }
                    upsert: { $$unsetOrMatches: false }
    outcome:
      - collectionName: *collection0Name
        databaseName: *database0Name
        documents:
          - { _id: 1, foo: {}, $a: 1 }
  - description: "Updating document to set top-level dotted key on 5.0+ server"
    runOnRequirements:
      - minServerVersion: "5.0"
    operations:
      - name: bulkWrite
        object: *collection0
        arguments:
          requests:
            - updateOne:
                filter: { _id: 1 }
                update: &dottedKey
                  - { $replaceWith: { $setField: { field: { $literal: a.b }, value: 1, input: $$ROOT } } }
        expectResult: *bulkWriteResult
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                update: *collection0Name
                updates:
                  - q: { _id: 1 }
                    u: *dottedKey
                    multi: { $$unsetOrMatches: false }
                    upsert: { $$unsetOrMatches: false }
    outcome:
      - collectionName: *collection0Name
        databaseName: *database0Name
        documents:
          - { _id: 1, foo: {}, a.b: 1 }
  - description: "Updating document to set dollar-prefixed key in embedded doc on 5.0+ server"
    runOnRequirements:
      - minServerVersion: "5.0"
    operations:
      - name: bulkWrite
        object: *collection0
        arguments:
          requests:
            - updateOne:
                filter: { _id: 1 }
                update: &dollarPrefixedKeyInEmbedded
                  - { $set: { foo: { $setField: { field: { $literal: $a }, value: 1, input: $foo } } } }
        expectResult: *bulkWriteResult
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                update: *collection0Name
                updates:
                  - q: { _id: 1 }
                    u: *dollarPrefixedKeyInEmbedded
                    multi: { $$unsetOrMatches: false }
                    upsert: { $$unsetOrMatches: false }
    outcome:
      - collectionName: *collection0Name
        databaseName: *database0Name
        documents:
          - { _id: 1, foo: { $a: 1 } }
  - description: "Updating document to set dotted key in embedded doc on 5.0+ server"
    runOnRequirements:
      - minServerVersion: "5.0"
    operations:
      - name: bulkWrite
        object: *collection0
        arguments:
          requests:
            - updateOne:
                filter: { _id: 1 }
                update: &dottedKeyInEmbedded
                  - { $set: { foo: { $setField: { field: { $literal: a.b }, value: 1, input: $foo } } } }
        expectResult: *bulkWriteResult
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                update: *collection0Name
                updates:
                  - q: { _id: 1 }
                    u: *dottedKeyInEmbedded
                    multi: { $$unsetOrMatches: false }
                    upsert: { $$unsetOrMatches: false }
    outcome:
      - collectionName: *collection0Name
        databaseName: *database0Name
        documents:
          - { _id: 1, foo: { a.b: 1 } }

mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/bulkWrite-updateOne-hint-unacknowledged.yml

description: bulkWrite-updateOne-hint-unacknowledged

schemaVersion: '1.0'

createEntities:
  - client:
      id: &client0 client0
      observeEvents:
        - commandStartedEvent
  - database:
      id: &database0 database0
      client: *client0
      databaseName: &database0Name db0
  - collection:
      id: &collection0 collection0
      database: *database0
      collectionName: &collection0Name coll0
      collectionOptions:
        writeConcern: { w: 0 }

initialData:
  - collectionName: *collection0Name
    databaseName: *database0Name
    documents: &documents
      - { _id: 1, x: 11 }
      - { _id: 2, x: 22 }

tests:
  - description: "Unacknowledged updateOne with hint string fails with client-side error on pre-4.2 server"
    runOnRequirements:
      - maxServerVersion: "4.0.99"
    operations:
      - object: *collection0
        name: bulkWrite
        arguments:
          requests:
            - updateOne:
                filter: &filter { _id: { $gt: 1 } }
                update: &update { $inc: { x: 1 } }
                hint: _id_
        expectError:
          isClientError: true
    expectEvents: &noEvents
      - client: *client0
        events: []
  - description: "Unacknowledged updateOne with hint document fails with client-side error on pre-4.2 server"
    runOnRequirements:
      - maxServerVersion: "4.0.99"
    operations:
      - object: *collection0
        name: bulkWrite
        arguments:
          requests:
            - updateOne:
                filter: *filter
                update: *update
                hint: { _id: 1 }
        expectError:
          isClientError: true
    expectEvents: *noEvents
  - description: "Unacknowledged updateOne with hint string on 4.2+ server"
    runOnRequirements:
      - minServerVersion: "4.2.0"
    operations:
      - object: *collection0
        name: bulkWrite
        arguments:
          requests:
            - updateOne:
                filter: *filter
                update: *update
                hint: _id_
        expectResult: &unacknowledgedResult { $$unsetOrMatches: { acknowledged: { $$unsetOrMatches: false } } }
    expectEvents: &events
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                update: *collection0Name
                updates:
                  - q: *filter
                    u: *update
                    multi: { $$unsetOrMatches: false }
                    upsert: { $$unsetOrMatches: false }
                    hint: { $$type: [ string, object ]}
                writeConcern: { w: 0 }
  - description: "Unacknowledged updateOne with hint document on 4.2+ server"
    runOnRequirements:
      - minServerVersion: "4.2.0"
    operations:
      - object: *collection0
        name: bulkWrite
        arguments:
          requests:
            - updateOne:
                filter: *filter
                update: *update
                hint: { _id: 1 }
        expectResult: *unacknowledgedResult
    expectEvents: *events

mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/bulkWrite-updateOne-let.yml

description: "BulkWrite updateOne-let"

schemaVersion: "1.0"

createEntities:
  - client:
      id: &client0 client0
      observeEvents: [ commandStartedEvent ]
  - database:
      id: &database0 database0
      client: *client0
      databaseName: &database0Name crud-tests
  - collection:
      id: &collection0 collection0
      database: *database0
      collectionName: &collection0Name coll0

initialData:
  - collectionName: *collection0Name
    databaseName: *database0Name
    documents:
      - { _id: 1, x: 20 }
      - { _id: 2, x: 21 }

tests:
  - description: "BulkWrite updateOne with let option"
    runOnRequirements:
      - minServerVersion: "5.0"
    operations:
      - object: *collection0
        name: bulkWrite
        arguments:
          requests:
            - updateOne:
                filter: &filter
                  $expr:
                    $eq: [ "$_id", "$$id" ]
                update: &update
                  - $set:
                      x: 22
          let: &let
            id: 1
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                update: *collection0Name
                updates:
                  - q: *filter
                    u: *update
                    multi: { $$unsetOrMatches: false }
                    upsert: { $$unsetOrMatches: false }
                let: *let
    outcome:
      - collectionName: *collection0Name
        databaseName: *database0Name
        documents:
          - { _id: 1, x: 22 }
          - { _id: 2, x: 21 }
  - description: "BulkWrite updateOne with let option unsupported (server-side error)"
    runOnRequirements:
      - minServerVersion: "4.2.0"
        maxServerVersion: "4.9"
    operations:
      - object: *collection0
        name: bulkWrite
        arguments:
          requests:
            - updateOne:
                filter: *filter
                update: *update
          let: *let
        expectError:
          errorContains: "'update.let' is an unknown field"
          isClientError: false
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                update: *collection0Name
                updates:
                  - q: *filter
                    u: *update
                    multi: { $$unsetOrMatches: false }
                    upsert: { $$unsetOrMatches: false }
                let: *let
    outcome:
      - collectionName: *collection0Name
        databaseName: *database0Name
        documents:
          - { _id: 1, x: 20 }
          - { _id: 2, x: 21 }
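
# The `let` fixtures above map onto the Ruby driver's Collection#bulk_write,
# which forwards a top-level :let option to the update command in recent
# driver versions. A minimal, hypothetical sketch (the `collection` handle and
# URI below are illustrative, not part of the fixtures):
#
#   require 'mongo'
#   collection = Mongo::Client.new('mongodb://127.0.0.1:27017/crud-tests')[:coll0]
#   collection.bulk_write(
#     [ { update_one: { filter: { '$expr' => { '$eq' => ['$_id', '$$id'] } },
#                       update: { '$set' => { x: 22 } } } } ],
#     let: { id: 1 }
#   )
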
mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/countDocuments-comment.yml

description: "countDocuments-comment"

schemaVersion: "1.0"

createEntities:
  - client:
      id: &client0 client0
      observeEvents: [ commandStartedEvent ]
  - database:
      id: &database0 database0
      client: *client0
      databaseName: &database0Name countDocuments-comments-test
  - collection:
      id: &collection0 collection0
      database: *database0
      collectionName: &collection0Name coll0

initialData:
  - collectionName: *collection0Name
    databaseName: *database0Name
    documents:
      - { _id: 1, x: 11 }
      - { _id: 2, x: 22 }
      - { _id: 3, x: 33 }

tests:
  - description: "countDocuments with document comment"
    runOnRequirements:
      - minServerVersion: 4.4.0
    operations:
      - name: countDocuments
        object: *collection0
        arguments:
          filter: {}
          comment: &documentComment { key: "value" }
        expectResult: 3
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                aggregate: *collection0Name
                pipeline: &pipeline
                  - $match: {}
                  - $group: { _id: 1, n: { $sum: 1 } }
                comment: *documentComment
              commandName: aggregate
              databaseName: *database0Name
  - description: "countDocuments with string comment"
    runOnRequirements:
      - minServerVersion: 3.6.0
    operations:
      - name: countDocuments
        object: *collection0
        arguments:
          filter: {}
          comment: &stringComment "comment"
        expectResult: 3
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                aggregate: *collection0Name
                pipeline: *pipeline
                comment: *stringComment
              commandName: aggregate
              databaseName: *database0Name
  - description: "countDocuments with document comment on less than 4.4.0 - server error"
    runOnRequirements:
      - minServerVersion: 3.6.0
        maxServerVersion: 4.3.99
    operations:
      - name: countDocuments
        object: *collection0
        arguments:
          filter: {}
          comment: *documentComment
        expectError:
          isClientError: false
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                aggregate: *collection0Name
                pipeline: *pipeline
                comment: *documentComment
              commandName: aggregate
              databaseName: *database0Name
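
# countDocuments is built on the aggregate command with the $match/$group
# pipeline asserted above, so the comment option rides along on aggregate.
# A rough Ruby driver equivalent, assuming `collection` is a Mongo::Collection
# (illustrative only; comment support on these helpers is recent):
#
#   collection.count_documents({}, comment: { key: 'value' })
#   # roughly what is sent:
#   collection.aggregate(
#     [ { '$match' => {} }, { '$group' => { _id: 1, n: { '$sum' => 1 } } } ],
#     comment: { key: 'value' }
#   ).first['n']
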
mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/db-aggregate-write-readPreference.yml

description: db-aggregate-write-readPreference

schemaVersion: '1.4'

runOnRequirements:
  # 3.6+ non-standalone is needed to utilize $readPreference in OP_MSG.
  # Serverless does not support $listLocalSessions and $currentOp stages, and
  # mongos does not allow combining them with $out or $merge.
  - minServerVersion: "3.6"
    # https://jira.mongodb.org/browse/DRIVERS-291
    maxServerVersion: "7.99"
    topologies: [ replicaset ]
    serverless: forbid
    # SERVER-90047: failures against latest server necessitate adding this for now
    maxServerVersion: "8.0.0"

_yamlAnchors:
  readConcern: &readConcern
    level: &readConcernLevel "local"
  writeConcern: &writeConcern
    w: &writeConcernW 1

createEntities:
  - client:
      id: &client0 client0
      observeEvents:
        - commandStartedEvent
      # Used to test that read and write concerns are still inherited
      uriOptions:
        readConcernLevel: *readConcernLevel
        w: *writeConcernW
  - database:
      id: &database0 database0
      client: *client0
      databaseName: &database0Name db0
      databaseOptions:
        readPreference: &readPreference
          # secondaryPreferred is specified for compatibility with clusters that
          # may not have a secondary (e.g. each shard is only a primary).
          mode: secondaryPreferred
          # maxStalenessSeconds is specified to ensure that drivers forward the
          # read preference to mongos or a load balancer. That would not be the
          # case with only secondaryPreferred.
          maxStalenessSeconds: 600
  - collection:
      id: &collection0 collection0
      database: *database0
      collectionName: &collection0Name coll0

initialData:
  - collectionName: *collection0Name
    databaseName: *database0Name
    documents: []

tests:
  - description: "Database-level aggregate with $out includes read preference for 5.0+ server"
    runOnRequirements:
      - minServerVersion: "5.0"
        # https://jira.mongodb.org/browse/RUBY-3539
        maxServerVersion: "7.99"
        serverless: "forbid"
    operations:
      - object: *database0
        name: aggregate
        arguments:
          pipeline: &outPipeline
            - { $listLocalSessions: {} }
            - { $limit: 1 }
            - { $addFields: { _id: 1 } }
            - { $project: { _id: 1 } }
            - { $out: *collection0Name }
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                aggregate: 1
                pipeline: *outPipeline
                $readPreference: *readPreference
                readConcern: *readConcern
                writeConcern: *writeConcern
    outcome: &outcome
      - collectionName: *collection0Name
        databaseName: *database0Name
        documents:
          - { _id: 1 }
  - description: "Database-level aggregate with $out omits read preference for pre-5.0 server"
    runOnRequirements:
      # MongoDB 4.2 introduced support for read concerns and write stages.
      # Pre-4.2 servers may allow a "local" read concern anyway, but some
      # drivers may avoid inheriting a client-level read concern for pre-4.2.
      - minServerVersion: "4.2"
        maxServerVersion: "4.4.99"
        serverless: "forbid"
    operations:
      - object: *database0
        name: aggregate
        arguments:
          pipeline: *outPipeline
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                aggregate: 1
                pipeline: *outPipeline
                $readPreference: { $$exists: false }
                readConcern: *readConcern
                writeConcern: *writeConcern
    outcome: *outcome
  - description: "Database-level aggregate with $merge includes read preference for 5.0+ server"
    runOnRequirements:
      - minServerVersion: "5.0"
    operations:
      - object: *database0
        name: aggregate
        arguments:
          pipeline: &mergePipeline
            - { $listLocalSessions: {} }
            - { $limit: 1 }
            - { $addFields: { _id: 1 } }
            - { $project: { _id: 1 } }
            - { $merge: { into: *collection0Name } }
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                aggregate: 1
                pipeline: *mergePipeline
                $readPreference: *readPreference
                readConcern: *readConcern
                writeConcern: *writeConcern
    outcome: *outcome
  - description: "Database-level aggregate with $merge omits read preference for pre-5.0 server"
    runOnRequirements:
      - minServerVersion: "4.2"
        maxServerVersion: "4.4.99"
    operations:
      - object: *database0
        name: aggregate
        arguments:
          pipeline: *mergePipeline
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                aggregate: 1
                pipeline: *mergePipeline
                $readPreference: { $$exists: false }
                readConcern: *readConcern
                writeConcern: *writeConcern
    outcome: *outcome

mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/db-aggregate.yml

# This file was created automatically using mongodb-spec-converter.
# Please review the generated file, then remove this notice.
description: db-aggregate

schemaVersion: '1.4'

runOnRequirements:
  - minServerVersion: 3.6.0
    # serverless does not support either of the current database-level aggregation stages ($listLocalSessions and
    # $currentOp)
    serverless: forbid

createEntities:
  - client:
      id: &client0 client0
      observeEvents:
        - commandStartedEvent
  - database:
      id: &database0 database0
      client: client0
      databaseName: &database_name admin
  - collection:
      id: &collection0 collection0
      database: database0
      collectionName: &collection_name crud-v2

tests:
  - description: 'Aggregate with $listLocalSessions'
    operations:
      - object: *database0
        name: aggregate
        arguments:
          pipeline:
            - $listLocalSessions: { }
            - $limit: 1
            - $addFields:
                dummy: 'dummy field'
            - $project:
                _id: 0
                dummy: 1
        expectResult:
          - dummy: 'dummy field'
  - description: 'Aggregate with $listLocalSessions and allowDiskUse'
    operations:
      - object: *database0
        name: aggregate
        arguments:
          pipeline:
            - $listLocalSessions: { }
            - $limit: 1
            - $addFields:
                dummy: 'dummy field'
            - $project:
                _id: 0
                dummy: 1
          allowDiskUse: true
        expectResult:
          - dummy: 'dummy field'

mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/deleteMany-comment.yml

description: "deleteMany-comment"

schemaVersion: "1.0"

createEntities:
  - client:
      id: &client0 client0
      observeEvents: [ commandStartedEvent ]
  - database:
      id: &database0 database0
      client: *client0
      databaseName: &database0Name crud-tests
  - collection:
      id: &collection0 collection0
      database: *database0
      collectionName: &collection0Name coll0

initialData: &initialData
  - collectionName: *collection0Name
    databaseName: *database0Name
    documents:
      - { _id: 1 }
      - { _id: 2, name: "name2" }
      - { _id: 3, name: "name3" }

tests:
  - description: "deleteMany with string comment"
    runOnRequirements:
      - minServerVersion: "4.4"
    operations:
      - name: deleteMany
        object: *collection0
        arguments:
          filter: &filter { _id: { $gt: 1 } }
          comment: "comment"
        expectResult: &expect_result
          deletedCount: 2
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                delete: *collection0Name
                deletes:
                  - q: *filter
                    limit: 0
                comment: "comment"
    outcome: &outcome
      - collectionName: *collection0Name
        databaseName: *database0Name
        documents:
          - { _id: 1 }
  - description: "deleteMany with document comment"
    runOnRequirements:
      - minServerVersion: "4.4"
    operations:
      - name: deleteMany
        object: *collection0
        arguments:
          filter: *filter
          comment: &comment { key: "value" }
        expectResult: *expect_result
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                delete: *collection0Name
                deletes:
                  - q: *filter
                    limit: 0
                comment: *comment
    outcome: *outcome
  - description: "deleteMany with comment - pre 4.4"
    runOnRequirements:
      - minServerVersion: "3.4.0"
        maxServerVersion: "4.2.99"
    operations:
      - name: deleteMany
        object: *collection0
        arguments:
          filter: *filter
          comment: "comment"
        expectError:
          isClientError: false
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                delete: *collection0Name
                deletes:
                  - q: *filter
                    limit: 0
                comment: "comment"
    outcome: *initialData
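
# In the Ruby driver these deleteMany comment cases surface through
# Collection#delete_many, which attaches a :comment option to the delete
# command on 4.4+ servers and lets older servers reject it server-side.
# An illustrative sketch (assuming `collection` is a Mongo::Collection):
#
#   collection.delete_many({ _id: { '$gt' => 1 } }, comment: 'comment')
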
mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/deleteMany-hint-clientError.yml

# This file was created automatically using mongodb-spec-converter.
# Please review the generated file, then remove this notice.
description: deleteMany-hint-clientError

schemaVersion: '1.0'

runOnRequirements:
  - maxServerVersion: 3.3.99

createEntities:
  - client:
      id: &client0 client0
      observeEvents:
        - commandStartedEvent
  - database:
      id: &database0 database0
      client: client0
      databaseName: &database_name crud-v2
  - collection:
      id: &collection0 collection0
      database: database0
      collectionName: &collection_name DeleteMany_hint

initialData:
  - collectionName: *collection_name
    databaseName: *database_name
    documents:
      - _id: 1
        x: 11
      - _id: 2
        x: 22
      - _id: 3
        x: 33

tests:
  - description: 'DeleteMany with hint string unsupported (client-side error)'
    operations:
      - object: *collection0
        name: deleteMany
        arguments:
          filter: &filter
            _id:
              $gt: 1
          hint: _id_
        expectError:
          isError: true
    expectEvents:
      - client: *client0
        events: []
    outcome: &outcome
      - collectionName: *collection_name
        databaseName: *database_name
        documents:
          - _id: 1
            x: 11
          - _id: 2
            x: 22
          - _id: 3
            x: 33
  - description: 'DeleteMany with hint document unsupported (client-side error)'
    operations:
      - object: *collection0
        name: deleteMany
        arguments:
          filter: *filter
          hint:
            _id: 1
        expectError:
          isError: true
    expectEvents:
      - client: *client0
        events: []
    outcome: *outcome

mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/deleteMany-hint-serverError.yml

# This file was created automatically using mongodb-spec-converter.
# Please review the generated file, then remove this notice.
description: deleteMany-hint-serverError

schemaVersion: '1.0'

runOnRequirements:
  - minServerVersion: 3.4.0
    maxServerVersion: 4.3.3

createEntities:
  - client:
      id: &client0 client0
      observeEvents:
        - commandStartedEvent
  - database:
      id: &database0 database0
      client: client0
      databaseName: &database_name crud-v2
  - collection:
      id: &collection0 collection0
      database: database0
      collectionName: &collection_name DeleteMany_hint

initialData:
  - collectionName: *collection_name
    databaseName: *database_name
    documents:
      - _id: 1
        x: 11
      - _id: 2
        x: 22
      - _id: 3
        x: 33

tests:
  - description: 'DeleteMany with hint string unsupported (server-side error)'
    operations:
      - object: *collection0
        name: deleteMany
        arguments:
          filter: &filter
            _id:
              $gt: 1
          hint: _id_
        expectError:
          isError: true
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                delete: *collection_name
                deletes:
                  - q: *filter
                    hint: _id_
                    limit: 0
    outcome: &outcome
      - collectionName: *collection_name
        databaseName: *database_name
        documents:
          - _id: 1
            x: 11
          - _id: 2
            x: 22
          - _id: 3
            x: 33
  - description: 'DeleteMany with hint document unsupported (server-side error)'
    operations:
      - object: *collection0
        name: deleteMany
        arguments:
          filter: *filter
          hint:
            _id: 1
        expectError:
          isError: true
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                delete: *collection_name
                deletes:
                  - q: *filter
                    hint:
                      _id: 1
                    limit: 0
    outcome: *outcome

mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/deleteMany-hint-unacknowledged.yml

description: deleteMany-hint-unacknowledged

schemaVersion: '1.0'

createEntities:
  - client:
      id: &client0 client0
      observeEvents:
        - commandStartedEvent
  - database:
      id: &database0 database0
      client: *client0
      databaseName: &database0Name db0
  - collection:
      id: &collection0 collection0
      database: *database0
      collectionName: &collection0Name coll0
      collectionOptions:
        writeConcern: { w: 0 }

initialData:
  - collectionName: *collection0Name
    databaseName: *database0Name
    documents: &documents
      - { _id: 1, x: 11 }
      - { _id: 2, x: 22 }
      - { _id: 3, x: 33 }

tests:
  - description: "Unacknowledged deleteMany with hint string fails with client-side error on pre-4.4 server"
    runOnRequirements:
      - maxServerVersion: "4.2.99"
    operations:
      - object: *collection0
        name: deleteMany
        arguments:
          filter: &filter { _id: { $gt: 1 } }
          hint: _id_
        expectError:
          isClientError: true
    expectEvents: &noEvents
      - client: *client0
        events: []
  - description: "Unacknowledged deleteMany with hint document fails with client-side error on pre-4.4 server"
    runOnRequirements:
      - maxServerVersion: "4.2.99"
    operations:
      - object: *collection0
        name: deleteMany
        arguments:
          filter: *filter
          hint: { _id: 1 }
        expectError:
          isClientError: true
    expectEvents: *noEvents
  - description: "Unacknowledged deleteMany with hint string on 4.4+ server"
    runOnRequirements:
      - minServerVersion: "4.4.0"
    operations:
      - object: *collection0
        name: deleteMany
        arguments:
          filter: *filter
          hint: _id_
        expectResult: &unacknowledgedResult { $$unsetOrMatches: { acknowledged: { $$unsetOrMatches: false } } }
    expectEvents: &events
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                delete: *collection0Name
                deletes:
                  - q: *filter
                    hint: { $$type: [ string, object ]}
                    limit: 0
                writeConcern: { w: 0 }
  - description: "Unacknowledged deleteMany with hint document on 4.4+ server"
    runOnRequirements:
      - minServerVersion: "4.4.0"
    operations:
      - object: *collection0
        name: deleteMany
        arguments:
          filter: *filter
          hint: { _id: 1 }
        expectResult: *unacknowledgedResult
    expectEvents: *events
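
# The unacknowledged cases above run against a collection configured with
# w:0, so no server response is available and hint support can only be
# enforced client-side. A hypothetical Ruby setup for the same shape (the
# collection options and names are illustrative):
#
#   unack = client[:coll0, write_concern: { w: 0 }]
#   unack.delete_many({ _id: { '$gt' => 1 } }, hint: '_id_')
#   # pre-4.4 servers: the driver raises a client-side error rather than
#   # silently dropping the hint
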
mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/deleteMany-hint.yml

# This file was created automatically using mongodb-spec-converter.
# Please review the generated file, then remove this notice.
description: deleteMany-hint

schemaVersion: '1.0'

runOnRequirements:
  - minServerVersion: 4.3.4

createEntities:
  - client:
      id: &client0 client0
      observeEvents:
        - commandStartedEvent
  - database:
      id: &database0 database0
      client: client0
      databaseName: &database_name crud-v2
  - collection:
      id: &collection0 collection0
      database: database0
      collectionName: &collection_name DeleteMany_hint

initialData:
  - collectionName: *collection_name
    databaseName: *database_name
    documents:
      - _id: 1
        x: 11
      - _id: 2
        x: 22
      - _id: 3
        x: 33

tests:
  - description: 'DeleteMany with hint string'
    operations:
      - object: *collection0
        name: deleteMany
        arguments:
          filter: &filter
            _id:
              $gt: 1
          hint: _id_
        expectResult: &result
          deletedCount: 2
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                delete: *collection_name
                deletes:
                  - q: *filter
                    hint: _id_
                    limit: 0
    outcome: &outcome
      - collectionName: *collection_name
        databaseName: *database_name
        documents:
          - _id: 1
            x: 11
  - description: 'DeleteMany with hint document'
    operations:
      - object: *collection0
        name: deleteMany
        arguments:
          filter: *filter
          hint:
            _id: 1
        expectResult: *result
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                delete: *collection_name
                deletes:
                  - q: *filter
                    hint:
                      _id: 1
                    limit: 0
    outcome: *outcome

mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/deleteMany-let.yml

description: "deleteMany-let"

schemaVersion: "1.0"

createEntities:
  - client:
      id: &client0 client0
      observeEvents: [ commandStartedEvent ]
  - database:
      id: &database0 database0
      client: *client0
      databaseName: &database0Name crud-tests
  - collection:
      id: &collection0 collection0
      database: *database0
      collectionName: &collection0Name coll0

initialData: &initialData
  - collectionName: *collection0Name
    databaseName: *database0Name
    documents:
      - { _id: 1 }
      - { _id: 2, name: "name" }
      - { _id: 3, name: "name" }

tests:
  - description: "deleteMany with let option"
    runOnRequirements:
      - minServerVersion: "5.0"
    operations:
      - name: deleteMany
        object: *collection0
        arguments:
          filter: &filter
            $expr:
              $eq: [ "$name", "$$name" ]
          let: &let0
            name: "name"
        expectResult:
          deletedCount: 2
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                delete: *collection0Name
                deletes:
                  - q: *filter
                    limit: 0
                let: *let0
    outcome:
      - collectionName: *collection0Name
        databaseName: *database0Name
        documents:
          - { _id: 1 }
  - description: "deleteMany with let option unsupported (server-side error)"
    runOnRequirements:
      - minServerVersion: "3.6.0"
        maxServerVersion: "4.4.99"
    operations:
      - name: deleteMany
        object: *collection0
        arguments:
          filter: &filter1
            $expr:
              $eq: [ "$name", "$$name" ]
          let: &let1
            name: "name"
        expectError:
          errorContains: "'delete.let' is an unknown field"
          isClientError: false
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                delete: *collection0Name
                deletes:
                  - q: *filter1
                    limit: 0
                let: *let1
    outcome:
      - collectionName: *collection0Name
        databaseName: *database0Name
        documents:
          - { _id: 1 }
          - { _id: 2, name: "name" }
          - { _id: 3, name: "name" }

mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/deleteOne-comment.yml

description: "deleteOne-comment"

schemaVersion: "1.0"

createEntities:
  - client:
      id: &client0 client0
      observeEvents: [ commandStartedEvent ]
  - database:
      id: &database0 database0
      client: *client0
      databaseName: &database0Name crud-tests
  - collection:
      id: &collection0 collection0
      database: *database0
      collectionName: &collection0Name coll0

initialData: &initialData
  - collectionName: *collection0Name
    databaseName: *database0Name
    documents:
      - { _id: 1 }
      - { _id: 2, name: "name" }
      - { _id: 3, name: "name" }

tests:
  - description: "deleteOne with string comment"
    runOnRequirements:
      - minServerVersion: "4.4"
    operations:
      - name: deleteOne
        object: *collection0
        arguments:
          filter: &filter { _id: 1 }
          comment: "comment"
        expectResult: &expect_result
          deletedCount: 1
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                delete: *collection0Name
                deletes:
                  - q: *filter
                    limit: 1
                comment: "comment"
    outcome: &outcome
      - collectionName: *collection0Name
        databaseName: *database0Name
        documents:
          - { _id: 2, name: "name" }
          - { _id: 3, name: "name" }
  - description: "deleteOne with document comment"
    runOnRequirements:
      - minServerVersion: "4.4"
    operations:
      - name: deleteOne
        object: *collection0
        arguments:
          filter: *filter
          comment: &comment { key: "value" }
        expectResult: *expect_result
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                delete: *collection0Name
                deletes:
                  - q: *filter
                    limit: 1
                comment: *comment
    outcome: *outcome
  - description: "deleteOne with comment - pre 4.4"
    runOnRequirements:
      - minServerVersion: "3.4.0"
        maxServerVersion: "4.2.99"
    operations:
      - name: deleteOne
        object: *collection0
        arguments:
          filter: *filter
          comment: "comment"
        expectError:
          isClientError: false
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                delete: *collection0Name
                deletes:
                  - q: *filter
                    limit: 1
                comment: "comment"
    outcome: *initialData

mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/deleteOne-errorResponse.yml

description: "deleteOne-errorResponse"

schemaVersion: "1.12"

createEntities:
  - client:
      id: &client0 client0
      useMultipleMongoses: false
  - database:
      id: &database0 database0
      client: *client0
      databaseName: &database0Name crud-tests
  - collection:
      id: &collection0 collection0
      database: *database0
      collectionName: &collection0Name test

tests:
  # Some drivers may still need to skip this test because the CRUD spec does not
  # prescribe how drivers should formulate a WriteException beyond collecting a
  # write or write concern error.
  - description: "delete operations support errorResponse assertions"
    runOnRequirements:
      - minServerVersion: "4.0.0"
        topologies: [ single, replicaset ]
      - minServerVersion: "4.2.0"
        topologies: [ sharded ]
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *client0
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: [ delete ]
              errorCode: &errorCode 8 # UnknownError
      - name: deleteOne
        object: *collection0
        arguments:
          filter: { _id: 1 }
        expectError:
          errorCode: *errorCode
          errorResponse:
            code: *errorCode
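
# The errorResponse test above relies on the server-side failCommand fail
# point. Outside the unified runner, the same fail point can be configured
# manually against the admin database; a rough Ruby sketch (illustrative,
# not part of the fixture):
#
#   client.use(:admin).database.command(
#     configureFailPoint: 'failCommand',
#     mode: { times: 1 },
#     data: { failCommands: ['delete'], errorCode: 8 }
#   )
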
mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/deleteOne-hint-clientError.yml

# This file was created automatically using mongodb-spec-converter.
# Please review the generated file, then remove this notice.
description: deleteOne-hint-clientError

schemaVersion: '1.0'

runOnRequirements:
  - maxServerVersion: 3.3.99

createEntities:
  - client:
      id: &client0 client0
      observeEvents:
        - commandStartedEvent
  - database:
      id: &database0 database0
      client: client0
      databaseName: &database_name crud-v2
  - collection:
      id: &collection0 collection0
      database: database0
      collectionName: &collection_name DeleteOne_hint

initialData:
  - collectionName: *collection_name
    databaseName: *database_name
    documents:
      - _id: 1
        x: 11
      - _id: 2
        x: 22

tests:
  - description: 'DeleteOne with hint string unsupported (client-side error)'
    operations:
      - object: *collection0
        name: deleteOne
        arguments:
          filter: &filter
            _id: 1
          hint: _id_
        expectError:
          isError: true
    expectEvents:
      - client: *client0
        events: []
    outcome: &outcome
      - collectionName: *collection_name
        databaseName: *database_name
        documents:
          - _id: 1
            x: 11
          - _id: 2
            x: 22
  - description: 'DeleteOne with hint document unsupported (client-side error)'
    operations:
      - object: *collection0
        name: deleteOne
        arguments:
          filter: *filter
          hint:
            _id: 1
        expectError:
          isError: true
    expectEvents:
      - client: *client0
        events: []
    outcome: *outcome

mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/deleteOne-hint-serverError.yml

# This file was created automatically using mongodb-spec-converter.
# Please review the generated file, then remove this notice.
description: deleteOne-hint-serverError

schemaVersion: '1.0'

runOnRequirements:
  - minServerVersion: 3.4.0
    maxServerVersion: 4.3.3

createEntities:
  - client:
      id: &client0 client0
      observeEvents:
        - commandStartedEvent
  - database:
      id: &database0 database0
      client: client0
      databaseName: &database_name crud-v2
  - collection:
      id: &collection0 collection0
      database: database0
      collectionName: &collection_name DeleteOne_hint

initialData:
  - collectionName: *collection_name
    databaseName: *database_name
    documents:
      - _id: 1
        x: 11
      - _id: 2
        x: 22

tests:
  - description: 'DeleteOne with hint string unsupported (server-side error)'
    operations:
      - object: *collection0
        name: deleteOne
        arguments:
          filter: &filter
            _id: 1
          hint: _id_
        expectError:
          isError: true
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                delete: *collection_name
                deletes:
                  - q: *filter
                    hint: _id_
                    limit: 1
    outcome: &outcome
      - collectionName: *collection_name
        databaseName: *database_name
        documents:
          - _id: 1
            x: 11
          - _id: 2
            x: 22
  - description: 'DeleteOne with hint document unsupported (server-side error)'
    operations:
      - object: *collection0
        name: deleteOne
        arguments:
          filter: *filter
          hint:
            _id: 1
        expectError:
          isError: true
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                delete: *collection_name
                deletes:
                  - q: *filter
                    hint:
                      _id: 1
                    limit: 1
    outcome: *outcome

mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/deleteOne-hint-unacknowledged.yml

description: deleteOne-hint-unacknowledged

schemaVersion: '1.0'

createEntities:
  - client:
      id: &client0 client0
      observeEvents:
        - commandStartedEvent
  - database:
      id: &database0 database0
      client: *client0
      databaseName: &database0Name db0
  - collection:
      id: &collection0 collection0
      database: *database0
      collectionName: &collection0Name coll0
      collectionOptions:
        writeConcern: { w: 0 }

initialData:
  - collectionName: *collection0Name
    databaseName: *database0Name
    documents: &documents
      - { _id: 1, x: 11 }
      - { _id: 2, x: 22 }

tests:
  - description: "Unacknowledged deleteOne with hint string fails with client-side error on pre-4.4 server"
    runOnRequirements:
      - maxServerVersion: "4.2.99"
    operations:
      - object: *collection0
        name: deleteOne
        arguments:
          filter: &filter { _id: { $gt: 1 } }
          hint: _id_
        expectError:
          isClientError: true
    expectEvents: &noEvents
      - client: *client0
        events: []
  - description: "Unacknowledged deleteOne with hint document fails with client-side error on pre-4.4 server"
    runOnRequirements:
      - maxServerVersion: "4.2.99"
    operations:
      - object: *collection0
        name: deleteOne
        arguments:
          filter: *filter
          hint: { _id: 1 }
        expectError:
          isClientError: true
    expectEvents: *noEvents
  - description: "Unacknowledged deleteOne with hint string on 4.4+ server"
    runOnRequirements:
      - minServerVersion: "4.4.0"
    operations:
      - object: *collection0
        name: deleteOne
        arguments:
          filter: *filter
          hint: _id_
        expectResult: &unacknowledgedResult { $$unsetOrMatches: { acknowledged: { $$unsetOrMatches: false } } }
    expectEvents: &events
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                delete: *collection0Name
                deletes:
                  - q: *filter
                    hint: { $$type: [ string, object ]}
                    limit: 1
                writeConcern: { w: 0 }
  - description: "Unacknowledged deleteOne with hint document on 4.4+ server"
    runOnRequirements:
      - minServerVersion: "4.4.0"
    operations:
      - object: *collection0
        name: deleteOne
        arguments:
          filter: *filter
          hint: { _id: 1 }
        expectResult: *unacknowledgedResult
    expectEvents: *events

mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/deleteOne-hint.yml

# This file was created automatically using mongodb-spec-converter.
# Please review the generated file, then remove this notice.
description: deleteOne-hint

schemaVersion: '1.0'

runOnRequirements:
  - minServerVersion: 4.3.4

createEntities:
  - client:
      id: &client0 client0
      observeEvents:
        - commandStartedEvent
  - database:
      id: &database0 database0
      client: client0
      databaseName: &database_name crud-v2
  - collection:
      id: &collection0 collection0
      database: database0
      collectionName: &collection_name DeleteOne_hint

initialData:
  - collectionName: *collection_name
    databaseName: *database_name
    documents:
      - _id: 1
        x: 11
      - _id: 2
        x: 22

tests:
  - description: 'DeleteOne with hint string'
    operations:
      - object: *collection0
        name: deleteOne
        arguments:
          filter: &filter
            _id: 1
          hint: _id_
        expectResult: &result
          deletedCount: 1
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                delete: *collection_name
                deletes:
                  - q: *filter
                    hint: _id_
                    limit: 1
    outcome: &outcome
      - collectionName: *collection_name
        databaseName: *database_name
        documents:
          - _id: 2
            x: 22
  - description: 'DeleteOne with hint document'
    operations:
      - object: *collection0
        name: deleteOne
        arguments:
          filter: *filter
          hint:
            _id: 1
        expectResult: *result
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                delete: *collection_name
                deletes:
                  - q: *filter
                    hint:
                      _id: 1
                    limit: 1
    outcome: *outcome
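
# On servers that support it (4.3.4+ in these fixtures), the delete command
# accepts a hint as either an index name or an index key pattern. Both forms
# as they might be passed through the Ruby driver (assuming `collection` is a
# Mongo::Collection; illustrative only):
#
#   collection.delete_one({ _id: 1 }, hint: '_id_')       # index name
#   collection.delete_one({ _id: 1 }, hint: { _id: 1 })   # index key pattern
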
description: "deleteOne with let option" runOnRequirements: - minServerVersion: "5.0" operations: - name: deleteOne object: *collection0 arguments: filter: &filter $expr: $eq: [ "$_id", "$$id" ] let: &let0 id: 1 expectResult: deletedCount: 1 expectEvents: - client: *client0 events: - commandStartedEvent: command: delete: *collection0Name deletes: - q: *filter limit: 1 let: *let0 outcome: - collectionName: *collection0Name databaseName: *database0Name documents: - { _id: 2 } - description: "deleteOne with let option unsupported (server-side error)" runOnRequirements: - minServerVersion: "3.6.0" maxServerVersion: "4.4.99" operations: - name: deleteOne object: *collection0 arguments: filter: &filter1 $expr: $eq: [ "$_id", "$$id" ] let: &let1 id: 1 expectError: errorContains: "'delete.let' is an unknown field" isClientError: false expectEvents: - client: *client0 events: - commandStartedEvent: command: delete: *collection0Name deletes: - q: *filter1 limit: 1 let: *let1 outcome: - collectionName: *collection0Name databaseName: *database0Name documents: - { _id: 1 } - { _id: 2 } mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/distinct-comment.yml000066400000000000000000000054251505113246500276610ustar00rootroot00000000000000description: "distinct-comment" schemaVersion: "1.0" createEntities: - client: id: &client0 client0 observeEvents: [ commandStartedEvent ] - database: id: &database0 database0 client: *client0 databaseName: &database0Name distinct-comment-tests - collection: id: &collection0 collection0 database: *database0 collectionName: &collection0Name coll0 initialData: - collectionName: *collection0Name databaseName: *database0Name documents: - { _id: 1, x: 11 } - { _id: 2, x: 22 } - { _id: 3, x: 33 } tests: - description: "distinct with document comment" runOnRequirements: # https://jira.mongodb.org/browse/SERVER-44847 # Server supports distinct with comment of any type for comment starting from 4.4.14. 
- minServerVersion: "4.4.14" operations: - name: distinct object: *collection0 arguments: fieldName: &fieldName x filter: &filter {} comment: &documentComment { key: "value"} expectResult: [ 11, 22, 33 ] expectEvents: - client: *client0 events: - commandStartedEvent: command: distinct: *collection0Name key: *fieldName query: *filter comment: *documentComment commandName: distinct databaseName: *database0Name - description: "distinct with string comment" runOnRequirements: - minServerVersion: "4.4.0" operations: - name: distinct object: *collection0 arguments: fieldName: *fieldName filter: *filter comment: &stringComment "comment" expectResult: [ 11, 22, 33 ] expectEvents: - client: *client0 events: - commandStartedEvent: command: distinct: *collection0Name key: *fieldName query: *filter comment: *stringComment commandName: distinct databaseName: *database0Name - description: "distinct with document comment - pre 4.4, server error" runOnRequirements: - minServerVersion: "3.6.0" maxServerVersion: "4.4.13" operations: - name: distinct object: *collection0 arguments: fieldName: *fieldName filter: *filter comment: *documentComment expectError: isClientError: false expectEvents: - client: *client0 events: - commandStartedEvent: command: distinct: *collection0Name key: *fieldName query: *filter comment: *documentComment commandName: distinct databaseName: *database0Name mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/estimatedDocumentCount-comment.yml000066400000000000000000000060421505113246500325230ustar00rootroot00000000000000description: "estimatedDocumentCount-comment" schemaVersion: "1.0" createEntities: - client: id: &client0 client0 observeEvents: [ commandStartedEvent ] - database: id: &database0 database0 client: *client0 databaseName: &database0Name edc-comment-tests - collection: id: &collection0 collection0 database: *database0 collectionName: &collection0Name coll0 initialData: - collectionName: *collection0Name databaseName: *database0Name documents: - { _id: 1, x: 11 } - { _id: 2, x: 22 } - { _id: 3, x: 33 } tests: - description: "estimatedDocumentCount with document comment" runOnRequirements: # https://jira.mongodb.org/browse/SERVER-63315 # Server supports count with comment of any type for comment starting from 4.4.14. - minServerVersion: "4.4.14" operations: - name: estimatedDocumentCount object: *collection0 arguments: comment: &documentComment { key: "value"} expectResult: 3 expectEvents: - client: *client0 events: - commandStartedEvent: command: count: *collection0Name comment: *documentComment commandName: count databaseName: *database0Name - description: "estimatedDocumentCount with string comment" runOnRequirements: - minServerVersion: "4.4.0" operations: - name: estimatedDocumentCount object: *collection0 arguments: comment: &stringComment "comment" expectResult: 3 expectEvents: - client: *client0 events: - commandStartedEvent: command: count: *collection0Name comment: *stringComment commandName: count databaseName: *database0Name - description: "estimatedDocumentCount with document comment - pre 4.4.14, server error" runOnRequirements: - minServerVersion: "3.6.0" maxServerVersion: "4.4.13" # Server does not raise an error if topology is sharded. # https://jira.mongodb.org/browse/SERVER-65954 topologies: [ single, replicaset ] operations: - name: estimatedDocumentCount object: *collection0 arguments: # Even though according to the docs count command does not support any # comment for server version less than 4.4, no error is raised by such # servers. 
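
# As the requirements above encode, the distinct command accepts a string
# comment from server 4.4.0 and a comment of any BSON type only from 4.4.14
# (SERVER-44847). The call shape in the Ruby driver might look like the
# following (assuming `collection` is a Mongo::Collection and that the
# driver version in use forwards :comment; illustrative only):
#
#   collection.distinct(:x, {}, comment: 'comment')          # 4.4.0+
#   collection.distinct(:x, {}, comment: { key: 'value' })   # 4.4.14+
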
mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/estimatedDocumentCount-comment.yml

description: "estimatedDocumentCount-comment"

schemaVersion: "1.0"

createEntities:
  - client:
      id: &client0 client0
      observeEvents: [ commandStartedEvent ]
  - database:
      id: &database0 database0
      client: *client0
      databaseName: &database0Name edc-comment-tests
  - collection:
      id: &collection0 collection0
      database: *database0
      collectionName: &collection0Name coll0

initialData:
  - collectionName: *collection0Name
    databaseName: *database0Name
    documents:
      - { _id: 1, x: 11 }
      - { _id: 2, x: 22 }
      - { _id: 3, x: 33 }

tests:
  - description: "estimatedDocumentCount with document comment"
    runOnRequirements:
      # https://jira.mongodb.org/browse/SERVER-63315
      # Server supports count with comment of any type for comment starting from 4.4.14.
      - minServerVersion: "4.4.14"
    operations:
      - name: estimatedDocumentCount
        object: *collection0
        arguments:
          comment: &documentComment { key: "value"}
        expectResult: 3
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                count: *collection0Name
                comment: *documentComment
              commandName: count
              databaseName: *database0Name
  - description: "estimatedDocumentCount with string comment"
    runOnRequirements:
      - minServerVersion: "4.4.0"
    operations:
      - name: estimatedDocumentCount
        object: *collection0
        arguments:
          comment: &stringComment "comment"
        expectResult: 3
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                count: *collection0Name
                comment: *stringComment
              commandName: count
              databaseName: *database0Name
  - description: "estimatedDocumentCount with document comment - pre 4.4.14, server error"
    runOnRequirements:
      - minServerVersion: "3.6.0"
        maxServerVersion: "4.4.13"
        # Server does not raise an error if topology is sharded.
        # https://jira.mongodb.org/browse/SERVER-65954
        topologies: [ single, replicaset ]
    operations:
      - name: estimatedDocumentCount
        object: *collection0
        arguments:
          # Even though according to the docs count command does not support any
          # comment for server version less than 4.4, no error is raised by such
          # servers. Therefore, we have only one test with a document comment
          # to test server errors.
          # https://jira.mongodb.org/browse/SERVER-63315
          # Server supports count with comment of any type for comment starting from 4.4.14.
          comment: *documentComment
        expectError:
          isClientError: false
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                count: *collection0Name
                comment: *documentComment
              commandName: count
              databaseName: *database0Name
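
# estimatedDocumentCount is backed by the count command (see the fixtures
# below), so it reflects collection metadata rather than a filtered scan.
# Sketch of the Ruby driver calls these fixtures correspond to (assuming
# `collection` is a Mongo::Collection; the comment option is illustrative):
#
#   collection.estimated_document_count                             # plain count
#   collection.estimated_document_count(max_time_ms: 6000)         # with maxTimeMS
#   collection.estimated_document_count(comment: { key: 'value' }) # 4.4.14+
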
mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/estimatedDocumentCount.yml

description: "estimatedDocumentCount"

schemaVersion: "1.0"

createEntities:
  - client:
      id: &client0 client0
      useMultipleMongoses: false # Avoid setting fail points with multiple mongoses
      uriOptions: { retryReads: false } # Avoid retrying fail points with closeConnection
      observeEvents: [ commandStartedEvent ]
  - database:
      id: &database0 database0
      client: *client0
      databaseName: &database0Name edc-tests
  - collection:
      id: &collection0 collection0
      database: *database0
      collectionName: &collection0Name coll0
  - collection:
      # Nonexistent collection intentionally omitted from initialData
      id: &collection1 collection1
      database: *database0
      collectionName: &collection1Name coll1
  - collection:
      id: &collection0View collection0View
      database: *database0
      collectionName: &collection0ViewName coll0view

initialData:
  - collectionName: *collection0Name
    databaseName: *database0Name
    documents:
      - { _id: 1, x: 11 }
      - { _id: 2, x: 22 }
      - { _id: 3, x: 33 }

tests:
  - description: "estimatedDocumentCount always uses count"
    operations:
      - name: estimatedDocumentCount
        object: *collection0
        expectResult: 3
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                count: *collection0Name
              commandName: count
              databaseName: *database0Name
  - description: "estimatedDocumentCount with maxTimeMS"
    operations:
      - name: estimatedDocumentCount
        object: *collection0
        arguments:
          maxTimeMS: 6000
        expectResult: 3
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                count: *collection0Name
                maxTimeMS: 6000
              commandName: count
              databaseName: *database0Name
  - description: "estimatedDocumentCount on non-existent collection"
    operations:
      - name: estimatedDocumentCount
        object: *collection1
        expectResult: 0
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                count: *collection1Name
              commandName: count
              databaseName: *database0Name
  - description: "estimatedDocumentCount errors correctly--command error"
    runOnRequirements:
      - minServerVersion: "4.0.0"
        topologies: [ single, replicaset ]
      - minServerVersion: "4.2.0"
        topologies: [ sharded ]
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *client0
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: [ count ]
              errorCode: 8 # UnknownError
      - name: estimatedDocumentCount
        object: *collection0
        expectError:
          errorCode: 8 # UnknownError
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                count: *collection0Name
              commandName: count
              databaseName: *database0Name
  - description: "estimatedDocumentCount errors correctly--socket error"
    runOnRequirements:
      - minServerVersion: "4.0.0"
        topologies: [ single, replicaset ]
      - minServerVersion: "4.2.0"
        topologies: [ sharded ]
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *client0
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: [ count ]
              closeConnection: true
      - name: estimatedDocumentCount
        object: *collection0
        expectError:
          isError: true
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                count: *collection0Name
              commandName: count
              databaseName: *database0Name
  - description: "estimatedDocumentCount works correctly on views"
    # viewOn option was added to the create command in 3.4
    runOnRequirements:
      - minServerVersion: "3.4.0"
    operations:
      - name: dropCollection
        object: *database0
        arguments:
          collection: *collection0ViewName
      - name: createCollection
        object: *database0
        arguments:
          collection: *collection0ViewName
          viewOn: *collection0Name
          pipeline: &pipeline
            - { $match: { _id: { $gt: 1 } } }
      - name: estimatedDocumentCount
        object: *collection0View
        expectResult: 2
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                drop: *collection0ViewName
              commandName: drop
              databaseName: *database0Name
          - commandStartedEvent:
              command:
                create: *collection0ViewName
                viewOn: *collection0Name
                pipeline: *pipeline
              commandName: create
              databaseName: *database0Name
          - commandStartedEvent:
              command:
                count: *collection0ViewName
              commandName: count
              databaseName: *database0Name

mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/find-allowdiskuse-clientError.yml

# This file was created automatically using mongodb-spec-converter.
# Please review the generated file, then remove this notice.
description: find-allowdiskuse-clientError

schemaVersion: '1.0'

runOnRequirements:
  - maxServerVersion: 3.0.99

createEntities:
  - client:
      id: &client0 client0
      observeEvents:
        - commandStartedEvent
  - database:
      id: &database0 database0
      client: client0
      databaseName: &database_name crud-v2
  - collection:
      id: &collection0 collection0
      database: database0
      collectionName: &collection_name test_find_allowdiskuse_clienterror

tests:
  - description: 'Find fails when allowDiskUse true is specified against pre 3.2 server'
    operations:
      - object: *collection0
        name: find
        arguments:
          filter: { }
          allowDiskUse: true
        expectError:
          isError: true
    expectEvents:
      - client: *client0
        events: []
  - description: 'Find fails when allowDiskUse false is specified against pre 3.2 server'
    operations:
      - object: *collection0
        name: find
        arguments:
          filter: { }
          allowDiskUse: false
        expectError:
          isError: true
    expectEvents:
      - client: *client0
        events: []
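
# The allowDiskUse fixtures here and below cover three eras: pre-3.2 servers
# fail client-side (no events), 3.2-4.3 servers reject the option
# server-side, and 4.4+ servers accept it. The option as it might be passed
# through the Ruby driver's find (assuming `collection` is a
# Mongo::Collection and a driver version exposing this option):
#
#   collection.find({}, allow_disk_use: true).to_a
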
mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/find-allowdiskuse-serverError.yml

# This file was created automatically using mongodb-spec-converter.
# Please review the generated file, then remove this notice.
description: find-allowdiskuse-serverError

schemaVersion: '1.0'

runOnRequirements:
  - minServerVersion: '3.2'
    maxServerVersion: 4.3.0

createEntities:
  - client:
      id: &client0 client0
      observeEvents:
        - commandStartedEvent
  - database:
      id: &database0 database0
      client: client0
      databaseName: &database_name crud-v2
  - collection:
      id: &collection0 collection0
      database: database0
      collectionName: &collection_name test_find_allowdiskuse_servererror

tests:
  - description: 'Find fails when allowDiskUse true is specified against pre 4.4 server (server-side error)'
    operations:
      - object: *collection0
        name: find
        arguments:
          filter: &filter { }
          allowDiskUse: true
        expectError:
          isError: true
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                find: *collection_name
                filter: *filter
                allowDiskUse: true
  - description: 'Find fails when allowDiskUse false is specified against pre 4.4 server (server-side error)'
    operations:
      - object: *collection0
        name: find
        arguments:
          filter: *filter
          allowDiskUse: false
        expectError:
          isError: true
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                find: *collection_name
                filter: *filter
                allowDiskUse: false

mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/find-allowdiskuse.yml

# This file was created automatically using mongodb-spec-converter.
# Please review the generated file, then remove this notice.
description: find-allowdiskuse

schemaVersion: '1.0'

runOnRequirements:
  - minServerVersion: 4.3.1

createEntities:
  - client:
      id: &client0 client0
      observeEvents:
        - commandStartedEvent
  - database:
      id: &database0 database0
      client: client0
      databaseName: &database_name crud-v2
  - collection:
      id: &collection0 collection0
      database: database0
      collectionName: &collection_name test_find_allowdiskuse

tests:
  - description: 'Find does not send allowDiskUse when value is not specified'
    operations:
      - object: *collection0
        name: find
        arguments:
          filter: { }
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                find: *collection_name
                allowDiskUse:
                  $$exists: false
  - description: 'Find sends allowDiskUse false when false is specified'
    operations:
      - object: *collection0
        name: find
        arguments:
          filter: { }
          allowDiskUse: false
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                find: *collection_name
                allowDiskUse: false
  - description: 'Find sends allowDiskUse true when true is specified'
    operations:
      - object: *collection0
        name: find
        arguments:
          filter: { }
          allowDiskUse: true
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                find: *collection_name
                allowDiskUse: true

mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/find-comment.yml

description: "find-comment"

schemaVersion: "1.0"

createEntities:
  - client:
      id: &client0 client0
      observeEvents: [ commandStartedEvent ]
  - database:
      id: &database0 database0
      client: *client0
      databaseName: &database0Name crud-tests
  - collection:
      id: &collection0 collection0
      database: *database0
      collectionName: &collection0Name coll0

initialData: &initialData
  - collectionName: *collection0Name
    databaseName: *database0Name
    documents:
      - { _id: 1, x: 11 }
      - { _id: 2, x: 22 }
      - { _id: 3, x: 33 }
      - { _id: 4, x: 44 }
      - { _id: 5, x: 55 }
      - { _id: 6, x: 66 }

tests:
  - description: "find with string comment"
    runOnRequirements:
      - minServerVersion: "3.6"
    operations:
      - name: find
        object: *collection0
        arguments:
          filter: &filter
            _id: 1
comment: "comment" expectResult: &expect_result - { _id: 1 } expectEvents: - client: *client0 events: - commandStartedEvent: command: find: *collection0Name filter: *filter comment: "comment" - description: "find with document comment" runOnRequirements: - minServerVersion: "4.4" operations: - name: find object: *collection0 arguments: filter: *filter comment: &comment { key: "value"} expectResult: *expect_result expectEvents: - client: *client0 events: - commandStartedEvent: command: find: *collection0Name filter: *filter comment: *comment - description: "find with document comment - pre 4.4" runOnRequirements: - maxServerVersion: "4.2.99" minServerVersion: "3.6" operations: - name: find object: *collection0 arguments: filter: *filter comment: *comment expectError: isClientError: false expectEvents: - client: *client0 events: - commandStartedEvent: command: find: *collection0Name filter: *filter comment: *comment - description: "find with comment sets comment on getMore" runOnRequirements: - minServerVersion: "4.4.0" operations: - name: find object: *collection0 arguments: filter: &filter_get_more { _id: { $gt: 1 } } batchSize: 2 comment: *comment expectResult: - { _id: 2, x: 22 } - { _id: 3, x: 33 } - { _id: 4, x: 44 } - { _id: 5, x: 55 } - { _id: 6, x: 66 } expectEvents: - client: *client0 events: - commandStartedEvent: command: find: *collection0Name filter: { _id: { $gt: 1 } } batchSize: 2 comment: *comment - commandStartedEvent: command: getMore: { $$type: [ int, long ] } collection: *collection0Name batchSize: 2 comment: *comment - commandStartedEvent: command: getMore: { $$type: [ int, long ] } collection: *collection0Name batchSize: 2 comment: *comment - description: "find with comment does not set comment on getMore - pre 4.4" runOnRequirements: - minServerVersion: "3.6.0" maxServerVersion: "4.3.99" operations: - name: find object: *collection0 arguments: filter: &filter_get_more { _id: { $gt: 1 } } batchSize: 2 comment: "comment" expectResult: - { _id: 2, x: 22 } - { _id: 3, x: 33 } - { _id: 4, x: 44 } - { _id: 5, x: 55 } - { _id: 6, x: 66 } expectEvents: - client: *client0 events: - commandStartedEvent: command: find: *collection0Name filter: { _id: { $gt: 1 } } batchSize: 2 comment: "comment" - commandStartedEvent: command: getMore: { $$type: [ int, long ] } collection: *collection0Name batchSize: 2 comment: { $$exists: false } - commandStartedEvent: command: getMore: { $$type: [ int, long ] } collection: *collection0Name batchSize: 2 comment: { $$exists: false } mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/find-let.yml000066400000000000000000000033261505113246500261000ustar00rootroot00000000000000description: "find-let" schemaVersion: "1.0" createEntities: - client: id: &client0 client0 observeEvents: [ commandStartedEvent ] - database: id: &database0 database0 client: *client0 databaseName: &database0Name crud-tests - collection: id: &collection0 collection0 database: *database0 collectionName: &collection0Name coll0 initialData: &initialData - collectionName: *collection0Name databaseName: *database0Name documents: - { _id: 1 } - { _id: 2 } tests: - description: "Find with let option" runOnRequirements: - minServerVersion: "5.0" operations: - name: find object: *collection0 arguments: filter: &filter $expr: $eq: [ "$_id", "$$id" ] let: &let0 id: 1 expectResult: - { _id: 1 } expectEvents: - client: *client0 events: - commandStartedEvent: command: find: *collection0Name filter: *filter let: *let0 - description: "Find with let option unsupported (server-side 
error)" runOnRequirements: - minServerVersion: "3.6.0" maxServerVersion: "4.4.99" operations: - name: find object: *collection0 arguments: filter: &filter1 _id: 1 let: &let1 x: 1 expectError: errorContains: "Unrecognized field 'let'" isClientError: false expectEvents: - client: *client0 events: - commandStartedEvent: command: find: *collection0Name filter: *filter1 let: *let1 mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/find-test-all-options.yml000066400000000000000000000232651505113246500305360ustar00rootroot00000000000000# This spec is specific to the ruby driver, and is not part of the general # `specifications` repo. description: "find options" schemaVersion: "1.0" runOnRequirements: - serverless: 'forbid' createEntities: - client: id: &client0 client0 observeEvents: [ commandStartedEvent ] - database: id: &database0 database0 client: *client0 databaseName: &database0Name find-tests - collection: id: &collection0 collection0 database: *database0 collectionName: &collection0Name coll0 tests: - description: "sort" operations: - name: find arguments: filter: &filter { _name: "John" } sort: &sort { _id: 1 } object: *collection0 expectEvents: - client: *client0 events: - commandStartedEvent: command: find: *collection0Name filter: *filter sort: *sort commandName: find - description: "projection" operations: - name: find arguments: filter: *filter projection: &projection { _id: 1 } object: *collection0 expectEvents: - client: *client0 events: - commandStartedEvent: command: find: *collection0Name filter: *filter projection: *projection commandName: find databaseName: *database0Name - description: "hint" operations: - name: find arguments: filter: *filter hint: &hint { _id: 1 } object: *collection0 expectEvents: - client: *client0 events: - commandStartedEvent: command: find: *collection0Name filter: *filter hint: *hint commandName: find databaseName: *database0Name - description: "skip" operations: - name: find arguments: filter: *filter skip: &skip 10 object: *collection0 expectEvents: - client: *client0 events: - commandStartedEvent: command: find: *collection0Name filter: *filter skip: *skip commandName: find databaseName: *database0Name - description: "limit" operations: - name: find arguments: filter: *filter limit: &limit 10 object: *collection0 expectEvents: - client: *client0 events: - commandStartedEvent: command: find: *collection0Name filter: *filter limit: *limit commandName: find databaseName: *database0Name - description: "batchSize" operations: - name: find arguments: filter: *filter batchSize: &batchSize 10 object: *collection0 expectEvents: - client: *client0 events: - commandStartedEvent: command: find: *collection0Name filter: *filter batchSize: *batchSize commandName: find databaseName: *database0Name - description: "comment" operations: - name: find arguments: filter: *filter comment: &comment 'comment' object: *collection0 expectEvents: - client: *client0 events: - commandStartedEvent: command: find: *collection0Name filter: *filter comment: *comment commandName: find databaseName: *database0Name - description: "maxTimeMS" operations: - name: find arguments: filter: *filter maxTimeMS: &maxTimeMS 1000 object: *collection0 expectEvents: - client: *client0 events: - commandStartedEvent: command: find: *collection0Name filter: *filter maxTimeMS: *maxTimeMS commandName: find databaseName: *database0Name - description: "timeoutMS" operations: - name: find arguments: filter: *filter timeoutMS: &timeoutMS 1000 object: *collection0 expectEvents: - client: *client0 

mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/find-test-all-options.yml

# This spec is specific to the ruby driver, and is not part of the general
# `specifications` repo.
description: "find options"
schemaVersion: "1.0"
runOnRequirements:
  - serverless: 'forbid'
createEntities:
  - client:
      id: &client0 client0
      observeEvents: [ commandStartedEvent ]
  - database:
      id: &database0 database0
      client: *client0
      databaseName: &database0Name find-tests
  - collection:
      id: &collection0 collection0
      database: *database0
      collectionName: &collection0Name coll0
tests:
  - description: "sort"
    operations:
      - name: find
        arguments:
          filter: &filter { _name: "John" }
          sort: &sort { _id: 1 }
        object: *collection0
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                find: *collection0Name
                filter: *filter
                sort: *sort
              commandName: find
  - description: "projection"
    operations:
      - name: find
        arguments:
          filter: *filter
          projection: &projection { _id: 1 }
        object: *collection0
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                find: *collection0Name
                filter: *filter
                projection: *projection
              commandName: find
              databaseName: *database0Name
  - description: "hint"
    operations:
      - name: find
        arguments:
          filter: *filter
          hint: &hint { _id: 1 }
        object: *collection0
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                find: *collection0Name
                filter: *filter
                hint: *hint
              commandName: find
              databaseName: *database0Name
  - description: "skip"
    operations:
      - name: find
        arguments:
          filter: *filter
          skip: &skip 10
        object: *collection0
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                find: *collection0Name
                filter: *filter
                skip: *skip
              commandName: find
              databaseName: *database0Name
  - description: "limit"
    operations:
      - name: find
        arguments:
          filter: *filter
          limit: &limit 10
        object: *collection0
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                find: *collection0Name
                filter: *filter
                limit: *limit
              commandName: find
              databaseName: *database0Name
  - description: "batchSize"
    operations:
      - name: find
        arguments:
          filter: *filter
          batchSize: &batchSize 10
        object: *collection0
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                find: *collection0Name
                filter: *filter
                batchSize: *batchSize
              commandName: find
              databaseName: *database0Name
  - description: "comment"
    operations:
      - name: find
        arguments:
          filter: *filter
          comment: &comment 'comment'
        object: *collection0
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                find: *collection0Name
                filter: *filter
                comment: *comment
              commandName: find
              databaseName: *database0Name
  - description: "maxTimeMS"
    operations:
      - name: find
        arguments:
          filter: *filter
          maxTimeMS: &maxTimeMS 1000
        object: *collection0
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                find: *collection0Name
                filter: *filter
                maxTimeMS: *maxTimeMS
              commandName: find
              databaseName: *database0Name
  - description: "timeoutMS"
    operations:
      - name: find
        arguments:
          filter: *filter
          timeoutMS: &timeoutMS 1000
        object: *collection0
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                find: *collection0Name
                filter: *filter
                maxTimeMS: { $$type: [ int ] }
              commandName: find
              databaseName: *database0Name
  - description: "max"
    operations:
      - name: find
        arguments:
          filter: *filter
          hint: { _id: 1 }
          max: &max { _id: 10 }
        object: *collection0
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                find: *collection0Name
                filter: *filter
                max: *max
              commandName: find
              databaseName: *database0Name
  - description: "min"
    operations:
      - name: createIndex
        object: *collection0
        arguments:
          name: "name_1"
          keys: { name: 1 }
      - name: find
        arguments:
          filter: *filter
          hint: { name: 1 }
          min: &min { name: 'John' }
        object: *collection0
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              commandName: createIndexes
          - commandStartedEvent:
              command:
                find: *collection0Name
                filter: *filter
                min: *min
              commandName: find
              databaseName: *database0Name
  - description: "returnKey"
    operations:
      - name: find
        arguments:
          filter: *filter
          returnKey: &returnKey false
        object: *collection0
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                find: *collection0Name
                filter: *filter
                returnKey: *returnKey
              commandName: find
              databaseName: *database0Name
  - description: "showRecordId"
    operations:
      - name: find
        arguments:
          filter: *filter
          showRecordId: &showRecordId false
        object: *collection0
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                find: *collection0Name
                filter: *filter
                showRecordId: *showRecordId
              commandName: find
              databaseName: *database0Name
  - description: "oplogReplay"
    operations:
      - name: find
        arguments:
          filter: *filter
          oplogReplay: &oplogReplay false
        object: *collection0
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                find: *collection0Name
                filter: *filter
                oplogReplay: *oplogReplay
              commandName: find
              databaseName: *database0Name
  - description: "noCursorTimeout"
    operations:
      - name: find
        arguments:
          filter: *filter
          noCursorTimeout: &noCursorTimeout false
        object: *collection0
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                find: *collection0Name
                filter: *filter
                noCursorTimeout: *noCursorTimeout
              commandName: find
              databaseName: *database0Name
  - description: "allowPartialResults"
    operations:
      - name: find
        arguments:
          filter: *filter
          allowPartialResults: &allowPartialResults false
        object: *collection0
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                find: *collection0Name
                filter: *filter
                allowPartialResults: *allowPartialResults
              commandName: find
              databaseName: *database0Name
  - description: "collation"
    operations:
      - name: find
        arguments:
          filter: *filter
          collation: &collation { locale: "en" }
        object: *collection0
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                find: *collection0Name
                filter: *filter
                collation: *collation
              commandName: find
              databaseName: *database0Name
  - description: "allowDiskUse"
    runOnRequirements:
      - minServerVersion: 4.4
    operations:
      - name: find
        arguments:
          filter: *filter
          allowDiskUse: &allowDiskUse true
        object: *collection0
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                find: *collection0Name
                filter: *filter
                allowDiskUse: *allowDiskUse
              commandName: find
              databaseName: *database0Name
  - description: "let"
    runOnRequirements:
      - minServerVersion: "5.0"
    operations:
      - name: find
        arguments:
          filter: *filter
          let: &let { name: "Mary" }
        object: *collection0
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                find: *collection0Name
                filter: *filter
                let: *let
              commandName: find
              databaseName: *database0Name
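
Most of the options exercised above map onto find options or `Mongo::Collection::View` chain methods; a sketch (hypothetical `client`, not exhaustive):

    # Each chained modifier becomes a field on the find command.
    client[:coll0]
      .find(_name: 'John')
      .sort(_id: 1)
      .projection(_id: 1)
      .skip(10)
      .limit(10)
      .batch_size(10)
      .to_a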

mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/find.yml

description: "find"
schemaVersion: "1.0"
createEntities:
  - client:
      id: &client0 client0
      useMultipleMongoses: true # ensure cursors pin to a single server
      observeEvents: [ commandStartedEvent ]
  - database:
      id: &database0 database0
      client: *client0
      databaseName: &database0Name find-tests
  - collection:
      id: &collection0 collection0
      database: *database0
      collectionName: &collection0Name coll0
initialData:
  - collectionName: *collection0Name
    databaseName: *database0Name
    documents:
      - { _id: 1, x: 11 }
      - { _id: 2, x: 22 }
      - { _id: 3, x: 33 }
      - { _id: 4, x: 44 }
      - { _id: 5, x: 55 }
      - { _id: 6, x: 66 }
tests:
  - description: "find with multiple batches works"
    operations:
      - name: find
        arguments:
          filter: { _id: { $gt: 1 } }
          batchSize: 2
        object: *collection0
        expectResult:
          - { _id: 2, x: 22 }
          - { _id: 3, x: 33 }
          - { _id: 4, x: 44 }
          - { _id: 5, x: 55 }
          - { _id: 6, x: 66 }
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                find: *collection0Name
                filter: { _id: { $gt: 1 } }
                batchSize: 2
              commandName: find
              databaseName: *database0Name
          - commandStartedEvent:
              command:
                getMore: { $$type: [ int, long ] }
                collection: *collection0Name
                batchSize: 2
              commandName: getMore
              databaseName: *database0Name
          - commandStartedEvent:
              command:
                getMore: { $$type: [ int, long ] }
                collection: *collection0Name
                batchSize: 2
              commandName: getMore
              databaseName: *database0Name
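
A sketch of the multi-batch behavior asserted above (hypothetical `client`): the driver sends one find plus as many getMore commands as the batch size requires, transparently to the iterator.

    client[:coll0].find({ _id: { '$gt' => 1 } }, batch_size: 2).each do |doc|
      p doc # three commands on the wire, one enumerator in Ruby
    end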
comment: "comment" outcome: *initialData mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/findOneAndDelete-hint-clientError.yml000066400000000000000000000036731505113246500327610ustar00rootroot00000000000000# This file was created automatically using mongodb-spec-converter. # Please review the generated file, then remove this notice. description: findOneAndDelete-hint-clientError schemaVersion: '1.0' runOnRequirements: - maxServerVersion: 4.0.99 createEntities: - client: id: &client0 client0 observeEvents: - commandStartedEvent - database: id: &database0 database0 client: client0 databaseName: &database_name crud-v2 - collection: id: &collection0 collection0 database: database0 collectionName: &collection_name findOneAndDelete_hint initialData: - collectionName: *collection_name databaseName: *database_name documents: - _id: 1 x: 11 - _id: 2 x: 22 tests: - description: 'FindOneAndDelete with hint string unsupported (client-side error)' operations: - object: *collection0 name: findOneAndDelete arguments: filter: &filter _id: 1 hint: _id_ expectError: isError: true expectEvents: - client: *client0 events: [] outcome: &outcome - collectionName: *collection_name databaseName: *database_name documents: - _id: 1 x: 11 - _id: 2 x: 22 - description: 'FindOneAndDelete with hint document' operations: - object: *collection0 name: findOneAndDelete arguments: filter: &filter _id: 1 hint: _id: 1 expectError: isError: true expectEvents: - client: *client0 events: [] outcome: &outcome - collectionName: *collection_name databaseName: *database_name documents: - _id: 1 x: 11 - _id: 2 x: 22 mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/findOneAndDelete-hint-serverError.yml000066400000000000000000000046321505113246500330050ustar00rootroot00000000000000# This file was created automatically using mongodb-spec-converter. # Please review the generated file, then remove this notice. 

mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/findOneAndDelete-hint-serverError.yml

# This file was created automatically using mongodb-spec-converter.
# Please review the generated file, then remove this notice.
description: findOneAndDelete-hint-serverError
schemaVersion: '1.0'
runOnRequirements:
  - minServerVersion: 4.2.0
    maxServerVersion: 4.3.3
createEntities:
  - client:
      id: &client0 client0
      observeEvents:
        - commandStartedEvent
  - database:
      id: &database0 database0
      client: client0
      databaseName: &database_name crud-v2
  - collection:
      id: &collection0 collection0
      database: database0
      collectionName: &collection_name findOneAndDelete_hint
initialData:
  - collectionName: *collection_name
    databaseName: *database_name
    documents:
      - _id: 1
        x: 11
      - _id: 2
        x: 22
tests:
  - description: 'FindOneAndDelete with hint string unsupported (server-side error)'
    operations:
      - object: *collection0
        name: findOneAndDelete
        arguments:
          filter: &filter
            _id: 1
          hint: _id_
        expectError:
          isError: true
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                findAndModify: *collection_name
                query: *filter
                hint: _id_
                remove: true
    outcome: &outcome
      - collectionName: *collection_name
        databaseName: *database_name
        documents:
          - _id: 1
            x: 11
          - _id: 2
            x: 22
  - description: 'FindOneAndDelete with hint document unsupported (server-side error)'
    operations:
      - object: *collection0
        name: findOneAndDelete
        arguments:
          filter: &filter
            _id: 1
          hint:
            _id: 1
        expectError:
          isError: true
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                findAndModify: *collection_name
                query: *filter
                hint:
                  _id: 1
                remove: true
    outcome: &outcome
      - collectionName: *collection_name
        databaseName: *database_name
        documents:
          - _id: 1
            x: 11
          - _id: 2
            x: 22
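
A sketch of the hint forms exercised above (hypothetical `client`; findAndModify honors hints on 4.4+ servers, hence the server-side error asserted for 4.2.x):

    coll = client[:findOneAndDelete_hint]
    coll.find_one_and_delete({ _id: 1 }, hint: '_id_')     # index name
    coll.find_one_and_delete({ _id: 1 }, hint: { _id: 1 }) # index specification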

mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/findOneAndDelete-hint-unacknowledged.yml

description: findOneAndDelete-hint-unacknowledged
schemaVersion: '1.0'
createEntities:
  - client:
      id: &client0 client0
      observeEvents:
        - commandStartedEvent
  - database:
      id: &database0 database0
      client: *client0
      databaseName: &database0Name db0
  - collection:
      id: &collection0 collection0
      database: *database0
      collectionName: &collection0Name coll0
      collectionOptions:
        writeConcern: { w: 0 }
initialData:
  - collectionName: *collection0Name
    databaseName: *database0Name
    documents: &documents
      - { _id: 1, x: 11 }
      - { _id: 2, x: 22 }
tests:
  - description: "Unacknowledged findOneAndDelete with hint string fails with client-side error on pre-4.4 server"
    runOnRequirements:
      - maxServerVersion: "4.2.99"
    operations:
      - object: *collection0
        name: findOneAndDelete
        arguments:
          filter: &filter { _id: { $gt: 1 } }
          hint: _id_
        expectError:
          isClientError: true
    expectEvents: &noEvents
      - client: *client0
        events: []
  - description: "Unacknowledged findOneAndDelete with hint document fails with client-side error on pre-4.4 server"
    runOnRequirements:
      - maxServerVersion: "4.2.99"
    operations:
      - object: *collection0
        name: findOneAndDelete
        arguments:
          filter: *filter
          hint: { _id: 1 }
        expectError:
          isClientError: true
    expectEvents: *noEvents
  - description: "Unacknowledged findOneAndDelete with hint string on 4.4+ server"
    runOnRequirements:
      - minServerVersion: "4.4.0"
    operations:
      - object: *collection0
        name: findOneAndDelete
        arguments:
          filter: *filter
          hint: _id_
        expectResult: &unacknowledgedResult { $$unsetOrMatches: null }
    expectEvents: &events
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                findAndModify: *collection0Name
                query: *filter
                remove: true
                hint: { $$type: [ string, object ] }
                writeConcern: { w: 0 }
  - description: "Unacknowledged findOneAndDelete with hint document on 4.4+ server"
    runOnRequirements:
      - minServerVersion: "4.4.0"
    operations:
      - object: *collection0
        name: findOneAndDelete
        arguments:
          filter: *filter
          hint: { _id: 1 }
        expectResult: *unacknowledgedResult
    expectEvents: *events

mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/findOneAndDelete-hint.yml

# This file was created automatically using mongodb-spec-converter.
# Please review the generated file, then remove this notice.
description: findOneAndDelete-hint
schemaVersion: '1.0'
runOnRequirements:
  - minServerVersion: 4.3.4
createEntities:
  - client:
      id: &client0 client0
      observeEvents:
        - commandStartedEvent
  - database:
      id: &database0 database0
      client: client0
      databaseName: &database_name crud-v2
  - collection:
      id: &collection0 collection0
      database: database0
      collectionName: &collection_name findOneAndDelete_hint
initialData:
  - collectionName: *collection_name
    databaseName: *database_name
    documents:
      - _id: 1
        x: 11
      - _id: 2
        x: 22
tests:
  - description: 'FindOneAndDelete with hint string'
    operations:
      - object: *collection0
        name: findOneAndDelete
        arguments:
          filter: &filter
            _id: 1
          hint: _id_
        expectResult: &result
          _id: 1
          x: 11
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                findAndModify: *collection_name
                query: *filter
                hint: _id_
                remove: true
    outcome: &outcome
      - collectionName: *collection_name
        databaseName: *database_name
        documents:
          - _id: 2
            x: 22
  - description: 'FindOneAndDelete with hint document'
    operations:
      - object: *collection0
        name: findOneAndDelete
        arguments:
          filter: &filter
            _id: 1
          hint:
            _id: 1
        expectResult: &result
          _id: 1
          x: 11
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                findAndModify: *collection_name
                query: *filter
                hint:
                  _id: 1
                remove: true
    outcome: &outcome
      - collectionName: *collection_name
        databaseName: *database_name
        documents:
          - _id: 2
            x: 22

mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/findOneAndDelete-let.yml

description: "findOneAndDelete-let"
schemaVersion: "1.0"
createEntities:
  - client:
      id: &client0 client0
      observeEvents: [ commandStartedEvent ]
  - database:
      id: &database0 database0
      client: *client0
      databaseName: &database0Name crud-tests
  - collection:
      id: &collection0 collection0
      database: *database0
      collectionName: &collection0Name coll0
initialData: &initialData
  - collectionName: *collection0Name
    databaseName: *database0Name
    documents:
      - { _id: 1 }
      - { _id: 2 }
tests:
  - description: "findOneAndDelete with let option"
    runOnRequirements:
      - minServerVersion: "5.0"
    operations:
      - name: findOneAndDelete
        object: *collection0
        arguments:
          filter: &filter
            $expr:
              $eq: [ "$_id", "$$id" ]
          let: &let0
            id: 1
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                findAndModify: *collection0Name
                query: *filter
                remove: true
                let: *let0
    outcome:
      - collectionName: *collection0Name
        databaseName: *database0Name
        documents:
          - { _id: 2 }
  - description: "findOneAndDelete with let option unsupported (server-side error)"
    runOnRequirements:
      - minServerVersion: "4.2.0"
        maxServerVersion: "4.4.99"
    operations:
      - name: findOneAndDelete
        object: *collection0
        arguments:
          filter: &filter1
            $expr:
              $eq: [ "$_id", "$$id" ]
          let: &let1
            id: 1
        expectError:
          # This error message is consistent between 4.2.x and 4.4.x servers.
          # Older servers return a different error message.
          errorContains: "field 'let' is an unknown field"
          isClientError: false
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                findAndModify: *collection0Name
                query: *filter1
                remove: true
                let: *let1
    outcome:
      - collectionName: *collection0Name
        databaseName: *database0Name
        documents:
          - { _id: 1 }
          - { _id: 2 }

mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/findOneAndReplace-comment.yml

description: "findOneAndReplace-comment"
schemaVersion: "1.0"
createEntities:
  - client:
      id: &client0 client0
      observeEvents: [ commandStartedEvent ]
  - database:
      id: &database0 database0
      client: *client0
      databaseName: &database0Name crud-tests
  - collection:
      id: &collection0 collection0
      database: *database0
      collectionName: &collection0Name coll0
initialData: &initialData
  - collectionName: *collection0Name
    databaseName: *database0Name
    documents:
      - { _id: 1 }
      - { _id: 2 }
tests:
  - description: "findOneAndReplace with string comment"
    runOnRequirements:
      - minServerVersion: "4.4"
    operations:
      - name: findOneAndReplace
        object: *collection0
        arguments:
          filter: &filter
            _id: 1
          replacement: &replacement
            x: 5
          comment: "comment"
        expectResult:
          _id: 1
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                findAndModify: *collection0Name
                query: *filter
                update: *replacement
                comment: "comment"
    outcome: &outcome
      - collectionName: *collection0Name
        databaseName: *database0Name
        documents:
          - { _id: 1, x: 5 }
          - { _id: 2 }
  - description: "findOneAndReplace with document comment"
    runOnRequirements:
      - minServerVersion: "4.4"
    operations:
      - name: findOneAndReplace
        object: *collection0
        arguments:
          filter: *filter
          replacement: *replacement
          comment: &comment { key: "value" }
        expectResult:
          _id: 1
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                findAndModify: *collection0Name
                query: *filter
                update: *replacement
                comment: *comment
    outcome: *outcome
  - description: "findOneAndReplace with comment - pre 4.4"
    runOnRequirements:
      - minServerVersion: "4.2.0" # findAndModify option validation was introduced in 4.2
        maxServerVersion: "4.2.99"
    operations:
      - name: findOneAndReplace
        object: *collection0
        arguments:
          filter: *filter
          replacement: *replacement
          comment: "comment"
        expectError:
          isClientError: false
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                findAndModify: *collection0Name
                query: *filter
                update: *replacement
                comment: "comment"
    outcome: *initialData
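
A sketch combining the let option with findOneAndDelete, as in the findOneAndDelete-let file above (hypothetical `client`; 5.0+ servers only — older servers report that 'let' is an unknown field):

    client[:coll0].find_one_and_delete(
      { '$expr' => { '$eq' => ['$_id', '$$id'] } },
      let: { id: 1 }
    )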

mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/findOneAndReplace-dots_and_dollars.yml

description: "findOneAndReplace-dots_and_dollars"
schemaVersion: "1.0"
createEntities:
  - client:
      id: &client0 client0
      observeEvents: [ commandStartedEvent ]
  - database:
      id: &database0 database0
      client: *client0
      databaseName: &database0Name crud-tests
  - collection:
      id: &collection0 collection0
      database: *database0
      collectionName: &collection0Name coll0
initialData: &initialData
  - collectionName: *collection0Name
    databaseName: *database0Name
    documents:
      - &initialDocument { _id: 1 }
tests:
  - description: "Replacing document with top-level dotted key on 3.6+ server"
    runOnRequirements:
      - minServerVersion: "3.6"
    operations:
      - name: findOneAndReplace
        object: *collection0
        arguments:
          filter: { _id: 1 }
          replacement: &dottedKey { _id: 1, a.b: 1 }
        expectResult: *initialDocument
    expectEvents: &expectEventsDottedKey
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                findAndModify: *collection0Name
                query: { _id: 1 }
                update: *dottedKey
                new: { $$unsetOrMatches: false }
    outcome:
      - collectionName: *collection0Name
        databaseName: *database0Name
        documents:
          - *dottedKey
  - description: "Replacing document with top-level dotted key on pre-3.6 server yields server-side error"
    runOnRequirements:
      - maxServerVersion: "3.4.99"
    operations:
      - name: findOneAndReplace
        object: *collection0
        arguments:
          filter: { _id: 1 }
          replacement: *dottedKey
        expectError:
          isClientError: false
    expectEvents: *expectEventsDottedKey
    outcome: *initialData
  - description: "Replacing document with dollar-prefixed key in embedded doc on 5.0+ server"
    runOnRequirements:
      - minServerVersion: "5.0"
    operations:
      - name: findOneAndReplace
        object: *collection0
        arguments:
          filter: { _id: 1 }
          replacement: &dollarPrefixedKeyInEmbedded { _id: 1, a: { $b: 1 } }
        expectResult: *initialDocument
    expectEvents: &expectEventsDollarPrefixedKeyInEmbedded
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                findAndModify: *collection0Name
                query: { _id: 1 }
                update: *dollarPrefixedKeyInEmbedded
                new: { $$unsetOrMatches: false }
    outcome:
      - collectionName: *collection0Name
        databaseName: *database0Name
        documents:
          - *dollarPrefixedKeyInEmbedded
  - description: "Replacing document with dollar-prefixed key in embedded doc on pre-5.0 server yields server-side error"
    runOnRequirements:
      - maxServerVersion: "4.99"
    operations:
      - name: findOneAndReplace
        object: *collection0
        arguments:
          filter: { _id: 1 }
          replacement: *dollarPrefixedKeyInEmbedded
        expectError:
          isClientError: false
    expectEvents: *expectEventsDollarPrefixedKeyInEmbedded
    outcome: *initialData
  - description: "Replacing document with dotted key in embedded doc on 3.6+ server"
    runOnRequirements:
      - minServerVersion: "3.6"
    operations:
      - name: findOneAndReplace
        object: *collection0
        arguments:
          filter: { _id: 1 }
          replacement: &dottedKeyInEmbedded { _id: 1, a: { b.c: 1 } }
        expectResult: *initialDocument
    expectEvents: &expectEventsDottedKeyInEmbedded
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                findAndModify: *collection0Name
                query: { _id: 1 }
                update: *dottedKeyInEmbedded
                new: { $$unsetOrMatches: false }
    outcome:
      - collectionName: *collection0Name
        databaseName: *database0Name
        documents:
          - *dottedKeyInEmbedded
  - description: "Replacing document with dotted key in embedded doc on pre-3.6 server yields server-side error"
    runOnRequirements:
      - maxServerVersion: "3.4.99"
    operations:
      - name: findOneAndReplace
        object: *collection0
        arguments:
          filter: { _id: 1 }
          replacement: *dottedKeyInEmbedded
        expectError:
          isClientError: false
    expectEvents: *expectEventsDottedKeyInEmbedded
    outcome: *initialData
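
A sketch of replacements carrying dotted and dollar-prefixed keys, per the matrix above (hypothetical `client`; dotted keys are accepted by 3.6+ servers, dollar-prefixed keys in embedded documents only by 5.0+):

    coll = client[:coll0]
    coll.find_one_and_replace({ _id: 1 }, { _id: 1, 'a.b' => 1 })        # 3.6+
    coll.find_one_and_replace({ _id: 1 }, { _id: 1, a: { '$b' => 1 } })  # 5.0+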

mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/findOneAndReplace-hint-clientError.yml

# This file was created automatically using mongodb-spec-converter.
# Please review the generated file, then remove this notice.
description: findOneAndReplace-hint-clientError
schemaVersion: '1.0'
runOnRequirements:
  - maxServerVersion: 4.0.99
createEntities:
  - client:
      id: &client0 client0
      observeEvents:
        - commandStartedEvent
  - database:
      id: &database0 database0
      client: client0
      databaseName: &database_name crud-v2
  - collection:
      id: &collection0 collection0
      database: database0
      collectionName: &collection_name findOneAndReplace_hint
initialData:
  - collectionName: *collection_name
    databaseName: *database_name
    documents:
      - _id: 1
        x: 11
      - _id: 2
        x: 22
tests:
  - description: 'FindOneAndReplace with hint string unsupported (client-side error)'
    operations:
      - object: *collection0
        name: findOneAndReplace
        arguments:
          filter: &filter
            _id: 1
          replacement: &replacement
            x: 33
          hint: _id_
        expectError:
          isError: true
    expectEvents:
      - client: *client0
        events: []
    outcome: &outcome
      - collectionName: *collection_name
        databaseName: *database_name
        documents:
          - _id: 1
            x: 11
          - _id: 2
            x: 22
  - description: 'FindOneAndReplace with hint document unsupported (client-side error)'
    operations:
      - object: *collection0
        name: findOneAndReplace
        arguments:
          filter: *filter
          replacement: *replacement
          hint:
            _id: 1
        expectError:
          isError: true
    expectEvents:
      - client: *client0
        events: []
    outcome: *outcome

mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/findOneAndReplace-hint-serverError.yml

# This file was created automatically using mongodb-spec-converter.
# Please review the generated file, then remove this notice.
description: findOneAndReplace-hint-serverError
schemaVersion: '1.0'
runOnRequirements:
  - minServerVersion: 4.2.0
    maxServerVersion: 4.3.0
createEntities:
  - client:
      id: &client0 client0
      observeEvents:
        - commandStartedEvent
  - database:
      id: &database0 database0
      client: client0
      databaseName: &database_name crud-v2
  - collection:
      id: &collection0 collection0
      database: database0
      collectionName: &collection_name findOneAndReplace_hint
initialData:
  - collectionName: *collection_name
    databaseName: *database_name
    documents:
      - _id: 1
        x: 11
      - _id: 2
        x: 22
tests:
  - description: 'FindOneAndReplace with hint string unsupported (server-side error)'
    operations:
      - object: *collection0
        name: findOneAndReplace
        arguments:
          filter: &filter
            _id: 1
          replacement: &replacement
            x: 33
          hint: _id_
        expectError:
          isError: true
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                findAndModify: *collection_name
                query: *filter
                update: *replacement
                hint: _id_
    outcome: &outcome
      - collectionName: *collection_name
        databaseName: *database_name
        documents:
          - _id: 1
            x: 11
          - _id: 2
            x: 22
  - description: 'FindOneAndReplace with hint document unsupported (server-side error)'
    operations:
      - object: *collection0
        name: findOneAndReplace
        arguments:
          filter: *filter
          replacement: *replacement
          hint:
            _id: 1
        expectError:
          isError: true
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                findAndModify: *collection_name
                query: *filter
                update: *replacement
                hint:
                  _id: 1
    outcome: *outcome
&collection0Name coll0 collectionOptions: writeConcern: { w: 0 } - collection: id: &collection1 collection1 database: *database0 collectionName: *collection0Name initialData: - collectionName: *collection0Name databaseName: *database0Name documents: &documents - { _id: 1, x: 11 } - { _id: 2, x: 22 } tests: - description: "Unacknowledged findOneAndReplace with hint string fails with client-side error on pre-4.4 server" runOnRequirements: - maxServerVersion: "4.2.99" operations: - object: *collection0 name: findOneAndReplace arguments: filter: &filter { _id: { $gt: 1 } } replacement: &replacement { x: 111 } hint: _id_ expectError: isClientError: true expectEvents: &noEvents - client: *client0 events: [] - description: "Unacknowledged findOneAndReplace with hint document fails with client-side error on pre-4.4 server" runOnRequirements: - maxServerVersion: "4.2.99" operations: - object: *collection0 name: findOneAndReplace arguments: filter: *filter replacement: *replacement hint: { _id: 1 } expectError: isClientError: true expectEvents: *noEvents - description: "Unacknowledged findOneAndReplace with hint string on 4.4+ server" runOnRequirements: - minServerVersion: "4.4.0" operations: - object: *collection0 name: findOneAndReplace arguments: filter: *filter replacement: *replacement hint: _id_ expectResult: &unacknowledgedResult { $$unsetOrMatches: null } expectEvents: &events - client: *client0 events: - commandStartedEvent: command: findAndModify: *collection0Name query: *filter update: *replacement hint: { $$type: [ string, object ]} writeConcern: { w: 0 } - description: "Unacknowledged findOneAndReplace with hint document on 4.4+ server" runOnRequirements: - minServerVersion: "4.4.0" operations: - object: *collection0 name: findOneAndReplace arguments: filter: *filter replacement: *replacement hint: { _id: 1 } expectResult: *unacknowledgedResult expectEvents: *events mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/findOneAndReplace-hint.yml000066400000000000000000000043071505113246500306370ustar00rootroot00000000000000# This file was created automatically using mongodb-spec-converter. # Please review the generated file, then remove this notice. 

mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/findOneAndReplace-hint.yml

# This file was created automatically using mongodb-spec-converter.
# Please review the generated file, then remove this notice.
description: findOneAndReplace-hint
schemaVersion: '1.0'
runOnRequirements:
  - minServerVersion: 4.3.1
createEntities:
  - client:
      id: &client0 client0
      observeEvents:
        - commandStartedEvent
  - database:
      id: &database0 database0
      client: client0
      databaseName: &database_name crud-v2
  - collection:
      id: &collection0 collection0
      database: database0
      collectionName: &collection_name findOneAndReplace_hint
initialData:
  - collectionName: *collection_name
    databaseName: *database_name
    documents:
      - _id: 1
        x: 11
      - _id: 2
        x: 22
tests:
  - description: 'FindOneAndReplace with hint string'
    operations:
      - object: *collection0
        name: findOneAndReplace
        arguments:
          filter: &filter
            _id: 1
          replacement: &replacement
            x: 33
          hint: _id_
        expectResult: &result
          _id: 1
          x: 11
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                findAndModify: *collection_name
                query: *filter
                update: *replacement
                hint: _id_
    outcome: &outcome
      - collectionName: *collection_name
        databaseName: *database_name
        documents:
          - _id: 1
            x: 33
          - _id: 2
            x: 22
  - description: 'FindOneAndReplace with hint document'
    operations:
      - object: *collection0
        name: findOneAndReplace
        arguments:
          filter: *filter
          replacement: *replacement
          hint:
            _id: 1
        expectResult: *result
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                findAndModify: *collection_name
                query: *filter
                update: *replacement
                hint:
                  _id: 1
    outcome: *outcome

mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/findOneAndReplace-let.yml

description: "findOneAndReplace-let"
schemaVersion: "1.0"
createEntities:
  - client:
      id: &client0 client0
      observeEvents: [ commandStartedEvent ]
  - database:
      id: &database0 database0
      client: *client0
      databaseName: &database0Name crud-tests
  - collection:
      id: &collection0 collection0
      database: *database0
      collectionName: &collection0Name coll0
initialData: &initialData
  - collectionName: *collection0Name
    databaseName: *database0Name
    documents:
      - { _id: 1 }
      - { _id: 2 }
tests:
  - description: "findOneAndReplace with let option"
    runOnRequirements:
      - minServerVersion: "5.0"
    operations:
      - name: findOneAndReplace
        object: *collection0
        arguments:
          filter: &filter
            $expr:
              $eq: [ "$_id", "$$id" ]
          replacement: &replacement
            x: "x"
          let: &let0
            id: 1
        expectResult:
          _id: 1
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                findAndModify: *collection0Name
                query: *filter
                update: *replacement
                let: *let0
    outcome:
      - collectionName: *collection0Name
        databaseName: *database0Name
        documents:
          - { _id: 1, x: "x" }
          - { _id: 2 }
  - description: "findOneAndReplace with let option unsupported (server-side error)"
    runOnRequirements:
      - minServerVersion: "4.2.0"
        maxServerVersion: "4.4.99"
    operations:
      - name: findOneAndReplace
        object: *collection0
        arguments:
          filter: &filter1
            $expr:
              $eq: [ "$_id", "$$id" ]
          replacement: &replacement1
            x: "x"
          let: &let1
            id: 1
        expectError:
          # This error message is consistent between 4.2.x and 4.4.x servers.
          # Older servers return a different error message.
          errorContains: "field 'let' is an unknown field"
          isClientError: false
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                findAndModify: *collection0Name
                query: *filter1
                update: *replacement1
                let: *let1
    outcome:
      - collectionName: *collection0Name
        databaseName: *database0Name
        documents:
          - { _id: 1 }
          - { _id: 2 }

mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/findOneAndUpdate-comment.yml

description: "findOneAndUpdate-comment"
schemaVersion: "1.0"
createEntities:
  - client:
      id: &client0 client0
      observeEvents: [ commandStartedEvent ]
  - database:
      id: &database0 database0
      client: *client0
      databaseName: &database0Name crud-tests
  - collection:
      id: &collection0 collection0
      database: *database0
      collectionName: &collection0Name coll0
initialData: &initialData
  - collectionName: *collection0Name
    databaseName: *database0Name
    documents:
      - { _id: 1 }
      - { _id: 2 }
tests:
  - description: "findOneAndUpdate with string comment"
    runOnRequirements:
      - minServerVersion: "4.4"
    operations:
      - name: findOneAndUpdate
        object: *collection0
        arguments:
          filter: &filter
            _id: 1
          update: &update
            - $set: { x: 5 }
          comment: "comment"
        expectResult:
          _id: 1
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                findAndModify: *collection0Name
                query: *filter
                update: *update
                comment: "comment"
  - description: "findOneAndUpdate with document comment"
    runOnRequirements:
      - minServerVersion: "4.4"
    operations:
      - name: findOneAndUpdate
        object: *collection0
        arguments:
          filter: &filter
            _id: 1
          update: &update
            - $set: { x: 5 }
          comment: &comment { key: "value" }
        expectResult:
          _id: 1
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                findAndModify: *collection0Name
                query: *filter
                update: *update
                comment: *comment
  - description: "findOneAndUpdate with comment - pre 4.4"
    runOnRequirements:
      - minServerVersion: "4.2.0" # findAndModify option validation was introduced in 4.2
        maxServerVersion: "4.2.99"
    operations:
      - name: findOneAndUpdate
        object: *collection0
        arguments:
          filter: *filter
          update: *update
          comment: "comment"
        expectError:
          isClientError: false
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                findAndModify: *collection0Name
                query: *filter
                update: *update
                comment: "comment"
    outcome: *initialData

mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/findOneAndUpdate-dots_and_dollars.yml

description: "findOneAndUpdate-dots_and_dollars"
schemaVersion: "1.0"
createEntities:
  - client:
      id: &client0 client0
      observeEvents: [ commandStartedEvent ]
  - database:
      id: &database0 database0
      client: *client0
      databaseName: &database0Name crud-tests
  - collection:
      id: &collection0 collection0
      database: *database0
      collectionName: &collection0Name coll0
initialData: &initialData
  - collectionName: *collection0Name
    databaseName: *database0Name
    documents:
      - &initialDocument { _id: 1, foo: {} }
tests:
  - description: "Updating document to set top-level dollar-prefixed key on 5.0+ server"
    runOnRequirements:
      - minServerVersion: "5.0"
    operations:
      - name: findOneAndUpdate
        object: *collection0
        arguments:
          filter: { _id: 1 }
          update: &dollarPrefixedKey
            - { $replaceWith: { $setField: { field: { $literal: $a }, value: 1, input: $$ROOT } } }
        expectResult: *initialDocument
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                findAndModify: *collection0Name
                query: { _id: 1 }
                update: *dollarPrefixedKey
                new: { $$unsetOrMatches: false }
    outcome:
      - collectionName: *collection0Name
        databaseName: *database0Name
        documents:
          - { _id: 1, foo: {}, $a: 1 }
  - description: "Updating document to set top-level dotted key on 5.0+ server"
    runOnRequirements:
      - minServerVersion: "5.0"
    operations:
      - name: findOneAndUpdate
        object: *collection0
        arguments:
          filter: { _id: 1 }
          update: &dottedKey
            - { $replaceWith: { $setField: { field: { $literal: a.b }, value: 1, input: $$ROOT } } }
        expectResult: *initialDocument
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                findAndModify: *collection0Name
                query: { _id: 1 }
                update: *dottedKey
                new: { $$unsetOrMatches: false }
    outcome:
      - collectionName: *collection0Name
        databaseName: *database0Name
        documents:
          - { _id: 1, foo: {}, a.b: 1 }
  - description: "Updating document to set dollar-prefixed key in embedded doc on 5.0+ server"
    runOnRequirements:
      - minServerVersion: "5.0"
    operations:
      - name: findOneAndUpdate
        object: *collection0
        arguments:
          filter: { _id: 1 }
          update: &dollarPrefixedKeyInEmbedded
            - { $set: { foo: { $setField: { field: { $literal: $a }, value: 1, input: $foo } } } }
        expectResult: *initialDocument
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                findAndModify: *collection0Name
                query: { _id: 1 }
                update: *dollarPrefixedKeyInEmbedded
                new: { $$unsetOrMatches: false }
    outcome:
      - collectionName: *collection0Name
        databaseName: *database0Name
        documents:
          - { _id: 1, foo: { $a: 1 } }
  - description: "Updating document to set dotted key in embedded doc on 5.0+ server"
    runOnRequirements:
      - minServerVersion: "5.0"
    operations:
      - name: findOneAndUpdate
        object: *collection0
        arguments:
          filter: { _id: 1 }
          update: &dottedKeyInEmbedded
            - { $set: { foo: { $setField: { field: { $literal: a.b }, value: 1, input: $foo } } } }
        expectResult: *initialDocument
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                findAndModify: *collection0Name
                query: { _id: 1 }
                update: *dottedKeyInEmbedded
                new: { $$unsetOrMatches: false }
    outcome:
      - collectionName: *collection0Name
        databaseName: *database0Name
        documents:
          - { _id: 1, foo: { a.b: 1 } }
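
A sketch of the aggregation-pipeline update used above (hypothetical `client`; $setField requires a 5.0+ server and is what allows an update to create a top-level dollar-prefixed key):

    client[:coll0].find_one_and_update(
      { _id: 1 },
      # Passing an Array makes the driver send a pipeline-style update.
      [{ '$replaceWith' => { '$setField' => {
        'field' => { '$literal' => '$a' }, 'value' => 1, 'input' => '$$ROOT'
      } } }]
    )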

mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/findOneAndUpdate-errorResponse.yml

description: "findOneAndUpdate-errorResponse"
schemaVersion: "1.12"
createEntities:
  - client:
      id: &client0 client0
  - database:
      id: &database0 database0
      client: *client0
      databaseName: &database0Name crud-tests
  - collection:
      id: &collection0 collection0
      database: *database0
      collectionName: &collection0Name test
initialData: &initialData
  - collectionName: *collection0Name
    databaseName: *database0Name
    documents:
      - { _id: 1, x: "foo" }
tests:
  - description: "findOneAndUpdate DuplicateKey error is accessible"
    runOnRequirements:
      - minServerVersion: "4.2" # SERVER-37124
    operations:
      - name: createIndex
        object: *collection0
        arguments:
          keys: { x: 1 }
          unique: true
      - name: findOneAndUpdate
        object: *collection0
        arguments:
          filter: { _id: 2 }
          update: { $set: { x: "foo" } }
          upsert: true
        expectError:
          errorCode: 11000 # DuplicateKey
          errorResponse:
            keyPattern: { x: 1 }
            keyValue: { x: "foo" }
  - description: "findOneAndUpdate document validation errInfo is accessible"
    runOnRequirements:
      - minServerVersion: "5.0"
    operations:
      - name: modifyCollection
        object: *database0
        arguments:
          collection: *collection0Name
          validator:
            x: { $type: "string" }
      - name: findOneAndUpdate
        object: *collection0
        arguments:
          filter: { _id: 1 }
          update: { $set: { x: 1 } }
        expectError:
          errorCode: 121 # DocumentValidationFailure
          errorResponse:
            # Avoid asserting the exact contents of errInfo as it may vary by
            # server version. Likewise, this is why drivers do not model the
            # document. The following is sufficient to test that validation
            # details are accessible. See SERVER-20547 for more context.
            errInfo:
              failingDocumentId: 1
              details: { $$type: "object" }

mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/findOneAndUpdate-hint-clientError.yml

# This file was created automatically using mongodb-spec-converter.
# Please review the generated file, then remove this notice.
description: findOneAndUpdate-hint-clientError
schemaVersion: '1.0'
runOnRequirements:
  - maxServerVersion: 4.0.99
createEntities:
  - client:
      id: &client0 client0
      observeEvents:
        - commandStartedEvent
  - database:
      id: &database0 database0
      client: client0
      databaseName: &database_name crud-v2
  - collection:
      id: &collection0 collection0
      database: database0
      collectionName: &collection_name findOneAndUpdate_hint
initialData:
  - collectionName: *collection_name
    databaseName: *database_name
    documents:
      - _id: 1
        x: 11
      - _id: 2
        x: 22
tests:
  - description: 'FindOneAndUpdate with hint string unsupported (client-side error)'
    operations:
      - object: *collection0
        name: findOneAndUpdate
        arguments:
          filter: &filter
            _id: 1
          update: &update
            $inc:
              x: 1
          hint: _id_
        expectError:
          isError: true
    expectEvents:
      - client: *client0
        events: []
    outcome: &outcome
      - collectionName: *collection_name
        databaseName: *database_name
        documents:
          - _id: 1
            x: 11
          - _id: 2
            x: 22
  - description: 'FindOneAndUpdate with hint document unsupported (client-side error)'
    operations:
      - object: *collection0
        name: findOneAndUpdate
        arguments:
          filter: *filter
          update: *update
          hint:
            _id: 1
        expectError:
          isError: true
    expectEvents:
      - client: *client0
        events: []
    outcome: *outcome
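
The error details asserted above surface in Ruby as a Mongo::Error::OperationFailure; a sketch of inspecting them (hypothetical `coll`; the exact accessors for the raw server response vary by driver version):

    coll = client[:test]
    begin
      coll.find_one_and_update({ _id: 2 }, { '$set' => { x: 'foo' } }, upsert: true)
    rescue Mongo::Error::OperationFailure => e
      # 11000 = DuplicateKey, 121 = DocumentValidationFailure
      warn "findAndModify failed: #{e.message} (code #{e.code})"
    end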

mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/findOneAndUpdate-hint-serverError.yml

# This file was created automatically using mongodb-spec-converter.
# Please review the generated file, then remove this notice.
description: findOneAndUpdate-hint-serverError
schemaVersion: '1.0'
runOnRequirements:
  - minServerVersion: 4.2.0
    maxServerVersion: 4.3.0
createEntities:
  - client:
      id: &client0 client0
      observeEvents:
        - commandStartedEvent
  - database:
      id: &database0 database0
      client: client0
      databaseName: &database_name crud-v2
  - collection:
      id: &collection0 collection0
      database: database0
      collectionName: &collection_name findOneAndUpdate_hint
initialData:
  - collectionName: *collection_name
    databaseName: *database_name
    documents:
      - _id: 1
        x: 11
      - _id: 2
        x: 22
tests:
  - description: 'FindOneAndUpdate with hint string unsupported (server-side error)'
    operations:
      - object: *collection0
        name: findOneAndUpdate
        arguments:
          filter: &filter
            _id: 1
          update: &update
            $inc:
              x: 1
          hint: _id_
        expectError:
          isError: true
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                findAndModify: *collection_name
                query: *filter
                update: *update
                hint: _id_
    outcome: &outcome
      - collectionName: *collection_name
        databaseName: *database_name
        documents:
          - _id: 1
            x: 11
          - _id: 2
            x: 22
  - description: 'FindOneAndUpdate with hint document unsupported (server-side error)'
    operations:
      - object: *collection0
        name: findOneAndUpdate
        arguments:
          filter: *filter
          update: *update
          hint:
            _id: 1
        expectError:
          isError: true
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                findAndModify: *collection_name
                query: *filter
                update: *update
                hint:
                  _id: 1
    outcome: *outcome

mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/findOneAndUpdate-hint-unacknowledged.yml

description: findOneAndUpdate-hint-unacknowledged
schemaVersion: '1.0'
createEntities:
  - client:
      id: &client0 client0
      observeEvents:
        - commandStartedEvent
  - database:
      id: &database0 database0
      client: *client0
      databaseName: &database0Name db0
  - collection:
      id: &collection0 collection0
      database: *database0
      collectionName: &collection0Name coll0
      collectionOptions:
        writeConcern: { w: 0 }
initialData:
  - collectionName: *collection0Name
    databaseName: *database0Name
    documents: &documents
      - { _id: 1, x: 11 }
      - { _id: 2, x: 22 }
tests:
  - description: "Unacknowledged findOneAndUpdate with hint string fails with client-side error on pre-4.4 server"
    runOnRequirements:
      - maxServerVersion: "4.2.99"
    operations:
      - object: *collection0
        name: findOneAndUpdate
        arguments:
          filter: &filter { _id: { $gt: 1 } }
          update: &update { $inc: { x: 1 } }
          hint: _id_
        expectError:
          isClientError: true
    expectEvents: &noEvents
      - client: *client0
        events: []
  - description: "Unacknowledged findOneAndUpdate with hint document fails with client-side error on pre-4.4 server"
    runOnRequirements:
      - maxServerVersion: "4.2.99"
    operations:
      - object: *collection0
        name: findOneAndUpdate
        arguments:
          filter: *filter
          update: *update
          hint: { _id: 1 }
        expectError:
          isClientError: true
    expectEvents: *noEvents
  - description: "Unacknowledged findOneAndUpdate with hint string on 4.4+ server"
    runOnRequirements:
      - minServerVersion: "4.4.0"
    operations:
      - object: *collection0
        name: findOneAndUpdate
        arguments:
          filter: *filter
          update: *update
          hint: _id_
        expectResult: &unacknowledgedResult { $$unsetOrMatches: null }
    expectEvents: &events
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                findAndModify: *collection0Name
                query: *filter
                update: *update
                hint: { $$type: [ string, object ] }
                writeConcern: { w: 0 }
  - description: "Unacknowledged findOneAndUpdate with hint document on 4.4+ server"
    runOnRequirements:
      - minServerVersion: "4.4.0"
    operations:
      - object: *collection0
        name: findOneAndUpdate
        arguments:
          filter: *filter
          update: *update
          hint: { _id: 1 }
        expectResult: *unacknowledgedResult
    expectEvents: *events

mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/findOneAndUpdate-hint.yml

# This file was created automatically using mongodb-spec-converter.
# Please review the generated file, then remove this notice.
description: findOneAndUpdate-hint
schemaVersion: '1.0'
runOnRequirements:
  - minServerVersion: 4.3.1
createEntities:
  - client:
      id: &client0 client0
      observeEvents:
        - commandStartedEvent
  - database:
      id: &database0 database0
      client: client0
      databaseName: &database_name crud-v2
  - collection:
      id: &collection0 collection0
      database: database0
      collectionName: &collection_name findOneAndUpdate_hint
initialData:
  - collectionName: *collection_name
    databaseName: *database_name
    documents:
      - _id: 1
        x: 11
      - _id: 2
        x: 22
tests:
  - description: 'FindOneAndUpdate with hint string'
    operations:
      - object: *collection0
        name: findOneAndUpdate
        arguments:
          filter: &filter
            _id: 1
          update: &update
            $inc:
              x: 1
          hint: _id_
        expectResult: &result
          _id: 1
          x: 11
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                findAndModify: *collection_name
                query: *filter
                update: *update
                hint: _id_
    outcome: &outcome
      - collectionName: *collection_name
        databaseName: *database_name
        documents:
          - _id: 1
            x: 12
          - _id: 2
            x: 22
  - description: 'FindOneAndUpdate with hint document'
    operations:
      - object: *collection0
        name: findOneAndUpdate
        arguments:
          filter: *filter
          update: *update
          hint:
            _id: 1
        expectResult: *result
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                findAndModify: *collection_name
                query: *filter
                update: *update
                hint:
                  _id: 1
    outcome: *outcome

mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/findOneAndUpdate-let.yml

description: "findOneAndUpdate-let"
schemaVersion: "1.0"
createEntities:
  - client:
      id: &client0 client0
      observeEvents: [ commandStartedEvent ]
  - database:
      id: &database0 database0
      client: *client0
      databaseName: &database0Name crud-tests
  - collection:
      id: &collection0 collection0
      database: *database0
      collectionName: &collection0Name coll0
initialData: &initialData
  - collectionName: *collection0Name
    databaseName: *database0Name
    documents:
      - { _id: 1 }
      - { _id: 2 }
tests:
  - description: "findOneAndUpdate with let option"
    runOnRequirements:
      - minServerVersion: "5.0"
    operations:
      - name: findOneAndUpdate
        object: *collection0
        arguments:
          filter: &filter
            $expr:
              $eq: [ "$_id", "$$id" ]
          update: &update
            - $set: { x: "$$x" }
          let: &let0
            id: 1
            x: "foo"
        expectResult:
          _id: 1
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                findAndModify: *collection0Name
                query: *filter
                update: *update
                let: *let0
    outcome:
      - collectionName: *collection0Name
        databaseName: *database0Name
        documents:
          - { _id: 1, x: "foo" }
          - { _id: 2 }
  - description: "findOneAndUpdate with let option unsupported (server-side error)"
    runOnRequirements:
      - minServerVersion: "4.2.0"
        maxServerVersion: "4.4.99"
    operations:
      - name: findOneAndUpdate
        object: *collection0
        arguments:
          filter: &filter1
            $expr:
              $eq: [ "$_id", "$$id" ]
          update: &update1
            - $set: { x: "$$x" }
          let: &let1
            id: 1
            x: "foo"
        expectError:
          # This error message is consistent between 4.2.x and 4.4.x servers.
          # Older servers return a different error message.
          errorContains: "field 'let' is an unknown field"
          isClientError: false
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                findAndModify: *collection0Name
                query: *filter1
                update: *update1
                let: *let1
    outcome:
      - collectionName: *collection0Name
        databaseName: *database0Name
        documents:
          - { _id: 1 }
          - { _id: 2 }

mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/insertMany-comment.yml

description: "insertMany-comment"
schemaVersion: "1.0"
createEntities:
  - client:
      id: &client0 client0
      observeEvents: [ commandStartedEvent ]
  - database:
      id: &database0 database0
      client: *client0
      databaseName: &database0Name crud-tests
  - collection:
      id: &collection0 collection0
      database: *database0
      collectionName: &collection0Name coll0
initialData: &initialData
  - collectionName: *collection0Name
    databaseName: *database0Name
    documents:
      - { _id: 1, x: 11 }
tests:
  - description: "insertMany with string comment"
    runOnRequirements:
      - minServerVersion: "4.4"
    operations:
      - name: insertMany
        object: *collection0
        arguments:
          documents:
            - &document { _id: 2, x: 22 }
          comment: "comment"
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                insert: *collection0Name
                documents:
                  - *document
                comment: "comment"
    outcome: &outcome
      - collectionName: *collection0Name
        databaseName: *database0Name
        documents:
          - { _id: 1, x: 11 }
          - { _id: 2, x: 22 }
  - description: "insertMany with document comment"
    runOnRequirements:
      - minServerVersion: "4.4"
    operations:
      - name: insertMany
        object: *collection0
        arguments:
          documents:
            - *document
          comment: &comment { key: "value" }
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                insert: *collection0Name
                documents:
                  - *document
                comment: *comment
    outcome: *outcome
  - description: "insertMany with comment - pre 4.4"
    runOnRequirements:
      - minServerVersion: "3.4.0"
        maxServerVersion: "4.2.99"
    operations:
      - name: insertMany
        object: *collection0
        arguments:
          documents:
            - *document
          comment: "comment"
        expectError:
          isClientError: false
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                insert: *collection0Name
                documents:
                  - *document
                comment: "comment"
    outcome: *initialData
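
A sketch of insert_many with a comment, per the tests above (hypothetical `client`; pre-4.4 servers fail the write rather than ignore the field, as the last test asserts):

    client[:coll0].insert_many([{ _id: 2, x: 22 }], comment: 'comment')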
description: "Inserting document with top-level dollar-prefixed key on pre-5.0 server yields server-side error" runOnRequirements: - maxServerVersion: "4.99" operations: - name: insertMany object: *collection0 arguments: documents: - *dollarPrefixedKey expectError: isClientError: false expectEvents: *expectEventsDollarPrefixedKey outcome: *initialData - description: "Inserting document with top-level dotted key" operations: - name: insertMany object: *collection0 arguments: documents: - &dottedKey { _id: 1, a.b: 1 } expectResult: *insertResult expectEvents: - client: *client0 events: - commandStartedEvent: command: insert: *collection0Name documents: - *dottedKey outcome: - collectionName: *collection0Name databaseName: *database0Name documents: - *dottedKey - description: "Inserting document with dollar-prefixed key in embedded doc" operations: - name: insertMany object: *collection0 arguments: documents: - &dollarPrefixedKeyInEmbedded { _id: 1, a: { $b: 1 } } expectResult: *insertResult expectEvents: - client: *client0 events: - commandStartedEvent: command: insert: *collection0Name documents: - *dollarPrefixedKeyInEmbedded outcome: - collectionName: *collection0Name databaseName: *database0Name documents: - *dollarPrefixedKeyInEmbedded - description: "Inserting document with dotted key in embedded doc" operations: - name: insertMany object: *collection0 arguments: documents: - &dottedKeyInEmbedded { _id: 1, a: { b.c: 1 } } expectResult: *insertResult expectEvents: - client: *client0 events: - commandStartedEvent: command: insert: *collection0Name documents: - *dottedKeyInEmbedded outcome: - collectionName: *collection0Name databaseName: *database0Name documents: - *dottedKeyInEmbedded mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/insertOne-comment.yml000066400000000000000000000045301505113246500300020ustar00rootroot00000000000000description: "insertOne-comment" schemaVersion: "1.0" createEntities: - client: id: &client0 client0 observeEvents: [ commandStartedEvent ] - database: id: &database0 database0 client: *client0 databaseName: &database0Name crud-tests - collection: id: &collection0 collection0 database: *database0 collectionName: &collection0Name coll0 initialData: &initialData - collectionName: *collection0Name databaseName: *database0Name documents: - { _id: 1, x: 11 } tests: - description: "insertOne with string comment" runOnRequirements: - minServerVersion: "4.4" operations: - name: insertOne object: *collection0 arguments: document: &document { _id: 2, x: 22 } comment: "comment" expectEvents: - client: *client0 events: - commandStartedEvent: command: insert: *collection0Name documents: - *document comment: "comment" outcome: &outcome - collectionName: *collection0Name databaseName: *database0Name documents: - { _id: 1, x: 11 } - { _id: 2, x: 22 } - description: "insertOne with document comment" runOnRequirements: - minServerVersion: "4.4" operations: - name: insertOne object: *collection0 arguments: document: *document comment: &comment { key: "value" } expectEvents: - client: *client0 events: - commandStartedEvent: command: insert: *collection0Name documents: - *document comment: *comment outcome: *outcome - description: "insertOne with comment - pre 4.4" runOnRequirements: - minServerVersion: "3.4.0" maxServerVersion: "4.2.99" operations: - name: insertOne object: *collection0 arguments: document: *document comment: "comment" expectError: isClientError: false expectEvents: - client: *client0 events: - commandStartedEvent: command: insert: *collection0Name 
                documents:
                  - *document
                comment: "comment"
    outcome: *initialData

# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/insertOne-dots_and_dollars.yml

description: "insertOne-dots_and_dollars"
schemaVersion: "1.0"
createEntities:
  - client:
      id: &client0 client0
      observeEvents: [ commandStartedEvent ]
  - database:
      id: &database0 database0
      client: *client0
      databaseName: &database0Name crud-tests
  - collection:
      id: &collection0 collection0
      database: *database0
      collectionName: &collection0Name coll0
  - collection:
      id: &collection1 collection1
      database: *database0
      collectionName: &collection1Name coll1
      collectionOptions:
        writeConcern: { w: 0 }
initialData: &initialData
  - collectionName: *collection0Name
    databaseName: *database0Name
    documents: []
tests:
  - description: "Inserting document with top-level dollar-prefixed key on 5.0+ server"
    runOnRequirements:
      - minServerVersion: "5.0"
    operations:
      - name: insertOne
        object: *collection0
        arguments:
          document: &dollarPrefixedKey { _id: 1, $a: 1 }
        expectResult: &insertResult
          # InsertOneResult is optional because all of its fields are optional
          $$unsetOrMatches: { insertedId: { $$unsetOrMatches: 1 } }
    expectEvents: &expectEventsDollarPrefixedKey
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                insert: *collection0Name
                documents:
                  - *dollarPrefixedKey
    outcome:
      - collectionName: *collection0Name
        databaseName: *database0Name
        documents:
          - *dollarPrefixedKey
  - description: "Inserting document with top-level dollar-prefixed key on pre-5.0 server yields server-side error"
    runOnRequirements:
      - maxServerVersion: "4.99"
    operations:
      - name: insertOne
        object: *collection0
        arguments:
          document: *dollarPrefixedKey
        expectError:
          isClientError: false
    expectEvents: *expectEventsDollarPrefixedKey
    outcome: *initialData
  - description: "Inserting document with top-level dotted key"
    operations:
      - name: insertOne
        object: *collection0
        arguments:
          document: &dottedKey { _id: 1, a.b: 1 }
        expectResult: *insertResult
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                insert: *collection0Name
                documents:
                  - *dottedKey
    outcome:
      - collectionName: *collection0Name
        databaseName: *database0Name
        documents:
          - *dottedKey
  - description: "Inserting document with dollar-prefixed key in embedded doc"
    operations:
      - name: insertOne
        object: *collection0
        arguments:
          document: &dollarPrefixedKeyInEmbedded { _id: 1, a: { $b: 1 } }
        expectResult: *insertResult
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                insert: *collection0Name
                documents:
                  - *dollarPrefixedKeyInEmbedded
    outcome:
      - collectionName: *collection0Name
        databaseName: *database0Name
        documents:
          - *dollarPrefixedKeyInEmbedded
  - description: "Inserting document with dotted key in embedded doc"
    operations:
      - name: insertOne
        object: *collection0
        arguments:
          document: &dottedKeyInEmbedded { _id: 1, a: { b.c: 1 } }
        expectResult: *insertResult
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                insert: *collection0Name
                documents:
                  - *dottedKeyInEmbedded
    outcome:
      - collectionName: *collection0Name
        databaseName: *database0Name
        documents:
          - *dottedKeyInEmbedded
  - description: "Inserting document with dollar-prefixed key in _id yields server-side error"
    # Note: 5.0+ did not remove restrictions on dollar-prefixed keys in _id documents
    operations:
      - name: insertOne
        object: *collection0
        arguments:
          document: &dollarPrefixedKeyInId { _id: { $a: 1 } }
        expectError:
          isClientError: false
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                insert: *collection0Name
                documents:
                  - *dollarPrefixedKeyInId
    outcome: *initialData
  - description: "Inserting document with dotted key in _id on 3.6+ server"
    runOnRequirements:
      - minServerVersion: "3.6"
    operations:
      - name: insertOne
        object: *collection0
        arguments:
          document: &dottedKeyInId { _id: { a.b: 1 } }
        expectResult:
          # InsertOneResult is optional because all of its fields are optional
          $$unsetOrMatches: { insertedId: { $$unsetOrMatches: { a.b: 1 } } }
    expectEvents: &expectEventsDottedKeyInId
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                insert: *collection0Name
                documents:
                  - *dottedKeyInId
    outcome:
      - collectionName: *collection0Name
        databaseName: *database0Name
        documents:
          - *dottedKeyInId
  - description: "Inserting document with dotted key in _id on pre-3.6 server yields server-side error"
    runOnRequirements:
      - maxServerVersion: "3.4.99"
    operations:
      - name: insertOne
        object: *collection0
        arguments:
          document: *dottedKeyInId
        expectError:
          isClientError: false
    expectEvents: *expectEventsDottedKeyInId
    outcome: *initialData
  - description: "Inserting document with DBRef-like keys"
    operations:
      - name: insertOne
        object: *collection0
        arguments:
          # Note: an incomplete DBRef document may cause issues loading the test
          # file with an Extended JSON parser, since the presence of one DBRef
          # key may cause the parser to require others and/or enforce expected
          # types (e.g. $ref and $db must be strings).
          #
          # Using "$db" here works for libmongoc so long as it's a string type;
          # however, neither $ref nor $id would be accepted on their own.
          #
          # See https://github.com/mongodb/specifications/blob/master/source/extended-json/extended-json.md#parsers
          document: &dbrefLikeKey { _id: 1, a: { $db: "foo" } }
        expectResult: *insertResult
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                insert: *collection0Name
                documents:
                  - *dbrefLikeKey
    outcome:
      - collectionName: *collection0Name
        databaseName: *database0Name
        documents:
          - *dbrefLikeKey
  - description: "Unacknowledged write using dollar-prefixed or dotted keys may be silently rejected on pre-5.0 server"
    runOnRequirements:
      - maxServerVersion: "4.99"
    operations:
      - name: insertOne
        object: *collection1
        arguments:
          document: *dollarPrefixedKeyInId
        expectResult:
          # InsertOneResult is optional because all of its fields are optional
          $$unsetOrMatches: { acknowledged: { $$unsetOrMatches: false } }
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                insert: *collection1Name
                documents:
                  - *dollarPrefixedKeyInId
                writeConcern: { w: 0 }
    outcome: *initialData

# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/insertOne-errorResponse.yml

description: "insertOne-errorResponse"
schemaVersion: "1.12"
createEntities:
  - client:
      id: &client0 client0
      useMultipleMongoses: false
  - database:
      id: &database0 database0
      client: *client0
      databaseName: &database0Name crud-tests
  - collection:
      id: &collection0 collection0
      database: *database0
      collectionName: &collection0Name test
tests:
  # Some drivers may still need to skip this test because the CRUD spec does not
  # prescribe how drivers should formulate a WriteException beyond collecting a
  # write or write concern error.
- description: "insert operations support errorResponse assertions" runOnRequirements: - minServerVersion: "4.0.0" topologies: [ single, replicaset ] - minServerVersion: "4.2.0" topologies: [ sharded ] operations: - name: failPoint object: testRunner arguments: client: *client0 failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: [ insert ] errorCode: &errorCode 8 # UnknownError - name: insertOne object: *collection0 arguments: document: { _id: 1 } expectError: errorCode: *errorCode errorResponse: code: *errorCode mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/replaceOne-comment.yml000066400000000000000000000056261505113246500301200ustar00rootroot00000000000000description: "replaceOne-comment" schemaVersion: "1.0" createEntities: - client: id: &client0 client0 observeEvents: [ commandStartedEvent ] - database: id: &database0 database0 client: *client0 databaseName: &database0Name crud-tests - collection: id: &collection0 collection0 database: *database0 collectionName: &collection0Name coll0 initialData: &initialData - collectionName: *collection0Name databaseName: *database0Name documents: - { _id: 1, x: 11 } tests: - description: "ReplaceOne with string comment" runOnRequirements: - minServerVersion: "4.4" operations: - name: replaceOne object: *collection0 arguments: filter: &filter { _id: 1 } replacement: &replacement { x: 22 } comment: "comment" expectEvents: - client: *client0 events: - commandStartedEvent: command: update: *collection0Name updates: - q: *filter u: *replacement multi: { $$unsetOrMatches: false } upsert: { $$unsetOrMatches: false } comment: "comment" outcome: &outcome - collectionName: *collection0Name databaseName: *database0Name documents: - { _id: 1, x: 22 } - description: "ReplaceOne with document comment" runOnRequirements: - minServerVersion: "4.4" operations: - name: replaceOne object: *collection0 arguments: filter: *filter replacement: *replacement comment: &comment { key: "value" } expectEvents: - client: *client0 events: - commandStartedEvent: command: update: *collection0Name updates: - q: *filter u: *replacement multi: { $$unsetOrMatches: false } upsert: { $$unsetOrMatches: false } comment: *comment outcome: *outcome - description: "ReplaceOne with comment - pre 4.4" runOnRequirements: - minServerVersion: "3.4.0" maxServerVersion: "4.2.99" operations: - name: replaceOne object: *collection0 arguments: filter: *filter replacement: *replacement comment: "comment" expectError: isClientError: false expectEvents: - client: *client0 events: - commandStartedEvent: command: update: *collection0Name updates: - q: *filter u: *replacement multi: { $$unsetOrMatches: false } upsert: { $$unsetOrMatches: false } comment: "comment" outcome: *initialData mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/replaceOne-dots_and_dollars.yml000066400000000000000000000132151505113246500317620ustar00rootroot00000000000000description: "replaceOne-dots_and_dollars" schemaVersion: "1.0" createEntities: - client: id: &client0 client0 observeEvents: [ commandStartedEvent ] - database: id: &database0 database0 client: *client0 databaseName: &database0Name crud-tests - collection: id: &collection0 collection0 database: *database0 collectionName: &collection0Name coll0 - collection: id: &collection1 collection1 database: *database0 collectionName: &collection1Name coll1 collectionOptions: writeConcern: { w: 0 } initialData: &initialData - collectionName: *collection0Name databaseName: *database0Name documents: - { _id: 1 } tests: - 
description: "Replacing document with top-level dotted key on 3.6+ server" runOnRequirements: - minServerVersion: "3.6" operations: - name: replaceOne object: *collection0 arguments: filter: { _id: 1 } replacement: &dottedKey { _id: 1, a.b: 1 } expectResult: &replaceResult matchedCount: 1 modifiedCount: 1 upsertedCount: 0 expectEvents: &expectEventsDottedKey - client: *client0 events: - commandStartedEvent: command: update: *collection0Name updates: - q: { _id: 1 } u: *dottedKey multi: { $$unsetOrMatches: false } upsert: { $$unsetOrMatches: false } outcome: - collectionName: *collection0Name databaseName: *database0Name documents: - *dottedKey - description: "Replacing document with top-level dotted key on pre-3.6 server yields server-side error" runOnRequirements: - maxServerVersion: "3.4.99" operations: - name: replaceOne object: *collection0 arguments: filter: { _id: 1 } replacement: *dottedKey expectError: isClientError: false expectEvents: *expectEventsDottedKey outcome: *initialData - description: "Replacing document with dollar-prefixed key in embedded doc on 5.0+ server" runOnRequirements: - minServerVersion: "5.0" operations: - name: replaceOne object: *collection0 arguments: filter: { _id: 1 } replacement: &dollarPrefixedKeyInEmbedded { _id: 1, a: { $b: 1 } } expectResult: *replaceResult expectEvents: &expectEventsDollarPrefixedKeyInEmbedded - client: *client0 events: - commandStartedEvent: command: update: *collection0Name updates: - q: { _id: 1 } u: *dollarPrefixedKeyInEmbedded multi: { $$unsetOrMatches: false } upsert: { $$unsetOrMatches: false } outcome: - collectionName: *collection0Name databaseName: *database0Name documents: - *dollarPrefixedKeyInEmbedded - description: "Replacing document with dollar-prefixed key in embedded doc on pre-5.0 server yields server-side error" runOnRequirements: - maxServerVersion: "4.99" operations: - name: replaceOne object: *collection0 arguments: filter: { _id: 1 } replacement: *dollarPrefixedKeyInEmbedded expectError: isClientError: false expectEvents: *expectEventsDollarPrefixedKeyInEmbedded outcome: *initialData - description: "Replacing document with dotted key in embedded doc on 3.6+ server" runOnRequirements: - minServerVersion: "3.6" operations: - name: replaceOne object: *collection0 arguments: filter: { _id: 1 } replacement: &dottedKeyInEmbedded { _id: 1, a: { b.c: 1 } } expectResult: *replaceResult expectEvents: &expectEventsDottedKeyInEmbedded - client: *client0 events: - commandStartedEvent: command: update: *collection0Name updates: - q: { _id: 1 } u: *dottedKeyInEmbedded multi: { $$unsetOrMatches: false } upsert: { $$unsetOrMatches: false } outcome: - collectionName: *collection0Name databaseName: *database0Name documents: - *dottedKeyInEmbedded - description: "Replacing document with dotted key in embedded doc on pre-3.6 server yields server-side error" runOnRequirements: - maxServerVersion: "3.4.99" operations: - name: replaceOne object: *collection0 arguments: filter: { _id: 1 } replacement: *dottedKeyInEmbedded expectError: isClientError: false expectEvents: *expectEventsDottedKeyInEmbedded outcome: *initialData - description: "Unacknowledged write using dollar-prefixed or dotted keys may be silently rejected on pre-5.0 server" runOnRequirements: - maxServerVersion: "4.99" operations: - name: replaceOne object: *collection1 arguments: filter: { _id: 1 } replacement: *dollarPrefixedKeyInEmbedded expectResult: acknowledged: { $$unsetOrMatches: false } expectEvents: - client: *client0 events: - commandStartedEvent: command: 
                update: *collection1Name
                updates:
                  - q: { _id: 1 }
                    u: *dollarPrefixedKeyInEmbedded
                    multi: { $$unsetOrMatches: false }
                    upsert: { $$unsetOrMatches: false }
                writeConcern: { w: 0 }
    outcome: *initialData

# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/replaceOne-hint-unacknowledged.yml

description: replaceOne-hint-unacknowledged
schemaVersion: '1.0'
createEntities:
  - client:
      id: &client0 client0
      observeEvents:
        - commandStartedEvent
  - database:
      id: &database0 database0
      client: *client0
      databaseName: &database0Name db0
  - collection:
      id: &collection0 collection0
      database: *database0
      collectionName: &collection0Name coll0
      collectionOptions:
        writeConcern: { w: 0 }
initialData:
  - collectionName: *collection0Name
    databaseName: *database0Name
    documents: &documents
      - { _id: 1, x: 11 }
      - { _id: 2, x: 22 }
tests:
  - description: "Unacknowledged replaceOne with hint string fails with client-side error on pre-4.2 server"
    runOnRequirements:
      - maxServerVersion: "4.0.99"
    operations:
      - object: *collection0
        name: replaceOne
        arguments:
          filter: &filter { _id: { $gt: 1 } }
          replacement: &replacement { x: 111 }
          hint: _id_
        expectError:
          isClientError: true
    expectEvents: &noEvents
      - client: *client0
        events: []
  - description: "Unacknowledged replaceOne with hint document fails with client-side error on pre-4.2 server"
    runOnRequirements:
      - maxServerVersion: "4.0.99"
    operations:
      - object: *collection0
        name: replaceOne
        arguments:
          filter: *filter
          replacement: *replacement
          hint: { _id: 1 }
        expectError:
          isClientError: true
    expectEvents: *noEvents
  - description: "Unacknowledged replaceOne with hint string on 4.2+ server"
    runOnRequirements:
      - minServerVersion: "4.2.0"
    operations:
      - object: *collection0
        name: replaceOne
        arguments:
          filter: *filter
          replacement: *replacement
          hint: _id_
        expectResult: &unacknowledgedResult { $$unsetOrMatches: { acknowledged: { $$unsetOrMatches: false } } }
    expectEvents: &events
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                update: *collection0Name
                updates:
                  - q: *filter
                    u: *replacement
                    multi: { $$unsetOrMatches: false }
                    upsert: { $$unsetOrMatches: false }
                    hint: { $$type: [ string, object ] }
                writeConcern: { w: 0 }
  - description: "Unacknowledged replaceOne with hint document on 4.2+ server"
    runOnRequirements:
      - minServerVersion: "4.2.0"
    operations:
      - object: *collection0
        name: replaceOne
        arguments:
          filter: *filter
          replacement: *replacement
          hint: { _id: 1 }
        expectResult: *unacknowledgedResult
    expectEvents: *events

# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/replaceOne-hint.yml

# This file was created automatically using mongodb-spec-converter.
# Please review the generated file, then remove this notice.
description: replaceOne-hint
schemaVersion: '1.0'
runOnRequirements:
  - minServerVersion: 4.2.0
createEntities:
  - client:
      id: &client0 client0
      observeEvents:
        - commandStartedEvent
  - database:
      id: &database0 database0
      client: client0
      databaseName: &database_name crud-v2
  - collection:
      id: &collection0 collection0
      database: database0
      collectionName: &collection_name test_replaceone_hint
initialData:
  - collectionName: *collection_name
    databaseName: *database_name
    documents:
      - _id: 1
        x: 11
      - _id: 2
        x: 22
tests:
  - description: 'ReplaceOne with hint string'
    operations:
      - object: *collection0
        name: replaceOne
        arguments:
          filter: &filter
            _id:
              $gt: 1
          replacement: &replacement
            x: 111
          hint: _id_
        expectResult: &result
          matchedCount: 1
          modifiedCount: 1
          upsertedCount: 0
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                update: *collection_name
                updates:
                  - q: *filter
                    u: *replacement
                    hint: _id_
                    multi: { $$unsetOrMatches: false }
                    upsert: { $$unsetOrMatches: false }
    outcome: &outcome
      - collectionName: *collection_name
        databaseName: *database_name
        documents:
          - _id: 1
            x: 11
          - _id: 2
            x: 111
  - description: 'ReplaceOne with hint document'
    operations:
      - object: *collection0
        name: replaceOne
        arguments:
          filter: *filter
          replacement: *replacement
          hint:
            _id: 1
        expectResult: *result
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                update: *collection_name
                updates:
                  - q: *filter
                    u: *replacement
                    hint:
                      _id: 1
                    multi: { $$unsetOrMatches: false }
                    upsert: { $$unsetOrMatches: false }
    outcome: *outcome

# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/replaceOne-let.yml

description: "replaceOne-let"
schemaVersion: "1.0"
createEntities:
  - client:
      id: &client0 client0
      observeEvents: [ commandStartedEvent ]
  - database:
      id: &database0 database0
      client: *client0
      databaseName: &database0Name crud-tests
  - collection:
      id: &collection0 collection0
      database: *database0
      collectionName: &collection0Name coll0
initialData: &initialData
  - collectionName: *collection0Name
    databaseName: *database0Name
    documents:
      - { _id: 1 }
      - { _id: 2 }
tests:
  - description: "ReplaceOne with let option"
    runOnRequirements:
      - minServerVersion: "5.0"
    operations:
      - name: replaceOne
        object: *collection0
        arguments:
          filter: &filter
            $expr:
              $eq: [ "$_id", "$$id" ]
          replacement: &replacement
            x: "foo"
          let: &let
            id: 1
        expectResult:
          matchedCount: 1
          modifiedCount: 1
          upsertedCount: 0
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                update: *collection0Name
                updates:
                  - q: *filter
                    u: *replacement
                    multi: { $$unsetOrMatches: false }
                    upsert: { $$unsetOrMatches: false }
                let: *let
    outcome:
      - collectionName: *collection0Name
        databaseName: *database0Name
        documents:
          - { _id: 1, x: "foo" }
          - { _id: 2 }
  - description: "ReplaceOne with let option unsupported (server-side error)"
    runOnRequirements:
      - minServerVersion: "3.6.0"
        maxServerVersion: "4.4.99"
    operations:
      - name: replaceOne
        object: *collection0
        arguments:
          filter: *filter
          replacement: *replacement
          let: *let
        expectError:
          errorContains: "'update.let' is an unknown field"
          isClientError: false
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                update: *collection0Name
                updates:
                  - q: *filter
                    u: *replacement
                    multi: { $$unsetOrMatches: false }
                    upsert: { $$unsetOrMatches: false }
                let: *let
    outcome:
      - collectionName: *collection0Name
        databaseName: *database0Name
        documents:
          - { _id: 1 }
          - { _id: 2 }

# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/replaceOne-validation.yml

description: "replaceOne-validation"
schemaVersion: "1.0"
createEntities:
  - client:
      id: &client0 client0
      observeEvents: [ commandStartedEvent ]
  - database:
      id: &database0 database0
      client: *client0
      databaseName: &database0Name crud-tests
  - collection:
      id: &collection0 collection0
      database: *database0
      collectionName: &collection0Name coll0
initialData: &initialData
  - collectionName: *collection0Name
    databaseName: *database0Name
    documents:
      - { _id: 1, x: 11 }
tests:
  - description: "ReplaceOne prohibits atomic modifiers"
    operations:
      - name: replaceOne
        object: *collection0
        arguments:
          filter: { _id: 1 }
          replacement: { $set: { x: 22 } }
        expectError:
          isClientError: true
    expectEvents:
      - client: *client0
        events: []
    outcome: *initialData

# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/updateMany-comment.yml

description: "updateMany-comment"
schemaVersion: "1.0"
createEntities:
  - client:
      id: &client0 client0
      observeEvents: [ commandStartedEvent ]
  - database:
      id: &database0 database0
      client: *client0
      databaseName: &database0Name crud-tests
  - collection:
      id: &collection0 collection0
      database: *database0
      collectionName: &collection0Name coll0
initialData: &initialData
  - collectionName: *collection0Name
    databaseName: *database0Name
    documents:
      - { _id: 1, x: 11 }
tests:
  - description: "UpdateMany with string comment"
    runOnRequirements:
      - minServerVersion: "4.4"
    operations:
      - name: updateMany
        object: *collection0
        arguments:
          filter: &filter { _id: 1 }
          update: &update { $set: { x: 22 } }
          comment: "comment"
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                update: *collection0Name
                updates:
                  - q: *filter
                    u: *update
                    multi: true
                    upsert: { $$unsetOrMatches: false }
                comment: "comment"
    outcome: &outcome
      - collectionName: *collection0Name
        databaseName: *database0Name
        documents:
          - { _id: 1, x: 22 }
  - description: "UpdateMany with document comment"
    runOnRequirements:
      - minServerVersion: "4.4"
    operations:
      - name: updateMany
        object: *collection0
        arguments:
          filter: *filter
          update: *update
          comment: &comment { key: "value" }
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                update: *collection0Name
                updates:
                  - q: *filter
                    u: *update
                    multi: true
                    upsert: { $$unsetOrMatches: false }
                comment: *comment
    outcome: *outcome
  - description: "UpdateMany with comment - pre 4.4"
    runOnRequirements:
      - minServerVersion: "3.4.0"
        maxServerVersion: "4.2.99"
    operations:
      - name: updateMany
        object: *collection0
        arguments:
          filter: *filter
          update: *update
          comment: "comment"
        expectError:
          isClientError: false
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                update: *collection0Name
                updates:
                  - q: *filter
                    u: *update
                    multi: true
                    upsert: { $$unsetOrMatches: false }
                comment: "comment"
    outcome: *initialData

# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/updateMany-dots_and_dollars.yml

description: "updateMany-dots_and_dollars"
schemaVersion: "1.0"
createEntities:
  - client:
      id: &client0 client0
      observeEvents: [ commandStartedEvent ]
  - database:
      id: &database0 database0
      client: *client0
      databaseName: &database0Name crud-tests
  - collection:
      id: &collection0 collection0
      database: *database0
      collectionName: &collection0Name coll0
initialData: &initialData
  -
    collectionName: *collection0Name
    databaseName: *database0Name
    documents:
      - { _id: 1, foo: {} }
tests:
  - description: "Updating document to set top-level dollar-prefixed key on 5.0+ server"
    runOnRequirements:
      - minServerVersion: "5.0"
    operations:
      - name: updateMany
        object: *collection0
        arguments:
          filter: { _id: 1 }
          update: &dollarPrefixedKey
            - { $replaceWith: { $setField: { field: { $literal: $a }, value: 1, input: $$ROOT } } }
        expectResult: &updateResult
          matchedCount: 1
          modifiedCount: 1
          upsertedCount: 0
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                update: *collection0Name
                updates:
                  - q: { _id: 1 }
                    u: *dollarPrefixedKey
                    multi: true
                    upsert: { $$unsetOrMatches: false }
    outcome:
      - collectionName: *collection0Name
        databaseName: *database0Name
        documents:
          - { _id: 1, foo: {}, $a: 1 }
  - description: "Updating document to set top-level dotted key on 5.0+ server"
    runOnRequirements:
      - minServerVersion: "5.0"
    operations:
      - name: updateMany
        object: *collection0
        arguments:
          filter: { _id: 1 }
          update: &dottedKey
            - { $replaceWith: { $setField: { field: { $literal: a.b }, value: 1, input: $$ROOT } } }
        expectResult: *updateResult
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                update: *collection0Name
                updates:
                  - q: { _id: 1 }
                    u: *dottedKey
                    multi: true
                    upsert: { $$unsetOrMatches: false }
    outcome:
      - collectionName: *collection0Name
        databaseName: *database0Name
        documents:
          - { _id: 1, foo: {}, a.b: 1 }
  - description: "Updating document to set dollar-prefixed key in embedded doc on 5.0+ server"
    runOnRequirements:
      - minServerVersion: "5.0"
    operations:
      - name: updateMany
        object: *collection0
        arguments:
          filter: { _id: 1 }
          update: &dollarPrefixedKeyInEmbedded
            - { $set: { foo: { $setField: { field: { $literal: $a }, value: 1, input: $foo } } } }
        expectResult: *updateResult
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                update: *collection0Name
                updates:
                  - q: { _id: 1 }
                    u: *dollarPrefixedKeyInEmbedded
                    multi: true
                    upsert: { $$unsetOrMatches: false }
    outcome:
      - collectionName: *collection0Name
        databaseName: *database0Name
        documents:
          - { _id: 1, foo: { $a: 1 } }
  - description: "Updating document to set dotted key in embedded doc on 5.0+ server"
    runOnRequirements:
      - minServerVersion: "5.0"
    operations:
      - name: updateMany
        object: *collection0
        arguments:
          filter: { _id: 1 }
          update: &dottedKeyInEmbedded
            - { $set: { foo: { $setField: { field: { $literal: a.b }, value: 1, input: $foo } } } }
        expectResult: *updateResult
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                update: *collection0Name
                updates:
                  - q: { _id: 1 }
                    u: *dottedKeyInEmbedded
                    multi: true
                    upsert: { $$unsetOrMatches: false }
    outcome:
      - collectionName: *collection0Name
        databaseName: *database0Name
        documents:
          - { _id: 1, foo: { a.b: 1 } }

# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/updateMany-hint-clientError.yml

# This file was created automatically using mongodb-spec-converter.
# Please review the generated file, then remove this notice.
description: updateMany-hint-clientError
schemaVersion: '1.0'
runOnRequirements:
  - maxServerVersion: 3.3.99
createEntities:
  - client:
      id: &client0 client0
      observeEvents:
        - commandStartedEvent
  - database:
      id: &database0 database0
      client: client0
      databaseName: &database_name crud-v2
  - collection:
      id: &collection0 collection0
      database: database0
      collectionName: &collection_name test_updatemany_hint
initialData:
  - collectionName: *collection_name
    databaseName: *database_name
    documents:
      - _id: 1
        x: 11
      - _id: 2
        x: 22
      - _id: 3
        x: 33
tests:
  - description: 'UpdateMany with hint string unsupported (client-side error)'
    operations:
      - object: *collection0
        name: updateMany
        arguments:
          filter: &filter
            _id:
              $gt: 1
          update: &update
            $inc:
              x: 1
          hint: _id_
        expectError:
          isError: true
    expectEvents:
      - client: *client0
        events: []
    outcome: &outcome
      - collectionName: *collection_name
        databaseName: *database_name
        documents:
          - _id: 1
            x: 11
          - _id: 2
            x: 22
          - _id: 3
            x: 33
  - description: 'UpdateMany with hint document unsupported (client-side error)'
    operations:
      - object: *collection0
        name: updateMany
        arguments:
          filter: *filter
          update: *update
          hint:
            _id: 1
        expectError:
          isError: true
    expectEvents:
      - client: *client0
        events: []
    outcome: *outcome

# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/updateMany-hint-serverError.yml

# This file was created automatically using mongodb-spec-converter.
# Please review the generated file, then remove this notice.
description: updateMany-hint-serverError
schemaVersion: '1.0'
runOnRequirements:
  - minServerVersion: 3.4.0
    maxServerVersion: 4.1.9
createEntities:
  - client:
      id: &client0 client0
      observeEvents:
        - commandStartedEvent
  - database:
      id: &database0 database0
      client: client0
      databaseName: &database_name crud-v2
  - collection:
      id: &collection0 collection0
      database: database0
      collectionName: &collection_name test_updatemany_hint
initialData:
  - collectionName: *collection_name
    databaseName: *database_name
    documents:
      - _id: 1
        x: 11
      - _id: 2
        x: 22
      - _id: 3
        x: 33
tests:
  - description: 'UpdateMany with hint string unsupported (server-side error)'
    operations:
      - object: *collection0
        name: updateMany
        arguments:
          filter: &filter
            _id:
              $gt: 1
          update: &update
            $inc:
              x: 1
          hint: _id_
        expectError:
          isError: true
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                update: *collection_name
                updates:
                  - q: *filter
                    u: *update
                    multi: true
                    hint: _id_
                    upsert: { $$unsetOrMatches: false }
    outcome: &outcome
      - collectionName: *collection_name
        databaseName: *database_name
        documents:
          - _id: 1
            x: 11
          - _id: 2
            x: 22
          - _id: 3
            x: 33
  - description: 'UpdateMany with hint document unsupported (server-side error)'
    operations:
      - object: *collection0
        name: updateMany
        arguments:
          filter: *filter
          update: *update
          hint:
            _id: 1
        expectError:
          isError: true
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                update: *collection_name
                updates:
                  - q: *filter
                    u: *update
                    multi: true
                    hint:
                      _id: 1
                    upsert: { $$unsetOrMatches: false }
    outcome: *outcome

# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/updateMany-hint-unacknowledged.yml

description: updateMany-hint-unacknowledged
schemaVersion: '1.0'
createEntities:
  - client:
      id: &client0 client0
      observeEvents:
        - commandStartedEvent
  - database:
      id: &database0 database0
      client: *client0
      databaseName: &database0Name db0
  - collection:
      id: &collection0 collection0
      database: *database0
collectionName: &collection0Name coll0 collectionOptions: writeConcern: { w: 0 } initialData: - collectionName: *collection0Name databaseName: *database0Name documents: &documents - { _id: 1, x: 11 } - { _id: 2, x: 22 } - { _id: 3, x: 33 } tests: - description: "Unacknowledged updateMany with hint string fails with client-side error on pre-4.2 server" runOnRequirements: - maxServerVersion: "4.0.99" operations: - object: *collection0 name: updateMany arguments: filter: &filter { _id: { $gt: 1 } } update: &update { $inc: { x: 1 } } hint: _id_ expectError: isClientError: true expectEvents: &noEvents - client: *client0 events: [] - description: "Unacknowledged updateMany with hint document fails with client-side error on pre-4.2 server" runOnRequirements: - maxServerVersion: "4.0.99" operations: - object: *collection0 name: updateMany arguments: filter: *filter update: *update hint: { _id: 1 } expectError: isClientError: true expectEvents: *noEvents - description: "Unacknowledged updateMany with hint string on 4.2+ server" runOnRequirements: - minServerVersion: "4.2.0" operations: - object: *collection0 name: updateMany arguments: filter: *filter update: *update hint: _id_ expectResult: &unacknowledgedResult { $$unsetOrMatches: { acknowledged: { $$unsetOrMatches: false } } } expectEvents: &events - client: *client0 events: - commandStartedEvent: command: update: *collection0Name updates: - q: *filter u: *update multi: true upsert: { $$unsetOrMatches: false } hint: { $$type: [ string, object ]} writeConcern: { w: 0 } - description: "Unacknowledged updateMany with hint document on 4.2+ server" runOnRequirements: - minServerVersion: "4.2.0" operations: - object: *collection0 name: updateMany arguments: filter: *filter update: *update hint: { _id: 1 } expectResult: *unacknowledgedResult expectEvents: *events mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/updateMany-hint.yml000066400000000000000000000050651505113246500274470ustar00rootroot00000000000000# This file was created automatically using mongodb-spec-converter. # Please review the generated file, then remove this notice. 
description: updateMany-hint
schemaVersion: '1.0'
runOnRequirements:
  - minServerVersion: 4.2.0
createEntities:
  - client:
      id: &client0 client0
      observeEvents:
        - commandStartedEvent
  - database:
      id: &database0 database0
      client: client0
      databaseName: &database_name crud-v2
  - collection:
      id: &collection0 collection0
      database: database0
      collectionName: &collection_name test_updatemany_hint
initialData:
  - collectionName: *collection_name
    databaseName: *database_name
    documents:
      - _id: 1
        x: 11
      - _id: 2
        x: 22
      - _id: 3
        x: 33
tests:
  - description: 'UpdateMany with hint string'
    operations:
      - object: *collection0
        name: updateMany
        arguments:
          filter: &filter
            _id:
              $gt: 1
          update: &update
            $inc:
              x: 1
          hint: _id_
        expectResult: &result
          matchedCount: 2
          modifiedCount: 2
          upsertedCount: 0
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                update: *collection_name
                updates:
                  - q: *filter
                    u: *update
                    multi: true
                    hint: _id_
                    upsert: { $$unsetOrMatches: false }
    outcome: &outcome
      - collectionName: *collection_name
        databaseName: *database_name
        documents:
          - _id: 1
            x: 11
          - _id: 2
            x: 23
          - _id: 3
            x: 34
  - description: 'UpdateMany with hint document'
    operations:
      - object: *collection0
        name: updateMany
        arguments:
          filter: *filter
          update: *update
          hint:
            _id: 1
        expectResult: *result
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                update: *collection_name
                updates:
                  - q: *filter
                    u: *update
                    multi: true
                    hint:
                      _id: 1
                    upsert: { $$unsetOrMatches: false }
    outcome: *outcome

# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/updateMany-let.yml

description: "updateMany-let"
schemaVersion: "1.0"
createEntities:
  - client:
      id: &client0 client0
      observeEvents: [ commandStartedEvent ]
  - database:
      id: &database0 database0
      client: *client0
      databaseName: &database0Name crud-tests
  - collection:
      id: &collection0 collection0
      database: *database0
      collectionName: &collection0Name coll0
initialData: &initialData
  - collectionName: *collection0Name
    databaseName: *database0Name
    documents:
      - { _id: 1 }
      - { _id: 2, name: "name" }
      - { _id: 3, name: "name" }
tests:
  - description: "updateMany with let option"
    runOnRequirements:
      - minServerVersion: "5.0"
    operations:
      - name: updateMany
        object: *collection0
        arguments:
          filter: &filter
            $expr:
              $eq: [ "$name", "$$name" ]
          update: &update
            - $set: { x: "$$x", y: "$$y" }
          let: &let0
            name: name
            x: foo
            y: { $literal: "bar" }
        expectResult:
          matchedCount: 2
          modifiedCount: 2
          upsertedCount: 0
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                update: *collection0Name
                updates:
                  - q: *filter
                    u: *update
                    multi: true
                    upsert: { $$unsetOrMatches: false }
                let: *let0
    outcome:
      - collectionName: *collection0Name
        databaseName: *database0Name
        documents:
          - { _id: 1 }
          - { _id: 2, name: "name", x: "foo", y: "bar" }
          - { _id: 3, name: "name", x: "foo", y: "bar" }
  - description: "updateMany with let option unsupported (server-side error)"
    runOnRequirements:
      - minServerVersion: "4.2.0"
        maxServerVersion: "4.4.99"
    operations:
      - name: updateMany
        object: *collection0
        arguments:
          filter: &filter1
            _id: 1
          update: &update1
            - $set: { x: "$$x" }
          let: &let1
            x: foo
        expectError:
          errorContains: "'update.let' is an unknown field"
          isClientError: false
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                update: *collection0Name
                updates:
                  - q: *filter1
                    u: *update1
                    multi: true
                    upsert: { $$unsetOrMatches: false }
                let: *let1
    outcome:
      - collectionName: *collection0Name
        databaseName: *database0Name
        documents:
          - {
              _id: 1 }
          - { _id: 2, name: "name" }
          - { _id: 3, name: "name" }

# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/updateMany-validation.yml

description: "updateMany-validation"
schemaVersion: "1.0"
createEntities:
  - client:
      id: &client0 client0
      observeEvents: [ commandStartedEvent ]
  - database:
      id: &database0 database0
      client: *client0
      databaseName: &database0Name crud-tests
  - collection:
      id: &collection0 collection0
      database: *database0
      collectionName: &collection0Name coll0
initialData: &initialData
  - collectionName: *collection0Name
    databaseName: *database0Name
    documents:
      - { _id: 1, x: 11 }
      - { _id: 2, x: 22 }
      - { _id: 3, x: 33 }
tests:
  - description: "UpdateMany requires atomic modifiers"
    operations:
      - name: updateMany
        object: *collection0
        arguments:
          filter: { _id: { $gt: 1 } }
          update: { x: 44 }
        expectError:
          isClientError: true
    expectEvents:
      - client: *client0
        events: []
    outcome: *initialData

# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/updateOne-comment.yml

description: "updateOne-comment"
schemaVersion: "1.0"
createEntities:
  - client:
      id: &client0 client0
      observeEvents: [ commandStartedEvent ]
  - database:
      id: &database0 database0
      client: *client0
      databaseName: &database0Name crud-tests
  - collection:
      id: &collection0 collection0
      database: *database0
      collectionName: &collection0Name coll0
initialData: &initialData
  - collectionName: *collection0Name
    databaseName: *database0Name
    documents:
      - { _id: 1, x: 11 }
tests:
  - description: "UpdateOne with string comment"
    runOnRequirements:
      - minServerVersion: "4.4"
    operations:
      - name: updateOne
        object: *collection0
        arguments:
          filter: &filter { _id: 1 }
          update: &update { $set: { x: 22 } }
          comment: "comment"
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                update: *collection0Name
                updates:
                  - q: *filter
                    u: *update
                    multi: { $$unsetOrMatches: false }
                    upsert: { $$unsetOrMatches: false }
                comment: "comment"
    outcome: &outcome
      - collectionName: *collection0Name
        databaseName: *database0Name
        documents:
          - { _id: 1, x: 22 }
  - description: "UpdateOne with document comment"
    runOnRequirements:
      - minServerVersion: "4.4"
    operations:
      - name: updateOne
        object: *collection0
        arguments:
          filter: *filter
          update: *update
          comment: &comment { key: "value" }
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                update: *collection0Name
                updates:
                  - q: *filter
                    u: *update
                    multi: { $$unsetOrMatches: false }
                    upsert: { $$unsetOrMatches: false }
                comment: *comment
    outcome: *outcome
  - description: "UpdateOne with comment - pre 4.4"
    runOnRequirements:
      - minServerVersion: "3.4.0"
        maxServerVersion: "4.2.99"
    operations:
      - name: updateOne
        object: *collection0
        arguments:
          filter: *filter
          update: *update
          comment: "comment"
        expectError:
          isClientError: false
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                update: *collection0Name
                updates:
                  - q: *filter
                    u: *update
                    multi: { $$unsetOrMatches: false }
                    upsert: { $$unsetOrMatches: false }
                comment: "comment"
    outcome: *initialData

# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/updateOne-dots_and_dollars.yml

description: "updateOne-dots_and_dollars"
schemaVersion: "1.0"
createEntities:
  - client:
      id: &client0 client0
      observeEvents: [ commandStartedEvent ]
  - database:
      id: &database0 database0
      client: *client0
      databaseName:
&database0Name crud-tests - collection: id: &collection0 collection0 database: *database0 collectionName: &collection0Name coll0 initialData: &initialData - collectionName: *collection0Name databaseName: *database0Name documents: - { _id: 1, foo: {} } tests: - description: "Updating document to set top-level dollar-prefixed key on 5.0+ server" runOnRequirements: - minServerVersion: "5.0" operations: - name: updateOne object: *collection0 arguments: filter: { _id: 1 } update: &dollarPrefixedKey - { $replaceWith: { $setField: { field: { $literal: $a }, value: 1, input: $$ROOT } } } expectResult: &updateResult matchedCount: 1 modifiedCount: 1 upsertedCount: 0 expectEvents: - client: *client0 events: - commandStartedEvent: command: update: *collection0Name updates: - q: { _id: 1 } u: *dollarPrefixedKey multi: { $$unsetOrMatches: false } upsert: { $$unsetOrMatches: false } outcome: - collectionName: *collection0Name databaseName: *database0Name documents: - { _id: 1, foo: {}, $a: 1 } - description: "Updating document to set top-level dotted key on 5.0+ server" runOnRequirements: - minServerVersion: "5.0" operations: - name: updateOne object: *collection0 arguments: filter: { _id: 1 } update: &dottedKey - { $replaceWith: { $setField: { field: { $literal: a.b }, value: 1, input: $$ROOT } } } expectResult: *updateResult expectEvents: - client: *client0 events: - commandStartedEvent: command: update: *collection0Name updates: - q: { _id: 1 } u: *dottedKey multi: { $$unsetOrMatches: false } upsert: { $$unsetOrMatches: false } outcome: - collectionName: *collection0Name databaseName: *database0Name documents: - { _id: 1, foo: {}, a.b: 1 } - description: "Updating document to set dollar-prefixed key in embedded doc on 5.0+ server" runOnRequirements: - minServerVersion: "5.0" operations: - name: updateOne object: *collection0 arguments: filter: { _id: 1 } update: &dollarPrefixedKeyInEmbedded - { $set: { foo: { $setField: { field: { $literal: $a }, value: 1, input: $foo } } } } expectResult: *updateResult expectEvents: - client: *client0 events: - commandStartedEvent: command: update: *collection0Name updates: - q: { _id: 1 } u: *dollarPrefixedKeyInEmbedded multi: { $$unsetOrMatches: false } upsert: { $$unsetOrMatches: false } outcome: - collectionName: *collection0Name databaseName: *database0Name documents: - { _id: 1, foo: { $a: 1 } } - description: "Updating document to set dotted key in embedded doc on 5.0+ server" runOnRequirements: - minServerVersion: "5.0" operations: - name: updateOne object: *collection0 arguments: filter: { _id: 1 } update: &dottedKeyInEmbedded - { $set: { foo: { $setField: { field: { $literal: a.b }, value: 1, input: $foo } } } } expectResult: *updateResult expectEvents: - client: *client0 events: - commandStartedEvent: command: update: *collection0Name updates: - q: { _id: 1 } u: *dottedKeyInEmbedded multi: { $$unsetOrMatches: false } upsert: { $$unsetOrMatches: false } outcome: - collectionName: *collection0Name databaseName: *database0Name documents: - { _id: 1, foo: { a.b: 1 } } mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/updateOne-errorResponse.yml000066400000000000000000000025561505113246500311740ustar00rootroot00000000000000description: "updateOne-errorResponse" schemaVersion: "1.12" createEntities: - client: id: &client0 client0 useMultipleMongoses: false - database: id: &database0 database0 client: *client0 databaseName: &database0Name crud-tests - collection: id: &collection0 collection0 database: *database0 collectionName: &collection0Name test 
tests:
  # Some drivers may still need to skip this test because the CRUD spec does not
  # prescribe how drivers should formulate a WriteException beyond collecting a
  # write or write concern error.
  - description: "update operations support errorResponse assertions"
    runOnRequirements:
      - minServerVersion: "4.0.0"
        topologies: [ single, replicaset ]
      - minServerVersion: "4.2.0"
        topologies: [ sharded ]
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *client0
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: [ update ]
              errorCode: &errorCode 8  # UnknownError
      - name: updateOne
        object: *collection0
        arguments:
          filter: { _id: 1 }
          update: { $set: { x: 1 } }
        expectError:
          errorCode: *errorCode
          errorResponse:
            code: *errorCode

# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/updateOne-hint-clientError.yml

# This file was created automatically using mongodb-spec-converter.
# Please review the generated file, then remove this notice.
description: updateOne-hint-clientError
schemaVersion: '1.0'
runOnRequirements:
  - maxServerVersion: 3.3.99
createEntities:
  - client:
      id: &client0 client0
      observeEvents:
        - commandStartedEvent
  - database:
      id: &database0 database0
      client: client0
      databaseName: &database_name crud-v2
  - collection:
      id: &collection0 collection0
      database: database0
      collectionName: &collection_name test_updateone_hint
initialData:
  - collectionName: *collection_name
    databaseName: *database_name
    documents:
      - _id: 1
        x: 11
      - _id: 2
        x: 22
tests:
  - description: 'UpdateOne with hint string unsupported (client-side error)'
    operations:
      - object: *collection0
        name: updateOne
        arguments:
          filter: &filter
            _id:
              $gt: 1
          update: &update
            $inc:
              x: 1
          hint: _id_
        expectError:
          isError: true
    expectEvents:
      - client: *client0
        events: []
    outcome: &outcome
      - collectionName: *collection_name
        databaseName: *database_name
        documents:
          - _id: 1
            x: 11
          - _id: 2
            x: 22
  - description: 'UpdateOne with hint document unsupported (client-side error)'
    operations:
      - object: *collection0
        name: updateOne
        arguments:
          filter: *filter
          update: *update
          hint:
            _id: 1
        expectError:
          isError: true
    expectEvents:
      - client: *client0
        events: []
    outcome: *outcome

# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/updateOne-hint-serverError.yml

# This file was created automatically using mongodb-spec-converter.
# Please review the generated file, then remove this notice.
description: updateOne-hint-serverError
schemaVersion: '1.0'
runOnRequirements:
  - minServerVersion: 3.4.0
    maxServerVersion: 4.1.9
createEntities:
  - client:
      id: &client0 client0
      observeEvents:
        - commandStartedEvent
  - database:
      id: &database0 database0
      client: client0
      databaseName: &database_name crud-v2
  - collection:
      id: &collection0 collection0
      database: database0
      collectionName: &collection_name test_updateone_hint
initialData:
  - collectionName: *collection_name
    databaseName: *database_name
    documents:
      - _id: 1
        x: 11
      - _id: 2
        x: 22
tests:
  - description: 'UpdateOne with hint string unsupported (server-side error)'
    operations:
      - object: *collection0
        name: updateOne
        arguments:
          filter: &filter
            _id:
              $gt: 1
          update: &update
            $inc:
              x: 1
          hint: _id_
        expectError:
          isError: true
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                update: *collection_name
                updates:
                  - q: *filter
                    u: *update
                    hint: _id_
                    multi: { $$unsetOrMatches: false }
                    upsert: { $$unsetOrMatches: false }
    outcome: &outcome
      - collectionName: *collection_name
        databaseName: *database_name
        documents:
          - _id: 1
            x: 11
          - _id: 2
            x: 22
  - description: 'UpdateOne with hint document unsupported (server-side error)'
    operations:
      - object: *collection0
        name: updateOne
        arguments:
          filter: *filter
          update: *update
          hint:
            _id: 1
        expectError:
          isError: true
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                update: *collection_name
                updates:
                  - q: *filter
                    u: *update
                    hint:
                      _id: 1
                    multi: { $$unsetOrMatches: false }
                    upsert: { $$unsetOrMatches: false }
    outcome: *outcome

# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/updateOne-hint-unacknowledged.yml

description: updateOne-hint-unacknowledged
schemaVersion: '1.0'
createEntities:
  - client:
      id: &client0 client0
      observeEvents:
        - commandStartedEvent
  - database:
      id: &database0 database0
      client: *client0
      databaseName: &database0Name db0
  - collection:
      id: &collection0 collection0
      database: *database0
      collectionName: &collection0Name coll0
      collectionOptions:
        writeConcern: { w: 0 }
initialData:
  - collectionName: *collection0Name
    databaseName: *database0Name
    documents: &documents
      - { _id: 1, x: 11 }
      - { _id: 2, x: 22 }
tests:
  - description: "Unacknowledged updateOne with hint string fails with client-side error on pre-4.2 server"
    runOnRequirements:
      - maxServerVersion: "4.0.99"
    operations:
      - object: *collection0
        name: updateOne
        arguments:
          filter: &filter { _id: { $gt: 1 } }
          update: &update { $inc: { x: 1 } }
          hint: _id_
        expectError:
          isClientError: true
    expectEvents: &noEvents
      - client: *client0
        events: []
  - description: "Unacknowledged updateOne with hint document fails with client-side error on pre-4.2 server"
    runOnRequirements:
      - maxServerVersion: "4.0.99"
    operations:
      - object: *collection0
        name: updateOne
        arguments:
          filter: *filter
          update: *update
          hint: { _id: 1 }
        expectError:
          isClientError: true
    expectEvents: *noEvents
  - description: "Unacknowledged updateOne with hint string on 4.2+ server"
    runOnRequirements:
      - minServerVersion: "4.2.0"
    operations:
      - object: *collection0
        name: updateOne
        arguments:
          filter: *filter
          update: *update
          hint: _id_
        expectResult: &unacknowledgedResult { $$unsetOrMatches: { acknowledged: { $$unsetOrMatches: false } } }
    expectEvents: &events
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                update: *collection0Name
                updates:
                  - q: *filter
                    u: *update
                    multi: { $$unsetOrMatches: false }
                    upsert: { $$unsetOrMatches: false }
                    hint: { $$type: [ string, object ] }
                writeConcern:
                  { w: 0 }
  - description: "Unacknowledged updateOne with hint document on 4.2+ server"
    runOnRequirements:
      - minServerVersion: "4.2.0"
    operations:
      - object: *collection0
        name: updateOne
        arguments:
          filter: *filter
          update: *update
          hint: { _id: 1 }
        expectResult: *unacknowledgedResult
    expectEvents: *events

# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/updateOne-hint.yml

# This file was created automatically using mongodb-spec-converter.
# Please review the generated file, then remove this notice.
description: updateOne-hint
schemaVersion: '1.0'
runOnRequirements:
  - minServerVersion: 4.2.0
createEntities:
  - client:
      id: &client0 client0
      observeEvents:
        - commandStartedEvent
  - database:
      id: &database0 database0
      client: client0
      databaseName: &database_name crud-v2
  - collection:
      id: &collection0 collection0
      database: database0
      collectionName: &collection_name test_updateone_hint
initialData:
  - collectionName: *collection_name
    databaseName: *database_name
    documents:
      - _id: 1
        x: 11
      - _id: 2
        x: 22
tests:
  - description: 'UpdateOne with hint string'
    operations:
      - object: *collection0
        name: updateOne
        arguments:
          filter: &filter
            _id:
              $gt: 1
          update: &update
            $inc:
              x: 1
          hint: _id_
        expectResult: &result
          matchedCount: 1
          modifiedCount: 1
          upsertedCount: 0
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                update: *collection_name
                updates:
                  - q: *filter
                    u: *update
                    hint: _id_
                    multi: { $$unsetOrMatches: false }
                    upsert: { $$unsetOrMatches: false }
    outcome: &outcome
      - collectionName: *collection_name
        databaseName: *database_name
        documents:
          - _id: 1
            x: 11
          - _id: 2
            x: 23
  - description: 'UpdateOne with hint document'
    operations:
      - object: *collection0
        name: updateOne
        arguments:
          filter: *filter
          update: *update
          hint:
            _id: 1
        expectResult: *result
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                update: *collection_name
                updates:
                  - q: *filter
                    u: *update
                    hint:
                      _id: 1
                    multi: { $$unsetOrMatches: false }
                    upsert: { $$unsetOrMatches: false }
    outcome: *outcome

# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/updateOne-let.yml

description: "updateOne-let"
schemaVersion: "1.0"
createEntities:
  - client:
      id: &client0 client0
      observeEvents: [ commandStartedEvent ]
  - database:
      id: &database0 database0
      client: *client0
      databaseName: &database0Name crud-tests
  - collection:
      id: &collection0 collection0
      database: *database0
      collectionName: &collection0Name coll0
initialData: &initialData
  - collectionName: *collection0Name
    databaseName: *database0Name
    documents:
      - { _id: 1 }
      - { _id: 2 }
tests:
  - description: "UpdateOne with let option"
    runOnRequirements:
      - minServerVersion: "5.0"
    operations:
      - name: updateOne
        object: *collection0
        arguments:
          filter: &filter
            $expr:
              $eq: [ "$_id", "$$id" ]
          update: &update
            - $set: { x: "$$x" }
          let: &let0
            id: 1
            x: "foo"
        expectResult:
          matchedCount: 1
          modifiedCount: 1
          upsertedCount: 0
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                update: *collection0Name
                updates:
                  - q: *filter
                    u: *update
                    multi: { $$unsetOrMatches: false }
                    upsert: { $$unsetOrMatches: false }
                let: *let0
    outcome:
      - collectionName: *collection0Name
        databaseName: *database0Name
        documents:
          - { _id: 1, x: "foo" }
          - { _id: 2 }
  - description: "UpdateOne with let option unsupported (server-side error)"
    runOnRequirements:
      - minServerVersion: "4.2.0"
        maxServerVersion: "4.4.99"
    operations:
      - name: updateOne
        object:
          *collection0
        arguments:
          filter: &filter1
            _id: 1
          update: &update1
            - $set: { x: "$$x" }
          let: &let1
            x: foo
        expectError:
          errorContains: "'update.let' is an unknown field"
          isClientError: false
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                update: *collection0Name
                updates:
                  - q: *filter1
                    u: *update1
                    multi: { $$unsetOrMatches: false }
                    upsert: { $$unsetOrMatches: false }
                let: *let1
    outcome:
      - collectionName: *collection0Name
        databaseName: *database0Name
        documents:
          - { _id: 1 }
          - { _id: 2 }

# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/updateOne-validation.yml

description: "updateOne-validation"
schemaVersion: "1.0"
createEntities:
  - client:
      id: &client0 client0
      observeEvents: [ commandStartedEvent ]
  - database:
      id: &database0 database0
      client: *client0
      databaseName: &database0Name crud-tests
  - collection:
      id: &collection0 collection0
      database: *database0
      collectionName: &collection0Name coll0
initialData: &initialData
  - collectionName: *collection0Name
    databaseName: *database0Name
    documents:
      - { _id: 1, x: 11 }
tests:
  - description: "UpdateOne requires atomic modifiers"
    operations:
      - name: updateOne
        object: *collection0
        arguments:
          filter: { _id: 1 }
          update: { x: 22 }
        expectError:
          isClientError: true
    expectEvents:
      - client: *client0
        events: []
    outcome: *initialData

# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/crud_unified/updateWithPipelines.yml

# This file was created automatically using mongodb-spec-converter.
# Please review the generated file, then remove this notice.
description: updateWithPipelines
schemaVersion: '1.0'
runOnRequirements:
  - minServerVersion: 4.1.11
createEntities:
  - client:
      id: &client0 client0
      observeEvents:
        - commandStartedEvent
  - database:
      id: &database0 database0
      client: client0
      databaseName: &database_name crud-tests
  - collection:
      id: &collection0 collection0
      database: database0
      collectionName: &collection_name test
initialData:
  - collectionName: *collection_name
    databaseName: *database_name
    documents:
      - _id: 1
        x: 1
        'y': 1
        t:
          u:
            v: 1
      - _id: 2
        x: 2
        'y': 1
tests:
  - description: 'UpdateOne using pipelines'
    operations:
      - object: *collection0
        name: updateOne
        arguments:
          filter:
            _id: 1
          update:
            - $replaceRoot:
                newRoot: $t
            - $addFields:
                foo: 1
        expectResult:
          matchedCount: 1
          modifiedCount: 1
          upsertedCount: 0
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                update: *collection_name
                updates:
                  - q:
                      _id: 1
                    u:
                      - { $replaceRoot: { newRoot: $t } }
                      - { $addFields: { foo: 1 } }
                    multi: { $$unsetOrMatches: false }
                    upsert: { $$unsetOrMatches: false }
              commandName: update
              databaseName: *database_name
    outcome:
      - collectionName: *collection_name
        databaseName: *database_name
        documents:
          - _id: 1
            u:
              v: 1
            foo: 1
          - _id: 2
            x: 2
            'y': 1
  - description: 'UpdateMany using pipelines'
    operations:
      - object: *collection0
        name: updateMany
        arguments:
          filter: { }
          update:
            - $project:
                x: 1
            - $addFields:
                foo: 1
        expectResult:
          matchedCount: 2
          modifiedCount: 2
          upsertedCount: 0
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                update: *collection_name
                updates:
                  - q: { }
                    u:
                      - { $project: { x: 1 } }
                      - { $addFields: { foo: 1 } }
                    multi: true
                    upsert: { $$unsetOrMatches: false }
              commandName: update
              databaseName: *database_name
    outcome:
      - collectionName: *collection_name
        databaseName: *database_name
        documents:
          - _id: 1
            x: 1
            foo: 1
          - _id: 2
            x: 2
            foo: 1
  - description: 'FindOneAndUpdate using
pipelines' operations: - object: *collection0 name: findOneAndUpdate arguments: filter: _id: 1 update: - $project: x: 1 - $addFields: foo: 1 expectEvents: - client: *client0 events: - commandStartedEvent: command: findAndModify: *collection_name update: - $project: x: 1 - $addFields: foo: 1 commandName: findAndModify databaseName: *database_name outcome: - collectionName: *collection_name databaseName: *database_name documents: - _id: 1 x: 1 foo: 1 - _id: 2 x: 2 'y': 1 - description: 'UpdateOne in bulk write using pipelines' operations: - object: *collection0 name: bulkWrite arguments: requests: - updateOne: filter: _id: 1 update: - $replaceRoot: newRoot: $t - $addFields: foo: 1 expectResult: matchedCount: 1 modifiedCount: 1 upsertedCount: 0 expectEvents: - client: *client0 events: - commandStartedEvent: command: update: *collection_name updates: - q: _id: 1 u: - { $replaceRoot: { newRoot: $t } } - { $addFields: { foo: 1 } } multi: { $$unsetOrMatches: false } upsert: { $$unsetOrMatches: false } commandName: update databaseName: *database_name outcome: - collectionName: *collection_name databaseName: *database_name documents: - _id: 1 u: v: 1 foo: 1 - _id: 2 x: 2 'y': 1 - description: 'UpdateMany in bulk write using pipelines' operations: - object: *collection0 name: bulkWrite arguments: requests: - updateMany: filter: { } update: - $project: x: 1 - $addFields: foo: 1 expectResult: matchedCount: 2 modifiedCount: 2 upsertedCount: 0 expectEvents: - client: *client0 events: - commandStartedEvent: command: update: *collection_name updates: - q: { } u: - { $project: { x: 1 } } - { $addFields: { foo: 1 } } multi: true upsert: { $$unsetOrMatches: false } commandName: update databaseName: *database_name outcome: - collectionName: *collection_name databaseName: *database_name documents: - _id: 1 x: 1 foo: 1 - _id: 2 x: 2 foo: 1 mongo-ruby-driver-2.21.3/spec/spec_tests/data/gridfs/000077500000000000000000000000001505113246500224655ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/gridfs/delete.yml000066400000000000000000000112741505113246500244570ustar00rootroot00000000000000data: files: - _id: { "$oid" : "000000000000000000000001" } length: 0 chunkSize: 4 uploadDate: { "$date" : "1970-01-01T00:00:00.000Z" } md5: "d41d8cd98f00b204e9800998ecf8427e" filename: "length-0" contentType: "application/octet-stream" aliases: [] metadata: {} - _id: { "$oid" : "000000000000000000000002" } length: 0 chunkSize: 4 uploadDate: { "$date" : "1970-01-01T00:00:00.000Z" } md5: "d41d8cd98f00b204e9800998ecf8427e" filename: "length-0-with-empty-chunk" contentType: "application/octet-stream" aliases: [] metadata: {} - _id: { "$oid" : "000000000000000000000003" } length: 2 chunkSize: 4 uploadDate: { "$date" : "1970-01-01T00:00:00.000Z" } md5: "c700ed4fdb1d27055aa3faa2c2432283" filename: "length-2" contentType: "application/octet-stream" aliases: [] metadata: {} - _id: { "$oid" : "000000000000000000000004" } length: 8 chunkSize: 4 uploadDate: { "$date" : "1970-01-01T00:00:00.000Z" } md5: "dd254cdc958e53abaa67da9f797125f5" filename: "length-8" contentType: "application/octet-stream" aliases: [] metadata: {} chunks: - { _id : { "$oid" : "000000000000000000000001" }, files_id : { "$oid" : "000000000000000000000002" }, n : 0, data : { $hex : "" } } - { _id : { "$oid" : "000000000000000000000002" }, files_id : { "$oid" : "000000000000000000000003" }, n : 0, data : { $hex : "1122" } } - { _id : { "$oid" : "000000000000000000000003" }, files_id : { "$oid" : "000000000000000000000004" }, n : 0, data : { $hex 
: "11223344" } } - { _id : { "$oid" : "000000000000000000000004" }, files_id : { "$oid" : "000000000000000000000004" }, n : 1, data : { $hex : "55667788" } } tests: - description: "Delete when length is 0" act: operation: delete arguments: id: { "$oid" : "000000000000000000000001" } assert: result: void data: - { delete : "expected.files", deletes : [ { q : { _id : { "$oid" : "000000000000000000000001" } }, limit : 1 } ] } - description: "Delete when length is 0 and there is one extra empty chunk" act: operation: delete arguments: id: { "$oid" : "000000000000000000000002" } assert: result: void data: - { delete : "expected.files", deletes : [ { q : { _id : { "$oid" : "000000000000000000000002" } }, limit : 1 } ] } - { delete : "expected.chunks", deletes : [ { q : { files_id : { "$oid" : "000000000000000000000002" } }, limit : 0 } ] } - description: "Delete when length is 8" act: operation: delete arguments: id: { "$oid" : "000000000000000000000004" } assert: result: void data: - { delete : "expected.files", deletes : [ { q : { _id : { "$oid" : "000000000000000000000004" } }, limit : 1 } ] } - { delete : "expected.chunks", deletes : [ { q : { files_id : { "$oid" : "000000000000000000000004" } }, limit : 0 } ] } - description: "Delete when files entry does not exist" act: operation: delete arguments: id: { "$oid" : "000000000000000000000000" } assert: error: "FileNotFound" - description: "Delete when files entry does not exist and there are orphaned chunks" arrange: data: - { delete : "fs.files", deletes : [ { q : { _id : { "$oid" : "000000000000000000000004" } }, limit : 1 } ] } act: operation: delete arguments: id: { "$oid" : "000000000000000000000004" } assert: error: "FileNotFound" data: - { delete : "expected.files", deletes : [ { q : { _id : { "$oid" : "000000000000000000000004" } }, limit : 1 } ] } mongo-ruby-driver-2.21.3/spec/spec_tests/data/gridfs/download.yml000066400000000000000000000165541505113246500250320ustar00rootroot00000000000000data: files: - _id: { "$oid" : "000000000000000000000001" } length: 0 chunkSize: 4 uploadDate: { "$date" : "1970-01-01T00:00:00.000Z" } md5: "d41d8cd98f00b204e9800998ecf8427e" filename: "length-0" contentType: "application/octet-stream" aliases: [] metadata: {} - _id: { "$oid" : "000000000000000000000002" } length: 0 chunkSize: 4 uploadDate: { "$date" : "1970-01-01T00:00:00.000Z" } md5: "d41d8cd98f00b204e9800998ecf8427e" filename: "length-0-with-empty-chunk" contentType: "application/octet-stream" aliases: [] metadata: {} - _id: { "$oid" : "000000000000000000000003" } length: 2 chunkSize: 4 uploadDate: { "$date" : "1970-01-01T00:00:00.000Z" } md5: "c700ed4fdb1d27055aa3faa2c2432283" filename: "length-2" contentType: "application/octet-stream" aliases: [] metadata: {} - _id: { "$oid" : "000000000000000000000004" } length: 8 chunkSize: 4 uploadDate: { "$date" : "1970-01-01T00:00:00.000Z" } md5: "dd254cdc958e53abaa67da9f797125f5" filename: "length-8" contentType: "application/octet-stream" aliases: [] metadata: {} - _id: { "$oid" : "000000000000000000000005" } length: 10 chunkSize: 4 uploadDate: { "$date" : "1970-01-01T00:00:00.000Z" } md5: "57d83cd477bfb1ccd975ab33d827a92b" filename: "length-10" contentType: "application/octet-stream" aliases: [] metadata: {} - _id: { "$oid" : "000000000000000000000006" } length: 2 chunkSize: 4 uploadDate: { "$date" : "1970-01-01T00:00:00.000Z" } md5: "c700ed4fdb1d27055aa3faa2c2432283" contentType: "application/octet-stream" aliases: [] metadata: {} chunks: - { _id : { "$oid" : "000000000000000000000001" }, files_id : 
{ "$oid" : "000000000000000000000002" }, n : 0, data : { $hex : "" } } - { _id : { "$oid" : "000000000000000000000002" }, files_id : { "$oid" : "000000000000000000000003" }, n : 0, data : { $hex : "1122" } } - { _id : { "$oid" : "000000000000000000000003" }, files_id : { "$oid" : "000000000000000000000004" }, n : 0, data : { $hex : "11223344" } } - { _id : { "$oid" : "000000000000000000000004" }, files_id : { "$oid" : "000000000000000000000004" }, n : 1, data : { $hex : "55667788" } } - { _id : { "$oid" : "000000000000000000000005" }, files_id : { "$oid" : "000000000000000000000005" }, n : 0, data : { $hex : "11223344" } } - { _id : { "$oid" : "000000000000000000000006" }, files_id : { "$oid" : "000000000000000000000005" }, n : 1, data : { $hex : "55667788" } } - { _id : { "$oid" : "000000000000000000000007" }, files_id : { "$oid" : "000000000000000000000005" }, n : 2, data : { $hex : "99aa" } } - { _id : { "$oid" : "000000000000000000000008" }, files_id : { "$oid" : "000000000000000000000006" }, n : 0, data : { $hex : "1122" } } tests: - description: "Download when length is zero" act: operation: download arguments: id: { "$oid" : "000000000000000000000001" } options: { } assert: result: { $hex : "" } - description: "Download when length is zero and there is one empty chunk" act: operation: download arguments: id: { "$oid" : "000000000000000000000002" } options: { } assert: result: { $hex : "" } - description: "Download when there is one chunk" act: operation: download arguments: id: { "$oid" : "000000000000000000000003" } options: { } assert: result: { $hex : "1122" } - description: "Download when there are two chunks" act: operation: download arguments: id: { "$oid" : "000000000000000000000004" } options: { } assert: result: { $hex : "1122334455667788" } - description: "Download when there are three chunks" act: operation: download arguments: id: { "$oid" : "000000000000000000000005" } options: { } assert: result: { $hex : "112233445566778899aa" } - description: "Download when files entry does not exist" act: operation: download arguments: id: { "$oid" : "000000000000000000000000" } options: { } assert: error: "FileNotFound" - description: "Download when an intermediate chunk is missing" arrange: data: - { delete : "fs.chunks", deletes : [ { q : { files_id : { "$oid" : "000000000000000000000005" }, n : 1 }, limit : 1 } ] } act: operation: download arguments: id: { "$oid" : "000000000000000000000005" } assert: error: "ChunkIsMissing" - description: "Download when final chunk is missing" arrange: data: - { delete : "fs.chunks", deletes : [ { q : { files_id : { "$oid" : "000000000000000000000005" }, n : 1 }, limit : 1 } ] } act: operation: download arguments: id: { "$oid" : "000000000000000000000005" } assert: error: "ChunkIsMissing" - description: "Download when an intermediate chunk is the wrong size" arrange: data: - { update : "fs.chunks", updates : [ { q : { files_id : { "$oid" : "000000000000000000000005" }, n : 1 }, u : { $set : { data : { $hex : "556677" } } } }, { q : { files_id : { "$oid" : "000000000000000000000005" }, n : 2 }, u : { $set : { data : { $hex : "8899aa" } } } } ] } act: operation: download arguments: id: { "$oid" : "000000000000000000000005" } assert: error: "ChunkIsWrongSize" - description: "Download when final chunk is the wrong size" arrange: data: - { update : "fs.chunks", updates : [ { q : { files_id : { "$oid" : "000000000000000000000005" }, n : 2 }, u : { $set : { data : { $hex : "99" } } } } ] } act: operation: download arguments: id: { "$oid" : 
"000000000000000000000005" } assert: error: "ChunkIsWrongSize" - description: "Download legacy file with no name" act: operation: download arguments: id: { "$oid" : "000000000000000000000006" } options: { } assert: result: { $hex : "1122" } mongo-ruby-driver-2.21.3/spec/spec_tests/data/gridfs/download_by_name.yml000066400000000000000000000100551505113246500265120ustar00rootroot00000000000000data: files: - _id: { "$oid" : "000000000000000000000001" } length: 1 chunkSize: 4 uploadDate: { "$date" : "1970-01-01T00:00:00.000Z" } md5: "47ed733b8d10be225eceba344d533586" filename: "abc" contentType: "application/octet-stream" aliases: [] metadata: {} - _id: { "$oid" : "000000000000000000000002" } length: 1 chunkSize: 4 uploadDate: { "$date" : "1970-01-02T00:00:00.000Z" } md5: "b15835f133ff2e27c7cb28117bfae8f4" filename: "abc" contentType: "application/octet-stream" aliases: [] metadata: {} - _id: { "$oid" : "000000000000000000000003" } length: 1 chunkSize: 4 uploadDate: { "$date" : "1970-01-03T00:00:00.000Z" } md5: "eccbc87e4b5ce2fe28308fd9f2a7baf3" filename: "abc" contentType: "application/octet-stream" aliases: [] metadata: {} - _id: { "$oid" : "000000000000000000000004" } length: 1 chunkSize: 4 uploadDate: { "$date" : "1970-01-04T00:00:00.000Z" } md5: "f623e75af30e62bbd73d6df5b50bb7b5" filename: "abc" contentType: "application/octet-stream" aliases: [] metadata: {} - _id: { "$oid" : "000000000000000000000005" } length: 1 chunkSize: 4 uploadDate: { "$date" : "1970-01-05T00:00:00.000Z" } md5: "4c614360da93c0a041b22e537de151eb" filename: "abc" contentType: "application/octet-stream" aliases: [] metadata: {} chunks: - { _id : { "$oid" : "000000000000000000000001" }, files_id : { "$oid" : "000000000000000000000001" }, n : 0, data : { $hex : "11" } } - { _id : { "$oid" : "000000000000000000000002" }, files_id : { "$oid" : "000000000000000000000002" }, n : 0, data : { $hex : "22" } } - { _id : { "$oid" : "000000000000000000000003" }, files_id : { "$oid" : "000000000000000000000003" }, n : 0, data : { $hex : "33" } } - { _id : { "$oid" : "000000000000000000000004" }, files_id : { "$oid" : "000000000000000000000004" }, n : 0, data : { $hex : "44" } } - { _id : { "$oid" : "000000000000000000000005" }, files_id : { "$oid" : "000000000000000000000005" }, n : 0, data : { $hex : "55" } } tests: - description: "Download_by_name when revision is 0" act: operation: download_by_name arguments: filename: "abc" options: { revision : 0 } assert: result: { $hex : "11" } - description: "Download_by_name when revision is 1" act: operation: download_by_name arguments: filename: "abc" options: { revision : 1 } assert: result: { $hex : "22" } - description: "Download_by_name when revision is -2" act: operation: download_by_name arguments: filename: "abc" options: { revision : -2 } assert: result: { $hex : "44" } - description: "Download_by_name when revision is -1" act: operation: download_by_name arguments: filename: "abc" options: { revision : -1 } assert: result: { $hex : "55" } - description: "Download_by_name when files entry does not exist" act: operation: download_by_name arguments: filename: "xyz" assert: error: "FileNotFound" - description: "Download_by_name when revision does not exist" act: operation: download_by_name arguments: filename: "abc" options: { revision : 999 } assert: error: "RevisionNotFound" mongo-ruby-driver-2.21.3/spec/spec_tests/data/gridfs/upload.yml000066400000000000000000000145101505113246500244750ustar00rootroot00000000000000data: files: [] chunks: [] tests: - description: "Upload when 
length is 0" act: operation: upload arguments: filename: "filename" source: { $hex : "" } options: { chunkSizeBytes : 4 } assert: result: "&result" data: - { insert : "expected.files", documents : [ { _id : "*result", length : 0, chunkSize : 4, uploadDate : "*actual", md5 : "d41d8cd98f00b204e9800998ecf8427e", filename : "filename" } ] } - description: "Upload when length is 1" act: operation: upload arguments: filename: "filename" source: { $hex : "11" } options: { chunkSizeBytes : 4 } assert: result: "&result" data: - { insert : "expected.files", documents : [ { _id : "*result", length : 1, chunkSize : 4, uploadDate : "*actual", md5 : "47ed733b8d10be225eceba344d533586", filename : "filename" } ] } - { insert : "expected.chunks", documents : [ { _id : "*actual", files_id : "*result", n : 0, data : { $hex : "11" } } ] } - description: "Upload when length is 3" act: operation: upload arguments: filename: "filename" source: { $hex : "112233" } options: { chunkSizeBytes : 4 } assert: result: "&result" data: - { insert : "expected.files", documents : [ { _id : "*result", length : 3, chunkSize : 4, uploadDate : "*actual", md5 : "bafae3a174ab91fc70db7a6aa50f4f52", filename : "filename" } ] } - { insert : "expected.chunks", documents : [ { _id : "*actual", files_id : "*result", n : 0, data : { $hex : "112233" } } ] } - description: "Upload when length is 4" act: operation: upload arguments: filename: "filename" source: { $hex : "11223344" } options: { chunkSizeBytes : 4 } assert: result: "&result" data: - { insert : "expected.files", documents : [ { _id : "*result", length : 4, chunkSize : 4, uploadDate : "*actual", md5 : "7e7c77cff5705d1f7574a25ef6662117", filename : "filename" } ] } - { insert : "expected.chunks", documents : [ { _id : "*actual", files_id : "*result", n : 0, data : { $hex : "11223344" } } ] } - description: "Upload when length is 5" act: operation: upload arguments: filename: "filename" source: { $hex : "1122334455" } options: { chunkSizeBytes : 4 } assert: result: "&result" data: - { insert : "expected.files", documents : [ { _id : "*result", length : 5, chunkSize : 4, uploadDate : "*actual", md5 : "283d4fea5dded59cf837d3047328f5af", filename : "filename" } ] } - { insert : "expected.chunks", documents : [ { _id : "*actual", files_id : "*result", n : 0, data : { $hex : "11223344" } }, { _id : "*actual", files_id : "*result", n : 1, data : { $hex : "55" } } ] } - description: "Upload when length is 8" act: operation: upload arguments: filename: "filename" source: { $hex : "1122334455667788" } options: { chunkSizeBytes : 4 } assert: result: "&result" data: - { insert : "expected.files", documents : [ { _id : "*result", length : 8, chunkSize : 4, uploadDate : "*actual", md5 : "dd254cdc958e53abaa67da9f797125f5", filename : "filename" } ] } - { insert : "expected.chunks", documents : [ { _id : "*actual", files_id : "*result", n : 0, data : { $hex : "11223344" } }, { _id : "*actual", files_id : "*result", n : 1, data : { $hex : "55667788" } } ] } - description: "Upload when contentType is provided" act: operation: upload arguments: filename: "filename" source: { $hex : "11" } options: { chunkSizeBytes : 4, contentType : "image/jpeg" } assert: result: "&result" data: - { insert : "expected.files", documents : [ { _id : "*result", length : 1, chunkSize : 4, uploadDate : "*actual", md5 : "47ed733b8d10be225eceba344d533586", filename : "filename", contentType : "image/jpeg" } ] } - { insert : "expected.chunks", documents : [ { _id : "*actual", files_id : "*result", n : 0, data : { $hex : 
"11" } } ] } - description: "Upload when metadata is provided" act: operation: upload arguments: filename: "filename" source: { $hex : "11" } options: chunkSizeBytes: 4 metadata: { x : 1 } assert: result: "&result" data: - { insert : "expected.files", documents : [ { _id : "*result", length : 1, chunkSize : 4, uploadDate : "*actual", md5 : "47ed733b8d10be225eceba344d533586", filename : "filename", metadata : { x : 1 } } ] } - { insert : "expected.chunks", documents : [ { _id : "*actual", files_id : "*result", n : 0, data : { $hex : "11" } } ] } mongo-ruby-driver-2.21.3/spec/spec_tests/data/gridfs_unified/000077500000000000000000000000001505113246500241705ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/gridfs_unified/delete.yml000066400000000000000000000142161505113246500261610ustar00rootroot00000000000000description: "gridfs-delete" schemaVersion: "1.0" createEntities: - client: id: &client0 client0 - database: id: &database0 database0 client: *client0 databaseName: &database0Name gridfs-tests - bucket: id: &bucket0 bucket0 database: *database0 - collection: id: &bucket0_files_collection bucket0_files_collection database: *database0 collectionName: &bucket0_files_collectionName fs.files - collection: id: &bucket0_chunks_collection bucket0_chunks_collection database: *database0 collectionName: &bucket0_chunks_collectionName fs.chunks initialData: - collectionName: *bucket0_files_collectionName databaseName: *database0Name documents: - &file1 _id: { "$oid": "000000000000000000000001" } length: 0 chunkSize: 4 uploadDate: { "$date": "1970-01-01T00:00:00.000Z" } md5: "d41d8cd98f00b204e9800998ecf8427e" filename: "length-0" contentType: "application/octet-stream" aliases: [] metadata: {} - &file2 _id: { "$oid": "000000000000000000000002" } length: 0 chunkSize: 4 uploadDate: { "$date": "1970-01-01T00:00:00.000Z" } md5: "d41d8cd98f00b204e9800998ecf8427e" filename: "length-0-with-empty-chunk" contentType: "application/octet-stream" aliases: [] metadata: {} - &file3 _id: { "$oid": "000000000000000000000003" } length: 2 chunkSize: 4 uploadDate: { "$date": "1970-01-01T00:00:00.000Z" } md5: "c700ed4fdb1d27055aa3faa2c2432283" filename: "length-2" contentType: "application/octet-stream" aliases: [] metadata: {} - &file4 _id: { "$oid": "000000000000000000000004" } length: 8 chunkSize: 4 uploadDate: { "$date": "1970-01-01T00:00:00.000Z" } md5: "dd254cdc958e53abaa67da9f797125f5" filename: "length-8" contentType: "application/octet-stream" aliases: [] metadata: {} - collectionName: *bucket0_chunks_collectionName databaseName: *database0Name documents: - &file2_chunk0 _id: { "$oid": "000000000000000000000001" } files_id: { "$oid": "000000000000000000000002" } n: 0 data: { "$binary": { "base64": "", "subType": "00" } } - &file3_chunk0 _id: { "$oid": "000000000000000000000002" } files_id: { "$oid": "000000000000000000000003" } n: 0 data: { "$binary": { "base64": "ESI=", "subType": "00" } } # hex: 1122 - &file4_chunk0 _id: { "$oid": "000000000000000000000003" } files_id: { "$oid": "000000000000000000000004" } n: 0 data: { "$binary": { "base64": "ESIzRA==", "subType": "00" } } # hex: 11223344 - &file4_chunk1 _id: { "$oid": "000000000000000000000004" } files_id: { "$oid": "000000000000000000000004" } n: 1 data: { "$binary": { "base64": "VWZ3iA==", "subType": "00" } } # hex: 55667788 tests: - description: "delete when length is 0" operations: - name: delete object: *bucket0 arguments: id: { $oid: "000000000000000000000001" } outcome: - collectionName: *bucket0_files_collectionName 
        databaseName: *database0Name
        documents:
          - *file2
          - *file3
          - *file4
      - collectionName: *bucket0_chunks_collectionName
        databaseName: *database0Name
        documents:
          - *file2_chunk0
          - *file3_chunk0
          - *file4_chunk0
          - *file4_chunk1
  - description: "delete when length is 0 and there is one extra empty chunk"
    operations:
      - name: delete
        object: *bucket0
        arguments:
          id: { $oid: "000000000000000000000002" }
    outcome:
      - collectionName: *bucket0_files_collectionName
        databaseName: *database0Name
        documents:
          - *file1
          - *file3
          - *file4
      - collectionName: *bucket0_chunks_collectionName
        databaseName: *database0Name
        documents:
          - *file3_chunk0
          - *file4_chunk0
          - *file4_chunk1
  - description: "delete when length is 8"
    operations:
      - name: delete
        object: *bucket0
        arguments:
          id: { $oid: "000000000000000000000004" }
    outcome:
      - collectionName: *bucket0_files_collectionName
        databaseName: *database0Name
        documents:
          - *file1
          - *file2
          - *file3
      - collectionName: *bucket0_chunks_collectionName
        databaseName: *database0Name
        documents:
          - *file2_chunk0
          - *file3_chunk0
  - description: "delete when files entry does not exist"
    operations:
      - name: delete
        object: *bucket0
        arguments:
          id: { $oid: "000000000000000000000000" }
        expectError: { isError: true } # FileNotFound
    outcome:
      - collectionName: *bucket0_files_collectionName
        databaseName: *database0Name
        documents:
          - *file1
          - *file2
          - *file3
          - *file4
      - collectionName: *bucket0_chunks_collectionName
        databaseName: *database0Name
        documents:
          - *file2_chunk0
          - *file3_chunk0
          - *file4_chunk0
          - *file4_chunk1
  - description: "delete when files entry does not exist and there are orphaned chunks"
    operations:
      - name: deleteOne
        object: *bucket0_files_collection
        arguments:
          filter:
            _id: { $oid: "000000000000000000000004" }
        expectResult:
          deletedCount: 1
      - name: delete
        object: *bucket0
        arguments:
          id: { $oid: "000000000000000000000004" }
        expectError: { isError: true } # FileNotFound
    outcome:
      - collectionName: *bucket0_files_collectionName
        databaseName: *database0Name
        documents:
          - *file1
          - *file2
          - *file3
      # Orphaned chunks are still deleted even if the fs.files document does not exist
      - collectionName: *bucket0_chunks_collectionName
        databaseName: *database0Name
        documents:
          - *file2_chunk0
          - *file3_chunk0
mongo-ruby-driver-2.21.3/spec/spec_tests/data/gridfs_unified/download.yml
description: "gridfs-download"

schemaVersion: "1.0"

createEntities:
  - client:
      id: &client0 client0
  - database:
      id: &database0 database0
      client: *client0
      databaseName: &database0Name gridfs-tests
  - bucket:
      id: &bucket0 bucket0
      database: *database0
  - collection:
      id: &bucket0_files_collection bucket0_files_collection
      database: *database0
      collectionName: &bucket0_files_collectionName fs.files
  - collection:
      id: &bucket0_chunks_collection bucket0_chunks_collection
      database: *database0
      collectionName: &bucket0_chunks_collectionName fs.chunks

initialData:
  - collectionName: *bucket0_files_collectionName
    databaseName: *database0Name
    documents:
      - _id: { "$oid": "000000000000000000000001" }
        length: 0
        chunkSize: 4
        uploadDate: { "$date": "1970-01-01T00:00:00.000Z" }
        md5: "d41d8cd98f00b204e9800998ecf8427e"
        filename: "length-0"
        contentType: "application/octet-stream"
        aliases: []
        metadata: {}
      - _id: { "$oid": "000000000000000000000002" }
        length: 0
        chunkSize: 4
        uploadDate: { "$date": "1970-01-01T00:00:00.000Z" }
        md5: "d41d8cd98f00b204e9800998ecf8427e"
        filename: "length-0-with-empty-chunk"
        contentType: "application/octet-stream"
        aliases: []
        metadata: {}
      - _id: { "$oid": "000000000000000000000003" }
        length: 2
chunkSize: 4 uploadDate: { "$date": "1970-01-01T00:00:00.000Z" } md5: "c700ed4fdb1d27055aa3faa2c2432283" filename: "length-2" contentType: "application/octet-stream" aliases: [] metadata: {} - _id: { "$oid": "000000000000000000000004" } length: 8 chunkSize: 4 uploadDate: { "$date": "1970-01-01T00:00:00.000Z" } md5: "dd254cdc958e53abaa67da9f797125f5" filename: "length-8" contentType: "application/octet-stream" aliases: [] metadata: {} - _id: { "$oid": "000000000000000000000005" } length: 10 chunkSize: 4 uploadDate: { "$date": "1970-01-01T00:00:00.000Z" } md5: "57d83cd477bfb1ccd975ab33d827a92b" filename: "length-10" contentType: "application/octet-stream" aliases: [] metadata: {} - _id: { "$oid": "000000000000000000000006" } length: 2 chunkSize: 4 uploadDate: { "$date": "1970-01-01T00:00:00.000Z" } md5: "c700ed4fdb1d27055aa3faa2c2432283" # filename is intentionally omitted contentType: "application/octet-stream" aliases: [] metadata: {} - collectionName: *bucket0_chunks_collectionName databaseName: *database0Name documents: - _id: { "$oid": "000000000000000000000001" } files_id: { "$oid": "000000000000000000000002" } n: 0 data: { "$binary": { "base64": "", "subType": "00" } } - _id: { "$oid": "000000000000000000000002" } files_id: { "$oid": "000000000000000000000003" } n: 0 data: { "$binary": { "base64": "ESI=", "subType": "00" } } # hex: 1122 - _id: { "$oid": "000000000000000000000003" } files_id: { "$oid": "000000000000000000000004" } n: 0 data: { "$binary": { "base64": "ESIzRA==", "subType": "00" } } # hex: 11223344 - _id: { "$oid": "000000000000000000000004" } files_id: { "$oid": "000000000000000000000004" } n: 1 data: { "$binary": { "base64": "VWZ3iA==", "subType": "00" } } # hex: 55667788 - _id: { "$oid": "000000000000000000000005" } files_id: { "$oid": "000000000000000000000005" } n: 0 data: { "$binary": { "base64": "ESIzRA==", "subType": "00" } } # hex: 11223344 - _id: { "$oid": "000000000000000000000006" } files_id: { "$oid": "000000000000000000000005" } n: 1 data: { "$binary": { "base64": "VWZ3iA==", "subType": "00" } } # hex: 55667788 - _id: { "$oid": "000000000000000000000007" } files_id: { "$oid": "000000000000000000000005" } n: 2 data: { "$binary" : { "base64": "mao=", "subType" : "00" } } # hex: 99aa - _id: { "$oid": "000000000000000000000008" } files_id: { "$oid": "000000000000000000000006" } n: 0 data: { "$binary": { "base64": "ESI=", "subType": "00" } } # hex: 1122 tests: - description: "download when length is zero" operations: - name: download object: *bucket0 arguments: id: { $oid: "000000000000000000000001" } expectResult: { $$matchesHexBytes: "" } - description: "download when length is zero and there is one empty chunk" operations: - name: download object: *bucket0 arguments: id: { $oid: "000000000000000000000002" } expectResult: { $$matchesHexBytes: "" } - description: "download when there is one chunk" operations: - name: download object: *bucket0 arguments: id: { $oid: "000000000000000000000003" } expectResult: { $$matchesHexBytes: "1122" } - description: "download when there are two chunks" operations: - name: download object: *bucket0 arguments: id: { $oid: "000000000000000000000004" } expectResult: { $$matchesHexBytes: "1122334455667788" } - description: "download when there are three chunks" operations: - name: download object: *bucket0 arguments: id: { $oid: "000000000000000000000005" } expectResult: { $$matchesHexBytes: "112233445566778899aa" } - description: "download when files entry does not exist" operations: - name: download object: *bucket0 arguments: 
id: { $oid: "000000000000000000000000" } expectError: { isError: true } # FileNotFound - description: "download when an intermediate chunk is missing" operations: - name: deleteOne object: *bucket0_chunks_collection arguments: filter: files_id: { $oid: "000000000000000000000005" } n: 1 expectResult: deletedCount: 1 - name: download object: *bucket0 arguments: id: { $oid: "000000000000000000000005" } expectError: { isError: true } # ChunkIsMissing - description: "download when final chunk is missing" operations: - name: deleteOne object: *bucket0_chunks_collection arguments: filter: files_id: { $oid: "000000000000000000000005" } n: 2 expectResult: deletedCount: 1 - name: download object: *bucket0 arguments: id: { $oid: "000000000000000000000005" } expectError: { isError: true } # ChunkIsMissing - description: "download when an intermediate chunk is the wrong size" operations: - name: bulkWrite object: *bucket0_chunks_collection arguments: requests: - updateOne: filter: files_id: { $oid: "000000000000000000000005" } n: 1 update: $set: { data: { "$binary": { "base64": "VWZ3", "subType": "00" } } } # hex: 556677 - updateOne: filter: files_id: { $oid: "000000000000000000000005" } n: 2 update: $set: { data: { "$binary": { "base64": "iJmq", "subType": "00" } } } # hex: 8899aa expectResult: matchedCount: 2 modifiedCount: 2 - name: download object: *bucket0 arguments: id: { $oid: "000000000000000000000005" } expectError: { isError: true } # ChunkIsWrongSize - description: "download when final chunk is the wrong size" operations: - name: updateOne object: *bucket0_chunks_collection arguments: filter: files_id: { $oid: "000000000000000000000005" } n: 2 update: $set: { data: { "$binary": { "base64": "mQ==", "subType": "00" } } } # hex: 99 expectResult: matchedCount: 1 modifiedCount: 1 - name: download object: *bucket0 arguments: id: { $oid: "000000000000000000000005" } expectError: { isError: true } # ChunkIsWrongSize - description: "download legacy file with no name" operations: - name: download object: *bucket0 arguments: id: { $oid: "000000000000000000000006" } expectResult: { $$matchesHexBytes: "1122" } mongo-ruby-driver-2.21.3/spec/spec_tests/data/gridfs_unified/downloadByName.yml000066400000000000000000000122601505113246500276170ustar00rootroot00000000000000description: "gridfs-downloadByName" schemaVersion: "1.0" createEntities: - client: id: &client0 client0 - database: id: &database0 database0 client: *client0 databaseName: &database0Name gridfs-tests - bucket: id: &bucket0 bucket0 database: *database0 - collection: id: &bucket0_files_collection bucket0_files_collection database: *database0 collectionName: &bucket0_files_collectionName fs.files - collection: id: &bucket0_chunks_collection bucket0_chunks_collection database: *database0 collectionName: &bucket0_chunks_collectionName fs.chunks initialData: - collectionName: *bucket0_files_collectionName databaseName: *database0Name documents: - _id: { $oid: "000000000000000000000001" } length: 1 chunkSize: 4 uploadDate: { $date: "1970-01-01T00:00:00.000Z" } md5: "47ed733b8d10be225eceba344d533586" filename: "abc" contentType: "application/octet-stream" aliases: [] metadata: {} - _id: { $oid: "000000000000000000000002" } length: 1 chunkSize: 4 uploadDate: { $date: "1970-01-02T00:00:00.000Z" } md5: "b15835f133ff2e27c7cb28117bfae8f4" filename: "abc" contentType: "application/octet-stream" aliases: [] metadata: {} - _id: { $oid: "000000000000000000000003" } length: 1 chunkSize: 4 uploadDate: { $date: "1970-01-03T00:00:00.000Z" } md5: 
"eccbc87e4b5ce2fe28308fd9f2a7baf3" filename: "abc" contentType: "application/octet-stream" aliases: [] metadata: {} - _id: { $oid: "000000000000000000000004" } length: 1 chunkSize: 4 uploadDate: { $date: "1970-01-04T00:00:00.000Z" } md5: "f623e75af30e62bbd73d6df5b50bb7b5" filename: "abc" contentType: "application/octet-stream" aliases: [] metadata: {} - _id: { $oid: "000000000000000000000005" } length: 1 chunkSize: 4 uploadDate: { $date: "1970-01-05T00:00:00.000Z" } md5: "4c614360da93c0a041b22e537de151eb" filename: "abc" contentType: "application/octet-stream" aliases: [] metadata: {} - collectionName: *bucket0_chunks_collectionName databaseName: *database0Name documents: - _id: { $oid: "000000000000000000000001" } files_id: { $oid: "000000000000000000000001" } n: 0 data: { "$binary": { "base64": "EQ==", "subType": "00" } } # hex: 11 - _id: { $oid: "000000000000000000000002" } files_id: { $oid: "000000000000000000000002" } n: 0 data: { "$binary": { "base64": "Ig==", "subType": "00" } } # hex: 22 - _id: { $oid: "000000000000000000000003" } files_id: { $oid: "000000000000000000000003" } n: 0 data: { "$binary": { "base64": "Mw==", "subType": "00" } } # hex: 33 - _id: { $oid: "000000000000000000000004" } files_id: { $oid: "000000000000000000000004" } n: 0 data: { "$binary": { "base64": "RA==", "subType": "00" } } # hex: 44 - _id: { $oid: "000000000000000000000005" } files_id: { $oid: "000000000000000000000005" } n: 0 data: { "$binary": { "base64": "VQ==", "subType": "00" } } # hex: 55 tests: - description: "downloadByName defaults to latest revision (-1)" operations: - name: downloadByName object: *bucket0 arguments: filename: "abc" expectResult: { $$matchesHexBytes: "55" } - description: "downloadByName when revision is 0" operations: - name: downloadByName object: *bucket0 arguments: filename: "abc" revision: 0 expectResult: { $$matchesHexBytes: "11" } - description: "downloadByName when revision is 1" operations: - name: downloadByName object: *bucket0 arguments: filename: "abc" revision: 1 expectResult: { $$matchesHexBytes: "22" } - description: "downloadByName when revision is 2" operations: - name: downloadByName object: *bucket0 arguments: filename: "abc" revision: 2 expectResult: { $$matchesHexBytes: "33" } - description: "downloadByName when revision is -2" operations: - name: downloadByName object: *bucket0 arguments: filename: "abc" revision: -2 expectResult: { $$matchesHexBytes: "44" } - description: "downloadByName when revision is -1" operations: - name: downloadByName object: *bucket0 arguments: filename: "abc" revision: -1 expectResult: { $$matchesHexBytes: "55" } - description: "downloadByName when files entry does not exist" operations: - name: downloadByName object: *bucket0 arguments: filename: "xyz" expectError: { isError: true } # FileNotFound - description: "downloadByName when revision does not exist" operations: - name: downloadByName object: *bucket0 arguments: filename: "abc" revision: 999 expectError: { isError: true } # RevisionNotFound mongo-ruby-driver-2.21.3/spec/spec_tests/data/gridfs_unified/upload-disableMD5.yml000066400000000000000000000053711505113246500301140ustar00rootroot00000000000000description: "gridfs-upload-disableMD5" schemaVersion: "1.0" createEntities: - client: id: &client0 client0 - database: id: &database0 database0 client: *client0 databaseName: &database0Name gridfs-tests - bucket: id: &bucket0 bucket0 database: *database0 - collection: id: &bucket0_files_collection bucket0_files_collection database: *database0 collectionName: 
&bucket0_files_collectionName fs.files - collection: id: &bucket0_chunks_collection bucket0_chunks_collection database: *database0 collectionName: &bucket0_chunks_collectionName fs.chunks initialData: - collectionName: *bucket0_files_collectionName databaseName: *database0Name documents: [] - collectionName: *bucket0_chunks_collectionName databaseName: *database0Name documents: [] # Note: these tests utilize the transitional "disableMD5" option. Drivers that # do not support the option should skip this file. tests: - description: "upload when length is 0 sans MD5" operations: - name: upload object: *bucket0 arguments: filename: "filename" source: { $$hexBytes: "" } chunkSizeBytes: 4 disableMD5: true expectResult: { $$type: objectId } saveResultAsEntity: &uploadedObjectId uploadedObjectId - name: find object: *bucket0_files_collection arguments: filter: {} expectResult: - _id: { $$matchesEntity: *uploadedObjectId } length: 0 chunkSize: 4 uploadDate: { $$type: date } md5: { $$exists: false } filename: filename - name: find object: *bucket0_chunks_collection arguments: filter: {} expectResult: [] - description: "upload when length is 1 sans MD5" operations: - name: upload object: *bucket0 arguments: filename: "filename" source: { $$hexBytes: "11" } chunkSizeBytes: 4 disableMD5: true expectResult: { $$type: objectId } saveResultAsEntity: *uploadedObjectId - name: find object: *bucket0_files_collection arguments: filter: {} expectResult: - _id: { $$matchesEntity: *uploadedObjectId } length: 1 chunkSize: 4 uploadDate: { $$type: date } md5: { $$exists: false } filename: filename - name: find object: *bucket0_chunks_collection arguments: filter: {} expectResult: - _id: { $$type: objectId } files_id: { $$matchesEntity: *uploadedObjectId } n: 0 data: { $binary: { base64: "EQ==", subType: "00" } } # hex 11 mongo-ruby-driver-2.21.3/spec/spec_tests/data/gridfs_unified/upload.yml000066400000000000000000000230651505113246500262050ustar00rootroot00000000000000description: "gridfs-upload" schemaVersion: "1.0" createEntities: - client: id: &client0 client0 - database: id: &database0 database0 client: *client0 databaseName: &database0Name gridfs-tests - bucket: id: &bucket0 bucket0 database: *database0 - collection: id: &bucket0_files_collection bucket0_files_collection database: *database0 collectionName: &bucket0_files_collectionName fs.files - collection: id: &bucket0_chunks_collection bucket0_chunks_collection database: *database0 collectionName: &bucket0_chunks_collectionName fs.chunks initialData: - collectionName: *bucket0_files_collectionName databaseName: *database0Name documents: [] - collectionName: *bucket0_chunks_collectionName databaseName: *database0Name documents: [] # Note: Uploaded files and chunks include ObjectIds, which we cannot match with # "outcome" since it does not allow operators. Instead, these tests will use # find operations to assert the contents of uploaded files and chunks. tests: - description: "upload when length is 0" operations: - name: upload object: *bucket0 arguments: filename: "filename" source: { $$hexBytes: "" } chunkSizeBytes: 4 expectResult: { $$type: objectId } saveResultAsEntity: &uploadedObjectId uploadedObjectId - name: find object: *bucket0_files_collection arguments: filter: {} expectResult: - _id: { $$matchesEntity: *uploadedObjectId } length: 0 chunkSize: 4 uploadDate: { $$type: date } # The md5 field is deprecated so some drivers do not calculate it when uploading files. 
md5: { $$unsetOrMatches: "d41d8cd98f00b204e9800998ecf8427e" } filename: filename - name: find object: *bucket0_chunks_collection arguments: filter: {} expectResult: [] - description: "upload when length is 1" operations: - name: upload object: *bucket0 arguments: filename: "filename" source: { $$hexBytes: "11" } chunkSizeBytes: 4 expectResult: { $$type: objectId } saveResultAsEntity: *uploadedObjectId - name: find object: *bucket0_files_collection arguments: filter: {} expectResult: - _id: { $$matchesEntity: *uploadedObjectId } length: 1 chunkSize: 4 uploadDate: { $$type: date } md5: { $$unsetOrMatches: "47ed733b8d10be225eceba344d533586" } filename: filename - name: find object: *bucket0_chunks_collection arguments: filter: {} expectResult: - _id: { $$type: objectId } files_id: { $$matchesEntity: *uploadedObjectId } n: 0 data: { $binary: { base64: "EQ==", subType: "00" } } # hex 11 - description: "upload when length is 3" operations: - name: upload object: *bucket0 arguments: filename: "filename" source: { $$hexBytes: "112233" } chunkSizeBytes: 4 expectResult: { $$type: objectId } saveResultAsEntity: *uploadedObjectId - name: find object: *bucket0_files_collection arguments: filter: {} expectResult: - _id: { $$matchesEntity: *uploadedObjectId } length: 3 chunkSize: 4 uploadDate: { $$type: date } md5: { $$unsetOrMatches: "bafae3a174ab91fc70db7a6aa50f4f52" } filename: filename - name: find object: *bucket0_chunks_collection arguments: filter: {} expectResult: - _id: { $$type: objectId } files_id: { $$matchesEntity: *uploadedObjectId } n: 0 data: { $binary: { base64: "ESIz", subType: "00" } } # hex 112233 - description: "upload when length is 4" operations: - name: upload object: *bucket0 arguments: filename: "filename" source: { $$hexBytes: "11223344" } chunkSizeBytes: 4 expectResult: { $$type: objectId } saveResultAsEntity: *uploadedObjectId - name: find object: *bucket0_files_collection arguments: filter: {} expectResult: - _id: { $$matchesEntity: *uploadedObjectId } length: 4 chunkSize: 4 uploadDate: { $$type: date } md5: { $$unsetOrMatches: "7e7c77cff5705d1f7574a25ef6662117" } filename: filename - name: find object: *bucket0_chunks_collection arguments: filter: {} expectResult: - _id: { $$type: objectId } files_id: { $$matchesEntity: *uploadedObjectId } n: 0 data: { $binary: { base64: "ESIzRA==", subType: "00" } } # hex 11223344 - description: "upload when length is 5" operations: - name: upload object: *bucket0 arguments: filename: filename source: { $$hexBytes: "1122334455" } chunkSizeBytes: 4 expectResult: { $$type: objectId } saveResultAsEntity: *uploadedObjectId - name: find object: *bucket0_files_collection arguments: filter: {} expectResult: - _id: { $$matchesEntity: *uploadedObjectId } length: 5 chunkSize: 4 uploadDate: { $$type: date } md5: { $$unsetOrMatches: "283d4fea5dded59cf837d3047328f5af" } filename: filename - name: find object: *bucket0_chunks_collection arguments: filter: {} # Sort to ensure chunks are returned in a deterministic order sort: { n: 1 } expectResult: - _id: { $$type: objectId } files_id: { $$matchesEntity: *uploadedObjectId } n: 0 data: { $binary: { base64: "ESIzRA==", subType: "00" } } # hex 11223344 - _id: { $$type: objectId } files_id: { $$matchesEntity: *uploadedObjectId } n: 1 data: { $binary: { base64: "VQ==", subType: "00" } } # hex 55 - description: "upload when length is 8" operations: - name: upload object: *bucket0 arguments: filename: filename source: { $$hexBytes: "1122334455667788" } chunkSizeBytes: 4 expectResult: { $$type: objectId } 
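        # saveResultAsEntity stores the ObjectId returned by upload so that the find
        # assertions below can reference it with $$matchesEntity instead of hard-coding
        # a value that is only generated at runtime.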
saveResultAsEntity: *uploadedObjectId - name: find object: *bucket0_files_collection arguments: filter: {} expectResult: - _id: { $$matchesEntity: *uploadedObjectId } length: 8 chunkSize: 4 uploadDate: { $$type: date } md5: { $$unsetOrMatches: "dd254cdc958e53abaa67da9f797125f5" } filename: filename - name: find object: *bucket0_chunks_collection arguments: filter: {} # Sort to ensure chunks are returned in a deterministic order sort: { n: 1 } expectResult: - _id: { $$type: objectId } files_id: { $$matchesEntity: *uploadedObjectId } n: 0 data: { $binary: { base64: "ESIzRA==", subType: "00" } } # hex 11223344 - _id: { $$type: objectId } files_id: { $$matchesEntity: *uploadedObjectId } n: 1 data: { $binary: { base64: "VWZ3iA==", subType: "00" } } # hex 55667788 - description: "upload when contentType is provided" operations: - name: upload object: *bucket0 arguments: filename: "filename" source: { $$hexBytes: "11" } chunkSizeBytes: 4 contentType: "image/jpeg" expectResult: { $$type: objectId } saveResultAsEntity: *uploadedObjectId - name: find object: *bucket0_files_collection arguments: filter: {} expectResult: - _id: { $$matchesEntity: *uploadedObjectId } length: 1 chunkSize: 4 uploadDate: { $$type: date } md5: { $$unsetOrMatches: "47ed733b8d10be225eceba344d533586" } filename: filename contentType: "image/jpeg" - name: find object: *bucket0_chunks_collection arguments: filter: {} expectResult: - _id: { $$type: objectId } files_id: { $$matchesEntity: *uploadedObjectId } n: 0 data: { $binary: { base64: "EQ==", subType: "00" } } # hex 11 - description: "upload when metadata is provided" operations: - name: upload object: *bucket0 arguments: filename: "filename" source: { $$hexBytes: "11" } chunkSizeBytes: 4 metadata: { x: 1 } expectResult: { $$type: objectId } saveResultAsEntity: *uploadedObjectId - name: find object: *bucket0_files_collection arguments: filter: {} expectResult: - _id: { $$matchesEntity: *uploadedObjectId } length: 1 chunkSize: 4 uploadDate: { $$type: date } md5: { $$unsetOrMatches: "47ed733b8d10be225eceba344d533586" } filename: filename metadata: { x: 1 } - name: find object: *bucket0_chunks_collection arguments: filter: {} expectResult: - _id: { $$type: objectId } files_id: { $$matchesEntity: *uploadedObjectId } n: 0 data: { $binary: { base64: "EQ==", subType: "00" } } # hex 11 mongo-ruby-driver-2.21.3/spec/spec_tests/data/index_management/000077500000000000000000000000001505113246500245125ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/index_management/createSearchIndex.yml000066400000000000000000000072711505113246500306250ustar00rootroot00000000000000description: "createSearchIndex" schemaVersion: "1.4" createEntities: - client: id: &client0 client0 useMultipleMongoses: false observeEvents: - commandStartedEvent - database: id: &database0 database0 client: *client0 databaseName: *database0 - collection: id: &collection0 collection0 database: *database0 collectionName: *collection0 runOnRequirements: # Skip server versions without fix of SERVER-83107 to avoid error message "BSON field 'createSearchIndexes.indexes.type' is an unknown field." # SERVER-83107 was not backported to 7.1. 
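# A sketch of the server-version coverage the two requirements below are assumed
# to produce (standard semantic-version comparison):
#   < 7.0.5        -> skipped (fix absent)
#   7.0.5 - 7.0.x  -> run (first requirement)
#   7.1.x          -> skipped (SERVER-83107 not backported)
#   >= 7.2.0       -> run (second requirement)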
- minServerVersion: "7.0.5" maxServerVersion: "7.0.99" topologies: [ replicaset, load-balanced, sharded ] serverless: forbid - minServerVersion: "7.2.0" topologies: [ replicaset, load-balanced, sharded ] serverless: forbid tests: - description: "no name provided for an index definition" operations: - name: createSearchIndex object: *collection0 arguments: model: { definition: &definition { mappings: { dynamic: true } } , type: 'search' } expectError: # This test always errors in a non-Atlas environment. The test functions as a unit test by asserting # that the driver constructs and sends the correct command. # The expected error message was changed in SERVER-83003. Check for the substring "Atlas" shared by both error messages. isError: true errorContains: Atlas expectEvents: - client: *client0 events: - commandStartedEvent: command: createSearchIndexes: *collection0 indexes: [ { definition: *definition, type: 'search'} ] $db: *database0 - description: "name provided for an index definition" operations: - name: createSearchIndex object: *collection0 arguments: model: { definition: &definition { mappings: { dynamic: true } } , name: 'test index', type: 'search' } expectError: # This test always errors in a non-Atlas environment. The test functions as a unit test by asserting # that the driver constructs and sends the correct command. # The expected error message was changed in SERVER-83003. Check for the substring "Atlas" shared by both error messages. isError: true errorContains: Atlas expectEvents: - client: *client0 events: - commandStartedEvent: command: createSearchIndexes: *collection0 indexes: [ { definition: *definition, name: 'test index', type: 'search' } ] $db: *database0 - description: "create a vector search index" operations: - name: createSearchIndex object: *collection0 arguments: model: { definition: &definition { fields: [ {"type": "vector", "path": "plot_embedding", "numDimensions": 1536, "similarity": "euclidean"} ] } , name: 'test index', type: 'vectorSearch' } expectError: # This test always errors in a non-Atlas environment. The test functions as a unit test by asserting # that the driver constructs and sends the correct command. # The expected error message was changed in SERVER-83003. Check for the substring "Atlas" shared by both error messages. isError: true errorContains: Atlas expectEvents: - client: *client0 events: - commandStartedEvent: command: createSearchIndexes: *collection0 indexes: [ { definition: *definition, name: 'test index', type: 'vectorSearch' } ] $db: *database0 mongo-ruby-driver-2.21.3/spec/spec_tests/data/index_management/createSearchIndexes.yml000066400000000000000000000110061505113246500311440ustar00rootroot00000000000000description: "createSearchIndexes" schemaVersion: "1.4" createEntities: - client: id: &client0 client0 useMultipleMongoses: false observeEvents: - commandStartedEvent - database: id: &database0 database0 client: *client0 databaseName: *database0 - collection: id: &collection0 collection0 database: *database0 collectionName: *collection0 runOnRequirements: # Skip server versions without fix of SERVER-83107 to avoid error message "BSON field 'createSearchIndexes.indexes.type' is an unknown field." # SERVER-83107 was not backported to 7.1. 
- minServerVersion: "7.0.5" maxServerVersion: "7.0.99" topologies: [ replicaset, load-balanced, sharded ] serverless: forbid - minServerVersion: "7.2.0" topologies: [ replicaset, load-balanced, sharded ] serverless: forbid tests: - description: "empty index definition array" operations: - name: createSearchIndexes object: *collection0 arguments: models: [] expectError: # This test always errors in a non-Atlas environment. The test functions as a unit test by asserting # that the driver constructs and sends the correct command. # The expected error message was changed in SERVER-83003. Check for the substring "Atlas" shared by both error messages. isError: true errorContains: Atlas expectEvents: - client: *client0 events: - commandStartedEvent: command: createSearchIndexes: *collection0 indexes: [] $db: *database0 - description: "no name provided for an index definition" operations: - name: createSearchIndexes object: *collection0 arguments: models: [ { definition: &definition { mappings: { dynamic: true } } , type: 'search' } ] expectError: # This test always errors in a non-Atlas environment. The test functions as a unit test by asserting # that the driver constructs and sends the correct command. # The expected error message was changed in SERVER-83003. Check for the substring "Atlas" shared by both error messages. isError: true errorContains: Atlas expectEvents: - client: *client0 events: - commandStartedEvent: command: createSearchIndexes: *collection0 indexes: [ { definition: *definition, type: 'search'} ] $db: *database0 - description: "name provided for an index definition" operations: - name: createSearchIndexes object: *collection0 arguments: models: [ { definition: &definition { mappings: { dynamic: true } } , name: 'test index' , type: 'search' } ] expectError: # This test always errors in a non-Atlas environment. The test functions as a unit test by asserting # that the driver constructs and sends the correct command. # The expected error message was changed in SERVER-83003. Check for the substring "Atlas" shared by both error messages. isError: true errorContains: Atlas expectEvents: - client: *client0 events: - commandStartedEvent: command: createSearchIndexes: *collection0 indexes: [ { definition: *definition, name: 'test index', type: 'search' } ] $db: *database0 - description: "create a vector search index" operations: - name: createSearchIndexes object: *collection0 arguments: models: [ { definition: &definition { fields: [ {"type": "vector", "path": "plot_embedding", "numDimensions": 1536, "similarity": "euclidean"} ] }, name: 'test index' , type: 'vectorSearch' } ] expectError: # This test always errors in a non-Atlas environment. The test functions as a unit test by asserting # that the driver constructs and sends the correct command. # The expected error message was changed in SERVER-83003. Check for the substring "Atlas" shared by both error messages. 
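        # errorContains is a substring assertion in the unified test format, so
        # matching on "Atlas" keeps the test valid for both the old and the new
        # server wording without pinning it to either exact message.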
isError: true errorContains: Atlas expectEvents: - client: *client0 events: - commandStartedEvent: command: createSearchIndexes: *collection0 indexes: [ { definition: *definition, name: 'test index', type: 'vectorSearch' } ] $db: *database0 mongo-ruby-driver-2.21.3/spec/spec_tests/data/index_management/dropSearchIndex.yml000066400000000000000000000024661505113246500303270ustar00rootroot00000000000000description: "dropSearchIndex" schemaVersion: "1.4" createEntities: - client: id: &client0 client0 useMultipleMongoses: false observeEvents: - commandStartedEvent - database: id: &database0 database0 client: *client0 databaseName: *database0 - collection: id: &collection0 collection0 database: *database0 collectionName: *collection0 runOnRequirements: - minServerVersion: "7.0.0" topologies: [ replicaset, load-balanced, sharded ] serverless: forbid tests: - description: "sends the correct command" operations: - name: dropSearchIndex object: *collection0 arguments: name: &indexName 'test index' expectError: # This test always errors in a non-Atlas environment. The test functions as a unit test by asserting # that the driver constructs and sends the correct command. # The expected error message was changed in SERVER-83003. Check for the substring "Atlas" shared by both error messages. isError: true errorContains: Atlas expectEvents: - client: *client0 events: - commandStartedEvent: command: dropSearchIndex: *collection0 name: *indexName $db: *database0 mongo-ruby-driver-2.21.3/spec/spec_tests/data/index_management/listSearchIndexes.yml000066400000000000000000000062561505113246500306670ustar00rootroot00000000000000description: "listSearchIndexes" schemaVersion: "1.4" createEntities: - client: id: &client0 client0 useMultipleMongoses: false observeEvents: - commandStartedEvent - database: id: &database0 database0 client: *client0 databaseName: *database0 - collection: id: &collection0 collection0 database: *database0 collectionName: *collection0 initialData: - collectionName: *collection0 databaseName: *database0 documents: - x: 1 runOnRequirements: - minServerVersion: "7.0.0" topologies: [ replicaset, load-balanced, sharded ] serverless: forbid tests: - description: "when no name is provided, it does not populate the filter" operations: - name: listSearchIndexes object: *collection0 expectError: # This test always errors in a non-Atlas environment. The test functions as a unit test by asserting # that the driver constructs and sends the correct command. # The expected error message was changed in SERVER-83003. Check for the substring "Atlas" shared by both error messages. isError: true errorContains: Atlas expectEvents: - client: *client0 events: - commandStartedEvent: command: aggregate: *collection0 pipeline: - $listSearchIndexes: {} - description: "when a name is provided, it is present in the filter" operations: - name: listSearchIndexes object: *collection0 arguments: name: &indexName "test index" expectError: # This test always errors in a non-Atlas environment. The test functions as a unit test by asserting # that the driver constructs and sends the correct command. # The expected error message was changed in SERVER-83003. Check for the substring "Atlas" shared by both error messages. 
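          # Unlike the createSearchIndexes/dropSearchIndex/updateSearchIndex helpers,
          # listing has no dedicated server command: it is expressed via the
          # $listSearchIndexes aggregation stage, which is why the expected event
          # below is an aggregate command.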
          isError: true
          errorContains: Atlas
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                aggregate: *collection0
                pipeline:
                  - $listSearchIndexes: { name: *indexName }
                $db: *database0
  - description: aggregation cursor options are supported
    operations:
      - name: listSearchIndexes
        object: *collection0
        arguments:
          name: &indexName "test index"
          aggregationOptions:
            batchSize: 10
        expectError:
          # This test always errors in a non-Atlas environment. The test functions as a unit test by asserting
          # that the driver constructs and sends the correct command.
          # The expected error message was changed in SERVER-83003. Check for the substring "Atlas" shared by both error messages.
          isError: true
          errorContains: Atlas
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                aggregate: *collection0
                cursor: { batchSize: 10 }
                pipeline:
                  - $listSearchIndexes: { name: *indexName }
                $db: *database0
mongo-ruby-driver-2.21.3/spec/spec_tests/data/index_management/searchIndexIgnoresReadWriteConcern.yml
description: "search index operations ignore read and write concern"

schemaVersion: "1.4"

createEntities:
  - client:
      id: &client0 client0
      useMultipleMongoses: false
      # Set a non-default read and write concern.
      uriOptions:
        readConcernLevel: local
        w: 1
      observeEvents:
        - commandStartedEvent
  - database:
      id: &database0 database0
      client: *client0
      databaseName: *database0
  - collection:
      id: &collection0 collection0
      database: *database0
      collectionName: *collection0

runOnRequirements:
  - minServerVersion: "7.0.0"
    topologies: [ replicaset, load-balanced, sharded ]
    serverless: forbid

tests:
  - description: "createSearchIndex ignores read and write concern"
    operations:
      - name: createSearchIndex
        object: *collection0
        arguments:
          model: { definition: &definition { mappings: { dynamic: true } } }
        expectError:
          # This test always errors in a non-Atlas environment.
The test functions as a unit test by asserting # that the driver constructs and sends the correct command. # The expected error message was changed in SERVER-83003. Check for the substring "Atlas" shared by both error messages. isError: true errorContains: Atlas expectEvents: - client: *client0 events: - commandStartedEvent: command: dropSearchIndex: *collection0 name: *indexName $db: *database0 # Expect no writeConcern or readConcern to be sent. writeConcern: { $$exists: false } readConcern: { $$exists: false } # https://jira.mongodb.org/browse/RUBY-3351 #- description: "listSearchIndexes ignores read and write concern" # operations: # - name: listSearchIndexes # object: *collection0 # expectError: # # This test always errors in a non-Atlas environment. The test functions as a unit test by asserting # # that the driver constructs and sends the correct command. # # The expected error message was changed in SERVER-83003. Check for the substring "Atlas" shared by both error messages. # isError: true # errorContains: Atlas # expectEvents: # - client: *client0 # events: # - commandStartedEvent: # command: # aggregate: *collection0 # pipeline: # - $listSearchIndexes: {} # # Expect no writeConcern or readConcern to be sent. # writeConcern: { $$exists: false } # readConcern: { $$exists: false } - description: "updateSearchIndex ignores the read and write concern" operations: - name: updateSearchIndex object: *collection0 arguments: name: &indexName 'test index' definition: &definition {} expectError: # This test always errors in a non-Atlas environment. The test functions as a unit test by asserting # that the driver constructs and sends the correct command. # The expected error message was changed in SERVER-83003. Check for the substring "Atlas" shared by both error messages. isError: true errorContains: Atlas expectEvents: - client: *client0 events: - commandStartedEvent: command: updateSearchIndex: *collection0 name: *indexName definition: *definition $db: *database0 # Expect no writeConcern or readConcern to be sent. writeConcern: { $$exists: false } readConcern: { $$exists: false } mongo-ruby-driver-2.21.3/spec/spec_tests/data/index_management/updateSearchIndex.yml000066400000000000000000000026121505113246500306360ustar00rootroot00000000000000description: "updateSearchIndex" schemaVersion: "1.4" createEntities: - client: id: &client0 client0 useMultipleMongoses: false observeEvents: - commandStartedEvent - database: id: &database0 database0 client: *client0 databaseName: *database0 - collection: id: &collection0 collection0 database: *database0 collectionName: *collection0 runOnRequirements: - minServerVersion: "7.0.0" topologies: [ replicaset, load-balanced, sharded ] serverless: forbid tests: - description: "sends the correct command" operations: - name: updateSearchIndex object: *collection0 arguments: name: &indexName 'test index' definition: &definition {} expectError: # This test always errors in a non-Atlas environment. The test functions as a unit test by asserting # that the driver constructs and sends the correct command. # The expected error message was changed in SERVER-83003. Check for the substring "Atlas" shared by both error messages. 
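# For illustration only (not part of this spec file): assuming the same
# Collection#search_indexes helpers, the updateSearchIndex and dropSearchIndex
# commands exercised by these files map to application code roughly like:
#
#   coll.search_indexes.update_one({ mappings: { dynamic: false } }, name: 'test index')
#   coll.search_indexes.drop_one(name: 'test index')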
isError: true errorContains: Atlas expectEvents: - client: *client0 events: - commandStartedEvent: command: updateSearchIndex: *collection0 name: *indexName definition: *definition $db: *database0 mongo-ruby-driver-2.21.3/spec/spec_tests/data/load_balancers/000077500000000000000000000000001505113246500241405ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/load_balancers/cursors.yml000066400000000000000000000410211505113246500263610ustar00rootroot00000000000000description: cursors are correctly pinned to connections for load-balanced clusters schemaVersion: '1.4' runOnRequirements: - topologies: [ load-balanced ] createEntities: - client: id: &client0 client0 useMultipleMongoses: true observeEvents: - commandStartedEvent - commandSucceededEvent - commandFailedEvent - connectionReadyEvent - connectionClosedEvent - connectionCheckedOutEvent - connectionCheckedInEvent - database: id: &database0 database0 client: *client0 databaseName: &database0Name database0Name - collection: id: &collection0 collection0 database: *database0 collectionName: &collection0Name coll0 - collection: id: &collection1 collection1 database: *database0 collectionName: &collection1Name coll1 - collection: id: &collection2 collection2 database: *database0 collectionName: &collection2Name coll2 initialData: - collectionName: *collection0Name databaseName: *database0Name documents: - { _id: 1 } - { _id: 2 } - { _id: 3 } - collectionName: *collection1Name databaseName: *database0Name documents: [] - collectionName: *collection2Name databaseName: *database0Name documents: [] tests: - description: no connection is pinned if all documents are returned in the initial batch operations: - name: createFindCursor object: *collection0 arguments: filter: {} saveResultAsEntity: &cursor0 cursor0 - &assertConnectionNotPinned name: assertNumberConnectionsCheckedOut object: testRunner arguments: client: *client0 connections: 0 expectEvents: - client: *client0 events: - commandStartedEvent: command: find: *collection0Name filter: {} commandName: find - commandSucceededEvent: reply: cursor: id: 0 firstBatch: { $$type: array } ns: { $$type: string } commandName: find - client: *client0 eventType: cmap events: - connectionReadyEvent: {} - connectionCheckedOutEvent: {} - connectionCheckedInEvent: {} - description: pinned connections are returned when the cursor is drained skipReason: "RUBY-2881: ruby driver LB is not spec compliant" operations: - &createAndSaveCursor name: createFindCursor object: *collection0 arguments: filter: {} batchSize: 2 saveResultAsEntity: &cursor0 cursor0 - &assertConnectionPinned name: assertNumberConnectionsCheckedOut object: testRunner arguments: client: *client0 connections: 1 - name: iterateUntilDocumentOrError object: *cursor0 expectResult: { _id: 1 } - name: iterateUntilDocumentOrError object: *cursor0 expectResult: { _id: 2 } - name: iterateUntilDocumentOrError object: *cursor0 expectResult: { _id: 3 } - *assertConnectionNotPinned - &closeCursor name: close object: *cursor0 expectEvents: - client: *client0 events: - &findWithBatchSizeStarted commandStartedEvent: command: find: *collection0Name filter: {} batchSize: 2 commandName: find - &findWithBatchSizeSucceeded commandSucceededEvent: reply: cursor: id: { $$type: [ int, long ] } firstBatch: { $$type: array } ns: { $$type: string } commandName: find - &getMoreStarted commandStartedEvent: command: getMore: { $$type: [ int, long ] } collection: *collection0Name commandName: getMore - &getMoreSucceeded commandSucceededEvent: reply: 
cursor: id: 0 ns: { $$type: string } nextBatch: { $$type: array } commandName: getMore - client: *client0 eventType: cmap events: - connectionReadyEvent: {} - connectionCheckedOutEvent: {} - connectionCheckedInEvent: {} - description: pinned connections are returned to the pool when the cursor is closed operations: - *createAndSaveCursor - *assertConnectionPinned - *closeCursor - *assertConnectionNotPinned expectEvents: - client: *client0 events: - *findWithBatchSizeStarted - *findWithBatchSizeSucceeded - &killCursorsStarted commandStartedEvent: commandName: killCursors - &killCursorsSucceeded commandSucceededEvent: commandName: killCursors - client: *client0 eventType: cmap events: - connectionReadyEvent: {} - connectionCheckedOutEvent: {} - connectionCheckedInEvent: {} # If a network error occurs during a getMore request, the connection must remain pinned, and drivers must not # attempt to send a killCursors command when the cursor is closed because the connection is no longer valid. - description: pinned connections are not returned after a network error during getMore skipReason: "RUBY-2881: ruby driver LB is not spec compliant" operations: - name: failPoint object: testRunner arguments: client: *client0 failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: [ getMore ] closeConnection: true - *createAndSaveCursor - *assertConnectionPinned - name: iterateUntilDocumentOrError object: *cursor0 expectResult: _id: 1 - name: iterateUntilDocumentOrError object: *cursor0 expectResult: _id: 2 # Third next() call should perform a getMore. - name: iterateUntilDocumentOrError object: *cursor0 expectError: # Network errors are considered client-side errors per the unified test format spec. isClientError: true - *assertConnectionPinned - *closeCursor # Execute a close operation to actually release the connection. - *assertConnectionNotPinned expectEvents: - client: *client0 events: - *findWithBatchSizeStarted - *findWithBatchSizeSucceeded - *getMoreStarted - &getMoreFailed commandFailedEvent: commandName: getMore - client: *client0 eventType: cmap events: # Events to set the failpoint. - connectionReadyEvent: {} - connectionCheckedOutEvent: {} - connectionCheckedInEvent: {} # Events for the find command + getMore. - connectionCheckedOutEvent: {} # Events for the close() operation. - connectionCheckedInEvent: {} - connectionClosedEvent: reason: error - description: pinned connections are returned after a network error during a killCursors request skipReason: "RUBY-2881: ruby driver LB is not spec compliant" operations: - name: failPoint object: testRunner arguments: client: *client0 failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: [ killCursors ] closeConnection: true - *createAndSaveCursor - *assertConnectionPinned - *closeCursor - *assertConnectionNotPinned expectEvents: - client: *client0 events: - *findWithBatchSizeStarted - *findWithBatchSizeSucceeded - *killCursorsStarted - commandFailedEvent: commandName: killCursors - client: *client0 eventType: cmap events: # Events to set the failpoint. - connectionReadyEvent: {} - connectionCheckedOutEvent: {} - connectionCheckedInEvent: {} # Events for the find command + killCursors.
- connectionCheckedOutEvent: {} - connectionCheckedInEvent: {} - connectionClosedEvent: reason: error - description: pinned connections are not returned to the pool after a non-network error on getMore operations: - name: failPoint object: testRunner arguments: client: *client0 failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: [ getMore ] errorCode: &hostNotFoundCode 7 # This is not a state change error code, so it should not cause SDAM changes. - *createAndSaveCursor - name: iterateUntilDocumentOrError object: *cursor0 expectResult: _id: 1 - name: iterateUntilDocumentOrError object: *cursor0 expectResult: _id: 2 - name: iterateUntilDocumentOrError object: *cursor0 expectError: errorCode: *hostNotFoundCode - *assertConnectionPinned - *closeCursor - *assertConnectionNotPinned expectEvents: - client: *client0 events: - *findWithBatchSizeStarted - *findWithBatchSizeSucceeded - *getMoreStarted - *getMoreFailed - *killCursorsStarted - *killCursorsSucceeded - client: *client0 eventType: cmap events: # Events to set the failpoint. - connectionReadyEvent: {} - connectionCheckedOutEvent: {} - connectionCheckedInEvent: {} # Events for the find command + getMore + killCursors. - connectionCheckedOutEvent: {} - connectionCheckedInEvent: {} # Basic tests for cursor-creating commands besides "find". We don't need to replicate the full set of tests defined # above for each such command. Instead, only one test is needed per command to ensure that the pinned connection is # correctly passed down to the server. # # Each test creates a cursor with a small batch size and fully iterates it. Because drivers do not publish CMAP # events when using pinned connections, each test asserts that only one set of ready/checkout/checkin events are # published. - description: aggregate pins the cursor to a connection operations: - name: aggregate object: *collection0 arguments: pipeline: [] batchSize: 2 - name: assertNumberConnectionsCheckedOut object: testRunner arguments: client: *client0 connections: 0 expectEvents: - client: *client0 events: - commandStartedEvent: command: aggregate: *collection0Name cursor: batchSize: 2 commandName: aggregate - commandSucceededEvent: commandName: aggregate - *getMoreStarted - *getMoreSucceeded - client: *client0 eventType: cmap events: - connectionReadyEvent: {} - connectionCheckedOutEvent: {} - connectionCheckedInEvent: {} - description: listCollections pins the cursor to a connection skipReason: "RUBY-2881: ruby driver LB is not spec compliant" runOnRequirements: - serverless: forbid # CLOUDP-98562 listCollections batchSize is ignored on serverless. operations: - name: listCollections object: *database0 arguments: filter: {} batchSize: 2 - name: assertNumberConnectionsCheckedOut object: testRunner arguments: client: *client0 connections: 0 expectEvents: - client: *client0 events: - commandStartedEvent: command: listCollections: 1 cursor: batchSize: 2 commandName: listCollections databaseName: *database0Name - commandSucceededEvent: commandName: listCollections # Write out the event for getMore rather than using the getMoreStarted anchor because the "collection" field # is not equal to *collection0Name as the command is not executed against a collection. 
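# For illustration only (not part of this spec file): the pinning asserted by
# these cursor tests is triggered implicitly by ordinary iteration; a minimal
# Ruby sketch, assuming a load-balanced client:
#
#   view = client[:coll0].find({}, batch_size: 2)  # first batch pins a connection
#   view.to_a                                      # getMore reuses the pinned connection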
- commandStartedEvent: command: getMore: { $$type: [ int, long ] } collection: { $$type: string } commandName: getMore - *getMoreSucceeded - client: *client0 eventType: cmap events: - connectionReadyEvent: {} - connectionCheckedOutEvent: {} - connectionCheckedInEvent: {} - description: listIndexes pins the cursor to a connection skipReason: "RUBY-2881: ruby driver LB is not spec compliant" operations: # There is an automatic index on _id so we create two more indexes to force multiple batches with batchSize=2. - name: createIndex object: *collection0 arguments: keys: &x1IndexSpec { x: 1 } name: &x1IndexName x_1 - name: createIndex object: *collection0 arguments: keys: &y1IndexSpec { y: 1 } name: &y1IndexName y_1 - name: listIndexes object: *collection0 arguments: batchSize: 2 - name: assertNumberConnectionsCheckedOut object: testRunner arguments: client: *client0 connections: 0 expectEvents: - client: *client0 events: - commandStartedEvent: command: createIndexes: *collection0Name indexes: - name: *x1IndexName key: *x1IndexSpec commandName: createIndexes - commandSucceededEvent: commandName: createIndexes - commandStartedEvent: command: createIndexes: *collection0Name indexes: - name: *y1IndexName key: *y1IndexSpec commandName: createIndexes - commandSucceededEvent: commandName: createIndexes - commandStartedEvent: command: listIndexes: *collection0Name cursor: batchSize: 2 commandName: listIndexes databaseName: *database0Name - commandSucceededEvent: commandName: listIndexes - *getMoreStarted - *getMoreSucceeded - client: *client0 eventType: cmap events: # Events for first createIndexes. - connectionReadyEvent: {} - connectionCheckedOutEvent: {} - connectionCheckedInEvent: {} # Events for second createIndexes. - connectionCheckedOutEvent: {} - connectionCheckedInEvent: {} # Events for listIndexes and getMore. - connectionCheckedOutEvent: {} - connectionCheckedInEvent: {} - description: change streams pin to a connection skipReason: "RUBY-2881: ruby driver LB is not spec compliant" runOnRequirements: - serverless: forbid # Serverless does not support change streams. operations: - name: createChangeStream object: *collection0 arguments: pipeline: [] saveResultAsEntity: &changeStream0 changeStream0 - name: assertNumberConnectionsCheckedOut object: testRunner arguments: client: *client0 connections: 1 - name: close object: *changeStream0 - name: assertNumberConnectionsCheckedOut object: testRunner arguments: client: *client0 connections: 0 expectEvents: - client: *client0 events: - commandStartedEvent: commandName: aggregate - commandSucceededEvent: commandName: aggregate - commandStartedEvent: commandName: killCursors - commandSucceededEvent: commandName: killCursors - client: *client0 eventType: cmap events: # Events for creating the change stream. - connectionReadyEvent: {} - connectionCheckedOutEvent: {} # Events for closing the change stream. 
- connectionCheckedInEvent: {} mongo-ruby-driver-2.21.3/spec/spec_tests/data/load_balancers/event-monitoring.yml000066400000000000000000000047641505113246500302020ustar00rootroot00000000000000description: monitoring events include correct fields schemaVersion: '1.3' runOnRequirements: - topologies: [ load-balanced ] createEntities: - client: id: &client0 client0 useMultipleMongoses: true uriOptions: retryReads: false observeEvents: - commandStartedEvent - commandSucceededEvent - commandFailedEvent - poolClearedEvent - database: id: &database0 database0 client: *client0 databaseName: &database0Name database0 - collection: id: &collection0 collection0 database: *database0 collectionName: &collection0Name coll0 initialData: - databaseName: *database0Name collectionName: *collection0Name documents: [] tests: - description: command started and succeeded events include serviceId operations: - name: insertOne object: *collection0 arguments: document: { x: 1 } expectEvents: - client: *client0 events: - commandStartedEvent: commandName: insert hasServiceId: true - commandSucceededEvent: commandName: insert hasServiceId: true - description: command failed events include serviceId operations: - name: find object: *collection0 arguments: filter: { $or: true } expectError: isError: true expectEvents: - client: *client0 events: - commandStartedEvent: commandName: find hasServiceId: true - commandFailedEvent: commandName: find hasServiceId: true - description: poolClearedEvent events include serviceId operations: - name: failPoint object: testRunner arguments: client: *client0 failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: [find] closeConnection: true - name: find object: *collection0 arguments: filter: {} expectError: isClientError: true expectEvents: - client: *client0 events: - commandStartedEvent: commandName: find hasServiceId: true - commandFailedEvent: commandName: find hasServiceId: true - client: *client0 eventType: cmap events: - poolClearedEvent: hasServiceId: true mongo-ruby-driver-2.21.3/spec/spec_tests/data/load_balancers/lb-connection-establishment.yml000066400000000000000000000021271505113246500322570ustar00rootroot00000000000000description: connection establishment for load-balanced clusters schemaVersion: '1.3' runOnRequirements: - topologies: [ load-balanced ] createEntities: - client: id: &client0 client0 uriOptions: # Explicitly set loadBalanced to false to override the option from the global URI. loadBalanced: false observeEvents: - commandStartedEvent - database: id: &database0 database0 client: *client0 databaseName: &database0Name database0 tests: - description: operations against load balancers fail if URI contains loadBalanced=false skipReason: servers have not implemented LB support yet so they will not fail the connection handshake in this case operations: - name: runCommand object: *database0 arguments: commandName: ping command: { ping: 1 } expectError: isClientError: false expectEvents: # No events should be published because the server fails the connection handshake, so the "ping" command is never # sent. 
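# For illustration only (not part of this spec file): the loadBalanced URI
# option tested here corresponds to the Ruby client's :load_balanced option;
# a sketch with a hypothetical host:
#
#   Mongo::Client.new([ 'lb.example.com:27017' ], load_balanced: false)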
- client: *client0 events: [] mongo-ruby-driver-2.21.3/spec/spec_tests/data/load_balancers/non-lb-connection-establishment.yml000066400000000000000000000041431505113246500330470ustar00rootroot00000000000000description: connection establishment if loadBalanced is specified for non-load balanced clusters schemaVersion: '1.3' runOnRequirements: # Don't run on replica sets because the URI used to configure the clients will contain multiple hosts and the # replicaSet option, which will cause an error when constructing the lbTrueClient entity. - topologies: [ single, sharded ] createEntities: - client: id: &lbTrueClient lbTrueClient # Restrict to a single mongos to ensure there are not multiple hosts in the URI, which would conflict with # loadBalanced=true. useMultipleMongoses: false uriOptions: loadBalanced: true - database: id: &lbTrueDatabase lbTrueDatabase client: *lbTrueClient databaseName: &lbTrueDatabaseName lbTrueDb - client: id: &lbFalseClient lbFalseClient uriOptions: loadBalanced: false - database: id: &lbFalseDatabase lbFalseDatabase client: *lbFalseClient databaseName: &lbFalseDatabaseName lbFalseDb _yamlAnchors: runCommandArguments: - &pingArguments arguments: commandName: ping command: { ping: 1 } tests: # These tests assert that drivers behave correctly if loadBalanced=true/false for non-load balanced clusters. Existing # spec tests should cover the case where loadBalanced is unset. # If the server is not configured to be behind a load balancer and the URI contains loadBalanced=true, the driver # should error during the connection handshake because the server's hello response does not contain a serviceId field. - description: operations against non-load balanced clusters fail if URI contains loadBalanced=true operations: - name: runCommand object: *lbTrueDatabase <<: *pingArguments expectError: errorContains: Driver attempted to initialize in load balancing mode, but the server does not support this mode - description: operations against non-load balanced clusters succeed if URI contains loadBalanced=false operations: - name: runCommand object: *lbFalseDatabase <<: *pingArguments mongo-ruby-driver-2.21.3/spec/spec_tests/data/load_balancers/sdam-error-handling.yml000066400000000000000000000207471505113246500305320ustar00rootroot00000000000000description: state change errors are correctly handled schemaVersion: '1.4' runOnRequirements: - topologies: [ load-balanced ] _yamlAnchors: observedEvents: &observedEvents - connectionCreatedEvent - connectionReadyEvent - connectionCheckedOutEvent - connectionCheckOutFailedEvent - connectionCheckedInEvent - connectionClosedEvent - poolClearedEvent createEntities: - client: id: &failPointClient failPointClient useMultipleMongoses: false - client: id: &singleClient singleClient useMultipleMongoses: false uriOptions: appname: &singleClientAppName lbSDAMErrorTestClient retryWrites: false observeEvents: *observedEvents - database: id: &singleDB singleDB client: *singleClient databaseName: &singleDBName singleDB - collection: id: &singleColl singleColl database: *singleDB collectionName: &singleCollName singleColl - client: id: &multiClient multiClient useMultipleMongoses: true uriOptions: retryWrites: false observeEvents: *observedEvents - database: id: &multiDB multiDB client: *multiClient databaseName: &multiDBName multiDB - collection: id: &multiColl multiColl database: *multiDB collectionName: &multiCollName multiColl initialData: - collectionName: *singleCollName databaseName: *singleDBName documents: - _id: 1 - _id: 2 - _id: 3 - 
collectionName: *multiCollName databaseName: *multiDBName documents: - _id: 1 - _id: 2 - _id: 3 tests: - description: only connections for a specific serviceId are closed when pools are cleared skipReason: "RUBY-2881: ruby driver LB is not spec compliant" runOnRequirements: # This test assumes that two sequential connections receive different serviceIDs. # Sequential connections to a serverless instance may receive the same serviceID. - serverless: forbid operations: # Create two cursors to force two connections. - name: createFindCursor object: *multiColl arguments: filter: {} batchSize: 2 saveResultAsEntity: &cursor0 cursor0 - name: createFindCursor object: *multiColl arguments: filter: {} batchSize: 2 saveResultAsEntity: &cursor1 cursor1 # Close both cursors to return the connections to the pool. - name: close object: *cursor0 - name: close object: *cursor1 # Fail an operation with a state change error. - name: failPoint object: testRunner arguments: client: *multiClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: [insert] errorCode: &errorCode 11600 # InterruptedAtShutdown - name: insertOne object: *multiColl arguments: document: { x: 1 } expectError: errorCode: *errorCode # Do another operation to ensure the relevant connection has been closed. - name: insertOne object: *multiColl arguments: document: { x: 1 } expectEvents: - client: *multiClient eventType: cmap events: # Create cursors. - connectionCreatedEvent: {} - connectionReadyEvent: {} - connectionCheckedOutEvent: {} - connectionCreatedEvent: {} - connectionReadyEvent: {} - connectionCheckedOutEvent: {} # Close cursors. - connectionCheckedInEvent: {} - connectionCheckedInEvent: {} # Set failpoint. - connectionCheckedOutEvent: {} - connectionCheckedInEvent: {} # First insertOne. - connectionCheckedOutEvent: {} - poolClearedEvent: {} - connectionCheckedInEvent: {} - connectionClosedEvent: reason: stale # Second insertOne. - connectionCheckedOutEvent: {} - connectionCheckedInEvent: {} # This test uses singleClient to ensure that connection attempts are routed # to the same mongos on which the failpoint is set. - description: errors during the initial connection hello are ignored skipReason: "RUBY-2881: ruby driver LB is not spec compliant" runOnRequirements: # Require SERVER-49336 for failCommand + appName on the initial handshake. 
- minServerVersion: '4.4.7' operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: [isMaster, hello] closeConnection: true appName: *singleClientAppName - name: insertOne object: *singleColl arguments: document: { x: 1 } expectError: isClientError: true expectEvents: - client: *singleClient eventType: cmap events: - connectionCreatedEvent: {} - connectionClosedEvent: reason: error - connectionCheckOutFailedEvent: reason: connectionError - description: errors during authentication are processed runOnRequirements: - auth: true operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: [saslContinue] closeConnection: true appName: *singleClientAppName - name: insertOne object: *singleColl arguments: document: { x: 1 } expectError: isClientError: true expectEvents: - client: *singleClient eventType: cmap events: - connectionCreatedEvent: {} - poolClearedEvent: {} - connectionClosedEvent: reason: error - connectionCheckOutFailedEvent: reason: connectionError - description: stale errors are ignored skipReason: "RUBY-2881: ruby driver LB is not spec compliant" operations: - name: failPoint object: testRunner arguments: client: *failPointClient failPoint: configureFailPoint: failCommand mode: { times: 2 } data: failCommands: [getMore] closeConnection: true # Force two connections to be checked out from the pool. - name: createFindCursor object: *singleColl arguments: filter: {} batchSize: 2 saveResultAsEntity: &cursor0 cursor0 - name: createFindCursor object: *singleColl arguments: filter: {} batchSize: 2 saveResultAsEntity: &cursor1 cursor1 # Iterate cursor0 three times to force a network error. - name: iterateUntilDocumentOrError object: *cursor0 - name: iterateUntilDocumentOrError object: *cursor0 - name: iterateUntilDocumentOrError object: *cursor0 expectError: isClientError: true - name: close object: *cursor0 # Iterate cursor1 three times to force a network error. - name: iterateUntilDocumentOrError object: *cursor1 - name: iterateUntilDocumentOrError object: *cursor1 - name: iterateUntilDocumentOrError object: *cursor1 expectError: isClientError: true - name: close object: *cursor1 expectEvents: - client: *singleClient eventType: cmap events: # Events for creating both cursors. - connectionCreatedEvent: {} - connectionReadyEvent: {} - connectionCheckedOutEvent: {} - connectionCreatedEvent: {} - connectionReadyEvent: {} - connectionCheckedOutEvent: {} # Events for iterating and closing the first cursor. The failed # getMore should cause a poolClearedEvent to be published. - poolClearedEvent: {} - connectionCheckedInEvent: {} - connectionClosedEvent: {} # Events for iterating and closing the second cursor. The failed # getMore should not clear the pool because the connection's # generation number is stale. 
- connectionCheckedInEvent: {} - connectionClosedEvent: {} mongo-ruby-driver-2.21.3/spec/spec_tests/data/load_balancers/server-selection.yml000066400000000000000000000025151505113246500301570ustar00rootroot00000000000000description: server selection for load-balanced clusters schemaVersion: '1.3' runOnRequirements: - topologies: [ load-balanced ] createEntities: - client: id: &client0 client0 useMultipleMongoses: true observeEvents: - commandStartedEvent - database: id: &database0 database0 client: *client0 databaseName: &database0Name database0Name - collection: id: &collection0 collection0 database: *database0 collectionName: &collection0Name coll0 collectionOptions: readPreference: # Use secondaryPreferred to ensure that operations can succeed even if the shards are only comprised of one # server. mode: &readPrefMode secondaryPreferred initialData: - collectionName: *collection0Name databaseName: *database0Name documents: [] tests: - description: $readPreference is sent for load-balanced clusters operations: - name: find object: *collection0 arguments: filter: {} expectEvents: - client: *client0 events: - commandStartedEvent: command: find: *collection0Name filter: {} $readPreference: mode: *readPrefMode commandName: find databaseName: *database0Name mongo-ruby-driver-2.21.3/spec/spec_tests/data/load_balancers/transactions.yml000066400000000000000000000476371505113246500274140ustar00rootroot00000000000000description: transactions are correctly pinned to connections for load-balanced clusters schemaVersion: '1.4' runOnRequirements: - topologies: [ load-balanced ] createEntities: - client: id: &client0 client0 useMultipleMongoses: true observeEvents: # Do not observe commandSucceededEvent or commandFailedEvent because we cannot guarantee success or failure of # commands like commitTransaction and abortTransaction in a multi-mongos load-balanced setup. 
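# For illustration only (not part of this spec file): the pinning asserted
# below is driven by ordinary transaction usage; a minimal Ruby sketch:
#
#   session = client.start_session
#   session.start_transaction
#   client[:coll0].insert_one({ x: 1 }, session: session)  # pins a connection in LB mode
#   session.commit_transaction                             # reuses the pinned connection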
- commandStartedEvent - connectionReadyEvent - connectionClosedEvent - connectionCheckedOutEvent - connectionCheckedInEvent - session: id: &session0 session0 client: *client0 - database: id: &database0 database0 client: *client0 databaseName: &database0Name database0Name - collection: id: &collection0 collection0 database: *database0 collectionName: &collection0Name coll0 initialData: - collectionName: *collection0Name databaseName: *database0Name documents: - { _id: 1 } - { _id: 2 } - { _id: 3 } _yamlAnchors: documents: - &insertDocument _id: 4 tests: - description: sessions are reused in LB mode operations: - &nonTransactionalInsert name: insertOne object: *collection0 arguments: document: { x: 1 } - *nonTransactionalInsert - name: assertSameLsidOnLastTwoCommands object: testRunner arguments: client: *client0 - description: all operations go to the same mongos skipReason: "RUBY-2881: ruby driver LB is not spec compliant" operations: - &startTransaction name: startTransaction object: *session0 - &transactionalInsert name: insertOne object: *collection0 arguments: document: { x: 1 } session: *session0 - &assertConnectionPinned name: assertNumberConnectionsCheckedOut object: testRunner arguments: client: *client0 connections: 1 - *transactionalInsert - *transactionalInsert - *transactionalInsert - *transactionalInsert - *transactionalInsert - *assertConnectionPinned - &commitTransaction name: commitTransaction object: *session0 expectEvents: - client: *client0 events: - &insertStarted commandStartedEvent: commandName: insert - *insertStarted - *insertStarted - *insertStarted - *insertStarted - *insertStarted - &commitStarted commandStartedEvent: commandName: commitTransaction - client: *client0 eventType: cmap events: # The connection is never checked back in. - connectionReadyEvent: {} - connectionCheckedOutEvent: {} - description: transaction can be committed multiple times skipReason: "RUBY-2881: ruby driver LB is not spec compliant" operations: - *startTransaction - *transactionalInsert - *assertConnectionPinned - *commitTransaction - *assertConnectionPinned - *commitTransaction - *commitTransaction - *commitTransaction - *assertConnectionPinned expectEvents: - client: *client0 events: - *insertStarted - *commitStarted - *commitStarted - *commitStarted - *commitStarted - client: *client0 eventType: cmap events: - connectionReadyEvent: {} - connectionCheckedOutEvent: {} - description: pinned connection is not released after a non-transient CRUD error skipReason: "RUBY-2881: ruby driver LB is not spec compliant" operations: - name: failPoint object: testRunner arguments: client: *client0 failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: [ insert ] errorCode: &nonTransientErrorCode 51 # ManualInterventionRequired - *startTransaction - name: insertOne object: *collection0 arguments: document: { x: 1 } session: *session0 expectError: &nonTransientExpectedError errorCode: *nonTransientErrorCode errorLabelsOmit: [ TransientTransactionError ] - *assertConnectionPinned expectEvents: - client: *client0 events: - *insertStarted - client: *client0 eventType: cmap events: # Events for setting the fail point. - connectionReadyEvent: {} - connectionCheckedOutEvent: {} - connectionCheckedInEvent: {} # Events for the transactional insert. 
- connectionCheckedOutEvent: {} - description: pinned connection is not released after a non-transient commit error skipReason: "RUBY-2881: ruby driver LB is not spec compliant" operations: - name: failPoint object: testRunner arguments: client: *client0 failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: [ commitTransaction ] errorCode: *nonTransientErrorCode - *startTransaction - *transactionalInsert - name: commitTransaction object: *session0 expectError: *nonTransientExpectedError - *assertConnectionPinned expectEvents: - client: *client0 events: - *insertStarted - *commitStarted - client: *client0 eventType: cmap events: # Events for setting the fail point. - connectionReadyEvent: {} - connectionCheckedOutEvent: {} - connectionCheckedInEvent: {} # Events for the transactional insert and commit. - connectionCheckedOutEvent: {} # Errors during abort are different from errors during commit and CRUD operations because the pinned connection is # always released after abort. - description: pinned connection is released after a non-transient abort error skipReason: "RUBY-2881: ruby driver LB is not spec compliant" operations: - name: failPoint object: testRunner arguments: client: *client0 failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: [ abortTransaction ] errorCode: &nonTransientErrorCode 51 # ManualInterventionRequired - *startTransaction - *transactionalInsert - name: abortTransaction object: *session0 - &assertConnectionNotPinned name: assertNumberConnectionsCheckedOut object: testRunner arguments: client: *client0 connections: 0 expectEvents: - client: *client0 events: - *insertStarted - &abortStarted commandStartedEvent: commandName: abortTransaction - client: *client0 eventType: cmap events: # Events for setting the fail point. - connectionReadyEvent: {} - connectionCheckedOutEvent: {} - connectionCheckedInEvent: {} # Events for the transactional insert and abort. - connectionCheckedOutEvent: {} - connectionCheckedInEvent: {} - description: pinned connection is released after a transient non-network CRUD error skipReason: "RUBY-2881: ruby driver LB is not spec compliant" runOnRequirements: - serverless: forbid # (CLOUDP-88216) Serverless does not append error labels to errors triggered by failpoints. operations: - name: failPoint object: testRunner arguments: client: *client0 failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: [ insert ] errorCode: &transientErrorCode 24 # LockTimeout - *startTransaction - <<: *transactionalInsert expectError: &transientExpectedServerError errorCode: *transientErrorCode errorLabelsContain: [ TransientTransactionError ] - *assertConnectionNotPinned - name: abortTransaction object: *session0 - *assertConnectionNotPinned expectEvents: - client: *client0 events: - *insertStarted - *abortStarted - client: *client0 eventType: cmap events: # Events for setting the failpoint. - connectionReadyEvent: {} - connectionCheckedOutEvent: {} - connectionCheckedInEvent: {} # Events for the insert. - connectionCheckedOutEvent: {} - connectionCheckedInEvent: {} # Events for abortTransaction. - connectionCheckedOutEvent: {} - connectionCheckedInEvent: {} - description: pinned connection is released after a transient network CRUD error skipReason: "RUBY-2881: ruby driver LB is not spec compliant" runOnRequirements: - serverless: forbid # (CLOUDP-88216) Serverless does not append error labels to errors triggered by failpoints.
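# For illustration only (not part of this spec file): the failCommand fail
# point configured by these operations can be set from the Ruby driver with a
# plain admin command; a sketch, intended only for test deployments:
#
#   client.use('admin').database.command(
#     configureFailPoint: 'failCommand',
#     mode: { times: 1 },
#     data: { failCommands: [ 'insert' ], closeConnection: true }
#   )
#
# Whether the induced error carries the TransientTransactionError label then
# decides, per the tests below, if the pinned connection is released.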
operations: - name: failPoint object: testRunner arguments: client: *client0 failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: [ insert ] closeConnection: true - *startTransaction - <<: *transactionalInsert expectError: &transientExpectedNetworkError isClientError: true errorLabelsContain: [ TransientTransactionError ] - *assertConnectionNotPinned - name: abortTransaction object: *session0 - *assertConnectionNotPinned expectEvents: - client: *client0 events: - *insertStarted - *abortStarted - client: *client0 eventType: cmap events: # Events for setting the failpoint. - connectionReadyEvent: {} - connectionCheckedOutEvent: {} - connectionCheckedInEvent: {} # Events for the insert. - connectionCheckedOutEvent: {} - connectionCheckedInEvent: {} - connectionClosedEvent: reason: error # Events for abortTransaction - connectionReadyEvent: {} - connectionCheckedOutEvent: {} - connectionCheckedInEvent: {} - description: pinned connection is released after a transient non-network commit error skipReason: "RUBY-2881: ruby driver LB is not spec compliant" runOnRequirements: - serverless: forbid # (CLOUDP-88216) Serverless does not append error labels to errors triggered by failpoints. operations: - name: failPoint object: testRunner arguments: client: *client0 failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: [ commitTransaction ] errorCode: *transientErrorCode - *startTransaction - *transactionalInsert - <<: *commitTransaction expectError: *transientExpectedServerError - *assertConnectionNotPinned expectEvents: - client: *client0 events: - *insertStarted - *commitStarted - client: *client0 eventType: cmap events: # Events for setting the failpoint. - connectionReadyEvent: {} - connectionCheckedOutEvent: {} - connectionCheckedInEvent: {} # Events for the insert. - connectionCheckedOutEvent: {} # Events for commitTransaction. - connectionCheckedInEvent: {} - description: pinned connection is released after a transient network commit error skipReason: "RUBY-2881: ruby driver LB is not spec compliant" operations: - name: failPoint object: testRunner arguments: client: *client0 failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: [ commitTransaction ] closeConnection: true - *startTransaction - *transactionalInsert - <<: *commitTransaction # Ignore the result and error because the operation might fail if it targets a new mongos that isn't aware of # the transaction or the server-side reaper thread closes the transaction first. We only want to assert that # the operation is retried, which is done via monitoring expectations, so the exact result/error is not # necessary. ignoreResultAndError: true - *assertConnectionNotPinned expectEvents: - client: *client0 events: - *insertStarted - *commitStarted # The commit will be automatically retried. - *commitStarted - client: *client0 eventType: cmap events: # Events for setting the failpoint. - connectionReadyEvent: {} - connectionCheckedOutEvent: {} - connectionCheckedInEvent: {} # Events for the insert. - connectionCheckedOutEvent: {} # Events for the first commitTransaction. - connectionCheckedInEvent: {} - connectionClosedEvent: reason: error # Events for the commitTransaction retry. 
- connectionReadyEvent: {} - connectionCheckedOutEvent: {} - connectionCheckedInEvent: {} - description: pinned connection is released after a transient non-network abort error skipReason: "RUBY-2881: ruby driver LB is not spec compliant" operations: - name: failPoint object: testRunner arguments: client: *client0 failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: [ abortTransaction ] errorCode: *transientErrorCode - *startTransaction - *transactionalInsert - name: abortTransaction object: *session0 - *assertConnectionNotPinned expectEvents: - client: *client0 events: - *insertStarted - *abortStarted - client: *client0 eventType: cmap events: # Events for setting the failpoint. - connectionReadyEvent: {} - connectionCheckedOutEvent: {} - connectionCheckedInEvent: {} # Events for the insert. - connectionCheckedOutEvent: {} # Events for abortTransaction. - connectionCheckedInEvent: {} - description: pinned connection is released after a transient network abort error skipReason: "RUBY-2881: ruby driver LB is not spec compliant" operations: - name: failPoint object: testRunner arguments: client: *client0 failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: [ abortTransaction ] closeConnection: true - *startTransaction - *transactionalInsert - name: abortTransaction object: *session0 - *assertConnectionNotPinned expectEvents: - client: *client0 events: - *insertStarted - *abortStarted # The abort will be automatically retried. - *abortStarted - client: *client0 eventType: cmap events: # Events for setting the failpoint. - connectionReadyEvent: {} - connectionCheckedOutEvent: {} - connectionCheckedInEvent: {} # Events for the insert. - connectionCheckedOutEvent: {} # Events for the first abortTransaction. - connectionCheckedInEvent: {} - connectionClosedEvent: reason: error # Events for the abortTransaction retry. - connectionReadyEvent: {} - connectionCheckedOutEvent: {} - connectionCheckedInEvent: {} - description: pinned connection is released on successful abort skipReason: "RUBY-2881: ruby driver LB is not spec compliant" operations: - *startTransaction - *transactionalInsert - name: abortTransaction object: *session0 - *assertConnectionNotPinned expectEvents: - client: *client0 events: - *insertStarted - *abortStarted - client: *client0 eventType: cmap events: # The insert will create and pin a connection. The abort will use it and then unpin. - connectionReadyEvent: {} - connectionCheckedOutEvent: {} - connectionCheckedInEvent: {} - description: pinned connection is returned when a new transaction is started skipReason: "RUBY-2881: ruby driver LB is not spec compliant" operations: - *startTransaction - *transactionalInsert - *commitTransaction - *assertConnectionPinned - *startTransaction - *assertConnectionNotPinned # startTransaction will unpin the connection. - *transactionalInsert - *assertConnectionPinned # The first operation in the new transaction will pin the connection again. - *commitTransaction expectEvents: - client: *client0 events: - *insertStarted - *commitStarted - *insertStarted - *commitStarted - client: *client0 eventType: cmap events: # Events for the first insert and commit. - connectionReadyEvent: {} - connectionCheckedOutEvent: {} # Events for startTransaction. - connectionCheckedInEvent: {} # Events for the second insert and commit. 
- connectionCheckedOutEvent: {} - description: pinned connection is returned when a non-transaction operation uses the session skipReason: "RUBY-2881: ruby driver LB is not spec compliant" operations: - *startTransaction - *transactionalInsert - *commitTransaction - *assertConnectionPinned - *transactionalInsert # The insert is a non-transactional operation that uses the session, so it unpins the connection. - *assertConnectionNotPinned expectEvents: - client: *client0 events: - *insertStarted - *commitStarted - *insertStarted - client: *client0 eventType: cmap events: # Events for the first insert and commit. - connectionReadyEvent: {} - connectionCheckedOutEvent: {} # Events for the second insert. - connectionCheckedInEvent: {} - connectionCheckedOutEvent: {} - connectionCheckedInEvent: {} - description: a connection can be shared by a transaction and a cursor skipReason: "RUBY-2881: ruby driver LB is not spec compliant" operations: - *startTransaction - *transactionalInsert - *assertConnectionPinned - name: createFindCursor object: *collection0 arguments: filter: {} batchSize: 2 session: *session0 saveResultAsEntity: &cursor0 cursor0 - *assertConnectionPinned - name: close object: *cursor0 - *assertConnectionPinned # Abort the transaction to ensure that the connection is unpinned. - name: abortTransaction object: *session0 - *assertConnectionNotPinned expectEvents: - client: *client0 events: - *insertStarted - commandStartedEvent: commandName: find - commandStartedEvent: commandName: killCursors - *abortStarted - client: *client0 eventType: cmap events: # Events for the insert, find, and killCursors. - connectionReadyEvent: {} - connectionCheckedOutEvent: {} # Events for abortTransaction. - connectionCheckedInEvent: {} mongo-ruby-driver-2.21.3/spec/spec_tests/data/load_balancers/wait-queue-timeouts.yml000066400000000000000000000047041505113246500306250ustar00rootroot00000000000000description: wait queue timeout errors include details about checked out connections schemaVersion: '1.3' runOnRequirements: - topologies: [ load-balanced ] createEntities: - client: id: &client0 client0 useMultipleMongoses: true uriOptions: maxPoolSize: 1 waitQueueTimeoutMS: 50 observeEvents: - connectionCheckedOutEvent - connectionCheckOutFailedEvent - session: id: &session0 session0 client: *client0 - database: id: &database0 database0 client: *client0 databaseName: &database0Name database0Name - collection: id: &collection0 collection0 database: *database0 collectionName: &collection0Name coll0 initialData: - collectionName: *collection0Name databaseName: *database0Name documents: - _id: 1 - _id: 2 - _id: 3 tests: - description: wait queue timeout errors include cursor statistics skipReason: "RUBY-2881: ruby driver LB is not spec compliant" operations: - name: createFindCursor object: *collection0 arguments: filter: {} batchSize: 2 saveResultAsEntity: &cursor0 cursor0 - name: insertOne object: *collection0 arguments: document: { x: 1 } expectError: isClientError: true errorContains: 'maxPoolSize: 1, connections in use by cursors: 1, connections in use by transactions: 0, connections in use by other operations: 0' expectEvents: - client: *client0 eventType: cmap events: - connectionCheckedOutEvent: {} - connectionCheckOutFailedEvent: {} - description: wait queue timeout errors include transaction statistics skipReason: "RUBY-2881: ruby driver LB is not spec compliant" operations: - name: startTransaction object: *session0 - name: insertOne object: *collection0 arguments: document: { x: 1 } session: *session0 
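# With maxPoolSize=1, the transaction above holds the pool's only connection,
# so the non-transactional insert below must wait for a checkout and fails once
# waitQueueTimeoutMS (50ms) elapses. For illustration only, the client under
# test corresponds to Ruby options roughly like:
#
#   Mongo::Client.new(ENV['MONGODB_URI'], max_pool_size: 1, wait_queue_timeout: 0.05)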
- name: insertOne object: *collection0 arguments: document: { x: 1 } expectError: isClientError: true errorContains: 'maxPoolSize: 1, connections in use by cursors: 0, connections in use by transactions: 1, connections in use by other operations: 0' expectEvents: - client: *client0 eventType: cmap events: - connectionCheckedOutEvent: {} - connectionCheckOutFailedEvent: {} mongo-ruby-driver-2.21.3/spec/spec_tests/data/max_staleness/000077500000000000000000000000001505113246500240555ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/max_staleness/ReplicaSetNoPrimary/000077500000000000000000000000001505113246500277515ustar00rootroot00000000000000DefaultNoMaxStaleness.yml000066400000000000000000000011321505113246500346230ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/max_staleness/ReplicaSetNoPrimary# By default, a read preference sets no maximum on staleness. --- topology_description: type: ReplicaSetNoPrimary servers: - &1 address: a:27017 type: RSSecondary avg_rtt_ms: 50 # Too far. lastUpdateTime: 0 maxWireVersion: 6 lastWrite: {lastWriteDate: {$numberLong: "1000001"}} - &2 address: b:27017 type: RSSecondary avg_rtt_ms: 5 lastUpdateTime: 0 maxWireVersion: 6 lastWrite: {lastWriteDate: {$numberLong: "1"}} # Very stale. read_preference: mode: Nearest suitable_servers: # Very stale server is fine. - *1 - *2 in_latency_window: - *2 mongo-ruby-driver-2.21.3/spec/spec_tests/data/max_staleness/ReplicaSetNoPrimary/Incompatible.yml000066400000000000000000000012421505113246500331010ustar00rootroot00000000000000# During server selection, clients (drivers or mongos) MUST raise an error if # maxStalenessSeconds is defined and not -1 and any server's ``maxWireVersion`` # is less than 5 (`SERVER-23893`_). --- topology_description: type: ReplicaSetNoPrimary servers: - &1 address: a:27017 type: RSSecondary avg_rtt_ms: 5 lastUpdateTime: 0 maxWireVersion: 5 lastWrite: {lastWriteDate: {$numberLong: "2"}} - &2 address: b:27017 type: RSSecondary avg_rtt_ms: 5 lastUpdateTime: 0 maxWireVersion: 4 # Incompatible. lastWrite: {lastWriteDate: {$numberLong: "1"}} read_preference: mode: Nearest maxStalenessSeconds: 120 error: true mongo-ruby-driver-2.21.3/spec/spec_tests/data/max_staleness/ReplicaSetNoPrimary/LastUpdateTime.yml000066400000000000000000000015131505113246500333610ustar00rootroot00000000000000heartbeatFrequencyMS: 25000 # 25 seconds. topology_description: type: ReplicaSetNoPrimary servers: - &1 address: a:27017 type: RSSecondary avg_rtt_ms: 5 lastUpdateTime: 1 lastWrite: {lastWriteDate: {$numberLong: "125002"}} maxWireVersion: 6 - &2 address: b:27017 type: RSSecondary avg_rtt_ms: 50 # Too far. lastUpdateTime: 25002 # Not used when there's no primary. lastWrite: {lastWriteDate: {$numberLong: "2"}} # 125 sec stale + 25 sec heartbeat <= 150 sec maxStaleness. maxWireVersion: 6 - &3 address: c:27017 type: RSSecondary avg_rtt_ms: 5 lastUpdateTime: 25001 lastWrite: {lastWriteDate: {$numberLong: "1"}} # Too stale. maxWireVersion: 6 read_preference: mode: Nearest maxStalenessSeconds: 150 suitable_servers: - *1 - *2 in_latency_window: - *1 MaxStalenessTooSmall.yml000066400000000000000000000005001505113246500344720ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/max_staleness/ReplicaSetNoPrimary# maxStalenessSeconds must be at least 90 seconds, even with no known servers. 
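# For illustration: per the max staleness specification, the smallest usable
# value is max(90 seconds, (heartbeatFrequencyMS + idleWritePeriodMS) / 1000),
# with idleWritePeriodMS fixed at 10000 ms, so the floor is 90 seconds under
# the default heartbeat. A Ruby read-preference sketch (hosts hypothetical):
#
#   Mongo::Client.new([ 'a:27017' ], read: { mode: :nearest, max_staleness: 90 })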
--- topology_description: type: ReplicaSetNoPrimary servers: - &1 address: a:27017 type: Unknown - &2 address: b:27017 type: Unknown read_preference: mode: Nearest maxStalenessSeconds: 1 # Too small. error: true mongo-ruby-driver-2.21.3/spec/spec_tests/data/max_staleness/ReplicaSetNoPrimary/Nearest.yml000066400000000000000000000014361505113246500321010ustar00rootroot00000000000000heartbeatFrequencyMS: 25000 # 25 seconds. topology_description: type: ReplicaSetNoPrimary servers: - &1 address: a:27017 type: RSSecondary avg_rtt_ms: 5 lastUpdateTime: 0 lastWrite: {lastWriteDate: {$numberLong: "125002"}} maxWireVersion: 6 - &2 address: b:27017 type: RSSecondary avg_rtt_ms: 50 # Too far. lastUpdateTime: 0 lastWrite: {lastWriteDate: {$numberLong: "2"}} # 125 sec stale + 25 sec heartbeat <= 150 sec maxStaleness. maxWireVersion: 6 - &3 address: c:27017 avg_rtt_ms: 5 lastUpdateTime: 0 type: RSSecondary lastWrite: {lastWriteDate: {$numberLong: "1"}} # Too stale. maxWireVersion: 6 read_preference: mode: Nearest maxStalenessSeconds: 150 suitable_servers: - *1 - *2 in_latency_window: - *1 mongo-ruby-driver-2.21.3/spec/spec_tests/data/max_staleness/ReplicaSetNoPrimary/Nearest2.yml000066400000000000000000000014361505113246500321630ustar00rootroot00000000000000heartbeatFrequencyMS: 25000 # 25 seconds. topology_description: type: ReplicaSetNoPrimary servers: - &1 address: a:27017 type: RSSecondary avg_rtt_ms: 50 # Too far. lastUpdateTime: 0 lastWrite: {lastWriteDate: {$numberLong: "125002"}} maxWireVersion: 6 - &2 address: b:27017 type: RSSecondary avg_rtt_ms: 5 lastUpdateTime: 0 lastWrite: {lastWriteDate: {$numberLong: "2"}} # 125 sec stale + 25 sec heartbeat <= 150 sec maxStaleness. maxWireVersion: 6 - &3 address: c:27017 avg_rtt_ms: 5 lastUpdateTime: 0 type: RSSecondary lastWrite: {lastWriteDate: {$numberLong: "1"}} # Too stale. maxWireVersion: 6 read_preference: mode: Nearest maxStalenessSeconds: 150 suitable_servers: - *1 - *2 in_latency_window: - *2 mongo-ruby-driver-2.21.3/spec/spec_tests/data/max_staleness/ReplicaSetNoPrimary/NoKnownServers.yml000066400000000000000000000005401505113246500334360ustar00rootroot00000000000000# valid maxStalenessSeconds and no known servers results in an empty set of suitable servers --- topology_description: type: ReplicaSetNoPrimary servers: - &1 address: a:27017 type: Unknown - &2 address: b:27017 type: Unknown read_preference: mode: Nearest maxStalenessSeconds: 90 suitable_servers: [] in_latency_window: [] mongo-ruby-driver-2.21.3/spec/spec_tests/data/max_staleness/ReplicaSetNoPrimary/PrimaryPreferred.yml000066400000000000000000000011411505113246500337530ustar00rootroot00000000000000# Fallback to secondary if no primary. --- heartbeatFrequencyMS: 25000 # 25 seconds. topology_description: type: ReplicaSetNoPrimary servers: - &1 address: a:27017 type: RSSecondary avg_rtt_ms: 5 lastUpdateTime: 0 maxWireVersion: 6 lastWrite: {lastWriteDate: {$numberLong: "1000001"}} - &2 address: b:27017 type: RSSecondary avg_rtt_ms: 5 lastUpdateTime: 0 maxWireVersion: 6 lastWrite: {lastWriteDate: {$numberLong: "1"}} # Very stale. read_preference: mode: PrimaryPreferred maxStalenessSeconds: 90 suitable_servers: - *1 in_latency_window: - *1 PrimaryPreferred_tags.yml000066400000000000000000000015611505113246500347200ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/max_staleness/ReplicaSetNoPrimary# maxStalenessSeconds is applied before tag sets. With tag sets # [{data_center: nyc}, {data_center: tokyo}], if the only node in NYC is stale # then use Tokyo. 
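# For illustration only: the equivalent Ruby read preference, which filters by
# staleness first and only then applies the tag sets in order (a sketch):
#
#   read: { mode: :primary_preferred, max_staleness: 150,
#           tag_sets: [ { 'data_center' => 'nyc' }, { 'data_center' => 'tokyo' } ] }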
--- heartbeatFrequencyMS: 25000 # 25 seconds. topology_description: type: ReplicaSetNoPrimary servers: - &1 address: a:27017 type: RSSecondary avg_rtt_ms: 5 lastUpdateTime: 0 lastWrite: {lastWriteDate: {$numberLong: "125002"}} maxWireVersion: 6 tags: data_center: tokyo # Matches second tag set. - &2 address: b:27017 type: RSSecondary avg_rtt_ms: 5 lastUpdateTime: 0 lastWrite: {lastWriteDate: {$numberLong: "1"}} # Too stale. maxWireVersion: 6 tags: data_center: nyc read_preference: mode: PrimaryPreferred maxStalenessSeconds: 150 tag_sets: - data_center: nyc - data_center: tokyo suitable_servers: - *1 in_latency_window: - *1 mongo-ruby-driver-2.21.3/spec/spec_tests/data/max_staleness/ReplicaSetNoPrimary/Secondary.yml000066400000000000000000000023621505113246500324260ustar00rootroot00000000000000# Latest secondary's lastWriteDate is used normally with read preference tags. --- heartbeatFrequencyMS: 25000 # 25 seconds. topology_description: type: ReplicaSetNoPrimary servers: - &1 address: a:27017 type: RSSecondary avg_rtt_ms: 5 lastUpdateTime: 0 maxWireVersion: 6 lastWrite: {lastWriteDate: {$numberLong: "125002"}} tags: data_center: tokyo # No match, but its lastWriteDate is used in estimate. - &2 address: b:27017 type: RSSecondary avg_rtt_ms: 5 lastUpdateTime: 0 maxWireVersion: 6 lastWrite: {lastWriteDate: {$numberLong: "2"}} # 125 sec stale + 25 sec heartbeat <= 150 sec maxStaleness. tags: data_center: nyc - &3 address: c:27017 type: RSSecondary avg_rtt_ms: 5 lastUpdateTime: 0 maxWireVersion: 6 lastWrite: {lastWriteDate: {$numberLong: "1"}} # Too stale. tags: data_center: nyc - &4 address: d:27017 type: RSSecondary avg_rtt_ms: 5 lastUpdateTime: 0 maxWireVersion: 6 lastWrite: {lastWriteDate: {$numberLong: "2"}} tags: data_center: tokyo # No match. read_preference: mode: Secondary maxStalenessSeconds: 150 tag_sets: - data_center: nyc suitable_servers: - *2 in_latency_window: - *2 SecondaryPreferred.yml000066400000000000000000000010641505113246500342040ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/max_staleness/ReplicaSetNoPrimary# Filter out the stale secondary. --- topology_description: type: ReplicaSetNoPrimary servers: - &1 address: a:27017 type: RSSecondary avg_rtt_ms: 5 lastUpdateTime: 0 maxWireVersion: 6 lastWrite: {lastWriteDate: {$numberLong: "1000001"}} - &2 address: b:27017 type: RSSecondary avg_rtt_ms: 5 lastUpdateTime: 0 maxWireVersion: 6 lastWrite: {lastWriteDate: {$numberLong: "1"}} # Very stale. read_preference: mode: SecondaryPreferred maxStalenessSeconds: 120 suitable_servers: - *1 in_latency_window: - *1 SecondaryPreferred_tags.yml000066400000000000000000000023731505113246500352260ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/max_staleness/ReplicaSetNoPrimary# Latest secondary's lastWriteDate is used normally with read preference tags. --- heartbeatFrequencyMS: 25000 # 25 seconds. topology_description: type: ReplicaSetNoPrimary servers: - &1 address: a:27017 type: RSSecondary avg_rtt_ms: 5 lastUpdateTime: 0 maxWireVersion: 6 lastWrite: {lastWriteDate: {$numberLong: "125002"}} tags: data_center: tokyo # No match, but its lastWriteDate is used in estimate. - &2 address: b:27017 type: RSSecondary avg_rtt_ms: 5 lastUpdateTime: 0 maxWireVersion: 6 lastWrite: {lastWriteDate: {$numberLong: "2"}} # 125 sec stale + 25 sec heartbeat <= 150 sec maxStaleness. tags: data_center: nyc - &3 address: c:27017 type: RSSecondary avg_rtt_ms: 5 lastUpdateTime: 0 maxWireVersion: 6 lastWrite: {lastWriteDate: {$numberLong: "1"}} # Too stale. 
tags: data_center: nyc - &4 address: d:27017 type: RSSecondary avg_rtt_ms: 5 lastUpdateTime: 0 maxWireVersion: 6 lastWrite: {lastWriteDate: {$numberLong: "2"}} tags: data_center: tokyo # No match. read_preference: mode: SecondaryPreferred maxStalenessSeconds: 150 tag_sets: - data_center: nyc suitable_servers: - *2 in_latency_window: - *2 mongo-ruby-driver-2.21.3/spec/spec_tests/data/max_staleness/ReplicaSetNoPrimary/ZeroMaxStaleness.yml000066400000000000000000000007641505113246500337520ustar00rootroot00000000000000# maxStalenessSeconds=0 is prohibited. --- topology_description: type: ReplicaSetNoPrimary servers: - &1 address: a:27017 type: RSSecondary avg_rtt_ms: 5 lastUpdateTime: 0 maxWireVersion: 6 lastWrite: {lastWriteDate: {$numberLong: "2"}} - &2 address: b:27017 type: RSSecondary avg_rtt_ms: 5 lastUpdateTime: 0 maxWireVersion: 6 lastWrite: {lastWriteDate: {$numberLong: "1"}} read_preference: mode: Nearest maxStalenessSeconds: 0 error: true mongo-ruby-driver-2.21.3/spec/spec_tests/data/max_staleness/ReplicaSetWithPrimary/000077500000000000000000000000001505113246500303105ustar00rootroot00000000000000DefaultNoMaxStaleness.yml000066400000000000000000000011321505113246500351620ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/max_staleness/ReplicaSetWithPrimary# By default, a read preference sets no maximum on staleness. --- topology_description: type: ReplicaSetWithPrimary servers: - &1 address: a:27017 type: RSPrimary avg_rtt_ms: 50 # Too far. lastUpdateTime: 0 maxWireVersion: 6 lastWrite: {lastWriteDate: {$numberLong: "1000001"}} - &2 address: b:27017 type: RSSecondary avg_rtt_ms: 5 lastUpdateTime: 0 maxWireVersion: 6 lastWrite: {lastWriteDate: {$numberLong: "1"}} # Very stale. read_preference: mode: Nearest suitable_servers: # Very stale server is fine. - *1 - *2 in_latency_window: - *2 mongo-ruby-driver-2.21.3/spec/spec_tests/data/max_staleness/ReplicaSetWithPrimary/Incompatible.yml000066400000000000000000000012421505113246500334400ustar00rootroot00000000000000# During server selection, clients (drivers or mongos) MUST raise an error if # maxStalenessSeconds is defined and not -1 and any server's ``maxWireVersion`` # is less than 5 (`SERVER-23893`_). --- topology_description: type: ReplicaSetWithPrimary servers: - &1 address: a:27017 type: RSPrimary avg_rtt_ms: 5 lastUpdateTime: 0 maxWireVersion: 5 lastWrite: {lastWriteDate: {$numberLong: "1"}} - &2 address: b:27017 type: RSSecondary avg_rtt_ms: 5 lastUpdateTime: 0 maxWireVersion: 4 # Incompatible. lastWrite: {lastWriteDate: {$numberLong: "1"}} read_preference: mode: Nearest maxStalenessSeconds: 120 error: true mongo-ruby-driver-2.21.3/spec/spec_tests/data/max_staleness/ReplicaSetWithPrimary/LastUpdateTime.yml000066400000000000000000000015351505113246500337240ustar00rootroot00000000000000heartbeatFrequencyMS: 25000 # 25 seconds. topology_description: type: ReplicaSetWithPrimary servers: - &1 address: a:27017 type: RSPrimary avg_rtt_ms: 50 # Too far. lastUpdateTime: 1 lastWrite: {lastWriteDate: {$numberLong: "2"}} maxWireVersion: 6 - &2 address: b:27017 type: RSSecondary avg_rtt_ms: 5 lastUpdateTime: 125001 # Updated 125 sec after primary, so 125 sec stale. # 125 sec stale + 25 sec heartbeat <= 150 sec maxStaleness. lastWrite: {lastWriteDate: {$numberLong: "2"}} maxWireVersion: 6 - &3 address: c:27017 type: RSSecondary avg_rtt_ms: 5 lastUpdateTime: 125001 lastWrite: {lastWriteDate: {$numberLong: "1"}} # Too stale. 
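The `# 125 sec stale + 25 sec heartbeat <= 150 sec maxStaleness` annotations in the files above all exercise the same estimate from the max-staleness specification: with no primary, a secondary's staleness is its lag behind the freshest `lastWriteDate` in the topology plus the heartbeat interval. A minimal Ruby sketch of that filter (struct and method names are illustrative, not the driver's internals):

    # Staleness filter for a ReplicaSetNoPrimary topology: compare each
    # secondary's lastWriteDate (ms) against the freshest one seen, then
    # add the heartbeat interval, per the comments in the tests above.
    Secondary = Struct.new(:address, :last_write_date_ms)

    def fresh_enough(secondaries, max_staleness_ms, heartbeat_frequency_ms)
      smax = secondaries.map(&:last_write_date_ms).max
      secondaries.select do |s|
        (smax - s.last_write_date_ms) + heartbeat_frequency_ms <= max_staleness_ms
      end
    end

    secondaries = [
      Secondary.new('a:27017', 125_002),
      Secondary.new('b:27017', 2), # 125 sec stale + 25 sec heartbeat == 150 sec: suitable
      Secondary.new('c:27017', 1), # 150.001 sec: too stale
    ]
    p fresh_enough(secondaries, 150_000, 25_000).map(&:address)
    # => ["a:27017", "b:27017"], matching Nearest.yml's suitable_servers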
mongo-ruby-driver-2.21.3/spec/spec_tests/data/max_staleness/ReplicaSetWithPrimary/DefaultNoMaxStaleness.yml

# By default, a read preference sets no maximum on staleness.
---
topology_description:
  type: ReplicaSetWithPrimary
  servers:
  - &1
    address: a:27017
    type: RSPrimary
    avg_rtt_ms: 50 # Too far.
    lastUpdateTime: 0
    maxWireVersion: 6
    lastWrite: {lastWriteDate: {$numberLong: "1000001"}}
  - &2
    address: b:27017
    type: RSSecondary
    avg_rtt_ms: 5
    lastUpdateTime: 0
    maxWireVersion: 6
    lastWrite: {lastWriteDate: {$numberLong: "1"}} # Very stale.
read_preference:
  mode: Nearest
suitable_servers: # Very stale server is fine.
- *1
- *2
in_latency_window:
- *2

mongo-ruby-driver-2.21.3/spec/spec_tests/data/max_staleness/ReplicaSetWithPrimary/Incompatible.yml

# During server selection, clients (drivers or mongos) MUST raise an error if
# maxStalenessSeconds is defined and not -1 and any server's ``maxWireVersion``
# is less than 5 (`SERVER-23893`_).
---
topology_description:
  type: ReplicaSetWithPrimary
  servers:
  - &1
    address: a:27017
    type: RSPrimary
    avg_rtt_ms: 5
    lastUpdateTime: 0
    maxWireVersion: 5
    lastWrite: {lastWriteDate: {$numberLong: "1"}}
  - &2
    address: b:27017
    type: RSSecondary
    avg_rtt_ms: 5
    lastUpdateTime: 0
    maxWireVersion: 4 # Incompatible.
    lastWrite: {lastWriteDate: {$numberLong: "1"}}
read_preference:
  mode: Nearest
  maxStalenessSeconds: 120
error: true

mongo-ruby-driver-2.21.3/spec/spec_tests/data/max_staleness/ReplicaSetWithPrimary/LastUpdateTime.yml

heartbeatFrequencyMS: 25000 # 25 seconds.
topology_description:
  type: ReplicaSetWithPrimary
  servers:
  - &1
    address: a:27017
    type: RSPrimary
    avg_rtt_ms: 50 # Too far.
    lastUpdateTime: 1
    lastWrite: {lastWriteDate: {$numberLong: "2"}}
    maxWireVersion: 6
  - &2
    address: b:27017
    type: RSSecondary
    avg_rtt_ms: 5
    lastUpdateTime: 125001 # Updated 125 sec after primary, so 125 sec stale.
    # 125 sec stale + 25 sec heartbeat <= 150 sec maxStaleness.
    lastWrite: {lastWriteDate: {$numberLong: "2"}}
    maxWireVersion: 6
  - &3
    address: c:27017
    type: RSSecondary
    avg_rtt_ms: 5
    lastUpdateTime: 125001
    lastWrite: {lastWriteDate: {$numberLong: "1"}} # Too stale.
    maxWireVersion: 6
read_preference:
  mode: Nearest
  maxStalenessSeconds: 150
suitable_servers:
- *1
- *2
in_latency_window:
- *2

mongo-ruby-driver-2.21.3/spec/spec_tests/data/max_staleness/ReplicaSetWithPrimary/LongHeartbeat.yml

# If users configure a longer ``heartbeatFrequencyMS`` than the default,
# ``maxStalenessSeconds`` might have a larger minimum.
---
heartbeatFrequencyMS: 120000 # 120 seconds.
topology_description:
  type: ReplicaSetWithPrimary
  servers:
  - &1
    address: a:27017
    type: RSPrimary
    avg_rtt_ms: 5
    lastUpdateTime: 0
    maxWireVersion: 6
    lastWrite: {lastWriteDate: {$numberLong: "1"}}
  - &2
    address: b:27017
    type: RSSecondary
    avg_rtt_ms: 50 # Too far.
    lastUpdateTime: 0
    maxWireVersion: 6
    lastWrite: {lastWriteDate: {$numberLong: "1"}}
read_preference:
  mode: Nearest
  maxStalenessSeconds: 130 # OK, must be 120 + 10 = 130 seconds.
suitable_servers:
- *1
- *2
in_latency_window:
- *1

mongo-ruby-driver-2.21.3/spec/spec_tests/data/max_staleness/ReplicaSetWithPrimary/LongHeartbeat2.yml

# If users configure a longer ``heartbeatFrequencyMS`` than the default,
# ``maxStalenessSeconds`` might have a larger minimum.
---
heartbeatFrequencyMS: 120000 # 120 seconds.
topology_description:
  type: ReplicaSetWithPrimary
  servers:
  - &1
    address: a:27017
    type: RSPrimary
    avg_rtt_ms: 5
    lastUpdateTime: 0
    maxWireVersion: 6
    lastWrite: {lastWriteDate: {$numberLong: "1"}}
  - &2
    address: b:27017
    type: RSSecondary
    avg_rtt_ms: 5
    lastUpdateTime: 0
    maxWireVersion: 6
    lastWrite: {lastWriteDate: {$numberLong: "1"}}
read_preference:
  mode: Nearest
  maxStalenessSeconds: 129 # Too small, must be 120 + 10 = 130 seconds.
error: true

mongo-ruby-driver-2.21.3/spec/spec_tests/data/max_staleness/ReplicaSetWithPrimary/MaxStalenessTooSmall.yml

# A driver MUST raise an error
# if the TopologyType is ReplicaSetWithPrimary or ReplicaSetNoPrimary
# and ``maxStalenessSeconds`` is less than 90.
---
heartbeatFrequencyMS: 500
topology_description:
  type: ReplicaSetWithPrimary
  servers:
  - &1
    address: a:27017
    type: RSPrimary
    avg_rtt_ms: 5
    lastUpdateTime: 0
    maxWireVersion: 6
    lastWrite: {lastWriteDate: {$numberLong: "1"}}
  - &2
    address: b:27017
    type: RSSecondary
    avg_rtt_ms: 5
    lastUpdateTime: 0
    maxWireVersion: 6
    lastWrite: {lastWriteDate: {$numberLong: "1"}}
read_preference:
  mode: Nearest
  maxStalenessSeconds: 89 # Too small.
error: true

mongo-ruby-driver-2.21.3/spec/spec_tests/data/max_staleness/ReplicaSetWithPrimary/MaxStalenessWithModePrimary.yml

# Drivers MUST raise an error if maxStalenessSeconds is defined and not -1
# and the ``mode`` field is 'primary'.
---
topology_description:
  type: ReplicaSetWithPrimary
  servers:
  - &1
    address: a:27017
    type: RSPrimary
    avg_rtt_ms: 5
    lastUpdateTime: 0
    maxWireVersion: 6
    lastWrite: {lastWriteDate: {$numberLong: "1"}}
  - &2
    address: b:27017
    type: RSSecondary
    avg_rtt_ms: 5
    lastUpdateTime: 0
    maxWireVersion: 6
    lastWrite: {lastWriteDate: {$numberLong: "1"}}
read_preference:
  maxStalenessSeconds: 120
error: true

mongo-ruby-driver-2.21.3/spec/spec_tests/data/max_staleness/ReplicaSetWithPrimary/Nearest.yml

heartbeatFrequencyMS: 25000 # 25 seconds.
topology_description:
  type: ReplicaSetWithPrimary
  servers:
  - &1
    address: a:27017
    type: RSPrimary
    avg_rtt_ms: 5
    lastUpdateTime: 0
    lastWrite: {lastWriteDate: {$numberLong: "125002"}}
    maxWireVersion: 6
  - &2
    address: b:27017
    type: RSSecondary
    avg_rtt_ms: 50 # Too far.
    lastUpdateTime: 0
    lastWrite: {lastWriteDate: {$numberLong: "2"}} # 125 sec stale + 25 sec heartbeat <= 150 sec maxStaleness.
    maxWireVersion: 6
  - &3
    address: c:27017
    avg_rtt_ms: 5
    lastUpdateTime: 0
    type: RSSecondary
    lastWrite: {lastWriteDate: {$numberLong: "1"}} # Too stale.
    maxWireVersion: 6
read_preference:
  mode: Nearest
  maxStalenessSeconds: 150
suitable_servers:
- *1
- *2
in_latency_window:
- *1

mongo-ruby-driver-2.21.3/spec/spec_tests/data/max_staleness/ReplicaSetWithPrimary/Nearest2.yml

heartbeatFrequencyMS: 25000 # 25 seconds.
topology_description:
  type: ReplicaSetWithPrimary
  servers:
  - &1
    address: a:27017
    type: RSPrimary
    avg_rtt_ms: 50 # Too far.
    lastUpdateTime: 0
    lastWrite: {lastWriteDate: {$numberLong: "125002"}}
    maxWireVersion: 6
  - &2
    address: b:27017
    type: RSSecondary
    avg_rtt_ms: 5
    lastUpdateTime: 0
    lastWrite: {lastWriteDate: {$numberLong: "2"}} # 125 sec stale + 25 sec heartbeat <= 150 sec maxStaleness.
    maxWireVersion: 6
  - &3
    address: c:27017
    avg_rtt_ms: 5
    lastUpdateTime: 0
    type: RSSecondary
    lastWrite: {lastWriteDate: {$numberLong: "1"}} # Too stale.
    maxWireVersion: 6
read_preference:
  mode: Nearest
  maxStalenessSeconds: 150
suitable_servers:
- *1
- *2
in_latency_window:
- *2

mongo-ruby-driver-2.21.3/spec/spec_tests/data/max_staleness/ReplicaSetWithPrimary/Nearest_tags.yml

# maxStalenessSeconds is applied before tag sets. With tag sets
# [{data_center: nyc}, {data_center: tokyo}], if the only node in NYC is stale
# then use Tokyo.
---
heartbeatFrequencyMS: 25000 # 25 seconds.
topology_description:
  type: ReplicaSetWithPrimary
  servers:
  - &1
    address: a:27017
    type: RSPrimary
    avg_rtt_ms: 5
    lastUpdateTime: 0
    lastWrite: {lastWriteDate: {$numberLong: "125002"}}
    maxWireVersion: 6
    tags:
      data_center: tokyo
  - &2
    address: b:27017
    type: RSSecondary
    avg_rtt_ms: 5
    lastUpdateTime: 0
    lastWrite: {lastWriteDate: {$numberLong: "1"}} # Too stale.
    maxWireVersion: 6
    tags:
      data_center: nyc
read_preference:
  mode: Nearest
  maxStalenessSeconds: 150
  tag_sets:
  - data_center: nyc
  - data_center: tokyo
suitable_servers:
- *1
in_latency_window:
- *1

mongo-ruby-driver-2.21.3/spec/spec_tests/data/max_staleness/ReplicaSetWithPrimary/PrimaryPreferred.yml

# Ignore maxStalenessSeconds if primary is available.
---
heartbeatFrequencyMS: 25000 # 25 seconds.
topology_description:
  type: ReplicaSetWithPrimary
  servers:
  - &1
    address: a:27017
    type: RSPrimary
    avg_rtt_ms: 5
    lastUpdateTime: 0
    maxWireVersion: 6
    lastWrite: {lastWriteDate: {$numberLong: "1"}}
  - &2
    address: b:27017
    type: RSSecondary
    avg_rtt_ms: 5
    lastUpdateTime: 0
    maxWireVersion: 6
    lastWrite: {lastWriteDate: {$numberLong: "1"}}
read_preference:
  mode: PrimaryPreferred
  maxStalenessSeconds: 150
suitable_servers:
- *1
in_latency_window:
- *1
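The LongHeartbeat, LongHeartbeat2, MaxStalenessTooSmall and ZeroMaxStaleness cases above pin down the validation rule: when maxStalenessSeconds is set, it must be at least 90 seconds and at least the heartbeat frequency plus the 10-second idle-write period. A sketch of that check in Ruby (constant and method names are illustrative):

    SMALLEST_MAX_STALENESS_SECONDS = 90
    IDLE_WRITE_PERIOD_SECONDS = 10 # primaries perform an idle write every 10 seconds

    def validate_max_staleness!(max_staleness_seconds, heartbeat_frequency_seconds)
      return if max_staleness_seconds.nil? || max_staleness_seconds == -1
      minimum = [SMALLEST_MAX_STALENESS_SECONDS,
                 heartbeat_frequency_seconds + IDLE_WRITE_PERIOD_SECONDS].max
      if max_staleness_seconds < minimum
        raise ArgumentError, "maxStalenessSeconds must be at least #{minimum}"
      end
    end

    validate_max_staleness!(130, 120) # OK: 120 + 10 = 130 (LongHeartbeat.yml)
    begin
      validate_max_staleness!(129, 120) # LongHeartbeat2.yml
    rescue ArgumentError => e
      puts e.message # => maxStalenessSeconds must be at least 130
    end

Note that 0 also fails this check (0 < 90), which is exactly what the ZeroMaxStaleness files assert.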
mongo-ruby-driver-2.21.3/spec/spec_tests/data/max_staleness/ReplicaSetWithPrimary/PrimaryPreferred_incompatible.yml

# Primary has wire version 5, secondary has 4, read preference primaryPreferred
# with maxStalenessSeconds. The client must error, even though it uses primary and
# never applies maxStalenessSeconds. Proves that the compatibility check precedes
# filtration.
---
topology_description:
  type: ReplicaSetWithPrimary
  servers:
  - &1
    address: a:27017
    type: RSPrimary
    avg_rtt_ms: 5
    lastUpdateTime: 0
    maxWireVersion: 5
    lastWrite: {lastWriteDate: {$numberLong: "1"}}
  - &2
    address: b:27017
    type: RSSecondary
    avg_rtt_ms: 5
    lastUpdateTime: 0
    maxWireVersion: 4 # Too old.
    lastWrite: {lastWriteDate: {$numberLong: "1"}}
read_preference:
  mode: PrimaryPreferred
  maxStalenessSeconds: 150
error: true

mongo-ruby-driver-2.21.3/spec/spec_tests/data/max_staleness/ReplicaSetWithPrimary/SecondaryPreferred.yml

# Fallback to primary if no secondary is fresh enough.
---
topology_description:
  type: ReplicaSetWithPrimary
  servers:
  - &1
    address: a:27017
    type: RSPrimary
    avg_rtt_ms: 5
    lastUpdateTime: 0
    maxWireVersion: 6
    lastWrite: {lastWriteDate: {$numberLong: "1000001"}}
  - &2
    address: b:27017
    type: RSSecondary
    avg_rtt_ms: 5
    lastUpdateTime: 0
    maxWireVersion: 6
    lastWrite: {lastWriteDate: {$numberLong: "1"}} # Very stale.
read_preference:
  mode: SecondaryPreferred
  maxStalenessSeconds: 120
suitable_servers:
- *1
in_latency_window:
- *1

mongo-ruby-driver-2.21.3/spec/spec_tests/data/max_staleness/ReplicaSetWithPrimary/SecondaryPreferred_tags.yml

# Primary's lastWriteDate is used normally with SecondaryPreferred and tags.
---
heartbeatFrequencyMS: 25000 # 25 seconds.
topology_description:
  type: ReplicaSetWithPrimary
  servers:
  - &1
    address: a:27017
    type: RSPrimary
    avg_rtt_ms: 5
    lastUpdateTime: 0
    maxWireVersion: 6
    lastWrite: {lastWriteDate: {$numberLong: "125002"}}
  - &2
    address: b:27017
    type: RSSecondary
    avg_rtt_ms: 5
    lastUpdateTime: 0
    maxWireVersion: 6
    lastWrite: {lastWriteDate: {$numberLong: "2"}} # 125 sec stale + 25 sec heartbeat <= 150 sec maxStaleness.
    tags:
      data_center: nyc
  - &3
    address: c:27017
    type: RSSecondary
    avg_rtt_ms: 50 # Too far.
    lastUpdateTime: 1
    maxWireVersion: 6
    lastWrite: {lastWriteDate: {$numberLong: "1000001"}} # Not used in estimate since we have a primary.
    tags:
      data_center: nyc
  - &4
    address: d:27017
    type: RSSecondary
    avg_rtt_ms: 5
    lastUpdateTime: 0
    maxWireVersion: 6
    lastWrite: {lastWriteDate: {$numberLong: "1"}} # Too stale.
    tags:
      data_center: nyc
  - &5
    address: e:27017
    type: RSSecondary
    avg_rtt_ms: 5
    lastUpdateTime: 0
    maxWireVersion: 6
    lastWrite: {lastWriteDate: {$numberLong: "2"}}
    tags:
      data_center: tokyo # No match.
read_preference:
  mode: SecondaryPreferred
  maxStalenessSeconds: 150
  tag_sets:
  - data_center: nyc
suitable_servers:
- *2
- *3
in_latency_window:
- *2

mongo-ruby-driver-2.21.3/spec/spec_tests/data/max_staleness/ReplicaSetWithPrimary/SecondaryPreferred_tags2.yml

# maxStalenessSeconds is applied before tag sets. With tag sets
# [{data_center: nyc}, {data_center: tokyo}], if the only secondary in NYC is
# stale then use Tokyo.
---
heartbeatFrequencyMS: 25000 # 25 seconds.
topology_description:
  type: ReplicaSetWithPrimary
  servers:
  - &1
    address: a:27017
    type: RSPrimary
    avg_rtt_ms: 5
    lastUpdateTime: 0
    lastWrite: {lastWriteDate: {$numberLong: "125002"}}
    maxWireVersion: 6
  - &2
    address: b:27017
    type: RSSecondary
    avg_rtt_ms: 5
    lastUpdateTime: 0
    lastWrite: {lastWriteDate: {$numberLong: "2"}}
    maxWireVersion: 6
    tags:
      data_center: tokyo
  - &3
    address: c:27017
    type: RSSecondary
    avg_rtt_ms: 5
    lastUpdateTime: 0
    lastWrite: {lastWriteDate: {$numberLong: "1"}} # Too stale.
    maxWireVersion: 6
    tags:
      data_center: nyc
read_preference:
  mode: SecondaryPreferred
  maxStalenessSeconds: 150
  tag_sets:
  - data_center: nyc
  - data_center: tokyo
suitable_servers:
- *2
in_latency_window:
- *2

mongo-ruby-driver-2.21.3/spec/spec_tests/data/max_staleness/ReplicaSetWithPrimary/Secondary_tags.yml

# Primary's lastWriteDate is used normally with SecondaryPreferred and tags.
---
heartbeatFrequencyMS: 25000 # 25 seconds.
topology_description:
  type: ReplicaSetWithPrimary
  servers:
  - &1
    address: a:27017
    type: RSPrimary
    avg_rtt_ms: 5
    lastUpdateTime: 0
    maxWireVersion: 6
    lastWrite: {lastWriteDate: {$numberLong: "125002"}}
  - &2
    address: b:27017
    type: RSSecondary
    avg_rtt_ms: 5
    lastUpdateTime: 0
    maxWireVersion: 6
    lastWrite: {lastWriteDate: {$numberLong: "2"}} # 125 sec stale + 25 sec heartbeat <= 150 sec maxStaleness.
    tags:
      data_center: nyc
  - &3
    address: c:27017
    type: RSSecondary
    avg_rtt_ms: 50 # Too far.
    lastUpdateTime: 1
    maxWireVersion: 6
    lastWrite: {lastWriteDate: {$numberLong: "1000001"}} # Not used in estimate since we have a primary.
    tags:
      data_center: nyc
  - &4
    address: d:27017
    type: RSSecondary
    avg_rtt_ms: 5
    lastUpdateTime: 0
    maxWireVersion: 6
    lastWrite: {lastWriteDate: {$numberLong: "1"}} # Too stale.
    tags:
      data_center: nyc
  - &5
    address: e:27017
    type: RSSecondary
    avg_rtt_ms: 5
    lastUpdateTime: 0
    maxWireVersion: 6
    lastWrite: {lastWriteDate: {$numberLong: "2"}}
    tags:
      data_center: tokyo # No match.
read_preference:
  mode: Secondary
  maxStalenessSeconds: 150
  tag_sets:
  - data_center: nyc
suitable_servers:
- *2
- *3
in_latency_window:
- *2

mongo-ruby-driver-2.21.3/spec/spec_tests/data/max_staleness/ReplicaSetWithPrimary/Secondary_tags2.yml

# maxStalenessSeconds is applied before tag sets. With tag sets
# [{data_center: nyc}, {data_center: tokyo}], if the only secondary in NYC is
# stale then use Tokyo.
---
heartbeatFrequencyMS: 25000 # 25 seconds.
topology_description:
  type: ReplicaSetWithPrimary
  servers:
  - &1
    address: a:27017
    type: RSPrimary
    avg_rtt_ms: 5
    lastUpdateTime: 0
    lastWrite: {lastWriteDate: {$numberLong: "125002"}}
    maxWireVersion: 6
  - &2
    address: b:27017
    type: RSSecondary
    avg_rtt_ms: 5
    lastUpdateTime: 0
    lastWrite: {lastWriteDate: {$numberLong: "2"}}
    maxWireVersion: 6
    tags:
      data_center: tokyo
  - &3
    address: c:27017
    type: RSSecondary
    avg_rtt_ms: 5
    lastUpdateTime: 0
    lastWrite: {lastWriteDate: {$numberLong: "1"}} # Too stale.
    maxWireVersion: 6
    tags:
      data_center: nyc
read_preference:
  mode: Secondary
  maxStalenessSeconds: 150
  tag_sets:
  - data_center: nyc
  - data_center: tokyo
suitable_servers:
- *2
in_latency_window:
- *2

mongo-ruby-driver-2.21.3/spec/spec_tests/data/max_staleness/ReplicaSetWithPrimary/ZeroMaxStaleness.yml

# maxStalenessSeconds=0 is prohibited.
---
topology_description:
  type: ReplicaSetWithPrimary
  servers:
  - &1
    address: a:27017
    type: RSPrimary
    avg_rtt_ms: 5
    lastUpdateTime: 0
    maxWireVersion: 6
    lastWrite: {lastWriteDate: {$numberLong: "2"}}
  - &2
    address: b:27017
    type: RSSecondary
    avg_rtt_ms: 5
    lastUpdateTime: 0
    maxWireVersion: 6
    lastWrite: {lastWriteDate: {$numberLong: "1"}}
read_preference:
  mode: Nearest
  maxStalenessSeconds: 0
error: true

mongo-ruby-driver-2.21.3/spec/spec_tests/data/max_staleness/Sharded/Incompatible.yml

# During server selection, clients (drivers or mongos) MUST raise an error if
# maxStalenessSeconds is defined and not -1 and any server's ``maxWireVersion``
# is less than 5 (`SERVER-23893`_).
---
topology_description:
  type: Sharded
  servers:
  - &1
    address: a:27017
    type: Mongos
    avg_rtt_ms: 5
    lastUpdateTime: 0
    maxWireVersion: 5
    lastWrite: {lastWriteDate: {$numberLong: "1"}}
  - &2
    address: b:27017
    type: Mongos
    avg_rtt_ms: 5
    lastUpdateTime: 0
    maxWireVersion: 4 # Incompatible.
    lastWrite: {lastWriteDate: {$numberLong: "1"}}
read_preference:
  mode: Nearest
  maxStalenessSeconds: 120
error: true

mongo-ruby-driver-2.21.3/spec/spec_tests/data/max_staleness/Sharded/SmallMaxStaleness.yml

# Driver doesn't validate maxStalenessSeconds for mongos
---
heartbeatFrequencyMS: 10000
topology_description:
  type: Sharded
  servers:
  - &1
    address: a:27017
    type: Mongos
    avg_rtt_ms: 5
    lastUpdateTime: 0
    maxWireVersion: 6
    lastWrite: {lastWriteDate: {$numberLong: "1"}}
  - &2
    address: b:27017
    type: Mongos
    avg_rtt_ms: 50 # Too far.
    lastUpdateTime: 0
    maxWireVersion: 6
    lastWrite: {lastWriteDate: {$numberLong: "1"}}
read_preference:
  mode: Nearest
  maxStalenessSeconds: 1 # OK for sharding.
suitable_servers:
- *1
- *2
in_latency_window:
- *1

mongo-ruby-driver-2.21.3/spec/spec_tests/data/max_staleness/Single/Incompatible.yml

# During server selection, clients (drivers or mongos) MUST raise an error if
# maxStalenessSeconds is defined and not -1 and any server's ``maxWireVersion``
# is less than 5 (`SERVER-23893`_).
---
topology_description:
  type: Single
  servers:
  - &1
    address: a:27017
    type: Standalone
    avg_rtt_ms: 5
    lastUpdateTime: 0
    maxWireVersion: 4 # Incompatible.
    lastWrite: {lastWriteDate: {$numberLong: "1"}}
read_preference:
  mode: Nearest
  maxStalenessSeconds: 120
error: true

mongo-ruby-driver-2.21.3/spec/spec_tests/data/max_staleness/Single/SmallMaxStaleness.yml

# Driver doesn't validate maxStalenessSeconds for direct connection.
---
heartbeatFrequencyMS: 10000
topology_description:
  type: Single
  servers:
  - &1
    address: a:27017
    type: Standalone
    avg_rtt_ms: 5
    lastUpdateTime: 0
    maxWireVersion: 6
    lastWrite: {lastWriteDate: {$numberLong: "1"}}
read_preference:
  mode: Nearest
  maxStalenessSeconds: 1
suitable_servers:
- *1
in_latency_window:
- *1
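The Incompatible files for every topology type encode a single compatibility rule: if maxStalenessSeconds is set (and not -1), every known server must report a maxWireVersion of at least 5 (MongoDB 3.4), and this check runs before any filtering or mode handling. A hedged Ruby sketch (names are illustrative):

    MAX_STALENESS_MIN_WIRE_VERSION = 5 # illustrative constant; MongoDB 3.4+

    def assert_max_staleness_compatible!(servers, max_staleness_seconds)
      return if max_staleness_seconds.nil? || max_staleness_seconds == -1
      old = servers.find { |s| s[:max_wire_version] < MAX_STALENESS_MIN_WIRE_VERSION }
      raise "#{old[:address]} does not support maxStalenessSeconds" if old
    end

    servers = [
      { address: 'a:27017', max_wire_version: 5 },
      { address: 'b:27017', max_wire_version: 4 }, # Incompatible.
    ]
    assert_max_staleness_compatible!(servers, 120) # raises, as in Incompatible.yml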
mongo-ruby-driver-2.21.3/spec/spec_tests/data/max_staleness/Unknown/SmallMaxStaleness.yml

# Driver doesn't validate maxStalenessSeconds while TopologyType is Unknown.
---
heartbeatFrequencyMS: 10000
topology_description:
  type: Unknown
  servers:
  - &1
    address: a:27017
    type: Unknown
    maxWireVersion: 6
read_preference:
  mode: Nearest
  maxStalenessSeconds: 1
suitable_servers: []
in_latency_window: []

mongo-ruby-driver-2.21.3/spec/spec_tests/data/read_write_concern/connection-string/read-concern.yml

tests:
  - description: "Default"
    uri: "mongodb://localhost/"
    valid: true
    warning: false
    readConcern: { }
  - description: "local specified"
    uri: "mongodb://localhost/?readConcernLevel=local"
    valid: true
    warning: false
    readConcern: { level: "local" }
  - description: "majority specified"
    uri: "mongodb://localhost/?readConcernLevel=majority"
    valid: true
    warning: false
    readConcern: { level: "majority" }
  - description: "linearizable specified"
    uri: "mongodb://localhost/?readConcernLevel=linearizable"
    valid: true
    warning: false
    readConcern: { level: "linearizable" }
  - description: "available specified"
    uri: "mongodb://localhost/?readConcernLevel=available"
    valid: true
    warning: false
    readConcern: { level: "available" }

mongo-ruby-driver-2.21.3/spec/spec_tests/data/read_write_concern/connection-string/write-concern.yml

tests:
  - description: "Default"
    uri: "mongodb://localhost/"
    valid: true
    warning: false
    writeConcern: { }
  - description: "w as a valid number"
    uri: "mongodb://localhost/?w=1"
    valid: true
    warning: false
    writeConcern: { w: 1 }
  - description: "w as an invalid number"
    uri: "mongodb://localhost/?w=-2"
    # https://jira.mongodb.org/browse/SPEC-1459
    valid: true
    warning: ~
  - description: "w as a string"
    uri: "mongodb://localhost/?w=majority"
    valid: true
    warning: false
    writeConcern: { w: "majority" }
  - description: "wtimeoutMS as a valid number"
    uri: "mongodb://localhost/?wtimeoutMS=500"
    valid: true
    warning: false
    writeConcern: { wtimeoutMS: 500 }
  - description: "wtimeoutMS as a negative number"
    uri: "mongodb://localhost/?wtimeoutMS=-500"
    # https://jira.mongodb.org/browse/SPEC-1457
    valid: true
    warning: true
  - description: "journal as false"
    uri: "mongodb://localhost/?journal=false"
    valid: true
    warning: false
    writeConcern: { journal: false }
  - description: "journal as true"
    uri: "mongodb://localhost/?journal=true"
    valid: true
    warning: false
    writeConcern: { journal: true }
  - description: "All options combined"
    uri: "mongodb://localhost/?w=3&wtimeoutMS=500&journal=true"
    valid: true
    warning: false
    writeConcern: { w: 3, wtimeoutMS: 500, journal: true }
  - description: "Unacknowledged with w"
    uri: "mongodb://localhost/?w=0"
    valid: true
    warning: false
    writeConcern: { w: 0 }
  - description: "Unacknowledged with w and journal"
    uri: "mongodb://localhost/?w=0&journal=false"
    valid: true
    warning: false
    writeConcern: { w: 0, journal: false }
  - description: "Unacknowledged with w and wtimeoutMS"
    uri: "mongodb://localhost/?w=0&wtimeoutMS=500"
    valid: true
    warning: false
    writeConcern: { w: 0, wtimeoutMS: 500 }
  - description: "Acknowledged with w as 0 and journal true"
    uri: "mongodb://localhost/?w=0&journal=true"
    valid: false
    warning: false
    writeConcern: { w: 0, journal: true }
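With the Ruby driver these URI options can be exercised directly through Mongo::Client; the exact shape of the options the client reports may vary by driver version, so treat this as a sketch rather than an API guarantee:

    require 'mongo'

    # Parse read/write concern options from a connection string, as in the
    # connection-string tests above. Constructing the client parses options
    # without requiring a reachable server.
    client = Mongo::Client.new(
      'mongodb://localhost/?readConcernLevel=majority&w=majority&journal=true&wtimeoutMS=500'
    )
    p client.options[:read_concern]  # e.g. level: majority (representation may vary)
    p client.options[:write_concern] # e.g. w: majority, j: true, wtimeout: 500
    client.close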
mongo-ruby-driver-2.21.3/spec/spec_tests/data/read_write_concern/document/read-concern.yml

tests:
  - description: "Default"
    valid: true
    readConcern: {}
    readConcernDocument: {}
    isServerDefault: true
  - description: "Majority"
    valid: true
    readConcern: { level: "majority" }
    readConcernDocument: { level: "majority" }
    isServerDefault: false
  - description: "Local"
    valid: true
    readConcern: { level: "local" }
    readConcernDocument: { level: "local" }
    isServerDefault: false
  - description: "Linearizable"
    valid: true
    readConcern: { level: "linearizable" }
    readConcernDocument: { level: "linearizable" }
    isServerDefault: false
  - description: "Snapshot"
    valid: true
    readConcern: { level: "snapshot" }
    readConcernDocument: { level: "snapshot" }
    isServerDefault: false
  - description: "Available"
    valid: true
    readConcern: { level: "available" }
    readConcernDocument: { level: "available" }
    isServerDefault: false

mongo-ruby-driver-2.21.3/spec/spec_tests/data/read_write_concern/document/write-concern.yml

tests:
  - description: "Default"
    valid: true
    writeConcern: {}
    writeConcernDocument: {}
    isServerDefault: true
    isAcknowledged: true
  - description: "W as a number"
    valid: true
    writeConcern: { w: 3 }
    writeConcernDocument: { w: 3 }
    isServerDefault: false
    isAcknowledged: true
  - description: "W as an invalid number"
    valid: false
    writeConcern: { w: -3 }
    writeConcernDocument: ~
    isServerDefault: ~
    isAcknowledged: ~
  - description: "W as majority"
    valid: true
    writeConcern: { w: "majority" }
    writeConcernDocument: { w: "majority" }
    isServerDefault: false
    isAcknowledged: true
  - description: "W as a custom string"
    valid: true
    writeConcern: { w: "my_mode" }
    writeConcernDocument: { w: "my_mode" }
    isServerDefault: false
    isAcknowledged: true
  - description: "WTimeoutMS"
    valid: true
    writeConcern: { wtimeoutMS: 1000 }
    writeConcernDocument: { wtimeout: 1000 }
    isServerDefault: false
    isAcknowledged: true
  - description: "WTimeoutMS as an invalid number"
    # https://jira.mongodb.org/browse/SPEC-1457
    valid: true
    writeConcern: { wtimeoutMS: -1000 }
    writeConcernDocument: { wtimeout: -1000 }
    isServerDefault: ~
    isAcknowledged: ~
  - description: "Journal as true"
    valid: true
    writeConcern: { journal: true }
    writeConcernDocument: { j: true }
    isServerDefault: false
    isAcknowledged: true
  - description: "Journal as false"
    valid: true
    writeConcern: { journal: false }
    writeConcernDocument: { j: false }
    isServerDefault: false
    isAcknowledged: true
  - description: "Unacknowledged with only w"
    valid: true
    writeConcern: { w: 0 }
    writeConcernDocument: { w: 0 }
    isServerDefault: false
    isAcknowledged: false
  - description: "Unacknowledged with wtimeoutMS"
    valid: true
    writeConcern: { w: 0, wtimeoutMS: 500 }
    writeConcernDocument: { w: 0, wtimeout: 500 }
    isServerDefault: false
    isAcknowledged: false
  - description: "Unacknowledged with journal"
    valid: true
    writeConcern: { w: 0, journal: false }
    writeConcernDocument: { w: 0, j: false }
    isServerDefault: false
    isAcknowledged: false
  - description: "W is 0 with journal true"
    valid: false
    writeConcern: { w: 0, journal: true }
    writeConcernDocument: { w: 0, j: true }
    isServerDefault: false
    isAcknowledged: true
  - description: "Everything"
    valid: true
    writeConcern: { w: 3, wtimeoutMS: 1000, journal: true }
    writeConcernDocument: { w: 3, wtimeout: 1000, j: true }
    isServerDefault: false
    isAcknowledged: true
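The document tests above fix the mapping from driver-level options to the server's writeConcern document — `wtimeoutMS` becomes `wtimeout`, `journal` becomes `j` — and mark `w: 0` combined with `journal: true` as invalid. A minimal Ruby sketch of that transformation (hypothetical helper, not the driver's API):

    # Build the server's writeConcern document from driver-level options,
    # per the document/write-concern.yml cases above.
    def write_concern_document(opts)
      if opts[:w] == 0 && opts[:journal] == true
        raise ArgumentError, 'w: 0 cannot be combined with journal: true'
      end
      doc = {}
      doc[:w] = opts[:w] if opts.key?(:w)
      doc[:wtimeout] = opts[:wtimeoutMS] if opts.key?(:wtimeoutMS)
      doc[:j] = opts[:journal] if opts.key?(:journal)
      doc
    end

    p write_concern_document(w: 3, wtimeoutMS: 1000, journal: true)
    # => {:w=>3, :wtimeout=>1000, :j=>true}   (the "Everything" case)
    p write_concern_document({}) # => {} (server default)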
mongo-ruby-driver-2.21.3/spec/spec_tests/data/read_write_concern/operation/default-write-concern-2.6.yml

# Test that setting a default write concern does not add a write concern
# to the command sent over the wire.
# Test operations that require 2.6+ server.

data:
  - {_id: 1, x: 11}
  - {_id: 2, x: 22}
collection_name: &collection_name default_write_concern_coll
database_name: &database_name default_write_concern_db

runOn:
  - minServerVersion: "2.6"

tests:
  - description: DeleteOne omits default write concern
    operations:
      - name: deleteOne
        object: collection
        collectionOptions: {writeConcern: {}}
        arguments:
          filter: {}
        result:
          deletedCount: 1
    expectations:
      - command_started_event:
          command:
            delete: *collection_name
            deletes:
              - {q: {}, limit: 1}
            writeConcern: null
  - description: DeleteMany omits default write concern
    operations:
      - name: deleteMany
        object: collection
        collectionOptions: {writeConcern: {}}
        arguments:
          filter: {}
        result:
          deletedCount: 2
    expectations:
      - command_started_event:
          command:
            delete: *collection_name
            deletes: [{q: {}, limit: 0}]
            writeConcern: null
  - description: BulkWrite with all models omits default write concern
    operations:
      - name: bulkWrite
        object: collection
        collectionOptions: {writeConcern: {}}
        arguments:
          ordered: true
          requests:
            - name: deleteMany
              arguments:
                filter: {}
            - name: insertOne
              arguments:
                document: {_id: 1}
            - name: updateOne
              arguments:
                filter: {_id: 1}
                update: {$set: {x: 1}}
            - name: insertOne
              arguments:
                document: {_id: 2}
            - name: replaceOne
              arguments:
                filter: {_id: 1}
                replacement: {x: 2}
            - name: insertOne
              arguments:
                document: {_id: 3}
            - name: updateMany
              arguments:
                filter: {_id: 1}
                update: {$set: {x: 3}}
            - name: deleteOne
              arguments:
                filter: {_id: 3}
    outcome:
      collection:
        name: *collection_name
        data:
          - {_id: 1, x: 3}
          - {_id: 2}
    expectations:
      - command_started_event:
          command:
            delete: *collection_name
            deletes: [{q: {}, limit: 0}]
            writeConcern: null
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - {_id: 1}
            writeConcern: null
      - command_started_event:
          command:
            update: *collection_name
            updates:
              - {q: {_id: 1}, u: {$set: {x: 1}}}
            writeConcern: null
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - {_id: 2}
            writeConcern: null
      - command_started_event:
          command:
            update: *collection_name
            updates:
              - {q: {_id: 1}, u: {x: 2}}
            writeConcern: null
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - {_id: 3}
            writeConcern: null
      - command_started_event:
          command:
            update: *collection_name
            updates:
              - {q: {_id: 1}, u: {$set: {x: 3}}, multi: true}
            writeConcern: null
      - command_started_event:
          command:
            delete: *collection_name
            deletes: [{q: {_id: 3}, limit: 1}]
            writeConcern: null
  - description: 'InsertOne and InsertMany omit default write concern'
    operations:
      - name: insertOne
        object: collection
        collectionOptions: {writeConcern: {}}
        arguments:
          document: {_id: 3}
      - name: insertMany
        object: collection
        collectionOptions: {writeConcern: {}}
        arguments:
          documents:
            - {_id: 4}
            - {_id: 5}
    outcome:
      collection:
        name: *collection_name
        data:
          - {_id: 1, x: 11}
          - {_id: 2, x: 22}
          - {_id: 3}
          - {_id: 4}
          - {_id: 5}
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - {_id: 3}
            writeConcern: null
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - {_id: 4}
              - {_id: 5}
            writeConcern: null
  - description: 'UpdateOne, UpdateMany, and ReplaceOne omit default write concern'
    operations:
      - name: updateOne
        object: collection
        collectionOptions: {writeConcern: {}}
        arguments:
          filter: {_id: 1}
          update: {$set: {x: 1}}
      - name: updateMany
        object: collection
        collectionOptions: {writeConcern: {}}
        arguments:
          filter: {_id: 2}
          update: {$set: {x: 2}}
      - name: replaceOne
        object: collection
        collectionOptions: {writeConcern: {}}
        arguments:
          filter: {_id: 2}
          replacement: {x: 3}
    outcome:
      collection:
        name: *collection_name
        data:
          - {_id: 1, x: 1}
          - {_id: 2, x: 3}
    expectations:
      - command_started_event:
          command:
            update: *collection_name
            updates:
              - {q: {_id: 1}, u: {$set: {x: 1}}}
            writeConcern: null
      - command_started_event:
          command:
            update: *collection_name
            updates:
              - {q: {_id: 2}, u: {$set: {x: 2}}, multi: true}
            writeConcern: null
      - command_started_event:
          command:
            update: *collection_name
            updates:
              - {q: {_id: 2}, u: {x: 3}}
            writeConcern: null

mongo-ruby-driver-2.21.3/spec/spec_tests/data/read_write_concern/operation/default-write-concern-3.2.yml

# Test that setting a default write concern does not add a write concern
# to the command sent over the wire.
# Test operations that require 3.2+ server, where findAndModify started
# to accept a write concern.

data:
  - {_id: 1, x: 11}
  - {_id: 2, x: 22}
collection_name: &collection_name default_write_concern_coll
database_name: &database_name default_write_concern_db

runOn:
  - minServerVersion: "3.2"

tests:
  - description: 'findAndModify operations omit default write concern'
    operations:
      - name: findOneAndUpdate
        object: collection
        collectionOptions: {writeConcern: {}}
        arguments:
          filter: {_id: 1}
          update: {$set: {x: 1}}
      - name: findOneAndReplace
        object: collection
        collectionOptions: {writeConcern: {}}
        arguments:
          filter: {_id: 2}
          replacement: {x: 2}
      - name: findOneAndDelete
        object: collection
        collectionOptions: {writeConcern: {}}
        arguments:
          filter: {_id: 2}
    outcome:
      collection:
        name: *collection_name
        data:
          - {_id: 1, x: 1}
    expectations:
      - command_started_event:
          command:
            findAndModify: *collection_name
            query: {_id: 1}
            update: {$set: {x: 1}}
            writeConcern: null
      - command_started_event:
          command:
            findAndModify: *collection_name
            query: {_id: 2}
            update: {x: 2}
            writeConcern: null
      - command_started_event:
          command:
            findAndModify: *collection_name
            query: {_id: 2}
            remove: true
            writeConcern: null
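Every default-write-concern file, above and below, asserts the same invariant: a collection or database configured with an empty (server-default) write concern must not put a writeConcern field on the wire at all — `writeConcern: null` in these expectations means "assert the field is absent". A sketch of the command-building rule (illustrative, not the driver's internals):

    # Only attach writeConcern when the user actually configured one;
    # an empty/server-default write concern is omitted from the command.
    def attach_write_concern(command, write_concern)
      return command if write_concern.nil? || write_concern.empty?
      command.merge(writeConcern: write_concern)
    end

    p attach_write_concern({ delete: 'coll', deletes: [{ q: {}, limit: 1 }] }, {})
    # => no writeConcern key, matching the expectations above
    p attach_write_concern({ insert: 'coll', documents: [{ _id: 1 }] }, { w: 'majority' })
    # => includes writeConcern: {:w=>"majority"}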
mongo-ruby-driver-2.21.3/spec/spec_tests/data/read_write_concern/operation/default-write-concern-3.4.yml

# Test that setting a default write concern does not add a write concern
# to the command sent over the wire.
# Test operations that require 3.4+ server, where all commands started
# to accept a write concern.

data:
  - {_id: 1, x: 11}
  - {_id: 2, x: 22}
collection_name: &collection_name default_write_concern_coll
database_name: &database_name default_write_concern_db

runOn:
  - minServerVersion: "3.4"

tests:
  - description: Aggregate with $out omits default write concern
    operations:
      - object: collection
        collectionOptions: {writeConcern: {}}
        name: aggregate
        arguments:
          pipeline: &out_pipeline
            - $match: {_id: {$gt: 1}}
            - $out: &other_collection_name "other_collection_name"
    outcome:
      collection:
        name: *other_collection_name
        data:
          - {_id: 2, x: 22}
    expectations:
      - command_started_event:
          command:
            aggregate: *collection_name
            pipeline: *out_pipeline
            writeConcern: null
  - description: RunCommand with a write command omits default write concern (runCommand should never inherit write concern)
    operations:
      - object: database
        databaseOptions: {writeConcern: {}}
        name: runCommand
        command_name: delete
        arguments:
          command:
            delete: *collection_name
            deletes:
              - {q: {}, limit: 1}
    expectations:
      - command_started_event:
          command:
            delete: *collection_name
            deletes:
              - {q: {}, limit: 1}
            writeConcern: null
  - description: CreateIndex and dropIndex omits default write concern
    operations:
      - object: collection
        collectionOptions: {writeConcern: {}}
        name: createIndex
        arguments:
          keys: {x: 1}
      - object: collection
        collectionOptions: {writeConcern: {}}
        name: dropIndex
        arguments:
          name: x_1
    expectations:
      - command_started_event:
          command:
            createIndexes: *collection_name
            indexes:
              - name: x_1
                key: {x: 1}
            writeConcern: null
      - command_started_event:
          command:
            dropIndexes: *collection_name
            index: x_1
            writeConcern: null
  - description: MapReduce omits default write concern
    operations:
      - name: mapReduce
        object: collection
        collectionOptions: {writeConcern: {}}
        arguments:
          map: { $code: 'function inc() { return emit(0, this.x + 1) }' }
          reduce: { $code: 'function sum(key, values) { return values.reduce((acc, x) => acc + x); }' }
          out: { inline: 1 }
    expectations:
      - command_started_event:
          command:
            mapReduce: *collection_name
            map: { $code: 'function inc() { return emit(0, this.x + 1) }' }
            reduce: { $code: 'function sum(key, values) { return values.reduce((acc, x) => acc + x); }' }
            out: { inline: 1 }
            writeConcern: null

mongo-ruby-driver-2.21.3/spec/spec_tests/data/read_write_concern/operation/default-write-concern-4.2.yml

# Test that setting a default write concern does not add a write concern
# to the command sent over the wire.
# Test operations that require 4.2+ server.

data:
  - {_id: 1, x: 11}
  - {_id: 2, x: 22}
collection_name: &collection_name default_write_concern_coll
database_name: &database_name default_write_concern_db

runOn:
  - minServerVersion: "4.2"

tests:
  - description: Aggregate with $merge omits default write concern
    operations:
      - object: collection
        databaseOptions: {writeConcern: {}}
        collectionOptions: {writeConcern: {}}
        name: aggregate
        arguments:
          pipeline: &merge_pipeline
            - $match: {_id: {$gt: 1}}
            - $merge: {into: &other_collection_name "other_collection_name" }
    expectations:
      - command_started_event:
          command:
            aggregate: *collection_name
            pipeline: *merge_pipeline
            # "null" fields will be checked for non-existence
            writeConcern: null
    outcome:
      collection:
        name: *other_collection_name
        data:
          - {_id: 2, x: 22}

mongo-ruby-driver-2.21.3/spec/spec_tests/data/retryable_reads/legacy/aggregate-merge.yml

runOn:
  - minServerVersion: "4.1.11"

database_name: &database_name "retryable-reads-tests"
collection_name: &collection_name "coll"

data:
  - {_id: 1, x: 11}
  - {_id: 2, x: 22}
  - {_id: 3, x: 33}

tests:
  - description: "Aggregate with $merge does not retry"
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands: [aggregate]
        closeConnection: true
    operations:
      - object: collection
        name: aggregate
        arguments:
          pipeline: &pipeline
            - $match: {_id: {$gt: 1}}
            - $sort: {x: 1}
            - $merge: { into: "output-collection" }
        error: true
    expectations:
      - command_started_event:
          command:
            aggregate: *collection_name
            pipeline: *pipeline
          command_name: aggregate
          database_name: *database_name
mongo-ruby-driver-2.21.3/spec/spec_tests/data/retryable_reads/legacy/aggregate-serverErrors.yml

runOn:
  - minServerVersion: "4.0"
    topology: ["single", "replicaset"]
  - minServerVersion: "4.1.7"
    topology: ["sharded"]

database_name: &database_name "retryable-reads-tests"
collection_name: &collection_name "coll"

data:
  - {_id: 1, x: 11}
  - {_id: 2, x: 22}
  - {_id: 3, x: 33}

tests:
  - description: "Aggregate succeeds after InterruptedAtShutdown"
    failPoint: &failCommand_failPoint
      configureFailPoint: failCommand
      mode: { times: 1 }
      data: { failCommands: [aggregate], errorCode: 11600 }
    operations:
      - &retryable_operation_succeeds
        <<: &retryable_operation
          name: aggregate
          object: collection
          arguments:
            pipeline:
              - $match:
                  _id: {$gt: 1}
              - $sort: {x: 1}
        result:
          - {_id: 2, x: 22}
          - {_id: 3, x: 33}
    expectations:
      - &retryable_command_started_event
        command_started_event:
          command:
            aggregate: *collection_name
            pipeline: [{$match: {_id: {$gt: 1}}}, {$sort: {x: 1}}]
          database_name: *database_name
      - *retryable_command_started_event
  - description: "Aggregate succeeds after InterruptedDueToReplStateChange"
    failPoint:
      <<: *failCommand_failPoint
      data: { failCommands: [aggregate], errorCode: 11602 }
    operations: [*retryable_operation_succeeds]
    expectations:
      - *retryable_command_started_event
      - *retryable_command_started_event
  - description: "Aggregate succeeds after NotMaster"
    failPoint:
      <<: *failCommand_failPoint
      data: { failCommands: [aggregate], errorCode: 10107 }
    operations: [*retryable_operation_succeeds]
    expectations:
      - *retryable_command_started_event
      - *retryable_command_started_event
  - description: "Aggregate succeeds after NotPrimaryNoSecondaryOk"
    failPoint:
      <<: *failCommand_failPoint
      data: { failCommands: [aggregate], errorCode: 13435 }
    operations: [*retryable_operation_succeeds]
    expectations:
      - *retryable_command_started_event
      - *retryable_command_started_event
  - description: "Aggregate succeeds after NotMasterOrSecondary"
    failPoint:
      <<: *failCommand_failPoint
      data: { failCommands: [aggregate], errorCode: 13436 }
    operations: [*retryable_operation_succeeds]
    expectations:
      - *retryable_command_started_event
      - *retryable_command_started_event
  - description: "Aggregate succeeds after PrimarySteppedDown"
    failPoint:
      <<: *failCommand_failPoint
      data: { failCommands: [aggregate], errorCode: 189 }
    operations: [*retryable_operation_succeeds]
    expectations:
      - *retryable_command_started_event
      - *retryable_command_started_event
  - description: "Aggregate succeeds after ShutdownInProgress"
    failPoint:
      <<: *failCommand_failPoint
      data: { failCommands: [aggregate], errorCode: 91 }
    operations: [*retryable_operation_succeeds]
    expectations:
      - *retryable_command_started_event
      - *retryable_command_started_event
  - description: "Aggregate succeeds after HostNotFound"
    failPoint:
      <<: *failCommand_failPoint
      data: { failCommands: [aggregate], errorCode: 7 }
    operations: [*retryable_operation_succeeds]
    expectations:
      - *retryable_command_started_event
      - *retryable_command_started_event
  - description: "Aggregate succeeds after HostUnreachable"
    failPoint:
      <<: *failCommand_failPoint
      data: { failCommands: [aggregate], errorCode: 6 }
    operations: [*retryable_operation_succeeds]
    expectations:
      - *retryable_command_started_event
      - *retryable_command_started_event
  - description: "Aggregate succeeds after NetworkTimeout"
    failPoint:
      <<: *failCommand_failPoint
      data: { failCommands: [aggregate], errorCode: 89 }
    operations: [*retryable_operation_succeeds]
    expectations:
      - *retryable_command_started_event
      - *retryable_command_started_event
  - description: "Aggregate succeeds after SocketException"
    failPoint:
      <<: *failCommand_failPoint
      data: { failCommands: [aggregate], errorCode: 9001 }
    operations: [*retryable_operation_succeeds]
    expectations:
      - *retryable_command_started_event
      - *retryable_command_started_event
  - description: "Aggregate fails after two NotMaster errors"
    failPoint:
      <<: *failCommand_failPoint
      mode: { times: 2 }
      data: { failCommands: [aggregate], errorCode: 10107 }
    operations:
      - &retryable_operation_fails
        <<: *retryable_operation
        error: true
    expectations:
      - *retryable_command_started_event
      - *retryable_command_started_event
  - description: "Aggregate fails after NotMaster when retryReads is false"
    clientOptions:
      retryReads: false
    failPoint:
      <<: *failCommand_failPoint
      data: { failCommands: [aggregate], errorCode: 10107 }
    operations: [*retryable_operation_fails]
    expectations:
      - *retryable_command_started_event

mongo-ruby-driver-2.21.3/spec/spec_tests/data/retryable_reads/legacy/aggregate.yml

runOn:
  - minServerVersion: "4.0"
    topology: ["single", "replicaset"]
  - minServerVersion: "4.1.7"
    topology: ["sharded"]

database_name: &database_name "retryable-reads-tests"
collection_name: &collection_name "coll"

data:
  - {_id: 1, x: 11}
  - {_id: 2, x: 22}
  - {_id: 3, x: 33}

tests:
  - description: "Aggregate succeeds on first attempt"
    operations:
      - &retryable_operation_succeeds
        <<: &retryable_operation
          name: aggregate
          object: collection
          arguments:
            pipeline:
              - $match: {_id: {$gt: 1}}
              - $sort: {x: 1}
        result:
          - {_id: 2, x: 22}
          - {_id: 3, x: 33}
    expectations:
      - &retryable_command_started_event
        command_started_event:
          command:
            aggregate: *collection_name
            pipeline: [{$match: {_id: {$gt: 1}}}, {$sort: {x: 1}}]
          database_name: *database_name
  - description: "Aggregate succeeds on second attempt"
    failPoint: &failCommand_failPoint
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands: [aggregate]
        closeConnection: true
    operations: [*retryable_operation_succeeds]
    expectations:
      - *retryable_command_started_event
      - *retryable_command_started_event
  - description: "Aggregate fails on first attempt"
    clientOptions:
      retryReads: false
    failPoint: *failCommand_failPoint
    operations:
      - &retryable_operation_fails
        <<: *retryable_operation
        error: true
    expectations:
      - *retryable_command_started_event
  - description: "Aggregate fails on second attempt"
    failPoint:
      <<: *failCommand_failPoint
      mode: { times: 2 }
    operations: [*retryable_operation_fails]
    expectations:
      - *retryable_command_started_event
      - *retryable_command_started_event
  - description: "Aggregate with $out does not retry"
    failPoint: *failCommand_failPoint
    operations:
      - <<: *retryable_operation_fails
        arguments:
          pipeline:
            - $match: {_id: {$gt: 1}}
            - $sort: {x: 1}
            - $out: "output-collection"
    expectations:
      - command_started_event:
          command:
            aggregate: *collection_name
            pipeline: [{$match: {_id: {$gt: 1}}}, {$sort: {x: 1}}, {$out: 'output-collection'}]
          command_name: aggregate
          database_name: *database_name
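The aggregate files above (and the changeStreams files that follow) all assert the same retry contract: with retryReads enabled a read is attempted at most twice, and the serverErrors variants enumerate the error codes that trigger the single retry. A condensed Ruby sketch of that algorithm (simplified: a real driver also reselects a server and consults error labels):

    RETRYABLE_CODES = [
      11600, # InterruptedAtShutdown
      11602, # InterruptedDueToReplStateChange
      10107, # NotWritablePrimary
      13435, # NotPrimaryNoSecondaryOk
      13436, # NotPrimaryOrSecondary
      189,   # PrimarySteppedDown
      91,    # ShutdownInProgress
      7,     # HostNotFound
      6,     # HostUnreachable
      89,    # NetworkTimeout
      9001,  # SocketException
    ].freeze

    class ServerError < StandardError
      attr_reader :code
      def initialize(code)
        @code = code
        super("server error #{code}")
      end
    end

    # At most two attempts, matching the pairs of command_started_events
    # expected by the "succeeds on second attempt" tests.
    def with_retryable_read(retry_reads: true)
      attempts = 0
      begin
        attempts += 1
        yield
      rescue ServerError => e
        raise unless retry_reads && attempts < 2 && RETRYABLE_CODES.include?(e.code)
        retry
      end
    end

    calls = 0
    result = with_retryable_read do
      calls += 1
      raise ServerError, 11600 if calls == 1 # first attempt fails, retry succeeds
      :aggregate_result
    end
    p result # => :aggregate_result after exactly two attempts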
*retryable_command_started_event - description: "client.watch succeeds after PrimarySteppedDown" failPoint: <<: *failCommand_failPoint data: { failCommands: [aggregate], errorCode: 189 } operations: [*retryable_operation] expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "client.watch succeeds after ShutdownInProgress" failPoint: <<: *failCommand_failPoint data: { failCommands: [aggregate], errorCode: 91 } operations: [*retryable_operation] expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "client.watch succeeds after HostNotFound" failPoint: <<: *failCommand_failPoint data: { failCommands: [aggregate], errorCode: 7 } operations: [*retryable_operation] expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "client.watch succeeds after HostUnreachable" failPoint: <<: *failCommand_failPoint data: { failCommands: [aggregate], errorCode: 6 } operations: [*retryable_operation] expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "client.watch succeeds after NetworkTimeout" failPoint: <<: *failCommand_failPoint data: { failCommands: [aggregate], errorCode: 89 } operations: [*retryable_operation] expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "client.watch succeeds after SocketException" failPoint: <<: *failCommand_failPoint data: { failCommands: [aggregate], errorCode: 9001 } operations: [*retryable_operation] expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "client.watch fails after two NotWritablePrimary errors" failPoint: <<: *failCommand_failPoint mode: { times: 2 } data: { failCommands: [aggregate], errorCode: 10107 } operations: - &retryable_operation_fails <<: *retryable_operation error: true expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "client.watch fails after NotWritablePrimary when retryReads is false" clientOptions: retryReads: false failPoint: <<: *failCommand_failPoint data: { failCommands: [aggregate], errorCode: 10107 } operations: [*retryable_operation_fails] expectations: - *retryable_command_started_event mongo-ruby-driver-2.21.3/spec/spec_tests/data/retryable_reads/legacy/changeStreams-client.watch.yml000066400000000000000000000036711505113246500335210ustar00rootroot00000000000000runOn: - minServerVersion: "4.0" topology: ["replicaset"] - minServerVersion: "4.1.7" topology: ["sharded", "load-balanced"] serverless: "forbid" database_name: &database_name "retryable-reads-tests" collection_name: &collection_name "coll" data: - {_id: 1, x: 11} tests: - description: "client.watch succeeds on first attempt" operations: - &retryable_operation name: watch object: client expectations: - &retryable_command_started_event command_started_event: command: aggregate: 1 cursor: {} pipeline: [ { $changeStream: { "allChangesForCluster": true } } ] database_name: admin - description: "client.watch succeeds on second attempt" failPoint: &failCommand_failPoint configureFailPoint: failCommand mode: { times: 1 } data: failCommands: [aggregate] closeConnection: true operations: [*retryable_operation] expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "client.watch fails on first attempt" clientOptions: retryReads: false failPoint: *failCommand_failPoint operations: - &retryable_operation_fails <<: 
*retryable_operation error: true expectations: - *retryable_command_started_event - description: "client.watch fails on second attempt" failPoint: <<: *failCommand_failPoint mode: { times: 2 } operations: [*retryable_operation_fails] expectations: - *retryable_command_started_event - *retryable_command_started_event changeStreams-db.coll.watch-serverErrors.yml000066400000000000000000000127711505113246500362030ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/retryable_reads/legacyrunOn: - minServerVersion: "4.0" topology: ["replicaset"] - minServerVersion: "4.1.7" topology: ["sharded", "load-balanced"] serverless: "forbid" database_name: &database_name "retryable-reads-tests" collection_name: &collection_name "coll" data: - {_id: 1, x: 11} - {_id: 2, x: 22} - {_id: 3, x: 33} tests: - description: "db.coll.watch succeeds after InterruptedAtShutdown" failPoint: &failCommand_failPoint configureFailPoint: failCommand mode: { times: 1 } data: { failCommands: [aggregate], errorCode: 11600 } operations: - &retryable_operation name: watch object: collection expectations: - &retryable_command_started_event command_started_event: command: aggregate: *collection_name cursor: {} pipeline: [ { $changeStream: { } } ] database_name: *database_name - *retryable_command_started_event - description: "db.coll.watch succeeds after InterruptedDueToReplStateChange" failPoint: <<: *failCommand_failPoint data: { failCommands: [aggregate], errorCode: 11602 } operations: [*retryable_operation] expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "db.coll.watch succeeds after NotWritablePrimary" failPoint: <<: *failCommand_failPoint data: { failCommands: [aggregate], errorCode: 10107 } operations: [*retryable_operation] expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "db.coll.watch succeeds after NotPrimaryNoSecondaryOk" failPoint: <<: *failCommand_failPoint data: { failCommands: [aggregate], errorCode: 13435 } operations: [*retryable_operation] expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "db.coll.watch succeeds after NotPrimaryOrSecondary" failPoint: <<: *failCommand_failPoint data: { failCommands: [aggregate], errorCode: 13436 } operations: [*retryable_operation] expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "db.coll.watch succeeds after PrimarySteppedDown" failPoint: <<: *failCommand_failPoint data: { failCommands: [aggregate], errorCode: 189 } operations: [*retryable_operation] expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "db.coll.watch succeeds after ShutdownInProgress" failPoint: <<: *failCommand_failPoint data: { failCommands: [aggregate], errorCode: 91 } operations: [*retryable_operation] expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "db.coll.watch succeeds after HostNotFound" failPoint: <<: *failCommand_failPoint data: { failCommands: [aggregate], errorCode: 7 } operations: [*retryable_operation] expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "db.coll.watch succeeds after HostUnreachable" failPoint: <<: *failCommand_failPoint data: { failCommands: [aggregate], errorCode: 6 } operations: [*retryable_operation] expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "db.coll.watch 
succeeds after NetworkTimeout" failPoint: <<: *failCommand_failPoint data: { failCommands: [aggregate], errorCode: 89 } operations: [*retryable_operation] expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "db.coll.watch succeeds after SocketException" failPoint: <<: *failCommand_failPoint data: { failCommands: [aggregate], errorCode: 9001 } operations: [*retryable_operation] expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "db.coll.watch fails after two NotWritablePrimary errors" failPoint: <<: *failCommand_failPoint mode: { times: 2 } data: { failCommands: [aggregate], errorCode: 10107 } operations: - &retryable_operation_fails <<: *retryable_operation error: true expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "db.coll.watch fails after NotWritablePrimary when retryReads is false" clientOptions: retryReads: false failPoint: <<: *failCommand_failPoint data: { failCommands: [aggregate], errorCode: 10107 } operations: [*retryable_operation_fails] expectations: - *retryable_command_started_event mongo-ruby-driver-2.21.3/spec/spec_tests/data/retryable_reads/legacy/changeStreams-db.coll.watch.yml000066400000000000000000000037271505113246500335620ustar00rootroot00000000000000runOn: - minServerVersion: "4.0" topology: ["replicaset"] - minServerVersion: "4.1.7" topology: ["sharded", "load-balanced"] serverless: "forbid" database_name: &database_name "retryable-reads-tests" collection_name: &collection_name "coll" data: - {_id: 1, x: 11} tests: - description: "db.coll.watch succeeds on first attempt" operations: - &retryable_operation name: watch object: collection expectations: - &retryable_command_started_event command_started_event: command: aggregate: *collection_name cursor: {} pipeline: [ { $changeStream: { } } ] database_name: *database_name - description: "db.coll.watch succeeds on second attempt" failPoint: &failCommand_failPoint configureFailPoint: failCommand mode: { times: 1 } data: failCommands: - aggregate closeConnection: true operations: [*retryable_operation] expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "db.coll.watch fails on first attempt" clientOptions: retryReads: false failPoint: *failCommand_failPoint operations: - &retryable_operation_fails <<: *retryable_operation error: true expectations: - *retryable_command_started_event - description: "db.coll.watch fails on second attempt" failPoint: <<: *failCommand_failPoint mode: { times: 2 } operations: [*retryable_operation_fails] expectations: - *retryable_command_started_event - *retryable_command_started_event changeStreams-db.watch-serverErrors.yml000066400000000000000000000126531505113246500352520ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/retryable_reads/legacyrunOn: - minServerVersion: "4.0" topology: ["replicaset"] - minServerVersion: "4.1.7" topology: ["sharded", "load-balanced"] serverless: "forbid" database_name: &database_name "retryable-reads-tests" collection_name: &collection_name "coll" data: - {_id: 1, x: 11} - {_id: 2, x: 22} - {_id: 3, x: 33} tests: - description: "db.watch succeeds after InterruptedAtShutdown" failPoint: &failCommand_failPoint configureFailPoint: failCommand mode: { times: 1 } data: { failCommands: [aggregate], errorCode: 11600 } operations: - &retryable_operation name: watch object: database expectations: - &retryable_command_started_event command_started_event: 
          command:
            aggregate: 1
            cursor: {}
            pipeline: [ { $changeStream: { } } ]
          database_name: *database_name
      - *retryable_command_started_event
  - description: "db.watch succeeds after InterruptedDueToReplStateChange"
    failPoint: { <<: *failCommand_failPoint, data: { failCommands: [aggregate], errorCode: 11602 } }
    operations: [*retryable_operation]
    expectations: [*retryable_command_started_event, *retryable_command_started_event]
  - description: "db.watch succeeds after NotWritablePrimary"
    failPoint: { <<: *failCommand_failPoint, data: { failCommands: [aggregate], errorCode: 10107 } }
    operations: [*retryable_operation]
    expectations: [*retryable_command_started_event, *retryable_command_started_event]
  - description: "db.watch succeeds after NotPrimaryNoSecondaryOk"
    failPoint: { <<: *failCommand_failPoint, data: { failCommands: [aggregate], errorCode: 13435 } }
    operations: [*retryable_operation]
    expectations: [*retryable_command_started_event, *retryable_command_started_event]
  - description: "db.watch succeeds after NotPrimaryOrSecondary"
    failPoint: { <<: *failCommand_failPoint, data: { failCommands: [aggregate], errorCode: 13436 } }
    operations: [*retryable_operation]
    expectations: [*retryable_command_started_event, *retryable_command_started_event]
  - description: "db.watch succeeds after PrimarySteppedDown"
    failPoint: { <<: *failCommand_failPoint, data: { failCommands: [aggregate], errorCode: 189 } }
    operations: [*retryable_operation]
    expectations: [*retryable_command_started_event, *retryable_command_started_event]
  - description: "db.watch succeeds after ShutdownInProgress"
    failPoint: { <<: *failCommand_failPoint, data: { failCommands: [aggregate], errorCode: 91 } }
    operations: [*retryable_operation]
    expectations: [*retryable_command_started_event, *retryable_command_started_event]
  - description: "db.watch succeeds after HostNotFound"
    failPoint: { <<: *failCommand_failPoint, data: { failCommands: [aggregate], errorCode: 7 } }
    operations: [*retryable_operation]
    expectations: [*retryable_command_started_event, *retryable_command_started_event]
  - description: "db.watch succeeds after HostUnreachable"
    failPoint: { <<: *failCommand_failPoint, data: { failCommands: [aggregate], errorCode: 6 } }
    operations: [*retryable_operation]
    expectations: [*retryable_command_started_event, *retryable_command_started_event]
  - description: "db.watch succeeds after NetworkTimeout"
    failPoint: { <<: *failCommand_failPoint, data: { failCommands: [aggregate], errorCode: 89 } }
    operations: [*retryable_operation]
    expectations: [*retryable_command_started_event, *retryable_command_started_event]
  - description: "db.watch succeeds after SocketException"
    failPoint: { <<: *failCommand_failPoint, data: { failCommands: [aggregate], errorCode: 9001 } }
    operations: [*retryable_operation]
    expectations: [*retryable_command_started_event, *retryable_command_started_event]
  - description: "db.watch fails after two NotWritablePrimary errors"
    failPoint: { <<: *failCommand_failPoint, mode: { times: 2 }, data: { failCommands: [aggregate], errorCode: 10107 } }
    operations:
      - &retryable_operation_fails
        <<: *retryable_operation
        error: true
    expectations: [*retryable_command_started_event, *retryable_command_started_event]
  - description: "db.watch fails after NotWritablePrimary when retryReads is false"
    clientOptions: { retryReads: false }
    failPoint: { <<: *failCommand_failPoint, data: { failCommands: [aggregate], errorCode: 10107 } }
    operations: [*retryable_operation_fails]
    expectations: [*retryable_command_started_event]
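# Editor's note (not part of the spec data): the "watch" operations exercised by
# the change stream files above and below map onto the Ruby driver's public
# change stream helpers, and the "aggregate" command they retry is the one sent
# when the stream is first opened. A minimal sketch, assuming a replica-set
# `client` (the host and names here are illustrative, not taken from the specs):
#
#   require 'mongo'
#   client = Mongo::Client.new(['localhost:27017'], database: 'retryable-reads-tests')
#   client.database.watch   # db.watch: sends { aggregate: 1, pipeline: [{ '$changeStream' => {} }] }
#   client[:coll].watch     # db.coll.watch: sends { aggregate: 'coll', ... }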
# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/retryable_reads/legacy/changeStreams-db.watch.yml

runOn:
  - { minServerVersion: "4.0", topology: ["replicaset"] }
  - { minServerVersion: "4.1.7", topology: ["sharded", "load-balanced"] }
serverless: "forbid"
database_name: &database_name "retryable-reads-tests"
collection_name: &collection_name "coll"
data:
  - { _id: 1, x: 11 }
tests:
  - description: "db.watch succeeds on first attempt"
    operations:
      - &retryable_operation
        name: watch
        object: database
    expectations:
      - &retryable_command_started_event
        command_started_event:
          command:
            aggregate: 1
            cursor: {}
            pipeline: [ { $changeStream: { } } ]
          database_name: *database_name
  - description: "db.watch succeeds on second attempt"
    failPoint: &failCommand_failPoint
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands: [aggregate]
        closeConnection: true
    operations: [*retryable_operation]
    expectations: [*retryable_command_started_event, *retryable_command_started_event]
  - description: "db.watch fails on first attempt"
    clientOptions: { retryReads: false }
    failPoint: *failCommand_failPoint
    operations:
      - &retryable_operation_fails
        <<: *retryable_operation
        error: true
    expectations: [*retryable_command_started_event]
  - description: "db.watch fails on second attempt"
    failPoint: { <<: *failCommand_failPoint, mode: { times: 2 } }
    operations: [*retryable_operation_fails]
    expectations: [*retryable_command_started_event, *retryable_command_started_event]

# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/retryable_reads/legacy/count-serverErrors.yml

runOn:
  - { minServerVersion: "4.0", topology: ["single", "replicaset"] }
  - { minServerVersion: "4.1.7", topology: ["sharded"] }
database_name: &database_name "retryable-reads-tests"
collection_name: &collection_name "coll"
data:
  - { _id: 1, x: 11 }
  - { _id: 2, x: 22 }
tests:
  - description: "Count succeeds after InterruptedAtShutdown"
    failPoint: &failCommand_failPoint
      configureFailPoint: failCommand
      mode: { times: 1 }
      data: { failCommands: [count], errorCode: 11600 }
    operations:
      - &retryable_operation_succeeds
        <<: &retryable_operation
          name: count
          object: collection
          arguments: { filter: { } }
        result: 2
    expectations:
      - &retryable_command_started_event
        command_started_event:
          command:
            count: *collection_name
          database_name: *database_name
      - *retryable_command_started_event
  - description: "Count succeeds after InterruptedDueToReplStateChange"
    failPoint: { <<: *failCommand_failPoint, data: { failCommands: [count], errorCode: 11602 } }
    operations: [*retryable_operation_succeeds]
    expectations: [*retryable_command_started_event, *retryable_command_started_event]
  - description: "Count succeeds after NotMaster"
    failPoint: { <<: *failCommand_failPoint, data: { failCommands: [count], errorCode: 10107 } }
    operations: [*retryable_operation_succeeds]
    expectations: [*retryable_command_started_event, *retryable_command_started_event]
  - description: "Count succeeds after NotPrimaryNoSecondaryOk"
    failPoint: { <<: *failCommand_failPoint, data: { failCommands: [count], errorCode: 13435 } }
    operations: [*retryable_operation_succeeds]
    expectations: [*retryable_command_started_event, *retryable_command_started_event]
  - description: "Count succeeds after NotMasterOrSecondary"
    failPoint: { <<: *failCommand_failPoint, data: { failCommands: [count], errorCode: 13436 } }
    operations: [*retryable_operation_succeeds]
    expectations:
      - *retryable_command_started_event
      - *retryable_command_started_event
  - description: "Count succeeds after PrimarySteppedDown"
    failPoint: { <<: *failCommand_failPoint, data: { failCommands: [count], errorCode: 189 } }
    operations: [*retryable_operation_succeeds]
    expectations: [*retryable_command_started_event, *retryable_command_started_event]
  - description: "Count succeeds after ShutdownInProgress"
    failPoint: { <<: *failCommand_failPoint, data: { failCommands: [count], errorCode: 91 } }
    operations: [*retryable_operation_succeeds]
    expectations: [*retryable_command_started_event, *retryable_command_started_event]
  - description: "Count succeeds after HostNotFound"
    failPoint: { <<: *failCommand_failPoint, data: { failCommands: [count], errorCode: 7 } }
    operations: [*retryable_operation_succeeds]
    expectations: [*retryable_command_started_event, *retryable_command_started_event]
  - description: "Count succeeds after HostUnreachable"
    failPoint: { <<: *failCommand_failPoint, data: { failCommands: [count], errorCode: 6 } }
    operations: [*retryable_operation_succeeds]
    expectations: [*retryable_command_started_event, *retryable_command_started_event]
  - description: "Count succeeds after NetworkTimeout"
    failPoint: { <<: *failCommand_failPoint, data: { failCommands: [count], errorCode: 89 } }
    operations: [*retryable_operation_succeeds]
    expectations: [*retryable_command_started_event, *retryable_command_started_event]
  - description: "Count succeeds after SocketException"
    failPoint: { <<: *failCommand_failPoint, data: { failCommands: [count], errorCode: 9001 } }
    operations: [*retryable_operation_succeeds]
    expectations: [*retryable_command_started_event, *retryable_command_started_event]
  - description: "Count fails after two NotMaster errors"
    failPoint: { <<: *failCommand_failPoint, mode: { times: 2 }, data: { failCommands: [count], errorCode: 10107 } }
    operations:
      - &retryable_operation_fails
        <<: *retryable_operation
        error: true
    expectations: [*retryable_command_started_event, *retryable_command_started_event]
  - description: "Count fails after NotMaster when retryReads is false"
    clientOptions: { retryReads: false }
    failPoint: { <<: *failCommand_failPoint, data: { failCommands: [count], errorCode: 10107 } }
    operations: [*retryable_operation_fails]
    expectations: [*retryable_command_started_event]

# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/retryable_reads/legacy/count.yml

runOn:
  - { minServerVersion: "4.0", topology: ["single", "replicaset"] }
  - { minServerVersion: "4.1.7", topology: ["sharded"] }
database_name: &database_name "retryable-reads-tests"
collection_name: &collection_name "coll"
data:
  - { _id: 1, x: 11 }
  - { _id: 2, x: 22 }
tests:
  - description: "Count succeeds on first attempt"
    operations:
      - &retryable_operation_succeeds
        <<: &retryable_operation
          name: count
          object: collection
          arguments: { filter: { } }
        result: 2
    expectations:
      - &retryable_command_started_event
        command_started_event:
          command:
            count: *collection_name
          database_name: *database_name
  - description: "Count succeeds on second attempt"
    failPoint: &failCommand_failPoint
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands: [count]
        closeConnection: true
    operations: [*retryable_operation_succeeds]
    expectations: [*retryable_command_started_event, *retryable_command_started_event]
  - description: "Count fails on first attempt"
    clientOptions: { retryReads: false }
    failPoint: *failCommand_failPoint
    operations:
      - &retryable_operation_fails
        <<: *retryable_operation
        error: true
    expectations:
      - *retryable_command_started_event
description: "Count fails on second attempt" failPoint: <<: *failCommand_failPoint mode: { times: 2 } operations: [*retryable_operation_fails] expectations: - *retryable_command_started_event - *retryable_command_started_event mongo-ruby-driver-2.21.3/spec/spec_tests/data/retryable_reads/legacy/countDocuments-serverErrors.yml000066400000000000000000000132311505113246500340600ustar00rootroot00000000000000runOn: - minServerVersion: "4.0" topology: ["single", "replicaset"] - minServerVersion: "4.1.7" topology: ["sharded"] database_name: &database_name "retryable-reads-tests" collection_name: &collection_name "coll" data: - { _id: 1, x: 11 } - { _id: 2, x: 22 } tests: - description: "CountDocuments succeeds after InterruptedAtShutdown" failPoint: &failCommand_failPoint configureFailPoint: failCommand mode: { times: 1 } data: { failCommands: [aggregate], errorCode: 11600 } operations: - &retryable_operation_succeeds <<: &retryable_operation name: countDocuments object: collection arguments: { filter: { } } result: 2 expectations: - &retryable_command_started_event command_started_event: command: aggregate: *collection_name pipeline: [{'$match': {}}, {'$group': {'_id': 1, 'n': {'$sum': 1}}}] database_name: *database_name - *retryable_command_started_event - description: "CountDocuments succeeds after InterruptedDueToReplStateChange" failPoint: <<: *failCommand_failPoint data: { failCommands: [aggregate], errorCode: 11602 } operations: [*retryable_operation_succeeds] expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "CountDocuments succeeds after NotMaster" failPoint: <<: *failCommand_failPoint data: { failCommands: [aggregate], errorCode: 10107 } operations: [*retryable_operation_succeeds] expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "CountDocuments succeeds after NotPrimaryNoSecondaryOk" failPoint: <<: *failCommand_failPoint data: { failCommands: [aggregate], errorCode: 13435 } operations: [*retryable_operation_succeeds] expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "CountDocuments succeeds after NotMasterOrSecondary" failPoint: <<: *failCommand_failPoint data: { failCommands: [aggregate], errorCode: 13436 } operations: [*retryable_operation_succeeds] expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "CountDocuments succeeds after PrimarySteppedDown" failPoint: <<: *failCommand_failPoint data: { failCommands: [aggregate], errorCode: 189 } operations: [*retryable_operation_succeeds] expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "CountDocuments succeeds after ShutdownInProgress" failPoint: <<: *failCommand_failPoint data: { failCommands: [aggregate], errorCode: 91 } operations: [*retryable_operation_succeeds] expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "CountDocuments succeeds after HostNotFound" failPoint: <<: *failCommand_failPoint data: { failCommands: [aggregate], errorCode: 7 } operations: [*retryable_operation_succeeds] expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "CountDocuments succeeds after HostUnreachable" failPoint: <<: *failCommand_failPoint data: { failCommands: [aggregate], errorCode: 6 } operations: [*retryable_operation_succeeds] expectations: - *retryable_command_started_event - *retryable_command_started_event - 
description: "CountDocuments succeeds after NetworkTimeout" failPoint: <<: *failCommand_failPoint data: { failCommands: [aggregate], errorCode: 89 } operations: [*retryable_operation_succeeds] expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "CountDocuments succeeds after SocketException" failPoint: <<: *failCommand_failPoint data: { failCommands: [aggregate], errorCode: 9001 } operations: [*retryable_operation_succeeds] expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "CountDocuments fails after two NotMaster errors" failPoint: <<: *failCommand_failPoint mode: { times: 2 } data: { failCommands: [aggregate], errorCode: 10107 } operations: - &retryable_operation_fails <<: *retryable_operation error: true expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "CountDocuments fails after NotMaster when retryReads is false" clientOptions: retryReads: false failPoint: <<: *failCommand_failPoint data: { failCommands: [aggregate], errorCode: 10107 } operations: [*retryable_operation_fails] expectations: - *retryable_command_started_event mongo-ruby-driver-2.21.3/spec/spec_tests/data/retryable_reads/legacy/countDocuments.yml000066400000000000000000000041051505113246500313570ustar00rootroot00000000000000runOn: - minServerVersion: "4.0" topology: ["single", "replicaset"] - minServerVersion: "4.1.7" topology: ["sharded"] database_name: &database_name "retryable-reads-tests" collection_name: &collection_name "coll" data: - { _id: 1, x: 11 } - { _id: 2, x: 22 } tests: - description: "CountDocuments succeeds on first attempt" operations: - &retryable_operation_succeeds <<: &retryable_operation name: countDocuments object: collection arguments: { filter: { } } result: 2 expectations: - &retryable_command_started_event command_started_event: command: aggregate: *collection_name pipeline: [{'$match': {}}, {'$group': {'_id': 1, 'n': {'$sum': 1}}}] database_name: *database_name - description: "CountDocuments succeeds on second attempt" failPoint: &failCommand_failPoint configureFailPoint: failCommand mode: { times: 1 } data: failCommands: [aggregate] closeConnection: true operations: [*retryable_operation_succeeds] expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "CountDocuments fails on first attempt" clientOptions: retryReads: false failPoint: *failCommand_failPoint operations: - &retryable_operation_fails <<: *retryable_operation error: true expectations: - *retryable_command_started_event - description: "CountDocuments fails on second attempt" failPoint: <<: *failCommand_failPoint mode: { times: 2 } operations: [*retryable_operation_fails] expectations: - *retryable_command_started_event - *retryable_command_started_event mongo-ruby-driver-2.21.3/spec/spec_tests/data/retryable_reads/legacy/distinct-serverErrors.yml000066400000000000000000000132431505113246500326720ustar00rootroot00000000000000runOn: - minServerVersion: "4.0" topology: ["single", "replicaset"] - minServerVersion: "4.1.7" topology: ["sharded"] database_name: &database_name "retryable-reads-tests" collection_name: &collection_name "coll" data: - {_id: 1, x: 11} - {_id: 2, x: 22} - {_id: 3, x: 33} tests: - description: "Distinct succeeds after InterruptedAtShutdown" failPoint: &failCommand_failPoint configureFailPoint: failCommand mode: { times: 1 } data: { failCommands: [distinct], errorCode: 11600 } operations: - &retryable_operation_succeeds <<: 
&retryable_operation name: distinct object: collection arguments: { fieldName: "x", filter: { _id: { $gt: 1 } } } result: - 22 - 33 expectations: - &retryable_command_started_event command_started_event: command: distinct: *collection_name key: "x" query: _id: {$gt: 1} database_name: *database_name - *retryable_command_started_event - description: "Distinct succeeds after InterruptedDueToReplStateChange" failPoint: <<: *failCommand_failPoint data: { failCommands: [distinct], errorCode: 11602 } operations: [*retryable_operation_succeeds] expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "Distinct succeeds after NotMaster" failPoint: <<: *failCommand_failPoint data: { failCommands: [distinct], errorCode: 10107 } operations: [*retryable_operation_succeeds] expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "Distinct succeeds after NotPrimaryNoSecondaryOk" failPoint: <<: *failCommand_failPoint data: { failCommands: [distinct], errorCode: 13435 } operations: [*retryable_operation_succeeds] expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "Distinct succeeds after NotMasterOrSecondary" failPoint: <<: *failCommand_failPoint data: { failCommands: [distinct], errorCode: 13436 } operations: [*retryable_operation_succeeds] expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "Distinct succeeds after PrimarySteppedDown" failPoint: <<: *failCommand_failPoint data: { failCommands: [distinct], errorCode: 189 } operations: [*retryable_operation_succeeds] expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "Distinct succeeds after ShutdownInProgress" failPoint: <<: *failCommand_failPoint data: { failCommands: [distinct], errorCode: 91 } operations: [*retryable_operation_succeeds] expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "Distinct succeeds after HostNotFound" failPoint: <<: *failCommand_failPoint data: { failCommands: [distinct], errorCode: 7 } operations: [*retryable_operation_succeeds] expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "Distinct succeeds after HostUnreachable" failPoint: <<: *failCommand_failPoint data: { failCommands: [distinct], errorCode: 6 } operations: [*retryable_operation_succeeds] expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "Distinct succeeds after NetworkTimeout" failPoint: <<: *failCommand_failPoint data: { failCommands: [distinct], errorCode: 89 } operations: [*retryable_operation_succeeds] expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "Distinct succeeds after SocketException" failPoint: <<: *failCommand_failPoint data: { failCommands: [distinct], errorCode: 9001 } operations: [*retryable_operation_succeeds] expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "Distinct fails after two NotMaster errors" failPoint: <<: *failCommand_failPoint mode: { times: 2 } data: { failCommands: [distinct], errorCode: 10107 } operations: - &retryable_operation_fails <<: *retryable_operation error: true expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "Distinct fails after NotMaster when retryReads is false" clientOptions: retryReads: false failPoint: <<: 
    failPoint: { <<: *failCommand_failPoint, data: { failCommands: [distinct], errorCode: 10107 } }
    operations: [*retryable_operation_fails]
    expectations: [*retryable_command_started_event]

# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/retryable_reads/legacy/distinct.yml

runOn:
  - { minServerVersion: "4.0", topology: ["single", "replicaset"] }
  - { minServerVersion: "4.1.7", topology: ["sharded"] }
database_name: &database_name "retryable-reads-tests"
collection_name: &collection_name "coll"
data:
  - { _id: 1, x: 11 }
  - { _id: 2, x: 22 }
  - { _id: 3, x: 33 }
tests:
  - description: "Distinct succeeds on first attempt"
    operations:
      - &retryable_operation_succeeds
        <<: &retryable_operation
          name: distinct
          object: collection
          arguments: { fieldName: "x", filter: { _id: { $gt: 1 } } }
        result: [22, 33]
    expectations:
      - &retryable_command_started_event
        command_started_event:
          command:
            distinct: *collection_name
            key: "x"
            query: { _id: { $gt: 1 } }
          database_name: *database_name
  - description: "Distinct succeeds on second attempt"
    failPoint: &failCommand_failPoint
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands: [distinct]
        closeConnection: true
    operations: [*retryable_operation_succeeds]
    expectations: [*retryable_command_started_event, *retryable_command_started_event]
  - description: "Distinct fails on first attempt"
    clientOptions: { retryReads: false }
    failPoint: *failCommand_failPoint
    operations:
      - &retryable_operation_fails
        <<: *retryable_operation
        error: true
    expectations: [*retryable_command_started_event]
  - description: "Distinct fails on second attempt"
    failPoint: { <<: *failCommand_failPoint, mode: { times: 2 } }
    operations: [*retryable_operation_fails]
    expectations: [*retryable_command_started_event, *retryable_command_started_event]

# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/retryable_reads/legacy/estimatedDocumentCount-serverErrors.yml

runOn:
  - { minServerVersion: "4.0", topology: ["single", "replicaset"] }
  - { minServerVersion: "4.1.7", topology: ["sharded"] }
database_name: &database_name "retryable-reads-tests"
collection_name: &collection_name "coll"
data:
  - { _id: 1, x: 11 }
  - { _id: 2, x: 22 }
tests:
  - description: "EstimatedDocumentCount succeeds after InterruptedAtShutdown"
    failPoint: &failCommand_failPoint
      configureFailPoint: failCommand
      mode: { times: 1 }
      data: { failCommands: [count], errorCode: 11600 }
    operations:
      - &retryable_operation_succeeds
        <<: &retryable_operation
          name: estimatedDocumentCount
          object: collection
        result: 2
    expectations:
      - &retryable_command_started_event
        command_started_event:
          command:
            count: *collection_name
          database_name: *database_name
      - *retryable_command_started_event
  - description: "EstimatedDocumentCount succeeds after InterruptedDueToReplStateChange"
    failPoint: { <<: *failCommand_failPoint, data: { failCommands: [count], errorCode: 11602 } }
    operations: [*retryable_operation_succeeds]
    expectations: [*retryable_command_started_event, *retryable_command_started_event]
  - description: "EstimatedDocumentCount succeeds after NotMaster"
    failPoint: { <<: *failCommand_failPoint, data: { failCommands: [count], errorCode: 10107 } }
    operations: [*retryable_operation_succeeds]
    expectations: [*retryable_command_started_event, *retryable_command_started_event]
  - description: "EstimatedDocumentCount succeeds after NotPrimaryNoSecondaryOk"
    failPoint: { <<: *failCommand_failPoint, data: { failCommands: [count], errorCode: 13435 } }
    operations: [*retryable_operation_succeeds]
    expectations: [*retryable_command_started_event, *retryable_command_started_event]
  - description: "EstimatedDocumentCount succeeds after NotMasterOrSecondary"
    failPoint: { <<: *failCommand_failPoint, data: { failCommands: [count], errorCode: 13436 } }
    operations: [*retryable_operation_succeeds]
    expectations: [*retryable_command_started_event, *retryable_command_started_event]
  - description: "EstimatedDocumentCount succeeds after PrimarySteppedDown"
    failPoint: { <<: *failCommand_failPoint, data: { failCommands: [count], errorCode: 189 } }
    operations: [*retryable_operation_succeeds]
    expectations: [*retryable_command_started_event, *retryable_command_started_event]
  - description: "EstimatedDocumentCount succeeds after ShutdownInProgress"
    failPoint: { <<: *failCommand_failPoint, data: { failCommands: [count], errorCode: 91 } }
    operations: [*retryable_operation_succeeds]
    expectations: [*retryable_command_started_event, *retryable_command_started_event]
  - description: "EstimatedDocumentCount succeeds after HostNotFound"
    failPoint: { <<: *failCommand_failPoint, data: { failCommands: [count], errorCode: 7 } }
    operations: [*retryable_operation_succeeds]
    expectations: [*retryable_command_started_event, *retryable_command_started_event]
  - description: "EstimatedDocumentCount succeeds after HostUnreachable"
    failPoint: { <<: *failCommand_failPoint, data: { failCommands: [count], errorCode: 6 } }
    operations: [*retryable_operation_succeeds]
    expectations: [*retryable_command_started_event, *retryable_command_started_event]
  - description: "EstimatedDocumentCount succeeds after NetworkTimeout"
    failPoint: { <<: *failCommand_failPoint, data: { failCommands: [count], errorCode: 89 } }
    operations: [*retryable_operation_succeeds]
    expectations: [*retryable_command_started_event, *retryable_command_started_event]
  - description: "EstimatedDocumentCount succeeds after SocketException"
    failPoint: { <<: *failCommand_failPoint, data: { failCommands: [count], errorCode: 9001 } }
    operations: [*retryable_operation_succeeds]
    expectations: [*retryable_command_started_event, *retryable_command_started_event]
  - description: "EstimatedDocumentCount fails after two NotMaster errors"
    failPoint: { <<: *failCommand_failPoint, mode: { times: 2 }, data: { failCommands: [count], errorCode: 10107 } }
    operations:
      - &retryable_operation_fails
        <<: *retryable_operation
        error: true
    expectations: [*retryable_command_started_event, *retryable_command_started_event]
  - description: "EstimatedDocumentCount fails after NotMaster when retryReads is false"
    clientOptions: { retryReads: false }
    failPoint: { <<: *failCommand_failPoint, data: { failCommands: [count], errorCode: 10107 } }
    operations: [*retryable_operation_fails]
    expectations: [*retryable_command_started_event]

# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/retryable_reads/legacy/estimatedDocumentCount.yml

runOn:
  - { minServerVersion: "4.0", topology: ["single", "replicaset"] }
  - { minServerVersion: "4.1.7", topology: ["sharded"] }
database_name: &database_name "retryable-reads-tests"
collection_name: &collection_name "coll"
data:
  - { _id: 1, x: 11 }
  - { _id: 2, x: 22 }
tests:
  - description: "EstimatedDocumentCount succeeds on first attempt"
    operations:
      - &retryable_operation_succeeds
        <<: &retryable_operation
          name: estimatedDocumentCount
          object: collection
        result: 2
    expectations:
      - &retryable_command_started_event
        command_started_event:
          command:
            count: *collection_name
          database_name: *database_name
"EstimatedDocumentCount succeeds on second attempt" failPoint: &failCommand_failPoint configureFailPoint: failCommand mode: { times: 1 } data: failCommands: [count] closeConnection: true operations: [*retryable_operation_succeeds] expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "EstimatedDocumentCount fails on first attempt" clientOptions: retryReads: false failPoint: *failCommand_failPoint operations: - &retryable_operation_fails <<: *retryable_operation error: true expectations: - *retryable_command_started_event - description: "EstimatedDocumentCount fails on second attempt" failPoint: <<: *failCommand_failPoint mode: { times: 2 } operations: [*retryable_operation_fails] expectations: - *retryable_command_started_event - *retryable_command_started_event mongo-ruby-driver-2.21.3/spec/spec_tests/data/retryable_reads/legacy/find-serverErrors.yml000066400000000000000000000133031505113246500317660ustar00rootroot00000000000000runOn: - minServerVersion: "4.0" topology: ["single", "replicaset"] - minServerVersion: "4.1.7" topology: ["sharded"] database_name: &database_name "retryable-reads-tests" collection_name: &collection_name "coll" data: - {_id: 1, x: 11} - {_id: 2, x: 22} - {_id: 3, x: 33} - {_id: 4, x: 44} - {_id: 5, x: 55} tests: - description: "Find succeeds after InterruptedAtShutdown" failPoint: &failCommand_failPoint configureFailPoint: failCommand mode: { times: 1 } data: { failCommands: [find], errorCode: 11600 } operations: - &retryable_operation_succeeds <<: &retryable_operation name: find object: collection arguments: { filter: {}, sort: { _id: 1 }, limit: 4 } result: - {_id: 1, x: 11} - {_id: 2, x: 22} - {_id: 3, x: 33} - {_id: 4, x: 44} expectations: - &retryable_command_started_event command_started_event: command: find: *collection_name filter: {} sort: {_id: 1} limit: 4 database_name: *database_name - *retryable_command_started_event - description: "Find succeeds after InterruptedDueToReplStateChange" failPoint: <<: *failCommand_failPoint data: { failCommands: [find], errorCode: 11602 } operations: [*retryable_operation_succeeds] expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "Find succeeds after NotMaster" failPoint: <<: *failCommand_failPoint data: { failCommands: [find], errorCode: 10107 } operations: [*retryable_operation_succeeds] expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "Find succeeds after NotPrimaryNoSecondaryOk" failPoint: <<: *failCommand_failPoint data: { failCommands: [find], errorCode: 13435 } operations: [*retryable_operation_succeeds] expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "Find succeeds after NotMasterOrSecondary" failPoint: <<: *failCommand_failPoint data: { failCommands: [find], errorCode: 13436 } operations: [*retryable_operation_succeeds] expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "Find succeeds after PrimarySteppedDown" failPoint: <<: *failCommand_failPoint data: { failCommands: [find], errorCode: 189 } operations: [*retryable_operation_succeeds] expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "Find succeeds after ShutdownInProgress" failPoint: <<: *failCommand_failPoint data: { failCommands: [find], errorCode: 91 } operations: [*retryable_operation_succeeds] expectations: - *retryable_command_started_event - 
      - *retryable_command_started_event
  - description: "Find succeeds after HostNotFound"
    failPoint: { <<: *failCommand_failPoint, data: { failCommands: [find], errorCode: 7 } }
    operations: [*retryable_operation_succeeds]
    expectations: [*retryable_command_started_event, *retryable_command_started_event]
  - description: "Find succeeds after HostUnreachable"
    failPoint: { <<: *failCommand_failPoint, data: { failCommands: [find], errorCode: 6 } }
    operations: [*retryable_operation_succeeds]
    expectations: [*retryable_command_started_event, *retryable_command_started_event]
  - description: "Find succeeds after NetworkTimeout"
    failPoint: { <<: *failCommand_failPoint, data: { failCommands: [find], errorCode: 89 } }
    operations: [*retryable_operation_succeeds]
    expectations: [*retryable_command_started_event, *retryable_command_started_event]
  - description: "Find succeeds after SocketException"
    failPoint: { <<: *failCommand_failPoint, data: { failCommands: [find], errorCode: 9001 } }
    operations: [*retryable_operation_succeeds]
    expectations: [*retryable_command_started_event, *retryable_command_started_event]
  - description: "Find fails after two NotMaster errors"
    failPoint: { <<: *failCommand_failPoint, mode: { times: 2 }, data: { failCommands: [find], errorCode: 10107 } }
    operations:
      - &retryable_operation_fails
        <<: *retryable_operation
        error: true
    expectations: [*retryable_command_started_event, *retryable_command_started_event]
  - description: "Find fails after NotMaster when retryReads is false"
    clientOptions: { retryReads: false }
    failPoint: { <<: *failCommand_failPoint, data: { failCommands: [find], errorCode: 10107 } }
    operations: [*retryable_operation_fails]
    expectations: [*retryable_command_started_event]

# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/retryable_reads/legacy/find.yml

runOn:
  - { minServerVersion: "4.0", topology: ["single", "replicaset"] }
  - { minServerVersion: "4.1.7", topology: ["sharded"] }
database_name: &database_name "retryable-reads-tests"
collection_name: &collection_name "coll"
data:
  - { _id: 1, x: 11 }
  - { _id: 2, x: 22 }
  - { _id: 3, x: 33 }
  - { _id: 4, x: 44 }
  - { _id: 5, x: 55 }
tests:
  - description: "Find succeeds on first attempt"
    operations:
      - &retryable_operation_succeeds
        <<: &retryable_operation
          name: find
          object: collection
          arguments:
            filter: {}
            sort: { _id: 1 }
            limit: 4
        result:
          - { _id: 1, x: 11 }
          - { _id: 2, x: 22 }
          - { _id: 3, x: 33 }
          - { _id: 4, x: 44 }
    expectations:
      - &retryable_command_started_event
        command_started_event:
          command:
            find: *collection_name
            filter: {}
            sort: { _id: 1 }
            limit: 4
          database_name: *database_name
  - description: "Find succeeds on second attempt with explicit clientOptions"
    clientOptions: { retryReads: true }
    failPoint: &failCommand_failPoint
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands: [find]
        closeConnection: true
    operations: [*retryable_operation_succeeds]
    expectations: [*retryable_command_started_event, *retryable_command_started_event]
  - description: "Find succeeds on second attempt"
    failPoint: *failCommand_failPoint
    operations: [*retryable_operation_succeeds]
    expectations: [*retryable_command_started_event, *retryable_command_started_event]
  - description: "Find fails on first attempt"
    clientOptions: { retryReads: false }
    failPoint: *failCommand_failPoint
    operations:
      - &retryable_operation_fails
        <<: *retryable_operation
        error: true
    expectations: [*retryable_command_started_event]
  - description: "Find fails on second attempt"
    failPoint: { <<: *failCommand_failPoint, mode: { times: 2 } }
    operations: [*retryable_operation_fails]
    expectations: [*retryable_command_started_event, *retryable_command_started_event]

# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/retryable_reads/legacy/findOne-serverErrors.yml

runOn:
  - { minServerVersion: "4.0", topology: ["single", "replicaset"] }
  - { minServerVersion: "4.1.7", topology: ["sharded"] }
database_name: &database_name "retryable-reads-tests"
collection_name: &collection_name "coll"
data:
  - { _id: 1, x: 11 }
  - { _id: 2, x: 22 }
  - { _id: 3, x: 33 }
  - { _id: 4, x: 44 }
  - { _id: 5, x: 55 }
tests:
  - description: "FindOne succeeds after InterruptedAtShutdown"
    failPoint: &failCommand_failPoint
      configureFailPoint: failCommand
      mode: { times: 1 }
      data: { failCommands: [find], errorCode: 11600 }
    operations:
      - &retryable_operation_succeeds
        <<: &retryable_operation
          name: findOne
          object: collection
          arguments:
            filter: { _id: 1 }
        result: { _id: 1, x: 11 }
    expectations:
      - &retryable_command_started_event
        command_started_event:
          command:
            find: *collection_name
            filter: { _id: 1 }
          database_name: *database_name
      - *retryable_command_started_event
  - description: "FindOne succeeds after InterruptedDueToReplStateChange"
    failPoint: { <<: *failCommand_failPoint, data: { failCommands: [find], errorCode: 11602 } }
    operations: [*retryable_operation_succeeds]
    expectations: [*retryable_command_started_event, *retryable_command_started_event]
  - description: "FindOne succeeds after NotMaster"
    failPoint: { <<: *failCommand_failPoint, data: { failCommands: [find], errorCode: 10107 } }
    operations: [*retryable_operation_succeeds]
    expectations: [*retryable_command_started_event, *retryable_command_started_event]
  - description: "FindOne succeeds after NotPrimaryNoSecondaryOk"
    failPoint: { <<: *failCommand_failPoint, data: { failCommands: [find], errorCode: 13435 } }
    operations: [*retryable_operation_succeeds]
    expectations: [*retryable_command_started_event, *retryable_command_started_event]
  - description: "FindOne succeeds after NotMasterOrSecondary"
    failPoint: { <<: *failCommand_failPoint, data: { failCommands: [find], errorCode: 13436 } }
    operations: [*retryable_operation_succeeds]
    expectations: [*retryable_command_started_event, *retryable_command_started_event]
  - description: "FindOne succeeds after PrimarySteppedDown"
    failPoint: { <<: *failCommand_failPoint, data: { failCommands: [find], errorCode: 189 } }
    operations: [*retryable_operation_succeeds]
    expectations: [*retryable_command_started_event, *retryable_command_started_event]
  - description: "FindOne succeeds after ShutdownInProgress"
    failPoint: { <<: *failCommand_failPoint, data: { failCommands: [find], errorCode: 91 } }
    operations: [*retryable_operation_succeeds]
    expectations: [*retryable_command_started_event, *retryable_command_started_event]
  - description: "FindOne succeeds after HostNotFound"
    failPoint: { <<: *failCommand_failPoint, data: { failCommands: [find], errorCode: 7 } }
    operations: [*retryable_operation_succeeds]
    expectations: [*retryable_command_started_event, *retryable_command_started_event]
  - description: "FindOne succeeds after HostUnreachable"
    failPoint: { <<: *failCommand_failPoint, data: { failCommands: [find], errorCode: 6 } }
    operations: [*retryable_operation_succeeds]
    expectations: [*retryable_command_started_event, *retryable_command_started_event]
  - description: "FindOne succeeds after NetworkTimeout"
    failPoint: { <<: *failCommand_failPoint, data: { failCommands: [find], errorCode: 89 } }
    operations: [*retryable_operation_succeeds]
    expectations:
      - *retryable_command_started_event
      - *retryable_command_started_event
  - description: "FindOne succeeds after SocketException"
    failPoint: { <<: *failCommand_failPoint, data: { failCommands: [find], errorCode: 9001 } }
    operations: [*retryable_operation_succeeds]
    expectations: [*retryable_command_started_event, *retryable_command_started_event]
  - description: "FindOne fails after two NotMaster errors"
    failPoint: { <<: *failCommand_failPoint, mode: { times: 2 }, data: { failCommands: [find], errorCode: 10107 } }
    operations:
      - &retryable_operation_fails
        <<: *retryable_operation
        error: true
    expectations: [*retryable_command_started_event, *retryable_command_started_event]
  - description: "FindOne fails after NotMaster when retryReads is false"
    clientOptions: { retryReads: false }
    failPoint: { <<: *failCommand_failPoint, data: { failCommands: [find], errorCode: 10107 } }
    operations: [*retryable_operation_fails]
    expectations: [*retryable_command_started_event]

# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/retryable_reads/legacy/findOne.yml

runOn:
  - { minServerVersion: "4.0", topology: ["single", "replicaset"] }
  - { minServerVersion: "4.1.7", topology: ["sharded"] }
database_name: &database_name "retryable-reads-tests"
collection_name: &collection_name "coll"
data:
  - { _id: 1, x: 11 }
  - { _id: 2, x: 22 }
  - { _id: 3, x: 33 }
  - { _id: 4, x: 44 }
  - { _id: 5, x: 55 }
tests:
  - description: "FindOne succeeds on first attempt"
    operations:
      - &retryable_operation_succeeds
        <<: &retryable_operation
          name: findOne
          object: collection
          arguments: { filter: { _id: 1 } }
        result: { _id: 1, x: 11 }
    expectations:
      - &retryable_command_started_event
        command_started_event:
          command:
            find: *collection_name
            filter: { _id: 1 }
          database_name: *database_name
  - description: "FindOne succeeds on second attempt"
    failPoint: &failCommand_failPoint
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands: [find]
        closeConnection: true
    operations: [*retryable_operation_succeeds]
    expectations: [*retryable_command_started_event, *retryable_command_started_event]
  - description: "FindOne fails on first attempt"
    clientOptions: { retryReads: false }
    failPoint: *failCommand_failPoint
    operations:
      - &retryable_operation_fails
        <<: *retryable_operation
        error: true
    expectations: [*retryable_command_started_event]
  - description: "FindOne fails on second attempt"
    failPoint: { <<: *failCommand_failPoint, mode: { times: 2 } }
    operations: [*retryable_operation_fails]
    expectations: [*retryable_command_started_event, *retryable_command_started_event]

# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/retryable_reads/legacy/gridfs-download-serverErrors.yml

runOn:
  - { minServerVersion: "4.0", topology: ["single", "replicaset"] }
  - { minServerVersion: "4.1.7", topology: ["sharded"] }
database_name: &database_name "retryable-reads-tests"
bucket_name: "fs"
data:
  fs.files:
    - _id: { $oid: "000000000000000000000001" }
      length: 1
      chunkSize: 4
      uploadDate: { $date: "1970-01-01T00:00:00.000Z" }
      filename: abc
      metadata: {}
  fs.chunks:
    - { _id: { $oid: "000000000000000000000002" }, files_id: { $oid: "000000000000000000000001" }, n: 0, data: { $binary: { base64: "EQ==", subType: "00" } } }
tests:
  - description: "Download succeeds after InterruptedAtShutdown"
    failPoint: &failCommand_failPoint
      configureFailPoint: failCommand
      mode: { times: 1 }
      data: { failCommands: [find], errorCode: 11600 }
    operations:
      - &retryable_operation
        name: download
        object: gridfsbucket
        arguments: { id: { "$oid": "000000000000000000000001" } }
"000000000000000000000001" } } expectations: - &retryable_command_started_event command_started_event: command: find: fs.files filter: { _id: {$oid : "000000000000000000000001" }} database_name: *database_name - *retryable_command_started_event - &find_chunks_command_started_event command_started_event: command: find: fs.chunks filter: { files_id: {$oid : "000000000000000000000001" }} sort: { n: 1 } database_name: *database_name - description: "Download succeeds after InterruptedDueToReplStateChange" failPoint: <<: *failCommand_failPoint data: { failCommands: [find], errorCode: 11602 } operations: [*retryable_operation] expectations: - *retryable_command_started_event - *retryable_command_started_event - *find_chunks_command_started_event - description: "Download succeeds after NotMaster" failPoint: <<: *failCommand_failPoint data: { failCommands: [find], errorCode: 10107 } operations: [*retryable_operation] expectations: - *retryable_command_started_event - *retryable_command_started_event - *find_chunks_command_started_event - description: "Download succeeds after NotPrimaryNoSecondaryOk" failPoint: <<: *failCommand_failPoint data: { failCommands: [find], errorCode: 13435 } operations: [*retryable_operation] expectations: - *retryable_command_started_event - *retryable_command_started_event - *find_chunks_command_started_event - description: "Download succeeds after NotMasterOrSecondary" failPoint: <<: *failCommand_failPoint data: { failCommands: [find], errorCode: 13436 } operations: [*retryable_operation] expectations: - *retryable_command_started_event - *retryable_command_started_event - *find_chunks_command_started_event - description: "Download succeeds after PrimarySteppedDown" failPoint: <<: *failCommand_failPoint data: { failCommands: [find], errorCode: 189 } operations: [*retryable_operation] expectations: - *retryable_command_started_event - *retryable_command_started_event - *find_chunks_command_started_event - description: "Download succeeds after ShutdownInProgress" failPoint: <<: *failCommand_failPoint data: { failCommands: [find], errorCode: 91 } operations: [*retryable_operation] expectations: - *retryable_command_started_event - *retryable_command_started_event - *find_chunks_command_started_event - description: "Download succeeds after HostNotFound" failPoint: <<: *failCommand_failPoint data: { failCommands: [find], errorCode: 7 } operations: [*retryable_operation] expectations: - *retryable_command_started_event - *retryable_command_started_event - *find_chunks_command_started_event - description: "Download succeeds after HostUnreachable" failPoint: <<: *failCommand_failPoint data: { failCommands: [find], errorCode: 6 } operations: [*retryable_operation] expectations: - *retryable_command_started_event - *retryable_command_started_event - *find_chunks_command_started_event - description: "Download succeeds after NetworkTimeout" failPoint: <<: *failCommand_failPoint data: { failCommands: [find], errorCode: 89 } operations: [*retryable_operation] expectations: - *retryable_command_started_event - *retryable_command_started_event - *find_chunks_command_started_event - description: "Download succeeds after SocketException" failPoint: <<: *failCommand_failPoint data: { failCommands: [find], errorCode: 9001 } operations: [*retryable_operation] expectations: - *retryable_command_started_event - *retryable_command_started_event - *find_chunks_command_started_event - description: "Download fails after two NotMaster errors" failPoint: <<: *failCommand_failPoint mode: { times: 2 
    failPoint: { <<: *failCommand_failPoint, mode: { times: 2 }, data: { failCommands: [find], errorCode: 10107 } }
    operations:
      - &retryable_operation_fails
        <<: *retryable_operation
        error: true
    expectations: [*retryable_command_started_event, *retryable_command_started_event]
  - description: "Download fails after NotMaster when retryReads is false"
    clientOptions: { retryReads: false }
    failPoint: { <<: *failCommand_failPoint, data: { failCommands: [find], errorCode: 10107 } }
    operations: [*retryable_operation_fails]
    expectations: [*retryable_command_started_event]

# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/retryable_reads/legacy/gridfs-download.yml

runOn:
  - { minServerVersion: "4.0", topology: ["single", "replicaset"] }
  - { minServerVersion: "4.1.7", topology: ["sharded"] }
database_name: &database_name "retryable-reads-tests"
bucket_name: "fs"
data:
  fs.files:
    - _id: { $oid: "000000000000000000000001" }
      length: 1
      chunkSize: 4
      uploadDate: { $date: "1970-01-01T00:00:00.000Z" }
      filename: abc
      metadata: {}
  fs.chunks:
    - { _id: { $oid: "000000000000000000000002" }, files_id: { $oid: "000000000000000000000001" }, n: 0, data: { $binary: { base64: "EQ==", subType: "00" } } }
tests:
  - description: "Download succeeds on first attempt"
    operations:
      - &retryable_operation
        name: download
        object: gridfsbucket
        arguments: { id: { "$oid": "000000000000000000000001" } }
    expectations:
      - &retryable_command_started_event
        command_started_event:
          command:
            find: fs.files
            filter: { _id: { $oid: "000000000000000000000001" } }
          database_name: *database_name
      - &find_chunks_command_started_event
        command_started_event:
          command:
            find: fs.chunks
            filter: { files_id: { $oid: "000000000000000000000001" } }
            sort: { n: 1 }
          database_name: *database_name
  - description: "Download succeeds on second attempt"
    failPoint: &failCommand_failPoint
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands: [find]
        closeConnection: true
    operations: [*retryable_operation]
    expectations: [*retryable_command_started_event, *retryable_command_started_event, *find_chunks_command_started_event]
  - description: "Download fails on first attempt"
    clientOptions: { retryReads: false }
    failPoint: *failCommand_failPoint
    operations:
      - &retryable_operation_fails
        <<: *retryable_operation
        error: true
    expectations: [*retryable_command_started_event]
  - description: "Download fails on second attempt"
    failPoint: { <<: *failCommand_failPoint, mode: { times: 2 } }
    operations: [*retryable_operation_fails]
    expectations: [*retryable_command_started_event, *retryable_command_started_event]

# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/retryable_reads/legacy/gridfs-downloadByName-serverErrors.yml

runOn:
  - { minServerVersion: "4.0", topology: ["single", "replicaset"] }
  - { minServerVersion: "4.1.7", topology: ["sharded"] }
database_name: &database_name "retryable-reads-tests"
bucket_name: "fs"
data:
  fs.files:
    - _id: { $oid: "000000000000000000000001" }
      length: 1
      chunkSize: 4
      uploadDate: { $date: "1970-01-01T00:00:00.000Z" }
      filename: abc
      metadata: {}
  fs.chunks:
    - { _id: { $oid: "000000000000000000000002" }, files_id: { $oid: "000000000000000000000001" }, n: 0, data: { $binary: { base64: "EQ==", subType: "00" } } }
tests:
  - description: "DownloadByName succeeds after InterruptedAtShutdown"
    failPoint: &failCommand_failPoint
      configureFailPoint: failCommand
      mode: { times: 1 }
      data: { failCommands: [find], errorCode: 11600 }
    operations:
      - &retryable_operation
        name: download_by_name
        object: gridfsbucket
        arguments:
          filename: abc
    expectations:
      - &retryable_command_started_event
        command_started_event:
          command:
            find: fs.files
            filter: { filename: "abc" }
          database_name: *database_name
      - *retryable_command_started_event
      - &find_chunks_command_started_event
        command_started_event:
          command:
            find: fs.chunks
            filter: { files_id: { $oid: "000000000000000000000001" } }
            sort: { n: 1 }
          database_name: *database_name
  - description: "DownloadByName succeeds after InterruptedDueToReplStateChange"
    failPoint: { <<: *failCommand_failPoint, data: { failCommands: [find], errorCode: 11602 } }
    operations: [*retryable_operation]
    expectations: [*retryable_command_started_event, *retryable_command_started_event, *find_chunks_command_started_event]
  - description: "DownloadByName succeeds after NotMaster"
    failPoint: { <<: *failCommand_failPoint, data: { failCommands: [find], errorCode: 10107 } }
    operations: [*retryable_operation]
    expectations: [*retryable_command_started_event, *retryable_command_started_event, *find_chunks_command_started_event]
  - description: "DownloadByName succeeds after NotPrimaryNoSecondaryOk"
    failPoint: { <<: *failCommand_failPoint, data: { failCommands: [find], errorCode: 13435 } }
    operations: [*retryable_operation]
    expectations: [*retryable_command_started_event, *retryable_command_started_event, *find_chunks_command_started_event]
  - description: "DownloadByName succeeds after NotMasterOrSecondary"
    failPoint: { <<: *failCommand_failPoint, data: { failCommands: [find], errorCode: 13436 } }
    operations: [*retryable_operation]
    expectations: [*retryable_command_started_event, *retryable_command_started_event, *find_chunks_command_started_event]
  - description: "DownloadByName succeeds after PrimarySteppedDown"
    failPoint: { <<: *failCommand_failPoint, data: { failCommands: [find], errorCode: 189 } }
    operations: [*retryable_operation]
    expectations: [*retryable_command_started_event, *retryable_command_started_event, *find_chunks_command_started_event]
  - description: "DownloadByName succeeds after ShutdownInProgress"
    failPoint: { <<: *failCommand_failPoint, data: { failCommands: [find], errorCode: 91 } }
    operations: [*retryable_operation]
    expectations: [*retryable_command_started_event, *retryable_command_started_event, *find_chunks_command_started_event]
  - description: "DownloadByName succeeds after HostNotFound"
    failPoint: { <<: *failCommand_failPoint, data: { failCommands: [find], errorCode: 7 } }
    operations: [*retryable_operation]
    expectations: [*retryable_command_started_event, *retryable_command_started_event, *find_chunks_command_started_event]
  - description: "DownloadByName succeeds after HostUnreachable"
    failPoint: { <<: *failCommand_failPoint, data: { failCommands: [find], errorCode: 6 } }
    operations: [*retryable_operation]
    expectations: [*retryable_command_started_event, *retryable_command_started_event, *find_chunks_command_started_event]
  - description: "DownloadByName succeeds after NetworkTimeout"
    failPoint: { <<: *failCommand_failPoint, data: { failCommands: [find], errorCode: 89 } }
    operations: [*retryable_operation]
    expectations: [*retryable_command_started_event, *retryable_command_started_event, *find_chunks_command_started_event]
  - description: "DownloadByName succeeds after SocketException"
    failPoint: { <<: *failCommand_failPoint, data: { failCommands: [find], errorCode: 9001 } }
    operations: [*retryable_operation]
    expectations: [*retryable_command_started_event, *retryable_command_started_event, *find_chunks_command_started_event]
  - description: "DownloadByName fails after two NotMaster errors"
    failPoint: { <<: *failCommand_failPoint, mode: { times: 2 }, data: { failCommands: [find], errorCode: 10107 } }
    operations:
      - &retryable_operation_fails
        <<: *retryable_operation
        error: true
    expectations: [*retryable_command_started_event, *retryable_command_started_event]
  - description: "DownloadByName fails after NotMaster when retryReads is false"
    clientOptions: { retryReads: false }
    failPoint: { <<: *failCommand_failPoint, data: { failCommands: [find], errorCode: 10107 } }
    operations: [*retryable_operation_fails]
    expectations: [*retryable_command_started_event]

# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/retryable_reads/legacy/gridfs-downloadByName.yml

runOn:
  - { minServerVersion: "4.0", topology: ["single", "replicaset"] }
  - { minServerVersion: "4.1.7", topology: ["sharded"] }
database_name: &database_name "retryable-reads-tests"
bucket_name: "fs"
data:
  fs.files:
    - _id: { $oid: "000000000000000000000001" }
      length: 1
      chunkSize: 4
      uploadDate: { $date: "1970-01-01T00:00:00.000Z" }
      filename: abc
      metadata: {}
  fs.chunks:
    - { _id: { $oid: "000000000000000000000002" }, files_id: { $oid: "000000000000000000000001" }, n: 0, data: { $binary: { base64: "EQ==", subType: "00" } } }
tests:
  - description: "DownloadByName succeeds on first attempt"
    operations:
      - &retryable_operation
        name: download_by_name
        object: gridfsbucket
        arguments: { filename: "abc" }
    expectations:
      - &retryable_command_started_event
        command_started_event:
          command:
            find: fs.files
            filter: { filename: "abc" }
          database_name: *database_name
      - &find_chunks_command_started_event
        command_started_event:
          command:
            find: fs.chunks
            filter: { files_id: { $oid: "000000000000000000000001" } }
            sort: { n: 1 }
          database_name: *database_name
  - description: "DownloadByName succeeds on second attempt"
    failPoint: &failCommand_failPoint
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands: [find]
        closeConnection: true
    operations: [*retryable_operation]
    expectations: [*retryable_command_started_event, *retryable_command_started_event, *find_chunks_command_started_event]
  - description: "DownloadByName fails on first attempt"
    clientOptions: { retryReads: false }
    failPoint: *failCommand_failPoint
    operations:
      - &retryable_operation_fails
        <<: *retryable_operation
        error: true
    expectations: [*retryable_command_started_event]
  - description: "DownloadByName fails on second attempt"
    failPoint: { <<: *failCommand_failPoint, mode: { times: 2 } }
    operations: [*retryable_operation_fails]
    expectations: [*retryable_command_started_event, *retryable_command_started_event]

# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/retryable_reads/legacy/listCollectionNames-serverErrors.yml

runOn:
  - { minServerVersion: "4.0", topology: ["single", "replicaset"] }
  - { minServerVersion: "4.1.7", topology: ["sharded"] }
database_name: &database_name "retryable-reads-tests"
collection_name: &collection_name "coll"
data: []
tests:
  - description: "ListCollectionNames succeeds after InterruptedAtShutdown"
    failPoint: &failCommand_failPoint
      configureFailPoint: failCommand
      mode: { times: 1 }
      data: { failCommands: [listCollections], errorCode: 11600 }
    operations:
      - &retryable_operation
        name: listCollectionNames
        object: database
    expectations:
      - &retryable_command_started_event
        command_started_event:
          command:
            listCollections: 1
      - *retryable_command_started_event
  - description: "ListCollectionNames succeeds after InterruptedDueToReplStateChange"
    failPoint: { <<: *failCommand_failPoint, data: { failCommands: [listCollections], errorCode: 11602 } }
    operations: [*retryable_operation]
    expectations: [*retryable_command_started_event, *retryable_command_started_event]
  - description: "ListCollectionNames succeeds after NotMaster"
    failPoint: { <<: *failCommand_failPoint, data: { failCommands: [listCollections], errorCode: 10107 } }
    operations: [*retryable_operation]
    expectations: [*retryable_command_started_event, *retryable_command_started_event]
  - description: "ListCollectionNames succeeds after NotPrimaryNoSecondaryOk"
    failPoint: { <<: *failCommand_failPoint, data: { failCommands: [listCollections], errorCode: 13435 } }
    operations: [*retryable_operation]
    expectations: [*retryable_command_started_event, *retryable_command_started_event]
  - description: "ListCollectionNames succeeds after NotMasterOrSecondary"
    failPoint: { <<: *failCommand_failPoint, data: { failCommands: [listCollections], errorCode: 13436 } }
    operations: [*retryable_operation]
    expectations: [*retryable_command_started_event, *retryable_command_started_event]
  - description: "ListCollectionNames succeeds after PrimarySteppedDown"
    failPoint: { <<: *failCommand_failPoint, data: { failCommands: [listCollections], errorCode: 189 } }
    operations: [*retryable_operation]
    expectations: [*retryable_command_started_event, *retryable_command_started_event]
  - description: "ListCollectionNames succeeds after ShutdownInProgress"
    failPoint: { <<: *failCommand_failPoint, data: { failCommands: [listCollections], errorCode: 91 } }
    operations: [*retryable_operation]
    expectations: [*retryable_command_started_event, *retryable_command_started_event]
  - description: "ListCollectionNames succeeds after HostNotFound"
    failPoint: { <<: *failCommand_failPoint, data: { failCommands: [listCollections], errorCode: 7 } }
    operations: [*retryable_operation]
    expectations: [*retryable_command_started_event, *retryable_command_started_event]
  - description: "ListCollectionNames succeeds after HostUnreachable"
    failPoint: { <<: *failCommand_failPoint, data: { failCommands: [listCollections], errorCode: 6 } }
    operations: [*retryable_operation]
    expectations: [*retryable_command_started_event, *retryable_command_started_event]
  - description: "ListCollectionNames succeeds after NetworkTimeout"
    failPoint: { <<: *failCommand_failPoint, data: { failCommands: [listCollections], errorCode: 89 } }
    operations: [*retryable_operation]
    expectations: [*retryable_command_started_event, *retryable_command_started_event]
  - description: "ListCollectionNames succeeds after SocketException"
    failPoint: { <<: *failCommand_failPoint, data: { failCommands: [listCollections], errorCode: 9001 } }
    operations: [*retryable_operation]
    expectations: [*retryable_command_started_event, *retryable_command_started_event]
  - description: "ListCollectionNames fails after two NotMaster errors"
    failPoint: { <<: *failCommand_failPoint, mode: { times: 2 }, data: { failCommands: [listCollections], errorCode: 10107 } }
    operations:
      - &retryable_operation_fails
        <<: *retryable_operation
        error: true
    expectations: [*retryable_command_started_event, *retryable_command_started_event]
  - description: "ListCollectionNames fails after NotMaster when retryReads is false"
    clientOptions: { retryReads: false }
    failPoint: { <<: *failCommand_failPoint, data: { failCommands: [listCollections], errorCode: 10107 } }
    operations: [*retryable_operation_fails]
    expectations: [*retryable_command_started_event]
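# Editor's note (not part of the spec data): every test in this directory relies
# on the server's failCommand fail point, enabled via the failPoint document
# shown above. For ad-hoc experimentation outside the test harness, the same
# fail point can be set by hand; a minimal sketch, assuming a `client` with
# admin access (the chosen command and error code are illustrative):
#
#   client.use(:admin).database.command(
#     configureFailPoint: 'failCommand',
#     mode: { times: 1 },
#     data: { failCommands: ['listCollections'], errorCode: 10107 }
#   )
#
# Disable it again afterwards by re-issuing the command with mode: 'off'.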
# ---- mongo-ruby-driver-2.21.3/spec/spec_tests/data/retryable_reads/legacy/listCollectionNames.yml

runOn:
  - minServerVersion: "4.0"
    topology: ["single", "replicaset"]
  - minServerVersion: "4.1.7"
    topology: ["sharded"]
database_name: &database_name "retryable-reads-tests"
collection_name: &collection_name "coll"
data: []
tests:
  - description: "ListCollectionNames succeeds on first attempt"
    operations:
      - &retryable_operation
        name: listCollectionNames
        object: database
    expectations:
      - &retryable_command_started_event
        command_started_event:
          command:
            listCollections: 1
  - description: "ListCollectionNames succeeds on second attempt"
    failPoint: &failCommand_failPoint
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands:
          - listCollections
        closeConnection: true
    operations: [*retryable_operation]
    expectations:
      - *retryable_command_started_event
      - *retryable_command_started_event
  - description: "ListCollectionNames fails on first attempt"
    clientOptions:
      retryReads: false
    failPoint: *failCommand_failPoint
    operations:
      - &retryable_operation_fails
        <<: *retryable_operation
        error: true
    expectations:
      - *retryable_command_started_event
  - description: "ListCollectionNames fails on second attempt"
    failPoint:
      <<: *failCommand_failPoint
      mode: { times: 2 }
    operations: [*retryable_operation_fails]
    expectations:
      - *retryable_command_started_event
      - *retryable_command_started_event

# ---- mongo-ruby-driver-2.21.3/spec/spec_tests/data/retryable_reads/legacy/listCollectionObjects-serverErrors.yml

runOn:
  - minServerVersion: "4.0"
    topology: ["single", "replicaset"]
  - minServerVersion: "4.1.7"
    topology: ["sharded"]
database_name: &database_name "retryable-reads-tests"
collection_name: &collection_name "coll"
data: []
tests:
  - description: "ListCollectionObjects succeeds after InterruptedAtShutdown"
    failPoint: &failCommand_failPoint
      configureFailPoint: failCommand
      mode: { times: 1 }
      data: { failCommands: [listCollections], errorCode: 11600 }
    operations:
      - &retryable_operation
        name: listCollectionObjects
        object: database
    expectations:
      - &retryable_command_started_event
        command_started_event:
          command:
            listCollections: 1
      - *retryable_command_started_event
  - description: "ListCollectionObjects succeeds after InterruptedDueToReplStateChange"
    failPoint:
      <<: *failCommand_failPoint
      data: { failCommands: [listCollections], errorCode: 11602 }
    operations: [*retryable_operation]
    expectations:
      - *retryable_command_started_event
      - *retryable_command_started_event
  - description: "ListCollectionObjects succeeds after NotMaster"
    failPoint:
      <<: *failCommand_failPoint
      data: { failCommands: [listCollections], errorCode: 10107 }
    operations: [*retryable_operation]
    expectations:
      - *retryable_command_started_event
      - *retryable_command_started_event
  - description: "ListCollectionObjects succeeds after NotPrimaryNoSecondaryOk"
    failPoint:
      <<: *failCommand_failPoint
      data: { failCommands: [listCollections], errorCode: 13435 }
    operations: [*retryable_operation]
    expectations:
      - *retryable_command_started_event
      - *retryable_command_started_event
  - description: "ListCollectionObjects succeeds after NotMasterOrSecondary"
    failPoint:
      <<: *failCommand_failPoint
      data: { failCommands: [listCollections], errorCode: 13436 }
    operations: [*retryable_operation]
    expectations:
      - *retryable_command_started_event
      - *retryable_command_started_event
  - description: "ListCollectionObjects succeeds after
PrimarySteppedDown" failPoint: <<: *failCommand_failPoint data: { failCommands: [listCollections], errorCode: 189 } operations: [*retryable_operation] expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "ListCollectionObjects succeeds after ShutdownInProgress" failPoint: <<: *failCommand_failPoint data: { failCommands: [listCollections], errorCode: 91 } operations: [*retryable_operation] expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "ListCollectionObjects succeeds after HostNotFound" failPoint: <<: *failCommand_failPoint data: { failCommands: [listCollections], errorCode: 7 } operations: [*retryable_operation] expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "ListCollectionObjects succeeds after HostUnreachable" failPoint: <<: *failCommand_failPoint data: { failCommands: [listCollections], errorCode: 6 } operations: [*retryable_operation] expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "ListCollectionObjects succeeds after NetworkTimeout" failPoint: <<: *failCommand_failPoint data: { failCommands: [listCollections], errorCode: 89 } operations: [*retryable_operation] expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "ListCollectionObjects succeeds after SocketException" failPoint: <<: *failCommand_failPoint data: { failCommands: [listCollections], errorCode: 9001 } operations: [*retryable_operation] expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "ListCollectionObjects fails after two NotMaster errors" failPoint: <<: *failCommand_failPoint mode: { times: 2 } data: { failCommands: [listCollections], errorCode: 10107 } operations: - &retryable_operation_fails <<: *retryable_operation error: true expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "ListCollectionObjects fails after NotMaster when retryReads is false" clientOptions: retryReads: false failPoint: <<: *failCommand_failPoint data: { failCommands: [listCollections], errorCode: 10107 } operations: [*retryable_operation_fails] expectations: - *retryable_command_started_event mongo-ruby-driver-2.21.3/spec/spec_tests/data/retryable_reads/legacy/listCollectionObjects.yml000066400000000000000000000034671505113246500326600ustar00rootroot00000000000000runOn: - minServerVersion: "4.0" topology: ["single", "replicaset"] - minServerVersion: "4.1.7" topology: ["sharded"] database_name: &database_name "retryable-reads-tests" collection_name: &collection_name "coll" data: [] tests: - description: "ListCollectionObjects succeeds on first attempt" operations: - &retryable_operation name: listCollectionObjects object: database expectations: - &retryable_command_started_event command_started_event: command: listCollections: 1 - description: "ListCollectionObjects succeeds on second attempt" failPoint: &failCommand_failPoint configureFailPoint: failCommand mode: { times: 1 } data: failCommands: - listCollections closeConnection: true operations: [*retryable_operation] expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "ListCollectionObjects fails on first attempt" clientOptions: retryReads: false failPoint: *failCommand_failPoint operations: - &retryable_operation_fails <<: *retryable_operation error: true expectations: - *retryable_command_started_event - 
description: "ListCollectionObjects fails on second attempt" failPoint: <<: *failCommand_failPoint mode: { times: 2 } operations: [*retryable_operation_fails] expectations: - *retryable_command_started_event - *retryable_command_started_event listCollections-serverErrors.yml000066400000000000000000000125201505113246500341410ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/retryable_reads/legacyrunOn: - minServerVersion: "4.0" topology: ["single", "replicaset"] - minServerVersion: "4.1.7" topology: ["sharded"] database_name: &database_name "retryable-reads-tests" collection_name: &collection_name "coll" data: [] tests: - description: "ListCollections succeeds after InterruptedAtShutdown" failPoint: &failCommand_failPoint configureFailPoint: failCommand mode: { times: 1 } data: { failCommands: [listCollections], errorCode: 11600 } operations: - &retryable_operation name: listCollections object: database expectations: - &retryable_command_started_event command_started_event: command: listCollections: 1 - *retryable_command_started_event - description: "ListCollections succeeds after InterruptedDueToReplStateChange" failPoint: <<: *failCommand_failPoint data: { failCommands: [listCollections], errorCode: 11602 } operations: [*retryable_operation] expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "ListCollections succeeds after NotMaster" failPoint: <<: *failCommand_failPoint data: { failCommands: [listCollections], errorCode: 10107 } operations: [*retryable_operation] expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "ListCollections succeeds after NotPrimaryNoSecondaryOk" failPoint: <<: *failCommand_failPoint data: { failCommands: [listCollections], errorCode: 13435 } operations: [*retryable_operation] expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "ListCollections succeeds after NotMasterOrSecondary" failPoint: <<: *failCommand_failPoint data: { failCommands: [listCollections], errorCode: 13436 } operations: [*retryable_operation] expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "ListCollections succeeds after PrimarySteppedDown" failPoint: <<: *failCommand_failPoint data: { failCommands: [listCollections], errorCode: 189 } operations: [*retryable_operation] expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "ListCollections succeeds after ShutdownInProgress" failPoint: <<: *failCommand_failPoint data: { failCommands: [listCollections], errorCode: 91 } operations: [*retryable_operation] expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "ListCollections succeeds after HostNotFound" failPoint: <<: *failCommand_failPoint data: { failCommands: [listCollections], errorCode: 7 } operations: [*retryable_operation] expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "ListCollections succeeds after HostUnreachable" failPoint: <<: *failCommand_failPoint data: { failCommands: [listCollections], errorCode: 6 } operations: [*retryable_operation] expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "ListCollections succeeds after NetworkTimeout" failPoint: <<: *failCommand_failPoint data: { failCommands: [listCollections], errorCode: 89 } operations: [*retryable_operation] expectations: - 
        *retryable_command_started_event
      - *retryable_command_started_event
  - description: "ListCollections succeeds after SocketException"
    failPoint:
      <<: *failCommand_failPoint
      data: { failCommands: [listCollections], errorCode: 9001 }
    operations: [*retryable_operation]
    expectations:
      - *retryable_command_started_event
      - *retryable_command_started_event
  - description: "ListCollections fails after two NotMaster errors"
    failPoint:
      <<: *failCommand_failPoint
      mode: { times: 2 }
      data: { failCommands: [listCollections], errorCode: 10107 }
    operations:
      - &retryable_operation_fails
        <<: *retryable_operation
        error: true
    expectations:
      - *retryable_command_started_event
      - *retryable_command_started_event
  - description: "ListCollections fails after NotMaster when retryReads is false"
    clientOptions:
      retryReads: false
    failPoint:
      <<: *failCommand_failPoint
      data: { failCommands: [listCollections], errorCode: 10107 }
    operations: [*retryable_operation_fails]
    expectations:
      - *retryable_command_started_event

# ---- mongo-ruby-driver-2.21.3/spec/spec_tests/data/retryable_reads/legacy/listCollections.yml

runOn:
  - minServerVersion: "4.0"
    topology: ["single", "replicaset"]
  - minServerVersion: "4.1.7"
    topology: ["sharded"]
database_name: &database_name "retryable-reads-tests"
collection_name: &collection_name "coll"
data: []
tests:
  - description: "ListCollections succeeds on first attempt"
    operations:
      - &retryable_operation
        name: listCollections
        object: database
    expectations:
      - &retryable_command_started_event
        command_started_event:
          command:
            listCollections: 1
  - description: "ListCollections succeeds on second attempt"
    failPoint: &failCommand_failPoint
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands:
          - listCollections
        closeConnection: true
    operations: [*retryable_operation]
    expectations:
      - *retryable_command_started_event
      - *retryable_command_started_event
  - description: "ListCollections fails on first attempt"
    clientOptions:
      retryReads: false
    failPoint: *failCommand_failPoint
    operations:
      - &retryable_operation_fails
        <<: *retryable_operation
        error: true
    expectations:
      - *retryable_command_started_event
  - description: "ListCollections fails on second attempt"
    failPoint:
      <<: *failCommand_failPoint
      mode: { times: 2 }
    operations: [*retryable_operation_fails]
    expectations:
      - *retryable_command_started_event
      - *retryable_command_started_event

# ---- mongo-ruby-driver-2.21.3/spec/spec_tests/data/retryable_reads/legacy/listDatabaseNames-serverErrors.yml

runOn:
  - minServerVersion: "4.0"
    topology: ["single", "replicaset"]
  - minServerVersion: "4.1.7"
    topology: ["sharded"]
database_name: &database_name "retryable-reads-tests"
collection_name: &collection_name "coll"
data: []
tests:
  - description: "ListDatabaseNames succeeds after InterruptedAtShutdown"
    failPoint: &failCommand_failPoint
      configureFailPoint: failCommand
      mode: { times: 1 }
      data: { failCommands: [listDatabases], errorCode: 11600 }
    operations:
      - &retryable_operation
        name: listDatabaseNames
        object: client
    expectations:
      - &retryable_command_started_event
        command_started_event:
          command:
            listDatabases: 1
      - *retryable_command_started_event
  - description: "ListDatabaseNames succeeds after InterruptedDueToReplStateChange"
    failPoint:
      <<: *failCommand_failPoint
      data: { failCommands: [listDatabases], errorCode: 11602 }
    operations: [*retryable_operation]
    expectations:
      - *retryable_command_started_event
      -
        *retryable_command_started_event
  - description: "ListDatabaseNames succeeds after NotMaster"
    failPoint:
      <<: *failCommand_failPoint
      data: { failCommands: [listDatabases], errorCode: 10107 }
    operations: [*retryable_operation]
    expectations:
      - *retryable_command_started_event
      - *retryable_command_started_event
  - description: "ListDatabaseNames succeeds after NotPrimaryNoSecondaryOk"
    failPoint:
      <<: *failCommand_failPoint
      data: { failCommands: [listDatabases], errorCode: 13435 }
    operations: [*retryable_operation]
    expectations:
      - *retryable_command_started_event
      - *retryable_command_started_event
  - description: "ListDatabaseNames succeeds after NotMasterOrSecondary"
    failPoint:
      <<: *failCommand_failPoint
      data: { failCommands: [listDatabases], errorCode: 13436 }
    operations: [*retryable_operation]
    expectations:
      - *retryable_command_started_event
      - *retryable_command_started_event
  - description: "ListDatabaseNames succeeds after PrimarySteppedDown"
    failPoint:
      <<: *failCommand_failPoint
      data: { failCommands: [listDatabases], errorCode: 189 }
    operations: [*retryable_operation]
    expectations:
      - *retryable_command_started_event
      - *retryable_command_started_event
  - description: "ListDatabaseNames succeeds after ShutdownInProgress"
    failPoint:
      <<: *failCommand_failPoint
      data: { failCommands: [listDatabases], errorCode: 91 }
    operations: [*retryable_operation]
    expectations:
      - *retryable_command_started_event
      - *retryable_command_started_event
  - description: "ListDatabaseNames succeeds after HostNotFound"
    failPoint:
      <<: *failCommand_failPoint
      data: { failCommands: [listDatabases], errorCode: 7 }
    operations: [*retryable_operation]
    expectations:
      - *retryable_command_started_event
      - *retryable_command_started_event
  - description: "ListDatabaseNames succeeds after HostUnreachable"
    failPoint:
      <<: *failCommand_failPoint
      data: { failCommands: [listDatabases], errorCode: 6 }
    operations: [*retryable_operation]
    expectations:
      - *retryable_command_started_event
      - *retryable_command_started_event
  - description: "ListDatabaseNames succeeds after NetworkTimeout"
    failPoint:
      <<: *failCommand_failPoint
      data: { failCommands: [listDatabases], errorCode: 89 }
    operations: [*retryable_operation]
    expectations:
      - *retryable_command_started_event
      - *retryable_command_started_event
  - description: "ListDatabaseNames succeeds after SocketException"
    failPoint:
      <<: *failCommand_failPoint
      data: { failCommands: [listDatabases], errorCode: 9001 }
    operations: [*retryable_operation]
    expectations:
      - *retryable_command_started_event
      - *retryable_command_started_event
  - description: "ListDatabaseNames fails after two NotMaster errors"
    failPoint:
      <<: *failCommand_failPoint
      mode: { times: 2 }
      data: { failCommands: [listDatabases], errorCode: 10107 }
    operations:
      - &retryable_operation_fails
        <<: *retryable_operation
        error: true
    expectations:
      - *retryable_command_started_event
      - *retryable_command_started_event
  - description: "ListDatabaseNames fails after NotMaster when retryReads is false"
    clientOptions:
      retryReads: false
    failPoint:
      <<: *failCommand_failPoint
      data: { failCommands: [listDatabases], errorCode: 10107 }
    operations: [*retryable_operation_fails]
    expectations:
      - *retryable_command_started_event

# ---- mongo-ruby-driver-2.21.3/spec/spec_tests/data/retryable_reads/legacy/listDatabaseNames.yml

runOn:
  - minServerVersion: "4.0"
    topology: ["single", "replicaset"]
  - minServerVersion: "4.1.7"
    topology: ["sharded"]
database_name: &database_name "retryable-reads-tests"
collection_name: &collection_name "coll" data: [] tests: - description: "ListDatabaseNames succeeds on first attempt" operations: - &retryable_operation name: listDatabaseNames object: client expectations: - &retryable_command_started_event command_started_event: command: listDatabases: 1 - description: "ListDatabaseNames succeeds on second attempt" failPoint: &failCommand_failPoint configureFailPoint: failCommand mode: { times: 1 } data: failCommands: - listDatabases closeConnection: true operations: [*retryable_operation] expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "ListDatabaseNames fails on first attempt" clientOptions: retryReads: false failPoint: *failCommand_failPoint operations: - &retryable_operation_fails <<: *retryable_operation error: true expectations: - *retryable_command_started_event - description: "ListDatabaseNames fails on second attempt" failPoint: <<: *failCommand_failPoint mode: { times: 2 } operations: [*retryable_operation_fails] expectations: - *retryable_command_started_event - *retryable_command_started_event listDatabaseObjects-serverErrors.yml000066400000000000000000000125521505113246500347060ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/retryable_reads/legacyrunOn: - minServerVersion: "4.0" topology: ["single", "replicaset"] - minServerVersion: "4.1.7" topology: ["sharded"] database_name: &database_name "retryable-reads-tests" collection_name: &collection_name "coll" data: [] tests: - description: "ListDatabaseObjects succeeds after InterruptedAtShutdown" failPoint: &failCommand_failPoint configureFailPoint: failCommand mode: { times: 1 } data: { failCommands: [listDatabases], errorCode: 11600 } operations: - &retryable_operation name: listDatabaseObjects object: client expectations: - &retryable_command_started_event command_started_event: command: listDatabases: 1 - *retryable_command_started_event - description: "ListDatabaseObjects succeeds after InterruptedDueToReplStateChange" failPoint: <<: *failCommand_failPoint data: { failCommands: [listDatabases], errorCode: 11602 } operations: [*retryable_operation] expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "ListDatabaseObjects succeeds after NotMaster" failPoint: <<: *failCommand_failPoint data: { failCommands: [listDatabases], errorCode: 10107 } operations: [*retryable_operation] expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "ListDatabaseObjects succeeds after NotPrimaryNoSecondaryOk" failPoint: <<: *failCommand_failPoint data: { failCommands: [listDatabases], errorCode: 13435 } operations: [*retryable_operation] expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "ListDatabaseObjects succeeds after NotMasterOrSecondary" failPoint: <<: *failCommand_failPoint data: { failCommands: [listDatabases], errorCode: 13436 } operations: [*retryable_operation] expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "ListDatabaseObjects succeeds after PrimarySteppedDown" failPoint: <<: *failCommand_failPoint data: { failCommands: [listDatabases], errorCode: 189 } operations: [*retryable_operation] expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "ListDatabaseObjects succeeds after ShutdownInProgress" failPoint: <<: *failCommand_failPoint data: { failCommands: [listDatabases], errorCode: 91 } 
    operations: [*retryable_operation]
    expectations:
      - *retryable_command_started_event
      - *retryable_command_started_event
  - description: "ListDatabaseObjects succeeds after HostNotFound"
    failPoint:
      <<: *failCommand_failPoint
      data: { failCommands: [listDatabases], errorCode: 7 }
    operations: [*retryable_operation]
    expectations:
      - *retryable_command_started_event
      - *retryable_command_started_event
  - description: "ListDatabaseObjects succeeds after HostUnreachable"
    failPoint:
      <<: *failCommand_failPoint
      data: { failCommands: [listDatabases], errorCode: 6 }
    operations: [*retryable_operation]
    expectations:
      - *retryable_command_started_event
      - *retryable_command_started_event
  - description: "ListDatabaseObjects succeeds after NetworkTimeout"
    failPoint:
      <<: *failCommand_failPoint
      data: { failCommands: [listDatabases], errorCode: 89 }
    operations: [*retryable_operation]
    expectations:
      - *retryable_command_started_event
      - *retryable_command_started_event
  - description: "ListDatabaseObjects succeeds after SocketException"
    failPoint:
      <<: *failCommand_failPoint
      data: { failCommands: [listDatabases], errorCode: 9001 }
    operations: [*retryable_operation]
    expectations:
      - *retryable_command_started_event
      - *retryable_command_started_event
  - description: "ListDatabaseObjects fails after two NotMaster errors"
    failPoint:
      <<: *failCommand_failPoint
      mode: { times: 2 }
      data: { failCommands: [listDatabases], errorCode: 10107 }
    operations:
      - &retryable_operation_fails
        <<: *retryable_operation
        error: true
    expectations:
      - *retryable_command_started_event
      - *retryable_command_started_event
  - description: "ListDatabaseObjects fails after NotMaster when retryReads is false"
    clientOptions:
      retryReads: false
    failPoint:
      <<: *failCommand_failPoint
      data: { failCommands: [listDatabases], errorCode: 10107 }
    operations: [*retryable_operation_fails]
    expectations:
      - *retryable_command_started_event

# ---- mongo-ruby-driver-2.21.3/spec/spec_tests/data/retryable_reads/legacy/listDatabaseObjects.yml

runOn:
  - minServerVersion: "4.0"
    topology: ["single", "replicaset"]
  - minServerVersion: "4.1.7"
    topology: ["sharded"]
database_name: &database_name "retryable-reads-tests"
collection_name: &collection_name "coll"
data: []
tests:
  - description: "ListDatabaseObjects succeeds on first attempt"
    operations:
      - &retryable_operation
        name: listDatabaseObjects
        object: client
    expectations:
      - &retryable_command_started_event
        command_started_event:
          command:
            listDatabases: 1
  - description: "ListDatabaseObjects succeeds on second attempt"
    failPoint: &failCommand_failPoint
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands:
          - listDatabases
        closeConnection: true
    operations: [*retryable_operation]
    expectations:
      - *retryable_command_started_event
      - *retryable_command_started_event
  - description: "ListDatabaseObjects fails on first attempt"
    clientOptions:
      retryReads: false
    failPoint: *failCommand_failPoint
    operations:
      - &retryable_operation_fails
        <<: *retryable_operation
        error: true
    expectations:
      - *retryable_command_started_event
  - description: "ListDatabaseObjects fails on second attempt"
    failPoint:
      <<: *failCommand_failPoint
      mode: { times: 2 }
    operations: [*retryable_operation_fails]
    expectations:
      - *retryable_command_started_event
      - *retryable_command_started_event

# ---- mongo-ruby-driver-2.21.3/spec/spec_tests/data/retryable_reads/legacy/listDatabases-serverErrors.yml

runOn:
  - minServerVersion:
"4.0" topology: ["single", "replicaset"] - minServerVersion: "4.1.7" topology: ["sharded"] database_name: &database_name "retryable-reads-tests" collection_name: &collection_name "coll" data: [] tests: - description: "ListDatabases succeeds after InterruptedAtShutdown" failPoint: &failCommand_failPoint configureFailPoint: failCommand mode: { times: 1 } data: { failCommands: [listDatabases], errorCode: 11600 } operations: - &retryable_operation name: listDatabases object: client expectations: - &retryable_command_started_event command_started_event: command: listDatabases: 1 - *retryable_command_started_event - description: "ListDatabases succeeds after InterruptedDueToReplStateChange" failPoint: <<: *failCommand_failPoint data: { failCommands: [listDatabases], errorCode: 11602 } operations: [*retryable_operation] expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "ListDatabases succeeds after NotMaster" failPoint: <<: *failCommand_failPoint data: { failCommands: [listDatabases], errorCode: 10107 } operations: [*retryable_operation] expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "ListDatabases succeeds after NotPrimaryNoSecondaryOk" failPoint: <<: *failCommand_failPoint data: { failCommands: [listDatabases], errorCode: 13435 } operations: [*retryable_operation] expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "ListDatabases succeeds after NotMasterOrSecondary" failPoint: <<: *failCommand_failPoint data: { failCommands: [listDatabases], errorCode: 13436 } operations: [*retryable_operation] expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "ListDatabases succeeds after PrimarySteppedDown" failPoint: <<: *failCommand_failPoint data: { failCommands: [listDatabases], errorCode: 189 } operations: [*retryable_operation] expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "ListDatabases succeeds after ShutdownInProgress" failPoint: <<: *failCommand_failPoint data: { failCommands: [listDatabases], errorCode: 91 } operations: [*retryable_operation] expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "ListDatabases succeeds after HostNotFound" failPoint: <<: *failCommand_failPoint data: { failCommands: [listDatabases], errorCode: 7 } operations: [*retryable_operation] expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "ListDatabases succeeds after HostUnreachable" failPoint: <<: *failCommand_failPoint data: { failCommands: [listDatabases], errorCode: 6 } operations: [*retryable_operation] expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "ListDatabases succeeds after NetworkTimeout" failPoint: <<: *failCommand_failPoint data: { failCommands: [listDatabases], errorCode: 89 } operations: [*retryable_operation] expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "ListDatabases succeeds after SocketException" failPoint: <<: *failCommand_failPoint data: { failCommands: [listDatabases], errorCode: 9001 } operations: [*retryable_operation] expectations: - *retryable_command_started_event - *retryable_command_started_event - description: "ListDatabases fails after two NotMaster errors" failPoint: <<: *failCommand_failPoint mode: { times: 2 } data: { failCommands: [listDatabases], 
        errorCode: 10107 }
    operations:
      - &retryable_operation_fails
        <<: *retryable_operation
        error: true
    expectations:
      - *retryable_command_started_event
      - *retryable_command_started_event
  - description: "ListDatabases fails after NotMaster when retryReads is false"
    clientOptions:
      retryReads: false
    failPoint:
      <<: *failCommand_failPoint
      data: { failCommands: [listDatabases], errorCode: 10107 }
    operations: [*retryable_operation_fails]
    expectations:
      - *retryable_command_started_event

# ---- mongo-ruby-driver-2.21.3/spec/spec_tests/data/retryable_reads/legacy/listDatabases.yml

runOn:
  - minServerVersion: "4.0"
    topology: ["single", "replicaset"]
  - minServerVersion: "4.1.7"
    topology: ["sharded"]
database_name: &database_name "retryable-reads-tests"
collection_name: &collection_name "coll"
data: []
tests:
  - description: "ListDatabases succeeds on first attempt"
    operations:
      - &retryable_operation
        name: listDatabases
        object: client
    expectations:
      - &retryable_command_started_event
        command_started_event:
          command:
            listDatabases: 1
  - description: "ListDatabases succeeds on second attempt"
    failPoint: &failCommand_failPoint
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands:
          - listDatabases
        closeConnection: true
    operations: [*retryable_operation]
    expectations:
      - *retryable_command_started_event
      - *retryable_command_started_event
  - description: "ListDatabases fails on first attempt"
    clientOptions:
      retryReads: false
    failPoint: *failCommand_failPoint
    operations:
      - &retryable_operation_fails
        <<: *retryable_operation
        error: true
    expectations:
      - *retryable_command_started_event
  - description: "ListDatabases fails on second attempt"
    failPoint:
      <<: *failCommand_failPoint
      mode: { times: 2 }
    operations: [*retryable_operation_fails]
    expectations:
      - *retryable_command_started_event
      - *retryable_command_started_event

# ---- mongo-ruby-driver-2.21.3/spec/spec_tests/data/retryable_reads/legacy/listIndexNames-serverErrors.yml

runOn:
  - minServerVersion: "4.0"
    topology: ["single", "replicaset"]
  - minServerVersion: "4.1.7"
    topology: ["sharded"]
database_name: &database_name "retryable-reads-tests"
collection_name: &collection_name "coll"
data: []
tests:
  - description: "ListIndexNames succeeds after InterruptedAtShutdown"
    failPoint: &failCommand_failPoint
      configureFailPoint: failCommand
      mode: { times: 1 }
      data: { failCommands: [listIndexes], errorCode: 11600 }
    operations:
      - &retryable_operation
        name: listIndexNames
        object: collection
    expectations:
      - &retryable_command_started_event
        command_started_event:
          command:
            listIndexes: *collection_name
          database_name: *database_name
      - *retryable_command_started_event
  - description: "ListIndexNames succeeds after InterruptedDueToReplStateChange"
    failPoint:
      <<: *failCommand_failPoint
      data: { failCommands: [listIndexes], errorCode: 11602 }
    operations: [*retryable_operation]
    expectations:
      - *retryable_command_started_event
      - *retryable_command_started_event
  - description: "ListIndexNames succeeds after NotMaster"
    failPoint:
      <<: *failCommand_failPoint
      data: { failCommands: [listIndexes], errorCode: 10107 }
    operations: [*retryable_operation]
    expectations:
      - *retryable_command_started_event
      - *retryable_command_started_event
  - description: "ListIndexNames succeeds after NotPrimaryNoSecondaryOk"
    failPoint:
      <<: *failCommand_failPoint
      data: { failCommands: [listIndexes], errorCode: 13435 }
    operations: [*retryable_operation]
    expectations:
      -
        *retryable_command_started_event
      - *retryable_command_started_event
  - description: "ListIndexNames succeeds after NotMasterOrSecondary"
    failPoint:
      <<: *failCommand_failPoint
      data: { failCommands: [listIndexes], errorCode: 13436 }
    operations: [*retryable_operation]
    expectations:
      - *retryable_command_started_event
      - *retryable_command_started_event
  - description: "ListIndexNames succeeds after PrimarySteppedDown"
    failPoint:
      <<: *failCommand_failPoint
      data: { failCommands: [listIndexes], errorCode: 189 }
    operations: [*retryable_operation]
    expectations:
      - *retryable_command_started_event
      - *retryable_command_started_event
  - description: "ListIndexNames succeeds after ShutdownInProgress"
    failPoint:
      <<: *failCommand_failPoint
      data: { failCommands: [listIndexes], errorCode: 91 }
    operations: [*retryable_operation]
    expectations:
      - *retryable_command_started_event
      - *retryable_command_started_event
  - description: "ListIndexNames succeeds after HostNotFound"
    failPoint:
      <<: *failCommand_failPoint
      data: { failCommands: [listIndexes], errorCode: 7 }
    operations: [*retryable_operation]
    expectations:
      - *retryable_command_started_event
      - *retryable_command_started_event
  - description: "ListIndexNames succeeds after HostUnreachable"
    failPoint:
      <<: *failCommand_failPoint
      data: { failCommands: [listIndexes], errorCode: 6 }
    operations: [*retryable_operation]
    expectations:
      - *retryable_command_started_event
      - *retryable_command_started_event
  - description: "ListIndexNames succeeds after NetworkTimeout"
    failPoint:
      <<: *failCommand_failPoint
      data: { failCommands: [listIndexes], errorCode: 89 }
    operations: [*retryable_operation]
    expectations:
      - *retryable_command_started_event
      - *retryable_command_started_event
  - description: "ListIndexNames succeeds after SocketException"
    failPoint:
      <<: *failCommand_failPoint
      data: { failCommands: [listIndexes], errorCode: 9001 }
    operations: [*retryable_operation]
    expectations:
      - *retryable_command_started_event
      - *retryable_command_started_event
  - description: "ListIndexNames fails after two NotMaster errors"
    failPoint:
      <<: *failCommand_failPoint
      mode: { times: 2 }
      data: { failCommands: [listIndexes], errorCode: 10107 }
    operations:
      - &retryable_operation_fails
        <<: *retryable_operation
        error: true
    expectations:
      - *retryable_command_started_event
      - *retryable_command_started_event
  - description: "ListIndexNames fails after NotMaster when retryReads is false"
    clientOptions:
      retryReads: false
    failPoint:
      <<: *failCommand_failPoint
      data: { failCommands: [listIndexes], errorCode: 10107 }
    operations: [*retryable_operation_fails]
    expectations:
      - *retryable_command_started_event

# ---- mongo-ruby-driver-2.21.3/spec/spec_tests/data/retryable_reads/legacy/listIndexNames.yml

runOn:
  - minServerVersion: "4.0"
    topology: ["single", "replicaset"]
  - minServerVersion: "4.1.7"
    topology: ["sharded"]
database_name: &database_name "retryable-reads-tests"
collection_name: &collection_name "coll"
data: []
tests:
  - description: "ListIndexNames succeeds on first attempt"
    operations:
      - &retryable_operation
        name: listIndexNames
        object: collection
    expectations:
      - &retryable_command_started_event
        command_started_event:
          command:
            listIndexes: *collection_name
          database_name: *database_name
  - description: "ListIndexNames succeeds on second attempt"
    failPoint: &failCommand_failPoint
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands:
          - listIndexes
        closeConnection: true
    operations: [*retryable_operation]
    expectations:
      -
        *retryable_command_started_event
      - *retryable_command_started_event
  - description: "ListIndexNames fails on first attempt"
    clientOptions:
      retryReads: false
    failPoint: *failCommand_failPoint
    operations:
      - &retryable_operation_fails
        <<: *retryable_operation
        error: true
    expectations:
      - *retryable_command_started_event
  - description: "ListIndexNames fails on second attempt"
    failPoint:
      <<: *failCommand_failPoint
      mode: { times: 2 }
    operations: [*retryable_operation_fails]
    expectations:
      - *retryable_command_started_event
      - *retryable_command_started_event

# ---- mongo-ruby-driver-2.21.3/spec/spec_tests/data/retryable_reads/legacy/listIndexes-serverErrors.yml

runOn:
  - minServerVersion: "4.0"
    topology: ["single", "replicaset"]
  - minServerVersion: "4.1.7"
    topology: ["sharded"]
database_name: &database_name "retryable-reads-tests"
collection_name: &collection_name "coll"
data: []
tests:
  - description: "ListIndexes succeeds after InterruptedAtShutdown"
    failPoint: &failCommand_failPoint
      configureFailPoint: failCommand
      mode: { times: 1 }
      data: { failCommands: [listIndexes], errorCode: 11600 }
    operations:
      - &retryable_operation
        name: listIndexes
        object: collection
    expectations:
      - &retryable_command_started_event
        command_started_event:
          command:
            listIndexes: *collection_name
          database_name: *database_name
      - *retryable_command_started_event
  - description: "ListIndexes succeeds after InterruptedDueToReplStateChange"
    failPoint:
      <<: *failCommand_failPoint
      data: { failCommands: [listIndexes], errorCode: 11602 }
    operations: [*retryable_operation]
    expectations:
      - *retryable_command_started_event
      - *retryable_command_started_event
  - description: "ListIndexes succeeds after NotMaster"
    failPoint:
      <<: *failCommand_failPoint
      data: { failCommands: [listIndexes], errorCode: 10107 }
    operations: [*retryable_operation]
    expectations:
      - *retryable_command_started_event
      - *retryable_command_started_event
  - description: "ListIndexes succeeds after NotPrimaryNoSecondaryOk"
    failPoint:
      <<: *failCommand_failPoint
      data: { failCommands: [listIndexes], errorCode: 13435 }
    operations: [*retryable_operation]
    expectations:
      - *retryable_command_started_event
      - *retryable_command_started_event
  - description: "ListIndexes succeeds after NotMasterOrSecondary"
    failPoint:
      <<: *failCommand_failPoint
      data: { failCommands: [listIndexes], errorCode: 13436 }
    operations: [*retryable_operation]
    expectations:
      - *retryable_command_started_event
      - *retryable_command_started_event
  - description: "ListIndexes succeeds after PrimarySteppedDown"
    failPoint:
      <<: *failCommand_failPoint
      data: { failCommands: [listIndexes], errorCode: 189 }
    operations: [*retryable_operation]
    expectations:
      - *retryable_command_started_event
      - *retryable_command_started_event
  - description: "ListIndexes succeeds after ShutdownInProgress"
    failPoint:
      <<: *failCommand_failPoint
      data: { failCommands: [listIndexes], errorCode: 91 }
    operations: [*retryable_operation]
    expectations:
      - *retryable_command_started_event
      - *retryable_command_started_event
  - description: "ListIndexes succeeds after HostNotFound"
    failPoint:
      <<: *failCommand_failPoint
      data: { failCommands: [listIndexes], errorCode: 7 }
    operations: [*retryable_operation]
    expectations:
      - *retryable_command_started_event
      - *retryable_command_started_event
  - description: "ListIndexes succeeds after HostUnreachable"
    failPoint:
      <<: *failCommand_failPoint
      data: { failCommands: [listIndexes], errorCode: 6 }
    operations: [*retryable_operation]
    expectations:
      -
        *retryable_command_started_event
      - *retryable_command_started_event
  - description: "ListIndexes succeeds after NetworkTimeout"
    failPoint:
      <<: *failCommand_failPoint
      data: { failCommands: [listIndexes], errorCode: 89 }
    operations: [*retryable_operation]
    expectations:
      - *retryable_command_started_event
      - *retryable_command_started_event
  - description: "ListIndexes succeeds after SocketException"
    failPoint:
      <<: *failCommand_failPoint
      data: { failCommands: [listIndexes], errorCode: 9001 }
    operations: [*retryable_operation]
    expectations:
      - *retryable_command_started_event
      - *retryable_command_started_event
  - description: "ListIndexes fails after two NotMaster errors"
    failPoint:
      <<: *failCommand_failPoint
      mode: { times: 2 }
      data: { failCommands: [listIndexes], errorCode: 10107 }
    operations:
      - &retryable_operation_fails
        <<: *retryable_operation
        error: true
    expectations:
      - *retryable_command_started_event
      - *retryable_command_started_event
  - description: "ListIndexes fails after NotMaster when retryReads is false"
    clientOptions:
      retryReads: false
    failPoint:
      <<: *failCommand_failPoint
      data: { failCommands: [listIndexes], errorCode: 10107 }
    operations: [*retryable_operation_fails]
    expectations:
      - *retryable_command_started_event

# ---- mongo-ruby-driver-2.21.3/spec/spec_tests/data/retryable_reads/legacy/listIndexes.yml

runOn:
  - minServerVersion: "4.0"
    topology: ["single", "replicaset"]
  - minServerVersion: "4.1.7"
    topology: ["sharded"]
database_name: &database_name "retryable-reads-tests"
collection_name: &collection_name "coll"
data: []
tests:
  - description: "ListIndexes succeeds on first attempt"
    operations:
      - &retryable_operation
        name: listIndexes
        object: collection
    expectations:
      - &retryable_command_started_event
        command_started_event:
          command:
            listIndexes: *collection_name
          database_name: *database_name
  - description: "ListIndexes succeeds on second attempt"
    failPoint: &failCommand_failPoint
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands:
          - listIndexes
        closeConnection: true
    operations: [*retryable_operation]
    expectations:
      - *retryable_command_started_event
      - *retryable_command_started_event
  - description: "ListIndexes fails on first attempt"
    clientOptions:
      retryReads: false
    failPoint: *failCommand_failPoint
    operations:
      - &retryable_operation_fails
        <<: *retryable_operation
        error: true
    expectations:
      - *retryable_command_started_event
  - description: "ListIndexes fails on second attempt"
    failPoint:
      <<: *failCommand_failPoint
      mode: { times: 2 }
    operations: [*retryable_operation_fails]
    expectations:
      - *retryable_command_started_event
      - *retryable_command_started_event

# ---- mongo-ruby-driver-2.21.3/spec/spec_tests/data/retryable_reads/legacy/mapReduce.yml

runOn:
  - minServerVersion: "4.0"
    topology: ["single", "replicaset"]
  - minServerVersion: "4.1.7"
    topology: ["sharded", "load-balanced"]
    # serverless proxy does not support mapReduce operation
    serverless: "forbid"
database_name: &database_name "retryable-reads-tests"
collection_name: &collection_name "coll"
data:
  - { _id: 1, x: 0 }
  - { _id: 2, x: 1 }
  - { _id: 3, x: 2 }
tests:
  - description: "MapReduce succeeds with retry on"
    operations:
      - &operation_succeeds
        <<: &operation
          name: mapReduce
          object: collection
          arguments:
            map: { $code: "function inc() { return emit(0, this.x + 1) }" }
            reduce: { $code: "function sum(key, values) { return values.reduce((acc, x) => acc + x); }" }
            out: { inline: 1 }
        result: [{ "_id": 0, "value": 6 }]
    expectations:
      - &command_started_event
        command_started_event:
          command:
            mapReduce: *collection_name
            map: { $code: "function inc() { return emit(0, this.x + 1) }" }
            reduce: { $code: "function sum(key, values) { return values.reduce((acc, x) => acc + x); }" }
            out: { inline: 1 }
          database_name: *database_name
  - description: "MapReduce fails with retry on"
    failPoint: &failCommand_failPoint
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands: [mapReduce]
        closeConnection: true
    operations:
      - &operation_fails
        <<: *operation
        error: true
    expectations:
      - *command_started_event
  - description: "MapReduce fails with retry off"
    clientOptions:
      retryReads: false
    failPoint: *failCommand_failPoint
    operations: [*operation_fails]
    expectations:
      - *command_started_event

# ---- mongo-ruby-driver-2.21.3/spec/spec_tests/data/retryable_reads/unified/handshakeError.yml

# Tests in this file are generated from handshakeError.yml.template.
description: "retryable reads handshake failures"

# 1.4 is required for "serverless: forbid".
schemaVersion: "1.4"

runOnRequirements:
  - minServerVersion: "4.2"
    topologies: [replicaset, sharded, load-balanced]
    auth: true

createEntities:
  - client:
      id: &client client
      useMultipleMongoses: false
      observeEvents:
        - connectionCheckOutStartedEvent
        - commandStartedEvent
        - commandSucceededEvent
        - commandFailedEvent
  - database:
      id: &database database
      client: *client
      databaseName: &databaseName retryable-reads-handshake-tests
  - collection:
      id: &collection collection
      database: *database
      collectionName: &collectionName coll

initialData:
  - collectionName: *collectionName
    databaseName: *databaseName
    documents:
      - { _id: 1, x: 11 }
      - { _id: 2, x: 22 }
      - { _id: 3, x: 33 }

tests:
  # Because setting a failPoint creates a connection in the connection pool,
  # run a ping operation that fails immediately after the failPoint operation
  # in order to discard the connection before running the actual operation to
  # be tested. The saslContinue command is used to avoid SDAM errors.
  #
  # Description of events:
  # - Failpoint operation.
  #   - Creates a connection in the connection pool that must be closed.
  # - Ping operation.
  #   - Triggers failpoint (first time).
  #   - Closes the connection made by the fail point operation.
  # - Test operation.
  #   - New connection is created.
  #   - Triggers failpoint (second time).
  #   - Tests whether operation successfully retries the handshake and succeeds.
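  # As an illustration of what these generated tests arm, the equivalent fail
  # point could be configured by hand from mongosh (a sketch, assuming a direct
  # connection to the test deployment; kept in comments so this file remains a
  # valid test fixture):
  #
  #   db.adminCommand({
  #     configureFailPoint: 'failCommand',
  #     mode: { times: 2 },
  #     data: { failCommands: ['ping', 'saslContinue'], closeConnection: true }
  #   })
  #
  # The first trigger fails the ping that discards the pooled connection; the
  # second fails the handshake of the connection created for the operation
  # under test, which the retryable-reads logic must then retry.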
- description: "client.listDatabases succeeds after retryable handshake network error" operations: - name: failPoint object: testRunner arguments: client: *client failPoint: configureFailPoint: failCommand mode: { times: 2 } data: failCommands: [ping, saslContinue] closeConnection: true - name: runCommand object: *database arguments: { commandName: ping, command: { ping: 1 } } expectError: { isError: true } - name: listDatabases object: *client arguments: filter: {} expectEvents: - client: *client eventType: cmap events: - { connectionCheckOutStartedEvent: {} } - { connectionCheckOutStartedEvent: {} } - { connectionCheckOutStartedEvent: {} } - { connectionCheckOutStartedEvent: {} } - client: *client events: - commandStartedEvent: command: { ping: 1 } databaseName: *databaseName - commandFailedEvent: commandName: ping - commandStartedEvent: commandName: listDatabases - commandSucceededEvent: commandName: listDatabases - description: "client.listDatabases succeeds after retryable handshake server error (ShutdownInProgress)" operations: - name: failPoint object: testRunner arguments: client: *client failPoint: configureFailPoint: failCommand mode: { times: 2 } data: failCommands: [ping, saslContinue] closeConnection: true - name: runCommand object: *database arguments: { commandName: ping, command: { ping: 1 } } expectError: { isError: true } - name: listDatabases object: *client arguments: filter: {} expectEvents: - client: *client eventType: cmap events: - { connectionCheckOutStartedEvent: {} } - { connectionCheckOutStartedEvent: {} } - { connectionCheckOutStartedEvent: {} } - { connectionCheckOutStartedEvent: {} } - client: *client events: - commandStartedEvent: command: { ping: 1 } databaseName: *databaseName - commandFailedEvent: commandName: ping - commandStartedEvent: commandName: listDatabases - commandSucceededEvent: commandName: listDatabases - description: "client.listDatabaseNames succeeds after retryable handshake network error" operations: - name: failPoint object: testRunner arguments: client: *client failPoint: configureFailPoint: failCommand mode: { times: 2 } data: failCommands: [ping, saslContinue] closeConnection: true - name: runCommand object: *database arguments: { commandName: ping, command: { ping: 1 } } expectError: { isError: true } - name: listDatabaseNames object: *client expectEvents: - client: *client eventType: cmap events: - { connectionCheckOutStartedEvent: {} } - { connectionCheckOutStartedEvent: {} } - { connectionCheckOutStartedEvent: {} } - { connectionCheckOutStartedEvent: {} } - client: *client events: - commandStartedEvent: command: { ping: 1 } databaseName: *databaseName - commandFailedEvent: commandName: ping - commandStartedEvent: commandName: listDatabases - commandSucceededEvent: commandName: listDatabases - description: "client.listDatabaseNames succeeds after retryable handshake server error (ShutdownInProgress)" operations: - name: failPoint object: testRunner arguments: client: *client failPoint: configureFailPoint: failCommand mode: { times: 2 } data: failCommands: [ping, saslContinue] closeConnection: true - name: runCommand object: *database arguments: { commandName: ping, command: { ping: 1 } } expectError: { isError: true } - name: listDatabaseNames object: *client expectEvents: - client: *client eventType: cmap events: - { connectionCheckOutStartedEvent: {} } - { connectionCheckOutStartedEvent: {} } - { connectionCheckOutStartedEvent: {} } - { connectionCheckOutStartedEvent: {} } - client: *client events: - commandStartedEvent: command: 
                { ping: 1 }
              databaseName: *databaseName
          - commandFailedEvent:
              commandName: ping
          - commandStartedEvent:
              commandName: listDatabases
          - commandSucceededEvent:
              commandName: listDatabases
  - description: "client.createChangeStream succeeds after retryable handshake network error"
    runOnRequirements:
      - serverless: forbid
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *client
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data:
              failCommands: [ping, saslContinue]
              closeConnection: true
      - name: runCommand
        object: *database
        arguments: { commandName: ping, command: { ping: 1 } }
        expectError: { isError: true }
      - name: createChangeStream
        object: *client
        arguments:
          pipeline: []
        saveResultAsEntity: changeStream
    expectEvents:
      - client: *client
        eventType: cmap
        events:
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
      - client: *client
        events:
          - commandStartedEvent:
              command: { ping: 1 }
              databaseName: *databaseName
          - commandFailedEvent:
              commandName: ping
          - commandStartedEvent:
              commandName: aggregate
          - commandSucceededEvent:
              commandName: aggregate
  - description: "client.createChangeStream succeeds after retryable handshake server error (ShutdownInProgress)"
    runOnRequirements:
      - serverless: forbid
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *client
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data:
              failCommands: [ping, saslContinue]
              closeConnection: true
      - name: runCommand
        object: *database
        arguments: { commandName: ping, command: { ping: 1 } }
        expectError: { isError: true }
      - name: createChangeStream
        object: *client
        arguments:
          pipeline: []
        saveResultAsEntity: changeStream
    expectEvents:
      - client: *client
        eventType: cmap
        events:
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
      - client: *client
        events:
          - commandStartedEvent:
              command: { ping: 1 }
              databaseName: *databaseName
          - commandFailedEvent:
              commandName: ping
          - commandStartedEvent:
              commandName: aggregate
          - commandSucceededEvent:
              commandName: aggregate
  - description: "database.aggregate succeeds after retryable handshake network error"
    runOnRequirements:
      - serverless: forbid
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *client
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data:
              failCommands: [ping, saslContinue]
              closeConnection: true
      - name: runCommand
        object: *database
        arguments: { commandName: ping, command: { ping: 1 } }
        expectError: { isError: true }
      - name: aggregate
        object: *database
        arguments:
          pipeline: [{ $listLocalSessions: {} }, { $limit: 1 }]
    expectEvents:
      - client: *client
        eventType: cmap
        events:
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
      - client: *client
        events:
          - commandStartedEvent:
              command: { ping: 1 }
              databaseName: *databaseName
          - commandFailedEvent:
              commandName: ping
          - commandStartedEvent:
              commandName: aggregate
          - commandSucceededEvent:
              commandName: aggregate
  - description: "database.aggregate succeeds after retryable handshake server error (ShutdownInProgress)"
    runOnRequirements:
      - serverless: forbid
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *client
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data:
              failCommands: [ping, saslContinue]
              closeConnection: true
      - name: runCommand
        object: *database
        arguments: { commandName: ping, command: { ping: 1 } }
        expectError: { isError: true }
      - name: aggregate
        object: *database
        arguments:
          pipeline: [{ $listLocalSessions: {} }, { $limit: 1 }]
    expectEvents:
      - client: *client
        eventType: cmap
        events:
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
      - client: *client
        events:
          - commandStartedEvent:
              command: { ping: 1 }
              databaseName: *databaseName
          - commandFailedEvent:
              commandName: ping
          - commandStartedEvent:
              commandName: aggregate
          - commandSucceededEvent:
              commandName: aggregate
  - description: "database.listCollections succeeds after retryable handshake network error"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *client
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data:
              failCommands: [ping, saslContinue]
              closeConnection: true
      - name: runCommand
        object: *database
        arguments: { commandName: ping, command: { ping: 1 } }
        expectError: { isError: true }
      - name: listCollections
        object: *database
        arguments:
          filter: {}
    expectEvents:
      - client: *client
        eventType: cmap
        events:
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
      - client: *client
        events:
          - commandStartedEvent:
              command: { ping: 1 }
              databaseName: *databaseName
          - commandFailedEvent:
              commandName: ping
          - commandStartedEvent:
              commandName: listCollections
          - commandSucceededEvent:
              commandName: listCollections
  - description: "database.listCollections succeeds after retryable handshake server error (ShutdownInProgress)"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *client
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data:
              failCommands: [ping, saslContinue]
              closeConnection: true
      - name: runCommand
        object: *database
        arguments: { commandName: ping, command: { ping: 1 } }
        expectError: { isError: true }
      - name: listCollections
        object: *database
        arguments:
          filter: {}
    expectEvents:
      - client: *client
        eventType: cmap
        events:
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
      - client: *client
        events:
          - commandStartedEvent:
              command: { ping: 1 }
              databaseName: *databaseName
          - commandFailedEvent:
              commandName: ping
          - commandStartedEvent:
              commandName: listCollections
          - commandSucceededEvent:
              commandName: listCollections
  - description: "database.listCollectionNames succeeds after retryable handshake network error"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *client
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data:
              failCommands: [ping, saslContinue]
              closeConnection: true
      - name: runCommand
        object: *database
        arguments: { commandName: ping, command: { ping: 1 } }
        expectError: { isError: true }
      - name: listCollectionNames
        object: *database
        arguments:
          filter: {}
    expectEvents:
      - client: *client
        eventType: cmap
        events:
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
      - client: *client
        events:
          - commandStartedEvent:
              command: { ping: 1 }
              databaseName: *databaseName
          - commandFailedEvent:
              commandName: ping
          - commandStartedEvent:
              commandName: listCollections
          - commandSucceededEvent:
              commandName:
                listCollections
  - description: "database.listCollectionNames succeeds after retryable handshake server error (ShutdownInProgress)"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *client
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data:
              failCommands: [ping, saslContinue]
              closeConnection: true
      - name: runCommand
        object: *database
        arguments: { commandName: ping, command: { ping: 1 } }
        expectError: { isError: true }
      - name: listCollectionNames
        object: *database
        arguments:
          filter: {}
    expectEvents:
      - client: *client
        eventType: cmap
        events:
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
      - client: *client
        events:
          - commandStartedEvent:
              command: { ping: 1 }
              databaseName: *databaseName
          - commandFailedEvent:
              commandName: ping
          - commandStartedEvent:
              commandName: listCollections
          - commandSucceededEvent:
              commandName: listCollections
  - description: "database.createChangeStream succeeds after retryable handshake network error"
    runOnRequirements:
      - serverless: forbid
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *client
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data:
              failCommands: [ping, saslContinue]
              closeConnection: true
      - name: runCommand
        object: *database
        arguments: { commandName: ping, command: { ping: 1 } }
        expectError: { isError: true }
      - name: createChangeStream
        object: *database
        arguments:
          pipeline: []
        saveResultAsEntity: changeStream
    expectEvents:
      - client: *client
        eventType: cmap
        events:
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
      - client: *client
        events:
          - commandStartedEvent:
              command: { ping: 1 }
              databaseName: *databaseName
          - commandFailedEvent:
              commandName: ping
          - commandStartedEvent:
              commandName: aggregate
          - commandSucceededEvent:
              commandName: aggregate
  - description: "database.createChangeStream succeeds after retryable handshake server error (ShutdownInProgress)"
    runOnRequirements:
      - serverless: forbid
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *client
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data:
              failCommands: [ping, saslContinue]
              closeConnection: true
      - name: runCommand
        object: *database
        arguments: { commandName: ping, command: { ping: 1 } }
        expectError: { isError: true }
      - name: createChangeStream
        object: *database
        arguments:
          pipeline: []
        saveResultAsEntity: changeStream
    expectEvents:
      - client: *client
        eventType: cmap
        events:
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
      - client: *client
        events:
          - commandStartedEvent:
              command: { ping: 1 }
              databaseName: *databaseName
          - commandFailedEvent:
              commandName: ping
          - commandStartedEvent:
              commandName: aggregate
          - commandSucceededEvent:
              commandName: aggregate
  - description: "collection.aggregate succeeds after retryable handshake network error"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *client
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data:
              failCommands: [ping, saslContinue]
              closeConnection: true
      - name: runCommand
        object: *database
        arguments: { commandName: ping, command: { ping: 1 } }
        expectError: { isError: true }
      - name: aggregate
        object: *collection
        arguments:
          pipeline: []
    expectEvents:
      - client: *client
        eventType: cmap

  - description: "collection.aggregate succeeds after retryable handshake server error (ShutdownInProgress)"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *client
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data:
              failCommands: [ping, saslContinue]
              errorCode: 91 # ShutdownInProgress
      - name: runCommand
        object: *database
        arguments: { commandName: ping, command: { ping: 1 } }
        expectError: { isError: true }
      - name: aggregate
        object: *collection
        arguments:
          pipeline: []
    expectEvents:
      - client: *client
        eventType: cmap
        events:
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
      - client: *client
        events:
          - commandStartedEvent: { command: { ping: 1 }, databaseName: *databaseName }
          - commandFailedEvent: { commandName: ping }
          - commandStartedEvent: { commandName: aggregate }
          - commandSucceededEvent: { commandName: aggregate }

  - description: "collection.countDocuments succeeds after retryable handshake network error"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *client
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data:
              failCommands: [ping, saslContinue]
              closeConnection: true
      - name: runCommand
        object: *database
        arguments: { commandName: ping, command: { ping: 1 } }
        expectError: { isError: true }
      - name: countDocuments
        object: *collection
        arguments:
          filter: {}
    expectEvents:
      - client: *client
        eventType: cmap
        events:
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
      - client: *client
        events:
          - commandStartedEvent: { command: { ping: 1 }, databaseName: *databaseName }
          - commandFailedEvent: { commandName: ping }
          - commandStartedEvent: { commandName: aggregate }
          - commandSucceededEvent: { commandName: aggregate }

  - description: "collection.countDocuments succeeds after retryable handshake server error (ShutdownInProgress)"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *client
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data:
              failCommands: [ping, saslContinue]
              errorCode: 91 # ShutdownInProgress
      - name: runCommand
        object: *database
        arguments: { commandName: ping, command: { ping: 1 } }
        expectError: { isError: true }
      - name: countDocuments
        object: *collection
        arguments:
          filter: {}
    expectEvents:
      - client: *client
        eventType: cmap
        events:
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
      - client: *client
        events:
          - commandStartedEvent: { command: { ping: 1 }, databaseName: *databaseName }
          - commandFailedEvent: { commandName: ping }
          - commandStartedEvent: { commandName: aggregate }
          - commandSucceededEvent: { commandName: aggregate }

  - description: "collection.estimatedDocumentCount succeeds after retryable handshake network error"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *client
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data:
              failCommands: [ping, saslContinue]
              closeConnection: true
      - name: runCommand
        object: *database
        arguments: { commandName: ping, command: { ping: 1 } }
        expectError: { isError: true }
      - name: estimatedDocumentCount
        object: *collection
    expectEvents:
      - client: *client
        eventType: cmap
        events:
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
      - client: *client
        events:
          - commandStartedEvent: { command: { ping: 1 }, databaseName: *databaseName }
          - commandFailedEvent: { commandName: ping }
          - commandStartedEvent: { commandName: count }
          - commandSucceededEvent: { commandName: count }

  - description: "collection.estimatedDocumentCount succeeds after retryable handshake server error (ShutdownInProgress)"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *client
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data:
              failCommands: [ping, saslContinue]
              errorCode: 91 # ShutdownInProgress
      - name: runCommand
        object: *database
        arguments: { commandName: ping, command: { ping: 1 } }
        expectError: { isError: true }
      - name: estimatedDocumentCount
        object: *collection
    expectEvents:
      - client: *client
        eventType: cmap
        events:
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
      - client: *client
        events:
          - commandStartedEvent: { command: { ping: 1 }, databaseName: *databaseName }
          - commandFailedEvent: { commandName: ping }
          - commandStartedEvent: { commandName: count }
          - commandSucceededEvent: { commandName: count }

  - description: "collection.distinct succeeds after retryable handshake network error"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *client
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data:
              failCommands: [ping, saslContinue]
              closeConnection: true
      - name: runCommand
        object: *database
        arguments: { commandName: ping, command: { ping: 1 } }
        expectError: { isError: true }
      - name: distinct
        object: *collection
        arguments:
          fieldName: x
          filter: {}
    expectEvents:
      - client: *client
        eventType: cmap
        events:
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
      - client: *client
        events:
          - commandStartedEvent: { command: { ping: 1 }, databaseName: *databaseName }
          - commandFailedEvent: { commandName: ping }
          - commandStartedEvent: { commandName: distinct }
          - commandSucceededEvent: { commandName: distinct }

  - description: "collection.distinct succeeds after retryable handshake server error (ShutdownInProgress)"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *client
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data:
              failCommands: [ping, saslContinue]
              errorCode: 91 # ShutdownInProgress
      - name: runCommand
        object: *database
        arguments: { commandName: ping, command: { ping: 1 } }
        expectError: { isError: true }
      - name: distinct
        object: *collection
        arguments:
          fieldName: x
          filter: {}
    expectEvents:
      - client: *client
        eventType: cmap
        events:
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
      - client: *client
        events:
          - commandStartedEvent: { command: { ping: 1 }, databaseName: *databaseName }
          - commandFailedEvent: { commandName: ping }
          - commandStartedEvent: { commandName: distinct }
          - commandSucceededEvent: { commandName: distinct }
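
# The expected events above encode which server command each helper maps to:
# countDocuments runs an "aggregate" command while estimatedDocumentCount
# runs "count". A Ruby sketch of the two helpers (the collection name is
# illustrative, not taken from this spec file):
#
#   coll = client[:test]
#   coll.count_documents({})        # sends an aggregate command
#   coll.estimated_document_count   # sends a count command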

  - description: "collection.find succeeds after retryable handshake network error"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *client
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data:
              failCommands: [ping, saslContinue]
              closeConnection: true
      - name: runCommand
        object: *database
        arguments: { commandName: ping, command: { ping: 1 } }
        expectError: { isError: true }
      - name: find
        object: *collection
        arguments:
          filter: {}
    expectEvents:
      - client: *client
        eventType: cmap
        events:
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
      - client: *client
        events:
          - commandStartedEvent: { command: { ping: 1 }, databaseName: *databaseName }
          - commandFailedEvent: { commandName: ping }
          - commandStartedEvent: { commandName: find }
          - commandSucceededEvent: { commandName: find }

  - description: "collection.find succeeds after retryable handshake server error (ShutdownInProgress)"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *client
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data:
              failCommands: [ping, saslContinue]
              errorCode: 91 # ShutdownInProgress
      - name: runCommand
        object: *database
        arguments: { commandName: ping, command: { ping: 1 } }
        expectError: { isError: true }
      - name: find
        object: *collection
        arguments:
          filter: {}
    expectEvents:
      - client: *client
        eventType: cmap
        events:
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
      - client: *client
        events:
          - commandStartedEvent: { command: { ping: 1 }, databaseName: *databaseName }
          - commandFailedEvent: { commandName: ping }
          - commandStartedEvent: { commandName: find }
          - commandSucceededEvent: { commandName: find }

  - description: "collection.findOne succeeds after retryable handshake network error"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *client
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data:
              failCommands: [ping, saslContinue]
              closeConnection: true
      - name: runCommand
        object: *database
        arguments: { commandName: ping, command: { ping: 1 } }
        expectError: { isError: true }
      - name: findOne
        object: *collection
        arguments:
          filter: {}
    expectEvents:
      - client: *client
        eventType: cmap
        events:
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
      - client: *client
        events:
          - commandStartedEvent: { command: { ping: 1 }, databaseName: *databaseName }
          - commandFailedEvent: { commandName: ping }
          - commandStartedEvent: { commandName: find }
          - commandSucceededEvent: { commandName: find }

  - description: "collection.findOne succeeds after retryable handshake server error (ShutdownInProgress)"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *client
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data:
              failCommands: [ping, saslContinue]
              errorCode: 91 # ShutdownInProgress
      - name: runCommand
        object: *database
        arguments: { commandName: ping, command: { ping: 1 } }
        expectError: { isError: true }
      - name: findOne
        object: *collection
        arguments:
          filter: {}
    expectEvents:
      - client: *client
        eventType: cmap
        events:
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
      - client: *client
        events:
          - commandStartedEvent: { command: { ping: 1 }, databaseName: *databaseName }
          - commandFailedEvent: { commandName: ping }
          - commandStartedEvent: { commandName: find }
          - commandSucceededEvent: { commandName: find }
"collection.listIndexes succeeds after retryable handshake network error" operations: - name: failPoint object: testRunner arguments: client: *client failPoint: configureFailPoint: failCommand mode: { times: 2 } data: failCommands: [ping, saslContinue] closeConnection: true - name: runCommand object: *database arguments: { commandName: ping, command: { ping: 1 } } expectError: { isError: true } - name: listIndexes object: *collection expectEvents: - client: *client eventType: cmap events: - { connectionCheckOutStartedEvent: {} } - { connectionCheckOutStartedEvent: {} } - { connectionCheckOutStartedEvent: {} } - { connectionCheckOutStartedEvent: {} } - client: *client events: - commandStartedEvent: command: { ping: 1 } databaseName: *databaseName - commandFailedEvent: commandName: ping - commandStartedEvent: commandName: listIndexes - commandSucceededEvent: commandName: listIndexes - description: "collection.listIndexes succeeds after retryable handshake server error (ShutdownInProgress)" operations: - name: failPoint object: testRunner arguments: client: *client failPoint: configureFailPoint: failCommand mode: { times: 2 } data: failCommands: [ping, saslContinue] closeConnection: true - name: runCommand object: *database arguments: { commandName: ping, command: { ping: 1 } } expectError: { isError: true } - name: listIndexes object: *collection expectEvents: - client: *client eventType: cmap events: - { connectionCheckOutStartedEvent: {} } - { connectionCheckOutStartedEvent: {} } - { connectionCheckOutStartedEvent: {} } - { connectionCheckOutStartedEvent: {} } - client: *client events: - commandStartedEvent: command: { ping: 1 } databaseName: *databaseName - commandFailedEvent: commandName: ping - commandStartedEvent: commandName: listIndexes - commandSucceededEvent: commandName: listIndexes - description: "collection.listIndexNames succeeds after retryable handshake network error" operations: - name: failPoint object: testRunner arguments: client: *client failPoint: configureFailPoint: failCommand mode: { times: 2 } data: failCommands: [ping, saslContinue] closeConnection: true - name: runCommand object: *database arguments: { commandName: ping, command: { ping: 1 } } expectError: { isError: true } - name: listIndexNames object: *collection expectEvents: - client: *client eventType: cmap events: - { connectionCheckOutStartedEvent: {} } - { connectionCheckOutStartedEvent: {} } - { connectionCheckOutStartedEvent: {} } - { connectionCheckOutStartedEvent: {} } - client: *client events: - commandStartedEvent: command: { ping: 1 } databaseName: *databaseName - commandFailedEvent: commandName: ping - commandStartedEvent: commandName: listIndexes - commandSucceededEvent: commandName: listIndexes - description: "collection.listIndexNames succeeds after retryable handshake server error (ShutdownInProgress)" operations: - name: failPoint object: testRunner arguments: client: *client failPoint: configureFailPoint: failCommand mode: { times: 2 } data: failCommands: [ping, saslContinue] closeConnection: true - name: runCommand object: *database arguments: { commandName: ping, command: { ping: 1 } } expectError: { isError: true } - name: listIndexNames object: *collection expectEvents: - client: *client eventType: cmap events: - { connectionCheckOutStartedEvent: {} } - { connectionCheckOutStartedEvent: {} } - { connectionCheckOutStartedEvent: {} } - { connectionCheckOutStartedEvent: {} } - client: *client events: - commandStartedEvent: command: { ping: 1 } databaseName: *databaseName - 

  - description: "collection.createChangeStream succeeds after retryable handshake network error"
    runOnRequirements:
      - serverless: forbid
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *client
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data:
              failCommands: [ping, saslContinue]
              closeConnection: true
      - name: runCommand
        object: *database
        arguments: { commandName: ping, command: { ping: 1 } }
        expectError: { isError: true }
      - name: createChangeStream
        object: *collection
        arguments:
          pipeline: []
        saveResultAsEntity: changeStream
    expectEvents:
      - client: *client
        eventType: cmap
        events:
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
      - client: *client
        events:
          - commandStartedEvent: { command: { ping: 1 }, databaseName: *databaseName }
          - commandFailedEvent: { commandName: ping }
          - commandStartedEvent: { commandName: aggregate }
          - commandSucceededEvent: { commandName: aggregate }

  - description: "collection.createChangeStream succeeds after retryable handshake server error (ShutdownInProgress)"
    runOnRequirements:
      - serverless: forbid
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *client
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data:
              failCommands: [ping, saslContinue]
              errorCode: 91 # ShutdownInProgress
      - name: runCommand
        object: *database
        arguments: { commandName: ping, command: { ping: 1 } }
        expectError: { isError: true }
      - name: createChangeStream
        object: *collection
        arguments:
          pipeline: []
        saveResultAsEntity: changeStream
    expectEvents:
      - client: *client
        eventType: cmap
        events:
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
      - client: *client
        events:
          - commandStartedEvent: { command: { ping: 1 }, databaseName: *databaseName }
          - commandFailedEvent: { commandName: ping }
          - commandStartedEvent: { commandName: aggregate }
          - commandSucceededEvent: { commandName: aggregate }

# mongo-ruby-driver-2.21.3/spec/spec_tests/data/retryable_writes/legacy/bulkWrite-errorLabels.yml

runOn:
  - minServerVersion: "4.3.1"
    topology: ["replicaset", "sharded"]

data:
  - { _id: 1, x: 11 }
  - { _id: 2, x: 22 }

tests:
  - description: "BulkWrite succeeds with RetryableWriteError from server"
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands: ["update"]
        errorCode: 112                       # WriteConflict, not a retryable error code
        errorLabels: ["RetryableWriteError"] # Override server behavior: send RetryableWriteError label with non-retryable error code
    operation:
      name: "bulkWrite"
      arguments:
        requests:
          - { name: "deleteOne", arguments: { filter: { _id: 1 } } }
          - { name: "insertOne", arguments: { document: { _id: 3, x: 33 } } }
          - { name: "updateOne", arguments: { filter: { _id: 2 }, update: { $inc: { x: 1 } } } }
        options: { ordered: true }
    outcome: # Driver retries operation and it succeeds
      result:
        deletedCount: 1
        insertedCount: 1
        insertedIds: { 1: 3 }
        matchedCount: 1
        modifiedCount: 1
        upsertedCount: 0
        upsertedIds: {}
      collection:
        data:
          - { _id: 2, x: 23 }
          - { _id: 3, x: 33 }
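
# The RetryableWriteError label, not the error code, is what drives the retry
# decision in these tests. A minimal Ruby sketch of inspecting labels on a
# raised error (the rescue body is illustrative, not part of this spec file):
#
#   begin
#     collection.delete_one(_id: 1)
#   rescue Mongo::Error::OperationFailure => e
#     retryable = e.label?('RetryableWriteError')
#   end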

  - description: "BulkWrite fails if server does not return RetryableWriteError"
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands: ["update"]
        errorCode: 11600 # InterruptedAtShutdown, normally a retryable error code
        errorLabels: []  # Override server behavior: do not send RetryableWriteError label with retryable code
    operation:
      name: "bulkWrite"
      arguments:
        requests:
          - { name: "deleteOne", arguments: { filter: { _id: 1 } } }
          - { name: "insertOne", arguments: { document: { _id: 3, x: 33 } } }
          - { name: "updateOne", arguments: { filter: { _id: 2 }, update: { $inc: { x: 1 } } } }
        options: { ordered: true }
    outcome:
      error: true # Driver does not retry operation because there was no RetryableWriteError label on response
      result:
        errorLabelsOmit: ["RetryableWriteError"]
      collection:
        data:
          - { _id: 2, x: 22 }
          - { _id: 3, x: 33 }

# mongo-ruby-driver-2.21.3/spec/spec_tests/data/retryable_writes/legacy/bulkWrite-serverErrors.yml

runOn:
  - minServerVersion: "4.0"
    topology: ["replicaset"]
  - minServerVersion: "4.1.7"
    topology: ["sharded"]

data:
  - { _id: 1, x: 11 }
  - { _id: 2, x: 22 }

tests:
  - description: "BulkWrite succeeds after PrimarySteppedDown"
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands: ["update"]
        errorCode: 189
        errorLabels: ["RetryableWriteError"]
    operation:
      name: "bulkWrite"
      arguments:
        requests:
          - { name: "deleteOne", arguments: { filter: { _id: 1 } } }
          - { name: "insertOne", arguments: { document: { _id: 3, x: 33 } } }
          - { name: "updateOne", arguments: { filter: { _id: 2 }, update: { $inc: { x: 1 } } } }
        options: { ordered: true }
    outcome:
      result:
        deletedCount: 1
        insertedCount: 1
        insertedIds: { 1: 3 }
        matchedCount: 1
        modifiedCount: 1
        upsertedCount: 0
        upsertedIds: {}
      collection:
        data:
          - { _id: 2, x: 23 }
          - { _id: 3, x: 33 }

  - description: "BulkWrite succeeds after WriteConcernError ShutdownInProgress"
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands: ["insert"]
        errorLabels: ["RetryableWriteError"]
        writeConcernError:
          code: 91
          errmsg: Replication is being shut down
    operation:
      name: "bulkWrite"
      arguments:
        requests:
          - { name: "deleteOne", arguments: { filter: { _id: 1 } } }
          - { name: "insertOne", arguments: { document: { _id: 3, x: 33 } } }
          - { name: "updateOne", arguments: { filter: { _id: 2 }, update: { $inc: { x: 1 } } } }
        options: { ordered: true }
    outcome:
      result:
        deletedCount: 1
        insertedCount: 1
        insertedIds: { 1: 3 }
        matchedCount: 1
        modifiedCount: 1
        upsertedCount: 0
        upsertedIds: {}
      collection:
        data:
          - { _id: 2, x: 23 }
          - { _id: 3, x: 33 }

  - description: "BulkWrite fails with a RetryableWriteError label after two connection failures"
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 2 }
      data:
        failCommands: ["update"]
        closeConnection: true
    operation:
      name: "bulkWrite"
      arguments:
        requests:
          - { name: "deleteOne", arguments: { filter: { _id: 1 } } }
          - { name: "insertOne", arguments: { document: { _id: 3, x: 33 } } }
          - { name: "updateOne", arguments: { filter: { _id: 2 }, update: { $inc: { x: 1 } } } }
        options: { ordered: true }
    outcome:
      error: true
      result:
        errorLabelsContain: ["RetryableWriteError"]
      collection:
        data:
          - { _id: 2, x: 22 }
          - { _id: 3, x: 33 }
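
# The bulkWrite operations exercised above correspond to the Ruby driver's
# Collection#bulk_write helper. A sketch of the equivalent application call
# (the collection handle is illustrative):
#
#   collection.bulk_write(
#     [
#       { delete_one: { filter: { _id: 1 } } },
#       { insert_one: { _id: 3, x: 33 } },
#       { update_one: { filter: { _id: 2 }, update: { '$inc' => { x: 1 } } } }
#     ],
#     ordered: true
#   )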
retried" failPoint: configureFailPoint: onPrimaryTransactionalWrite mode: { times: 1 } operation: name: "bulkWrite" arguments: requests: - name: "insertOne" arguments: document: { _id: 2, x: 22 } - name: "updateOne" arguments: filter: { _id: 2 } update: { $inc: { x : 1 }} - name: "deleteOne" arguments: filter: { _id: 1 } options: { ordered: true } outcome: result: deletedCount: 1 insertedCount: 1 insertedIds: { 0: 2 } matchedCount: 1 modifiedCount: 1 upsertedCount: 0 upsertedIds: { } collection: data: - { _id: 2, x: 23 } - # Write operations in this ordered batch are intentionally sequenced so # that each write command consists of a single statement, which will # fail on the first attempt and succeed on the second, retry attempt. description: "All commands are retried" failPoint: configureFailPoint: onPrimaryTransactionalWrite mode: { times: 7 } operation: name: "bulkWrite" arguments: requests: - name: "insertOne" arguments: document: { _id: 2, x: 22 } - name: "updateOne" arguments: filter: { _id: 2 } update: { $inc: { x : 1 }} - name: "insertOne" arguments: document: { _id: 3, x: 33 } - name: "updateOne" arguments: filter: { _id: 4, x: 44 } update: { $inc: { x : 1 }} upsert: true - name: "insertOne" arguments: document: { _id: 5, x: 55 } - name: "replaceOne" arguments: filter: { _id: 3 } replacement: { _id: 3, x: 333 } - name: "deleteOne" arguments: filter: { _id: 1 } options: { ordered: true } outcome: result: deletedCount: 1 insertedCount: 3 insertedIds: { 0: 2, 2: 3, 4: 5 } matchedCount: 2 modifiedCount: 2 upsertedCount: 1 upsertedIds: { 3: 4 } collection: data: - { _id: 2, x: 23 } - { _id: 3, x: 333 } - { _id: 4, x: 45 } - { _id: 5, x: 55 } - description: "Both commands are retried after their first statement fails" failPoint: configureFailPoint: onPrimaryTransactionalWrite mode: { times: 2 } operation: name: "bulkWrite" arguments: requests: - name: "insertOne" arguments: document: { _id: 2, x: 22 } - name: "updateOne" arguments: filter: { _id: 1 } update: { $inc: { x : 1 }} - name: "updateOne" arguments: filter: { _id: 2 } update: { $inc: { x : 1 }} options: { ordered: true } outcome: result: deletedCount: 0 insertedCount: 1 insertedIds: { 0: 2 } matchedCount: 2 modifiedCount: 2 upsertedCount: 0 upsertedIds: { } collection: data: - { _id: 1, x: 12 } - { _id: 2, x: 23 } - description: "Second command is retried after its second statement fails" failPoint: configureFailPoint: onPrimaryTransactionalWrite mode: { skip: 2 } operation: name: "bulkWrite" arguments: requests: - name: "insertOne" arguments: document: { _id: 2, x: 22 } - name: "updateOne" arguments: filter: { _id: 1 } update: { $inc: { x : 1 }} - name: "updateOne" arguments: filter: { _id: 2 } update: { $inc: { x : 1 }} options: { ordered: true } outcome: result: deletedCount: 0 insertedCount: 1 insertedIds: { 0: 2 } matchedCount: 2 modifiedCount: 2 upsertedCount: 0 upsertedIds: { } collection: data: - { _id: 1, x: 12 } - { _id: 2, x: 23 } - description: "BulkWrite with unordered execution" failPoint: configureFailPoint: onPrimaryTransactionalWrite mode: { times: 1 } operation: name: "bulkWrite" arguments: requests: - name: "insertOne" arguments: document: { _id: 2, x: 22 } - name: "insertOne" arguments: document: { _id: 3, x: 33 } options: { ordered: false } outcome: result: deletedCount: 0 insertedCount: 2 insertedIds: { 0: 2, 1: 3 } matchedCount: 0 modifiedCount: 0 upsertedCount: 0 upsertedIds: { } collection: data: - { _id: 1, x: 11 } - { _id: 2, x: 22 } - { _id: 3, x: 33 } - description: "First insertOne is never 
committed" failPoint: configureFailPoint: onPrimaryTransactionalWrite mode: { times: 2 } data: { failBeforeCommitExceptionCode: 1 } operation: name: "bulkWrite" arguments: requests: - name: "insertOne" arguments: document: { _id: 2, x: 22 } - name: "updateOne" arguments: filter: { _id: 2 } update: { $inc: { x : 1 }} - name: "deleteOne" arguments: filter: { _id: 1 } options: { ordered: true } outcome: error: true # Driver does not return a complete result in case of an error # Therefore, we cannot validate it. # result: # deletedCount: 0 # insertedCount: 0 # insertedIds: { } # matchedCount: 0 # modifiedCount: 0 # upsertedCount: 0 # upsertedIds: { } collection: data: - { _id: 1, x: 11 } - description: "Second updateOne is never committed" failPoint: configureFailPoint: onPrimaryTransactionalWrite mode: { skip: 1 } data: { failBeforeCommitExceptionCode: 1 } operation: name: "bulkWrite" arguments: requests: - name: "insertOne" arguments: document: { _id: 2, x: 22 } - name: "updateOne" arguments: filter: { _id: 2 } update: { $inc: { x : 1 }} - name: "deleteOne" arguments: filter: { _id: 1 } options: { ordered: true } outcome: error: true # Driver does not return a complete result in case of an error # Therefore, we cannot validate it. # result: # deletedCount: 0 # insertedCount: 1 # insertedIds: { 0: 2 } # matchedCount: 0 # modifiedCount: 0 # upsertedCount: 0 # upsertedIds: { } collection: data: - { _id: 1, x: 11 } - { _id: 2, x: 22 } - description: "Third updateOne is never committed" failPoint: configureFailPoint: onPrimaryTransactionalWrite mode: { skip: 2 } data: { failBeforeCommitExceptionCode: 1 } operation: name: "bulkWrite" arguments: requests: - name: "updateOne" arguments: filter: { _id: 1 } update: { $inc: { x : 1 }} - name: "insertOne" arguments: document: { _id: 2, x: 22 } - name: "updateOne" arguments: filter: { _id: 2 } update: { $inc: { x : 1 }} options: { ordered: true } outcome: error: true # Driver does not return a complete result in case of an error # Therefore, we cannot validate it. # result: # deletedCount: 0 # insertedCount: 1 # insertedIds: { 1: 2 } # matchedCount: 1 # modifiedCount: 1 # upsertedCount: 0 # upsertedIds: { } collection: data: - { _id: 1, x: 12 } - { _id: 2, x: 22 } - # The onPrimaryTransactionalWrite fail point only triggers for write # operations that include a transaction ID. Therefore, it will not # affect the initial deleteMany and will trigger once (and only once) # for the first insertOne attempt. description: "Single-document write following deleteMany is retried" failPoint: configureFailPoint: onPrimaryTransactionalWrite mode: { times: 1 } data: { failBeforeCommitExceptionCode: 1 } operation: name: "bulkWrite" arguments: requests: - name: "deleteMany" arguments: filter: { x: 11 } - name: "insertOne" arguments: document: { _id: 2, x: 22 } options: { ordered: true } outcome: result: deletedCount: 1 insertedCount: 1 insertedIds: { 1: 2 } matchedCount: 0 modifiedCount: 0 upsertedCount: 0 upsertedIds: { } collection: data: - { _id: 2, x: 22 } - # The onPrimaryTransactionalWrite fail point only triggers for write # operations that include a transaction ID. Therefore, it will not # affect the initial updateMany and will trigger once (and only once) # for the first insertOne attempt. 
description: "Single-document write following updateMany is retried" failPoint: configureFailPoint: onPrimaryTransactionalWrite mode: { times: 1 } data: { failBeforeCommitExceptionCode: 1 } operation: name: "bulkWrite" arguments: requests: - name: "updateMany" arguments: filter: { x: 11 } update: { $inc: { x : 1 }} - name: "insertOne" arguments: document: { _id: 2, x: 22 } options: { ordered: true } outcome: result: deletedCount: 0 insertedCount: 1 insertedIds: { 1: 2 } matchedCount: 1 modifiedCount: 1 upsertedCount: 0 upsertedIds: { } collection: data: - { _id: 1, x: 12 } - { _id: 2, x: 22 } mongo-ruby-driver-2.21.3/spec/spec_tests/data/retryable_writes/legacy/deleteMany.yml000066400000000000000000000007211505113246500306530ustar00rootroot00000000000000runOn: - minServerVersion: "3.6" topology: ["replicaset", "sharded"] data: - { _id: 1, x: 11 } - { _id: 2, x: 22 } tests: - description: "DeleteMany ignores retryWrites" useMultipleMongoses: true operation: name: "deleteMany" arguments: filter: { } outcome: result: deletedCount: 2 collection: data: [] mongo-ruby-driver-2.21.3/spec/spec_tests/data/retryable_writes/legacy/deleteOne-errorLabels.yml000066400000000000000000000032621505113246500327450ustar00rootroot00000000000000runOn: - minServerVersion: "4.3.1" topology: ["replicaset", "sharded"] data: - { _id: 1, x: 11 } - { _id: 2, x: 22 } tests: - description: "DeleteOne succeeds with RetryableWriteError from server" failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["delete"] errorCode: 112 # WriteConflict, not a retryable error code errorLabels: ["RetryableWriteError"] # Override server behavior: send RetryableWriteError label with non-retryable error code operation: name: "deleteOne" arguments: filter: { _id: 1 } outcome: # Driver retries operation and it succeeds result: deletedCount: 1 collection: data: - { _id: 2, x: 22 } - description: "DeleteOne fails if server does not return RetryableWriteError" failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["delete"] errorCode: 11600 # InterruptedAtShutdown, normally a retryable error code errorLabels: [] # Override server behavior: do not send RetryableWriteError label with retryable code operation: name: "deleteOne" arguments: filter: { _id: 1 } outcome: error: true # Driver does not retry operation because there was no RetryableWriteError label on response result: errorLabelsOmit: ["RetryableWriteError"] collection: data: - { _id: 1, x: 11 } - { _id: 2, x: 22 } mongo-ruby-driver-2.21.3/spec/spec_tests/data/retryable_writes/legacy/deleteOne-serverErrors.yml000066400000000000000000000041041505113246500331700ustar00rootroot00000000000000runOn: - minServerVersion: "4.0" topology: ["replicaset"] - minServerVersion: "4.1.7" topology: ["sharded"] data: - { _id: 1, x: 11 } - { _id: 2, x: 22 } tests: - description: "DeleteOne succeeds after PrimarySteppedDown" failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["delete"] errorCode: 189 errorLabels: ["RetryableWriteError"] operation: name: "deleteOne" arguments: filter: { _id: 1 } outcome: result: deletedCount: 1 collection: data: - { _id: 2, x: 22 } - description: "DeleteOne succeeds after WriteConcernError ShutdownInProgress" failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["delete"] errorLabels: ["RetryableWriteError"] writeConcernError: code: 91 errmsg: Replication is being shut down operation: name: "deleteOne" arguments: filter: { _id: 1 } outcome: result: 

  - description: "DeleteOne fails with RetryableWriteError label after two connection failures"
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 2 }
      data:
        failCommands: ["delete"]
        closeConnection: true
    operation:
      name: "deleteOne"
      arguments:
        filter: { _id: 1 }
    outcome:
      error: true
      result:
        errorLabelsContain: ["RetryableWriteError"]
      collection:
        data:
          - { _id: 1, x: 11 }
          - { _id: 2, x: 22 }

# mongo-ruby-driver-2.21.3/spec/spec_tests/data/retryable_writes/legacy/deleteOne.yml

runOn:
  - minServerVersion: "3.6"
    topology: ["replicaset"]

data:
  - { _id: 1, x: 11 }
  - { _id: 2, x: 22 }

tests:
  - description: "DeleteOne is committed on first attempt"
    failPoint:
      configureFailPoint: onPrimaryTransactionalWrite
      mode: { times: 1 }
    operation:
      name: "deleteOne"
      arguments:
        filter: { _id: 1 }
    outcome:
      result:
        deletedCount: 1
      collection:
        data:
          - { _id: 2, x: 22 }

  - description: "DeleteOne is not committed on first attempt"
    failPoint:
      configureFailPoint: onPrimaryTransactionalWrite
      mode: { times: 1 }
      data: { failBeforeCommitExceptionCode: 1 }
    operation:
      name: "deleteOne"
      arguments:
        filter: { _id: 1 }
    outcome:
      result:
        deletedCount: 1
      collection:
        data:
          - { _id: 2, x: 22 }

  - description: "DeleteOne is never committed"
    failPoint:
      configureFailPoint: onPrimaryTransactionalWrite
      mode: { times: 2 }
      data: { failBeforeCommitExceptionCode: 1 }
    operation:
      name: "deleteOne"
      arguments:
        filter: { _id: 1 }
    outcome:
      error: true
      collection:
        data:
          - { _id: 1, x: 11 }
          - { _id: 2, x: 22 }

# mongo-ruby-driver-2.21.3/spec/spec_tests/data/retryable_writes/legacy/findOneAndDelete-errorLabels.yml

runOn:
  - minServerVersion: "4.3.1"
    topology: ["replicaset", "sharded"]

data:
  - { _id: 1, x: 11 }
  - { _id: 2, x: 22 }

tests:
  - description: "FindOneAndDelete succeeds with RetryableWriteError from server"
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands: ["findAndModify"]
        errorCode: 112                       # WriteConflict, not a retryable error code
        errorLabels: ["RetryableWriteError"] # Override server behavior: send RetryableWriteError label with non-retryable error code
    operation:
      name: "findOneAndDelete"
      arguments:
        filter: { x: { $gte: 11 } }
        sort: { x: 1 }
    outcome: # Driver retries operation and it succeeds
      result: { _id: 1, x: 11 }
      collection:
        data:
          - { _id: 2, x: 22 }

  - description: "FindOneAndDelete fails if server does not return RetryableWriteError"
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands: ["findAndModify"]
        errorCode: 11600 # InterruptedAtShutdown, normally a retryable error code
        errorLabels: []  # Override server behavior: do not send RetryableWriteError label with retryable code
    operation:
      name: "findOneAndDelete"
      arguments:
        filter: { x: { $gte: 11 } }
        sort: { x: 1 }
    outcome:
      error: true # Driver does not retry operation because there was no RetryableWriteError label on response
      result:
        errorLabelsOmit: ["RetryableWriteError"]
      collection:
        data:
          - { _id: 1, x: 11 }
          - { _id: 2, x: 22 }
PrimarySteppedDown" failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["findAndModify"] errorCode: 189 errorLabels: ["RetryableWriteError"] operation: name: "findOneAndDelete" arguments: filter: { x: { $gte: 11 }} sort: { x: 1 } outcome: result: { _id: 1, x: 11 } collection: data: - { _id: 2, x: 22 } - description: "FindOneAndDelete succeeds after WriteConcernError ShutdownInProgress" failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["findAndModify"] errorLabels: ["RetryableWriteError"] writeConcernError: code: 91 errmsg: Replication is being shut down operation: name: "findOneAndDelete" arguments: filter: { x: { $gte: 11 }} sort: { x: 1 } outcome: result: { _id: 1, x: 11 } collection: data: - { _id: 2, x: 22 } - description: "FindOneAndDelete fails with a RetryableWriteError label after two connection failures" failPoint: configureFailPoint: failCommand mode: { times: 2 } data: failCommands: ["findAndModify"] closeConnection: true operation: name: "findOneAndDelete" arguments: filter: { x: { $gte: 11 } } sort: { x: 1 } outcome: error: true result: errorLabelsContain: ["RetryableWriteError"] collection: data: - { _id: 1, x: 11 } - { _id: 2, x: 22 } mongo-ruby-driver-2.21.3/spec/spec_tests/data/retryable_writes/legacy/findOneAndDelete.yml000066400000000000000000000032561505113246500317220ustar00rootroot00000000000000runOn: - minServerVersion: "3.6" topology: ["replicaset"] data: - { _id: 1, x: 11 } - { _id: 2, x: 22 } tests: - description: "FindOneAndDelete is committed on first attempt" failPoint: configureFailPoint: onPrimaryTransactionalWrite mode: { times: 1 } operation: name: "findOneAndDelete" arguments: filter: { x: { $gte: 11 }} sort: { x: 1 } outcome: result: { _id: 1, x: 11 } collection: data: - { _id: 2, x: 22 } - description: "FindOneAndDelete is not committed on first attempt" failPoint: configureFailPoint: onPrimaryTransactionalWrite mode: { times: 1 } data: { failBeforeCommitExceptionCode: 1 } operation: name: "findOneAndDelete" arguments: filter: { x: { $gte: 11 }} sort: { x: 1 } outcome: result: { _id: 1, x: 11 } collection: data: - { _id: 2, x: 22 } - description: "FindOneAndDelete is never committed" failPoint: configureFailPoint: onPrimaryTransactionalWrite mode: { times: 2 } data: { failBeforeCommitExceptionCode: 1 } operation: name: "findOneAndDelete" arguments: filter: { x: { $gte: 11 }} sort: { x: 1 } outcome: error: true collection: data: - { _id: 1, x: 11 } - { _id: 2, x: 22 } findOneAndReplace-errorLabels.yml000066400000000000000000000036451505113246500342700ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/retryable_writes/legacyrunOn: - minServerVersion: "4.3.1" topology: ["replicaset", "sharded"] data: - { _id: 1, x: 11 } - { _id: 2, x: 22 } tests: - description: "FindOneAndReplace succeeds with RetryableWriteError from server" failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["findAndModify"] errorCode: 112 # WriteConflict, not a retryable error code errorLabels: ["RetryableWriteError"] # Override server behavior: send RetryableWriteError label with non-retryable error code operation: name: "findOneAndReplace" arguments: filter: { _id: 1 } replacement: { _id: 1, x: 111 } returnDocument: "Before" outcome: # Driver retries operation and it succeeds result: { _id: 1, x: 11 } collection: data: - { _id: 1, x: 111 } - { _id: 2, x: 22 } - description: "FindOneAndReplace fails if server does not return RetryableWriteError" failPoint: 

# mongo-ruby-driver-2.21.3/spec/spec_tests/data/retryable_writes/legacy/findOneAndReplace-errorLabels.yml

runOn:
  - minServerVersion: "4.3.1"
    topology: ["replicaset", "sharded"]

data:
  - { _id: 1, x: 11 }
  - { _id: 2, x: 22 }

tests:
  - description: "FindOneAndReplace succeeds with RetryableWriteError from server"
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands: ["findAndModify"]
        errorCode: 112                       # WriteConflict, not a retryable error code
        errorLabels: ["RetryableWriteError"] # Override server behavior: send RetryableWriteError label with non-retryable error code
    operation:
      name: "findOneAndReplace"
      arguments:
        filter: { _id: 1 }
        replacement: { _id: 1, x: 111 }
        returnDocument: "Before"
    outcome: # Driver retries operation and it succeeds
      result: { _id: 1, x: 11 }
      collection:
        data:
          - { _id: 1, x: 111 }
          - { _id: 2, x: 22 }

  - description: "FindOneAndReplace fails if server does not return RetryableWriteError"
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands: ["findAndModify"]
        errorCode: 11600 # InterruptedAtShutdown, normally a retryable error code
        errorLabels: []  # Override server behavior: do not send RetryableWriteError label with retryable code
    operation:
      name: "findOneAndReplace"
      arguments:
        filter: { _id: 1 }
        replacement: { _id: 1, x: 111 }
        returnDocument: "Before"
    outcome:
      error: true # Driver does not retry operation because there was no RetryableWriteError label on response
      result:
        errorLabelsOmit: ["RetryableWriteError"]
      collection:
        data:
          - { _id: 1, x: 11 }
          - { _id: 2, x: 22 }

# mongo-ruby-driver-2.21.3/spec/spec_tests/data/retryable_writes/legacy/findOneAndReplace-serverErrors.yml

runOn:
  - minServerVersion: "4.0"
    topology: ["replicaset"]
  - minServerVersion: "4.1.7"
    topology: ["sharded"]

data:
  - { _id: 1, x: 11 }
  - { _id: 2, x: 22 }

tests:
  - description: "FindOneAndReplace succeeds after PrimarySteppedDown"
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands: ["findAndModify"]
        errorCode: 189
        errorLabels: ["RetryableWriteError"]
    operation:
      name: "findOneAndReplace"
      arguments:
        filter: { _id: 1 }
        replacement: { _id: 1, x: 111 }
        returnDocument: "Before"
    outcome:
      result: { _id: 1, x: 11 }
      collection:
        data:
          - { _id: 1, x: 111 }
          - { _id: 2, x: 22 }

  - description: "FindOneAndReplace succeeds after WriteConcernError ShutdownInProgress"
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands: ["findAndModify"]
        errorLabels: ["RetryableWriteError"]
        writeConcernError:
          code: 91
          errmsg: Replication is being shut down
    operation:
      name: "findOneAndReplace"
      arguments:
        filter: { _id: 1 }
        replacement: { _id: 1, x: 111 }
        returnDocument: "Before"
    outcome:
      result: { _id: 1, x: 11 }
      collection:
        data:
          - { _id: 1, x: 111 }
          - { _id: 2, x: 22 }

  - description: "FindOneAndReplace fails with a RetryableWriteError label after two connection failures"
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 2 }
      data:
        failCommands: ["findAndModify"]
        closeConnection: true
    operation:
      name: "findOneAndReplace"
      arguments:
        filter: { _id: 1 }
        replacement: { _id: 1, x: 111 }
        returnDocument: "Before"
    outcome:
      error: true
      result:
        errorLabelsContain: ["RetryableWriteError"]
      collection:
        data:
          - { _id: 1, x: 11 }
          - { _id: 2, x: 22 }

# mongo-ruby-driver-2.21.3/spec/spec_tests/data/retryable_writes/legacy/findOneAndReplace.yml

runOn:
  - minServerVersion: "3.6"
    topology: ["replicaset"]

data:
  - { _id: 1, x: 11 }
  - { _id: 2, x: 22 }

tests:
  - description: "FindOneAndReplace is committed on first attempt"
    failPoint:
      configureFailPoint: onPrimaryTransactionalWrite
      mode: { times: 1 }
    operation:
      name: "findOneAndReplace"
      arguments:
        filter: { _id: 1 }
        replacement: { _id: 1, x: 111 }
        returnDocument: "Before"
    outcome:
      result: { _id: 1, x: 11 }
      collection:
        data:
          - { _id: 1, x: 111 }
          - { _id: 2, x: 22 }

  - description: "FindOneAndReplace is not committed on first attempt"
    failPoint:
      configureFailPoint: onPrimaryTransactionalWrite
      mode: { times: 1 }
      data: { failBeforeCommitExceptionCode: 1 }
    operation:
      name: "findOneAndReplace"
      arguments:
        filter: { _id: 1 }
        replacement: { _id: 1, x: 111 }
        returnDocument: "Before"
    outcome:
      result: { _id: 1, x: 11 }
      collection:
        data:
          - { _id: 1, x: 111 }
          - { _id: 2, x: 22 }

  - description: "FindOneAndReplace is never committed"
    failPoint:
      configureFailPoint: onPrimaryTransactionalWrite
      mode: { times: 2 }
      data: { failBeforeCommitExceptionCode: 1 }
    operation:
      name: "findOneAndReplace"
      arguments:
        filter: { _id: 1 }
        replacement: { _id: 1, x: 111 }
        returnDocument: "Before"
    outcome:
      error: true
      collection:
        data:
          - { _id: 1, x: 11 }
          - { _id: 2, x: 22 }

# mongo-ruby-driver-2.21.3/spec/spec_tests/data/retryable_writes/legacy/findOneAndUpdate-errorLabels.yml

runOn:
  - minServerVersion: "4.3.1"
    topology: ["replicaset", "sharded"]

data:
  - { _id: 1, x: 11 }
  - { _id: 2, x: 22 }

tests:
  - description: "FindOneAndUpdate succeeds with RetryableWriteError from server"
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands: ["findAndModify"]
        errorCode: 112                       # WriteConflict, not a retryable error code
        errorLabels: ["RetryableWriteError"] # Override server behavior: send RetryableWriteError label with non-retryable error code
    operation:
      name: "findOneAndUpdate"
      arguments:
        filter: { _id: 1 }
        update: { $inc: { x: 1 } }
        returnDocument: "Before"
    outcome: # Driver retries operation and it succeeds
      result: { _id: 1, x: 11 }
      collection:
        data:
          - { _id: 1, x: 12 }
          - { _id: 2, x: 22 }

  - description: "FindOneAndUpdate fails if server does not return RetryableWriteError"
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands: ["findAndModify"]
        errorCode: 11600 # InterruptedAtShutdown, normally a retryable error code
        errorLabels: []  # Override server behavior: do not send RetryableWriteError label with retryable code
    operation:
      name: "findOneAndUpdate"
      arguments:
        filter: { _id: 1 }
        update: { $inc: { x: 1 } }
        returnDocument: "Before"
    outcome:
      error: true # Driver does not retry operation because there was no RetryableWriteError label on response
      result:
        errorLabelsOmit: ["RetryableWriteError"]
      collection:
        data:
          - { _id: 1, x: 11 }
          - { _id: 2, x: 22 }
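
# returnDocument: "Before" above corresponds to the Ruby driver's
# :return_document option on Collection#find_one_and_update. A sketch of the
# equivalent application call (the collection handle is illustrative):
#
#   before = collection.find_one_and_update(
#     { _id: 1 },
#     { '$inc' => { x: 1 } },
#     return_document: :before
#   )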
["RetryableWriteError"] collection: data: - { _id: 1, x: 11 } - { _id: 2, x: 22 } mongo-ruby-driver-2.21.3/spec/spec_tests/data/retryable_writes/legacy/findOneAndUpdate.yml000066400000000000000000000035341505113246500317410ustar00rootroot00000000000000runOn: - minServerVersion: "3.6" topology: ["replicaset"] data: - { _id: 1, x: 11 } - { _id: 2, x: 22 } tests: - description: "FindOneAndUpdate is committed on first attempt" failPoint: configureFailPoint: onPrimaryTransactionalWrite mode: { times: 1 } operation: name: "findOneAndUpdate" arguments: filter: { _id: 1 } update: { $inc: { x : 1 }} returnDocument: "Before" outcome: result: { _id: 1, x: 11 } collection: data: - { _id: 1, x: 12 } - { _id: 2, x: 22 } - description: "FindOneAndUpdate is not committed on first attempt" failPoint: configureFailPoint: onPrimaryTransactionalWrite mode: { times: 1 } data: { failBeforeCommitExceptionCode: 1 } operation: name: "findOneAndUpdate" arguments: filter: { _id: 1 } update: { $inc: { x : 1 }} returnDocument: "Before" outcome: result: { _id: 1, x: 11 } collection: data: - { _id: 1, x: 12 } - { _id: 2, x: 22 } - description: "FindOneAndUpdate is never committed" failPoint: configureFailPoint: onPrimaryTransactionalWrite mode: { times: 2 } data: { failBeforeCommitExceptionCode: 1 } operation: name: "findOneAndUpdate" arguments: filter: { _id: 1 } update: { $inc: { x : 1 }} outcome: error: true collection: data: - { _id: 1, x: 11 } - { _id: 2, x: 22 } mongo-ruby-driver-2.21.3/spec/spec_tests/data/retryable_writes/legacy/insertMany-errorLabels.yml000066400000000000000000000036521505113246500331750ustar00rootroot00000000000000runOn: - minServerVersion: "4.3.1" topology: ["replicaset", "sharded"] data: - { _id: 1, x: 11 } tests: - description: "InsertMany succeeds with RetryableWriteError from server" failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["insert"] errorCode: 112 # WriteConflict, not a retryable error code errorLabels: ["RetryableWriteError"] # Override server behavior: send RetryableWriteError label with non-retryable error code operation: name: "insertMany" arguments: documents: - { _id: 2, x: 22 } - { _id: 3, x: 33 } options: { ordered: true } outcome: # Driver retries operation and it succeeds result: insertedIds: { 0: 2, 1: 3 } collection: data: - { _id: 1, x: 11 } - { _id: 2, x: 22 } - { _id: 3, x: 33 } - description: "InsertMany fails if server does not return RetryableWriteError" failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["insert"] errorCode: 11600 # InterruptedAtShutdown, normally a retryable error code errorLabels: [] # Override server behavior: do not send RetryableWriteError label with retryable code operation: name: "insertMany" arguments: documents: - { _id: 2, x: 22 } - { _id: 3, x: 33 } options: { ordered: true } outcome: error: true # Driver does not retry operation because there was no RetryableWriteError label on response result: errorLabelsOmit: ["RetryableWriteError"] collection: data: - { _id: 1, x: 11 } mongo-ruby-driver-2.21.3/spec/spec_tests/data/retryable_writes/legacy/insertMany-serverErrors.yml000066400000000000000000000050351505113246500334210ustar00rootroot00000000000000runOn: - minServerVersion: "4.0" topology: ["replicaset"] - minServerVersion: "4.1.7" topology: ["sharded"] data: - { _id: 1, x: 11 } tests: - description: "InsertMany succeeds after PrimarySteppedDown" failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["insert"] errorCode: 189 errorLabels: 
["RetryableWriteError"] operation: name: "insertMany" arguments: documents: - { _id: 2, x: 22 } - { _id: 3, x: 33 } options: { ordered: true } outcome: result: insertedIds: { 0: 2, 1: 3 } collection: data: - { _id: 1, x: 11 } - { _id: 2, x: 22 } - { _id: 3, x: 33 } - description: "InsertMany succeeds after WriteConcernError ShutdownInProgress" failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["insert"] errorLabels: ["RetryableWriteError"] writeConcernError: code: 91 errmsg: Replication is being shut down operation: name: "insertMany" arguments: documents: - { _id: 2, x: 22 } - { _id: 3, x: 33 } options: { ordered: true } outcome: result: insertedIds: { 0: 2, 1: 3 } collection: data: - { _id: 1, x: 11 } - { _id: 2, x: 22 } - { _id: 3, x: 33 } - description: "InsertMany fails with a RetryableWriteError label after two connection failures" failPoint: configureFailPoint: failCommand mode: { times: 2 } data: failCommands: ["insert"] closeConnection: true operation: name: "insertMany" arguments: documents: - { _id: 2, x: 22 } - { _id: 3, x: 33 } options: { ordered: true } outcome: error: true result: errorLabelsContain: ["RetryableWriteError"] collection: data: - { _id: 1, x: 11 } mongo-ruby-driver-2.21.3/spec/spec_tests/data/retryable_writes/legacy/insertMany.yml000066400000000000000000000046601505113246500307230ustar00rootroot00000000000000runOn: - minServerVersion: "3.6" topology: ["replicaset"] data: - { _id: 1, x: 11 } tests: - description: "InsertMany succeeds after one network error" failPoint: configureFailPoint: onPrimaryTransactionalWrite mode: { times: 1 } operation: name: "insertMany" arguments: documents: - { _id: 2, x: 22 } - { _id: 3, x: 33 } options: { ordered: true } outcome: result: insertedIds: { 0: 2, 1: 3 } collection: data: - { _id: 1, x: 11 } - { _id: 2, x: 22 } - { _id: 3, x: 33 } - description: "InsertMany with unordered execution" failPoint: configureFailPoint: onPrimaryTransactionalWrite mode: { times: 1 } operation: name: "insertMany" arguments: documents: - { _id: 2, x: 22 } - { _id: 3, x: 33 } options: { ordered: false } outcome: result: insertedIds: { 0: 2, 1: 3 } collection: data: - { _id: 1, x: 11 } - { _id: 2, x: 22 } - { _id: 3, x: 33 } - description: "InsertMany fails after multiple network errors" failPoint: # Normally, a mongod will insert the documents as a batch with a # single commit. If this fails, mongod may try to insert each # document one at a time depending on the failure. Therefore our # single insert command may trigger the failpoint twice on each # driver attempt. This test permanently enables the fail point to # ensure the retry attempt always fails. 

  # Normally, a mongod will insert the documents as a batch with a
  # single commit. If this fails, mongod may try to insert each
  # document one at a time depending on the failure. Therefore our
  # single insert command may trigger the failpoint twice on each
  # driver attempt. This test permanently enables the fail point to
  # ensure the retry attempt always fails.
  - description: "InsertMany fails after multiple network errors"
    failPoint:
      configureFailPoint: onPrimaryTransactionalWrite
      mode: "alwaysOn"
      data: { failBeforeCommitExceptionCode: 1 }
    operation:
      name: "insertMany"
      arguments:
        documents:
          - { _id: 2, x: 22 }
          - { _id: 3, x: 33 }
          - { _id: 4, x: 44 }
        options: { ordered: true }
    outcome:
      error: true
      collection:
        data:
          - { _id: 1, x: 11 }

# mongo-ruby-driver-2.21.3/spec/spec_tests/data/retryable_writes/legacy/insertOne-errorLabels.yml

runOn:
  - minServerVersion: "4.3.1"
    topology: ["replicaset", "sharded"]

data: []

tests:
  - description: "InsertOne succeeds with RetryableWriteError from server"
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands: ["insert"]
        errorCode: 112                       # WriteConflict, not a retryable error code
        errorLabels: ["RetryableWriteError"] # Override server behavior: send RetryableWriteError label with non-retryable error code
    operation:
      name: "insertOne"
      arguments:
        document: { _id: 1, x: 11 }
    outcome: # Driver retries operation and it succeeds
      result:
        insertedId: 1
      collection:
        data:
          - { _id: 1, x: 11 }

  - description: "InsertOne fails if server does not return RetryableWriteError"
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands: ["insert"]
        errorCode: 11600 # InterruptedAtShutdown, normally a retryable error code
        errorLabels: []  # Override server behavior: do not send RetryableWriteError label with retryable code
    operation:
      name: "insertOne"
      arguments:
        document: { _id: 1, x: 11 }
    outcome:
      error: true # Driver does not retry operation because there was no RetryableWriteError label on response
      result:
        errorLabelsOmit: ["RetryableWriteError"]
      collection:
        data: []
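
# The clientOptions used in the next file (retryWrites: false) correspond to
# the Ruby driver's :retry_writes client option. A sketch of constructing
# such a client (the URI is illustrative, not part of these fixtures):
#
#   client = Mongo::Client.new('mongodb://localhost:27017', retry_writes: false)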

# mongo-ruby-driver-2.21.3/spec/spec_tests/data/retryable_writes/legacy/insertOne-serverErrors.yml

runOn:
  - minServerVersion: "4.0"
    topology: ["replicaset"]
  - minServerVersion: "4.1.7"
    topology: ["sharded"]

data:
  - { _id: 1, x: 11 }
  - { _id: 2, x: 22 }

tests:
  - description: "InsertOne succeeds after connection failure"
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands: ["insert"]
        closeConnection: true
    operation:
      name: "insertOne"
      arguments:
        document: { _id: 3, x: 33 }
    outcome:
      result:
        insertedId: 3
      collection:
        data:
          - { _id: 1, x: 11 }
          - { _id: 2, x: 22 }
          - { _id: 3, x: 33 }

  - description: "InsertOne fails after connection failure when retryWrites option is false"
    clientOptions:
      retryWrites: false
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands: ["insert"]
        closeConnection: true
    operation:
      name: "insertOne"
      arguments:
        document: { _id: 3, x: 33 }
    outcome:
      error: true
      result:
        # If retryWrites is false, the driver should not add the
        # RetryableWriteError label to the error.
        errorLabelsOmit: ["RetryableWriteError"]
      collection:
        data:
          - { _id: 1, x: 11 }
          - { _id: 2, x: 22 }

  - description: "InsertOne succeeds after NotWritablePrimary"
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands: ["insert"]
        errorCode: 10107
        errorLabels: ["RetryableWriteError"]
        closeConnection: false
    operation:
      name: "insertOne"
      arguments:
        document: { _id: 3, x: 33 }
    outcome:
      result:
        insertedId: 3
      collection:
        data:
          - { _id: 1, x: 11 }
          - { _id: 2, x: 22 }
          - { _id: 3, x: 33 }

  - description: "InsertOne succeeds after NotPrimaryOrSecondary"
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands: ["insert"]
        errorCode: 13436
        errorLabels: ["RetryableWriteError"]
        closeConnection: false
    operation:
      name: "insertOne"
      arguments:
        document: { _id: 3, x: 33 }
    outcome:
      result:
        insertedId: 3
      collection:
        data:
          - { _id: 1, x: 11 }
          - { _id: 2, x: 22 }
          - { _id: 3, x: 33 }

  - description: "InsertOne succeeds after NotPrimaryNoSecondaryOk"
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands: ["insert"]
        errorCode: 13435
        errorLabels: ["RetryableWriteError"]
        closeConnection: false
    operation:
      name: "insertOne"
      arguments:
        document: { _id: 3, x: 33 }
    outcome:
      result:
        insertedId: 3
      collection:
        data:
          - { _id: 1, x: 11 }
          - { _id: 2, x: 22 }
          - { _id: 3, x: 33 }

  - description: "InsertOne succeeds after InterruptedDueToReplStateChange"
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands: ["insert"]
        errorCode: 11602
        errorLabels: ["RetryableWriteError"]
        closeConnection: false
    operation:
      name: "insertOne"
      arguments:
        document: { _id: 3, x: 33 }
    outcome:
      result:
        insertedId: 3
      collection:
        data:
          - { _id: 1, x: 11 }
          - { _id: 2, x: 22 }
          - { _id: 3, x: 33 }

  - description: "InsertOne succeeds after InterruptedAtShutdown"
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands: ["insert"]
        errorCode: 11600
        errorLabels: ["RetryableWriteError"]
        closeConnection: false
    operation:
      name: "insertOne"
      arguments:
        document: { _id: 3, x: 33 }
    outcome:
      result:
        insertedId: 3
      collection:
        data:
          - { _id: 1, x: 11 }
          - { _id: 2, x: 22 }
          - { _id: 3, x: 33 }

  - description: "InsertOne succeeds after PrimarySteppedDown"
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands: ["insert"]
        errorCode: 189
        errorLabels: ["RetryableWriteError"]
        closeConnection: false
    operation:
      name: "insertOne"
      arguments:
        document: { _id: 3, x: 33 }
    outcome:
      result:
        insertedId: 3
      collection:
        data:
          - { _id: 1, x: 11 }
          - { _id: 2, x: 22 }
          - { _id: 3, x: 33 }

  - description: "InsertOne succeeds after ShutdownInProgress"
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands: ["insert"]
        errorCode: 91
        errorLabels: ["RetryableWriteError"]
        closeConnection: false
    operation:
      name: "insertOne"
      arguments:
        document: { _id: 3, x: 33 }
    outcome:
      result:
        insertedId: 3
      collection:
        data:
          - { _id: 1, x: 11 }
          - { _id: 2, x: 22 }
          - { _id: 3, x: 33 }

  - description: "InsertOne succeeds after HostNotFound"
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands: ["insert"]
        errorCode: 7
        errorLabels: ["RetryableWriteError"]
        closeConnection: false
    operation:
      name: "insertOne"
      arguments:
        document: { _id: 3, x: 33 }
    outcome:
      result:
        insertedId: 3
      collection:
        data:
          - { _id: 1, x: 11 }
          - { _id: 2, x: 22 }
          - { _id: 3, x: 33 }
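
# Every test in this file arms failCommand once per run; a runner typically
# disarms it afterwards. A sketch of turning the fail point off in Ruby
# (the client handle is illustrative):
#
#   client.use(:admin).database.command(
#     configureFailPoint: 'failCommand',
#     mode: 'off'
#   )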
["RetryableWriteError"] closeConnection: false operation: name: "insertOne" arguments: document: { _id: 3, x: 33 } outcome: result: insertedId: 3 collection: data: - { _id: 1, x: 11 } - { _id: 2, x: 22 } - { _id: 3, x: 33 } - description: "InsertOne succeeds after SocketException" failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["insert"] errorCode: 9001 errorLabels: ["RetryableWriteError"] closeConnection: false operation: name: "insertOne" arguments: document: { _id: 3, x: 33 } outcome: result: insertedId: 3 collection: data: - { _id: 1, x: 11 } - { _id: 2, x: 22 } - { _id: 3, x: 33 } - description: "InsertOne succeeds after NetworkTimeout" failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["insert"] errorCode: 89 errorLabels: ["RetryableWriteError"] closeConnection: false operation: name: "insertOne" arguments: document: { _id: 3, x: 33 } outcome: result: insertedId: 3 collection: data: - { _id: 1, x: 11 } - { _id: 2, x: 22 } - { _id: 3, x: 33 } - description: "InsertOne succeeds after ExceededTimeLimit" failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["insert"] errorCode: 262 errorLabels: ["RetryableWriteError"] closeConnection: false operation: name: "insertOne" arguments: document: { _id: 3, x: 33 } outcome: result: insertedId: 3 collection: data: - { _id: 1, x: 11 } - { _id: 2, x: 22 } - { _id: 3, x: 33 } - description: "InsertOne fails after Interrupted" failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["insert"] errorCode: 11601 closeConnection: false operation: name: "insertOne" arguments: document: { _id: 3, x: 33 } outcome: error: true result: errorLabelsOmit: ["RetryableWriteError"] collection: data: - { _id: 1, x: 11 } - { _id: 2, x: 22 } - description: "InsertOne succeeds after WriteConcernError InterruptedAtShutdown" failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["insert"] errorLabels: ["RetryableWriteError"] writeConcernError: code: 11600 errmsg: Replication is being shut down operation: name: "insertOne" arguments: document: { _id: 3, x: 33 } outcome: result: insertedId: 3 collection: data: - { _id: 1, x: 11 } - { _id: 2, x: 22 } - { _id: 3, x: 33 } - description: "InsertOne succeeds after WriteConcernError InterruptedDueToReplStateChange" failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["insert"] errorLabels: ["RetryableWriteError"] writeConcernError: code: 11602 errmsg: Replication is being shut down operation: name: "insertOne" arguments: document: { _id: 3, x: 33 } outcome: result: insertedId: 3 collection: data: - { _id: 1, x: 11 } - { _id: 2, x: 22 } - { _id: 3, x: 33 } - description: "InsertOne succeeds after WriteConcernError PrimarySteppedDown" failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["insert"] errorLabels: ["RetryableWriteError"] writeConcernError: code: 189 errmsg: Replication is being shut down operation: name: "insertOne" arguments: document: { _id: 3, x: 33 } outcome: result: insertedId: 3 collection: data: - { _id: 1, x: 11 } - { _id: 2, x: 22 } - { _id: 3, x: 33 } - description: "InsertOne succeeds after WriteConcernError ShutdownInProgress" failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["insert"] errorLabels: ["RetryableWriteError"] writeConcernError: code: 91 errmsg: Replication is being shut down operation: name: "insertOne" arguments: document: { _id: 3, x: 33 } outcome: 
  - description: "InsertOne fails after multiple retryable writeConcernErrors"
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 2 }
      data:
        failCommands: ["insert"]
        errorLabels: ["RetryableWriteError"]
        writeConcernError:
          code: 91
          errmsg: Replication is being shut down
    operation:
      name: "insertOne"
      arguments:
        document: { _id: 3, x: 33 }
    outcome:
      error: true
      result:
        errorLabelsContain: ["RetryableWriteError"]
      collection:
        data:
          - { _id: 1, x: 11 }
          - { _id: 2, x: 22 }
          - { _id: 3, x: 33 } # The write was still applied.

  - description: "InsertOne fails after WriteConcernError Interrupted"
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands: ["insert"]
        writeConcernError:
          code: 11601
          errmsg: operation was interrupted
    operation:
      name: "insertOne"
      arguments:
        document: { _id: 3, x: 33 }
    outcome:
      error: true
      result:
        errorLabelsOmit: ["RetryableWriteError"]
      collection:
        data:
          - { _id: 1, x: 11 }
          - { _id: 2, x: 22 }
          - { _id: 3, x: 33 } # The write was still applied.

  - description: "InsertOne fails after WriteConcernError WriteConcernFailed"
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands: ["insert"]
        writeConcernError:
          code: 64
          codeName: WriteConcernFailed
          errmsg: waiting for replication timed out
          errInfo: { wtimeout: True }
    operation:
      name: "insertOne"
      arguments:
        document: { _id: 3, x: 33 }
    outcome:
      error: true
      result:
        errorLabelsOmit: ["RetryableWriteError"]
      collection:
        data:
          - { _id: 1, x: 11 }
          - { _id: 2, x: 22 }
          - { _id: 3, x: 33 } # The write was still applied.

  - description: "InsertOne fails with a RetryableWriteError label after two connection failures"
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 2 }
      data:
        failCommands: ["insert"]
        closeConnection: true
    operation:
      name: "insertOne"
      arguments:
        document: { _id: 3, x: 33 }
    outcome:
      error: true
      result:
        errorLabelsContain: ["RetryableWriteError"]
      collection:
        data:
          - { _id: 1, x: 11 }
          - { _id: 2, x: 22 }

mongo-ruby-driver-2.21.3/spec/spec_tests/data/retryable_writes/legacy/insertOne.yml

runOn:
  - minServerVersion: "3.6"
    topology: ["replicaset"]

data:
  - { _id: 1, x: 11 }
  - { _id: 2, x: 22 }

tests:
  - description: "InsertOne is committed on first attempt"
    failPoint:
      configureFailPoint: onPrimaryTransactionalWrite
      mode: { times: 1 }
    operation:
      name: "insertOne"
      arguments:
        document: { _id: 3, x: 33 }
    outcome:
      result:
        insertedId: 3
      collection:
        data:
          - { _id: 1, x: 11 }
          - { _id: 2, x: 22 }
          - { _id: 3, x: 33 }

  - description: "InsertOne is not committed on first attempt"
    failPoint:
      configureFailPoint: onPrimaryTransactionalWrite
      mode: { times: 1 }
      data: { failBeforeCommitExceptionCode: 1 }
    operation:
      name: "insertOne"
      arguments:
        document: { _id: 3, x: 33 }
    outcome:
      result:
        insertedId: 3
      collection:
        data:
          - { _id: 1, x: 11 }
          - { _id: 2, x: 22 }
          - { _id: 3, x: 33 }

  - description: "InsertOne is never committed"
    failPoint:
      configureFailPoint: onPrimaryTransactionalWrite
      mode: { times: 2 }
      data: { failBeforeCommitExceptionCode: 1 }
    operation:
      name: "insertOne"
      arguments:
        document: { _id: 3, x: 33 }
    outcome:
      error: true
      collection:
        data:
          - { _id: 1, x: 11 }
          - { _id: 2, x: 22 }
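# Illustration (not part of the spec file above): the 3.6-era tests rely on the
# onPrimaryTransactionalWrite fail point, which aborts the write before it
# commits and so forces the driver down the retry path. A rough, illustrative
# Ruby driver invocation (assuming `client` is connected to a test replica set):
#
#   client.use('admin').database.command(
#     configureFailPoint: 'onPrimaryTransactionalWrite',
#     mode: { times: 1 },
#     data: { failBeforeCommitExceptionCode: 1 }
#   )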
mongo-ruby-driver-2.21.3/spec/spec_tests/data/retryable_writes/legacy/replaceOne-errorLabels.yml

runOn:
  - minServerVersion: "4.3.1"
    topology: ["replicaset", "sharded"]

data:
  - { _id: 1, x: 11 }
  - { _id: 2, x: 22 }

tests:
  - description: "ReplaceOne succeeds with RetryableWriteError from server"
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands: ["update"]
        errorCode: 112 # WriteConflict, not a retryable error code
        errorLabels: ["RetryableWriteError"] # Override server behavior: send RetryableWriteError label with non-retryable error code
    operation:
      name: "replaceOne"
      arguments:
        filter: { _id: 1 }
        replacement: { _id: 1, x: 111 }
    outcome: # Driver retries operation and it succeeds
      result:
        matchedCount: 1
        modifiedCount: 1
        upsertedCount: 0
      collection:
        data:
          - { _id: 1, x: 111 }
          - { _id: 2, x: 22 }

  - description: "ReplaceOne fails if server does not return RetryableWriteError"
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands: ["update"]
        errorCode: 11600 # InterruptedAtShutdown, normally a retryable error code
        errorLabels: [] # Override server behavior: do not send RetryableWriteError label with retryable code
    operation:
      name: "replaceOne"
      arguments:
        filter: { _id: 1 }
        replacement: { _id: 1, x: 111 }
    outcome:
      error: true # Driver does not retry operation because there was no RetryableWriteError label on response
      result:
        errorLabelsOmit: ["RetryableWriteError"]
      collection:
        data:
          - { _id: 1, x: 11 }
          - { _id: 2, x: 22 }

mongo-ruby-driver-2.21.3/spec/spec_tests/data/retryable_writes/legacy/replaceOne-serverErrors.yml

runOn:
  - minServerVersion: "4.0"
    topology: ["replicaset"]
  - minServerVersion: "4.1.7"
    topology: ["sharded"]

data:
  - { _id: 1, x: 11 }
  - { _id: 2, x: 22 }

tests:
  - description: "ReplaceOne succeeds after PrimarySteppedDown"
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands: ["update"]
        errorCode: 189
        errorLabels: ["RetryableWriteError"]
    operation:
      name: "replaceOne"
      arguments:
        filter: { _id: 1 }
        replacement: { _id: 1, x: 111 }
    outcome:
      result:
        matchedCount: 1
        modifiedCount: 1
        upsertedCount: 0
      collection:
        data:
          - { _id: 1, x: 111 }
          - { _id: 2, x: 22 }

  - description: "ReplaceOne succeeds after WriteConcernError ShutdownInProgress"
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands: ["update"]
        errorLabels: ["RetryableWriteError"]
        writeConcernError:
          code: 91
          errmsg: Replication is being shut down
    operation:
      name: "replaceOne"
      arguments:
        filter: { _id: 1 }
        replacement: { _id: 1, x: 111 }
    outcome:
      result:
        matchedCount: 1
        modifiedCount: 1
        upsertedCount: 0
      collection:
        data:
          - { _id: 1, x: 111 }
          - { _id: 2, x: 22 }

  - description: "ReplaceOne fails with a RetryableWriteError label after two connection failures"
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 2 }
      data:
        failCommands: ["update"]
        closeConnection: true
    operation:
      name: "replaceOne"
      arguments:
        filter: { _id: 1 }
        replacement: { _id: 1, x: 111 }
    outcome:
      error: true
      result:
        errorLabelsContain: ["RetryableWriteError"]
      collection:
        data:
          - { _id: 1, x: 11 }
          - { _id: 2, x: 22 }

mongo-ruby-driver-2.21.3/spec/spec_tests/data/retryable_writes/legacy/replaceOne.yml

runOn:
  - minServerVersion: "3.6"
    topology: ["replicaset"]

data:
  - { _id: 1, x: 11 }
  - { _id: 2, x: 22 }

tests:
  - description: "ReplaceOne is committed on first attempt"
    failPoint:
      configureFailPoint: onPrimaryTransactionalWrite
      mode: { times: 1 }
    operation:
      name: "replaceOne"
      arguments:
        filter: { _id: 1 }
        replacement: { _id: 1, x: 111 }
    outcome:
      result:
        matchedCount: 1
        modifiedCount: 1
        upsertedCount: 0
      collection:
        data:
          - { _id: 1, x: 111 }
          - { _id: 2, x: 22 }
  - description: "ReplaceOne is not committed on first attempt"
    failPoint:
      configureFailPoint: onPrimaryTransactionalWrite
      mode: { times: 1 }
      data: { failBeforeCommitExceptionCode: 1 }
    operation:
      name: "replaceOne"
      arguments:
        filter: { _id: 1 }
        replacement: { _id: 1, x: 111 }
    outcome:
      result:
        matchedCount: 1
        modifiedCount: 1
        upsertedCount: 0
      collection:
        data:
          - { _id: 1, x: 111 }
          - { _id: 2, x: 22 }

  - description: "ReplaceOne is never committed"
    failPoint:
      configureFailPoint: onPrimaryTransactionalWrite
      mode: { times: 2 }
      data: { failBeforeCommitExceptionCode: 1 }
    operation:
      name: "replaceOne"
      arguments:
        filter: { _id: 1 }
        replacement: { _id: 1, x: 111 }
    outcome:
      error: true
      collection:
        data:
          - { _id: 1, x: 11 }
          - { _id: 2, x: 22 }

mongo-ruby-driver-2.21.3/spec/spec_tests/data/retryable_writes/legacy/updateMany.yml

runOn:
  - minServerVersion: "3.6"
    topology: ["replicaset", "sharded"]

data:
  - { _id: 1, x: 11 }
  - { _id: 2, x: 22 }

tests:
  - description: "UpdateMany ignores retryWrites"
    useMultipleMongoses: true
    operation:
      name: "updateMany"
      arguments:
        filter: { }
        update: { $inc: { x: 1 } }
    outcome:
      result:
        matchedCount: 2
        modifiedCount: 2
        upsertedCount: 0
      collection:
        data:
          - { _id: 1, x: 12 }
          - { _id: 2, x: 23 }

mongo-ruby-driver-2.21.3/spec/spec_tests/data/retryable_writes/legacy/updateOne-errorLabels.yml

runOn:
  - minServerVersion: "4.3.1"
    topology: ["replicaset", "sharded"]

data:
  - { _id: 1, x: 11 }
  - { _id: 2, x: 22 }

tests:
  - description: "UpdateOne succeeds with RetryableWriteError from server"
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands: ["update"]
        errorCode: 112 # WriteConflict, not a retryable error code
        errorLabels: ["RetryableWriteError"] # Override server behavior: send RetryableWriteError label with non-retryable error code
    operation:
      name: "updateOne"
      arguments:
        filter: { _id: 1 }
        update: { $inc: { x: 1 } }
    outcome: # Driver retries operation and it succeeds
      result:
        matchedCount: 1
        modifiedCount: 1
        upsertedCount: 0
      collection:
        data:
          - { _id: 1, x: 12 }
          - { _id: 2, x: 22 }

  - description: "UpdateOne fails if server does not return RetryableWriteError"
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands: ["update"]
        errorCode: 11600 # InterruptedAtShutdown, normally a retryable error code
        errorLabels: [] # Override server behavior: do not send RetryableWriteError label with retryable code
    operation:
      name: "updateOne"
      arguments:
        filter: { _id: 1 }
        update: { $inc: { x: 1 } }
    outcome:
      error: true # Driver does not retry operation because there was no RetryableWriteError label on response
      result:
        errorLabelsOmit: ["RetryableWriteError"]
      collection:
        data:
          - { _id: 1, x: 11 }
          - { _id: 2, x: 22 }

mongo-ruby-driver-2.21.3/spec/spec_tests/data/retryable_writes/legacy/updateOne-serverErrors.yml

runOn:
  - minServerVersion: "4.0"
    topology: ["replicaset"]
  - minServerVersion: "4.1.7"
    topology: ["sharded"]

data:
  - { _id: 1, x: 11 }
  - { _id: 2, x: 22 }

tests:
  - description: "UpdateOne succeeds after PrimarySteppedDown"
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands: ["update"]
        errorCode: 189
        errorLabels: ["RetryableWriteError"]
    operation:
      name: "updateOne"
      arguments:
        filter: { _id: 1 }
        update: { $inc: { x: 1 } }
    outcome:
      result:
        matchedCount: 1
        modifiedCount: 1
        upsertedCount: 0
      collection:
        data:
          - { _id: 1, x: 12 }
          - { _id: 2, x: 22 }
  - description: "UpdateOne succeeds after WriteConcernError ShutdownInProgress"
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands: ["update"]
        errorLabels: ["RetryableWriteError"]
        writeConcernError:
          code: 91
          errmsg: Replication is being shut down
    operation:
      name: "updateOne"
      arguments:
        filter: { _id: 1 }
        update: { $inc: { x: 1 } }
    outcome:
      result:
        matchedCount: 1
        modifiedCount: 1
        upsertedCount: 0
      collection:
        data:
          - { _id: 1, x: 12 }
          - { _id: 2, x: 22 }

  - description: "UpdateOne fails with a RetryableWriteError label after two connection failures"
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 2 }
      data:
        failCommands: ["update"]
        closeConnection: true
    operation:
      name: "updateOne"
      arguments:
        filter: { _id: 1 }
        update: { $inc: { x: 1 } }
    outcome:
      error: true
      result:
        errorLabelsContain: ["RetryableWriteError"]
      collection:
        data:
          - { _id: 1, x: 11 }
          - { _id: 2, x: 22 }

mongo-ruby-driver-2.21.3/spec/spec_tests/data/retryable_writes/legacy/updateOne.yml

runOn:
  - minServerVersion: "3.6"
    topology: ["replicaset"]

data:
  - { _id: 1, x: 11 }
  - { _id: 2, x: 22 }

tests:
  - description: "UpdateOne is committed on first attempt"
    failPoint:
      configureFailPoint: onPrimaryTransactionalWrite
      mode: { times: 1 }
    operation:
      name: "updateOne"
      arguments:
        filter: { _id: 1 }
        update: { $inc: { x: 1 } }
    outcome:
      result:
        matchedCount: 1
        modifiedCount: 1
        upsertedCount: 0
      collection:
        data:
          - { _id: 1, x: 12 }
          - { _id: 2, x: 22 }

  - description: "UpdateOne is not committed on first attempt"
    failPoint:
      configureFailPoint: onPrimaryTransactionalWrite
      mode: { times: 1 }
      data: { failBeforeCommitExceptionCode: 1 }
    operation:
      name: "updateOne"
      arguments:
        filter: { _id: 1 }
        update: { $inc: { x: 1 } }
    outcome:
      result:
        matchedCount: 1
        modifiedCount: 1
        upsertedCount: 0
      collection:
        data:
          - { _id: 1, x: 12 }
          - { _id: 2, x: 22 }

  - description: "UpdateOne is never committed"
    failPoint:
      configureFailPoint: onPrimaryTransactionalWrite
      mode: { times: 2 }
      data: { failBeforeCommitExceptionCode: 1 }
    operation:
      name: "updateOne"
      arguments:
        filter: { _id: 1 }
        update: { $inc: { x: 1 } }
    outcome:
      error: true
      collection:
        data:
          - { _id: 1, x: 11 }
          - { _id: 2, x: 22 }

  - description: "UpdateOne with upsert is committed on first attempt"
    failPoint:
      configureFailPoint: onPrimaryTransactionalWrite
      mode: { times: 1 }
    operation:
      name: "updateOne"
      arguments:
        filter: { _id: 3, x: 33 }
        update: { $inc: { x: 1 } }
        upsert: true
    outcome:
      result:
        matchedCount: 0
        modifiedCount: 0
        upsertedCount: 1
        upsertedId: 3
      collection:
        data:
          - { _id: 1, x: 11 }
          - { _id: 2, x: 22 }
          - { _id: 3, x: 34 }

  - description: "UpdateOne with upsert is not committed on first attempt"
    failPoint:
      configureFailPoint: onPrimaryTransactionalWrite
      mode: { times: 1 }
      data: { failBeforeCommitExceptionCode: 1 }
    operation:
      name: "updateOne"
      arguments:
        filter: { _id: 3, x: 33 }
        update: { $inc: { x: 1 } }
        upsert: true
    outcome:
      result:
        matchedCount: 0
        modifiedCount: 0
        upsertedCount: 1
        upsertedId: 3
      collection:
        data:
          - { _id: 1, x: 11 }
          - { _id: 2, x: 22 }
          - { _id: 3, x: 34 }

  - description: "UpdateOne with upsert is never committed"
    failPoint:
      configureFailPoint: onPrimaryTransactionalWrite
      mode: { times: 2 }
      data: { failBeforeCommitExceptionCode: 1 }
    operation:
      name: "updateOne"
      arguments:
        filter: { _id: 3, x: 33 }
        update: { $inc: { x: 1 } }
        upsert: true
    outcome:
      error: true
      collection:
        data:
          - { _id: 1, x: 11 }
          - { _id: 2, x: 22 }
mongo-ruby-driver-2.21.3/spec/spec_tests/data/retryable_writes/unified/

mongo-ruby-driver-2.21.3/spec/spec_tests/data/retryable_writes/unified/bulkWrite-serverErrors.yml

description: "retryable-writes bulkWrite serverErrors"

schemaVersion: "1.0"

runOnRequirements:
  - minServerVersion: "4.0"
    topologies: [ replicaset ]
  - minServerVersion: "4.1.7"
    topologies: [ sharded ]

createEntities:
  - client:
      id: &client0 client0
      useMultipleMongoses: false
      observeEvents: [ commandStartedEvent ]
  - database:
      id: &database0 database0
      client: *client0
      databaseName: &databaseName retryable-writes-tests
  - collection:
      id: &collection0 collection0
      database: *database0
      collectionName: &collectionName coll

initialData:
  - collectionName: *collectionName
    databaseName: *databaseName
    documents:
      - { _id: 1, x: 11 }
      - { _id: 2, x: 22 }

tests:
  - description: "BulkWrite succeeds after retryable writeConcernError in first batch"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *client0
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: [ insert ]
              errorLabels: [ RetryableWriteError ] # top-level error labels
              writeConcernError:
                code: 91 # ShutdownInProgress
                errmsg: "Replication is being shut down"
      - name: bulkWrite
        object: *collection0
        arguments:
          requests:
            - insertOne:
                document: { _id: 3, x: 33 }
            - deleteOne:
                filter: { _id: 2 }
        expectResult:
          deletedCount: 1
          insertedCount: 1
          matchedCount: 0
          modifiedCount: 0
          upsertedCount: 0
          insertedIds: { $$unsetOrMatches: { 0: 3 } }
          upsertedIds: { }
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                insert: *collectionName
                documents: [{ _id: 3, x: 33 }]
              commandName: insert
              databaseName: *databaseName
          - commandStartedEvent:
              command:
                insert: *collectionName
                documents: [{ _id: 3, x: 33 }]
              commandName: insert
              databaseName: *databaseName
          - commandStartedEvent:
              command:
                delete: *collectionName
                deletes:
                  - q: { _id: 2 }
                    limit: 1
              commandName: delete
              databaseName: *databaseName
    outcome:
      - collectionName: *collectionName
        databaseName: *databaseName
        documents:
          - { _id: 1, x: 11 }
          - { _id: 3, x: 33 } # The write was still applied
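# Illustration (not part of the spec file above): a minimal, hypothetical
# reproduction of the retryable-write flow these files exercise, using the
# Ruby driver against a local replica set (addresses and names are examples):
#
#   client = Mongo::Client.new(['127.0.0.1:27017'],
#                              replica_set: 'rs', retry_writes: true)
#   client.use('admin').database.command(
#     configureFailPoint: 'failCommand',
#     mode: { times: 1 },
#     data: { failCommands: ['insert'], errorCode: 91,
#             errorLabels: ['RetryableWriteError'] }
#   )
#   client[:coll].insert_one(_id: 3, x: 33) # first attempt fails, retry succeeds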
mongo-ruby-driver-2.21.3/spec/spec_tests/data/retryable_writes/unified/handshakeError.yml

# Tests in this file are generated from handshakeError.yml.template.

description: "retryable writes handshake failures"

schemaVersion: "1.3"

runOnRequirements:
  - minServerVersion: "4.2"
    topologies: [replicaset, sharded, load-balanced]
    auth: true

createEntities:
  - client:
      id: &client client
      useMultipleMongoses: false
      observeEvents:
        - connectionCheckOutStartedEvent
        - commandStartedEvent
        - commandSucceededEvent
        - commandFailedEvent
  - database:
      id: &database database
      client: *client
      databaseName: &databaseName retryable-writes-handshake-tests
  - collection:
      id: &collection collection
      database: *database
      collectionName: &collectionName coll

initialData:
  - collectionName: *collectionName
    databaseName: *databaseName
    documents:
      - { _id: 1, x: 11 }

tests:
  # Because setting a failPoint creates a connection in the connection pool, run
  # a ping operation that fails immediately after the failPoint operation in
  # order to discard the connection before running the actual operation to be
  # tested. The saslContinue command is used to avoid SDAM errors.
  #
  # Description of events:
  # - Failpoint operation.
  #   - Creates a connection in the connection pool that must be closed.
  # - Ping operation.
  #   - Triggers failpoint (first time).
  #   - Closes the connection made by the fail point operation.
  # - Test operation.
  #   - New connection is created.
  #   - Triggers failpoint (second time).
  #   - Tests whether operation successfully retries the handshake and succeeds.

  - description: "collection.insertOne succeeds after retryable handshake network error"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *client
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data:
              failCommands: [ping, saslContinue]
              closeConnection: true
      - name: runCommand
        object: *database
        arguments: { commandName: ping, command: { ping: 1 } }
        expectError: { isError: true }
      - name: insertOne
        object: *collection
        arguments:
          document: { _id: 2, x: 22 }
    expectEvents:
      - client: *client
        eventType: cmap
        events:
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
      - client: *client
        events:
          - commandStartedEvent:
              command: { ping: 1 }
              databaseName: *databaseName
          - commandFailedEvent:
              commandName: ping
          - commandStartedEvent:
              commandName: insert
          - commandSucceededEvent:
              commandName: insert

  - description: "collection.insertOne succeeds after retryable handshake server error (ShutdownInProgress)"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *client
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data:
              failCommands: [ping, saslContinue]
              closeConnection: true
      - name: runCommand
        object: *database
        arguments: { commandName: ping, command: { ping: 1 } }
        expectError: { isError: true }
      - name: insertOne
        object: *collection
        arguments:
          document: { _id: 2, x: 22 }
    expectEvents:
      - client: *client
        eventType: cmap
        events:
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
      - client: *client
        events:
          - commandStartedEvent:
              command: { ping: 1 }
              databaseName: *databaseName
          - commandFailedEvent:
              commandName: ping
          - commandStartedEvent:
              commandName: insert
          - commandSucceededEvent:
              commandName: insert

  - description: "collection.insertMany succeeds after retryable handshake network error"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *client
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data:
              failCommands: [ping, saslContinue]
              closeConnection: true
      - name: runCommand
        object: *database
        arguments: { commandName: ping, command: { ping: 1 } }
        expectError: { isError: true }
      - name: insertMany
        object: *collection
        arguments:
          documents:
            - { _id: 2, x: 22 }
    expectEvents:
      - client: *client
        eventType: cmap
        events:
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
      - client: *client
        events:
          - commandStartedEvent:
              command: { ping: 1 }
              databaseName: *databaseName
          - commandFailedEvent:
              commandName: ping
          - commandStartedEvent:
              commandName: insert
          - commandSucceededEvent:
              commandName: insert
  - description: "collection.insertMany succeeds after retryable handshake server error (ShutdownInProgress)"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *client
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data:
              failCommands: [ping, saslContinue]
              closeConnection: true
      - name: runCommand
        object: *database
        arguments: { commandName: ping, command: { ping: 1 } }
        expectError: { isError: true }
      - name: insertMany
        object: *collection
        arguments:
          documents:
            - { _id: 2, x: 22 }
    expectEvents:
      - client: *client
        eventType: cmap
        events:
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
      - client: *client
        events:
          - commandStartedEvent:
              command: { ping: 1 }
              databaseName: *databaseName
          - commandFailedEvent:
              commandName: ping
          - commandStartedEvent:
              commandName: insert
          - commandSucceededEvent:
              commandName: insert

  - description: "collection.deleteOne succeeds after retryable handshake network error"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *client
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data:
              failCommands: [ping, saslContinue]
              closeConnection: true
      - name: runCommand
        object: *database
        arguments: { commandName: ping, command: { ping: 1 } }
        expectError: { isError: true }
      - name: deleteOne
        object: *collection
        arguments:
          filter: {}
    expectEvents:
      - client: *client
        eventType: cmap
        events:
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
      - client: *client
        events:
          - commandStartedEvent:
              command: { ping: 1 }
              databaseName: *databaseName
          - commandFailedEvent:
              commandName: ping
          - commandStartedEvent:
              commandName: delete
          - commandSucceededEvent:
              commandName: delete

  - description: "collection.deleteOne succeeds after retryable handshake server error (ShutdownInProgress)"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *client
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data:
              failCommands: [ping, saslContinue]
              closeConnection: true
      - name: runCommand
        object: *database
        arguments: { commandName: ping, command: { ping: 1 } }
        expectError: { isError: true }
      - name: deleteOne
        object: *collection
        arguments:
          filter: {}
    expectEvents:
      - client: *client
        eventType: cmap
        events:
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
      - client: *client
        events:
          - commandStartedEvent:
              command: { ping: 1 }
              databaseName: *databaseName
          - commandFailedEvent:
              commandName: ping
          - commandStartedEvent:
              commandName: delete
          - commandSucceededEvent:
              commandName: delete

  - description: "collection.replaceOne succeeds after retryable handshake network error"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *client
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data:
              failCommands: [ping, saslContinue]
              closeConnection: true
      - name: runCommand
        object: *database
        arguments: { commandName: ping, command: { ping: 1 } }
        expectError: { isError: true }
      - name: replaceOne
        object: *collection
        arguments:
          filter: {}
          replacement: { x: 22 }
    expectEvents:
      - client: *client
        eventType: cmap
        events:
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
      - client: *client
        events:
          - commandStartedEvent:
              command: { ping: 1 }
              databaseName: *databaseName
          - commandFailedEvent:
              commandName: ping
          - commandStartedEvent:
              commandName: update
          - commandSucceededEvent:
              commandName: update
  - description: "collection.replaceOne succeeds after retryable handshake server error (ShutdownInProgress)"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *client
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data:
              failCommands: [ping, saslContinue]
              closeConnection: true
      - name: runCommand
        object: *database
        arguments: { commandName: ping, command: { ping: 1 } }
        expectError: { isError: true }
      - name: replaceOne
        object: *collection
        arguments:
          filter: {}
          replacement: { x: 22 }
    expectEvents:
      - client: *client
        eventType: cmap
        events:
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
      - client: *client
        events:
          - commandStartedEvent:
              command: { ping: 1 }
              databaseName: *databaseName
          - commandFailedEvent:
              commandName: ping
          - commandStartedEvent:
              commandName: update
          - commandSucceededEvent:
              commandName: update

  - description: "collection.updateOne succeeds after retryable handshake network error"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *client
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data:
              failCommands: [ping, saslContinue]
              closeConnection: true
      - name: runCommand
        object: *database
        arguments: { commandName: ping, command: { ping: 1 } }
        expectError: { isError: true }
      - name: updateOne
        object: *collection
        arguments:
          filter: {}
          update: { $set: { x: 22 } }
    expectEvents:
      - client: *client
        eventType: cmap
        events:
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
      - client: *client
        events:
          - commandStartedEvent:
              command: { ping: 1 }
              databaseName: *databaseName
          - commandFailedEvent:
              commandName: ping
          - commandStartedEvent:
              commandName: update
          - commandSucceededEvent:
              commandName: update

  - description: "collection.updateOne succeeds after retryable handshake server error (ShutdownInProgress)"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *client
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data:
              failCommands: [ping, saslContinue]
              closeConnection: true
      - name: runCommand
        object: *database
        arguments: { commandName: ping, command: { ping: 1 } }
        expectError: { isError: true }
      - name: updateOne
        object: *collection
        arguments:
          filter: {}
          update: { $set: { x: 22 } }
    expectEvents:
      - client: *client
        eventType: cmap
        events:
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
      - client: *client
        events:
          - commandStartedEvent:
              command: { ping: 1 }
              databaseName: *databaseName
          - commandFailedEvent:
              commandName: ping
          - commandStartedEvent:
              commandName: update
          - commandSucceededEvent:
              commandName: update

  - description: "collection.findOneAndDelete succeeds after retryable handshake network error"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *client
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data:
              failCommands: [ping, saslContinue]
              closeConnection: true
      - name: runCommand
        object: *database
        arguments: { commandName: ping, command: { ping: 1 } }
        expectError: { isError: true }
      - name: findOneAndDelete
        object: *collection
        arguments:
          filter: {}
    expectEvents:
      - client: *client
        eventType: cmap
        events:
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
      - client: *client
        events:
          - commandStartedEvent:
              command: { ping: 1 }
              databaseName: *databaseName
          - commandFailedEvent:
              commandName: ping
          - commandStartedEvent:
              commandName: findAndModify
          - commandSucceededEvent:
              commandName: findAndModify
  - description: "collection.findOneAndDelete succeeds after retryable handshake server error (ShutdownInProgress)"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *client
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data:
              failCommands: [ping, saslContinue]
              closeConnection: true
      - name: runCommand
        object: *database
        arguments: { commandName: ping, command: { ping: 1 } }
        expectError: { isError: true }
      - name: findOneAndDelete
        object: *collection
        arguments:
          filter: {}
    expectEvents:
      - client: *client
        eventType: cmap
        events:
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
      - client: *client
        events:
          - commandStartedEvent:
              command: { ping: 1 }
              databaseName: *databaseName
          - commandFailedEvent:
              commandName: ping
          - commandStartedEvent:
              commandName: findAndModify
          - commandSucceededEvent:
              commandName: findAndModify

  - description: "collection.findOneAndReplace succeeds after retryable handshake network error"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *client
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data:
              failCommands: [ping, saslContinue]
              closeConnection: true
      - name: runCommand
        object: *database
        arguments: { commandName: ping, command: { ping: 1 } }
        expectError: { isError: true }
      - name: findOneAndReplace
        object: *collection
        arguments:
          filter: {}
          replacement: { x: 22 }
    expectEvents:
      - client: *client
        eventType: cmap
        events:
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
      - client: *client
        events:
          - commandStartedEvent:
              command: { ping: 1 }
              databaseName: *databaseName
          - commandFailedEvent:
              commandName: ping
          - commandStartedEvent:
              commandName: findAndModify
          - commandSucceededEvent:
              commandName: findAndModify

  - description: "collection.findOneAndReplace succeeds after retryable handshake server error (ShutdownInProgress)"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *client
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data:
              failCommands: [ping, saslContinue]
              closeConnection: true
      - name: runCommand
        object: *database
        arguments: { commandName: ping, command: { ping: 1 } }
        expectError: { isError: true }
      - name: findOneAndReplace
        object: *collection
        arguments:
          filter: {}
          replacement: { x: 22 }
    expectEvents:
      - client: *client
        eventType: cmap
        events:
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
      - client: *client
        events:
          - commandStartedEvent:
              command: { ping: 1 }
              databaseName: *databaseName
          - commandFailedEvent:
              commandName: ping
          - commandStartedEvent:
              commandName: findAndModify
          - commandSucceededEvent:
              commandName: findAndModify

  - description: "collection.findOneAndUpdate succeeds after retryable handshake network error"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *client
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data:
              failCommands: [ping, saslContinue]
              closeConnection: true
      - name: runCommand
        object: *database
        arguments: { commandName: ping, command: { ping: 1 } }
        expectError: { isError: true }
      - name: findOneAndUpdate
        object: *collection
        arguments:
          filter: {}
          update: { $set: { x: 22 } }
    expectEvents:
      - client: *client
        eventType: cmap
        events:
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
      - client: *client
        events:
          - commandStartedEvent:
              command: { ping: 1 }
              databaseName: *databaseName
          - commandFailedEvent:
              commandName: ping
          - commandStartedEvent:
              commandName: findAndModify
          - commandSucceededEvent:
              commandName: findAndModify
  - description: "collection.findOneAndUpdate succeeds after retryable handshake server error (ShutdownInProgress)"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *client
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data:
              failCommands: [ping, saslContinue]
              closeConnection: true
      - name: runCommand
        object: *database
        arguments: { commandName: ping, command: { ping: 1 } }
        expectError: { isError: true }
      - name: findOneAndUpdate
        object: *collection
        arguments:
          filter: {}
          update: { $set: { x: 22 } }
    expectEvents:
      - client: *client
        eventType: cmap
        events:
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
      - client: *client
        events:
          - commandStartedEvent:
              command: { ping: 1 }
              databaseName: *databaseName
          - commandFailedEvent:
              commandName: ping
          - commandStartedEvent:
              commandName: findAndModify
          - commandSucceededEvent:
              commandName: findAndModify

  - description: "collection.bulkWrite succeeds after retryable handshake network error"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *client
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data:
              failCommands: [ping, saslContinue]
              closeConnection: true
      - name: runCommand
        object: *database
        arguments: { commandName: ping, command: { ping: 1 } }
        expectError: { isError: true }
      - name: bulkWrite
        object: *collection
        arguments:
          requests:
            - insertOne:
                document: { _id: 2, x: 22 }
    expectEvents:
      - client: *client
        eventType: cmap
        events:
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
      - client: *client
        events:
          - commandStartedEvent:
              command: { ping: 1 }
              databaseName: *databaseName
          - commandFailedEvent:
              commandName: ping
          - commandStartedEvent:
              commandName: insert
          - commandSucceededEvent:
              commandName: insert

  - description: "collection.bulkWrite succeeds after retryable handshake server error (ShutdownInProgress)"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *client
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data:
              failCommands: [ping, saslContinue]
              closeConnection: true
      - name: runCommand
        object: *database
        arguments: { commandName: ping, command: { ping: 1 } }
        expectError: { isError: true }
      - name: bulkWrite
        object: *collection
        arguments:
          requests:
            - insertOne:
                document: { _id: 2, x: 22 }
    expectEvents:
      - client: *client
        eventType: cmap
        events:
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
          - { connectionCheckOutStartedEvent: {} }
      - client: *client
        events:
          - commandStartedEvent:
              command: { ping: 1 }
              databaseName: *databaseName
          - commandFailedEvent:
              commandName: ping
          - commandStartedEvent:
              commandName: insert
          - commandSucceededEvent:
              commandName: insert
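# Illustration (not part of the spec file above): the ping-then-operate pattern
# described in the comments of handshakeError.yml looks roughly like this in
# Ruby driver terms (hypothetical sketch, not test-runner code):
#
#   begin
#     database.command(ping: 1)  # consumes the first failure and discards the
#   rescue Mongo::Error          # connection created by the fail point setup
#   end
#   collection.insert_one(_id: 2, x: 22)  # handshake fails once, then retries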
mongo-ruby-driver-2.21.3/spec/spec_tests/data/retryable_writes/unified/insertOne-noWritesPerformedError.yml

description: "retryable-writes insertOne noWritesPerformedErrors"

schemaVersion: "1.0"

runOnRequirements:
  - minServerVersion: "6.0"
    topologies: [ replicaset ]

createEntities:
  - client:
      id: &client0 client0
      useMultipleMongoses: false
      observeEvents: [ commandFailedEvent ]
  - database:
      id: &database0 database0
      client: *client0
      databaseName: &databaseName retryable-writes-tests
  - collection:
      id: &collection0 collection0
      database: *database0
      collectionName: &collectionName no-writes-performed-collection

tests:
  - description: "InsertOne fails after NoWritesPerformed error"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *client0
          failPoint:
            configureFailPoint: failCommand
            mode:
              times: 2
            data:
              failCommands:
                - insert
              errorCode: 64
              errorLabels:
                - NoWritesPerformed
                - RetryableWriteError
      - name: insertOne
        object: *collection0
        arguments:
          document:
            x: 1
        expectError:
          errorCode: 64
          errorLabelsContain:
            - NoWritesPerformed
            - RetryableWriteError
    outcome:
      - collectionName: *collectionName
        databaseName: *databaseName
        documents: []

mongo-ruby-driver-2.21.3/spec/spec_tests/data/retryable_writes/unified/insertOne-serverErrors.yml

description: "retryable-writes insertOne serverErrors"

schemaVersion: "1.0"

runOnRequirements:
  - minServerVersion: "4.0"
    topologies: [ replicaset ]
  - minServerVersion: "4.1.7"
    topologies: [ sharded ]

createEntities:
  - client:
      id: &client0 client0
      useMultipleMongoses: false
      observeEvents: [ commandStartedEvent ]
  - database:
      id: &database0 database0
      client: *client0
      databaseName: &databaseName retryable-writes-tests
  - collection:
      id: &collection0 collection0
      database: *database0
      collectionName: &collectionName coll

initialData:
  - collectionName: *collectionName
    databaseName: *databaseName
    documents:
      - { _id: 1, x: 11 }
      - { _id: 2, x: 22 }

tests:
  - description: "InsertOne succeeds after retryable writeConcernError"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *client0
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: [ insert ]
              errorLabels: [ RetryableWriteError ] # top-level error labels
              writeConcernError:
                code: 91 # ShutdownInProgress
                errmsg: "Replication is being shut down"
      - name: insertOne
        object: *collection0
        arguments:
          document: { _id: 3, x: 33 }
        expectResult:
          $$unsetOrMatches: { insertedId: { $$unsetOrMatches: 3 } }
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                insert: *collectionName
                documents: [{ _id: 3, x: 33 }]
              commandName: insert
              databaseName: *databaseName
          - commandStartedEvent:
              command:
                insert: *collectionName
                documents: [{ _id: 3, x: 33 }]
              commandName: insert
              databaseName: *databaseName
    outcome:
      - collectionName: *collectionName
        databaseName: *databaseName
        documents:
          - { _id: 1, x: 11 }
          - { _id: 2, x: 22 }
          - { _id: 3, x: 33 } # The write was still applied

mongo-ruby-driver-2.21.3/spec/spec_tests/data/run_command_unified/
id: &db db client: *client databaseName: *db - collection: id: &collection collection database: *db collectionName: *collection - database: id: &dbWithRC dbWithRC client: *client databaseName: *dbWithRC databaseOptions: readConcern: { level: 'local' } - database: id: &dbWithWC dbWithWC client: *client databaseName: *dbWithWC databaseOptions: writeConcern: { w: 0 } - session: id: &session session client: *client # Stable API test - client: id: &clientWithStableApi clientWithStableApi observeEvents: [commandStartedEvent] serverApi: version: "1" strict: true - database: id: &dbWithStableApi dbWithStableApi client: *clientWithStableApi databaseName: *dbWithStableApi initialData: - collectionName: *collection databaseName: *db documents: [] tests: - description: always attaches $db and implicit lsid to given command and omits default readPreference operations: - name: runCommand object: *db arguments: commandName: ping command: { ping: 1 } expectResult: { ok: 1 } expectEvents: - client: *client events: - commandStartedEvent: command: ping: 1 $db: *db lsid: { $$exists: true } $readPreference: { $$exists: false } commandName: ping - description: always gossips the $clusterTime on the sent command runOnRequirements: # Only replicasets and sharded clusters have a $clusterTime - topologies: [ replicaset, sharded ] operations: # We have to run one command to obtain a clusterTime to gossip - name: runCommand object: *db arguments: commandName: ping command: { ping: 1 } expectResult: { ok: 1 } - name: runCommand object: *db arguments: commandName: ping command: { ping: 1 } expectResult: { ok: 1 } expectEvents: - client: *client events: - commandStartedEvent: commandName: ping # Only check the shape of the second ping which should have the $clusterTime received from the first operation - commandStartedEvent: command: ping: 1 $clusterTime: { $$exists: true } commandName: ping - description: attaches the provided session lsid to given command operations: - name: runCommand object: *db arguments: commandName: ping command: { ping: 1 } session: *session expectResult: { ok: 1 } expectEvents: - client: *client events: - commandStartedEvent: command: ping: 1 lsid: { $$sessionLsid: *session } $db: *db commandName: ping - description: attaches the provided $readPreference to given command runOnRequirements: # Exclude single topology, which is most likely a standalone server - topologies: [ replicaset, load-balanced, sharded ] operations: - name: runCommand object: *db arguments: commandName: ping command: { ping: 1 } readPreference: &readPreference { mode: 'nearest' } expectResult: { ok: 1 } expectEvents: - client: *client events: - commandStartedEvent: command: ping: 1 $readPreference: *readPreference $db: *db commandName: ping - description: does not attach $readPreference to given command on standalone runOnRequirements: # This test assumes that the single topology contains a standalone server; # however, it is possible for a single topology to contain a direct # connection to another server type. 
  - description: does not attach $readPreference to given command on standalone
    runOnRequirements:
      # This test assumes that the single topology contains a standalone server;
      # however, it is possible for a single topology to contain a direct
      # connection to another server type.
      # See: https://github.com/mongodb/specifications/blob/master/source/server-selection/server-selection.md#topology-type-single
      - topologies: [ single ]
    operations:
      - name: runCommand
        object: *db
        arguments:
          commandName: ping
          command: { ping: 1 }
          readPreference: { mode: 'nearest' }
        expectResult: { ok: 1 }
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              command:
                ping: 1
                $readPreference: { $$exists: false }
                $db: *db
              commandName: ping

  - description: does not attach primary $readPreference to given command
    operations:
      - name: runCommand
        object: *db
        arguments:
          commandName: ping
          command: { ping: 1 }
          readPreference: { mode: 'primary' }
        expectResult: { ok: 1 }
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              command:
                ping: 1
                $readPreference: { $$exists: false }
                $db: *db
              commandName: ping

  - description: does not inherit readConcern specified at the db level
    operations:
      - name: runCommand
        object: *dbWithRC
        # Test with a command that supports a readConcern option.
        # expectResult is intentionally omitted because some drivers
        # may automatically convert command responses into cursors.
        arguments:
          commandName: aggregate
          command: { aggregate: *collection, pipeline: [], cursor: {} }
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              command:
                aggregate: *collection
                readConcern: { $$exists: false }
                $db: *dbWithRC
              commandName: aggregate

  - description: does not inherit writeConcern specified at the db level
    operations:
      - name: runCommand
        object: *dbWithWC
        arguments:
          commandName: insert
          command:
            insert: *collection
            documents: [ { foo: 'bar' } ]
            ordered: true
        expectResult: { ok: 1 }
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              command:
                insert: *collection
                writeConcern: { $$exists: false }
                $db: *dbWithWC
              commandName: insert

  - description: does not retry retryable errors on given command
    runOnRequirements:
      - minServerVersion: "4.2"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *client
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: [ping]
              closeConnection: true
      - name: runCommand
        object: *db
        arguments:
          commandName: ping
          command: { ping: 1 }
        expectError:
          isClientError: true

  - description: attaches transaction fields to given command
    runOnRequirements:
      - minServerVersion: "4.0"
        topologies: [ replicaset ]
      - minServerVersion: "4.2"
        topologies: [ sharded, load-balanced ]
    operations:
      - name: withTransaction
        object: *session
        arguments:
          callback:
            - name: runCommand
              object: *db
              arguments:
                session: *session
                commandName: insert
                command:
                  insert: *collection
                  documents: [ { foo: 'transaction' } ]
                  ordered: true
              expectResult: { $$unsetOrMatches: { insertedId: { $$unsetOrMatches: 1 } } }
    expectEvents:
      - client: *client
        events:
          - commandStartedEvent:
              command:
                insert: *collection
                documents: [ { foo: 'transaction' } ]
                ordered: true
                lsid: { $$sessionLsid: *session }
                txnNumber: 1
                startTransaction: true
                autocommit: false
                # omitted fields
                readConcern: { $$exists: false }
                writeConcern: { $$exists: false }
              commandName: insert
              databaseName: *db
          - commandStartedEvent:
              command:
                commitTransaction: 1
                lsid: { $$sessionLsid: *session }
                txnNumber: 1
                autocommit: false
                # omitted fields
                writeConcern: { $$exists: false }
                readConcern: { $$exists: false }
              commandName: commitTransaction
              databaseName: admin

  - description: attaches apiVersion fields to given command when stableApi is configured on the client
    runOnRequirements:
      - minServerVersion: "5.0"
    operations:
      - name: runCommand
        object: *dbWithStableApi
        arguments:
          commandName: ping
          command:
            ping: 1
        expectResult: { ok: 1 }
    expectEvents:
      - client: *clientWithStableApi
        events:
          - commandStartedEvent:
              command:
                ping: 1
                $db: *dbWithStableApi
                apiVersion: "1"
                apiStrict: true
                apiDeprecationErrors: { $$unsetOrMatches: false }
              commandName: ping
mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/

mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors/

mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors/error_handling_handshake.yml

description: Network timeouts before and after the handshake completes

uri: mongodb://a/?replicaSet=rs

phases:
  - description: Primary A is discovered
    responses:
      - - a:27017
        - ok: 1
          helloOk: true
          isWritablePrimary: true
          hosts:
            - a:27017
          setName: rs
          minWireVersion: 0
          maxWireVersion: 9
          topologyVersion: &topologyVersion_1_1
            processId:
              "$oid": '000000000000000000000001'
            counter:
              "$numberLong": '1'
    outcome: &outcome
      servers:
        a:27017:
          type: RSPrimary
          setName: rs
          topologyVersion: *topologyVersion_1_1
          pool:
            generation: 0
      topologyType: ReplicaSetWithPrimary
      logicalSessionTimeoutMinutes: null
      setName: rs
  - description: Ignore network timeout application error (afterHandshakeCompletes)
    applicationErrors:
      - address: a:27017
        when: afterHandshakeCompletes
        maxWireVersion: 9
        type: timeout
    outcome: *outcome
  - description: Mark server unknown on network timeout application error (beforeHandshakeCompletes)
    applicationErrors:
      - address: a:27017
        when: beforeHandshakeCompletes
        maxWireVersion: 9
        type: timeout
    outcome:
      servers:
        a:27017:
          type: Unknown
          topologyVersion: null
          pool:
            generation: 1
      topologyType: ReplicaSetNoPrimary
      logicalSessionTimeoutMinutes: null
      setName: rs

mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors/non-stale-network-error.yml

description: Non-stale network error

uri: mongodb://a/?replicaSet=rs

phases:
  - description: Primary A is discovered
    responses:
      - - a:27017
        - ok: 1
          helloOk: true
          isWritablePrimary: true
          hosts:
            - a:27017
          setName: rs
          minWireVersion: 0
          maxWireVersion: 9
          topologyVersion: &topologyVersion_1_1
            processId:
              "$oid": '000000000000000000000001'
            counter:
              "$numberLong": '1'
    outcome: &outcome
      servers:
        a:27017:
          type: RSPrimary
          setName: rs
          topologyVersion: *topologyVersion_1_1
          pool:
            generation: 0
      topologyType: ReplicaSetWithPrimary
      logicalSessionTimeoutMinutes: null
      setName: rs
  - description: Non-stale network error marks server Unknown
    applicationErrors:
      - address: a:27017
        when: afterHandshakeCompletes
        maxWireVersion: 9
        type: network
    outcome:
      servers:
        a:27017:
          type: Unknown
          topologyVersion: null
          pool:
            generation: 1
      topologyType: ReplicaSetNoPrimary
      logicalSessionTimeoutMinutes: null
      setName: rs
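# Illustration (not part of the spec files around this note): in the SDAM error
# tests that follow, whether an application error is applied hinges on a
# topologyVersion comparison. Roughly, and as a hedged paraphrase of the SDAM
# spec rather than driver code: an error is discarded as stale unless its
# topologyVersion is missing, carries a different processId, or has a greater
# counter than the cached one -- e.g. counter "2" in the error responses below
# beats the cached counter "1", so the server is marked Unknown. As a sketch
# with hypothetical field names:
#
#   stale = (error_tv.process_id == cached_tv.process_id) &&
#           (error_tv.counter <= cached_tv.counter)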
mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors/non-stale-network-timeout-error.yml

description: Non-stale network timeout error

uri: mongodb://a/?replicaSet=rs

phases:
  - description: Primary A is discovered
    responses:
      - - a:27017
        - ok: 1
          helloOk: true
          isWritablePrimary: true
          hosts:
            - a:27017
          setName: rs
          minWireVersion: 0
          maxWireVersion: 9
          topologyVersion: &topologyVersion_1_1
            processId:
              "$oid": '000000000000000000000001'
            counter:
              "$numberLong": '1'
    outcome: &outcome
      servers:
        a:27017:
          type: RSPrimary
          setName: rs
          topologyVersion: *topologyVersion_1_1
          pool:
            generation: 0
      topologyType: ReplicaSetWithPrimary
      logicalSessionTimeoutMinutes: null
      setName: rs
  - description: Non-stale network timeout error does not mark server Unknown
    applicationErrors:
      - address: a:27017
        when: afterHandshakeCompletes
        maxWireVersion: 9
        type: timeout
    outcome: *outcome

mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors/non-stale-topologyVersion-greater-InterruptedAtShutdown.yml

# Autogenerated tests for SDAM error handling, see generate-error-tests.py

description: Non-stale topologyVersion greater InterruptedAtShutdown error

uri: mongodb://a/?replicaSet=rs

phases:
  - description: Primary A is discovered
    responses:
      - - a:27017
        - ok: 1
          helloOk: true
          isWritablePrimary: true
          hosts:
            - a:27017
          setName: rs
          minWireVersion: 0
          maxWireVersion: 9
          topologyVersion: &topologyVersion_1_1
            processId:
              "$oid": '000000000000000000000001'
            counter:
              "$numberLong": '1'
    outcome: &outcome
      servers:
        a:27017:
          type: RSPrimary
          setName: rs
          topologyVersion: *topologyVersion_1_1
          pool:
            generation: 0
      topologyType: ReplicaSetWithPrimary
      logicalSessionTimeoutMinutes: null
      setName: rs
  - description: Non-stale topologyVersion greater InterruptedAtShutdown error marks server Unknown
    applicationErrors:
      - address: a:27017
        when: afterHandshakeCompletes
        maxWireVersion: 9
        type: command
        response:
          ok: 0
          errmsg: InterruptedAtShutdown
          code: 11600
          topologyVersion:
            processId:
              "$oid": '000000000000000000000001'
            counter:
              "$numberLong": "2"
    outcome:
      servers:
        a:27017:
          type: Unknown
          topologyVersion:
            processId:
              "$oid": '000000000000000000000001'
            counter:
              "$numberLong": "2"
          pool:
            generation: 1
      topologyType: ReplicaSetNoPrimary
      logicalSessionTimeoutMinutes: null
      setName: rs

mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors/non-stale-topologyVersion-greater-InterruptedDueToReplStateChange.yml

# Autogenerated tests for SDAM error handling, see generate-error-tests.py

description: Non-stale topologyVersion greater InterruptedDueToReplStateChange error

uri: mongodb://a/?replicaSet=rs

phases:
  - description: Primary A is discovered
    responses:
      - - a:27017
        - ok: 1
          helloOk: true
          isWritablePrimary: true
          hosts:
            - a:27017
          setName: rs
          minWireVersion: 0
          maxWireVersion: 9
          topologyVersion: &topologyVersion_1_1
            processId:
              "$oid": '000000000000000000000001'
            counter:
              "$numberLong": '1'
    outcome: &outcome
      servers:
        a:27017:
          type: RSPrimary
          setName: rs
          topologyVersion: *topologyVersion_1_1
          pool:
            generation: 0
      topologyType: ReplicaSetWithPrimary
      logicalSessionTimeoutMinutes: null
      setName: rs
  - description: Non-stale topologyVersion greater InterruptedDueToReplStateChange error marks server Unknown
    applicationErrors:
      - address: a:27017
        when: afterHandshakeCompletes
        maxWireVersion: 9
        type: command
        response:
          ok: 0
          errmsg: InterruptedDueToReplStateChange
          code: 11602
          topologyVersion:
            processId:
              "$oid": '000000000000000000000001'
            counter:
              "$numberLong": "2"
    outcome:
      servers:
        a:27017:
          type: Unknown
          topologyVersion:
            processId:
              "$oid": '000000000000000000000001'
            counter:
              "$numberLong": "2"
          pool:
            generation: 0
      topologyType: ReplicaSetNoPrimary
      logicalSessionTimeoutMinutes: null
      setName: rs

mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors/non-stale-topologyVersion-greater-LegacyNotPrimary.yml

# Autogenerated tests for SDAM error handling, see generate-error-tests.py

description: Non-stale topologyVersion greater LegacyNotPrimary error
uri: mongodb://a/?replicaSet=rs

phases:
  - description: Primary A is discovered
    responses:
      - - a:27017
        - ok: 1
          helloOk: true
          isWritablePrimary: true
          hosts:
            - a:27017
          setName: rs
          minWireVersion: 0
          maxWireVersion: 9
          topologyVersion: &topologyVersion_1_1
            processId:
              "$oid": '000000000000000000000001'
            counter:
              "$numberLong": '1'
    outcome: &outcome
      servers:
        a:27017:
          type: RSPrimary
          setName: rs
          topologyVersion: *topologyVersion_1_1
          pool:
            generation: 0
      topologyType: ReplicaSetWithPrimary
      logicalSessionTimeoutMinutes: null
      setName: rs
  - description: Non-stale topologyVersion greater LegacyNotPrimary error marks server Unknown
    applicationErrors:
      - address: a:27017
        when: afterHandshakeCompletes
        maxWireVersion: 9
        type: command
        response:
          ok: 0
          errmsg: LegacyNotPrimary
          code: 10058
          topologyVersion:
            processId:
              "$oid": '000000000000000000000001'
            counter:
              "$numberLong": "2"
    outcome:
      servers:
        a:27017:
          type: Unknown
          topologyVersion:
            processId:
              "$oid": '000000000000000000000001'
            counter:
              "$numberLong": "2"
          pool:
            generation: 0
      topologyType: ReplicaSetNoPrimary
      logicalSessionTimeoutMinutes: null
      setName: rs

mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors/non-stale-topologyVersion-greater-NotPrimaryNoSecondaryOk.yml

# Autogenerated tests for SDAM error handling, see generate-error-tests.py

description: Non-stale topologyVersion greater NotPrimaryNoSecondaryOk error

uri: mongodb://a/?replicaSet=rs

phases:
  - description: Primary A is discovered
    responses:
      - - a:27017
        - ok: 1
          helloOk: true
          isWritablePrimary: true
          hosts:
            - a:27017
          setName: rs
          minWireVersion: 0
          maxWireVersion: 9
          topologyVersion: &topologyVersion_1_1
            processId:
              "$oid": '000000000000000000000001'
            counter:
              "$numberLong": '1'
    outcome: &outcome
      servers:
        a:27017:
          type: RSPrimary
          setName: rs
          topologyVersion: *topologyVersion_1_1
          pool:
            generation: 0
      topologyType: ReplicaSetWithPrimary
      logicalSessionTimeoutMinutes: null
      setName: rs
  - description: Non-stale topologyVersion greater NotPrimaryNoSecondaryOk error marks server Unknown
    applicationErrors:
      - address: a:27017
        when: afterHandshakeCompletes
        maxWireVersion: 9
        type: command
        response:
          ok: 0
          errmsg: NotPrimaryNoSecondaryOk
          code: 13435
          topologyVersion:
            processId:
              "$oid": '000000000000000000000001'
            counter:
              "$numberLong": "2"
    outcome:
      servers:
        a:27017:
          type: Unknown
          topologyVersion:
            processId:
              "$oid": '000000000000000000000001'
            counter:
              "$numberLong": "2"
          pool:
            generation: 0
      topologyType: ReplicaSetNoPrimary
      logicalSessionTimeoutMinutes: null
      setName: rs

mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors/non-stale-topologyVersion-greater-NotPrimaryOrSecondary.yml

# Autogenerated tests for SDAM error handling, see generate-error-tests.py

description: Non-stale topologyVersion greater NotPrimaryOrSecondary error

uri: mongodb://a/?replicaSet=rs

phases:
  - description: Primary A is discovered
    responses:
      - - a:27017
        - ok: 1
          helloOk: true
          isWritablePrimary: true
          hosts:
            - a:27017
          setName: rs
          minWireVersion: 0
          maxWireVersion: 9
          topologyVersion: &topologyVersion_1_1
            processId:
              "$oid": '000000000000000000000001'
            counter:
              "$numberLong": '1'
    outcome: &outcome
      servers:
        a:27017:
          type: RSPrimary
          setName: rs
          topologyVersion: *topologyVersion_1_1
          pool:
            generation: 0
      topologyType: ReplicaSetWithPrimary
      logicalSessionTimeoutMinutes: null
      setName: rs
- description: Non-stale topologyVersion greater NotPrimaryOrSecondary error marks server Unknown
  applicationErrors:
  - address: a:27017
    when: afterHandshakeCompletes
    maxWireVersion: 9
    type: command
    response:
      ok: 0
      errmsg: NotPrimaryOrSecondary
      code: 13436
      topologyVersion: {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": "2"}}
  outcome:
    servers:
      a:27017:
        type: Unknown
        topologyVersion: {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": "2"}}
        pool: {generation: 0}
    topologyType: ReplicaSetNoPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs

# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors/non-stale-topologyVersion-greater-NotWritablePrimary.yml
# Autogenerated tests for SDAM error handling, see generate-error-tests.py
description: Non-stale topologyVersion greater NotWritablePrimary error
uri: mongodb://a/?replicaSet=rs
phases:
- description: Primary A is discovered
  responses:
  - - a:27017
    - ok: 1
      helloOk: true
      isWritablePrimary: true
      hosts:
      - a:27017
      setName: rs
      minWireVersion: 0
      maxWireVersion: 9
      topologyVersion: &topologyVersion_1_1 {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": '1'}}
  outcome: &outcome
    servers:
      a:27017:
        type: RSPrimary
        setName: rs
        topologyVersion: *topologyVersion_1_1
        pool: {generation: 0}
    topologyType: ReplicaSetWithPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs
- description: Non-stale topologyVersion greater NotWritablePrimary error marks server Unknown
  applicationErrors:
  - address: a:27017
    when: afterHandshakeCompletes
    maxWireVersion: 9
    type: command
    response:
      ok: 0
      errmsg: NotWritablePrimary
      code: 10107
      topologyVersion: {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": "2"}}
  outcome:
    servers:
      a:27017:
        type: Unknown
        topologyVersion: {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": "2"}}
        pool: {generation: 0}
    topologyType: ReplicaSetNoPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs

# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors/non-stale-topologyVersion-greater-PrimarySteppedDown.yml
# Autogenerated tests for SDAM error handling, see generate-error-tests.py
description: Non-stale topologyVersion greater PrimarySteppedDown error
uri: mongodb://a/?replicaSet=rs
phases:
- description: Primary A is discovered
  responses:
  - - a:27017
    - ok: 1
      helloOk: true
      isWritablePrimary: true
      hosts:
      - a:27017
      setName: rs
      minWireVersion: 0
      maxWireVersion: 9
      topologyVersion: &topologyVersion_1_1 {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": '1'}}
  outcome: &outcome
    servers:
      a:27017:
        type: RSPrimary
        setName: rs
        topologyVersion: *topologyVersion_1_1
        pool: {generation: 0}
    topologyType: ReplicaSetWithPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs
- description: Non-stale topologyVersion greater PrimarySteppedDown error marks server Unknown
  applicationErrors:
  - address: a:27017
    when: afterHandshakeCompletes
    maxWireVersion: 9
    type: command
    response:
      ok: 0
      errmsg: PrimarySteppedDown
      code: 189
      topologyVersion: {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": "2"}}
  outcome:
    servers:
      a:27017:
        type: Unknown
        topologyVersion: {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": "2"}}
        pool: {generation: 0}
    topologyType: ReplicaSetNoPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs
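The "non-stale" fixtures above and below all turn on a single comparison. The following is a minimal Ruby sketch of the topologyVersion staleness rule they exercise; it is an illustration of the fixtures, not the driver's actual implementation, and the hash shape simply mirrors the YAML above:

    # An application error is ignored as stale only when both topologyVersions
    # are present, the processIds match, and the error's counter is not greater
    # than the counter already recorded for the server.
    def stale_error_topology_version?(server_tv, error_tv)
      return false if server_tv.nil? || error_tv.nil?                  # "missing" fixtures: never stale
      return false if server_tv["processId"] != error_tv["processId"]  # "proccessId changed" fixtures
      error_tv["counter"] <= server_tv["counter"]                      # "greater" fixtures: 2 > 1, not stale
    end

    server_tv = { "processId" => "000000000000000000000001", "counter" => 1 }
    error_tv  = { "processId" => "000000000000000000000001", "counter" => 2 }
    p stale_error_topology_version?(server_tv, error_tv)  # => false, so the server is marked Unknown
    p stale_error_topology_version?(server_tv, nil)       # => false, a missing topologyVersion is never stale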
# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors/non-stale-topologyVersion-greater-ShutdownInProgress.yml
# Autogenerated tests for SDAM error handling, see generate-error-tests.py
description: Non-stale topologyVersion greater ShutdownInProgress error
uri: mongodb://a/?replicaSet=rs
phases:
- description: Primary A is discovered
  responses:
  - - a:27017
    - ok: 1
      helloOk: true
      isWritablePrimary: true
      hosts:
      - a:27017
      setName: rs
      minWireVersion: 0
      maxWireVersion: 9
      topologyVersion: &topologyVersion_1_1 {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": '1'}}
  outcome: &outcome
    servers:
      a:27017:
        type: RSPrimary
        setName: rs
        topologyVersion: *topologyVersion_1_1
        pool: {generation: 0}
    topologyType: ReplicaSetWithPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs
- description: Non-stale topologyVersion greater ShutdownInProgress error marks server Unknown
  applicationErrors:
  - address: a:27017
    when: afterHandshakeCompletes
    maxWireVersion: 9
    type: command
    response:
      ok: 0
      errmsg: ShutdownInProgress
      code: 91
      topologyVersion: {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": "2"}}
  outcome:
    servers:
      a:27017:
        type: Unknown
        topologyVersion: {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": "2"}}
        pool: {generation: 1}
    topologyType: ReplicaSetNoPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs

# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors/non-stale-topologyVersion-missing-InterruptedAtShutdown.yml
# Autogenerated tests for SDAM error handling, see generate-error-tests.py
description: Non-stale topologyVersion missing InterruptedAtShutdown error
uri: mongodb://a/?replicaSet=rs
phases:
- description: Primary A is discovered
  responses:
  - - a:27017
    - ok: 1
      helloOk: true
      isWritablePrimary: true
      hosts:
      - a:27017
      setName: rs
      minWireVersion: 0
      maxWireVersion: 9
      topologyVersion: &topologyVersion_1_1 {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": '1'}}
  outcome: &outcome
    servers:
      a:27017:
        type: RSPrimary
        setName: rs
        topologyVersion: *topologyVersion_1_1
        pool: {generation: 0}
    topologyType: ReplicaSetWithPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs
- description: Non-stale topologyVersion missing InterruptedAtShutdown error marks server Unknown
  applicationErrors:
  - address: a:27017
    when: afterHandshakeCompletes
    maxWireVersion: 9
    type: command
    response:
      ok: 0
      errmsg: InterruptedAtShutdown
      code: 11600
  outcome:
    servers:
      a:27017:
        type: Unknown
        topologyVersion: null
        pool: {generation: 1}
    topologyType: ReplicaSetNoPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs

# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors/non-stale-topologyVersion-missing-InterruptedDueToReplStateChange.yml
# Autogenerated tests for SDAM error handling, see generate-error-tests.py
description: Non-stale topologyVersion missing InterruptedDueToReplStateChange error
uri: mongodb://a/?replicaSet=rs
phases:
- description: Primary A is discovered
  responses:
  - - a:27017
    - ok: 1
      helloOk: true
      isWritablePrimary: true
      hosts:
      - a:27017
      setName: rs
      minWireVersion: 0
      maxWireVersion: 9
      topologyVersion: &topologyVersion_1_1 {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": '1'}}
  outcome: &outcome
    servers:
      a:27017:
        type: RSPrimary
        setName: rs
        topologyVersion: *topologyVersion_1_1
        pool: {generation: 0}
    topologyType: ReplicaSetWithPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs
- description: Non-stale topologyVersion missing InterruptedDueToReplStateChange error marks server Unknown
  applicationErrors:
  - address: a:27017
    when: afterHandshakeCompletes
    maxWireVersion: 9
    type: command
    response:
      ok: 0
      errmsg: InterruptedDueToReplStateChange
      code: 11602
  outcome:
    servers:
      a:27017:
        type: Unknown
        topologyVersion: null
        pool: {generation: 0}
    topologyType: ReplicaSetNoPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs

# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors/non-stale-topologyVersion-missing-LegacyNotPrimary.yml
# Autogenerated tests for SDAM error handling, see generate-error-tests.py
description: Non-stale topologyVersion missing LegacyNotPrimary error
uri: mongodb://a/?replicaSet=rs
phases:
- description: Primary A is discovered
  responses:
  - - a:27017
    - ok: 1
      helloOk: true
      isWritablePrimary: true
      hosts:
      - a:27017
      setName: rs
      minWireVersion: 0
      maxWireVersion: 9
      topologyVersion: &topologyVersion_1_1 {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": '1'}}
  outcome: &outcome
    servers:
      a:27017:
        type: RSPrimary
        setName: rs
        topologyVersion: *topologyVersion_1_1
        pool: {generation: 0}
    topologyType: ReplicaSetWithPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs
- description: Non-stale topologyVersion missing LegacyNotPrimary error marks server Unknown
  applicationErrors:
  - address: a:27017
    when: afterHandshakeCompletes
    maxWireVersion: 9
    type: command
    response:
      ok: 0
      errmsg: LegacyNotPrimary
      code: 10058
  outcome:
    servers:
      a:27017:
        type: Unknown
        topologyVersion: null
        pool: {generation: 0}
    topologyType: ReplicaSetNoPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs

# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors/non-stale-topologyVersion-missing-NotPrimaryNoSecondaryOk.yml
# Autogenerated tests for SDAM error handling, see generate-error-tests.py
description: Non-stale topologyVersion missing NotPrimaryNoSecondaryOk error
uri: mongodb://a/?replicaSet=rs
phases:
- description: Primary A is discovered
  responses:
  - - a:27017
    - ok: 1
      helloOk: true
      isWritablePrimary: true
      hosts:
      - a:27017
      setName: rs
      minWireVersion: 0
      maxWireVersion: 9
      topologyVersion: &topologyVersion_1_1 {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": '1'}}
  outcome: &outcome
    servers:
      a:27017:
        type: RSPrimary
        setName: rs
        topologyVersion: *topologyVersion_1_1
        pool: {generation: 0}
    topologyType: ReplicaSetWithPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs
- description: Non-stale topologyVersion missing NotPrimaryNoSecondaryOk error marks server Unknown
  applicationErrors:
  - address: a:27017
    when: afterHandshakeCompletes
    maxWireVersion: 9
    type: command
    response:
      ok: 0
      errmsg: NotPrimaryNoSecondaryOk
      code: 13435
  outcome:
    servers:
      a:27017:
        type: Unknown
        topologyVersion: null
        pool: {generation: 0}
    topologyType: ReplicaSetNoPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs

# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors/non-stale-topologyVersion-missing-NotPrimaryOrSecondary.yml
# Autogenerated tests for SDAM error handling, see generate-error-tests.py
description: Non-stale topologyVersion missing NotPrimaryOrSecondary error
uri: mongodb://a/?replicaSet=rs
phases:
- description: Primary A is discovered
  responses:
  - - a:27017
    - ok: 1
      helloOk: true
      isWritablePrimary: true
      hosts:
      - a:27017
      setName: rs
      minWireVersion: 0
      maxWireVersion: 9
      topologyVersion: &topologyVersion_1_1 {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": '1'}}
  outcome: &outcome
    servers:
      a:27017:
        type: RSPrimary
        setName: rs
        topologyVersion: *topologyVersion_1_1
        pool: {generation: 0}
    topologyType: ReplicaSetWithPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs
- description: Non-stale topologyVersion missing NotPrimaryOrSecondary error marks server Unknown
  applicationErrors:
  - address: a:27017
    when: afterHandshakeCompletes
    maxWireVersion: 9
    type: command
    response:
      ok: 0
      errmsg: NotPrimaryOrSecondary
      code: 13436
  outcome:
    servers:
      a:27017:
        type: Unknown
        topologyVersion: null
        pool: {generation: 0}
    topologyType: ReplicaSetNoPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs

# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors/non-stale-topologyVersion-missing-NotWritablePrimary.yml
# Autogenerated tests for SDAM error handling, see generate-error-tests.py
description: Non-stale topologyVersion missing NotWritablePrimary error
uri: mongodb://a/?replicaSet=rs
phases:
- description: Primary A is discovered
  responses:
  - - a:27017
    - ok: 1
      helloOk: true
      isWritablePrimary: true
      hosts:
      - a:27017
      setName: rs
      minWireVersion: 0
      maxWireVersion: 9
      topologyVersion: &topologyVersion_1_1 {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": '1'}}
  outcome: &outcome
    servers:
      a:27017:
        type: RSPrimary
        setName: rs
        topologyVersion: *topologyVersion_1_1
        pool: {generation: 0}
    topologyType: ReplicaSetWithPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs
- description: Non-stale topologyVersion missing NotWritablePrimary error marks server Unknown
  applicationErrors:
  - address: a:27017
    when: afterHandshakeCompletes
    maxWireVersion: 9
    type: command
    response:
      ok: 0
      errmsg: NotWritablePrimary
      code: 10107
  outcome:
    servers:
      a:27017:
        type: Unknown
        topologyVersion: null
        pool: {generation: 0}
    topologyType: ReplicaSetNoPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs

# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors/non-stale-topologyVersion-missing-PrimarySteppedDown.yml
# Autogenerated tests for SDAM error handling, see generate-error-tests.py
description: Non-stale topologyVersion missing PrimarySteppedDown error
uri: mongodb://a/?replicaSet=rs
phases:
- description: Primary A is discovered
  responses:
  - - a:27017
    - ok: 1
      helloOk: true
      isWritablePrimary: true
      hosts:
      - a:27017
      setName: rs
      minWireVersion: 0
      maxWireVersion: 9
      topologyVersion: &topologyVersion_1_1 {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": '1'}}
  outcome: &outcome
    servers:
      a:27017:
        type: RSPrimary
        setName: rs
        topologyVersion: *topologyVersion_1_1
        pool: {generation: 0}
    topologyType: ReplicaSetWithPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs
- description: Non-stale topologyVersion missing PrimarySteppedDown error marks server Unknown
  applicationErrors:
  - address: a:27017
    when: afterHandshakeCompletes
    maxWireVersion: 9
    type: command
    response:
      ok: 0
      errmsg: PrimarySteppedDown
      code: 189
  outcome:
    servers:
      a:27017:
        type: Unknown
        topologyVersion: null
        pool: {generation: 0}
    topologyType: ReplicaSetNoPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs
# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors/non-stale-topologyVersion-missing-ShutdownInProgress.yml
# Autogenerated tests for SDAM error handling, see generate-error-tests.py
description: Non-stale topologyVersion missing ShutdownInProgress error
uri: mongodb://a/?replicaSet=rs
phases:
- description: Primary A is discovered
  responses:
  - - a:27017
    - ok: 1
      helloOk: true
      isWritablePrimary: true
      hosts:
      - a:27017
      setName: rs
      minWireVersion: 0
      maxWireVersion: 9
      topologyVersion: &topologyVersion_1_1 {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": '1'}}
  outcome: &outcome
    servers:
      a:27017:
        type: RSPrimary
        setName: rs
        topologyVersion: *topologyVersion_1_1
        pool: {generation: 0}
    topologyType: ReplicaSetWithPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs
- description: Non-stale topologyVersion missing ShutdownInProgress error marks server Unknown
  applicationErrors:
  - address: a:27017
    when: afterHandshakeCompletes
    maxWireVersion: 9
    type: command
    response:
      ok: 0
      errmsg: ShutdownInProgress
      code: 91
  outcome:
    servers:
      a:27017:
        type: Unknown
        topologyVersion: null
        pool: {generation: 1}
    topologyType: ReplicaSetNoPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs

# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors/non-stale-topologyVersion-proccessId-changed-InterruptedAtShutdown.yml
# Autogenerated tests for SDAM error handling, see generate-error-tests.py
description: Non-stale topologyVersion proccessId changed InterruptedAtShutdown error
uri: mongodb://a/?replicaSet=rs
phases:
- description: Primary A is discovered
  responses:
  - - a:27017
    - ok: 1
      helloOk: true
      isWritablePrimary: true
      hosts:
      - a:27017
      setName: rs
      minWireVersion: 0
      maxWireVersion: 9
      topologyVersion: &topologyVersion_1_1 {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": '1'}}
  outcome: &outcome
    servers:
      a:27017:
        type: RSPrimary
        setName: rs
        topologyVersion: *topologyVersion_1_1
        pool: {generation: 0}
    topologyType: ReplicaSetWithPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs
- description: Non-stale topologyVersion proccessId changed InterruptedAtShutdown error marks server Unknown
  applicationErrors:
  - address: a:27017
    when: afterHandshakeCompletes
    maxWireVersion: 9
    type: command
    response:
      ok: 0
      errmsg: InterruptedAtShutdown
      code: 11600
      topologyVersion: {processId: {"$oid": '000000000000000000000002'}, counter: {"$numberLong": "1"}}
  outcome:
    servers:
      a:27017:
        type: Unknown
        topologyVersion: {processId: {"$oid": '000000000000000000000002'}, counter: {"$numberLong": "1"}}
        pool: {generation: 1}
    topologyType: ReplicaSetNoPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs

# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors/non-stale-topologyVersion-proccessId-changed-InterruptedDueToReplStateChange.yml
# Autogenerated tests for SDAM error handling, see generate-error-tests.py
description: Non-stale topologyVersion proccessId changed InterruptedDueToReplStateChange error
uri: mongodb://a/?replicaSet=rs
phases:
- description: Primary A is discovered
  responses:
  - - a:27017
    - ok: 1
      helloOk: true
      isWritablePrimary: true
      hosts:
      - a:27017
      setName: rs
      minWireVersion: 0
      maxWireVersion: 9
      topologyVersion: &topologyVersion_1_1 {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": '1'}}
  outcome: &outcome
    servers:
      a:27017:
        type: RSPrimary
        setName: rs
        topologyVersion: *topologyVersion_1_1
        pool: {generation: 0}
    topologyType: ReplicaSetWithPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs
- description: Non-stale topologyVersion proccessId changed InterruptedDueToReplStateChange error marks server Unknown
  applicationErrors:
  - address: a:27017
    when: afterHandshakeCompletes
    maxWireVersion: 9
    type: command
    response:
      ok: 0
      errmsg: InterruptedDueToReplStateChange
      code: 11602
      topologyVersion: {processId: {"$oid": '000000000000000000000002'}, counter: {"$numberLong": "1"}}
  outcome:
    servers:
      a:27017:
        type: Unknown
        topologyVersion: {processId: {"$oid": '000000000000000000000002'}, counter: {"$numberLong": "1"}}
        pool: {generation: 0}
    topologyType: ReplicaSetNoPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs

# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors/non-stale-topologyVersion-proccessId-changed-LegacyNotPrimary.yml
# Autogenerated tests for SDAM error handling, see generate-error-tests.py
description: Non-stale topologyVersion proccessId changed LegacyNotPrimary error
uri: mongodb://a/?replicaSet=rs
phases:
- description: Primary A is discovered
  responses:
  - - a:27017
    - ok: 1
      helloOk: true
      isWritablePrimary: true
      hosts:
      - a:27017
      setName: rs
      minWireVersion: 0
      maxWireVersion: 9
      topologyVersion: &topologyVersion_1_1 {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": '1'}}
  outcome: &outcome
    servers:
      a:27017:
        type: RSPrimary
        setName: rs
        topologyVersion: *topologyVersion_1_1
        pool: {generation: 0}
    topologyType: ReplicaSetWithPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs
- description: Non-stale topologyVersion proccessId changed LegacyNotPrimary error marks server Unknown
  applicationErrors:
  - address: a:27017
    when: afterHandshakeCompletes
    maxWireVersion: 9
    type: command
    response:
      ok: 0
      errmsg: LegacyNotPrimary
      code: 10058
      topologyVersion: {processId: {"$oid": '000000000000000000000002'}, counter: {"$numberLong": "1"}}
  outcome:
    servers:
      a:27017:
        type: Unknown
        topologyVersion: {processId: {"$oid": '000000000000000000000002'}, counter: {"$numberLong": "1"}}
        pool: {generation: 0}
    topologyType: ReplicaSetNoPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs

# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors/non-stale-topologyVersion-proccessId-changed-NotPrimaryNoSecondaryOk.yml
# Autogenerated tests for SDAM error handling, see generate-error-tests.py
description: Non-stale topologyVersion proccessId changed NotPrimaryNoSecondaryOk error
uri: mongodb://a/?replicaSet=rs
phases:
- description: Primary A is discovered
  responses:
  - - a:27017
    - ok: 1
      helloOk: true
      isWritablePrimary: true
      hosts:
      - a:27017
      setName: rs
      minWireVersion: 0
      maxWireVersion: 9
      topologyVersion: &topologyVersion_1_1 {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": '1'}}
  outcome: &outcome
    servers:
      a:27017:
        type: RSPrimary
        setName: rs
        topologyVersion: *topologyVersion_1_1
        pool: {generation: 0}
    topologyType: ReplicaSetWithPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs
- description: Non-stale topologyVersion proccessId changed NotPrimaryNoSecondaryOk error marks server Unknown
  applicationErrors:
  - address: a:27017
    when: afterHandshakeCompletes
    maxWireVersion: 9
    type: command
    response:
      ok: 0
      errmsg: NotPrimaryNoSecondaryOk
      code: 13435
      topologyVersion: {processId: {"$oid": '000000000000000000000002'}, counter: {"$numberLong": "1"}}
  outcome:
    servers:
      a:27017:
        type: Unknown
"$oid": '000000000000000000000002' counter: "$numberLong": "1" pool: generation: 0 topologyType: ReplicaSetNoPrimary logicalSessionTimeoutMinutes: null setName: rs non-stale-topologyVersion-proccessId-changed-NotPrimaryOrSecondary.yml000066400000000000000000000031271505113246500411740ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors# Autogenerated tests for SDAM error handling, see generate-error-tests.py description: Non-stale topologyVersion proccessId changed NotPrimaryOrSecondary error uri: mongodb://a/?replicaSet=rs phases: - description: Primary A is discovered responses: - - a:27017 - ok: 1 helloOk: true isWritablePrimary: true hosts: - a:27017 setName: rs minWireVersion: 0 maxWireVersion: 9 topologyVersion: &topologyVersion_1_1 processId: "$oid": '000000000000000000000001' counter: "$numberLong": '1' outcome: &outcome servers: a:27017: type: RSPrimary setName: rs topologyVersion: *topologyVersion_1_1 pool: generation: 0 topologyType: ReplicaSetWithPrimary logicalSessionTimeoutMinutes: null setName: rs - description: Non-stale topologyVersion proccessId changed NotPrimaryOrSecondary error marks server Unknown applicationErrors: - address: a:27017 when: afterHandshakeCompletes maxWireVersion: 9 type: command response: ok: 0 errmsg: NotPrimaryOrSecondary code: 13436 topologyVersion: processId: "$oid": '000000000000000000000002' counter: "$numberLong": "1" outcome: servers: a:27017: type: Unknown topologyVersion: processId: "$oid": '000000000000000000000002' counter: "$numberLong": "1" pool: generation: 0 topologyType: ReplicaSetNoPrimary logicalSessionTimeoutMinutes: null setName: rs non-stale-topologyVersion-proccessId-changed-NotWritablePrimary.yml000066400000000000000000000031161505113246500405130ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors# Autogenerated tests for SDAM error handling, see generate-error-tests.py description: Non-stale topologyVersion proccessId changed NotWritablePrimary error uri: mongodb://a/?replicaSet=rs phases: - description: Primary A is discovered responses: - - a:27017 - ok: 1 helloOk: true isWritablePrimary: true hosts: - a:27017 setName: rs minWireVersion: 0 maxWireVersion: 9 topologyVersion: &topologyVersion_1_1 processId: "$oid": '000000000000000000000001' counter: "$numberLong": '1' outcome: &outcome servers: a:27017: type: RSPrimary setName: rs topologyVersion: *topologyVersion_1_1 pool: generation: 0 topologyType: ReplicaSetWithPrimary logicalSessionTimeoutMinutes: null setName: rs - description: Non-stale topologyVersion proccessId changed NotWritablePrimary error marks server Unknown applicationErrors: - address: a:27017 when: afterHandshakeCompletes maxWireVersion: 9 type: command response: ok: 0 errmsg: NotWritablePrimary code: 10107 topologyVersion: processId: "$oid": '000000000000000000000002' counter: "$numberLong": "1" outcome: servers: a:27017: type: Unknown topologyVersion: processId: "$oid": '000000000000000000000002' counter: "$numberLong": "1" pool: generation: 0 topologyType: ReplicaSetNoPrimary logicalSessionTimeoutMinutes: null setName: rs non-stale-topologyVersion-proccessId-changed-PrimarySteppedDown.yml000066400000000000000000000031141505113246500405130ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors# Autogenerated tests for SDAM error handling, see generate-error-tests.py description: Non-stale topologyVersion proccessId changed PrimarySteppedDown error uri: mongodb://a/?replicaSet=rs phases: - description: Primary 
- description: Primary A is discovered
  responses:
  - - a:27017
    - ok: 1
      helloOk: true
      isWritablePrimary: true
      hosts:
      - a:27017
      setName: rs
      minWireVersion: 0
      maxWireVersion: 9
      topologyVersion: &topologyVersion_1_1 {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": '1'}}
  outcome: &outcome
    servers:
      a:27017:
        type: RSPrimary
        setName: rs
        topologyVersion: *topologyVersion_1_1
        pool: {generation: 0}
    topologyType: ReplicaSetWithPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs
- description: Non-stale topologyVersion proccessId changed PrimarySteppedDown error marks server Unknown
  applicationErrors:
  - address: a:27017
    when: afterHandshakeCompletes
    maxWireVersion: 9
    type: command
    response:
      ok: 0
      errmsg: PrimarySteppedDown
      code: 189
      topologyVersion: {processId: {"$oid": '000000000000000000000002'}, counter: {"$numberLong": "1"}}
  outcome:
    servers:
      a:27017:
        type: Unknown
        topologyVersion: {processId: {"$oid": '000000000000000000000002'}, counter: {"$numberLong": "1"}}
        pool: {generation: 0}
    topologyType: ReplicaSetNoPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs

# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors/non-stale-topologyVersion-proccessId-changed-ShutdownInProgress.yml
# Autogenerated tests for SDAM error handling, see generate-error-tests.py
description: Non-stale topologyVersion proccessId changed ShutdownInProgress error
uri: mongodb://a/?replicaSet=rs
phases:
- description: Primary A is discovered
  responses:
  - - a:27017
    - ok: 1
      helloOk: true
      isWritablePrimary: true
      hosts:
      - a:27017
      setName: rs
      minWireVersion: 0
      maxWireVersion: 9
      topologyVersion: &topologyVersion_1_1 {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": '1'}}
  outcome: &outcome
    servers:
      a:27017:
        type: RSPrimary
        setName: rs
        topologyVersion: *topologyVersion_1_1
        pool: {generation: 0}
    topologyType: ReplicaSetWithPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs
- description: Non-stale topologyVersion proccessId changed ShutdownInProgress error marks server Unknown
  applicationErrors:
  - address: a:27017
    when: afterHandshakeCompletes
    maxWireVersion: 9
    type: command
    response:
      ok: 0
      errmsg: ShutdownInProgress
      code: 91
      topologyVersion: {processId: {"$oid": '000000000000000000000002'}, counter: {"$numberLong": "1"}}
  outcome:
    servers:
      a:27017:
        type: Unknown
        topologyVersion: {processId: {"$oid": '000000000000000000000002'}, counter: {"$numberLong": "1"}}
        pool: {generation: 1}
    topologyType: ReplicaSetNoPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs

# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors/post-42-InterruptedAtShutdown.yml
# Autogenerated tests for SDAM error handling, see generate-error-tests.py
description: Post-4.2 InterruptedAtShutdown error
uri: mongodb://a/?replicaSet=rs
phases:
- description: Primary A is discovered
  responses:
  - - a:27017
    - ok: 1
      helloOk: true
      isWritablePrimary: true
      hosts:
      - a:27017
      setName: rs
      minWireVersion: 0
      maxWireVersion: 8
  outcome: &outcome
    servers:
      a:27017:
        type: RSPrimary
        setName: rs
        topologyVersion: null
        pool: {generation: 0}
    topologyType: ReplicaSetWithPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs
- description: Post-4.2 InterruptedAtShutdown error marks server Unknown
  applicationErrors:
  - address: a:27017
    when: afterHandshakeCompletes
    maxWireVersion: 8
    type: command
    response:
      ok: 0
      errmsg: InterruptedAtShutdown
      code: 11600
  outcome:
    servers:
      a:27017:
        type: Unknown
        topologyVersion: null
        pool: {generation: 1}
    topologyType: ReplicaSetNoPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs
# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors/post-42-InterruptedDueToReplStateChange.yml
# Autogenerated tests for SDAM error handling, see generate-error-tests.py
description: Post-4.2 InterruptedDueToReplStateChange error
uri: mongodb://a/?replicaSet=rs
phases:
- description: Primary A is discovered
  responses:
  - - a:27017
    - ok: 1
      helloOk: true
      isWritablePrimary: true
      hosts:
      - a:27017
      setName: rs
      minWireVersion: 0
      maxWireVersion: 8
  outcome: &outcome
    servers:
      a:27017:
        type: RSPrimary
        setName: rs
        topologyVersion: null
        pool: {generation: 0}
    topologyType: ReplicaSetWithPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs
- description: Post-4.2 InterruptedDueToReplStateChange error marks server Unknown
  applicationErrors:
  - address: a:27017
    when: afterHandshakeCompletes
    maxWireVersion: 8
    type: command
    response:
      ok: 0
      errmsg: InterruptedDueToReplStateChange
      code: 11602
  outcome:
    servers:
      a:27017:
        type: Unknown
        topologyVersion: null
        pool: {generation: 0}
    topologyType: ReplicaSetNoPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs

# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors/post-42-LegacyNotPrimary.yml
# Autogenerated tests for SDAM error handling, see generate-error-tests.py
description: Post-4.2 LegacyNotPrimary error
uri: mongodb://a/?replicaSet=rs
phases:
- description: Primary A is discovered
  responses:
  - - a:27017
    - ok: 1
      helloOk: true
      isWritablePrimary: true
      hosts:
      - a:27017
      setName: rs
      minWireVersion: 0
      maxWireVersion: 8
  outcome: &outcome
    servers:
      a:27017:
        type: RSPrimary
        setName: rs
        topologyVersion: null
        pool: {generation: 0}
    topologyType: ReplicaSetWithPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs
- description: Post-4.2 LegacyNotPrimary error marks server Unknown
  applicationErrors:
  - address: a:27017
    when: afterHandshakeCompletes
    maxWireVersion: 8
    type: command
    response:
      ok: 0
      errmsg: LegacyNotPrimary
      code: 10058
  outcome:
    servers:
      a:27017:
        type: Unknown
        topologyVersion: null
        pool: {generation: 0}
    topologyType: ReplicaSetNoPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs

# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors/post-42-NotPrimaryNoSecondaryOk.yml
# Autogenerated tests for SDAM error handling, see generate-error-tests.py
description: Post-4.2 NotPrimaryNoSecondaryOk error
uri: mongodb://a/?replicaSet=rs
phases:
- description: Primary A is discovered
  responses:
  - - a:27017
    - ok: 1
      helloOk: true
      isWritablePrimary: true
      hosts:
      - a:27017
      setName: rs
      minWireVersion: 0
      maxWireVersion: 8
  outcome: &outcome
    servers:
      a:27017:
        type: RSPrimary
        setName: rs
        topologyVersion: null
        pool: {generation: 0}
    topologyType: ReplicaSetWithPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs
- description: Post-4.2 NotPrimaryNoSecondaryOk error marks server Unknown
  applicationErrors:
  - address: a:27017
    when: afterHandshakeCompletes
    maxWireVersion: 8
    type: command
    response:
      ok: 0
      errmsg: NotPrimaryNoSecondaryOk
      code: 13435
  outcome:
    servers:
      a:27017:
        type: Unknown
        topologyVersion: null
        pool: {generation: 0}
    topologyType: ReplicaSetNoPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs
# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors/post-42-NotPrimaryOrSecondary.yml
# Autogenerated tests for SDAM error handling, see generate-error-tests.py
description: Post-4.2 NotPrimaryOrSecondary error
uri: mongodb://a/?replicaSet=rs
phases:
- description: Primary A is discovered
  responses:
  - - a:27017
    - ok: 1
      helloOk: true
      isWritablePrimary: true
      hosts:
      - a:27017
      setName: rs
      minWireVersion: 0
      maxWireVersion: 8
  outcome: &outcome
    servers:
      a:27017:
        type: RSPrimary
        setName: rs
        topologyVersion: null
        pool: {generation: 0}
    topologyType: ReplicaSetWithPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs
- description: Post-4.2 NotPrimaryOrSecondary error marks server Unknown
  applicationErrors:
  - address: a:27017
    when: afterHandshakeCompletes
    maxWireVersion: 8
    type: command
    response:
      ok: 0
      errmsg: NotPrimaryOrSecondary
      code: 13436
  outcome:
    servers:
      a:27017:
        type: Unknown
        topologyVersion: null
        pool: {generation: 0}
    topologyType: ReplicaSetNoPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs

# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors/post-42-NotWritablePrimary.yml
# Autogenerated tests for SDAM error handling, see generate-error-tests.py
description: Post-4.2 NotWritablePrimary error
uri: mongodb://a/?replicaSet=rs
phases:
- description: Primary A is discovered
  responses:
  - - a:27017
    - ok: 1
      helloOk: true
      isWritablePrimary: true
      hosts:
      - a:27017
      setName: rs
      minWireVersion: 0
      maxWireVersion: 8
  outcome: &outcome
    servers:
      a:27017:
        type: RSPrimary
        setName: rs
        topologyVersion: null
        pool: {generation: 0}
    topologyType: ReplicaSetWithPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs
- description: Post-4.2 NotWritablePrimary error marks server Unknown
  applicationErrors:
  - address: a:27017
    when: afterHandshakeCompletes
    maxWireVersion: 8
    type: command
    response:
      ok: 0
      errmsg: NotWritablePrimary
      code: 10107
  outcome:
    servers:
      a:27017:
        type: Unknown
        topologyVersion: null
        pool: {generation: 0}
    topologyType: ReplicaSetNoPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs

# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors/post-42-PrimarySteppedDown.yml
# Autogenerated tests for SDAM error handling, see generate-error-tests.py
description: Post-4.2 PrimarySteppedDown error
uri: mongodb://a/?replicaSet=rs
phases:
- description: Primary A is discovered
  responses:
  - - a:27017
    - ok: 1
      helloOk: true
      isWritablePrimary: true
      hosts:
      - a:27017
      setName: rs
      minWireVersion: 0
      maxWireVersion: 8
  outcome: &outcome
    servers:
      a:27017:
        type: RSPrimary
        setName: rs
        topologyVersion: null
        pool: {generation: 0}
    topologyType: ReplicaSetWithPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs
- description: Post-4.2 PrimarySteppedDown error marks server Unknown
  applicationErrors:
  - address: a:27017
    when: afterHandshakeCompletes
    maxWireVersion: 8
    type: command
    response:
      ok: 0
      errmsg: PrimarySteppedDown
      code: 189
  outcome:
    servers:
      a:27017:
        type: Unknown
        topologyVersion: null
        pool: {generation: 0}
    topologyType: ReplicaSetNoPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs

# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors/post-42-ShutdownInProgress.yml
# Autogenerated tests for SDAM error handling, see generate-error-tests.py
description: Post-4.2 ShutdownInProgress error
uri: mongodb://a/?replicaSet=rs
phases:
- description: Primary A is discovered
  responses:
  - - a:27017
    - ok: 1
      helloOk: true
      isWritablePrimary: true
      hosts:
      - a:27017
      setName: rs
      minWireVersion: 0
      maxWireVersion: 8
  outcome: &outcome
    servers:
      a:27017:
        type: RSPrimary
        setName: rs
        topologyVersion: null
        pool: {generation: 0}
    topologyType: ReplicaSetWithPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs
- description: Post-4.2 ShutdownInProgress error marks server Unknown
  applicationErrors:
  - address: a:27017
    when: afterHandshakeCompletes
    maxWireVersion: 8
    type: command
    response:
      ok: 0
      errmsg: ShutdownInProgress
      code: 91
  outcome:
    servers:
      a:27017:
        type: Unknown
        topologyVersion: null
        pool: {generation: 1}
    topologyType: ReplicaSetNoPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs
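The post-42-* fixtures above and the pre-42-* fixtures that follow differ only in maxWireVersion (8 vs. 7) and in whether the pool generation is bumped. A short Ruby sketch of the gate they encode, read off the expected outcomes (an inference from the fixtures, not the driver's code):

    # "Not primary" / "node is recovering" errors always mark the server Unknown,
    # but the pool is only cleared (generation incremented) for shutdown errors,
    # or for any such error from a pre-4.2 server (maxWireVersion < 8).
    SHUTDOWN_CODES = [91, 11600].freeze  # ShutdownInProgress, InterruptedAtShutdown

    def clears_pool?(code, max_wire_version)
      SHUTDOWN_CODES.include?(code) || max_wire_version < 8
    end

    p clears_pool?(10107, 8)  # => false: post-4.2 NotWritablePrimary keeps generation 0
    p clears_pool?(91, 8)     # => true:  post-4.2 ShutdownInProgress bumps generation to 1
    p clears_pool?(10107, 7)  # => true:  pre-4.2, every such error clears the pool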
# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors/pre-42-InterruptedAtShutdown.yml
# Autogenerated tests for SDAM error handling, see generate-error-tests.py
description: Pre-4.2 InterruptedAtShutdown error
uri: mongodb://a/?replicaSet=rs
phases:
- description: Primary A is discovered
  responses:
  - - a:27017
    - ok: 1
      helloOk: true
      isWritablePrimary: true
      hosts:
      - a:27017
      setName: rs
      minWireVersion: 0
      maxWireVersion: 7
  outcome: &outcome
    servers:
      a:27017:
        type: RSPrimary
        setName: rs
        topologyVersion: null
        pool: {generation: 0}
    topologyType: ReplicaSetWithPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs
- description: Pre-4.2 InterruptedAtShutdown error marks server Unknown and clears the pool
  applicationErrors:
  - address: a:27017
    when: afterHandshakeCompletes
    maxWireVersion: 7
    type: command
    response:
      ok: 0
      errmsg: InterruptedAtShutdown
      code: 11600
  outcome:
    servers:
      a:27017:
        type: Unknown
        topologyVersion: null
        pool: {generation: 1}
    topologyType: ReplicaSetNoPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs

# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors/pre-42-InterruptedDueToReplStateChange.yml
# Autogenerated tests for SDAM error handling, see generate-error-tests.py
description: Pre-4.2 InterruptedDueToReplStateChange error
uri: mongodb://a/?replicaSet=rs
phases:
- description: Primary A is discovered
  responses:
  - - a:27017
    - ok: 1
      helloOk: true
      isWritablePrimary: true
      hosts:
      - a:27017
      setName: rs
      minWireVersion: 0
      maxWireVersion: 7
  outcome: &outcome
    servers:
      a:27017:
        type: RSPrimary
        setName: rs
        topologyVersion: null
        pool: {generation: 0}
    topologyType: ReplicaSetWithPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs
- description: Pre-4.2 InterruptedDueToReplStateChange error marks server Unknown and clears the pool
  applicationErrors:
  - address: a:27017
    when: afterHandshakeCompletes
    maxWireVersion: 7
    type: command
    response:
      ok: 0
      errmsg: InterruptedDueToReplStateChange
      code: 11602
  outcome:
    servers:
      a:27017:
        type: Unknown
        topologyVersion: null
        pool: {generation: 1}
    topologyType: ReplicaSetNoPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs

# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors/pre-42-LegacyNotPrimary.yml
# Autogenerated tests for SDAM error handling, see generate-error-tests.py
description: Pre-4.2 LegacyNotPrimary error
uri: mongodb://a/?replicaSet=rs
phases:
- description: Primary A is discovered
  responses:
  - - a:27017
    - ok: 1
      helloOk: true
      isWritablePrimary: true
      hosts:
      - a:27017
      setName: rs
      minWireVersion: 0
      maxWireVersion: 7
  outcome: &outcome
    servers:
      a:27017:
        type: RSPrimary
        setName: rs
        topologyVersion: null
        pool: {generation: 0}
    topologyType: ReplicaSetWithPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs
- description: Pre-4.2 LegacyNotPrimary error marks server Unknown and clears the pool
  applicationErrors:
  - address: a:27017
    when: afterHandshakeCompletes
    maxWireVersion: 7
    type: command
    response:
      ok: 0
      errmsg: LegacyNotPrimary
      code: 10058
  outcome:
    servers:
      a:27017:
        type: Unknown
        topologyVersion: null
        pool: {generation: 1}
    topologyType: ReplicaSetNoPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs
# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors/pre-42-NotPrimaryNoSecondaryOk.yml
# Autogenerated tests for SDAM error handling, see generate-error-tests.py
description: Pre-4.2 NotPrimaryNoSecondaryOk error
uri: mongodb://a/?replicaSet=rs
phases:
- description: Primary A is discovered
  responses:
  - - a:27017
    - ok: 1
      helloOk: true
      isWritablePrimary: true
      hosts:
      - a:27017
      setName: rs
      minWireVersion: 0
      maxWireVersion: 7
  outcome: &outcome
    servers:
      a:27017:
        type: RSPrimary
        setName: rs
        topologyVersion: null
        pool: {generation: 0}
    topologyType: ReplicaSetWithPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs
- description: Pre-4.2 NotPrimaryNoSecondaryOk error marks server Unknown and clears the pool
  applicationErrors:
  - address: a:27017
    when: afterHandshakeCompletes
    maxWireVersion: 7
    type: command
    response:
      ok: 0
      errmsg: NotPrimaryNoSecondaryOk
      code: 13435
  outcome:
    servers:
      a:27017:
        type: Unknown
        topologyVersion: null
        pool: {generation: 1}
    topologyType: ReplicaSetNoPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs

# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors/pre-42-NotPrimaryOrSecondary.yml
# Autogenerated tests for SDAM error handling, see generate-error-tests.py
description: Pre-4.2 NotPrimaryOrSecondary error
uri: mongodb://a/?replicaSet=rs
phases:
- description: Primary A is discovered
  responses:
  - - a:27017
    - ok: 1
      helloOk: true
      isWritablePrimary: true
      hosts:
      - a:27017
      setName: rs
      minWireVersion: 0
      maxWireVersion: 7
  outcome: &outcome
    servers:
      a:27017:
        type: RSPrimary
        setName: rs
        topologyVersion: null
        pool: {generation: 0}
    topologyType: ReplicaSetWithPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs
- description: Pre-4.2 NotPrimaryOrSecondary error marks server Unknown and clears the pool
  applicationErrors:
  - address: a:27017
    when: afterHandshakeCompletes
    maxWireVersion: 7
    type: command
    response:
      ok: 0
      errmsg: NotPrimaryOrSecondary
      code: 13436
  outcome:
    servers:
      a:27017:
        type: Unknown
        topologyVersion: null
        pool: {generation: 1}
    topologyType: ReplicaSetNoPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs

# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors/pre-42-NotWritablePrimary.yml
# Autogenerated tests for SDAM error handling, see generate-error-tests.py
description: Pre-4.2 NotWritablePrimary error
uri: mongodb://a/?replicaSet=rs
phases:
- description: Primary A is discovered
  responses:
  - - a:27017
    - ok: 1
      helloOk: true
      isWritablePrimary: true
      hosts:
      - a:27017
      setName: rs
      minWireVersion: 0
      maxWireVersion: 7
  outcome: &outcome
    servers:
      a:27017:
        type: RSPrimary
        setName: rs
        topologyVersion: null
        pool: {generation: 0}
    topologyType: ReplicaSetWithPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs
- description: Pre-4.2 NotWritablePrimary error marks server Unknown and clears the pool
  applicationErrors:
  - address: a:27017
    when: afterHandshakeCompletes
    maxWireVersion: 7
    type: command
    response:
      ok: 0
      errmsg: NotWritablePrimary
      code: 10107
  outcome:
    servers:
      a:27017:
        type: Unknown
        topologyVersion: null
        pool: {generation: 1}
    topologyType: ReplicaSetNoPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs
# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors/pre-42-PrimarySteppedDown.yml
# Autogenerated tests for SDAM error handling, see generate-error-tests.py
description: Pre-4.2 PrimarySteppedDown error
uri: mongodb://a/?replicaSet=rs
phases:
- description: Primary A is discovered
  responses:
  - - a:27017
    - ok: 1
      helloOk: true
      isWritablePrimary: true
      hosts:
      - a:27017
      setName: rs
      minWireVersion: 0
      maxWireVersion: 7
  outcome: &outcome
    servers:
      a:27017:
        type: RSPrimary
        setName: rs
        topologyVersion: null
        pool: {generation: 0}
    topologyType: ReplicaSetWithPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs
- description: Pre-4.2 PrimarySteppedDown error marks server Unknown and clears the pool
  applicationErrors:
  - address: a:27017
    when: afterHandshakeCompletes
    maxWireVersion: 7
    type: command
    response:
      ok: 0
      errmsg: PrimarySteppedDown
      code: 189
  outcome:
    servers:
      a:27017:
        type: Unknown
        topologyVersion: null
        pool: {generation: 1}
    topologyType: ReplicaSetNoPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs

# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors/pre-42-ShutdownInProgress.yml
# Autogenerated tests for SDAM error handling, see generate-error-tests.py
description: Pre-4.2 ShutdownInProgress error
uri: mongodb://a/?replicaSet=rs
phases:
- description: Primary A is discovered
  responses:
  - - a:27017
    - ok: 1
      helloOk: true
      isWritablePrimary: true
      hosts:
      - a:27017
      setName: rs
      minWireVersion: 0
      maxWireVersion: 7
  outcome: &outcome
    servers:
      a:27017:
        type: RSPrimary
        setName: rs
        topologyVersion: null
        pool: {generation: 0}
    topologyType: ReplicaSetWithPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs
- description: Pre-4.2 ShutdownInProgress error marks server Unknown and clears the pool
  applicationErrors:
  - address: a:27017
    when: afterHandshakeCompletes
    maxWireVersion: 7
    type: command
    response:
      ok: 0
      errmsg: ShutdownInProgress
      code: 91
  outcome:
    servers:
      a:27017:
        type: Unknown
        topologyVersion: null
        pool: {generation: 1}
    topologyType: ReplicaSetNoPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs

# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors/prefer-error-code.yml
description: Do not check errmsg when code exists
uri: mongodb://a/?replicaSet=rs
phases:
- description: Primary A is discovered
  responses:
  - - a:27017
    - ok: 1
      helloOk: true
      isWritablePrimary: true
      hosts:
      - a:27017
      setName: rs
      minWireVersion: 0
      maxWireVersion: 9
      topologyVersion: &topologyVersion_1_1 {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": '1'}}
  outcome: &outcome
    servers:
      a:27017:
        type: RSPrimary
        setName: rs
        topologyVersion: *topologyVersion_1_1
        pool: {generation: 0}
    topologyType: ReplicaSetWithPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs
- description: errmsg "not master" gets ignored when error code exists
  applicationErrors:
  - address: a:27017
    when: afterHandshakeCompletes
    maxWireVersion: 9
    type: command
    response:
      ok: 0
      errmsg: "not master" # NOTE: This needs to be "not master" and not "not writable primary".
      code: 1 # Not a "not writable primary" error code.
  outcome: *outcome
- description: errmsg "node is recovering" gets ignored when error code exists
  applicationErrors:
  - address: a:27017
    when: afterHandshakeCompletes
    maxWireVersion: 9
    type: command
    response:
      ok: 0
      errmsg: "node is recovering"
      code: 1 # Not a "node is recovering" error code.
  outcome: *outcome
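prefer-error-code.yml above pins down a precedence rule: when a response carries a numeric error code, classification uses the code alone, and the legacy "not master" / "node is recovering" substring matching on errmsg applies only when no code is returned. A small Ruby illustration of that rule (the code lists are the ones appearing in these fixtures; this is an assumed sketch, not the driver's implementation):

    NOT_PRIMARY_CODES = [10058, 10107, 13435].freeze
    RECOVERING_CODES  = [91, 189, 11600, 11602, 13436].freeze

    def state_change_error?(response)
      if (code = response["code"])
        (NOT_PRIMARY_CODES + RECOVERING_CODES).include?(code)
      else
        # Fallback string matching, only for responses that carry no code.
        ["not master", "node is recovering"].any? { |s| response.fetch("errmsg", "").include?(s) }
      end
    end

    p state_change_error?("ok" => 0, "errmsg" => "not master", "code" => 1)  # => false: the code wins
    p state_change_error?("ok" => 0, "errmsg" => "not master")               # => true: fallback matches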
# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors/stale-generation-InterruptedAtShutdown.yml
# Autogenerated tests for SDAM error handling, see generate-error-tests.py
description: Stale generation InterruptedAtShutdown error
uri: mongodb://a/?replicaSet=rs
phases:
- description: Primary A is discovered
  responses:
  - - a:27017
    - ok: 1
      helloOk: true
      isWritablePrimary: true
      hosts:
      - a:27017
      setName: rs
      minWireVersion: 0
      maxWireVersion: 9
      topologyVersion: &topologyVersion_1_1 {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": '1'}}
  outcome:
    servers:
      a:27017:
        type: RSPrimary
        setName: rs
        topologyVersion: *topologyVersion_1_1
        pool: {generation: 0}
    topologyType: ReplicaSetWithPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs
# Process a network error which increments the pool generation.
- description: Non-stale application network error
  applicationErrors:
  - address: a:27017
    when: afterHandshakeCompletes
    maxWireVersion: 9
    type: network
  outcome:
    servers:
      a:27017:
        type: Unknown
        topologyVersion: null
        pool: {generation: 1}
    topologyType: ReplicaSetNoPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs
- description: Primary A is rediscovered
  responses:
  - - a:27017
    - ok: 1
      helloOk: true
      isWritablePrimary: true
      hosts:
      - a:27017
      setName: rs
      minWireVersion: 0
      maxWireVersion: 9
      topologyVersion: *topologyVersion_1_1
  outcome: &outcome
    servers:
      a:27017:
        type: RSPrimary
        setName: rs
        topologyVersion: *topologyVersion_1_1
        pool: {generation: 1}
    topologyType: ReplicaSetWithPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs
- description: Ignore stale InterruptedAtShutdown error (stale generation)
  applicationErrors:
  - address: a:27017
    generation: 0
    when: afterHandshakeCompletes
    maxWireVersion: 9
    type: command
    response:
      ok: 0
      errmsg: InterruptedAtShutdown
      code: 11600
      topologyVersion: {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": '2'}}
  outcome: *outcome
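The stale-generation-* fixtures, including the remaining ones below, all follow the four-phase shape above: a network error clears the pool (generation 0 to 1), the primary is rediscovered, and a command error observed on a generation-0 connection must then be ignored. The check itself is a single comparison; a Ruby sketch (illustrative only, not the driver's code):

    # An error is stale if the connection it was seen on belongs to an older
    # pool generation than the current one, i.e. the pool has already been
    # cleared since that connection was created.
    def stale_generation?(error_generation, pool_generation)
      error_generation < pool_generation
    end

    p stale_generation?(0, 1)  # => true: ignored, as in the "Ignore stale ... (stale generation)" phases
    p stale_generation?(1, 1)  # => false: handled normally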
# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors/stale-generation-InterruptedDueToReplStateChange.yml
# Autogenerated tests for SDAM error handling, see generate-error-tests.py
description: Stale generation InterruptedDueToReplStateChange error
uri: mongodb://a/?replicaSet=rs
phases:
- description: Primary A is discovered
  responses:
  - - a:27017
    - ok: 1
      helloOk: true
      isWritablePrimary: true
      hosts:
      - a:27017
      setName: rs
      minWireVersion: 0
      maxWireVersion: 9
      topologyVersion: &topologyVersion_1_1 {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": '1'}}
  outcome:
    servers:
      a:27017:
        type: RSPrimary
        setName: rs
        topologyVersion: *topologyVersion_1_1
        pool: {generation: 0}
    topologyType: ReplicaSetWithPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs
# Process a network error which increments the pool generation.
- description: Non-stale application network error
  applicationErrors:
  - address: a:27017
    when: afterHandshakeCompletes
    maxWireVersion: 9
    type: network
  outcome:
    servers:
      a:27017:
        type: Unknown
        topologyVersion: null
        pool: {generation: 1}
    topologyType: ReplicaSetNoPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs
- description: Primary A is rediscovered
  responses:
  - - a:27017
    - ok: 1
      helloOk: true
      isWritablePrimary: true
      hosts:
      - a:27017
      setName: rs
      minWireVersion: 0
      maxWireVersion: 9
      topologyVersion: *topologyVersion_1_1
  outcome: &outcome
    servers:
      a:27017:
        type: RSPrimary
        setName: rs
        topologyVersion: *topologyVersion_1_1
        pool: {generation: 1}
    topologyType: ReplicaSetWithPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs
- description: Ignore stale InterruptedDueToReplStateChange error (stale generation)
  applicationErrors:
  - address: a:27017
    generation: 0
    when: afterHandshakeCompletes
    maxWireVersion: 9
    type: command
    response:
      ok: 0
      errmsg: InterruptedDueToReplStateChange
      code: 11602
      topologyVersion: {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": '2'}}
  outcome: *outcome

# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors/stale-generation-NotPrimaryNoSecondaryOk.yml
# Autogenerated tests for SDAM error handling, see generate-error-tests.py
description: Stale generation NotPrimaryNoSecondaryOk error
uri: mongodb://a/?replicaSet=rs
phases:
- description: Primary A is discovered
  responses:
  - - a:27017
    - ok: 1
      helloOk: true
      isWritablePrimary: true
      hosts:
      - a:27017
      setName: rs
      minWireVersion: 0
      maxWireVersion: 9
      topologyVersion: &topologyVersion_1_1 {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": '1'}}
  outcome:
    servers:
      a:27017:
        type: RSPrimary
        setName: rs
        topologyVersion: *topologyVersion_1_1
        pool: {generation: 0}
    topologyType: ReplicaSetWithPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs
# Process a network error which increments the pool generation.
- description: Non-stale application network error
  applicationErrors:
  - address: a:27017
    when: afterHandshakeCompletes
    maxWireVersion: 9
    type: network
  outcome:
    servers:
      a:27017:
        type: Unknown
        topologyVersion: null
        pool: {generation: 1}
    topologyType: ReplicaSetNoPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs
- description: Primary A is rediscovered
  responses:
  - - a:27017
    - ok: 1
      helloOk: true
      isWritablePrimary: true
      hosts:
      - a:27017
      setName: rs
      minWireVersion: 0
      maxWireVersion: 9
      topologyVersion: *topologyVersion_1_1
  outcome: &outcome
    servers:
      a:27017:
        type: RSPrimary
        setName: rs
        topologyVersion: *topologyVersion_1_1
        pool: {generation: 1}
    topologyType: ReplicaSetWithPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs
- description: Ignore stale NotPrimaryNoSecondaryOk error (stale generation)
  applicationErrors:
  - address: a:27017
    generation: 0
    when: afterHandshakeCompletes
    maxWireVersion: 9
    type: command
    response:
      ok: 0
      errmsg: NotPrimaryNoSecondaryOk
      code: 13435
      topologyVersion: {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": '2'}}
  outcome: *outcome

# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors/stale-generation-NotPrimaryOrSecondary.yml
# Autogenerated tests for SDAM error handling, see generate-error-tests.py
description: Stale generation NotPrimaryOrSecondary error
uri: mongodb://a/?replicaSet=rs
phases:
- description: Primary A is discovered
  responses:
  - - a:27017
    - ok: 1
      helloOk: true
      isWritablePrimary: true
      hosts:
      - a:27017
      setName: rs
      minWireVersion: 0
      maxWireVersion: 9
      topologyVersion: &topologyVersion_1_1 {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": '1'}}
  outcome:
    servers:
      a:27017:
        type: RSPrimary
        setName: rs
        topologyVersion: *topologyVersion_1_1
        pool: {generation: 0}
    topologyType: ReplicaSetWithPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs
# Process a network error which increments the pool generation.
- description: Non-stale application network error
  applicationErrors:
  - address: a:27017
    when: afterHandshakeCompletes
    maxWireVersion: 9
    type: network
  outcome:
    servers:
      a:27017:
        type: Unknown
        topologyVersion: null
        pool: {generation: 1}
    topologyType: ReplicaSetNoPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs
- description: Primary A is rediscovered
  responses:
  - - a:27017
    - ok: 1
      helloOk: true
      isWritablePrimary: true
      hosts:
      - a:27017
      setName: rs
      minWireVersion: 0
      maxWireVersion: 9
      topologyVersion: *topologyVersion_1_1
  outcome: &outcome
    servers:
      a:27017:
        type: RSPrimary
        setName: rs
        topologyVersion: *topologyVersion_1_1
        pool: {generation: 1}
    topologyType: ReplicaSetWithPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs
- description: Ignore stale NotPrimaryOrSecondary error (stale generation)
  applicationErrors:
  - address: a:27017
    generation: 0
    when: afterHandshakeCompletes
    maxWireVersion: 9
    type: command
    response:
      ok: 0
      errmsg: NotPrimaryOrSecondary
      code: 13436
      topologyVersion: {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": '2'}}
  outcome: *outcome

# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors/stale-generation-NotWritablePrimary.yml
# Autogenerated tests for SDAM error handling, see generate-error-tests.py
description: Stale generation NotWritablePrimary error
uri: mongodb://a/?replicaSet=rs
phases:
- description: Primary A is discovered
  responses:
  - - a:27017
    - ok: 1
      helloOk: true
      isWritablePrimary: true
      hosts:
      - a:27017
      setName: rs
      minWireVersion: 0
      maxWireVersion: 9
      topologyVersion: &topologyVersion_1_1 {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": '1'}}
  outcome:
    servers:
      a:27017:
        type: RSPrimary
        setName: rs
        topologyVersion: *topologyVersion_1_1
        pool: {generation: 0}
    topologyType: ReplicaSetWithPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs
# Process a network error which increments the pool generation.
- description: Non-stale application network error
  applicationErrors:
  - address: a:27017
    when: afterHandshakeCompletes
    maxWireVersion: 9
    type: network
  outcome:
    servers:
      a:27017:
        type: Unknown
        topologyVersion: null
        pool: {generation: 1}
    topologyType: ReplicaSetNoPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs
- description: Primary A is rediscovered
  responses:
  - - a:27017
    - ok: 1
      helloOk: true
      isWritablePrimary: true
      hosts:
      - a:27017
      setName: rs
      minWireVersion: 0
      maxWireVersion: 9
      topologyVersion: *topologyVersion_1_1
  outcome: &outcome
    servers:
      a:27017:
        type: RSPrimary
        setName: rs
        topologyVersion: *topologyVersion_1_1
        pool: {generation: 1}
    topologyType: ReplicaSetWithPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs
- description: Ignore stale NotWritablePrimary error (stale generation)
  applicationErrors:
  - address: a:27017
    generation: 0
    when: afterHandshakeCompletes
    maxWireVersion: 9
    type: command
    response:
      ok: 0
      errmsg: NotWritablePrimary
      code: 10107
      topologyVersion: {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": '2'}}
  outcome: *outcome

# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors/stale-generation-PrimarySteppedDown.yml
# Autogenerated tests for SDAM error handling, see generate-error-tests.py
description: Stale generation PrimarySteppedDown error
uri: mongodb://a/?replicaSet=rs
phases:
- description: Primary A is discovered
  responses:
  - - a:27017
    - ok: 1
      helloOk: true
      isWritablePrimary: true
      hosts:
      - a:27017
      setName: rs
      minWireVersion: 0
      maxWireVersion: 9
      topologyVersion: &topologyVersion_1_1 {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": '1'}}
  outcome:
    servers:
      a:27017:
        type: RSPrimary
        setName: rs
        topologyVersion: *topologyVersion_1_1
        pool: {generation: 0}
    topologyType: ReplicaSetWithPrimary
    logicalSessionTimeoutMinutes: null
    setName: rs
# Process a network error which increments the pool generation.
- description: Non-stale application network error
  applicationErrors:
  - {address: "a:27017", when: afterHandshakeCompletes, maxWireVersion: 9, type: network}
  outcome: {servers: {"a:27017": {type: Unknown, topologyVersion: null, pool: {generation: 1}}}, topologyType: ReplicaSetNoPrimary, logicalSessionTimeoutMinutes: null, setName: rs}
- description: Primary A is rediscovered
  responses:
  - ["a:27017", {ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017"], setName: rs, minWireVersion: 0, maxWireVersion: 9, topologyVersion: *topologyVersion_1_1}]
  outcome: &outcome {servers: {"a:27017": {type: RSPrimary, setName: rs, topologyVersion: *topologyVersion_1_1, pool: {generation: 1}}}, topologyType: ReplicaSetWithPrimary, logicalSessionTimeoutMinutes: null, setName: rs}
- description: Ignore stale PrimarySteppedDown error (stale generation)
  applicationErrors:
  - {address: "a:27017", generation: 0, when: afterHandshakeCompletes, maxWireVersion: 9, type: command, response: {ok: 0, errmsg: PrimarySteppedDown, code: 189, topologyVersion: {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": '2'}}}}
  outcome: *outcome

mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors/stale-generation-ShutdownInProgress.yml

# Autogenerated tests for SDAM error handling, see generate-error-tests.py
description: Stale generation ShutdownInProgress error
uri: mongodb://a/?replicaSet=rs
phases:
- description: Primary A is discovered
  responses:
  - ["a:27017", {ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017"], setName: rs, minWireVersion: 0, maxWireVersion: 9, topologyVersion: &topologyVersion_1_1 {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": '1'}}}]
  outcome: {servers: {"a:27017": {type: RSPrimary, setName: rs, topologyVersion: *topologyVersion_1_1, pool: {generation: 0}}}, topologyType: ReplicaSetWithPrimary, logicalSessionTimeoutMinutes: null, setName: rs}
# Process a network error which increments the pool generation.
- description: Non-stale application network error
  applicationErrors:
  - {address: "a:27017", when: afterHandshakeCompletes, maxWireVersion: 9, type: network}
  outcome: {servers: {"a:27017": {type: Unknown, topologyVersion: null, pool: {generation: 1}}}, topologyType: ReplicaSetNoPrimary, logicalSessionTimeoutMinutes: null, setName: rs}
- description: Primary A is rediscovered
  responses:
  - ["a:27017", {ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017"], setName: rs, minWireVersion: 0, maxWireVersion: 9, topologyVersion: *topologyVersion_1_1}]
  outcome: &outcome {servers: {"a:27017": {type: RSPrimary, setName: rs, topologyVersion: *topologyVersion_1_1, pool: {generation: 1}}}, topologyType: ReplicaSetWithPrimary, logicalSessionTimeoutMinutes: null, setName: rs}
- description: Ignore stale ShutdownInProgress error (stale generation)
  applicationErrors:
  - {address: "a:27017", generation: 0, when: afterHandshakeCompletes, maxWireVersion: 9, type: command, response: {ok: 0, errmsg: ShutdownInProgress, code: 91, topologyVersion: {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": '2'}}}}
  outcome: *outcome

mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors/stale-generation-afterHandshakeCompletes-InterruptedAtShutdown.yml

# Autogenerated tests for SDAM error handling, see generate-error-tests.py
description: Stale generation InterruptedAtShutdown error afterHandshakeCompletes
uri: mongodb://a/?replicaSet=rs
phases:
- description: Primary A is discovered
  responses:
  - ["a:27017", {ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017"], setName: rs, minWireVersion: 0, maxWireVersion: 9, topologyVersion: &topologyVersion_1_1 {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": '1'}}}]
  outcome: {servers: {"a:27017": {type: RSPrimary, setName: rs, topologyVersion: *topologyVersion_1_1, pool: {generation: 0}}}, topologyType: ReplicaSetWithPrimary, logicalSessionTimeoutMinutes: null, setName: rs}
# Process a network error which increments the pool generation.
- description: Non-stale application network error
  applicationErrors:
  - {address: "a:27017", when: afterHandshakeCompletes, maxWireVersion: 9, type: network}
  outcome: {servers: {"a:27017": {type: Unknown, topologyVersion: null, pool: {generation: 1}}}, topologyType: ReplicaSetNoPrimary, logicalSessionTimeoutMinutes: null, setName: rs}
- description: Primary A is rediscovered
  responses:
  - ["a:27017", {ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017"], setName: rs, minWireVersion: 0, maxWireVersion: 9, topologyVersion: *topologyVersion_1_1}]
  outcome: &outcome {servers: {"a:27017": {type: RSPrimary, setName: rs, topologyVersion: *topologyVersion_1_1, pool: {generation: 1}}}, topologyType: ReplicaSetWithPrimary, logicalSessionTimeoutMinutes: null, setName: rs}
- description: Ignore stale InterruptedAtShutdown error (stale generation)
  applicationErrors:
  - {address: "a:27017", generation: 0, when: afterHandshakeCompletes, maxWireVersion: 9, type: command, response: {ok: 0, errmsg: InterruptedAtShutdown, code: 11600, topologyVersion: {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": "2"}}}}
  outcome: *outcome

mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors/stale-generation-afterHandshakeCompletes-InterruptedDueToReplStateChange.yml

# Autogenerated tests for SDAM error handling, see generate-error-tests.py
description: Stale generation InterruptedDueToReplStateChange error afterHandshakeCompletes
uri: mongodb://a/?replicaSet=rs
phases:
- description: Primary A is discovered
  responses:
  - ["a:27017", {ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017"], setName: rs, minWireVersion: 0, maxWireVersion: 9, topologyVersion: &topologyVersion_1_1 {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": '1'}}}]
  outcome: {servers: {"a:27017": {type: RSPrimary, setName: rs, topologyVersion: *topologyVersion_1_1, pool: {generation: 0}}}, topologyType: ReplicaSetWithPrimary, logicalSessionTimeoutMinutes: null, setName: rs}
# Process a network error which increments the pool generation.
- description: Non-stale application network error
  applicationErrors:
  - {address: "a:27017", when: afterHandshakeCompletes, maxWireVersion: 9, type: network}
  outcome: {servers: {"a:27017": {type: Unknown, topologyVersion: null, pool: {generation: 1}}}, topologyType: ReplicaSetNoPrimary, logicalSessionTimeoutMinutes: null, setName: rs}
- description: Primary A is rediscovered
  responses:
  - ["a:27017", {ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017"], setName: rs, minWireVersion: 0, maxWireVersion: 9, topologyVersion: *topologyVersion_1_1}]
  outcome: &outcome {servers: {"a:27017": {type: RSPrimary, setName: rs, topologyVersion: *topologyVersion_1_1, pool: {generation: 1}}}, topologyType: ReplicaSetWithPrimary, logicalSessionTimeoutMinutes: null, setName: rs}
- description: Ignore stale InterruptedDueToReplStateChange error (stale generation)
  applicationErrors:
  - {address: "a:27017", generation: 0, when: afterHandshakeCompletes, maxWireVersion: 9, type: command, response: {ok: 0, errmsg: InterruptedDueToReplStateChange, code: 11602, topologyVersion: {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": "2"}}}}
  outcome: *outcome

mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors/stale-generation-afterHandshakeCompletes-LegacyNotPrimary.yml

# Autogenerated tests for SDAM error handling, see generate-error-tests.py
description: Stale generation LegacyNotPrimary error afterHandshakeCompletes
uri: mongodb://a/?replicaSet=rs
phases:
- description: Primary A is discovered
  responses:
  - ["a:27017", {ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017"], setName: rs, minWireVersion: 0, maxWireVersion: 9, topologyVersion: &topologyVersion_1_1 {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": '1'}}}]
  outcome: {servers: {"a:27017": {type: RSPrimary, setName: rs, topologyVersion: *topologyVersion_1_1, pool: {generation: 0}}}, topologyType: ReplicaSetWithPrimary, logicalSessionTimeoutMinutes: null, setName: rs}
# Process a network error which increments the pool generation.
- description: Non-stale application network error
  applicationErrors:
  - {address: "a:27017", when: afterHandshakeCompletes, maxWireVersion: 9, type: network}
  outcome: {servers: {"a:27017": {type: Unknown, topologyVersion: null, pool: {generation: 1}}}, topologyType: ReplicaSetNoPrimary, logicalSessionTimeoutMinutes: null, setName: rs}
- description: Primary A is rediscovered
  responses:
  - ["a:27017", {ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017"], setName: rs, minWireVersion: 0, maxWireVersion: 9, topologyVersion: *topologyVersion_1_1}]
  outcome: &outcome {servers: {"a:27017": {type: RSPrimary, setName: rs, topologyVersion: *topologyVersion_1_1, pool: {generation: 1}}}, topologyType: ReplicaSetWithPrimary, logicalSessionTimeoutMinutes: null, setName: rs}
- description: Ignore stale LegacyNotPrimary error (stale generation)
  applicationErrors:
  - {address: "a:27017", generation: 0, when: afterHandshakeCompletes, maxWireVersion: 9, type: command, response: {ok: 0, errmsg: LegacyNotPrimary, code: 10058, topologyVersion: {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": "2"}}}}
  outcome: *outcome

mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors/stale-generation-afterHandshakeCompletes-NotPrimaryNoSecondaryOk.yml

# Autogenerated tests for SDAM error handling, see generate-error-tests.py
description: Stale generation NotPrimaryNoSecondaryOk error afterHandshakeCompletes
uri: mongodb://a/?replicaSet=rs
phases:
- description: Primary A is discovered
  responses:
  - ["a:27017", {ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017"], setName: rs, minWireVersion: 0, maxWireVersion: 9, topologyVersion: &topologyVersion_1_1 {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": '1'}}}]
  outcome: {servers: {"a:27017": {type: RSPrimary, setName: rs, topologyVersion: *topologyVersion_1_1, pool: {generation: 0}}}, topologyType: ReplicaSetWithPrimary, logicalSessionTimeoutMinutes: null, setName: rs}
# Process a network error which increments the pool generation.
- description: Non-stale application network error
  applicationErrors:
  - {address: "a:27017", when: afterHandshakeCompletes, maxWireVersion: 9, type: network}
  outcome: {servers: {"a:27017": {type: Unknown, topologyVersion: null, pool: {generation: 1}}}, topologyType: ReplicaSetNoPrimary, logicalSessionTimeoutMinutes: null, setName: rs}
- description: Primary A is rediscovered
  responses:
  - ["a:27017", {ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017"], setName: rs, minWireVersion: 0, maxWireVersion: 9, topologyVersion: *topologyVersion_1_1}]
  outcome: &outcome {servers: {"a:27017": {type: RSPrimary, setName: rs, topologyVersion: *topologyVersion_1_1, pool: {generation: 1}}}, topologyType: ReplicaSetWithPrimary, logicalSessionTimeoutMinutes: null, setName: rs}
- description: Ignore stale NotPrimaryNoSecondaryOk error (stale generation)
  applicationErrors:
  - {address: "a:27017", generation: 0, when: afterHandshakeCompletes, maxWireVersion: 9, type: command, response: {ok: 0, errmsg: NotPrimaryNoSecondaryOk, code: 13435, topologyVersion: {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": "2"}}}}
  outcome: *outcome

mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors/stale-generation-afterHandshakeCompletes-NotPrimaryOrSecondary.yml

# Autogenerated tests for SDAM error handling, see generate-error-tests.py
description: Stale generation NotPrimaryOrSecondary error afterHandshakeCompletes
uri: mongodb://a/?replicaSet=rs
phases:
- description: Primary A is discovered
  responses:
  - ["a:27017", {ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017"], setName: rs, minWireVersion: 0, maxWireVersion: 9, topologyVersion: &topologyVersion_1_1 {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": '1'}}}]
  outcome: {servers: {"a:27017": {type: RSPrimary, setName: rs, topologyVersion: *topologyVersion_1_1, pool: {generation: 0}}}, topologyType: ReplicaSetWithPrimary, logicalSessionTimeoutMinutes: null, setName: rs}
# Process a network error which increments the pool generation.
- description: Non-stale application network error
  applicationErrors:
  - {address: "a:27017", when: afterHandshakeCompletes, maxWireVersion: 9, type: network}
  outcome: {servers: {"a:27017": {type: Unknown, topologyVersion: null, pool: {generation: 1}}}, topologyType: ReplicaSetNoPrimary, logicalSessionTimeoutMinutes: null, setName: rs}
- description: Primary A is rediscovered
  responses:
  - ["a:27017", {ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017"], setName: rs, minWireVersion: 0, maxWireVersion: 9, topologyVersion: *topologyVersion_1_1}]
  outcome: &outcome {servers: {"a:27017": {type: RSPrimary, setName: rs, topologyVersion: *topologyVersion_1_1, pool: {generation: 1}}}, topologyType: ReplicaSetWithPrimary, logicalSessionTimeoutMinutes: null, setName: rs}
- description: Ignore stale NotPrimaryOrSecondary error (stale generation)
  applicationErrors:
  - {address: "a:27017", generation: 0, when: afterHandshakeCompletes, maxWireVersion: 9, type: command, response: {ok: 0, errmsg: NotPrimaryOrSecondary, code: 13436, topologyVersion: {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": "2"}}}}
  outcome: *outcome

mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors/stale-generation-afterHandshakeCompletes-NotWritablePrimary.yml

# Autogenerated tests for SDAM error handling, see generate-error-tests.py
description: Stale generation NotWritablePrimary error afterHandshakeCompletes
uri: mongodb://a/?replicaSet=rs
phases:
- description: Primary A is discovered
  responses:
  - ["a:27017", {ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017"], setName: rs, minWireVersion: 0, maxWireVersion: 9, topologyVersion: &topologyVersion_1_1 {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": '1'}}}]
  outcome: {servers: {"a:27017": {type: RSPrimary, setName: rs, topologyVersion: *topologyVersion_1_1, pool: {generation: 0}}}, topologyType: ReplicaSetWithPrimary, logicalSessionTimeoutMinutes: null, setName: rs}
# Process a network error which increments the pool generation.
- description: Non-stale application network error
  applicationErrors:
  - {address: "a:27017", when: afterHandshakeCompletes, maxWireVersion: 9, type: network}
  outcome: {servers: {"a:27017": {type: Unknown, topologyVersion: null, pool: {generation: 1}}}, topologyType: ReplicaSetNoPrimary, logicalSessionTimeoutMinutes: null, setName: rs}
- description: Primary A is rediscovered
  responses:
  - ["a:27017", {ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017"], setName: rs, minWireVersion: 0, maxWireVersion: 9, topologyVersion: *topologyVersion_1_1}]
  outcome: &outcome {servers: {"a:27017": {type: RSPrimary, setName: rs, topologyVersion: *topologyVersion_1_1, pool: {generation: 1}}}, topologyType: ReplicaSetWithPrimary, logicalSessionTimeoutMinutes: null, setName: rs}
- description: Ignore stale NotWritablePrimary error (stale generation)
  applicationErrors:
  - {address: "a:27017", generation: 0, when: afterHandshakeCompletes, maxWireVersion: 9, type: command, response: {ok: 0, errmsg: NotWritablePrimary, code: 10107, topologyVersion: {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": "2"}}}}
  outcome: *outcome

mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors/stale-generation-afterHandshakeCompletes-PrimarySteppedDown.yml

# Autogenerated tests for SDAM error handling, see generate-error-tests.py
description: Stale generation PrimarySteppedDown error afterHandshakeCompletes
uri: mongodb://a/?replicaSet=rs
phases:
- description: Primary A is discovered
  responses:
  - ["a:27017", {ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017"], setName: rs, minWireVersion: 0, maxWireVersion: 9, topologyVersion: &topologyVersion_1_1 {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": '1'}}}]
  outcome: {servers: {"a:27017": {type: RSPrimary, setName: rs, topologyVersion: *topologyVersion_1_1, pool: {generation: 0}}}, topologyType: ReplicaSetWithPrimary, logicalSessionTimeoutMinutes: null, setName: rs}
# Process a network error which increments the pool generation.
- description: Non-stale application network error
  applicationErrors:
  - {address: "a:27017", when: afterHandshakeCompletes, maxWireVersion: 9, type: network}
  outcome: {servers: {"a:27017": {type: Unknown, topologyVersion: null, pool: {generation: 1}}}, topologyType: ReplicaSetNoPrimary, logicalSessionTimeoutMinutes: null, setName: rs}
- description: Primary A is rediscovered
  responses:
  - ["a:27017", {ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017"], setName: rs, minWireVersion: 0, maxWireVersion: 9, topologyVersion: *topologyVersion_1_1}]
  outcome: &outcome {servers: {"a:27017": {type: RSPrimary, setName: rs, topologyVersion: *topologyVersion_1_1, pool: {generation: 1}}}, topologyType: ReplicaSetWithPrimary, logicalSessionTimeoutMinutes: null, setName: rs}
- description: Ignore stale PrimarySteppedDown error (stale generation)
  applicationErrors:
  - {address: "a:27017", generation: 0, when: afterHandshakeCompletes, maxWireVersion: 9, type: command, response: {ok: 0, errmsg: PrimarySteppedDown, code: 189, topologyVersion: {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": "2"}}}}
  outcome: *outcome

mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors/stale-generation-afterHandshakeCompletes-ShutdownInProgress.yml

# Autogenerated tests for SDAM error handling, see generate-error-tests.py
description: Stale generation ShutdownInProgress error afterHandshakeCompletes
uri: mongodb://a/?replicaSet=rs
phases:
- description: Primary A is discovered
  responses:
  - ["a:27017", {ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017"], setName: rs, minWireVersion: 0, maxWireVersion: 9, topologyVersion: &topologyVersion_1_1 {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": '1'}}}]
  outcome: {servers: {"a:27017": {type: RSPrimary, setName: rs, topologyVersion: *topologyVersion_1_1, pool: {generation: 0}}}, topologyType: ReplicaSetWithPrimary, logicalSessionTimeoutMinutes: null, setName: rs}
# Process a network error which increments the pool generation.
- description: Non-stale application network error
  applicationErrors:
  - {address: "a:27017", when: afterHandshakeCompletes, maxWireVersion: 9, type: network}
  outcome: {servers: {"a:27017": {type: Unknown, topologyVersion: null, pool: {generation: 1}}}, topologyType: ReplicaSetNoPrimary, logicalSessionTimeoutMinutes: null, setName: rs}
- description: Primary A is rediscovered
  responses:
  - ["a:27017", {ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017"], setName: rs, minWireVersion: 0, maxWireVersion: 9, topologyVersion: *topologyVersion_1_1}]
  outcome: &outcome {servers: {"a:27017": {type: RSPrimary, setName: rs, topologyVersion: *topologyVersion_1_1, pool: {generation: 1}}}, topologyType: ReplicaSetWithPrimary, logicalSessionTimeoutMinutes: null, setName: rs}
- description: Ignore stale ShutdownInProgress error (stale generation)
  applicationErrors:
  - {address: "a:27017", generation: 0, when: afterHandshakeCompletes, maxWireVersion: 9, type: command, response: {ok: 0, errmsg: ShutdownInProgress, code: 91, topologyVersion: {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": "2"}}}}
  outcome: *outcome

mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors/stale-generation-afterHandshakeCompletes-network.yml

# Autogenerated tests for SDAM error handling, see generate-error-tests.py
description: Stale generation network error afterHandshakeCompletes
uri: mongodb://a/?replicaSet=rs
phases:
- description: Primary A is discovered
  responses:
  - ["a:27017", {ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017"], setName: rs, minWireVersion: 0, maxWireVersion: 9, topologyVersion: &topologyVersion_1_1 {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": '1'}}}]
  outcome: {servers: {"a:27017": {type: RSPrimary, setName: rs, topologyVersion: *topologyVersion_1_1, pool: {generation: 0}}}, topologyType: ReplicaSetWithPrimary, logicalSessionTimeoutMinutes: null, setName: rs}
# Process a network error which increments the pool generation.
- description: Non-stale application network error
  applicationErrors:
  - {address: "a:27017", when: afterHandshakeCompletes, maxWireVersion: 9, type: network}
  outcome: {servers: {"a:27017": {type: Unknown, topologyVersion: null, pool: {generation: 1}}}, topologyType: ReplicaSetNoPrimary, logicalSessionTimeoutMinutes: null, setName: rs}
- description: Primary A is rediscovered
  responses:
  - ["a:27017", {ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017"], setName: rs, minWireVersion: 0, maxWireVersion: 9, topologyVersion: *topologyVersion_1_1}]
  outcome: &outcome {servers: {"a:27017": {type: RSPrimary, setName: rs, topologyVersion: *topologyVersion_1_1, pool: {generation: 1}}}, topologyType: ReplicaSetWithPrimary, logicalSessionTimeoutMinutes: null, setName: rs}
- description: Ignore stale network error (stale generation)
  applicationErrors:
  - {address: "a:27017", generation: 0, when: afterHandshakeCompletes, maxWireVersion: 9, type: network}
  outcome: *outcome

mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors/stale-generation-afterHandshakeCompletes-timeout.yml

# Autogenerated tests for SDAM error handling, see generate-error-tests.py
description: Stale generation timeout error afterHandshakeCompletes
uri: mongodb://a/?replicaSet=rs
phases:
- description: Primary A is discovered
  responses:
  - ["a:27017", {ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017"], setName: rs, minWireVersion: 0, maxWireVersion: 9, topologyVersion: &topologyVersion_1_1 {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": '1'}}}]
  outcome: {servers: {"a:27017": {type: RSPrimary, setName: rs, topologyVersion: *topologyVersion_1_1, pool: {generation: 0}}}, topologyType: ReplicaSetWithPrimary, logicalSessionTimeoutMinutes: null, setName: rs}
# Process a network error which increments the pool generation.
- description: Non-stale application network error
  applicationErrors:
  - {address: "a:27017", when: afterHandshakeCompletes, maxWireVersion: 9, type: network}
  outcome: {servers: {"a:27017": {type: Unknown, topologyVersion: null, pool: {generation: 1}}}, topologyType: ReplicaSetNoPrimary, logicalSessionTimeoutMinutes: null, setName: rs}
- description: Primary A is rediscovered
  responses:
  - ["a:27017", {ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017"], setName: rs, minWireVersion: 0, maxWireVersion: 9, topologyVersion: *topologyVersion_1_1}]
  outcome: &outcome {servers: {"a:27017": {type: RSPrimary, setName: rs, topologyVersion: *topologyVersion_1_1, pool: {generation: 1}}}, topologyType: ReplicaSetWithPrimary, logicalSessionTimeoutMinutes: null, setName: rs}
- description: Ignore stale timeout error (stale generation)
  applicationErrors:
  - {address: "a:27017", generation: 0, when: afterHandshakeCompletes, maxWireVersion: 9, type: timeout}
  outcome: *outcome

mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors/stale-generation-beforeHandshakeCompletes-InterruptedAtShutdown.yml

# Autogenerated tests for SDAM error handling, see generate-error-tests.py
description: Stale generation InterruptedAtShutdown error beforeHandshakeCompletes
uri: mongodb://a/?replicaSet=rs
phases:
- description: Primary A is discovered
  responses:
  - ["a:27017", {ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017"], setName: rs, minWireVersion: 0, maxWireVersion: 9, topologyVersion: &topologyVersion_1_1 {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": '1'}}}]
  outcome: {servers: {"a:27017": {type: RSPrimary, setName: rs, topologyVersion: *topologyVersion_1_1, pool: {generation: 0}}}, topologyType: ReplicaSetWithPrimary, logicalSessionTimeoutMinutes: null, setName: rs}
# Process a network error which increments the pool generation.
- description: Non-stale application network error
  applicationErrors:
  - {address: "a:27017", when: afterHandshakeCompletes, maxWireVersion: 9, type: network}
  outcome: {servers: {"a:27017": {type: Unknown, topologyVersion: null, pool: {generation: 1}}}, topologyType: ReplicaSetNoPrimary, logicalSessionTimeoutMinutes: null, setName: rs}
- description: Primary A is rediscovered
  responses:
  - ["a:27017", {ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017"], setName: rs, minWireVersion: 0, maxWireVersion: 9, topologyVersion: *topologyVersion_1_1}]
  outcome: &outcome {servers: {"a:27017": {type: RSPrimary, setName: rs, topologyVersion: *topologyVersion_1_1, pool: {generation: 1}}}, topologyType: ReplicaSetWithPrimary, logicalSessionTimeoutMinutes: null, setName: rs}
- description: Ignore stale InterruptedAtShutdown error (stale generation)
  applicationErrors:
  - {address: "a:27017", generation: 0, when: beforeHandshakeCompletes, maxWireVersion: 9, type: command, response: {ok: 0, errmsg: InterruptedAtShutdown, code: 11600, topologyVersion: {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": "2"}}}}
  outcome: *outcome

mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors/stale-generation-beforeHandshakeCompletes-InterruptedDueToReplStateChange.yml

# Autogenerated tests for SDAM error handling, see generate-error-tests.py
description: Stale generation InterruptedDueToReplStateChange error beforeHandshakeCompletes
uri: mongodb://a/?replicaSet=rs
phases:
- description: Primary A is discovered
  responses:
  - ["a:27017", {ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017"], setName: rs, minWireVersion: 0, maxWireVersion: 9, topologyVersion: &topologyVersion_1_1 {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": '1'}}}]
  outcome: {servers: {"a:27017": {type: RSPrimary, setName: rs, topologyVersion: *topologyVersion_1_1, pool: {generation: 0}}}, topologyType: ReplicaSetWithPrimary, logicalSessionTimeoutMinutes: null, setName: rs}
# Process a network error which increments the pool generation.
- description: Non-stale application network error
  applicationErrors:
  - {address: "a:27017", when: afterHandshakeCompletes, maxWireVersion: 9, type: network}
  outcome: {servers: {"a:27017": {type: Unknown, topologyVersion: null, pool: {generation: 1}}}, topologyType: ReplicaSetNoPrimary, logicalSessionTimeoutMinutes: null, setName: rs}
- description: Primary A is rediscovered
  responses:
  - ["a:27017", {ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017"], setName: rs, minWireVersion: 0, maxWireVersion: 9, topologyVersion: *topologyVersion_1_1}]
  outcome: &outcome {servers: {"a:27017": {type: RSPrimary, setName: rs, topologyVersion: *topologyVersion_1_1, pool: {generation: 1}}}, topologyType: ReplicaSetWithPrimary, logicalSessionTimeoutMinutes: null, setName: rs}
- description: Ignore stale InterruptedDueToReplStateChange error (stale generation)
  applicationErrors:
  - {address: "a:27017", generation: 0, when: beforeHandshakeCompletes, maxWireVersion: 9, type: command, response: {ok: 0, errmsg: InterruptedDueToReplStateChange, code: 11602, topologyVersion: {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": "2"}}}}
  outcome: *outcome

mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors/stale-generation-beforeHandshakeCompletes-LegacyNotPrimary.yml

# Autogenerated tests for SDAM error handling, see generate-error-tests.py
description: Stale generation LegacyNotPrimary error beforeHandshakeCompletes
uri: mongodb://a/?replicaSet=rs
phases:
- description: Primary A is discovered
  responses:
  - ["a:27017", {ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017"], setName: rs, minWireVersion: 0, maxWireVersion: 9, topologyVersion: &topologyVersion_1_1 {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": '1'}}}]
  outcome: {servers: {"a:27017": {type: RSPrimary, setName: rs, topologyVersion: *topologyVersion_1_1, pool: {generation: 0}}}, topologyType: ReplicaSetWithPrimary, logicalSessionTimeoutMinutes: null, setName: rs}
# Process a network error which increments the pool generation.
- description: Non-stale application network error
  applicationErrors:
  - {address: "a:27017", when: afterHandshakeCompletes, maxWireVersion: 9, type: network}
  outcome: {servers: {"a:27017": {type: Unknown, topologyVersion: null, pool: {generation: 1}}}, topologyType: ReplicaSetNoPrimary, logicalSessionTimeoutMinutes: null, setName: rs}
- description: Primary A is rediscovered
  responses:
  - ["a:27017", {ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017"], setName: rs, minWireVersion: 0, maxWireVersion: 9, topologyVersion: *topologyVersion_1_1}]
  outcome: &outcome {servers: {"a:27017": {type: RSPrimary, setName: rs, topologyVersion: *topologyVersion_1_1, pool: {generation: 1}}}, topologyType: ReplicaSetWithPrimary, logicalSessionTimeoutMinutes: null, setName: rs}
- description: Ignore stale LegacyNotPrimary error (stale generation)
  applicationErrors:
  - {address: "a:27017", generation: 0, when: beforeHandshakeCompletes, maxWireVersion: 9, type: command, response: {ok: 0, errmsg: LegacyNotPrimary, code: 10058, topologyVersion: {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": "2"}}}}
  outcome: *outcome

mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors/stale-generation-beforeHandshakeCompletes-NotPrimaryNoSecondaryOk.yml

# Autogenerated tests for SDAM error handling, see generate-error-tests.py
description: Stale generation NotPrimaryNoSecondaryOk error beforeHandshakeCompletes
uri: mongodb://a/?replicaSet=rs
phases:
- description: Primary A is discovered
  responses:
  - ["a:27017", {ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017"], setName: rs, minWireVersion: 0, maxWireVersion: 9, topologyVersion: &topologyVersion_1_1 {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": '1'}}}]
  outcome: {servers: {"a:27017": {type: RSPrimary, setName: rs, topologyVersion: *topologyVersion_1_1, pool: {generation: 0}}}, topologyType: ReplicaSetWithPrimary, logicalSessionTimeoutMinutes: null, setName: rs}
# Process a network error which increments the pool generation.
- description: Non-stale application network error
  applicationErrors:
  - {address: "a:27017", when: afterHandshakeCompletes, maxWireVersion: 9, type: network}
  outcome: {servers: {"a:27017": {type: Unknown, topologyVersion: null, pool: {generation: 1}}}, topologyType: ReplicaSetNoPrimary, logicalSessionTimeoutMinutes: null, setName: rs}
- description: Primary A is rediscovered
  responses:
  - ["a:27017", {ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017"], setName: rs, minWireVersion: 0, maxWireVersion: 9, topologyVersion: *topologyVersion_1_1}]
  outcome: &outcome {servers: {"a:27017": {type: RSPrimary, setName: rs, topologyVersion: *topologyVersion_1_1, pool: {generation: 1}}}, topologyType: ReplicaSetWithPrimary, logicalSessionTimeoutMinutes: null, setName: rs}
- description: Ignore stale NotPrimaryNoSecondaryOk error (stale generation)
  applicationErrors:
  - {address: "a:27017", generation: 0, when: beforeHandshakeCompletes, maxWireVersion: 9, type: command, response: {ok: 0, errmsg: NotPrimaryNoSecondaryOk, code: 13435, topologyVersion: {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": "2"}}}}
  outcome: *outcome

mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors/stale-generation-beforeHandshakeCompletes-NotPrimaryOrSecondary.yml

# Autogenerated tests for SDAM error handling, see generate-error-tests.py
description: Stale generation NotPrimaryOrSecondary error beforeHandshakeCompletes
uri: mongodb://a/?replicaSet=rs
phases:
- description: Primary A is discovered
  responses:
  - ["a:27017", {ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017"], setName: rs, minWireVersion: 0, maxWireVersion: 9, topologyVersion: &topologyVersion_1_1 {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": '1'}}}]
  outcome: {servers: {"a:27017": {type: RSPrimary, setName: rs, topologyVersion: *topologyVersion_1_1, pool: {generation: 0}}}, topologyType: ReplicaSetWithPrimary, logicalSessionTimeoutMinutes: null, setName: rs}
# Process a network error which increments the pool generation.
- description: Non-stale application network error
  applicationErrors:
  - {address: "a:27017", when: afterHandshakeCompletes, maxWireVersion: 9, type: network}
  outcome: {servers: {"a:27017": {type: Unknown, topologyVersion: null, pool: {generation: 1}}}, topologyType: ReplicaSetNoPrimary, logicalSessionTimeoutMinutes: null, setName: rs}
- description: Primary A is rediscovered
  responses:
  - ["a:27017", {ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017"], setName: rs, minWireVersion: 0, maxWireVersion: 9, topologyVersion: *topologyVersion_1_1}]
  outcome: &outcome {servers: {"a:27017": {type: RSPrimary, setName: rs, topologyVersion: *topologyVersion_1_1, pool: {generation: 1}}}, topologyType: ReplicaSetWithPrimary, logicalSessionTimeoutMinutes: null, setName: rs}
- description: Ignore stale NotPrimaryOrSecondary error (stale generation)
  applicationErrors:
  - {address: "a:27017", generation: 0, when: beforeHandshakeCompletes, maxWireVersion: 9, type: command, response: {ok: 0, errmsg: NotPrimaryOrSecondary, code: 13436, topologyVersion: {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": "2"}}}}
  outcome: *outcome

mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors/stale-generation-beforeHandshakeCompletes-NotWritablePrimary.yml

# Autogenerated tests for SDAM error handling, see generate-error-tests.py
description: Stale generation NotWritablePrimary error beforeHandshakeCompletes
uri: mongodb://a/?replicaSet=rs
phases:
- description: Primary A is discovered
  responses:
  - ["a:27017", {ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017"], setName: rs, minWireVersion: 0, maxWireVersion: 9, topologyVersion: &topologyVersion_1_1 {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": '1'}}}]
  outcome: {servers: {"a:27017": {type: RSPrimary, setName: rs, topologyVersion: *topologyVersion_1_1, pool: {generation: 0}}}, topologyType: ReplicaSetWithPrimary, logicalSessionTimeoutMinutes: null, setName: rs}
# Process a network error which increments the pool generation.
- description: Non-stale application network error
  applicationErrors:
  - {address: "a:27017", when: afterHandshakeCompletes, maxWireVersion: 9, type: network}
  outcome: {servers: {"a:27017": {type: Unknown, topologyVersion: null, pool: {generation: 1}}}, topologyType: ReplicaSetNoPrimary, logicalSessionTimeoutMinutes: null, setName: rs}
- description: Primary A is rediscovered
  responses:
  - ["a:27017", {ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017"], setName: rs, minWireVersion: 0, maxWireVersion: 9, topologyVersion: *topologyVersion_1_1}]
  outcome: &outcome {servers: {"a:27017": {type: RSPrimary, setName: rs, topologyVersion: *topologyVersion_1_1, pool: {generation: 1}}}, topologyType: ReplicaSetWithPrimary, logicalSessionTimeoutMinutes: null, setName: rs}
- description: Ignore stale NotWritablePrimary error (stale generation)
  applicationErrors:
  - {address: "a:27017", generation: 0, when: beforeHandshakeCompletes, maxWireVersion: 9, type: command, response: {ok: 0, errmsg: NotWritablePrimary, code: 10107, topologyVersion: {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": "2"}}}}
  outcome: *outcome

mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors/stale-generation-beforeHandshakeCompletes-PrimarySteppedDown.yml

# Autogenerated tests for SDAM error handling, see generate-error-tests.py
description: Stale generation PrimarySteppedDown error beforeHandshakeCompletes
uri: mongodb://a/?replicaSet=rs
phases:
- description: Primary A is discovered
  responses:
  - ["a:27017", {ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017"], setName: rs, minWireVersion: 0, maxWireVersion: 9, topologyVersion: &topologyVersion_1_1 {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": '1'}}}]
  outcome: {servers: {"a:27017": {type: RSPrimary, setName: rs, topologyVersion: *topologyVersion_1_1, pool: {generation: 0}}}, topologyType: ReplicaSetWithPrimary, logicalSessionTimeoutMinutes: null, setName: rs}
# Process a network error which increments the pool generation.
- description: Non-stale application network error
  applicationErrors:
  - {address: "a:27017", when: afterHandshakeCompletes, maxWireVersion: 9, type: network}
  outcome: {servers: {"a:27017": {type: Unknown, topologyVersion: null, pool: {generation: 1}}}, topologyType: ReplicaSetNoPrimary, logicalSessionTimeoutMinutes: null, setName: rs}
- description: Primary A is rediscovered
  responses:
  - ["a:27017", {ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017"], setName: rs, minWireVersion: 0, maxWireVersion: 9, topologyVersion: *topologyVersion_1_1}]
  outcome: &outcome {servers: {"a:27017": {type: RSPrimary, setName: rs, topologyVersion: *topologyVersion_1_1, pool: {generation: 1}}}, topologyType: ReplicaSetWithPrimary, logicalSessionTimeoutMinutes: null, setName: rs}
- description: Ignore stale PrimarySteppedDown error (stale generation)
  applicationErrors:
  - {address: "a:27017", generation: 0, when: beforeHandshakeCompletes, maxWireVersion: 9, type: command, response: {ok: 0, errmsg: PrimarySteppedDown, code: 189, topologyVersion: {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": "2"}}}}
  outcome: *outcome

mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors/stale-generation-beforeHandshakeCompletes-ShutdownInProgress.yml

# Autogenerated tests for SDAM error handling, see generate-error-tests.py
description: Stale generation ShutdownInProgress error beforeHandshakeCompletes
uri: mongodb://a/?replicaSet=rs
phases:
- description: Primary A is discovered
  responses:
  - ["a:27017", {ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017"], setName: rs, minWireVersion: 0, maxWireVersion: 9, topologyVersion: &topologyVersion_1_1 {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": '1'}}}]
  outcome: {servers: {"a:27017": {type: RSPrimary, setName: rs, topologyVersion: *topologyVersion_1_1, pool: {generation: 0}}}, topologyType: ReplicaSetWithPrimary, logicalSessionTimeoutMinutes: null, setName: rs}
# Process a network error which increments the pool generation.
- description: Non-stale application network error
  applicationErrors:
  - {address: "a:27017", when: afterHandshakeCompletes, maxWireVersion: 9, type: network}
  outcome: {servers: {"a:27017": {type: Unknown, topologyVersion: null, pool: {generation: 1}}}, topologyType: ReplicaSetNoPrimary, logicalSessionTimeoutMinutes: null, setName: rs}
- description: Primary A is rediscovered
  responses:
  - ["a:27017", {ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017"], setName: rs, minWireVersion: 0, maxWireVersion: 9, topologyVersion: *topologyVersion_1_1}]
  outcome: &outcome {servers: {"a:27017": {type: RSPrimary, setName: rs, topologyVersion: *topologyVersion_1_1, pool: {generation: 1}}}, topologyType: ReplicaSetWithPrimary, logicalSessionTimeoutMinutes: null, setName: rs}
- description: Ignore stale ShutdownInProgress error (stale generation)
  applicationErrors:
  - {address: "a:27017", generation: 0, when: beforeHandshakeCompletes, maxWireVersion: 9, type: command, response: {ok: 0, errmsg: ShutdownInProgress, code: 91, topologyVersion: {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": "2"}}}}
  outcome: *outcome

mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors/stale-generation-beforeHandshakeCompletes-network.yml

# Autogenerated tests for SDAM error handling, see generate-error-tests.py
description: Stale generation network error beforeHandshakeCompletes
uri: mongodb://a/?replicaSet=rs
phases:
- description: Primary A is discovered
  responses:
  - ["a:27017", {ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017"], setName: rs, minWireVersion: 0, maxWireVersion: 9, topologyVersion: &topologyVersion_1_1 {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": '1'}}}]
  outcome: {servers: {"a:27017": {type: RSPrimary, setName: rs, topologyVersion: *topologyVersion_1_1, pool: {generation: 0}}}, topologyType: ReplicaSetWithPrimary, logicalSessionTimeoutMinutes: null, setName: rs}
# Process a network error which increments the pool generation.
- description: Non-stale application network error
  applicationErrors:
  - {address: "a:27017", when: afterHandshakeCompletes, maxWireVersion: 9, type: network}
  outcome: {servers: {"a:27017": {type: Unknown, topologyVersion: null, pool: {generation: 1}}}, topologyType: ReplicaSetNoPrimary, logicalSessionTimeoutMinutes: null, setName: rs}
- description: Primary A is rediscovered
  responses:
  - ["a:27017", {ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017"], setName: rs, minWireVersion: 0, maxWireVersion: 9, topologyVersion: *topologyVersion_1_1}]
  outcome: &outcome {servers: {"a:27017": {type: RSPrimary, setName: rs, topologyVersion: *topologyVersion_1_1, pool: {generation: 1}}}, topologyType: ReplicaSetWithPrimary, logicalSessionTimeoutMinutes: null, setName: rs}
- description: Ignore stale network error (stale generation)
  applicationErrors:
  - {address: "a:27017", generation: 0, when: beforeHandshakeCompletes, maxWireVersion: 9, type: network}
  outcome: *outcome

mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors/stale-generation-beforeHandshakeCompletes-timeout.yml

# Autogenerated tests for SDAM error handling, see generate-error-tests.py
description: Stale generation timeout error beforeHandshakeCompletes
uri: mongodb://a/?replicaSet=rs
phases:
- description: Primary A is discovered
  responses:
  - ["a:27017", {ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017"], setName: rs, minWireVersion: 0, maxWireVersion: 9, topologyVersion: &topologyVersion_1_1 {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": '1'}}}]
  outcome: {servers: {"a:27017": {type: RSPrimary, setName: rs, topologyVersion: *topologyVersion_1_1, pool: {generation: 0}}}, topologyType: ReplicaSetWithPrimary, logicalSessionTimeoutMinutes: null, setName: rs}
# Process a network error which increments the pool generation.
- description: Non-stale application network error
  applicationErrors:
  - {address: "a:27017", when: afterHandshakeCompletes, maxWireVersion: 9, type: network}
  outcome: {servers: {"a:27017": {type: Unknown, topologyVersion: null, pool: {generation: 1}}}, topologyType: ReplicaSetNoPrimary, logicalSessionTimeoutMinutes: null, setName: rs}
- description: Primary A is rediscovered
  responses:
  - ["a:27017", {ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017"], setName: rs, minWireVersion: 0, maxWireVersion: 9, topologyVersion: *topologyVersion_1_1}]
  outcome: &outcome {servers: {"a:27017": {type: RSPrimary, setName: rs, topologyVersion: *topologyVersion_1_1, pool: {generation: 1}}}, topologyType: ReplicaSetWithPrimary, logicalSessionTimeoutMinutes: null, setName: rs}
- description: Ignore stale timeout error (stale generation)
  applicationErrors:
  - {address: "a:27017", generation: 0, when: beforeHandshakeCompletes, maxWireVersion: 9, type: timeout}
  outcome: *outcome

mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors/stale-topologyVersion-InterruptedAtShutdown.yml

# Autogenerated tests for SDAM error handling, see generate-error-tests.py
description: Stale topologyVersion InterruptedAtShutdown error
uri: mongodb://a/?replicaSet=rs
phases:
- description: Primary A is discovered
  responses:
  - ["a:27017", {ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017"], setName: rs, minWireVersion: 0, maxWireVersion: 9, topologyVersion: &topologyVersion_1_1 {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": '1'}}}]
  outcome: &outcome {servers: {"a:27017": {type: RSPrimary, setName: rs, topologyVersion: *topologyVersion_1_1, pool: {generation: 0}}}, topologyType: ReplicaSetWithPrimary, logicalSessionTimeoutMinutes: null, setName: rs}
- description: Ignore stale InterruptedAtShutdown error (topologyVersion less)
  applicationErrors:
  - {address: "a:27017", when: afterHandshakeCompletes, maxWireVersion: 9, type: command, response: {ok: 0, errmsg: InterruptedAtShutdown, code: 11600, topologyVersion: {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": '0'}}}}
  outcome: *outcome
- description: Ignore stale InterruptedAtShutdown error (topologyVersion equal)
  applicationErrors:
  - {address: "a:27017", when: afterHandshakeCompletes, maxWireVersion: 9, type: command, response: {ok: 0, errmsg: InterruptedAtShutdown, code: 11600, topologyVersion: {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": '1'}}}}
  outcome: *outcome

mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors/stale-topologyVersion-InterruptedDueToReplStateChange.yml

# Autogenerated tests for SDAM error handling, see generate-error-tests.py
description: Stale topologyVersion InterruptedDueToReplStateChange error
uri: mongodb://a/?replicaSet=rs
phases:
- description: Primary A is discovered
  responses:
  - ["a:27017", {ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017"], setName: rs, minWireVersion: 0, maxWireVersion: 9, topologyVersion: &topologyVersion_1_1 {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": '1'}}}]
  outcome: &outcome {servers: {"a:27017": {type: RSPrimary, setName: rs, topologyVersion: *topologyVersion_1_1, pool: {generation: 0}}}, topologyType: ReplicaSetWithPrimary, logicalSessionTimeoutMinutes: null, setName: rs}
- description: Ignore stale InterruptedDueToReplStateChange error (topologyVersion less)
  applicationErrors:
  - {address: "a:27017", when: afterHandshakeCompletes, maxWireVersion: 9, type: command,
     response: {ok: 0,
                errmsg: InterruptedDueToReplStateChange, code: 11602, topologyVersion: {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": '0'}}}}
  outcome: *outcome
- description: Ignore stale InterruptedDueToReplStateChange error (topologyVersion equal)
  applicationErrors:
  - {address: "a:27017", when: afterHandshakeCompletes, maxWireVersion: 9, type: command, response: {ok: 0, errmsg: InterruptedDueToReplStateChange, code: 11602, topologyVersion: {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": '1'}}}}
  outcome: *outcome

mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors/stale-topologyVersion-LegacyNotPrimary.yml

# Autogenerated tests for SDAM error handling, see generate-error-tests.py
description: Stale topologyVersion LegacyNotPrimary error
uri: mongodb://a/?replicaSet=rs
phases:
- description: Primary A is discovered
  responses:
  - ["a:27017", {ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017"], setName: rs, minWireVersion: 0, maxWireVersion: 9, topologyVersion: &topologyVersion_1_1 {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": '1'}}}]
  outcome: &outcome {servers: {"a:27017": {type: RSPrimary, setName: rs, topologyVersion: *topologyVersion_1_1, pool: {generation: 0}}}, topologyType: ReplicaSetWithPrimary, logicalSessionTimeoutMinutes: null, setName: rs}
- description: Ignore stale LegacyNotPrimary error (topologyVersion less)
  applicationErrors:
  - {address: "a:27017", when: afterHandshakeCompletes, maxWireVersion: 9, type: command, response: {ok: 0, errmsg: LegacyNotPrimary, code: 10058, topologyVersion: {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": '0'}}}}
  outcome: *outcome
- description: Ignore stale LegacyNotPrimary error (topologyVersion equal)
  applicationErrors:
  - {address: "a:27017", when: afterHandshakeCompletes, maxWireVersion: 9, type: command, response: {ok: 0, errmsg: LegacyNotPrimary, code: 10058, topologyVersion: {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": '1'}}}}
  outcome: *outcome

mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors/stale-topologyVersion-NotPrimaryNoSecondaryOk.yml

# Autogenerated tests for SDAM error handling, see generate-error-tests.py
description: Stale topologyVersion NotPrimaryNoSecondaryOk error
uri: mongodb://a/?replicaSet=rs
phases:
- description: Primary A is discovered
  responses:
  - ["a:27017", {ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017"], setName: rs, minWireVersion: 0, maxWireVersion: 9, topologyVersion: &topologyVersion_1_1 {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": '1'}}}]
  outcome: &outcome {servers: {"a:27017": {type: RSPrimary, setName: rs, topologyVersion: *topologyVersion_1_1, pool: {generation: 0}}}, topologyType: ReplicaSetWithPrimary, logicalSessionTimeoutMinutes: null, setName: rs}
- description: Ignore stale NotPrimaryNoSecondaryOk error (topologyVersion less)
  applicationErrors:
  - {address: "a:27017", when: afterHandshakeCompletes, maxWireVersion: 9, type: command, response: {ok: 0, errmsg: NotPrimaryNoSecondaryOk, code: 13435, topologyVersion: {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": '0'}}}}
  outcome: *outcome
- description: Ignore stale NotPrimaryNoSecondaryOk error (topologyVersion equal)
  applicationErrors:
  - {address: "a:27017", when: afterHandshakeCompletes, maxWireVersion: 9, type: command,
     response: {ok: 0, errmsg: NotPrimaryNoSecondaryOk, code: 13435, topologyVersion: {processId: {"$oid": '000000000000000000000001'},
counter: "$numberLong": '1' outcome: *outcome stale-topologyVersion-NotPrimaryOrSecondary.yml000066400000000000000000000032251505113246500347200ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors# Autogenerated tests for SDAM error handling, see generate-error-tests.py description: Stale topologyVersion NotPrimaryOrSecondary error uri: mongodb://a/?replicaSet=rs phases: - description: Primary A is discovered responses: - - a:27017 - ok: 1 helloOk: true isWritablePrimary: true hosts: - a:27017 setName: rs minWireVersion: 0 maxWireVersion: 9 topologyVersion: &topologyVersion_1_1 processId: "$oid": '000000000000000000000001' counter: "$numberLong": '1' outcome: &outcome servers: a:27017: type: RSPrimary setName: rs topologyVersion: *topologyVersion_1_1 pool: generation: 0 topologyType: ReplicaSetWithPrimary logicalSessionTimeoutMinutes: null setName: rs - description: Ignore stale NotPrimaryOrSecondary error (topologyVersion less) applicationErrors: - address: a:27017 when: afterHandshakeCompletes maxWireVersion: 9 type: command response: ok: 0 errmsg: NotPrimaryOrSecondary code: 13436 topologyVersion: processId: "$oid": '000000000000000000000001' counter: "$numberLong": '0' outcome: *outcome - description: Ignore stale NotPrimaryOrSecondary error (topologyVersion equal) applicationErrors: - address: a:27017 when: afterHandshakeCompletes maxWireVersion: 9 type: command response: ok: 0 errmsg: NotPrimaryOrSecondary code: 13436 topologyVersion: processId: "$oid": '000000000000000000000001' counter: "$numberLong": '1' outcome: *outcome stale-topologyVersion-NotWritablePrimary.yml000066400000000000000000000032061505113246500342400ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors# Autogenerated tests for SDAM error handling, see generate-error-tests.py description: Stale topologyVersion NotWritablePrimary error uri: mongodb://a/?replicaSet=rs phases: - description: Primary A is discovered responses: - - a:27017 - ok: 1 helloOk: true isWritablePrimary: true hosts: - a:27017 setName: rs minWireVersion: 0 maxWireVersion: 9 topologyVersion: &topologyVersion_1_1 processId: "$oid": '000000000000000000000001' counter: "$numberLong": '1' outcome: &outcome servers: a:27017: type: RSPrimary setName: rs topologyVersion: *topologyVersion_1_1 pool: generation: 0 topologyType: ReplicaSetWithPrimary logicalSessionTimeoutMinutes: null setName: rs - description: Ignore stale NotWritablePrimary error (topologyVersion less) applicationErrors: - address: a:27017 when: afterHandshakeCompletes maxWireVersion: 9 type: command response: ok: 0 errmsg: NotWritablePrimary code: 10107 topologyVersion: processId: "$oid": '000000000000000000000001' counter: "$numberLong": '0' outcome: *outcome - description: Ignore stale NotWritablePrimary error (topologyVersion equal) applicationErrors: - address: a:27017 when: afterHandshakeCompletes maxWireVersion: 9 type: command response: ok: 0 errmsg: NotWritablePrimary code: 10107 topologyVersion: processId: "$oid": '000000000000000000000001' counter: "$numberLong": '1' outcome: *outcome stale-topologyVersion-PrimarySteppedDown.yml000066400000000000000000000032021505113246500342360ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors# Autogenerated tests for SDAM error handling, see generate-error-tests.py description: Stale topologyVersion PrimarySteppedDown error uri: mongodb://a/?replicaSet=rs phases: - description: Primary A is discovered responses: - - a:27017 - ok: 1 helloOk: 
                 true, isWritablePrimary: true, hosts: ["a:27017"], setName: rs, minWireVersion: 0, maxWireVersion: 9, topologyVersion: &topologyVersion_1_1 {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": '1'}}}]
  outcome: &outcome {servers: {"a:27017": {type: RSPrimary, setName: rs, topologyVersion: *topologyVersion_1_1, pool: {generation: 0}}}, topologyType: ReplicaSetWithPrimary, logicalSessionTimeoutMinutes: null, setName: rs}
- description: Ignore stale PrimarySteppedDown error (topologyVersion less)
  applicationErrors:
  - {address: "a:27017", when: afterHandshakeCompletes, maxWireVersion: 9, type: command, response: {ok: 0, errmsg: PrimarySteppedDown, code: 189, topologyVersion: {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": '0'}}}}
  outcome: *outcome
- description: Ignore stale PrimarySteppedDown error (topologyVersion equal)
  applicationErrors:
  - {address: "a:27017", when: afterHandshakeCompletes, maxWireVersion: 9, type: command, response: {ok: 0, errmsg: PrimarySteppedDown, code: 189, topologyVersion: {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": '1'}}}}
  outcome: *outcome

mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors/stale-topologyVersion-ShutdownInProgress.yml

# Autogenerated tests for SDAM error handling, see generate-error-tests.py
description: Stale topologyVersion ShutdownInProgress error
uri: mongodb://a/?replicaSet=rs
phases:
- description: Primary A is discovered
  responses:
  - ["a:27017", {ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017"], setName: rs, minWireVersion: 0, maxWireVersion: 9, topologyVersion: &topologyVersion_1_1 {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": '1'}}}]
  outcome: &outcome {servers: {"a:27017": {type: RSPrimary, setName: rs, topologyVersion: *topologyVersion_1_1, pool: {generation: 0}}}, topologyType: ReplicaSetWithPrimary, logicalSessionTimeoutMinutes: null, setName: rs}
- description: Ignore stale ShutdownInProgress error (topologyVersion less)
  applicationErrors:
  - {address: "a:27017", when: afterHandshakeCompletes, maxWireVersion: 9, type: command, response: {ok: 0, errmsg: ShutdownInProgress, code: 91, topologyVersion: {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": '0'}}}}
  outcome: *outcome
- description: Ignore stale ShutdownInProgress error (topologyVersion equal)
  applicationErrors:
  - {address: "a:27017", when: afterHandshakeCompletes, maxWireVersion: 9, type: command, response: {ok: 0, errmsg: ShutdownInProgress, code: 91, topologyVersion: {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": '1'}}}}
  outcome: *outcome

mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/errors/write_errors_ignored.yml

description: writeErrors field is ignored
uri: mongodb://a/?replicaSet=rs
phases:
- description: Primary A is discovered
  responses:
  - ["a:27017", {ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017"], setName: rs, minWireVersion: 0, maxWireVersion: 9, topologyVersion: &topologyVersion_1_1 {processId: {"$oid": '000000000000000000000001'}, counter: {"$numberLong": '1'}}}]
  outcome: &outcome {servers: {"a:27017": {type: RSPrimary, setName: rs, topologyVersion: *topologyVersion_1_1, pool: {generation: 0}}}, topologyType: ReplicaSetWithPrimary, logicalSessionTimeoutMinutes: null, setName: rs}
- description: Ignore command error with writeErrors field
  applicationErrors:
  - {address: "a:27017", when: afterHandshakeCompletes, maxWireVersion: 9, type: command,
     response: {ok: 1, writeErrors: [{errmsg:
                                       NotPrimaryNoSecondaryOk, code: 13435, index: 0}]}}
  outcome: *outcome

mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/load-balanced/

mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/load-balanced/discover_load_balancer.yml

description: "Load balancer can be discovered and only has the address property set"
uri: "mongodb://a/?loadBalanced=true"
phases:
  # There should be no monitoring in LoadBalanced mode, so no responses are necessary to get the topology into the
  # correct state.
  - outcome:
      servers:
        a:27017: {type: LoadBalancer, setName: null, setVersion: null, electionId: null, logicalSessionTimeoutMinutes: null, minWireVersion: null, maxWireVersion: null, topologyVersion: null}
      topologyType: LoadBalanced
      setName: null
      logicalSessionTimeoutMinutes: null
      maxSetVersion: null
      maxElectionId: null
      compatible: true

mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/rs/

mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/rs/compatible.yml

description: "Replica set member with large maxWireVersion"
uri: "mongodb://a,b/?replicaSet=rs"
phases: [
  {
    responses: [
      ["a:27017", {ok: 1, helloOk: true, isWritablePrimary: true, setName: "rs", hosts: ["a:27017", "b:27017"], minWireVersion: 0, maxWireVersion: 6}],
      ["b:27017", {ok: 1, helloOk: true, isWritablePrimary: false, secondary: true, setName: "rs", hosts: ["a:27017", "b:27017"], minWireVersion: 0, maxWireVersion: 1000}]
    ],
    outcome: {
      servers: {"a:27017": {type: "RSPrimary", setName: "rs"}, "b:27017": {type: "RSSecondary", setName: "rs"}},
      topologyType: "ReplicaSetWithPrimary",
      setName: "rs",
      logicalSessionTimeoutMinutes: null,
      compatible: true
    }
  }
]

mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/rs/compatible_unknown.yml

description: "Replica set member and an unknown server"
uri: "mongodb://a,b/?replicaSet=rs"
phases: [
  {
    responses: [
      ["a:27017", {ok: 1, helloOk: true, isWritablePrimary: true, setName: "rs", hosts: ["a:27017", "b:27017"], minWireVersion: 0, maxWireVersion: 6}]
    ],
    outcome: {
      servers: {"a:27017": {type: "RSPrimary", setName: "rs"}, "b:27017": {type: "Unknown"}},
      topologyType: "ReplicaSetWithPrimary",
      setName: "rs",
      logicalSessionTimeoutMinutes: null,
      compatible: true
    }
  }
]

mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/rs/discover_arbiters.yml

description: "Discover arbiters with directConnection URI option"
uri: "mongodb://a/?directConnection=false"
phases: [
  {
    responses: [
      ["a:27017", {ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017"], arbiters: ["b:27017"], setName: "rs", minWireVersion: 0, maxWireVersion: 6}]
    ],
    outcome: {
      servers: {"a:27017": {type: "RSPrimary", setName: "rs"}, "b:27017": {type: "Unknown", setName: }},
      topologyType: "ReplicaSetWithPrimary",
      logicalSessionTimeoutMinutes: null,
      setName: "rs"
    }
  }
]

mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/rs/discover_arbiters_replicaset.yml

description: "Discover arbiters with replicaSet URI option"
uri: "mongodb://a/?replicaSet=rs"
phases: [
  {
    responses: [
      ["a:27017", {ok: 1, helloOk:
true, isWritablePrimary: true, hosts: ["a:27017"], arbiters: ["b:27017"], setName: "rs", minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs" }, "b:27017": { type: "Unknown", setName: } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs" } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/rs/discover_ghost.yml000066400000000000000000000013331505113246500263240ustar00rootroot00000000000000description: "Discover ghost with directConnection URI option" uri: "mongodb://b/?directConnection=false" phases: [ { responses: [ ["b:27017", { ok: 1, helloOk: true, isWritablePrimary: false, isreplicaset: true, minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "b:27017": { type: "RSGhost", setName: } }, topologyType: "Unknown", logicalSessionTimeoutMinutes: null, setName: } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/rs/discover_ghost_replicaset.yml000066400000000000000000000015231505113246500305400ustar00rootroot00000000000000description: "Discover ghost with replicaSet URI option" uri: "mongodb://a,b/?replicaSet=rs" phases: [ { responses: [ ["b:27017", { ok: 1, helloOk: true, isWritablePrimary: false, isreplicaset: true, minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "Unknown", setName: }, "b:27017": { type: "RSGhost", setName: } }, topologyType: "ReplicaSetNoPrimary", logicalSessionTimeoutMinutes: null, setName: "rs" } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/rs/discover_hidden.yml000066400000000000000000000021221505113246500264300ustar00rootroot00000000000000description: "Discover hidden with directConnection URI option" uri: "mongodb://a/?directConnection=false" phases: [ { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: false, secondary: true, hidden: true, hosts: ["c:27017", "d:27017"], setName: "rs", minWireVersion: 0, maxWireVersion: 6 }], ], outcome: { servers: { "a:27017": { type: "RSOther", setName: "rs" }, "c:27017": { type: "Unknown", setName: }, "d:27017": { type: "Unknown", setName: } }, topologyType: "ReplicaSetNoPrimary", logicalSessionTimeoutMinutes: null, setName: "rs" } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/rs/discover_hidden_replicaset.yml000066400000000000000000000021031505113246500306420ustar00rootroot00000000000000description: "Discover hidden with replicaSet URI option" uri: "mongodb://a/?replicaSet=rs" phases: [ { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: false, secondary: true, hidden: true, hosts: ["c:27017", "d:27017"], setName: "rs", minWireVersion: 0, maxWireVersion: 6 }], ], outcome: { servers: { "a:27017": { type: "RSOther", setName: "rs" }, "c:27017": { type: "Unknown", setName: }, "d:27017": { type: "Unknown", setName: } }, topologyType: "ReplicaSetNoPrimary", logicalSessionTimeoutMinutes: null, setName: "rs" } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/rs/discover_passives.yml000066400000000000000000000035111505113246500270350ustar00rootroot00000000000000description: "Discover passives with directConnection URI option" uri: "mongodb://a/?directConnection=false" phases: [ { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017"], passives: ["b:27017"], setName: "rs", minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs" }, "b:27017": { type: "Unknown", setName: } }, topologyType: "ReplicaSetWithPrimary", 
logicalSessionTimeoutMinutes: null, setName: "rs" } }, { responses: [ ["b:27017", { ok: 1, helloOk: true, isWritablePrimary: false, secondary: true, passive: true, hosts: ["a:27017"], passives: ["b:27017"], setName: "rs", minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs" }, "b:27017": { type: "RSSecondary", setName: "rs" } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs" } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/rs/discover_passives_replicaset.yml000066400000000000000000000034721505113246500312560ustar00rootroot00000000000000description: "Discover passives with replicaSet URI option" uri: "mongodb://a/?replicaSet=rs" phases: [ { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017"], passives: ["b:27017"], setName: "rs", minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs" }, "b:27017": { type: "Unknown", setName: } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs" } }, { responses: [ ["b:27017", { ok: 1, helloOk: true, isWritablePrimary: false, secondary: true, passive: true, hosts: ["a:27017"], passives: ["b:27017"], setName: "rs", minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs" }, "b:27017": { type: "RSSecondary", setName: "rs" } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs" } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/rs/discover_primary.yml000066400000000000000000000016301505113246500266630ustar00rootroot00000000000000description: "Discover primary with directConnection URI option" uri: "mongodb://a/?directConnection=false" phases: [ { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, setName: "rs", hosts: ["a:27017", "b:27017"], minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs" }, "b:27017": { type: "Unknown", setName: } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs" } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/rs/discover_primary_replicaset.yml000066400000000000000000000016111505113246500310750ustar00rootroot00000000000000description: "Discover primary with replicaSet URI option" uri: "mongodb://a/?replicaSet=rs" phases: [ { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, setName: "rs", hosts: ["a:27017", "b:27017"], minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs" }, "b:27017": { type: "Unknown", setName: } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs" } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/rs/discover_rsother.yml000066400000000000000000000020611505113246500266650ustar00rootroot00000000000000description: "Discover RSOther with directConnection URI option" uri: "mongodb://b/?directConnection=false" phases: [ { responses: [ ["b:27017", { ok: 1, helloOk: true, isWritablePrimary: false, secondary: false, hosts: ["c:27017", "d:27017"], setName: "rs", minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "b:27017": { type: "RSOther", setName: "rs" }, "c:27017": { type: "Unknown", setName: }, "d:27017": { type: "Unknown", setName: } }, topologyType: "ReplicaSetNoPrimary", logicalSessionTimeoutMinutes: null, setName: 
"rs" } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/rs/discover_rsother_replicaset.yml000066400000000000000000000030461505113246500311040ustar00rootroot00000000000000description: "Discover RSOther with replicaSet URI option" uri: "mongodb://a,b/?replicaSet=rs" phases: [ { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: false, secondary: true, hidden: true, hosts: ["c:27017", "d:27017"], setName: "rs", minWireVersion: 0, maxWireVersion: 6 }], ["b:27017", { ok: 1, helloOk: true, isWritablePrimary: false, secondary: false, hosts: ["c:27017", "d:27017"], setName: "rs", minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "RSOther", setName: "rs" }, "b:27017": { type: "RSOther", setName: "rs" }, "c:27017": { type: "Unknown", setName: }, "d:27017": { type: "Unknown", setName: } }, topologyType: "ReplicaSetNoPrimary", logicalSessionTimeoutMinutes: null, setName: "rs" } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/rs/discover_secondary.yml000066400000000000000000000017001505113246500271650ustar00rootroot00000000000000description: "Discover secondary with directConnection URI option" uri: "mongodb://b/?directConnection=false" phases: [ { responses: [ ["b:27017", { ok: 1, helloOk: true, isWritablePrimary: false, secondary: true, setName: "rs", hosts: ["a:27017", "b:27017"], minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "Unknown", setName: }, "b:27017": { type: "RSSecondary", setName: "rs" } }, topologyType: "ReplicaSetNoPrimary", logicalSessionTimeoutMinutes: null, setName: "rs" } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/rs/discover_secondary_replicaset.yml000066400000000000000000000016611505113246500314060ustar00rootroot00000000000000description: "Discover secondary with replicaSet URI option" uri: "mongodb://b/?replicaSet=rs" phases: [ { responses: [ ["b:27017", { ok: 1, helloOk: true, isWritablePrimary: false, secondary: true, setName: "rs", hosts: ["a:27017", "b:27017"], minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "Unknown", setName: }, "b:27017": { type: "RSSecondary", setName: "rs" } }, topologyType: "ReplicaSetNoPrimary", logicalSessionTimeoutMinutes: null, setName: "rs" } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/rs/discovery.yml000066400000000000000000000110031505113246500253040ustar00rootroot00000000000000description: "Replica set discovery" uri: "mongodb://a/?replicaSet=rs" phases: [ # At first, a, b, and c are secondaries. { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: false, secondary: true, setName: "rs", hosts: ["a:27017", "b:27017", "c:27017"], minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "RSSecondary", setName: "rs" }, "b:27017": { type: "Unknown", setName: }, "c:27017": { type: "Unknown", setName: } }, topologyType: "ReplicaSetNoPrimary", logicalSessionTimeoutMinutes: null, setName: "rs" } }, # Admin removes a, adds a high-priority member d which becomes primary. 
{ responses: [ ["b:27017", { ok: 1, helloOk: true, isWritablePrimary: false, secondary: true, setName: "rs", primary: "d:27017", hosts: ["b:27017", "c:27017", "d:27017"], minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "RSSecondary", setName: "rs" }, "b:27017": { type: "RSSecondary", setName: "rs" }, "c:27017": { type: "Unknown", setName: }, "d:27017": { type: "PossiblePrimary", setName: } }, topologyType: "ReplicaSetNoPrimary", logicalSessionTimeoutMinutes: null, setName: "rs" } }, # Primary responds. { responses: [ ["d:27017", { ok: 1, helloOk: true, isWritablePrimary: true, setName: "rs", hosts: ["b:27017", "c:27017", "d:27017", "e:27017"], minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { # e is new. servers: { "b:27017": { type: "RSSecondary", setName: "rs" }, "c:27017": { type: "Unknown", setName: }, "d:27017": { type: "RSPrimary", setName: "rs" }, "e:27017": { type: "Unknown", setName: } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs" } }, # Stale response from c. { responses: [ ["c:27017", { ok: 1, helloOk: true, isWritablePrimary: false, secondary: true, setName: "rs", hosts: ["a:27017", "b:27017", "c:27017"], minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { # We don't add a back. # We don't remove e. servers: { "b:27017": { type: "RSSecondary", setName: "rs" }, "c:27017": { type: "RSSecondary", setName: "rs" }, "d:27017": { type: "RSPrimary", setName: "rs" }, "e:27017": { type: "Unknown", setName: } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs" } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/rs/electionId_precedence_setVersion.yml000066400000000000000000000033121505113246500317560ustar00rootroot00000000000000description: ElectionId is considered higher precedence than setVersion uri: "mongodb://a/?replicaSet=rs" phases: - responses: - - "a:27017" - ok: 1 helloOk: true isWritablePrimary: true hosts: - "a:27017" - "b:27017" setName: rs setVersion: 1 electionId: $oid: "000000000000000000000001" minWireVersion: 0 maxWireVersion: 17 - - "b:27017" - ok: 1 helloOk: true isWritablePrimary: true hosts: - "a:27017" - "b:27017" setName: rs setVersion: 2 # Even though "B" reports the newer setVersion, "A" will report the newer electionId which should allow it to remain the primary electionId: $oid: "000000000000000000000001" minWireVersion: 0 maxWireVersion: 17 - - "a:27017" - ok: 1 helloOk: true isWritablePrimary: true hosts: - "a:27017" - "b:27017" setName: rs setVersion: 1 electionId: $oid: "000000000000000000000002" minWireVersion: 0 maxWireVersion: 17 outcome: servers: "a:27017": type: RSPrimary setName: rs setVersion: 1 electionId: $oid: "000000000000000000000002" "b:27017": type: Unknown setName: null setVersion: null electionId: null topologyType: ReplicaSetWithPrimary logicalSessionTimeoutMinutes: null setName: rs maxSetVersion: 1 maxElectionId: $oid: "000000000000000000000002" mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/rs/equal_electionids.yml000066400000000000000000000032641505113246500270000ustar00rootroot00000000000000description: "New primary with equal electionId" uri: "mongodb://a/?replicaSet=rs" phases: [ # A and B claim to be primaries, with equal electionIds. 
{ responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017", "b:27017"], setName: "rs", setVersion: 1, electionId: {"$oid": "000000000000000000000001"}, minWireVersion: 0, maxWireVersion: 6 }], ["b:27017", { ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017", "b:27017"], setName: "rs", setVersion: 1, electionId: {"$oid": "000000000000000000000001"}, minWireVersion: 0, maxWireVersion: 6 }] ], # No choice but to believe the latter response. outcome: { servers: { "a:27017": { type: "Unknown", setName: , setVersion: , electionId: }, "b:27017": { type: "RSPrimary", setName: "rs", setVersion: 1, electionId: {"$oid": "000000000000000000000001"} } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs", maxSetVersion: 1, maxElectionId: {"$oid": "000000000000000000000001"}, } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/rs/hosts_differ_from_seeds.yml000066400000000000000000000013621505113246500301710ustar00rootroot00000000000000description: "Host list differs from seeds" uri: "mongodb://a/?replicaSet=rs" phases: [ { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, setName: "rs", hosts: ["b:27017"], minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "b:27017": { type: "Unknown", setName: } }, topologyType: "ReplicaSetNoPrimary", logicalSessionTimeoutMinutes: null, setName: "rs" } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/rs/incompatible_arbiter.yml000066400000000000000000000014771505113246500274710ustar00rootroot00000000000000description: "Incompatible arbiter" uri: "mongodb://a,b/?replicaSet=rs" phases: - responses: - - "a:27017" - ok: 1 helloOk: true isWritablePrimary: true setName: "rs" hosts: ["a:27017", "b:27017"] minWireVersion: 0 maxWireVersion: 6 - - "b:27017" - ok: 1 helloOk: true arbiterOnly: true setName: "rs" hosts: ["a:27017", "b:27017"] minWireVersion: 0 maxWireVersion: 1 outcome: servers: "a:27017": type: "RSPrimary" setName: "rs" "b:27017": type: "RSArbiter" setName: "rs" topologyType: "ReplicaSetWithPrimary" setName: "rs" logicalSessionTimeoutMinutes: ~ compatible: false mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/rs/incompatible_ghost.yml000066400000000000000000000013671505113246500271630ustar00rootroot00000000000000description: "Incompatible ghost" uri: "mongodb://a,b/?replicaSet=rs" phases: - responses: - - "a:27017" - ok: 1 helloOk: true isWritablePrimary: true setName: "rs" hosts: ["a:27017", "b:27017"] minWireVersion: 0 maxWireVersion: 6 - - "b:27017" - ok: 1 helloOk: true isreplicaset: true minWireVersion: 0 maxWireVersion: 1 outcome: servers: "a:27017": type: "RSPrimary" setName: "rs" "b:27017": type: "RSGhost" setName: topologyType: "ReplicaSetWithPrimary" setName: "rs" logicalSessionTimeoutMinutes: ~ compatible: false mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/rs/incompatible_other.yml000066400000000000000000000014661505113246500271600ustar00rootroot00000000000000description: "Incompatible other" uri: "mongodb://a,b/?replicaSet=rs" phases: - responses: - - "a:27017" - ok: 1 helloOk: true isWritablePrimary: true setName: "rs" hosts: ["a:27017", "b:27017"] minWireVersion: 0 maxWireVersion: 6 - - "b:27017" - ok: 1 helloOk: true hidden: true setName: "rs" hosts: ["a:27017", "b:27017"] minWireVersion: 0 maxWireVersion: 1 outcome: servers: "a:27017": type: "RSPrimary" setName: "rs" "b:27017": type: "RSOther" setName: "rs" topologyType: "ReplicaSetWithPrimary" setName: "rs" logicalSessionTimeoutMinutes: ~ compatible: 
false mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/rs/ls_timeout.yml000066400000000000000000000160111505113246500254650ustar00rootroot00000000000000description: "Parse logicalSessionTimeoutMinutes from replica set" uri: "mongodb://a/?replicaSet=rs" phases: [ # An RSPrimary responds with a non-null logicalSessionTimeoutMinutes { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017", "b:27017", "c:27017", "d:27017", "e:27017"], setName: "rs", logicalSessionTimeoutMinutes: 3, minWireVersion: 0, maxWireVersion: 6 }], ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs" }, "b:27017": { type: "Unknown", }, "c:27017": { type: "Unknown", }, "d:27017": { type: "Unknown", }, "e:27017": { type: "Unknown", } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: 3, setName: "rs", } }, # An RSGhost responds without a logicalSessionTimeoutMinutes { responses: [ ["d:27017", { ok: 1, helloOk: true, isWritablePrimary: false, isreplicaset: true, minWireVersion: 0, maxWireVersion: 6 }], ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs" }, "b:27017": { type: "Unknown", }, "c:27017": { type: "Unknown", }, "d:27017": { type: "RSGhost", }, "e:27017": { type: "Unknown", } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: 3, setName: "rs", } }, # An RSArbiter responds without a logicalSessionTimeoutMinutes { responses: [ ["e:27017", { ok: 1, helloOk: true, isWritablePrimary: false, hosts: ["a:27017", "b:27017", "c:27017", "d:27017", "e:27017"], setName: "rs", arbiterOnly: true, minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs" }, "b:27017": { type: "Unknown", }, "c:27017": { type: "Unknown", }, "d:27017": { type: "RSGhost", }, "e:27017": { type: "RSArbiter", setName: "rs" } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: 3, setName: "rs", } }, # An RSSecondary responds with a lower logicalSessionTimeoutMinutes { responses: [ ["b:27017", { ok: 1, helloOk: true, isWritablePrimary: false, secondary: true, hosts: ["a:27017", "b:27017", "c:27017", "d:27017", "e:27017"], setName: "rs", logicalSessionTimeoutMinutes: 2, minWireVersion: 0, maxWireVersion: 6 }], ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs" }, "b:27017": { type: "RSSecondary", setName: "rs" }, "c:27017": { type: "Unknown", }, "d:27017": { type: "RSGhost", }, "e:27017": { type: "RSArbiter", setName: "rs" } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: 2, setName: "rs", } }, # An RSOther responds with an even lower logicalSessionTimeoutMinutes, which is ignored { responses: [ ["c:27017", { ok: 1, helloOk: true, isWritablePrimary: false, setName: "rs", hidden: true, logicalSessionTimeoutMinutes: 1, minWireVersion: 0, maxWireVersion: 6 }], ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs" }, "b:27017": { type: "RSSecondary", setName: "rs" }, "c:27017": { type: "RSOther", setName: "rs" }, "d:27017": { type: "RSGhost", }, "e:27017": { type: "RSArbiter", setName: "rs" } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: 2, setName: "rs", } }, # Now the RSSecondary responds with no logicalSessionTimeoutMinutes { responses: [ ["b:27017", { ok: 1, helloOk: true, isWritablePrimary: false, secondary: true, hosts: ["a:27017", "b:27017", "c:27017", "d:27017", "e:27017"], setName: "rs", logicalSessionTimeoutMinutes: null, minWireVersion: 0, maxWireVersion: 6 }] ], # Sessions 
aren't supported now outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs" }, "b:27017": { type: "RSSecondary", setName: "rs" }, "c:27017": { type: "RSOther", setName: "rs" }, "d:27017": { type: "RSGhost", }, "e:27017": { type: "RSArbiter", setName: "rs" } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs", } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/rs/member_reconfig.yml000066400000000000000000000030321505113246500264230ustar00rootroot00000000000000description: "Member removed by reconfig" uri: "mongodb://a,b/?replicaSet=rs" phases: [ { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, setName: "rs", hosts: ["a:27017", "b:27017"], minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs" }, "b:27017": { type: "Unknown", setName: } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs" } }, { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, setName: "rs", hosts: ["a:27017"], minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs" } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs" } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/rs/member_standalone.yml000066400000000000000000000024541505113246500267660ustar00rootroot00000000000000description: "Member brought up as standalone" uri: "mongodb://a,b" phases: [ { responses: [ ["b:27017", { ok: 1, helloOk: true, isWritablePrimary: true, minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "Unknown", setName: } }, topologyType: "Unknown", logicalSessionTimeoutMinutes: null, setName: } }, { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, setName: "rs", hosts: ["a:27017"], minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs" } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs" } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/rs/new_primary.yml000066400000000000000000000032121505113246500256340ustar00rootroot00000000000000description: "New primary" uri: "mongodb://a,b/?replicaSet=rs" phases: [ { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, setName: "rs", hosts: ["a:27017", "b:27017"], minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs" }, "b:27017": { type: "Unknown", setName: } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs" } }, { responses: [ ["b:27017", { ok: 1, helloOk: true, isWritablePrimary: true, setName: "rs", hosts: ["a:27017", "b:27017"], minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "Unknown", setName: }, "b:27017": { type: "RSPrimary", setName: "rs" } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs" } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/rs/new_primary_new_electionid.yml000066400000000000000000000066551505113246500307220ustar00rootroot00000000000000description: "New primary with greater setVersion and electionId" uri: "mongodb://a/?replicaSet=rs" phases: [ # Primary A is discovered and tells us about B. 
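# A's setVersion and electionId are recorded as the topology's maxSetVersion/maxElectionId, the baseline for rejecting stale primary claims.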
{ responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017", "b:27017"], setName: "rs", setVersion: 1, electionId: {"$oid": "000000000000000000000001"}, minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs", setVersion: 1, electionId: {"$oid": "000000000000000000000001"} }, "b:27017": { type: "Unknown", setName: , electionId: } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs", maxSetVersion: 1, maxElectionId: {"$oid": "000000000000000000000001"}, } }, # B is elected. { responses: [ ["b:27017", { ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017", "b:27017"], setName: "rs", setVersion: 1, electionId: {"$oid": "000000000000000000000002"}, minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "Unknown", setName: , electionId: }, "b:27017": { type: "RSPrimary", setName: "rs", setVersion: 1, electionId: {"$oid": "000000000000000000000002"} } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs", maxSetVersion: 1, maxElectionId: {"$oid": "000000000000000000000002"}, } }, # A still claims to be primary but it's ignored. { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017", "b:27017"], setName: "rs", setVersion: 1, electionId: {"$oid": "000000000000000000000001"}, minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "Unknown", setName: , electionId: }, "b:27017": { type: "RSPrimary", setName: "rs", setVersion: 1, electionId: {"$oid": "000000000000000000000002"} } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs", maxSetVersion: 1, maxElectionId: {"$oid": "000000000000000000000002"}, } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/rs/new_primary_new_setversion.yml000066400000000000000000000066651505113246500310050ustar00rootroot00000000000000description: "New primary with greater setVersion" uri: "mongodb://a/?replicaSet=rs" phases: [ # Primary A is discovered and tells us about B. { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017", "b:27017"], setName: "rs", setVersion: 1, electionId: {"$oid": "000000000000000000000001"}, minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs", setVersion: 1, electionId: {"$oid": "000000000000000000000001"} }, "b:27017": { type: "Unknown", setName: , electionId: } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs", maxSetVersion: 1, maxElectionId: {"$oid": "000000000000000000000001"}, } }, # RS is reconfigured and B is elected. { responses: [ ["b:27017", { ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017", "b:27017"], setName: "rs", setVersion: 2, electionId: {"$oid": "000000000000000000000001"}, minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "Unknown", setName: , electionId: }, "b:27017": { type: "RSPrimary", setName: "rs", setVersion: 2, electionId: {"$oid": "000000000000000000000001"} } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs", maxSetVersion: 2, maxElectionId: {"$oid": "000000000000000000000001"}, } }, # A still claims to be primary but it's ignored. 
{ responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017", "b:27017"], setName: "rs", setVersion: 1, electionId: {"$oid": "000000000000000000000001"}, minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "Unknown", setName: , electionId: }, "b:27017": { type: "RSPrimary", setName: "rs", setVersion: 2, electionId: {"$oid": "000000000000000000000001"} } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs", maxSetVersion: 2, maxElectionId: {"$oid": "000000000000000000000001"}, } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/rs/new_primary_wrong_set_name.yml000066400000000000000000000033361505113246500307320ustar00rootroot00000000000000description: "New primary with wrong setName" uri: "mongodb://a/?replicaSet=rs" phases: [ # Primary is discovered normally, and tells us about server B. { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017", "b:27017"], setName: "rs", minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs" }, "b:27017": { type: "Unknown", setName: } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs" } }, # B is actually the primary of another replica set. It's removed, and # topologyType remains ReplicaSetWithPrimary. { responses: [ ["b:27017", { ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017"], setName: "wrong", minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs" } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs" } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/rs/non_rs_member.yml000066400000000000000000000011751505113246500261330ustar00rootroot00000000000000description: "Non replicaSet member responds" uri: "mongodb://a,b/?replicaSet=rs" phases: [ { responses: [ ["b:27017", { ok: 1, helloOk: true, minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "Unknown", setName: } }, topologyType: "ReplicaSetNoPrimary", logicalSessionTimeoutMinutes: null, setName: "rs" } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/rs/normalize_case.yml000066400000000000000000000020741505113246500263000ustar00rootroot00000000000000description: "Replica set case normalization" uri: "mongodb://A/?replicaSet=rs" phases: [ { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, setName: "rs", hosts: ["A:27017"], passives: ["B:27017"], arbiters: ["C:27017"], minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs" }, "b:27017": { type: "Unknown", setName: }, "c:27017": { type: "Unknown", setName: } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs" } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/rs/normalize_case_me.yml000066400000000000000000000042271505113246500267630ustar00rootroot00000000000000description: "Replica set mixed case normalization" uri: "mongodb://A/?replicaSet=rs" phases: [ { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, setName: "rs", me: "A:27017", hosts: ["A:27017"], passives: ["B:27017"], arbiters: ["C:27017"], minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs" }, "b:27017": { type: "Unknown", setName: }, "c:27017": { type: "Unknown", setName: } }, topologyType: 
"ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs" } }, { responses: [ ["b:27017", { ok: 1, helloOk: true, isWritablePrimary: false, secondary: true, setName: "rs", me: "B:27017", hosts: ["A:27017"], passives: ["B:27017"], arbiters: ["C:27017"], minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs" }, "b:27017": { type: "RSSecondary", setName: "rs" }, "c:27017": { type: "Unknown", setName: } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs" } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/rs/null_election_id-pre-6.0.yml000066400000000000000000000117701505113246500277050ustar00rootroot00000000000000description: "Pre 6.0 Primaries with and without electionIds" uri: "mongodb://a/?replicaSet=rs" phases: [ # Primary A has no electionId. { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017", "b:27017", "c:27017"], setVersion: 1, setName: "rs", minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs", setVersion: 1, electionId: }, "b:27017": { type: "Unknown", setName: , electionId: }, "c:27017": { type: "Unknown", setName: , electionId: } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs", maxSetVersion: 1, } }, # B is elected, it has an electionId. { responses: [ ["b:27017", { ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017", "b:27017", "c:27017"], setName: "rs", setVersion: 1, electionId: {"$oid": "000000000000000000000002"}, minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "Unknown", setName: , electionId: }, "b:27017": { type: "RSPrimary", setName: "rs", setVersion: 1, electionId: {"$oid": "000000000000000000000002"} }, "c:27017": { type: "Unknown", setName: , electionId: } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs", maxSetVersion: 1, maxElectionId: {"$oid": "000000000000000000000002"}, } }, # A still claims to be primary, no electionId, we have to trust it. { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017", "b:27017", "c:27017"], setVersion: 1, setName: "rs", minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs", setVersion: 1, electionId: }, "b:27017": { type: "Unknown", setName: , electionId: }, "c:27017": { type: "Unknown", setName: , electionId: } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs", maxSetVersion: 1, maxElectionId: {"$oid": "000000000000000000000002"}, } }, # But we remember B's electionId, so when we finally hear from C # claiming it is primary, we ignore it due to its outdated electionId { responses: [ ["c:27017", { ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017", "b:27017", "c:27017"], setName: "rs", setVersion: 1, electionId: {"$oid": "000000000000000000000001"}, minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { # Still primary. 
"a:27017": { type: "RSPrimary", setName: "rs", setVersion: 1, electionId: }, "b:27017": { type: "Unknown", setName: , electionId: }, "c:27017": { type: "Unknown", setName: , electionId: } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs", maxSetVersion: 1, maxElectionId: {"$oid": "000000000000000000000002"}, } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/rs/null_election_id.yml000066400000000000000000000122261505113246500266150ustar00rootroot00000000000000description: "Primaries with and without electionIds" uri: "mongodb://a/?replicaSet=rs" phases: [ # Primary A has no electionId. { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017", "b:27017", "c:27017"], setVersion: 1, setName: "rs", minWireVersion: 0, maxWireVersion: 17 }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs", setVersion: 1, electionId: }, "b:27017": { type: "Unknown", setName: , electionId: }, "c:27017": { type: "Unknown", setName: , electionId: } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs", maxSetVersion: 1, } }, # B is elected, it has an electionId. { responses: [ ["b:27017", { ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017", "b:27017", "c:27017"], setName: "rs", setVersion: 1, electionId: {"$oid": "000000000000000000000002"}, minWireVersion: 0, maxWireVersion: 17 }] ], outcome: { servers: { "a:27017": { type: "Unknown", setName: , electionId: }, "b:27017": { type: "RSPrimary", setName: "rs", setVersion: 1, electionId: {"$oid": "000000000000000000000002"} }, "c:27017": { type: "Unknown", setName: , electionId: } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs", maxSetVersion: 1, maxElectionId: {"$oid": "000000000000000000000002"}, } }, # A still claims to be primary, no electionId, we don't trust it. 
{ responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017", "b:27017", "c:27017"], setVersion: 1, setName: "rs", minWireVersion: 0, maxWireVersion: 17 }] ], outcome: { servers: { # A ignored for missing electionId "a:27017": { type: "Unknown", setName: , setVersion: , electionId: }, "b:27017": { type: "RSPrimary", setName: "rs", setVersion: 1, electionId: { "$oid": "000000000000000000000002" } }, "c:27017": { type: "Unknown", setName: , electionId: } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs", maxSetVersion: 1, maxElectionId: {"$oid": "000000000000000000000002"}, } }, # But we remember B's electionId, so when we finally hear from C # claiming it is primary, we ignore it due to its outdated electionId { responses: [ ["c:27017", { ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017", "b:27017", "c:27017"], setName: "rs", setVersion: 1, electionId: {"$oid": "000000000000000000000001"}, minWireVersion: 0, maxWireVersion: 17 }] ], outcome: { servers: { "a:27017": { type: "Unknown", setName: , setVersion: , electionId: }, "b:27017": { type: "RSPrimary", setName: "rs", setVersion: 1, electionId: { "$oid": "000000000000000000000002" } }, "c:27017": { type: "Unknown", setName: , electionId: } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs", maxSetVersion: 1, maxElectionId: {"$oid": "000000000000000000000002"}, } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/rs/primary_becomes_ghost.yml000066400000000000000000000025471505113246500276760ustar00rootroot00000000000000description: "Primary becomes ghost" uri: "mongodb://a/?replicaSet=rs" phases: [ { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017"], setName: "rs", minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs" } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs" } }, { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: false, isreplicaset: true, minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "RSGhost", setName: } }, topologyType: "ReplicaSetNoPrimary", logicalSessionTimeoutMinutes: null, setName: "rs" } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/rs/primary_becomes_mongos.yml000066400000000000000000000023441505113246500300470ustar00rootroot00000000000000description: "Primary becomes mongos" uri: "mongodb://a/?replicaSet=rs" phases: [ { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017"], setName: "rs", minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs" } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs" } }, { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, msg: "isdbgrid", minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: {}, topologyType: "ReplicaSetNoPrimary", logicalSessionTimeoutMinutes: null, setName: "rs" } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/rs/primary_becomes_standalone.yml000066400000000000000000000021631505113246500306740ustar00rootroot00000000000000description: "Primary becomes standalone" uri: "mongodb://a/?replicaSet=rs" phases: [ { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017"], setName: "rs", minWireVersion: 0, maxWireVersion: 6 }] ], 
outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs" } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs" } }, { responses: [ ["a:27017", { ok: 1, minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: {}, topologyType: "ReplicaSetNoPrimary", logicalSessionTimeoutMinutes: null, setName: "rs" } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/rs/primary_changes_set_name.yml000066400000000000000000000026001505113246500303260ustar00rootroot00000000000000description: "Primary changes setName" uri: "mongodb://a/?replicaSet=rs" phases: [ # Primary is discovered normally. { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017"], setName: "rs", minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs" } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs" } }, # Primary changes its setName. Remove it and change the topologyType. { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017"], setName: "wrong", minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: {}, topologyType: "ReplicaSetNoPrimary", logicalSessionTimeoutMinutes: null, setName: "rs" } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/rs/primary_disconnect.yml000066400000000000000000000021721505113246500272000ustar00rootroot00000000000000description: "Disconnected from primary" uri: "mongodb://a/?replicaSet=rs" phases: [ { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017"], setName: "rs", minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs" } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs" } }, { responses: [ ["a:27017", {}] ], outcome: { servers: { "a:27017": { type: "Unknown", setName: } }, topologyType: "ReplicaSetNoPrimary", logicalSessionTimeoutMinutes: null, setName: "rs" } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/rs/primary_disconnect_electionid.yml000066400000000000000000000126071505113246500314030ustar00rootroot00000000000000description: "Disconnected from primary, reject primary with stale electionId" uri: "mongodb://a/?replicaSet=rs" phases: [ # A is elected, then B. { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017", "b:27017"], setName: "rs", setVersion: 1, electionId: {"$oid": "000000000000000000000001"}, minWireVersion: 0, maxWireVersion: 6 }], ["b:27017", { ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017", "b:27017"], setName: "rs", setVersion: 1, electionId: {"$oid": "000000000000000000000002"}, minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "Unknown", setName: , electionId: }, "b:27017": { type: "RSPrimary", setName: "rs", setVersion: 1, electionId: {"$oid": "000000000000000000000002"} } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs", maxSetVersion: 1, maxElectionId: {"$oid": "000000000000000000000002"}, } }, # Disconnected from B. 
{ responses: [ ["b:27017", {}] ], outcome: { servers: { "a:27017": { type: "Unknown", setName: , electionId: }, "b:27017": { type: "Unknown", setName: , electionId: } }, topologyType: "ReplicaSetNoPrimary", logicalSessionTimeoutMinutes: null, setName: "rs", maxSetVersion: 1, maxElectionId: {"$oid": "000000000000000000000002"}, } }, # A still claims to be primary but it's ignored. { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017", "b:27017"], setName: "rs", setVersion: 1, electionId: {"$oid": "000000000000000000000001"}, minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "Unknown", setName: , electionId: }, "b:27017": { type: "Unknown", setName: , electionId: } }, topologyType: "ReplicaSetNoPrimary", logicalSessionTimeoutMinutes: null, setName: "rs", maxSetVersion: 1, maxElectionId: {"$oid": "000000000000000000000002"}, } }, # Now A is re-elected. { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017", "b:27017"], setName: "rs", setVersion: 1, electionId: {"$oid": "000000000000000000000003"}, minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs", setVersion: 1, electionId: {"$oid": "000000000000000000000003"} }, "b:27017": { type: "Unknown", setName: , electionId: } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs", maxSetVersion: 1, maxElectionId: {"$oid": "000000000000000000000003"}, } }, # B comes back as secondary. { responses: [ ["b:27017", { ok: 1, helloOk: true, isWritablePrimary: false, secondary: true, hosts: ["a:27017", "b:27017"], setName: "rs", minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs", setVersion: 1, electionId: {"$oid": "000000000000000000000003"} }, "b:27017": { type: "RSSecondary", setName: "rs" } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs", maxSetVersion: 1, maxElectionId: {"$oid": "000000000000000000000003"}, } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/rs/primary_disconnect_setversion.yml000066400000000000000000000126301505113246500314610ustar00rootroot00000000000000description: "Disconnected from primary, reject primary with stale setVersion" uri: "mongodb://a/?replicaSet=rs" phases: [ # A is elected, then B after a reconfig. { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017", "b:27017"], setName: "rs", setVersion: 1, electionId: {"$oid": "000000000000000000000001"}, minWireVersion: 0, maxWireVersion: 6 }], ["b:27017", { ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017", "b:27017"], setName: "rs", setVersion: 2, electionId: {"$oid": "000000000000000000000001"}, minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "Unknown", setName: , electionId: }, "b:27017": { type: "RSPrimary", setName: "rs", setVersion: 2, electionId: {"$oid": "000000000000000000000001"} } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs", maxSetVersion: 2, maxElectionId: {"$oid": "000000000000000000000001"}, } }, # Disconnected from B. 
{ responses: [ ["b:27017", {}] ], outcome: { servers: { "a:27017": { type: "Unknown", setName: , electionId: }, "b:27017": { type: "Unknown", setName: , electionId: } }, topologyType: "ReplicaSetNoPrimary", logicalSessionTimeoutMinutes: null, setName: "rs", maxSetVersion: 2, maxElectionId: {"$oid": "000000000000000000000001"}, } }, # A still claims to be primary but it's ignored. { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017", "b:27017"], setName: "rs", setVersion: 1, electionId: {"$oid": "000000000000000000000001"}, minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "Unknown", setName: , electionId: }, "b:27017": { type: "Unknown", setName: , electionId: } }, topologyType: "ReplicaSetNoPrimary", logicalSessionTimeoutMinutes: null, setName: "rs", maxSetVersion: 2, maxElectionId: {"$oid": "000000000000000000000001"}, } }, # Now A is re-elected. { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017", "b:27017"], setName: "rs", setVersion: 2, electionId: {"$oid": "000000000000000000000002"}, minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs", setVersion: 2, electionId: {"$oid": "000000000000000000000002"} }, "b:27017": { type: "Unknown", setName: , electionId: } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs", maxSetVersion: 2, maxElectionId: {"$oid": "000000000000000000000002"}, } }, # B comes back as secondary. { responses: [ ["b:27017", { ok: 1, helloOk: true, isWritablePrimary: false, secondary: true, hosts: ["a:27017", "b:27017"], setName: "rs", minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs", setVersion: 2, electionId: {"$oid": "000000000000000000000002"} }, "b:27017": { type: "RSSecondary", setName: "rs" } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs", maxSetVersion: 2, maxElectionId: {"$oid": "000000000000000000000002"}, } } ] primary_hint_from_secondary_with_mismatched_me.yml000066400000000000000000000031261505113246500347360ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/rsdescription: "Secondary with mismatched 'me' tells us who the primary is" uri: "mongodb://a/?replicaSet=rs" phases: [ # A is a secondary with mismatched "me". Remove A, add PossiblePrimary B. { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: false, secondary: true, me: "c:27017", hosts: ["b:27017"], setName: "rs", primary: "b:27017", minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "b:27017": { type: "PossiblePrimary", setName: } }, topologyType: "ReplicaSetNoPrimary", logicalSessionTimeoutMinutes: null, setName: "rs", } }, # Discover B is primary. 
{ responses: [ ["b:27017", { ok: 1, helloOk: true, isWritablePrimary: true, me: "b:27017", hosts: ["b:27017"], setName: "rs", minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "b:27017": { type: "RSPrimary", setName: "rs" } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs" } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/rs/primary_mismatched_me.yml000066400000000000000000000011751505113246500276500ustar00rootroot00000000000000description: Primary mismatched me phases: - outcome: servers: 'a:27017': setName: null type: Unknown 'b:27017': setName: null type: Unknown setName: rs topologyType: ReplicaSetNoPrimary logicalSessionTimeoutMinutes: null responses: - - 'localhost:27017' - me: 'a:27017' hosts: - 'a:27017' - 'b:27017' helloOk: true isWritablePrimary: true ok: 1 setName: rs minWireVersion: 0 maxWireVersion: 6 uri: 'mongodb://localhost:27017/?replicaSet=rs' mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/rs/primary_mismatched_me_not_removed.yml000066400000000000000000000034111505113246500322440ustar00rootroot00000000000000description: Primary mismatched me is not removed uri: mongodb://localhost:27017,localhost:27018/?replicaSet=rs phases: [ { responses: [ ["localhost:27017", { ok: 1, hosts: [ "localhost:27017", "localhost:27018" ], helloOk: true, isWritablePrimary: true, setName: "rs", primary: "localhost:27017", # me does not match the primary responder's address, but the server # is still added because we don't me mismatch check the primary and all # servers from a primary isWritablePrimary are added to the working server set me: "a:27017", minWireVersion: 0, maxWireVersion: 7 }] ], outcome: { servers: { "localhost:27017": { type: "RSPrimary", setName: "rs" }, "localhost:27018": { type: "Unknown", setName: null } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs" } }, { responses: [ ["localhost:27018", { ok: 1, hosts: [ "localhost:27017", "localhost:27018" ], helloOk: true, isWritablePrimary: false, secondary: true, setName: "rs", primary: "localhost:27017", me: "localhost:27018", minWireVersion: 0, maxWireVersion: 7 }] ], outcome: { servers: { "localhost:27017": { type: "RSPrimary", setName: "rs" }, "localhost:27018": { type: "RSSecondary", setName: "rs" } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs" } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/rs/primary_reports_new_member.yml000066400000000000000000000074411505113246500307510ustar00rootroot00000000000000description: "Primary reports a new member" uri: "mongodb://a/?replicaSet=rs" phases: [ # At first, a is a secondary. { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: false, secondary: true, setName: "rs", hosts: ["a:27017", "b:27017"], minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "RSSecondary", setName: "rs" }, "b:27017": { type: "Unknown", setName: } }, topologyType: "ReplicaSetNoPrimary", logicalSessionTimeoutMinutes: null, setName: "rs" } }, # b is the primary. { responses: [ ["b:27017", { ok: 1, helloOk: true, isWritablePrimary: true, setName: "rs", hosts: ["a:27017", "b:27017"], minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "RSSecondary", setName: "rs" }, "b:27017": { type: "RSPrimary", setName: "rs" } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs" } }, # Admin adds a secondary member c. 
{ responses: [ ["b:27017", { ok: 1, helloOk: true, isWritablePrimary: true, setName: "rs", hosts: ["a:27017", "b:27017", "c:27017"], minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { # c is new. servers: { "a:27017": { type: "RSSecondary", setName: "rs" }, "b:27017": { type: "RSPrimary", setName: "rs" }, "c:27017": { type: "Unknown", setName: } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs" } }, # c becomes secondary. { responses: [ ["c:27017", { ok: 1, helloOk: true, isWritablePrimary: false, secondary: true, setName: "rs", primary: "b:27017", hosts: ["a:27017", "b:27017", "c:27017"], minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { # c is a secondary. servers: { "a:27017": { type: "RSSecondary", setName: "rs" }, "b:27017": { type: "RSPrimary", setName: "rs" }, "c:27017": { type: "RSSecondary", setName: "rs" } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs" } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/rs/primary_to_no_primary_mismatched_me.yml000066400000000000000000000033421505113246500326070ustar00rootroot00000000000000description: "Primary to no primary with mismatched me" uri: "mongodb://a/?replicaSet=rs" phases: [ { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017", "b:27017"], me: "a:27017", setName: "rs", minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs" }, "b:27017": { type: "Unknown", setName: } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs" } }, { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["c:27017", "d:27017"], me : "c:27017", setName: "rs", minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "c:27017": { type: "Unknown", setName: }, "d:27017": { type: "Unknown", setName: } }, topologyType: "ReplicaSetNoPrimary", logicalSessionTimeoutMinutes: null, setName: "rs" } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/rs/primary_wrong_set_name.yml000066400000000000000000000011561505113246500300570ustar00rootroot00000000000000description: "Primary wrong setName" uri: "mongodb://a/?replicaSet=rs" phases: [ { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017"], setName: "wrong", minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: {}, topologyType: "ReplicaSetNoPrimary", logicalSessionTimeoutMinutes: null, setName: "rs" } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/rs/repeated.yml000066400000000000000000000045221505113246500250760ustar00rootroot00000000000000description: Repeated isWritablePrimary response must be processed uri: "mongodb://a,b/?replicaSet=rs" phases: # Phase 1 - a says it's not primary and suggests c may be the primary - responses: - - "a:27017" - ok: 1 helloOk: true isWritablePrimary: false secondary: true hidden: true hosts: ["a:27017", "c:27017"] setName: "rs" minWireVersion: 0 maxWireVersion: 6 outcome: servers: "a:27017": type: "RSOther" setName: "rs" "b:27017": type: Unknown "c:27017": type: Unknown topologyType: "ReplicaSetNoPrimary" logicalSessionTimeoutMinutes: ~ setName: "rs" # Phase 2 - c says it's a standalone, is removed - responses: - - "c:27017" - ok: 1 helloOk: true isWritablePrimary: true minWireVersion: 0 maxWireVersion: 6 outcome: servers: "a:27017": type: "RSOther" setName: "rs" "b:27017": type: Unknown topologyType: "ReplicaSetNoPrimary" logicalSessionTimeoutMinutes: ~ setName: 
"rs" # Phase 3 - response from a is repeated, and must be processed; c added again - responses: - - "a:27017" - ok: 1 helloOk: true isWritablePrimary: false secondary: true hidden: true hosts: ["a:27017", "c:27017"] setName: "rs" minWireVersion: 0 maxWireVersion: 6 outcome: servers: "a:27017": type: "RSOther" setName: "rs" "b:27017": type: Unknown "c:27017": type: Unknown topologyType: "ReplicaSetNoPrimary" logicalSessionTimeoutMinutes: ~ setName: "rs" # Phase 4 - c is now a primary - responses: - - "c:27017" - ok: 1 helloOk: true isWritablePrimary: true hosts: ["a:27017", "c:27017"] setName: "rs" minWireVersion: 0 maxWireVersion: 6 outcome: servers: "a:27017": type: "RSOther" setName: "rs" "c:27017": type: RSPrimary setName: rs topologyType: "ReplicaSetWithPrimary" logicalSessionTimeoutMinutes: ~ setName: "rs" mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/rs/replicaset_rsnp.yml000066400000000000000000000010661505113246500265020ustar00rootroot00000000000000description: replicaSet URI option causes starting topology to be RSNP uri: "mongodb://a/?replicaSet=rs&directConnection=false" phases: # We are connecting to a standalone - responses: - - "a:27017" - ok: 1 helloOk: true isWritablePrimary: true minWireVersion: 0 maxWireVersion: 6 outcome: # Server is removed because it's a standalone and the driver # started in RSNP topology servers: {} topologyType: "ReplicaSetNoPrimary" logicalSessionTimeoutMinutes: ~ setName: "rs" mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/rs/response_from_removed.yml000066400000000000000000000027161505113246500277120ustar00rootroot00000000000000description: "Response from removed server" uri: "mongodb://a,b/?replicaSet=rs" phases: [ { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, setName: "rs", hosts: ["a:27017"], minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs" } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs" } }, { responses: [ ["b:27017", { ok: 1, helloOk: true, isWritablePrimary: false, secondary: true, setName: "rs", hosts: ["a:27017", "b:27017"], minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs" } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs" } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/rs/ruby_primary_address_change.yml000066400000000000000000000015711505113246500310440ustar00rootroot00000000000000# This test was used during Ruby driver SDAM implementation. # It is not worthwhile to upstream it to specifications repo. description: Primary whose address differs from client address but no me mismatch uri: mongodb://localhost:27017/?replicaSet=rs phases: - responses: - - localhost:27017 - hosts: - a:27017 - b:27017 ismaster: true ok: 1 setName: rs minWireVersion: 0 maxWireVersion: 6 outcome: # Both of the hosts in the primary description are added to the topology. # Existing server (localhost:27017) is removed from topology because # its address is not in the list of hosts returned by the primary. 
servers: a:27017: setName: type: Unknown b:27017: setName: type: Unknown setName: rs topologyType: ReplicaSetNoPrimary logicalSessionTimeoutMinutes: ruby_secondary_wrong_set_name_with_primary_second.yml000066400000000000000000000030051505113246500354710ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/rsdescription: "Secondary wrong setName with primary when secondary responds first" uri: "mongodb://a,b/" phases: [ { responses: [ ["b:27017", { ok: 1, ismaster: false, secondary: true, hosts: ["a:27017", "b:27017"], setName: "set-b", minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "Unknown", }, "b:27017": { type: "RSSecondary", setName: "set-b" } }, topologyType: "ReplicaSetNoPrimary", logicalSessionTimeoutMinutes: null, setName: "set-b" } }, { responses: [ ["a:27017", { ok: 1, ismaster: true, hosts: ["a:27017", "b:27017"], setName: "set-a", minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "b:27017": { type: "RSSecondary", setName: "set-b", } }, topologyType: "ReplicaSetNoPrimary", logicalSessionTimeoutMinutes: null, setName: "set-b" } }, ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/rs/sec_not_auth.yml000066400000000000000000000023711505113246500257600ustar00rootroot00000000000000description: "Secondary's host list is not authoritative" uri: "mongodb://a/?replicaSet=rs" phases: [ { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, setName: "rs", hosts: ["a:27017", "b:27017"], minWireVersion: 0, maxWireVersion: 6 }], ["b:27017", { ok: 1, helloOk: true, isWritablePrimary: false, secondary: true, setName: "rs", hosts: ["b:27017", "c:27017"], minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs" }, "b:27017": { type: "RSSecondary", setName: "rs" } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs" } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/rs/secondary_ignore_ok_0-pre-6.0.yml000066400000000000000000000035341505113246500306360ustar00rootroot00000000000000description: "Pre 6.0 New primary" uri: "mongodb://a,b/?replicaSet=rs" phases: [ { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, setName: "rs", hosts: ["a:27017", "b:27017"], minWireVersion: 0, maxWireVersion: 6 }], ["b:27017", { ok: 1, helloOk: true, isWritablePrimary: false, secondary: true, setName: "rs", hosts: ["a:27017", "b:27017"], minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs" }, "b:27017": { type: "RSSecondary", setName: "rs" } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs" } }, { responses: [ ["b:27017", { ok: 0, minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs" }, "b:27017": { type: "Unknown", setName: } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs" } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/rs/secondary_ignore_ok_0.yml000066400000000000000000000035521505113246500275510ustar00rootroot00000000000000description: "Secondary ignored when ok is zero" uri: "mongodb://a,b/?replicaSet=rs" phases: [ { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, setName: "rs", hosts: ["a:27017", "b:27017"], minWireVersion: 0, maxWireVersion: 6 }], ["b:27017", { ok: 1, helloOk: true, isWritablePrimary: false, secondary: true, setName: "rs", hosts: ["a:27017", 
"b:27017"], minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs" }, "b:27017": { type: "RSSecondary", setName: "rs" } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs" } }, { responses: [ ["b:27017", { ok: 0, minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs" }, "b:27017": { type: "Unknown", setName: } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs" } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/rs/secondary_mismatched_me.yml000066400000000000000000000012011505113246500301420ustar00rootroot00000000000000description: Secondary mismatched me uri: 'mongodb://localhost:27017/?replicaSet=rs' phases: - outcome: servers: 'a:27017': setName: null type: Unknown 'b:27017': setName: null type: Unknown setName: rs topologyType: ReplicaSetNoPrimary logicalSessionTimeoutMinutes: null responses: - - 'localhost:27017' - me: 'a:27017' hosts: - 'a:27017' - 'b:27017' helloOk: true isWritablePrimary: false ok: 1 setName: rs minWireVersion: 0 maxWireVersion: 6 mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/rs/secondary_wrong_set_name.yml000066400000000000000000000012261505113246500303610ustar00rootroot00000000000000description: "Secondary wrong setName" uri: "mongodb://a/?replicaSet=rs" phases: [ { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: false, secondary: true, hosts: ["a:27017"], setName: "wrong", minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: {}, topologyType: "ReplicaSetNoPrimary", logicalSessionTimeoutMinutes: null, setName: "rs" } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/rs/secondary_wrong_set_name_with_primary.yml000066400000000000000000000031301505113246500331530ustar00rootroot00000000000000description: "Secondary wrong setName with primary" uri: "mongodb://a,b/?replicaSet=rs" phases: [ { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017", "b:27017"], setName: "rs", minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs" }, "b:27017": { type: "Unknown", setName: } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs" } }, { responses: [ ["b:27017", { ok: 1, helloOk: true, isWritablePrimary: false, secondary: true, hosts: ["a:27017", "b:27017"], setName: "wrong", minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs" } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs" } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/rs/set_version_can_rollback.yml000066400000000000000000000050571505113246500303430ustar00rootroot00000000000000description: Set version rolls back after new primary with higher election Id uri: mongodb://a/?replicaSet=rs phases: - responses: - - a:27017 - ok: 1 helloOk: true isWritablePrimary: true hosts: - a:27017 - b:27017 setName: rs setVersion: 2 electionId: $oid: '000000000000000000000001' minWireVersion: 0 maxWireVersion: 17 outcome: servers: a:27017: type: RSPrimary setName: rs setVersion: 2 electionId: $oid: '000000000000000000000001' b:27017: type: Unknown setName: null electionId: null topologyType: ReplicaSetWithPrimary logicalSessionTimeoutMinutes: null setName: rs maxSetVersion: 2 maxElectionId: $oid: '000000000000000000000001' - # Response from new primary with newer 
election Id responses: - - b:27017 - ok: 1 helloOk: true isWritablePrimary: true hosts: - a:27017 - b:27017 setName: rs setVersion: 1 electionId: $oid: '000000000000000000000002' minWireVersion: 0 maxWireVersion: 17 outcome: servers: a:27017: type: Unknown setName: null electionId: null b:27017: type: RSPrimary setName: rs setVersion: 1 electionId: $oid: '000000000000000000000002' topologyType: ReplicaSetWithPrimary logicalSessionTimeoutMinutes: null setName: rs maxSetVersion: 1 maxElectionId: $oid: '000000000000000000000002' - # Response from stale primary responses: - - a:27017 - ok: 1 helloOk: true isWritablePrimary: true hosts: - a:27017 - b:27017 setName: rs setVersion: 2 electionId: $oid: '000000000000000000000001' minWireVersion: 0 maxWireVersion: 17 outcome: servers: a:27017: type: Unknown setName: null electionId: null b:27017: type: RSPrimary setName: rs setVersion: 1 electionId: $oid: '000000000000000000000002' topologyType: ReplicaSetWithPrimary logicalSessionTimeoutMinutes: null setName: rs maxSetVersion: 1 maxElectionId: $oid: '000000000000000000000002' mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/rs/setversion_equal_max_without_electionid.yml000066400000000000000000000041761505113246500335310ustar00rootroot00000000000000description: "setVersion that is equal is treated the same as greater than if there is no electionId" uri: "mongodb://a/?replicaSet=rs" phases: [ # Primary A is discovered and tells us about B. { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017", "b:27017"], setName: "rs", setVersion: 1, minWireVersion: 0, maxWireVersion: 17 }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs", setVersion: 1, electionId: }, "b:27017": { type: "Unknown", setName: , electionId: } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs", maxSetVersion: 1, } }, # B is elected; its setVersion is equal to the max, so it is accepted as primary { responses: [ ["b:27017", { ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017", "b:27017"], setName: "rs", setVersion: 1, minWireVersion: 0, maxWireVersion: 17 }] ], outcome: { servers: { "a:27017": { type: "Unknown", setName: , electionId: }, "b:27017": { type: "RSPrimary", setName: "rs", setVersion: 1, electionId: } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs", maxSetVersion: 1, # Max is still 1, there wasn't an actual larger setVersion seen } } ] setversion_greaterthan_max_without_electionid.yml000066400000000000000000000041521505113246500346410ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/rsdescription: "setVersion that is greater than maxSetVersion is used if there is no electionId" uri: "mongodb://a/?replicaSet=rs" phases: [ # Primary A is discovered and tells us about B.
{ responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017", "b:27017"], setName: "rs", setVersion: 1, minWireVersion: 0, maxWireVersion: 17 }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs", setVersion: 1, electionId: }, "b:27017": { type: "Unknown", setName: , electionId: } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs", maxSetVersion: 1, } }, # B is elected, its setVersion is greater than our current maxSetVersion # B is primary, A is marked Unknown { responses: [ ["b:27017", { ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017", "b:27017"], setName: "rs", setVersion: 2, minWireVersion: 0, maxWireVersion: 17 }] ], outcome: { servers: { "a:27017": { type: "Unknown", setName: , electionId: }, "b:27017": { type: "RSPrimary", setName: "rs", setVersion: 2, electionId: }, }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs", maxSetVersion: 2, } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/rs/setversion_without_electionid-pre-6.0.yml000066400000000000000000000041471505113246500325600ustar00rootroot00000000000000description: "Pre 6.0 setVersion is ignored if there is no electionId" uri: "mongodb://a/?replicaSet=rs" phases: [ # Primary A is discovered and tells us about B. { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017", "b:27017"], setName: "rs", setVersion: 2, minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs", setVersion: 2 , electionId: }, "b:27017": { type: "Unknown", setName: , electionId: } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs", maxSetVersion: 2, } }, # B is elected, its setVersion is older but we believe it anyway, because # setVersion is only used in conjunction with electionId. { responses: [ ["b:27017", { ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017", "b:27017"], setName: "rs", setVersion: 1, minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "Unknown", setName: , electionId: }, "b:27017": { type: "RSPrimary", setName: "rs", setVersion: 1, electionId: } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs", maxSetVersion: 2, } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/rs/setversion_without_electionid.yml000066400000000000000000000040611505113246500314660ustar00rootroot00000000000000description: "setVersion that is less than maxSetVersion is ignored if there is no electionId" uri: "mongodb://a/?replicaSet=rs" phases: [ # Primary A is discovered and tells us about B. 
{ responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017", "b:27017"], setName: "rs", setVersion: 2, minWireVersion: 0, maxWireVersion: 17 }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs", setVersion: 2 , electionId: }, "b:27017": { type: "Unknown", setName: , electionId: } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs", maxSetVersion: 2, } }, # B is elected, its setVersion is older so it is stale { responses: [ ["b:27017", { ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017", "b:27017"], setName: "rs", setVersion: 1, minWireVersion: 0, maxWireVersion: 17 }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs", setVersion: 2 , electionId: }, "b:27017": { type: "Unknown", setName: , electionId: } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs", maxSetVersion: 2, } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/rs/stepdown_change_set_name.yml000066400000000000000000000027311505113246500303300ustar00rootroot00000000000000description: "Primary becomes a secondary with wrong setName" uri: "mongodb://a/?replicaSet=rs" phases: [ # Primary is discovered normally. { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017"], setName: "rs", minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs" } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs" } }, # Primary changes its setName and becomes secondary. # Remove it and change the topologyType. { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: false, secondary: true, hosts: ["a:27017"], setName: "wrong", minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: {}, topologyType: "ReplicaSetNoPrimary", logicalSessionTimeoutMinutes: null, setName: "rs" } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/rs/too_new.yml000066400000000000000000000024261505113246500247600ustar00rootroot00000000000000description: "Replica set member with large minWireVersion" uri: "mongodb://a,b/?replicaSet=rs" phases: [ { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, setName: "rs", hosts: ["a:27017", "b:27017"], minWireVersion: 0, maxWireVersion: 6 }], ["b:27017", { ok: 1, helloOk: true, isWritablePrimary: false, secondary: true, setName: "rs", hosts: ["a:27017", "b:27017"], minWireVersion: 999, maxWireVersion: 1000 }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs" }, "b:27017": { type: "RSSecondary", setName: "rs" } }, topologyType: "ReplicaSetWithPrimary", setName: "rs", logicalSessionTimeoutMinutes: null, compatible: false } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/rs/too_old.yml000066400000000000000000000024341505113246500247440ustar00rootroot00000000000000description: "Replica set member with default maxWireVersion of 0" uri: "mongodb://a,b/?replicaSet=rs" phases: [ { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, setName: "rs", hosts: ["a:27017", "b:27017"], minWireVersion: 0, maxWireVersion: 21 }], ["b:27017", { ok: 1, helloOk: true, isWritablePrimary: false, secondary: true, setName: "rs", hosts: ["a:27017", "b:27017"], minWireVersion: 999, maxWireVersion: 1000 }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs" }, "b:27017": { type: "RSSecondary", setName: "rs" } }, topologyType: "ReplicaSetWithPrimary", 
setName: "rs", logicalSessionTimeoutMinutes: null, compatible: false } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/rs/topology_version_equal.yml000066400000000000000000000040551505113246500301160ustar00rootroot00000000000000description: "Primary with equal topologyVersion" uri: "mongodb://a/?replicaSet=rs" phases: [ # Primary A is discovered { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017"], setName: "rs", minWireVersion: 0, maxWireVersion: 9, topologyVersion: {'processId': {"$oid": "000000000000000000000001"}, "counter": {"$numberLong": "1"}} }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs", topologyVersion: {'processId': {"$oid": "000000000000000000000001"}, "counter": {"$numberLong": "1"}} } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs", } }, # A responds with an equal topologyVersion, we should process the response. { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017", "b:27017"], setName: "rs", minWireVersion: 0, maxWireVersion: 9, topologyVersion: {'processId': {"$oid": "000000000000000000000001"}, "counter": {"$numberLong": "1"}} }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs", topologyVersion: {'processId': {"$oid": "000000000000000000000001"}, "counter": {"$numberLong": "1"}} }, "b:27017": { type: "Unknown", topologyVersion: null } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs", } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/rs/topology_version_greater.yml000066400000000000000000000132261505113246500304400ustar00rootroot00000000000000description: "Primary with newer topologyVersion" uri: "mongodb://a/?replicaSet=rs" phases: [ # Primary A is discovered { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017"], setName: "rs", minWireVersion: 0, maxWireVersion: 9, topologyVersion: {'processId': {"$oid": "000000000000000000000001"}, "counter": {"$numberLong": "1"}} }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs", topologyVersion: {'processId': {"$oid": "000000000000000000000001"}, "counter": {"$numberLong": "1"}} } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs", } }, # A responds with a greater topologyVersion counter, we should process the response. { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017", "b:27017"], setName: "rs", minWireVersion: 0, maxWireVersion: 9, topologyVersion: {'processId': {"$oid": "000000000000000000000001"}, "counter": {"$numberLong": "2"}} }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs", topologyVersion: {'processId': {"$oid": "000000000000000000000001"}, "counter": {"$numberLong": "2"}} }, "b:27017": { type: "Unknown", topologyVersion: null } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs", } }, # A responds with a different topologyVersion processId, we should process the response. 
{ responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017", "c:27017"], setName: "rs", minWireVersion: 0, maxWireVersion: 9, topologyVersion: {'processId': {"$oid": "000000000000000000000002"}, "counter": {"$numberLong": "0"}} }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs", topologyVersion: {'processId': {"$oid": "000000000000000000000002"}, "counter": {"$numberLong": "0"}} }, "c:27017": { type: "Unknown", topologyVersion: null } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs", } }, # A responds without a topologyVersion, we should process the response. { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017", "d:27017"], setName: "rs", minWireVersion: 0, maxWireVersion: 9 }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs", topologyVersion: null }, "d:27017": { type: "Unknown", topologyVersion: null } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs", } }, # A responds with a topologyVersion again, we should process the response. { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017", "e:27017"], setName: "rs", minWireVersion: 0, maxWireVersion: 9, topologyVersion: {'processId': {"$oid": "000000000000000000000003"}, "counter": {"$numberLong": "0"}} }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs", topologyVersion: {'processId': {"$oid": "000000000000000000000003"}, "counter": {"$numberLong": "0"}} }, "e:27017": { type: "Unknown", topologyVersion: null } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs", } }, # A responds with a network error, we should process the response. { responses: [ ["a:27017", {}] ], outcome: { servers: { "a:27017": { type: "Unknown", topologyVersion: null }, "e:27017": { type: "Unknown", topologyVersion: null } }, topologyType: "ReplicaSetNoPrimary", logicalSessionTimeoutMinutes: null, setName: "rs", } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/rs/topology_version_less.yml000066400000000000000000000036551505113246500277620ustar00rootroot00000000000000description: "Primary with older topologyVersion" uri: "mongodb://a/?replicaSet=rs" phases: [ # Primary A is discovered { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017"], setName: "rs", minWireVersion: 0, maxWireVersion: 9, topologyVersion: {'processId': {"$oid": "000000000000000000000001"}, "counter": {"$numberLong": "1"}} }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs", topologyVersion: {'processId': {"$oid": "000000000000000000000001"}, "counter": {"$numberLong": "1"}} } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs", } }, # A responds with an older topologyVersion, we should ignore the response. 
{ responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017", "b:27017"], setName: "rs", minWireVersion: 0, maxWireVersion: 9, topologyVersion: {'processId': {"$oid": "000000000000000000000001"}, "counter": {"$numberLong": "0"}} }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs", topologyVersion: {'processId': {"$oid": "000000000000000000000001"}, "counter": {"$numberLong": "1"}} } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs", } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/rs/unexpected_mongos.yml000066400000000000000000000011011505113246500270210ustar00rootroot00000000000000description: "Unexpected mongos" uri: "mongodb://b/?replicaSet=rs" phases: [ { responses: [ ["b:27017", { ok: 1, helloOk: true, isWritablePrimary: true, msg: "isdbgrid", minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: {}, topologyType: "ReplicaSetNoPrimary", logicalSessionTimeoutMinutes: null, setName: "rs" } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/rs/use_setversion_without_electionid-pre-6.0.yml000066400000000000000000000066021505113246500334320ustar00rootroot00000000000000description: "Pre 6.0 Record max setVersion, even from primary without electionId" uri: "mongodb://a/?replicaSet=rs" phases: [ # Primary A has setVersion and electionId, tells us about B. { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017", "b:27017"], setName: "rs", setVersion: 1, electionId: {"$oid": "000000000000000000000001"}, minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs", setVersion: 1, electionId: {"$oid": "000000000000000000000001"} }, "b:27017": { type: "Unknown", setName: , electionId: } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs", maxSetVersion: 1, maxElectionId: {"$oid": "000000000000000000000001"}, } }, # Reconfig the set and elect B, it has a new setVersion but no electionId. { responses: [ ["b:27017", { ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017", "b:27017"], setName: "rs", setVersion: 2, minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "Unknown", setName: , electionId: }, "b:27017": { type: "RSPrimary", setName: "rs", setVersion: 2 } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs", maxSetVersion: 2, maxElectionId: {"$oid": "000000000000000000000001"}, } }, # Delayed response from A, reporting its reelection. Its setVersion shows # the election preceded B's so we ignore it. 
{ responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017", "b:27017"], setName: "rs", setVersion: 1, electionId: {"$oid": "000000000000000000000002"}, minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "Unknown", setName: , electionId: }, "b:27017": { type: "RSPrimary", setName: "rs", setVersion: 2 } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs", maxSetVersion: 2, maxElectionId: {"$oid": "000000000000000000000001"}, } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/rs/use_setversion_without_electionid.yml000066400000000000000000000071611505113246500323460ustar00rootroot00000000000000description: "Record max setVersion, even from primary without electionId" uri: "mongodb://a/?replicaSet=rs" phases: [ # Primary A has electionId and setVersion, tells us about B. { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017", "b:27017"], setName: "rs", setVersion: 1, electionId: {"$oid": "000000000000000000000001"}, minWireVersion: 0, maxWireVersion: 17 }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs", setVersion: 1, electionId: {"$oid": "000000000000000000000001"} }, "b:27017": { type: "Unknown", setName: , electionId: } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs", maxSetVersion: 1, maxElectionId: {"$oid": "000000000000000000000001"}, } }, # Reconfig, B reports as primary, B is missing the electionId but reports setVersion { responses: [ ["b:27017", { ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017", "b:27017"], setName: "rs", setVersion: 2, minWireVersion: 0, maxWireVersion: 17 }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs", setVersion: 1, electionId: { "$oid": "000000000000000000000001" } }, "b:27017": { type: "Unknown", setName: , electionId: } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs", maxSetVersion: 1, maxElectionId: {"$oid": "000000000000000000000001"}, } }, # A reports as primary, A has been reelection (electionId greater than our recorded maxElectionId). 
# A's setVersion is less than our maxSetVersion, but electionId takes precedence so B's primary claim is ignored { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017", "b:27017"], setName: "rs", setVersion: 1, electionId: {"$oid": "000000000000000000000002"}, minWireVersion: 0, maxWireVersion: 17 }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs", setVersion: 1, electionId: {"$oid": "000000000000000000000002"} }, "b:27017":{ type: "Unknown", setName: , electionId: } }, topologyType: "ReplicaSetWithPrimary", logicalSessionTimeoutMinutes: null, setName: "rs", maxSetVersion: 1, maxElectionId: {"$oid": "000000000000000000000002"}, } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/rs/wrong_set_name.yml000066400000000000000000000014311505113246500263100ustar00rootroot00000000000000description: "Wrong setName" uri: "mongodb://a,b/?replicaSet=rs" phases: [ { responses: [ ["b:27017", { ok: 1, helloOk: true, isWritablePrimary: false, secondary: true, hosts: ["b:27017", "c:27017"], setName: "wrong", minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "Unknown", setName: } }, topologyType: "ReplicaSetNoPrimary", logicalSessionTimeoutMinutes: null, setName: "rs" } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/sharded/000077500000000000000000000000001505113246500235455ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/sharded/compatible.yml000066400000000000000000000021251505113246500264070ustar00rootroot00000000000000description: "Multiple mongoses with large maxWireVersion" uri: "mongodb://a,b" phases: [ { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, msg: "isdbgrid", minWireVersion: 0, maxWireVersion: 1000 }], ["b:27017", { ok: 1, helloOk: true, isWritablePrimary: true, msg: "isdbgrid", minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "Mongos", setName: }, "b:27017": { type: "Mongos", setName: } }, topologyType: "Sharded", logicalSessionTimeoutMinutes: null, setName: , compatible: true } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/sharded/discover_single_mongos.yml000066400000000000000000000006501505113246500310320ustar00rootroot00000000000000description: "Discover single mongos" uri: "mongodb://a/?directConnection=false" phases: - responses: - - "a:27017" - ok: 1 helloOk: true isWritablePrimary: true msg: "isdbgrid" minWireVersion: 0 maxWireVersion: 6 outcome: servers: "a:27017": type: "Mongos" setName: topologyType: "Sharded" setName: mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/sharded/ls_timeout_mongos.yml000066400000000000000000000045041505113246500300410ustar00rootroot00000000000000description: "Parse logicalSessionTimeoutMinutes from mongoses" uri: "mongodb://a,b" phases: [ { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, msg: "isdbgrid", logicalSessionTimeoutMinutes: 1, minWireVersion: 0, maxWireVersion: 6 }], ["b:27017", { ok: 1, helloOk: true, isWritablePrimary: true, msg: "isdbgrid", logicalSessionTimeoutMinutes: 2, minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "Mongos", setName: }, "b:27017": { type: "Mongos", setName: } }, topologyType: "Sharded", logicalSessionTimeoutMinutes: 1, # Minimum of the two setName: } }, # Now an isWritablePrimary response with no logicalSessionTimeoutMinutes { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, msg: "isdbgrid", logicalSessionTimeoutMinutes: 
1, minWireVersion: 0, maxWireVersion: 6 }], ["b:27017", { ok: 1, helloOk: true, isWritablePrimary: true, msg: "isdbgrid", minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "Mongos", setName: }, "b:27017": { type: "Mongos", setName: } }, topologyType: "Sharded", logicalSessionTimeoutMinutes: null, # Sessions not supported now setName: } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/sharded/mongos_disconnect.yml000066400000000000000000000044071505113246500300100ustar00rootroot00000000000000description: "Mongos disconnect" uri: "mongodb://a,b" phases: [ { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, msg: "isdbgrid", minWireVersion: 0, maxWireVersion: 6 }], ["b:27017", { ok: 1, helloOk: true, isWritablePrimary: true, msg: "isdbgrid", minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "Mongos", setName: }, "b:27017": { type: "Mongos", setName: } }, topologyType: "Sharded", logicalSessionTimeoutMinutes: null, setName: } }, { responses: [ ["a:27017", {}], # Hangup. ], outcome: { servers: { "a:27017": { type: "Unknown", setName: }, "b:27017": { type: "Mongos", setName: } }, topologyType: "Sharded", logicalSessionTimeoutMinutes: null, setName: } }, { responses: [ # Back in action. ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, msg: "isdbgrid", minWireVersion: 0, maxWireVersion: 6 }], ], outcome: { servers: { "a:27017": { type: "Mongos", setName: }, "b:27017": { type: "Mongos", setName: } }, topologyType: "Sharded", logicalSessionTimeoutMinutes: null, setName: } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/sharded/multiple_mongoses.yml000066400000000000000000000020461505113246500300370ustar00rootroot00000000000000description: "Multiple mongoses" uri: "mongodb://a,b" phases: [ { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, msg: "isdbgrid", minWireVersion: 0, maxWireVersion: 6 }], ["b:27017", { ok: 1, helloOk: true, isWritablePrimary: true, msg: "isdbgrid", minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "Mongos", setName: }, "b:27017": { type: "Mongos", setName: } }, topologyType: "Sharded", logicalSessionTimeoutMinutes: null, setName: } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/sharded/non_mongos_removed.yml000066400000000000000000000017541505113246500301740ustar00rootroot00000000000000description: "Non-Mongos server in sharded cluster" uri: "mongodb://a,b" phases: [ { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, msg: "isdbgrid", minWireVersion: 0, maxWireVersion: 6 }], ["b:27017", { ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["b:27017"], setName: "rs", minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "Mongos", setName: } }, topologyType: "Sharded", logicalSessionTimeoutMinutes: null, setName: } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/sharded/normalize_uri_case.yml000066400000000000000000000010101505113246500301320ustar00rootroot00000000000000description: "Normalize URI case" uri: "mongodb://A,B" phases: [ { responses: [ ], outcome: { servers: { "a:27017": { type: "Unknown", setName: }, "b:27017": { type: "Unknown", setName: } }, topologyType: "Unknown", logicalSessionTimeoutMinutes: null, setName: } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/sharded/ruby_primary_different_address.yml000066400000000000000000000010101505113246500325370ustar00rootroot00000000000000description: RS Primary in forced 
sharded topology with a different address from client but no me mismatch uri: mongodb://localhost:27017/?connect=sharded phases: - responses: - - localhost:27017 - hosts: - a:27017 - b:27017 ismaster: true ok: 1 setName: rs minWireVersion: 0 maxWireVersion: 6 outcome: # Since the server is of the wrong type, it is removed from the topology. servers: {} topologyType: Sharded logicalSessionTimeoutMinutes: mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/sharded/ruby_primary_mismatched_me.yml000066400000000000000000000007631505113246500317010ustar00rootroot00000000000000description: RS Primary in forced sharded topology with me mismatch uri: mongodb://localhost:27017/?connect=sharded phases: - responses: - - localhost:27017 - me: a:27017 hosts: - a:27017 - b:27017 ismaster: true ok: 1 setName: rs minWireVersion: 0 maxWireVersion: 6 outcome: # Since the server is of the wrong type, it is removed from the topology. servers: {} topologyType: Sharded logicalSessionTimeoutMinutes: mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/sharded/too_new.yml000066400000000000000000000021321505113246500257400ustar00rootroot00000000000000description: "Multiple mongoses with large minWireVersion" uri: "mongodb://a,b" phases: [ { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, msg: "isdbgrid", minWireVersion: 999, maxWireVersion: 1000 }], ["b:27017", { ok: 1, helloOk: true, isWritablePrimary: true, msg: "isdbgrid", minWireVersion: 7, maxWireVersion: 900 }] ], outcome: { servers: { "a:27017": { type: "Mongos", setName: }, "b:27017": { type: "Mongos", setName: } }, topologyType: "Sharded", logicalSessionTimeoutMinutes: null, setName: , compatible: false } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/sharded/too_old.yml000066400000000000000000000020141505113246500257240ustar00rootroot00000000000000description: "Multiple mongoses with default maxWireVersion of 0" uri: "mongodb://a,b" phases: [ { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, msg: "isdbgrid", minWireVersion: 2, maxWireVersion: 6 }], ["b:27017", { ok: 1, helloOk: true, isWritablePrimary: true, msg: "isdbgrid" }] ], outcome: { servers: { "a:27017": { type: "Mongos", setName: }, "b:27017": { type: "Mongos", setName: } }, topologyType: "Sharded", logicalSessionTimeoutMinutes: null, setName: , compatible: false } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/single/000077500000000000000000000000001505113246500234145ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/single/compatible.yml000066400000000000000000000012471505113246500262620ustar00rootroot00000000000000description: "Standalone with large maxWireVersion" uri: "mongodb://a" phases: [ { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "Standalone", setName: } }, topologyType: "Single", logicalSessionTimeoutMinutes: null, setName: , compatible: true } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/single/direct_connection_external_ip.yml000066400000000000000000000014211505113246500322200ustar00rootroot00000000000000description: "Direct connection to RSPrimary via external IP" uri: "mongodb://a/?directConnection=true" phases: [ { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["b:27017"], # Internal IP. 
setName: "rs", minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs" } }, topologyType: "Single", logicalSessionTimeoutMinutes: null, setName: } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/single/direct_connection_mongos.yml000066400000000000000000000013001505113246500312040ustar00rootroot00000000000000description: "Direct connection to mongos" uri: "mongodb://a/?directConnection=true" phases: [ { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, msg: "isdbgrid", minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "Mongos", setName: } }, topologyType: "Single", logicalSessionTimeoutMinutes: null, setName: } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/single/direct_connection_replicaset.yml000066400000000000000000000010231505113246500320370ustar00rootroot00000000000000description: Direct connection with replicaSet URI option uri: "mongodb://a/?replicaSet=rs&directConnection=true" phases: # We are connecting to a replica set member - responses: - - "a:27017" - ok: 1 helloOk: true isWritablePrimary: true setName: rs minWireVersion: 0 maxWireVersion: 6 outcome: servers: "a:27017": type: "RSPrimary" setName: "rs" topologyType: "Single" logicalSessionTimeoutMinutes: setName: rs mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/single/direct_connection_rsarbiter.yml000066400000000000000000000014341505113246500317070ustar00rootroot00000000000000description: "Direct connection to RSArbiter" uri: "mongodb://a/?directConnection=true" phases: [ { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: false, arbiterOnly: true, hosts: ["a:27017", "b:27017"], setName: "rs", minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "RSArbiter", setName: "rs" } }, topologyType: "Single", logicalSessionTimeoutMinutes: null, setName: } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/single/direct_connection_rsprimary.yml000066400000000000000000000013651505113246500317450ustar00rootroot00000000000000description: "Direct connection to RSPrimary" uri: "mongodb://a/?directConnection=true" phases: [ { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, hosts: ["a:27017", "b:27017"], setName: "rs", minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "RSPrimary", setName: "rs" } }, topologyType: "Single", logicalSessionTimeoutMinutes: null, setName: } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/single/direct_connection_rssecondary.yml000066400000000000000000000014461505113246500322510ustar00rootroot00000000000000description: "Direct connection to RSSecondary" uri: "mongodb://a/?directConnection=true" phases: [ { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: false, secondary: true, hosts: ["a:27017", "b:27017"], setName: "rs", minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "RSSecondary", setName: "rs" } }, topologyType: "Single", logicalSessionTimeoutMinutes: null, setName: } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/single/direct_connection_standalone.yml000066400000000000000000000012431505113246500320400ustar00rootroot00000000000000description: "Direct connection to standalone" uri: "mongodb://a/?directConnection=true" phases: [ { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "Standalone", 
setName: } }, topologyType: "Single", logicalSessionTimeoutMinutes: null, setName: } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/single/direct_connection_unavailable_seed.yml000066400000000000000000000007341505113246500331770ustar00rootroot00000000000000description: "Direct connection to unavailable seed" uri: "mongodb://a/?directConnection=true" phases: [ { responses: [ ["a:27017", {}] ], outcome: { servers: { "a:27017": { type: "Unknown", setName: } }, topologyType: "Single", logicalSessionTimeoutMinutes: null, setName: } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/single/direct_connection_wrong_set_name.yml000066400000000000000000000014711505113246500327220ustar00rootroot00000000000000description: Direct connection to RSPrimary with wrong set name uri: mongodb://a/?directConnection=true&replicaSet=rs phases: - responses: - - a:27017 - ok: 1 helloOk: true isWritablePrimary: true hosts: - a:27017 - b:27017 setName: wrong minWireVersion: 0 maxWireVersion: 6 outcome: servers: a:27017: type: Unknown topologyType: Single logicalSessionTimeoutMinutes: setName: rs - responses: - - a:27017 - ok: 1 helloOk: true isWritablePrimary: true hosts: - a:27017 - b:27017 setName: rs minWireVersion: 0 maxWireVersion: 6 outcome: servers: a:27017: type: RSPrimary setName: rs topologyType: Single logicalSessionTimeoutMinutes: setName: rs mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/single/discover_standalone.yml000066400000000000000000000012301505113246500301610ustar00rootroot00000000000000description: "Discover standalone" uri: "mongodb://a/?directConnection=false" phases: [ { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "Standalone", setName: } }, topologyType: "Single", logicalSessionTimeoutMinutes: null, setName: } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/single/discover_unavailable_seed.yml000066400000000000000000000007221505113246500313210ustar00rootroot00000000000000description: "Discover unavailable seed" uri: "mongodb://a/?directConnection=false" phases: [ { responses: [ ["a:27017", {}] ], outcome: { servers: { "a:27017": { type: "Unknown", setName: } }, topologyType: "Unknown", logicalSessionTimeoutMinutes: null, setName: } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/single/ls_timeout_standalone.yml000066400000000000000000000013211505113246500305300ustar00rootroot00000000000000description: "Parse logicalSessionTimeoutMinutes from standalone" uri: "mongodb://a" phases: [ { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, logicalSessionTimeoutMinutes: 7, minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "Standalone", setName: } }, topologyType: "Single", logicalSessionTimeoutMinutes: 7, setName: } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/single/not_ok_response.yml000066400000000000000000000016001505113246500273430ustar00rootroot00000000000000description: "Handle a not-ok isWritablePrimary response" uri: "mongodb://a" phases: [ { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, minWireVersion: 0, maxWireVersion: 6 }], ["a:27017", { ok: 0, helloOk: true, isWritablePrimary: true, minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "Unknown", setName: } }, topologyType: "Single", logicalSessionTimeoutMinutes: null, setName: } } ] 
mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/single/ruby_primary_different_address.yml000066400000000000000000000011361505113246500324170ustar00rootroot00000000000000description: RS Primary in forced single topology with a different address from client but no me mismatch uri: mongodb://localhost:27017/?connect=direct phases: - responses: - - localhost:27017 - hosts: - a:27017 - b:27017 ismaster: true ok: 1 setName: rs minWireVersion: 0 maxWireVersion: 6 outcome: # In Single topology the server type is preserved. In this case the # connection is to a RS primary. servers: localhost:27017: type: RSPrimary setName: rs topologyType: Single logicalSessionTimeoutMinutes: mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/single/ruby_primary_mismatched_me.yml000066400000000000000000000011111505113246500315340ustar00rootroot00000000000000description: RS Primary in forced single topology with me mismatch uri: mongodb://localhost:27017/?connect=direct phases: - responses: - - localhost:27017 - me: a:27017 hosts: - a:27017 - b:27017 ismaster: true ok: 1 setName: rs minWireVersion: 0 maxWireVersion: 6 outcome: # In Single topology the server type is preserved. In this case the # connection is to a RS primary. servers: localhost:27017: type: RSPrimary setName: rs topologyType: Single logicalSessionTimeoutMinutes: mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/single/standalone_removed.yml000066400000000000000000000012321505113246500300060ustar00rootroot00000000000000description: "Standalone removed from multi-server topology" uri: "mongodb://a,b" phases: [ { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "b:27017": { type: "Unknown", setName: } }, topologyType: "Unknown", logicalSessionTimeoutMinutes: null, setName: } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/single/standalone_using_legacy_hello.yml000066400000000000000000000011511505113246500322010ustar00rootroot00000000000000description: "Connect to standalone using legacy hello" uri: "mongodb://a" phases: [ { responses: [ ["a:27017", { ok: 1, ismaster: true, minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "Standalone", setName: } }, topologyType: "Single", logicalSessionTimeoutMinutes: null, setName: } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/single/too_new.yml000066400000000000000000000012551505113246500256140ustar00rootroot00000000000000description: "Standalone with large minWireVersion" uri: "mongodb://a" phases: [ { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, minWireVersion: 999, maxWireVersion: 1000 }] ], outcome: { servers: { "a:27017": { type: "Standalone", setName: } }, topologyType: "Single", logicalSessionTimeoutMinutes: null, setName: , compatible: false } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/single/too_old.yml000066400000000000000000000011411505113246500255730ustar00rootroot00000000000000description: "Standalone with default maxWireVersion of 0" uri: "mongodb://a" phases: [ { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true }] ], outcome: { servers: { "a:27017": { type: "Standalone", setName: } }, topologyType: "Single", logicalSessionTimeoutMinutes: null, setName: , compatible: false } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam/single/too_old_then_upgraded.yml000066400000000000000000000023371505113246500304740ustar00rootroot00000000000000description: "Standalone with default maxWireVersion of 
0 is upgraded to one with maxWireVersion 6" uri: "mongodb://a" phases: [ { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true }] ], outcome: { servers: { "a:27017": { type: "Standalone", setName: } }, topologyType: "Single", logicalSessionTimeoutMinutes: null, setName: , compatible: false } }, { responses: [ ["a:27017", { ok: 1, helloOk: true, isWritablePrimary: true, minWireVersion: 0, maxWireVersion: 6 }] ], outcome: { servers: { "a:27017": { type: "Standalone", setName: } }, topologyType: "Single", logicalSessionTimeoutMinutes: null, setName: , compatible: true } } ] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam_monitoring/000077500000000000000000000000001505113246500244005ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam_monitoring/discovered_standalone.yml000066400000000000000000000037241505113246500314700ustar00rootroot00000000000000description: "Monitoring a discovered standalone connection" uri: "mongodb://a:27017/?directConnection=false" phases: - responses: - - "a:27017" - { ok: 1, helloOk: true, isWritablePrimary: true, minWireVersion: 0, maxWireVersion: 6 } outcome: events: - topology_opening_event: topologyId: "42" - topology_description_changed_event: topologyId: "42" previousDescription: topologyType: "Unknown" servers: [] newDescription: topologyType: "Unknown" servers: - address: "a:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" - server_opening_event: topologyId: "42" address: "a:27017" - server_description_changed_event: topologyId: "42" address: "a:27017" previousDescription: address: "a:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" newDescription: address: "a:27017" arbiters: [] hosts: [] passives: [] type: "Standalone" - topology_description_changed_event: topologyId: "42" previousDescription: topologyType: "Unknown" servers: - address: "a:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" newDescription: topologyType: "Single" servers: - address: "a:27017" arbiters: [] hosts: [] passives: [] type: "Standalone" mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam_monitoring/load_balancer.yml000066400000000000000000000035021505113246500276710ustar00rootroot00000000000000description: "Monitoring a load balancer" uri: "mongodb://a:27017/?loadBalanced=true" phases: - outcome: events: - topology_opening_event: topologyId: "42" - topology_description_changed_event: topologyId: "42" previousDescription: topologyType: "Unknown" servers: [] newDescription: topologyType: "LoadBalanced" servers: - address: "a:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" - server_opening_event: topologyId: "42" address: "a:27017" - server_description_changed_event: topologyId: "42" address: "a:27017" previousDescription: address: "a:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" newDescription: address: "a:27017" arbiters: [] hosts: [] passives: [] type: "LoadBalancer" - topology_description_changed_event: topologyId: "42" previousDescription: topologyType: "LoadBalanced" servers: - address: "a:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" newDescription: topologyType: "LoadBalanced" servers: - address: "a:27017" arbiters: [] hosts: [] passives: [] type: "LoadBalancer" mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam_monitoring/replica_set_other_chain.yml000066400000000000000000000142621505113246500317650ustar00rootroot00000000000000description: "Multiple RSOther responses with different servers in each" uri: "mongodb://a,x/" phases: # Phase 1 - responses: [] outcome: 
events: - topology_opening_event: topologyId: "42" - topology_description_changed_event: topologyId: "42" previousDescription: topologyType: "Unknown" servers: [] newDescription: topologyType: "Unknown" servers: - address: "a:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" - address: "x:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" - server_opening_event: topologyId: "42" address: "a:27017" - server_opening_event: topologyId: "42" address: "x:27017" # Phase 2 - first response from hidden member that thinks b is the primary - responses: - - "a:27017" - { ok: 1, ismaster: false, hidden: true, setName: "rs", setVersion: 1.0, primary: "b:27017", hosts: [ "b:27017" ], minWireVersion: 0, maxWireVersion: 4 } outcome: events: - server_description_changed_event: topologyId: "42" address: "a:27017" previousDescription: address: "a:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" newDescription: address: "a:27017" arbiters: [] hosts: [ "b:27017" ] passives: [] primary: "b:27017" setName: "rs" type: "RSOther" - topology_description_changed_event: topologyId: "42" previousDescription: topologyType: "Unknown" servers: - address: "a:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" - address: "x:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" newDescription: topologyType: "ReplicaSetNoPrimary" setName: "rs" servers: - address: "a:27017" arbiters: [] hosts: [ "b:27017" ] passives: [] primary: "b:27017" setName: "rs" type: "RSOther" - address: "b:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" - address: "x:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" - server_opening_event: topologyId: "42" address: "b:27017" # Phase 3 - second response from b which is also hidden, thinks c is primary - responses: - - "b:27017" - { ok: 1, ismaster: false, hidden: true, setName: "rs", setVersion: 1.0, primary: "c:27017", hosts: [ "c:27017" ], minWireVersion: 0, maxWireVersion: 4 } outcome: events: - server_description_changed_event: topologyId: "42" address: "b:27017" previousDescription: address: "b:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" newDescription: address: "b:27017" arbiters: [] hosts: [ "c:27017" ] passives: [] primary: "c:27017" setName: "rs" type: "RSOther" - topology_description_changed_event: topologyId: "42" previousDescription: topologyType: "ReplicaSetNoPrimary" setName: "rs" servers: - address: "a:27017" arbiters: [] hosts: [ "b:27017" ] passives: [] primary: "b:27017" setName: "rs" type: "RSOther" - address: "b:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" - address: "x:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" newDescription: topologyType: "ReplicaSetNoPrimary" setName: "rs" servers: - address: "a:27017" arbiters: [] hosts: [ "b:27017" ] passives: [] primary: "b:27017" setName: "rs" type: "RSOther" - address: "b:27017" arbiters: [] hosts: [ "c:27017" ] passives: [] primary: "c:27017" setName: "rs" type: "RSOther" - address: "c:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" - address: "x:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" - server_opening_event: topologyId: "42" address: "c:27017" mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam_monitoring/replica_set_other_change.yml000066400000000000000000000145451505113246500321340ustar00rootroot00000000000000description: "Multiple RSOther responses from the same server with different hosts" uri: "mongodb://a,x/" phases: # Phase 1 - responses: [] outcome: events: - topology_opening_event: topologyId: "42" 
- topology_description_changed_event: topologyId: "42" previousDescription: topologyType: "Unknown" servers: [] newDescription: topologyType: "Unknown" servers: - address: "a:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" - address: "x:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" - server_opening_event: topologyId: "42" address: "a:27017" - server_opening_event: topologyId: "42" address: "x:27017" # Phase 2 - first response from hidden member that thinks b is the primary - responses: - - "a:27017" - { ok: 1, ismaster: false, hidden: true, setName: "rs", setVersion: 1.0, primary: "b:27017", hosts: [ "b:27017" ], minWireVersion: 0, maxWireVersion: 4 } outcome: events: - server_description_changed_event: topologyId: "42" address: "a:27017" previousDescription: address: "a:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" newDescription: address: "a:27017" arbiters: [] hosts: [ "b:27017" ] passives: [] primary: "b:27017" setName: "rs" type: "RSOther" - topology_description_changed_event: topologyId: "42" previousDescription: topologyType: "Unknown" servers: - address: "a:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" - address: "x:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" newDescription: topologyType: "ReplicaSetNoPrimary" setName: "rs" servers: - address: "a:27017" arbiters: [] hosts: [ "b:27017" ] passives: [] primary: "b:27017" setName: "rs" type: "RSOther" - address: "b:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" - address: "x:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" - server_opening_event: topologyId: "42" address: "b:27017" # Phase 3 - second response from a, now it thinks c is primary. # setVersion on a should have changed, but since we don't look at setVersion # on non-primaries it's not included in this test. # Servers prior to 2.6.10 also do not report setVersion.
- responses: - - "a:27017" - { ok: 1, ismaster: false, hidden: true, setName: "rs", setVersion: 1.0, primary: "c:27017", hosts: [ "c:27017" ], minWireVersion: 0, maxWireVersion: 4 } outcome: events: - server_description_changed_event: topologyId: "42" address: "a:27017" previousDescription: address: "a:27017" arbiters: [] hosts: [ "b:27017" ] passives: [] primary: "b:27017" setName: "rs" type: "RSOther" newDescription: address: "a:27017" arbiters: [] hosts: [ "c:27017" ] passives: [] primary: "c:27017" setName: "rs" type: "RSOther" - topology_description_changed_event: topologyId: "42" previousDescription: topologyType: "ReplicaSetNoPrimary" setName: "rs" servers: - address: "a:27017" arbiters: [] hosts: [ "b:27017" ] passives: [] primary: "b:27017" setName: "rs" type: "RSOther" - address: "b:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" - address: "x:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" newDescription: topologyType: "ReplicaSetNoPrimary" setName: "rs" servers: - address: "a:27017" arbiters: [] hosts: [ "c:27017" ] passives: [] primary: "c:27017" setName: "rs" type: "RSOther" - address: "b:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" - address: "c:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" - address: "x:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" - server_opening_event: topologyId: "42" address: "c:27017" mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam_monitoring/replica_set_primary_address_change.yml000066400000000000000000000153131505113246500341750ustar00rootroot00000000000000description: "Monitoring a topology that is a replica set with primary address change" uri: "mongodb://a,b" phases: # phase 1 - initial events - responses: [] outcome: events: - topology_opening_event: topologyId: "42" - topology_description_changed_event: topologyId: "42" previousDescription: topologyType: "Unknown" servers: [] newDescription: topologyType: "Unknown" servers: - address: "a:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" - address: "b:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" - server_opening_event: topologyId: "42" address: "a:27017" - server_opening_event: topologyId: "42" address: "b:27017" # phase 2 - discover topology - responses: - - "a:27017" - ok: 1 ismaster: true setName: "rs" setVersion: 1 primary: "a:27017" hosts: - "a:27017" minWireVersion: 0 maxWireVersion: 4 outcome: events: - server_description_changed_event: topologyId: "42" address: "a:27017" previousDescription: address: "a:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" newDescription: address: "a:27017" arbiters: [] hosts: - "a:27017" passives: [] primary: "a:27017" setName: "rs" type: "RSPrimary" - server_closed_event: topologyId: "42" address: "b:27017" - topology_description_changed_event: topologyId: "42" previousDescription: topologyType: "Unknown" servers: - address: "a:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" - address: "b:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" newDescription: topologyType: "ReplicaSetWithPrimary" setName: "rs" servers: - address: "a:27017" arbiters: [] hosts: - "a:27017" passives: [] primary: "a:27017" setName: "rs" type: "RSPrimary" # phase 3 - primary changes address - responses: - - "a:27017" - ok: 1 ismaster: true setName: "rs" setVersion: 1 primary: "aa:27017" me: "aa:27017" hosts: - "aa:27017" minWireVersion: 0 maxWireVersion: 4 outcome: events: - server_description_changed_event: topologyId: "42" address: "a:27017" previousDescription: 
address: "a:27017" arbiters: [] hosts: - "a:27017" passives: [] primary: "a:27017" setName: "rs" type: "RSPrimary" newDescription: address: "a:27017" arbiters: [] hosts: - "aa:27017" passives: [] primary: "aa:27017" setName: "rs" type: "RSPrimary" - server_closed_event: topologyId: "42" address: "a:27017" - server_opening_event: topologyId: "42" address: "aa:27017" - topology_description_changed_event: topologyId: "42" previousDescription: topologyType: "ReplicaSetWithPrimary" setName: "rs" servers: - address: "a:27017" arbiters: [] hosts: - "a:27017" passives: [] primary: "a:27017" setName: "rs" type: "RSPrimary" newDescription: topologyType: "ReplicaSetNoPrimary" setName: rs servers: - address: "aa:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" # phase 4 - response from primary on new address - responses: - - "aa:27017" - ok: 1 ismaster: true setName: "rs" setVersion: 1 primary: "aa:27017" me: "aa:27017" hosts: - "aa:27017" minWireVersion: 0 maxWireVersion: 4 outcome: events: - server_description_changed_event: topologyId: "42" address: "aa:27017" previousDescription: address: "aa:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" newDescription: address: "aa:27017" arbiters: [] hosts: - "aa:27017" passives: [] primary: "aa:27017" setName: "rs" type: "RSPrimary" - topology_description_changed_event: topologyId: "42" previousDescription: topologyType: "ReplicaSetNoPrimary" setName: rs servers: - address: "aa:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" newDescription: topologyType: "ReplicaSetWithPrimary" setName: "rs" servers: - address: "aa:27017" arbiters: [] hosts: - "aa:27017" passives: [] primary: "aa:27017" setName: "rs" type: "RSPrimary" mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam_monitoring/replica_set_with_me_mismatch.yml000066400000000000000000000057501505113246500330250ustar00rootroot00000000000000description: "Monitoring a topology that is a replica set with a me mismatch in first response" uri: "mongodb://a,b/" phases: # phase 1 - initial events - responses: [] outcome: events: - topology_opening_event: topologyId: "42" - topology_description_changed_event: topologyId: "42" previousDescription: topologyType: "Unknown" servers: [] newDescription: topologyType: "Unknown" servers: - address: "a:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" - address: "b:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" - server_opening_event: topologyId: "42" address: "a:27017" - server_opening_event: topologyId: "42" address: "b:27017" # phase 2 - server is a primary with mismatched me - responses: - - "a:27017" - ok: 1 ismaster: true setName: "rs" setVersion: 1 me: "aa:27017" primary: "a:27017" hosts: - "a:27017" minWireVersion: 0 maxWireVersion: 4 outcome: events: - server_description_changed_event: topologyId: "42" address: "a:27017" previousDescription: address: "a:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" newDescription: address: "a:27017" arbiters: [] hosts: - "a:27017" passives: [] primary: "a:27017" setName: "rs" type: "RSPrimary" - server_closed_event: topologyId: "42" address: "b:27017" - topology_description_changed_event: topologyId: "42" previousDescription: topologyType: "Unknown" servers: - address: "a:27017" arbiters: [] hosts: [] passives: [] - address: "b:27017" arbiters: [] hosts: [] passives: [] newDescription: topologyType: "ReplicaSetWithPrimary" setName: rs servers: - address: "a:27017" arbiters: [] hosts: - "a:27017" passives: [] primary: "a:27017" setName: "rs" type: "RSPrimary" 
mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam_monitoring/replica_set_with_no_primary.yml000066400000000000000000000061101505113246500327050ustar00rootroot00000000000000description: "Monitoring a topology that is a replica set with no primary connected" uri: "mongodb://a,b" phases: - responses: - - "a:27017" - ok: 1 helloOk: true isWritablePrimary: false secondary: true setName: "rs" setVersion: 1 primary: "b:27017" hosts: - "a:27017" - "b:27017" minWireVersion: 0 maxWireVersion: 6 outcome: events: - topology_opening_event: topologyId: "42" - topology_description_changed_event: topologyId: "42" previousDescription: topologyType: "Unknown" servers: [] newDescription: topologyType: "Unknown" servers: - address: "a:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" - address: "b:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" - server_opening_event: topologyId: "42" address: "a:27017" - server_opening_event: topologyId: "42" address: "b:27017" - server_description_changed_event: topologyId: "42" address: "a:27017" previousDescription: address: "a:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" newDescription: address: "a:27017" arbiters: [] hosts: - "a:27017" - "b:27017" passives: [] primary: "b:27017" setName: "rs" type: "RSSecondary" - topology_description_changed_event: topologyId: "42" previousDescription: topologyType: "Unknown" servers: - address: "a:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" - address: "b:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" newDescription: topologyType: "ReplicaSetNoPrimary" setName: "rs" servers: - address: "a:27017" arbiters: [] hosts: - "a:27017" - "b:27017" passives: [] primary: "b:27017" setName: "rs" type: "RSSecondary" - address: "b:27017" arbiters: [] hosts: [] passives: [] type: "PossiblePrimary" mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam_monitoring/replica_set_with_primary.yml000066400000000000000000000060421505113246500322150ustar00rootroot00000000000000description: "Monitoring a topology that is a replica set with a primary connected" uri: "mongodb://a,b" phases: - responses: - - "a:27017" - ok: 1 helloOk: true isWritablePrimary: true setName: "rs" setVersion: 1 primary: "a:27017" hosts: - "a:27017" - "b:27017" minWireVersion: 0 maxWireVersion: 6 outcome: events: - topology_opening_event: topologyId: "42" - topology_description_changed_event: topologyId: "42" previousDescription: topologyType: "Unknown" servers: [] newDescription: topologyType: "Unknown" servers: - address: "a:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" - address: "b:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" - server_opening_event: topologyId: "42" address: "a:27017" - server_opening_event: topologyId: "42" address: "b:27017" - server_description_changed_event: topologyId: "42" address: "a:27017" previousDescription: address: "a:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" newDescription: address: "a:27017" arbiters: [] hosts: - "a:27017" - "b:27017" passives: [] primary: "a:27017" setName: "rs" type: "RSPrimary" - topology_description_changed_event: topologyId: "42" previousDescription: topologyType: "Unknown" servers: - address: "a:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" - address: "b:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" newDescription: topologyType: "ReplicaSetWithPrimary" setName: "rs" servers: - address: "a:27017" arbiters: [] hosts: - "a:27017" - "b:27017" passives: [] primary: "a:27017" setName: "rs" type: "RSPrimary" 
- address: "b:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" replica_set_with_primary_and_secondary.yml000066400000000000000000000127551505113246500350370ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam_monitoringdescription: "Monitoring a topology that is a replica set with a primary and a secondary both responding" uri: "mongodb://a,b" phases: # phase 1 - primary responds - responses: - - "a:27017" - ok: 1 ismaster: true setName: "rs" setVersion: 1 primary: "a:27017" hosts: - "a:27017" - "b:27017" minWireVersion: 0 maxWireVersion: 4 outcome: events: - topology_opening_event: topologyId: "42" - topology_description_changed_event: topologyId: "42" previousDescription: topologyType: "Unknown" servers: [] newDescription: topologyType: "Unknown" servers: - address: "a:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" - address: "b:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" - server_opening_event: topologyId: "42" address: "a:27017" - server_opening_event: topologyId: "42" address: "b:27017" - server_description_changed_event: topologyId: "42" address: "a:27017" previousDescription: address: "a:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" newDescription: address: "a:27017" arbiters: [] hosts: - "a:27017" - "b:27017" passives: [] primary: "a:27017" setName: "rs" type: "RSPrimary" - topology_description_changed_event: topologyId: "42" previousDescription: topologyType: "Unknown" servers: - address: "a:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" - address: "b:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" newDescription: topologyType: "ReplicaSetWithPrimary" setName: "rs" servers: - address: "a:27017" arbiters: [] hosts: - "a:27017" - "b:27017" passives: [] primary: "a:27017" setName: "rs" type: "RSPrimary" - address: "b:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" # phase 2 - secondary responds - responses: - - "b:27017" - ok: 1 ismaster: false secondary: true setName: "rs" setVersion: 1 primary: "a:27017" hosts: - "a:27017" - "b:27017" minWireVersion: 0 maxWireVersion: 4 outcome: events: - server_description_changed_event: topologyId: "42" address: "b:27017" previousDescription: address: "b:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" newDescription: address: "b:27017" arbiters: [] hosts: - "a:27017" - "b:27017" passives: [] primary: "a:27017" setName: "rs" type: "RSSecondary" - topology_description_changed_event: topologyId: "42" previousDescription: topologyType: "ReplicaSetWithPrimary" setName: "rs" servers: - address: "a:27017" arbiters: [] hosts: - "a:27017" - "b:27017" passives: [] primary: "a:27017" setName: "rs" type: "RSPrimary" - address: "b:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" newDescription: topologyType: "ReplicaSetWithPrimary" setName: "rs" servers: - address: "a:27017" arbiters: [] hosts: - "a:27017" - "b:27017" passives: [] primary: "a:27017" setName: "rs" type: "RSPrimary" - address: "b:27017" arbiters: [] hosts: - "a:27017" - "b:27017" passives: [] primary: "a:27017" setName: "rs" type: "RSSecondary" mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam_monitoring/replica_set_with_primary_removal.yml000066400000000000000000000112461505113246500337440ustar00rootroot00000000000000description: "Monitoring a topology that is a replica set with primary removal" uri: "mongodb://a,b" phases: - responses: [] outcome: events: - topology_opening_event: topologyId: "42" - topology_description_changed_event: topologyId: "42" 
previousDescription: topologyType: "Unknown" servers: [] newDescription: topologyType: "Unknown" servers: - address: "a:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" - address: "b:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" - server_opening_event: topologyId: "42" address: "a:27017" - server_opening_event: topologyId: "42" address: "b:27017" # phase 1 - discover topology - responses: - - "a:27017" - ok: 1 ismaster: true setName: "rs" setVersion: 1 primary: "a:27017" hosts: - "a:27017" minWireVersion: 0 maxWireVersion: 4 outcome: events: - server_description_changed_event: topologyId: "42" address: "a:27017" previousDescription: address: "a:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" newDescription: address: "a:27017" arbiters: [] hosts: - "a:27017" passives: [] primary: "a:27017" setName: "rs" type: "RSPrimary" - server_closed_event: topologyId: "42" address: "b:27017" - topology_description_changed_event: topologyId: "42" previousDescription: topologyType: "Unknown" servers: - address: "a:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" - address: "b:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" newDescription: topologyType: "ReplicaSetWithPrimary" setName: "rs" servers: - address: "a:27017" arbiters: [] hosts: - "a:27017" passives: [] primary: "a:27017" setName: "rs" type: "RSPrimary" # phase 2 - primary changes rs name and is removed - responses: - - "a:27017" - ok: 1 ismaster: true setName: "wrong" setVersion: 1 primary: "c:27017" hosts: - "c:27017" minWireVersion: 0 maxWireVersion: 4 outcome: events: - server_description_changed_event: topologyId: "42" address: "a:27017" previousDescription: address: "a:27017" arbiters: [] hosts: - "a:27017" passives: [] primary: "a:27017" setName: "rs" type: "RSPrimary" newDescription: address: "a:27017" arbiters: [] hosts: - "c:27017" passives: [] primary: "c:27017" setName: "wrong" type: "RSPrimary" - server_closed_event: topologyId: "42" address: "a:27017" - topology_description_changed_event: topologyId: "42" previousDescription: topologyType: "ReplicaSetWithPrimary" setName: "rs" servers: - address: "a:27017" arbiters: [] hosts: - "a:27017" passives: [] primary: "a:27017" setName: "rs" type: "RSPrimary" newDescription: topologyType: "ReplicaSetNoPrimary" setName: "rs" servers: [] mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam_monitoring/replica_set_with_removal.yml000066400000000000000000000057561505113246500322120ustar00rootroot00000000000000description: "Monitoring a replica set with non member" uri: "mongodb://a,b/" phases: - responses: [] outcome: events: - topology_opening_event: topologyId: "42" - topology_description_changed_event: topologyId: "42" previousDescription: topologyType: "Unknown" servers: [] newDescription: topologyType: "Unknown" servers: - address: "a:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" - address: "b:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" - server_opening_event: topologyId: "42" address: "a:27017" - server_opening_event: topologyId: "42" address: "b:27017" - responses: - - "a:27017" - { ok: 1, helloOk: true, isWritablePrimary: true, setName: "rs", setVersion: 1.0, primary: "a:27017", hosts: [ "a:27017" ], minWireVersion: 0, maxWireVersion: 6 } - - "b:27017" - { ok: 1, helloOk: true, isWritablePrimary: true } outcome: events: - server_description_changed_event: topologyId: "42" address: "a:27017" previousDescription: address: "a:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" newDescription: address: 
"a:27017" arbiters: [] hosts: [ "a:27017" ] passives: [] primary: "a:27017" setName: "rs" type: "RSPrimary" - server_closed_event: topologyId: "42" address: "b:27017" - topology_description_changed_event: topologyId: "42" previousDescription: topologyType: "Unknown" servers: - address: "a:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" - address: "b:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" newDescription: topologyType: "ReplicaSetWithPrimary" setName: "rs" servers: - address: "a:27017" arbiters: [] hosts: [ "a:27017" ] passives: [] primary: "a:27017" setName: "rs" type: "RSPrimary" replica_set_with_second_seed_removal.yml000066400000000000000000000056451505113246500344630ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam_monitoringdescription: "Monitoring a topology that is a replica set with second seed removal" uri: "mongodb://a,b" phases: # phase 1 - discover topology - responses: - - "a:27017" - ok: 1 ismaster: true setName: "rs" setVersion: 1 primary: "a:27017" hosts: - "a:27017" minWireVersion: 0 maxWireVersion: 4 outcome: events: - topology_opening_event: topologyId: "42" - topology_description_changed_event: topologyId: "42" previousDescription: topologyType: "Unknown" servers: [] newDescription: topologyType: "Unknown" servers: - address: "a:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" - address: "b:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" - server_opening_event: topologyId: "42" address: "a:27017" - server_opening_event: topologyId: "42" address: "b:27017" - server_description_changed_event: topologyId: "42" address: "a:27017" previousDescription: address: "a:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" newDescription: address: "a:27017" arbiters: [] hosts: - "a:27017" passives: [] primary: "a:27017" setName: "rs" type: "RSPrimary" - server_closed_event: topologyId: "42" address: "b:27017" - topology_description_changed_event: topologyId: "42" previousDescription: topologyType: "Unknown" servers: - address: "a:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" - address: "b:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" newDescription: topologyType: "ReplicaSetWithPrimary" setName: "rs" servers: - address: "a:27017" arbiters: [] hosts: - "a:27017" passives: [] primary: "a:27017" setName: "rs" type: "RSPrimary" mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam_monitoring/required_replica_set.yml000066400000000000000000000061041505113246500313160ustar00rootroot00000000000000description: "Monitoring a topology that is required to be a replica set" uri: "mongodb://a,b/?replicaSet=rs" phases: - responses: - - "a:27017" - { ok: 1, helloOk: true, isWritablePrimary: true, setName: "rs", setVersion: 1.0, primary: "a:27017", hosts: [ "a:27017", "b:27017" ], minWireVersion: 0, maxWireVersion: 6 } outcome: events: - topology_opening_event: topologyId: "42" - topology_description_changed_event: topologyId: "42" previousDescription: topologyType: "Unknown" servers: [] newDescription: topologyType: "ReplicaSetNoPrimary" setName: "rs" servers: - address: "a:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" - address: "b:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" - server_opening_event: topologyId: "42" address: "a:27017" - server_opening_event: topologyId: "42" address: "b:27017" - server_description_changed_event: topologyId: "42" address: "a:27017" previousDescription: address: "a:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" 
newDescription: address: "a:27017" arbiters: [] hosts: [ "a:27017", "b:27017" ] passives: [] primary: "a:27017" setName: "rs" type: "RSPrimary" - topology_description_changed_event: topologyId: "42" previousDescription: topologyType: "ReplicaSetNoPrimary" setName: "rs" servers: - address: "a:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" - address: "b:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" newDescription: topologyType: "ReplicaSetWithPrimary" setName: "rs" servers: - address: "a:27017" arbiters: [] hosts: [ "a:27017", "b:27017" ] passives: [] primary: "a:27017" setName: "rs" type: "RSPrimary" - address: "b:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam_monitoring/standalone.yml000066400000000000000000000037021505113246500272550ustar00rootroot00000000000000description: "Monitoring a direct connection" uri: "mongodb://a:27017/?directConnection=true" phases: - responses: - - "a:27017" - { ok: 1, helloOk: true, isWritablePrimary: true, minWireVersion: 0, maxWireVersion: 6 } outcome: events: - topology_opening_event: topologyId: "42" - topology_description_changed_event: topologyId: "42" previousDescription: topologyType: "Unknown" servers: [] newDescription: topologyType: "Single" servers: - address: "a:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" - server_opening_event: topologyId: "42" address: "a:27017" - server_description_changed_event: topologyId: "42" address: "a:27017" previousDescription: address: "a:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" newDescription: address: "a:27017" arbiters: [] hosts: [] passives: [] type: "Standalone" - topology_description_changed_event: topologyId: "42" previousDescription: topologyType: "Single" servers: - address: "a:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" newDescription: topologyType: "Single" servers: - address: "a:27017" arbiters: [] hosts: [] passives: [] type: "Standalone" mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam_monitoring/standalone_repeated.yml000066400000000000000000000043621505113246500311310ustar00rootroot00000000000000description: "Monitoring a direct connection with repeated ismaster response" uri: "mongodb://a:27017/?directConnection=true" phases: # phase 1 - responses: [] outcome: events: - topology_opening_event: topologyId: "42" - topology_description_changed_event: topologyId: "42" previousDescription: topologyType: "Unknown" servers: [] newDescription: topologyType: "Single" servers: - address: "a:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" - server_opening_event: topologyId: "42" address: "a:27017" # phase 2 - responses: - - "a:27017" - { ok: 1, ismaster: true, minWireVersion: 0, maxWireVersion: 4 } outcome: events: - server_description_changed_event: topologyId: "42" address: "a:27017" previousDescription: address: "a:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" newDescription: address: "a:27017" arbiters: [] hosts: [] passives: [] type: "Standalone" - topology_description_changed_event: topologyId: "42" previousDescription: topologyType: "Single" servers: - address: "a:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" newDescription: topologyType: "Single" servers: - address: "a:27017" arbiters: [] hosts: [] passives: [] type: "Standalone" # phase 3 - same response as in phase 2 - responses: - - "a:27017" - { ok: 1, ismaster: true, minWireVersion: 0, maxWireVersion: 4 } outcome: # no events published events: [] 
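The sdam_unified fixtures further below drive server behavior through the failCommand fail point (the failPoint operations issued via a setupClient). For reference, a minimal sketch of arming the same fail point directly with the Ruby driver, mirroring the saslContinue failure in the auth-error fixture; the host is illustrative and the server must be started with enableTestCommands=1:

    require 'mongo'

    client = Mongo::Client.new('mongodb://localhost:27017')
    admin = client.use('admin').database

    # Fail the next saslContinue from connections whose appName matches;
    # errorCode 18 is AuthenticationFailed.
    admin.command(
      configureFailPoint: 'failCommand',
      mode: { times: 1 },
      data: {
        failCommands: ['saslContinue'],
        errorCode: 18,
        appName: 'authErrorTest'
      }
    )

    # Disarm the fail point when finished.
    admin.command(configureFailPoint: 'failCommand', mode: 'off')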
standalone_suppress_equal_description_changes.yml000066400000000000000000000041671505113246500364320ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam_monitoringdescription: "Monitoring a direct connection - suppress update events for equal server descriptions" uri: "mongodb://a:27017/?directConnection=true" phases: - responses: - - "a:27017" - { ok: 1, helloOk: true, isWritablePrimary: true, minWireVersion: 0, maxWireVersion: 6 } - - "a:27017" - { ok: 1, helloOk: true, isWritablePrimary: true, minWireVersion: 0, maxWireVersion: 6 } outcome: events: - topology_opening_event: topologyId: "42" - topology_description_changed_event: topologyId: "42" previousDescription: topologyType: "Unknown" servers: [] newDescription: topologyType: "Single" servers: - address: "a:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" - server_opening_event: topologyId: "42" address: "a:27017" - server_description_changed_event: topologyId: "42" address: "a:27017" previousDescription: address: "a:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" newDescription: address: "a:27017" arbiters: [] hosts: [] passives: [] type: "Standalone" - topology_description_changed_event: topologyId: "42" previousDescription: topologyType: "Single" servers: - address: "a:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" newDescription: topologyType: "Single" servers: - address: "a:27017" arbiters: [] hosts: [] passives: [] type: "Standalone" mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam_monitoring/standalone_to_rs_with_me_mismatch.yml000066400000000000000000000046601505113246500340700ustar00rootroot00000000000000description: "Direct connection to a replica set node with a me mismatch" uri: "mongodb://a/?directConnection=true" phases: - responses: [] outcome: events: - topology_opening_event: topologyId: "42" - topology_description_changed_event: topologyId: "42" previousDescription: topologyType: "Unknown" servers: [] newDescription: topologyType: "Single" servers: - address: "a:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" - server_opening_event: topologyId: "42" address: "a:27017" # phase 1 - server is a primary with mismatched me - responses: - - "a:27017" - ok: 1 ismaster: true setName: "rs" setVersion: 1 primary: "aa:27017" me: "aa:27017" hosts: - "aa:27017" minWireVersion: 0 maxWireVersion: 4 outcome: events: - server_description_changed_event: topologyId: "42" address: "a:27017" previousDescription: address: "a:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" newDescription: address: "a:27017" arbiters: [] hosts: - "aa:27017" passives: [] primary: "aa:27017" setName: "rs" type: "RSPrimary" - topology_description_changed_event: topologyId: "42" previousDescription: topologyType: "Single" servers: - address: "a:27017" arbiters: [] hosts: [] passives: [] type: "Unknown" newDescription: topologyType: "Single" servers: - address: "a:27017" arbiters: [] hosts: - "aa:27017" passives: [] primary: "aa:27017" setName: "rs" type: "RSPrimary" mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam_unified/000077500000000000000000000000001505113246500236365ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam_unified/auth-error.yml000066400000000000000000000067721505113246500264650ustar00rootroot00000000000000description: auth-error schemaVersion: "1.10" runOnRequirements: # failCommand appName requirements - minServerVersion: "4.4" auth: true serverless: forbid topologies: [ single, replicaset, sharded ] createEntities: - client: 
id: &setupClient setupClient useMultipleMongoses: false initialData: &initialData - collectionName: &collectionName auth-error databaseName: &databaseName sdam-tests documents: - _id: 1 - _id: 2 tests: - description: Reset server and pool after AuthenticationFailure error operations: - name: failPoint object: testRunner arguments: client: *setupClient failPoint: configureFailPoint: failCommand mode: times: 1 data: failCommands: - saslContinue appName: authErrorTest errorCode: 18 - name: createEntities object: testRunner arguments: entities: - client: id: &client client useMultipleMongoses: false observeEvents: - commandStartedEvent - serverDescriptionChangedEvent - poolClearedEvent uriOptions: retryWrites: false appname: authErrorTest - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - name: insertMany object: *collection arguments: documents: - _id: 3 - _id: 4 expectError: isError: true - name: waitForEvent object: testRunner arguments: client: *client event: serverDescriptionChangedEvent: newDescription: type: Unknown count: 1 - name: waitForEvent object: testRunner arguments: client: *client event: poolClearedEvent: {} count: 1 # Perform another operation to ensure the node is rediscovered. - name: insertMany object: *collection arguments: documents: - _id: 5 - _id: 6 # Assert the server was marked Unknown and pool was cleared exactly once. - name: assertEventCount object: testRunner arguments: client: *client event: serverDescriptionChangedEvent: newDescription: type: Unknown count: 1 - name: assertEventCount object: testRunner arguments: client: *client event: poolClearedEvent: {} count: 1 expectEvents: # Note: The first insert command is never attempted because connection # checkout fails. 
- client: *client eventType: command events: - commandStartedEvent: command: insert: auth-error documents: - _id: 5 - _id: 6 commandName: insert databaseName: *databaseName outcome: - collectionName: *collectionName databaseName: *databaseName documents: - _id: 1 - _id: 2 - _id: 5 - _id: 6 mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam_unified/auth-misc-command-error.yml000066400000000000000000000070421505113246500310210ustar00rootroot00000000000000--- description: auth-misc-command-error schemaVersion: "1.10" runOnRequirements: # failCommand appName requirements - minServerVersion: "4.4" auth: true serverless: forbid topologies: [ single, replicaset, sharded ] createEntities: - client: id: &setupClient setupClient useMultipleMongoses: false initialData: &initialData - collectionName: &collectionName auth-misc-error databaseName: &databaseName sdam-tests documents: - _id: 1 - _id: 2 tests: - description: Reset server and pool after misc command error operations: - name: failPoint object: testRunner arguments: client: *setupClient failPoint: configureFailPoint: failCommand mode: times: 1 data: failCommands: - saslContinue appName: authMiscErrorTest errorCode: 1 # InternalError - name: createEntities object: testRunner arguments: entities: - client: id: &client client useMultipleMongoses: false observeEvents: - commandStartedEvent - serverDescriptionChangedEvent - poolClearedEvent uriOptions: retryWrites: false appname: authMiscErrorTest - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - name: insertMany object: *collection arguments: documents: - _id: 3 - _id: 4 expectError: isError: true - name: waitForEvent object: testRunner arguments: client: *client event: serverDescriptionChangedEvent: newDescription: type: Unknown count: 1 - name: waitForEvent object: testRunner arguments: client: *client event: poolClearedEvent: {} count: 1 # Perform another operation to ensure the node is rediscovered. - name: insertMany object: *collection arguments: documents: - _id: 5 - _id: 6 # Assert the server was marked Unknown and pool was cleared exactly once. - name: assertEventCount object: testRunner arguments: client: *client event: serverDescriptionChangedEvent: newDescription: type: Unknown count: 1 - name: assertEventCount object: testRunner arguments: client: *client event: poolClearedEvent: {} count: 1 expectEvents: # Note: The first insert command is never attempted because connection # checkout fails. 
- client: *client eventType: command events: - commandStartedEvent: command: insert: auth-misc-error documents: - _id: 5 - _id: 6 commandName: insert databaseName: *databaseName outcome: - collectionName: *collectionName databaseName: *databaseName documents: - _id: 1 - _id: 2 - _id: 5 - _id: 6 mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam_unified/auth-network-error.yml000066400000000000000000000070631505113246500301460ustar00rootroot00000000000000--- description: auth-network-error schemaVersion: "1.10" runOnRequirements: # failCommand appName requirements - minServerVersion: "4.4" auth: true serverless: forbid topologies: [ single, replicaset, sharded ] createEntities: - client: id: &setupClient setupClient useMultipleMongoses: false initialData: &initialData - collectionName: &collectionName auth-network-error databaseName: &databaseName sdam-tests documents: - _id: 1 - _id: 2 tests: - description: Reset server and pool after network error during authentication operations: - name: failPoint object: testRunner arguments: client: *setupClient failPoint: configureFailPoint: failCommand mode: times: 1 data: failCommands: - saslContinue closeConnection: true appName: authNetworkErrorTest - name: createEntities object: testRunner arguments: entities: - client: id: &client client useMultipleMongoses: false observeEvents: - commandStartedEvent - serverDescriptionChangedEvent - poolClearedEvent uriOptions: retryWrites: false appname: authNetworkErrorTest - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - name: insertMany object: *collection arguments: documents: - _id: 3 - _id: 4 expectError: isError: true - name: waitForEvent object: testRunner arguments: client: *client event: serverDescriptionChangedEvent: newDescription: type: Unknown count: 1 - name: waitForEvent object: testRunner arguments: client: *client event: poolClearedEvent: {} count: 1 # Perform another operation to ensure the node is rediscovered. - name: insertMany object: *collection arguments: documents: - _id: 5 - _id: 6 # Assert the server was marked Unknown and pool was cleared exactly once. - name: assertEventCount object: testRunner arguments: client: *client event: serverDescriptionChangedEvent: newDescription: type: Unknown count: 1 - name: assertEventCount object: testRunner arguments: client: *client event: poolClearedEvent: {} count: 1 expectEvents: # Note: The first insert command is never attempted because connection # checkout fails. 
- client: *client eventType: command events: - commandStartedEvent: command: insert: auth-network-error documents: - _id: 5 - _id: 6 commandName: insert databaseName: *databaseName outcome: - collectionName: *collectionName databaseName: *databaseName documents: - _id: 1 - _id: 2 - _id: 5 - _id: 6 mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam_unified/auth-network-timeout-error.yml000066400000000000000000000075441505113246500316360ustar00rootroot00000000000000--- description: auth-network-timeout-error schemaVersion: "1.10" runOnRequirements: # failCommand appName requirements - minServerVersion: "4.4" auth: true serverless: forbid topologies: [ single, replicaset, sharded ] createEntities: - client: id: &setupClient setupClient useMultipleMongoses: false initialData: &initialData - collectionName: &collectionName auth-network-timeout-error databaseName: &databaseName sdam-tests documents: - _id: 1 - _id: 2 tests: - description: Reset server and pool after network timeout error during authentication operations: - name: failPoint object: testRunner arguments: client: *setupClient failPoint: configureFailPoint: failCommand mode: times: 1 data: failCommands: - saslContinue blockConnection: true blockTimeMS: 500 appName: authNetworkTimeoutErrorTest - name: createEntities object: testRunner arguments: entities: - client: id: &client client useMultipleMongoses: false observeEvents: - commandStartedEvent - serverDescriptionChangedEvent - poolClearedEvent uriOptions: retryWrites: false appname: authNetworkTimeoutErrorTest # Set a short connect/socket timeout to ensure the fail point causes the # connection establishment to timeout. connectTimeoutMS: 250 socketTimeoutMS: 250 - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - name: insertMany object: *collection arguments: documents: - _id: 3 - _id: 4 expectError: isError: true - name: waitForEvent object: testRunner arguments: client: *client event: serverDescriptionChangedEvent: newDescription: type: Unknown count: 1 - name: waitForEvent object: testRunner arguments: client: *client event: poolClearedEvent: {} count: 1 # Perform another operation to ensure the node is rediscovered. - name: insertMany object: *collection arguments: documents: - _id: 5 - _id: 6 # Assert the server was marked Unknown and pool was cleared exactly once. - name: assertEventCount object: testRunner arguments: client: *client event: serverDescriptionChangedEvent: newDescription: type: Unknown count: 1 - name: assertEventCount object: testRunner arguments: client: *client event: poolClearedEvent: {} count: 1 expectEvents: # Note: The first insert command is never attempted because connection # checkout fails. 
- client: *client eventType: command events: - commandStartedEvent: command: insert: auth-network-timeout-error documents: - _id: 5 - _id: 6 commandName: insert databaseName: *databaseName outcome: - collectionName: *collectionName databaseName: *databaseName documents: - _id: 1 - _id: 2 - _id: 5 - _id: 6 mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam_unified/auth-shutdown-error.yml000066400000000000000000000070621505113246500303270ustar00rootroot00000000000000--- description: auth-shutdown-error schemaVersion: "1.10" runOnRequirements: # failCommand appName requirements - minServerVersion: "4.4" auth: true serverless: forbid topologies: [ single, replicaset, sharded ] createEntities: - client: id: &setupClient setupClient useMultipleMongoses: false initialData: &initialData - collectionName: &collectionName auth-shutdown-error databaseName: &databaseName sdam-tests documents: - _id: 1 - _id: 2 tests: - description: Reset server and pool after shutdown error during authentication operations: - name: failPoint object: testRunner arguments: client: *setupClient failPoint: configureFailPoint: failCommand mode: times: 1 data: failCommands: - saslContinue appName: authShutdownErrorTest errorCode: 91 - name: createEntities object: testRunner arguments: entities: - client: id: &client client useMultipleMongoses: false observeEvents: - commandStartedEvent - serverDescriptionChangedEvent - poolClearedEvent uriOptions: retryWrites: false appname: authShutdownErrorTest - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - name: insertMany object: *collection arguments: documents: - _id: 3 - _id: 4 expectError: isError: true - name: waitForEvent object: testRunner arguments: client: *client event: serverDescriptionChangedEvent: newDescription: type: Unknown count: 1 - name: waitForEvent object: testRunner arguments: client: *client event: poolClearedEvent: {} count: 1 # Perform another operation to ensure the node is rediscovered. - name: insertMany object: *collection arguments: documents: - _id: 5 - _id: 6 # Assert the server was marked Unknown and pool was cleared exactly once. - name: assertEventCount object: testRunner arguments: client: *client event: serverDescriptionChangedEvent: newDescription: type: Unknown count: 1 - name: assertEventCount object: testRunner arguments: client: *client event: poolClearedEvent: {} count: 1 expectEvents: # Note: The first insert command is never attempted because connection # checkout fails. - client: *client eventType: command events: - commandStartedEvent: command: insert: auth-shutdown-error documents: - _id: 5 - _id: 6 commandName: insert databaseName: *databaseName outcome: - collectionName: *collectionName databaseName: *databaseName documents: - _id: 1 - _id: 2 - _id: 5 - _id: 6 mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam_unified/cancel-server-check.yml000066400000000000000000000103751505113246500301730ustar00rootroot00000000000000--- description: cancel-server-check schemaVersion: "1.10" runOnRequirements: # General failCommand requirements (this file does not use appName # with failCommand). 
- minServerVersion: "4.0" topologies: - replicaset serverless: forbid - minServerVersion: "4.2" topologies: - sharded serverless: forbid createEntities: - client: id: &setupClient setupClient useMultipleMongoses: false initialData: &initialData - collectionName: &collectionName cancel-server-check databaseName: &databaseName sdam-tests documents: [] tests: - description: Cancel server check operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client useMultipleMongoses: false observeEvents: - serverDescriptionChangedEvent - poolClearedEvent uriOptions: retryWrites: true heartbeatFrequencyMS: 10000 # Server selection timeout MUST be less than heartbeatFrequencyMS for # this test. This setting ensures that the retried insert will fail # after 5 seconds if the driver does not properly cancel the in progress # check. serverSelectionTimeoutMS: 5000 appname: cancelServerCheckTest - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName # Perform an operation to ensure the node is discovered. - name: insertOne object: *collection arguments: document: _id: 1 # Configure the next inserts to fail with a non-timeout network error. # This should: # 1) Mark the server Unknown # 2) Clear the connection pool # 3) Cancel the in progress hello or legacy hello check and close the Monitor # connection # 4) The write will be then we retried, server selection will request an # immediate check, and block for ~500ms until the next Monitor check # proceeds. # 5) The write will succeed on the second attempt. - name: failPoint object: testRunner arguments: failPoint: configureFailPoint: failCommand mode: times: 1 data: failCommands: - insert closeConnection: true client: *setupClient - name: insertOne object: *collection arguments: document: _id: 2 expectResult: insertedId: 2 # The first error should mark the server Unknown and then clear the pool. - name: waitForEvent object: testRunner arguments: client: *client event: serverDescriptionChangedEvent: newDescription: type: Unknown count: 1 - name: waitForEvent object: testRunner arguments: client: *client event: poolClearedEvent: {} count: 1 # Perform another operation to ensure the node still selectable. - name: insertOne object: *collection arguments: document: _id: 3 expectResult: insertedId: 3 # Assert the server was marked Unknown and pool was cleared exactly once. - name: assertEventCount object: testRunner arguments: client: *client event: serverDescriptionChangedEvent: newDescription: type: Unknown count: 1 - name: assertEventCount object: testRunner arguments: client: *client event: poolClearedEvent: {} count: 1 # Order of operations is non-deterministic so we cannot check events. 
outcome: - collectionName: *collectionName databaseName: *databaseName documents: - _id: 1 - _id: 2 - _id: 3 mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam_unified/connectTimeoutMS.yml000066400000000000000000000071531505113246500276270ustar00rootroot00000000000000--- description: connectTimeoutMS schemaVersion: "1.10" runOnRequirements: # failCommand appName requirements - minServerVersion: "4.4" serverless: forbid topologies: [ single, replicaset, sharded ] createEntities: - client: id: &setupClient setupClient useMultipleMongoses: false initialData: &initialData - collectionName: &collectionName connectTimeoutMS databaseName: &databaseName sdam-tests documents: [] tests: - description: connectTimeoutMS=0 operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client observeEvents: - commandStartedEvent - serverDescriptionChangedEvent - poolClearedEvent uriOptions: retryWrites: false connectTimeoutMS: 0 heartbeatFrequencyMS: 500 appname: connectTimeoutMS=0 useMultipleMongoses: false - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName # Perform an operation to ensure the node is discovered. - name: insertMany object: *collection arguments: documents: - _id: 1 - _id: 2 # Block the next streaming hello check for longer than # heartbeatFrequencyMS to ensure that the connection timeout remains # unlimited. - name: failPoint object: testRunner arguments: failPoint: configureFailPoint: failCommand mode: times: 2 data: failCommands: - hello - isMaster appName: connectTimeoutMS=0 blockConnection: true blockTimeMS: 550 client: *setupClient - name: wait object: testRunner arguments: ms: 750 # Perform an operation to ensure the node is still selectable. - name: insertMany object: *collection arguments: documents: - _id: 3 - _id: 4 # Assert that the server was never marked Unknown and the pool was never # cleared. 
- name: assertEventCount object: testRunner arguments: client: *client event: serverDescriptionChangedEvent: newDescription: type: Unknown count: 0 - name: assertEventCount object: testRunner arguments: client: *client event: poolClearedEvent: {} count: 0 expectEvents: - client: *client eventType: command events: - commandStartedEvent: command: insert: connectTimeoutMS documents: - _id: 1 - _id: 2 commandName: insert databaseName: *databaseName - commandStartedEvent: command: insert: connectTimeoutMS documents: - _id: 3 - _id: 4 commandName: insert databaseName: *databaseName outcome: - collectionName: *collectionName databaseName: *databaseName documents: - _id: 1 - _id: 2 - _id: 3 - _id: 4 mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam_unified/find-network-error.yml000066400000000000000000000071311505113246500301210ustar00rootroot00000000000000--- description: find-network-error schemaVersion: "1.10" runOnRequirements: # failCommand appName requirements - minServerVersion: "4.4" serverless: forbid topologies: [ single, replicaset, sharded ] createEntities: - client: id: &setupClient setupClient useMultipleMongoses: false initialData: &initialData - collectionName: &collectionName find-network-error databaseName: &databaseName sdam-tests documents: - _id: 1 - _id: 2 tests: - description: Reset server and pool after network error on find operations: - name: failPoint object: testRunner arguments: client: *setupClient failPoint: configureFailPoint: failCommand mode: times: 1 data: failCommands: - find closeConnection: true appName: findNetworkErrorTest - name: createEntities object: testRunner arguments: entities: - client: id: &client client useMultipleMongoses: false observeEvents: - commandStartedEvent - serverDescriptionChangedEvent - poolClearedEvent uriOptions: retryWrites: false retryReads: false appname: findNetworkErrorTest - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - name: find object: *collection arguments: filter: _id: 1 expectError: isError: true - name: waitForEvent object: testRunner arguments: client: *client event: serverDescriptionChangedEvent: newDescription: type: Unknown count: 1 - name: waitForEvent object: testRunner arguments: client: *client event: poolClearedEvent: {} count: 1 # Perform another operation to ensure the node is rediscovered. - name: insertMany object: *collection arguments: documents: - _id: 5 - _id: 6 # Assert the server was marked Unknown and pool was cleared exactly once. 
- name: assertEventCount object: testRunner arguments: client: *client event: serverDescriptionChangedEvent: newDescription: type: Unknown count: 1 - name: assertEventCount object: testRunner arguments: client: *client event: poolClearedEvent: {} count: 1 expectEvents: - client: *client eventType: command events: - commandStartedEvent: command: find: find-network-error commandName: find databaseName: *databaseName - commandStartedEvent: command: insert: find-network-error documents: - _id: 5 - _id: 6 commandName: insert databaseName: *databaseName outcome: - collectionName: *collectionName databaseName: *databaseName documents: - _id: 1 - _id: 2 - _id: 5 - _id: 6 mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam_unified/find-network-timeout-error.yml000066400000000000000000000064711505113246500316130ustar00rootroot00000000000000--- description: find-network-timeout-error schemaVersion: "1.10" runOnRequirements: # failCommand appName requirements - minServerVersion: "4.4" serverless: forbid topologies: [ single, replicaset, sharded ] createEntities: - client: id: &setupClient setupClient useMultipleMongoses: false initialData: &initialData - collectionName: &collectionName find-network-timeout-error databaseName: &databaseName sdam-tests documents: - _id: 1 - _id: 2 tests: - description: Ignore network timeout error on find operations: - name: failPoint object: testRunner arguments: client: *setupClient failPoint: configureFailPoint: failCommand mode: times: 1 data: failCommands: - find blockConnection: true blockTimeMS: 500 appName: findNetworkTimeoutErrorTest - name: createEntities object: testRunner arguments: entities: - client: id: &client client useMultipleMongoses: false observeEvents: - commandStartedEvent - serverDescriptionChangedEvent - poolClearedEvent uriOptions: retryWrites: false retryReads: false appname: findNetworkTimeoutErrorTest # Set a short socket timeout to ensure the find command times out. socketTimeoutMS: 250 - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - name: find object: *collection arguments: filter: _id: 1 expectError: isError: true # Perform another operation to ensure the node is still usable. - name: insertOne object: *collection arguments: document: _id: 3 # Assert the server was not marked Unknown and the pool was not cleared. 
- name: assertEventCount object: testRunner arguments: client: *client event: serverDescriptionChangedEvent: newDescription: type: Unknown count: 0 - name: assertEventCount object: testRunner arguments: client: *client event: poolClearedEvent: {} count: 0 expectEvents: - client: *client eventType: command events: - commandStartedEvent: command: find: find-network-timeout-error commandName: find databaseName: *databaseName - commandStartedEvent: command: insert: find-network-timeout-error documents: - _id: 3 commandName: insert databaseName: *databaseName outcome: - collectionName: *collectionName databaseName: *databaseName documents: - _id: 1 - _id: 2 - _id: 3 mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam_unified/find-shutdown-error.yml000066400000000000000000000110501505113246500302760ustar00rootroot00000000000000--- description: find-shutdown-error schemaVersion: "1.10" runOnRequirements: # failCommand appName requirements - minServerVersion: "4.4" serverless: forbid topologies: [ single, replicaset, sharded ] createEntities: - client: id: &setupClient setupClient useMultipleMongoses: false initialData: &initialData - collectionName: &collectionName find-shutdown-error databaseName: &databaseName sdam-tests documents: [] tests: - description: Concurrent shutdown error on find operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client useMultipleMongoses: false uriOptions: retryWrites: false retryReads: false heartbeatFrequencyMS: 500 appname: shutdownErrorFindTest observeEvents: - serverDescriptionChangedEvent - poolClearedEvent - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName # Perform an operation to ensure the node is discovered. - name: insertOne object: *collection arguments: document: _id: 1 # Configure the next two finds to fail with a non-timeout shutdown # errors. Block the connection for 500ms to ensure both operations check # out connections from the same pool generation. - name: failPoint object: testRunner arguments: failPoint: configureFailPoint: failCommand mode: times: 2 data: failCommands: - find appName: shutdownErrorFindTest errorCode: 91 blockConnection: true blockTimeMS: 500 client: *setupClient # Start threads. - name: createEntities object: testRunner arguments: entities: - thread: id: &thread0 thread0 - thread: id: &thread1 thread1 # Perform concurrent find operations. Both fail with shutdown errors. - name: runOnThread object: testRunner arguments: thread: *thread0 operation: name: find object: *collection arguments: filter: _id: 1 expectError: isError: true - name: runOnThread object: testRunner arguments: thread: *thread1 operation: name: find object: *collection arguments: filter: _id: 1 expectError: isError: true # Stop threads. - name: waitForThread object: testRunner arguments: thread: *thread0 - name: waitForThread object: testRunner arguments: thread: *thread1 # The first shutdown error should mark the server Unknown and then clear # the pool. - name: waitForEvent object: testRunner arguments: client: *client event: serverDescriptionChangedEvent: newDescription: type: Unknown count: 1 - name: waitForEvent object: testRunner arguments: client: *client event: poolClearedEvent: {} count: 1 # Perform an operation to ensure the node is rediscovered. - name: insertOne object: *collection arguments: document: _id: 4 # Assert the server was marked Unknown and pool was cleared exactly once. 
- name: assertEventCount object: testRunner arguments: client: *client event: serverDescriptionChangedEvent: newDescription: type: Unknown count: 1 - name: assertEventCount object: testRunner arguments: client: *client event: poolClearedEvent: {} count: 1 # Order of operations is non-deterministic so we cannot check events. outcome: - collectionName: *collectionName databaseName: *databaseName documents: - _id: 1 - _id: 4 mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam_unified/hello-command-error.yml000066400000000000000000000156551505113246500302430ustar00rootroot00000000000000--- description: hello-command-error schemaVersion: "1.4" runOnRequirements: # Require SERVER-49336 for failCommand + appName on the initial handshake. - minServerVersion: "4.4.7" serverless: forbid topologies: [ single, replicaset, sharded ] createEntities: - client: id: &setupClient setupClient useMultipleMongoses: false initialData: &initialData - collectionName: &collectionName hello-command-error databaseName: &databaseName sdam-tests documents: [] tests: - description: Command error on Monitor handshake operations: # Configure the next streaming hello check to fail with a command error. # Use "times: 4" to increase the probability that the Monitor check fails # since the RTT hello may trigger this failpoint one or many times as # well. - name: failPoint object: testRunner arguments: client: *setupClient failPoint: configureFailPoint: failCommand mode: times: 4 data: failCommands: - hello - isMaster appName: commandErrorHandshakeTest closeConnection: false errorCode: 91 - name: createEntities object: testRunner arguments: entities: - client: id: &client client useMultipleMongoses: false observeEvents: - serverDescriptionChangedEvent - poolClearedEvent - commandStartedEvent uriOptions: retryWrites: false connectTimeoutMS: 250 heartbeatFrequencyMS: 500 appname: commandErrorHandshakeTest - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName # The command error on the initial handshake should mark the server # Unknown (emitting a ServerDescriptionChangedEvent) and clear the pool. - name: waitForEvent object: testRunner arguments: client: *client event: serverDescriptionChangedEvent: newDescription: type: Unknown count: 1 # Perform an operation to ensure the node is discovered. - name: insertMany object: *collection arguments: documents: - _id: 1 - _id: 2 # We cannot assert the server was marked Unknown and pool was cleared an # exact number of times because the RTT hello may or may not have # triggered this failpoint as well. 
expectEvents: - client: *client eventType: command events: - commandStartedEvent: command: insert: hello-command-error documents: - _id: 1 - _id: 2 commandName: insert databaseName: *databaseName outcome: - collectionName: *collectionName databaseName: *databaseName documents: - _id: 1 - _id: 2 - description: Command error on Monitor check operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client useMultipleMongoses: false observeEvents: - commandStartedEvent - serverDescriptionChangedEvent - poolClearedEvent uriOptions: retryWrites: false connectTimeoutMS: 1000 heartbeatFrequencyMS: 500 appname: commandErrorCheckTest - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName # Perform an operation to ensure the node is discovered. - name: insertMany object: *collection arguments: documents: - _id: 1 - _id: 2 # Configure the next streaming hello check to fail with a command # error. # Use times: 2 so that the RTT hello is blocked as well. - name: failPoint object: testRunner arguments: failPoint: configureFailPoint: failCommand mode: times: 2 data: failCommands: - hello - isMaster appName: commandErrorCheckTest closeConnection: false blockConnection: true blockTimeMS: 750 errorCode: 91 client: *setupClient # The command error on the next check should mark the server Unknown and # clear the pool. - name: waitForEvent object: testRunner arguments: client: *client event: serverDescriptionChangedEvent: newDescription: type: Unknown count: 1 - name: waitForEvent object: testRunner arguments: client: *client event: poolClearedEvent: {} count: 1 # Perform an operation to ensure the node is rediscovered. - name: insertMany object: *collection arguments: documents: - _id: 3 - _id: 4 # Assert the server was marked Unknown and pool was cleared exactly once. - name: assertEventCount object: testRunner arguments: client: *client event: serverDescriptionChangedEvent: newDescription: type: Unknown count: 1 - name: assertEventCount object: testRunner arguments: client: *client event: poolClearedEvent: {} count: 1 expectEvents: - client: *client eventType: command events: - commandStartedEvent: command: insert: hello-command-error documents: - _id: 1 - _id: 2 commandName: insert databaseName: *databaseName - commandStartedEvent: command: insert: hello-command-error documents: - _id: 3 - _id: 4 commandName: insert databaseName: *databaseName outcome: - collectionName: *collectionName databaseName: *databaseName documents: - _id: 1 - _id: 2 - _id: 3 - _id: 4 mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam_unified/hello-network-error.yml000066400000000000000000000156241505113246500303120ustar00rootroot00000000000000--- description: hello-network-error schemaVersion: "1.4" runOnRequirements: # Require SERVER-49336 for failCommand + appName on the initial handshake. - minServerVersion: "4.4.7" serverless: forbid topologies: [ single, replicaset, sharded ] createEntities: - client: id: &setupClient setupClient useMultipleMongoses: false initialData: &initialData - collectionName: &collectionName hello-network-error databaseName: &databaseName sdam-tests documents: [] tests: - description: Network error on Monitor handshake # Configure the initial handshake to fail with a network error. # Use times: 2 so that the RTT hello fails as well. 
operations: - name: failPoint object: testRunner arguments: client: *setupClient failPoint: configureFailPoint: failCommand mode: times: 2 data: failCommands: - hello - isMaster appName: networkErrorHandshakeTest closeConnection: true - name: createEntities object: testRunner arguments: entities: - client: id: &client client useMultipleMongoses: false observeEvents: - commandStartedEvent - serverDescriptionChangedEvent - poolClearedEvent uriOptions: retryWrites: false connectTimeoutMS: 250 heartbeatFrequencyMS: 500 appname: networkErrorHandshakeTest - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName # The network error on the initial handshake should mark the server # Unknown (emitting a ServerDescriptionChangedEvent) and clear the pool. - name: waitForEvent object: testRunner arguments: client: *client event: serverDescriptionChangedEvent: newDescription: type: Unknown count: 1 # Perform an operation to ensure the node is discovered. - name: insertMany object: *collection arguments: documents: - _id: 1 - _id: 2 # We cannot assert the server was marked Unknown and pool was cleared an # exact number of times because the RTT hello may or may not have # triggered this failpoint as well. expectEvents: - client: *client eventType: command events: - commandStartedEvent: command: insert: hello-network-error documents: - _id: 1 - _id: 2 commandName: insert databaseName: *databaseName outcome: - collectionName: *collectionName databaseName: *databaseName documents: - _id: 1 - _id: 2 - description: Network error on Monitor check operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client useMultipleMongoses: false observeEvents: - commandStartedEvent - serverDescriptionChangedEvent - poolClearedEvent uriOptions: retryWrites: false connectTimeoutMS: 250 heartbeatFrequencyMS: 500 appname: networkErrorCheckTest - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName # Perform an operation to ensure the node is discovered. - name: insertMany object: *collection arguments: documents: - _id: 1 - _id: 2 # Configure the next streaming hello check to fail with a non-timeout # network error. Use "times: 2" so that the RTT hello fails as well. - name: failPoint object: testRunner arguments: failPoint: configureFailPoint: failCommand mode: times: 2 data: failCommands: - hello - isMaster appName: networkErrorCheckTest closeConnection: true client: *setupClient # The network error on the next check should mark the server Unknown and # clear the pool. - name: waitForEvent object: testRunner arguments: client: *client event: serverDescriptionChangedEvent: newDescription: type: Unknown count: 1 - name: waitForEvent object: testRunner arguments: client: *client event: poolClearedEvent: {} count: 1 # Perform an operation to ensure the node is rediscovered. - name: insertMany object: *collection arguments: documents: - _id: 3 - _id: 4 # We cannot assert the server was marked Unknown and pool was cleared an # exact number of times because the RTT hello may or may not have # triggered this failpoint as well.
# - name: assertEventCount # object: testRunner # arguments: # client: *client # event: # serverDescriptionChangedEvent: # newDescription: # type: Unknown # count: 1 # - name: assertEventCount # object: testRunner # arguments: # event: # poolClearedEvent: {} # count: 1 expectEvents: - client: *client eventType: command events: - commandStartedEvent: command: insert: hello-network-error documents: - _id: 1 - _id: 2 commandName: insert databaseName: *databaseName - commandStartedEvent: command: insert: hello-network-error documents: - _id: 3 - _id: 4 commandName: insert databaseName: *databaseName outcome: - collectionName: *collectionName databaseName: *databaseName documents: - _id: 1 - _id: 2 - _id: 3 - _id: 4 mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam_unified/hello-timeout.yml000066400000000000000000000232541505113246500271560ustar00rootroot00000000000000--- description: hello-timeout schemaVersion: "1.10" runOnRequirements: # failCommand appName requirements - minServerVersion: "4.4" serverless: forbid topologies: [ single, replicaset, sharded ] createEntities: - client: id: &setupClient setupClient useMultipleMongoses: false initialData: &initialData - collectionName: &collectionName hello-timeout databaseName: &databaseName sdam-tests documents: [] tests: - description: Network timeout on Monitor handshake operations: # Configure the initial handshake to fail with a timeout. # Use times: 2 so that the RTT hello is blocked as well. - name: failPoint object: testRunner arguments: client: *setupClient failPoint: configureFailPoint: failCommand mode: times: 2 data: failCommands: - hello - isMaster appName: timeoutMonitorHandshakeTest blockConnection: true blockTimeMS: 1000 - name: createEntities object: testRunner arguments: entities: - client: id: &client client useMultipleMongoses: false observeEvents: - commandStartedEvent - serverDescriptionChangedEvent - poolClearedEvent uriOptions: retryWrites: false connectTimeoutMS: 250 heartbeatFrequencyMS: 500 appname: timeoutMonitorHandshakeTest - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName # The network error on the initial handshake should mark the server # Unknown (emitting a ServerDescriptionChangedEvent) and clear the pool. - name: waitForEvent object: testRunner arguments: client: *client event: serverDescriptionChangedEvent: newDescription: type: Unknown count: 1 # Perform an operation to ensure the node is discovered. - name: insertMany object: *collection arguments: documents: - _id: 1 - _id: 2 # We cannot assert the server was marked Unknown and pool was cleared an # exact number of times because the RTT hello may or may not have # triggered this failpoint as well. 
# - name: assertEventCount # object: testRunner # arguments: # event: ServerMarkedUnknownEvent # count: 1 # - name: assertEventCount # object: testRunner # arguments: # event: PoolClearedEvent # count: 1 expectEvents: - client: *client eventType: command events: - commandStartedEvent: command: insert: hello-timeout documents: - _id: 1 - _id: 2 commandName: insert databaseName: *databaseName outcome: - collectionName: *collectionName databaseName: *databaseName documents: - _id: 1 - _id: 2 - description: Network timeout on Monitor check operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client useMultipleMongoses: false observeEvents: - commandStartedEvent - serverDescriptionChangedEvent - poolClearedEvent uriOptions: retryWrites: false connectTimeoutMS: 750 heartbeatFrequencyMS: 500 appname: timeoutMonitorCheckTest - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName # Perform an operation to ensure the node is discovered. - name: insertMany object: *collection arguments: documents: - _id: 1 - _id: 2 # Configure the next streaming hello check to fail with a timeout. # Use "times: 2" so that the RTT hello is blocked as well. - name: failPoint object: testRunner arguments: failPoint: configureFailPoint: failCommand mode: times: 2 data: failCommands: - hello - isMaster appName: timeoutMonitorCheckTest blockConnection: true # blockTimeMS is evaluated after waiting for heartbeatFrequencyMS server-side, so this value only # needs to be greater than connectTimeoutMS. The driver will wait for (500+750)ms and the server will # respond after (500+1000)ms. blockTimeMS: 1000 client: *setupClient - name: waitForEvent object: testRunner arguments: client: *client event: serverDescriptionChangedEvent: newDescription: type: Unknown count: 1 # The network error on the next check should mark the server Unknown and # clear the pool. - name: waitForEvent object: testRunner arguments: client: *client event: poolClearedEvent: {} count: 1 # Perform an operation to ensure the node is rediscovered. - name: insertMany object: *collection arguments: documents: - _id: 3 - _id: 4 # We cannot assert the server was marked Unknown and pool was cleared an # exact number of times because the RTT hello may have triggered this # failpoint one or many times as well.
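The `createEntities` steps in these fixtures correspond to constructing a driver client with SDAM-related URI options and event subscribers. A minimal Ruby sketch, assuming a local deployment; option names are the Ruby driver's snake_case equivalents, with durations in seconds, and the subscriber protocol (`#succeeded`) follows the driver's SDAM monitoring convention:

    require 'mongo'

    # Counts transitions to Unknown; SDAM subscribers implement #succeeded.
    class UnknownCounter
      attr_reader :count

      def initialize
        @count = 0
      end

      def succeeded(event)
        @count += 1 if event.new_description.unknown?
      end
    end

    counter = UnknownCounter.new
    client = Mongo::Client.new(
      ['localhost:27017'],
      app_name: 'timeoutMonitorCheckTest',
      connect_timeout: 0.75,     # connectTimeoutMS: 750
      heartbeat_frequency: 0.5,  # heartbeatFrequencyMS: 500
      retry_writes: false,
      # Subscribe before the client starts monitoring the topology.
      sdam_proc: lambda { |c|
        c.subscribe(Mongo::Monitoring::SERVER_DESCRIPTION_CHANGED, counter)
      }
    )
    client[:'hello-timeout'].insert_many([{ _id: 1 }, { _id: 2 }])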
expectEvents: - client: *client eventType: command events: - commandStartedEvent: command: insert: hello-timeout documents: - _id: 1 - _id: 2 commandName: insert databaseName: *databaseName - commandStartedEvent: command: insert: hello-timeout documents: - _id: 3 - _id: 4 commandName: insert databaseName: *databaseName outcome: - collectionName: *collectionName databaseName: *databaseName documents: - _id: 1 - _id: 2 - _id: 3 - _id: 4 - description: Driver extends timeout while streaming operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client observeEvents: - commandStartedEvent - serverDescriptionChangedEvent - poolClearedEvent uriOptions: retryWrites: false connectTimeoutMS: 250 heartbeatFrequencyMS: 500 appname: extendsTimeoutTest - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName # Perform an operation to ensure the node is discovered. - name: insertMany object: *collection arguments: documents: - _id: 1 - _id: 2 # Wait for multiple monitor checks to complete. - name: wait object: testRunner arguments: ms: 2000 # Perform an operation to ensure the node is still selectable. - name: insertMany object: *collection arguments: documents: - _id: 3 - _id: 4 # Assert that the server was never marked Unknown and the pool was never # cleared. - name: assertEventCount object: testRunner arguments: client: *client event: serverDescriptionChangedEvent: newDescription: type: Unknown count: 0 - name: assertEventCount object: testRunner arguments: client: *client event: poolClearedEvent: {} count: 0 expectEvents: - client: *client eventType: command events: - commandStartedEvent: command: insert: hello-timeout documents: - _id: 1 - _id: 2 commandName: insert databaseName: *databaseName - commandStartedEvent: command: insert: hello-timeout documents: - _id: 3 - _id: 4 commandName: insert databaseName: *databaseName outcome: - collectionName: *collectionName databaseName: *databaseName documents: - _id: 1 - _id: 2 - _id: 3 - _id: 4 mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam_unified/insert-network-error.yml000066400000000000000000000072731505113246500305140ustar00rootroot00000000000000--- description: insert-network-error schemaVersion: "1.10" runOnRequirements: # failCommand appName requirements - minServerVersion: "4.4" serverless: forbid topologies: [ single, replicaset, sharded ] createEntities: - client: id: &setupClient setupClient useMultipleMongoses: false initialData: &initialData - collectionName: &collectionName insert-network-error databaseName: &databaseName sdam-tests documents: - _id: 1 - _id: 2 tests: - description: Reset server and pool after network error on insert operations: - name: failPoint object: testRunner arguments: client: *setupClient failPoint: configureFailPoint: failCommand mode: times: 1 data: failCommands: - insert closeConnection: true appName: insertNetworkErrorTest - name: createEntities object: testRunner arguments: entities: - client: id: &client client observeEvents: - commandStartedEvent - serverDescriptionChangedEvent - poolClearedEvent uriOptions: retryWrites: false appname: insertNetworkErrorTest useMultipleMongoses: false - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - name: insertMany object: *collection arguments: documents: - _id: 3 - _id: 4 expectError: 
isError: true - name: waitForEvent object: testRunner arguments: client: *client event: serverDescriptionChangedEvent: newDescription: type: Unknown count: 1 - name: waitForEvent object: testRunner arguments: client: *client event: poolClearedEvent: {} count: 1 # Perform another operation to ensure the node is rediscovered. - name: insertMany object: *collection arguments: documents: - _id: 5 - _id: 6 # Assert the server was marked Unknown and pool was cleared exactly once. - name: assertEventCount object: testRunner arguments: client: *client event: serverDescriptionChangedEvent: newDescription: type: Unknown count: 1 - name: assertEventCount object: testRunner arguments: client: *client event: poolClearedEvent: {} count: 1 expectEvents: - client: *client eventType: command events: - commandStartedEvent: command: insert: insert-network-error documents: - _id: 3 - _id: 4 commandName: insert databaseName: *databaseName - commandStartedEvent: command: insert: insert-network-error documents: - _id: 5 - _id: 6 commandName: insert databaseName: *databaseName outcome: - collectionName: *collectionName databaseName: *databaseName documents: - _id: 1 - _id: 2 - _id: 5 - _id: 6 mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam_unified/insert-shutdown-error.yml000066400000000000000000000110441505113246500306650ustar00rootroot00000000000000--- description: insert-shutdown-error schemaVersion: "1.10" runOnRequirements: # failCommand appName requirements - minServerVersion: "4.4" serverless: forbid topologies: [ single, replicaset, sharded ] createEntities: - client: id: &setupClient setupClient useMultipleMongoses: false initialData: &initialData - collectionName: &collectionName insert-shutdown-error databaseName: &databaseName sdam-tests documents: [] tests: - description: Concurrent shutdown error on insert operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client useMultipleMongoses: false uriOptions: retryWrites: false heartbeatFrequencyMS: 500 appname: shutdownErrorInsertTest observeEvents: - serverDescriptionChangedEvent - poolClearedEvent - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName # Perform an operation to ensure the node is discovered. - name: insertOne object: *collection arguments: document: _id: 1 # Configure the next two inserts to fail with non-timeout shutdown # errors. Block the connection for 500ms to ensure both operations check # out connections from the same pool generation. - name: failPoint object: testRunner arguments: failPoint: configureFailPoint: failCommand mode: times: 2 data: failCommands: - insert appName: shutdownErrorInsertTest errorCode: 91 blockConnection: true blockTimeMS: 500 client: *setupClient # Start threads. - name: createEntities object: testRunner arguments: entities: - thread: id: &thread0 thread0 - thread: id: &thread1 thread1 # Perform concurrent insert operations. Both fail with shutdown errors. - name: runOnThread object: testRunner arguments: thread: *thread0 operation: name: insertOne object: *collection arguments: document: _id: 2 expectError: isError: true - name: runOnThread object: testRunner arguments: thread: *thread1 operation: name: insertOne object: *collection arguments: document: _id: 3 expectError: isError: true # Stop threads.
- name: waitForThread object: testRunner arguments: thread: *thread0 - name: waitForThread object: testRunner arguments: thread: *thread1 # The first shutdown error should mark the server Unknown and then clear # the pool. - name: waitForEvent object: testRunner arguments: client: *client event: serverDescriptionChangedEvent: newDescription: type: Unknown count: 1 - name: waitForEvent object: testRunner arguments: client: *client event: poolClearedEvent: {} count: 1 # Perform an operation to ensure the node is rediscovered. - name: insertOne object: *collection arguments: document: _id: 4 # Assert the server was marked Unknown and pool was cleared exactly once. - name: assertEventCount object: testRunner arguments: client: *client event: serverDescriptionChangedEvent: newDescription: type: Unknown count: 1 - name: assertEventCount object: testRunner arguments: client: *client event: poolClearedEvent: {} count: 1 # Order of operations is non-deterministic so we cannot check events. outcome: - collectionName: *collectionName databaseName: *databaseName documents: - _id: 1 - _id: 4 mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam_unified/interruptInUse-pool-clear.yml000066400000000000000000000247551505113246500314250ustar00rootroot00000000000000--- description: interruptInUse schemaVersion: "1.11" runOnRequirements: # failCommand appName requirements - minServerVersion: "4.4" serverless: forbid topologies: [ replicaset, sharded ] createEntities: - client: id: &setupClient setupClient useMultipleMongoses: false initialData: &initialData - collectionName: &collectionName interruptInUse databaseName: &databaseName sdam-tests documents: [] tests: - description: Connection pool clear uses interruptInUseConnections=true after monitor timeout operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client useMultipleMongoses: false observeEvents: - poolClearedEvent - connectionClosedEvent - commandStartedEvent - commandSucceededEvent - commandFailedEvent - connectionCheckedOutEvent - connectionCheckedInEvent uriOptions: connectTimeoutMS: 500 heartbeatFrequencyMS: 500 appname: interruptInUse retryReads: false minPoolSize: 0 - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - thread: id: &thread1 thread1 - name: insertOne object: *collection arguments: document: { _id: 1 } # simulate a long-running query - name: runOnThread object: testRunner arguments: thread: *thread1 operation: name: find object: *collection arguments: filter: $where : sleep(2000) || true expectError: isError: true # Configure the monitor check to fail with a timeout. Use "times: 1" so # that the failpoint is triggered at most once.
- name: failPoint object: testRunner arguments: client: *setupClient failPoint: configureFailPoint: failCommand mode: times: 1 data: failCommands: - hello - isMaster blockConnection: true blockTimeMS: 1500 appName: interruptInUse - name: waitForThread object: testRunner arguments: thread: *thread1 expectEvents: - client: *client eventType: command events: - commandStartedEvent: commandName: insert - commandSucceededEvent: commandName: insert - commandStartedEvent: commandName: find - commandFailedEvent: commandName: find - client: *client eventType: cmap events: - connectionCheckedOutEvent: {} - connectionCheckedInEvent: {} - connectionCheckedOutEvent: {} - poolClearedEvent: interruptInUseConnections: true - connectionCheckedInEvent: {} - connectionClosedEvent: {} outcome: - collectionName: *collectionName databaseName: *databaseName documents: - _id: 1 - description: Error returned from connection pool clear with interruptInUseConnections=true is retryable operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client useMultipleMongoses: false observeEvents: - poolClearedEvent - connectionClosedEvent - commandStartedEvent - commandFailedEvent - commandSucceededEvent - connectionCheckedOutEvent - connectionCheckedInEvent uriOptions: connectTimeoutMS: 500 heartbeatFrequencyMS: 500 appname: interruptInUseRetryable retryReads: true minPoolSize: 0 - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - thread: id: &thread1 thread1 - name: insertOne object: *collection arguments: document: { _id: 1 } # simulate a long-running query - name: runOnThread object: testRunner arguments: thread: *thread1 operation: name: find object: *collection arguments: filter: $where : sleep(2000) || true # Configure the monitor check to fail with a timeout. Use "times: 1" so # that the failpoint is triggered at most once.
- name: failPoint object: testRunner arguments: client: *setupClient failPoint: configureFailPoint: failCommand mode: times: 1 data: failCommands: - hello - isMaster blockConnection: true blockTimeMS: 1500 appName: interruptInUseRetryable - name: waitForThread object: testRunner arguments: thread: *thread1 expectEvents: - client: *client eventType: command events: - commandStartedEvent: commandName: insert - commandSucceededEvent: commandName: insert - commandStartedEvent: commandName: find - commandFailedEvent: commandName: find - commandStartedEvent: commandName: find - commandSucceededEvent: commandName: find - client: *client eventType: cmap events: - connectionCheckedOutEvent: {} - connectionCheckedInEvent: {} - connectionCheckedOutEvent: {} - poolClearedEvent: interruptInUseConnections: true - connectionCheckedInEvent: {} - connectionClosedEvent: {} - connectionCheckedOutEvent: {} - connectionCheckedInEvent: {} outcome: - collectionName: *collectionName databaseName: *databaseName documents: - _id: 1 - description: Error returned from connection pool clear with interruptInUseConnections=true is retryable for write operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client useMultipleMongoses: false observeEvents: - poolClearedEvent - connectionClosedEvent - commandStartedEvent - commandFailedEvent - commandSucceededEvent - connectionCheckedOutEvent - connectionCheckedInEvent uriOptions: connectTimeoutMS: 500 heartbeatFrequencyMS: 500 appname: interruptInUseRetryableWrite retryWrites: true minPoolSize: 0 - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName - thread: id: &thread1 thread1 # ensure the primary is discovered - name: insertOne object: *collection arguments: document: { _id: 1 } # simulate a long-running query - name: runOnThread object: testRunner arguments: thread: *thread1 operation: name: updateOne object: *collection arguments: filter: $where: sleep(2000) || true update: "$set": { "a": "bar" } # Configure the monitor check to fail with a timeout. Use "times: 1" so # that the failpoint is triggered at most once.
- name: failPoint object: testRunner arguments: client: *setupClient failPoint: configureFailPoint: failCommand mode: times: 1 data: failCommands: - hello - isMaster blockConnection: true blockTimeMS: 1500 appName: interruptInUseRetryableWrite - name: waitForThread object: testRunner arguments: thread: *thread1 expectEvents: - client: *client eventType: command events: - commandStartedEvent: commandName: insert - commandSucceededEvent: commandName: insert - commandStartedEvent: commandName: update - commandFailedEvent: commandName: update - commandStartedEvent: commandName: update - commandSucceededEvent: commandName: update - client: *client eventType: cmap events: - connectionCheckedOutEvent: {} - connectionCheckedInEvent: {} - connectionCheckedOutEvent: {} - poolClearedEvent: interruptInUseConnections: true - connectionCheckedInEvent: {} - connectionClosedEvent: {} - connectionCheckedOutEvent: {} - connectionCheckedInEvent: {} outcome: - collectionName: *collectionName databaseName: *databaseName documents: - { _id: 1, a : bar } mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam_unified/minPoolSize-error.yml000066400000000000000000000073151505113246500277660ustar00rootroot00000000000000--- description: minPoolSize-error schemaVersion: "1.4" runOnRequirements: # Require SERVER-49336 for failCommand + appName on the initial handshake. - minServerVersion: "4.4.7" serverless: forbid topologies: - single createEntities: - client: id: &setupClient setupClient useMultipleMongoses: false initialData: &initialData - collectionName: &collectionName sdam-minPoolSize-error databaseName: &databaseName sdam-tests documents: [] tests: - description: Network error on minPoolSize background creation operations: # Configure the initial monitor handshake to succeed but the # first or second background minPoolSize establishments to fail. - name: failPoint object: testRunner arguments: client: *setupClient failPoint: configureFailPoint: failCommand mode: skip: 3 data: failCommands: - hello - isMaster appName: SDAMminPoolSizeError closeConnection: true - name: createEntities object: testRunner arguments: entities: - client: id: &client client useMultipleMongoses: false observeEvents: - serverDescriptionChangedEvent - poolClearedEvent - poolReadyEvent uriOptions: heartbeatFrequencyMS: 10000 appname: SDAMminPoolSizeError minPoolSize: 10 serverSelectionTimeoutMS: 1000 - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName # Wait for monitor to succeed handshake and mark the pool as ready. - name: waitForEvent object: testRunner arguments: client: *client event: poolReadyEvent: {} count: 1 # Background connection establishment ensuring minPoolSize should fail, # causing the pool to be cleared. - name: waitForEvent object: testRunner arguments: client: *client event: poolClearedEvent: {} count: 1 # The server should be marked as Unknown as part of this. - name: waitForEvent object: testRunner arguments: client: *client event: serverDescriptionChangedEvent: newDescription: type: Unknown count: 1 # Executing a command should fail server selection due to not being able # to find the primary. - name: runCommand object: *database arguments: command: ping: {} commandName: ping expectError: isError: true # Disable the failpoint, allowing the monitor to discover the primary again. 
- name: failPoint object: testRunner arguments: failPoint: configureFailPoint: failCommand mode: "off" client: *setupClient # Perform an operation to ensure the node is discovered. - name: runCommand object: *database arguments: command: ping: 1 commandName: ping # Assert that the monitor discovered the primary and marked the pool as ready again. - name: assertEventCount object: testRunner arguments: client: *client event: poolReadyEvent: {} count: 2 mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam_unified/pool-cleared-error.yml000066400000000000000000000152101505113246500300570ustar00rootroot00000000000000--- description: pool-cleared-error schemaVersion: "1.10" runOnRequirements: # This test requires retryable writes, failCommand appName, and # failCommand blockConnection with closeConnection:true (SERVER-53512). - minServerVersion: "4.9" serverless: forbid topologies: - replicaset - sharded createEntities: - client: id: &setupClient setupClient useMultipleMongoses: false initialData: &initialData - collectionName: &collectionName pool-cleared-error databaseName: &databaseName sdam-tests documents: [] tests: - description: PoolClearedError does not mark server unknown operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client useMultipleMongoses: false observeEvents: - serverDescriptionChangedEvent - poolClearedEvent uriOptions: retryWrites: true maxPoolSize: 1 appname: poolClearedErrorTest - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName # Perform an operation to ensure the node is discovered. - name: insertOne object: *collection arguments: document: _id: 1 # Configure the next insert to fail with a network error which will # clear the pool leaving it paused until the server is rediscovered. - name: failPoint object: testRunner arguments: failPoint: configureFailPoint: failCommand mode: times: 1 data: failCommands: - insert blockConnection: true blockTimeMS: 100 closeConnection: true appName: poolClearedErrorTest client: *setupClient # Start threads. - name: createEntities object: testRunner arguments: entities: - thread: id: &thread0 thread0 - thread: id: &thread1 thread1 - thread: id: &thread2 thread2 - thread: id: &thread3 thread3 - thread: id: &thread4 thread4 - thread: id: &thread5 thread5 # Perform concurrent insert operations. The first one to execute will # fail with a network error, mark the server Unknown, clear the pool, # and retry. # The other operations will either: # - Notice the pool is paused, fail with a PoolClearedError, and retry. # - Or block waiting in server selection until the server is # rediscovered. # # Note that this test does not guarantee that a PoolClearedError will be # raised but it is likely since the initial insert is delayed.
- name: runOnThread object: testRunner arguments: thread: *thread0 operation: name: insertOne object: *collection arguments: document: _id: 2 - name: runOnThread object: testRunner arguments: thread: *thread1 operation: name: insertOne object: *collection arguments: document: _id: 3 - name: runOnThread object: testRunner arguments: thread: *thread2 operation: name: insertOne object: *collection arguments: document: _id: 4 - name: runOnThread object: testRunner arguments: thread: *thread3 operation: name: insertOne object: *collection arguments: document: _id: 5 - name: runOnThread object: testRunner arguments: thread: *thread4 operation: name: insertOne object: *collection arguments: document: _id: 6 - name: runOnThread object: testRunner arguments: thread: *thread5 operation: name: insertOne object: *collection arguments: document: _id: 7 # Stop threads. - name: waitForThread object: testRunner arguments: thread: *thread0 - name: waitForThread object: testRunner arguments: thread: *thread1 - name: waitForThread object: testRunner arguments: thread: *thread2 - name: waitForThread object: testRunner arguments: thread: *thread3 - name: waitForThread object: testRunner arguments: thread: *thread4 - name: waitForThread object: testRunner arguments: thread: *thread5 # The first network error should mark the server Unknown and then clear # the pool. - name: waitForEvent object: testRunner arguments: client: *client event: serverDescriptionChangedEvent: newDescription: type: Unknown count: 1 - name: waitForEvent object: testRunner arguments: client: *client event: poolClearedEvent: {} count: 1 # Perform an operation to ensure the node is still usable. - name: insertOne object: *collection arguments: document: _id: 8 # Assert the server was marked Unknown and pool was cleared exactly once. - name: assertEventCount object: testRunner arguments: client: *client event: serverDescriptionChangedEvent: newDescription: type: Unknown count: 1 - name: assertEventCount object: testRunner arguments: client: *client event: poolClearedEvent: {} count: 1 # Order of operations is non-deterministic so we cannot check events. outcome: - collectionName: *collectionName databaseName: *databaseName documents: - _id: 1 - _id: 2 - _id: 3 - _id: 4 - _id: 5 - _id: 6 - _id: 7 - _id: 8 mongo-ruby-driver-2.21.3/spec/spec_tests/data/sdam_unified/rediscover-quickly-after-step-down.yml000066400000000000000000000103021505113246500332160ustar00rootroot00000000000000--- description: rediscover-quickly-after-step-down schemaVersion: "1.10" runOnRequirements: # 4.4 is required for streaming. # A replica set is required for replSetStepDown. - minServerVersion: "4.4" serverless: forbid topologies: - replicaset createEntities: - client: id: &setupClient setupClient - database: id: &adminDatabase adminDatabase client: *setupClient databaseName: admin initialData: &initialData - collectionName: &collectionName test-replSetStepDown databaseName: &databaseName sdam-tests documents: - _id: 1 - _id: 2 tests: - description: Rediscover quickly after replSetStepDown operations: - name: createEntities object: testRunner arguments: entities: - client: id: &client client observeEvents: - poolClearedEvent - commandStartedEvent uriOptions: appname: replSetStepDownTest # Configure a large heartbeatFrequencyMS heartbeatFrequencyMS: 60000 # Configure a much smaller server selection timeout so that the test # will error when it cannot discover the new primary soon.
serverSelectionTimeoutMS: 5000 w: majority - database: id: &database database client: *client databaseName: *databaseName - collection: id: &collection collection database: *database collectionName: *collectionName # Discover the primary. - name: insertMany object: *collection arguments: documents: - _id: 3 - _id: 4 - name: recordTopologyDescription object: testRunner arguments: client: *client id: &topologyDescription topologyDescription - name: assertTopologyType object: testRunner arguments: topologyDescription: *topologyDescription topologyType: ReplicaSetWithPrimary # Unfreeze a secondary with replSetFreeze:0 to ensure a speedy election. - name: runCommand object: *adminDatabase arguments: command: replSetFreeze: 0 readPreference: mode: secondary commandName: replSetFreeze # Run replSetStepDown on the meta client. - name: runCommand object: *adminDatabase arguments: command: replSetStepDown: 30 secondaryCatchUpPeriodSecs: 30 force: false commandName: replSetStepDown - name: waitForPrimaryChange object: testRunner arguments: client: *client priorTopologyDescription: *topologyDescription # We use a relatively large timeout here to workaround slow # elections on Windows, possibly caused by SERVER-48154. timeoutMS: 15000 # Rediscover the new primary. - name: insertMany object: *collection arguments: documents: - _id: 5 - _id: 6 # Assert that no pools were cleared. - name: assertEventCount object: testRunner arguments: client: *client event: poolClearedEvent: {} count: 0 expectEvents: - client: *client eventType: command events: - commandStartedEvent: command: insert: test-replSetStepDown documents: - _id: 3 - _id: 4 commandName: insert databaseName: *databaseName - commandStartedEvent: command: insert: test-replSetStepDown documents: - _id: 5 - _id: 6 commandName: insert databaseName: *databaseName outcome: - collectionName: *collectionName databaseName: *databaseName documents: - _id: 1 - _id: 2 - _id: 3 - _id: 4 - _id: 5 - _id: 6 mongo-ruby-driver-2.21.3/spec/spec_tests/data/seed_list_discovery/000077500000000000000000000000001505113246500252515ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/seed_list_discovery/load-balanced/000077500000000000000000000000001505113246500277175ustar00rootroot00000000000000loadBalanced-directConnection.yml000066400000000000000000000007631505113246500362120ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/seed_list_discovery/load-balanced# The TXT record for test24.test.build.10gen.cc contains loadBalanced=true. # DRIVERS-1721 introduced this test as passing. uri: "mongodb+srv://test24.test.build.10gen.cc/?directConnection=false" seeds: - localhost.test.build.10gen.cc:8000 hosts: # In LB mode, the driver does not do server discovery, so the hostname does # not get resolved to localhost:8000. - localhost.test.build.10gen.cc:8000 options: loadBalanced: true ssl: true directConnection: false ping: true loadBalanced-no-results.yml000066400000000000000000000002501505113246500350220ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/seed_list_discovery/load-balanceduri: "mongodb+srv://test4.test.build.10gen.cc/?loadBalanced=true" seeds: [] hosts: [] error: true comment: Should fail because no SRV records are present for this URI. 
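The seed_list_discovery fixtures that follow exercise `mongodb+srv://` resolution: the driver looks up SRV records to build the seed list and applies any options found in the accompanying TXT record. A rough Ruby sketch of that behavior (the hostname comes from the fixtures and only resolves in MongoDB's dedicated test DNS zone):

    require 'mongo'

    # SRV lookup seeds the topology; TXT-record options such as
    # replicaSet/authSource are merged in unless overridden in the URI.
    client = Mongo::Client.new('mongodb+srv://test5.test.build.10gen.cc/')
    p client.cluster.addresses      # seeds resolved from the SRV record
    p client.options[:replica_set]  # "repl0", supplied via the TXT record
    client.close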
loadBalanced-replicaSet-errors.yml000066400000000000000000000003771505113246500363260ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/seed_list_discovery/load-balanced# The TXT record for test24.test.build.10gen.cc contains loadBalanced=true. uri: "mongodb+srv://test24.test.build.10gen.cc/?replicaSet=replset" seeds: [] hosts: [] error: true comment: Should fail because loadBalanced=true is incompatible with replicaSet loadBalanced-true-multiple-hosts.yml000066400000000000000000000003021505113246500366530ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/seed_list_discovery/load-balanceduri: "mongodb+srv://test1.test.build.10gen.cc/?loadBalanced=true" seeds: [] hosts: [] error: true comment: Should fail because loadBalanced is true but the SRV record resolves to multiple hosts loadBalanced-true-txt.yml000066400000000000000000000005041505113246500345050ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/seed_list_discovery/load-balanceduri: "mongodb+srv://test24.test.build.10gen.cc/" seeds: - localhost.test.build.10gen.cc:8000 hosts: # In LB mode, the driver does not do server discovery, so the hostname does # not get resolved to localhost:8000. - localhost.test.build.10gen.cc:8000 options: loadBalanced: true ssl: true ping: true srvMaxHosts-conflicts_with_loadBalanced-true-txt.yml000066400000000000000000000003041505113246500421010ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/seed_list_discovery/load-balanceduri: "mongodb+srv://test24.test.build.10gen.cc/?srvMaxHosts=1" seeds: [] hosts: [] error: true comment: Should fail because positive integer for srvMaxHosts conflicts with loadBalanced=true (TXT) srvMaxHosts-conflicts_with_loadBalanced-true.yml000066400000000000000000000003171505113246500412700ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/seed_list_discovery/load-balanceduri: "mongodb+srv://test3.test.build.10gen.cc/?loadBalanced=true&srvMaxHosts=1" seeds: [] hosts: [] error: true comment: Should fail because positive integer for srvMaxHosts conflicts with loadBalanced=true srvMaxHosts-zero-txt.yml000066400000000000000000000004661505113246500344640ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/seed_list_discovery/load-balanced# loadBalanced=true (TXT) is permitted because srvMaxHosts is non-positive uri: "mongodb+srv://test24.test.build.10gen.cc/?srvMaxHosts=0" seeds: - localhost.test.build.10gen.cc:8000 hosts: - localhost.test.build.10gen.cc:8000 options: loadBalanced: true srvMaxHosts: 0 ssl: true ping: true mongo-ruby-driver-2.21.3/spec/spec_tests/data/seed_list_discovery/load-balanced/srvMaxHosts-zero.yml000066400000000000000000000005021505113246500337150ustar00rootroot00000000000000# loadBalanced=true is permitted because srvMaxHosts is non-positive uri: "mongodb+srv://test23.test.build.10gen.cc/?loadBalanced=true&srvMaxHosts=0" seeds: - localhost.test.build.10gen.cc:8000 hosts: - localhost.test.build.10gen.cc:8000 options: loadBalanced: true srvMaxHosts: 0 ssl: true ping: true mongo-ruby-driver-2.21.3/spec/spec_tests/data/seed_list_discovery/replica-set/000077500000000000000000000000001505113246500274615ustar00rootroot00000000000000direct-connection-false.yml000066400000000000000000000003641505113246500346270ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/seed_list_discovery/replica-seturi: "mongodb+srv://test3.test.build.10gen.cc/?directConnection=false" seeds: - localhost.test.build.10gen.cc:27017 
hosts: - localhost:27017 - localhost:27018 - localhost:27019 options: ssl: true directConnection: false direct-connection-true.yml000066400000000000000000000002701505113246500345100ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/seed_list_discovery/replica-seturi: "mongodb+srv://test3.test.build.10gen.cc/?directConnection=true" seeds: [] hosts: [] error: true comment: Should fail because directConnection=true is incompatible with SRV URIs. encoded-userinfo-and-db.yml000066400000000000000000000006231505113246500345020ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/seed_list_discovery/replica-seturi: "mongodb+srv://b*b%40f3tt%3D:%244to%40L8%3DMC@test3.test.build.10gen.cc/mydb%3F?replicaSet=repl0" seeds: - localhost.test.build.10gen.cc:27017 hosts: - localhost:27017 - localhost:27018 - localhost:27019 options: replicaSet: repl0 ssl: true parsed_options: user: "b*b@f3tt=" password: "$4to@L8=MC" db: "mydb?" comment: Encoded user, pass, and DB parse correctly loadBalanced-false-txt.yml000066400000000000000000000003321505113246500343610ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/seed_list_discovery/replica-seturi: "mongodb+srv://test21.test.build.10gen.cc/" seeds: - localhost.test.build.10gen.cc:27017 hosts: - localhost:27017 - localhost:27018 - localhost:27019 options: loadBalanced: false ssl: true longer-parent-in-return.yml000066400000000000000000000005031505113246500346210ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/seed_list_discovery/replica-seturi: "mongodb+srv://test18.test.build.10gen.cc/?replicaSet=repl0" seeds: - localhost.sub.test.build.10gen.cc:27017 hosts: - localhost:27017 - localhost:27018 - localhost:27019 options: replicaSet: repl0 ssl: true comment: Is correct, as returned host name shared the URI root "test.build.10gen.cc". misformatted-option.yml000066400000000000000000000002651505113246500341340ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/seed_list_discovery/replica-seturi: "mongodb+srv://test8.test.build.10gen.cc/" seeds: [] hosts: [] error: true comment: Should fail because the options in the TXT record are incorrectly formatted (misses value). mongo-ruby-driver-2.21.3/spec/spec_tests/data/seed_list_discovery/replica-set/no-results.yml000066400000000000000000000002261505113246500323170ustar00rootroot00000000000000uri: "mongodb+srv://test4.test.build.10gen.cc/" seeds: [] hosts: [] error: true comment: Should fail because no SRV records are present for this URI. mongo-ruby-driver-2.21.3/spec/spec_tests/data/seed_list_discovery/replica-set/not-enough-parts.yml000066400000000000000000000002321505113246500334130ustar00rootroot00000000000000uri: "mongodb+srv://10gen.cc/" seeds: [] hosts: [] error: true comment: Should fail because host in URI does not have {hostname}, {domainname} and {tld}. 
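Several of these fixtures are negative tests: the URI must be rejected before any connection is attempted. In the Ruby driver this class of failure surfaces at client construction, roughly as sketched below; treat the exact exception class as an assumption:

    require 'mongo'

    # An SRV hostname needs at least three domain levels, so this URI
    # (from the not-enough-parts fixture) is rejected up front.
    begin
      Mongo::Client.new('mongodb+srv://10gen.cc/')
    rescue Mongo::Error::InvalidURI => e
      warn "rejected: #{e.message}"
    end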
one-result-default-port.yml000066400000000000000000000003501505113246500346240ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/seed_list_discovery/replica-seturi: "mongodb+srv://test3.test.build.10gen.cc/?replicaSet=repl0" seeds: - localhost.test.build.10gen.cc:27017 hosts: - localhost:27017 - localhost:27018 - localhost:27019 options: replicaSet: repl0 ssl: true one-txt-record-multiple-strings.yml000066400000000000000000000003301505113246500363130ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/seed_list_discovery/replica-seturi: "mongodb+srv://test11.test.build.10gen.cc/" seeds: - localhost.test.build.10gen.cc:27017 hosts: - localhost:27017 - localhost:27018 - localhost:27019 options: replicaSet: repl0 ssl: true mongo-ruby-driver-2.21.3/spec/spec_tests/data/seed_list_discovery/replica-set/one-txt-record.yml000066400000000000000000000003561505113246500330620ustar00rootroot00000000000000uri: "mongodb+srv://test5.test.build.10gen.cc/" seeds: - localhost.test.build.10gen.cc:27017 hosts: - localhost:27017 - localhost:27018 - localhost:27019 options: replicaSet: repl0 authSource: thisDB ssl: true parent-part-mismatch1.yml000066400000000000000000000002661505113246500342520ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/seed_list_discovery/replica-seturi: "mongodb+srv://test14.test.build.10gen.cc/" seeds: [] hosts: [] error: true comment: Should fail because returned host name's part "not-test" mismatches URI parent part "test". parent-part-mismatch2.yml000066400000000000000000000002701505113246500342460ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/seed_list_discovery/replica-seturi: "mongodb+srv://test15.test.build.10gen.cc/" seeds: [] hosts: [] error: true comment: Should fail because returned host name's part "not-build" mismatches URI parent part "build". parent-part-mismatch3.yml000066400000000000000000000002701505113246500342470ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/seed_list_discovery/replica-seturi: "mongodb+srv://test16.test.build.10gen.cc/" seeds: [] hosts: [] error: true comment: Should fail because returned host name's part "not-10gen" mismatches URI parent part "10gen". parent-part-mismatch4.yml000066400000000000000000000002511505113246500342470ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/seed_list_discovery/replica-seturi: "mongodb+srv://test17.test.build.10gen.cc/" seeds: [] hosts: [] error: true comment: Should fail because returned host name's TLD "not-cc" mismatches URI TLD "cc". parent-part-mismatch5.yml000066400000000000000000000002721505113246500342530ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/seed_list_discovery/replica-seturi: "mongodb+srv://test19.test.build.10gen.cc/" seeds: [] hosts: [] error: true comment: Should fail because one of the returned host names' domain name parts "evil" mismatches "test". returned-parent-too-short.yml000066400000000000000000000002521505113246500351770ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/seed_list_discovery/replica-seturi: "mongodb+srv://test13.test.build.10gen.cc/" seeds: [] hosts: [] error: true comment: Should fail because returned host name's parent (build.10gen.cc) misses "test." 
returned-parent-wrong.yml000066400000000000000000000002471505113246500344010ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/seed_list_discovery/replica-seturi: "mongodb+srv://test12.test.build.10gen.cc/" seeds: [] hosts: [] error: true comment: Should fail because returned host name is too short and mismatches a parent. mongo-ruby-driver-2.21.3/spec/spec_tests/data/seed_list_discovery/replica-set/srv-service-name.yml000066400000000000000000000004471505113246500333770ustar00rootroot00000000000000uri: "mongodb+srv://test22.test.build.10gen.cc/?srvServiceName=customname" seeds: - localhost.test.build.10gen.cc:27017 - localhost.test.build.10gen.cc:27018 hosts: - localhost:27017 - localhost:27018 - localhost:27019 options: ssl: true srvServiceName: "customname" srvMaxHosts-conflicts_with_replicaSet-txt.yml000066400000000000000000000003031505113246500404270ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/seed_list_discovery/replica-seturi: "mongodb+srv://test5.test.build.10gen.cc/?srvMaxHosts=1" seeds: [] hosts: [] error: true comment: Should fail because positive integer for srvMaxHosts conflicts with replicaSet option (TXT) srvMaxHosts-conflicts_with_replicaSet.yml000066400000000000000000000003161505113246500376160ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/seed_list_discovery/replica-seturi: "mongodb+srv://test1.test.build.10gen.cc/?replicaSet=repl0&srvMaxHosts=1" seeds: [] hosts: [] error: true comment: Should fail because positive integer for srvMaxHosts conflicts with replicaSet option srvMaxHosts-equal_to_srv_records.yml000066400000000000000000000006771505113246500366620ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/seed_list_discovery/replica-set# When srvMaxHosts equals the number of SRV records, all hosts are added to the # seed list. # # The replicaSet URI option is omitted to avoid a URI validation error. uri: "mongodb+srv://test1.test.build.10gen.cc/?srvMaxHosts=2" numSeeds: 2 seeds: - localhost.test.build.10gen.cc:27017 - localhost.test.build.10gen.cc:27018 hosts: - localhost:27017 - localhost:27018 - localhost:27019 options: srvMaxHosts: 2 ssl: true srvMaxHosts-greater_than_srv_records.yml000066400000000000000000000006741505113246500375110ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/seed_list_discovery/replica-set# When srvMaxHosts is greater than the number of SRV records, all hosts are # added to the seed list. # # The replicaSet URI option is omitted to avoid a URI validation error. uri: "mongodb+srv://test1.test.build.10gen.cc/?srvMaxHosts=3" seeds: - localhost.test.build.10gen.cc:27017 - localhost.test.build.10gen.cc:27018 hosts: - localhost:27017 - localhost:27018 - localhost:27019 options: srvMaxHosts: 3 ssl: true srvMaxHosts-less_than_srv_records.yml000066400000000000000000000010551505113246500370200ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/seed_list_discovery/replica-set# When srvMaxHosts is less than the number of SRV records, a random subset of # hosts are added to the seed list. We cannot anticipate which hosts will be # selected, so this test uses numSeeds instead of seeds. Since this is a replica # set, all hosts should ultimately be discovered by SDAM. # # The replicaSet URI option is omitted to avoid a URI validation error. 
uri: "mongodb+srv://test1.test.build.10gen.cc/?srvMaxHosts=1" numSeeds: 1 hosts: - localhost:27017 - localhost:27018 - localhost:27019 options: srvMaxHosts: 1 ssl: true srvMaxHosts-zero-txt.yml000066400000000000000000000006301505113246500342170ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/seed_list_discovery/replica-set# When srvMaxHosts is zero, all hosts are added to the seed list. # # replicaSet (TXT) is permitted because srvMaxHosts is non-positive. uri: "mongodb+srv://test5.test.build.10gen.cc/?srvMaxHosts=0" seeds: - localhost.test.build.10gen.cc:27017 hosts: - localhost:27017 - localhost:27018 - localhost:27019 options: authSource: thisDB replicaSet: repl0 srvMaxHosts: 0 ssl: true mongo-ruby-driver-2.21.3/spec/spec_tests/data/seed_list_discovery/replica-set/srvMaxHosts-zero.yml000066400000000000000000000006661505113246500334720ustar00rootroot00000000000000# When srvMaxHosts is zero, all hosts are added to the seed list. # # replicaSet is permitted because srvMaxHosts is non-positive. uri: "mongodb+srv://test1.test.build.10gen.cc/?replicaSet=repl0&srvMaxHosts=0" seeds: - localhost.test.build.10gen.cc:27017 - localhost.test.build.10gen.cc:27018 hosts: - localhost:27017 - localhost:27018 - localhost:27019 options: replicaSet: repl0 srvMaxHosts: 0 ssl: true two-results-default-port.yml000066400000000000000000000004221505113246500350370ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/seed_list_discovery/replica-seturi: "mongodb+srv://test1.test.build.10gen.cc/?replicaSet=repl0" seeds: - localhost.test.build.10gen.cc:27017 - localhost.test.build.10gen.cc:27018 hosts: - localhost:27017 - localhost:27018 - localhost:27019 options: replicaSet: repl0 ssl: true two-results-nonstandard-port.yml000066400000000000000000000004221505113246500357260ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/seed_list_discovery/replica-seturi: "mongodb+srv://test2.test.build.10gen.cc/?replicaSet=repl0" seeds: - localhost.test.build.10gen.cc:27018 - localhost.test.build.10gen.cc:27019 hosts: - localhost:27017 - localhost:27018 - localhost:27019 options: replicaSet: repl0 ssl: true mongo-ruby-driver-2.21.3/spec/spec_tests/data/seed_list_discovery/replica-set/two-txt-records.yml000066400000000000000000000002101505113246500332620ustar00rootroot00000000000000uri: "mongodb+srv://test6.test.build.10gen.cc/" seeds: [] hosts: [] error: true comment: Should fail because there are two TXT records. txt-record-not-allowed-option.yml000066400000000000000000000002511505113246500357470ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/seed_list_discovery/replica-seturi: "mongodb+srv://test10.test.build.10gen.cc/?replicaSet=repl0" seeds: [] hosts: [] error: true comment: Should fail because socketTimeoutMS is not an allowed option. 
txt-record-with-overridden-ssl-option.yml000066400000000000000000000003711505113246500374360ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/seed_list_discovery/replica-seturi: "mongodb+srv://test5.test.build.10gen.cc/?ssl=false" seeds: - localhost.test.build.10gen.cc:27017 hosts: - localhost:27017 - localhost:27018 - localhost:27019 options: replicaSet: repl0 authSource: thisDB ssl: false txt-record-with-overridden-uri-option.yml000066400000000000000000000004021505113246500374270ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/seed_list_discovery/replica-seturi: "mongodb+srv://test5.test.build.10gen.cc/?authSource=otherDB" seeds: - localhost.test.build.10gen.cc:27017 hosts: - localhost:27017 - localhost:27018 - localhost:27019 options: replicaSet: repl0 authSource: otherDB ssl: true txt-record-with-unallowed-option.yml000066400000000000000000000002151505113246500364650ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/seed_list_discovery/replica-seturi: "mongodb+srv://test7.test.build.10gen.cc/" seeds: [] hosts: [] error: true comment: Should fail because "ssl" is not an allowed option. uri-with-admin-database.yml000066400000000000000000000005041505113246500345240ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/seed_list_discovery/replica-seturi: "mongodb+srv://test1.test.build.10gen.cc/adminDB?replicaSet=repl0" seeds: - localhost.test.build.10gen.cc:27017 - localhost.test.build.10gen.cc:27018 hosts: - localhost:27017 - localhost:27018 - localhost:27019 options: replicaSet: repl0 ssl: true parsed_options: auth_database: adminDB mongo-ruby-driver-2.21.3/spec/spec_tests/data/seed_list_discovery/replica-set/uri-with-auth.yml000066400000000000000000000005171505113246500327160ustar00rootroot00000000000000uri: "mongodb+srv://auser:apass@test1.test.build.10gen.cc/?replicaSet=repl0" seeds: - localhost.test.build.10gen.cc:27017 - localhost.test.build.10gen.cc:27018 hosts: - localhost:27017 - localhost:27018 - localhost:27019 parsed_options: user: auser password: apass comment: Should preserve auth credentials mongo-ruby-driver-2.21.3/spec/spec_tests/data/seed_list_discovery/replica-set/uri-with-port.yml000066400000000000000000000002501505113246500327330ustar00rootroot00000000000000uri: "mongodb+srv://test5.test.build.10gen.cc:8123/?replicaSet=repl0" seeds: [] hosts: [] error: true comment: Should fail because the mongodb+srv URI includes a port. mongo-ruby-driver-2.21.3/spec/spec_tests/data/seed_list_discovery/replica-set/uri-with-two-hosts.yml000066400000000000000000000003051505113246500337170ustar00rootroot00000000000000uri: "mongodb+srv://test5.test.build.10gen.cc,test6.test.build.10gen.cc/?replicaSet=repl0" seeds: [] hosts: [] error: true comment: Should fail because the mongodb+srv URI includes two host names. mongo-ruby-driver-2.21.3/spec/spec_tests/data/seed_list_discovery/sharded/000077500000000000000000000000001505113246500266635ustar00rootroot00000000000000srvMaxHosts-equal_to_srv_records.yml000066400000000000000000000006071505113246500360550ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/seed_list_discovery/sharded# When srvMaxHosts equals the number of SRV records, all hosts are added to the # seed list. 
uri: "mongodb+srv://test1.test.build.10gen.cc/?srvMaxHosts=2" numSeeds: 2 seeds: - localhost.test.build.10gen.cc:27017 - localhost.test.build.10gen.cc:27018 hosts: - localhost.test.build.10gen.cc:27017 - localhost.test.build.10gen.cc:27018 options: srvMaxHosts: 2 ssl: true srvMaxHosts-greater_than_srv_records.yml000066400000000000000000000006041505113246500367040ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/seed_list_discovery/sharded# When srvMaxHosts is greater than the number of SRV records, all hosts are # added to the seed list. uri: "mongodb+srv://test1.test.build.10gen.cc/?srvMaxHosts=3" seeds: - localhost.test.build.10gen.cc:27017 - localhost.test.build.10gen.cc:27018 hosts: - localhost.test.build.10gen.cc:27017 - localhost.test.build.10gen.cc:27018 options: srvMaxHosts: 3 ssl: true srvMaxHosts-less_than_srv_records.yml000066400000000000000000000005731505113246500362260ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/seed_list_discovery/sharded# When srvMaxHosts is less than the number of SRV records, a random subset of # hosts are added to the seed list. We cannot anticipate which hosts will be # selected, so this test uses numSeeds and numHosts instead of seeds and hosts, # respectively. uri: "mongodb+srv://test1.test.build.10gen.cc/?srvMaxHosts=1" numSeeds: 1 numHosts: 1 options: srvMaxHosts: 1 ssl: true mongo-ruby-driver-2.21.3/spec/spec_tests/data/seed_list_discovery/sharded/srvMaxHosts-zero.yml000066400000000000000000000005401505113246500326630ustar00rootroot00000000000000# When srvMaxHosts is zero, all hosts are added to the seed list. uri: "mongodb+srv://test1.test.build.10gen.cc/?srvMaxHosts=0" seeds: - localhost.test.build.10gen.cc:27017 - localhost.test.build.10gen.cc:27018 hosts: - localhost.test.build.10gen.cc:27017 - localhost.test.build.10gen.cc:27018 options: srvMaxHosts: 0 ssl: true mongo-ruby-driver-2.21.3/spec/spec_tests/data/server_selection/000077500000000000000000000000001505113246500245625ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/server_selection/ReplicaSetNoPrimary/000077500000000000000000000000001505113246500304565ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/server_selection/ReplicaSetNoPrimary/read/000077500000000000000000000000001505113246500313715ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/server_selection/ReplicaSetNoPrimary/read/Nearest.yml000066400000000000000000000006171505113246500335210ustar00rootroot00000000000000topology_description: type: ReplicaSetNoPrimary servers: - &1 address: b:27017 avg_rtt_ms: 5 type: RSSecondary tags: data_center: nyc - &2 address: c:27017 avg_rtt_ms: 100 type: RSSecondary tags: data_center: nyc operation: read read_preference: mode: Nearest tag_sets: - data_center: nyc suitable_servers: - *1 - *2 in_latency_window: - *1 Nearest_multiple.yml000066400000000000000000000006241505113246500353530ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/server_selection/ReplicaSetNoPrimary/readtopology_description: type: ReplicaSetNoPrimary servers: - &1 address: b:27017 avg_rtt_ms: 10 type: RSSecondary tags: data_center: nyc - &2 address: c:27017 avg_rtt_ms: 20 type: RSSecondary tags: data_center: nyc operation: read read_preference: mode: Nearest tag_sets: - data_center: nyc suitable_servers: - *1 - *2 in_latency_window: - *1 - *2 
# file: spec/spec_tests/data/server_selection/ReplicaSetNoPrimary/read/Nearest.yml
topology_description:
  type: ReplicaSetNoPrimary
  servers:
    - &1
      address: b:27017
      avg_rtt_ms: 5
      type: RSSecondary
      tags:
        data_center: nyc
    - &2
      address: c:27017
      avg_rtt_ms: 100
      type: RSSecondary
      tags:
        data_center: nyc
operation: read
read_preference:
  mode: Nearest
  tag_sets:
    - data_center: nyc
suitable_servers:
  - *1
  - *2
in_latency_window:
  - *1

# file: spec/spec_tests/data/server_selection/ReplicaSetNoPrimary/read/Nearest_multiple.yml
topology_description:
  type: ReplicaSetNoPrimary
  servers:
    - &1
      address: b:27017
      avg_rtt_ms: 10
      type: RSSecondary
      tags:
        data_center: nyc
    - &2
      address: c:27017
      avg_rtt_ms: 20
      type: RSSecondary
      tags:
        data_center: nyc
operation: read
read_preference:
  mode: Nearest
  tag_sets:
    - data_center: nyc
suitable_servers:
  - *1
  - *2
in_latency_window:
  - *1
  - *2

# file: spec/spec_tests/data/server_selection/ReplicaSetNoPrimary/read/Nearest_non_matching.yml
topology_description:
  type: ReplicaSetNoPrimary
  servers:
    - address: b:27017
      avg_rtt_ms: 5
      type: RSSecondary
      tags:
        data_center: nyc
    - address: c:27017
      avg_rtt_ms: 100
      type: RSSecondary
      tags:
        data_center: nyc
operation: read
read_preference:
  mode: Nearest
  tag_sets:
    - data_center: sf
suitable_servers: []
in_latency_window: []

# file: spec/spec_tests/data/server_selection/ReplicaSetNoPrimary/read/PossiblePrimary.yml
# Test that PossiblePrimary isn't candidate for any read preference mode.
---
topology_description:
  type: ReplicaSetNoPrimary
  servers:
    - address: b:27017
      avg_rtt_ms: 5
      type: PossiblePrimary
operation: read
read_preference:
  mode: Primary
  tag_sets:
    - {}
suitable_servers: []
in_latency_window: []

# file: spec/spec_tests/data/server_selection/ReplicaSetNoPrimary/read/PossiblePrimaryNearest.yml
# Test that PossiblePrimary isn't candidate for any read preference mode.
---
topology_description:
  type: ReplicaSetNoPrimary
  servers:
    - address: b:27017
      avg_rtt_ms: 5
      type: PossiblePrimary
operation: read
read_preference:
  mode: Nearest
  tag_sets:
    - {}
suitable_servers: []
in_latency_window: []

# file: spec/spec_tests/data/server_selection/ReplicaSetNoPrimary/read/Primary.yml
topology_description:
  type: ReplicaSetNoPrimary
  servers:
    - address: b:27017
      avg_rtt_ms: 5
      type: RSSecondary
      tags:
        data_center: nyc
    - address: c:27017
      avg_rtt_ms: 100
      type: RSSecondary
      tags:
        data_center: nyc
operation: read
read_preference:
  mode: Primary
suitable_servers: []
in_latency_window: []

# file: spec/spec_tests/data/server_selection/ReplicaSetNoPrimary/read/PrimaryPreferred.yml
topology_description:
  type: ReplicaSetNoPrimary
  servers:
    - &1
      address: b:27017
      avg_rtt_ms: 5
      type: RSSecondary
      tags:
        data_center: nyc
    - &2
      address: c:27017
      avg_rtt_ms: 100
      type: RSSecondary
      tags:
        data_center: nyc
operation: read
read_preference:
  mode: PrimaryPreferred
  tag_sets:
    - {}
suitable_servers:
  - *1
  - *2
in_latency_window:
  - *1

# file: spec/spec_tests/data/server_selection/ReplicaSetNoPrimary/read/PrimaryPreferred_non_matching.yml
topology_description:
  type: ReplicaSetNoPrimary
  servers:
    - address: b:27017
      avg_rtt_ms: 5
      type: RSSecondary
      tags:
        data_center: nyc
    - address: c:27017
      avg_rtt_ms: 100
      type: RSSecondary
      tags:
        data_center: nyc
operation: read
read_preference:
  mode: PrimaryPreferred
  tag_sets:
    - data_center: sf
suitable_servers: []
in_latency_window: []

# file: spec/spec_tests/data/server_selection/ReplicaSetNoPrimary/read/Secondary.yml
topology_description:
  type: ReplicaSetNoPrimary
  servers:
    - &1
      address: b:27017
      avg_rtt_ms: 5
      type: RSSecondary
      tags:
        data_center: nyc
    - &2
      address: c:27017
      avg_rtt_ms: 100
      type: RSSecondary
      tags:
        data_center: nyc
operation: read
read_preference:
  mode: Secondary
  tag_sets:
    - data_center: nyc
suitable_servers:
  - *1
  - *2
in_latency_window:
  - *1
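Throughout these files the in_latency_window assertions follow one rule: among the suitable servers, a server stays in the window if its average RTT is within the local threshold (15 ms by default in the server selection spec) of the fastest suitable server. A plain-Ruby sketch, checked against the fixtures above:

    def in_latency_window(servers, local_threshold_ms = 15)
      min_rtt = servers.map { |s| s[:avg_rtt_ms] }.min
      servers.select { |s| s[:avg_rtt_ms] <= min_rtt + local_threshold_ms }
    end

    p in_latency_window([{ avg_rtt_ms: 10 }, { avg_rtt_ms: 20 }]).length # => 2 (Nearest_multiple.yml)
    p in_latency_window([{ avg_rtt_ms: 5 }, { avg_rtt_ms: 100 }]).length # => 1 (Nearest.yml)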
# file: spec/spec_tests/data/server_selection/ReplicaSetNoPrimary/read/SecondaryPreferred.yml
topology_description:
  type: ReplicaSetNoPrimary
  servers:
    - &1
      address: b:27017
      avg_rtt_ms: 5
      type: RSSecondary
      tags:
        data_center: nyc
    - &2
      address: c:27017
      avg_rtt_ms: 100
      type: RSSecondary
      tags:
        data_center: nyc
operation: read
read_preference:
  mode: SecondaryPreferred
  tag_sets:
    - data_center: nyc
suitable_servers:
  - *1
  - *2
in_latency_window:
  - *1

# file: spec/spec_tests/data/server_selection/ReplicaSetNoPrimary/read/SecondaryPreferred_non_matching.yml
topology_description:
  type: ReplicaSetNoPrimary
  servers:
    - address: b:27017
      avg_rtt_ms: 5
      type: RSSecondary
      tags:
        data_center: nyc
    - address: c:27017
      avg_rtt_ms: 100
      type: RSSecondary
      tags:
        data_center: nyc
operation: read
read_preference:
  mode: SecondaryPreferred
  tag_sets:
    - data_center: sf
suitable_servers: []
in_latency_window: []

# file: spec/spec_tests/data/server_selection/ReplicaSetNoPrimary/read/Secondary_multi_tags.yml
# Catch bugs like CDRIVER-1447, ensure clients select a server that matches all
# tags, even when the other server mismatches multiple tags.
---
topology_description:
  type: ReplicaSetNoPrimary
  servers:
    - &1
      address: b:27017
      avg_rtt_ms: 5
      type: RSSecondary
      tags:
        rack: one
        data_center: nyc
    - &2
      address: c:27017
      avg_rtt_ms: 5
      type: RSSecondary
      tags:
        rack: two
        data_center: sf
operation: read
read_preference:
  mode: Secondary
  tag_sets:
    - data_center: nyc
      rack: one
    - other_tag: doesntexist
suitable_servers:
  - *1
in_latency_window:
  - *1
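Secondary_multi_tags.yml (and the multi_tags2 variant that follows) pins down the all-tags rule: a server satisfies a tag set only if every tag in the set matches, and tag sets are tried in order until one matches at least one server. A plain-Ruby model of that matching, not the driver's selector classes:

    def match_tag_sets(servers, tag_sets)
      tag_sets.each do |set|
        hits = servers.select { |s| set.all? { |k, v| s[:tags][k] == v } }
        return hits unless hits.empty?
      end
      []
    end

    b = { address: 'b:27017', tags: { 'rack' => 'one', 'data_center' => 'nyc' } }
    c = { address: 'c:27017', tags: { 'rack' => 'two', 'data_center' => 'sf' } }
    sets = [{ 'data_center' => 'nyc', 'rack' => 'one' }, { 'other_tag' => 'doesntexist' }]
    p match_tag_sets([b, c], sets).map { |s| s[:address] } # => ["b:27017"], as the test asserts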
# file: spec/spec_tests/data/server_selection/ReplicaSetNoPrimary/read/Secondary_multi_tags2.yml
# Ensure clients select a server that matches all tags, even when the other
# server matches one tag and doesn't match the other.
---
topology_description:
  type: ReplicaSetNoPrimary
  servers:
    - &1
      address: b:27017
      avg_rtt_ms: 5
      type: RSSecondary
      tags:
        rack: one
        data_center: nyc
    - &2
      address: c:27017
      avg_rtt_ms: 5
      type: RSSecondary
      tags:
        rack: two # mismatch
        data_center: nyc # match
operation: read
read_preference:
  mode: Secondary
  tag_sets:
    - data_center: nyc
      rack: one
    - other_tag: doesntexist
suitable_servers:
  - *1
in_latency_window:
  - *1

# file: spec/spec_tests/data/server_selection/ReplicaSetNoPrimary/read/Secondary_non_matching.yml
topology_description:
  type: ReplicaSetNoPrimary
  servers:
    - address: b:27017
      avg_rtt_ms: 5
      type: RSSecondary
      tags:
        data_center: nyc
    - address: c:27017
      avg_rtt_ms: 100
      type: RSSecondary
      tags:
        data_center: nyc
operation: read
read_preference:
  mode: Secondary
  tag_sets:
    - data_center: sf
suitable_servers: []
in_latency_window: []

# file: spec/spec_tests/data/server_selection/ReplicaSetWithPrimary/read/Nearest.yml
topology_description:
  type: ReplicaSetWithPrimary
  servers:
    - &1
      address: b:27017
      avg_rtt_ms: 5
      type: RSSecondary
      tags:
        data_center: nyc
    - &3
      address: c:27017
      avg_rtt_ms: 100
      type: RSSecondary
      tags:
        data_center: nyc
    - &2
      address: a:27017
      avg_rtt_ms: 26
      type: RSPrimary
      tags:
        data_center: nyc
operation: read
read_preference:
  mode: Nearest
  tag_sets:
    - data_center: nyc
suitable_servers:
  - *1
  - *2
  - *3
in_latency_window:
  - *1

# file: spec/spec_tests/data/server_selection/ReplicaSetWithPrimary/read/Nearest_multiple.yml
topology_description:
  type: ReplicaSetWithPrimary
  servers:
    - &1
      address: b:27017
      avg_rtt_ms: 10
      type: RSSecondary
      tags:
        data_center: nyc
    - &3
      address: c:27017
      avg_rtt_ms: 100
      type: RSSecondary
      tags:
        data_center: nyc
    - &2
      address: a:27017
      avg_rtt_ms: 20
      type: RSPrimary
      tags:
        data_center: nyc
operation: read
read_preference:
  mode: Nearest
  tag_sets:
    - data_center: nyc
suitable_servers:
  - *1
  - *2
  - *3
in_latency_window:
  - *1
  - *2

# file: spec/spec_tests/data/server_selection/ReplicaSetWithPrimary/read/Nearest_non_matching.yml
topology_description:
  type: ReplicaSetWithPrimary
  servers:
    - address: b:27017
      avg_rtt_ms: 5
      type: RSSecondary
      tags:
        data_center: nyc
    - address: c:27017
      avg_rtt_ms: 100
      type: RSSecondary
      tags:
        data_center: nyc
    - address: a:27017
      avg_rtt_ms: 26
      type: RSPrimary
      tags:
        data_center: nyc
operation: read
read_preference:
  mode: Nearest
  tag_sets:
    - data_center: sf
suitable_servers: []
in_latency_window: []
# file: spec/spec_tests/data/server_selection/ReplicaSetWithPrimary/read/Primary.yml
topology_description:
  type: ReplicaSetWithPrimary
  servers:
    - address: b:27017
      avg_rtt_ms: 5
      type: RSSecondary
      tags:
        data_center: nyc
    - address: c:27017
      avg_rtt_ms: 100
      type: RSSecondary
      tags:
        data_center: nyc
    - &1
      address: a:27017
      avg_rtt_ms: 26
      type: RSPrimary
      tags:
        data_center: nyc
operation: read
read_preference:
  mode: Primary
suitable_servers:
  - *1
in_latency_window:
  - *1

# file: spec/spec_tests/data/server_selection/ReplicaSetWithPrimary/read/PrimaryPreferred.yml
topology_description:
  type: ReplicaSetWithPrimary
  servers:
    - address: b:27017
      avg_rtt_ms: 5
      type: RSSecondary
      tags:
        data_center: nyc
    - address: c:27017
      avg_rtt_ms: 100
      type: RSSecondary
      tags:
        data_center: nyc
    - &1
      address: a:27017
      avg_rtt_ms: 26
      type: RSPrimary
      tags:
        data_center: nyc
operation: read
read_preference:
  mode: PrimaryPreferred
  tag_sets:
    - {}
suitable_servers:
  - *1
in_latency_window:
  - *1

# file: spec/spec_tests/data/server_selection/ReplicaSetWithPrimary/read/PrimaryPreferred_non_matching.yml
topology_description:
  type: ReplicaSetWithPrimary
  servers:
    - address: b:27017
      avg_rtt_ms: 5
      type: RSSecondary
      tags:
        data_center: nyc
    - address: c:27017
      avg_rtt_ms: 100
      type: RSSecondary
      tags:
        data_center: nyc
    - &1
      address: a:27017
      avg_rtt_ms: 26
      type: RSPrimary
      tags:
        data_center: nyc
operation: read
read_preference:
  mode: PrimaryPreferred
  tag_sets:
    - data_center: sf
suitable_servers:
  - *1
in_latency_window:
  - *1

# file: spec/spec_tests/data/server_selection/ReplicaSetWithPrimary/read/Secondary.yml
topology_description:
  type: ReplicaSetWithPrimary
  servers:
    - &1
      address: b:27017
      avg_rtt_ms: 5
      type: RSSecondary
      tags:
        data_center: nyc
    - &2
      address: c:27017
      avg_rtt_ms: 100
      type: RSSecondary
      tags:
        data_center: nyc
    - address: a:27017
      avg_rtt_ms: 26
      type: RSPrimary
      tags:
        data_center: nyc
operation: read
read_preference:
  mode: Secondary
  tag_sets:
    - data_center: nyc
suitable_servers:
  - *1
  - *2
in_latency_window:
  - *1

# file: spec/spec_tests/data/server_selection/ReplicaSetWithPrimary/read/SecondaryPreferred.yml
topology_description:
  type: ReplicaSetWithPrimary
  servers:
    - &1
      address: b:27017
      avg_rtt_ms: 5
      type: RSSecondary
      tags:
        data_center: nyc
    - &2
      address: c:27017
      avg_rtt_ms: 100
      type: RSSecondary
      tags:
        data_center: nyc
    - address: a:27017
      avg_rtt_ms: 26
      type: RSPrimary
      tags:
        data_center: nyc
operation: read
read_preference:
  mode: SecondaryPreferred
  tag_sets:
    - data_center: nyc
suitable_servers:
  - *1
  - *2
in_latency_window:
  - *1

# file: spec/spec_tests/data/server_selection/ReplicaSetWithPrimary/read/SecondaryPreferred_non_matching.yml
topology_description:
  type: ReplicaSetWithPrimary
  servers:
    - address: b:27017
      avg_rtt_ms: 5
      type: RSSecondary
      tags:
        data_center: nyc
    - address: c:27017
      avg_rtt_ms: 100
      type: RSSecondary
      tags:
        data_center: nyc
    - &1
      address: a:27017
      avg_rtt_ms: 26
      type: RSPrimary
      tags:
        data_center: nyc
operation: read
read_preference:
  mode: SecondaryPreferred
  tag_sets:
    - data_center: sf
suitable_servers:
  - *1
in_latency_window:
  - *1
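The *_non_matching files around here show that tag sets are applied only to secondaries: SecondaryPreferred falls back to the primary when no secondary matches, while plain Secondary returns nothing. A simplified plain-Ruby model of those two modes (not the driver's selector):

    def select_servers(servers, mode, tag_sets)
      primary     = servers.find { |s| s[:type] == :primary }
      secondaries = servers.select { |s| s[:type] == :secondary }
      matching = secondaries.select do |s|
        tag_sets.any? { |set| set.all? { |k, v| s[:tags][k] == v } }
      end
      case mode
      when :secondary           then matching
      when :secondary_preferred then matching.empty? ? [primary].compact : matching
      end
    end

    servers = [
      { type: :primary,   tags: { 'data_center' => 'nyc' } },
      { type: :secondary, tags: { 'data_center' => 'nyc' } },
    ]
    p select_servers(servers, :secondary, [{ 'data_center' => 'sf' }]).length           # => 0
    p select_servers(servers, :secondary_preferred, [{ 'data_center' => 'sf' }]).length # => 1 (the primary)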
# file: spec/spec_tests/data/server_selection/ReplicaSetWithPrimary/read/SecondaryPreferred_tags.yml
# Attempt to select the secondary, except its tag doesn't match.
# Fall back to primary.
---
topology_description:
  type: ReplicaSetWithPrimary
  servers:
    - &1
      address: a:27017
      avg_rtt_ms: 5
      type: RSPrimary
      tags:
        data_center: nyc
    - &2
      address: b:27017
      avg_rtt_ms: 5
      type: RSSecondary
      tags:
        data_center: sf # No match.
operation: read
read_preference:
  mode: SecondaryPreferred
  tag_sets:
    - data_center: nyc
suitable_servers:
  - *1
in_latency_window:
  - *1

# file: spec/spec_tests/data/server_selection/ReplicaSetWithPrimary/read/Secondary_non_matching.yml
topology_description:
  type: ReplicaSetWithPrimary
  servers:
    - address: b:27017
      avg_rtt_ms: 5
      type: RSSecondary
      tags:
        data_center: nyc
    - address: c:27017
      avg_rtt_ms: 100
      type: RSSecondary
      tags:
        data_center: nyc
    - address: a:27017
      avg_rtt_ms: 26
      type: RSPrimary
      tags:
        data_center: nyc
operation: read
read_preference:
  mode: Secondary
  tag_sets:
    - data_center: sf
suitable_servers: []
in_latency_window: []

# file: spec/spec_tests/data/server_selection/Sharded/read/Nearest.yml
topology_description:
  type: Sharded
  servers:
    - &1
      address: g:27017
      avg_rtt_ms: 5
      type: Mongos
    - &2
      address: h:27017
      avg_rtt_ms: 35
      type: Mongos
operation: read
read_preference:
  mode: Nearest
  tag_sets:
    - data_center: nyc
suitable_servers:
  - *1
  - *2
in_latency_window:
  - *1

# file: spec/spec_tests/data/server_selection/Sharded/read/Primary.yml
topology_description:
  type: Sharded
  servers:
    - &1
      address: g:27017
      avg_rtt_ms: 5
      type: Mongos
    - &2
      address: h:27017
      avg_rtt_ms: 35
      type: Mongos
operation: read
read_preference:
  mode: Primary
suitable_servers:
  - *1
  - *2
in_latency_window:
  - *1

# file: spec/spec_tests/data/server_selection/Sharded/read/PrimaryPreferred.yml
topology_description:
  type: Sharded
  servers:
    - &1
      address: g:27017
      avg_rtt_ms: 5
      type: Mongos
    - &2
      address: h:27017
      avg_rtt_ms: 35
      type: Mongos
operation: read
read_preference:
  mode: PrimaryPreferred
  tag_sets:
    - data_center: nyc
suitable_servers:
  - *1
  - *2
in_latency_window:
  - *1

# file: spec/spec_tests/data/server_selection/Sharded/read/Secondary.yml
topology_description:
  type: Sharded
  servers:
    - &1
      address: g:27017
      avg_rtt_ms: 5
      type: Mongos
    - &2
      address: h:27017
      avg_rtt_ms: 35
      type: Mongos
operation: read
read_preference:
  mode: Secondary
  tag_sets:
    - data_center: nyc
suitable_servers:
  - *1
  - *2
in_latency_window:
  - *1

# file: spec/spec_tests/data/server_selection/Sharded/read/SecondaryPreferred.yml
topology_description:
  type: Sharded
  servers:
    - &1
      address: g:27017
      avg_rtt_ms: 5
      type: Mongos
    - &2
      address: h:27017
      avg_rtt_ms: 35
      type: Mongos
operation: read
read_preference:
  mode: SecondaryPreferred
  tag_sets:
    - data_center: nyc
suitable_servers:
  - *1
  - *2
in_latency_window:
  - *1
# file: spec/spec_tests/data/server_selection/Single/read/SecondaryPreferred.yml
topology_description:
  type: Single
  servers:
    - &1
      address: a:27017
      avg_rtt_ms: 5
      type: Standalone
      tags:
        data_center: dc
operation: read
read_preference:
  mode: SecondaryPreferred
  tag_sets:
    - data_center: nyc
suitable_servers:
  - *1
in_latency_window:
  - *1

# file: spec/spec_tests/data/server_selection/Unknown/read/SecondaryPreferred.yml
topology_description:
  type: Unknown
  servers: []
operation: read
read_preference:
  mode: SecondaryPreferred
  tag_sets:
    - data_center: nyc
suitable_servers: []
in_latency_window: []

# file: spec/spec_tests/data/server_selection/Unknown/read/ghost.yml
topology_description:
  type: Unknown
  servers:
    - address: a:27017
      avg_rtt_ms: 5
      type: RSGhost
operation: read
read_preference:
  mode: Nearest
suitable_servers: []
in_latency_window: []

# file: spec/spec_tests/data/server_selection/Unknown/write/ghost.yml
topology_description:
  type: Unknown
  servers:
    - address: a:27017
      avg_rtt_ms: 5
      type: RSGhost
operation: write
read_preference:
  mode: Nearest
suitable_servers: []
in_latency_window: []

# file: spec/spec_tests/data/server_selection_rtt/first_value.yml
---
avg_rtt_ms: 'NULL'
new_rtt_ms: 10
new_avg_rtt: 10

# file: spec/spec_tests/data/server_selection_rtt/first_value_zero.yml
---
avg_rtt_ms: 'NULL'
new_rtt_ms: 0
new_avg_rtt: 0

# file: spec/spec_tests/data/server_selection_rtt/value_test_1.yml
---
avg_rtt_ms: 0
new_rtt_ms: 5
new_avg_rtt: 1.0

# file: spec/spec_tests/data/server_selection_rtt/value_test_2.yml
---
avg_rtt_ms: 3.1
new_rtt_ms: 36
new_avg_rtt: 9.68

# file: spec/spec_tests/data/server_selection_rtt/value_test_3.yml
---
avg_rtt_ms: 9.12
new_rtt_ms: 9.12
new_avg_rtt: 9.12
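The server_selection_rtt fixtures (including value_test_4 and value_test_5 below) are all consistent with an exponentially weighted moving average that gives the newest sample a weight of 0.2 and takes the first sample verbatim when the average is 'NULL'. A small plain-Ruby check against the fixture values; the 0.2 constant is inferred from the data, not quoted from the driver:

    # EWMA consistent with the fixtures: new_avg = 0.2 * new + 0.8 * old.
    def new_avg_rtt(avg_rtt_ms, new_rtt_ms, alpha = 0.2)
      return new_rtt_ms if avg_rtt_ms.nil? # the 'NULL' first_value cases
      alpha * new_rtt_ms + (1 - alpha) * avg_rtt_ms
    end

    p new_avg_rtt(nil, 10) # => 10              (first_value.yml)
    p new_avg_rtt(0, 5)    # => 1.0             (value_test_1.yml)
    p new_avg_rtt(3.1, 36) # => ~9.68           (value_test_2.yml)
    p new_avg_rtt(1, 1000) # => ~200.8          (value_test_4.yml)
    p new_avg_rtt(0, 0.25) # => 0.05            (value_test_5.yml)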
# file: spec/spec_tests/data/server_selection_rtt/value_test_4.yml
---
avg_rtt_ms: 1
new_rtt_ms: 1000
new_avg_rtt: 200.8

# file: spec/spec_tests/data/server_selection_rtt/value_test_5.yml
---
avg_rtt_ms: 0
new_rtt_ms: 0.25
new_avg_rtt: 0.05

# file: spec/spec_tests/data/sessions_unified/driver-sessions-dirty-session-errors.yml
description: "driver-sessions-dirty-session-errors"
schemaVersion: "1.0"
runOnRequirements:
  - minServerVersion: "4.0"
    topologies: [ replicaset ]
  - minServerVersion: "4.1.8"
    topologies: [ sharded ]
createEntities:
  - client:
      id: &client0 client0
      useMultipleMongoses: false
      observeEvents: [ commandStartedEvent ]
  - database:
      id: &database0 database0
      client: *client0
      databaseName: &database0Name session-tests
  - collection:
      id: &collection0 collection0
      database: *database0
      collectionName: &collection0Name test
  - session:
      id: &session0 session0
      client: *client0
initialData:
  - collectionName: *collection0Name
    databaseName: *database0Name
    documents:
      - { _id: 1 }
tests:
  - description: "Dirty explicit session is discarded (insert)"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *client0
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: [ insert ]
              closeConnection: true
      - name: assertSessionNotDirty
        object: testRunner
        arguments:
          session: *session0
      - name: insertOne
        object: *collection0
        arguments:
          session: *session0
          document: { _id: 2 }
        expectResult: { $$unsetOrMatches: { insertedId: { $$unsetOrMatches: 2 } } }
      - name: assertSessionDirty
        object: testRunner
        arguments:
          session: *session0
      - name: insertOne
        object: *collection0
        arguments:
          session: *session0
          document: { _id: 3 }
        expectResult: { $$unsetOrMatches: { insertedId: { $$unsetOrMatches: 3 } } }
      - name: assertSessionDirty
        object: testRunner
        arguments:
          session: *session0
      - name: endSession
        object: *session0
      - &find_with_implicit_session
        name: find
        object: *collection0
        arguments:
          filter: { _id: -1 }
        expectResult: []
      - name: assertDifferentLsidOnLastTwoCommands
        object: testRunner
        arguments:
          client: *client0
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent: &insert_attempt
              command:
                insert: *collection0Name
                documents:
                  - { _id: 2 }
                ordered: true
                lsid: { $$sessionLsid: *session0 }
                txnNumber: 1
              commandName: insert
              databaseName: *database0Name
          - commandStartedEvent: *insert_attempt
          - commandStartedEvent:
              command:
                insert: *collection0Name
                documents:
                  - { _id: 3 }
                ordered: true
                lsid: { $$sessionLsid: *session0 }
                txnNumber: 2
              commandName: insert
              databaseName: *database0Name
          - commandStartedEvent: &find_with_implicit_session_event
              command:
                find: *collection0Name
                filter: { _id: -1 }
                # There is no explicit session to use with $$sessionLsid, so
                # just assert an arbitrary lsid document
                lsid: { $$type: object }
              commandName: find
              databaseName: *database0Name
    outcome:
      - collectionName: *collection0Name
        databaseName: *database0Name
        documents:
          - { _id: 1 }
          - { _id: 2 }
          - { _id: 3 }
  - description: "Dirty explicit session is discarded (findAndModify)"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *client0
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: [ findAndModify ]
              closeConnection: true
      - name: assertSessionNotDirty
        object: testRunner
        arguments:
          session: *session0
      - name: findOneAndUpdate
        object: *collection0
        arguments:
          session: *session0
          filter: { _id: 1 }
          update: { $inc: { x: 1 } }
          returnDocument: Before
        expectResult: { _id: 1 }
      - name: assertSessionDirty
        object: testRunner
        arguments:
          session: *session0
      - name: endSession
        object: *session0
      - *find_with_implicit_session
      - name: assertDifferentLsidOnLastTwoCommands
        object: testRunner
        arguments:
          client: *client0
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent: &findAndModify_attempt
              command:
                findAndModify: *collection0Name
                query: { _id: 1 }
                update: { $inc: { x: 1 } }
                new: false
                lsid: { $$sessionLsid: *session0 }
                txnNumber: 1
                readConcern: { $$exists: false }
                writeConcern: { $$exists: false }
              commandName: findAndModify
              databaseName: *database0Name
          - commandStartedEvent: *findAndModify_attempt
          - commandStartedEvent: *find_with_implicit_session_event
    outcome:
      - collectionName: *collection0Name
        databaseName: *database0Name
        documents:
          - { _id: 1, x: 1 }
  - description: "Dirty implicit session is discarded (insert)"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *client0
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: [ insert ]
              closeConnection: true
      - name: insertOne
        object: *collection0
        arguments:
          document: { _id: 2 }
        expectResult: { $$unsetOrMatches: { insertedId: { $$unsetOrMatches: 2 } } }
      - *find_with_implicit_session
      - name: assertDifferentLsidOnLastTwoCommands
        object: testRunner
        arguments:
          client: *client0
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent: &insert_attempt
              command:
                insert: *collection0Name
                documents:
                  - { _id: 2 }
                ordered: true
                lsid: { $$type: object }
                txnNumber: 1
              commandName: insert
              databaseName: *database0Name
          - commandStartedEvent: *insert_attempt
          - commandStartedEvent: *find_with_implicit_session_event
    outcome:
      - collectionName: *collection0Name
        databaseName: *database0Name
        documents:
          - { _id: 1 }
          - { _id: 2 }
  - description: "Dirty implicit session is discarded (findAndModify)"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *client0
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: [ findAndModify ]
              closeConnection: true
      - name: findOneAndUpdate
        object: *collection0
        arguments:
          filter: { _id: 1 }
          update: { $inc: { x: 1 } }
          returnDocument: Before
        expectResult: { _id: 1 }
      - *find_with_implicit_session
      - name: assertDifferentLsidOnLastTwoCommands
        object: testRunner
        arguments:
          client: *client0
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent: &findAndModify_attempt
              command:
                findAndModify: *collection0Name
                query: { _id: 1 }
                update: { $inc: { x: 1 } }
                new: false
                lsid: { $$type: object }
                txnNumber: 1
                readConcern: { $$exists: false }
                writeConcern: { $$exists: false }
              commandName: findAndModify
              databaseName: *database0Name
          - commandStartedEvent: *findAndModify_attempt
          - commandStartedEvent: *find_with_implicit_session_event
    outcome:
      - collectionName: *collection0Name
        databaseName: *database0Name
        documents:
          - { _id: 1, x: 1 }
  - description: "Dirty implicit session is discarded (read returning cursor)"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *client0
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: [ aggregate ]
              closeConnection: true
      - name: aggregate
        object: *collection0
        arguments:
          pipeline: [ { $project: { _id: 1 } } ]
        expectResult: [ { _id: 1 } ]
      - *find_with_implicit_session
      - name: assertDifferentLsidOnLastTwoCommands
        object: testRunner
        arguments:
          client: *client0
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent: &aggregate_attempt
              command:
                aggregate: *collection0Name
                pipeline: [ { $project: { _id: 1 } } ]
                lsid: { $$type: object }
              commandName: aggregate
              databaseName: *database0Name
          - commandStartedEvent: *aggregate_attempt
          - commandStartedEvent: *find_with_implicit_session_event
    outcome:
      - collectionName: *collection0Name
        databaseName: *database0Name
        documents:
          - { _id: 1 }
  - description: "Dirty implicit session is discarded (read not returning cursor)"
    operations:
      - name: failPoint
        object: testRunner
        arguments:
          client: *client0
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: [ aggregate ]
              closeConnection: true
      - name: countDocuments
        object: *collection0
        arguments:
          filter: {}
        expectResult: 1
      - *find_with_implicit_session
      - name: assertDifferentLsidOnLastTwoCommands
        object: testRunner
        arguments:
          client: *client0
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent: &countDocuments_attempt
              command:
                aggregate: *collection0Name
                pipeline: [ { $match: {} }, { $group: { _id: 1, n: { $sum: 1 } } } ]
                lsid: { $$type: object }
              commandName: aggregate
              databaseName: *database0Name
          - commandStartedEvent: *countDocuments_attempt
          - commandStartedEvent: *find_with_implicit_session_event
    outcome:
      - collectionName: *collection0Name
        databaseName: *database0Name
        documents:
          - { _id: 1 }
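The dirty-session tests above all pin down the same rule: a session is marked dirty after a network error, and a dirty session's ID is discarded instead of being returned to the pool on endSession, so the next implicit session gets a different lsid. A plain-Ruby model of that pool behaviour, not the driver's Mongo::Session internals:

    class NaiveSessionPool
      def initialize
        @available = []
        @counter = 0
      end

      # Reuse the most recently returned session id, else mint a new one.
      def acquire
        @available.pop || (@counter += 1)
      end

      # Dirty sessions are dropped instead of being pooled.
      def release(id, dirty:)
        @available.push(id) unless dirty
      end
    end

    pool = NaiveSessionPool.new
    id = pool.acquire
    pool.release(id, dirty: true) # a network error marked the session dirty
    p pool.acquire == id          # => false: the next command gets a new lsid,
                                  #    as assertDifferentLsidOnLastTwoCommands expects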
# file: spec/spec_tests/data/sessions_unified/driver-sessions-server-support.yml
description: "driver-sessions-server-support"
schemaVersion: "1.0"
runOnRequirements:
  - minServerVersion: "3.6"
createEntities:
  - client:
      id: &client0 client0
      useMultipleMongoses: false
      observeEvents: [ commandStartedEvent ]
  - database:
      id: &database0 database0
      client: *client0
      databaseName: &database0Name session-tests
  - collection:
      id: &collection0 collection0
      database: *database0
      collectionName: &collection0Name test
  - session:
      id: &session0 session0
      client: *client0
initialData:
  - collectionName: *collection0Name
    databaseName: *database0Name
    documents:
      - { _id: 1 }
tests:
  - description: "Server supports explicit sessions"
    operations:
      - name: assertSessionNotDirty
        object: testRunner
        arguments:
          session: *session0
      - name: insertOne
        object: *collection0
        arguments:
          session: *session0
          document: { _id: 2 }
        expectResult: { $$unsetOrMatches: { insertedId: { $$unsetOrMatches: 2 } } }
      - name: assertSessionNotDirty
        object: testRunner
        arguments:
          session: *session0
      - name: endSession
        object: *session0
      - &find_with_implicit_session
        name: find
        object: *collection0
        arguments:
          filter: { _id: -1 }
        expectResult: []
      - name: assertSameLsidOnLastTwoCommands
        object: testRunner
        arguments:
          client: *client0
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                insert: *collection0Name
                documents: [ { _id: 2 } ]
                ordered: true
                lsid: { $$sessionLsid: *session0 }
              commandName: insert
              databaseName: *database0Name
          - commandStartedEvent:
              command:
                find: *collection0Name
                filter: { _id: -1 }
                lsid: { $$sessionLsid: *session0 }
              commandName: find
              databaseName: *database0Name
    outcome:
      - collectionName: *collection0Name
        databaseName: *database0Name
        documents:
          - { _id: 1 }
          - { _id: 2 }
  - description: "Server supports implicit sessions"
    operations:
      - name: insertOne
        object: *collection0
        arguments:
          document: { _id: 2 }
        expectResult: { $$unsetOrMatches: { insertedId: { $$unsetOrMatches: 2 } } }
      - *find_with_implicit_session
      - name: assertSameLsidOnLastTwoCommands
        object: testRunner
        arguments:
          client: *client0
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                insert: *collection0Name
                documents:
                  - { _id: 2 }
                ordered: true
                # There is no explicit session to use with $$sessionLsid, so
                # just assert an arbitrary lsid document
                lsid: { $$type: object }
              commandName: insert
              databaseName: *database0Name
          - commandStartedEvent:
              command:
                find: *collection0Name
                filter: { _id: -1 }
                lsid: { $$type: object }
              commandName: find
              databaseName: *database0Name
    outcome:
      - collectionName: *collection0Name
        databaseName: *database0Name
        documents:
          - { _id: 1 }
          - { _id: 2 }
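The assertSameLsidOnLastTwoCommands assertions above can be reproduced with the Ruby driver's command monitoring API. A sketch that records the lsid of each started command; it assumes a reachable deployment on localhost and the database/collection names are illustrative:

    require 'mongo'

    # Collects the lsid document from every commandStartedEvent.
    class LsidRecorder
      attr_reader :lsids

      def initialize
        @lsids = []
      end

      def started(event)
        @lsids << event.command['lsid']
      end

      def succeeded(event); end

      def failed(event); end
    end

    recorder = LsidRecorder.new
    client = Mongo::Client.new(['localhost:27017'], database: 'session-tests')
    client.subscribe(Mongo::Monitoring::COMMAND, recorder)

    client['test'].find(_id: -1).to_a
    client['test'].find(_id: -1).to_a
    # Back-to-back implicit sessions draw from the same pool, so both finds
    # are expected to carry the same lsid, as the test above asserts.
    p recorder.lsids.uniq.length # typically => 1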
# file: spec/spec_tests/data/sessions_unified/implicit-sessions-default-causal-consistency.yml
description: "implicit sessions default causal consistency"
schemaVersion: "1.3"
runOnRequirements:
  - minServerVersion: "4.2"
    topologies: [replicaset, sharded, load-balanced]
createEntities:
  - client:
      id: &client0 client0
      useMultipleMongoses: false
      observeEvents: [commandStartedEvent]
  - database:
      id: &database0 database0
      client: *client0
      databaseName: &databaseName implicit-cc-tests
  - collection:
      id: &collectionDefault collectionDefault
      database: *database0
      collectionName: &collectionNameDefault coll-default
  - collection:
      id: &collectionSnapshot collectionSnapshot
      database: *database0
      collectionName: &collectionNameSnapshot coll-snapshot
      collectionOptions:
        readConcern: { level: snapshot }
  - collection:
      id: &collectionlinearizable collectionlinearizable
      database: *database0
      collectionName: &collectionNamelinearizable coll-linearizable
      collectionOptions:
        readConcern: { level: linearizable }
initialData:
  - collectionName: *collectionNameDefault
    databaseName: *databaseName
    documents:
      - { _id: 1, x: default }
  - collectionName: *collectionNameSnapshot
    databaseName: *databaseName
    documents:
      - { _id: 1, x: snapshot }
  - collectionName: *collectionNamelinearizable
    databaseName: *databaseName
    documents:
      - { _id: 1, x: linearizable }
tests:
  - description: "readConcern is not sent on retried read in implicit session when readConcern level is not specified"
    operations:
      - &failPointCommand
        name: failPoint
        object: testRunner
        arguments:
          client: *client0
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: [find]
              errorCode: 11600 # InterruptedAtShutdown
      - name: find
        object: *collectionDefault
        arguments:
          filter: {}
        expectResult: [{ _id: 1, x: default }]
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent: &commandStartedEventDefault
              command:
                find: *collectionNameDefault
                filter: {}
                readConcern: { $$exists: false }
              databaseName: *databaseName
          - commandStartedEvent: *commandStartedEventDefault
  - description: "afterClusterTime is not sent on retried read in implicit session when readConcern level is snapshot"
    runOnRequirements:
      - minServerVersion: "5.0"
    operations:
      - *failPointCommand
      - name: find
        object: *collectionSnapshot
        arguments:
          filter: {}
        expectResult: [{ _id: 1, x: snapshot }]
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent: &commandStartedEventSnapshot
              command:
                find: *collectionNameSnapshot
                filter: {}
                readConcern: { level: snapshot, afterClusterTime: { $$exists: false } }
              databaseName: *databaseName
          - commandStartedEvent: *commandStartedEventSnapshot
  - description: "afterClusterTime is not sent on retried read in implicit session when readConcern level is linearizable"
    operations:
      - *failPointCommand
      - name: find
        object: *collectionlinearizable
        arguments:
          filter: {}
        expectResult: [{ _id: 1, x: linearizable }]
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent: &commandStartedEventLinearizable
              command:
                find: *collectionNamelinearizable
                filter: {}
                readConcern: { level: linearizable, afterClusterTime: { $$exists: false } }
              databaseName: *databaseName
          - commandStartedEvent: *commandStartedEventLinearizable

# file: spec/spec_tests/data/sessions_unified/snapshot-sessions-not-supported-client-error.yml
description: snapshot-sessions-not-supported-client-error
schemaVersion: "1.0"
runOnRequirements:
  - minServerVersion: "3.6"
    maxServerVersion: "4.4.99"
createEntities:
  - client:
      id: &client0 client0
      observeEvents: [ commandStartedEvent, commandFailedEvent ]
  - database:
      id: &database0Name database0
      client: *client0
      databaseName: *database0Name
  - collection:
      id: &collection0Name collection0
      database: *database0Name
      collectionName: *collection0Name
  - session:
      id: session0
      client: client0
      sessionOptions:
        snapshot: true
initialData:
  - collectionName: *collection0Name
    databaseName: *database0Name
    documents:
      - { _id: 1, x: 11 }
tests:
  - description: Client error on find with snapshot
    operations:
      - name: find
        object: collection0
        arguments:
          session: session0
          filter: {}
        expectError:
          isClientError: true
          errorContains: Snapshot reads require MongoDB 5.0 or later
    expectEvents:
      - client: *client0
        events: []
  - description: Client error on aggregate with snapshot
    operations:
      - name: aggregate
        object: collection0
        arguments:
          session: session0
          pipeline: []
        expectError:
          isClientError: true
          errorContains: Snapshot reads require MongoDB 5.0 or later
    expectEvents:
      - client: *client0
        events: []
  - description: Client error on distinct with snapshot
    operations:
      - name: distinct
        object: collection0
        arguments:
          fieldName: x
          filter: {}
          session: session0
        expectError:
          isClientError: true
          errorContains: Snapshot reads require MongoDB 5.0 or later
    expectEvents:
      - client: *client0
        events: []
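The client-error file above requires the driver to fail snapshot reads locally on servers older than 5.0, without sending a command (hence the empty events arrays). Roughly equivalent driver usage; this sketch assumes a pre-5.0 deployment on localhost and that the installed driver version supports the snapshot: true session option:

    require 'mongo'

    client = Mongo::Client.new(['localhost:27017'], database: 'database0')
    session = client.start_session(snapshot: true)
    begin
      client['collection0'].find({}, session: session).to_a
    rescue Mongo::Error => e
      puts e.message # expected to mention "Snapshot reads require MongoDB 5.0 or later"
    end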
# file: spec/spec_tests/data/sessions_unified/snapshot-sessions-not-supported-server-error.yml
description: snapshot-sessions-not-supported-server-error
schemaVersion: "1.0"
runOnRequirements:
  - minServerVersion: "5.0"
    topologies: [ single ]
createEntities:
  - client:
      id: &client0 client0
      observeEvents: [ commandStartedEvent, commandFailedEvent ]
  - database:
      id: &database0Name database0
      client: *client0
      databaseName: *database0Name
  - collection:
      id: &collection0Name collection0
      database: *database0Name
      collectionName: *collection0Name
  - session:
      id: session0
      client: client0
      sessionOptions:
        snapshot: true
initialData:
  - collectionName: *collection0Name
    databaseName: *database0Name
    documents:
      - { _id: 1, x: 11 }
tests:
  - description: Server returns an error on find with snapshot
    operations:
      - name: find
        object: collection0
        arguments:
          session: session0
          filter: {}
        expectError:
          isError: true
          isClientError: false
    expectEvents:
      - client: client0
        events:
          - commandStartedEvent:
              command:
                find: collection0
                readConcern:
                  level: snapshot
                  atClusterTime:
                    "$$exists": false
          - commandFailedEvent:
              commandName: find
  - description: Server returns an error on aggregate with snapshot
    operations:
      - name: aggregate
        object: collection0
        arguments:
          session: session0
          pipeline: []
        expectError:
          isError: true
          isClientError: false
    expectEvents:
      - client: client0
        events:
          - commandStartedEvent:
              command:
                aggregate: collection0
                readConcern:
                  level: snapshot
                  atClusterTime:
                    "$$exists": false
          - commandFailedEvent:
              commandName: aggregate
  - description: Server returns an error on distinct with snapshot
    operations:
      - name: distinct
        object: collection0
        arguments:
          fieldName: x
          filter: {}
          session: session0
        expectError:
          isError: true
          isClientError: false
    expectEvents:
      - client: client0
        events:
          - commandStartedEvent:
              command:
                distinct: collection0
                readConcern:
                  level: snapshot
                  atClusterTime:
                    "$$exists": false
          - commandFailedEvent:
              commandName: distinct

# file: spec/spec_tests/data/sessions_unified/snapshot-sessions-unsupported-ops.yml
description: snapshot-sessions-unsupported-ops
schemaVersion: "1.0"
runOnRequirements:
  - minServerVersion: "5.0"
    topologies: [replicaset, sharded]
createEntities:
  - client:
      id: &client0 client0
      observeEvents: [ commandStartedEvent, commandFailedEvent ]
  - database:
      id: &database0Name database0
      client: *client0
      databaseName: *database0Name
  - collection:
      id: &collection0Name collection0
      database: *database0Name
      collectionName: *collection0Name
  - session:
      id: session0
      client: client0
      sessionOptions:
        snapshot: true
initialData:
  - collectionName: *collection0Name
    databaseName: *database0Name
    documents:
      - { _id: 1, x: 11 }
tests:
  - description: Server returns an error on insertOne with snapshot
    # Skip on sharded clusters due to SERVER-58176.
    runOnRequirements:
      - topologies: [replicaset]
    operations:
      - name: insertOne
        object: collection0
        arguments:
          session: session0
          document:
            _id: 22
            x: 22
        expectError:
          isError: true
          isClientError: false
    expectEvents:
      - client: client0
        events:
          - commandStartedEvent:
              command:
                insert: collection0
                readConcern:
                  level: snapshot
                  atClusterTime:
                    "$$exists": false
          - commandFailedEvent:
              commandName: insert
  - description: Server returns an error on insertMany with snapshot
    # Skip on sharded clusters due to SERVER-58176.
    runOnRequirements:
      - topologies: [replicaset]
    operations:
      - name: insertMany
        object: collection0
        arguments:
          session: session0
          documents:
            - _id: 22
              x: 22
            - _id: 33
              x: 33
        expectError:
          isError: true
          isClientError: false
    expectEvents:
      - client: client0
        events:
          - commandStartedEvent:
              command:
                insert: collection0
                readConcern:
                  level: snapshot
                  atClusterTime:
                    "$$exists": false
          - commandFailedEvent:
              commandName: insert
  - description: Server returns an error on deleteOne with snapshot
    # Skip on sharded clusters due to SERVER-58176.
    runOnRequirements:
      - topologies: [replicaset]
    operations:
      - name: deleteOne
        object: collection0
        arguments:
          session: session0
          filter: {}
        expectError:
          isError: true
          isClientError: false
    expectEvents:
      - client: client0
        events:
          - commandStartedEvent:
              command:
                delete: collection0
                readConcern:
                  level: snapshot
                  atClusterTime:
                    "$$exists": false
          - commandFailedEvent:
              commandName: delete
  - description: Server returns an error on updateOne with snapshot
    # Skip on sharded clusters due to SERVER-58176.
    runOnRequirements:
      - topologies: [replicaset]
    operations:
      - name: updateOne
        object: collection0
        arguments:
          session: session0
          filter: { _id: 1 }
          update: { $inc: { x: 1 } }
        expectError:
          isError: true
          isClientError: false
    expectEvents:
      - client: client0
        events:
          - commandStartedEvent:
              command:
                update: collection0
                readConcern:
                  level: snapshot
                  atClusterTime:
                    "$$exists": false
          - commandFailedEvent:
              commandName: update
  - description: Server returns an error on findOneAndUpdate with snapshot
    operations:
      - name: findOneAndUpdate
        object: collection0
        arguments:
          session: session0
          filter: { _id: 1 }
          update: { $inc: { x: 1 } }
        expectError:
          isError: true
          isClientError: false
    expectEvents:
      - client: client0
        events:
          - commandStartedEvent:
              command:
                findAndModify: collection0
                readConcern:
                  level: snapshot
                  atClusterTime:
                    "$$exists": false
          - commandFailedEvent:
              commandName: findAndModify
  - description: Server returns an error on listDatabases with snapshot
    operations:
      - name: listDatabases
        object: client0
        arguments:
          session: session0
        expectError:
          isError: true
          isClientError: false
    expectEvents:
      - client: client0
        events:
          - commandStartedEvent:
              command:
                listDatabases: 1
                readConcern:
                  level: snapshot
                  atClusterTime:
                    "$$exists": false
          - commandFailedEvent:
              commandName: listDatabases
  - description: Server returns an error on listCollections with snapshot
    operations:
      - name: listCollections
        object: database0
        arguments:
          session: session0
        expectError:
          isError: true
          isClientError: false
    expectEvents:
      - client: client0
        events:
          - commandStartedEvent:
              command:
                listCollections: 1
                readConcern:
                  level: snapshot
                  atClusterTime:
                    "$$exists": false
          - commandFailedEvent:
              commandName: listCollections
  - description: Server returns an error on listIndexes with snapshot
    operations:
      - name: listIndexes
        object: collection0
        arguments:
          session: session0
        expectError:
          isError: true
          isClientError: false
    expectEvents:
      - client: client0
        events:
          - commandStartedEvent:
              command:
                listIndexes: collection0
                readConcern:
                  level: snapshot
                  atClusterTime:
                    "$$exists": false
          - commandFailedEvent:
              commandName: listIndexes
  - description: Server returns an error on runCommand with snapshot
    operations:
      - name: runCommand
        object: database0
        arguments:
          session: session0
          commandName: listCollections
          command:
            listCollections: 1
        expectError:
          isError: true
          isClientError: false
    expectEvents:
      - client: client0
        events:
          - commandStartedEvent:
              command:
                listCollections: 1
                readConcern:
                  level: snapshot
                  atClusterTime:
                    "$$exists": false
          - commandFailedEvent:
              commandName: listCollections
# file: spec/spec_tests/data/sessions_unified/snapshot-sessions.yml
description: snapshot-sessions
schemaVersion: "1.0"
runOnRequirements:
  - minServerVersion: "5.0"
    topologies: [replicaset, sharded]
createEntities:
  - client:
      id: &client0 client0
      observeEvents: [ commandStartedEvent ]
      ignoreCommandMonitoringEvents: [ findAndModify, insert, update ]
  - database:
      id: &database0 database0
      client: *client0
      databaseName: &database0Name database0
  - collection:
      id: &collection0 collection0
      database: *database0
      collectionName: &collection0Name collection0
      collectionOptions:
        writeConcern: { w: majority }
  - session:
      id: session0
      client: client0
      sessionOptions:
        snapshot: true
  - session:
      id: session1
      client: client0
      sessionOptions:
        snapshot: true
initialData:
  - collectionName: *collection0Name
    databaseName: *database0Name
    documents:
      - { _id: 1, x: 11 }
      - { _id: 2, x: 11 }
tests:
  - description: Find operation with snapshot
    operations:
      - name: find
        object: collection0
        arguments:
          session: session0
          filter: { _id: 1 }
        expectResult: [ { _id: 1, x: 11 } ]
      - name: findOneAndUpdate
        object: collection0
        arguments:
          filter: { _id: 1 }
          update: { $inc: { x: 1 } }
          returnDocument: After
        expectResult: { _id: 1, x: 12 }
      - name: find
        object: collection0
        arguments:
          session: session1
          filter: { _id: 1 }
        expectResult: [ { _id: 1, x: 12 } ]
      - name: findOneAndUpdate
        object: collection0
        arguments:
          filter: { _id: 1 }
          update: { $inc: { x: 1 } }
          returnDocument: After
        expectResult: { _id: 1, x: 13 }
      - name: find
        object: collection0
        arguments:
          filter: { _id: 1 }
        expectResult: [ { _id: 1, x: 13 } ]
      - name: find
        object: collection0
        arguments:
          session: session0
          filter: { _id: 1 }
        expectResult: [ { _id: 1, x: 11 } ]
      - name: find
        object: collection0
        arguments:
          session: session1
          filter: { _id: 1 }
        expectResult: [ { _id: 1, x: 12 } ]
    expectEvents:
      - client: client0
        events:
          - commandStartedEvent:
              command:
                find: collection0
                readConcern:
                  level: snapshot
                  atClusterTime:
                    "$$exists": false
          - commandStartedEvent:
              command:
                find: collection0
                readConcern:
                  level: snapshot
                  atClusterTime:
                    "$$exists": false
          - commandStartedEvent:
              command:
                find: collection0
                readConcern:
                  "$$exists": false
          - commandStartedEvent:
              command:
                find: collection0
                readConcern:
                  level: snapshot
                  atClusterTime:
                    "$$exists": true
          - commandStartedEvent:
              command:
                find: collection0
                readConcern:
                  level: snapshot
                  atClusterTime:
                    "$$exists": true
  - description: Distinct operation with snapshot
    operations:
      - name: distinct
        object: collection0
        arguments:
          fieldName: x
          filter: {}
          session: session0
        expectResult: [ 11 ]
      - name: findOneAndUpdate
        object: collection0
        arguments:
          filter: { _id: 2 }
          update: { $inc: { x: 1 } }
          returnDocument: After
        expectResult: { _id: 2, x: 12 }
      - name: distinct
        object: collection0
        arguments:
          fieldName: x
          filter: {}
          session: session1
        expectResult: [11, 12]
      - name: findOneAndUpdate
        object: collection0
        arguments:
          filter: { _id: 2 }
          update: { $inc: { x: 1 } }
          returnDocument: After
        expectResult: { _id: 2, x: 13 }
      - name: distinct
        object: collection0
        arguments:
          fieldName: x
          filter: {}
        expectResult: [ 11, 13 ]
      - name: distinct
        object: collection0
        arguments:
          fieldName: x
          filter: {}
          session: session0
        expectResult: [ 11 ]
      - name: distinct
        object: collection0
        arguments:
          fieldName: x
          filter: {}
          session: session1
        expectResult: [ 11, 12 ]
    expectEvents:
      - client: client0
        events:
          - commandStartedEvent:
              command:
                distinct: collection0
                readConcern:
                  level: snapshot
                  atClusterTime:
                    "$$exists": false
          - commandStartedEvent:
              command:
                distinct: collection0
                readConcern:
                  level: snapshot
                  atClusterTime:
                    "$$exists": false
          - commandStartedEvent:
              command:
                distinct: collection0
                readConcern:
                  "$$exists": false
          - commandStartedEvent:
              command:
                distinct: collection0
                readConcern:
                  level: snapshot
                  atClusterTime:
                    "$$exists": true
          - commandStartedEvent:
              command:
                distinct: collection0
                readConcern:
                  level: snapshot
                  atClusterTime:
                    "$$exists": true
  - description: Aggregate operation with snapshot
    operations:
      - name: aggregate
        object: collection0
        arguments:
          pipeline:
            - "$match": { _id: 1 }
          session: session0
        expectResult: [ { _id: 1, x: 11 } ]
      - name: findOneAndUpdate
        object: collection0
        arguments:
          filter: { _id: 1 }
          update: { $inc: { x: 1 } }
          returnDocument: After
        expectResult: { _id: 1, x: 12 }
      - name: aggregate
        object: collection0
        arguments:
          pipeline:
            - "$match": { _id: 1 }
          session: session1
        expectResult: [ { _id: 1, x: 12 } ]
      - name: findOneAndUpdate
        object: collection0
        arguments:
          filter: { _id: 1 }
          update: { $inc: { x: 1 } }
          returnDocument: After
        expectResult: { _id: 1, x: 13 }
      - name: aggregate
        object: collection0
        arguments:
          pipeline:
            - "$match": { _id: 1 }
        expectResult: [ { _id: 1, x: 13 } ]
      - name: aggregate
        object: collection0
        arguments:
          pipeline:
            - "$match": { _id: 1 }
          session: session0
        expectResult: [ { _id: 1, x: 11 } ]
      - name: aggregate
        object: collection0
        arguments:
          pipeline:
            - "$match": { _id: 1 }
          session: session1
        expectResult: [ { _id: 1, x: 12 } ]
    expectEvents:
      - client: client0
        events:
          - commandStartedEvent:
              command:
                aggregate: collection0
                readConcern:
                  level: snapshot
                  atClusterTime:
                    "$$exists": false
          - commandStartedEvent:
              command:
                aggregate: collection0
                readConcern:
                  level: snapshot
                  atClusterTime:
                    "$$exists": false
          - commandStartedEvent:
              command:
                aggregate: collection0
                readConcern:
                  "$$exists": false
          - commandStartedEvent:
              command:
                aggregate: collection0
                readConcern:
                  level: snapshot
                  atClusterTime:
                    "$$exists": true
          - commandStartedEvent:
              command:
                aggregate: collection0
                readConcern:
                  level: snapshot
                  atClusterTime:
                    "$$exists": true
  - description: countDocuments operation with snapshot
    operations:
      - name: countDocuments
        object: collection0
        arguments:
          filter: {}
          session: session0
        expectResult: 2
      - name: countDocuments
        object: collection0
        arguments:
          filter: {}
          session: session0
        expectResult: 2
    expectEvents:
      - client: client0
        events:
          - commandStartedEvent:
              command:
                aggregate: collection0
                readConcern:
                  level: snapshot
                  atClusterTime:
                    "$$exists": false
          - commandStartedEvent:
              command:
                aggregate: collection0
                readConcern:
                  level: snapshot
                  atClusterTime:
                    "$$exists": true
  - description: Mixed operation with snapshot
    operations:
      - name: find
        object: collection0
        arguments:
          session: session0
          filter: { _id: 1 }
        expectResult: [ { _id: 1, x: 11 } ]
      - name: findOneAndUpdate
        object: collection0
        arguments:
          filter: { _id: 1 }
          update: { $inc: { x: 1 } }
          returnDocument: After
        expectResult: { _id: 1, x: 12 }
      - name: find
        object: collection0
        arguments:
          filter: { _id: 1 }
        expectResult: [ { _id: 1, x: 12 } ]
      - name: aggregate
        object: collection0
        arguments:
          pipeline:
            - "$match": { _id: 1 }
          session: session0
        expectResult: [ { _id: 1, x: 11 } ]
      - name: distinct
        object: collection0
        arguments:
          fieldName: x
          filter: {}
          session: session0
        expectResult: [ 11 ]
    expectEvents:
      - client: client0
        events:
          - commandStartedEvent:
              command:
                find: collection0
                readConcern:
                  level: snapshot
                  atClusterTime:
                    "$$exists": false
          - commandStartedEvent:
              command:
                find: collection0
                readConcern:
                  "$$exists": false
          - commandStartedEvent:
              command:
                aggregate: collection0
                readConcern:
                  level: snapshot
                  atClusterTime:
                    "$$exists": true
          - commandStartedEvent:
              command:
                distinct: collection0
                readConcern:
                  level: snapshot
                  atClusterTime:
                    "$$exists": true
  - description: Write commands with snapshot session do not affect snapshot reads
    operations:
      - name: find
        object: collection0
        arguments:
          filter: {}
          session: session0
      - name: insertOne
        object: collection0
        arguments:
          document:
            _id: 22
            x: 33
      - name: updateOne
        object: collection0
        arguments:
          filter: { _id: 1 }
          update: { $inc: { x: 1 } }
      - name: find
        object: collection0
        arguments:
          filter: { _id: 1 }
          session: session0
        expectResult: [ { _id: 1, x: 11 } ]
    expectEvents:
      - client: client0
        events:
          - commandStartedEvent:
              command:
                find: collection0
                readConcern:
                  level: snapshot
                  atClusterTime:
                    "$$exists": false
          - commandStartedEvent:
              command:
                find: collection0
                readConcern:
                  level: snapshot
                  atClusterTime:
                    "$$exists": true
  - description: First snapshot read does not send atClusterTime
    operations:
      - name: find
        object: collection0
        arguments:
          filter: {}
          session: session0
    expectEvents:
      - client: client0
        events:
          - commandStartedEvent:
              command:
                find: collection0
                readConcern:
                  level: snapshot
                  atClusterTime:
                    "$$exists": false
              commandName: find
              databaseName: database0
  - description: StartTransaction fails in snapshot session
    operations:
      - name: startTransaction
        object: session0
        expectError:
          isError: true
          isClientError: true
          errorContains: Transactions are not supported in snapshot sessions
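Taken together, the snapshot-sessions tests pin down the read-your-snapshot behaviour: the first read in a snapshot session omits atClusterTime, and every later read in that session is pinned to the time the first read established. A usage sketch with the Ruby driver; it assumes a MongoDB 5.0+ replica set, driver support for the snapshot: true session option, and the database/collection names are illustrative:

    require 'mongo'

    client = Mongo::Client.new(['localhost:27017'], database: 'database0')
    coll = client['collection0']
    session = client.start_session(snapshot: true)

    before = coll.find({ _id: 1 }, session: session).first # pins the snapshot time
    coll.update_one({ _id: 1 }, '$inc' => { x: 1 })        # write outside the session
    after = coll.find({ _id: 1 }, session: session).first  # still reads the old value

    p before == after # => true: snapshot reads do not observe later writes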
      - name: startTransaction
        object: session0
      - name: insertOne
        object: collection
        arguments:
          session: session0
          document:
            _id: 1
        result:
          insertedId: 1
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 1
            ordered: true
            readConcern:
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction: true
            autocommit: false
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            abortTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern:
          command_name: abortTransaction
          database_name: admin
    outcome:
      collection:
        data: []
  - description: two aborts
    operations:
      - name: startTransaction
        object: session0
      - name: insertOne
        object: collection
        arguments:
          session: session0
          document:
            _id: 1
        result:
          insertedId: 1
      - name: abortTransaction
        object: session0
      - name: abortTransaction
        object: session0
        result:
          errorContains: cannot call abortTransaction twice
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 1
            ordered: true
            readConcern:
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction: true
            autocommit: false
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            abortTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern:
          command_name: abortTransaction
          database_name: admin
    outcome:
      collection:
        data: []
  - description: abort without start
    operations:
      - name: abortTransaction
        object: session0
        result:
          errorContains: no transaction started
    expectations: []
    outcome:
      collection:
        data: []
  - description: abort directly after no-op commit
    operations:
      - name: startTransaction
        object: session0
      - name: commitTransaction
        object: session0
      - name: abortTransaction
        # Error calling abort after no-op commit.
        object: session0
        result:
          errorContains: Cannot call abortTransaction after calling commitTransaction
    expectations: []
    outcome:
      collection:
        data: []
  - description: abort directly after commit
    operations:
      - name: startTransaction
        object: session0
      - name: insertOne
        object: collection
        arguments:
          session: session0
          document:
            _id: 1
        result:
          insertedId: 1
      - name: commitTransaction
        object: session0
      - name: abortTransaction
        # Error calling abort after commit.
        object: session0
        result:
          errorContains: Cannot call abortTransaction after calling commitTransaction
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 1
            ordered: true
            readConcern:
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction: true
            autocommit: false
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern:
          command_name: commitTransaction
          database_name: admin
    outcome:
      collection:
        data:
          - _id: 1
  - description: abort ignores TransactionAborted
    operations:
      - name: startTransaction
        object: session0
      - name: insertOne
        object: collection
        arguments:
          session: session0
          document:
            _id: 1
        result:
          insertedId: 1
      # Abort the server transaction with a duplicate key error.
      - name: insertOne
        object: collection
        arguments:
          session: session0
          document:
            _id: 1
        result:
          errorLabelsOmit: ["TransientTransactionError", "UnknownTransactionCommitResult"]
          # DuplicateKey error code included in the bulk write error message
          # returned by the server
          errorContains: E11000
      # Make sure the server aborted the transaction.
      - name: insertOne
        object: collection
        arguments:
          session: session0
          document:
            _id: 1
        result:
          errorCodeName: NoSuchTransaction
          errorLabelsContain: ["TransientTransactionError"]
          errorLabelsOmit: ["UnknownTransactionCommitResult"]
      # abortTransaction must ignore the TransactionAborted and succeed.
      - name: abortTransaction
        object: session0
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 1
            ordered: true
            readConcern:
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction: true
            autocommit: false
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 1
            ordered: true
            readConcern:
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 1
            ordered: true
            readConcern:
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            abortTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern:
          command_name: abortTransaction
          database_name: admin
    outcome:
      collection:
        data: []
  - description: abort does not apply writeConcern
    operations:
      - name: startTransaction
        object: session0
        arguments:
          options:
            writeConcern:
              w: 10
      - name: insertOne
        object: collection
        arguments:
          session: session0
          document:
            _id: 1
        result:
          insertedId: 1
      - name: abortTransaction
        object: session0
        # No write concern error.
    outcome:
      collection:
        data: []

mongo-ruby-driver-2.21.3/spec/spec_tests/data/transactions/bulk.yml
runOn:
  - minServerVersion: "4.0"
    topology: ["replicaset"]
  - minServerVersion: "4.1.8"
    topology: ["sharded"]

database_name: &database_name "transaction-tests"
collection_name: &collection_name "test"

data: []

tests:
  - description: bulk
    operations:
      - name: startTransaction
        object: session0
      - name: insertOne
        object: collection
        arguments:
          session: session0
          document:
            _id: 1
        result:
          insertedId: 1
      - name: deleteOne
        object: collection
        arguments:
          session: session0
          filter:
            _id: 1
        result:
          deletedCount: 1
      - name: bulkWrite
        object: collection
        arguments:
          session: session0
          requests:
            - name: insertOne
              arguments:
                document: {_id: 1}
            - name: updateOne
              arguments:
                filter: {_id: 1}
                update: {$set: {x: 1}}
            - name: updateOne
              arguments:
                filter: {_id: 2}
                update: {$set: {x: 2}}
                upsert: true
                # Produces upsertedIds: {2: 2} in the result.
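            # Illustrative Ruby equivalent of this kind of mixed bulk write
            # (a sketch only, not part of the spec data; the client, collection
            # and session names are assumptions):
            #
            #   result = client[:test].bulk_write(
            #     [
            #       { insert_one: { document: { _id: 1 } } },
            #       { update_one: { filter: { _id: 2 },
            #                       update: { '$set' => { x: 2 } },
            #                       upsert: true } }
            #     ],
            #     session: session
            #   )
            #   # result.upserted_ids should report the upserted _id (2)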
            - name: insertOne
              arguments:
                document: {_id: 3}
            - name: insertOne
              arguments:
                document: {_id: 4}
            - name: insertOne
              arguments:
                document: {_id: 5}
            - name: insertOne
              arguments:
                document: {_id: 6}
            - name: insertOne
              arguments:
                document: {_id: 7}
            # Keep replaces segregated from updates, so that drivers that aren't able to coalesce
            # adjacent updates and replaces into a single update command will still pass this test
            - name: replaceOne
              arguments:
                filter: {_id: 1}
                replacement: {y: 1}
            - name: replaceOne
              arguments:
                filter: {_id: 2}
                replacement: {y: 2}
            - name: deleteOne
              arguments:
                filter: {_id: 3}
            - name: deleteOne
              arguments:
                filter: {_id: 4}
            - name: updateMany
              arguments:
                filter: {_id: {$gte: 2}}
                update: {$set: {z: 1}}
            # Keep deleteMany segregated from deleteOne, so that drivers that aren't able to coalesce
            # adjacent mixed deletes into a single delete command will still pass this test
            - name: deleteMany
              arguments:
                filter: {_id: {$gte: 6}}
        result:
          deletedCount: 4
          insertedIds: {0: 1, 3: 3, 4: 4, 5: 5, 6: 6, 7: 7}
          matchedCount: 7
          modifiedCount: 7
          upsertedCount: 1
          upsertedIds: {2: 2}
      - name: commitTransaction
        object: session0
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 1
            ordered: true
            readConcern:
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction: true
            autocommit: false
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            delete: *collection_name
            deletes:
              - q: {_id: 1}
                limit: 1
            ordered: true
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern:
          command_name: delete
          database_name: *database_name
      # Commands in the bulkWrite.
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 1
            ordered: true
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            update: *collection_name
            updates:
              - q: {_id: 1}
                u: {$set: {x: 1}}
              - q: {_id: 2}
                u: {$set: {x: 2}}
                upsert: true
            ordered: true
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern:
          command_name: update
          database_name: *database_name
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 3
              - _id: 4
              - _id: 5
              - _id: 6
              - _id: 7
            ordered: true
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            update: *collection_name
            updates:
              - q: {_id: 1}
                u: {y: 1}
              - q: {_id: 2}
                u: {y: 2}
            ordered: true
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern:
          command_name: update
          database_name: *database_name
      - command_started_event:
          command:
            delete: *collection_name
            deletes:
              - q: {_id: 3}
                limit: 1
              - q: {_id: 4}
                limit: 1
            ordered: true
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern:
          command_name: delete
          database_name: *database_name
      - command_started_event:
          command:
            update: *collection_name
            updates:
              - q: {_id: {$gte: 2}}
                u: {$set: {z: 1}}
                multi: true
            ordered: true
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern:
          command_name: update
          database_name: *database_name
      - command_started_event:
          command:
            delete: *collection_name
            deletes:
              - q: {_id: {$gte: 6}}
                limit: 0
            ordered: true
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern:
          command_name: delete
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern:
          command_name: commitTransaction
          database_name: admin
    outcome:
      collection:
        data:
          - {_id: 1, y: 1}
          - {_id: 2, y: 2, z: 1}
          - {_id: 5, z: 1}

mongo-ruby-driver-2.21.3/spec/spec_tests/data/transactions/causal-consistency.yml
runOn:
  - minServerVersion: "4.0"
    topology: ["replicaset"]
  - minServerVersion: "4.1.8"
    topology: ["sharded"]

database_name: &database_name "transaction-tests"
collection_name: &collection_name "test"

data:
  - _id: 1
    count: 0

tests:
  - description: causal consistency
    clientOptions:
      retryWrites: false
    operations:
      # Update a document without a transaction.
      - &updateOne
        name: updateOne
        object: collection
        arguments:
          session: session0
          filter: {_id: 1}
          update:
            $inc: {count: 1}
        result:
          matchedCount: 1
          modifiedCount: 1
          upsertedCount: 0
      # Updating the same document inside a transaction.
      # Causal consistency ensures that the transaction snapshot is causally
      # after the first updateOne.
      - name: startTransaction
        object: session0
      - *updateOne
      - name: commitTransaction
        object: session0
    expectations:
      - command_started_event:
          command:
            update: *collection_name
            updates:
              - q: {_id: 1}
                u: {$inc: {count: 1}}
            ordered: true
            lsid: session0
            readConcern:
            txnNumber:
            startTransaction:
            autocommit:
            writeConcern:
          command_name: update
          database_name: *database_name
      - command_started_event:
          command:
            update: *collection_name
            updates:
              - q: {_id: 1}
                u: {$inc: {count: 1}}
            ordered: true
            readConcern:
              afterClusterTime: 42
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction: true
            autocommit: false
            writeConcern:
          command_name: update
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern:
          command_name: commitTransaction
          database_name: admin
    outcome:
      collection:
        data:
          - _id: 1
            count: 2
  - description: causal consistency disabled
    clientOptions:
      retryWrites: false
    sessionOptions:
      session0:
        causalConsistency: false
    operations:
      # Insert a document without a transaction.
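      # Ruby sketch of disabling causal consistency on a session (illustrative
      # only, not part of the spec data; assumes a connected `client`):
      #
      #   session = client.start_session(causal_consistency: false)
      #   # reads in this session will not send afterClusterTime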
      - name: insertOne
        object: collection
        arguments:
          session: session0
          document:
            _id: 2
        result:
          insertedId: 2
      - name: startTransaction
        object: session0
      - name: updateOne
        object: collection
        arguments:
          session: session0
          filter: {_id: 1}
          update:
            $inc: {count: 1}
        result:
          matchedCount: 1
          modifiedCount: 1
          upsertedCount: 0
      - name: commitTransaction
        object: session0
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 2
            ordered: true
            readConcern:
            lsid: session0
            txnNumber:
            autocommit:
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            update: *collection_name
            updates:
              - q: {_id: 1}
                u: {$inc: {count: 1}}
            ordered: true
            # No afterClusterTime
            readConcern:
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction: true
            autocommit: false
            writeConcern:
          command_name: update
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern:
          command_name: commitTransaction
          database_name: admin
    outcome:
      collection:
        data:
          - _id: 1
            count: 1
          - _id: 2

mongo-ruby-driver-2.21.3/spec/spec_tests/data/transactions/commit.yml
runOn:
  - minServerVersion: "4.0"
    topology: ["replicaset"]
  - minServerVersion: "4.1.8"
    topology: ["sharded"]

database_name: &database_name "transaction-tests"
collection_name: &collection_name "test"

data: []

tests:
  - description: commit
    operations:
      - name: startTransaction
        object: session0
      - name: insertOne
        object: collection
        arguments:
          session: session0
          document:
            _id: 1
        result:
          insertedId: 1
      - name: commitTransaction
        object: session0
      # Again, to verify that txnNumber is incremented.
      - name: startTransaction
        object: session0
      - name: insertOne
        object: collection
        arguments:
          session: session0
          document:
            _id: 2
        result:
          insertedId: 2
      - name: commitTransaction
        object: session0
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 1
            ordered: true
            readConcern:
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction: true
            autocommit: false
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern:
          command_name: commitTransaction
          database_name: admin
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 2
            ordered: true
            readConcern:
              afterClusterTime: 42
            lsid: session0
            txnNumber:
              $numberLong: "2"
            startTransaction: true
            autocommit: false
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "2"
            startTransaction:
            autocommit: false
            writeConcern:
          command_name: commitTransaction
          database_name: admin
    outcome:
      collection:
        data:
          - _id: 1
          - _id: 2
  - description: rerun commit after empty transaction
    operations:
      - name: startTransaction
        object: session0
      - name: commitTransaction
        object: session0
      # Rerun the commit (which does not increment the txnNumber).
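      # Ruby sketch (illustrative only, not part of the spec data): committing
      # again after an empty transaction's commit reruns as a no-op:
      #
      #   session.start_transaction
      #   session.commit_transaction
      #   session.commit_transaction  # rerun; txnNumber is not incremented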
      - name: commitTransaction
        object: session0
      - name: startTransaction
        object: session0
      - name: insertOne
        object: collection
        arguments:
          session: session0
          document:
            _id: 1
        result:
          insertedId: 1
      - name: commitTransaction
        object: session0
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 1
            ordered: true
            readConcern:
            lsid: session0
            txnNumber:
              $numberLong: "2"
            startTransaction: true
            autocommit: false
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "2"
            startTransaction:
            autocommit: false
            writeConcern:
          command_name: commitTransaction
          database_name: admin
    outcome:
      collection:
        data:
          - _id: 1
  - description: multiple commits in a row
    operations:
      - name: startTransaction
        object: session0
      - name: insertOne
        object: collection
        arguments:
          session: session0
          document:
            _id: 1
        result:
          insertedId: 1
      - name: commitTransaction
        object: session0
      - name: commitTransaction
        object: session0
      - name: commitTransaction
        object: session0
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 1
            ordered: true
            readConcern:
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction: true
            autocommit: false
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern:
          command_name: commitTransaction
          database_name: admin
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            # commitTransaction applies w:majority on retries
            writeConcern: { w: majority, wtimeout: 10000 }
          command_name: commitTransaction
          database_name: admin
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern: { w: majority, wtimeout: 10000 }
          command_name: commitTransaction
          database_name: admin
    outcome:
      collection:
        data:
          - _id: 1
  - description: write concern error on commit
    operations:
      - name: startTransaction
        object: session0
        arguments:
          options:
            writeConcern:
              w: 10
      - name: insertOne
        object: collection
        arguments:
          session: session0
          document:
            _id: 1
        result:
          insertedId: 1
      - name: commitTransaction
        object: session0
        result:
          # {
          #   'ok': 1.0,
          #   'writeConcernError': {
          #     'code': 100,
          #     'codeName': 'UnsatisfiableWriteConcern',
          #     'errmsg': 'Not enough data-bearing nodes'
          #   }
          # }
          errorLabelsOmit: ["TransientTransactionError", "UnknownTransactionCommitResult"]
    outcome:
      collection:
        data:
          - _id: 1
  - description: commit without start
    operations:
      - name: commitTransaction
        object: session0
        result:
          errorContains: no transaction started
    expectations: []
    outcome:
      collection:
        data: []
  - description: commit after no-op abort
    operations:
      - name: startTransaction
        object: session0
      - name: abortTransaction
        object: session0
      - name: commitTransaction
        object: session0
        result:
          errorContains: Cannot call commitTransaction after calling abortTransaction
    expectations: []
    outcome:
      collection:
        data: []
  - description: commit after abort
    operations:
      - name: startTransaction
        object: session0
      - name: insertOne
        object: collection
        arguments:
          session: session0
          document:
            _id: 1
        result:
          insertedId: 1
      - name: abortTransaction
        object: session0
      - name: commitTransaction
        object: session0
        result:
          errorContains: Cannot call commitTransaction after calling abortTransaction
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 1
            ordered: true
            readConcern:
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction: true
            autocommit: false
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            abortTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern:
          command_name: abortTransaction
          database_name: admin
  - description: multiple commits after empty transaction
    operations:
      - name: startTransaction
        object: session0
      - name: insertOne
        object: collection
        arguments:
          session: session0
          document:
            _id: 1
        result:
          insertedId: 1
      - name: abortTransaction
        object: session0
      # Increments txnNumber.
      - name: startTransaction
        object: session0
      # These commits aren't sent to server, transaction is empty.
      - name: commitTransaction
        object: session0
      - name: commitTransaction
        object: session0
      # Verify that previous, empty transaction incremented txnNumber.
      - name: startTransaction
        object: session0
      - name: insertOne
        object: collection
        arguments:
          session: session0
          document:
            _id: 1
        result:
          insertedId: 1
      - name: abortTransaction
        object: session0
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 1
            ordered: true
            readConcern:
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction: true
            autocommit: false
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            abortTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern:
          command_name: abortTransaction
          database_name: admin
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 1
            ordered: true
            readConcern:
              afterClusterTime: 42
            lsid: session0
            # txnNumber 2 was skipped.
            txnNumber:
              $numberLong: "3"
            startTransaction: true
            autocommit: false
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            abortTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "3"
            startTransaction:
            autocommit: false
            writeConcern:
          command_name: abortTransaction
          database_name: admin
    outcome:
      collection:
        data: []
  - description: reset session state commit
    clientOptions:
      retryWrites: false
    operations:
      - name: startTransaction
        object: session0
      - name: insertOne
        object: collection
        arguments:
          session: session0
          document:
            _id: 1
        result:
          insertedId: 1
      - name: commitTransaction
        object: session0
      # Running any operation after an ended transaction resets the session
      # state to "no transaction".
      - name: insertOne
        object: collection
        arguments:
          session: session0
          document:
            _id: 2
        result:
          insertedId: 2
      # Calling commit again should error instead of re-running the commit.
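      # Ruby sketch (illustrative only, not part of the spec data): after a
      # non-transactional operation on the session, commit raises rather than
      # re-running the previous commit:
      #
      #   session.commit_transaction
      #   client[:test].insert_one({ _id: 2 }, session: session)
      #   session.commit_transaction  # => error: no transaction started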
      - name: commitTransaction
        object: session0
        result:
          errorContains: no transaction started
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 1
            ordered: true
            readConcern:
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction: true
            autocommit: false
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern:
          command_name: commitTransaction
          database_name: admin
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 2
            ordered: true
            readConcern:
            lsid: session0
            txnNumber:
            startTransaction:
            autocommit:
          command_name: insert
          database_name: *database_name
    outcome:
      collection:
        data:
          - _id: 1
          - _id: 2
  - description: reset session state abort
    clientOptions:
      retryWrites: false
    operations:
      - name: startTransaction
        object: session0
      - name: insertOne
        object: collection
        arguments:
          session: session0
          document:
            _id: 1
        result:
          insertedId: 1
      - name: abortTransaction
        object: session0
      # Running any operation after an ended transaction resets the session
      # state to "no transaction".
      - name: insertOne
        object: collection
        arguments:
          session: session0
          document:
            _id: 2
        result:
          insertedId: 2
      # Calling abort should error with "no transaction started" instead of
      # "cannot call abortTransaction twice".
      - name: abortTransaction
        object: session0
        result:
          errorContains: no transaction started
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 1
            ordered: true
            readConcern:
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction: true
            autocommit: false
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            abortTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern:
          command_name: abortTransaction
          database_name: admin
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 2
            ordered: true
            readConcern:
            lsid: session0
            txnNumber:
            startTransaction:
            autocommit:
          command_name: insert
          database_name: *database_name
    outcome:
      collection:
        data:
          - _id: 2

mongo-ruby-driver-2.21.3/spec/spec_tests/data/transactions/count.yml
runOn:
  # SERVER-35388 introduced OperationNotSupportedInTransaction in 4.0.2
  - minServerVersion: "4.0.2"
    topology: ["replicaset"]
  - minServerVersion: "4.1.8"
    topology: ["sharded"]

database_name: &database_name "transaction-tests"
collection_name: &collection_name "test"

data: &data
  - {_id: 1}
  - {_id: 2}
  - {_id: 3}
  - {_id: 4}

tests:
  - description: count
    operations:
      - name: startTransaction
        object: session0
      - name: count
        object: collection
        arguments:
          session: session0
          filter:
            _id: 1
        result:
          errorCodeName: OperationNotSupportedInTransaction
          errorLabelsOmit: ["TransientTransactionError", "UnknownTransactionCommitResult"]
      - name: abortTransaction
        object: session0
    expectations:
      - command_started_event:
          command:
            count: *collection_name
            query:
              _id: 1
            readConcern:
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction: true
            autocommit: false
            writeConcern:
          command_name: count
          database_name: *database_name
      - command_started_event:
          command:
            abortTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern:
          command_name: abortTransaction
          database_name: admin
    outcome:
      collection:
        data: *data
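# Note (illustrative, not part of the spec data): the legacy count command is
# rejected inside transactions; countDocuments runs an aggregate instead. A
# hedged Ruby sketch of the working alternative, assuming a connected `client`:
#
#   session.start_transaction
#   client[:test].count_documents({ _id: 1 }, session: session)  # aggregate-based
#   session.abort_transaction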
mongo-ruby-driver-2.21.3/spec/spec_tests/data/transactions/create-collection.yml
runOn:
  - minServerVersion: "4.3.4"
    topology: ["replicaset", "sharded"]

database_name: &database_name "transaction-tests"
collection_name: &collection_name "test"

data: []

tests:
  - description: explicitly create collection using create command
    operations:
      - name: dropCollection
        object: database
        arguments:
          collection: *collection_name
      - name: startTransaction
        object: session0
      - name: createCollection
        object: database
        arguments:
          session: session0
          collection: *collection_name
      - name: assertCollectionNotExists
        object: testRunner
        arguments:
          database: *database_name
          collection: *collection_name
      - name: commitTransaction
        object: session0
      - name: assertCollectionExists
        object: testRunner
        arguments:
          database: *database_name
          collection: *collection_name
    expectations:
      - command_started_event:
          command:
            drop: *collection_name
            writeConcern:
          command_name: drop
          database_name: *database_name
      - command_started_event:
          command:
            create: *collection_name
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction: true
            autocommit: false
            writeConcern:
          command_name: create
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern:
          command_name: commitTransaction
          database_name: admin
  - description: implicitly create collection using insert
    operations:
      - name: dropCollection
        object: database
        arguments:
          collection: *collection_name
      - name: startTransaction
        object: session0
      - name: insertOne
        object: collection
        arguments:
          session: session0
          document:
            _id: 1
        result:
          insertedId: 1
      - name: assertCollectionNotExists
        object: testRunner
        arguments:
          database: *database_name
          collection: *collection_name
      - name: commitTransaction
        object: session0
      - name: assertCollectionExists
        object: testRunner
        arguments:
          database: *database_name
          collection: *collection_name
    expectations:
      - command_started_event:
          command:
            drop: *collection_name
            writeConcern:
          command_name: drop
          database_name: *database_name
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 1
            ordered: true
            readConcern:
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction: true
            autocommit: false
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern:
          command_name: commitTransaction
          database_name: admin

mongo-ruby-driver-2.21.3/spec/spec_tests/data/transactions/create-index.yml
runOn:
  - minServerVersion: "4.3.4"
    topology: ["replicaset", "sharded"]

database_name: &database_name "transaction-tests"
collection_name: &collection_name "test"

data: []

tests:
  - description: create index on a non-existing collection
    operations:
      - name: dropCollection
        object: database
        arguments:
          collection: *collection_name
      - name: startTransaction
        object: session0
      - name: createIndex
        object: collection
        arguments:
          session: session0
          name: &index_name "t_1"
          keys:
            x: 1
      - name: assertIndexNotExists
        object: testRunner
        arguments:
          database: *database_name
          collection: *collection_name
          index: *index_name
      - name: commitTransaction
        object: session0
      - name: assertIndexExists
        object: testRunner
        arguments:
          database: *database_name
          collection: *collection_name
          index: *index_name
    expectations:
      - command_started_event:
          command:
            drop: *collection_name
            writeConcern:
          command_name: drop
          database_name: *database_name
      - command_started_event:
          command:
            createIndexes: *collection_name
            indexes:
              - name: *index_name
                key:
                  x: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction: true
            autocommit: false
            writeConcern:
          command_name: createIndexes
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern:
          command_name: commitTransaction
          database_name: admin
  - description: create index on a collection created within the same transaction
    operations:
      - name: dropCollection
        object: database
        arguments:
          collection: *collection_name
      - name: startTransaction
        object: session0
      - name: createCollection
        object: database
        arguments:
          session: session0
          collection: *collection_name
      - name: createIndex
        object: collection
        arguments:
          session: session0
          name: *index_name
          keys:
            x: 1
      - name: assertIndexNotExists
        object: testRunner
        arguments:
          database: *database_name
          collection: *collection_name
          index: *index_name
      - name: commitTransaction
        object: session0
      - name: assertIndexExists
        object: testRunner
        arguments:
          database: *database_name
          collection: *collection_name
          index: *index_name
    expectations:
      - command_started_event:
          command:
            drop: *collection_name
            writeConcern:
          command_name: drop
          database_name: *database_name
      - command_started_event:
          command:
            create: *collection_name
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction: true
            autocommit: false
            writeConcern:
          command_name: create
          database_name: *database_name
      - command_started_event:
          command:
            createIndexes: *collection_name
            indexes:
              - name: *index_name
                key:
                  x: 1
            lsid: session0
            writeConcern:
          command_name: createIndexes
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern:
          command_name: commitTransaction
          database_name: admin

mongo-ruby-driver-2.21.3/spec/spec_tests/data/transactions/delete.yml
runOn:
  - minServerVersion: "4.0"
    topology: ["replicaset"]
  - minServerVersion: "4.1.8"
    topology: ["sharded"]

database_name: &database_name "transaction-tests"
collection_name: &collection_name "test"

data:
  - _id: 1
  - _id: 2
  - _id: 3
  - _id: 4
  - _id: 5

tests:
  - description: delete
    operations:
      - name: startTransaction
        object: session0
      - name: deleteOne
        object: collection
        arguments:
          session: session0
          filter:
            _id: 1
        result:
          deletedCount: 1
      - name: deleteMany
        object: collection
        arguments:
          session: session0
          filter:
            _id: {$lte: 3}
        result:
          deletedCount: 2
      - name: deleteOne
        object: collection
        arguments:
          session: session0
          filter:
            _id: 4
        result:
          deletedCount: 1
      - name: commitTransaction
        object: session0
    expectations:
      - command_started_event:
          command:
            delete: *collection_name
            deletes:
              - q: {_id: 1}
                limit: 1
            ordered: true
            readConcern:
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction: true
            autocommit: false
            writeConcern:
          command_name: delete
          database_name: *database_name
      - command_started_event:
          command:
            delete: *collection_name
            deletes:
              - q: {_id: {$lte: 3}}
                limit: 0
            ordered: true
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern:
          command_name: delete
          database_name: *database_name
      - command_started_event:
          command:
            delete: *collection_name
            deletes:
              - q: {_id: 4}
                limit: 1
            ordered: true
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern:
          command_name: delete
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern:
          command_name: commitTransaction
          database_name: admin
    outcome:
      collection:
        data:
          - _id: 5
  - description: collection writeConcern ignored for delete
    operations:
      - name: startTransaction
        object: session0
        arguments:
          options:
            writeConcern:
              w: majority
      - name: deleteOne
        object: collection
        collectionOptions:
          writeConcern:
            w: majority
        arguments:
          session: session0
          filter:
            _id: 1
        result:
          deletedCount: 1
      - name: deleteMany
        object: collection
        collectionOptions:
          writeConcern:
            w: majority
        arguments:
          session: session0
          filter:
            _id: {$lte: 3}
        result:
          deletedCount: 2
      - name: commitTransaction
        object: session0
    expectations:
      - command_started_event:
          command:
            delete: *collection_name
            deletes:
              - q: {_id: 1}
                limit: 1
            ordered: true
            readConcern:
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction: true
            autocommit: false
            writeConcern:
          command_name: delete
          database_name: *database_name
      - command_started_event:
          command:
            delete: *collection_name
            deletes:
              - q: {_id: {$lte: 3}}
                limit: 0
            ordered: true
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern:
          command_name: delete
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern:
              w: majority
          command_name: commitTransaction
          database_name: admin

mongo-ruby-driver-2.21.3/spec/spec_tests/data/transactions/error-labels.yml
runOn:
  - minServerVersion: "4.0"
    topology: ["replicaset"]
  - minServerVersion: "4.1.8"
    topology: ["sharded"]

# serverless proxy doesn't append error labels to errors in transactions
# caused by failpoints (CLOUDP-88216)
serverless: "forbid"

database_name: &database_name "transaction-tests"
collection_name: &collection_name "test"

data: []

tests:
  - description: DuplicateKey errors do not contain transient label
    operations:
      - name: startTransaction
        object: session0
      - name: insertMany
        object: collection
        arguments:
          session: session0
          documents:
            - _id: 1
            - _id: 1
        result:
          errorLabelsOmit: ["TransientTransactionError", "UnknownTransactionCommitResult"]
          # DuplicateKey error code included in the bulk write error message
          # returned by the server
          errorContains: E11000
      - name: abortTransaction
        object: session0
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 1
              - _id: 1
            ordered: true
            readConcern:
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction: true
            autocommit: false
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            abortTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern:
          command_name: abortTransaction
          database_name: admin
    outcome:
      collection:
        data: []
  - description: NotMaster errors contain transient label
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands: ["insert"]
        errorCode: 10107 # NotMaster
    operations:
      - name: startTransaction
        object: session0
      - name: insertOne
        object: collection
        arguments:
          session: session0
          document:
            _id: 1
        result:
          # Note, the server will return the errorLabel in this case.
          errorLabelsContain: ["TransientTransactionError"]
          errorLabelsOmit: ["RetryableWriteError", "UnknownTransactionCommitResult"]
      - name: abortTransaction
        object: session0
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 1
            ordered: true
            readConcern:
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction: true
            autocommit: false
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            abortTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern:
          command_name: abortTransaction
          database_name: admin
    outcome:
      collection:
        data: []
  - description: WriteConflict errors contain transient label
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands: ["insert"]
        errorCode: 112 # WriteConflict
    operations:
      - name: startTransaction
        object: session0
      - name: insertOne
        object: collection
        arguments:
          session: session0
          document:
            _id: 1
        result:
          # Note, the server will return the errorLabel in this case.
          errorLabelsContain: ["TransientTransactionError"]
          errorLabelsOmit: ["RetryableWriteError", "UnknownTransactionCommitResult"]
      - name: abortTransaction
        object: session0
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 1
            ordered: true
            readConcern:
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction: true
            autocommit: false
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            abortTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern:
          command_name: abortTransaction
          database_name: admin
    outcome:
      collection:
        data: []
  - description: NoSuchTransaction errors contain transient label
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands: ["insert"]
        errorCode: 251 # NoSuchTransaction
    operations:
      - name: startTransaction
        object: session0
      - name: insertOne
        object: collection
        arguments:
          session: session0
          document:
            _id: 1
        result:
          # Note, the server will return the errorLabel in this case.
          errorLabelsContain: ["TransientTransactionError"]
          errorLabelsOmit: ["RetryableWriteError", "UnknownTransactionCommitResult"]
      - name: abortTransaction
        object: session0
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 1
            ordered: true
            readConcern:
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction: true
            autocommit: false
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            abortTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern:
          command_name: abortTransaction
          database_name: admin
    outcome:
      collection:
        data: []
  - description: NoSuchTransaction errors on commit contain transient label
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands: ["commitTransaction"]
        errorCode: 251 # NoSuchTransaction
    operations:
      - name: startTransaction
        object: session0
      - name: insertOne
        object: collection
        arguments:
          session: session0
          document:
            _id: 1
        result:
          insertedId: 1
      - name: commitTransaction
        object: session0
        result:
          # Note, the server will return the errorLabel in this case.
          errorLabelsContain: ["TransientTransactionError"]
          errorLabelsOmit: ["RetryableWriteError", "UnknownTransactionCommitResult"]
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 1
            ordered: true
            readConcern:
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction: true
            autocommit: false
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern:
          command_name: commitTransaction
          database_name: admin
    outcome:
      collection:
        data: []
  - description: add TransientTransactionError label to connection errors, but do not add RetryableWriteError label
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 4 }
      data:
        failCommands: ["insert", "find", "aggregate", "distinct"]
        closeConnection: true
    operations:
      - name: startTransaction
        object: session0
      - name: insertOne
        object: collection
        arguments:
          session: session0
          document:
            _id: 1
        result: &transient_label_only
          errorLabelsContain: ["TransientTransactionError"]
          # While a connection error would normally be retryable, these are not because
          # they occur within a transaction; ensure the driver does not add the
          # RetryableWriteError label to these errors.
          errorLabelsOmit: ["RetryableWriteError", "UnknownTransactionCommitResult"]
      - name: find
        object: collection
        arguments:
          session: session0
        result: *transient_label_only
      - name: aggregate
        object: collection
        arguments:
          pipeline:
            - $project:
                _id: 1
          session: session0
        result: *transient_label_only
      - name: distinct
        object: collection
        arguments:
          fieldName: _id
          session: session0
        result: *transient_label_only
      - name: abortTransaction
        object: session0
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 1
            ordered: true
            readConcern:
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction: true
            autocommit: false
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            find: *collection_name
            readConcern:
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
          command_name: find
          database_name: *database_name
      - command_started_event:
          command:
            aggregate: *collection_name
            pipeline:
              - $project:
                  _id: 1
            cursor: {}
            readConcern:
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
          command_name: aggregate
          database_name: *database_name
      - command_started_event:
          command:
            distinct: *collection_name
            key: _id
            lsid: session0
            readConcern:
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
          command_name: distinct
          database_name: *database_name
      - command_started_event:
          command:
            abortTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern:
          command_name: abortTransaction
          database_name: admin
    outcome:
      collection:
        data: []
  - description: add RetryableWriteError and UnknownTransactionCommitResult labels to connection errors
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 2 }
      data:
        failCommands: ["commitTransaction"]
        closeConnection: true
    operations:
      - name: startTransaction
        object: session0
      - name: insertOne
        object: collection
        arguments:
          session: session0
          document:
            _id: 1
        result:
          insertedId: 1
      - name: commitTransaction
        object: session0
        result:
          errorLabelsContain: ["RetryableWriteError", "UnknownTransactionCommitResult"]
          errorLabelsOmit: ["TransientTransactionError"]
      - name: commitTransaction
        object: session0
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 1
            ordered: true
            readConcern:
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction: true
            autocommit: false
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern:
          command_name: commitTransaction
          database_name: admin
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            # commitTransaction applies w:majority on retries
            writeConcern: { w: majority, wtimeout: 10000 }
          command_name: commitTransaction
          database_name: admin
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern: { w: majority, wtimeout: 10000 }
          command_name: commitTransaction
          database_name: admin
    outcome:
      collection:
        data:
          - _id: 1
  - description: add RetryableWriteError and UnknownTransactionCommitResult labels to retryable commit errors
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 2 }
      data:
        failCommands: ["commitTransaction"]
        errorCode: 11602 # InterruptedDueToReplStateChange
        errorLabels: ["RetryableWriteError"]
    operations:
      - name: startTransaction
        object: session0
      - name: insertOne
        object: collection
        arguments:
          session: session0
          document:
            _id: 1
        result:
          insertedId: 1
      - name: commitTransaction
        object: session0
        result:
          errorLabelsContain: ["RetryableWriteError", "UnknownTransactionCommitResult"]
          errorLabelsOmit: ["TransientTransactionError"]
      - name: commitTransaction
        object: session0
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 1
            ordered: true
            readConcern:
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction: true
            autocommit: false
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern:
          command_name: commitTransaction
          database_name: admin
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            # commitTransaction applies w:majority on retries
            writeConcern: { w: majority, wtimeout: 10000 }
          command_name: commitTransaction
          database_name: admin
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern: { w: majority, wtimeout: 10000 }
          command_name: commitTransaction
          database_name: admin
    outcome:
      collection:
        data:
          - _id: 1
  - description: add RetryableWriteError and UnknownTransactionCommitResult labels to writeConcernError ShutdownInProgress
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 2 }
      data:
        failCommands: ["commitTransaction"]
        errorLabels: ["RetryableWriteError"]
        writeConcernError:
          code: 91
          errmsg: Replication is being shut down
    operations:
      - name: startTransaction
        object: session0
        arguments:
          options:
            writeConcern:
              w: majority
      - name: insertOne
        object: collection
        arguments:
          session: session0
          document:
            _id: 1
        result:
          insertedId: 1
      - name: commitTransaction
        object: session0
        result:
          errorLabelsContain: ["RetryableWriteError", "UnknownTransactionCommitResult"]
          errorLabelsOmit: ["TransientTransactionError"]
      - name: commitTransaction
        object: session0
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 1
            ordered: true
            readConcern:
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction: true
            autocommit: false
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern:
              w: majority
          command_name: commitTransaction
          database_name: admin
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            # commitTransaction applies w:majority on retries
            writeConcern: { w: majority, wtimeout: 10000 }
          command_name: commitTransaction
          database_name: admin
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            # commitTransaction applies w:majority on retries
            writeConcern: { w: majority, wtimeout: 10000 }
          command_name: commitTransaction
          database_name: admin
    outcome:
      collection:
        data:
          - _id: 1
  - description: do not add RetryableWriteError label to writeConcernError ShutdownInProgress that occurs within transaction
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands: ["insert"]
        writeConcernError:
          code: 91
          errmsg: Replication is being shut down
    operations:
      - name: startTransaction
        object: session0
        arguments:
          options:
            writeConcern:
              w: majority
      - name: insertOne
        object: collection
        arguments:
          session: session0
          document:
            _id: 1
        result:
          errorLabelsContain: []
          errorLabelsOmit: ["RetryableWriteError", "TransientTransactionError", "UnknownTransactionCommitResult"]
      - name: abortTransaction
        object: session0
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 1
            ordered: true
            readConcern:
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction: true
            autocommit: false
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            abortTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
          command_name: abortTransaction
          database_name: admin
    outcome:
      collection:
        data: []
  - description: add UnknownTransactionCommitResult label to writeConcernError WriteConcernFailed
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands: ["commitTransaction"]
        writeConcernError:
          code: 64 # WriteConcernFailed without wtimeout
          errmsg: multiple errors reported
    operations:
      - name: startTransaction
        object: session0
        arguments:
          options:
            writeConcern:
              w: majority
      - name: insertOne
        object: collection
        arguments:
          session: session0
          document:
            _id: 1
        result:
          insertedId: 1
      - name: commitTransaction
        object: session0
        result:
          errorLabelsContain: ["UnknownTransactionCommitResult"]
          errorLabelsOmit: ["RetryableWriteError", "TransientTransactionError"]
      - name: commitTransaction
        object: session0
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 1
            ordered: true
            readConcern:
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction: true
            autocommit: false
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern:
              w: majority
          command_name: commitTransaction
          database_name: admin
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            # commitTransaction applies w:majority on retries
            writeConcern: { w: majority, wtimeout: 10000 }
          command_name: commitTransaction
          database_name: admin
    outcome:
      collection:
        data:
          - _id: 1
  - description: add UnknownTransactionCommitResult label to writeConcernError WriteConcernFailed with wtimeout
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands: ["commitTransaction"]
        writeConcernError:
          code: 64
          codeName: WriteConcernFailed
          errmsg: waiting for replication timed out
          errInfo: {wtimeout: True}
    operations:
      - name: startTransaction
        object: session0
        arguments:
          options:
            writeConcern:
              w: majority
      - name: insertOne
        object: collection
        arguments:
          session: session0
          document:
            _id: 1
        result:
          insertedId: 1
      - name: commitTransaction
        object: session0
        result:
          errorLabelsContain: ["UnknownTransactionCommitResult"]
          errorLabelsOmit: ["RetryableWriteError", "TransientTransactionError"]
      - name: commitTransaction
        object: session0
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 1
            ordered: true
            readConcern:
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction: true
            autocommit: false
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern:
              w: majority
          command_name: commitTransaction
          database_name: admin
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            # commitTransaction applies w:majority on retries
            writeConcern: { w: majority, wtimeout: 10000 }
          command_name: commitTransaction
          database_name: admin
    outcome:
      collection:
        data:
          - _id: 1
  - description: omit UnknownTransactionCommitResult label from writeConcernError UnsatisfiableWriteConcern
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands: ["commitTransaction"]
        writeConcernError:
          code: 100 # UnsatisfiableWriteConcern
          errmsg: Not enough data-bearing nodes
    operations:
      - name: startTransaction
        object: session0
        arguments:
          options:
            writeConcern:
              w: majority
      - name: insertOne
        object: collection
        arguments:
          session: session0
          document:
            _id: 1
        result:
          insertedId: 1
      - name: commitTransaction
        object: session0
        result:
          errorLabelsOmit: ["RetryableWriteError", "TransientTransactionError", "UnknownTransactionCommitResult"]
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 1
            ordered: true
            readConcern:
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction: true
            autocommit: false
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern:
              w: majority
          command_name: commitTransaction
          database_name: admin
    outcome:
      collection:
        data:
          - _id: 1
  - description: omit UnknownTransactionCommitResult label from writeConcernError UnknownReplWriteConcern
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands: ["commitTransaction"]
        writeConcernError:
          code: 79 # UnknownReplWriteConcern
          errmsg: No write concern mode named 'blah' found in replica set configuration
    operations:
      - name: startTransaction
        object: session0
        arguments:
          options:
            writeConcern:
              w: majority
      - name: insertOne
        object: collection
        arguments:
          session: session0
          document:
            _id: 1
        result:
          insertedId: 1
      - name: commitTransaction
        object: session0
        result:
          errorLabelsOmit: ["RetryableWriteError", "TransientTransactionError", "UnknownTransactionCommitResult"]
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 1
            ordered: true
            readConcern:
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction: true
            autocommit: false
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern:
              w: majority
          command_name: commitTransaction
          database_name: admin
    outcome:
      collection:
        data:
          - _id: 1

mongo-ruby-driver-2.21.3/spec/spec_tests/data/transactions/errors-client.yml
runOn:
  - minServerVersion: "4.0"
    topology: ["replicaset"]
  - minServerVersion: "4.1.8"
    topology: ["sharded"]

database_name: &database_name "transaction-tests"
collection_name: &collection_name "test"

data: []

tests:
  - description: Client side error in command starting transaction
    operations:
      - name: startTransaction
        object: session0
      - name: updateOne
        object: collection
        arguments:
          session: session0
          filter: { _id: 1 }
          update: { x: 1 }
        error: true
      - name: assertSessionTransactionState
        object: testRunner
        arguments:
          session: session0
          state: starting
  - description: Client side error when transaction is in progress
    operations:
      - name: startTransaction
        object: session0
      - name: insertOne
        object: collection
        arguments:
          session: session0
          document: { _id: 1 }
        result:
          insertedId: 1
      - name: updateOne
        object: collection
        arguments:
          session: session0
          filter: { _id: 1 }
          update: { x: 1 }
        error: true
      - name: assertSessionTransactionState
        object: testRunner
        arguments:
          session: session0
          state: in_progress

mongo-ruby-driver-2.21.3/spec/spec_tests/data/transactions/errors.yml
runOn:
  - minServerVersion: "4.0"
    topology: ["replicaset"]
  - minServerVersion: "4.1.8"
    topology: ["sharded"]

database_name: &database_name "transaction-tests"
collection_name: &collection_name "test"

data: []

tests:
  - description: start insert start
    operations:
      - name: startTransaction
        object: session0
      - name: insertOne
        object: collection
        arguments:
          session: session0
          document:
            _id: 1
        result:
          insertedId: 1
      - name: startTransaction
        object: session0
        result:
          # Client-side error.
          errorContains: transaction already in progress
      # Just to clean up.
      - name: commitTransaction
        object: session0
  - description: start twice
    operations:
      - name: startTransaction
        object: session0
      - name: startTransaction
        object: session0
        result:
          # Client-side error.
          errorContains: transaction already in progress
  - description: commit and start twice
    operations:
      - name: startTransaction
        object: session0
      - name: insertOne
        object: collection
        arguments:
          session: session0
          document:
            _id: 1
        result:
          insertedId: 1
      - name: commitTransaction
        object: session0
      - name: startTransaction
        object: session0
      - name: startTransaction
        object: session0
        result:
          # Client-side error.
          errorContains: transaction already in progress
  - description: write conflict commit
    operations:
      - name: startTransaction
        object: session0
      - name: insertOne
        object: collection
        arguments:
          session: session0
          document:
            _id: 1
        result:
          insertedId: 1
      - name: startTransaction
        object: session1
      - name: insertOne
        object: collection
        arguments:
          session: session1
          document:
            _id: 1
        result:
          errorCodeName: WriteConflict
          errorLabelsContain: ["TransientTransactionError"]
          errorLabelsOmit: ["UnknownTransactionCommitResult"]
      - name: commitTransaction
        object: session0
      - name: commitTransaction
        object: session1
        result:
          errorCodeName: NoSuchTransaction
          errorLabelsContain: ["TransientTransactionError"]
          errorLabelsOmit: ["UnknownTransactionCommitResult"]
  - description: write conflict abort
    operations:
      - name: startTransaction
        object: session0
      - name: insertOne
        object: collection
        arguments:
          session: session0
          document:
            _id: 1
        result:
          insertedId: 1
      - name: startTransaction
        object: session1
      - name: insertOne
        object: collection
        arguments:
          session: session1
          document:
            _id: 1
        result:
          errorCodeName: WriteConflict
          errorLabelsContain: ["TransientTransactionError"]
          errorLabelsOmit: ["UnknownTransactionCommitResult"]
      - name: commitTransaction
        object: session0
      # Driver ignores "NoSuchTransaction" error.
      - name: abortTransaction
        object: session1

mongo-ruby-driver-2.21.3/spec/spec_tests/data/transactions/findOneAndDelete.yml
runOn:
  - minServerVersion: "4.0"
    topology: ["replicaset"]
  - minServerVersion: "4.1.8"
    topology: ["sharded"]

database_name: &database_name "transaction-tests"
collection_name: &collection_name "test"

data:
  - _id: 1
  - _id: 2
  - _id: 3

tests:
  - description: findOneAndDelete
    operations:
      - name: startTransaction
        object: session0
      - name: findOneAndDelete
        object: collection
        arguments:
          session: session0
          filter: {_id: 3}
        result: {_id: 3}
      - name: findOneAndDelete
        object: collection
        arguments:
          session: session0
          filter: {_id: 4}
        result:
      - name: commitTransaction
        object: session0
    expectations:
      - command_started_event:
          command:
            findAndModify: *collection_name
            query: {_id: 3}
            remove: True
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction: true
            autocommit: false
            readConcern:
            writeConcern:
          command_name: findAndModify
          database_name: *database_name
      - command_started_event:
          command:
            findAndModify: *collection_name
            query: {_id: 4}
            remove: True
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            readConcern:
            writeConcern:
          command_name: findAndModify
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            readConcern:
            writeConcern:
          command_name: commitTransaction
          database_name: admin
    outcome:
      collection:
        data:
          - {_id: 1}
          - {_id: 2}
  - description: collection writeConcern ignored for findOneAndDelete
    operations:
      - name: startTransaction
        object: session0
        arguments:
          options:
            writeConcern:
              w: majority
      - name: findOneAndDelete
        object: collection
        collectionOptions:
          writeConcern:
            w: majority
        arguments:
          session: session0
          filter: {_id: 3}
        result: {_id: 3}
      - name: commitTransaction
        object: session0
    expectations:
      - command_started_event:
          command:
            findAndModify: *collection_name
            query: {_id: 3}
            remove: True
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction: true
            autocommit: false
            readConcern:
            writeConcern:
          command_name: findAndModify
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false readConcern: writeConcern: w: majority command_name: commitTransaction database_name: admin mongo-ruby-driver-2.21.3/spec/spec_tests/data/transactions/findOneAndReplace.yml000066400000000000000000000072761505113246500277570ustar00rootroot00000000000000runOn: - minServerVersion: "4.0" topology: ["replicaset"] - minServerVersion: "4.1.8" topology: ["sharded"] database_name: &database_name "transaction-tests" collection_name: &collection_name "test" data: - _id: 1 - _id: 2 - _id: 3 tests: - description: findOneAndReplace operations: - name: startTransaction object: session0 - name: findOneAndReplace object: collection arguments: session: session0 filter: {_id: 3} replacement: {x: 1} returnDocument: Before result: {_id: 3} - name: findOneAndReplace object: collection arguments: session: session0 filter: {_id: 4} replacement: {x: 1} upsert: true returnDocument: After result: {_id: 4, x: 1} - name: commitTransaction object: session0 expectations: - command_started_event: command: findAndModify: *collection_name query: {_id: 3} update: {x: 1} new: false lsid: session0 txnNumber: $numberLong: "1" startTransaction: true autocommit: false readConcern: writeConcern: command_name: findAndModify database_name: *database_name - command_started_event: command: findAndModify: *collection_name query: {_id: 4} update: {x: 1} new: true upsert: true lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false readConcern: writeConcern: command_name: findAndModify database_name: *database_name - command_started_event: command: commitTransaction: 1 lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false readConcern: writeConcern: command_name: commitTransaction database_name: admin outcome: collection: data: - {_id: 1} - {_id: 2} - {_id: 3, x: 1} - {_id: 4, x: 1} - description: collection writeConcern ignored for findOneAndReplace operations: - name: startTransaction object: session0 arguments: options: writeConcern: w: majority - name: findOneAndReplace object: collection collectionOptions: writeConcern: w: majority arguments: session: session0 filter: {_id: 3} replacement: {x: 1} returnDocument: Before result: {_id: 3} - name: commitTransaction object: session0 expectations: - command_started_event: command: findAndModify: *collection_name query: {_id: 3} update: {x: 1} new: false lsid: session0 txnNumber: $numberLong: "1" startTransaction: true autocommit: false readConcern: writeConcern: command_name: findAndModify database_name: *database_name - command_started_event: command: commitTransaction: 1 lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false readConcern: writeConcern: w: majority command_name: commitTransaction database_name: admin mongo-ruby-driver-2.21.3/spec/spec_tests/data/transactions/findOneAndUpdate.yml000066400000000000000000000142751505113246500276230ustar00rootroot00000000000000runOn: - minServerVersion: "4.0" topology: ["replicaset"] - minServerVersion: "4.1.8" topology: ["sharded"] database_name: &database_name "transaction-tests" collection_name: &collection_name "test" data: - _id: 1 - _id: 2 - _id: 3 tests: - description: findOneAndUpdate operations: - name: startTransaction object: session0 - name: findOneAndUpdate object: collection arguments: session: session0 filter: {_id: 3} update: $inc: {x: 1} returnDocument: Before result: {_id: 3} - name: findOneAndUpdate object: collection arguments: session: session0 filter: {_id: 4} update: 
$inc: {x: 1} upsert: true returnDocument: After result: {_id: 4, x: 1} - name: commitTransaction object: session0 - name: startTransaction object: session0 # Test a second time to ensure txnNumber is incremented. - name: findOneAndUpdate object: collection arguments: session: session0 filter: {_id: 3} update: $inc: {x: 1} returnDocument: Before result: {_id: 3, x: 1} - name: commitTransaction object: session0 # Test a third time to ensure abort works. - name: startTransaction object: session0 - name: findOneAndUpdate object: collection arguments: session: session0 filter: {_id: 3} update: $inc: {x: 1} returnDocument: Before result: {_id: 3, x: 2} - name: abortTransaction object: session0 expectations: - command_started_event: command: findAndModify: *collection_name query: {_id: 3} update: {$inc: {x: 1}} new: false lsid: session0 txnNumber: $numberLong: "1" startTransaction: true autocommit: false readConcern: writeConcern: command_name: findAndModify database_name: *database_name - command_started_event: command: findAndModify: *collection_name query: {_id: 4} update: {$inc: {x: 1}} new: true upsert: true lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false readConcern: writeConcern: command_name: findAndModify database_name: *database_name - command_started_event: command: commitTransaction: 1 lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false readConcern: writeConcern: command_name: commitTransaction database_name: admin - command_started_event: command: findAndModify: *collection_name query: {_id: 3} update: {$inc: {x: 1}} new: false lsid: session0 txnNumber: $numberLong: "2" startTransaction: true autocommit: false readConcern: afterClusterTime: 42 writeConcern: command_name: findAndModify database_name: *database_name - command_started_event: command: commitTransaction: 1 lsid: session0 txnNumber: $numberLong: "2" startTransaction: autocommit: false readConcern: writeConcern: command_name: commitTransaction database_name: admin - command_started_event: command: findAndModify: *collection_name query: {_id: 3} update: {$inc: {x: 1}} new: false lsid: session0 txnNumber: $numberLong: "3" startTransaction: true autocommit: false readConcern: afterClusterTime: 42 writeConcern: command_name: findAndModify database_name: *database_name - command_started_event: command: abortTransaction: 1 lsid: session0 txnNumber: $numberLong: "3" startTransaction: autocommit: false readConcern: writeConcern: command_name: abortTransaction database_name: admin outcome: collection: data: - {_id: 1} - {_id: 2} - {_id: 3, x: 2} - {_id: 4, x: 1} - description: collection writeConcern ignored for findOneAndUpdate operations: - name: startTransaction object: session0 arguments: options: writeConcern: w: majority - name: findOneAndUpdate object: collection collectionOptions: writeConcern: w: majority arguments: session: session0 filter: {_id: 3} update: $inc: {x: 1} returnDocument: Before result: {_id: 3} - name: commitTransaction object: session0 expectations: - command_started_event: command: findAndModify: *collection_name query: {_id: 3} update: $inc: {x: 1} new: false lsid: session0 txnNumber: $numberLong: "1" startTransaction: true autocommit: false readConcern: writeConcern: command_name: findAndModify database_name: *database_name - command_started_event: command: commitTransaction: 1 lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false readConcern: writeConcern: w: majority command_name: commitTransaction database_name: admin 
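The findOneAndUpdate fixtures above assert the wire protocol directly: the first command in each transaction carries startTransaction: true, and txnNumber increments per transaction on the same lsid. At the driver level the equivalent calls look roughly like the following sketch (the client setup, hostname, and the x counter field are illustrative assumptions, not part of the fixture):

    require 'mongo'

    client = Mongo::Client.new(['localhost:27017'], database: 'transaction-tests')
    collection = client['test']
    session = client.start_session

    # Transaction 1: findAndModify is sent with startTransaction: true and
    # txnNumber 1, then commitTransaction goes to the admin database.
    session.start_transaction
    collection.find_one_and_update({ _id: 3 }, { '$inc' => { x: 1 } },
                                   return_document: :before, session: session)
    session.commit_transaction

    # Transaction 2 reuses the same lsid but increments txnNumber to 2;
    # aborting sends abortTransaction instead of commitTransaction.
    session.start_transaction
    collection.find_one_and_update({ _id: 3 }, { '$inc' => { x: 1 } },
                                   return_document: :before, session: session)
    session.abort_transaction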
mongo-ruby-driver-2.21.3/spec/spec_tests/data/transactions/insert.yml000066400000000000000000000232131505113246500257470ustar00rootroot00000000000000runOn: - minServerVersion: "4.0" topology: ["replicaset"] - minServerVersion: "4.1.8" topology: ["sharded"] database_name: &database_name "transaction-tests" collection_name: &collection_name "test" data: [] tests: - description: insert operations: - name: startTransaction object: session0 - name: insertOne object: collection arguments: session: session0 document: _id: 1 result: insertedId: 1 - name: insertMany object: collection arguments: documents: - _id: 2 - _id: 3 session: session0 result: insertedIds: {0: 2, 1: 3} - name: insertOne object: collection arguments: session: session0 document: _id: 4 result: insertedId: 4 - name: commitTransaction object: session0 - name: startTransaction object: session0 - name: insertOne object: collection arguments: session: session0 document: _id: 5 result: insertedId: 5 - name: commitTransaction object: session0 expectations: - command_started_event: command: insert: *collection_name documents: - _id: 1 ordered: true readConcern: lsid: session0 txnNumber: $numberLong: "1" startTransaction: true autocommit: false writeConcern: command_name: insert database_name: *database_name - command_started_event: command: insert: *collection_name documents: - _id: 2 - _id: 3 ordered: true lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false writeConcern: command_name: insert database_name: *database_name - command_started_event: command: insert: *collection_name documents: - _id: 4 ordered: true lsid: session0 txnNumber: $numberLong: "1" autocommit: false writeConcern: command_name: insert database_name: *database_name - command_started_event: command: commitTransaction: 1 lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false writeConcern: command_name: commitTransaction database_name: admin - command_started_event: command: insert: *collection_name documents: - _id: 5 ordered: true readConcern: afterClusterTime: 42 lsid: session0 txnNumber: $numberLong: "2" startTransaction: true autocommit: false writeConcern: command_name: insert database_name: *database_name - command_started_event: command: commitTransaction: 1 lsid: session0 txnNumber: $numberLong: "2" startTransaction: autocommit: false writeConcern: command_name: commitTransaction database_name: admin outcome: collection: data: - _id: 1 - _id: 2 - _id: 3 - _id: 4 - _id: 5 # This test proves that the driver uses "session1" correctly in operations # and APM expectations. 
- description: insert with session1 operations: - name: startTransaction object: session1 - name: insertOne object: collection arguments: session: session1 document: _id: 1 result: insertedId: 1 - name: insertMany object: collection arguments: documents: - _id: 2 - _id: 3 session: session1 result: insertedIds: {0: 2, 1: 3} - name: commitTransaction object: session1 - name: startTransaction object: session1 - name: insertOne object: collection arguments: session: session1 document: _id: 4 result: insertedId: 4 - name: abortTransaction object: session1 expectations: - command_started_event: command: insert: *collection_name documents: - _id: 1 ordered: true readConcern: lsid: session1 txnNumber: $numberLong: "1" startTransaction: true autocommit: false writeConcern: command_name: insert database_name: *database_name - command_started_event: command: insert: *collection_name documents: - _id: 2 - _id: 3 ordered: true lsid: session1 txnNumber: $numberLong: "1" startTransaction: autocommit: false writeConcern: command_name: insert database_name: *database_name - command_started_event: command: commitTransaction: 1 lsid: session1 txnNumber: $numberLong: "1" startTransaction: autocommit: false writeConcern: command_name: commitTransaction database_name: admin - command_started_event: command: insert: *collection_name documents: - _id: 4 ordered: true readConcern: afterClusterTime: 42 lsid: session1 txnNumber: $numberLong: "2" startTransaction: true autocommit: false writeConcern: command_name: insert database_name: *database_name - command_started_event: command: abortTransaction: 1 lsid: session1 txnNumber: $numberLong: "2" startTransaction: autocommit: false writeConcern: command_name: abortTransaction database_name: admin outcome: collection: data: - _id: 1 - _id: 2 - _id: 3 # This test proves that the driver parses the collectionOptions writeConcern. 
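# (Illustrative aside, not part of the fixture: in the Ruby driver a
# per-collection write concern is supplied when the collection is obtained,
# e.g. client['test', write_concern: { w: :majority }]. The tests below pin
# down that outside a transaction this option is applied to the write, while
# inside a transaction it is ignored and the transaction's write concern is
# applied only on commitTransaction.)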
  - description: collection writeConcern without transaction
    clientOptions:
      retryWrites: false
    operations:
      - name: insertOne
        object: collection
        collectionOptions:
          writeConcern:
            w: majority
        arguments:
          session: session0
          document:
            _id: 1
        result:
          insertedId: 1
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 1
            ordered: true
            readConcern:
            lsid: session0
            txnNumber:
            startTransaction:
            autocommit:
            writeConcern:
              w: majority
          command_name: insert
          database_name: *database_name
    outcome:
      collection:
        data:
          - _id: 1

  - description: collection writeConcern ignored for insert
    operations:
      - name: startTransaction
        object: session0
        arguments:
          options:
            writeConcern:
              w: majority
      - name: insertOne
        object: collection
        collectionOptions:
          writeConcern:
            w: majority
        arguments:
          session: session0
          document:
            _id: 1
        result:
          insertedId: 1
      - name: insertMany
        object: collection
        collectionOptions:
          writeConcern:
            w: majority
        arguments:
          documents:
            - _id: 2
            - _id: 3
          session: session0
        result:
          insertedIds: {0: 2, 1: 3}
      - name: commitTransaction
        object: session0
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 1
            ordered: true
            readConcern:
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction: true
            autocommit: false
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 2
              - _id: 3
            ordered: true
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern:
              w: majority
          command_name: commitTransaction
          database_name: admin
    outcome:
      collection:
        data:
          - _id: 1
          - _id: 2
          - _id: 3

mongo-ruby-driver-2.21.3/spec/spec_tests/data/transactions/isolation.yml

# Test snapshot isolation.
# This test doesn't check contents of command-started events.
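#
# For reference, the isolation these cases exercise corresponds to driver
# calls roughly like the following (a hedged sketch; the client/collection
# setup is assumed and not part of the fixture):
#
#   s0 = client.start_session
#   s1 = client.start_session
#   s0.start_transaction
#   collection.insert_one({ _id: 1 }, session: s0)
#   collection.find({ _id: 1 }, session: s0).to_a  # => [{ '_id' => 1 }]
#   collection.find({ _id: 1 }, session: s1).to_a  # => [] until s0 commits
#   s0.commit_transaction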
runOn:
  - minServerVersion: "4.0"
    topology: ["replicaset"]
  - minServerVersion: "4.1.8"
    topology: ["sharded"]

database_name: &database_name "transaction-tests"
collection_name: &collection_name "test"

data: []

tests:
  - description: one transaction
    operations:
      - name: startTransaction
        object: session0
      - name: insertOne
        object: collection
        arguments:
          session: session0
          document:
            _id: 1
        result:
          insertedId: 1
      - name: find
        object: collection
        arguments:
          session: session0
          filter:
            _id: 1
        result:
          - {_id: 1}
      - name: find
        object: collection
        arguments:
          session: session1
          filter:
            _id: 1
        result: []
      - name: find
        object: collection
        arguments:
          filter:
            _id: 1
        result: []
      - name: commitTransaction
        object: session0
      - name: find
        object: collection
        arguments:
          session: session1
          filter:
            _id: 1
        result:
          - {_id: 1}
      - name: find
        object: collection
        arguments:
          filter:
            _id: 1
        result:
          - {_id: 1}
    outcome:
      collection:
        data:
          - _id: 1

  - description: two transactions
    operations:
      - name: startTransaction
        object: session0
      - name: startTransaction
        object: session1
      - name: insertOne
        object: collection
        arguments:
          session: session0
          document:
            _id: 1
        result:
          insertedId: 1
      - name: find
        object: collection
        arguments:
          session: session0
          filter:
            _id: 1
        result:
          - {_id: 1}
      - name: find
        object: collection
        arguments:
          session: session1
          filter:
            _id: 1
        result: []
      - name: find
        object: collection
        arguments:
          filter:
            _id: 1
        result: []
      - name: commitTransaction
        object: session0
      # Snapshot isolation in session1, not read-committed.
      - name: find
        object: collection
        arguments:
          session: session1
          filter:
            _id: 1
        result: []
      - name: find
        object: collection
        arguments:
          filter:
            _id: 1
        result:
          - {_id: 1}
      - name: commitTransaction
        object: session1
    outcome:
      collection:
        data:
          - {_id: 1}

mongo-ruby-driver-2.21.3/spec/spec_tests/data/transactions/mongos-pin-auto.yml

# Autogenerated tests that transient errors in a transaction unpin the session.
# See mongos-pin-auto-tests.py runOn: - minServerVersion: "4.1.8" topology: ["sharded"] # serverless proxy doesn't append error labels to errors in transactions # caused by failpoints (CLOUDP-88216) serverless: "forbid" database_name: &database_name "transaction-tests" collection_name: &collection_name "test" data: &data - {_id: 1} - {_id: 2} tests: - description: remain pinned after non-transient Interrupted error on insertOne useMultipleMongoses: true operations: - &startTransaction name: startTransaction object: session0 - &initialCommand name: insertOne object: collection arguments: session: session0 document: {_id: 3} result: insertedId: 3 - name: targetedFailPoint object: testRunner arguments: session: session0 failPoint: configureFailPoint: failCommand mode: {times: 1} data: failCommands: ["insert"] errorCode: 11601 - name: insertOne object: collection arguments: session: session0 document: _id: 4 result: errorLabelsOmit: ["TransientTransactionError", "UnknownTransactionCommitResult"] errorCodeName: Interrupted - &assertSessionPinned name: assertSessionPinned object: testRunner arguments: session: session0 - &commitTransaction name: commitTransaction object: session0 expectations: - command_started_event: command: insert: *collection_name documents: - _id: 3 ordered: true readConcern: lsid: session0 txnNumber: $numberLong: "1" startTransaction: true autocommit: false writeConcern: command_name: insert database_name: *database_name - command_started_event: command: insert: *collection_name documents: - _id: 4 ordered: true readConcern: lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false writeConcern: command_name: insert database_name: *database_name - command_started_event: command: commitTransaction: 1 lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false writeConcern: recoveryToken: 42 command_name: commitTransaction database_name: admin outcome: &outcome collection: data: - {_id: 1} - {_id: 2} - {_id: 3} - description: unpin after transient error within a transaction useMultipleMongoses: true operations: - &startTransaction name: startTransaction object: session0 - &initialCommand name: insertOne object: collection arguments: session: session0 document: _id: 3 result: insertedId: 3 - name: targetedFailPoint object: testRunner arguments: session: session0 failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["insert"] closeConnection: true - name: insertOne object: collection arguments: session: session0 document: _id: 4 result: errorLabelsContain: ["TransientTransactionError"] errorLabelsOmit: ["UnknownTransactionCommitResult"] # Session unpins from the first mongos after the insert error and # abortTransaction succeeds immediately on any mongos. 
- &assertSessionUnpinned name: assertSessionUnpinned object: testRunner arguments: session: session0 - &abortTransaction name: abortTransaction object: session0 expectations: - command_started_event: command: insert: *collection_name documents: - _id: 3 ordered: true readConcern: lsid: session0 txnNumber: $numberLong: "1" startTransaction: true autocommit: false writeConcern: command_name: insert database_name: *database_name - command_started_event: command: insert: *collection_name documents: - _id: 4 ordered: true readConcern: lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false writeConcern: command_name: insert database_name: *database_name - command_started_event: command: abortTransaction: 1 lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false writeConcern: recoveryToken: 42 command_name: abortTransaction database_name: admin outcome: &outcome collection: data: *data # The rest of the tests in this file test every operation type against # multiple types of transient errors (connection and error code). - description: remain pinned after non-transient Interrupted error on insertOne insert useMultipleMongoses: true operations: - *startTransaction - *initialCommand - name: targetedFailPoint object: testRunner arguments: session: session0 failPoint: configureFailPoint: failCommand mode: {times: 1} data: failCommands: ["insert"] errorCode: 11601 - name: insertOne object: collection arguments: session: session0 document: {_id: 4} result: errorLabelsOmit: ["TransientTransactionError"] - *assertSessionPinned - *abortTransaction outcome: *outcome - description: remain pinned after non-transient Interrupted error on insertMany insert useMultipleMongoses: true operations: - *startTransaction - *initialCommand - name: targetedFailPoint object: testRunner arguments: session: session0 failPoint: configureFailPoint: failCommand mode: {times: 1} data: failCommands: ["insert"] errorCode: 11601 - name: insertMany object: collection arguments: session: session0 documents: [{_id: 4}, {_id: 5}] result: errorLabelsOmit: ["TransientTransactionError"] - *assertSessionPinned - *abortTransaction outcome: *outcome - description: remain pinned after non-transient Interrupted error on updateOne update useMultipleMongoses: true operations: - *startTransaction - *initialCommand - name: targetedFailPoint object: testRunner arguments: session: session0 failPoint: configureFailPoint: failCommand mode: {times: 1} data: failCommands: ["update"] errorCode: 11601 - name: updateOne object: collection arguments: session: session0 filter: {_id: 1} update: {$inc: {x: 1}} result: errorLabelsOmit: ["TransientTransactionError"] - *assertSessionPinned - *abortTransaction outcome: *outcome - description: remain pinned after non-transient Interrupted error on replaceOne update useMultipleMongoses: true operations: - *startTransaction - *initialCommand - name: targetedFailPoint object: testRunner arguments: session: session0 failPoint: configureFailPoint: failCommand mode: {times: 1} data: failCommands: ["update"] errorCode: 11601 - name: replaceOne object: collection arguments: session: session0 filter: {_id: 1} replacement: {y: 1} result: errorLabelsOmit: ["TransientTransactionError"] - *assertSessionPinned - *abortTransaction outcome: *outcome - description: remain pinned after non-transient Interrupted error on updateMany update useMultipleMongoses: true operations: - *startTransaction - *initialCommand - name: targetedFailPoint object: testRunner arguments: session: session0 
failPoint: configureFailPoint: failCommand mode: {times: 1} data: failCommands: ["update"] errorCode: 11601 - name: updateMany object: collection arguments: session: session0 filter: {_id: {$gte: 1}} update: {$set: {z: 1}} result: errorLabelsOmit: ["TransientTransactionError"] - *assertSessionPinned - *abortTransaction outcome: *outcome - description: remain pinned after non-transient Interrupted error on deleteOne delete useMultipleMongoses: true operations: - *startTransaction - *initialCommand - name: targetedFailPoint object: testRunner arguments: session: session0 failPoint: configureFailPoint: failCommand mode: {times: 1} data: failCommands: ["delete"] errorCode: 11601 - name: deleteOne object: collection arguments: session: session0 filter: {_id: 1} result: errorLabelsOmit: ["TransientTransactionError"] - *assertSessionPinned - *abortTransaction outcome: *outcome - description: remain pinned after non-transient Interrupted error on deleteMany delete useMultipleMongoses: true operations: - *startTransaction - *initialCommand - name: targetedFailPoint object: testRunner arguments: session: session0 failPoint: configureFailPoint: failCommand mode: {times: 1} data: failCommands: ["delete"] errorCode: 11601 - name: deleteMany object: collection arguments: session: session0 filter: {_id: {$gte: 1}} result: errorLabelsOmit: ["TransientTransactionError"] - *assertSessionPinned - *abortTransaction outcome: *outcome - description: remain pinned after non-transient Interrupted error on findOneAndDelete findAndModify useMultipleMongoses: true operations: - *startTransaction - *initialCommand - name: targetedFailPoint object: testRunner arguments: session: session0 failPoint: configureFailPoint: failCommand mode: {times: 1} data: failCommands: ["findAndModify"] errorCode: 11601 - name: findOneAndDelete object: collection arguments: session: session0 filter: {_id: 1} result: errorLabelsOmit: ["TransientTransactionError"] - *assertSessionPinned - *abortTransaction outcome: *outcome - description: remain pinned after non-transient Interrupted error on findOneAndUpdate findAndModify useMultipleMongoses: true operations: - *startTransaction - *initialCommand - name: targetedFailPoint object: testRunner arguments: session: session0 failPoint: configureFailPoint: failCommand mode: {times: 1} data: failCommands: ["findAndModify"] errorCode: 11601 - name: findOneAndUpdate object: collection arguments: session: session0 filter: {_id: 1} update: {$inc: {x: 1}} returnDocument: Before result: errorLabelsOmit: ["TransientTransactionError"] - *assertSessionPinned - *abortTransaction outcome: *outcome - description: remain pinned after non-transient Interrupted error on findOneAndReplace findAndModify useMultipleMongoses: true operations: - *startTransaction - *initialCommand - name: targetedFailPoint object: testRunner arguments: session: session0 failPoint: configureFailPoint: failCommand mode: {times: 1} data: failCommands: ["findAndModify"] errorCode: 11601 - name: findOneAndReplace object: collection arguments: session: session0 filter: {_id: 1} replacement: {y: 1} returnDocument: Before result: errorLabelsOmit: ["TransientTransactionError"] - *assertSessionPinned - *abortTransaction outcome: *outcome - description: remain pinned after non-transient Interrupted error on bulkWrite insert useMultipleMongoses: true operations: - *startTransaction - *initialCommand - name: targetedFailPoint object: testRunner arguments: session: session0 failPoint: configureFailPoint: failCommand mode: {times: 1} data: 
failCommands: ["insert"] errorCode: 11601 - name: bulkWrite object: collection arguments: session: session0 requests: - name: insertOne arguments: document: {_id: 1} result: errorLabelsOmit: ["TransientTransactionError"] - *assertSessionPinned - *abortTransaction outcome: *outcome - description: remain pinned after non-transient Interrupted error on bulkWrite update useMultipleMongoses: true operations: - *startTransaction - *initialCommand - name: targetedFailPoint object: testRunner arguments: session: session0 failPoint: configureFailPoint: failCommand mode: {times: 1} data: failCommands: ["update"] errorCode: 11601 - name: bulkWrite object: collection arguments: session: session0 requests: - name: updateOne arguments: filter: {_id: 1} update: {$set: {x: 1}} result: errorLabelsOmit: ["TransientTransactionError"] - *assertSessionPinned - *abortTransaction outcome: *outcome - description: remain pinned after non-transient Interrupted error on bulkWrite delete useMultipleMongoses: true operations: - *startTransaction - *initialCommand - name: targetedFailPoint object: testRunner arguments: session: session0 failPoint: configureFailPoint: failCommand mode: {times: 1} data: failCommands: ["delete"] errorCode: 11601 - name: bulkWrite object: collection arguments: session: session0 requests: - name: deleteOne arguments: filter: {_id: 1} result: errorLabelsOmit: ["TransientTransactionError"] - *assertSessionPinned - *abortTransaction outcome: *outcome - description: remain pinned after non-transient Interrupted error on find find useMultipleMongoses: true operations: - *startTransaction - *initialCommand - name: targetedFailPoint object: testRunner arguments: session: session0 failPoint: configureFailPoint: failCommand mode: {times: 1} data: failCommands: ["find"] errorCode: 11601 - name: find object: collection arguments: session: session0 filter: {_id: 1} result: errorLabelsOmit: ["TransientTransactionError"] - *assertSessionPinned - *abortTransaction outcome: *outcome - description: remain pinned after non-transient Interrupted error on countDocuments aggregate useMultipleMongoses: true operations: - *startTransaction - *initialCommand - name: targetedFailPoint object: testRunner arguments: session: session0 failPoint: configureFailPoint: failCommand mode: {times: 1} data: failCommands: ["aggregate"] errorCode: 11601 - name: countDocuments object: collection arguments: session: session0 filter: {} result: errorLabelsOmit: ["TransientTransactionError"] - *assertSessionPinned - *abortTransaction outcome: *outcome - description: remain pinned after non-transient Interrupted error on aggregate aggregate useMultipleMongoses: true operations: - *startTransaction - *initialCommand - name: targetedFailPoint object: testRunner arguments: session: session0 failPoint: configureFailPoint: failCommand mode: {times: 1} data: failCommands: ["aggregate"] errorCode: 11601 - name: aggregate object: collection arguments: session: session0 pipeline: [] result: errorLabelsOmit: ["TransientTransactionError"] - *assertSessionPinned - *abortTransaction outcome: *outcome - description: remain pinned after non-transient Interrupted error on distinct distinct useMultipleMongoses: true operations: - *startTransaction - *initialCommand - name: targetedFailPoint object: testRunner arguments: session: session0 failPoint: configureFailPoint: failCommand mode: {times: 1} data: failCommands: ["distinct"] errorCode: 11601 - name: distinct object: collection arguments: session: session0 fieldName: _id result: errorLabelsOmit: 
["TransientTransactionError"] - *assertSessionPinned - *abortTransaction outcome: *outcome - description: remain pinned after non-transient Interrupted error on runCommand insert useMultipleMongoses: true operations: - *startTransaction - *initialCommand - name: targetedFailPoint object: testRunner arguments: session: session0 failPoint: configureFailPoint: failCommand mode: {times: 1} data: failCommands: ["insert"] errorCode: 11601 - name: runCommand object: database command_name: insert arguments: session: session0 command: insert: *collection_name documents: - _id : 1 result: errorLabelsOmit: ["TransientTransactionError"] - *assertSessionPinned - *abortTransaction outcome: *outcome - description: unpin after transient connection error on insertOne insert useMultipleMongoses: true operations: - *startTransaction - *initialCommand - name: targetedFailPoint object: testRunner arguments: session: session0 failPoint: configureFailPoint: failCommand mode: {times: 1} data: failCommands: ["insert"] closeConnection: true - name: insertOne object: collection arguments: session: session0 document: {_id: 4} result: errorLabelsContain: ["TransientTransactionError"] - *assertSessionUnpinned - *abortTransaction outcome: *outcome - description: unpin after transient ShutdownInProgress error on insertOne insert useMultipleMongoses: true operations: - *startTransaction - *initialCommand - name: targetedFailPoint object: testRunner arguments: session: session0 failPoint: configureFailPoint: failCommand mode: {times: 1} data: failCommands: ["insert"] errorCode: 91 - name: insertOne object: collection arguments: session: session0 document: {_id: 4} result: errorLabelsContain: ["TransientTransactionError"] - *assertSessionUnpinned - *abortTransaction outcome: *outcome - description: unpin after transient connection error on insertMany insert useMultipleMongoses: true operations: - *startTransaction - *initialCommand - name: targetedFailPoint object: testRunner arguments: session: session0 failPoint: configureFailPoint: failCommand mode: {times: 1} data: failCommands: ["insert"] closeConnection: true - name: insertMany object: collection arguments: session: session0 documents: [{_id: 4}, {_id: 5}] result: errorLabelsContain: ["TransientTransactionError"] - *assertSessionUnpinned - *abortTransaction outcome: *outcome - description: unpin after transient ShutdownInProgress error on insertMany insert useMultipleMongoses: true operations: - *startTransaction - *initialCommand - name: targetedFailPoint object: testRunner arguments: session: session0 failPoint: configureFailPoint: failCommand mode: {times: 1} data: failCommands: ["insert"] errorCode: 91 - name: insertMany object: collection arguments: session: session0 documents: [{_id: 4}, {_id: 5}] result: errorLabelsContain: ["TransientTransactionError"] - *assertSessionUnpinned - *abortTransaction outcome: *outcome - description: unpin after transient connection error on updateOne update useMultipleMongoses: true operations: - *startTransaction - *initialCommand - name: targetedFailPoint object: testRunner arguments: session: session0 failPoint: configureFailPoint: failCommand mode: {times: 1} data: failCommands: ["update"] closeConnection: true - name: updateOne object: collection arguments: session: session0 filter: {_id: 1} update: {$inc: {x: 1}} result: errorLabelsContain: ["TransientTransactionError"] - *assertSessionUnpinned - *abortTransaction outcome: *outcome - description: unpin after transient ShutdownInProgress error on updateOne update 
useMultipleMongoses: true operations: - *startTransaction - *initialCommand - name: targetedFailPoint object: testRunner arguments: session: session0 failPoint: configureFailPoint: failCommand mode: {times: 1} data: failCommands: ["update"] errorCode: 91 - name: updateOne object: collection arguments: session: session0 filter: {_id: 1} update: {$inc: {x: 1}} result: errorLabelsContain: ["TransientTransactionError"] - *assertSessionUnpinned - *abortTransaction outcome: *outcome - description: unpin after transient connection error on replaceOne update useMultipleMongoses: true operations: - *startTransaction - *initialCommand - name: targetedFailPoint object: testRunner arguments: session: session0 failPoint: configureFailPoint: failCommand mode: {times: 1} data: failCommands: ["update"] closeConnection: true - name: replaceOne object: collection arguments: session: session0 filter: {_id: 1} replacement: {y: 1} result: errorLabelsContain: ["TransientTransactionError"] - *assertSessionUnpinned - *abortTransaction outcome: *outcome - description: unpin after transient ShutdownInProgress error on replaceOne update useMultipleMongoses: true operations: - *startTransaction - *initialCommand - name: targetedFailPoint object: testRunner arguments: session: session0 failPoint: configureFailPoint: failCommand mode: {times: 1} data: failCommands: ["update"] errorCode: 91 - name: replaceOne object: collection arguments: session: session0 filter: {_id: 1} replacement: {y: 1} result: errorLabelsContain: ["TransientTransactionError"] - *assertSessionUnpinned - *abortTransaction outcome: *outcome - description: unpin after transient connection error on updateMany update useMultipleMongoses: true operations: - *startTransaction - *initialCommand - name: targetedFailPoint object: testRunner arguments: session: session0 failPoint: configureFailPoint: failCommand mode: {times: 1} data: failCommands: ["update"] closeConnection: true - name: updateMany object: collection arguments: session: session0 filter: {_id: {$gte: 1}} update: {$set: {z: 1}} result: errorLabelsContain: ["TransientTransactionError"] - *assertSessionUnpinned - *abortTransaction outcome: *outcome - description: unpin after transient ShutdownInProgress error on updateMany update useMultipleMongoses: true operations: - *startTransaction - *initialCommand - name: targetedFailPoint object: testRunner arguments: session: session0 failPoint: configureFailPoint: failCommand mode: {times: 1} data: failCommands: ["update"] errorCode: 91 - name: updateMany object: collection arguments: session: session0 filter: {_id: {$gte: 1}} update: {$set: {z: 1}} result: errorLabelsContain: ["TransientTransactionError"] - *assertSessionUnpinned - *abortTransaction outcome: *outcome - description: unpin after transient connection error on deleteOne delete useMultipleMongoses: true operations: - *startTransaction - *initialCommand - name: targetedFailPoint object: testRunner arguments: session: session0 failPoint: configureFailPoint: failCommand mode: {times: 1} data: failCommands: ["delete"] closeConnection: true - name: deleteOne object: collection arguments: session: session0 filter: {_id: 1} result: errorLabelsContain: ["TransientTransactionError"] - *assertSessionUnpinned - *abortTransaction outcome: *outcome - description: unpin after transient ShutdownInProgress error on deleteOne delete useMultipleMongoses: true operations: - *startTransaction - *initialCommand - name: targetedFailPoint object: testRunner arguments: session: session0 failPoint: 
configureFailPoint: failCommand mode: {times: 1} data: failCommands: ["delete"] errorCode: 91 - name: deleteOne object: collection arguments: session: session0 filter: {_id: 1} result: errorLabelsContain: ["TransientTransactionError"] - *assertSessionUnpinned - *abortTransaction outcome: *outcome - description: unpin after transient connection error on deleteMany delete useMultipleMongoses: true operations: - *startTransaction - *initialCommand - name: targetedFailPoint object: testRunner arguments: session: session0 failPoint: configureFailPoint: failCommand mode: {times: 1} data: failCommands: ["delete"] closeConnection: true - name: deleteMany object: collection arguments: session: session0 filter: {_id: {$gte: 1}} result: errorLabelsContain: ["TransientTransactionError"] - *assertSessionUnpinned - *abortTransaction outcome: *outcome - description: unpin after transient ShutdownInProgress error on deleteMany delete useMultipleMongoses: true operations: - *startTransaction - *initialCommand - name: targetedFailPoint object: testRunner arguments: session: session0 failPoint: configureFailPoint: failCommand mode: {times: 1} data: failCommands: ["delete"] errorCode: 91 - name: deleteMany object: collection arguments: session: session0 filter: {_id: {$gte: 1}} result: errorLabelsContain: ["TransientTransactionError"] - *assertSessionUnpinned - *abortTransaction outcome: *outcome - description: unpin after transient connection error on findOneAndDelete findAndModify useMultipleMongoses: true operations: - *startTransaction - *initialCommand - name: targetedFailPoint object: testRunner arguments: session: session0 failPoint: configureFailPoint: failCommand mode: {times: 1} data: failCommands: ["findAndModify"] closeConnection: true - name: findOneAndDelete object: collection arguments: session: session0 filter: {_id: 1} result: errorLabelsContain: ["TransientTransactionError"] - *assertSessionUnpinned - *abortTransaction outcome: *outcome - description: unpin after transient ShutdownInProgress error on findOneAndDelete findAndModify useMultipleMongoses: true operations: - *startTransaction - *initialCommand - name: targetedFailPoint object: testRunner arguments: session: session0 failPoint: configureFailPoint: failCommand mode: {times: 1} data: failCommands: ["findAndModify"] errorCode: 91 - name: findOneAndDelete object: collection arguments: session: session0 filter: {_id: 1} result: errorLabelsContain: ["TransientTransactionError"] - *assertSessionUnpinned - *abortTransaction outcome: *outcome - description: unpin after transient connection error on findOneAndUpdate findAndModify useMultipleMongoses: true operations: - *startTransaction - *initialCommand - name: targetedFailPoint object: testRunner arguments: session: session0 failPoint: configureFailPoint: failCommand mode: {times: 1} data: failCommands: ["findAndModify"] closeConnection: true - name: findOneAndUpdate object: collection arguments: session: session0 filter: {_id: 1} update: {$inc: {x: 1}} returnDocument: Before result: errorLabelsContain: ["TransientTransactionError"] - *assertSessionUnpinned - *abortTransaction outcome: *outcome - description: unpin after transient ShutdownInProgress error on findOneAndUpdate findAndModify useMultipleMongoses: true operations: - *startTransaction - *initialCommand - name: targetedFailPoint object: testRunner arguments: session: session0 failPoint: configureFailPoint: failCommand mode: {times: 1} data: failCommands: ["findAndModify"] errorCode: 91 - name: findOneAndUpdate object: collection 
arguments: session: session0 filter: {_id: 1} update: {$inc: {x: 1}} returnDocument: Before result: errorLabelsContain: ["TransientTransactionError"] - *assertSessionUnpinned - *abortTransaction outcome: *outcome - description: unpin after transient connection error on findOneAndReplace findAndModify useMultipleMongoses: true operations: - *startTransaction - *initialCommand - name: targetedFailPoint object: testRunner arguments: session: session0 failPoint: configureFailPoint: failCommand mode: {times: 1} data: failCommands: ["findAndModify"] closeConnection: true - name: findOneAndReplace object: collection arguments: session: session0 filter: {_id: 1} replacement: {y: 1} returnDocument: Before result: errorLabelsContain: ["TransientTransactionError"] - *assertSessionUnpinned - *abortTransaction outcome: *outcome - description: unpin after transient ShutdownInProgress error on findOneAndReplace findAndModify useMultipleMongoses: true operations: - *startTransaction - *initialCommand - name: targetedFailPoint object: testRunner arguments: session: session0 failPoint: configureFailPoint: failCommand mode: {times: 1} data: failCommands: ["findAndModify"] errorCode: 91 - name: findOneAndReplace object: collection arguments: session: session0 filter: {_id: 1} replacement: {y: 1} returnDocument: Before result: errorLabelsContain: ["TransientTransactionError"] - *assertSessionUnpinned - *abortTransaction outcome: *outcome - description: unpin after transient connection error on bulkWrite insert useMultipleMongoses: true operations: - *startTransaction - *initialCommand - name: targetedFailPoint object: testRunner arguments: session: session0 failPoint: configureFailPoint: failCommand mode: {times: 1} data: failCommands: ["insert"] closeConnection: true - name: bulkWrite object: collection arguments: session: session0 requests: - name: insertOne arguments: document: {_id: 1} result: errorLabelsContain: ["TransientTransactionError"] - *assertSessionUnpinned - *abortTransaction outcome: *outcome - description: unpin after transient ShutdownInProgress error on bulkWrite insert useMultipleMongoses: true operations: - *startTransaction - *initialCommand - name: targetedFailPoint object: testRunner arguments: session: session0 failPoint: configureFailPoint: failCommand mode: {times: 1} data: failCommands: ["insert"] errorCode: 91 - name: bulkWrite object: collection arguments: session: session0 requests: - name: insertOne arguments: document: {_id: 1} result: errorLabelsContain: ["TransientTransactionError"] - *assertSessionUnpinned - *abortTransaction outcome: *outcome - description: unpin after transient connection error on bulkWrite update useMultipleMongoses: true operations: - *startTransaction - *initialCommand - name: targetedFailPoint object: testRunner arguments: session: session0 failPoint: configureFailPoint: failCommand mode: {times: 1} data: failCommands: ["update"] closeConnection: true - name: bulkWrite object: collection arguments: session: session0 requests: - name: updateOne arguments: filter: {_id: 1} update: {$set: {x: 1}} result: errorLabelsContain: ["TransientTransactionError"] - *assertSessionUnpinned - *abortTransaction outcome: *outcome - description: unpin after transient ShutdownInProgress error on bulkWrite update useMultipleMongoses: true operations: - *startTransaction - *initialCommand - name: targetedFailPoint object: testRunner arguments: session: session0 failPoint: configureFailPoint: failCommand mode: {times: 1} data: failCommands: ["update"] errorCode: 91 - name: 
bulkWrite object: collection arguments: session: session0 requests: - name: updateOne arguments: filter: {_id: 1} update: {$set: {x: 1}} result: errorLabelsContain: ["TransientTransactionError"] - *assertSessionUnpinned - *abortTransaction outcome: *outcome - description: unpin after transient connection error on bulkWrite delete useMultipleMongoses: true operations: - *startTransaction - *initialCommand - name: targetedFailPoint object: testRunner arguments: session: session0 failPoint: configureFailPoint: failCommand mode: {times: 1} data: failCommands: ["delete"] closeConnection: true - name: bulkWrite object: collection arguments: session: session0 requests: - name: deleteOne arguments: filter: {_id: 1} result: errorLabelsContain: ["TransientTransactionError"] - *assertSessionUnpinned - *abortTransaction outcome: *outcome - description: unpin after transient ShutdownInProgress error on bulkWrite delete useMultipleMongoses: true operations: - *startTransaction - *initialCommand - name: targetedFailPoint object: testRunner arguments: session: session0 failPoint: configureFailPoint: failCommand mode: {times: 1} data: failCommands: ["delete"] errorCode: 91 - name: bulkWrite object: collection arguments: session: session0 requests: - name: deleteOne arguments: filter: {_id: 1} result: errorLabelsContain: ["TransientTransactionError"] - *assertSessionUnpinned - *abortTransaction outcome: *outcome - description: unpin after transient connection error on find find useMultipleMongoses: true operations: - *startTransaction - *initialCommand - name: targetedFailPoint object: testRunner arguments: session: session0 failPoint: configureFailPoint: failCommand mode: {times: 1} data: failCommands: ["find"] closeConnection: true - name: find object: collection arguments: session: session0 filter: {_id: 1} result: errorLabelsContain: ["TransientTransactionError"] - *assertSessionUnpinned - *abortTransaction outcome: *outcome - description: unpin after transient ShutdownInProgress error on find find useMultipleMongoses: true operations: - *startTransaction - *initialCommand - name: targetedFailPoint object: testRunner arguments: session: session0 failPoint: configureFailPoint: failCommand mode: {times: 1} data: failCommands: ["find"] errorCode: 91 - name: find object: collection arguments: session: session0 filter: {_id: 1} result: errorLabelsContain: ["TransientTransactionError"] - *assertSessionUnpinned - *abortTransaction outcome: *outcome - description: unpin after transient connection error on countDocuments aggregate useMultipleMongoses: true operations: - *startTransaction - *initialCommand - name: targetedFailPoint object: testRunner arguments: session: session0 failPoint: configureFailPoint: failCommand mode: {times: 1} data: failCommands: ["aggregate"] closeConnection: true - name: countDocuments object: collection arguments: session: session0 filter: {} result: errorLabelsContain: ["TransientTransactionError"] - *assertSessionUnpinned - *abortTransaction outcome: *outcome - description: unpin after transient ShutdownInProgress error on countDocuments aggregate useMultipleMongoses: true operations: - *startTransaction - *initialCommand - name: targetedFailPoint object: testRunner arguments: session: session0 failPoint: configureFailPoint: failCommand mode: {times: 1} data: failCommands: ["aggregate"] errorCode: 91 - name: countDocuments object: collection arguments: session: session0 filter: {} result: errorLabelsContain: ["TransientTransactionError"] - *assertSessionUnpinned - 
*abortTransaction outcome: *outcome - description: unpin after transient connection error on aggregate aggregate useMultipleMongoses: true operations: - *startTransaction - *initialCommand - name: targetedFailPoint object: testRunner arguments: session: session0 failPoint: configureFailPoint: failCommand mode: {times: 1} data: failCommands: ["aggregate"] closeConnection: true - name: aggregate object: collection arguments: session: session0 pipeline: [] result: errorLabelsContain: ["TransientTransactionError"] - *assertSessionUnpinned - *abortTransaction outcome: *outcome - description: unpin after transient ShutdownInProgress error on aggregate aggregate useMultipleMongoses: true operations: - *startTransaction - *initialCommand - name: targetedFailPoint object: testRunner arguments: session: session0 failPoint: configureFailPoint: failCommand mode: {times: 1} data: failCommands: ["aggregate"] errorCode: 91 - name: aggregate object: collection arguments: session: session0 pipeline: [] result: errorLabelsContain: ["TransientTransactionError"] - *assertSessionUnpinned - *abortTransaction outcome: *outcome - description: unpin after transient connection error on distinct distinct useMultipleMongoses: true operations: - *startTransaction - *initialCommand - name: targetedFailPoint object: testRunner arguments: session: session0 failPoint: configureFailPoint: failCommand mode: {times: 1} data: failCommands: ["distinct"] closeConnection: true - name: distinct object: collection arguments: session: session0 fieldName: _id result: errorLabelsContain: ["TransientTransactionError"] - *assertSessionUnpinned - *abortTransaction outcome: *outcome - description: unpin after transient ShutdownInProgress error on distinct distinct useMultipleMongoses: true operations: - *startTransaction - *initialCommand - name: targetedFailPoint object: testRunner arguments: session: session0 failPoint: configureFailPoint: failCommand mode: {times: 1} data: failCommands: ["distinct"] errorCode: 91 - name: distinct object: collection arguments: session: session0 fieldName: _id result: errorLabelsContain: ["TransientTransactionError"] - *assertSessionUnpinned - *abortTransaction outcome: *outcome - description: unpin after transient connection error on runCommand insert useMultipleMongoses: true operations: - *startTransaction - *initialCommand - name: targetedFailPoint object: testRunner arguments: session: session0 failPoint: configureFailPoint: failCommand mode: {times: 1} data: failCommands: ["insert"] closeConnection: true - name: runCommand object: database command_name: insert arguments: session: session0 command: insert: *collection_name documents: - _id : 1 result: errorLabelsContain: ["TransientTransactionError"] - *assertSessionUnpinned - *abortTransaction outcome: *outcome - description: unpin after transient ShutdownInProgress error on runCommand insert useMultipleMongoses: true operations: - *startTransaction - *initialCommand - name: targetedFailPoint object: testRunner arguments: session: session0 failPoint: configureFailPoint: failCommand mode: {times: 1} data: failCommands: ["insert"] errorCode: 91 - name: runCommand object: database command_name: insert arguments: session: session0 command: insert: *collection_name documents: - _id : 1 result: errorLabelsContain: ["TransientTransactionError"] - *assertSessionUnpinned - *abortTransaction outcome: *outcome 
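The autogenerated cases above all reduce to one contract: a transient error (a connection failure or an error bearing the TransientTransactionError label) unpins the session from its mongos, while a non-transient server error such as Interrupted leaves it pinned. A hedged sketch of what this looks like in application code (the client/collection setup and the document are illustrative assumptions):

    session.start_transaction
    begin
      collection.insert_one({ _id: 4 }, session: session)
    rescue Mongo::Error => e
      if e.label?('TransientTransactionError')
        # The driver has already unpinned the session from the mongos that
        # returned the error; the abort below may go to any mongos and the
        # whole transaction can be retried from the start.
      else
        # Non-transient error (e.g. Interrupted): the session stays pinned
        # and the abort is routed back to the same mongos.
      end
      session.abort_transaction
    end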
mongo-ruby-driver-2.21.3/spec/spec_tests/data/transactions/mongos-recovery-token.yml000066400000000000000000000253071505113246500307250ustar00rootroot00000000000000runOn: - minServerVersion: "4.1.8" topology: ["sharded"] # serverless proxy doesn't use recovery tokens serverless: "forbid" database_name: &database_name "transaction-tests" collection_name: &collection_name "test" data: [] tests: - description: commitTransaction explicit retries include recoveryToken useMultipleMongoses: true operations: - name: startTransaction object: session0 - name: insertOne object: collection arguments: session: session0 document: _id: 1 result: insertedId: 1 - name: commitTransaction object: session0 - name: commitTransaction object: session0 - name: commitTransaction object: session0 expectations: - command_started_event: command: insert: *collection_name documents: - _id: 1 ordered: true readConcern: lsid: session0 txnNumber: $numberLong: "1" startTransaction: true autocommit: false writeConcern: command_name: insert database_name: *database_name - command_started_event: command: commitTransaction: 1 lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false writeConcern: recoveryToken: 42 command_name: commitTransaction database_name: admin - command_started_event: command: commitTransaction: 1 lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false # commitTransaction applies w:majority on retries writeConcern: { w: majority, wtimeout: 10000 } recoveryToken: 42 command_name: commitTransaction database_name: admin - command_started_event: command: commitTransaction: 1 lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false # commitTransaction applies w:majority on retries writeConcern: { w: majority, wtimeout: 10000 } recoveryToken: 42 command_name: commitTransaction database_name: admin outcome: collection: data: - _id: 1 - description: commitTransaction retry succeeds on new mongos useMultipleMongoses: true operations: - name: startTransaction object: session0 arguments: options: writeConcern: w: majority - name: insertOne object: collection arguments: session: session0 document: _id: 1 result: insertedId: 1 # Enable the fail point only on the Mongos that session0 is pinned to. - name: targetedFailPoint object: testRunner arguments: session: session0 failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["commitTransaction"] errorLabels: ["RetryableWriteError"] writeConcernError: code: 91 errmsg: Replication is being shut down # The client sees a retryable writeConcernError on the first # commitTransaction due to the fail point but it actually succeeds on the # server (SERVER-39346). The retry will succeed both on a new mongos and # on the original. 
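# (Application-level note, not part of this fixture: the Ruby driver's
# Session#with_transaction helper performs this kind of commitTransaction
# retry automatically when the error carries the
# UnknownTransactionCommitResult label, e.g.:
#
#   session.with_transaction do
#     collection.insert_one({ _id: 1 }, session: session)
#   end
# )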
- name: commitTransaction object: session0 expectations: - command_started_event: command: insert: *collection_name documents: - _id: 1 ordered: true readConcern: lsid: session0 txnNumber: $numberLong: "1" startTransaction: true autocommit: false writeConcern: command_name: insert database_name: *database_name - command_started_event: command: commitTransaction: 1 lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false writeConcern: w: majority recoveryToken: 42 command_name: commitTransaction database_name: admin - command_started_event: command: commitTransaction: 1 lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false # commitTransaction applies w:majority on retries writeConcern: { w: majority, wtimeout: 10000 } recoveryToken: 42 command_name: commitTransaction database_name: admin outcome: collection: data: - _id: 1 - description: commitTransaction retry fails on new mongos useMultipleMongoses: true clientOptions: # Increase heartbeatFrequencyMS to avoid the race condition where an in # flight heartbeat refreshes the first mongos' SDAM state in between # the initial commitTransaction and the retry attempt. heartbeatFrequencyMS: 30000 operations: - name: startTransaction object: session0 - name: insertOne object: collection arguments: session: session0 document: _id: 1 result: insertedId: 1 # Enable the fail point only on the Mongos that session0 is pinned to. # Fail isMaster to prevent the heartbeat requested directly after the # retryable commit error from racing with server selection for the retry. # Note: times: 7 is slightly arbitrary but it accounts for one failed # commit and some SDAM heartbeats. A test runner will have multiple # clients connected to this server so this fail point configuration # is also racy. - name: targetedFailPoint object: testRunner arguments: session: session0 failPoint: configureFailPoint: failCommand mode: { times: 7 } data: failCommands: ["commitTransaction", "isMaster"] closeConnection: true # The first commitTransaction sees a retryable connection error due to # the fail point and also fails on the server. The retry attempt on a # new mongos will wait for the transaction to time out and will fail # because the transaction was aborted. Note that the retry attempt should # not select the original mongos because that server's SDAM state is # reset by the connection error, heartbeatFrequencyMS is high, and # subsequent isMaster heartbeats should fail.
- name: commitTransaction object: session0 result: # https://jira.mongodb.org/browse/SPEC-1330 errorLabelsContain: ["UnknownTransactionCommitResult"] errorLabelsOmit: ["TransientTransactionError"] expectations: - command_started_event: command: insert: *collection_name documents: - _id: 1 ordered: true readConcern: lsid: session0 txnNumber: $numberLong: "1" startTransaction: true autocommit: false writeConcern: command_name: insert database_name: *database_name - command_started_event: command: commitTransaction: 1 lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false writeConcern: recoveryToken: 42 command_name: commitTransaction database_name: admin - command_started_event: command: commitTransaction: 1 lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false # commitTransaction applies w:majority on retries writeConcern: { w: majority, wtimeout: 10000 } recoveryToken: 42 command_name: commitTransaction database_name: admin outcome: collection: data: [] - description: abortTransaction sends recoveryToken useMultipleMongoses: true operations: - name: startTransaction object: session0 - name: insertOne object: collection arguments: session: session0 document: _id: 1 result: insertedId: 1 # Enable the fail point only on the Mongos that session0 is pinned to. - name: targetedFailPoint object: testRunner arguments: session: session0 failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["abortTransaction"] closeConnection: true # The first abortTransaction sees a retryable connection error due to # the fail point. The retry attempt on a new mongos will send the # recoveryToken. Note that the retry attempt will also fail because the # server does not yet support aborting from a new mongos, however this # operation should "succeed" since abortTransaction ignores errors. - name: abortTransaction object: session0 expectations: - command_started_event: command: insert: *collection_name documents: - _id: 1 ordered: true readConcern: lsid: session0 txnNumber: $numberLong: "1" startTransaction: true autocommit: false writeConcern: command_name: insert database_name: *database_name - command_started_event: command: abortTransaction: 1 lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false writeConcern: recoveryToken: 42 command_name: abortTransaction database_name: admin - command_started_event: command: abortTransaction: 1 lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false writeConcern: recoveryToken: 42 command_name: abortTransaction database_name: admin outcome: collection: data: [] mongo-ruby-driver-2.21.3/spec/spec_tests/data/transactions/pin-mongos.yml000066400000000000000000000356141505113246500265410ustar00rootroot00000000000000# Test that all the operations go to the same mongos. # # In tests that don't include command-started events the assertion is implicit: # that all the read operations succeed. If the driver does not properly pin to # a single mongos then one of the operations in a transaction will eventually # be sent to a different mongos, which is unaware of the transaction, and the # mongos will return a command error. 
An example of such an error is: # { # 'ok': 0.0, # 'errmsg': 'cannot continue txnId -1 for session 28938f50-9d29-4ca5-8de5-ddaf261267c4 - 47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU= with txnId 1', # 'code': 251, # 'codeName': 'NoSuchTransaction', # 'errorLabels': ['TransientTransactionError'] # } runOn: - minServerVersion: "4.1.8" topology: ["sharded"] # serverless proxy doesn't append error labels to errors in transactions # caused by failpoints (CLOUDP-88216) serverless: "forbid" database_name: &database_name "transaction-tests" collection_name: &collection_name "test" data: &data - {_id: 1} - {_id: 2} tests: - description: countDocuments useMultipleMongoses: true operations: - &startTransaction name: startTransaction object: session0 - &countDocuments name: countDocuments object: collection arguments: filter: _id: 2 session: session0 result: 1 - *countDocuments - *countDocuments - *countDocuments - *countDocuments - *countDocuments - *countDocuments - *countDocuments - &commitTransaction name: commitTransaction object: session0 outcome: collection: data: *data - description: distinct useMultipleMongoses: true operations: - *startTransaction - &distinct name: distinct object: collection arguments: fieldName: _id session: session0 result: [1, 2] - *distinct - *distinct - *distinct - *distinct - *distinct - *distinct - *distinct - *commitTransaction outcome: collection: data: *data - description: find useMultipleMongoses: true operations: - name: startTransaction object: session0 - &find name: find object: collection arguments: filter: _id: 2 session: session0 result: - {_id: 2} - *find - *find - *find - *find - *find - *find - *find - *commitTransaction outcome: collection: data: *data - description: insertOne useMultipleMongoses: true operations: - name: startTransaction object: session0 - name: insertOne object: collection arguments: document: _id: 3 session: session0 result: insertedId: 3 - name: insertOne object: collection arguments: document: _id: 4 session: session0 result: insertedId: 4 - name: insertOne object: collection arguments: document: _id: 5 session: session0 result: insertedId: 5 - name: insertOne object: collection arguments: document: _id: 6 session: session0 result: insertedId: 6 - name: insertOne object: collection arguments: document: _id: 7 session: session0 result: insertedId: 7 - name: insertOne object: collection arguments: document: _id: 8 session: session0 result: insertedId: 8 - name: insertOne object: collection arguments: document: _id: 9 session: session0 result: insertedId: 9 - name: insertOne object: collection arguments: document: _id: 10 session: session0 result: insertedId: 10 - *commitTransaction outcome: collection: data: - {_id: 1} - {_id: 2} - {_id: 3} - {_id: 4} - {_id: 5} - {_id: 6} - {_id: 7} - {_id: 8} - {_id: 9} - {_id: 10} - description: mixed read write operations useMultipleMongoses: true operations: - name: startTransaction object: session0 - name: insertOne object: collection arguments: document: _id: 3 session: session0 result: insertedId: 3 - &countDocuments name: countDocuments object: collection arguments: filter: _id: 3 session: session0 result: 1 - *countDocuments - *countDocuments - *countDocuments - *countDocuments - name: insertOne object: collection arguments: document: _id: 4 session: session0 result: insertedId: 4 - name: insertOne object: collection arguments: document: _id: 5 session: session0 result: insertedId: 5 - name: insertOne object: collection arguments: document: _id: 6 session: session0 result: insertedId: 6 - 
name: insertOne object: collection arguments: document: _id: 7 session: session0 result: insertedId: 7 - *commitTransaction outcome: collection: data: - {_id: 1} - {_id: 2} - {_id: 3} - {_id: 4} - {_id: 5} - {_id: 6} - {_id: 7} - description: multiple commits useMultipleMongoses: true operations: - name: startTransaction object: session0 - name: insertMany object: collection arguments: documents: - _id: 3 - _id: 4 session: session0 result: insertedIds: {0: 3, 1: 4} # Session is pinned and remains pinned after successful commits. - &assertSessionPinned name: assertSessionPinned object: testRunner arguments: session: session0 - *commitTransaction - *assertSessionPinned - *commitTransaction - *commitTransaction - *commitTransaction - *commitTransaction - *commitTransaction - *commitTransaction - *commitTransaction - *commitTransaction - *commitTransaction - *assertSessionPinned outcome: collection: data: - {_id: 1} - {_id: 2} - {_id: 3} - {_id: 4} - description: remain pinned after non-transient error on commit useMultipleMongoses: true operations: - name: startTransaction object: session0 - name: insertMany object: collection arguments: documents: - _id: 3 - _id: 4 session: session0 result: insertedIds: {0: 3, 1: 4} # Session is pinned. - *assertSessionPinned # Fail the commit with a non-transient error. - name: targetedFailPoint object: testRunner arguments: session: session0 failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["commitTransaction"] errorCode: 51 # ManualInterventionRequired - name: commitTransaction object: session0 result: errorLabelsOmit: ["TransientTransactionError"] errorCode: 51 - *assertSessionPinned # The next commit should succeed. - name: commitTransaction object: session0 - *assertSessionPinned outcome: collection: data: - {_id: 1} - {_id: 2} - {_id: 3} - {_id: 4} - description: unpin after transient error within a transaction useMultipleMongoses: true operations: - name: startTransaction object: session0 - name: insertOne object: collection arguments: session: session0 document: _id: 3 result: insertedId: 3 # Enable the fail point only on the Mongos that session0 is pinned to. - name: targetedFailPoint object: testRunner arguments: session: session0 failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["insert"] closeConnection: true - name: insertOne object: collection arguments: session: session0 document: _id: 4 result: errorLabelsContain: ["TransientTransactionError"] errorLabelsOmit: ["UnknownTransactionCommitResult"] # Session unpins from the first mongos after the insert error and # abortTransaction succeeds immediately on any mongos. 
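# (Sketch for context, not asserted by this test: applications normally
# react to TransientTransactionError by retrying the whole transaction,
# which the driver's convenient API does automatically, e.g.
#
#   session.with_transaction do
#     collection.insert_one({ _id: 4 }, session: session)
#   end
# )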
- name: abortTransaction object: session0 expectations: - command_started_event: command: insert: *collection_name documents: - _id: 3 ordered: true readConcern: lsid: session0 txnNumber: $numberLong: "1" startTransaction: true autocommit: false writeConcern: command_name: insert database_name: *database_name - command_started_event: command: insert: *collection_name documents: - _id: 4 ordered: true readConcern: lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false writeConcern: command_name: insert database_name: *database_name - command_started_event: command: abortTransaction: 1 lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false writeConcern: recoveryToken: 42 command_name: abortTransaction database_name: admin outcome: collection: data: - _id: 1 - _id: 2 # Applications should not run commitTransaction after transient errors but # the transactions API allows it and this test confirms unpinning behavior. # In a sharded cluster, a transient error within a transaction unpins the # session. This way a subsequent abort can "succeed" immediately instead of # blocking for serverSelectionTimeoutMS in the case the mongos went down. # However, since the abortTransaction helper ignores errors, this test uses # commitTransaction to prove the session was unpinned. - description: unpin after transient error within a transaction and commit useMultipleMongoses: true clientOptions: # Increase heartbeatFrequencyMS to avoid the race condition where an in # flight heartbeat refreshes the first mongos' SDAM state in between # the insert connection error and the single commit attempt. heartbeatFrequencyMS: 30000 operations: - name: startTransaction object: session0 - name: insertOne object: collection arguments: session: session0 document: _id: 3 result: insertedId: 3 # Enable the fail point only on the Mongos that session0 is pinned to. # Fail hello/legacy hello to prevent the heartbeat requested directly after the # insert error from racing with server selection for the commit. # Note: times: 7 is slightly arbitrary but it accounts for one failed # insert and some SDAM heartbeats. A test runner will have multiple # clients connected to this server so this fail point configuration # is also racy. - name: targetedFailPoint object: testRunner arguments: session: session0 failPoint: configureFailPoint: failCommand mode: { times: 7 } data: failCommands: ["insert", "isMaster", "hello"] closeConnection: true - name: insertOne object: collection arguments: session: session0 document: _id: 4 result: errorLabelsContain: ["TransientTransactionError"] errorLabelsOmit: ["UnknownTransactionCommitResult"] # Session unpins from the first mongos after the insert error and # commitTransaction selects the second mongos which is unaware of the # transaction and therefore fails with a NoSuchTransaction error. If this # commit succeeds it indicates a bug, either: # - the driver mistakenly remained pinned even after the insert error, or # - the test client was initialized with a single mongos seed # # Note that the commit attempt should not select the original mongos # because that server's SDAM state is reset by the connection error, # heartbeatFrequencyMS is high, and subsequent heartbeats # should fail. 
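# The expected failure below matches the NoSuchTransaction shape quoted at
# the top of this file (code 251, codeName 'NoSuchTransaction', with the
# 'TransientTransactionError' label); the exact errmsg varies by server
# version.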
- name: commitTransaction object: session0 result: errorLabelsContain: ["TransientTransactionError"] errorLabelsOmit: ["UnknownTransactionCommitResult"] errorCodeName: NoSuchTransaction expectations: - command_started_event: command: insert: *collection_name documents: - _id: 3 ordered: true readConcern: lsid: session0 txnNumber: $numberLong: "1" startTransaction: true autocommit: false writeConcern: command_name: insert database_name: *database_name - command_started_event: command: insert: *collection_name documents: - _id: 4 ordered: true readConcern: lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false writeConcern: command_name: insert database_name: *database_name - command_started_event: command: commitTransaction: 1 lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false writeConcern: recoveryToken: 42 command_name: commitTransaction database_name: admin outcome: collection: data: - _id: 1 - _id: 2 mongo-ruby-driver-2.21.3/spec/spec_tests/data/transactions/read-concern.yml000066400000000000000000000410311505113246500270010ustar00rootroot00000000000000runOn: - minServerVersion: "4.0" topology: ["replicaset"] - minServerVersion: "4.1.8" topology: ["sharded"] database_name: &database_name "transaction-tests" collection_name: &collection_name "test" data: &data - {_id: 1} - {_id: 2} - {_id: 3} - {_id: 4} tests: - description: only first countDocuments includes readConcern operations: - &startTransaction name: startTransaction object: session0 arguments: options: readConcern: level: majority - &countDocuments name: countDocuments object: collection collectionOptions: readConcern: level: majority arguments: session: session0 filter: {_id: {$gte: 2}} result: 3 - *countDocuments - &commitTransaction name: commitTransaction object: session0 expectations: - command_started_event: command: aggregate: *collection_name pipeline: - $match: {_id: {$gte: 2}} - $group: {_id: 1, n: {$sum: 1}} cursor: {} lsid: session0 readConcern: level: majority txnNumber: $numberLong: "1" startTransaction: true autocommit: false command_name: aggregate database_name: *database_name - command_started_event: command: aggregate: *collection_name pipeline: - $match: {_id: {$gte: 2}} - $group: {_id: 1, n: {$sum: 1}} cursor: {} lsid: session0 readConcern: # No readConcern txnNumber: $numberLong: "1" startTransaction: autocommit: false command_name: aggregate database_name: *database_name - &commitTransactionEvent command_started_event: command: commitTransaction: 1 lsid: session0 readConcern: txnNumber: $numberLong: "1" startTransaction: autocommit: false writeConcern: command_name: commitTransaction database_name: admin outcome: &outcome collection: data: *data - description: only first find includes readConcern operations: - *startTransaction - &find name: find object: collection collectionOptions: readConcern: level: majority arguments: session: session0 batchSize: 3 result: *data - *find - *commitTransaction expectations: - command_started_event: command: find: *collection_name batchSize: 3 lsid: session0 readConcern: level: majority txnNumber: $numberLong: "1" startTransaction: true autocommit: false command_name: find database_name: *database_name - command_started_event: command: getMore: # 42 is a fake placeholder value for the cursorId. 
$numberLong: '42' collection: *collection_name batchSize: 3 lsid: session0 readConcern: # No readConcern txnNumber: $numberLong: "1" startTransaction: autocommit: false command_name: getMore database_name: *database_name - command_started_event: command: find: *collection_name batchSize: 3 lsid: session0 readConcern: # No readConcern txnNumber: $numberLong: "1" startTransaction: autocommit: false command_name: find database_name: *database_name - command_started_event: command: getMore: $numberLong: '42' collection: *collection_name batchSize: 3 lsid: session0 readConcern: # No readConcern txnNumber: $numberLong: "1" startTransaction: autocommit: false command_name: getMore database_name: *database_name - *commitTransactionEvent outcome: &outcome collection: data: *data - description: only first aggregate includes readConcern operations: - *startTransaction - &aggregate name: aggregate object: collection collectionOptions: readConcern: level: majority arguments: pipeline: - $project: _id: 1 batchSize: 3 session: session0 result: *data - *aggregate - *commitTransaction expectations: - command_started_event: command: aggregate: *collection_name pipeline: - $project: _id: 1 cursor: batchSize: 3 lsid: session0 readConcern: level: majority txnNumber: $numberLong: "1" startTransaction: true autocommit: false command_name: aggregate database_name: *database_name - command_started_event: command: getMore: # 42 is a fake placeholder value for the cursorId. $numberLong: '42' collection: *collection_name batchSize: 3 lsid: session0 readConcern: # No readConcern txnNumber: $numberLong: "1" startTransaction: autocommit: false command_name: getMore database_name: *database_name - command_started_event: command: aggregate: *collection_name pipeline: - $project: _id: 1 cursor: batchSize: 3 lsid: session0 readConcern: # No readConcern txnNumber: $numberLong: "1" startTransaction: autocommit: false command_name: aggregate database_name: *database_name - command_started_event: command: getMore: $numberLong: '42' collection: *collection_name batchSize: 3 lsid: session0 readConcern: # No readConcern txnNumber: $numberLong: "1" startTransaction: autocommit: false command_name: getMore database_name: *database_name - *commitTransactionEvent outcome: *outcome - description: only first distinct includes readConcern operations: - *startTransaction - &distinct name: distinct object: collection collectionOptions: readConcern: level: majority arguments: session: session0 fieldName: _id result: [1, 2, 3, 4] - *distinct - *commitTransaction expectations: - command_started_event: command: distinct: *collection_name key: _id lsid: session0 readConcern: level: majority txnNumber: $numberLong: "1" startTransaction: true autocommit: false writeConcern: command_name: distinct database_name: *database_name - command_started_event: command: distinct: *collection_name key: _id lsid: session0 readConcern: # No readConcern txnNumber: $numberLong: "1" startTransaction: autocommit: false writeConcern: command_name: distinct database_name: *database_name - *commitTransactionEvent outcome: *outcome - description: only first runCommand includes readConcern operations: - *startTransaction - &runCommand name: runCommand object: database command_name: find arguments: session: session0 command: find: *collection_name - *runCommand - *commitTransaction expectations: - command_started_event: command: find: *collection_name lsid: session0 readConcern: level: majority txnNumber: $numberLong: "1" startTransaction: true autocommit: false 
writeConcern: command_name: find database_name: *database_name - command_started_event: command: find: *collection_name lsid: session0 readConcern: # No readConcern txnNumber: $numberLong: "1" startTransaction: autocommit: false writeConcern: command_name: find database_name: *database_name - *commitTransactionEvent outcome: *outcome - description: countDocuments ignores collection readConcern operations: - &startTransactionNoReadConcern name: startTransaction object: session0 - *countDocuments - *countDocuments - *commitTransaction expectations: - command_started_event: command: aggregate: *collection_name pipeline: - $match: {_id: {$gte: 2}} - $group: {_id: 1, n: {$sum: 1}} cursor: {} lsid: session0 readConcern: # No readConcern txnNumber: $numberLong: "1" startTransaction: true autocommit: false command_name: aggregate database_name: *database_name - command_started_event: command: aggregate: *collection_name pipeline: - $match: {_id: {$gte: 2}} - $group: {_id: 1, n: {$sum: 1}} cursor: {} lsid: session0 readConcern: # No readConcern txnNumber: $numberLong: "1" startTransaction: autocommit: false command_name: aggregate database_name: *database_name - *commitTransactionEvent outcome: *outcome - description: find ignores collection readConcern operations: - *startTransactionNoReadConcern - *find - *find - *commitTransaction expectations: - command_started_event: command: find: *collection_name batchSize: 3 lsid: session0 readConcern: # No readConcern txnNumber: $numberLong: "1" startTransaction: true autocommit: false command_name: find database_name: *database_name - command_started_event: command: getMore: # 42 is a fake placeholder value for the cursorId. $numberLong: '42' collection: *collection_name batchSize: 3 lsid: session0 readConcern: # No readConcern txnNumber: $numberLong: "1" startTransaction: autocommit: false command_name: getMore database_name: *database_name - command_started_event: command: find: *collection_name batchSize: 3 lsid: session0 readConcern: # No readConcern txnNumber: $numberLong: "1" startTransaction: autocommit: false command_name: find database_name: *database_name - command_started_event: command: getMore: $numberLong: '42' collection: *collection_name batchSize: 3 lsid: session0 readConcern: # No readConcern txnNumber: $numberLong: "1" startTransaction: autocommit: false command_name: getMore database_name: *database_name - *commitTransactionEvent outcome: *outcome - description: aggregate ignores collection readConcern operations: - *startTransactionNoReadConcern - *aggregate - *aggregate - *commitTransaction expectations: - command_started_event: command: aggregate: *collection_name pipeline: - $project: _id: 1 cursor: batchSize: 3 lsid: session0 readConcern: # No readConcern txnNumber: $numberLong: "1" startTransaction: true autocommit: false command_name: aggregate database_name: *database_name - command_started_event: command: getMore: # 42 is a fake placeholder value for the cursorId. 
$numberLong: '42' collection: *collection_name batchSize: 3 lsid: session0 readConcern: # No readConcern txnNumber: $numberLong: "1" startTransaction: autocommit: false command_name: getMore database_name: *database_name - command_started_event: command: aggregate: *collection_name pipeline: - $project: _id: 1 cursor: batchSize: 3 lsid: session0 readConcern: # No readConcern txnNumber: $numberLong: "1" startTransaction: autocommit: false command_name: aggregate database_name: *database_name - command_started_event: command: getMore: $numberLong: '42' collection: *collection_name batchSize: 3 lsid: session0 readConcern: # No readConcern txnNumber: $numberLong: "1" startTransaction: autocommit: false command_name: getMore database_name: *database_name - *commitTransactionEvent outcome: *outcome - description: distinct ignores collection readConcern operations: - *startTransactionNoReadConcern - *distinct - *distinct - *commitTransaction expectations: - command_started_event: command: distinct: *collection_name key: _id lsid: session0 readConcern: # No readConcern txnNumber: $numberLong: "1" startTransaction: true autocommit: false writeConcern: command_name: distinct database_name: *database_name - command_started_event: command: distinct: *collection_name key: _id lsid: session0 readConcern: # No readConcern txnNumber: $numberLong: "1" startTransaction: autocommit: false writeConcern: command_name: distinct database_name: *database_name - *commitTransactionEvent outcome: *outcome - description: runCommand ignores database readConcern operations: - *startTransactionNoReadConcern - name: runCommand object: database databaseOptions: readConcern: level: majority command_name: find arguments: session: session0 command: find: *collection_name - *runCommand - *commitTransaction expectations: - command_started_event: command: find: *collection_name lsid: session0 readConcern: # No readConcern txnNumber: $numberLong: "1" startTransaction: true autocommit: false writeConcern: command_name: find database_name: *database_name - command_started_event: command: find: *collection_name lsid: session0 readConcern: # No readConcern txnNumber: $numberLong: "1" startTransaction: autocommit: false writeConcern: command_name: find database_name: *database_name - *commitTransactionEvent outcome: *outcome mongo-ruby-driver-2.21.3/spec/spec_tests/data/transactions/read-pref.yml000066400000000000000000000205651505113246500263170ustar00rootroot00000000000000# This test doesn't check contents of command-started events. runOn: - minServerVersion: "4.0" topology: ["replicaset"] - minServerVersion: "4.1.8" topology: ["sharded"] database_name: &database_name "transaction-tests" collection_name: &collection_name "test" data: [] tests: - description: default readPreference operations: - name: startTransaction object: session0 - name: insertMany object: collection arguments: documents: &insertedDocs - _id: 1 - _id: 2 - _id: 3 - _id: 4 session: session0 result: insertedIds: {0: 1, 1: 2, 2: 3, 3: 4} - name: aggregate object: collection collectionOptions: # The driver overrides the collection's read pref with the # transaction's so count runs with Primary and succeeds. 
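# (Equivalent driver-side sketch, assumed rather than asserted: reading via
#   collection.with(read: { mode: :secondary })
# inside a transaction that was started with default, i.e. primary, options
# still targets the primary, because the transaction's read preference wins.)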
readPreference: mode: Secondary arguments: session: session0 pipeline: - $match: _id: 1 - $count: count result: - count: 1 - name: find object: collection collectionOptions: readPreference: mode: Secondary arguments: session: session0 batchSize: 3 result: *insertedDocs - name: aggregate object: collection collectionOptions: readPreference: mode: Secondary arguments: pipeline: - $project: _id: 1 batchSize: 3 session: session0 result: *insertedDocs - name: commitTransaction object: session0 outcome: collection: data: *insertedDocs - description: primary readPreference operations: - name: startTransaction object: session0 arguments: options: readPreference: mode: Primary - name: insertMany object: collection arguments: documents: &insertedDocs - _id: 1 - _id: 2 - _id: 3 - _id: 4 session: session0 result: insertedIds: {0: 1, 1: 2, 2: 3, 3: 4} - name: aggregate object: collection collectionOptions: readPreference: mode: Secondary arguments: session: session0 pipeline: - $match: _id: 1 - $count: count result: - count: 1 - name: find object: collection collectionOptions: readPreference: mode: Secondary arguments: session: session0 batchSize: 3 result: *insertedDocs - name: aggregate object: collection collectionOptions: readPreference: mode: Secondary arguments: pipeline: - $project: _id: 1 batchSize: 3 session: session0 result: *insertedDocs - name: commitTransaction object: session0 outcome: collection: data: *insertedDocs - description: secondary readPreference operations: - name: startTransaction object: session0 arguments: options: readPreference: mode: Secondary - name: insertMany object: collection arguments: documents: &insertedDocs - _id: 1 - _id: 2 - _id: 3 - _id: 4 session: session0 result: insertedIds: {0: 1, 1: 2, 2: 3, 3: 4} - name: aggregate object: collection collectionOptions: readPreference: mode: Primary arguments: session: session0 pipeline: - $match: _id: 1 - $count: count result: errorContains: read preference in a transaction must be primary - name: find object: collection collectionOptions: readPreference: mode: Primary arguments: session: session0 batchSize: 3 result: errorContains: read preference in a transaction must be primary - name: aggregate object: collection collectionOptions: readPreference: mode: Primary arguments: pipeline: - $project: _id: 1 batchSize: 3 session: session0 result: errorContains: read preference in a transaction must be primary - name: abortTransaction object: session0 outcome: collection: data: [] - description: primaryPreferred readPreference operations: - name: startTransaction object: session0 arguments: options: readPreference: mode: PrimaryPreferred - name: insertMany object: collection arguments: documents: &insertedDocs - _id: 1 - _id: 2 - _id: 3 - _id: 4 session: session0 result: insertedIds: {0: 1, 1: 2, 2: 3, 3: 4} - name: aggregate object: collection collectionOptions: readPreference: mode: Primary arguments: session: session0 pipeline: - $match: _id: 1 - $count: count result: errorContains: read preference in a transaction must be primary - name: find object: collection collectionOptions: readPreference: mode: Primary arguments: session: session0 batchSize: 3 result: errorContains: read preference in a transaction must be primary - name: aggregate object: collection collectionOptions: readPreference: mode: Primary arguments: pipeline: - $project: _id: 1 batchSize: 3 session: session0 result: errorContains: read preference in a transaction must be primary - name: abortTransaction object: session0 outcome: collection: data: [] - 
description: nearest readPreference operations: - name: startTransaction object: session0 arguments: options: readPreference: mode: Nearest - name: insertMany object: collection arguments: documents: &insertedDocs - _id: 1 - _id: 2 - _id: 3 - _id: 4 session: session0 result: insertedIds: {0: 1, 1: 2, 2: 3, 3: 4} - name: aggregate object: collection collectionOptions: readPreference: mode: Primary arguments: session: session0 pipeline: - $match: _id: 1 - $count: count result: errorContains: read preference in a transaction must be primary - name: find object: collection collectionOptions: readPreference: mode: Primary arguments: session: session0 batchSize: 3 result: errorContains: read preference in a transaction must be primary - name: aggregate object: collection collectionOptions: readPreference: mode: Primary arguments: pipeline: - $project: _id: 1 batchSize: 3 session: session0 result: errorContains: read preference in a transaction must be primary - name: abortTransaction object: session0 outcome: collection: data: [] - description: secondary write only operations: - name: startTransaction object: session0 arguments: options: readPreference: mode: Secondary - name: insertOne object: collection arguments: session: session0 document: _id: 1 result: insertedId: 1 - name: commitTransaction object: session0 outcome: collection: data: - _id: 1 mongo-ruby-driver-2.21.3/spec/spec_tests/data/transactions/reads.yml000066400000000000000000000150041505113246500255400ustar00rootroot00000000000000runOn: - minServerVersion: "4.0" topology: ["replicaset"] - minServerVersion: "4.1.8" topology: ["sharded"] database_name: &database_name "transaction-tests" collection_name: &collection_name "test" data: &data - {_id: 1} - {_id: 2} - {_id: 3} - {_id: 4} tests: - description: collection readConcern without transaction operations: - name: find object: collection collectionOptions: readConcern: level: majority arguments: session: session0 result: *data expectations: - command_started_event: command: find: *collection_name readConcern: level: majority lsid: session0 txnNumber: startTransaction: autocommit: command_name: find database_name: *database_name outcome: &outcome collection: data: *data - description: find operations: - &startTransaction name: startTransaction object: session0 - &find name: find object: collection arguments: session: session0 batchSize: 3 result: *data - *find - &commitTransaction name: commitTransaction object: session0 expectations: - command_started_event: command: find: *collection_name batchSize: 3 readConcern: lsid: session0 txnNumber: $numberLong: "1" startTransaction: true autocommit: false command_name: find database_name: *database_name - command_started_event: command: getMore: # 42 is a fake placeholder value for the cursorId. 
$numberLong: '42' collection: *collection_name batchSize: 3 lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false command_name: getMore database_name: *database_name - command_started_event: command: find: *collection_name batchSize: 3 lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false command_name: find database_name: *database_name - command_started_event: command: getMore: $numberLong: '42' collection: *collection_name batchSize: 3 lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false command_name: getMore database_name: *database_name - command_started_event: command: commitTransaction: 1 lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false writeConcern: command_name: commitTransaction database_name: admin outcome: *outcome - description: aggregate operations: - *startTransaction - &aggregate name: aggregate object: collection arguments: pipeline: - $project: _id: 1 batchSize: 3 session: session0 result: *data - *aggregate - *commitTransaction expectations: - command_started_event: command: aggregate: *collection_name pipeline: - $project: _id: 1 cursor: batchSize: 3 readConcern: lsid: session0 txnNumber: $numberLong: "1" startTransaction: true autocommit: false command_name: aggregate database_name: *database_name - command_started_event: command: getMore: # 42 is a fake placeholder value for the cursorId. $numberLong: '42' collection: *collection_name batchSize: 3 lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false command_name: getMore database_name: *database_name - command_started_event: command: aggregate: *collection_name pipeline: - $project: _id: 1 cursor: batchSize: 3 lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false command_name: aggregate database_name: *database_name - command_started_event: command: getMore: $numberLong: '42' collection: *collection_name batchSize: 3 lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false command_name: getMore database_name: *database_name - command_started_event: command: commitTransaction: 1 lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false writeConcern: command_name: commitTransaction database_name: admin outcome: *outcome - description: distinct operations: - *startTransaction - name: distinct object: collection arguments: session: session0 fieldName: _id result: [1, 2, 3, 4] - *commitTransaction expectations: - command_started_event: command: distinct: *collection_name key: _id lsid: session0 readConcern: txnNumber: $numberLong: "1" startTransaction: true autocommit: false writeConcern: command_name: distinct database_name: *database_name - command_started_event: command: commitTransaction: 1 lsid: session0 readConcern: txnNumber: $numberLong: "1" startTransaction: autocommit: false writeConcern: command_name: commitTransaction database_name: admin outcome: *outcome mongo-ruby-driver-2.21.3/spec/spec_tests/data/transactions/retryable-abort-errorLabels.yml000066400000000000000000000071201505113246500320120ustar00rootroot00000000000000runOn: - minServerVersion: "4.3.1" topology: ["replicaset", "sharded"] database_name: &database_name "transaction-tests" collection_name: &collection_name "test" data: [] tests: - description: abortTransaction only retries once with RetryableWriteError from server failPoint: configureFailPoint: failCommand mode: { times: 2 } data: failCommands: ["abortTransaction"] errorCode: 112 # WriteConflict, 
not a retryable error code errorLabels: ["RetryableWriteError"] # Override server behavior: send RetryableWriteError label with non-retryable error code operations: - name: startTransaction object: session0 - name: insertOne object: collection arguments: session: session0 document: _id: 1 result: insertedId: 1 - name: abortTransaction object: session0 expectations: - command_started_event: command: insert: *collection_name documents: - _id: 1 ordered: true readConcern: lsid: session0 txnNumber: $numberLong: "1" startTransaction: true autocommit: false writeConcern: command_name: insert database_name: *database_name - command_started_event: command: abortTransaction: 1 lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false writeConcern: command_name: abortTransaction database_name: admin - command_started_event: # Driver retries abort once command: abortTransaction: 1 lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false writeConcern: command_name: abortTransaction database_name: admin outcome: collection: data: [] - description: abortTransaction does not retry without RetryableWriteError label failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["abortTransaction"] errorCode: 11600 # InterruptedAtShutdown, normally a retryable error code errorLabels: [] # Override server behavior: do not send RetryableWriteError label with retryable code operations: - name: startTransaction object: session0 - name: insertOne object: collection arguments: session: session0 document: _id: 1 result: insertedId: 1 - name: abortTransaction object: session0 expectations: - command_started_event: command: insert: *collection_name documents: - _id: 1 ordered: true readConcern: lsid: session0 txnNumber: $numberLong: "1" startTransaction: true autocommit: false writeConcern: command_name: insert database_name: *database_name - command_started_event: command: abortTransaction: 1 lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false writeConcern: command_name: abortTransaction database_name: admin outcome: # Driver does not retry abort collection: data: [] mongo-ruby-driver-2.21.3/spec/spec_tests/data/transactions/retryable-abort.yml000066400000000000000000001020311505113246500275350ustar00rootroot00000000000000runOn: - minServerVersion: "4.0" topology: ["replicaset"] - minServerVersion: "4.1.8" topology: ["sharded"] database_name: &database_name "transaction-tests" collection_name: &collection_name "test" data: [] tests: - description: abortTransaction only performs a single retry clientOptions: retryWrites: false failPoint: configureFailPoint: failCommand mode: { times: 2 } data: failCommands: ["abortTransaction"] closeConnection: true operations: - name: startTransaction object: session0 - name: insertOne object: collection arguments: session: session0 document: _id: 1 result: insertedId: 1 # Call to abort returns no error even when the retry attempt fails. 
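# (Driver-level sketch, an assumption rather than an assertion of this test:
#
#   session.abort_transaction   # returns normally even though both attempts fail
#
# because the abort helper deliberately swallows the post-retry error.)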
- name: abortTransaction object: session0 expectations: - command_started_event: command: insert: *collection_name documents: - _id: 1 ordered: true readConcern: lsid: session0 txnNumber: $numberLong: "1" startTransaction: true autocommit: false writeConcern: command_name: insert database_name: *database_name - command_started_event: command: abortTransaction: 1 lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false writeConcern: command_name: abortTransaction database_name: admin - command_started_event: command: abortTransaction: 1 lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false writeConcern: command_name: abortTransaction database_name: admin outcome: collection: data: [] - description: abortTransaction does not retry after Interrupted failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["abortTransaction"] errorCode: 11601 closeConnection: false operations: - name: startTransaction object: session0 - name: insertOne object: collection arguments: session: session0 document: _id: 1 result: insertedId: 1 - name: abortTransaction object: session0 expectations: - command_started_event: command: insert: *collection_name documents: - _id: 1 ordered: true readConcern: lsid: session0 txnNumber: $numberLong: "1" startTransaction: true autocommit: false writeConcern: command_name: insert database_name: *database_name - command_started_event: command: abortTransaction: 1 lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false writeConcern: command_name: abortTransaction database_name: admin outcome: collection: data: [] - description: abortTransaction does not retry after WriteConcernError Interrupted failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["abortTransaction"] writeConcernError: code: 11601 errmsg: operation was interrupted operations: - name: startTransaction object: session0 arguments: options: writeConcern: w: majority - name: insertOne object: collection arguments: session: session0 document: _id: 1 result: insertedId: 1 - name: abortTransaction object: session0 expectations: - command_started_event: command: insert: *collection_name documents: - _id: 1 ordered: true readConcern: lsid: session0 txnNumber: $numberLong: "1" startTransaction: true autocommit: false writeConcern: command_name: insert database_name: *database_name - command_started_event: command: abortTransaction: 1 lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false writeConcern: w: majority command_name: abortTransaction database_name: admin outcome: collection: data: [] - description: abortTransaction succeeds after connection error failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["abortTransaction"] closeConnection: true operations: - name: startTransaction object: session0 - name: insertOne object: collection arguments: session: session0 document: _id: 1 result: insertedId: 1 - name: abortTransaction object: session0 expectations: - command_started_event: command: insert: *collection_name documents: - _id: 1 ordered: true readConcern: lsid: session0 txnNumber: $numberLong: "1" startTransaction: true autocommit: false writeConcern: command_name: insert database_name: *database_name - command_started_event: command: abortTransaction: 1 lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false writeConcern: command_name: abortTransaction database_name: admin - command_started_event: command: 
abortTransaction: 1 lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false writeConcern: command_name: abortTransaction database_name: admin outcome: collection: data: [] - description: abortTransaction succeeds after NotWritablePrimary failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["abortTransaction"] errorCode: 10107 errorLabels: ["RetryableWriteError"] closeConnection: false operations: - name: startTransaction object: session0 - name: insertOne object: collection arguments: session: session0 document: _id: 1 result: insertedId: 1 - name: abortTransaction object: session0 expectations: - command_started_event: command: insert: *collection_name documents: - _id: 1 ordered: true readConcern: lsid: session0 txnNumber: $numberLong: "1" startTransaction: true autocommit: false writeConcern: command_name: insert database_name: *database_name - command_started_event: command: abortTransaction: 1 lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false writeConcern: command_name: abortTransaction database_name: admin - command_started_event: command: abortTransaction: 1 lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false writeConcern: command_name: abortTransaction database_name: admin outcome: collection: data: [] - description: abortTransaction succeeds after NotPrimaryOrSecondary failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["abortTransaction"] errorCode: 13436 errorLabels: ["RetryableWriteError"] closeConnection: false operations: - name: startTransaction object: session0 - name: insertOne object: collection arguments: session: session0 document: _id: 1 result: insertedId: 1 - name: abortTransaction object: session0 expectations: - command_started_event: command: insert: *collection_name documents: - _id: 1 ordered: true readConcern: lsid: session0 txnNumber: $numberLong: "1" startTransaction: true autocommit: false writeConcern: command_name: insert database_name: *database_name - command_started_event: command: abortTransaction: 1 lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false writeConcern: command_name: abortTransaction database_name: admin - command_started_event: command: abortTransaction: 1 lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false writeConcern: command_name: abortTransaction database_name: admin outcome: collection: data: [] - description: abortTransaction succeeds after NotPrimaryNoSecondaryOk failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["abortTransaction"] errorCode: 13435 errorLabels: ["RetryableWriteError"] closeConnection: false operations: - name: startTransaction object: session0 - name: insertOne object: collection arguments: session: session0 document: _id: 1 result: insertedId: 1 - name: abortTransaction object: session0 expectations: - command_started_event: command: insert: *collection_name documents: - _id: 1 ordered: true readConcern: lsid: session0 txnNumber: $numberLong: "1" startTransaction: true autocommit: false writeConcern: command_name: insert database_name: *database_name - command_started_event: command: abortTransaction: 1 lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false writeConcern: command_name: abortTransaction database_name: admin - command_started_event: command: abortTransaction: 1 lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false writeConcern: 
command_name: abortTransaction database_name: admin outcome: collection: data: [] - description: abortTransaction succeeds after InterruptedDueToReplStateChange failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["abortTransaction"] errorCode: 11602 errorLabels: ["RetryableWriteError"] closeConnection: false operations: - name: startTransaction object: session0 - name: insertOne object: collection arguments: session: session0 document: _id: 1 result: insertedId: 1 - name: abortTransaction object: session0 expectations: - command_started_event: command: insert: *collection_name documents: - _id: 1 ordered: true readConcern: lsid: session0 txnNumber: $numberLong: "1" startTransaction: true autocommit: false writeConcern: command_name: insert database_name: *database_name - command_started_event: command: abortTransaction: 1 lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false writeConcern: command_name: abortTransaction database_name: admin - command_started_event: command: abortTransaction: 1 lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false writeConcern: command_name: abortTransaction database_name: admin outcome: collection: data: [] - description: abortTransaction succeeds after InterruptedAtShutdown failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["abortTransaction"] errorCode: 11600 errorLabels: ["RetryableWriteError"] closeConnection: false operations: - name: startTransaction object: session0 - name: insertOne object: collection arguments: session: session0 document: _id: 1 result: insertedId: 1 - name: abortTransaction object: session0 expectations: - command_started_event: command: insert: *collection_name documents: - _id: 1 ordered: true readConcern: lsid: session0 txnNumber: $numberLong: "1" startTransaction: true autocommit: false writeConcern: command_name: insert database_name: *database_name - command_started_event: command: abortTransaction: 1 lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false writeConcern: command_name: abortTransaction database_name: admin - command_started_event: command: abortTransaction: 1 lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false writeConcern: command_name: abortTransaction database_name: admin outcome: collection: data: [] - description: abortTransaction succeeds after PrimarySteppedDown failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["abortTransaction"] errorCode: 189 errorLabels: ["RetryableWriteError"] closeConnection: false operations: - name: startTransaction object: session0 - name: insertOne object: collection arguments: session: session0 document: _id: 1 result: insertedId: 1 - name: abortTransaction object: session0 expectations: - command_started_event: command: insert: *collection_name documents: - _id: 1 ordered: true readConcern: lsid: session0 txnNumber: $numberLong: "1" startTransaction: true autocommit: false writeConcern: command_name: insert database_name: *database_name - command_started_event: command: abortTransaction: 1 lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false writeConcern: command_name: abortTransaction database_name: admin - command_started_event: command: abortTransaction: 1 lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false writeConcern: command_name: abortTransaction database_name: admin outcome: collection: data: [] - description: 
abortTransaction succeeds after ShutdownInProgress failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["abortTransaction"] errorCode: 91 errorLabels: ["RetryableWriteError"] closeConnection: false operations: - name: startTransaction object: session0 - name: insertOne object: collection arguments: session: session0 document: _id: 1 result: insertedId: 1 - name: abortTransaction object: session0 expectations: - command_started_event: command: insert: *collection_name documents: - _id: 1 ordered: true readConcern: lsid: session0 txnNumber: $numberLong: "1" startTransaction: true autocommit: false writeConcern: command_name: insert database_name: *database_name - command_started_event: command: abortTransaction: 1 lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false writeConcern: command_name: abortTransaction database_name: admin - command_started_event: command: abortTransaction: 1 lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false writeConcern: command_name: abortTransaction database_name: admin outcome: collection: data: [] - description: abortTransaction succeeds after HostNotFound failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["abortTransaction"] errorCode: 7 errorLabels: ["RetryableWriteError"] closeConnection: false operations: - name: startTransaction object: session0 - name: insertOne object: collection arguments: session: session0 document: _id: 1 result: insertedId: 1 - name: abortTransaction object: session0 expectations: - command_started_event: command: insert: *collection_name documents: - _id: 1 ordered: true readConcern: lsid: session0 txnNumber: $numberLong: "1" startTransaction: true autocommit: false writeConcern: command_name: insert database_name: *database_name - command_started_event: command: abortTransaction: 1 lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false writeConcern: command_name: abortTransaction database_name: admin - command_started_event: command: abortTransaction: 1 lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false writeConcern: command_name: abortTransaction database_name: admin outcome: collection: data: [] - description: abortTransaction succeeds after HostUnreachable failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["abortTransaction"] errorCode: 6 errorLabels: ["RetryableWriteError"] closeConnection: false operations: - name: startTransaction object: session0 - name: insertOne object: collection arguments: session: session0 document: _id: 1 result: insertedId: 1 - name: abortTransaction object: session0 expectations: - command_started_event: command: insert: *collection_name documents: - _id: 1 ordered: true readConcern: lsid: session0 txnNumber: $numberLong: "1" startTransaction: true autocommit: false writeConcern: command_name: insert database_name: *database_name - command_started_event: command: abortTransaction: 1 lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false writeConcern: command_name: abortTransaction database_name: admin - command_started_event: command: abortTransaction: 1 lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false writeConcern: command_name: abortTransaction database_name: admin outcome: collection: data: [] - description: abortTransaction succeeds after SocketException failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: 
["abortTransaction"] errorCode: 9001 errorLabels: ["RetryableWriteError"] closeConnection: false operations: - name: startTransaction object: session0 - name: insertOne object: collection arguments: session: session0 document: _id: 1 result: insertedId: 1 - name: abortTransaction object: session0 expectations: - command_started_event: command: insert: *collection_name documents: - _id: 1 ordered: true readConcern: lsid: session0 txnNumber: $numberLong: "1" startTransaction: true autocommit: false writeConcern: command_name: insert database_name: *database_name - command_started_event: command: abortTransaction: 1 lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false writeConcern: command_name: abortTransaction database_name: admin - command_started_event: command: abortTransaction: 1 lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false writeConcern: command_name: abortTransaction database_name: admin outcome: collection: data: [] - description: abortTransaction succeeds after NetworkTimeout failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["abortTransaction"] errorCode: 89 errorLabels: ["RetryableWriteError"] closeConnection: false operations: - name: startTransaction object: session0 - name: insertOne object: collection arguments: session: session0 document: _id: 1 result: insertedId: 1 - name: abortTransaction object: session0 expectations: - command_started_event: command: insert: *collection_name documents: - _id: 1 ordered: true readConcern: lsid: session0 txnNumber: $numberLong: "1" startTransaction: true autocommit: false writeConcern: command_name: insert database_name: *database_name - command_started_event: command: abortTransaction: 1 lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false writeConcern: command_name: abortTransaction database_name: admin - command_started_event: command: abortTransaction: 1 lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false writeConcern: command_name: abortTransaction database_name: admin outcome: collection: data: [] - description: abortTransaction succeeds after WriteConcernError InterruptedAtShutdown failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["abortTransaction"] errorLabels: ["RetryableWriteError"] writeConcernError: code: 11600 errmsg: Replication is being shut down operations: - name: startTransaction object: session0 arguments: options: writeConcern: w: majority - name: insertOne object: collection arguments: session: session0 document: _id: 1 result: insertedId: 1 - name: abortTransaction object: session0 expectations: - command_started_event: command: insert: *collection_name documents: - _id: 1 ordered: true readConcern: lsid: session0 txnNumber: $numberLong: "1" startTransaction: true autocommit: false writeConcern: command_name: insert database_name: *database_name - command_started_event: command: abortTransaction: 1 lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false writeConcern: w: majority command_name: abortTransaction database_name: admin - command_started_event: command: abortTransaction: 1 lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false writeConcern: w: majority command_name: abortTransaction database_name: admin outcome: collection: data: [] - description: abortTransaction succeeds after WriteConcernError InterruptedDueToReplStateChange failPoint: configureFailPoint: failCommand mode: { times: 1 
} data: failCommands: ["abortTransaction"] errorLabels: ["RetryableWriteError"] writeConcernError: code: 11602 errmsg: Replication is being shut down operations: - name: startTransaction object: session0 arguments: options: writeConcern: w: majority - name: insertOne object: collection arguments: session: session0 document: _id: 1 result: insertedId: 1 - name: abortTransaction object: session0 expectations: - command_started_event: command: insert: *collection_name documents: - _id: 1 ordered: true readConcern: lsid: session0 txnNumber: $numberLong: "1" startTransaction: true autocommit: false writeConcern: command_name: insert database_name: *database_name - command_started_event: command: abortTransaction: 1 lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false writeConcern: w: majority command_name: abortTransaction database_name: admin - command_started_event: command: abortTransaction: 1 lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false writeConcern: w: majority command_name: abortTransaction database_name: admin outcome: collection: data: [] - description: abortTransaction succeeds after WriteConcernError PrimarySteppedDown failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["abortTransaction"] errorLabels: ["RetryableWriteError"] writeConcernError: code: 189 errmsg: Replication is being shut down operations: - name: startTransaction object: session0 arguments: options: writeConcern: w: majority - name: insertOne object: collection arguments: session: session0 document: _id: 1 result: insertedId: 1 - name: abortTransaction object: session0 expectations: - command_started_event: command: insert: *collection_name documents: - _id: 1 ordered: true readConcern: lsid: session0 txnNumber: $numberLong: "1" startTransaction: true autocommit: false writeConcern: command_name: insert database_name: *database_name - command_started_event: command: abortTransaction: 1 lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false writeConcern: w: majority command_name: abortTransaction database_name: admin - command_started_event: command: abortTransaction: 1 lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false writeConcern: w: majority command_name: abortTransaction database_name: admin outcome: collection: data: [] - description: abortTransaction succeeds after WriteConcernError ShutdownInProgress failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["abortTransaction"] errorLabels: ["RetryableWriteError"] writeConcernError: code: 91 errmsg: Replication is being shut down operations: - name: startTransaction object: session0 arguments: options: writeConcern: w: majority - name: insertOne object: collection arguments: session: session0 document: _id: 1 result: insertedId: 1 - name: abortTransaction object: session0 expectations: - command_started_event: command: insert: *collection_name documents: - _id: 1 ordered: true readConcern: lsid: session0 txnNumber: $numberLong: "1" startTransaction: true autocommit: false writeConcern: command_name: insert database_name: *database_name - command_started_event: command: abortTransaction: 1 lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false writeConcern: w: majority command_name: abortTransaction database_name: admin - command_started_event: command: abortTransaction: 1 lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false writeConcern: w: majority 
command_name: abortTransaction database_name: admin outcome: collection: data: [] mongo-ruby-driver-2.21.3/spec/spec_tests/data/transactions/retryable-commit-errorLabels.yml000066400000000000000000000076151505113246500322040ustar00rootroot00000000000000runOn: - minServerVersion: "4.3.1" topology: ["replicaset", "sharded"] database_name: &database_name "transaction-tests" collection_name: &collection_name "test" data: [] tests: - description: commitTransaction does not retry error without RetryableWriteError label clientOptions: retryWrites: false failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["commitTransaction"] errorCode: 11600 # InterruptedAtShutdown, normally a retryable error code errorLabels: [] # Override server behavior: do not send RetryableWriteError label with retryable code operations: - name: startTransaction object: session0 - name: insertOne object: collection arguments: session: session0 document: _id: 1 result: insertedId: 1 - name: commitTransaction object: session0 result: errorLabelsOmit: ["RetryableWriteError", "TransientTransactionError"] expectations: - command_started_event: command: insert: *collection_name documents: - _id: 1 ordered: true readConcern: lsid: session0 txnNumber: $numberLong: "1" startTransaction: true autocommit: false writeConcern: command_name: insert database_name: *database_name - command_started_event: command: commitTransaction: 1 lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false writeConcern: command_name: commitTransaction database_name: admin outcome: # Driver does not retry commit because there was no RetryableWriteError label on response collection: data: [] - description: commitTransaction retries once with RetryableWriteError from server clientOptions: retryWrites: false failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: ["commitTransaction"] errorCode: 112 # WriteConflict, not a retryable error code errorLabels: ["RetryableWriteError"] # Override server behavior: send RetryableWriteError label with non-retryable error code operations: - name: startTransaction object: session0 - name: insertOne object: collection arguments: session: session0 document: _id: 1 result: insertedId: 1 - name: commitTransaction object: session0 expectations: - command_started_event: command: insert: *collection_name documents: - _id: 1 ordered: true readConcern: lsid: session0 txnNumber: $numberLong: "1" startTransaction: true autocommit: false writeConcern: command_name: insert database_name: *database_name - command_started_event: command: commitTransaction: 1 lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false writeConcern: command_name: commitTransaction database_name: admin - command_started_event: command: commitTransaction: 1 lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false writeConcern: { w: majority, wtimeout: 10000 } command_name: commitTransaction database_name: admin outcome: # Driver retries commit and it succeeds collection: data: - _id: 1 mongo-ruby-driver-2.21.3/spec/spec_tests/data/transactions/retryable-commit.yml000066400000000000000000001147311505113246500277300ustar00rootroot00000000000000runOn: - minServerVersion: "4.0" topology: ["replicaset"] - minServerVersion: "4.1.8" topology: ["sharded"] database_name: &database_name "transaction-tests" collection_name: &collection_name "test" data: [] tests: - description: commitTransaction fails after two errors clientOptions: retryWrites: false 
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 2 }
      data:
        failCommands: ["commitTransaction"]
        closeConnection: true
    operations:
      - name: startTransaction
        object: session0
      - name: insertOne
        object: collection
        arguments:
          session: session0
          document:
            _id: 1
        result:
          insertedId: 1
      # First call to commit fails after a single retry attempt.
      - name: commitTransaction
        object: session0
        result:
          errorLabelsContain: ["RetryableWriteError", "UnknownTransactionCommitResult"]
          errorLabelsOmit: ["TransientTransactionError"]
      # Second call to commit succeeds because the failpoint was disabled.
      - name: commitTransaction
        object: session0
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 1
            ordered: true
            readConcern:
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction: true
            autocommit: false
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern:
          command_name: commitTransaction
          database_name: admin
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            # commitTransaction applies w:majority on retries
            writeConcern: { w: majority, wtimeout: 10000 }
          command_name: commitTransaction
          database_name: admin
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern: { w: majority, wtimeout: 10000 }
          command_name: commitTransaction
          database_name: admin
    outcome:
      collection:
        data:
          - _id: 1

  - description: commitTransaction applies majority write concern on retries
    clientOptions:
      retryWrites: false
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 2 }
      data:
        failCommands: ["commitTransaction"]
        closeConnection: true
    operations:
      - name: startTransaction
        object: session0
        arguments:
          options:
            writeConcern: { w: 2, j: true, wtimeout: 5000 }
      - name: insertOne
        object: collection
        arguments:
          session: session0
          document:
            _id: 1
        result:
          insertedId: 1
      # First call to commit fails after a single retry attempt.
      - name: commitTransaction
        object: session0
        result:
          errorLabelsContain: ["RetryableWriteError", "UnknownTransactionCommitResult"]
          errorLabelsOmit: ["TransientTransactionError"]
      # Second call to commit succeeds because the failpoint was disabled.
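      # Editor's note (derived from the expectations below): on each retry the
      # driver is expected to replace w:2 with w:majority while preserving the
      # original j and wtimeout values from the transaction's write concern.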
      - name: commitTransaction
        object: session0
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 1
            ordered: true
            readConcern:
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction: true
            autocommit: false
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern: { w: 2, j: true, wtimeout: 5000 }
          command_name: commitTransaction
          database_name: admin
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            # commitTransaction applies w:majority on retries
            writeConcern: { w: majority, j: true, wtimeout: 5000 }
          command_name: commitTransaction
          database_name: admin
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern: { w: majority, j: true, wtimeout: 5000 }
          command_name: commitTransaction
          database_name: admin
    outcome:
      collection:
        data:
          - _id: 1

  - description: commitTransaction fails after Interrupted
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands: ["commitTransaction"]
        errorCode: 11601
        closeConnection: false
    operations:
      - name: startTransaction
        object: session0
      - name: insertOne
        object: collection
        arguments:
          session: session0
          document:
            _id: 1
        result:
          insertedId: 1
      - name: commitTransaction
        object: session0
        result:
          errorCodeName: Interrupted
          errorLabelsOmit: ["RetryableWriteError", "TransientTransactionError", "UnknownTransactionCommitResult"]
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 1
            ordered: true
            readConcern:
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction: true
            autocommit: false
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern:
          command_name: commitTransaction
          database_name: admin
    outcome:
      collection:
        data: []

  - description: commitTransaction is not retried after UnsatisfiableWriteConcern error
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands: ["commitTransaction"]
        writeConcernError:
          code: 100
          errmsg: Not enough data-bearing nodes
    operations:
      - name: startTransaction
        object: session0
        arguments:
          options:
            writeConcern:
              w: majority
      - name: insertOne
        object: collection
        arguments:
          session: session0
          document:
            _id: 1
        result:
          insertedId: 1
      - name: commitTransaction
        object: session0
        result:
          errorLabelsOmit: ["RetryableWriteError", "TransientTransactionError", "UnknownTransactionCommitResult"]
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 1
            ordered: true
            readConcern:
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction: true
            autocommit: false
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern:
              w: majority
          command_name: commitTransaction
          database_name: admin
    outcome:
      collection:
        data:
          - _id: 1

  - description: commitTransaction succeeds after connection error
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands: ["commitTransaction"]
        closeConnection: true
    operations:
      - name: startTransaction
        object: session0
      - name: insertOne
        object: collection
        arguments:
          session: session0
          document:
            _id: 1
        result:
          insertedId: 1
      - name: commitTransaction
        object: session0
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 1
            ordered: true
            readConcern:
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction: true
            autocommit: false
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern:
          command_name: commitTransaction
          database_name: admin
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            # commitTransaction applies w:majority on retries
            writeConcern: { w: majority, wtimeout: 10000 }
          command_name: commitTransaction
          database_name: admin
    outcome:
      collection:
        data:
          - _id: 1

  - description: commitTransaction succeeds after NotWritablePrimary
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands: ["commitTransaction"]
        errorCode: 10107
        errorLabels: ["RetryableWriteError"]
        closeConnection: false
    operations:
      - name: startTransaction
        object: session0
      - name: insertOne
        object: collection
        arguments:
          session: session0
          document:
            _id: 1
        result:
          insertedId: 1
      - name: commitTransaction
        object: session0
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 1
            ordered: true
            readConcern:
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction: true
            autocommit: false
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern:
          command_name: commitTransaction
          database_name: admin
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            # commitTransaction applies w:majority on retries
            writeConcern: { w: majority, wtimeout: 10000 }
          command_name: commitTransaction
          database_name: admin
    outcome:
      collection:
        data:
          - _id: 1

  - description: commitTransaction succeeds after NotPrimaryOrSecondary
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands: ["commitTransaction"]
        errorCode: 13436
        errorLabels: ["RetryableWriteError"]
        closeConnection: false
    operations:
      - name: startTransaction
        object: session0
      - name: insertOne
        object: collection
        arguments:
          session: session0
          document:
            _id: 1
        result:
          insertedId: 1
      - name: commitTransaction
        object: session0
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 1
            ordered: true
            readConcern:
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction: true
            autocommit: false
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern:
          command_name: commitTransaction
          database_name: admin
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            # commitTransaction applies w:majority on retries
            writeConcern: { w: majority, wtimeout: 10000 }
          command_name: commitTransaction
          database_name: admin
    outcome:
      collection:
        data:
          - _id: 1

  - description: commitTransaction succeeds after NotPrimaryNoSecondaryOk
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands: ["commitTransaction"]
        errorCode: 13435
        errorLabels: ["RetryableWriteError"]
        closeConnection: false
    operations:
      - name: startTransaction
        object: session0
      - name: insertOne
        object: collection
        arguments:
          session: session0
          document:
            _id: 1
        result:
          insertedId: 1
      - name: commitTransaction
        object: session0
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 1
            ordered: true
            readConcern:
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction: true
            autocommit: false
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern:
          command_name: commitTransaction
          database_name: admin
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            # commitTransaction applies w:majority on retries
            writeConcern: { w: majority, wtimeout: 10000 }
          command_name: commitTransaction
          database_name: admin
    outcome:
      collection:
        data:
          - _id: 1

  - description: commitTransaction succeeds after InterruptedDueToReplStateChange
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands: ["commitTransaction"]
        errorCode: 11602
        errorLabels: ["RetryableWriteError"]
        closeConnection: false
    operations:
      - name: startTransaction
        object: session0
      - name: insertOne
        object: collection
        arguments:
          session: session0
          document:
            _id: 1
        result:
          insertedId: 1
      - name: commitTransaction
        object: session0
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 1
            ordered: true
            readConcern:
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction: true
            autocommit: false
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern:
          command_name: commitTransaction
          database_name: admin
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            # commitTransaction applies w:majority on retries
            writeConcern: { w: majority, wtimeout: 10000 }
          command_name: commitTransaction
          database_name: admin
    outcome:
      collection:
        data:
          - _id: 1

  - description: commitTransaction succeeds after InterruptedAtShutdown
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands: ["commitTransaction"]
        errorCode: 11600
        errorLabels: ["RetryableWriteError"]
        closeConnection: false
    operations:
      - name: startTransaction
        object: session0
      - name: insertOne
        object: collection
        arguments:
          session: session0
          document:
            _id: 1
        result:
          insertedId: 1
      - name: commitTransaction
        object: session0
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 1
            ordered: true
            readConcern:
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction: true
            autocommit: false
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern:
          command_name: commitTransaction
          database_name: admin
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            # commitTransaction applies w:majority on retries
            writeConcern: { w: majority, wtimeout: 10000 }
          command_name: commitTransaction
          database_name: admin
    outcome:
      collection:
        data:
          - _id: 1

  - description: commitTransaction succeeds after PrimarySteppedDown
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands: ["commitTransaction"]
        errorCode: 189
        errorLabels: ["RetryableWriteError"]
        closeConnection: false
    operations:
      - name: startTransaction
        object: session0
      - name: insertOne
        object: collection
        arguments:
          session: session0
          document:
            _id: 1
        result:
          insertedId: 1
      - name: commitTransaction
        object: session0
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 1
            ordered: true
            readConcern:
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction: true
            autocommit: false
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern:
          command_name: commitTransaction
          database_name: admin
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            # commitTransaction applies w:majority on retries
            writeConcern: { w: majority, wtimeout: 10000 }
          command_name: commitTransaction
          database_name: admin
    outcome:
      collection:
        data:
          - _id: 1

  - description: commitTransaction succeeds after ShutdownInProgress
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands: ["commitTransaction"]
        errorCode: 91
        errorLabels: ["RetryableWriteError"]
        closeConnection: false
    operations:
      - name: startTransaction
        object: session0
      - name: insertOne
        object: collection
        arguments:
          session: session0
          document:
            _id: 1
        result:
          insertedId: 1
      - name: commitTransaction
        object: session0
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 1
            ordered: true
            readConcern:
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction: true
            autocommit: false
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern:
          command_name: commitTransaction
          database_name: admin
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            # commitTransaction applies w:majority on retries
            writeConcern: { w: majority, wtimeout: 10000 }
          command_name: commitTransaction
          database_name: admin
    outcome:
      collection:
        data:
          - _id: 1

  - description: commitTransaction succeeds after HostNotFound
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands: ["commitTransaction"]
        errorCode: 7
        errorLabels: ["RetryableWriteError"]
        closeConnection: false
    operations:
      - name: startTransaction
        object: session0
      - name: insertOne
        object: collection
        arguments:
          session: session0
          document:
            _id: 1
        result:
          insertedId: 1
      - name: commitTransaction
        object: session0
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 1
            ordered: true
            readConcern:
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction: true
            autocommit: false
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern:
          command_name: commitTransaction
          database_name: admin
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            # commitTransaction applies w:majority on retries
            writeConcern: { w: majority, wtimeout: 10000 }
          command_name: commitTransaction
          database_name: admin
    outcome:
      collection:
        data:
          - _id: 1

  - description: commitTransaction succeeds after HostUnreachable
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands: ["commitTransaction"]
        errorCode: 6
        errorLabels: ["RetryableWriteError"]
        closeConnection: false
    operations:
      - name: startTransaction
        object: session0
      - name: insertOne
        object: collection
        arguments:
          session: session0
          document:
            _id: 1
        result:
          insertedId: 1
      - name: commitTransaction
        object: session0
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 1
            ordered: true
            readConcern:
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction: true
            autocommit: false
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern:
          command_name: commitTransaction
          database_name: admin
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            # commitTransaction applies w:majority on retries
            writeConcern: { w: majority, wtimeout: 10000 }
          command_name: commitTransaction
          database_name: admin
    outcome:
      collection:
        data:
          - _id: 1

  - description: commitTransaction succeeds after SocketException
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands: ["commitTransaction"]
        errorCode: 9001
        errorLabels: ["RetryableWriteError"]
        closeConnection: false
    operations:
      - name: startTransaction
        object: session0
      - name: insertOne
        object: collection
        arguments:
          session: session0
          document:
            _id: 1
        result:
          insertedId: 1
      - name: commitTransaction
        object: session0
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 1
            ordered: true
            readConcern:
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction: true
            autocommit: false
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern:
          command_name: commitTransaction
          database_name: admin
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            # commitTransaction applies w:majority on retries
            writeConcern: { w: majority, wtimeout: 10000 }
          command_name: commitTransaction
          database_name: admin
    outcome:
      collection:
        data:
          - _id: 1

  - description: commitTransaction succeeds after NetworkTimeout
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands: ["commitTransaction"]
        errorCode: 89
        errorLabels: ["RetryableWriteError"]
        closeConnection: false
    operations:
      - name: startTransaction
        object: session0
      - name: insertOne
        object: collection
        arguments:
          session: session0
          document:
            _id: 1
        result:
          insertedId: 1
      - name: commitTransaction
        object: session0
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 1
            ordered: true
            readConcern:
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction: true
            autocommit: false
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern:
          command_name: commitTransaction
          database_name: admin
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            # commitTransaction applies w:majority on retries
            writeConcern: { w: majority, wtimeout: 10000 }
          command_name: commitTransaction
          database_name: admin
    outcome:
      collection:
        data:
          - _id: 1

  - description: commitTransaction succeeds after WriteConcernError InterruptedAtShutdown
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands: ["commitTransaction"]
        errorLabels: ["RetryableWriteError"]
        writeConcernError:
          code: 11600
          errmsg: Replication is being shut down
    operations:
      - name: startTransaction
        object: session0
        arguments:
          options:
            writeConcern:
              w: majority
      - name: insertOne
        object: collection
        arguments:
          session: session0
          document:
            _id: 1
        result:
          insertedId: 1
      - name: commitTransaction
        object: session0
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 1
            ordered: true
            readConcern:
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction: true
            autocommit: false
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern:
              w: majority
          command_name: commitTransaction
          database_name: admin
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            # commitTransaction applies w:majority on retries
            writeConcern: { w: majority, wtimeout: 10000 }
          command_name: commitTransaction
          database_name: admin
    outcome:
      collection:
        data:
          - _id: 1

  - description: commitTransaction succeeds after WriteConcernError InterruptedDueToReplStateChange
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands: ["commitTransaction"]
        errorLabels: ["RetryableWriteError"]
        writeConcernError:
          code: 11602
          errmsg: Replication is being shut down
    operations:
      - name: startTransaction
        object: session0
        arguments:
          options:
            writeConcern:
              w: majority
      - name: insertOne
        object: collection
        arguments:
          session: session0
          document:
            _id: 1
        result:
          insertedId: 1
      - name: commitTransaction
        object: session0
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 1
            ordered: true
            readConcern:
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction: true
            autocommit: false
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern:
              w: majority
          command_name: commitTransaction
          database_name: admin
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            # commitTransaction applies w:majority on retries
            writeConcern: { w: majority, wtimeout: 10000 }
          command_name: commitTransaction
          database_name: admin
    outcome:
      collection:
        data:
          - _id: 1

  - description: commitTransaction succeeds after WriteConcernError PrimarySteppedDown
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands: ["commitTransaction"]
        errorLabels: ["RetryableWriteError"]
        writeConcernError:
          code: 189
          errmsg: Replication is being shut down
    operations:
      - name: startTransaction
        object: session0
        arguments:
          options:
            writeConcern:
              w: majority
      - name: insertOne
        object: collection
        arguments:
          session: session0
          document:
            _id: 1
        result:
          insertedId: 1
      - name: commitTransaction
        object: session0
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 1
            ordered: true
            readConcern:
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction: true
            autocommit: false
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern:
              w: majority
          command_name: commitTransaction
          database_name: admin
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            # commitTransaction applies w:majority on retries
            writeConcern: { w: majority, wtimeout: 10000 }
          command_name: commitTransaction
          database_name: admin
    outcome:
      collection:
        data:
          - _id: 1

  - description: commitTransaction succeeds after WriteConcernError ShutdownInProgress
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands: ["commitTransaction"]
        errorLabels: ["RetryableWriteError"]
        writeConcernError:
          code: 91
          errmsg: Replication is being shut down
    operations:
      - name: startTransaction
        object: session0
        arguments:
          options:
            writeConcern:
              w: majority
      - name: insertOne
        object: collection
        arguments:
          session: session0
          document:
            _id: 1
        result:
          insertedId: 1
      - name: commitTransaction
        object: session0
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 1
            ordered: true
            readConcern:
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction: true
            autocommit: false
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern:
              w: majority
          command_name: commitTransaction
          database_name: admin
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            # commitTransaction applies w:majority on retries
            writeConcern: { w: majority, wtimeout: 10000 }
          command_name: commitTransaction
          database_name: admin
    outcome:
      collection:
        data:
          - _id: 1

mongo-ruby-driver-2.21.3/spec/spec_tests/data/transactions/retryable-writes.yml

runOn:
  - minServerVersion: "4.0"
    topology: ["replicaset"]
  - minServerVersion: "4.1.8"
    topology: ["sharded"]

database_name: &database_name "transaction-tests"
collection_name: &collection_name "test"

data: []

tests:
  - description: increment txnNumber
    clientOptions:
      retryWrites: true
    operations:
      - name: startTransaction
        object: session0
      - name: insertOne
        object: collection
        arguments:
          session: session0
          document:
            _id: 1
        result:
          insertedId: 1
      - name: commitTransaction
        object: session0
      # Retryable write should include the next txnNumber
      - name: insertOne
        object: collection
        arguments:
          session: session0
          document:
            _id: 2
        result:
          insertedId: 2
      # Next transaction should include the next txnNumber
      - name: startTransaction
        object: session0
      - name: insertOne
        object: collection
        arguments:
          session: session0
          document:
            _id: 3
        result:
          insertedId: 3
      - name: abortTransaction
        object: session0
      # Retryable write should include the next txnNumber
      - name: insertMany
        object: collection
        arguments:
          documents:
            - _id: 4
            - _id: 5
          session: session0
        result:
          insertedIds: {0: 4, 1: 5}
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 1
            ordered: true
            readConcern:
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction: true
            autocommit: false
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern:
          command_name: commitTransaction
          database_name: admin
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 2
            ordered: true
            readConcern:
            lsid: session0
            txnNumber:
              $numberLong: "2"
            startTransaction:
            autocommit:
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 3
            ordered: true
            readConcern:
              afterClusterTime: 42
            lsid: session0
            txnNumber:
              $numberLong: "3"
            startTransaction: true
            autocommit: false
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            abortTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "3"
            startTransaction:
            autocommit: false
            writeConcern:
          command_name: abortTransaction
          database_name: admin
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 4
              - _id: 5
            ordered: true
            readConcern:
            lsid: session0
            txnNumber:
              $numberLong: "4"
            startTransaction:
            autocommit:
            writeConcern:
          command_name: insert
          database_name: *database_name
    outcome:
      collection:
        data:
          - _id: 1
          - _id: 2
          - _id: 4
          - _id: 5

  - description: writes are not retried
    clientOptions:
      retryWrites: true
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands: ["insert"]
        closeConnection: true
    operations:
      - name: startTransaction
        object: session0
      - name: insertOne
        object: collection
        arguments:
          session: session0
          document:
            _id: 1
        result:
          errorLabelsContain: ["TransientTransactionError"]
      - name: abortTransaction
        object: session0
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 1
            ordered: true
            readConcern:
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction: true
            autocommit: false
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            abortTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern:
          command_name: abortTransaction
          database_name: admin
    outcome:
      collection:
        data: []

mongo-ruby-driver-2.21.3/spec/spec_tests/data/transactions/run-command.yml

runOn:
  - minServerVersion: "4.0"
    topology: ["replicaset"]
  - minServerVersion: "4.1.8"
    topology: ["sharded"]

database_name: &database_name "transaction-tests"
collection_name: &collection_name "test"

data: []

tests:
  - description: run command with default read preference
    operations:
      - name: startTransaction
        object: session0
      - name: runCommand
        object: database
        command_name: insert
        arguments:
          session: session0
          command:
            insert: *collection_name
            documents:
              - _id : 1
        result:
          n: 1
      - name: commitTransaction
        object: session0
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id : 1
            readConcern:
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction: true
            autocommit: false
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern:
          command_name: commitTransaction
          database_name: admin

  - description: run command with secondary read preference in client option and primary read preference
      in transaction options
    clientOptions:
      readPreference: secondary
    operations:
      - name: startTransaction
        object: session0
        arguments:
          options:
            readPreference:
              mode: Primary
      - name: runCommand
        object: database
        command_name: insert
        arguments:
          session: session0
          command:
            insert: *collection_name
            documents:
              - _id : 1
        result:
          n: 1
      - name: commitTransaction
        object: session0
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id : 1
            readConcern:
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction: true
            autocommit: false
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern:
          command_name: commitTransaction
          database_name: admin

  - description: run command with explicit primary read preference
    operations:
      - name: startTransaction
        object: session0
      - name: runCommand
        object: database
        command_name: insert
        arguments:
          session: session0
          command:
            insert: *collection_name
            documents:
              - _id : 1
          readPreference:
            mode: Primary
        result:
          n: 1
      - name: commitTransaction
        object: session0
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id : 1
            readConcern:
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction: true
            autocommit: false
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern:
          command_name: commitTransaction
          database_name: admin

  - description: run command fails with explicit secondary read preference
    operations:
      - name: startTransaction
        object: session0
      - name: runCommand
        object: database
        command_name: find
        arguments:
          session: session0
          command:
            find: *collection_name
          readPreference:
            mode: Secondary
        result:
          errorContains: read preference in a transaction must be primary

  - description: run command fails with secondary read preference from transaction options
    operations:
      - name: startTransaction
        object: session0
        arguments:
          options:
            readPreference:
              mode: Secondary
      - name: runCommand
        object: database
        command_name: find
        arguments:
          session: session0
          command:
            find: *collection_name
        result:
          errorContains: read preference in a transaction must be primary

mongo-ruby-driver-2.21.3/spec/spec_tests/data/transactions/transaction-options-repl.yml

runOn:
  - minServerVersion: "4.0"
    topology: ["replicaset"]

database_name: &database_name "transaction-tests"
collection_name: &collection_name "test"

data: []

tests:
  - description: readConcern snapshot in startTransaction options
    sessionOptions:
      session0:
        defaultTransactionOptions:
          readConcern:
            level: majority # Overridden.
    operations:
      - name: startTransaction
        object: session0
        arguments:
          options:
            readConcern:
              level: snapshot
      - name: insertOne
        object: collection
        arguments:
          session: session0
          document:
            _id: 1
        result:
          insertedId: 1
      - name: commitTransaction
        object: session0
      # Now test abort.
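      # Editor's note (derived from the expectations below): the second
      # transaction's first command is expected to carry both the snapshot
      # readConcern from the startTransaction options and an afterClusterTime
      # value propagated from the committed first transaction.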
      - name: startTransaction
        object: session0
        arguments:
          options:
            readConcern:
              level: snapshot
      - name: insertOne
        object: collection
        arguments:
          session: session0
          document:
            _id: 2
        result:
          insertedId: 2
      - name: abortTransaction
        object: session0
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 1
            ordered: true
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction: true
            autocommit: false
            readConcern:
              level: snapshot
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            readConcern:
            writeConcern:
          command_name: commitTransaction
          database_name: admin
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 2
            ordered: true
            lsid: session0
            txnNumber:
              $numberLong: "2"
            startTransaction: true
            autocommit: false
            readConcern:
              level: snapshot
              afterClusterTime: 42
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            abortTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "2"
            startTransaction:
            autocommit: false
            readConcern:
            writeConcern:
          command_name: abortTransaction
          database_name: admin
    outcome:
      collection:
        data:
          - _id: 1

mongo-ruby-driver-2.21.3/spec/spec_tests/data/transactions/transaction-options.yml

runOn:
  - minServerVersion: "4.0"
    topology: ["replicaset"]
  - minServerVersion: "4.1.8"
    topology: ["sharded"]

database_name: &database_name "transaction-tests"
collection_name: &collection_name "test"

data: []

tests:
  - description: no transaction options set
    operations: &commitAbortOperations
      - name: startTransaction
        object: session0
      - name: insertOne
        object: collection
        arguments:
          session: session0
          document:
            _id: 1
        result:
          insertedId: 1
      - name: commitTransaction
        object: session0
      # Now test abort.
      - name: startTransaction
        object: session0
      - name: insertOne
        object: collection
        arguments:
          session: session0
          document:
            _id: 2
        result:
          insertedId: 2
      - name: abortTransaction
        object: session0
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 1
            ordered: true
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction: true
            autocommit: false
            readConcern:
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            readConcern:
            writeConcern:
          command_name: commitTransaction
          database_name: admin
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 2
            ordered: true
            lsid: session0
            txnNumber:
              $numberLong: "2"
            startTransaction: true
            autocommit: false
            readConcern:
              afterClusterTime: 42
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            abortTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "2"
            startTransaction:
            autocommit: false
            readConcern:
            writeConcern:
          command_name: abortTransaction
          database_name: admin
    outcome: &outcome
      collection:
        data:
          - _id: 1

  - description: transaction options inherited from client
    clientOptions:
      w: 1
      readConcernLevel: local
    operations: *commitAbortOperations
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 1
            ordered: true
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction: true
            autocommit: false
            readConcern:
              level: local
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            readConcern:
            writeConcern:
              w: 1
          command_name: commitTransaction
          database_name: admin
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 2
            ordered: true
            lsid: session0
            txnNumber:
              $numberLong: "2"
            startTransaction: true
            autocommit: false
            readConcern:
              level: local
              afterClusterTime: 42
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            abortTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "2"
            startTransaction:
            autocommit: false
            readConcern:
            writeConcern:
              w: 1
          command_name: abortTransaction
          database_name: admin
    outcome: *outcome

  - description: transaction options inherited from defaultTransactionOptions
    sessionOptions:
      session0:
        defaultTransactionOptions:
          readConcern:
            level: majority
          writeConcern:
            w: 1
    operations: *commitAbortOperations
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 1
            ordered: true
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction: true
            autocommit: false
            readConcern:
              level: majority
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            readConcern:
            writeConcern:
              w: 1
          command_name: commitTransaction
          database_name: admin
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 2
            ordered: true
            lsid: session0
            txnNumber:
              $numberLong: "2"
            startTransaction: true
            autocommit: false
            readConcern:
              level: majority
              afterClusterTime: 42
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            abortTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "2"
            startTransaction:
            autocommit: false
            readConcern:
            writeConcern:
              w: 1
          command_name: abortTransaction
          database_name: admin
    outcome: *outcome

  - description: startTransaction options override defaults
    clientOptions:
      readConcernLevel: local
      w: 1
    sessionOptions:
      session0:
        defaultTransactionOptions:
          readConcern:
            level: snapshot
          writeConcern:
            w: 1
    operations:
      - name: startTransaction
        object: session0
        arguments:
          options:
            readConcern:
              level: majority
            writeConcern:
              w: majority
      - name: insertOne
        object: collection
        arguments:
          session: session0
          document:
            _id: 1
        result:
          insertedId: 1
      - name: commitTransaction
        object: session0
      - name: startTransaction
        object: session0
        arguments:
          options:
            readConcern:
              level: majority
            writeConcern:
              w: majority
      - name: insertOne
        object: collection
        arguments:
          session: session0
          document:
            _id: 2
        result:
          insertedId: 2
      - name: abortTransaction
        object: session0
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 1
            ordered: true
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction: true
            autocommit: false
            readConcern:
              level: majority
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            readConcern:
            writeConcern:
              w: majority
          command_name: commitTransaction
          database_name: admin
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 2
            ordered: true
            lsid: session0
            txnNumber:
              $numberLong: "2"
            startTransaction: true
            autocommit: false
            readConcern:
              level: majority
              afterClusterTime: 42
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            abortTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "2"
            startTransaction:
            autocommit: false
            readConcern:
            writeConcern:
              w: majority
          command_name: abortTransaction
          database_name: admin
    outcome: *outcome

  - description: defaultTransactionOptions override client options
    clientOptions:
      readConcernLevel: local
      w: 1
    sessionOptions:
      session0:
        defaultTransactionOptions:
          readConcern:
            level: majority
          writeConcern:
            w: majority
    operations: *commitAbortOperations
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 1
            ordered: true
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction: true
            autocommit: false
            readConcern:
              level: majority
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            readConcern:
            writeConcern:
              w: majority
          command_name: commitTransaction
          database_name: admin
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 2
            ordered: true
            lsid: session0
            txnNumber:
              $numberLong: "2"
            startTransaction: true
            autocommit: false
            readConcern:
              level: majority
              afterClusterTime: 42
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            abortTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "2"
            startTransaction:
            autocommit: false
            readConcern:
            writeConcern:
              w: majority
          command_name: abortTransaction
          database_name: admin
    outcome: *outcome

  - description: readConcern local in defaultTransactionOptions
    clientOptions:
      w: 1
    sessionOptions:
      session0:
        defaultTransactionOptions:
          readConcern:
            level: local
    operations: *commitAbortOperations
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 1
            ordered: true
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction: true
            autocommit: false
            readConcern:
              level: local
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            readConcern:
            writeConcern:
              w: 1
          command_name: commitTransaction
          database_name: admin
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 2
            ordered: true
            lsid: session0
            txnNumber:
              $numberLong: "2"
            startTransaction: true
            autocommit: false
            readConcern:
              level: local
              afterClusterTime: 42
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            abortTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "2"
            startTransaction:
            autocommit: false
            readConcern:
            writeConcern:
              w: 1
          command_name: abortTransaction
          database_name: admin
    outcome: *outcome

  - description: client writeConcern ignored for bulk
    clientOptions:
      w: majority
    operations:
      - name: startTransaction
        object: session0
        arguments:
          options:
            writeConcern:
              w: 1
      - name: bulkWrite
        object: collection
        arguments:
          requests:
            - name: insertOne
              arguments:
                document: {_id: 1}
          session: session0
        result:
          deletedCount: 0
          insertedIds: {0: 1}
          matchedCount: 0
          modifiedCount: 0
          upsertedCount: 0
          upsertedIds: {}
      - name: commitTransaction
        object: session0
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 1
            ordered: true
            readConcern:
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction: true
            autocommit: false
            # No writeConcern.
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern:
              w: 1
          command_name: commitTransaction
          database_name: admin
    outcome: *outcome

  - description: readPreference inherited from client
    clientOptions:
      readPreference: secondary
    operations:
      - name: startTransaction
        object: session0
      - name: insertOne
        object: collection
        arguments:
          session: session0
          document:
            _id: 1
        result:
          insertedId: 1
      - name: find
        object: collection
        arguments:
          session: session0
          filter:
            _id: 1
        result:
          errorContains: read preference in a transaction must be primary
      - name: commitTransaction
        object: session0
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 1
            ordered: true
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction: true
            autocommit: false
            readConcern:
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            readConcern:
            writeConcern:
          command_name: commitTransaction
          database_name: admin
    outcome:
      collection:
        data:
          - _id: 1

  - description: readPreference inherited from defaultTransactionOptions
    clientOptions:
      readPreference: primary
    sessionOptions:
      session0:
        defaultTransactionOptions:
          readPreference:
            mode: Secondary
    operations:
      - name: startTransaction
        object: session0
      - name: insertOne
        object: collection
        arguments:
          session: session0
          document:
            _id: 1
        result:
          insertedId: 1
      - name: find
        object: collection
        arguments:
          session: session0
          filter:
            _id: 1
        result:
          errorContains: read preference in a transaction must be primary
      - name: commitTransaction
        object: session0
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 1
            ordered: true
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction: true
            autocommit: false
            readConcern:
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            readConcern:
            writeConcern:
          command_name: commitTransaction
          database_name: admin
    outcome:
      collection:
        data:
          - _id: 1

  - description: startTransaction overrides readPreference
    clientOptions:
      readPreference: primary
    sessionOptions:
      session0:
        defaultTransactionOptions:
          readPreference:
            mode: Primary
    operations:
      - name: startTransaction
        object: session0
        arguments:
          options:
            readPreference:
              mode: Secondary
      - name: insertOne
        object: collection
        arguments:
          session: session0
          document:
            _id: 1
        result:
          insertedId: 1
      - name: find
        object: collection
        arguments:
          session: session0
          filter:
            _id: 1
        result:
          errorContains: read preference in a transaction must be primary
      - name: commitTransaction
        object: session0
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - _id: 1
            ordered: true
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction: true
            autocommit: false
            readConcern:
            writeConcern:
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            readConcern:
            writeConcern:
          command_name: commitTransaction
          database_name: admin
    outcome:
      collection:
        data:
          - _id: 1

mongo-ruby-driver-2.21.3/spec/spec_tests/data/transactions/update.yml

runOn:
  - minServerVersion: "4.0"
    topology: ["replicaset"]
  - minServerVersion: "4.1.8"
    topology: ["sharded"]

database_name: &database_name "transaction-tests"
collection_name: &collection_name "test"

data:
  - _id: 1
  - _id: 2
  - _id: 3

tests:
  - description: update
    operations:
      - name: startTransaction
        object: session0
      - name: updateOne
        object: collection
        arguments:
          session: session0
          filter: {_id: 4}
          update:
            $inc: {x: 1}
          upsert: true
        result:
          matchedCount: 0
          modifiedCount: 0
          upsertedCount: 1
          upsertedId: 4
      - name: replaceOne
        object: collection
        arguments:
          session: session0
          filter: {x: 1}
          replacement: {y: 1}
        result:
          matchedCount: 1
          modifiedCount: 1
          upsertedCount: 0
      - name: updateMany
        object: collection
        arguments:
          session: session0
          filter:
            _id: {$gte: 3}
          update:
            $set: {z: 1}
        result:
          matchedCount: 2
          modifiedCount: 2
          upsertedCount: 0
      - name: commitTransaction
        object: session0
    expectations:
      - command_started_event:
          command:
            update: *collection_name
            updates:
              - q: {_id: 4}
                u: {$inc: {x: 1}}
                upsert: true
            ordered: true
            readConcern:
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction: true
            autocommit: false
            writeConcern:
          command_name: update
          database_name: *database_name
      - command_started_event:
          command:
            update: *collection_name
            updates:
              - q: {x: 1}
                u: {y: 1}
            ordered: true
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern:
          command_name: update
          database_name: *database_name
      - command_started_event:
          command:
            update: *collection_name
            updates:
              - q: {_id: {$gte: 3}}
                u: {$set: {z: 1}}
                multi: true
            ordered: true
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern:
          command_name: update
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern:
          command_name: commitTransaction
          database_name: admin
    outcome:
      collection:
        data:
          - {_id: 1}
          - {_id: 2}
          - {_id: 3, z: 1}
          - {_id: 4, y: 1, z: 1}

  - description: collections writeConcern ignored for update
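    # Editor's note (derived from the expectations below): inside a
    # transaction the collection-level write concern is not sent with the
    # update commands; only the final commitTransaction carries the
    # transaction's w:majority write concern.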
    operations:
      - name: startTransaction
        object: session0
        arguments:
          options:
            writeConcern:
              w: majority
      - name: updateOne
        object: collection
        collectionOptions:
          writeConcern:
            w: majority
        arguments:
          session: session0
          filter: {_id: 4}
          update:
            $inc: {x: 1}
          upsert: true
        result:
          matchedCount: 0
          modifiedCount: 0
          upsertedCount: 1
          upsertedId: 4
      - name: replaceOne
        object: collection
        collectionOptions:
          writeConcern:
            w: majority
        arguments:
          session: session0
          filter: {x: 1}
          replacement: {y: 1}
        result:
          matchedCount: 1
          modifiedCount: 1
          upsertedCount: 0
      - name: updateMany
        object: collection
        collectionOptions:
          writeConcern:
            w: majority
        arguments:
          session: session0
          filter:
            _id: {$gte: 3}
          update:
            $set: {z: 1}
        result:
          matchedCount: 2
          modifiedCount: 2
          upsertedCount: 0
      - name: commitTransaction
        object: session0
    expectations:
      - command_started_event:
          command:
            update: *collection_name
            updates:
              - q: {_id: 4}
                u: {$inc: {x: 1}}
                upsert: true
            ordered: true
            readConcern:
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction: true
            autocommit: false
            writeConcern:
          command_name: update
          database_name: *database_name
      - command_started_event:
          command:
            update: *collection_name
            updates:
              - q: {x: 1}
                u: {y: 1}
            ordered: true
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern:
          command_name: update
          database_name: *database_name
      - command_started_event:
          command:
            update: *collection_name
            updates:
              - q: {_id: {$gte: 3}}
                u: {$set: {z: 1}}
                multi: true
            ordered: true
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern:
          command_name: update
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber:
              $numberLong: "1"
            startTransaction:
            autocommit: false
            writeConcern:
              w: majority
          command_name: commitTransaction
          database_name: admin

mongo-ruby-driver-2.21.3/spec/spec_tests/data/transactions/write-concern.yml

# Assumes the default for transactions is the same as for all ops, tests
# setting the writeConcern to "majority".
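# Editor's note (derived from these tests): within a transaction,
# per-operation and per-collection write concerns are never sent to the
# server; only commitTransaction and abortTransaction carry the transaction's
# write concern, asserted below for both explicit w:majority and the default.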
runOn: - minServerVersion: "4.0" topology: ["replicaset"] - minServerVersion: "4.1.8" topology: ["sharded"] database_name: &database_name "transaction-tests" collection_name: &collection_name "test" data: &data - _id: 0 tests: - description: commit with majority operations: - name: startTransaction object: session0 arguments: options: writeConcern: w: majority - name: insertOne object: collection arguments: session: session0 document: _id: 1 result: insertedId: 1 - &commitTransaction name: commitTransaction object: session0 expectations: - &insertOneEvent command_started_event: command: insert: *collection_name documents: - _id: 1 ordered: true <<: &transactionCommandArgs lsid: session0 txnNumber: $numberLong: "1" startTransaction: true autocommit: false readConcern: writeConcern: command_name: insert database_name: *database_name - command_started_event: command: commitTransaction: 1 lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false writeConcern: w: majority command_name: commitTransaction database_name: admin outcome: collection: data: - _id: 0 - _id: 1 - description: commit with default operations: - &startTransaction name: startTransaction object: session0 - name: insertOne object: collection arguments: session: session0 document: _id: 1 result: insertedId: 1 - *commitTransaction expectations: - command_started_event: command: insert: *collection_name documents: - _id: 1 ordered: true <<: *transactionCommandArgs command_name: insert database_name: *database_name - &commitWithDefaultWCEvent command_started_event: command: commitTransaction: 1 lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false writeConcern: command_name: commitTransaction database_name: admin outcome: collection: data: - _id: 0 - _id: 1 - description: abort with majority operations: - name: startTransaction object: session0 arguments: options: writeConcern: w: majority - name: insertOne object: collection arguments: session: session0 document: _id: 1 result: insertedId: 1 - name: abortTransaction object: session0 expectations: - command_started_event: command: insert: *collection_name documents: - _id: 1 ordered: true <<: *transactionCommandArgs command_name: insert database_name: *database_name - command_started_event: command: abortTransaction: 1 lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false writeConcern: w: majority command_name: abortTransaction database_name: admin outcome: collection: data: *data - description: abort with default operations: - name: startTransaction object: session0 - name: insertOne object: collection arguments: session: session0 document: _id: 1 result: insertedId: 1 - name: abortTransaction object: session0 expectations: - command_started_event: command: insert: *collection_name documents: - _id: 1 ordered: true <<: *transactionCommandArgs command_name: insert database_name: *database_name - command_started_event: command: abortTransaction: 1 lsid: session0 txnNumber: $numberLong: "1" startTransaction: autocommit: false writeConcern: command_name: abortTransaction database_name: admin outcome: collection: data: *data - description: start with unacknowledged write concern operations: - name: startTransaction object: session0 arguments: options: writeConcern: w: 0 result: # Client-side error. 
errorContains: transactions do not support unacknowledged write concern - description: start with implicit unacknowledged write concern clientOptions: w: 0 operations: - name: startTransaction object: session0 result: # Client-side error. errorContains: transactions do not support unacknowledged write concern - description: unacknowledged write concern coll insertOne operations: - *startTransaction - name: insertOne <<: &collection_w0 object: collection collectionOptions: writeConcern: { w: 0 } arguments: session: session0 document: _id: 1 result: insertedId: 1 - *commitTransaction expectations: - *insertOneEvent - *commitWithDefaultWCEvent outcome: collection: data: - _id: 0 - _id: 1 - description: unacknowledged write concern coll insertMany operations: - *startTransaction - name: insertMany <<: *collection_w0 arguments: session: session0 documents: - _id: 1 - _id: 2 result: insertedIds: {0: 1, 1: 2} - *commitTransaction expectations: - command_started_event: command: insert: *collection_name documents: - _id: 1 - _id: 2 ordered: true <<: *transactionCommandArgs command_name: insert database_name: *database_name - *commitWithDefaultWCEvent outcome: collection: data: - _id: 0 - _id: 1 - _id: 2 - description: unacknowledged write concern coll bulkWrite operations: - *startTransaction - name: bulkWrite <<: *collection_w0 arguments: session: session0 requests: - name: insertOne arguments: document: {_id: 1} result: deletedCount: 0 insertedCount: 1 insertedIds: {0: 1} matchedCount: 0 modifiedCount: 0 upsertedCount: 0 upsertedIds: {} - *commitTransaction expectations: - *insertOneEvent - *commitWithDefaultWCEvent outcome: collection: data: - _id: 0 - _id: 1 - description: unacknowledged write concern coll deleteOne operations: - *startTransaction - name: deleteOne <<: *collection_w0 arguments: session: session0 filter: _id: 0 result: deletedCount: 1 - *commitTransaction expectations: - command_started_event: command: delete: *collection_name deletes: - q: {_id: 0} limit: 1 ordered: true <<: *transactionCommandArgs command_name: delete database_name: *database_name - *commitWithDefaultWCEvent outcome: collection: data: [] - description: unacknowledged write concern coll deleteMany operations: - *startTransaction - name: deleteMany <<: *collection_w0 arguments: session: session0 filter: _id: 0 result: deletedCount: 1 - *commitTransaction expectations: - command_started_event: command: delete: *collection_name deletes: - q: {_id: 0} limit: 0 ordered: true <<: *transactionCommandArgs command_name: delete database_name: *database_name - *commitWithDefaultWCEvent outcome: collection: data: [] - description: unacknowledged write concern coll updateOne operations: - *startTransaction - name: updateOne <<: *collection_w0 arguments: session: session0 filter: {_id: 0} update: $inc: {x: 1} upsert: true result: matchedCount: 1 modifiedCount: 1 upsertedCount: 0 - *commitTransaction expectations: - command_started_event: command: update: *collection_name updates: - q: {_id: 0} u: {$inc: {x: 1}} upsert: true ordered: true <<: *transactionCommandArgs command_name: update database_name: *database_name - *commitWithDefaultWCEvent outcome: collection: data: - {_id: 0, x: 1} - description: unacknowledged write concern coll updateMany operations: - *startTransaction - name: updateMany <<: *collection_w0 arguments: session: session0 filter: {_id: 0} update: $inc: {x: 1} upsert: true result: matchedCount: 1 modifiedCount: 1 upsertedCount: 0 - *commitTransaction expectations: - command_started_event: command: update: 
  - description: unacknowledged write concern coll updateMany
    operations:
      - *startTransaction
      - name: updateMany
        <<: *collection_w0
        arguments:
          session: session0
          filter: { _id: 0 }
          update:
            $inc: { x: 1 }
          upsert: true
        result:
          matchedCount: 1
          modifiedCount: 1
          upsertedCount: 0
      - *commitTransaction
    expectations:
      - command_started_event:
          command:
            update: *collection_name
            updates:
              - q: { _id: 0 }
                u: { $inc: { x: 1 } }
                multi: true
                upsert: true
            ordered: true
            <<: *transactionCommandArgs
          command_name: update
          database_name: *database_name
      - *commitWithDefaultWCEvent
    outcome:
      collection:
        data:
          - { _id: 0, x: 1 }

  - description: unacknowledged write concern coll findOneAndDelete
    operations:
      - *startTransaction
      - name: findOneAndDelete
        <<: *collection_w0
        arguments:
          session: session0
          filter: { _id: 0 }
        result: { _id: 0 }
      - *commitTransaction
    expectations:
      - command_started_event:
          command:
            findAndModify: *collection_name
            query: { _id: 0 }
            remove: True
            <<: *transactionCommandArgs
          command_name: findAndModify
          database_name: *database_name
      - *commitWithDefaultWCEvent
    outcome:
      collection:
        data: []

  - description: unacknowledged write concern coll findOneAndReplace
    operations:
      - *startTransaction
      - name: findOneAndReplace
        <<: *collection_w0
        arguments:
          session: session0
          filter: { _id: 0 }
          replacement: { x: 1 }
          returnDocument: Before
        result: { _id: 0 }
      - *commitTransaction
    expectations:
      - command_started_event:
          command:
            findAndModify: *collection_name
            query: { _id: 0 }
            update: { x: 1 }
            new: false
            <<: *transactionCommandArgs
          command_name: findAndModify
          database_name: *database_name
      - *commitWithDefaultWCEvent
    outcome:
      collection:
        data:
          - { _id: 0, x: 1 }

  - description: unacknowledged write concern coll findOneAndUpdate
    operations:
      - *startTransaction
      - name: findOneAndUpdate
        <<: *collection_w0
        arguments:
          session: session0
          filter: { _id: 0 }
          update:
            $inc: { x: 1 }
          returnDocument: Before
        result: { _id: 0 }
      - *commitTransaction
    expectations:
      - command_started_event:
          command:
            findAndModify: *collection_name
            query: { _id: 0 }
            update: { $inc: { x: 1 } }
            new: false
            <<: *transactionCommandArgs
          command_name: findAndModify
          database_name: *database_name
      - *commitWithDefaultWCEvent
    outcome:
      collection:
        data:
          - { _id: 0, x: 1 }

# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/transactions_api/callback-aborts.yml

runOn:
  - minServerVersion: "4.0"
    topology: ["replicaset"]
  - minServerVersion: "4.1.8"
    topology: ["sharded"]

database_name: &database_name "withTransaction-tests"
collection_name: &collection_name "test"

data: []

tests:
  -
    # Session state will be ABORTED when callback returns to withTransaction
    description: withTransaction succeeds if callback aborts
    useMultipleMongoses: true
    operations:
      - name: withTransaction
        object: session0
        arguments:
          callback:
            operations:
              - name: insertOne
                object: collection
                arguments:
                  session: session0
                  document: { _id: 1 }
                result:
                  insertedId: 1
              - name: abortTransaction
                object: session0
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - { _id: 1 }
            ordered: true
            lsid: session0
            txnNumber: { $numberLong: "1" }
            startTransaction: true
            autocommit: false
            # omitted fields
            readConcern: ~
            writeConcern: ~
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            abortTransaction: 1
            lsid: session0
            txnNumber: { $numberLong: "1" }
            autocommit: false
            # omitted fields
            readConcern: ~
            startTransaction: ~
            writeConcern: ~
          command_name: abortTransaction
          database_name: admin
    outcome:
      collection:
        data: []
  -
    # Session state will be ABORTED when callback returns to withTransaction
    description: withTransaction succeeds if callback aborts with no ops
    useMultipleMongoses: true
    operations:
      - name: withTransaction
        object: session0
        arguments:
          callback:
            operations:
              - name: abortTransaction
                object: session0
    expectations: []
    outcome:
      collection:
        data: []
  -
    # Session state will be NO_TXN when callback returns to withTransaction
    description: withTransaction still succeeds if callback aborts and runs extra op
    useMultipleMongoses: true
    operations:
      - name: withTransaction
        object: session0
        arguments:
          callback:
            operations:
              - name: insertOne
                object: collection
                arguments:
                  session: session0
                  document: { _id: 1 }
                result:
                  insertedId: 1
              - name: abortTransaction
                object: session0
              - name: insertOne
                object: collection
                arguments:
                  session: session0
                  document: { _id: 2 }
                result:
                  insertedId: 2
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - { _id: 1 }
            ordered: true
            lsid: session0
            txnNumber: { $numberLong: "1" }
            startTransaction: true
            autocommit: false
            # omitted fields
            readConcern: ~
            writeConcern: ~
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            abortTransaction: 1
            lsid: session0
            txnNumber: { $numberLong: "1" }
            autocommit: false
            # omitted fields
            readConcern: ~
            startTransaction: ~
            writeConcern: ~
          command_name: abortTransaction
          database_name: admin
      - command_started_event:
          command:
            # This test is agnostic about retryWrites, so we do not assert the
            # txnNumber. If retryWrites=true, the txnNumber will be incremented
            # from the value used in the previous transaction; otherwise, the
            # field will not be present at all.
            insert: *collection_name
            documents:
              - { _id: 2 }
            ordered: true
            lsid: session0
            # omitted fields
            autocommit: ~
            readConcern: ~
            startTransaction: ~
            writeConcern: ~
          command_name: insert
          database_name: *database_name
    outcome:
      collection:
        data:
          - { _id: 2 }
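The "callback aborts" cases above pin down driver behavior when the user ends the transaction inside the withTransaction callback. A minimal sketch of what that looks like against the Ruby driver's public session API (host, database, and collection names here are illustrative, not taken from the spec):

```ruby
require 'mongo'

client = Mongo::Client.new(['localhost:27017'], database: 'withTransaction-tests')
collection = client[:test]

session = client.start_session
session.with_transaction do
  collection.insert_one({ _id: 1 }, session: session)
  # Aborting inside the callback leaves the session with no active
  # transaction; with_transaction then returns without committing,
  # which is exactly what these tests assert (no commitTransaction event).
  session.abort_transaction
end
```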
# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/transactions_api/callback-commits.yml

runOn:
  - minServerVersion: "4.0"
    topology: ["replicaset"]
  - minServerVersion: "4.1.8"
    topology: ["sharded"]

database_name: &database_name "withTransaction-tests"
collection_name: &collection_name "test"

data: []

tests:
  -
    # Session state will be COMMITTED when callback returns to withTransaction
    description: withTransaction succeeds if callback commits
    useMultipleMongoses: true
    operations:
      - name: withTransaction
        object: session0
        arguments:
          callback:
            operations:
              - name: insertOne
                object: collection
                arguments:
                  session: session0
                  document: { _id: 1 }
                result:
                  insertedId: 1
              - name: insertOne
                object: collection
                arguments:
                  session: session0
                  document: { _id: 2 }
                result:
                  insertedId: 2
              - name: commitTransaction
                object: session0
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - { _id: 1 }
            ordered: true
            lsid: session0
            txnNumber: { $numberLong: "1" }
            startTransaction: true
            autocommit: false
            # omitted fields
            readConcern: ~
            writeConcern: ~
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - { _id: 2 }
            ordered: true
            lsid: session0
            txnNumber: { $numberLong: "1" }
            autocommit: false
            # omitted fields
            readConcern: ~
            startTransaction: ~
            writeConcern: ~
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber: { $numberLong: "1" }
            autocommit: false
            # omitted fields
            readConcern: ~
            startTransaction: ~
            writeConcern: ~
          command_name: commitTransaction
          database_name: admin
    outcome:
      collection:
        data:
          - { _id: 1 }
          - { _id: 2 }
  -
    # Session state will be NO_TXN when callback returns to withTransaction
    description: withTransaction still succeeds if callback commits and runs extra op
    useMultipleMongoses: true
    operations:
      - name: withTransaction
        object: session0
        arguments:
          callback:
            operations:
              - name: insertOne
                object: collection
                arguments:
                  session: session0
                  document: { _id: 1 }
                result:
                  insertedId: 1
              - name: insertOne
                object: collection
                arguments:
                  session: session0
                  document: { _id: 2 }
                result:
                  insertedId: 2
              - name: commitTransaction
                object: session0
              - name: insertOne
                object: collection
                arguments:
                  session: session0
                  document: { _id: 3 }
                result:
                  insertedId: 3
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - { _id: 1 }
            ordered: true
            lsid: session0
            txnNumber: { $numberLong: "1" }
            startTransaction: true
            autocommit: false
            # omitted fields
            readConcern: ~
            writeConcern: ~
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - { _id: 2 }
            ordered: true
            lsid: session0
            txnNumber: { $numberLong: "1" }
            autocommit: false
            # omitted fields
            readConcern: ~
            startTransaction: ~
            writeConcern: ~
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber: { $numberLong: "1" }
            autocommit: false
            # omitted fields
            readConcern: ~
            startTransaction: ~
            writeConcern: ~
          command_name: commitTransaction
          database_name: admin
      - command_started_event:
          command:
            # This test is agnostic about retryWrites, so we do not assert the
            # txnNumber. If retryWrites=true, the txnNumber will be incremented
            # from the value used in the previous transaction; otherwise, the
            # field will not be present at all.
            insert: *collection_name
            documents:
              - { _id: 3 }
            ordered: true
            lsid: session0
            # omitted fields
            autocommit: ~
            readConcern: ~
            startTransaction: ~
            writeConcern: ~
          command_name: insert
          database_name: *database_name
    outcome:
      collection:
        data:
          - { _id: 1 }
          - { _id: 2 }
          - { _id: 3 }
# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/transactions_api/callback-retry.yml

runOn:
  - minServerVersion: "4.0"
    topology: ["replicaset"]
  - minServerVersion: "4.1.8"
    topology: ["sharded"]

database_name: &database_name "withTransaction-tests"
collection_name: &collection_name "test"

data: []

tests:
  - description: callback succeeds after multiple connection errors
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 2 }
      data:
        failCommands: ["insert"]
        closeConnection: true
    operations:
      - name: withTransaction
        object: session0
        arguments:
          callback:
            operations:
              - # We do not assert the result here, as insertOne will fail for
                # the first two executions of the callback before ultimately
                # succeeding and returning a result. Asserting the state of the
                # output collection after the test is sufficient.
                name: insertOne
                object: collection
                arguments:
                  session: session0
                  document: { _id: 1 }
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - { _id: 1 }
            ordered: true
            lsid: session0
            txnNumber: { $numberLong: "1" }
            startTransaction: true
            autocommit: false
            # omitted fields
            readConcern: ~
            writeConcern: ~
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            abortTransaction: 1
            lsid: session0
            txnNumber: { $numberLong: "1" }
            autocommit: false
            # omitted fields
            readConcern: ~
            startTransaction: ~
            writeConcern: ~
          command_name: abortTransaction
          database_name: admin
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - { _id: 1 }
            ordered: true
            lsid: session0
            # second transaction will be causally consistent with the first
            readConcern: { afterClusterTime: 42 }
            # txnNumber is incremented when retrying the transaction
            txnNumber: { $numberLong: "2" }
            startTransaction: true
            autocommit: false
            # omitted fields
            writeConcern: ~
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            abortTransaction: 1
            lsid: session0
            txnNumber: { $numberLong: "2" }
            autocommit: false
            # omitted fields
            readConcern: ~
            startTransaction: ~
            writeConcern: ~
          command_name: abortTransaction
          database_name: admin
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - { _id: 1 }
            ordered: true
            lsid: session0
            # third transaction will be causally consistent with the second
            readConcern: { afterClusterTime: 42 }
            # txnNumber is incremented when retrying the transaction
            txnNumber: { $numberLong: "3" }
            startTransaction: true
            autocommit: false
            # omitted fields
            writeConcern: ~
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber: { $numberLong: "3" }
            autocommit: false
            # omitted fields
            readConcern: ~
            startTransaction: ~
            writeConcern: ~
          command_name: commitTransaction
          database_name: admin
    outcome:
      collection:
        data:
          - { _id: 1 }

  - description: callback is not retried after non-transient error (DuplicateKeyError)
    useMultipleMongoses: true
    operations:
      - name: withTransaction
        object: session0
        arguments:
          callback:
            operations:
              - name: insertOne
                object: collection
                arguments:
                  session: session0
                  document: { _id: 1 }
                result:
                  insertedId: 1
              - name: insertOne
                object: collection
                arguments:
                  session: session0
                  document: { _id: 1 }
                result:
                  errorLabelsOmit: ["TransientTransactionError", "UnknownTransactionCommitResult"]
        result:
          errorLabelsOmit: ["TransientTransactionError", "UnknownTransactionCommitResult"]
          # DuplicateKey error code included in the bulk write error message
          # returned by the server
          errorContains: E11000
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - { _id: 1 }
            ordered: true
            lsid: session0
            txnNumber: { $numberLong: "1" }
            startTransaction: true
            autocommit: false
            # omitted fields
            readConcern: ~
            writeConcern: ~
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - { _id: 1 }
            ordered: true
            lsid: session0
            txnNumber: { $numberLong: "1" }
            autocommit: false
            # omitted fields
            readConcern: ~
            startTransaction: ~
            writeConcern: ~
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            abortTransaction: 1
            lsid: session0
            txnNumber: { $numberLong: "1" }
            autocommit: false
            # omitted fields
            readConcern: ~
            startTransaction: ~
            writeConcern: ~
          command_name: abortTransaction
          database_name: admin
    outcome:
      collection:
        data: []
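These two cases together define the retry contract of withTransaction: errors labeled TransientTransactionError restart the entire callback in a fresh transaction (with an incremented txnNumber), while anything else, such as the DuplicateKey error above, surfaces immediately. A simplified sketch of that loop in Ruby; this is a hypothetical helper, not the driver's actual implementation, which also enforces a 120-second overall time limit and special-cases commit errors:

```ruby
# Hypothetical helper illustrating the retry loop these tests specify.
def with_transaction_sketch(session)
  begin
    session.start_transaction
    yield
    session.commit_transaction
  rescue Mongo::Error => e
    begin
      session.abort_transaction
    rescue Mongo::Error
      # Ignore abort failures; the server eventually cleans up the txn.
    end
    # Transient errors restart the whole callback; non-transient errors
    # (e.g. E11000 duplicate key) propagate to the caller.
    retry if e.respond_to?(:label?) && e.label?('TransientTransactionError')
    raise
  end
end
```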
# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/transactions_api/commit-retry.yml

runOn:
  - minServerVersion: "4.0"
    topology: ["replicaset"]
  - minServerVersion: "4.1.8"
    topology: ["sharded"]

database_name: &database_name "withTransaction-tests"
collection_name: &collection_name "test"

data: []

tests:
  - description: commitTransaction succeeds after multiple connection errors
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 2 }
      data:
        failCommands: ["commitTransaction"]
        closeConnection: true
    operations:
      - &withTransaction
        name: withTransaction
        object: session0
        arguments:
          callback:
            operations:
              - name: insertOne
                object: collection
                arguments:
                  session: session0
                  document: { _id: 1 }
                result:
                  insertedId: 1
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - { _id: 1 }
            ordered: true
            lsid: session0
            txnNumber: { $numberLong: "1" }
            startTransaction: true
            autocommit: false
            # omitted fields
            readConcern: ~
            writeConcern: ~
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber: { $numberLong: "1" }
            autocommit: false
            # omitted fields
            readConcern: ~
            startTransaction: ~
            writeConcern: ~
          command_name: commitTransaction
          database_name: admin
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber: { $numberLong: "1" }
            autocommit: false
            # commitTransaction applies w:majority on retries (SPEC-1185)
            writeConcern: { w: majority, wtimeout: 10000 }
            # omitted fields
            readConcern: ~
            startTransaction: ~
          command_name: commitTransaction
          database_name: admin
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber: { $numberLong: "1" }
            autocommit: false
            # commitTransaction applies w:majority on retries (SPEC-1185)
            writeConcern: { w: majority, wtimeout: 10000 }
            # omitted fields
            readConcern: ~
            startTransaction: ~
          command_name: commitTransaction
          database_name: admin
    outcome:
      collection:
        data:
          - { _id: 1 }

  - description: commitTransaction retry only overwrites write concern w option
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 2 }
      data:
        failCommands: ["commitTransaction"]
        closeConnection: true
    operations:
      - name: withTransaction
        object: session0
        arguments:
          callback:
            operations:
              - name: insertOne
                object: collection
                arguments:
                  session: session0
                  document: { _id: 1 }
                result:
                  insertedId: 1
          options:
            writeConcern: { w: 2, j: true, wtimeout: 5000 }
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - { _id: 1 }
            ordered: true
            lsid: session0
            txnNumber: { $numberLong: "1" }
            startTransaction: true
            autocommit: false
            # omitted fields
            readConcern: ~
            writeConcern: ~
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber: { $numberLong: "1" }
            autocommit: false
            writeConcern: { w: 2, j: true, wtimeout: 5000 }
            # omitted fields
            readConcern: ~
            startTransaction: ~
          command_name: commitTransaction
          database_name: admin
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber: { $numberLong: "1" }
            autocommit: false
            # commitTransaction applies w:majority on retries (SPEC-1185)
            writeConcern: { w: majority, j: true, wtimeout: 5000 }
            # omitted fields
            readConcern: ~
            startTransaction: ~
          command_name: commitTransaction
          database_name: admin
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber: { $numberLong: "1" }
            autocommit: false
            # commitTransaction applies w:majority on retries (SPEC-1185)
            writeConcern: { w: majority, j: true, wtimeout: 5000 }
            # omitted fields
            readConcern: ~
            startTransaction: ~
          command_name: commitTransaction
          database_name: admin
    outcome:
      collection:
        data:
          - { _id: 1 }
  - description: commit is retried after commitTransaction UnknownTransactionCommitResult (NotMaster)
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 2 }
      data:
        failCommands: ["commitTransaction"]
        errorCode: 10107 # NotMaster
        closeConnection: false
    operations:
      - *withTransaction
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - { _id: 1 }
            ordered: true
            lsid: session0
            txnNumber: { $numberLong: "1" }
            startTransaction: true
            autocommit: false
            # omitted fields
            readConcern: ~
            writeConcern: ~
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber: { $numberLong: "1" }
            autocommit: false
            # omitted fields
            readConcern: ~
            startTransaction: ~
            writeConcern: ~
          command_name: commitTransaction
          database_name: admin
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber: { $numberLong: "1" }
            autocommit: false
            # commitTransaction applies w:majority on retries (SPEC-1185)
            writeConcern: { w: majority, wtimeout: 10000 }
            # omitted fields
            readConcern: ~
            startTransaction: ~
          command_name: commitTransaction
          database_name: admin
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber: { $numberLong: "1" }
            autocommit: false
            # commitTransaction applies w:majority on retries (SPEC-1185)
            writeConcern: { w: majority, wtimeout: 10000 }
            # omitted fields
            readConcern: ~
            startTransaction: ~
          command_name: commitTransaction
          database_name: admin
    outcome:
      collection:
        data:
          - { _id: 1 }

  - description: commit is not retried after MaxTimeMSExpired error
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands: ["commitTransaction"]
        errorCode: 50 # MaxTimeMSExpired
    operations:
      - name: withTransaction
        object: session0
        arguments:
          callback:
            operations:
              - name: insertOne
                object: collection
                arguments:
                  session: session0
                  document: { _id: 1 }
                result:
                  insertedId: 1
          options:
            maxCommitTimeMS: 60000
        result:
          errorCodeName: MaxTimeMSExpired
          errorLabelsContain: ["UnknownTransactionCommitResult"]
          errorLabelsOmit: ["TransientTransactionError"]
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - { _id: 1 }
            ordered: true
            lsid: session0
            txnNumber: { $numberLong: "1" }
            startTransaction: true
            autocommit: false
            # omitted fields
            readConcern: ~
            writeConcern: ~
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber: { $numberLong: "1" }
            autocommit: false
            maxTimeMS: 60000
            # omitted fields
            readConcern: ~
            startTransaction: ~
            writeConcern: ~
          command_name: commitTransaction
          database_name: admin
    outcome:
      collection:
        # In reality, the outcome of the commit is unknown but we fabricate
        # the error with failCommand.errorCode which does not apply the commit
        # operation.
        data: []
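A noteworthy detail asserted above: when the driver retries commitTransaction, it upgrades the write concern's w to majority while leaving other fields (j, wtimeout) intact. A hedged sketch of handling an ambiguous commit from application code with the Ruby driver (names illustrative):

```ruby
begin
  session.commit_transaction
rescue Mongo::Error => e
  # UnknownTransactionCommitResult means the commit may or may not have
  # been applied; retrying the commit is safe. On retry the driver itself
  # sends writeConcern: { w: 'majority', ... } as the events above show.
  if e.respond_to?(:label?) && e.label?('UnknownTransactionCommitResult')
    session.commit_transaction
  else
    raise
  end
end
```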
# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/transactions_api/commit-transienttransactionerror-4.2.yml

runOn:
  - minServerVersion: "4.1.6"
    topology: ["replicaset"]
  - minServerVersion: "4.1.8"
    topology: ["sharded"]

database_name: &database_name "withTransaction-tests"
collection_name: &collection_name "test"

data: []

# These tests use error codes where the TransientTransactionError label will be
# applied to the error response for commitTransaction. This will cause the
# entire transaction to be retried instead of commitTransaction.
#
# See: https://github.com/mongodb/mongo/blob/r4.1.6/src/mongo/db/handle_request_response.cpp

tests:
  - description: transaction is retried after commitTransaction TransientTransactionError (PreparedTransactionInProgress)
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 2 }
      data:
        failCommands: ["commitTransaction"]
        errorCode: 267 # PreparedTransactionInProgress
        closeConnection: false
    operations:
      - name: withTransaction
        object: session0
        arguments:
          callback:
            operations:
              - name: insertOne
                object: collection
                arguments:
                  session: session0
                  document: { _id: 1 }
                result:
                  insertedId: 1
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - { _id: 1 }
            ordered: true
            lsid: session0
            txnNumber: { $numberLong: "1" }
            startTransaction: true
            autocommit: false
            # omitted fields
            readConcern: ~
            writeConcern: ~
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber: { $numberLong: "1" }
            autocommit: false
            # omitted fields
            readConcern: ~
            startTransaction: ~
            writeConcern: ~
          command_name: commitTransaction
          database_name: admin
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - { _id: 1 }
            ordered: true
            lsid: session0
            # second transaction will be causally consistent with the first
            readConcern: { afterClusterTime: 42 }
            # txnNumber is incremented when retrying the transaction
            txnNumber: { $numberLong: "2" }
            startTransaction: true
            autocommit: false
            # omitted fields
            writeConcern: ~
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber: { $numberLong: "2" }
            autocommit: false
            # omitted fields
            readConcern: ~
            startTransaction: ~
            writeConcern: ~
          command_name: commitTransaction
          database_name: admin
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - { _id: 1 }
            ordered: true
            lsid: session0
            # third transaction will be causally consistent with the second
            readConcern: { afterClusterTime: 42 }
            # txnNumber is incremented when retrying the transaction
            txnNumber: { $numberLong: "3" }
            startTransaction: true
            autocommit: false
            # omitted fields
            writeConcern: ~
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber: { $numberLong: "3" }
            autocommit: false
            # omitted fields
            readConcern: ~
            startTransaction: ~
            writeConcern: ~
          command_name: commitTransaction
          database_name: admin
    outcome:
      collection:
        data:
          - { _id: 1 }
# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/transactions_api/commit-transienttransactionerror.yml

runOn:
  - minServerVersion: "4.0"
    topology: ["replicaset"]
  - minServerVersion: "4.1.8"
    topology: ["sharded"]

database_name: &database_name "withTransaction-tests"
collection_name: &collection_name "test"

data: []

# These tests use error codes where the TransientTransactionError label will be
# applied to the error response for commitTransaction. This will cause the
# entire transaction to be retried instead of commitTransaction.
#
# See: https://github.com/mongodb/mongo/blob/r4.1.6/src/mongo/db/handle_request_response.cpp

tests:
  - description: transaction is retried after commitTransaction TransientTransactionError (LockTimeout)
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 2 }
      data:
        failCommands: ["commitTransaction"]
        errorCode: 24 # LockTimeout
        closeConnection: false
    operations: &operations
      - name: withTransaction
        object: session0
        arguments:
          callback:
            operations:
              - name: insertOne
                object: collection
                arguments:
                  session: session0
                  document: { _id: 1 }
                result:
                  insertedId: 1
    expectations: &expectations
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - { _id: 1 }
            ordered: true
            lsid: session0
            txnNumber: { $numberLong: "1" }
            startTransaction: true
            autocommit: false
            # omitted fields
            readConcern: ~
            writeConcern: ~
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber: { $numberLong: "1" }
            autocommit: false
            # omitted fields
            readConcern: ~
            startTransaction: ~
            writeConcern: ~
          command_name: commitTransaction
          database_name: admin
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - { _id: 1 }
            ordered: true
            lsid: session0
            # second transaction will be causally consistent with the first
            readConcern: { afterClusterTime: 42 }
            # txnNumber is incremented when retrying the transaction
            txnNumber: { $numberLong: "2" }
            startTransaction: true
            autocommit: false
            # omitted fields
            writeConcern: ~
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber: { $numberLong: "2" }
            autocommit: false
            # omitted fields
            readConcern: ~
            startTransaction: ~
            writeConcern: ~
          command_name: commitTransaction
          database_name: admin
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - { _id: 1 }
            ordered: true
            lsid: session0
            # third transaction will be causally consistent with the second
            readConcern: { afterClusterTime: 42 }
            # txnNumber is incremented when retrying the transaction
            txnNumber: { $numberLong: "3" }
            startTransaction: true
            autocommit: false
            # omitted fields
            writeConcern: ~
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber: { $numberLong: "3" }
            autocommit: false
            # omitted fields
            readConcern: ~
            startTransaction: ~
            writeConcern: ~
          command_name: commitTransaction
          database_name: admin
    outcome: &outcome
      collection:
        data:
          - { _id: 1 }

  - description: transaction is retried after commitTransaction TransientTransactionError (WriteConflict)
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 2 }
      data:
        failCommands: ["commitTransaction"]
        errorCode: 112 # WriteConflict
        closeConnection: false
    operations: *operations
    expectations: *expectations
    outcome: *outcome

  - description: transaction is retried after commitTransaction TransientTransactionError (SnapshotUnavailable)
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 2 }
      data:
        failCommands: ["commitTransaction"]
        errorCode: 246 # SnapshotUnavailable
        closeConnection: false
    operations: *operations
    expectations: *expectations
    outcome: *outcome

  - description: transaction is retried after commitTransaction TransientTransactionError (NoSuchTransaction)
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 2 }
      data:
        failCommands: ["commitTransaction"]
        errorCode: 251 # NoSuchTransaction
        closeConnection: false
    operations: *operations
    expectations: *expectations
    outcome: *outcome
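Error labels, not error codes, are what the driver keys off: any server error response carrying the TransientTransactionError label causes the whole transaction to rerun. From application code with the Ruby driver, the labels are inspectable on the raised exception (a sketch; variable names illustrative):

```ruby
begin
  session.with_transaction do
    collection.insert_one({ _id: 1 }, session: session)
  end
rescue Mongo::Error => e
  # with_transaction has already retried transient failures (LockTimeout,
  # WriteConflict, SnapshotUnavailable, NoSuchTransaction, ...), so an
  # exception here is one it chose not to, or could not, retry.
  warn "error labels: #{e.labels.inspect}" if e.respond_to?(:labels)
  raise
end
```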
failCommands: ["commitTransaction"] errorCode: 251 # NoSuchTransaction closeConnection: false operations: *operations expectations: *expectations outcome: *outcome mongo-ruby-driver-2.21.3/spec/spec_tests/data/transactions_api/commit-writeconcernerror.yml000066400000000000000000000157541505113246500323510ustar00rootroot00000000000000runOn: - minServerVersion: "4.0" topology: ["replicaset"] - minServerVersion: "4.1.8" topology: ["sharded"] database_name: &database_name "withTransaction-tests" collection_name: &collection_name "test" data: [] tests: - description: commitTransaction is retried after WriteConcernFailed timeout error failPoint: configureFailPoint: failCommand mode: { times: 2 } data: failCommands: ["commitTransaction"] # Do not specify closeConnection: false, since that would conflict # with writeConcernError (see: SERVER-39292) writeConcernError: code: 64 codeName: WriteConcernFailed errmsg: "waiting for replication timed out" errInfo: { wtimeout: true } operations: - &operation name: withTransaction object: session0 arguments: callback: operations: - name: insertOne object: collection arguments: session: session0 document: { _id: 1 } result: insertedId: 1 expectations: &expectations_with_retries - command_started_event: command: insert: *collection_name documents: - { _id: 1 } ordered: true lsid: session0 txnNumber: { $numberLong: "1" } startTransaction: true autocommit: false # omitted fields readConcern: ~ writeConcern: ~ command_name: insert database_name: *database_name - command_started_event: command: commitTransaction: 1 lsid: session0 txnNumber: { $numberLong: "1" } autocommit: false # omitted fields readConcern: ~ startTransaction: ~ writeConcern: ~ command_name: commitTransaction database_name: admin - command_started_event: command: commitTransaction: 1 lsid: session0 txnNumber: { $numberLong: "1" } autocommit: false # commitTransaction applies w:majority on retries (SPEC-1185) writeConcern: { w: majority, wtimeout: 10000 } # omitted fields readConcern: ~ startTransaction: ~ command_name: commitTransaction database_name: admin - command_started_event: command: commitTransaction: 1 lsid: session0 txnNumber: { $numberLong: "1" } autocommit: false # commitTransaction applies w:majority on retries (SPEC-1185) writeConcern: { w: majority, wtimeout: 10000 } # omitted fields readConcern: ~ startTransaction: ~ command_name: commitTransaction database_name: admin # The write operation is still applied despite the write concern error outcome: &outcome collection: data: - { _id: 1 } - # This test configures the fail point to return an error with the # WriteConcernFailed code but without errInfo that would identify it as a # wtimeout error. This tests that drivers do not assume that all # WriteConcernFailed errors are due to a replication timeout. 
  # This test configures the fail point to return an error with the
  # WriteConcernFailed code but without errInfo that would identify it as a
  # wtimeout error. This tests that drivers do not assume that all
  # WriteConcernFailed errors are due to a replication timeout.
  - description: commitTransaction is retried after WriteConcernFailed non-timeout error
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 2 }
      data:
        failCommands: ["commitTransaction"]
        # Do not specify closeConnection: false, since that would conflict
        # with writeConcernError (see: SERVER-39292)
        writeConcernError:
          code: 64
          codeName: WriteConcernFailed
          errmsg: "multiple errors reported"
    operations:
      - *operation
    expectations: *expectations_with_retries
    outcome: *outcome

  - description: commitTransaction is not retried after UnknownReplWriteConcern error
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands: ["commitTransaction"]
        writeConcernError:
          code: 79
          codeName: UnknownReplWriteConcern
          errmsg: "No write concern mode named 'foo' found in replica set configuration"
    operations:
      - <<: *operation
        result:
          errorCodeName: UnknownReplWriteConcern
          errorLabelsOmit: ["TransientTransactionError", "UnknownTransactionCommitResult"]
    expectations: &expectations_without_retries
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - { _id: 1 }
            ordered: true
            lsid: session0
            txnNumber: { $numberLong: "1" }
            startTransaction: true
            autocommit: false
            # omitted fields
            readConcern: ~
            writeConcern: ~
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber: { $numberLong: "1" }
            autocommit: false
            # omitted fields
            readConcern: ~
            startTransaction: ~
            writeConcern: ~
          command_name: commitTransaction
          database_name: admin
    # failCommand with writeConcernError still applies the write operation(s)
    outcome: *outcome

  - description: commitTransaction is not retried after UnsatisfiableWriteConcern error
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands: ["commitTransaction"]
        writeConcernError:
          code: 100
          codeName: UnsatisfiableWriteConcern
          errmsg: "Not enough data-bearing nodes"
    operations:
      - <<: *operation
        result:
          errorCodeName: UnsatisfiableWriteConcern
          errorLabelsOmit: ["TransientTransactionError", "UnknownTransactionCommitResult"]
    expectations: *expectations_without_retries
    # failCommand with writeConcernError still applies the write operation(s)
    outcome: *outcome

  - description: commitTransaction is not retried after MaxTimeMSExpired error
    failPoint:
      configureFailPoint: failCommand
      mode: { times: 1 }
      data:
        failCommands: ["commitTransaction"]
        writeConcernError:
          code: 50
          codeName: MaxTimeMSExpired
          errmsg: "operation exceeded time limit"
    operations:
      - <<: *operation
        result:
          errorCodeName: MaxTimeMSExpired
          errorLabelsContain: ["UnknownTransactionCommitResult"]
          errorLabelsOmit: ["TransientTransactionError"]
    expectations: *expectations_without_retries
    # failCommand with writeConcernError still applies the write operation(s)
    outcome: *outcome
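All of these cases drive the server's failCommand fail point, which the test harness enables before each test. Setting it up by hand from the Ruby driver looks roughly like this (requires a server started with enableTestCommands; values mirror the first test above):

```ruby
# Sketch: arm the failCommand fail point against the admin database.
admin = client.use(:admin).database
admin.command(
  configureFailPoint: 'failCommand',
  mode: { times: 2 },
  data: {
    failCommands: ['commitTransaction'],
    writeConcernError: {
      code: 64,
      codeName: 'WriteConcernFailed',
      errmsg: 'waiting for replication timed out',
      errInfo: { wtimeout: true },
    },
  }
)
# ... run the transaction under test, then disarm:
admin.command(configureFailPoint: 'failCommand', mode: 'off')
```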
# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/transactions_api/commit.yml

runOn:
  - minServerVersion: "4.0"
    topology: ["replicaset"]
  - minServerVersion: "4.1.8"
    topology: ["sharded"]

database_name: &database_name "withTransaction-tests"
collection_name: &collection_name "test"

data: []

tests:
  - description: withTransaction commits after callback returns
    useMultipleMongoses: true
    operations:
      - name: withTransaction
        object: session0
        arguments:
          callback:
            operations:
              - name: insertOne
                object: collection
                arguments:
                  session: session0
                  document: { _id: 1 }
                result:
                  insertedId: 1
              - name: insertOne
                object: collection
                arguments:
                  session: session0
                  document: { _id: 2 }
                result:
                  insertedId: 2
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - { _id: 1 }
            ordered: true
            lsid: session0
            txnNumber: { $numberLong: "1" }
            startTransaction: true
            autocommit: false
            # omitted fields
            readConcern: ~
            writeConcern: ~
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - { _id: 2 }
            ordered: true
            lsid: session0
            txnNumber: { $numberLong: "1" }
            autocommit: false
            # omitted fields
            readConcern: ~
            startTransaction: ~
            writeConcern: ~
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber: { $numberLong: "1" }
            autocommit: false
            # omitted fields
            readConcern: ~
            startTransaction: ~
            writeConcern: ~
          command_name: commitTransaction
          database_name: admin
    outcome:
      collection:
        data:
          - { _id: 1 }
          - { _id: 2 }
  -
    # In this scenario, the callback commits the transaction originally started
    # by withTransaction and starts a second transaction before returning. Since
    # withTransaction only examines the session's state, it should commit that
    # second transaction after the callback returns.
    description: withTransaction commits after callback returns (second transaction)
    useMultipleMongoses: true
    operations:
      - name: withTransaction
        object: session0
        arguments:
          callback:
            operations:
              - name: insertOne
                object: collection
                arguments:
                  session: session0
                  document: { _id: 1 }
                result:
                  insertedId: 1
              - name: commitTransaction
                object: session0
              - name: startTransaction
                object: session0
              - name: insertOne
                object: collection
                arguments:
                  session: session0
                  document: { _id: 2 }
                result:
                  insertedId: 2
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - { _id: 1 }
            ordered: true
            lsid: session0
            txnNumber: { $numberLong: "1" }
            startTransaction: true
            autocommit: false
            # omitted fields
            readConcern: ~
            writeConcern: ~
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber: { $numberLong: "1" }
            autocommit: false
            # omitted fields
            readConcern: ~
            startTransaction: ~
            writeConcern: ~
          command_name: commitTransaction
          database_name: admin
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - { _id: 2 }
            ordered: true
            lsid: session0
            # second transaction will be causally consistent with the first
            readConcern: { afterClusterTime: 42 }
            # txnNumber is incremented for the second transaction
            txnNumber: { $numberLong: "2" }
            startTransaction: true
            autocommit: false
            # omitted fields
            writeConcern: ~
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber: { $numberLong: "2" }
            autocommit: false
            # omitted fields
            readConcern: ~
            startTransaction: ~
            writeConcern: ~
          command_name: commitTransaction
          database_name: admin
    outcome:
      collection:
        data:
          - { _id: 1 }
          - { _id: 2 }
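The "second transaction" scenario is worth restating in driver terms: withTransaction inspects only the session's state when the callback returns, so a transaction the callback itself started is the one that gets committed. A sketch with the Ruby driver:

```ruby
session.with_transaction do
  collection.insert_one({ _id: 1 }, session: session)
  session.commit_transaction
  # Starting a fresh transaction inside the callback is legal; since the
  # session has an active transaction when the block returns,
  # with_transaction commits this second one (txnNumber 2 above).
  session.start_transaction
  collection.insert_one({ _id: 2 }, session: session)
end
```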
# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/transactions_api/transaction-options.yml

runOn:
  - minServerVersion: "4.0"
    topology: ["replicaset"]
  - minServerVersion: "4.1.8"
    topology: ["sharded"]

database_name: &database_name "withTransaction-tests"
collection_name: &collection_name "test"

data: []

tests:
  - description: withTransaction and no transaction options set
    useMultipleMongoses: true
    operations: &operations
      - name: withTransaction
        object: session0
        arguments:
          callback:
            operations:
              - name: insertOne
                object: collection
                arguments:
                  session: session0
                  document: { _id: 1 }
                result:
                  insertedId: 1
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - { _id: 1 }
            ordered: true
            lsid: session0
            txnNumber: { $numberLong: "1" }
            startTransaction: true
            autocommit: false
            # omitted fields
            readConcern: ~
            writeConcern: ~
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber: { $numberLong: "1" }
            autocommit: false
            # omitted fields
            readConcern: ~
            startTransaction: ~
            writeConcern: ~
          command_name: commitTransaction
          database_name: admin
    outcome: &outcome
      collection:
        data:
          - { _id: 1 }

  - description: withTransaction inherits transaction options from client
    useMultipleMongoses: true
    clientOptions:
      readConcernLevel: local
      w: 1
    operations: *operations
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - { _id: 1 }
            ordered: true
            lsid: session0
            txnNumber: { $numberLong: "1" }
            startTransaction: true
            autocommit: false
            readConcern: { level: local }
            # omitted fields
            writeConcern: ~
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber: { $numberLong: "1" }
            autocommit: false
            writeConcern: { w: 1 }
            # omitted fields
            readConcern: ~
            startTransaction: ~
          command_name: commitTransaction
          database_name: admin
    outcome: *outcome

  - description: withTransaction inherits transaction options from defaultTransactionOptions
    useMultipleMongoses: true
    sessionOptions:
      session0:
        defaultTransactionOptions:
          readConcern: { level: majority }
          writeConcern: { w: 1 }
    operations: *operations
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - { _id: 1 }
            ordered: true
            lsid: session0
            txnNumber: { $numberLong: "1" }
            startTransaction: true
            autocommit: false
            readConcern: { level: majority }
            # omitted fields
            writeConcern: ~
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber: { $numberLong: "1" }
            autocommit: false
            writeConcern: { w: 1 }
            # omitted fields
            readConcern: ~
            startTransaction: ~
          command_name: commitTransaction
          database_name: admin
    outcome: *outcome

  - description: withTransaction explicit transaction options
    useMultipleMongoses: true
    operations: &operations_explicit_transactionOptions
      - name: withTransaction
        object: session0
        arguments:
          callback:
            operations:
              - name: insertOne
                object: collection
                arguments:
                  session: session0
                  document: { _id: 1 }
                result:
                  insertedId: 1
          options:
            readConcern: { level: majority }
            writeConcern: { w: 1 }
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - { _id: 1 }
            ordered: true
            lsid: session0
            txnNumber: { $numberLong: "1" }
            startTransaction: true
            autocommit: false
            readConcern: { level: majority }
            # omitted fields
            writeConcern: ~
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber: { $numberLong: "1" }
            autocommit: false
            writeConcern: { w: 1 }
            # omitted fields
            readConcern: ~
            startTransaction: ~
          command_name: commitTransaction
          database_name: admin
    outcome: *outcome

  - description: withTransaction explicit transaction options override defaultTransactionOptions
    useMultipleMongoses: true
    sessionOptions:
      session0:
        defaultTransactionOptions:
          readConcern: { level: snapshot }
          writeConcern: { w: majority }
    operations: *operations_explicit_transactionOptions
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - { _id: 1 }
            ordered: true
            lsid: session0
            txnNumber: { $numberLong: "1" }
            startTransaction: true
            autocommit: false
            readConcern: { level: majority }
            # omitted fields
            writeConcern: ~
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber: { $numberLong: "1" }
            autocommit: false
            writeConcern: { w: 1 }
            # omitted fields
            readConcern: ~
            startTransaction: ~
          command_name: commitTransaction
          database_name: admin
    outcome: *outcome

  - description: withTransaction explicit transaction options override client options
    useMultipleMongoses: true
    clientOptions:
      readConcernLevel: local
      w: majority
    operations: *operations_explicit_transactionOptions
    expectations:
      - command_started_event:
          command:
            insert: *collection_name
            documents:
              - { _id: 1 }
            ordered: true
            lsid: session0
            txnNumber: { $numberLong: "1" }
            startTransaction: true
            autocommit: false
            readConcern: { level: majority }
            # omitted fields
            writeConcern: ~
          command_name: insert
          database_name: *database_name
      - command_started_event:
          command:
            commitTransaction: 1
            lsid: session0
            txnNumber: { $numberLong: "1" }
            autocommit: false
            writeConcern: { w: 1 }
            # omitted fields
            readConcern: ~
            startTransaction: ~
          command_name: commitTransaction
          database_name: admin
    outcome: *outcome
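The precedence order these tests pin down (explicit withTransaction options beat the session's defaultTransactionOptions, which beat client-level settings) maps directly onto the Ruby driver's API. A sketch, with option values illustrative:

```ruby
session = client.start_session(
  default_transaction_options: {
    read_concern: { level: :snapshot },
    write_concern: { w: :majority },
  }
)
# Options passed here override the session defaults above, which in turn
# override anything configured on the client.
session.with_transaction(
  read_concern: { level: :majority },
  write_concern: { w: 1 }
) do
  collection.insert_one({ _id: 1 }, session: session)
end
```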
# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/transactions_unified/do-not-retry-read-in-transaction.yml

description: "do not retry read in a transaction"

schemaVersion: "1.4"

runOnRequirements:
  - minServerVersion: "4.0.0"
    topologies: [ replicaset ]
  - minServerVersion: "4.2.0"
    topologies: [ sharded, load-balanced ]

createEntities:
  - client:
      id: &client0 client0
      useMultipleMongoses: false
      observeEvents: [commandStartedEvent]
      uriOptions: { retryReads: true }
  - database:
      id: &database0 database0
      client: *client0
      databaseName: &databaseName retryable-read-in-transaction-test
  - collection:
      id: &collection0 collection0
      database: *database0
      collectionName: &collectionName coll
  - session:
      id: &session0 session0
      client: *client0

tests:
  - description: "find does not retry in a transaction"
    operations:
      - name: startTransaction
        object: *session0
      - name: failPoint # fail the following find command
        object: testRunner
        arguments:
          client: *client0
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: [find]
              closeConnection: true
      - name: find
        object: *collection0
        arguments:
          filter: {}
          session: *session0
        expectError:
          isError: true
          errorLabelsContain: ["TransientTransactionError"]
    expectEvents:
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                find: *collectionName
                filter: {}
                startTransaction: true
              commandName: find
              databaseName: *databaseName
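The rule under test: retryable reads never apply inside a transaction; a network error on an in-transaction read surfaces with the TransientTransactionError label instead, leaving the retry decision to the transaction layer. A Ruby sketch of observing this from application code (host name illustrative):

```ruby
client = Mongo::Client.new(
  ['localhost:27017'],
  retry_reads: true, # ignored for reads executed inside a transaction
  database: 'retryable-read-in-transaction-test'
)
collection = client[:coll]
session = client.start_session
session.start_transaction
begin
  collection.find({}, session: session).to_a
rescue Mongo::Error => e
  # Expect TransientTransactionError rather than a transparent retry.
  session.abort_transaction
  raise unless e.respond_to?(:label?) && e.label?('TransientTransactionError')
end
```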
# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/transactions_unified/mongos-unpin.yml

description: mongos-unpin

schemaVersion: '1.4'

runOnRequirements:
  - minServerVersion: '4.2'
    topologies: [ sharded ]

createEntities:
  - client:
      id: &client0 client0
      useMultipleMongoses: true
  - database:
      id: &database0 database0
      client: *client0
      databaseName: &database0Name mongos-unpin-db
  - collection:
      id: &collection0 collection0
      database: *database0
      collectionName: &collection0Name test
  - session:
      id: &session0 session0
      client: *client0

initialData:
  - collectionName: *collection0Name
    databaseName: *database0Name
    documents: []

_yamlAnchors:
  anchors:
    # LockTimeout will cause the server to add a TransientTransactionError
    # label. It is not retryable.
    - &lockTimeoutErrorCode 24

tests:
  - description: unpin after TransientTransactionError error on commit
    runOnRequirements:
      # serverless proxy doesn't append error labels to errors in transactions
      # caused by failpoints (CLOUDP-88216)
      - serverless: "forbid"
    operations:
      - &startTransaction
        name: startTransaction
        object: *session0
      - &insertOne
        name: insertOne
        object: *collection0
        arguments:
          document: { x: 1 }
          session: *session0
      - name: targetedFailPoint
        object: testRunner
        arguments:
          session: *session0
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: [ commitTransaction ]
              errorCode: *lockTimeoutErrorCode
      - name: commitTransaction
        object: *session0
        expectError:
          # LockTimeout is not retryable, so the commit fails.
          errorCode: *lockTimeoutErrorCode
          errorLabelsContain: [ TransientTransactionError ]
          errorLabelsOmit: [ UnknownTransactionCommitResult ]
      - &assertNoPinnedServer
        name: assertSessionUnpinned
        object: testRunner
        arguments:
          session: *session0
      # Clean up the potentially open server transaction by starting and
      # aborting a new transaction on the same session.
      - *startTransaction
      - *insertOne
      - &abortTransaction
        name: abortTransaction
        object: *session0

  - description: unpin on successful abort
    operations:
      - *startTransaction
      - *insertOne
      - *abortTransaction
      - *assertNoPinnedServer

  - description: unpin after non-transient error on abort
    runOnRequirements:
      # serverless proxy doesn't append error labels to errors in transactions
      # caused by failpoints (CLOUDP-88216)
      - serverless: "forbid"
    operations:
      - *startTransaction
      - *insertOne
      - name: targetedFailPoint
        object: testRunner
        arguments:
          session: *session0
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: [ abortTransaction ]
              errorCode: *lockTimeoutErrorCode
      - *abortTransaction
      - *assertNoPinnedServer
      # Clean up the potentially open server transaction by starting and
      # aborting a new transaction on the same session.
      - *startTransaction
      - *insertOne
      - *abortTransaction

  - description: unpin after TransientTransactionError error on abort
    operations:
      - *startTransaction
      - *insertOne
      - name: targetedFailPoint
        object: testRunner
        arguments:
          session: *session0
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
              failCommands: [ abortTransaction ]
              errorCode: 91 # ShutdownInProgress
      - *abortTransaction
      - *assertNoPinnedServer
      # Clean up the potentially open server transaction by starting and
      # aborting a new transaction on the same session.
      - *startTransaction
      - *insertOne
      - *abortTransaction

  - description: unpin when a new transaction is started
    operations:
      - *startTransaction
      - *insertOne
      - name: commitTransaction
        object: *session0
      - &assertPinnedServer
        name: assertSessionPinned
        object: testRunner
        arguments:
          session: *session0
      - *startTransaction
      - *assertNoPinnedServer

  - description: unpin when a non-transaction write operation uses a session
    operations:
      - *startTransaction
      - *insertOne
      - name: commitTransaction
        object: *session0
      - *assertPinnedServer
      - *insertOne
      - *assertNoPinnedServer

  - description: unpin when a non-transaction read operation uses a session
    operations:
      - *startTransaction
      - *insertOne
      - name: commitTransaction
        object: *session0
      - *assertPinnedServer
      - name: find
        object: *collection0
        arguments:
          filter: { x: 1 }
          session: *session0
      - *assertNoPinnedServer
# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/transactions_unified/retryable-abort-handshake.yml

description: "retryable abortTransaction on handshake errors"

schemaVersion: "1.4"

runOnRequirements:
  - minServerVersion: "4.2"
    topologies: [replicaset, sharded, load-balanced]
    serverless: "forbid"
    auth: true

createEntities:
  - client:
      id: &client0 client0
      useMultipleMongoses: false
      observeEvents: [commandStartedEvent, connectionCheckOutStartedEvent]
  - database:
      id: &database0 database0
      client: *client0
      databaseName: &databaseName retryable-handshake-tests
  - collection:
      id: &collection0 collection0
      database: *database0
      collectionName: &collectionName coll
  - session:
      # This session will be used to execute the transaction
      id: &session0 session0
      client: *client0
  - session:
      # This session will be used to create the failPoint, and empty the pool
      id: &session1 session1
      client: *client0

initialData:
  - collectionName: *collectionName
    databaseName: *databaseName
    documents:
      - { _id: 1, x: 11 }

tests:
  - description: "AbortTransaction succeeds after handshake network error"
    skipReason: "DRIVERS-2032: Pinned servers need to be checked if they are still selectable"
    operations:
      - name: startTransaction
        object: *session0
      - name: insertOne
        object: *collection0
        arguments:
          session: *session0
          document: { _id: 2, x: 22 }
      # The following failPoint and ping utilize session1 so that
      # the transaction won't be failed by the intentional erroring of ping
      # and it will have an empty pool when it goes to run abortTransaction
      - name: failPoint # fail the next connection establishment
        object: testRunner
        arguments:
          client: *client0
          session: *session1
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data:
              # use saslContinue here to avoid SDAM errors
              # this failPoint itself will create a usable connection in the connection pool
              # so we run a ping (with closeConnection: true) in order to discard the connection
              # before testing that abortTransaction will fail a handshake but will get retried
              failCommands: [saslContinue, ping]
              closeConnection: true
      - name: runCommand
        object: *database0
        arguments:
          commandName: ping
          command: { ping: 1 }
          session: *session1
        expectError:
          isError: true
      - name: abortTransaction
        object: *session0
    expectEvents:
      - client: *client0
        eventType: cmap
        events:
          - { connectionCheckOutStartedEvent: {} } # startTransaction
          - { connectionCheckOutStartedEvent: {} } # insertOne
          - { connectionCheckOutStartedEvent: {} } # failPoint
          - { connectionCheckOutStartedEvent: {} } # abortTransaction
          - { connectionCheckOutStartedEvent: {} } # abortTransaction retry
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                insert: *collectionName
                documents: [{ _id: 2, x: 22 }]
                startTransaction: true
              commandName: insert
              databaseName: *databaseName
          - commandStartedEvent:
              command:
                ping: 1
              databaseName: *databaseName
          - commandStartedEvent:
              command:
                abortTransaction: 1
                lsid:
                  $$sessionLsid: *session0
              commandName: abortTransaction
              databaseName: admin
    outcome:
      - collectionName: *collectionName
        databaseName: *databaseName
        documents:
          - { _id: 1, x: 11 }
# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/transactions_unified/retryable-commit-handshake.yml

description: "retryable commitTransaction on handshake errors"

schemaVersion: "1.4"

runOnRequirements:
  - minServerVersion: "4.2"
    topologies: [replicaset, sharded, load-balanced]
    serverless: "forbid"
    auth: true

createEntities:
  - client:
      id: &client0 client0
      useMultipleMongoses: false
      observeEvents: [commandStartedEvent, connectionCheckOutStartedEvent]
      uriOptions: { retryWrites: false } # commitTransaction is retryable regardless of this option being set
  - database:
      id: &database0 database0
      client: *client0
      databaseName: &databaseName retryable-handshake-tests
  - collection:
      id: &collection0 collection0
      database: *database0
      collectionName: &collectionName coll
  - session:
      id: &session0 session0
      client: *client0
  - session:
      id: &session1 session1
      client: *client0

initialData:
  - collectionName: *collectionName
    databaseName: *databaseName
    documents:
      - { _id: 1, x: 11 }

tests:
  - description: "CommitTransaction succeeds after handshake network error"
    skipReason: "DRIVERS-2032: Pinned servers need to be checked if they are still selectable"
    operations:
      - name: startTransaction
        object: *session0
      - name: insertOne
        object: *collection0
        arguments:
          session: *session0
          document: { _id: 2, x: 22 }
      # The following failPoint and ping utilize session1 so that
      # the transaction won't be failed by the intentional erroring of ping
      # and it will have an empty pool when it goes to run commitTransaction
      - name: failPoint # fail the next connection establishment
        object: testRunner
        arguments:
          client: *client0
          session: *session1
          failPoint:
            configureFailPoint: failCommand
            mode: { times: 2 }
            data:
              # use saslContinue here to avoid SDAM errors
              # this failPoint itself will create a usable connection in the connection pool
              # so we run a ping (that also fails) in order to discard the connection
              # before testing that commitTransaction gets retried
              failCommands: [saslContinue, ping]
              closeConnection: true
      - name: runCommand
        object: *database0
        arguments:
          commandName: ping
          command: { ping: 1 }
          session: *session1
        expectError:
          isError: true
      - name: commitTransaction
        object: *session0
    expectEvents:
      - client: *client0
        eventType: cmap
        events:
          - { connectionCheckOutStartedEvent: {} } # startTransaction
          - { connectionCheckOutStartedEvent: {} } # insertOne
          - { connectionCheckOutStartedEvent: {} } # failPoint
          - { connectionCheckOutStartedEvent: {} } # commitTransaction
          - { connectionCheckOutStartedEvent: {} } # commitTransaction retry
      - client: *client0
        events:
          - commandStartedEvent:
              command:
                insert: *collectionName
                documents: [{ _id: 2, x: 22 }]
                startTransaction: true
              commandName: insert
              databaseName: *databaseName
          - commandStartedEvent:
              command:
                ping: 1
              databaseName: *databaseName
          - commandStartedEvent:
              command:
                commitTransaction: 1
                lsid:
                  $$sessionLsid: *session0
              commandName: commitTransaction
              databaseName: admin
    outcome:
      - collectionName: *collectionName
        databaseName: *databaseName
        documents:
          - { _id: 1, x: 11 }
          - { _id: 2, x: 22 } # The write was still applied
# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/unified/valid-fail/entity-findCursor-malformed.yml

# This test is split out into a separate file to accommodate drivers that validate operation structure while decoding
# from JSON/YML. Such drivers fail to decode any files containing invalid operations. Combining this test in a file
# with other entity-findCursor valid-fail tests, which test failures that occur during test execution, would prevent
# such drivers from decoding the file and running any of the tests.
description: entity-findCursor-malformed

schemaVersion: '1.3'

createEntities:
  - client:
      id: &client0 client0
  - database:
      id: &database0 database0
      client: *client0
      databaseName: &database0Name database0Name
  - collection:
      id: &collection0 collection0
      database: *database0
      collectionName: &collection0Name coll0

initialData:
  - databaseName: *database0Name
    collectionName: *collection0Name
    documents: []

tests:
  - description: createFindCursor fails if filter is not specified
    operations:
      - name: createFindCursor
        object: *collection0
        saveResultAsEntity: &cursor0 cursor0

# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/unified/valid-fail/entity-findCursor.yml

description: entity-findCursor

schemaVersion: '1.3'

createEntities:
  - client:
      id: &client0 client0
  - database:
      id: &database0 database0
      client: *client0
      databaseName: &database0Name database0Name
  - collection:
      id: &collection0 collection0
      database: *database0
      collectionName: &collection0Name coll0

initialData:
  - databaseName: *database0Name
    collectionName: *collection0Name
    documents: []

tests:
  - description: iterateUntilDocumentOrError fails if it references a nonexistent entity
    operations:
      - name: iterateUntilDocumentOrError
        object: cursor0
  - description: close fails if it references a nonexistent entity
    operations:
      - name: close
        object: cursor0

# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/unified/valid-fail/ignoreResultAndError-malformed.yml

# This test is split out into a separate file to accommodate drivers that validate operation structure while decoding
# from JSON/YML. Such drivers fail to decode any files containing invalid operations. Combining this test in a file
# with other ignoreResultAndError valid-fail tests, which test failures that occur during test execution, would prevent
# such drivers from decoding the file and running any of the tests.
description: ignoreResultAndError-malformed

schemaVersion: '1.3'

createEntities:
  - client:
      id: &client0 client0
      useMultipleMongoses: true
  - database:
      id: &database0 database0
      client: *client0
      databaseName: &database0Name database0Name
  - collection:
      id: &collection0 collection0
      database: *database0
      collectionName: &collection0Name coll0

initialData:
  - collectionName: *collection0Name
    databaseName: *database0Name
    documents: []

tests:
  - description: malformed operation fails if ignoreResultAndError is true
    operations:
      - name: insertOne
        object: *collection0
        arguments:
          foo: bar
        ignoreResultAndError: true
# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/unified/valid-fail/ignoreResultAndError.yml

description: ignoreResultAndError

schemaVersion: '1.3'

createEntities:
  - client:
      id: &client0 client0
      useMultipleMongoses: true
  - database:
      id: &database0 database0
      client: *client0
      databaseName: &database0Name database0Name
  - collection:
      id: &collection0 collection0
      database: *database0
      collectionName: &collection0Name coll0

initialData:
  - collectionName: *collection0Name
    databaseName: *database0Name
    documents: []

tests:
  - description: operation errors are not ignored if ignoreResultAndError is false
    operations:
      - name: insertOne
        object: *collection0
        arguments:
          document: &insertDocument { _id: 1 }
      - name: insertOne
        object: *collection0
        arguments:
          # Insert the same document to force a DuplicateKey error.
          document: *insertDocument
        ignoreResultAndError: false

# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/unified/valid-fail/operation-failure.yml

description: "operation-failure"

schemaVersion: "1.0"

createEntities:
  - client:
      id: &client0 client0
  - database:
      id: &database0 database0
      client: *client0
      databaseName: operation-failure
  - collection:
      id: &collection0 collection0
      database: *database0
      collectionName: coll0

tests:
  - description: "Unsupported command"
    operations:
      - name: runCommand
        object: *database0
        arguments:
          commandName: unsupportedCommand
          command: { unsupportedCommand: 1 }

  - description: "Unsupported query operator"
    operations:
      - name: find
        object: *collection0
        arguments:
          filter: { $unsupportedQueryOperator: 1 }

# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/unified/valid-fail/operation-unsupported.yml

description: "operation-unsupported"

schemaVersion: "1.0"

createEntities:
  - client:
      id: &client0 client0

tests:
  - description: "Unsupported operation"
    operations:
      - name: unsupportedOperation
        object: *client0

# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/unified/valid-pass/assertNumberConnectionsCheckedOut.yml

description: assertNumberConnectionsCheckedOut

schemaVersion: '1.3'

createEntities:
  - client:
      id: &client0 client0
      useMultipleMongoses: true

tests:
  - description: basic assertion succeeds
    operations:
      - name: assertNumberConnectionsCheckedOut
        object: testRunner
        arguments:
          client: *client0
          connections: 0

# File: mongo-ruby-driver-2.21.3/spec/spec_tests/data/unified/valid-pass/collectionData-createOptions.yml

description: collectionData-createOptions

schemaVersion: "1.9"

runOnRequirements:
  - minServerVersion: "3.6"
    # Capped collections cannot be created on serverless instances.
    serverless: forbid

createEntities:
  - client:
      id: &client0 client0
  - database:
      id: &database0 database0
      client: *client0
      databaseName: &database0Name database0
  - collection:
      id: &collection0 collection0
      database: *database0
      collectionName: &collection0Name coll0

initialData:
  - collectionName: *collection0Name
    databaseName: *database0Name
    createOptions:
      capped: true
      # With MMAPv1, the size field cannot be less than 4096.
      size: &cappedSize 4096
    documents:
      - { _id: 1, x: 11 }

tests:
  - description: collection is created with the correct options
    operations:
      - object: *collection0
        name: aggregate
        arguments:
          pipeline:
            - $collStats: { storageStats: {} }
            - $project: { capped: '$storageStats.capped', maxSize: '$storageStats.maxSize' }
        expectResult:
          - { capped: true, maxSize: *cappedSize }
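The createOptions block above requires the harness to create the collection explicitly rather than implicitly on first insert. Doing the equivalent by hand with the Ruby driver, including the $collStats verification the test performs (names taken from the test, host illustrative):

```ruby
db = Mongo::Client.new(['localhost:27017'], database: 'database0').database
# Collection options given here are applied by the explicit create call;
# a capped collection must be sized up front.
db[:coll0, capped: true, size: 4096].create
stats = db[:coll0].aggregate([
  { '$collStats' => { storageStats: {} } },
  { '$project' => { capped: '$storageStats.capped',
                    maxSize: '$storageStats.maxSize' } },
]).first
# stats should report capped: true, maxSize: 4096
```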
runOnRequirements:
  - minServerVersion: "3.6"
    # Capped collections cannot be created on serverless instances.
    serverless: forbid
createEntities:
  - client:
      id: &client0 client0
  - database:
      id: &database0 database0
      client: *client0
      databaseName: &database0Name database0
  - collection:
      id: &collection0 collection0
      database: *database0
      collectionName: &collection0Name coll0
initialData:
  - collectionName: *collection0Name
    databaseName: *database0Name
    createOptions:
      capped: true
      # With MMAPv1, the size field cannot be less than 4096.
      size: &cappedSize 4096
    documents:
      - { _id: 1, x: 11 }
tests:
  - description: collection is created with the correct options
    operations:
      - object: *collection0
        name: aggregate
        arguments:
          pipeline:
            - $collStats: { storageStats: {} }
            - $project: { capped: '$storageStats.capped', maxSize: '$storageStats.maxSize' }
        expectResult:
          - { capped: true, maxSize: *cappedSize }

# ---- spec/spec_tests/data/unified/valid-pass/entity-client-cmap-events.yml ----

description: entity-client-cmap-events
schemaVersion: '1.3'
createEntities:
  - client:
      id: &client0 client0
      useMultipleMongoses: true
      observeEvents:
        - connectionReadyEvent
        - connectionCheckedOutEvent
        - connectionCheckedInEvent
  - database:
      id: &database0 database0
      client: *client0
      databaseName: &database0Name database0Name
  - collection:
      id: &collection0 collection0
      database: *database0
      collectionName: &collection0Name coll0
initialData:
  - collectionName: *collection0Name
    databaseName: *database0Name
    documents: []
tests:
  - description: events are captured during an operation
    operations:
      - name: insertOne
        object: *collection0
        arguments:
          document: { x: 1 }
    expectEvents:
      - client: *client0
        eventType: cmap
        events:
          - connectionReadyEvent: {}
          - connectionCheckedOutEvent: {}
          - connectionCheckedInEvent: {}

# ---- spec/spec_tests/data/unified/valid-pass/entity-client-storeEventsAsEntities.yml ----

description: "entity-client-storeEventsAsEntities"
schemaVersion: "1.2"
createEntities:
  - client:
      id: &client0 client0
      storeEventsAsEntities:
        - id: client0_events
          events: ["CommandStartedEvent", "CommandSucceededEvent", "CommandFailedEvent"]
  - database:
      id: &database0 database0
      client: *client0
      databaseName: &database0Name test
  - collection:
      id: &collection0 collection0
      database: *database0
      collectionName: &collection0Name coll0
initialData:
  - collectionName: *collection0Name
    databaseName: *database0Name
    documents:
      - { _id: 1, x: 11 }
tests:
  # Note: this test does not assert that the events are actually saved to the
  # entity since there is presently no assertion syntax to do so. We are only
  # asserting that the test executes successfully.
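  # storeEventsAsEntities (declared on client0 above) directs the test runner
  # to record the listed command events into the named entity (client0_events)
  # instead of asserting on them inline via expectEvents.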
- description: "storeEventsAsEntities captures events" operations: - name: find object: *collection0 arguments: filter: {} expectResult: - { _id: 1, x: 11 } mongo-ruby-driver-2.21.3/spec/spec_tests/data/unified/valid-pass/expectedError-errorResponse.yml000066400000000000000000000021011505113246500330730ustar00rootroot00000000000000description: "expectedError-errorResponse" schemaVersion: "1.12" createEntities: - client: id: &client0 client0 - database: id: &database0 database0 client: *client0 databaseName: &database0Name test - collection: id: &collection0 collection0 database: *database0 collectionName: &collection0Name coll0 tests: - description: "Unsupported command" operations: - name: runCommand object: *database0 arguments: commandName: unsupportedCommand command: { unsupportedCommand: 1 } expectError: # Avoid asserting the exact error since it may vary by server version errorResponse: errmsg: { $$type: "string" } - description: "Unsupported query operator" operations: - name: find object: *collection0 arguments: filter: { $unsupportedQueryOperator: 1 } expectError: # Avoid asserting the exact error since it may vary by server version errorResponse: errmsg: { $$type: "string" } expectedEventsForClient-eventType.yml000066400000000000000000000032441505113246500341210ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/unified/valid-passdescription: expectedEventsForClient-eventType schemaVersion: '1.3' createEntities: - client: id: &client0 client0 useMultipleMongoses: true observeEvents: - commandStartedEvent - connectionReadyEvent - database: id: &database0 database0 client: *client0 databaseName: &database0Name database0Name - collection: id: &collection0 collection0 database: *database0 collectionName: &collection0Name coll0 initialData: - collectionName: *collection0Name databaseName: *database0Name documents: [] tests: - description: eventType can be set to command and cmap operations: - name: insertOne object: *collection0 arguments: document: &insertDocument { _id: 1 } expectEvents: - client: *client0 eventType: command events: - commandStartedEvent: command: insert: *collection0Name documents: - *insertDocument commandName: insert - client: *client0 eventType: cmap events: - connectionReadyEvent: {} - description: eventType defaults to command if unset operations: - name: insertOne object: *collection0 arguments: document: *insertDocument expectEvents: - client: *client0 events: - commandStartedEvent: command: insert: *collection0Name documents: - *insertDocument commandName: insert - client: *client0 eventType: cmap events: - connectionReadyEvent: {} expectedEventsForClient-ignoreExtraEvents.yml000066400000000000000000000040341505113246500356100ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/unified/valid-passdescription: expectedEventsForClient-ignoreExtraEvents schemaVersion: '1.7' createEntities: - client: id: &client0 client0 useMultipleMongoses: true observeEvents: - commandStartedEvent - database: id: &database0 database0 client: *client0 databaseName: &database0Name database0Name - collection: id: &collection0 collection0 database: *database0 collectionName: &collection0Name coll0 initialData: - collectionName: *collection0Name databaseName: *database0Name documents: [] tests: - description: ignoreExtraEvents can be set to false operations: - name: insertOne object: *collection0 arguments: document: &insertDocument1 { _id: 1 } expectEvents: - client: *client0 ignoreExtraEvents: false events: - commandStartedEvent: 
command: insert: *collection0Name documents: - *insertDocument1 commandName: insert - description: ignoreExtraEvents can be set to true operations: - name: insertOne object: *collection0 arguments: document: &insertDocument2 { _id: 2 } - name: insertOne object: *collection0 arguments: document: { _id: 3 } expectEvents: - client: *client0 ignoreExtraEvents: true events: - commandStartedEvent: command: insert: *collection0Name documents: - *insertDocument2 commandName: insert - description: ignoreExtraEvents defaults to false if unset operations: - name: insertOne object: *collection0 arguments: document: &insertDocument4 { _id: 4 } expectEvents: - client: *client0 events: - commandStartedEvent: command: insert: *collection0Name documents: - *insertDocument4 commandName: insertmongo-ruby-driver-2.21.3/spec/spec_tests/data/unified/valid-pass/ignoreResultAndError.yml000066400000000000000000000015221505113246500315370ustar00rootroot00000000000000description: ignoreResultAndError schemaVersion: '1.3' createEntities: - client: id: &client0 client0 useMultipleMongoses: true - database: id: &database0 database0 client: *client0 databaseName: &database0Name database0Name - collection: id: &collection0 collection0 database: *database0 collectionName: &collection0Name coll0 initialData: - collectionName: *collection0Name databaseName: *database0Name documents: [] tests: - description: operation errors are ignored if ignoreResultAndError is true operations: - name: insertOne object: *collection0 arguments: document: &insertDocument { _id: 1 } - name: insertOne object: *collection0 arguments: document: *insertDocument ignoreResultAndError: true mongo-ruby-driver-2.21.3/spec/spec_tests/data/unified/valid-pass/observeSensitiveCommands.yml000066400000000000000000000217421505113246500324470ustar00rootroot00000000000000description: "observeSensitiveCommands" schemaVersion: "1.5" runOnRequirements: - auth: false createEntities: - client: id: &clientObserveSensitiveCommands client0 observeEvents: - commandStartedEvent - commandSucceededEvent observeSensitiveCommands: true - client: id: &clientDoNotObserveSensitiveCommands client1 observeEvents: - commandStartedEvent - commandSucceededEvent observeSensitiveCommands: false - client: id: &clientDoNotObserveSensitiveCommandsByDefault client2 observeEvents: - commandStartedEvent - commandSucceededEvent - database: id: &databaseObserveSensitiveCommands database0 client: *clientObserveSensitiveCommands databaseName: &databaseName observeSensitiveCommands - database: id: &databaseDoNotObserveSensitiveCommands database1 client: *clientDoNotObserveSensitiveCommands databaseName: *databaseName - database: id: &databaseDoNotObserveSensitiveCommandsByDefault database2 client: *clientDoNotObserveSensitiveCommandsByDefault databaseName: *databaseName tests: - description: "getnonce is observed with observeSensitiveCommands=true" runOnRequirements: - maxServerVersion: 6.1.99 # getnonce removed as of 6.2 via SERVER-71007 operations: - name: runCommand object: *databaseObserveSensitiveCommands arguments: commandName: getnonce command: { getnonce: 1 } expectEvents: - client: *clientObserveSensitiveCommands events: - commandStartedEvent: commandName: getnonce command: { getnonce: { $$exists: false } } - commandSucceededEvent: commandName: getnonce reply: ok: { $$exists: false } nonce: { $$exists: false } - description: "getnonce is not observed with observeSensitiveCommands=false" runOnRequirements: - maxServerVersion: 6.1.99 # getnonce removed as of 6.2 via SERVER-71007 
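# getnonce is a sensitive command: drivers redact both the command document and
# the reply, and with observeSensitiveCommands=false (or unset) no events are
# published for it at all.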
operations: - name: runCommand object: *databaseDoNotObserveSensitiveCommands arguments: commandName: getnonce command: { getnonce: 1 } expectEvents: - client: *clientDoNotObserveSensitiveCommands events: [] - description: "getnonce is not observed by default" runOnRequirements: - maxServerVersion: 6.1.99 # getnonce removed as of 6.2 via SERVER-71007 operations: - name: runCommand object: *databaseDoNotObserveSensitiveCommandsByDefault arguments: commandName: getnonce command: { getnonce: 1 } expectEvents: - client: *clientDoNotObserveSensitiveCommandsByDefault events: [] - description: "hello with speculativeAuthenticate" runOnRequirements: - minServerVersion: "4.9" operations: - name: runCommand object: *databaseObserveSensitiveCommands arguments: &helloArgs commandName: hello command: hello: 1 speculativeAuthenticate: { saslStart: 1 } - name: runCommand object: *databaseDoNotObserveSensitiveCommands arguments: *helloArgs - name: runCommand object: *databaseDoNotObserveSensitiveCommandsByDefault arguments: *helloArgs expectEvents: - client: *clientObserveSensitiveCommands events: - commandStartedEvent: commandName: hello command: # Assert that all fields in command are redacted hello: { $$exists: false } speculativeAuthenticate: { $$exists: false } - commandSucceededEvent: commandName: hello reply: # Assert that all fields in reply are redacted isWritablePrimary: { $$exists: false } speculativeAuthenticate: { $$exists: false } - client: *clientDoNotObserveSensitiveCommands events: [] - client: *clientDoNotObserveSensitiveCommandsByDefault events: [] - description: "hello without speculativeAuthenticate is always observed" runOnRequirements: - minServerVersion: "4.9" operations: - name: runCommand object: *databaseObserveSensitiveCommands arguments: &helloArgs commandName: hello command: { hello: 1 } - name: runCommand object: *databaseDoNotObserveSensitiveCommands arguments: *helloArgs - name: runCommand object: *databaseDoNotObserveSensitiveCommandsByDefault arguments: *helloArgs expectEvents: - client: *clientObserveSensitiveCommands events: &helloEvents - commandStartedEvent: commandName: hello command: { hello: 1 } - commandSucceededEvent: commandName: hello reply: { isWritablePrimary: { $$exists: true } } - client: *clientDoNotObserveSensitiveCommands events: *helloEvents - client: *clientDoNotObserveSensitiveCommandsByDefault events: *helloEvents - description: "legacy hello with speculativeAuthenticate" operations: - name: runCommand object: *databaseObserveSensitiveCommands arguments: &ismasterArgs commandName: ismaster command: ismaster: 1 speculativeAuthenticate: { saslStart: 1 } - name: runCommand object: *databaseObserveSensitiveCommands arguments: &isMasterArgs commandName: isMaster command: isMaster: 1 speculativeAuthenticate: { saslStart: 1 } - name: runCommand object: *databaseDoNotObserveSensitiveCommands arguments: *ismasterArgs - name: runCommand object: *databaseDoNotObserveSensitiveCommands arguments: *isMasterArgs - name: runCommand object: *databaseDoNotObserveSensitiveCommandsByDefault arguments: *ismasterArgs - name: runCommand object: *databaseDoNotObserveSensitiveCommandsByDefault arguments: *isMasterArgs expectEvents: - client: *clientObserveSensitiveCommands events: - commandStartedEvent: commandName: ismaster command: # Assert that all fields in command are redacted ismaster: { $$exists: false } speculativeAuthenticate: { $$exists: false } - commandSucceededEvent: commandName: ismaster reply: # Assert that all fields in reply are redacted ismaster: { 
$$exists: false } speculativeAuthenticate: { $$exists: false } - commandStartedEvent: commandName: isMaster command: # Assert that all fields in command are redacted isMaster: { $$exists: false } speculativeAuthenticate: { $$exists: false } - commandSucceededEvent: commandName: isMaster reply: # Assert that all fields in reply are redacted ismaster: { $$exists: false } speculativeAuthenticate: { $$exists: false } - client: *clientDoNotObserveSensitiveCommands events: [] - client: *clientDoNotObserveSensitiveCommandsByDefault events: [] - description: "legacy hello without speculativeAuthenticate is always observed" operations: - name: runCommand object: *databaseObserveSensitiveCommands arguments: &ismasterArgs commandName: ismaster command: { ismaster: 1 } - name: runCommand object: *databaseObserveSensitiveCommands arguments: &isMasterArgs commandName: isMaster command: { isMaster: 1 } - name: runCommand object: *databaseDoNotObserveSensitiveCommands arguments: *ismasterArgs - name: runCommand object: *databaseDoNotObserveSensitiveCommands arguments: *isMasterArgs - name: runCommand object: *databaseDoNotObserveSensitiveCommandsByDefault arguments: *ismasterArgs - name: runCommand object: *databaseDoNotObserveSensitiveCommandsByDefault arguments: *isMasterArgs expectEvents: - client: *clientObserveSensitiveCommands events: &ismasterAndisMasterEvents - commandStartedEvent: commandName: ismaster command: { ismaster: 1 } - commandSucceededEvent: commandName: ismaster reply: { ismaster: { $$exists: true } } - commandStartedEvent: commandName: isMaster command: { isMaster: 1 } - commandSucceededEvent: commandName: isMaster reply: { ismaster: { $$exists: true } } - client: *clientDoNotObserveSensitiveCommands events: *ismasterAndisMasterEvents - client: *clientDoNotObserveSensitiveCommandsByDefault events: *ismasterAndisMasterEvents mongo-ruby-driver-2.21.3/spec/spec_tests/data/unified/valid-pass/poc-change-streams.yml000066400000000000000000000165471505113246500311150ustar00rootroot00000000000000description: "poc-change-streams" schemaVersion: "1.4" runOnRequirements: - serverless: forbid createEntities: # Entities for creating changeStreams - client: id: &client0 client0 useMultipleMongoses: false observeEvents: [ commandStartedEvent ] # Original tests do not observe getMore commands but only because event # assertions ignore extra events. killCursors is explicitly ignored. 
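# A cluster-wide change stream, as exercised in this file, corresponds roughly
# to the Ruby driver's client-level watch helper (a sketch; assumes the
# documented Mongo::Client#watch API):
#
#   client.watch([]).each { |change| handle(change) } # handle is illustrative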
ignoreCommandMonitoringEvents: [ getMore, killCursors ] - database: id: &database0 database0 client: *client0 databaseName: &database0Name change-stream-tests - collection: id: &collection0 collection0 database: *database0 collectionName: &collection0Name test # Entities for executing insert operations - client: id: &client1 client1 useMultipleMongoses: false - database: id: &database1 database1 client: *client1 databaseName: &database1Name change-stream-tests - database: id: &database2 database2 client: *client1 databaseName: &database2Name change-stream-tests-2 - collection: id: &collection1 collection1 database: *database1 collectionName: &collection1Name test - collection: id: &collection2 collection2 database: *database1 collectionName: &collection2Name test2 - collection: id: &collection3 collection3 database: *database2 collectionName: &collection3Name test initialData: - collectionName: *collection1Name databaseName: *database1Name documents: [] - collectionName: *collection2Name databaseName: *database1Name documents: [] - collectionName: *collection3Name databaseName: *database2Name documents: [] tests: - description: "saveResultAsEntity is optional for createChangeStream" runOnRequirements: - minServerVersion: "3.8.0" topologies: [ replicaset ] operations: - name: createChangeStream object: *client0 arguments: pipeline: [] expectEvents: - client: *client0 events: - commandStartedEvent: command: aggregate: 1 commandName: aggregate databaseName: admin - description: "Executing a watch helper on a MongoClient results in notifications for changes to all collections in all databases in the cluster." runOnRequirements: - minServerVersion: "3.8.0" topologies: [ replicaset ] operations: - name: createChangeStream object: *client0 arguments: pipeline: [] saveResultAsEntity: &changeStream0 changeStream0 - name: insertOne object: *collection2 arguments: document: { x: 1 } - name: insertOne object: *collection3 arguments: document: { y: 1 } - name: insertOne object: *collection1 arguments: document: { z: 1 } - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: operationType: insert ns: db: *database1Name coll: *collection2Name fullDocument: _id: { $$type: objectId } x: 1 - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: operationType: insert ns: db: *database2Name coll: *collection3Name fullDocument: # Original tests did not include _id, but matching now only permits # extra keys for root-level documents. _id: { $$type: objectId } y: 1 - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: operationType: insert ns: db: *database1Name coll: *collection1Name fullDocument: _id: { $$type: objectId } z: 1 expectEvents: - client: *client0 events: - commandStartedEvent: command: aggregate: 1 cursor: {} pipeline: - $changeStream: allChangesForCluster: true # Some drivers may send a default value for fullDocument # or omit it entirely (see: SPEC-1350). 
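# $$unsetOrMatches succeeds when the field is either absent or matches the
# operand, which is what allows both driver behaviors to pass here.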
fullDocument: { $$unsetOrMatches: default } commandName: aggregate databaseName: admin - description: "Test consecutive resume" runOnRequirements: - minServerVersion: "4.1.7" topologies: [ replicaset ] operations: - name: failPoint object: testRunner arguments: client: *client0 failPoint: configureFailPoint: failCommand mode: { times: 2 } data: failCommands: [ getMore ] closeConnection: true - name: createChangeStream object: *collection0 arguments: batchSize: 1 pipeline: [] saveResultAsEntity: *changeStream0 - name: insertOne object: *collection1 arguments: document: { x: 1 } - name: insertOne object: *collection1 arguments: document: { x: 2 } - name: insertOne object: *collection1 arguments: document: { x: 3 } - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: operationType: insert ns: db: *database1Name coll: *collection1Name fullDocument: _id: { $$type: objectId } x: 1 - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: operationType: insert ns: db: *database1Name coll: *collection1Name fullDocument: _id: { $$type: objectId } x: 2 - name: iterateUntilDocumentOrError object: *changeStream0 expectResult: operationType: insert ns: db: *database1Name coll: *collection1Name fullDocument: _id: { $$type: objectId } x: 3 expectEvents: - client: *client0 events: - commandStartedEvent: command: aggregate: *collection1Name cursor: { batchSize: 1 } pipeline: - $changeStream: fullDocument: { $$unsetOrMatches: default } commandName: aggregate databaseName: *database1Name # The original test only asserted the first command, since expected # events were only an ordered subset. This file does ignore getMore # commands but we must expect the subsequent aggregate commands, since # each failed getMore will resume. While doing so we can also assert # that those commands include a resume token. 
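# Each resumed aggregate must carry a resume token from the last change the
# driver observed; asserting resumeAfter: { $$exists: true } verifies that
# without pinning the opaque token value.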
- &resumingAggregate commandStartedEvent: command: aggregate: *collection1Name cursor: { batchSize: 1 } pipeline: - $changeStream: fullDocument: { $$unsetOrMatches: default } resumeAfter: { $$exists: true } commandName: aggregate databaseName: *database0Name - *resumingAggregate mongo-ruby-driver-2.21.3/spec/spec_tests/data/unified/valid-pass/poc-command-monitoring.yml000066400000000000000000000055751505113246500320140ustar00rootroot00000000000000description: "poc-command-monitoring" schemaVersion: "1.0" createEntities: - client: id: &client0 client0 observeEvents: - commandStartedEvent - commandSucceededEvent - commandFailedEvent - database: id: &database0 database0 client: *client0 databaseName: &database0Name command-monitoring-tests - collection: id: &collection0 collection0 database: *database0 collectionName: &collection0Name test initialData: - collectionName: *collection0Name databaseName: *database0Name documents: - { _id: 1, x: 11 } - { _id: 2, x: 22 } - { _id: 3, x: 33 } - { _id: 4, x: 44 } - { _id: 5, x: 55 } tests: - description: "A successful find event with a getmore and the server kills the cursor (<= 4.4)" runOnRequirements: - minServerVersion: "3.1" maxServerVersion: "4.4.99" topologies: [ single, replicaset ] operations: - name: find object: *collection0 arguments: filter: { _id: { $gte: 1 }} sort: { _id: 1 } batchSize: 3 limit: 4 expectEvents: - client: *client0 events: - commandStartedEvent: command: find: *collection0Name filter: { _id: { $gte : 1 } } sort: { _id: 1 } batchSize: 3 limit: 4 commandName: find databaseName: *database0Name - commandSucceededEvent: reply: ok: 1 cursor: id: { $$type: [ int, long ] } ns: &namespace command-monitoring-tests.test firstBatch: - { _id: 1, x: 11 } - { _id: 2, x: 22 } - { _id: 3, x: 33 } commandName: find - commandStartedEvent: command: getMore: { $$type: [ int, long ] } collection: *collection0Name batchSize: 1 commandName: getMore databaseName: *database0Name - commandSucceededEvent: reply: ok: 1 cursor: id: 0 ns: *namespace nextBatch: - { _id: 4, x: 44 } commandName: getMore - description: "A failed find event" operations: - name: find object: *collection0 arguments: filter: { $or: true } expectError: { isError: true } expectEvents: - client: *client0 events: - commandStartedEvent: command: find: *collection0Name filter: { $or: true } commandName: find databaseName: *database0Name - commandFailedEvent: commandName: find mongo-ruby-driver-2.21.3/spec/spec_tests/data/unified/valid-pass/poc-crud.yml000066400000000000000000000131761505113246500271440ustar00rootroot00000000000000description: "poc-crud" schemaVersion: "1.4" createEntities: - client: id: &client0 client0 observeEvents: [ commandStartedEvent ] - database: id: &database0 database0 client: *client0 databaseName: &database0Name crud-tests - database: id: &database1 database1 client: *client0 databaseName: &database1Name admin - collection: id: &collection0 collection0 database: *database0 collectionName: &collection0Name coll0 - collection: id: &collection1 collection1 database: *database0 collectionName: &collection1Name coll1 - collection: id: &collection2 collection2 database: *database0 collectionName: &collection2Name coll2 collectionOptions: readConcern: { level: majority } initialData: - collectionName: *collection0Name databaseName: *database0Name documents: - { _id: 1, x: 11 } - { _id: 2, x: 22 } - collectionName: *collection1Name databaseName: *database0Name documents: - { _id: 1, x: 11 } - collectionName: *collection2Name databaseName: *database0Name documents: 
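      # coll2 is declared above with collectionOptions readConcern level
      # "majority"; the "readConcern majority with out stage" test below
      # depends on reading this data at that level.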
- { _id: 1, x: 11 } - { _id: 2, x: 22 } - { _id: 3, x: 33 } - collectionName: &out aggregate_out databaseName: *database0Name documents: [] tests: - description: "BulkWrite with mixed ordered operations" operations: - name: bulkWrite object: *collection0 arguments: requests: - insertOne: document: { _id: 3, x: 33 } - updateOne: filter: { _id: 2 } update: { $inc: { x: 1 } } - updateMany: filter: { _id: { $gt: 1 } } update: { $inc: { x: 1 } } - insertOne: document: { _id: 4, x: 44 } - deleteMany: filter: { x: { $nin: [ 24, 34 ] } } - replaceOne: filter: { _id: 4 } replacement: { _id: 4, x: 44 } upsert: true ordered: true expectResult: deletedCount: 2 insertedCount: 2 insertedIds: { $$unsetOrMatches: { 0: 3, 3: 4 } } matchedCount: 3 modifiedCount: 3 upsertedCount: 1 upsertedIds: { 5: 4 } outcome: - collectionName: *collection0Name databaseName: *database0Name documents: - {_id: 2, x: 24 } - {_id: 3, x: 34 } - {_id: 4, x: 44 } - description: "InsertMany continue-on-error behavior with unordered (duplicate key in requests)" operations: - name: insertMany object: *collection1 arguments: documents: - { _id: 2, x: 22 } - { _id: 2, x: 22 } - { _id: 3, x: 33 } ordered: false expectError: expectResult: # insertMany throws BulkWriteException, which may optionally include # an intermediary BulkWriteResult $$unsetOrMatches: deletedCount: 0 insertedCount: 2 # Since the map of insertedIds is generated before execution it # could indicate inserts that did not actually succeed. We omit # this field rather than expect drivers to provide an accurate # map filtered by write errors. matchedCount: 0 modifiedCount: 0 upsertedCount: 0 upsertedIds: { } outcome: - collectionName: *collection1Name databaseName: *database0Name documents: - { _id: 1, x: 11 } - { _id: 2, x: 22 } - { _id: 3, x: 33 } - description: "ReplaceOne prohibits atomic modifiers" operations: - name: replaceOne object: *collection1 arguments: filter: { _id: 1 } replacement: { $set: { x: 22 }} expectError: isClientError: true expectEvents: - client: *client0 events: [] outcome: - collectionName: *collection1Name databaseName: *database0Name documents: - { _id: 1, x: 11 } - description: "readConcern majority with out stage" runOnRequirements: - minServerVersion: "4.1.0" topologies: [ replicaset, sharded ] serverless: "forbid" operations: - name: aggregate object: *collection2 arguments: pipeline: &pipeline - $sort: { x : 1 } - $match: { _id: { $gt: 1 } } - $out: *out expectEvents: - client: *client0 events: - commandStartedEvent: command: aggregate: *collection2Name pipeline: *pipeline readConcern: { level: majority } # The following two assertions were not in the original test commandName: aggregate databaseName: *database0Name outcome: - collectionName: *out databaseName: *database0Name documents: - { _id: 2, x: 22 } - { _id: 3, x: 33 } - description: "Aggregate with $listLocalSessions" runOnRequirements: - minServerVersion: "3.6.0" # serverless does not support either of the current database-level aggregation stages ($listLocalSessions and # $currentOp) serverless: forbid operations: - name: aggregate object: *database1 arguments: pipeline: - $listLocalSessions: { } - $limit: 1 - $addFields: { dummy: "dummy field"} - $project: { _id: 0, dummy: 1} expectResult: - { dummy: "dummy field" } mongo-ruby-driver-2.21.3/spec/spec_tests/data/unified/valid-pass/poc-gridfs.yml000066400000000000000000000133001505113246500274520ustar00rootroot00000000000000description: "poc-gridfs" schemaVersion: "1.0" createEntities: - client: id: &client0 client0 - database: 
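  # The bucket entity below wraps the fs.files and fs.chunks namespaces; the
  # two extra collection entities give the tests direct access to those
  # collections for arranging data and asserting on stored chunks.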
id: &database0 database0 client: *client0 databaseName: &database0Name gridfs-tests - bucket: id: &bucket0 bucket0 database: *database0 - collection: id: &bucket0_files_collection bucket0_files_collection database: *database0 collectionName: &bucket0_files_collectionName fs.files - collection: id: &bucket0_chunks_collection bucket0_chunks_collection database: *database0 collectionName: &bucket0_chunks_collectionName fs.chunks initialData: - collectionName: *bucket0_files_collectionName databaseName: *database0Name documents: - _id: { $oid: "000000000000000000000005" } length: 10 chunkSize: 4 uploadDate: { $date: "1970-01-01T00:00:00.000Z" } md5: "57d83cd477bfb1ccd975ab33d827a92b" filename: "length-10" contentType: "application/octet-stream" aliases: [] metadata: {} - collectionName: *bucket0_chunks_collectionName databaseName: *database0Name documents: - _id: { $oid: "000000000000000000000005" } files_id: { $oid: "000000000000000000000005" } n: 0 data: { $binary: { base64: "ESIzRA==", subType: "00" } } # hex: 11223344 - _id: { $oid: "000000000000000000000006" } files_id: { $oid: "000000000000000000000005" } n: 1 data: { $binary: { base64: "VWZ3iA==", subType: "00" } } # hex: 55667788 - _id: { $oid: "000000000000000000000007" } files_id: { $oid: "000000000000000000000005" } n: 2 data: { $binary: { base64: "mao=", subType: "00" } } # hex: 99aa tests: # Changed from original test ("length is 8") to operate on same initialData - description: "Delete when length is 10" operations: - name: delete object: *bucket0 arguments: id: { $oid: "000000000000000000000005" } # Original test uses "assert.data" syntax to modify outcome collection for # comparison. This can be accomplished using "outcome" directly. outcome: - collectionName: *bucket0_files_collectionName databaseName: *database0Name documents: [] - collectionName: *bucket0_chunks_collectionName databaseName: *database0Name documents: [] - description: "Download when there are three chunks" operations: # Original test uses "download" operation. We use an explicit operation # that returns a stream and then assert the contents of that stream. - name: download object: *bucket0 arguments: id: { $oid: "000000000000000000000005" } expectResult: { $$matchesHexBytes: "112233445566778899aa" } - description: "Download when files entry does not exist" operations: - name: download object: *bucket0 arguments: id: { $oid: "000000000000000000000000" } # Original test expects "FileNotFound" error, which isn't specified expectError: { isError: true } - description: "Download when an intermediate chunk is missing" operations: # Original test uses "arrange" syntax to modify initialData. This can be # accomplished as a delete operation on the chunks collection. - name: deleteOne object: *bucket0_chunks_collection arguments: filter: files_id: { $oid: "000000000000000000000005" } n: 1 expectResult: deletedCount: 1 - name: download object: *bucket0 arguments: id: { $oid: "000000000000000000000005" } # Original test expects "ChunkIsMissing" error, which isn't specified expectError: { isError: true } - description: "Upload when length is 5" operations: # Original test uses "upload" operation. We use an explicit operation # that takes a stream, which has been created from the expected hex bytes. - name: upload object: *bucket0 arguments: filename: filename source: { $$hexBytes: "1122334455" } chunkSizeBytes: 4 # Original test references the result directly in "assert.data". Here, # we need to save the result as an entity, which we can later reference. 
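      # saveResultAsEntity stores the uploaded file's ObjectId under the name
      # oid0 so the follow-up find assertions can reference it via
      # $$matchesEntity.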
expectResult: { $$type: objectId } saveResultAsEntity: &oid0 oid0 # "outcome" does not allow operators, but we can perform the assertions # with separate find operations. - name: find object: *bucket0_files_collection arguments: filter: {} sort: { uploadDate: -1 } limit: 1 expectResult: - _id: { $$matchesEntity: *oid0 } length: 5 chunkSize: 4 uploadDate: { $$type: date } # The md5 field is deprecated so some drivers do not calculate it when uploading files. md5: { $$unsetOrMatches: "283d4fea5dded59cf837d3047328f5af" } filename: filename - name: find object: *bucket0_chunks_collection arguments: # We cannot use the saved ObjectId when querying, but filtering by a # non-zero timestamp will exclude initialData and sort can return the # expected chunks in order. filter: { _id: { $gt: { $oid: "000000000000000000000007" } } } sort: { n: 1 } expectResult: - _id: { $$type: objectId } files_id: { $$matchesEntity: *oid0 } n: 0 data: { $binary: { base64: "ESIzRA==", subType: "00" } } # hex 11223344 - _id: { $$type: objectId } files_id: { $$matchesEntity: *oid0 } n: 1 data: { $binary: { base64: "VQ==", subType: "00" } } # hex 55 mongo-ruby-driver-2.21.3/spec/spec_tests/data/unified/valid-pass/poc-retryable-reads.yml000066400000000000000000000124411505113246500312660ustar00rootroot00000000000000description: "poc-retryable-reads" schemaVersion: "1.0" runOnRequirements: - minServerVersion: "4.0" topologies: [ single, replicaset ] - minServerVersion: "4.1.7" topologies: [ sharded ] createEntities: - client: id: &client0 client0 useMultipleMongoses: false observeEvents: [ commandStartedEvent ] - client: id: &client1 client1 uriOptions: { retryReads: false } useMultipleMongoses: false observeEvents: [ commandStartedEvent ] - database: id: &database0 database0 client: *client0 databaseName: &databaseName retryable-reads-tests - database: id: &database1 database1 client: *client1 databaseName: *databaseName - collection: id: &collection0 collection0 database: *database0 collectionName: &collectionName coll - collection: id: &collection1 collection1 database: *database1 collectionName: *collectionName initialData: - collectionName: *collectionName databaseName: *databaseName documents: - {_id: 1, x: 11} - {_id: 2, x: 22} - {_id: 3, x: 33} tests: - description: "Aggregate succeeds after InterruptedAtShutdown" operations: - name: failPoint object: testRunner arguments: client: *client0 failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: [ aggregate ] errorCode: 11600 # InterruptedAtShutdown - name: aggregate object: *collection0 arguments: pipeline: &pipeline - $match: { _id: { $gt: 1 } } - $sort: { x: 1 } expectResult: - { _id: 2, x: 22 } - { _id: 3, x: 33 } expectEvents: - client: *client0 events: - commandStartedEvent: command: aggregate: *collectionName pipeline: *pipeline databaseName: *databaseName - commandStartedEvent: command: aggregate: *collectionName pipeline: *pipeline databaseName: *databaseName - description: "Find succeeds on second attempt" operations: - name: failPoint object: testRunner arguments: client: *client0 failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: [ find ] closeConnection: true # Find options and expected result changed to use common initialData - name: find object: *collection0 arguments: filter: {} sort: { _id: 1 } limit: 2 expectResult: - { _id: 1, x: 11 } - { _id: 2, x: 22 } expectEvents: - client: *client0 events: - &findAttempt commandStartedEvent: command: find: *collectionName filter: {} sort: { _id: 1 } 
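          # Retryable reads retry a failed command exactly once, so the same
          # find command is expected twice (hence the *findAttempt alias reuse
          # below).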
limit: 2 databaseName: *databaseName - *findAttempt - description: "Find fails on first attempt" operations: - name: failPoint object: testRunner arguments: client: *client0 failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: [ find ] closeConnection: true - name: find object: *collection1 # client uses retryReads=false arguments: filter: {} # Other arguments in the original test are not relevant expectError: { isError: true } expectEvents: - client: *client1 events: - commandStartedEvent: command: find: *collectionName filter: {} databaseName: *databaseName - description: "Find fails on second attempt" operations: - name: failPoint object: testRunner arguments: client: *client0 failPoint: configureFailPoint: failCommand mode: { times: 2 } data: failCommands: [ find ] closeConnection: true - name: find object: *collection0 arguments: filter: {} # Other arguments in the original test are not relevant expectError: { isError: true } expectEvents: - client: *client0 events: - &findAttempt commandStartedEvent: command: find: *collectionName filter: {} databaseName: *databaseName - *findAttempt - description: "ListDatabases succeeds on second attempt" operations: - name: failPoint object: testRunner arguments: client: *client0 failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: [ listDatabases ] closeConnection: true - name: listDatabases object: *client0 expectEvents: - client: *client0 events: - commandStartedEvent: command: { listDatabases: 1 } - commandStartedEvent: command: { listDatabases: 1 } mongo-ruby-driver-2.21.3/spec/spec_tests/data/unified/valid-pass/poc-retryable-writes.yml000066400000000000000000000146501505113246500315110ustar00rootroot00000000000000description: "poc-retryable-writes" schemaVersion: "1.0" createEntities: - client: id: &client0 client0 useMultipleMongoses: false observeEvents: [ commandStartedEvent ] - client: id: &client1 client1 uriOptions: { retryWrites: false } useMultipleMongoses: false observeEvents: [ commandStartedEvent ] - database: id: &database0 database0 client: *client0 databaseName: &databaseName retryable-writes-tests - database: id: &database1 database1 client: *client1 databaseName: *databaseName - collection: id: &collection0 collection0 database: *database0 collectionName: &collectionName coll - collection: id: &collection1 collection1 database: *database1 collectionName: *collectionName initialData: - collectionName: *collectionName databaseName: *databaseName documents: - { _id: 1, x: 11 } - { _id: 2, x: 22 } tests: - description: "FindOneAndUpdate is committed on first attempt" runOnRequirements: &onPrimaryTransactionalWrite_requirements - minServerVersion: "3.6" topologies: [ replicaset ] operations: - name: failPoint object: testRunner arguments: client: *client0 failPoint: configureFailPoint: onPrimaryTransactionalWrite mode: { times: 1 } - name: findOneAndUpdate object: *collection0 arguments: filter: { _id: 1 } update: { $inc: { x : 1 } } returnDocument: Before expectResult: { _id: 1, x: 11 } outcome: - collectionName: *collectionName databaseName: *databaseName documents: - { _id: 1, x: 12 } - { _id: 2, x: 22 } - description: "FindOneAndUpdate is not committed on first attempt" runOnRequirements: *onPrimaryTransactionalWrite_requirements operations: - name: failPoint object: testRunner arguments: client: *client0 failPoint: configureFailPoint: onPrimaryTransactionalWrite mode: { times: 1 } data: { failBeforeCommitExceptionCode: 1 } - name: findOneAndUpdate object: *collection0 
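      # failBeforeCommitExceptionCode makes the first attempt fail before the
      # write commits, so it is the retry that actually applies the update.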
arguments: filter: { _id: 1 } update: { $inc: { x : 1 } } returnDocument: Before expectResult: { _id: 1, x: 11 } outcome: - collectionName: *collectionName databaseName: *databaseName documents: - { _id: 1, x: 12 } - { _id: 2, x: 22 } - description: "FindOneAndUpdate is never committed" runOnRequirements: *onPrimaryTransactionalWrite_requirements operations: - name: failPoint object: testRunner arguments: client: *client0 failPoint: configureFailPoint: onPrimaryTransactionalWrite mode: { times: 2 } data: { failBeforeCommitExceptionCode: 1 } - name: findOneAndUpdate object: *collection0 arguments: filter: { _id: 1 } update: { $inc: { x : 1 } } returnDocument: Before expectError: { isError: true } outcome: - collectionName: *collectionName databaseName: *databaseName documents: - { _id: 1, x: 11 } - { _id: 2, x: 22 } - description: "InsertMany succeeds after PrimarySteppedDown" runOnRequirements: &failCommand_requirements - minServerVersion: "4.0" topologies: [ replicaset ] - minServerVersion: "4.1.7" topologies: [ sharded ] operations: - name: failPoint object: testRunner arguments: client: *client0 failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: [ insert ] errorCode: 189 # PrimarySteppedDown errorLabels: [ RetryableWriteError ] - name: insertMany object: *collection0 arguments: documents: # Documents are modified from original test for "initialData" - { _id: 3, x: 33 } - { _id: 4, x: 44 } ordered: true expectResult: # InsertManyResult is optional because all of its fields are optional $$unsetOrMatches: { insertedIds: { $$unsetOrMatches: { 0: 3, 1: 4 } } } outcome: - collectionName: *collectionName databaseName: *databaseName documents: - { _id: 1, x: 11 } - { _id: 2, x: 22 } - { _id: 3, x: 33 } - { _id: 4, x: 44 } - description: "InsertOne fails after connection failure when retryWrites option is false" runOnRequirements: *failCommand_requirements operations: - name: failPoint object: testRunner arguments: client: *client1 failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: [ insert ] closeConnection: true - name: insertOne object: *collection1 arguments: document: { _id: 3, x: 33 } expectError: # If retryWrites is false, the driver should not add the # RetryableWriteError label to the error. 
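        # errorLabelsOmit asserts that none of the listed labels appear on the
        # raised error (the counterpart of errorLabelsContain used further
        # below).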
errorLabelsOmit: [ RetryableWriteError ] outcome: - collectionName: *collectionName databaseName: *databaseName documents: - { _id: 1, x: 11 } - { _id: 2, x: 22 } - description: "InsertOne fails after multiple retryable writeConcernErrors" runOnRequirements: *failCommand_requirements operations: - name: failPoint object: testRunner arguments: client: *client0 failPoint: configureFailPoint: failCommand mode: { times: 2 } data: failCommands: [ insert ] errorLabels: [ RetryableWriteError ] writeConcernError: code: 91 # ShutdownInProgress errmsg: "Replication is being shut down" - name: insertOne object: *collection0 arguments: document: { _id: 3, x: 33 } expectError: errorLabelsContain: [ RetryableWriteError ] outcome: - collectionName: *collectionName databaseName: *databaseName documents: - { _id: 1, x: 11 } - { _id: 2, x: 22 } - { _id: 3, x: 33 } # The write was still applied mongo-ruby-driver-2.21.3/spec/spec_tests/data/unified/valid-pass/poc-sessions.yml000066400000000000000000000146401505113246500300520ustar00rootroot00000000000000description: "poc-sessions" schemaVersion: "1.0" runOnRequirements: - minServerVersion: "3.6.0" createEntities: - client: id: &client0 client0 useMultipleMongoses: false observeEvents: [ commandStartedEvent ] - database: id: &database0 database0 client: *client0 databaseName: &database0Name session-tests - collection: id: &collection0 collection0 database: *database0 collectionName: &collection0Name test - session: id: &session0 session0 client: *client0 initialData: - collectionName: *collection0Name databaseName: *database0Name documents: - { _id: 1 } tests: - description: "Server supports explicit sessions" operations: - name: assertSessionNotDirty object: testRunner arguments: session: *session0 - name: insertOne object: *collection0 arguments: session: *session0 document: { _id: 2 } expectResult: { $$unsetOrMatches: { insertedId: { $$unsetOrMatches: 2 } } } - name: assertSessionNotDirty object: testRunner arguments: session: *session0 - name: endSession object: *session0 - &find_with_implicit_session name: find object: *collection0 arguments: filter: { _id: -1 } expectResult: [] - name: assertSameLsidOnLastTwoCommands object: testRunner arguments: client: *client0 expectEvents: - client: *client0 events: - commandStartedEvent: command: insert: *collection0Name documents: [ { _id: 2 } ] ordered: true lsid: { $$sessionLsid: *session0 } commandName: insert databaseName: *database0Name - commandStartedEvent: command: find: *collection0Name filter: { _id: -1 } lsid: { $$sessionLsid: *session0 } commandName: find databaseName: *database0Name outcome: - collectionName: *collection0Name databaseName: *database0Name documents: - { _id: 1 } - { _id: 2 } - description: "Server supports implicit sessions" operations: - name: insertOne object: *collection0 arguments: document: { _id: 2 } expectResult: { $$unsetOrMatches: { insertedId: { $$unsetOrMatches: 2 } } } - *find_with_implicit_session - name: assertSameLsidOnLastTwoCommands object: testRunner arguments: client: *client0 expectEvents: - client: *client0 events: - commandStartedEvent: command: insert: *collection0Name documents: - { _id: 2 } ordered: true # Original test did not include any assertion, but we can use # $$type to expect an arbitrary lsid document lsid: { $$type: object } commandName: insert databaseName: *database0Name - commandStartedEvent: command: find: *collection0Name filter: { _id: -1 } lsid: { $$type: object } commandName: find databaseName: *database0Name outcome: - collectionName: 
*collection0Name databaseName: *database0Name documents: - { _id: 1 } - { _id: 2 } - description: "Dirty explicit session is discarded" skipReason: RUBY-1813 # Original test specified retryWrites=true, but that is now the default. runOnRequirements: - minServerVersion: "4.0" topologies: [ replicaset ] - minServerVersion: "4.1.8" topologies: [ sharded ] operations: - name: failPoint object: testRunner arguments: client: *client0 failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: [ insert ] closeConnection: true - name: assertSessionNotDirty object: testRunner arguments: session: *session0 - name: insertOne object: *collection0 arguments: session: *session0 document: { _id: 2 } expectResult: { $$unsetOrMatches: { insertedId: { $$unsetOrMatches: 2 } } } - name: assertSessionDirty object: testRunner arguments: session: *session0 - name: insertOne object: *collection0 arguments: session: *session0 document: { _id: 3 } expectResult: { $$unsetOrMatches: { insertedId: { $$unsetOrMatches: 3 } } } - name: assertSessionDirty object: testRunner arguments: session: *session0 - name: endSession object: *session0 - *find_with_implicit_session - name: assertDifferentLsidOnLastTwoCommands object: testRunner arguments: client: *client0 expectEvents: - client: *client0 events: # ajv's YAML parser is unable to handle anchors on array elements, so # we define an anchor on the commandStartedEvent object instead - commandStartedEvent: &insert_attempt command: insert: *collection0Name documents: - { _id: 2 } ordered: true lsid: { $$sessionLsid: *session0 } txnNumber: 1 commandName: insert databaseName: *database0Name - commandStartedEvent: *insert_attempt - commandStartedEvent: command: insert: *collection0Name documents: - { _id: 3 } ordered: true lsid: { $$sessionLsid: *session0 } txnNumber: 2 commandName: insert databaseName: *database0Name - commandStartedEvent: command: find: *collection0Name filter: { _id: -1 } lsid: { $$type: object } commandName: find databaseName: *database0Name outcome: - collectionName: *collection0Name databaseName: *database0Name documents: - { _id: 1 } - { _id: 2 } - { _id: 3 } mongo-ruby-driver-2.21.3/spec/spec_tests/data/unified/valid-pass/poc-transactions-convenient-api.yml000066400000000000000000000167471505113246500336430ustar00rootroot00000000000000description: "poc-transactions-convenient-api" schemaVersion: "1.0" runOnRequirements: - minServerVersion: "4.0" topologies: [ replicaset ] - minServerVersion: "4.1.8" topologies: [ sharded ] createEntities: - client: id: &client0 client0 useMultipleMongoses: true observeEvents: [ commandStartedEvent ] - client: id: &client1 client1 uriOptions: readConcernLevel: local w: 1 useMultipleMongoses: true observeEvents: [ commandStartedEvent ] - database: id: &database0 database0 client: *client0 databaseName: &databaseName transaction-tests - database: id: &database1 database1 client: *client1 databaseName: *databaseName - collection: id: &collection0 collection0 database: *database0 collectionName: &collectionName test - collection: id: &collection1 collection1 database: *database1 collectionName: *collectionName - session: id: &session0 session0 client: *client0 - session: id: &session1 session1 client: *client1 - session: id: &session2 session2 client: *client0 sessionOptions: defaultTransactionOptions: readConcern: { level: majority } writeConcern: { w: 1 } initialData: - collectionName: *collectionName databaseName: *databaseName documents: [] tests: - description: "withTransaction and no transaction 
options set" operations: - name: withTransaction object: *session0 arguments: callback: - name: insertOne object: *collection0 arguments: session: *session0 document: { _id: 1 } expectResult: { $$unsetOrMatches: { insertedId: { $$unsetOrMatches: 1 } } } expectEvents: - client: *client0 events: - commandStartedEvent: command: insert: *collectionName documents: [ { _id: 1 } ] ordered: true lsid: { $$sessionLsid: *session0 } txnNumber: 1 startTransaction: true autocommit: false # omitted fields readConcern: { $$exists: false } writeConcern: { $$exists: false } commandName: insert databaseName: *databaseName - commandStartedEvent: command: commitTransaction: 1 lsid: { $$sessionLsid: *session0 } txnNumber: 1 autocommit: false # omitted fields readConcern: { $$exists: false } startTransaction: { $$exists: false } writeConcern: { $$exists: false } commandName: commitTransaction databaseName: admin outcome: &outcome - collectionName: *collectionName databaseName: *databaseName documents: - { _id: 1 } - description: "withTransaction inherits transaction options from client" operations: - name: withTransaction object: *session1 arguments: callback: - name: insertOne object: *collection1 arguments: session: *session1 document: { _id: 1 } expectResult: { $$unsetOrMatches: { insertedId: { $$unsetOrMatches: 1 } } } expectEvents: - client: *client1 events: - commandStartedEvent: command: insert: *collectionName documents: [ { _id: 1 } ] ordered: true lsid: { $$sessionLsid: *session1 } txnNumber: 1 startTransaction: true autocommit: false readConcern: { level: local } # omitted fields writeConcern: { $$exists: false } commandName: insert databaseName: *databaseName - commandStartedEvent: command: commitTransaction: 1 lsid: { $$sessionLsid: *session1 } txnNumber: 1 autocommit: false writeConcern: { w: 1 } # omitted fields readConcern: { $$exists: false } startTransaction: { $$exists: false } commandName: commitTransaction databaseName: admin outcome: *outcome - description: "withTransaction inherits transaction options from defaultTransactionOptions" operations: - name: withTransaction object: *session2 arguments: callback: - name: insertOne object: *collection0 arguments: session: *session2 document: { _id: 1 } expectResult: { $$unsetOrMatches: { insertedId: { $$unsetOrMatches: 1 } } } expectEvents: - client: *client0 events: - commandStartedEvent: command: insert: *collectionName documents: [ { _id: 1 } ] ordered: true lsid: { $$sessionLsid: *session2 } txnNumber: 1 startTransaction: true autocommit: false readConcern: { level: majority } # omitted fields writeConcern: { $$exists: false } commandName: insert databaseName: *databaseName - commandStartedEvent: command: commitTransaction: 1 lsid: { $$sessionLsid: *session2 } txnNumber: 1 autocommit: false writeConcern: { w: 1 } # omitted fields readConcern: { $$exists: false } startTransaction: { $$exists: false } commandName: commitTransaction databaseName: admin outcome: *outcome - description: "withTransaction explicit transaction options" operations: - name: withTransaction object: *session0 arguments: callback: - name: insertOne object: *collection0 arguments: session: *session0 document: { _id: 1 } expectResult: { $$unsetOrMatches: { insertedId: { $$unsetOrMatches: 1 } } } readConcern: { level: majority } writeConcern: { w: 1 } expectEvents: - client: *client0 events: - commandStartedEvent: command: insert: *collectionName documents: [ { _id: 1 } ] ordered: true lsid: { $$sessionLsid: *session0 } txnNumber: 1 startTransaction: true autocommit: false 
readConcern: { level: majority } # omitted fields writeConcern: { $$exists: false } commandName: insert databaseName: *databaseName - commandStartedEvent: command: commitTransaction: 1 lsid: { $$sessionLsid: *session0 } txnNumber: 1 autocommit: false writeConcern: { w: 1 } # omitted fields readConcern: { $$exists: false } startTransaction: { $$exists: false } commandName: commitTransaction databaseName: admin outcome: *outcome poc-transactions-mongos-pin-auto.yml000066400000000000000000000124521505113246500336660ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/unified/valid-passdescription: "poc-transactions-mongos-pin-auto" schemaVersion: "1.0" runOnRequirements: - minServerVersion: "4.1.8" topologies: [ sharded ] createEntities: - client: id: &client0 client0 useMultipleMongoses: true observeEvents: [ commandStartedEvent ] - database: id: &database0 database0 client: *client0 databaseName: &database0Name transaction-tests - collection: id: &collection0 collection0 database: *database0 collectionName: &collection0Name test - session: id: &session0 session0 client: *client0 initialData: - collectionName: *collection0Name databaseName: *database0Name documents: - { _id: 1 } - { _id: 2 } tests: - description: "remain pinned after non-transient Interrupted error on insertOne" operations: - &startTransaction name: startTransaction object: *session0 - &firstInsert name: insertOne object: *collection0 arguments: session: *session0 document: { _id: 3 } expectResult: { $$unsetOrMatches: { insertedId: { $$unsetOrMatches: 3 } } } - name: targetedFailPoint object: testRunner arguments: session: *session0 failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: [ insert ] errorCode: 11601 # Interrupted - name: insertOne object: *collection0 arguments: session: *session0 document: { _id: 4 } expectError: errorLabelsOmit: [ TransientTransactionError, UnknownTransactionCommitResult ] errorCodeName: Interrupted - name: assertSessionPinned object: testRunner arguments: session: *session0 - name: commitTransaction object: *session0 expectEvents: - client: *client0 events: - commandStartedEvent: &firstInsertEvent command: insert: *collection0Name documents: [ { _id: 3 } ] ordered: true readConcern: { $$exists: false } lsid: { $$sessionLsid: *session0 } txnNumber: 1 startTransaction: true autocommit: false writeConcern: { $$exists: false } commandName: insert databaseName: *database0Name - commandStartedEvent: &secondInsertEvent command: insert: *collection0Name documents: [ { _id: 4 } ] ordered: true readConcern: { $$exists: false } lsid: { $$sessionLsid: *session0 } txnNumber: 1 startTransaction: { $$exists: false } autocommit: false writeConcern: { $$exists: false } commandName: insert databaseName: *database0Name - commandStartedEvent: command: commitTransaction: 1 lsid: { $$sessionLsid: *session0 } txnNumber: 1 startTransaction: { $$exists: false } autocommit: false writeConcern: { $$exists: false } # Original test expected any value, but we can assert an object recoveryToken: { $$type: object } commandName: commitTransaction databaseName: admin outcome: - collectionName: *collection0Name databaseName: *database0Name documents: - { _id: 1 } - { _id: 2 } - { _id: 3 } - description: "unpin after transient error within a transaction" operations: - *startTransaction - *firstInsert - name: targetedFailPoint object: testRunner arguments: session: *session0 failPoint: configureFailPoint: failCommand mode: { times: 1 } data: failCommands: [ insert ] 
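          # closeConnection drops the connection mid-operation; the resulting
          # TransientTransactionError must unpin the session from its mongos.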
closeConnection: true - name: insertOne object: *collection0 arguments: session: *session0 document: { _id: 4 } expectError: errorLabelsContain: [ TransientTransactionError ] errorLabelsOmit: [ UnknownTransactionCommitResult ] - name: assertSessionUnpinned object: testRunner arguments: session: *session0 - name: abortTransaction object: *session0 expectEvents: - client: *client0 events: - commandStartedEvent: *firstInsertEvent - commandStartedEvent: *secondInsertEvent - commandStartedEvent: command: abortTransaction: 1 lsid: { $$sessionLsid: *session0 } txnNumber: 1 startTransaction: { $$exists: false } autocommit: false writeConcern: { $$exists: false } # Original test expected any value, but we can assert an object recoveryToken: { $$type: object } commandName: abortTransaction databaseName: admin outcome: - collectionName: *collection0Name databaseName: *database0Name documents: - { _id: 1 } - { _id: 2 } mongo-ruby-driver-2.21.3/spec/spec_tests/data/unified/valid-pass/poc-transactions.yml000066400000000000000000000122711505113246500307120ustar00rootroot00000000000000description: "poc-transactions" schemaVersion: "1.0" runOnRequirements: - minServerVersion: "4.0" topologies: [ replicaset ] - minServerVersion: "4.1.8" topologies: [ sharded ] createEntities: - client: id: &client0 client0 observeEvents: [ commandStartedEvent ] - database: id: &database0 database0 client: *client0 databaseName: &database0Name transaction-tests - collection: id: &collection0 collection0 database: *database0 collectionName: &collection0Name test - session: id: &session0 session0 client: *client0 initialData: - collectionName: *collection0Name databaseName: *database0Name documents: [] tests: - description: "Client side error in command starting transaction" operations: - name: startTransaction object: *session0 - name: updateOne object: *collection0 arguments: session: *session0 filter: { _id: 1 } update: { x: 1 } # Original test only asserted a generic error expectError: { isClientError: true } - name: assertSessionTransactionState object: testRunner arguments: session: *session0 state: starting - description: "explicitly create collection using create command" runOnRequirements: - minServerVersion: "4.3.4" topologies: [ replicaset, sharded ] operations: - name: dropCollection object: *database0 arguments: collection: *collection0Name - name: startTransaction object: *session0 - name: createCollection object: *database0 arguments: session: *session0 collection: *collection0Name - name: assertCollectionNotExists object: testRunner arguments: databaseName: *database0Name collectionName: *collection0Name - name: commitTransaction object: *session0 - name: assertCollectionExists object: testRunner arguments: databaseName: *database0Name collectionName: *collection0Name expectEvents: - client: *client0 events: - commandStartedEvent: command: drop: *collection0Name writeConcern: { $$exists: false } commandName: drop databaseName: *database0Name - commandStartedEvent: command: create: *collection0Name lsid: { $$sessionLsid: *session0 } txnNumber: 1 startTransaction: true autocommit: false writeConcern: { $$exists: false } commandName: create databaseName: *database0Name - commandStartedEvent: command: commitTransaction: 1 lsid: { $$sessionLsid: *session0 } txnNumber: 1 startTransaction: { $$exists: false } autocommit: false writeConcern: { $$exists: false } commandName: commitTransaction databaseName: admin - description: "create index on a non-existing collection" runOnRequirements: - minServerVersion: "4.3.4" 
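    # Creating collections and indexes inside a transaction requires server
    # 4.3.4 or newer, hence this gate.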
topologies: [ replicaset, sharded ] operations: - name: dropCollection object: *database0 arguments: collection: *collection0Name - name: startTransaction object: *session0 - name: createIndex object: *collection0 arguments: session: *session0 name: &indexName "x_1" keys: { x: 1 } - name: assertIndexNotExists object: testRunner arguments: databaseName: *database0Name collectionName: *collection0Name indexName: *indexName - name: commitTransaction object: *session0 - name: assertIndexExists object: testRunner arguments: databaseName: *database0Name collectionName: *collection0Name indexName: *indexName expectEvents: - client: *client0 events: - commandStartedEvent: command: drop: *collection0Name writeConcern: { $$exists: false } commandName: drop databaseName: *database0Name - commandStartedEvent: command: createIndexes: *collection0Name indexes: - name: *indexName key: { x: 1 } lsid: { $$sessionLsid: *session0 } txnNumber: 1 startTransaction: true autocommit: false writeConcern: { $$exists: false } commandName: createIndexes databaseName: *database0Name - commandStartedEvent: command: commitTransaction: 1 lsid: { $$sessionLsid: *session0 } txnNumber: 1 startTransaction: { $$exists: false } autocommit: false writeConcern: { $$exists: false } commandName: commitTransaction databaseName: admin mongo-ruby-driver-2.21.3/spec/spec_tests/data/uri_options/000077500000000000000000000000001505113246500235615ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/uri_options/auth-options.yml000066400000000000000000000034351505113246500267430ustar00rootroot00000000000000tests: - description: "Valid auth options are parsed correctly (GSSAPI)" uri: "mongodb://foo:bar@example.com/?authMechanism=GSSAPI&authMechanismProperties=SERVICE_NAME:other,CANONICALIZE_HOST_NAME:true&authSource=$external" valid: true warning: false hosts: ~ auth: ~ options: authMechanism: "GSSAPI" authMechanismProperties: SERVICE_NAME: "other" CANONICALIZE_HOST_NAME: true authSource: "$external" - description: "Mixed case in auth mechanism properties is preserved" uri: "mongodb://foo:bar@example.com/?authMechanism=GSSAPI&authMechanismProperties=PropertyName:PropertyValue&authSource=$external" valid: true warning: false hosts: ~ auth: ~ options: authMechanism: "GSSAPI" authMechanismProperties: PropertyName: PropertyValue service_name: mongodb authSource: "$external" - description: "Auth mechanism properties are all invalid" uri: "mongodb://foo:bar@example.com/?authMechanism=GSSAPI&authMechanismProperties=PropertyName&authSource=$external" valid: true warning: true hosts: ~ auth: ~ options: authMechanism: "GSSAPI" authMechanismProperties: service_name: mongodb authSource: "$external" - description: "Valid auth options are parsed correctly (SCRAM-SHA-1)" uri: "mongodb://foo:bar@example.com/?authMechanism=SCRAM-SHA-1&authSource=authSourceDB" valid: true warning: false hosts: ~ auth: ~ options: authMechanism: "SCRAM-SHA-1" authSource: "authSourceDB" mongo-ruby-driver-2.21.3/spec/spec_tests/data/uri_options/compression-options.yml000066400000000000000000000030261505113246500303370ustar00rootroot00000000000000tests: - description: "Valid compression options are parsed correctly" uri: "mongodb://example.com/?compressors=zlib&zlibCompressionLevel=9" valid: true warning: false hosts: ~ auth: ~ options: compressors: - "zlib" zlibCompressionLevel: 9 - description: "Multiple compressors are parsed correctly" uri: "mongodb://example.com/?compressors=snappy,zlib" valid: true warning: false hosts: ~ auth: ~ options: compressors: 
- "snappy" - "zlib" - description: "Non-numeric zlibCompressionLevel causes a warning" uri: "mongodb://example.com/?compressors=zlib&zlibCompressionLevel=invalid" valid: true warning: true hosts: ~ auth: ~ # https://jira.mongodb.org/browse/DRIVERS-1368 options: ~ - description: "Too low zlibCompressionLevel causes a warning" uri: "mongodb://example.com/?compressors=zlib&zlibCompressionLevel=-2" valid: true warning: true hosts: ~ auth: ~ # https://jira.mongodb.org/browse/DRIVERS-1368 options: ~ - description: "Too high zlibCompressionLevel causes a warning" uri: "mongodb://example.com/?compressors=zlib&zlibCompressionLevel=10" valid: true warning: true hosts: ~ auth: ~ # https://jira.mongodb.org/browse/DRIVERS-1368 options: ~ mongo-ruby-driver-2.21.3/spec/spec_tests/data/uri_options/concern-options.yml000066400000000000000000000031231505113246500274230ustar00rootroot00000000000000tests: - description: "Valid read and write concern are parsed correctly" uri: "mongodb://example.com/?readConcernLevel=majority&w=5&wTimeoutMS=30000&journal=false" valid: true warning: false hosts: ~ auth: ~ options: readConcernLevel: "majority" w: 5 wTimeoutMS: 30000 journal: false - description: "Arbitrary string readConcernLevel does not cause a warning" uri: "mongodb://example.com/?readConcernLevel=arbitraryButStillValid" valid: true warning: false hosts: ~ auth: ~ options: readConcernLevel: "arbitraryButStillValid" - description: "Arbitrary string w doesn't cause a warning" uri: "mongodb://example.com/?w=arbitraryButStillValid" valid: true warning: false hosts: ~ auth: ~ options: w: "arbitraryButStillValid" - description: "Non-numeric wTimeoutMS causes a warning" uri: "mongodb://example.com/?wTimeoutMS=invalid" valid: true warning: true hosts: ~ auth: ~ options: {} - description: "Too low wTimeoutMS causes a warning" uri: "mongodb://example.com/?wTimeoutMS=-2" valid: true warning: true hosts: ~ auth: ~ options: {} - description: "Invalid journal causes a warning" uri: "mongodb://example.com/?journal=invalid" valid: true warning: true hosts: ~ auth: ~ options: {} mongo-ruby-driver-2.21.3/spec/spec_tests/data/uri_options/connection-options.yml000066400000000000000000000136521505113246500301430ustar00rootroot00000000000000tests: - description: "Valid connection and timeout options are parsed correctly" uri: "mongodb://example.com/?appname=URI-OPTIONS-SPEC-TEST&connectTimeoutMS=20000&heartbeatFrequencyMS=5000&localThresholdMS=3000&maxIdleTimeMS=50000&replicaSet=uri-options-spec&retryWrites=true&serverSelectionTimeoutMS=15000&socketTimeoutMS=7500" valid: true warning: false hosts: ~ auth: ~ options: appname: "URI-OPTIONS-SPEC-TEST" connectTimeoutMS: 20000 heartbeatFrequencyMS: 5000 localThresholdMS: 3000 maxIdleTimeMS: 50000 replicaSet: "uri-options-spec" retryWrites: true serverSelectionTimeoutMS: 15000 socketTimeoutMS: 7500 - description: "Non-numeric connectTimeoutMS causes a warning" uri: "mongodb://example.com/?connectTimeoutMS=invalid" valid: true warning: true hosts: ~ auth: ~ options: {} - description: "Too low connectTimeoutMS causes a warning" uri: "mongodb://example.com/?connectTimeoutMS=-2" valid: true warning: true hosts: ~ auth: ~ options: {} - description: "Non-numeric heartbeatFrequencyMS causes a warning" uri: "mongodb://example.com/?heartbeatFrequencyMS=invalid" valid: true warning: true hosts: ~ auth: ~ options: {} - description: "Too low heartbeatFrequencyMS causes a warning" uri: "mongodb://example.com/?heartbeatFrequencyMS=-2" valid: true warning: true hosts: ~ auth: ~ options: {} - 
description: "Non-numeric localThresholdMS causes a warning" uri: "mongodb://example.com/?localThresholdMS=invalid" valid: true warning: true hosts: ~ auth: ~ options: {} - description: "Too low localThresholdMS causes a warning" uri: "mongodb://example.com/?localThresholdMS=-2" valid: true warning: true hosts: ~ auth: ~ options: {} - description: "Invalid retryWrites causes a warning" uri: "mongodb://example.com/?retryWrites=invalid" valid: true warning: true hosts: ~ auth: ~ options: {} - description: "Non-numeric serverSelectionTimeoutMS causes a warning" uri: "mongodb://example.com/?serverSelectionTimeoutMS=invalid" valid: true warning: true hosts: ~ auth: ~ options: {} - description: "Too low serverSelectionTimeoutMS causes a warning" uri: "mongodb://example.com/?serverSelectionTimeoutMS=-2" valid: true warning: true hosts: ~ auth: ~ options: {} - description: "Non-numeric socketTimeoutMS causes a warning" uri: "mongodb://example.com/?socketTimeoutMS=invalid" valid: true warning: true hosts: ~ auth: ~ options: {} - description: "Too low socketTimeoutMS causes a warning" uri: "mongodb://example.com/?socketTimeoutMS=-2" valid: true warning: true hosts: ~ auth: ~ options: {} - description: directConnection=true uri: "mongodb://example.com/?directConnection=true" valid: true warning: false hosts: ~ auth: ~ options: directConnection: true - description: directConnection=true with multiple seeds uri: "mongodb://example1.com,example2.com/?directConnection=true" valid: false warning: false hosts: ~ auth: ~ - description: directConnection=false uri: "mongodb://example.com/?directConnection=false" valid: true warning: false hosts: ~ auth: ~ options: directConnection: false - description: directConnection=false with multiple seeds uri: "mongodb://example1.com,example2.com/?directConnection=false" valid: true warning: false hosts: ~ auth: ~ options: directConnection: false - description: Invalid directConnection value uri: "mongodb://example.com/?directConnection=invalid" valid: true warning: true hosts: ~ auth: ~ options: {} - description: loadBalanced=true uri: "mongodb://example.com/?loadBalanced=true" valid: true warning: false hosts: ~ auth: ~ options: loadBalanced: true - description: loadBalanced=true with directConnection=false uri: "mongodb://example.com/?loadBalanced=true&directConnection=false" valid: true warning: false hosts: ~ auth: ~ options: loadBalanced: true directConnection: false - description: loadBalanced=false uri: "mongodb://example.com/?loadBalanced=false" valid: true warning: false hosts: ~ auth: ~ options: loadBalanced: false - description: Invalid loadBalanced value uri: "mongodb://example.com/?loadBalanced=1" valid: true warning: true hosts: ~ auth: ~ options: {} - description: loadBalanced=true with multiple hosts causes an error uri: "mongodb://example1,example2/?loadBalanced=true" valid: false warning: false hosts: ~ auth: ~ options: {} - description: loadBalanced=true with directConnection=true causes an error uri: "mongodb://example.com/?loadBalanced=true&directConnection=true" valid: false warning: false hosts: ~ auth: ~ options: {} - description: loadBalanced=true with replicaSet causes an error uri: "mongodb://example.com/?loadBalanced=true&replicaSet=replset" valid: false warning: false hosts: ~ auth: ~ options: {} mongo-ruby-driver-2.21.3/spec/spec_tests/data/uri_options/connection-pool-options.yml000066400000000000000000000013141505113246500311020ustar00rootroot00000000000000tests: - description: "Valid connection pool options are parsed correctly" uri: 
"mongodb://example.com/?maxIdleTimeMS=50000" valid: true warning: false hosts: ~ auth: ~ options: maxIdleTimeMS: 50000 - description: "Non-numeric maxIdleTimeMS causes a warning" uri: "mongodb://example.com/?maxIdleTimeMS=invalid" valid: true warning: true hosts: ~ auth: ~ options: {} - description: "Too low maxIdleTimeMS causes a warning" uri: "mongodb://example.com/?maxIdleTimeMS=-2" valid: true warning: true hosts: ~ auth: ~ options: {} mongo-ruby-driver-2.21.3/spec/spec_tests/data/uri_options/read-preference-options.yml000066400000000000000000000037741505113246500310370ustar00rootroot00000000000000tests: - description: "Valid read preference options are parsed correctly" uri: "mongodb://example.com/?readPreference=primaryPreferred&readPreferenceTags=dc:ny,rack:1&maxStalenessSeconds=120&readPreferenceTags=dc:ny" valid: true warning: false hosts: ~ auth: ~ options: readPreference: "primaryPreferred" readPreferenceTags: - dc: "ny" rack: "1" - dc: "ny" maxStalenessSeconds: 120 - description: "Case is preserved in read preference tag names and values" uri: "mongodb://example.com/?readPreference=secondary&readPreferenceTags=DataCenter:NewYork" valid: true warning: false hosts: ~ auth: ~ options: readPreference: "secondary" readPreferenceTags: - DataCenter: NewYork - description: "Invalid readPreferenceTags causes a warning" uri: "mongodb://example.com/?readPreferenceTags=invalid" valid: true warning: true hosts: ~ auth: ~ options: {} # https://jira.mongodb.org/browse/DRIVERS-1369 - description: "Valid and invalid readPreferenceTags mix" uri: "mongodb://example.com/?readPreferenceTags=a:b,invalid" valid: true warning: true hosts: ~ auth: ~ options: readPreferenceTags: - a: b - description: "Non-numeric maxStalenessSeconds causes a warning" uri: "mongodb://example.com/?maxStalenessSeconds=invalid" valid: true warning: true hosts: ~ auth: ~ options: {} - description: "Too low maxStalenessSeconds causes a warning" uri: "mongodb://example.com/?maxStalenessSeconds=-2" valid: true warning: true hosts: ~ auth: ~ options: {} mongo-ruby-driver-2.21.3/spec/spec_tests/data/uri_options/ruby-auth-options.yml000066400000000000000000000005661505113246500277240ustar00rootroot00000000000000tests: - description: Equal sign in auth mechanism properties uri: "mongodb://foo:bar@example.com/?authMechanismProperties=foo:a=bar&authMechanism=MONGODB-AWS" valid: true warning: false hosts: ~ auth: ~ options: authMechanismProperties: foo: a=bar authMechanism: MONGODB-AWS mongo-ruby-driver-2.21.3/spec/spec_tests/data/uri_options/ruby-connection-options.yml000066400000000000000000000026771505113246500311270ustar00rootroot00000000000000tests: - description: directConnection=true and connect=direct uri: "mongodb://example.com/?directConnection=true&connect=direct" valid: true warning: false hosts: ~ auth: ~ options: directConnection: true connect: direct - description: directConnection=false and connect=direct uri: "mongodb://example.com/?directConnection=false&connect=direct" valid: false warning: false hosts: ~ auth: ~ - description: directConnection=true and connect=replica_set uri: "mongodb://example.com/?directConnection=true&connect=replica_set&replicaSet=foo" valid: false warning: false hosts: ~ auth: ~ - description: directConnection=false and connect=replica_set uri: "mongodb://example.com/?directConnection=false&connect=replica_set&replicaSet=foo" valid: true warning: false hosts: ~ auth: ~ options: directConnection: false connect: replica_set replicaSet: foo - description: directConnection=true and 
connect=sharded uri: "mongodb://example.com/?directConnection=true&connect=sharded" valid: false warning: false hosts: ~ auth: ~ - description: directConnection=false and connect=sharded uri: "mongodb://example.com/?directConnection=false&connect=sharded" valid: true warning: false hosts: ~ auth: ~ options: directConnection: false connect: sharded mongo-ruby-driver-2.21.3/spec/spec_tests/data/uri_options/srv-options.yml000066400000000000000000000057201505113246500266130ustar00rootroot00000000000000tests: - description: "SRV URI with custom srvServiceName" uri: "mongodb+srv://test22.test.build.10gen.cc/?srvServiceName=customname" valid: true warning: false hosts: ~ auth: ~ options: srvServiceName: "customname" tls: true - description: "Non-SRV URI with custom srvServiceName" uri: "mongodb://example.com/?srvServiceName=customname" valid: false warning: false hosts: ~ auth: ~ options: {} - description: "SRV URI with srvMaxHosts" uri: "mongodb+srv://test1.test.build.10gen.cc/?srvMaxHosts=2" valid: true warning: false hosts: ~ auth: ~ options: srvMaxHosts: 2 tls: true - description: "SRV URI with negative integer for srvMaxHosts" uri: "mongodb+srv://test1.test.build.10gen.cc/?srvMaxHosts=-1" valid: true warning: true hosts: ~ auth: ~ options: tls: true - description: "SRV URI with invalid type for srvMaxHosts" uri: "mongodb+srv://test1.test.build.10gen.cc/?srvMaxHosts=foo" valid: true warning: true hosts: ~ auth: ~ options: tls: true - description: "Non-SRV URI with srvMaxHosts" uri: "mongodb://example.com/?srvMaxHosts=2" valid: false warning: false hosts: ~ auth: ~ options: {} # Note: Testing URI validation for srvMaxHosts conflicting with either # loadBalanced=true or replicaSet specified via TXT records is covered by # the Initial DNS Seedlist Discovery test suite.
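# A positive srvMaxHosts caps how many SRV-discovered hosts are used, so it
# conflicts with options that need either the full topology (replicaSet) or
# exactly one host (loadBalanced=true), as the tests below assert. Setting
# srvMaxHosts=0 disables the cap entirely, which is why it may be combined
# with either option.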
- description: "SRV URI with positive srvMaxHosts and replicaSet" uri: "mongodb+srv://test1.test.build.10gen.cc/?srvMaxHosts=2&replicaSet=foo" valid: false warning: false hosts: ~ auth: ~ options: {} - description: "SRV URI with positive srvMaxHosts and loadBalanced=true" uri: "mongodb+srv://test1.test.build.10gen.cc/?srvMaxHosts=2&loadBalanced=true" valid: false warning: false hosts: ~ auth: ~ options: {} - description: "SRV URI with positive srvMaxHosts and loadBalanced=false" uri: "mongodb+srv://test1.test.build.10gen.cc/?srvMaxHosts=2&loadBalanced=false" valid: true warning: false hosts: ~ auth: ~ options: loadBalanced: false srvMaxHosts: 2 tls: true - description: "SRV URI with srvMaxHosts=0 and replicaSet" uri: "mongodb+srv://test1.test.build.10gen.cc/?srvMaxHosts=0&replicaSet=foo" valid: true warning: false hosts: ~ auth: ~ options: replicaSet: foo srvMaxHosts: 0 tls: true - description: "SRV URI with srvMaxHosts=0 and loadBalanced=true" uri: "mongodb+srv://test3.test.build.10gen.cc/?srvMaxHosts=0&loadBalanced=true" valid: true warning: false hosts: ~ auth: ~ options: loadBalanced: true srvMaxHosts: 0 tls: truemongo-ruby-driver-2.21.3/spec/spec_tests/data/uri_options/tls-options.yml000066400000000000000000000307261505113246500266070ustar00rootroot00000000000000tests: - description: "Valid required tls options are parsed correctly" uri: "mongodb://example.com/?tls=true&tlsCAFile=ca.pem&tlsCertificateKeyFile=cert.pem" valid: true warning: false hosts: ~ auth: ~ options: tls: true tlsCAFile: "ca.pem" tlsCertificateKeyFile: "cert.pem" - description: "Valid tlsCertificateKeyFilePassword is parsed correctly" uri: "mongodb://example.com/?tlsCertificateKeyFilePassword=hunter2" valid: true warning: false hosts: ~ auth: ~ options: tlsCertificateKeyFilePassword: "hunter2" - description: "Invalid tlsAllowInvalidCertificates causes a warning" uri: "mongodb://example.com/?tlsAllowInvalidCertificates=invalid" valid: true warning: true hosts: ~ auth: ~ options: {} - description: "tlsAllowInvalidCertificates is parsed correctly" uri: "mongodb://example.com/?tlsAllowInvalidCertificates=true" valid: true warning: false hosts: ~ auth: ~ options: tlsAllowInvalidCertificates: true - description: "Invalid tlsAllowInvalidCertificates causes a warning" uri: "mongodb://example.com/?tlsAllowInvalidCertificates=invalid" valid: true warning: true hosts: ~ auth: ~ options: {} - description: "tlsAllowInvalidHostnames is parsed correctly" uri: "mongodb://example.com/?tlsAllowInvalidHostnames=true" valid: true warning: false hosts: ~ auth: ~ options: tlsAllowInvalidHostnames: true - description: "Invalid tlsAllowInvalidHostnames causes a warning" uri: "mongodb://example.com/?tlsAllowInvalidHostnames=invalid" valid: true warning: true hosts: ~ auth: ~ options: {} - description: "tlsInsecure is parsed correctly" uri: "mongodb://example.com/?tlsInsecure=true" valid: true warning: false hosts: ~ auth: ~ options: tlsInsecure: true - description: "Invalid tlsInsecure causes a warning" uri: "mongodb://example.com/?tlsInsecure=invalid" valid: true warning: true hosts: ~ auth: ~ options: {} - description: "tlsInsecure and tlsAllowInvalidCertificates both present (and true) raises an error" uri: "mongodb://example.com/?tlsInsecure=true&tlsAllowInvalidCertificates=true" valid: false warning: false hosts: ~ auth: ~ options: {} - description: "tlsInsecure and tlsAllowInvalidCertificates both present (and false) raises an error" uri: "mongodb://example.com/?tlsInsecure=false&tlsAllowInvalidCertificates=false" valid: false 
warning: false hosts: ~ auth: ~ options: {} - description: "tlsAllowInvalidCertificates and tlsInsecure both present (and true) raises an error" uri: "mongodb://example.com/?tlsAllowInvalidCertificates=true&tlsInsecure=true" valid: false warning: false hosts: ~ auth: ~ options: {} - description: "tlsAllowInvalidCertificates and tlsInsecure both present (and false) raises an error" uri: "mongodb://example.com/?tlsAllowInvalidCertificates=false&tlsInsecure=false" valid: false warning: false hosts: ~ auth: ~ options: {} - description: "tlsInsecure and tlsAllowInvalidHostnames both present (and true) raises an error" uri: "mongodb://example.com/?tlsInsecure=true&tlsAllowInvalidHostnames=true" valid: false warning: false hosts: ~ auth: ~ options: {} - description: "tlsInsecure and tlsAllowInvalidHostnames both present (and false) raises an error" uri: "mongodb://example.com/?tlsInsecure=false&tlsAllowInvalidHostnames=false" valid: false warning: false hosts: ~ auth: ~ options: {} - description: "tlsAllowInvalidHostnames and tlsInsecure both present (and true) raises an error" uri: "mongodb://example.com/?tlsAllowInvalidHostnames=true&tlsInsecure=true" valid: false warning: false hosts: ~ auth: ~ options: {} - description: "tlsAllowInvalidHostnames and tlsInsecure both present (and false) raises an error" uri: "mongodb://example.com/?tlsAllowInvalidHostnames=false&tlsInsecure=false" valid: false warning: false hosts: ~ auth: ~ options: {} - description: "tls=true and ssl=true doesn't warn" uri: "mongodb://example.com/?tls=true&ssl=true" valid: true warning: false hosts: ~ auth: ~ # https://jira.mongodb.org/browse/DRIVERS-1368 options: ~ - description: "tls=false and ssl=false doesn't warn" uri: "mongodb://example.com/?tls=false&ssl=false" valid: true warning: false hosts: ~ auth: ~ # https://jira.mongodb.org/browse/DRIVERS-1368 options: ~ - description: "ssl=true and tls=true doesn't warn" uri: "mongodb://example.com/?ssl=true&tls=true" valid: true warning: false hosts: ~ auth: ~ # https://jira.mongodb.org/browse/DRIVERS-1368 options: ~ - description: "ssl=false and tls=false doesn't warn" uri: "mongodb://example.com/?ssl=false&tls=false" valid: true warning: false hosts: ~ auth: ~ # https://jira.mongodb.org/browse/DRIVERS-1368 options: ~ - description: "tls=false and ssl=true raises error" uri: "mongodb://example.com/?tls=false&ssl=true" valid: false warning: false hosts: ~ auth: ~ options: {} - description: "tls=true and ssl=false raises error" uri: "mongodb://example.com/?tls=true&ssl=false" valid: false warning: false hosts: ~ auth: ~ options: {} - description: "ssl=false and tls=true raises error" uri: "mongodb://example.com/?ssl=false&tls=true" valid: false warning: false hosts: ~ auth: ~ options: {} - description: "ssl=true and tls=false raises error" uri: "mongodb://example.com/?ssl=true&tls=false" valid: false warning: false hosts: ~ auth: ~ options: {} - description: "tlsDisableOCSPEndpointCheck can be set to true" uri: "mongodb://example.com/?tls=true&tlsDisableOCSPEndpointCheck=true" valid: true warning: false hosts: ~ auth: ~ options: tls: true tlsDisableOCSPEndpointCheck: true - description: "tlsDisableOCSPEndpointCheck can be set to false" uri: "mongodb://example.com/?tls=true&tlsDisableOCSPEndpointCheck=false" valid: true warning: false hosts: ~ auth: ~ options: tls: true tlsDisableOCSPEndpointCheck: false # 4 permutations of [tlsInsecure=true/false, tlsDisableOCSPEndpointCheck=true/false] - description: "tlsInsecure and tlsDisableOCSPEndpointCheck both present (and true) raises 
an error" uri: "mongodb://example.com/?tlsInsecure=true&tlsDisableOCSPEndpointCheck=true" valid: false warning: false hosts: ~ auth: ~ options: {} - description: "tlsInsecure=true and tlsDisableOCSPEndpointCheck=false raises an error" uri: "mongodb://example.com/?tlsInsecure=true&tlsDisableOCSPEndpointCheck=false" valid: false warning: false hosts: ~ auth: ~ options: {} - description: "tlsInsecure=false and tlsDisableOCSPEndpointCheck=true raises an error" uri: "mongodb://example.com/?tlsInsecure=false&tlsDisableOCSPEndpointCheck=true" valid: false warning: false hosts: ~ auth: ~ options: {} - description: "tlsInsecure and tlsDisableOCSPEndpointCheck both present (and false) raises an error" uri: "mongodb://example.com/?tlsInsecure=false&tlsDisableOCSPEndpointCheck=false" valid: false warning: false hosts: ~ auth: ~ options: {} # 4 permutations of [tlsDisableOCSPEndpointCheck=true/false, tlsInsecure=true/false] - description: "tlsDisableOCSPEndpointCheck and tlsInsecure both present (and true) raises an error" uri: "mongodb://example.com/?tlsDisableOCSPEndpointCheck=true&tlsInsecure=true" valid: false warning: false hosts: ~ auth: ~ options: {} - description: "tlsDisableOCSPEndpointCheck=true and tlsInsecure=false raises an error" uri: "mongodb://example.com/?tlsDisableOCSPEndpointCheck=true&tlsInsecure=false" valid: false warning: false hosts: ~ auth: ~ options: {} - description: "tlsDisableOCSPEndpointCheck=false and tlsInsecure=true raises an error" uri: "mongodb://example.com/?tlsDisableOCSPEndpointCheck=false&tlsInsecure=true" valid: false warning: false hosts: ~ auth: ~ options: {} - description: "tlsDisableOCSPEndpointCheck and tlsInsecure both present (and false) raises an error" uri: "mongodb://example.com/?tlsDisableOCSPEndpointCheck=false&tlsInsecure=false" valid: false warning: false hosts: ~ auth: ~ options: {} # 4 permutations of [tlsAllowInvalidCertificates=true/false, tlsDisableOCSPEndpointCheck=true/false] - description: "tlsAllowInvalidCertificates and tlsDisableOCSPEndpointCheck both present (and true) raises an error" uri: "mongodb://example.com/?tlsAllowInvalidCertificates=true&tlsDisableOCSPEndpointCheck=true" valid: false warning: false hosts: ~ auth: ~ options: {} - description: "tlsAllowInvalidCertificates=true and tlsDisableOCSPEndpointCheck=false raises an error" uri: "mongodb://example.com/?tlsAllowInvalidCertificates=true&tlsDisableOCSPEndpointCheck=false" valid: false warning: false hosts: ~ auth: ~ options: {} - description: "tlsAllowInvalidCertificates=false and tlsDisableOCSPEndpointCheck=true raises an error" uri: "mongodb://example.com/?tlsAllowInvalidCertificates=false&tlsDisableOCSPEndpointCheck=true" valid: false warning: false hosts: ~ auth: ~ options: {} - description: "tlsAllowInvalidCertificates and tlsDisableOCSPEndpointCheck both present (and false) raises an error" uri: "mongodb://example.com/?tlsAllowInvalidCertificates=false&tlsDisableOCSPEndpointCheck=false" valid: false warning: false hosts: ~ auth: ~ options: {} # 4 permutations of [tlsDisableOCSPEndpointCheck=true/false, tlsAllowInvalidCertificates=true/false] - description: "tlsDisableOCSPEndpointCheck and tlsAllowInvalidCertificates both present (and true) raises an error" uri: "mongodb://example.com/?tlsDisableOCSPEndpointCheck=true&tlsAllowInvalidCertificates=true" valid: false warning: false hosts: ~ auth: ~ options: {} - description: "tlsDisableOCSPEndpointCheck=true and tlsAllowInvalidCertificates=false raises an error" uri: 
"mongodb://example.com/?tlsDisableOCSPEndpointCheck=true&tlsAllowInvalidCertificates=false" valid: false warning: false hosts: ~ auth: ~ options: {} - description: "tlsDisableOCSPEndpointCheck=false and tlsAllowInvalidCertificates=true raises an error" uri: "mongodb://example.com/?tlsDisableOCSPEndpointCheck=false&tlsAllowInvalidCertificates=true" valid: false warning: false hosts: ~ auth: ~ options: {} - description: "tlsDisableOCSPEndpointCheck and tlsAllowInvalidCertificates both present (and false) raises an error" uri: "mongodb://example.com/?tlsDisableOCSPEndpointCheck=false&tlsAllowInvalidCertificates=false" valid: false warning: false hosts: ~ auth: ~ options: {} mongo-ruby-driver-2.21.3/spec/spec_tests/data/versioned_api/000077500000000000000000000000001505113246500240365ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/versioned_api/crud-api-version-1-strict.yml000066400000000000000000000301221505113246500314120ustar00rootroot00000000000000description: "CRUD Api Version 1 (strict)" schemaVersion: "1.4" runOnRequirements: - minServerVersion: "4.9" createEntities: - client: id: &client client observeEvents: - commandStartedEvent serverApi: version: "1" strict: true - database: id: &database database client: *client databaseName: &databaseName versioned-api-tests - database: id: &adminDatabase adminDatabase client: *client databaseName: &adminDatabaseName admin - collection: id: &collection collection database: *database collectionName: &collectionName test _yamlAnchors: versions: - &expectedApiVersion apiVersion: "1" apiStrict: true apiDeprecationErrors: { $$unsetOrMatches: false } initialData: - collectionName: *collectionName databaseName: *databaseName documents: - { _id: 1, x: 11 } - { _id: 2, x: 22 } - { _id: 3, x: 33 } - { _id: 4, x: 44 } - { _id: 5, x: 55 } tests: - description: "aggregate on collection appends declared API version" operations: - name: aggregate object: *collection arguments: pipeline: &pipeline - $sort: { x : 1 } - $match: { _id: { $gt: 1 } } expectEvents: - client: *client events: - commandStartedEvent: command: aggregate: *collectionName pipeline: *pipeline <<: *expectedApiVersion - description: "aggregate on database appends declared API version" runOnRequirements: # serverless does not support either of the current database-level aggregation stages ($listLocalSessions and # $currentOp) - serverless: "forbid" operations: - name: aggregate object: *adminDatabase arguments: pipeline: &pipeline - $listLocalSessions: {} - $limit: 1 expectError: errorCodeName: "APIStrictError" expectEvents: - client: *client events: - commandStartedEvent: command: aggregate: 1 pipeline: *pipeline <<: *expectedApiVersion - description: "bulkWrite appends declared API version" operations: - name: bulkWrite object: *collection arguments: requests: - insertOne: document: { _id: 6, x: 66 } - updateOne: filter: { _id: 2 } update: { $inc: { x: 1 } } - deleteMany: filter: { x: { $nin: [ 24, 34 ] } } - updateMany: filter: { _id: { $gt: 1 } } update: { $inc: { x: 1 } } - deleteOne: filter: { _id: 7 } - replaceOne: filter: { _id: 4 } replacement: { _id: 4, x: 44 } upsert: true ordered: true expectEvents: - client: *client events: - commandStartedEvent: command: insert: *collectionName documents: - { _id: 6, x: 66 } <<: *expectedApiVersion - commandStartedEvent: command: update: *collectionName updates: - q: { _id: 2 } u: { $inc: { x: 1 } } multi: { $$unsetOrMatches: false } upsert: { $$unsetOrMatches: false } <<: *expectedApiVersion - commandStartedEvent: 
command: delete: *collectionName deletes: - { q: { x: { $nin: [ 24, 34 ] } }, limit: 0 } <<: *expectedApiVersion - commandStartedEvent: command: update: *collectionName updates: - q: { _id: { $gt: 1 } } u: { $inc: { x: 1 } } multi: true upsert: { $$unsetOrMatches: false } <<: *expectedApiVersion - commandStartedEvent: command: delete: *collectionName deletes: - { q: { _id: 7 }, limit: 1 } <<: *expectedApiVersion - commandStartedEvent: command: update: *collectionName updates: - q: { _id: 4 } u: { _id: 4, x: 44 } multi: { $$unsetOrMatches: false } upsert: true <<: *expectedApiVersion - description: "countDocuments appends declared API version" operations: - name: countDocuments object: *collection arguments: filter: &filter x : { $gt: 11 } expectEvents: - client: *client events: - commandStartedEvent: command: aggregate: *collectionName pipeline: - { $match: *filter } - { $group: { _id: 1, n: { $sum: 1 } } } <<: *expectedApiVersion - description: "deleteMany appends declared API version" operations: - name: deleteMany object: *collection arguments: filter: { x: { $nin: [ 24, 34 ] } } expectEvents: - client: *client events: - commandStartedEvent: command: delete: *collectionName deletes: - { q: { x: { $nin: [ 24, 34 ] } }, limit: 0 } <<: *expectedApiVersion - description: "deleteOne appends declared API version" operations: - name: deleteOne object: *collection arguments: filter: { _id: 7 } expectEvents: - client: *client events: - commandStartedEvent: command: delete: *collectionName deletes: - { q: { _id: 7 }, limit: 1 } <<: *expectedApiVersion # distinct will fail until drivers replace it with an alternative # implementation - description: "distinct appends declared API version" operations: - name: distinct object: *collection arguments: fieldName: x filter: {} expectError: isError: true errorContains: "command distinct is not in API Version 1" errorCodeName: "APIStrictError" expectEvents: - client: *client events: - commandStartedEvent: command: distinct: *collectionName key: x <<: *expectedApiVersion - description: "estimatedDocumentCount appends declared API version" # See: https://jira.mongodb.org/browse/SERVER-63850 runOnRequirements: - minServerVersion: "5.0.9" maxServerVersion: "5.0.99" - minServerVersion: "5.3.2" operations: - name: estimatedDocumentCount object: *collection arguments: {} expectEvents: - client: *client events: - commandStartedEvent: command: count: *collectionName <<: *expectedApiVersion - description: "find and getMore append API version" operations: - name: find object: *collection arguments: filter: {} sort: { _id: 1 } batchSize: 3 expectResult: - { _id: 1, x: 11 } - { _id: 2, x: 22 } - { _id: 3, x: 33 } - { _id: 4, x: 44 } - { _id: 5, x: 55 } expectEvents: - client: *client events: - commandStartedEvent: command: find: *collectionName <<: *expectedApiVersion - commandStartedEvent: command: getMore: { $$type: [ int, long ] } <<: *expectedApiVersion - description: "findOneAndDelete appends declared API version" operations: - name: findOneAndDelete object: *collection arguments: filter: &filter { _id: 1 } expectEvents: - client: *client events: - commandStartedEvent: command: findAndModify: *collectionName query: *filter remove: true <<: *expectedApiVersion - description: "findOneAndReplace appends declared API version" operations: - name: findOneAndReplace object: *collection arguments: filter: &filter { _id: 1 } replacement: &replacement { x: 33 } expectEvents: - client: *client events: - commandStartedEvent: command: findAndModify: *collectionName query: 
*filter update: *replacement <<: *expectedApiVersion - description: "findOneAndUpdate appends declared API version" operations: - name: findOneAndUpdate object: collection arguments: filter: &filter { _id: 1 } update: &update { $inc: { x: 1 } } expectEvents: - client: *client events: - commandStartedEvent: command: findAndModify: *collectionName query: *filter update: *update <<: *expectedApiVersion - description: "insertMany appends declared API version" operations: - name: insertMany object: *collection arguments: documents: - { _id: 6, x: 66 } - { _id: 7, x: 77 } expectEvents: - client: *client events: - commandStartedEvent: command: insert: *collectionName documents: - { _id: 6, x: 66 } - { _id: 7, x: 77 } <<: *expectedApiVersion - description: "insertOne appends declared API version" operations: - name: insertOne object: *collection arguments: document: { _id: 6, x: 66 } expectEvents: - client: *client events: - commandStartedEvent: command: insert: *collectionName documents: - { _id: 6, x: 66 } <<: *expectedApiVersion - description: "replaceOne appends declared API version" operations: - name: replaceOne object: *collection arguments: filter: { _id: 4 } replacement: { _id: 4, x: 44 } upsert: true expectEvents: - client: *client events: - commandStartedEvent: command: update: *collectionName updates: - q: { _id: 4 } u: { _id: 4, x: 44 } multi: { $$unsetOrMatches: false } upsert: true <<: *expectedApiVersion - description: "updateMany appends declared API version" operations: - name: updateMany object: *collection arguments: filter: { _id: { $gt: 1 } } update: { $inc: { x: 1 } } expectEvents: - client: *client events: - commandStartedEvent: command: update: *collectionName updates: - q: { _id: { $gt: 1 } } u: { $inc: { x: 1 } } multi: true upsert: { $$unsetOrMatches: false } <<: *expectedApiVersion - description: "updateOne appends declared API version" operations: - name: updateOne object: *collection arguments: filter: { _id: 2 } update: { $inc: { x: 1 } } expectEvents: - client: *client events: - commandStartedEvent: command: update: *collectionName updates: - q: { _id: 2 } u: { $inc: { x: 1 } } multi: { $$unsetOrMatches: false } upsert: { $$unsetOrMatches: false } <<: *expectedApiVersion mongo-ruby-driver-2.21.3/spec/spec_tests/data/versioned_api/crud-api-version-1.yml000066400000000000000000000276421505113246500301210ustar00rootroot00000000000000description: "CRUD Api Version 1" schemaVersion: "1.4" runOnRequirements: - minServerVersion: "4.9" createEntities: - client: id: &client client observeEvents: - commandStartedEvent serverApi: version: "1" # Deprecation errors is set to true to ensure that drivers don't use any # deprecated server API in their logic. 
deprecationErrors: true - database: id: &database database client: *client databaseName: &databaseName versioned-api-tests - database: id: &adminDatabase adminDatabase client: *client databaseName: &adminDatabaseName admin - collection: id: &collection collection database: *database collectionName: &collectionName test _yamlAnchors: versions: - &expectedApiVersion apiVersion: "1" apiStrict: { $$unsetOrMatches: false } apiDeprecationErrors: true initialData: - collectionName: *collectionName databaseName: *databaseName documents: - { _id: 1, x: 11 } - { _id: 2, x: 22 } - { _id: 3, x: 33 } - { _id: 4, x: 44 } - { _id: 5, x: 55 } tests: - description: "aggregate on collection appends declared API version" operations: - name: aggregate object: *collection arguments: pipeline: &pipeline - $sort: { x : 1 } - $match: { _id: { $gt: 1 } } expectEvents: - client: *client events: - commandStartedEvent: command: aggregate: *collectionName pipeline: *pipeline <<: *expectedApiVersion - description: "aggregate on database appends declared API version" runOnRequirements: # serverless does not support either of the current database-level aggregation stages ($listLocalSessions and # $currentOp) - serverless: forbid operations: - name: aggregate object: *adminDatabase arguments: pipeline: &pipeline - $listLocalSessions: {} - $limit: 1 expectEvents: - client: *client events: - commandStartedEvent: command: aggregate: 1 pipeline: *pipeline <<: *expectedApiVersion - description: "bulkWrite appends declared API version" operations: - name: bulkWrite object: *collection arguments: requests: - insertOne: document: { _id: 6, x: 66 } - updateOne: filter: { _id: 2 } update: { $inc: { x: 1 } } - deleteMany: filter: { x: { $nin: [ 24, 34 ] } } - updateMany: filter: { _id: { $gt: 1 } } update: { $inc: { x: 1 } } - deleteOne: filter: { _id: 7 } - replaceOne: filter: { _id: 4 } replacement: { _id: 4, x: 44 } upsert: true ordered: true expectEvents: - client: *client events: - commandStartedEvent: command: insert: *collectionName documents: - { _id: 6, x: 66 } <<: *expectedApiVersion - commandStartedEvent: command: update: *collectionName updates: - q: { _id: 2 } u: { $inc: { x: 1 } } multi: { $$unsetOrMatches: false } upsert: { $$unsetOrMatches: false } <<: *expectedApiVersion - commandStartedEvent: command: delete: *collectionName deletes: - { q: { x: { $nin: [ 24, 34 ] } }, limit: 0 } <<: *expectedApiVersion - commandStartedEvent: command: update: *collectionName updates: - q: { _id: { $gt: 1 } } u: { $inc: { x: 1 } } multi: true upsert: { $$unsetOrMatches: false } <<: *expectedApiVersion - commandStartedEvent: command: delete: *collectionName deletes: - { q: { _id: 7 }, limit: 1 } <<: *expectedApiVersion - commandStartedEvent: command: update: *collectionName updates: - q: { _id: 4 } u: { _id: 4, x: 44 } multi: { $$unsetOrMatches: false } upsert: true <<: *expectedApiVersion - description: "countDocuments appends declared API version" operations: - name: countDocuments object: *collection arguments: filter: &filter x : { $gt: 11 } expectEvents: - client: *client events: - commandStartedEvent: command: aggregate: *collectionName pipeline: - { $match: *filter } - { $group: { _id: 1, n: { $sum: 1 } } } <<: *expectedApiVersion - description: "deleteMany appends declared API version" operations: - name: deleteMany object: *collection arguments: filter: { x: { $nin: [ 24, 34 ] } } expectEvents: - client: *client events: - commandStartedEvent: command: delete: *collectionName deletes: - { q: { x: { $nin: [ 24, 34 ] } }, 
limit: 0 } <<: *expectedApiVersion - description: "deleteOne appends declared API version" operations: - name: deleteOne object: *collection arguments: filter: { _id: 7 } expectEvents: - client: *client events: - commandStartedEvent: command: delete: *collectionName deletes: - { q: { _id: 7 }, limit: 1 } <<: *expectedApiVersion - description: "distinct appends declared API version" operations: - name: distinct object: *collection arguments: fieldName: x filter: {} expectEvents: - client: *client events: - commandStartedEvent: command: distinct: *collectionName key: x <<: *expectedApiVersion - description: "estimatedDocumentCount appends declared API version" # See: https://jira.mongodb.org/browse/SERVER-63850 runOnRequirements: - minServerVersion: "5.0.9" maxServerVersion: "5.0.99" - minServerVersion: "5.3.2" operations: - name: estimatedDocumentCount object: *collection arguments: {} expectEvents: - client: *client events: - commandStartedEvent: command: count: *collectionName <<: *expectedApiVersion - description: "find and getMore append API version" operations: - name: find object: *collection arguments: filter: {} sort: { _id: 1 } batchSize: 3 expectResult: - { _id: 1, x: 11 } - { _id: 2, x: 22 } - { _id: 3, x: 33 } - { _id: 4, x: 44 } - { _id: 5, x: 55 } expectEvents: - client: *client events: - commandStartedEvent: command: find: *collectionName <<: *expectedApiVersion - commandStartedEvent: command: getMore: { $$type: [ int, long ] } <<: *expectedApiVersion - description: "findOneAndDelete appends declared API version" operations: - name: findOneAndDelete object: *collection arguments: filter: &filter { _id: 1 } expectEvents: - client: *client events: - commandStartedEvent: command: findAndModify: *collectionName query: *filter remove: true <<: *expectedApiVersion - description: "findOneAndReplace appends declared API version" operations: - name: findOneAndReplace object: *collection arguments: filter: &filter { _id: 1 } replacement: &replacement { x: 33 } expectEvents: - client: *client events: - commandStartedEvent: command: findAndModify: *collectionName query: *filter update: *replacement <<: *expectedApiVersion - description: "findOneAndUpdate appends declared API version" operations: - name: findOneAndUpdate object: collection arguments: filter: &filter { _id: 1 } update: &update { $inc: { x: 1 } } expectEvents: - client: *client events: - commandStartedEvent: command: findAndModify: *collectionName query: *filter update: *update <<: *expectedApiVersion - description: "insertMany appends declared API version" operations: - name: insertMany object: *collection arguments: documents: - { _id: 6, x: 66 } - { _id: 7, x: 77 } expectEvents: - client: *client events: - commandStartedEvent: command: insert: *collectionName documents: - { _id: 6, x: 66 } - { _id: 7, x: 77 } <<: *expectedApiVersion - description: "insertOne appends declared API version" operations: - name: insertOne object: *collection arguments: document: { _id: 6, x: 66 } expectEvents: - client: *client events: - commandStartedEvent: command: insert: *collectionName documents: - { _id: 6, x: 66 } <<: *expectedApiVersion - description: "replaceOne appends declared API version" operations: - name: replaceOne object: *collection arguments: filter: { _id: 4 } replacement: { _id: 4, x: 44 } upsert: true expectEvents: - client: *client events: - commandStartedEvent: command: update: *collectionName updates: - q: { _id: 4 } u: { _id: 4, x: 44 } multi: { $$unsetOrMatches: false } upsert: true <<: *expectedApiVersion - 
description: "updateMany appends declared API version" operations: - name: updateMany object: *collection arguments: filter: { _id: { $gt: 1 } } update: { $inc: { x: 1 } } expectEvents: - client: *client events: - commandStartedEvent: command: update: *collectionName updates: - q: { _id: { $gt: 1 } } u: { $inc: { x: 1 } } multi: true upsert: { $$unsetOrMatches: false } <<: *expectedApiVersion - description: "updateOne appends declared API version" operations: - name: updateOne object: *collection arguments: filter: { _id: 2 } update: { $inc: { x: 1 } } expectEvents: - client: *client events: - commandStartedEvent: command: update: *collectionName updates: - q: { _id: 2 } u: { $inc: { x: 1 } } multi: { $$unsetOrMatches: false } upsert: { $$unsetOrMatches: false } <<: *expectedApiVersion runcommand-helper-no-api-version-declared.yml000066400000000000000000000042251505113246500345320ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/spec_tests/data/versioned_apidescription: "RunCommand helper: No API version declared" schemaVersion: "1.4" runOnRequirements: - minServerVersion: "4.9" serverParameters: requireApiVersion: false createEntities: - client: id: &client client observeEvents: - commandStartedEvent - database: id: &database database client: *client databaseName: &databaseName versioned-api-tests tests: - description: "runCommand does not inspect or change the command document" runOnRequirements: # serverless does not currently reject invalid API versions on # certain commands (CLOUDP-87926) - serverless: "forbid" operations: - name: runCommand object: *database arguments: commandName: ping command: ping: 1 apiVersion: "server_will_never_support_this_api_version" expectError: isError: true isClientError: false expectEvents: - client: *client events: - commandStartedEvent: command: ping: 1 apiVersion: "server_will_never_support_this_api_version" apiStrict: { $$exists: false } apiDeprecationErrors: { $$exists: false } commandName: ping databaseName: *databaseName - description: "runCommand does not prevent sending invalid API version declarations" runOnRequirements: # serverless does not currently reject invalid API versions on # certain commands (CLOUDP-87926) - serverless: "forbid" operations: - name: runCommand object: *database arguments: commandName: ping command: ping: 1 apiStrict: true expectError: isError: true isClientError: false expectEvents: - client: *client events: - commandStartedEvent: command: ping: 1 apiVersion: { $$exists: false } apiStrict: true apiDeprecationErrors: { $$exists: false } commandName: ping databaseName: *databaseName mongo-ruby-driver-2.21.3/spec/spec_tests/data/versioned_api/test-commands-deprecation-errors.yml000066400000000000000000000025621505113246500331510ustar00rootroot00000000000000description: "Test commands: deprecation errors" schemaVersion: "1.1" runOnRequirements: - minServerVersion: "4.9" serverParameters: enableTestCommands: true acceptApiVersion2: true requireApiVersion: false createEntities: - client: id: &client client observeEvents: - commandStartedEvent # This client is configured without a declared API version, as we cannot # declare an unknown API version. 
- database: id: &database database client: *client databaseName: &databaseName versioned-api-tests tests: - description: "Running a command that is deprecated raises a deprecation error" operations: - name: runCommand object: *database arguments: commandName: testDeprecationInVersion2 command: testDeprecationInVersion2: 1 apiVersion: "2" apiDeprecationErrors: true expectError: isError: true errorContains: "command testDeprecationInVersion2 is deprecated in API Version 2" errorCodeName: "APIDeprecationError" expectEvents: - client: *client events: - commandStartedEvent: command: testDeprecationInVersion2: 1 apiVersion: "2" apiStrict: { $$exists: false } apiDeprecationErrors: true mongo-ruby-driver-2.21.3/spec/spec_tests/data/versioned_api/test-commands-strict-mode.yml000066400000000000000000000023531505113246500315720ustar00rootroot00000000000000description: "Test commands: strict mode" schemaVersion: "1.4" runOnRequirements: - minServerVersion: "4.9" serverParameters: enableTestCommands: true # serverless gives a different error for unrecognized testVersion2 command serverless: "forbid" createEntities: - client: id: &client client observeEvents: - commandStartedEvent serverApi: version: "1" strict: true - database: id: &database database client: *client databaseName: &databaseName versioned-api-tests tests: - description: "Running a command that is not part of the versioned API results in an error" operations: - name: runCommand object: *database arguments: commandName: testVersion2 command: testVersion2: 1 expectError: isError: true errorContains: "command testVersion2 is not in API Version 1" errorCodeName: "APIStrictError" expectEvents: - client: *client events: - commandStartedEvent: command: testVersion2: 1 apiVersion: "1" apiStrict: true apiDeprecationErrors: { $$unsetOrMatches: false } mongo-ruby-driver-2.21.3/spec/spec_tests/data/versioned_api/transaction-handling.yml000066400000000000000000000075571505113246500307060ustar00rootroot00000000000000description: "Transaction handling" schemaVersion: "1.3" runOnRequirements: - minServerVersion: "4.9" topologies: [ replicaset, sharded, load-balanced ] createEntities: - client: id: &client client observeEvents: - commandStartedEvent serverApi: version: "1" - database: id: &database database client: *client databaseName: &databaseName versioned-api-tests - collection: id: &collection collection database: *database collectionName: &collectionName test - session: id: &session session client: *client _yamlAnchors: versions: - &expectedApiVersion apiVersion: "1" apiStrict: { $$unsetOrMatches: false } apiDeprecationErrors: { $$unsetOrMatches: false } initialData: - collectionName: *collectionName databaseName: *databaseName documents: - { _id: 1, x: 11 } - { _id: 2, x: 22 } - { _id: 3, x: 33 } - { _id: 4, x: 44 } - { _id: 5, x: 55 } tests: - description: "All commands in a transaction declare an API version" runOnRequirements: - topologies: [ replicaset, sharded, load-balanced ] operations: - name: startTransaction object: *session - name: insertOne object: *collection arguments: session: *session document: { _id: 6, x: 66 } expectResult: { $$unsetOrMatches: { insertedId: { $$unsetOrMatches: 6 } } } - name: insertOne object: *collection arguments: session: *session document: { _id: 7, x: 77 } expectResult: { $$unsetOrMatches: { insertedId: { $$unsetOrMatches: 7 } } } - name: commitTransaction object: *session expectEvents: - client: *client events: - commandStartedEvent: command: insert: *collectionName documents: [ { _id: 6, x: 66 } ] lsid: { 
$$sessionLsid: *session } startTransaction: true <<: *expectedApiVersion - commandStartedEvent: command: insert: *collectionName documents: [ { _id: 7, x: 77 } ] lsid: { $$sessionLsid: *session } <<: *expectedApiVersion - commandStartedEvent: command: commitTransaction: 1 lsid: { $$sessionLsid: *session } <<: *expectedApiVersion - description: "abortTransaction includes an API version" runOnRequirements: - topologies: [ replicaset, sharded, load-balanced ] operations: - name: startTransaction object: *session - name: insertOne object: *collection arguments: session: *session document: { _id: 6, x: 66 } expectResult: { $$unsetOrMatches: { insertedId: { $$unsetOrMatches: 6 } } } - name: insertOne object: *collection arguments: session: *session document: { _id: 7, x: 77 } expectResult: { $$unsetOrMatches: { insertedId: { $$unsetOrMatches: 7 } } } - name: abortTransaction object: *session expectEvents: - client: *client events: - commandStartedEvent: command: insert: *collectionName documents: [ { _id: 6, x: 66 } ] lsid: { $$sessionLsid: *session } startTransaction: true <<: *expectedApiVersion - commandStartedEvent: command: insert: *collectionName documents: [ { _id: 7, x: 77 } ] lsid: { $$sessionLsid: *session } <<: *expectedApiVersion - commandStartedEvent: command: abortTransaction: 1 lsid: { $$sessionLsid: *session } <<: *expectedApiVersion mongo-ruby-driver-2.21.3/spec/spec_tests/gridfs_spec.rb000066400000000000000000000026301505113246500231140ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' require 'runners/gridfs' describe 'GridFS' do include Mongo::GridFS GRIDFS_TESTS.each do |file| spec = Mongo::GridFS::Spec.new(file) context(spec.description) do spec.tests.each do |test| context(test.description) do after do fs.files_collection.delete_many fs.chunks_collection.delete_many test.expected_files_collection.delete_many test.expected_chunks_collection.delete_many end let!(:result) do test.run(fs) end let(:fs) do authorized_collection.database.fs end it "raises the correct error", if: test.error? do expect(result).to match_error(test.expected_error) end it 'completes successfully', unless: test.error? do expect(result).to completes_successfully(test) end it 'has the correct documents in the files collection', if: test.assert_data? do expect(fs.files_collection).to match_files_collection(test.expected_files_collection) end it 'has the correct documents in the chunks collection', if: test.assert_data? 
do expect(fs.chunks_collection).to match_chunks_collection(test.expected_chunks_collection) end end end end end end mongo-ruby-driver-2.21.3/spec/spec_tests/gridfs_unified_spec.rb000066400000000000000000000004721505113246500246210ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' require 'runners/unified' base = "#{CURRENT_PATH}/spec_tests/data/gridfs_unified" GRIDFS_UNIFIED_TESTS = Dir.glob("#{base}/**/*.yml").sort describe 'GridFS unified spec tests' do define_unified_spec_tests(base, GRIDFS_UNIFIED_TESTS) end mongo-ruby-driver-2.21.3/spec/spec_tests/index_management_unified_spec.rb000066400000000000000000000006311505113246500266430ustar00rootroot00000000000000# frozen_string_literal: true require 'spec_helper' require 'runners/unified' base = "#{CURRENT_PATH}/spec_tests/data/index_management" INDEX_MANAGEMENT_UNIFIED_TESTS = Dir.glob("#{base}/**/*.yml").sort # rubocop:disable RSpec/EmptyExampleGroup describe 'index management unified spec tests' do define_unified_spec_tests(base, INDEX_MANAGEMENT_UNIFIED_TESTS) end # rubocop:enable RSpec/EmptyExampleGroup mongo-ruby-driver-2.21.3/spec/spec_tests/load_balancers_spec.rb000066400000000000000000000005321505113246500245660ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' require 'runners/unified' base = "#{CURRENT_PATH}/spec_tests/data/load_balancers" LOAD_BALANCER_TESTS = Dir.glob("#{base}/**/*.yml").sort describe 'Load balancer spec tests' do require_topology :load_balanced define_unified_spec_tests(base, LOAD_BALANCER_TESTS) end mongo-ruby-driver-2.21.3/spec/spec_tests/max_staleness_spec.rb000066400000000000000000000004661505113246500245110ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'lite_spec_helper' require 'runners/server_selection' MAX_STALENESS_TESTS = Dir.glob("#{CURRENT_PATH}/spec_tests/data/max_staleness/**/*.yml").sort describe 'Max staleness spec tests' do define_server_selection_spec_tests(MAX_STALENESS_TESTS) end mongo-ruby-driver-2.21.3/spec/spec_tests/read_write_concern_connection_string_spec.rb000066400000000000000000000005651505113246500313040ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'lite_spec_helper' require 'runners/connection_string' READ_WRITE_CONCERN_CONNECTION_STRING_TESTS = Dir.glob("#{CURRENT_PATH}/spec_tests/data/read_write_concern/connection-string/*.yml").sort describe 'Connection String' do define_connection_string_spec_tests(READ_WRITE_CONCERN_CONNECTION_STRING_TESTS) end mongo-ruby-driver-2.21.3/spec/spec_tests/read_write_concern_document_spec.rb000066400000000000000000000036101505113246500273670ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' require 'runners/read_write_concern_document' READ_WRITE_CONCERN_DOCUMENT_TESTS = Dir.glob("#{CURRENT_PATH}/spec_tests/data/read_write_concern/document/*.yml").sort describe 'Connection String' do READ_WRITE_CONCERN_DOCUMENT_TESTS.each do |test_path| spec = ReadWriteConcernDocument::Spec.new(test_path) context(spec.description) do spec.tests.each_with_index do |test, index| context test.description do let(:actual) do Mongo::WriteConcern.get(test.input_document) end let(:actual_server_document) do Utils.camelize_hash(actual.options) end if test.valid? 
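# A valid input must parse without error, serialize to the expected server
            # document, and report the expected server-default and acknowledged flags;
            # an invalid input must raise Mongo::Error::InvalidWriteConcern (see the
            # else branch below).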
it 'parses successfully' do expect do actual end.not_to raise_error end it 'has expected server document' do expect(actual_server_document).to eq(test.server_document) end if test.server_default? it 'is server default' do expect(actual.options).to eq({}) end end if test.server_default? == false it 'is not server default' do expect(actual.options).not_to eq({}) end end if test.acknowledged? it 'is acknowledged' do expect(actual.acknowledged?).to be true end end if test.acknowledged? == false it 'is not acknowledged' do expect(actual.acknowledged?).to be false end end else it 'is invalid' do expect do actual end.to raise_error(Mongo::Error::InvalidWriteConcern) end end end end end end end mongo-ruby-driver-2.21.3/spec/spec_tests/read_write_concern_operaton_spec.rb000066400000000000000000000005141505113246500274000ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' require 'runners/crud' require 'runners/transactions' test_paths = Dir.glob("#{CURRENT_PATH}/spec_tests/data/read_write_concern/operation/**/*.yml").sort describe 'Read write concern operation spec tests' do define_transactions_spec_tests(test_paths) end mongo-ruby-driver-2.21.3/spec/spec_tests/retryable_reads_spec.rb000066400000000000000000000030141505113246500250020ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' require 'runners/crud' base = "#{CURRENT_PATH}/spec_tests/data/retryable_reads" RETRYABLE_READS_TESTS = Dir.glob("#{base}/legacy/**/*.yml").sort describe 'Retryable reads legacy spec tests' do require_wired_tiger require_no_multi_mongos define_crud_spec_tests(RETRYABLE_READS_TESTS) do |spec, req, test| let(:client) do authorized_client.use(spec.database_name).with({max_read_retries: 0}.update(test.client_options)).tap do |client| client.subscribe(Mongo::Monitoring::COMMAND, event_subscriber) end end end end describe 'Retryable reads spec tests - legacy' do require_no_multi_mongos define_crud_spec_tests(RETRYABLE_READS_TESTS) do |spec, req, test| retry_test let(:client_options) do { max_read_retries: 1, read_retry_interval: 0, retry_reads: false, }.update(test.client_options) end let(:client) do authorized_client.use(spec.database_name).with(client_options).tap do |client| client.subscribe(Mongo::Monitoring::COMMAND, event_subscriber) end end around do |example| desc = example.full_description # Skip tests that disable modern retryable reads because they expect # no retries - and since legacy retryable reads are used, the tests # will fail. 
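# example.full_description concatenates the enclosing describe/context
          # strings with the individual test name, so these patterns match against
          # the spec test titles.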
if desc =~ /retryReads is false|fails on first attempt/ skip 'Test not applicable to legacy read retries' end example.run end end end mongo-ruby-driver-2.21.3/spec/spec_tests/retryable_reads_unified_spec.rb000066400000000000000000000012171505113246500265100ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' require 'runners/unified' base = "#{CURRENT_PATH}/spec_tests/data/retryable_reads" RETRYABLE_READ_UNIFIED_TESTS = Dir.glob("#{base}/unified/**/*.yml").sort describe 'Retryable reads spec tests - unified' do require_wired_tiger require_no_multi_mongos define_unified_spec_tests(base, RETRYABLE_READ_UNIFIED_TESTS) do |spec, req, test| let(:client) do authorized_client.use(spec.database_name).with({max_read_retries: 0}.update(test.client_options)).tap do |client| client.subscribe(Mongo::Monitoring::COMMAND, event_subscriber) end end end end mongo-ruby-driver-2.21.3/spec/spec_tests/retryable_writes_spec.rb000066400000000000000000000013011505113246500252260ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' require 'runners/crud' base = "#{CURRENT_PATH}/spec_tests/data/retryable_writes" RETRYABLE_WRITES_TESTS = Dir.glob("#{base}/legacy/**/*.yml").sort describe 'Retryable writes spec tests - legacy' do require_wired_tiger require_no_multi_mongos # Do not run these tests when write retries are disabled globally - # the tests won't work in that case and testing them with retries enabled # is simply redundant. require_retry_writes define_crud_spec_tests(RETRYABLE_WRITES_TESTS) do |spec, req, test| let(:client) do authorized_client.with(test.client_options.merge({max_write_retries: 0})) end end end mongo-ruby-driver-2.21.3/spec/spec_tests/retryable_writes_unified_spec.rb000066400000000000000000000011251505113246500267250ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' require 'runners/unified' base = "#{CURRENT_PATH}/spec_tests/data/retryable_writes" RETRYABLE_WRITE_UNIFIED_TESTS = Dir.glob("#{base}/unified/**/*.yml").sort describe 'Retryable writes spec tests - unified' do require_wired_tiger require_no_multi_mongos # Do not run these tests when write retries are disabled globally - # the tests won't work in that case and testing them with retries enabled # is simply redundant.
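# (require_retry_writes is presumably a suite constraint that skips these
# examples when the test suite is configured with retryable writes off.)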
require_retry_writes define_unified_spec_tests(base, RETRYABLE_WRITE_UNIFIED_TESTS) end mongo-ruby-driver-2.21.3/spec/spec_tests/run_command_unified_spec.rb000066400000000000000000000005151505113246500256430ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' require 'runners/unified' base = "#{CURRENT_PATH}/spec_tests/data/run_command_unified" RUN_COMMAND_UNIFIED_TESTS = Dir.glob("#{base}/**/*.yml").sort describe 'runCommand unified spec tests' do define_unified_spec_tests(base, RUN_COMMAND_UNIFIED_TESTS) end mongo-ruby-driver-2.21.3/spec/spec_tests/sdam_monitoring_spec.rb000066400000000000000000000073611505113246500250350ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'lite_spec_helper' require 'runners/sdam' require 'runners/sdam/verifier' describe 'SDAM Monitoring' do include Mongo::SDAM SDAM_MONITORING_TESTS.each do |file| spec = Mongo::SDAM::Spec.new(file) context("#{spec.description} (#{file.sub(%r'.*/data/sdam_monitoring/', '')})") do before(:all) do @subscriber = Mrss::PhasedEventSubscriber.new sdam_proc = lambda do |client| client.subscribe(Mongo::Monitoring::SERVER_OPENING, @subscriber) client.subscribe(Mongo::Monitoring::SERVER_CLOSED, @subscriber) client.subscribe(Mongo::Monitoring::SERVER_DESCRIPTION_CHANGED, @subscriber) client.subscribe(Mongo::Monitoring::TOPOLOGY_OPENING, @subscriber) client.subscribe(Mongo::Monitoring::TOPOLOGY_CHANGED, @subscriber) end @client = new_local_client_nmio(spec.uri_string, sdam_proc: sdam_proc, heartbeat_frequency: 100, connect_timeout: 0.1) # We do not want to create servers when an event referencing them # is processed, because this may result in server duplication # when events are processed for servers that had been removed # from the topology. Instead set up a server cache we can use # to reference servers removed from the topology @servers_cache = {} @client.cluster.servers_list.each do |server| @servers_cache[server.address.to_s] = server # Since we set monitoring_io: false, servers are not monitored # by the cluster. Start monitoring on them manually (this publishes # the server opening event but, again due to monitoring_io being # false, does not do network I/O or change server status). # # If the server is a load balancer, it doesn't normally get monitored # so don't start here either. unless server.load_balancer? server.start_monitoring end end end after(:all) do @client.close end spec.phases.each_with_index do |phase, phase_index| context("Phase: #{phase_index + 1}") do before(:all) do phase.responses&.each do |response| # For each response in the phase, we need to change that server's description. server = find_server(@client, response.address) server ||= @servers_cache[response.address.to_s] if server.nil? raise "Server should have been found" end result = response.hello # Spec tests do not always specify wire versions, but the # driver requires them. Set them to zero which was # the legacy default in the driver. 
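# (E.g. a fixture response of { 'ok' => 1 } is processed below as if it
# also carried minWireVersion: 0 and maxWireVersion: 0.)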
result['minWireVersion'] ||= 0 result['maxWireVersion'] ||= 0 new_description = Mongo::Server::Description.new( server.description.address, result, average_round_trip_time: 0.5) @client.cluster.run_sdam_flow(server.description, new_description) end @subscriber.phase_finished(phase_index) end it "expects #{phase.outcome.events.length} events to be published" do expect(@subscriber.phase_events(phase_index).length).to eq(phase.outcome.events.length) end let(:verifier) do Sdam::Verifier.new end phase.outcome.events.each_with_index do |expectation, index| it "expects event #{index+1} to be #{expectation.name}" do verifier.verify_sdam_event( phase.outcome.events, @subscriber.phase_events(phase_index), index) end end end end end end end mongo-ruby-driver-2.21.3/spec/spec_tests/sdam_spec.rb000066400000000000000000000216711505113246500225700ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'lite_spec_helper' require 'runners/sdam' require 'runners/sdam/verifier' describe 'Server Discovery and Monitoring' do include Mongo::SDAM class Executor include Mongo::Operation::Executable def session nil end end SERVER_DISCOVERY_TESTS.each do |file| spec = Mongo::SDAM::Spec.new(file) context("#{spec.description} (#{file.sub(%r'.*/data/sdam/', '')})") do before(:all) do class Mongo::Server::Monitor alias_method :run_saved!, :run! # Replace run! method to do nothing, to avoid races between # the background thread started by Server.new and our mocking. # Replace with refinements once ruby 1.9 support is dropped def run! end end end after(:all) do class Mongo::Server::Monitor alias_method :run!, :run_saved! end end before(:all) do # Since we supply all server descriptions and drive events, # background monitoring only gets in the way. Disable it. @client = new_local_client_nmio(spec.uri_string, heartbeat_frequency: 1000, connect_timeout: 0.1) end before do @client.reconnect if @client.closed? end after(:all) do @client && @client.close end def raise_application_error(error, connection = nil) case error.type when :network exc = Mongo::Error::SocketError.new exc.generation = error.generation raise exc when :timeout exc = Mongo::Error::SocketTimeoutError.new exc.generation = error.generation raise exc when :command result = error.result if error.generation allow(connection).to receive(:generation).and_return(error.generation) end Executor.new.send(:process_result_for_sdam, result, connection) else raise NotImplementedError, "Error type #{error.type} is not implemented" end end spec.phases.each_with_index do |phase, index| context("Phase: #{index + 1}") do before do allow(@client.cluster).to receive(:connected?).and_return(true) phase.responses&.each do |response| server = find_server(@client, response.address) unless server server = Mongo::Server.new( Mongo::Address.new(response.address), @client.cluster, @client.send(:monitoring), @client.cluster.send(:event_listeners), @client.cluster.options, ) end monitor = server.instance_variable_get(:@monitor) result = response.hello # Spec tests do not always specify wire versions, but the # driver requires them. Set them to zero which was # the legacy default in the driver. 
result['minWireVersion'] ||= 0 result['maxWireVersion'] ||= 0 new_description = Mongo::Server::Description.new( server.description.address, result, average_round_trip_time: 0.5) @client.cluster.run_sdam_flow(server.description, new_description) end phase.application_errors&.each do |error| server = find_server(@client, error.address_str) unless server raise NotImplementedError, 'Errors can only be produced on known servers' end begin case error.when when :before_handshake_completes connection = Mongo::Server::Connection.new(server, generation: server.pool.generation) server.handle_handshake_failure! do raise_application_error(error, connection) end when :after_handshake_completes connection = Mongo::Server::Connection.new(server, generation: server.pool.generation) allow(connection).to receive(:description).and_return(server.description) connection.send(:handle_errors) do raise_application_error(error, connection) end else raise NotImplementedError, "Error position #{error.when} is not implemented" end rescue Mongo::Error # This was the exception we raised end end end if phase.outcome.compatible? let(:cluster_addresses) do @client.cluster.instance_variable_get(:@servers). collect(&:address).collect(&:to_s).uniq.sort end let(:phase_addresses) do phase.outcome.servers.keys.sort end it "sets the cluster topology to #{phase.outcome.topology_type}" do expect(@client.cluster).to be_topology(phase.outcome.topology_type) end it "sets the cluster replica set name to #{phase.outcome.set_name.inspect}" do expect(@client.cluster.replica_set_name).to eq(phase.outcome.set_name) end it "sets the cluster logical session timeout minutes to #{phase.outcome.logical_session_timeout.inspect}" do expect(@client.cluster.logical_session_timeout).to eq(phase.outcome.logical_session_timeout) end it "has the expected servers in the cluster" do expect(cluster_addresses).to eq(phase_addresses) end # If compatible is not explicitly specified in the fixture, # wire protocol versions aren't either and the topology # is actually incompatible if phase.outcome.compatible_specified?
it 'is compatible' do expect(@client.cluster.topology.compatible?).to be true end end phase.outcome.servers.each do |address_str, server_spec| it "sets #{address_str} server to #{server_spec['type']}" do server = find_server(@client, address_str) unless server_of_type?(server, server_spec['type']) raise RSpec::Expectations::ExpectationNotMetError, "Server #{server.summary} not of type #{server_spec['type']}" end end it "sets #{address_str} server replica set name to #{server_spec['setName'].inspect}" do expect(find_server(@client, address_str).replica_set_name).to eq(server_spec['setName']) end it "sets #{address_str} server description in topology to match server description in cluster" do desc = @client.cluster.topology.server_descriptions[address_str] server = find_server(@client, address_str) # eql doesn't work here because it's aliased to eq # and two unknowns are not eql as a result, # compare by object id unless desc.object_id == server.description.object_id unless desc == server.description expect(desc).to be_unknown expect(server.description).to be_unknown end end end let(:verifier) { Sdam::Verifier.new } it "#{address_str} server description has expected values" do actual = @client.cluster.topology.server_descriptions[address_str] verifier.verify_description_matches(server_spec, actual) end end if %w(ReplicaSetWithPrimary ReplicaSetNoPrimary).include?(phase.outcome.topology_type) it 'has expected max election id' do expect(@client.cluster.topology.max_election_id).to eq(phase.outcome.max_election_id) end it 'has expected max set version' do expect(@client.cluster.topology.max_set_version).to eq(phase.outcome.max_set_version) end end else before do @client.cluster.servers.each do |server| allow(server).to receive(:connectable?).and_return(true) end end it 'is incompatible' do expect(@client.cluster.topology.compatible?).to be false end it 'raises an UnsupportedFeatures error' do expect { p = Mongo::ServerSelector.primary.select_server(@client.cluster) s = Mongo::ServerSelector.get(mode: :secondary).select_server(@client.cluster) raise "UnsupportedFeatures not raised but we got #{p.inspect} as primary and #{s.inspect} as secondary" }.to raise_exception(Mongo::Error::UnsupportedFeatures) end end end end end end end mongo-ruby-driver-2.21.3/spec/spec_tests/sdam_unified_spec.rb000066400000000000000000000005061505113246500242650ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' require 'runners/unified' base = "#{CURRENT_PATH}/spec_tests/data/sdam_unified" SDAM_UNIFIED_TESTS = Dir.glob("#{base}/**/*.yml").sort describe 'SDAM unified spec tests' do forbid_x509_auth define_unified_spec_tests(base, SDAM_UNIFIED_TESTS) end mongo-ruby-driver-2.21.3/spec/spec_tests/seed_list_discovery_spec.rb000066400000000000000000000103451505113246500257020ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'lite_spec_helper' require 'support/using_hash' require 'runners/connection_string' require 'mrss/lite_constraints' SEED_LIST_DISCOVERY_TESTS = Dir.glob("#{CURRENT_PATH}/spec_tests/data/seed_list_discovery/**/*.yml").sort describe 'DNS Seedlist Discovery' do require_external_connectivity include Mongo::ConnectionString SEED_LIST_DISCOVERY_TESTS.each do |test_path| spec = ::Utils.load_spec_yaml_file(test_path) test = Mongo::ConnectionString::Test.new(spec) context(File.basename(test_path)) do if test.raise_error? 
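# Fixtures that declare an error expect client construction (i.e. SRV
# resolution) to raise one of the errors enumerated in valid_errors below.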
context 'the uri is invalid' do retry_test let(:valid_errors) do [ Mongo::Error::InvalidTXTRecord, Mongo::Error::NoSRVRecords, Mongo::Error::InvalidURI, Mongo::Error::MismatchedDomain, # This is unfortunate. RUBY-2624 ArgumentError, ] end let(:error) do begin test.client rescue => ex end ex end # In Evergreen sometimes this test fails intermittently. it 'raises an error' do expect(valid_errors).to include(error.class) end end else context 'the uri is valid' do retry_test # In Evergreen sometimes this test fails intermittently. it 'does not raise an exception' do expect(test.uri).to be_a(Mongo::URI::SRVProtocol) end if test.seeds # DNS seed list tests specify both seeds and hosts. # To get the hosts, the client must do SDAM (as required in the # spec tests' description), but this isn't testing DNS seed list - # it is testing SDAM. Plus, all of the hosts are always the same. # If seed list is given in the expectations, just test the seed # list and not the expanded hosts. it 'creates a client with the correct seeds' do expect(test.client).to have_hosts(test, test.seeds) end elsif test.num_seeds it 'has the right number of seeds' do num_servers = test.client.cluster.servers_list.length expect(num_servers).to eq(test.num_seeds) end else it 'creates a client with the correct hosts' do expect(test.client).to have_hosts(test, test.hosts) end end if test.expected_options it 'creates a client with the correct uri options' do mapped = Mongo::URI::OptionsMapper.new.ruby_to_smc(test.client.options) # Connection string spec tests do not use canonical URI option names actual = Utils.downcase_keys(mapped) expected = Utils.downcase_keys(test.expected_options) # SRV tests use ssl URI option instead of tls one if expected.key?('ssl') && !expected.key?('tls') expected['tls'] = expected.delete('ssl') end # The client object contains auth source in options which # isn't asserted in some tests. if actual.key?('authsource') && !expected.key?('authsource') actual.delete('authsource') end actual.should == expected end end if test.non_uri_options it 'creates a client with the correct non-uri options' do opts = UsingHash[test.non_uri_options] if user = opts.use('user') test.client.options[:user].should == user end if password = opts.use('password') test.client.options[:password].should == password end if db = opts.use('db') test.client.database.name.should == db end if auth_source = opts.use('auth_database') Mongo::Auth::User.new(test.client.options).auth_source.should == auth_source end unless opts.empty?
raise "Unhandled keys: #{opts}" end end end end end end end end mongo-ruby-driver-2.21.3/spec/spec_tests/server_selection_rtt_spec.rb000066400000000000000000000015451505113246500261060ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' require 'runners/server_selection_rtt' describe 'Server Selection moving average round trip time calculation' do include Mongo::ServerSelection::RTT SERVER_SELECTION_RTT_TESTS.each do |file| spec = Mongo::ServerSelection::RTT::Spec.new(file) context(spec.description) do let(:calculator) do Mongo::Server::RoundTripTimeCalculator.new end before do calculator.instance_variable_set(:@average_round_trip_time, spec.average_rtt) calculator.instance_variable_set(:@last_round_trip_time, spec.new_rtt) calculator.update_average_round_trip_time end it 'correctly calculates the moving average round trip time' do expect(calculator.average_round_trip_time).to eq(spec.new_average_rtt) end end end end mongo-ruby-driver-2.21.3/spec/spec_tests/server_selection_spec.rb000066400000000000000000000005021505113246500252050ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'lite_spec_helper' require 'runners/server_selection' SERVER_SELECTION_TESTS = Dir.glob("#{CURRENT_PATH}/spec_tests/data/server_selection/**/*.yml").sort describe 'Server selection spec tests' do define_server_selection_spec_tests(SERVER_SELECTION_TESTS) end mongo-ruby-driver-2.21.3/spec/spec_tests/sessions_unified_spec.rb000066400000000000000000000005021505113246500252030ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' require 'runners/unified' base = "#{CURRENT_PATH}/spec_tests/data/sessions_unified" SESSIONS_UNIFIED_TESTS = Dir.glob("#{base}/**/*.yml").sort describe 'Sessions unified spec tests' do define_unified_spec_tests(base, SESSIONS_UNIFIED_TESTS) end mongo-ruby-driver-2.21.3/spec/spec_tests/transactions_api_spec.rb000066400000000000000000000003631505113246500252000ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' require 'runners/crud' require 'runners/transactions' describe 'Transactions API' do require_wired_tiger define_transactions_spec_tests(TRANSACTIONS_API_TESTS) end mongo-ruby-driver-2.21.3/spec/spec_tests/transactions_spec.rb000066400000000000000000000003531505113246500243460ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' require 'runners/crud' require 'runners/transactions' describe 'Transactions' do require_wired_tiger define_transactions_spec_tests(TRANSACTIONS_TESTS) end mongo-ruby-driver-2.21.3/spec/spec_tests/transactions_unified_spec.rb000066400000000000000000000007031505113246500260500ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' require 'runners/unified' base = "#{CURRENT_PATH}/spec_tests/data/transactions_unified" # See https://jira.mongodb.org/browse/RUBY-3502 for more details TRANSACTIONS_UNIFIED_TESTS = Dir.glob("#{base}/**/*.yml").sort.reject { |name| name =~ /.*mongos-unpin.yml$/ } describe 'Transactions unified spec tests' do define_unified_spec_tests(base, TRANSACTIONS_UNIFIED_TESTS) end mongo-ruby-driver-2.21.3/spec/spec_tests/unified_spec.rb000066400000000000000000000010101505113246500232500ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' require 'runners/unified' base = "#{CURRENT_PATH}/spec_tests/data/unified" PASS_UNIFIED_TESTS = 
Dir.glob("#{base}/valid-pass/**/*.yml").sort FAIL_UNIFIED_TESTS = Dir.glob("#{base}/valid-fail/**/*.yml").sort describe 'Unified spec tests - valid pass' do define_unified_spec_tests(base, PASS_UNIFIED_TESTS) end describe 'Unified spec tests - expected failures' do define_unified_spec_tests(base, FAIL_UNIFIED_TESTS, expect_failure: true) end mongo-ruby-driver-2.21.3/spec/spec_tests/uri_options_spec.rb000066400000000000000000000055311505113246500242130ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'lite_spec_helper' require 'runners/connection_string' describe 'URI options' do include Mongo::ConnectionString # Since the tests issue global assertions on Mongo::Logger, # we need to close all clients/stop monitoring to avoid monitoring # threads warning and interfering with these assertions clean_slate_for_all_if_possible URI_OPTIONS_TESTS.each do |file| spec = Mongo::ConnectionString::Spec.new(file) context(spec.description) do spec.tests.each do |test| context "#{test.description}" do if test.description.downcase.include?("gssapi") require_mongo_kerberos end if test.valid? # The warning assertion needs to be first because the test caches # the client instance, and subsequent examples don't instantiate it # again. if test.warn? it 'warns' do expect(Mongo::Logger.logger).to receive(:warn)#.and_call_original expect(test.client).to be_a(Mongo::Client) end else it 'does not warn' do expect(Mongo::Logger.logger).not_to receive(:warn) expect(test.client).to be_a(Mongo::Client) end end if test.hosts it 'creates a client with the correct hosts' do expect(test.client).to have_hosts(test, test.hosts) end end it 'creates a client with the correct authentication properties' do expect(test.client).to match_auth(test) end if opts = test.expected_options if opts['compressors'] && opts['compressors'].include?('snappy') before do unless ENV.fetch('BUNDLE_GEMFILE', '') =~ /snappy/ skip "This test requires snappy compression" end end end if opts['compressors'] && opts['compressors'].include?('zstd') before do unless ENV.fetch('BUNDLE_GEMFILE', '') =~ /zstd/ skip "This test requires zstd compression" end end end it 'creates a client with the correct options' do mapped = Mongo::URI::OptionsMapper.new.ruby_to_smc(test.client.options) expected = Mongo::ConnectionString.adjust_expected_mongo_client_options( opts, ) mapped.should == expected end end else it 'raises an error' do expect{ test.uri }.to raise_exception(Mongo::Error::InvalidURI) end end end end end end end mongo-ruby-driver-2.21.3/spec/spec_tests/versioned_api_spec.rb000066400000000000000000000004521505113246500244650ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' require 'runners/unified' base = "#{CURRENT_PATH}/spec_tests/data/versioned_api" UNIFIED_TESTS = Dir.glob("#{base}/**/*.yml").sort describe 'Versioned API spec tests' do define_unified_spec_tests(base, UNIFIED_TESTS) end mongo-ruby-driver-2.21.3/spec/stress/000077500000000000000000000000001505113246500174455ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/stress/cleanup_spec.rb000066400000000000000000000030001505113246500224240ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe 'Cleanup stress test' do require_stress let(:options) do SpecConfig.instance.all_test_options end before(:all) do # load if necessary ClusterConfig.instance.primary_address ClientRegistry.instance.close_all_clients end context 'single client disconnect/reconnect' do 
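# Repeatedly close and reconnect a single client and verify that open
# file descriptors and running threads do not accumulate across cycles.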
let(:client) do new_local_client([ClusterConfig.instance.primary_address.seed], options) end it 'cleans up' do client client.cluster.servers_list.map(&:scan!) sleep 1 GC.start start_resources = resources 500.times do client.close client.reconnect end sleep 1 GC.start end_resources = resources # There seems to be a temporary file descriptor leak in CI, # where we start with 75 fds and end with 77 fds. # Allow a few to be leaked, and run more iterations to ensure the leak # is not a real one. # Sometimes we also end with fewer fds than we started with... end_resources[:open_file_count].should >= start_resources[:open_file_count] - 3 end_resources[:open_file_count].should <= start_resources[:open_file_count] + 3 end_resources[:running_thread_count].should == start_resources[:running_thread_count] end end def resources { open_file_count: Dir["/proc/#{Process.pid}/fd/*"].count, running_thread_count: Thread.list.select { |thread| thread.status == 'run' }.count, } end end mongo-ruby-driver-2.21.3/spec/stress/connection_pool_stress_spec.rb000066400000000000000000000060751505113246500256060ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe 'Connection pool stress test' do require_stress let(:options) do { max_pool_size: 5, min_pool_size: 3 } end let(:thread_count) { 5 } let(:documents) do [].tap do |documents| 10000.times do |i| documents << { a: i} end end end let(:operation_threads) do [].tap do |threads| thread_count.times do |i| threads << Thread.new do 100.times do |j| collection.find(a: i+j).to_a sleep 0.1 collection.find(a: i+j).to_a end end end end end let(:client) do authorized_client.with(options) end let(:collection) do client[authorized_collection.name].tap do |collection| collection.drop collection.insert_many(documents) end end shared_examples_for 'does not raise error' do it 'does not raise error' do collection expect { threads.collect { |t| t.join } }.not_to raise_error end end describe 'when several threads run operations on the collection' do let(:threads) { operation_threads } context 'min pool size 0, max pool size 5' do let(:options) do { max_pool_size: 5, min_pool_size: 0 } end let(:thread_count) { 7 } it_behaves_like 'does not raise error' end context 'min pool size 1, max pool size 5' do let(:options) do { max_pool_size: 5, min_pool_size: 1 } end let(:thread_count) { 7 } it_behaves_like 'does not raise error' end context 'min pool size 2, max pool size 5' do let(:options) do { max_pool_size: 5, min_pool_size: 2 } end let(:thread_count) { 7 } it_behaves_like 'does not raise error' end context 'min pool size 3, max pool size 5' do let(:options) do { max_pool_size: 5, min_pool_size: 3 } end let(:thread_count) { 7 } it_behaves_like 'does not raise error' end context 'min pool size 4, max pool size 5' do let(:options) do { max_pool_size: 5, min_pool_size: 4 } end let(:thread_count) { 7 } it_behaves_like 'does not raise error' end context 'min pool size 5, max pool size 5' do let(:options) do { max_pool_size: 5, min_pool_size: 5 } end let(:thread_count) { 7 } it_behaves_like 'does not raise error' end end describe 'when there are many more threads than the max pool size' do let(:threads) { operation_threads } context '10 threads, max pool size 5' do let(:thread_count) { 10 } it_behaves_like 'does not raise error' end context '15 threads, max pool size 5' do let(:thread_count) { 15 } it_behaves_like 'does not raise error' end context '20 threads, max pool size 5' do let(:thread_count) { 20 } it_behaves_like 'does not raise error'
end context '25 threads, max pool size 5' do let(:thread_count) { 25 } it_behaves_like 'does not raise error' end end end mongo-ruby-driver-2.21.3/spec/stress/connection_pool_timing_spec.rb000066400000000000000000000107471505113246500255540ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe 'Connection pool timing test' do require_stress clean_slate_for_all before(:all) do # This set up is taken from the step_down_spec file. In a future PR, ClusterTools # may be modified so this set up is no longer necessary. if ClusterConfig.instance.fcv_ish >= '4.2' && ClusterConfig.instance.topology == :replica_set ClusterTools.instance.set_election_timeout(5) ClusterTools.instance.set_election_handoff(false) end end after(:all) do if ClusterConfig.instance.fcv_ish >= '4.2' && ClusterConfig.instance.topology == :replica_set ClusterTools.instance.set_election_timeout(10) ClusterTools.instance.set_election_handoff(true) ClusterTools.instance.reset_priorities end end let(:client) do authorized_client.with(options.merge(monitoring_io: true)) end let!(:collection) do client[authorized_collection.name].tap do |collection| collection.drop collection.insert_many(documents) end end let(:documents) do [].tap do |documents| 10000.times do |i| documents << { a: i} end end end let(:operation_threads) do [].tap do |threads| thread_count.times do |i| threads << Thread.new do 100.times do |j| collection.find(a: i+j).to_a sleep 0.01 collection.find(a: i+j).to_a end end end end end let(:thread_count) { 5 } context 'when there is no max idle time' do let(:options) do { max_pool_size: 10, min_pool_size: 5 } end let(:threads) { operation_threads } it 'does not error' do start = Mongo::Utils.monotonic_time expect { threads.collect { |t| t.join } }.not_to raise_error puts "[Connection Pool Timing] Duration with no max idle time: #{Mongo::Utils.monotonic_time - start}" end end context 'when there is a low max idle time' do let(:options) do { max_pool_size: 10, min_pool_size: 5, max_idle_time: 0.1 } end let(:threads) { operation_threads } it 'does not error' do start = Mongo::Utils.monotonic_time expect { threads.collect { |t| t.join } }.not_to raise_error puts "[Connection Pool Timing] Duration with low max idle time: #{Mongo::Utils.monotonic_time - start}" end end context 'when primary is changed, then more operations are performed' do min_server_fcv '4.2' require_topology :replica_set let(:options) do { max_pool_size: 10, min_pool_size: 5 } end let(:more_threads) do PossiblyConcurrentArray.new.tap do |more_threads| 5.times do |i| more_threads << Thread.new do 10.times do |j| collection.find(a: i+j).to_a sleep 0.01 collection.find(a: i+j).to_a end end end end end let(:threads) do threads = PossiblyConcurrentArray.new 5.times do |i| threads << Thread.new do 10.times do |j| collection.find(a: i+j).to_a sleep 0.01 collection.find(a: i+j).to_a end end end threads << Thread.new do # Wait for other threads to terminate first, otherwise we get an error # when trying to perform operations during primary change sleep 1 @primary_change_start = Mongo::Utils.monotonic_time ClusterTools.instance.change_primary @primary_change_end = Mongo::Utils.monotonic_time # Primary change is complete; execute more operations more_threads.collect { |t| t.join } end threads end # On JRuby, sometimes the following error is produced indicating # possible data corruption or an interpreter bug: # RSpec::Expectations::ExpectationNotMetError: expected no Exception, got # retry_test tries: 
(BSON::Environment.jruby? ? 3 : 1) it 'does not error' do threads start = Mongo::Utils.monotonic_time expect do threads.each do |t| t.join end end.not_to raise_error puts "[Connection Pool Timing] Duration before primary change: #{@primary_change_start - start}. "\ "Duration after primary change: #{Mongo::Utils.monotonic_time - @primary_change_end}" end end end mongo-ruby-driver-2.21.3/spec/stress/fork_reconnect_stress_spec.rb000066400000000000000000000062001505113246500254060ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' describe 'fork reconnect' do require_fork require_mri # On multi-shard sharded clusters a succeeding write request does not # guarantee that the next operation will succeed (since it could be sent to # another shard with a dead connection). require_no_multi_mongos require_stress let(:client) { authorized_client } describe 'client' do it 'works after fork' do # Perform a write so that we discover the current primary. # Previous test may have stepped down the server that authorized client # considers the primary. # In standalone deployments there are no retries, hence execute the # operation twice manually. client['foo'].insert_one(test: 1) rescue nil client['foo'].insert_one(test: 1) pids = [] deadline = Mongo::Utils.monotonic_time + 5 1.upto(10) do if pid = fork pids << pid else Utils.wrap_forked_child do while Mongo::Utils.monotonic_time < deadline client.database.command(hello: 1).should be_a(Mongo::Operation::Result) end end end end while Mongo::Utils.monotonic_time < deadline # Use a read which is retried in case of an error client['foo'].find(test: 1).to_a end pids.each do |pid| pid, status = Process.wait2(pid) status.exitstatus.should == 0 end end retry_test context 'when parent is operating on client during the fork' do # This test intermittently fails in evergreen with pool size of 5, # with a number of pending connections in the pool. # The reason could be that handshaking is slow or operations are slow # post handshakes. # Sometimes it seems the monitoring connection experiences network # errors (despite being a loopback connection) which causes the test # to fail as then server selection fails. # The retry_test is to deal with network errors on monitoring connection. let(:client) { authorized_client.with(max_pool_size: 10, wait_queue_timeout: 10, socket_timeout: 2, connect_timeout: 2) } it 'works' do client.database.command(hello: 1).should be_a(Mongo::Operation::Result) threads = [] 5.times do threads << Thread.new do loop do client['foo'].find(test: 1).to_a end end end pids = [] deadline = Mongo::Utils.monotonic_time + 5 10.times do if pid = fork pids << pid else Utils.wrap_forked_child do while Mongo::Utils.monotonic_time < deadline client.database.command(hello: 1).should be_a(Mongo::Operation::Result) end end end end while Mongo::Utils.monotonic_time < deadline sleep 0.1 end threads.map(&:kill) threads.map(&:join) pids.each do |pid| pid, status = Process.wait2(pid) status.exitstatus.should == 0 end end end end end mongo-ruby-driver-2.21.3/spec/stress/push_monitor_close_spec.rb000066400000000000000000000022501505113246500247160ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'spec_helper' # This test repeatedly creates and closes clients across several threads. # Its goal is to ensure that the push monitor connections specifically get # closed without any errors or warnings being reported to applications. 
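# (The push monitor is the additional background connection that the
# driver uses for streaming awaitable hello heartbeats on 4.4+ servers.)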
# # Although the test is specifically meant to test 4.4+ servers (that utilize # the push monitor) in non-LB connections, run it everywhere for good measure. describe 'Push monitor close test' do require_stress let(:options) do SpecConfig.instance.all_test_options end before(:all) do # load if necessary ClusterConfig.instance.primary_address ClientRegistry.instance.close_all_clients end it 'does not warn/error on cleanup' do Mongo::Logger.logger.should_not receive(:warn) threads = 10.times.map do Thread.new do 10.times do client = new_local_client([ClusterConfig.instance.primary_address.seed], options) if rand > 0.33 client.command(ping: 1) sleep(rand * 3) end client.close STDOUT << '.' end end end threads.each(&:join) puts end end mongo-ruby-driver-2.21.3/spec/support/000077500000000000000000000000001505113246500176365ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/support/authorization.rb000066400000000000000000000113621505113246500230660ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2009-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # The default test collection. # # @since 2.0.0 TEST_COLL = 'test'.freeze # An invalid write concern. # # @since 2.4.2 INVALID_WRITE_CONCERN = { w: 4000 } module Authorization # On inclusion provides helpers for use with testing with and without # authorization. # # # @since 2.0.0 def self.included(context) # Gets the root system administrator user. # # @since 2.0.0 context.let(:root_user) { SpecConfig.instance.root_user } # Get the default test user for the suite. # # @since 2.0.0 context.let(:test_user) { SpecConfig.instance.test_user } # Provides an authorized mongo client on the default test database for the # default test user. # # @since 2.0.0 context.let(:authorized_client) { ClientRegistry.instance.global_client('authorized') } # A client with a different cluster, for testing session use across # clients context.let(:another_authorized_client) do new_local_client( SpecConfig.instance.addresses, SpecConfig.instance.test_options.merge( database: SpecConfig.instance.test_db, user: SpecConfig.instance.test_user.name, password: SpecConfig.instance.test_user.password, heartbeat_frequency: 10, ), ) end # Provides an authorized mongo client on the default test database that retries writes. 
# # @since 2.5.1 context.let(:authorized_client_with_retry_writes) do ClientRegistry.instance.global_client('authorized_with_retry_writes') end context.let(:authorized_client_without_retry_writes) do ClientRegistry.instance.global_client('authorized_without_retry_writes') end context.let(:authorized_client_without_retry_reads) do ClientRegistry.instance.global_client('authorized_without_retry_reads') end context.let(:authorized_client_without_any_retry_reads) do ClientRegistry.instance.global_client('authorized_without_any_retry_reads') end context.let(:authorized_client_without_any_retries) do ClientRegistry.instance.global_client('authorized_without_any_retries') end # Provides an unauthorized mongo client on the default test database. # # @since 2.0.0 context.let(:unauthorized_client) { ClientRegistry.instance.global_client('unauthorized') } # Provides an unauthorized mongo client on the admin database, for use in # setting up the first admin root user. # # @since 2.0.0 context.let(:admin_unauthorized_client) { ClientRegistry.instance.global_client('admin_unauthorized') } # Get an authorized client on the test database logged in as the admin # root user. # # @since 2.0.0 context.let(:root_authorized_client) { ClientRegistry.instance.global_client('root_authorized') } context.let(:root_authorized_admin_client) do ClientRegistry.instance.global_client('root_authorized').use(:admin) end # Gets the default test collection from the authorized client. # # @since 2.0.0 context.let(:authorized_collection) do authorized_client[TEST_COLL] end # Gets the default test collection from the unauthorized client. # # @since 2.0.0 context.let(:unauthorized_collection) do unauthorized_client[TEST_COLL] end # Gets a primary server for the default authorized client. # # @since 2.0.0 context.let(:authorized_primary) do authorized_client.cluster.next_primary end # Get a primary server for the client authorized as the root system # administrator. # # @since 2.0.0 context.let(:root_authorized_primary) do root_authorized_client.cluster.next_primary end # Get a primary server from the unauthorized client. # # @since 2.0.0 context.let(:unauthorized_primary) do authorized_client.cluster.next_primary end # Get a default address (of the primary). # # @since 2.2.6 context.let(:default_address) do authorized_client.cluster.next_primary.address end # Get a default app metadata. 
# # @since 2.4.0 context.let(:app_metadata) do authorized_client.cluster.app_metadata end end end mongo-ruby-driver-2.21.3/spec/support/aws_utils.rb000066400000000000000000000037251505113246500222040ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all autoload :Byebug, 'byebug' autoload :Paint, 'paint' require 'aws-sdk-core' module Aws autoload :CloudWatchLogs, 'aws-sdk-cloudwatchlogs' autoload :EC2, 'aws-sdk-ec2' autoload :ECS, 'aws-sdk-ecs' autoload :IAM, 'aws-sdk-iam' autoload :STS, 'aws-sdk-sts' end module AwsUtils NAMESPACE = 'mdb-ruby'.freeze AWS_AUTH_REGULAR_USER_NAME = "#{NAMESPACE}.aws-auth-regular".freeze AWS_AUTH_ASSUME_ROLE_NAME = "#{NAMESPACE}.assume-role".freeze AWS_AUTH_SECURITY_GROUP_NAME = "#{NAMESPACE}.ssh".freeze AWS_AUTH_VPC_GATEWAY_NAME = NAMESPACE AWS_AUTH_VPC_SECURITY_GROUP_NAME = "#{NAMESPACE}.vpc-ssh".freeze AWS_AUTH_VPC_CIDR = "10.42.142.64/28".freeze AWS_AUTH_EC2_AMI_NAMES = { # https://wiki.debian.org/Cloud/AmazonEC2Image/Buster 'debian10' => 'debian-10-amd64-20200210-166', 'ubuntu1604' => 'ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-server-20200317', }.freeze AWS_AUTH_EC2_INSTANCE_NAME = "#{NAMESPACE}.aws-auth".freeze AWS_AUTH_INSTANCE_PROFILE_NAME = "#{NAMESPACE}.ip".freeze AWS_AUTH_ASSUME_ROLE_USER_POLICY_NAME = "#{NAMESPACE}.assume-role-user-policy".freeze AWS_AUTH_EC2_ROLE_NAME = "#{NAMESPACE}.ec2-role".freeze AWS_AUTH_ECS_CLUSTER_NAME = "#{NAMESPACE}_aws-auth".freeze AWS_AUTH_ECS_TASK_FAMILY = "#{NAMESPACE}_aws-auth".freeze AWS_AUTH_ECS_SERVICE_NAME = "#{NAMESPACE}_aws-auth".freeze AWS_AUTH_ECS_LOG_GROUP = "/ecs/#{NAMESPACE}/aws-auth-ecs".freeze AWS_AUTH_ECS_LOG_STREAM_PREFIX = "task".freeze # This role allows ECS tasks access to output logs to CloudWatch. AWS_AUTH_ECS_EXECUTION_ROLE_NAME = "#{NAMESPACE}.ecs-execution-role".freeze # This role is assumed by ECS tasks. AWS_AUTH_ECS_TASK_ROLE_NAME = "#{NAMESPACE}.ecs-task-role".freeze autoload :Base, 'support/aws_utils/base' autoload :Inspector, 'support/aws_utils/inspector' autoload :Orchestrator, 'support/aws_utils/orchestrator' autoload :Provisioner, 'support/aws_utils/provisioner' end mongo-ruby-driver-2.21.3/spec/support/aws_utils/000077500000000000000000000000001505113246500216505ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/support/aws_utils/base.rb000066400000000000000000000070121505113246500231070ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all module AwsUtils class Base def initialize(access_key_id: nil, secret_access_key: nil, region: nil, **options) @access_key_id = access_key_id || ENV['AWS_ACCESS_KEY_ID'] @secret_access_key = secret_access_key || ENV['AWS_SECRET_ACCESS_KEY'] @region = region || ENV['AWS_REGION'] @options = options end attr_reader :access_key_id, :secret_access_key, :region, :options private def detect_object(resp, resp_attr, object_attr, value) resp.each do |batch| batch.send(resp_attr).each do |object| if object.send(object_attr) == value return object end end end nil end def ssh_security_group_id begin sg = ec2_client.describe_security_groups( group_names: [AWS_AUTH_SECURITY_GROUP_NAME], ).security_groups.first sg&.group_id rescue Aws::EC2::Errors::InvalidGroupNotFound # Unlike almost all other describe calls, this one raises an exception # if there isn't a security group matching the criteria. nil end end def ssh_security_group_id! ssh_security_group_id.tap do |security_group_id| if security_group_id.nil?
raise 'Security group does not exist, please provision' end end end def ssh_vpc_security_group_id begin # If the top-level group_name parameter is used, only non-VPC # security groups are returned which does not find the VPC group # we are looking for here. sg = ec2_client.describe_security_groups( filters: [{ name: 'group-name', values: [AWS_AUTH_VPC_SECURITY_GROUP_NAME], }], ).security_groups.first sg&.group_id rescue Aws::EC2::Errors::InvalidGroupNotFound # Unlike almost all other describe calls, this one raises an exception # if there isn't a security group matching the criteria. nil end end def ssh_vpc_security_group_id! ssh_vpc_security_group_id.tap do |security_group_id| if security_group_id.nil? raise 'Security group does not exist, please provision' end end end def subnet_id # This directly queries the subnets for the one with the expected # CIDR block, to save on the number of requests made to AWS. ec2_client.describe_subnets( filters: [{ name: 'cidr-block', values: [AWS_AUTH_VPC_CIDR], }], ).subnets.first&.subnet_id end def subnet_id! subnet_id.tap do |subnet_id| if subnet_id.nil? raise 'Subnet does not exist, please provision' end end end def credentials Aws::Credentials.new(access_key_id, secret_access_key) end public def ec2_client @ec2_client ||= Aws::EC2::Client.new( region: region, credentials: credentials, ) end def iam_client @iam_client ||= Aws::IAM::Client.new( region: region, credentials: credentials, ) end def ecs_client @ecs_client ||= Aws::ECS::Client.new( region: region, credentials: credentials, ) end def logs_client @logs_client ||= Aws::CloudWatchLogs::Client.new( region: region, credentials: credentials, ) end def sts_client @sts_client ||= Aws::STS::Client.new( region: region, credentials: credentials, ) end end end mongo-ruby-driver-2.21.3/spec/support/aws_utils/inspector.rb000066400000000000000000000157261505113246500242140ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all module AwsUtils class Inspector < Base def list_key_pairs ec2_client.describe_key_pairs.key_pairs.each do |key_pair| puts key_pair.key_name end end def assume_role_arn assume_role = detect_object(iam_client.list_roles, :roles, :role_name, AWS_AUTH_ASSUME_ROLE_NAME) if assume_role.nil? raise 'No role found, please run `aws setup-resources`' end assume_role.arn end def ecs_status(cluster_name: AWS_AUTH_ECS_CLUSTER_NAME, service_name: AWS_AUTH_ECS_SERVICE_NAME, get_public_ip: true, get_logs: true ) service = ecs_client.describe_services( cluster: cluster_name, services: [service_name], ).services.first if service.nil? raise "No service #{service_name} in cluster #{cluster_name} - provision first" end # When Ruby driver tooling is used, the task definition generation # goes up on each service launch, while the service name is fixed. # When testing in Evergreen, generation is fixed because we do not # change the task definition, but service name is different for # each test run. if service.task_definition =~ /:(\d+)$/ generation = $1 puts "Current task definition generation: #{generation} for service: #{service_name}" else raise 'Could not determine task definition generation' end colors = { 'running' => :green, 'pending' => :yellow, 'stopped' => :red, } # Pending status in the API includes tasks in provisioning status as # shown in the AWS console. # # The API returns the tasks unordered, in particular the latest task # may be in the middle of the list following relatively ancient tasks. # Collect all tasks in a single list and order them by generation.
# We expect to have a single task per generation. tasks = [] %w(running pending stopped).each do |status| resp = ecs_client.list_tasks( cluster: cluster_name, service_name: service_name, desired_status: status, ) task_arns = resp.map(&:task_arns).flatten if task_arns.empty? next end ecs_client.describe_tasks( cluster: cluster_name, tasks: task_arns, ).each do |tbatch| unless tbatch.failures.empty? # The task list endpoint does not raise an exception if it can't # find the tasks, but reports "failures". puts "Failures for #{task_arns.join(', ')}:" tbatch.failures.each do |failure| puts "#{failure.arn}: #{failure.reason}" next end end tbatch.tasks.each do |task| tasks << task end end end tasks.each do |task| class << task def generation @generation ||= if task_definition_arn =~ /:(\d+)$/ $1.to_i else raise 'Could not determine generation' end end def task_uuid @uuid ||= task_arn.split('/').last end end end tasks = tasks.sort_by do |task| -task.generation end.first(3) running_task = nil running_private_ip = nil running_public_ip = nil if tasks.empty? puts 'No tasks in the cluster' end tasks.each do |task| status = task.last_status.downcase status_ext = case status when 'stopped' ": #{task.stopped_reason}" else '' end decorated_status = Paint[status.upcase, colors[status]] puts "Task for generation #{task.generation}: #{decorated_status}#{status_ext} (uuid: #{task.task_uuid})" if status == 'running' puts "Task ARN: #{task.task_arn}" running_task ||= task end task.containers.each do |container| if container.reason puts container.reason end end if status == 'running' attachment = detect_object([task], :attachments, :type, 'ElasticNetworkInterface') ip = detect_object([attachment], :details, :name, 'privateIPv4Address') if ip private_ip = ip.value running_private_ip ||= private_ip end msg = "Private IP: #{private_ip}" if get_public_ip niid = detect_object([attachment], :details, :name, 'networkInterfaceId') network_interface = ec2_client.describe_network_interfaces( network_interface_ids: [niid.value], ).network_interfaces.first public_ip = network_interface&.association&.public_ip running_public_ip ||= public_ip msg += ", public IP: #{public_ip}" end puts msg end puts end puts task_ids = [] max_event_count = 5 event_count = 0 service = ecs_client.describe_services( cluster: cluster_name, services: [service_name], ).services.first if service.nil? puts 'Service is missing' else if service.events.empty? puts 'No events for service' else puts "Events for #{service.service_arn}:" service.events.each do |event| event_count += 1 break if event_count > max_event_count if event.message =~ /\(task (\w+)\)/ task_ids << $1 end puts "#{event.created_at.strftime('%Y-%m-%d %H:%M:%S %z')} #{event.message}" end end end if get_logs && running_task puts log_stream_name = "task/ssh/#{running_task.task_uuid}" log_stream = logs_client.describe_log_streams( log_group_name: AWS_AUTH_ECS_LOG_GROUP, log_stream_name_prefix: log_stream_name, ).log_streams.first if log_stream log_events = logs_client.get_log_events( log_group_name: AWS_AUTH_ECS_LOG_GROUP, log_stream_name: log_stream_name, end_time: Time.now.to_i * 1000, limit: 100, ).events if log_events.any? 
puts "Task logs for task #{running_task.task_uuid}:" log_events.each do |event| puts "[#{Time.at(event.timestamp/1000r).strftime('%Y-%m-%d %H:%M:%S %z')}] #{event.message}" end else puts "No CloudWatch events in the log stream for task #{running_task.task_uuid}" end else puts "No CloudWatch log stream for task #{running_task.task_uuid}" end end if running_public_ip puts puts "ip=#{running_public_ip}" puts "ssh -o StrictHostKeyChecking=false root@#{running_public_ip}" end { private_ip: running_private_ip, } end private def ucfirst(str) str[0].upcase + str[1...str.length] end end end mongo-ruby-driver-2.21.3/spec/support/aws_utils/orchestrator.rb000066400000000000000000000311561505113246500247220ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'securerandom' module AwsUtils class Orchestrator < Base def assume_role(role_arn) # https://docs.aws.amazon.com/sdk-for-ruby/v3/api/Aws/STS/Client.html#assume_role-instance_method resp = sts_client.assume_role( role_arn: role_arn, role_session_name: "#{NAMESPACE}.test", ) resp.credentials end def assume_role_with_web_identity(role_arn, token_file) token = File.open(token_file).read resp = sts_client.assume_role_with_web_identity( role_arn: role_arn, role_session_name: SecureRandom.uuid, web_identity_token: token, duration_seconds: 900 ) resp.credentials end def set_instance_profile(instance_id, instance_profile_name: AWS_AUTH_INSTANCE_PROFILE_NAME, instance_profile_arn: nil ) clear_instance_profile(instance_id) deadline = Process.clock_gettime(Process::CLOCK_MONOTONIC) + 30 begin ec2_client.associate_iam_instance_profile( iam_instance_profile: { name: instance_profile_name, arn: instance_profile_arn, }, instance_id: instance_id, ) rescue Aws::EC2::Errors::RequestLimitExceeded => e if Process.clock_gettime(Process::CLOCK_MONOTONIC) >= deadline raise end STDERR.puts("AWS request limit exceeded: #{e.class}: #{e}, will retry") sleep 5 retry end end def clear_instance_profile(instance_id) assoc = detect_object(ec2_client.describe_iam_instance_profile_associations, :iam_instance_profile_associations, :instance_id, instance_id) if assoc deadline = Process.clock_gettime(Process::CLOCK_MONOTONIC) + 30 begin ec2_client.disassociate_iam_instance_profile( association_id: assoc.association_id, ) rescue Aws::EC2::Errors::RequestLimitExceeded => e if Process.clock_gettime(Process::CLOCK_MONOTONIC) >= deadline raise end STDERR.puts("AWS request limit exceeded: #{e.class}: #{e}, will retry") sleep 5 retry end end end def provision_auth_ec2_instance(key_pair_name: nil, public_key_path: nil, distro: 'ubuntu1604' ) security_group_id = ssh_security_group_id! reservations = ec2_client.describe_instances(filters: [ {name: 'tag:name', values: [AWS_AUTH_EC2_INSTANCE_NAME]}, ]).reservations instance = find_running_instance(reservations) if instance.nil? 
ami_name = AWS_AUTH_EC2_AMI_NAMES.fetch(distro) image = ec2_client.describe_images( filters: [{name: 'name', values: [ami_name]}], ).images.first if public_key_path public_key = File.read(public_key_path) user_data = Base64.encode64(<<-CMD) #!/bin/sh for user in `ls /home`; do cd /home/$user && mkdir -p .ssh && chmod 0700 .ssh && chown $user:$user .ssh && cat <<-EOT |tee -a .ssh/authorized_keys #{public_key} EOT done CMD end resp = ec2_client.run_instances( instance_type: 't3a.small', image_id: image.image_id, min_count: 1, max_count: 1, key_name: key_pair_name, user_data: user_data, tag_specifications: [{ resource_type: 'instance', tags: [{key: 'name', value: AWS_AUTH_EC2_INSTANCE_NAME}], }], monitoring: {enabled: false}, credit_specification: {cpu_credits: 'standard'}, security_group_ids: [security_group_id], metadata_options: { # This is required for Docker containers on the instance to be able # to use the instance metadata endpoints. http_put_response_hop_limit: 2, }, ).to_h instance_id = resp[:instances].first[:instance_id] reservations = ec2_client.describe_instances(instance_ids: [instance_id]).reservations instance = find_running_instance(reservations) end if instance.nil? raise "Instance should have been found here" end if instance.state.name == 'stopped' ec2_client.start_instances(instance_ids: [instance.instance_id]) end 10.times do if %w(stopped pending).include?(instance.state.name) puts "Waiting for instance #{instance.instance_id} to start (current state: #{instance.state.name})" sleep 5 end reservations = ec2_client.describe_instances(instance_ids: [instance.instance_id]).reservations instance = find_running_instance(reservations) end puts "Found usable instance #{instance.instance_id} at #{instance.public_ip_address}" end def terminate_auth_ec2_instance ec2_client.describe_instances(filters: [ {name: 'tag:name', values: [AWS_AUTH_EC2_INSTANCE_NAME]}, ]).each do |resp| resp.reservations.each do |res| res.instances.each do |instance| puts "Terminating #{instance.instance_id}" ec2_client.terminate_instances(instance_ids: [instance.instance_id]) end end end end def provision_auth_ecs_task(public_key_path: nil, cluster_name: AWS_AUTH_ECS_CLUSTER_NAME, service_name: AWS_AUTH_ECS_SERVICE_NAME, security_group_id: nil, subnet_ids: nil, task_definition_ref: AWS_AUTH_ECS_TASK_FAMILY ) security_group_id ||= ssh_vpc_security_group_id! subnet_ids ||= [subnet_id!] # https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ECS_AWSCLI_Fargate.html resp = ecs_client.describe_clusters( clusters: [cluster_name], ) cluster = detect_object(resp, :clusters, :cluster_name, cluster_name) if cluster.nil? raise 'No cluster found, please run `aws setup-resources`' end if public_key_path public_key = File.read(public_key_path) unless public_key =~ /\Assh-/ raise "The file at #{public_key_path} does not look like a public key" end entry_point = ['bash', '-c', <<-CMD] apt-get update && apt-get install -y openssh-server && cd /root && mkdir -p .ssh && chmod 0700 .ssh && cat >.ssh/authorized_keys <<-EOT && #{public_key} EOT service ssh start && sleep 10000000 #mkdir /run/sshd && /usr/sbin/sshd -d CMD else entry_point = nil end launch_type = if options[:ec2] 'EC2' else 'FARGATE' end # When testing in Evergreen, we are given the task definition ARN # and we always launch the tasks with that ARN. # When testing locally, we replace the task definition every time we launch # the service.
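# (A full task definition reference looks roughly like
# arn:aws:ecs:<region>:<account-id>:task-definition/<family>:<revision>;
# anything not starting with "arn:" is treated below as a family name
# under which a new task definition revision is registered.)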
if task_definition_ref !~ /^arn:/ execution_role = detect_object(iam_client.list_roles, :roles, :role_name, AWS_AUTH_ECS_EXECUTION_ROLE_NAME) if execution_role.nil? raise 'Execution role not configured' end task_role = detect_object(iam_client.list_roles, :roles, :role_name, AWS_AUTH_ECS_TASK_ROLE_NAME) if task_role.nil? raise 'Task role not configured' end task_definition = ecs_client.register_task_definition( family: AWS_AUTH_ECS_TASK_FAMILY, container_definitions: [{ name: 'ssh', essential: true, entry_point: entry_point, image: 'debian:9', port_mappings: [{ container_port: 22, protocol: 'tcp', }], log_configuration: { log_driver: 'awslogs', options: { 'awslogs-group' => AWS_AUTH_ECS_LOG_GROUP, 'awslogs-region' => region, 'awslogs-stream-prefix' => AWS_AUTH_ECS_LOG_STREAM_PREFIX, }, }, }], requires_compatibilities: [launch_type], network_mode: 'awsvpc', cpu: '512', memory: '2048', # This is the ECS task role used for AWS auth testing task_role_arn: task_role.arn, # The execution role is required to support awslogs (logging to # CloudWatch). execution_role_arn: execution_role.arn, ).task_definition task_definition_ref = AWS_AUTH_ECS_TASK_FAMILY end service = ecs_client.describe_services( cluster: cluster_name, services: [service_name], ).services.first if service && service.status.downcase == 'draining' puts "Waiting for #{service_name} to drain" ecs_client.wait_until( :services_inactive, { cluster: cluster.cluster_name, services: [service_name], }, delay: 5, max_attempts: 36, ) puts "... done." service = nil end if service && service.status.downcase == 'inactive' service = nil end if service puts "Updating service with status #{service.status}" service = ecs_client.update_service( cluster: cluster_name, service: service_name, task_definition: task_definition_ref, ).service else puts "Creating a new service" vpc_config = {} unless options[:ec2] vpc_config[:assign_public_ip] = 'ENABLED' end service = ecs_client.create_service( desired_count: 1, service_name: service_name, task_definition: task_definition_ref, cluster: cluster_name, launch_type: launch_type, network_configuration: { awsvpc_configuration: vpc_config.merge( subnets: subnet_ids, security_groups: [security_group_id], ), }, ).service end end def wait_for_ecs_ready( cluster_name: AWS_AUTH_ECS_CLUSTER_NAME, service_name: AWS_AUTH_ECS_SERVICE_NAME, timeout: 20 ) deadline = Process.clock_gettime(Process::CLOCK_MONOTONIC) + timeout # The AWS SDK waiter seems to immediately fail sometimes right after # the service is created, so wait for the service to become active # manually and then use the waiter to wait for the service to become # stable. # # The failure may be due to the fact that apparently, it is possible for # describe_services to not return an existing service for some time. # Therefore, allow the lack of service to be a transient error. loop do service = ecs_client.describe_services( cluster: cluster_name, services: [service_name], ).services.first if service.nil? 
puts "Service #{service_name} in cluster #{cluster_name} does not exist (yet?)" status = 'MISSING' elsif service.status.downcase == 'active' break else status = service.status end if Process.clock_gettime(Process::CLOCK_MONOTONIC) >= deadline raise "Service #{service_name} in cluster #{cluster_name} did not become ready in #{timeout} seconds (current status: #{status})" end puts "Wating for service #{service_name} in cluster #{cluster_name} to become ready (#{'%2.1f' % (deadline - now)} seconds remaining, current status: #{status})" sleep 5 end puts "Wating for service #{service_name} in cluster #{cluster_name} to become stable" ecs_client.wait_until( :services_stable, { cluster: cluster_name, services: [service_name], }, delay: 5, max_attempts: 36, ) end def terminate_auth_ecs_task ecs_client.describe_services( cluster: AWS_AUTH_ECS_CLUSTER_NAME, services: [AWS_AUTH_ECS_SERVICE_NAME], ).each do |resp| resp.services.each do |service| puts "Terminating #{service.service_name}" begin ecs_client.update_service( cluster: AWS_AUTH_ECS_CLUSTER_NAME, service: service.service_name, desired_count: 0, ) rescue Aws::ECS::Errors::ServiceNotActiveException # No action needed end ecs_client.delete_service( cluster: AWS_AUTH_ECS_CLUSTER_NAME, service: service.service_name, ) end end end private def find_running_instance(reservations) instance = nil reservations.each do |reservation| instance = reservation.instances.detect do |instance| %w(pending running stopped).include?(instance.state.name) end break if instance end instance end end end mongo-ruby-driver-2.21.3/spec/support/aws_utils/provisioner.rb000066400000000000000000000266041505113246500245640ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all module AwsUtils class Provisioner < Base def setup_aws_auth_resources security_group_id = ssh_security_group_id if security_group_id.nil? security_group_id = ec2_client.create_security_group( group_name: AWS_AUTH_SECURITY_GROUP_NAME, description: 'Inbound SSH', ).group_id end puts "EC2 Security group: #{security_group_id}" setup_security_group(security_group_id) vpc = ec2_client.describe_vpcs( filters: [{ name: 'cidr', values: [AWS_AUTH_VPC_CIDR], }], ).vpcs.first if vpc.nil? vpc = ec2_client.create_vpc( cidr_block: AWS_AUTH_VPC_CIDR, ).vpc end # The VPC must have an internet gateway and the subnet in the VPC # must have a route to the internet gateway. # https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Internet_Gateway.html#d0e22943 # Internet gateways cannot be named when they are created, therefore # we check if our VPC has a gateway and if not, create an unnamed one # and attach it right away. # https://aws.amazon.com/premiumsupport/knowledge-center/ecs-pull-container-error/ igw = ec2_client.describe_internet_gateways( filters: [{ name: 'attachment.vpc-id', values: [vpc.vpc_id], }], ).internet_gateways.first if igw.nil? igw = ec2_client.create_internet_gateway.internet_gateway ec2_client.attach_internet_gateway( internet_gateway_id: igw.internet_gateway_id, vpc_id: vpc.vpc_id, ) end # https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Internet_Gateway.html#Add_IGW_Routing route_table = ec2_client.describe_route_tables( filters: [{ name: 'vpc-id', values: [vpc.vpc_id], }], ).route_tables.first ec2_client.create_route( destination_cidr_block: '0.0.0.0/0', gateway_id: igw.internet_gateway_id, route_table_id: route_table.route_table_id, ) vpc_security_group_id = ssh_vpc_security_group_id if vpc_security_group_id.nil? 
        vpc_security_group_id = ec2_client.create_security_group(
          group_name: AWS_AUTH_VPC_SECURITY_GROUP_NAME,
          description: 'Inbound SSH',
          vpc_id: vpc.vpc_id,
        ).group_id
      end
      setup_security_group(vpc_security_group_id)

      subnet = ec2_client.describe_subnets(
        filters: [{
          name: 'vpc-id',
          values: [vpc.vpc_id],
        }],
      ).subnets.first
      if subnet.nil?
        subnet = ec2_client.create_subnet(
          cidr_block: AWS_AUTH_VPC_CIDR,
          vpc_id: vpc.vpc_id,
        ).subnet
      end
      puts "VPC: #{vpc.vpc_id}, subnet: #{subnet.subnet_id}, security group: #{vpc_security_group_id}"

      # For testing regular credentials, create an IAM user with no permissions.
      user = detect_object(iam_client.list_users, :users, :user_name, AWS_AUTH_REGULAR_USER_NAME)
      if user.nil?
        resp = iam_client.create_user(
          user_name: AWS_AUTH_REGULAR_USER_NAME,
        )
        user = resp.user
      end
      puts "Regular AWS auth unprivileged user: #{user.arn}"

      # Assume role testing
      # https://aws.amazon.com/premiumsupport/knowledge-center/iam-assume-role-cli/
      #
      # The instructions given in the above guide create an intermediate user
      # who has the ability to assume the role. This script reuses the
      # regular unprivileged user to be the user that assumes the role.
      user_policy = detect_object(iam_client.list_policies, :policies, :policy_name, AWS_AUTH_ASSUME_ROLE_USER_POLICY_NAME)
      if user_policy.nil?
        user_policy_document = {
          "Version": "2012-10-17",
          "Statement": [
            {
              "Effect": "Allow",
              "Action": [
                "ec2:Describe*",
                "iam:ListRoles",
                "sts:AssumeRole",
              ],
              "Resource": "*",
            },
          ],
        }
        user_policy = iam_client.create_policy(
          policy_name: AWS_AUTH_ASSUME_ROLE_USER_POLICY_NAME,
          policy_document: user_policy_document.to_json,
        ).policy
      end
      iam_client.attach_user_policy(
        policy_arn: user_policy.arn,
        user_name: user.user_name,
      )

      assume_role = detect_object(iam_client.list_roles, :roles, :role_name, AWS_AUTH_ASSUME_ROLE_NAME)
      if assume_role.nil?
        aws_account_id = user.arn.split(':')[4]
        assume_role_policy = {
          "Version": "2012-10-17",
          "Statement": {
            "Effect": "Allow",
            "Principal": { "AWS": "arn:aws:iam::#{aws_account_id}:root" },
            "Action": "sts:AssumeRole",
          },
        }
        resp = iam_client.create_role(
          role_name: AWS_AUTH_ASSUME_ROLE_NAME,
          assume_role_policy_document: assume_role_policy.to_json,
          max_session_duration: 12*3600,
        )
        assume_role = resp.role
      end
      puts "Assume role ARN: #{assume_role.arn}"

      # For testing retrieval of credentials from the EC2 link-local endpoint,
      # create an instance profile.
      ips = iam_client.list_instance_profiles
      instance_profile = ips.instance_profiles.detect do |instance_profile|
        instance_profile.instance_profile_name == AWS_AUTH_INSTANCE_PROFILE_NAME
      end
      if instance_profile.nil?
        resp = iam_client.create_instance_profile(
          instance_profile_name: AWS_AUTH_INSTANCE_PROFILE_NAME,
        )
        instance_profile = resp.instance_profile
      end
      puts "EC2 instance profile: #{instance_profile.arn}"

      # https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html#create-iam-role
      assume_role_policy_document = {
        "Version": "2012-10-17",
        "Statement": {
          "Effect": "Allow",
          "Principal": {"Service": "ec2.amazonaws.com"},
          "Action": "sts:AssumeRole",
        },
      }
      ec2_role_policy_document = {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Effect": "Allow",
            "Action": [
              "ec2:Describe*",
            ],
            "Resource": "*",
          },
        ],
      }
      ec2_role = create_role_with_policy(
        AWS_AUTH_EC2_ROLE_NAME,
        {
          assume_role_policy_document: assume_role_policy_document.to_json,
        },
        ec2_role_policy_document,
      )
      puts "EC2 role ARN: #{ec2_role.arn}"

      instance_profile.roles.each do |role|
        iam_client.remove_role_from_instance_profile(
          instance_profile_name: AWS_AUTH_INSTANCE_PROFILE_NAME,
          role_name: role.role_name,
        )
      end
      iam_client.add_role_to_instance_profile(
        instance_profile_name: AWS_AUTH_INSTANCE_PROFILE_NAME,
        role_name: AWS_AUTH_EC2_ROLE_NAME,
      )

      # https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ECS_AWSCLI_Fargate.html
      puts "ECS cluster name: #{AWS_AUTH_ECS_CLUSTER_NAME}"
      resp = ecs_client.describe_clusters(
        clusters: [AWS_AUTH_ECS_CLUSTER_NAME],
      )
      cluster = detect_object(resp, :clusters, :cluster_name, AWS_AUTH_ECS_CLUSTER_NAME)
      if cluster.nil?
        resp = ecs_client.create_cluster(
          cluster_name: AWS_AUTH_ECS_CLUSTER_NAME,
        )
        cluster = resp.cluster
      end

      # https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html
      ecs_assume_role_policy_document = {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Sid": "",
            "Effect": "Allow",
            "Principal": {
              "Service": "ecs-tasks.amazonaws.com",
            },
            "Action": "sts:AssumeRole",
          },
        ],
      }
      # The task role itself does not have any permissions.
      # The example given in https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html
      # allows read-only access to an S3 bucket.
      ecs_task_role_policy_document = {
        "Version": "2012-10-17",
        "Statement": [],
      }
      ecs_task_role = create_role_with_policy(
        AWS_AUTH_ECS_TASK_ROLE_NAME,
        {
          assume_role_policy_document: ecs_assume_role_policy_document.to_json,
        },
      )

      # Logging to CloudWatch:
      # https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/QuickStartEC2Instance.html
      ecs_execution_role_policy_document = {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Effect": "Allow",
            "Action": [
              "logs:CreateLogGroup",
              "logs:CreateLogStream",
              "logs:PutLogEvents",
              "logs:DescribeLogStreams",
            ],
            "Resource": [
              "*"
            ],
          }],
      }
      ecs_execution_role = create_role_with_policy(
        AWS_AUTH_ECS_EXECUTION_ROLE_NAME,
        {
          assume_role_policy_document: ecs_assume_role_policy_document.to_json,
        },
        ecs_execution_role_policy_document,
      )

=begin
      iam_client.attach_role_policy(
        role_name: AWS_AUTH_ECS_ROLE_NAME,
        policy_arn: "arn:aws:iam::aws:policy/AmazonECSTaskExecutionRolePolicy",
      )
=end

      log_group = logs_client.describe_log_groups(
        log_group_name_prefix: AWS_AUTH_ECS_LOG_GROUP,
      ).log_groups.first
      unless log_group
        logs_client.create_log_group(
          log_group_name: AWS_AUTH_ECS_LOG_GROUP,
        )
      end
      logs_client.put_retention_policy(
        log_group_name: AWS_AUTH_ECS_LOG_GROUP,
        retention_in_days: 1,
      )
    end

    def reset_keys
      user = detect_object(iam_client.list_users, :users, :user_name, AWS_AUTH_REGULAR_USER_NAME)
      if user.nil?
        raise 'No user found, please run `aws setup-resources`'
      end

      iam_client.list_access_keys(
        user_name: user.user_name,
      ).to_h[:access_key_metadata].each do |access_key|
        iam_client.delete_access_key(
          user_name: user.user_name,
          access_key_id: access_key[:access_key_id],
        )
      end

      resp = iam_client.create_access_key(
        user_name: user.user_name,
      )
      access_key = resp.to_h[:access_key]

      puts "Credentials for regular user (#{AWS_AUTH_REGULAR_USER_NAME}):"
      puts "AWS_ACCESS_KEY_ID=#{access_key[:access_key_id]}"
      puts "AWS_SECRET_ACCESS_KEY=#{access_key[:secret_access_key]}"
      puts
    end

    private

    def create_role_with_policy(role_name, role_options, role_policy_document = nil)
      role = detect_object(iam_client.list_roles, :roles, :role_name, role_name)
      if role.nil?
        resp = iam_client.create_role({
          role_name: role_name,
        }.update(role_options))
        role = resp.role
      end
      if role_policy_document
        iam_client.put_role_policy(
          role_name: role_name,
          policy_name: "#{role_name}.policy",
          policy_document: role_policy_document.to_json,
        )
      end
      role
    end

    def setup_security_group(security_group_id)
      ec2_client.authorize_security_group_ingress(
        group_id: security_group_id,
        ip_permissions: [{
          from_port: 22,
          to_port: 22,
          ip_protocol: 'tcp',
          ip_ranges: [{
            cidr_ip: '0.0.0.0/0',
          }],
        }],
      )
    rescue Aws::EC2::Errors::InvalidPermissionDuplicate
      # The ingress rule already exists; creation is idempotent.
    end
  end
end
mongo-ruby-driver-2.21.3/spec/support/background_thread_registry.rb000066400000000000000000000027041505113246500255640ustar00rootroot00000000000000
# frozen_string_literal: true
# rubocop:todo all

require 'singleton'
require 'ostruct'

module Mongo
  module BackgroundThread

    alias :start_without_tracking! :start!

    def start!
      start_without_tracking!.tap do |thread|
        BackgroundThreadRegistry.instance.register(self, thread)
      end
    end
  end
end

class BackgroundThreadRegistry
  include Singleton

  def initialize
    @lock = Mutex.new
    @records = []
  end

  def register(object, thread)
    @lock.synchronize do
      @records << OpenStruct.new(
        thread: thread,
        object: object,
        # When rake spec:prepare is run, the current_example method is not defined
        example: RSpec.respond_to?(:current_example) ? RSpec.current_example : nil,
      )
    end
  end

  def verify_empty!
    @lock.synchronize do
      alive_thread_records = @records.select { |record| record.thread.alive? }
      if alive_thread_records.any?
        msg = +"Live background threads after closing all clients:"
        alive_thread_records.each do |record|
          msg << "\n #{record.object}"
          if record.object.respond_to?(:options)
            msg << "\n with options: #{record.object.options}"
          end
          if record.example
            msg << "\n in #{record.example.id}: #{record.example.full_description}"
          else
            msg << "\n not in an example"
          end
        end
        raise msg
      end
      @records.clear
    end
  end
end
mongo-ruby-driver-2.21.3/spec/support/certificates/000077500000000000000000000000001505113246500223035ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/support/certificates/README.md000066400000000000000000000071041505113246500235640ustar00rootroot00000000000000
# Ruby Driver Test TLS Certificates

## File Types

All files in this directory are in the PEM format. They are generated by the
x509gen MongoDB tool.

The file extensions map to content as follows:

- `.key` - private key
- `.crt` - certificate or a certificate chain
- `.pem` - certificate (or a certificate chain) and private key combined in
  the same file

The file name fragments have the following meaning:

- `second-level` - these certificates are signed by the intermediate
  certificates (`client-int.crt` & `server-int.crt`) rather than directly by
  the CA certificates.
- `int` - these are intermediate certificates used for testing certificate
  chains. The server and the client sides have their own intermediate
  certificates.
- `bundle` - these files contain the leaf certificates followed by
  intermediate certificates up to the CA certificates, but do not include
  the CA certificates.

## Generation

Keep in mind the following important notes:

- In multi-ca.crt, the Ruby driver CA certificate must be last (the first
  certificate must be an unrelated certificate).
- All server certificates should have `localhost.test.build.10gen.cc` in the
  Subject Alternative Name field for testing SRV monitoring.

## Tools

To inspect a certificate:

    openssl x509 -text -in path.pem

## Manual Testing - openssl

Start a test server using the simple certificate:

    openssl s_server -port 29999 -CAfile ca.crt -cert server.pem -verify 1

Use OpenSSL's test client to test certificate verification using the simple
certificate:

    openssl s_client -connect :29999 -CAfile ca.crt -cert client.pem \
      -verify 1 -verify_return_error

The same thing, but using the second-level certificate with the intermediate
certificate (the server follows the chain up to the CA):

    openssl s_client -connect :29999 -CAfile ca.crt \
      -cert client-second-level-bundle.pem \
      -verify 1 -verify_return_error

Note, however, that even though a client-to-server connection succeeds using
the second-level client bundle, openssl appears to be incapable of verifying
the same certificate chain standalone with the verify command:

    # This fails
    openssl verify -verbose -CAfile ca.crt -untrusted client-int.crt \
      client-second-level.pem

    # Also fails
    openssl verify -trusted client-int.crt client-second-level.crt

Likewise, when the server's certificate uses an intermediate certificate,
the client seems unable to verify it:

    openssl s_server -port 29999 -CAfile ca.crt -verify 1 \
      -cert server-second-level-bundle.pem

    # This fails
    openssl s_client -connect :29999 -CAfile ca.crt -cert client.pem \
      -verify 1 -verify_return_error

To sum up, openssl's command-line tools appear to handle certificate chains
only when the server verifies a chain provided by the client: they do not
handle chains presented by the server, nor standalone verification of a
chain.

## Manual Testing - mongosh

When it comes to `mongod` and `mongosh`, certificate chains are supported in
both directions:

    mongod --sslMode requireSSL \
      --sslCAFile ca.crt \
      --sslPEMKeyFile server-second-level-bundle.pem \
      --sslClientCertificate client.pem

    mongosh --host localhost --ssl \
      --sslCAFile ca.crt \
      --sslPEMKeyFile client-second-level-bundle.pem

The `--host` option needs to be given to `mongosh` because the certificates
here do not include 127.0.0.1 in the Subject Alternative Name. If the
intermediate certificate is not provided, the connection should fail.
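The same chain handling can also be exercised from the Ruby driver itself.
The following is a minimal sketch (assuming a `mongod` started with the
server bundle as in the example above, listening on the default port); the
`ssl_*` options shown are the driver's TLS client options:

    require 'mongo'

    client = Mongo::Client.new(
      ['localhost:27017'],
      ssl: true,
      ssl_ca_cert: 'ca.crt',
      # The bundle contains the leaf certificate followed by the
      # intermediate certificate; the key file contains the private key.
      ssl_cert: 'client-second-level-bundle.pem',
      ssl_key: 'client-second-level.key',
    )
    # Forces a connection, and therefore the TLS handshake.
    client.database.collection_names

Returning to `mongosh`, the invocation below omits the intermediate
certificate and is therefore expected to be rejected: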
# Expected to fail mongosh --host localhost --ssl \ --sslCAFile ca.crt \ --sslPEMKeyFile client-second-level.pem mongo-ruby-driver-2.21.3/spec/support/certificates/atlas-ocsp-ca.crt000066400000000000000000000142211505113246500254440ustar00rootroot00000000000000Certificate: Data: Version: 3 (0x2) Serial Number: 4b:a8:52:93:f7:9a:2f:a2:73:06:4b:a8:04:8d:75:d0 Signature Algorithm: sha256WithRSAEncryption Issuer: C=US, O=Internet Security Research Group, CN=ISRG Root X1 Validity Not Before: Mar 13 00:00:00 2024 GMT Not After : Mar 12 23:59:59 2027 GMT Subject: C=US, O=Let's Encrypt, CN=R10 Subject Public Key Info: Public Key Algorithm: rsaEncryption Public-Key: (2048 bit) Modulus: 00:cf:57:e5:e6:c4:54:12:ed:b4:47:fe:c9:27:58: 76:46:50:28:8c:1d:3e:88:df:05:9d:d5:b5:18:29: bd:dd:b5:5a:bf:fa:f6:ce:a3:be:af:00:21:4b:62: 5a:5a:3c:01:2f:c5:58:03:f6:89:ff:8e:11:43:eb: c1:b5:e0:14:07:96:8f:6f:1f:d7:e7:ba:81:39:09: 75:65:b7:c2:af:18:5b:37:26:28:e7:a3:f4:07:2b: 6d:1a:ff:ab:58:bc:95:ae:40:ff:e9:cb:57:c4:b5: 5b:7f:78:0d:18:61:bc:17:e7:54:c6:bb:49:91:cd: 6e:18:d1:80:85:ee:a6:65:36:bc:74:ea:bc:50:4c: ea:fc:21:f3:38:16:93:94:ba:b0:d3:6b:38:06:cd: 16:12:7a:ca:52:75:c8:ad:76:b2:c2:9c:5d:98:45: 5c:6f:61:7b:c6:2d:ee:3c:13:52:86:01:d9:57:e6: 38:1c:df:8d:b5:1f:92:91:9a:e7:4a:1c:cc:45:a8: 72:55:f0:b0:e6:a3:07:ec:fd:a7:1b:66:9e:3f:48: 8b:71:84:71:58:c9:3a:fa:ef:5e:f2:5b:44:2b:3c: 74:e7:8f:b2:47:c1:07:6a:cd:9a:b7:0d:96:f7:12: 81:26:51:54:0a:ec:61:f6:f7:f5:e2:f2:8a:c8:95: 0d:8d Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Key Usage: critical Digital Signature, Certificate Sign, CRL Sign X509v3 Extended Key Usage: TLS Web Client Authentication, TLS Web Server Authentication X509v3 Basic Constraints: critical CA:TRUE, pathlen:0 X509v3 Subject Key Identifier: BB:BC:C3:47:A5:E4:BC:A9:C6:C3:A4:72:0C:10:8D:A2:35:E1:C8:E8 X509v3 Authority Key Identifier: 79:B4:59:E6:7B:B6:E5:E4:01:73:80:08:88:C8:1A:58:F6:E9:9B:6E Authority Information Access: CA Issuers - URI:http://x1.i.lencr.org/ X509v3 Certificate Policies: Policy: 2.23.140.1.2.1 X509v3 CRL Distribution Points: Full Name: URI:http://x1.c.lencr.org/ Signature Algorithm: sha256WithRSAEncryption Signature Value: 92:b1:e7:41:37:eb:79:9d:81:e6:cd:e2:25:e1:3a:20:e9:90: 44:95:a3:81:5c:cf:c3:5d:fd:bd:a0:70:d5:b1:96:28:22:0b: d2:f2:28:cf:0c:e7:d4:e6:43:8c:24:22:1d:c1:42:92:d1:09: af:9f:4b:f4:c8:70:4f:20:16:b1:5a:dd:01:f6:1f:f8:1f:61: 6b:14:27:b0:72:8d:63:ae:ee:e2:ce:4b:cf:37:dd:bb:a3:d4: cd:e7:ad:50:ad:bd:bf:e3:ec:3e:62:36:70:99:31:a7:e8:8d: dd:ea:62:e2:12:ae:f5:9c:d4:3d:2c:0c:aa:d0:9c:79:be:ea: 3d:5c:44:6e:96:31:63:5a:7d:d6:7e:4f:24:a0:4b:05:7f:5e: 6f:d2:d4:ea:5f:33:4b:13:d6:57:b6:ca:de:51:b8:5d:a3:09: 82:74:fd:c7:78:9e:b3:b9:ac:16:da:4a:2b:96:c3:b6:8b:62: 8f:f9:74:19:a2:9e:03:de:e9:6f:9b:b0:0f:d2:a0:5a:f6:85: 5c:c2:04:b7:c8:d5:4e:32:c4:bf:04:5d:bc:29:f6:f7:81:8f: 0c:5d:3c:53:c9:40:90:8b:fb:b6:08:65:b9:a4:21:d5:09:e5: 13:84:84:37:82:ce:10:28:fc:76:c2:06:25:7a:46:52:4d:da: 53:72:a4:27:3f:62:70:ac:be:69:48:00:fb:67:0f:db:5b:a1: e8:d7:03:21:2d:d7:c9:f6:99:42:39:83:43:df:77:0a:12:08: f1:25:d6:ba:94:19:54:18:88:a5:c5:8e:e1:1a:99:93:79:6b: ec:1c:f9:31:40:b0:cc:32:00:df:9f:5e:e7:b4:92:ab:90:82: 91:8d:0d:e0:1e:95:ba:59:3b:2e:4b:5f:c2:b7:46:35:52:39: 06:c0:bd:aa:ac:52:c1:22:a0:44:97:99:f7:0c:a0:21:a7:a1: 6c:71:47:16:17:01:68:c0:ca:a6:26:65:04:7c:b3:ae:c9:e7: 94:55:c2:6f:9b:3c:1c:a9:f9:2e:c5:20:1a:f0:76:e0:be:ec: 18:d6:4f:d8:25:fb:76:11:e8:bf:e6:21:0f:e8:e8:cc:b5:b6: a7:d5:b8:f7:9f:41:cf:61:22:46:6a:83:b6:68:97:2e:7c:ea: 4e:95:db:23:eb:2e:c8:2b:28:84:a4:60:e9:49:f4:44:2e:3b: 
f9:ca:62:57:01:e2:5d:90:16:f9:c9:fc:7a:23:48:8e:a6:d5: 81:72:f1:28:fa:5d:ce:fb:ed:4e:73:8f:94:2e:d2:41:94:98: 99:db:a7:af:70:5f:f5:be:fb:02:20:bf:66:27:6c:b4:ad:fa: 75:12:0b:2b:3e:ce:03:9e -----BEGIN CERTIFICATE----- MIIFBTCCAu2gAwIBAgIQS6hSk/eaL6JzBkuoBI110DANBgkqhkiG9w0BAQsFADBP MQswCQYDVQQGEwJVUzEpMCcGA1UEChMgSW50ZXJuZXQgU2VjdXJpdHkgUmVzZWFy Y2ggR3JvdXAxFTATBgNVBAMTDElTUkcgUm9vdCBYMTAeFw0yNDAzMTMwMDAwMDBa Fw0yNzAzMTIyMzU5NTlaMDMxCzAJBgNVBAYTAlVTMRYwFAYDVQQKEw1MZXQncyBF bmNyeXB0MQwwCgYDVQQDEwNSMTAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEK AoIBAQDPV+XmxFQS7bRH/sknWHZGUCiMHT6I3wWd1bUYKb3dtVq/+vbOo76vACFL YlpaPAEvxVgD9on/jhFD68G14BQHlo9vH9fnuoE5CXVlt8KvGFs3Jijno/QHK20a /6tYvJWuQP/py1fEtVt/eA0YYbwX51TGu0mRzW4Y0YCF7qZlNrx06rxQTOr8IfM4 FpOUurDTazgGzRYSespSdcitdrLCnF2YRVxvYXvGLe48E1KGAdlX5jgc3421H5KR mudKHMxFqHJV8LDmowfs/acbZp4/SItxhHFYyTr6717yW0QrPHTnj7JHwQdqzZq3 DZb3EoEmUVQK7GH29/Xi8orIlQ2NAgMBAAGjgfgwgfUwDgYDVR0PAQH/BAQDAgGG MB0GA1UdJQQWMBQGCCsGAQUFBwMCBggrBgEFBQcDATASBgNVHRMBAf8ECDAGAQH/ AgEAMB0GA1UdDgQWBBS7vMNHpeS8qcbDpHIMEI2iNeHI6DAfBgNVHSMEGDAWgBR5 tFnme7bl5AFzgAiIyBpY9umbbjAyBggrBgEFBQcBAQQmMCQwIgYIKwYBBQUHMAKG Fmh0dHA6Ly94MS5pLmxlbmNyLm9yZy8wEwYDVR0gBAwwCjAIBgZngQwBAgEwJwYD VR0fBCAwHjAcoBqgGIYWaHR0cDovL3gxLmMubGVuY3Iub3JnLzANBgkqhkiG9w0B AQsFAAOCAgEAkrHnQTfreZ2B5s3iJeE6IOmQRJWjgVzPw139vaBw1bGWKCIL0vIo zwzn1OZDjCQiHcFCktEJr59L9MhwTyAWsVrdAfYf+B9haxQnsHKNY67u4s5Lzzfd u6PUzeetUK29v+PsPmI2cJkxp+iN3epi4hKu9ZzUPSwMqtCceb7qPVxEbpYxY1p9 1n5PJKBLBX9eb9LU6l8zSxPWV7bK3lG4XaMJgnT9x3ies7msFtpKK5bDtotij/l0 GaKeA97pb5uwD9KgWvaFXMIEt8jVTjLEvwRdvCn294GPDF08U8lAkIv7tghluaQh 1QnlE4SEN4LOECj8dsIGJXpGUk3aU3KkJz9icKy+aUgA+2cP21uh6NcDIS3XyfaZ QjmDQ993ChII8SXWupQZVBiIpcWO4RqZk3lr7Bz5MUCwzDIA359e57SSq5CCkY0N 4B6Vulk7LktfwrdGNVI5BsC9qqxSwSKgRJeZ9wygIaehbHFHFhcBaMDKpiZlBHyz rsnnlFXCb5s8HKn5LsUgGvB24L7sGNZP2CX7dhHov+YhD+jozLW2p9W4959Bz2Ei RmqDtmiXLnzqTpXbI+suyCsohKRg6Un0RC47+cpiVwHiXZAW+cn8eiNIjqbVgXLx KPpdzvvtTnOPlC7SQZSYmdunr3Bf9b77AiC/ZidstK36dRILKz7OA54= -----END CERTIFICATE-----mongo-ruby-driver-2.21.3/spec/support/certificates/atlas-ocsp.crt000066400000000000000000000206241505113246500250670ustar00rootroot00000000000000Certificate: Data: Version: 3 (0x2) Serial Number: 05:d8:c7:c9:62:ab:ba:d9:4c:bf:4f:e2:ab:33:5c:f1:78:b2 Signature Algorithm: sha256WithRSAEncryption Issuer: C=US, O=Let's Encrypt, CN=R10 Validity Not Before: Jun 20 09:14:41 2025 GMT Not After : Sep 18 09:14:40 2025 GMT Subject: CN=*.g6fyiaq.mongodb-dev.net Subject Public Key Info: Public Key Algorithm: rsaEncryption Public-Key: (4096 bit) Modulus: 00:b5:69:8b:af:4b:a8:a0:ef:d6:13:78:c4:f1:9b: ef:a2:8e:93:8d:0b:c2:47:cb:a1:97:d8:31:03:c6: 2b:db:36:50:5d:26:86:d5:54:e8:4a:cf:c1:86:77: 83:02:64:bf:43:f8:67:f5:70:9b:f8:bc:65:47:31: 1a:76:07:64:b7:3b:a3:60:05:b8:a8:ac:d7:ce:16: aa:4e:99:64:82:d8:18:47:a8:ab:4d:95:4f:98:96: f3:e4:cf:53:4a:83:fb:58:98:42:52:09:7b:cf:ec: 03:b5:64:31:ec:78:ed:5d:0b:da:88:ef:46:6e:73: 29:ce:49:1d:fd:34:ed:24:f1:cb:a8:46:91:94:e7: 3a:55:fb:24:b5:fe:04:04:58:3b:98:ad:36:02:d7: 46:fe:e2:1b:f5:72:d8:c0:9c:96:9f:cd:e4:e1:c2: cc:90:8c:66:5e:10:06:1a:dd:4a:b0:87:d1:9b:70: 73:58:42:bb:fe:88:8a:77:31:85:7b:52:8d:e7:2e: 68:49:96:e4:53:6f:fa:fa:4a:91:3d:28:c7:78:64: 5e:e6:b6:1d:47:c2:6b:9a:d1:52:ef:e9:35:0f:af: ce:f8:4f:a4:32:a8:5c:f5:0d:39:95:98:85:33:14: d8:b2:3c:d1:20:3f:ab:44:66:d3:19:7d:4e:5a:00: 86:50:94:33:d5:05:47:9f:f6:15:09:2c:f1:5a:5f: 0f:bc:07:15:47:61:85:d5:0a:c2:51:48:a8:d1:6f: 0a:24:8d:fe:f5:ef:7f:46:b5:ca:69:e7:1c:fb:df: 9b:48:90:87:2a:33:55:04:86:33:96:8a:b5:6d:1e: 11:15:47:56:60:28:2a:7f:dd:a4:44:14:53:a7:cb: 
85:de:45:d7:0a:26:29:35:10:98:82:1c:72:91:2f: e7:03:c6:23:30:a2:e9:61:e7:74:6c:96:9d:51:5a: 84:32:be:62:c5:d9:f9:a5:6c:01:f2:ca:57:a0:c6: e0:9e:89:2d:a9:3f:83:92:23:ff:8b:b1:37:b2:2a: 71:d2:75:37:8a:28:71:c4:41:42:fd:f1:59:a1:7d: cd:51:71:52:15:1e:7c:28:56:82:d6:e3:8e:c5:0d: d0:41:e2:27:3a:6c:d3:2c:15:92:c7:f8:cf:26:29: dc:9f:f7:2f:af:32:63:11:89:6c:4e:7e:21:08:99: 32:48:a1:db:e5:fc:95:d6:8d:dd:e9:15:03:42:e1: 94:20:4f:ba:ef:0e:ac:d9:99:d1:99:01:0b:74:2d: f8:8a:18:db:75:32:69:5c:f4:63:08:ea:b8:42:4f: 14:aa:30:3c:b0:c7:a2:ff:67:68:a8:f1:bc:3a:f0: 52:ef:e3 Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Key Usage: critical Digital Signature, Key Encipherment X509v3 Extended Key Usage: TLS Web Server Authentication, TLS Web Client Authentication X509v3 Basic Constraints: critical CA:FALSE X509v3 Subject Key Identifier: B2:31:60:0E:14:5F:5E:DA:26:4B:EA:D9:7C:F4:AF:EE:C9:22:41:7E X509v3 Authority Key Identifier: BB:BC:C3:47:A5:E4:BC:A9:C6:C3:A4:72:0C:10:8D:A2:35:E1:C8:E8 Authority Information Access: CA Issuers - URI:http://r10.i.lencr.org/ X509v3 Subject Alternative Name: DNS:*.g6fyiaq.mesh.mongodb-dev.net, DNS:*.g6fyiaq.mongodb-dev.net X509v3 Certificate Policies: Policy: 2.23.140.1.2.1 X509v3 CRL Distribution Points: Full Name: URI:http://r10.c.lencr.org/51.crl CT Precertificate SCTs: Signed Certificate Timestamp: Version : v1 (0x0) Log ID : ED:3C:4B:D6:E8:06:C2:A4:A2:00:57:DB:CB:24:E2:38: 01:DF:51:2F:ED:C4:86:C5:70:0F:20:DD:B7:3E:3F:E0 Timestamp : Jun 20 10:13:12.240 2025 GMT Extensions: none Signature : ecdsa-with-SHA256 30:45:02:20:65:DB:29:2D:FD:70:BE:01:14:66:55:61: 4F:F9:F4:1B:4C:87:92:2E:43:0F:F5:F4:CA:39:50:E5: 87:72:FC:B4:02:21:00:D2:F5:D3:79:3C:68:A2:52:77: E2:5D:80:DE:5B:32:D2:3C:17:BB:03:43:DE:4E:F9:4F: 79:C6:72:B1:9D:41:CC Signed Certificate Timestamp: Version : v1 (0x0) Log ID : 0D:E1:F2:30:2B:D3:0D:C1:40:62:12:09:EA:55:2E:FC: 47:74:7C:B1:D7:E9:30:EF:0E:42:1E:B4:7E:4E:AA:34 Timestamp : Jun 20 10:13:14.253 2025 GMT Extensions: none Signature : ecdsa-with-SHA256 30:44:02:20:42:7F:17:F0:44:9C:D2:FE:54:62:70:38: 12:41:54:66:E4:B3:24:D1:67:5C:50:D0:19:CC:24:6A: EB:EC:8B:E8:02:20:05:78:59:A0:8B:30:70:0D:BD:40: 13:19:AB:9D:70:E7:D3:E9:15:62:47:C5:7A:FF:86:00: D2:24:59:BB:42:DF Signature Algorithm: sha256WithRSAEncryption Signature Value: c6:3f:10:6b:0a:af:4b:c1:29:ed:28:4a:db:7f:50:d6:1b:ec: 5a:ce:9a:bd:74:05:1c:6d:99:b8:4a:cd:ae:64:56:b3:db:5e: 7d:02:89:a9:26:3e:72:02:66:6e:df:8c:ca:cd:b2:a1:cf:a8: f9:0a:cc:36:33:e4:77:5f:f5:ae:59:de:2c:24:ef:94:b3:2b: 22:18:39:9a:36:bb:3b:95:cb:0c:11:73:d9:4d:6a:66:df:36: 9f:e9:aa:94:57:cf:d7:b3:10:c5:0b:93:a1:8b:50:59:e1:4d: 4d:1d:06:ca:97:48:14:84:0a:33:0e:c9:ee:17:26:00:44:72: f7:25:a8:1b:99:06:7d:1d:ca:00:f9:b8:76:9f:e2:8e:8c:21: 78:fb:d2:fb:f0:9c:d8:0d:2b:a8:a0:c1:0e:45:dd:45:18:23: 07:a8:92:cd:ce:60:c0:32:60:13:c0:ae:a4:e1:dd:5f:36:2a: ff:02:85:82:25:51:17:f4:f2:a8:7e:92:e0:85:72:44:57:df: eb:5d:0a:6e:62:a3:90:82:89:07:bd:c4:3b:79:45:29:38:f7: a9:c5:72:fe:05:d2:1a:9a:b1:bd:ff:5e:fa:bc:06:9b:a0:f1: 25:ba:7f:8f:5d:2b:b4:f7:d0:34:15:ab:f1:cc:1b:df:d0:ed: 4b:c8:9c:9c -----BEGIN CERTIFICATE----- MIIGLTCCBRWgAwIBAgISBdjHyWKrutlMv0/iqzNc8XiyMA0GCSqGSIb3DQEBCwUA MDMxCzAJBgNVBAYTAlVTMRYwFAYDVQQKEw1MZXQncyBFbmNyeXB0MQwwCgYDVQQD EwNSMTAwHhcNMjUwNjIwMDkxNDQxWhcNMjUwOTE4MDkxNDQwWjAkMSIwIAYDVQQD DBkqLmc2ZnlpYXEubW9uZ29kYi1kZXYubmV0MIICIjANBgkqhkiG9w0BAQEFAAOC Ag8AMIICCgKCAgEAtWmLr0uooO/WE3jE8Zvvoo6TjQvCR8uhl9gxA8Yr2zZQXSaG 1VToSs/BhneDAmS/Q/hn9XCb+LxlRzEadgdktzujYAW4qKzXzhaqTplkgtgYR6ir TZVPmJbz5M9TSoP7WJhCUgl7z+wDtWQx7HjtXQvaiO9GbnMpzkkd/TTtJPHLqEaR 
lOc6Vfsktf4EBFg7mK02AtdG/uIb9XLYwJyWn83k4cLMkIxmXhAGGt1KsIfRm3Bz WEK7/oiKdzGFe1KN5y5oSZbkU2/6+kqRPSjHeGRe5rYdR8JrmtFS7+k1D6/O+E+k Mqhc9Q05lZiFMxTYsjzRID+rRGbTGX1OWgCGUJQz1QVHn/YVCSzxWl8PvAcVR2GF 1QrCUUio0W8KJI3+9e9/RrXKaecc+9+bSJCHKjNVBIYzloq1bR4RFUdWYCgqf92k RBRTp8uF3kXXCiYpNRCYghxykS/nA8YjMKLpYed0bJadUVqEMr5ixdn5pWwB8spX oMbgnoktqT+DkiP/i7E3sipx0nU3iihxxEFC/fFZoX3NUXFSFR58KFaC1uOOxQ3Q QeInOmzTLBWSx/jPJincn/cvrzJjEYlsTn4hCJkySKHb5fyV1o3d6RUDQuGUIE+6 7w6s2ZnRmQELdC34ihjbdTJpXPRjCOq4Qk8UqjA8sMei/2doqPG8OvBS7+MCAwEA AaOCAkgwggJEMA4GA1UdDwEB/wQEAwIFoDAdBgNVHSUEFjAUBggrBgEFBQcDAQYI KwYBBQUHAwIwDAYDVR0TAQH/BAIwADAdBgNVHQ4EFgQUsjFgDhRfXtomS+rZfPSv 7skiQX4wHwYDVR0jBBgwFoAUu7zDR6XkvKnGw6RyDBCNojXhyOgwMwYIKwYBBQUH AQEEJzAlMCMGCCsGAQUFBzAChhdodHRwOi8vcjEwLmkubGVuY3Iub3JnLzBEBgNV HREEPTA7gh4qLmc2ZnlpYXEubWVzaC5tb25nb2RiLWRldi5uZXSCGSouZzZmeWlh cS5tb25nb2RiLWRldi5uZXQwEwYDVR0gBAwwCjAIBgZngQwBAgEwLgYDVR0fBCcw JTAjoCGgH4YdaHR0cDovL3IxMC5jLmxlbmNyLm9yZy81MS5jcmwwggEDBgorBgEE AdZ5AgQCBIH0BIHxAO8AdgDtPEvW6AbCpKIAV9vLJOI4Ad9RL+3EhsVwDyDdtz4/ 4AAAAZeM0/uwAAAEAwBHMEUCIGXbKS39cL4BFGZVYU/59BtMh5IuQw/19Mo5UOWH cvy0AiEA0vXTeTxoolJ34l2A3lsy0jwXuwND3k75T3nGcrGdQcwAdQAN4fIwK9MN wUBiEgnqVS78R3R8sdfpMO8OQh60fk6qNAAAAZeM1AONAAAEAwBGMEQCIEJ/F/BE nNL+VGJwOBJBVGbksyTRZ1xQ0BnMJGrr7IvoAiAFeFmgizBwDb1AExmrnXDn0+kV YkfFev+GANIkWbtC3zANBgkqhkiG9w0BAQsFAAOCAQEAxj8QawqvS8Ep7ShK239Q 1hvsWs6avXQFHG2ZuErNrmRWs9tefQKJqSY+cgJmbt+Mys2yoc+o+QrMNjPkd1/1 rlneLCTvlLMrIhg5mja7O5XLDBFz2U1qZt82n+mqlFfP17MQxQuToYtQWeFNTR0G ypdIFIQKMw7J7hcmAERy9yWoG5kGfR3KAPm4dp/ijowhePvS+/Cc2A0rqKDBDkXd RRgjB6iSzc5gwDJgE8CupOHdXzYq/wKFgiVRF/TyqH6S4IVyRFff610KbmKjkIKJ B73EO3lFKTj3qcVy/gXSGpqxvf9e+rwGm6DxJbp/j10rtPfQNBWr8cwb39DtS8ic nA== -----END CERTIFICATE-----mongo-ruby-driver-2.21.3/spec/support/certificates/ca.crt000066400000000000000000000102461505113246500234030ustar00rootroot00000000000000Certificate: Data: Version: 3 (0x2) Serial Number: 210471 (0x33627) Signature Algorithm: sha256WithRSAEncryption Issuer: CN = Ruby Driver CA, OU = Drivers, O = MongoDB, L = New York City, ST = New York, C = US Validity Not Before: Feb 14 20:57:50 2019 GMT Not After : Feb 14 20:57:50 2039 GMT Subject: CN = Ruby Driver CA, OU = Drivers, O = MongoDB, L = New York City, ST = New York, C = US Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public-Key: (2048 bit) Modulus: 00:96:71:17:e8:aa:87:dc:16:8e:cb:90:4c:2c:61: 11:d1:1d:9d:b8:04:75:18:8a:f1:41:37:2e:06:e6: cb:67:2c:16:f3:24:f4:53:02:33:06:1c:6e:e7:7e: 83:14:44:a4:43:b6:5d:f1:4d:68:e7:8f:fe:4c:f7: ca:01:e5:d2:c1:2b:a5:93:2c:cd:12:58:c3:e1:6f: b2:31:c6:05:44:5b:99:61:99:f5:06:d0:a3:ad:de: 8f:a2:73:a1:46:94:30:e7:f7:4b:5d:fb:34:76:7e: 87:a5:26:89:0e:f9:8a:e7:12:5b:ff:11:71:e4:dd: 87:2d:e0:a9:26:a3:1b:7d:c4:00:b8:11:3a:05:f7: 00:f6:3b:80:7d:1b:0c:a3:38:42:0b:a2:17:e4:4a: c8:00:09:c8:a0:ad:d0:73:12:66:60:3d:ce:41:07: 56:11:e5:06:9a:af:9b:ec:29:65:b6:56:b1:2a:b3: b2:2d:10:c4:75:05:eb:1d:cb:c4:b4:2d:8f:e9:08: 3a:6d:67:e3:0a:81:6a:d5:97:9d:a0:08:f2:70:1c: 9d:9e:4b:e3:9b:42:4d:02:91:93:b8:bf:e7:e9:69: 7e:ef:ab:fc:a6:6a:69:35:37:ee:d9:b7:6f:c5:12: 38:93:4f:09:ea:84:f4:21:df:5a:50:e0:89:c8:da: 94:e1 Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Basic Constraints: CA:TRUE X509v3 Subject Alternative Name: DNS:localhost, IP Address:127.0.0.1 Signature Algorithm: sha256WithRSAEncryption 40:d9:19:82:d2:54:f5:eb:d5:f9:e1:85:b1:38:eb:d3:60:c2: be:b7:7c:0a:59:90:0f:00:30:09:c9:7e:e1:83:7d:ce:d2:d6: 28:e8:21:3e:4e:ea:ee:47:eb:89:c0:e4:13:72:51:d2:3c:48: 06:06:86:51:55:da:24:0f:86:fa:1f:27:d6:98:58:ef:13:3f: 
8f:2b:57:05:ad:d1:40:99:8f:35:2d:f7:13:9e:19:a5:1a:23: 5e:29:28:b8:cb:e4:7c:7a:2f:81:7f:1f:72:2f:2c:d2:a5:cc: f1:fe:83:45:30:8d:23:d0:42:a5:f0:9d:e9:02:b5:09:ff:05: 72:af:00:ea:8b:38:41:88:3a:3c:75:6e:8b:5e:f3:b0:30:d3: fb:ff:6f:4e:68:62:2a:30:6b:3e:06:3f:a2:a6:02:91:f1:f5: 5d:31:e7:f4:f0:07:9d:a6:1f:04:fa:23:7f:1e:d3:d3:30:d1: 3d:55:46:d8:2f:da:4b:fc:4d:d2:93:0a:51:bf:78:e4:07:3f: 15:77:7a:2b:20:81:54:9a:9f:21:09:86:47:81:85:dc:e4:50: 37:34:18:b0:43:91:2a:a2:9c:97:fe:a2:1a:02:91:6d:71:b3: 65:e1:c7:00:17:d5:26:d9:69:17:3b:ec:e1:5f:77:e8:19:4b: a3:8c:2a:e0 -----BEGIN CERTIFICATE----- MIIDkzCCAnugAwIBAgIDAzYnMA0GCSqGSIb3DQEBCwUAMHUxFzAVBgNVBAMTDlJ1 YnkgRHJpdmVyIENBMRAwDgYDVQQLEwdEcml2ZXJzMRAwDgYDVQQKEwdNb25nb0RC MRYwFAYDVQQHEw1OZXcgWW9yayBDaXR5MREwDwYDVQQIEwhOZXcgWW9yazELMAkG A1UEBhMCVVMwHhcNMTkwMjE0MjA1NzUwWhcNMzkwMjE0MjA1NzUwWjB1MRcwFQYD VQQDEw5SdWJ5IERyaXZlciBDQTEQMA4GA1UECxMHRHJpdmVyczEQMA4GA1UEChMH TW9uZ29EQjEWMBQGA1UEBxMNTmV3IFlvcmsgQ2l0eTERMA8GA1UECBMITmV3IFlv cmsxCzAJBgNVBAYTAlVTMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA lnEX6KqH3BaOy5BMLGER0R2duAR1GIrxQTcuBubLZywW8yT0UwIzBhxu536DFESk Q7Zd8U1o54/+TPfKAeXSwSulkyzNEljD4W+yMcYFRFuZYZn1BtCjrd6PonOhRpQw 5/dLXfs0dn6HpSaJDvmK5xJb/xFx5N2HLeCpJqMbfcQAuBE6BfcA9juAfRsMozhC C6IX5ErIAAnIoK3QcxJmYD3OQQdWEeUGmq+b7ClltlaxKrOyLRDEdQXrHcvEtC2P 6Qg6bWfjCoFq1ZedoAjycBydnkvjm0JNApGTuL/n6Wl+76v8pmppNTfu2bdvxRI4 k08J6oT0Id9aUOCJyNqU4QIDAQABoywwKjAMBgNVHRMEBTADAQH/MBoGA1UdEQQT MBGCCWxvY2FsaG9zdIcEfwAAATANBgkqhkiG9w0BAQsFAAOCAQEAQNkZgtJU9evV +eGFsTjr02DCvrd8ClmQDwAwCcl+4YN9ztLWKOghPk7q7kfricDkE3JR0jxIBgaG UVXaJA+G+h8n1phY7xM/jytXBa3RQJmPNS33E54ZpRojXikouMvkfHovgX8fci8s 0qXM8f6DRTCNI9BCpfCd6QK1Cf8Fcq8A6os4QYg6PHVui17zsDDT+/9vTmhiKjBr PgY/oqYCkfH1XTHn9PAHnaYfBPojfx7T0zDRPVVG2C/aS/xN0pMKUb945Ac/FXd6 KyCBVJqfIQmGR4GF3ORQNzQYsEORKqKcl/6iGgKRbXGzZeHHABfVJtlpFzvs4V93 6BlLo4wq4A== -----END CERTIFICATE----- mongo-ruby-driver-2.21.3/spec/support/certificates/client-encrypted.key000066400000000000000000000033461505113246500262740ustar00rootroot00000000000000-----BEGIN RSA PRIVATE KEY----- Proc-Type: 4,ENCRYPTED DEK-Info: AES-256-CBC,1F48315ACA40642785914C510FCE7477 BX1IVFwM3hWjsLQ7fztKwjifJnLET4dQk0K9D/z2grsWSNoPhRkAj6mS2OqYceWI 6fVxAJ1Wkyzxvc0aUhZkyMCTz7eSZyB2nqJcrBWagN74cVtr541LjF9Tg80ZgGK9 SVl+yF8ApGLmaSeBqMIPu454TMHPmJUl8xFJQ+JxeGZyZiA/oYtahQEmmnDUHG/6 hQKyPMuJsTn+15KMpv0KJLDS1tKqUZRaxrb5scGTqqNa9Zs4WGWrX47tGkCXo0f+ KPPxLgq4emh52lhiEFnf1oOICw/2PGpof9ywzerwHkmJ1ggGieekQS803VISVwkg rvQihnmw4PQ63CdgUlUGjraj2Uo4N2+80lKs/B4vz5GqeHY8vkL0uovC41pcuerL zwabRVQKdA4iAzal2ln5cS9pXReSI4M3SKNi/xhmyEAukkgs4P17f34l0Ju2PrAR URhyb1Me2q7XydzMPCqGin70gWmv37CupryEilWAbim2tsLouk9H46lWrtHeixGS ofIHz7qHhEi51FBJdhG8EGKiu4LtbYyWYxONIw/IDoO9JY0TdIxPb2Fyyje9yTfb GI6e9/R49eXZVc5FlVhIfdaMFpDNRz9B2x5Jy+4VO+I0CVYg/AjDdd8gZ5Wg2xHz QnSkJT8sXnohIyUnBFYa1aA8ZuLwZuNzFKZJ3NhOaLmqrK4k0iOB1VjMOuLgFnwr A0uFI3zuDBAMEVnXX529gqxyEErGKqnHpKi8Ybim6RfwOIYi+Pjq5XPVI7rBAg73 59464dMtOkmZyHHhGTjxrjLWBgCALWQJbLp+uUVygAV17KlgWEmlpdk1V8G/wdjZ qUCf0czJ/KDBGxY3qFPdXcIgjF6Jh7QZ4PnM8BOhCDknYjZFirAVSTSr8LXUPIGt UJicODvbtcJgC9aKcRUQtMQQqffCHS87EIKBODzh27//SrD/naL+Nv8jSwuFHl4r tDRYVms7uua2+IGV3r4CUU6euT5LTdT7vSjZNVRT1UmhYSRkf43QYF0AzcBQrRhQ 3cWlkLolJOuG8VMhqQCfUINcitJEpgBGbunJBWmTetjydeBycz0S2akVDwWb6cvA FmfN8j/nNf2CWZ8r10g3PUzw2H0b9y7t0klcrgjJudu09OoMeyPZgUsc7Y60fAQC clDLSQWwO9IDBhYlAAut5p/y97R02RHfJXSWOj0eHy3m9+E/ldBpq43KDP+1EYb8 6TGwpYQhSgoPJbDDIgtx1EYQ7QRCTUJ6KVjbYfz9M3WJg6iS4G0bKRBbbYRDCO6h gZm40mCOVO2gsnMxu/QVuU7GIIWej7zYZJ6aQRvtwXJbI3vMYQeW1sqSKBYpgRoY dANNTeeIsz9PTGdEIr/aZp7SCSoIoE8i6zC4I2l22niUy1HlzqQ9ZgXD7ujDyBeq 
b/65HR4SB5XjKbQiPqvstSbBOg7pD5od+edlJyakcAF8+jW1spoLu80AIQA+HnZw ZAKL8t7rTejKECrouqeImAKXaJOq2vdDBlPMtU/Wer2hIY/miacG3D+41nOM6i+8 gnQKe+4WxZkEf1r8j2iQx8eb0ehtM7ZPun3iwuBdXKv53mRGhgDFidYp4R21NTZj -----END RSA PRIVATE KEY----- mongo-ruby-driver-2.21.3/spec/support/certificates/client-int.crt000066400000000000000000000104131505113246500250620ustar00rootroot00000000000000Certificate: Data: Version: 3 (0x2) Serial Number: 148609 (0x24481) Signature Algorithm: sha256WithRSAEncryption Issuer: CN = Ruby Driver CA, OU = Drivers, O = MongoDB, L = New York City, ST = New York, C = US Validity Not Before: May 30 22:24:53 2019 GMT Not After : May 30 22:24:53 2039 GMT Subject: CN = intermediate-client, OU = Ruby Driver, O = MongoDB, L = New York City, ST = New York, C = US Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public-Key: (2048 bit) Modulus: 00:a0:8e:fd:8b:92:49:40:d4:e6:f0:d9:0f:e8:b1: 20:26:79:1c:d2:47:5b:dc:60:6d:5b:4e:e2:ee:b6: 93:57:28:90:c0:3e:0e:8b:5f:d7:c8:90:a1:69:02: 81:8b:12:50:de:9f:5a:9d:47:cc:83:73:6a:a9:17: 36:44:04:24:e2:57:bd:7f:df:51:a5:f6:34:00:8d: 40:05:fa:54:fa:83:7a:9d:11:9f:51:ad:fd:a4:c4: b8:40:04:9f:8a:bd:e6:cd:4f:23:86:bd:25:21:25: 01:ef:38:49:90:d4:f5:95:d2:1f:46:fe:61:96:0f: 9c:86:77:c8:bc:a5:c2:4b:34:d5:9c:15:2c:57:7a: 48:a0:a1:f6:6c:24:90:fc:cd:3b:19:e5:41:97:ef: 86:6d:f3:7b:ab:ea:42:cb:82:4d:81:8a:19:64:24: 8c:ea:0a:45:54:be:91:67:90:a7:43:1a:30:48:35: 98:f6:ce:cd:56:f2:6e:ec:50:5a:e3:e0:e1:3e:53: 85:7e:ba:b2:01:ba:da:94:9e:17:e0:3c:70:bb:b7: 85:d6:e5:de:fd:2a:78:24:6e:91:bf:82:94:e4:44: 4e:b2:ee:d1:c0:25:c9:2c:2c:c0:7b:1f:cb:cf:79: 1e:b2:96:a9:c6:c9:3a:e4:1d:37:06:07:17:65:6f: 85:cb Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Basic Constraints: CA:TRUE X509v3 Key Usage: Certificate Sign X509v3 Subject Alternative Name: DNS:localhost, IP Address:127.0.0.1 Signature Algorithm: sha256WithRSAEncryption 60:09:93:0a:44:32:28:84:ab:a2:30:91:02:4a:82:30:df:90: a0:11:76:44:94:cc:f5:b3:53:69:dc:cf:97:dd:70:fd:99:fb: 0a:0c:5e:f2:7b:ee:4e:88:09:42:ef:ab:ff:18:ac:85:7a:6d: 13:47:e4:ea:63:00:a3:92:29:22:e3:08:6c:c1:07:68:52:0a: 0e:a5:e2:3c:a9:ec:f6:94:8e:72:f3:2c:a2:89:6f:a9:0c:42: 49:ce:23:4a:aa:8d:0b:70:88:99:38:92:58:60:f7:8c:96:16: 42:a8:d8:8b:92:c9:8f:c1:dd:49:2e:ff:68:bd:fa:2c:2f:93: f4:11:67:2b:c7:f9:4b:6f:85:b3:37:bd:08:83:40:94:6a:44: c2:d9:e9:91:47:70:79:6c:4d:23:20:73:0f:74:9f:33:7d:9d: 3e:74:b1:e8:55:0a:c5:2e:59:b4:9a:9d:95:82:cd:27:5f:63: b5:00:03:61:58:54:e8:5b:42:5d:f7:03:5d:e4:b7:b0:20:f8: 0b:3c:0b:b8:fb:68:36:ef:be:67:27:c1:b1:ca:ff:09:9a:77: 1d:97:69:b3:33:ef:bf:4e:ae:0f:78:9f:a8:73:10:77:b5:a9: e7:41:12:82:e1:25:94:cb:67:82:56:66:4d:00:d3:3a:7c:48: 4b:50:40:cc -----BEGIN CERTIFICATE----- MIIDqTCCApGgAwIBAgIDAkSBMA0GCSqGSIb3DQEBCwUAMHUxFzAVBgNVBAMTDlJ1 YnkgRHJpdmVyIENBMRAwDgYDVQQLEwdEcml2ZXJzMRAwDgYDVQQKEwdNb25nb0RC MRYwFAYDVQQHEw1OZXcgWW9yayBDaXR5MREwDwYDVQQIEwhOZXcgWW9yazELMAkG A1UEBhMCVVMwHhcNMTkwNTMwMjIyNDUzWhcNMzkwNTMwMjIyNDUzWjB+MRwwGgYD VQQDExNpbnRlcm1lZGlhdGUtY2xpZW50MRQwEgYDVQQLEwtSdWJ5IERyaXZlcjEQ MA4GA1UEChMHTW9uZ29EQjEWMBQGA1UEBxMNTmV3IFlvcmsgQ2l0eTERMA8GA1UE CBMITmV3IFlvcmsxCzAJBgNVBAYTAlVTMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8A MIIBCgKCAQEAoI79i5JJQNTm8NkP6LEgJnkc0kdb3GBtW07i7raTVyiQwD4Oi1/X yJChaQKBixJQ3p9anUfMg3NqqRc2RAQk4le9f99RpfY0AI1ABfpU+oN6nRGfUa39 pMS4QASfir3mzU8jhr0lISUB7zhJkNT1ldIfRv5hlg+chnfIvKXCSzTVnBUsV3pI oKH2bCSQ/M07GeVBl++GbfN7q+pCy4JNgYoZZCSM6gpFVL6RZ5CnQxowSDWY9s7N VvJu7FBa4+DhPlOFfrqyAbralJ4X4Dxwu7eF1uXe/Sp4JG6Rv4KU5EROsu7RwCXJ LCzAex/Lz3kespapxsk65B03BgcXZW+FywIDAQABozkwNzAMBgNVHRMEBTADAQH/ 
MAsGA1UdDwQEAwICBDAaBgNVHREEEzARgglsb2NhbGhvc3SHBH8AAAEwDQYJKoZI hvcNAQELBQADggEBAGAJkwpEMiiEq6IwkQJKgjDfkKARdkSUzPWzU2ncz5fdcP2Z +woMXvJ77k6ICULvq/8YrIV6bRNH5OpjAKOSKSLjCGzBB2hSCg6l4jyp7PaUjnLz LKKJb6kMQknOI0qqjQtwiJk4klhg94yWFkKo2IuSyY/B3Uku/2i9+iwvk/QRZyvH +UtvhbM3vQiDQJRqRMLZ6ZFHcHlsTSMgcw90nzN9nT50sehVCsUuWbSanZWCzSdf Y7UAA2FYVOhbQl33A13kt7Ag+As8C7j7aDbvvmcnwbHK/wmadx2XabMz779Org94 n6hzEHe1qedBEoLhJZTLZ4JWZk0A0zp8SEtQQMw= -----END CERTIFICATE----- mongo-ruby-driver-2.21.3/spec/support/certificates/client-second-level-bundle.pem000066400000000000000000000240011505113246500301060ustar00rootroot00000000000000Certificate: Data: Version: 3 (0x2) Serial Number: 403728 (0x62910) Signature Algorithm: sha256WithRSAEncryption Issuer: CN = intermediate-client, OU = Ruby Driver, O = MongoDB, L = New York City, ST = New York, C = US Validity Not Before: May 30 22:28:04 2019 GMT Not After : May 30 22:28:04 2039 GMT Subject: CN = localhost, OU = Ruby Driver, O = MongoDB, L = New York City, ST = New York, C = US Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public-Key: (2048 bit) Modulus: 00:93:97:c2:b6:1b:ca:ba:e7:c4:64:5a:a9:f7:1f: 32:ba:6d:83:fb:71:83:86:a1:d8:62:65:ba:bc:f0: ac:3c:c9:bd:85:79:03:72:1f:5d:fc:4e:ae:3d:85: 2d:6b:da:4c:c1:b3:dc:c3:c3:c1:b4:9d:f2:8e:2f: 97:68:31:44:2b:b9:c9:8c:8b:f7:89:e1:f0:d6:0b: 23:87:c6:5d:44:f3:9b:3d:c4:70:e2:03:c2:f2:0e: c6:b5:60:f7:28:44:71:d5:3e:9e:6c:5e:a7:1a:29: f0:9b:21:e3:be:b3:e0:0f:0d:c4:12:97:46:12:0b: 4f:84:61:79:65:3f:b2:45:90:e9:62:36:e7:9c:95: 00:93:79:69:b9:5c:b8:e6:37:ce:30:72:55:d9:19: 5f:6c:1a:9f:4d:af:9d:f2:ec:28:62:82:cf:27:3b: 83:0d:12:39:64:04:4e:68:84:8e:50:d9:52:83:db: df:50:69:5a:83:0e:be:57:35:cc:c9:5b:bb:25:7b: 6c:db:39:be:b7:76:db:b7:fc:3c:29:68:2e:2f:f3: 06:90:ff:37:c6:29:3c:fd:90:36:c4:44:87:b3:eb: 40:c4:fa:83:5e:e5:23:b3:13:bc:f6:89:7c:5e:bb: 18:0f:f3:d0:18:62:f2:0d:3a:72:9c:a3:22:ef:8c: 95:99 Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Subject Alternative Name: DNS:localhost, IP Address:127.0.0.1 Signature Algorithm: sha256WithRSAEncryption 53:66:80:0a:e4:a2:ec:d5:9f:af:f4:23:15:a1:82:27:e5:66: a9:7f:55:e3:12:0d:ed:8d:09:0a:d9:ed:37:d6:7b:58:ce:7e: 85:72:f2:d4:9f:4e:bc:e4:27:fe:90:6a:4c:a9:49:74:50:5e: 2b:5c:16:50:d6:d2:6f:c0:39:d6:fa:03:74:5e:79:e0:bd:eb: ac:8d:11:86:9e:fd:06:22:c2:c0:e2:33:c0:5a:be:d0:4e:8c: 8e:22:0f:8c:c1:19:56:3e:74:21:8e:7f:54:b5:cd:73:7b:70: 34:2d:e4:45:df:c4:b1:a9:84:ac:26:a8:cd:7f:0f:59:7b:d9: a4:5e:65:02:f6:be:11:b7:ee:f4:9e:b9:b8:1f:1c:94:da:0e: 1c:0e:3d:c0:e4:40:e7:1d:98:5c:df:22:9f:82:21:c3:a0:52: 1e:f4:e0:2d:07:96:f6:39:32:83:4e:88:0e:66:e2:11:18:b7: bc:30:e5:6d:4f:76:05:bf:ed:ff:98:b1:06:64:94:46:e5:46: d5:0e:b7:9a:c6:91:c5:29:78:83:a3:d1:40:c2:de:6e:ad:67: 6b:fd:0f:0e:0c:b2:d5:6f:2c:19:d2:0d:83:5b:c7:22:ba:8a: 35:2a:58:39:8b:87:e8:76:b5:3b:38:1e:7c:80:47:5c:73:be: 83:96:16:65 -----BEGIN CERTIFICATE----- MIIDjTCCAnWgAwIBAgIDBikQMA0GCSqGSIb3DQEBCwUAMH4xHDAaBgNVBAMTE2lu dGVybWVkaWF0ZS1jbGllbnQxFDASBgNVBAsTC1J1YnkgRHJpdmVyMRAwDgYDVQQK EwdNb25nb0RCMRYwFAYDVQQHEw1OZXcgWW9yayBDaXR5MREwDwYDVQQIEwhOZXcg WW9yazELMAkGA1UEBhMCVVMwHhcNMTkwNTMwMjIyODA0WhcNMzkwNTMwMjIyODA0 WjB0MRIwEAYDVQQDEwlsb2NhbGhvc3QxFDASBgNVBAsTC1J1YnkgRHJpdmVyMRAw DgYDVQQKEwdNb25nb0RCMRYwFAYDVQQHEw1OZXcgWW9yayBDaXR5MREwDwYDVQQI EwhOZXcgWW9yazELMAkGA1UEBhMCVVMwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAw ggEKAoIBAQCTl8K2G8q658RkWqn3HzK6bYP7cYOGodhiZbq88Kw8yb2FeQNyH138 Tq49hS1r2kzBs9zDw8G0nfKOL5doMUQrucmMi/eJ4fDWCyOHxl1E85s9xHDiA8Ly Dsa1YPcoRHHVPp5sXqcaKfCbIeO+s+APDcQSl0YSC0+EYXllP7JFkOliNueclQCT 
eWm5XLjmN84wclXZGV9sGp9Nr53y7Chigs8nO4MNEjlkBE5ohI5Q2VKD299QaVqD Dr5XNczJW7sle2zbOb63dtu3/DwpaC4v8waQ/zfGKTz9kDbERIez60DE+oNe5SOz E7z2iXxeuxgP89AYYvINOnKcoyLvjJWZAgMBAAGjHjAcMBoGA1UdEQQTMBGCCWxv Y2FsaG9zdIcEfwAAATANBgkqhkiG9w0BAQsFAAOCAQEAU2aACuSi7NWfr/QjFaGC J+VmqX9V4xIN7Y0JCtntN9Z7WM5+hXLy1J9OvOQn/pBqTKlJdFBeK1wWUNbSb8A5 1voDdF554L3rrI0Rhp79BiLCwOIzwFq+0E6MjiIPjMEZVj50IY5/VLXNc3twNC3k Rd/EsamErCaozX8PWXvZpF5lAva+Ebfu9J65uB8clNoOHA49wORA5x2YXN8in4Ih w6BSHvTgLQeW9jkyg06IDmbiERi3vDDlbU92Bb/t/5ixBmSURuVG1Q63msaRxSl4 g6PRQMLebq1na/0PDgyy1W8sGdINg1vHIrqKNSpYOYuH6Ha1OzgefIBHXHO+g5YW ZQ== -----END CERTIFICATE----- -----BEGIN RSA PRIVATE KEY----- MIIEpQIBAAKCAQEAk5fCthvKuufEZFqp9x8yum2D+3GDhqHYYmW6vPCsPMm9hXkD ch9d/E6uPYUta9pMwbPcw8PBtJ3yji+XaDFEK7nJjIv3ieHw1gsjh8ZdRPObPcRw 4gPC8g7GtWD3KERx1T6ebF6nGinwmyHjvrPgDw3EEpdGEgtPhGF5ZT+yRZDpYjbn nJUAk3lpuVy45jfOMHJV2RlfbBqfTa+d8uwoYoLPJzuDDRI5ZAROaISOUNlSg9vf UGlagw6+VzXMyVu7JXts2zm+t3bbt/w8KWguL/MGkP83xik8/ZA2xESHs+tAxPqD XuUjsxO89ol8XrsYD/PQGGLyDTpynKMi74yVmQIDAQABAoIBAQCPEqxzsFlD+exN g/4DSsD4K7Wnh5CCcF28dPUitwOgIciQnJCUjoejT/pkNLelN4b0txCozRj3p607 3DKflDKLWJxinEQn61h1hXK56bb8YlH4/HaZAiB2WZCSvx6YcFEQ8JTOZKsEF+ff 2mhVszTeIvARPYd1cnVw1LTDS43bFHbe0lnj/rxsX62IYfaTJjfDa3n8cXPvrP1Y Kkc+cV11FqSfPM0zMfE2ORNjnqEkKNb1eE9gIHSQ3nForTCASZR7gXKYTqJG6rJd XFluDztViR5ieNeh7rMBVadPTTpt+pwtBXdKuC9+OUEe6zHnsveIFTlq/mDQuXoW qaJgtJYhAoGBANF3oss8thvLVXAL6hdupz1htkv46LEkx8anZL3bKu/qlu5qZ0M3 sUAXZoKV1DF+LOxt5h+ZszDz1NtXh3zV/gTfNPDEipzpInEHIUWz+jZkGQ0h3kLb H184uq3sT3uN+pImyhHHU9DhsUg/E4JxgtNCVXFyOT6B4TEQL3q60c4tAoGBALRh VXKjfBYdm5cQqpsgx7wzHV8qmlXM4n9EwPHeUpORUMOQWD+8n9umHMtcXzg7JxyJ UnNFRWtr/s/QOdxDXofr+PJoD5DfFLQoe6TAx7/tS5XCfv2owisCCn0lVt70mw+K Bs8HjVl3D/LZqaohCW6PyRftySMQGS6oSAEbADSdAoGBAJ08S/R5t0233YOFPgym 4F1AOuJejvViYaAqSYIGwf1kQDXpo2gepywwJKADrkwUpc44VNUFwDAP6IlZ8/du fwbTMl9FebN9gYAM1RoIlts7Wl60PK485BjLcb5as/NQSXZqLacY1D7pG/Xae1+g q46/rXnCP1w/jHYS60EaeaFlAoGAacwZGRcohbQx+QXOexRb8lesp4/OW/rC8lC6 NmLm3iTCUSINkLyqqmMgympQcyPGyecFVBTSJbJ/DxabiUR+YoyWRF+imZ8ufois FLL5temRhrJAV7kuwZj92+8Vp8miVRfo7G8Kienakd72s5GS/aUaFo3ihk0/5+zN 5tAWa8UCgYEAovak6JyEl8SShNjLhJepF6STMkY0tm15K4kBvYQd9Jn5ubm9tK0I ZuenxJJSsKz/tLKaT4AK92r7lQp9nUFgiH1x4EM138UUihMbK1oPju+jukGMOwvk bkMIiIDRc+G+NURpaNLC3xzeV9/uUND4rJ2RxZvhCGcYlbJRbzoYcM0= -----END RSA PRIVATE KEY----- Certificate: Data: Version: 3 (0x2) Serial Number: 148609 (0x24481) Signature Algorithm: sha256WithRSAEncryption Issuer: CN = Ruby Driver CA, OU = Drivers, O = MongoDB, L = New York City, ST = New York, C = US Validity Not Before: May 30 22:24:53 2019 GMT Not After : May 30 22:24:53 2039 GMT Subject: CN = intermediate-client, OU = Ruby Driver, O = MongoDB, L = New York City, ST = New York, C = US Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public-Key: (2048 bit) Modulus: 00:a0:8e:fd:8b:92:49:40:d4:e6:f0:d9:0f:e8:b1: 20:26:79:1c:d2:47:5b:dc:60:6d:5b:4e:e2:ee:b6: 93:57:28:90:c0:3e:0e:8b:5f:d7:c8:90:a1:69:02: 81:8b:12:50:de:9f:5a:9d:47:cc:83:73:6a:a9:17: 36:44:04:24:e2:57:bd:7f:df:51:a5:f6:34:00:8d: 40:05:fa:54:fa:83:7a:9d:11:9f:51:ad:fd:a4:c4: b8:40:04:9f:8a:bd:e6:cd:4f:23:86:bd:25:21:25: 01:ef:38:49:90:d4:f5:95:d2:1f:46:fe:61:96:0f: 9c:86:77:c8:bc:a5:c2:4b:34:d5:9c:15:2c:57:7a: 48:a0:a1:f6:6c:24:90:fc:cd:3b:19:e5:41:97:ef: 86:6d:f3:7b:ab:ea:42:cb:82:4d:81:8a:19:64:24: 8c:ea:0a:45:54:be:91:67:90:a7:43:1a:30:48:35: 98:f6:ce:cd:56:f2:6e:ec:50:5a:e3:e0:e1:3e:53: 85:7e:ba:b2:01:ba:da:94:9e:17:e0:3c:70:bb:b7: 85:d6:e5:de:fd:2a:78:24:6e:91:bf:82:94:e4:44: 4e:b2:ee:d1:c0:25:c9:2c:2c:c0:7b:1f:cb:cf:79: 
1e:b2:96:a9:c6:c9:3a:e4:1d:37:06:07:17:65:6f: 85:cb Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Basic Constraints: CA:TRUE X509v3 Key Usage: Certificate Sign X509v3 Subject Alternative Name: DNS:localhost, IP Address:127.0.0.1 Signature Algorithm: sha256WithRSAEncryption 60:09:93:0a:44:32:28:84:ab:a2:30:91:02:4a:82:30:df:90: a0:11:76:44:94:cc:f5:b3:53:69:dc:cf:97:dd:70:fd:99:fb: 0a:0c:5e:f2:7b:ee:4e:88:09:42:ef:ab:ff:18:ac:85:7a:6d: 13:47:e4:ea:63:00:a3:92:29:22:e3:08:6c:c1:07:68:52:0a: 0e:a5:e2:3c:a9:ec:f6:94:8e:72:f3:2c:a2:89:6f:a9:0c:42: 49:ce:23:4a:aa:8d:0b:70:88:99:38:92:58:60:f7:8c:96:16: 42:a8:d8:8b:92:c9:8f:c1:dd:49:2e:ff:68:bd:fa:2c:2f:93: f4:11:67:2b:c7:f9:4b:6f:85:b3:37:bd:08:83:40:94:6a:44: c2:d9:e9:91:47:70:79:6c:4d:23:20:73:0f:74:9f:33:7d:9d: 3e:74:b1:e8:55:0a:c5:2e:59:b4:9a:9d:95:82:cd:27:5f:63: b5:00:03:61:58:54:e8:5b:42:5d:f7:03:5d:e4:b7:b0:20:f8: 0b:3c:0b:b8:fb:68:36:ef:be:67:27:c1:b1:ca:ff:09:9a:77: 1d:97:69:b3:33:ef:bf:4e:ae:0f:78:9f:a8:73:10:77:b5:a9: e7:41:12:82:e1:25:94:cb:67:82:56:66:4d:00:d3:3a:7c:48: 4b:50:40:cc -----BEGIN CERTIFICATE----- MIIDqTCCApGgAwIBAgIDAkSBMA0GCSqGSIb3DQEBCwUAMHUxFzAVBgNVBAMTDlJ1 YnkgRHJpdmVyIENBMRAwDgYDVQQLEwdEcml2ZXJzMRAwDgYDVQQKEwdNb25nb0RC MRYwFAYDVQQHEw1OZXcgWW9yayBDaXR5MREwDwYDVQQIEwhOZXcgWW9yazELMAkG A1UEBhMCVVMwHhcNMTkwNTMwMjIyNDUzWhcNMzkwNTMwMjIyNDUzWjB+MRwwGgYD VQQDExNpbnRlcm1lZGlhdGUtY2xpZW50MRQwEgYDVQQLEwtSdWJ5IERyaXZlcjEQ MA4GA1UEChMHTW9uZ29EQjEWMBQGA1UEBxMNTmV3IFlvcmsgQ2l0eTERMA8GA1UE CBMITmV3IFlvcmsxCzAJBgNVBAYTAlVTMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8A MIIBCgKCAQEAoI79i5JJQNTm8NkP6LEgJnkc0kdb3GBtW07i7raTVyiQwD4Oi1/X yJChaQKBixJQ3p9anUfMg3NqqRc2RAQk4le9f99RpfY0AI1ABfpU+oN6nRGfUa39 pMS4QASfir3mzU8jhr0lISUB7zhJkNT1ldIfRv5hlg+chnfIvKXCSzTVnBUsV3pI oKH2bCSQ/M07GeVBl++GbfN7q+pCy4JNgYoZZCSM6gpFVL6RZ5CnQxowSDWY9s7N VvJu7FBa4+DhPlOFfrqyAbralJ4X4Dxwu7eF1uXe/Sp4JG6Rv4KU5EROsu7RwCXJ LCzAex/Lz3kespapxsk65B03BgcXZW+FywIDAQABozkwNzAMBgNVHRMEBTADAQH/ MAsGA1UdDwQEAwICBDAaBgNVHREEEzARgglsb2NhbGhvc3SHBH8AAAEwDQYJKoZI hvcNAQELBQADggEBAGAJkwpEMiiEq6IwkQJKgjDfkKARdkSUzPWzU2ncz5fdcP2Z +woMXvJ77k6ICULvq/8YrIV6bRNH5OpjAKOSKSLjCGzBB2hSCg6l4jyp7PaUjnLz LKKJb6kMQknOI0qqjQtwiJk4klhg94yWFkKo2IuSyY/B3Uku/2i9+iwvk/QRZyvH +UtvhbM3vQiDQJRqRMLZ6ZFHcHlsTSMgcw90nzN9nT50sehVCsUuWbSanZWCzSdf Y7UAA2FYVOhbQl33A13kt7Ag+As8C7j7aDbvvmcnwbHK/wmadx2XabMz779Org94 n6hzEHe1qedBEoLhJZTLZ4JWZk0A0zp8SEtQQMw= -----END CERTIFICATE----- mongo-ruby-driver-2.21.3/spec/support/certificates/client-second-level.crt000066400000000000000000000101471505113246500266540ustar00rootroot00000000000000Certificate: Data: Version: 3 (0x2) Serial Number: 403728 (0x62910) Signature Algorithm: sha256WithRSAEncryption Issuer: CN = intermediate-client, OU = Ruby Driver, O = MongoDB, L = New York City, ST = New York, C = US Validity Not Before: May 30 22:28:04 2019 GMT Not After : May 30 22:28:04 2039 GMT Subject: CN = localhost, OU = Ruby Driver, O = MongoDB, L = New York City, ST = New York, C = US Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public-Key: (2048 bit) Modulus: 00:93:97:c2:b6:1b:ca:ba:e7:c4:64:5a:a9:f7:1f: 32:ba:6d:83:fb:71:83:86:a1:d8:62:65:ba:bc:f0: ac:3c:c9:bd:85:79:03:72:1f:5d:fc:4e:ae:3d:85: 2d:6b:da:4c:c1:b3:dc:c3:c3:c1:b4:9d:f2:8e:2f: 97:68:31:44:2b:b9:c9:8c:8b:f7:89:e1:f0:d6:0b: 23:87:c6:5d:44:f3:9b:3d:c4:70:e2:03:c2:f2:0e: c6:b5:60:f7:28:44:71:d5:3e:9e:6c:5e:a7:1a:29: f0:9b:21:e3:be:b3:e0:0f:0d:c4:12:97:46:12:0b: 4f:84:61:79:65:3f:b2:45:90:e9:62:36:e7:9c:95: 00:93:79:69:b9:5c:b8:e6:37:ce:30:72:55:d9:19: 5f:6c:1a:9f:4d:af:9d:f2:ec:28:62:82:cf:27:3b: 
83:0d:12:39:64:04:4e:68:84:8e:50:d9:52:83:db: df:50:69:5a:83:0e:be:57:35:cc:c9:5b:bb:25:7b: 6c:db:39:be:b7:76:db:b7:fc:3c:29:68:2e:2f:f3: 06:90:ff:37:c6:29:3c:fd:90:36:c4:44:87:b3:eb: 40:c4:fa:83:5e:e5:23:b3:13:bc:f6:89:7c:5e:bb: 18:0f:f3:d0:18:62:f2:0d:3a:72:9c:a3:22:ef:8c: 95:99 Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Subject Alternative Name: DNS:localhost, IP Address:127.0.0.1 Signature Algorithm: sha256WithRSAEncryption 53:66:80:0a:e4:a2:ec:d5:9f:af:f4:23:15:a1:82:27:e5:66: a9:7f:55:e3:12:0d:ed:8d:09:0a:d9:ed:37:d6:7b:58:ce:7e: 85:72:f2:d4:9f:4e:bc:e4:27:fe:90:6a:4c:a9:49:74:50:5e: 2b:5c:16:50:d6:d2:6f:c0:39:d6:fa:03:74:5e:79:e0:bd:eb: ac:8d:11:86:9e:fd:06:22:c2:c0:e2:33:c0:5a:be:d0:4e:8c: 8e:22:0f:8c:c1:19:56:3e:74:21:8e:7f:54:b5:cd:73:7b:70: 34:2d:e4:45:df:c4:b1:a9:84:ac:26:a8:cd:7f:0f:59:7b:d9: a4:5e:65:02:f6:be:11:b7:ee:f4:9e:b9:b8:1f:1c:94:da:0e: 1c:0e:3d:c0:e4:40:e7:1d:98:5c:df:22:9f:82:21:c3:a0:52: 1e:f4:e0:2d:07:96:f6:39:32:83:4e:88:0e:66:e2:11:18:b7: bc:30:e5:6d:4f:76:05:bf:ed:ff:98:b1:06:64:94:46:e5:46: d5:0e:b7:9a:c6:91:c5:29:78:83:a3:d1:40:c2:de:6e:ad:67: 6b:fd:0f:0e:0c:b2:d5:6f:2c:19:d2:0d:83:5b:c7:22:ba:8a: 35:2a:58:39:8b:87:e8:76:b5:3b:38:1e:7c:80:47:5c:73:be: 83:96:16:65 -----BEGIN CERTIFICATE----- MIIDjTCCAnWgAwIBAgIDBikQMA0GCSqGSIb3DQEBCwUAMH4xHDAaBgNVBAMTE2lu dGVybWVkaWF0ZS1jbGllbnQxFDASBgNVBAsTC1J1YnkgRHJpdmVyMRAwDgYDVQQK EwdNb25nb0RCMRYwFAYDVQQHEw1OZXcgWW9yayBDaXR5MREwDwYDVQQIEwhOZXcg WW9yazELMAkGA1UEBhMCVVMwHhcNMTkwNTMwMjIyODA0WhcNMzkwNTMwMjIyODA0 WjB0MRIwEAYDVQQDEwlsb2NhbGhvc3QxFDASBgNVBAsTC1J1YnkgRHJpdmVyMRAw DgYDVQQKEwdNb25nb0RCMRYwFAYDVQQHEw1OZXcgWW9yayBDaXR5MREwDwYDVQQI EwhOZXcgWW9yazELMAkGA1UEBhMCVVMwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAw ggEKAoIBAQCTl8K2G8q658RkWqn3HzK6bYP7cYOGodhiZbq88Kw8yb2FeQNyH138 Tq49hS1r2kzBs9zDw8G0nfKOL5doMUQrucmMi/eJ4fDWCyOHxl1E85s9xHDiA8Ly Dsa1YPcoRHHVPp5sXqcaKfCbIeO+s+APDcQSl0YSC0+EYXllP7JFkOliNueclQCT eWm5XLjmN84wclXZGV9sGp9Nr53y7Chigs8nO4MNEjlkBE5ohI5Q2VKD299QaVqD Dr5XNczJW7sle2zbOb63dtu3/DwpaC4v8waQ/zfGKTz9kDbERIez60DE+oNe5SOz E7z2iXxeuxgP89AYYvINOnKcoyLvjJWZAgMBAAGjHjAcMBoGA1UdEQQTMBGCCWxv Y2FsaG9zdIcEfwAAATANBgkqhkiG9w0BAQsFAAOCAQEAU2aACuSi7NWfr/QjFaGC J+VmqX9V4xIN7Y0JCtntN9Z7WM5+hXLy1J9OvOQn/pBqTKlJdFBeK1wWUNbSb8A5 1voDdF554L3rrI0Rhp79BiLCwOIzwFq+0E6MjiIPjMEZVj50IY5/VLXNc3twNC3k Rd/EsamErCaozX8PWXvZpF5lAva+Ebfu9J65uB8clNoOHA49wORA5x2YXN8in4Ih w6BSHvTgLQeW9jkyg06IDmbiERi3vDDlbU92Bb/t/5ixBmSURuVG1Q63msaRxSl4 g6PRQMLebq1na/0PDgyy1W8sGdINg1vHIrqKNSpYOYuH6Ha1OzgefIBHXHO+g5YW ZQ== -----END CERTIFICATE----- mongo-ruby-driver-2.21.3/spec/support/certificates/client-second-level.key000066400000000000000000000032171505113246500266540ustar00rootroot00000000000000-----BEGIN RSA PRIVATE KEY----- MIIEpQIBAAKCAQEAk5fCthvKuufEZFqp9x8yum2D+3GDhqHYYmW6vPCsPMm9hXkD ch9d/E6uPYUta9pMwbPcw8PBtJ3yji+XaDFEK7nJjIv3ieHw1gsjh8ZdRPObPcRw 4gPC8g7GtWD3KERx1T6ebF6nGinwmyHjvrPgDw3EEpdGEgtPhGF5ZT+yRZDpYjbn nJUAk3lpuVy45jfOMHJV2RlfbBqfTa+d8uwoYoLPJzuDDRI5ZAROaISOUNlSg9vf UGlagw6+VzXMyVu7JXts2zm+t3bbt/w8KWguL/MGkP83xik8/ZA2xESHs+tAxPqD XuUjsxO89ol8XrsYD/PQGGLyDTpynKMi74yVmQIDAQABAoIBAQCPEqxzsFlD+exN g/4DSsD4K7Wnh5CCcF28dPUitwOgIciQnJCUjoejT/pkNLelN4b0txCozRj3p607 3DKflDKLWJxinEQn61h1hXK56bb8YlH4/HaZAiB2WZCSvx6YcFEQ8JTOZKsEF+ff 2mhVszTeIvARPYd1cnVw1LTDS43bFHbe0lnj/rxsX62IYfaTJjfDa3n8cXPvrP1Y Kkc+cV11FqSfPM0zMfE2ORNjnqEkKNb1eE9gIHSQ3nForTCASZR7gXKYTqJG6rJd XFluDztViR5ieNeh7rMBVadPTTpt+pwtBXdKuC9+OUEe6zHnsveIFTlq/mDQuXoW qaJgtJYhAoGBANF3oss8thvLVXAL6hdupz1htkv46LEkx8anZL3bKu/qlu5qZ0M3 sUAXZoKV1DF+LOxt5h+ZszDz1NtXh3zV/gTfNPDEipzpInEHIUWz+jZkGQ0h3kLb 
H184uq3sT3uN+pImyhHHU9DhsUg/E4JxgtNCVXFyOT6B4TEQL3q60c4tAoGBALRh VXKjfBYdm5cQqpsgx7wzHV8qmlXM4n9EwPHeUpORUMOQWD+8n9umHMtcXzg7JxyJ UnNFRWtr/s/QOdxDXofr+PJoD5DfFLQoe6TAx7/tS5XCfv2owisCCn0lVt70mw+K Bs8HjVl3D/LZqaohCW6PyRftySMQGS6oSAEbADSdAoGBAJ08S/R5t0233YOFPgym 4F1AOuJejvViYaAqSYIGwf1kQDXpo2gepywwJKADrkwUpc44VNUFwDAP6IlZ8/du fwbTMl9FebN9gYAM1RoIlts7Wl60PK485BjLcb5as/NQSXZqLacY1D7pG/Xae1+g q46/rXnCP1w/jHYS60EaeaFlAoGAacwZGRcohbQx+QXOexRb8lesp4/OW/rC8lC6 NmLm3iTCUSINkLyqqmMgympQcyPGyecFVBTSJbJ/DxabiUR+YoyWRF+imZ8ufois FLL5temRhrJAV7kuwZj92+8Vp8miVRfo7G8Kienakd72s5GS/aUaFo3ihk0/5+zN 5tAWa8UCgYEAovak6JyEl8SShNjLhJepF6STMkY0tm15K4kBvYQd9Jn5ubm9tK0I ZuenxJJSsKz/tLKaT4AK92r7lQp9nUFgiH1x4EM138UUihMbK1oPju+jukGMOwvk bkMIiIDRc+G+NURpaNLC3xzeV9/uUND4rJ2RxZvhCGcYlbJRbzoYcM0= -----END RSA PRIVATE KEY----- mongo-ruby-driver-2.21.3/spec/support/certificates/client-second-level.pem000066400000000000000000000133661505113246500266530ustar00rootroot00000000000000Certificate: Data: Version: 3 (0x2) Serial Number: 403728 (0x62910) Signature Algorithm: sha256WithRSAEncryption Issuer: CN = intermediate-client, OU = Ruby Driver, O = MongoDB, L = New York City, ST = New York, C = US Validity Not Before: May 30 22:28:04 2019 GMT Not After : May 30 22:28:04 2039 GMT Subject: CN = localhost, OU = Ruby Driver, O = MongoDB, L = New York City, ST = New York, C = US Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public-Key: (2048 bit) Modulus: 00:93:97:c2:b6:1b:ca:ba:e7:c4:64:5a:a9:f7:1f: 32:ba:6d:83:fb:71:83:86:a1:d8:62:65:ba:bc:f0: ac:3c:c9:bd:85:79:03:72:1f:5d:fc:4e:ae:3d:85: 2d:6b:da:4c:c1:b3:dc:c3:c3:c1:b4:9d:f2:8e:2f: 97:68:31:44:2b:b9:c9:8c:8b:f7:89:e1:f0:d6:0b: 23:87:c6:5d:44:f3:9b:3d:c4:70:e2:03:c2:f2:0e: c6:b5:60:f7:28:44:71:d5:3e:9e:6c:5e:a7:1a:29: f0:9b:21:e3:be:b3:e0:0f:0d:c4:12:97:46:12:0b: 4f:84:61:79:65:3f:b2:45:90:e9:62:36:e7:9c:95: 00:93:79:69:b9:5c:b8:e6:37:ce:30:72:55:d9:19: 5f:6c:1a:9f:4d:af:9d:f2:ec:28:62:82:cf:27:3b: 83:0d:12:39:64:04:4e:68:84:8e:50:d9:52:83:db: df:50:69:5a:83:0e:be:57:35:cc:c9:5b:bb:25:7b: 6c:db:39:be:b7:76:db:b7:fc:3c:29:68:2e:2f:f3: 06:90:ff:37:c6:29:3c:fd:90:36:c4:44:87:b3:eb: 40:c4:fa:83:5e:e5:23:b3:13:bc:f6:89:7c:5e:bb: 18:0f:f3:d0:18:62:f2:0d:3a:72:9c:a3:22:ef:8c: 95:99 Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Subject Alternative Name: DNS:localhost, IP Address:127.0.0.1 Signature Algorithm: sha256WithRSAEncryption 53:66:80:0a:e4:a2:ec:d5:9f:af:f4:23:15:a1:82:27:e5:66: a9:7f:55:e3:12:0d:ed:8d:09:0a:d9:ed:37:d6:7b:58:ce:7e: 85:72:f2:d4:9f:4e:bc:e4:27:fe:90:6a:4c:a9:49:74:50:5e: 2b:5c:16:50:d6:d2:6f:c0:39:d6:fa:03:74:5e:79:e0:bd:eb: ac:8d:11:86:9e:fd:06:22:c2:c0:e2:33:c0:5a:be:d0:4e:8c: 8e:22:0f:8c:c1:19:56:3e:74:21:8e:7f:54:b5:cd:73:7b:70: 34:2d:e4:45:df:c4:b1:a9:84:ac:26:a8:cd:7f:0f:59:7b:d9: a4:5e:65:02:f6:be:11:b7:ee:f4:9e:b9:b8:1f:1c:94:da:0e: 1c:0e:3d:c0:e4:40:e7:1d:98:5c:df:22:9f:82:21:c3:a0:52: 1e:f4:e0:2d:07:96:f6:39:32:83:4e:88:0e:66:e2:11:18:b7: bc:30:e5:6d:4f:76:05:bf:ed:ff:98:b1:06:64:94:46:e5:46: d5:0e:b7:9a:c6:91:c5:29:78:83:a3:d1:40:c2:de:6e:ad:67: 6b:fd:0f:0e:0c:b2:d5:6f:2c:19:d2:0d:83:5b:c7:22:ba:8a: 35:2a:58:39:8b:87:e8:76:b5:3b:38:1e:7c:80:47:5c:73:be: 83:96:16:65 -----BEGIN CERTIFICATE----- MIIDjTCCAnWgAwIBAgIDBikQMA0GCSqGSIb3DQEBCwUAMH4xHDAaBgNVBAMTE2lu dGVybWVkaWF0ZS1jbGllbnQxFDASBgNVBAsTC1J1YnkgRHJpdmVyMRAwDgYDVQQK EwdNb25nb0RCMRYwFAYDVQQHEw1OZXcgWW9yayBDaXR5MREwDwYDVQQIEwhOZXcg WW9yazELMAkGA1UEBhMCVVMwHhcNMTkwNTMwMjIyODA0WhcNMzkwNTMwMjIyODA0 WjB0MRIwEAYDVQQDEwlsb2NhbGhvc3QxFDASBgNVBAsTC1J1YnkgRHJpdmVyMRAw 
DgYDVQQKEwdNb25nb0RCMRYwFAYDVQQHEw1OZXcgWW9yayBDaXR5MREwDwYDVQQI EwhOZXcgWW9yazELMAkGA1UEBhMCVVMwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAw ggEKAoIBAQCTl8K2G8q658RkWqn3HzK6bYP7cYOGodhiZbq88Kw8yb2FeQNyH138 Tq49hS1r2kzBs9zDw8G0nfKOL5doMUQrucmMi/eJ4fDWCyOHxl1E85s9xHDiA8Ly Dsa1YPcoRHHVPp5sXqcaKfCbIeO+s+APDcQSl0YSC0+EYXllP7JFkOliNueclQCT eWm5XLjmN84wclXZGV9sGp9Nr53y7Chigs8nO4MNEjlkBE5ohI5Q2VKD299QaVqD Dr5XNczJW7sle2zbOb63dtu3/DwpaC4v8waQ/zfGKTz9kDbERIez60DE+oNe5SOz E7z2iXxeuxgP89AYYvINOnKcoyLvjJWZAgMBAAGjHjAcMBoGA1UdEQQTMBGCCWxv Y2FsaG9zdIcEfwAAATANBgkqhkiG9w0BAQsFAAOCAQEAU2aACuSi7NWfr/QjFaGC J+VmqX9V4xIN7Y0JCtntN9Z7WM5+hXLy1J9OvOQn/pBqTKlJdFBeK1wWUNbSb8A5 1voDdF554L3rrI0Rhp79BiLCwOIzwFq+0E6MjiIPjMEZVj50IY5/VLXNc3twNC3k Rd/EsamErCaozX8PWXvZpF5lAva+Ebfu9J65uB8clNoOHA49wORA5x2YXN8in4Ih w6BSHvTgLQeW9jkyg06IDmbiERi3vDDlbU92Bb/t/5ixBmSURuVG1Q63msaRxSl4 g6PRQMLebq1na/0PDgyy1W8sGdINg1vHIrqKNSpYOYuH6Ha1OzgefIBHXHO+g5YW ZQ== -----END CERTIFICATE----- -----BEGIN RSA PRIVATE KEY----- MIIEpQIBAAKCAQEAk5fCthvKuufEZFqp9x8yum2D+3GDhqHYYmW6vPCsPMm9hXkD ch9d/E6uPYUta9pMwbPcw8PBtJ3yji+XaDFEK7nJjIv3ieHw1gsjh8ZdRPObPcRw 4gPC8g7GtWD3KERx1T6ebF6nGinwmyHjvrPgDw3EEpdGEgtPhGF5ZT+yRZDpYjbn nJUAk3lpuVy45jfOMHJV2RlfbBqfTa+d8uwoYoLPJzuDDRI5ZAROaISOUNlSg9vf UGlagw6+VzXMyVu7JXts2zm+t3bbt/w8KWguL/MGkP83xik8/ZA2xESHs+tAxPqD XuUjsxO89ol8XrsYD/PQGGLyDTpynKMi74yVmQIDAQABAoIBAQCPEqxzsFlD+exN g/4DSsD4K7Wnh5CCcF28dPUitwOgIciQnJCUjoejT/pkNLelN4b0txCozRj3p607 3DKflDKLWJxinEQn61h1hXK56bb8YlH4/HaZAiB2WZCSvx6YcFEQ8JTOZKsEF+ff 2mhVszTeIvARPYd1cnVw1LTDS43bFHbe0lnj/rxsX62IYfaTJjfDa3n8cXPvrP1Y Kkc+cV11FqSfPM0zMfE2ORNjnqEkKNb1eE9gIHSQ3nForTCASZR7gXKYTqJG6rJd XFluDztViR5ieNeh7rMBVadPTTpt+pwtBXdKuC9+OUEe6zHnsveIFTlq/mDQuXoW qaJgtJYhAoGBANF3oss8thvLVXAL6hdupz1htkv46LEkx8anZL3bKu/qlu5qZ0M3 sUAXZoKV1DF+LOxt5h+ZszDz1NtXh3zV/gTfNPDEipzpInEHIUWz+jZkGQ0h3kLb H184uq3sT3uN+pImyhHHU9DhsUg/E4JxgtNCVXFyOT6B4TEQL3q60c4tAoGBALRh VXKjfBYdm5cQqpsgx7wzHV8qmlXM4n9EwPHeUpORUMOQWD+8n9umHMtcXzg7JxyJ UnNFRWtr/s/QOdxDXofr+PJoD5DfFLQoe6TAx7/tS5XCfv2owisCCn0lVt70mw+K Bs8HjVl3D/LZqaohCW6PyRftySMQGS6oSAEbADSdAoGBAJ08S/R5t0233YOFPgym 4F1AOuJejvViYaAqSYIGwf1kQDXpo2gepywwJKADrkwUpc44VNUFwDAP6IlZ8/du fwbTMl9FebN9gYAM1RoIlts7Wl60PK485BjLcb5as/NQSXZqLacY1D7pG/Xae1+g q46/rXnCP1w/jHYS60EaeaFlAoGAacwZGRcohbQx+QXOexRb8lesp4/OW/rC8lC6 NmLm3iTCUSINkLyqqmMgympQcyPGyecFVBTSJbJ/DxabiUR+YoyWRF+imZ8ufois FLL5temRhrJAV7kuwZj92+8Vp8miVRfo7G8Kienakd72s5GS/aUaFo3ihk0/5+zN 5tAWa8UCgYEAovak6JyEl8SShNjLhJepF6STMkY0tm15K4kBvYQd9Jn5ubm9tK0I ZuenxJJSsKz/tLKaT4AK92r7lQp9nUFgiH1x4EM138UUihMbK1oPju+jukGMOwvk bkMIiIDRc+G+NURpaNLC3xzeV9/uUND4rJ2RxZvhCGcYlbJRbzoYcM0= -----END RSA PRIVATE KEY----- mongo-ruby-driver-2.21.3/spec/support/certificates/client-x509.crt000066400000000000000000000104061505113246500247770ustar00rootroot00000000000000Certificate: Data: Version: 3 (0x2) Serial Number: 602210 (0x93062) Signature Algorithm: sha256WithRSAEncryption Issuer: CN = Ruby Driver CA, OU = Drivers, O = MongoDB, L = New York City, ST = New York, C = US Validity Not Before: Sep 4 21:17:42 2019 GMT Not After : Sep 4 21:17:42 2039 GMT Subject: CN = localhost, OU = x509, O = MongoDB, L = New York City, ST = New York, C = US Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public-Key: (2048 bit) Modulus: 00:c4:e6:20:8f:58:42:53:51:24:64:b4:d0:25:cf: 79:e0:71:f8:9e:97:35:a8:df:fa:d9:63:eb:63:b2: b4:77:60:af:8e:09:6d:72:50:bc:ea:7c:57:3f:17: 51:b0:05:85:e8:3b:67:4b:97:84:61:bb:68:09:b4: 96:da:c8:3b:7d:53:b8:10:fe:0d:71:2f:b8:5d:83: 86:3f:06:57:e2:6c:d5:2f:c8:6c:74:fb:d8:6f:77: 
df:ba:6d:52:61:3c:33:76:a5:5f:62:68:af:a4:e8: dc:36:2a:b9:54:47:91:ec:4f:09:b9:2e:ef:37:4d: d7:04:db:48:fc:8d:c2:44:f1:9f:79:21:f0:06:fe: b4:e5:50:3c:cf:d1:3f:59:b5:8d:dc:d0:39:31:53: 95:42:d7:92:c3:c9:d5:93:48:e8:dc:16:ce:61:ec: 6f:ce:91:5c:91:2e:59:18:1f:fc:a8:ff:52:51:cf: 10:c0:be:a0:ad:cb:63:98:30:66:0e:42:e3:ca:6b: 2d:f8:92:c7:24:a7:03:65:96:0a:9c:ce:09:e7:ae: c2:a7:ea:6c:54:bb:e8:24:62:31:48:fb:d0:df:e1: a2:3c:5f:d2:89:29:de:4f:6b:73:88:2a:68:57:08: 7a:1e:aa:bd:70:79:e7:dc:f5:e1:9f:39:83:7b:70: 55:bb Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Key Usage: Digital Signature X509v3 Extended Key Usage: TLS Web Client Authentication X509v3 Subject Alternative Name: DNS:localhost, IP Address:127.0.0.1 Signature Algorithm: sha256WithRSAEncryption 92:04:8c:a5:56:c0:01:37:65:ff:d2:0e:5b:be:dd:78:9c:e2: 45:3d:fc:34:e5:23:f3:75:fb:70:3b:06:9f:e9:63:e9:f0:8a: 14:54:3d:d9:6a:22:af:04:00:25:95:80:e8:83:0b:c7:6f:f0: f1:58:2b:07:86:6b:e3:eb:b0:ea:09:b2:5e:15:05:14:89:2b: 02:99:09:97:6d:49:19:ac:c2:50:91:2b:03:e6:75:ce:27:9d: 8f:c0:b5:cd:b2:1f:7d:66:75:c7:d1:a7:16:b3:cf:8b:1e:9b: e4:46:da:e2:02:2c:55:74:56:8c:e6:d9:27:53:9f:b2:f5:09: ba:fe:df:e2:e1:b7:7d:43:8a:9d:bb:f0:3d:b9:d4:ce:26:8f: d9:cc:e6:2e:1c:81:fc:6e:a0:5f:01:23:68:9d:fe:1b:ee:03: 69:f1:10:af:5a:0e:dc:96:e2:56:ae:ca:35:b3:08:61:34:37: e1:e6:53:ef:68:84:87:f4:56:c5:49:45:08:90:46:3e:1c:b5: 40:08:f7:09:51:d7:24:53:49:b5:b1:2f:85:39:b9:0b:0e:f9: 05:ea:a3:d0:47:6d:69:6b:9c:25:8e:ad:61:01:86:96:28:3b: fd:6f:78:79:66:b1:cc:de:fc:18:45:cf:84:f1:d0:e2:46:4f: f8:9d:95:a4 -----BEGIN CERTIFICATE----- MIIDnzCCAoegAwIBAgIDCTBiMA0GCSqGSIb3DQEBCwUAMHUxFzAVBgNVBAMTDlJ1 YnkgRHJpdmVyIENBMRAwDgYDVQQLEwdEcml2ZXJzMRAwDgYDVQQKEwdNb25nb0RC MRYwFAYDVQQHEw1OZXcgWW9yayBDaXR5MREwDwYDVQQIEwhOZXcgWW9yazELMAkG A1UEBhMCVVMwHhcNMTkwOTA0MjExNzQyWhcNMzkwOTA0MjExNzQyWjBtMRIwEAYD VQQDEwlsb2NhbGhvc3QxDTALBgNVBAsTBHg1MDkxEDAOBgNVBAoTB01vbmdvREIx FjAUBgNVBAcTDU5ldyBZb3JrIENpdHkxETAPBgNVBAgTCE5ldyBZb3JrMQswCQYD VQQGEwJVUzCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMTmII9YQlNR JGS00CXPeeBx+J6XNajf+tlj62OytHdgr44JbXJQvOp8Vz8XUbAFheg7Z0uXhGG7 aAm0ltrIO31TuBD+DXEvuF2Dhj8GV+Js1S/IbHT72G9337ptUmE8M3alX2Jor6To 3DYquVRHkexPCbku7zdN1wTbSPyNwkTxn3kh8Ab+tOVQPM/RP1m1jdzQOTFTlULX ksPJ1ZNI6NwWzmHsb86RXJEuWRgf/Kj/UlHPEMC+oK3LY5gwZg5C48prLfiSxySn A2WWCpzOCeeuwqfqbFS76CRiMUj70N/hojxf0okp3k9rc4gqaFcIeh6qvXB559z1 4Z85g3twVbsCAwEAAaNAMD4wCwYDVR0PBAQDAgeAMBMGA1UdJQQMMAoGCCsGAQUF BwMCMBoGA1UdEQQTMBGCCWxvY2FsaG9zdIcEfwAAATANBgkqhkiG9w0BAQsFAAOC AQEAkgSMpVbAATdl/9IOW77deJziRT38NOUj83X7cDsGn+lj6fCKFFQ92WoirwQA JZWA6IMLx2/w8VgrB4Zr4+uw6gmyXhUFFIkrApkJl21JGazCUJErA+Z1ziedj8C1 zbIffWZ1x9GnFrPPix6b5Eba4gIsVXRWjObZJ1OfsvUJuv7f4uG3fUOKnbvwPbnU ziaP2czmLhyB/G6gXwEjaJ3+G+4DafEQr1oO3JbiVq7KNbMIYTQ34eZT72iEh/RW xUlFCJBGPhy1QAj3CVHXJFNJtbEvhTm5Cw75Beqj0EdtaWucJY6tYQGGlig7/W94 eWaxzN78GEXPhPHQ4kZP+J2VpA== -----END CERTIFICATE----- mongo-ruby-driver-2.21.3/spec/support/certificates/client-x509.key000066400000000000000000000032171505113246500250010ustar00rootroot00000000000000-----BEGIN RSA PRIVATE KEY----- MIIEpQIBAAKCAQEAxOYgj1hCU1EkZLTQJc954HH4npc1qN/62WPrY7K0d2Cvjglt clC86nxXPxdRsAWF6DtnS5eEYbtoCbSW2sg7fVO4EP4NcS+4XYOGPwZX4mzVL8hs dPvYb3ffum1SYTwzdqVfYmivpOjcNiq5VEeR7E8JuS7vN03XBNtI/I3CRPGfeSHw Bv605VA8z9E/WbWN3NA5MVOVQteSw8nVk0jo3BbOYexvzpFckS5ZGB/8qP9SUc8Q wL6grctjmDBmDkLjymst+JLHJKcDZZYKnM4J567Cp+psVLvoJGIxSPvQ3+GiPF/S iSneT2tziCpoVwh6Hqq9cHnn3PXhnzmDe3BVuwIDAQABAoIBAD9rdC9XmT1m1FcP mj+jfTka3J6QS7tSMWUV9vqM0+3jmYghZzw73y2T0TJEG46bqM7tW08QxZYJG/CM 
V06u0eKDNbVbbw729OZB99qS+3m13lDeuHhRqhv1O327up4RGu5rQ7bZy0FNs6hK yJjp2ImJx7L6+BgTHV+2FeMq8djsffJDvsLn65W2Fw0pBw+1pFYJMMLodNrSXkGi FaE+XLO/FMmFfI6fc4uqMgXd+RLmGC3DY4lnbZMf14nlNn+SNMr08v+wipJNqE3K OERRkgm/uXIbo+a275suUZ8kVRlVMtIXVYrwMj1JQY7YJ2uiOy91QoWvzPu6wUGH g/ehttECgYEA4yuEU4rdJs/roEy+rY6bEjgcvN6uzoEjv4z4MLLHk61p88RDykYZ C0crmwiXItWPZ6rm458+TwIqawrLQAmJU6iSSmVy/ed/5C3vKDyEq84N3eglac/U yj6kk5vztCtUrr8Z2dnrBAz1LuAYUqPs8fVmYGYiPfM0+jLaZK2L+uMCgYEA3eMj xANChhiTCdaw3hEL57c2pbZ/xBwGi8VWZqJvxdJNbZgc+RDw4ytz1d3DZCRWfxIF w4n69wjiakZ9DA5YdzIvplv8YfZ1bAo0JSGyybERXKTVUj3AqBCf4bGRZJCgD+/g aGZpJrfD+7ho8FyOvt9LvLos8UPaJD1Llse1+UkCgYEAyyNi1QHb+JT88v8tky1u ZcBfklTepDK+sM9yMLnt1ZTApgbfR8WfJ4Kg76Wi4Ldv4RfmF62SnjwlikrArabZ ckHPb0+AoKOerYCV17kmOiusIr8wlYoPkjqqGITgTEBjHVAt4a0Ihzq/FQe3OE71 1vfGcHVkMVmGCiXnPRgjkFkCgYEAu0TJGtXlf2eeMd+Qxtt8QMTQymuMyecdXzne AiF2VG96CdUoHs29gP1bdlUEY7CHkBeV5cK+nWBSN3/mahZxc6hXrwBTshpgYB78 g5o9WxymmppDsHWN9EqTpdhH7ahibxD1RSep95OBRSIO704u68lqEo7O/5FUuuFA urEzVIECgYEAyX44ZLYW7c68fS2zTvnGBBINgntookhRK0sMUwuYvDL5RobnKusP 2Fz4gZtTmpRfgxcglih+EJPUhqqn6UteXG/TNatrf27DOuQgJHliQa/GDcANkEkT UtGu2aCxd0Na9lPvEzor37PPzLKdkaiAmAnyLmTpn5whGFgpXa32Ups= -----END RSA PRIVATE KEY----- mongo-ruby-driver-2.21.3/spec/support/certificates/client-x509.pem000066400000000000000000000136251505113246500247760ustar00rootroot00000000000000Certificate: Data: Version: 3 (0x2) Serial Number: 602210 (0x93062) Signature Algorithm: sha256WithRSAEncryption Issuer: CN = Ruby Driver CA, OU = Drivers, O = MongoDB, L = New York City, ST = New York, C = US Validity Not Before: Sep 4 21:17:42 2019 GMT Not After : Sep 4 21:17:42 2039 GMT Subject: CN = localhost, OU = x509, O = MongoDB, L = New York City, ST = New York, C = US Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public-Key: (2048 bit) Modulus: 00:c4:e6:20:8f:58:42:53:51:24:64:b4:d0:25:cf: 79:e0:71:f8:9e:97:35:a8:df:fa:d9:63:eb:63:b2: b4:77:60:af:8e:09:6d:72:50:bc:ea:7c:57:3f:17: 51:b0:05:85:e8:3b:67:4b:97:84:61:bb:68:09:b4: 96:da:c8:3b:7d:53:b8:10:fe:0d:71:2f:b8:5d:83: 86:3f:06:57:e2:6c:d5:2f:c8:6c:74:fb:d8:6f:77: df:ba:6d:52:61:3c:33:76:a5:5f:62:68:af:a4:e8: dc:36:2a:b9:54:47:91:ec:4f:09:b9:2e:ef:37:4d: d7:04:db:48:fc:8d:c2:44:f1:9f:79:21:f0:06:fe: b4:e5:50:3c:cf:d1:3f:59:b5:8d:dc:d0:39:31:53: 95:42:d7:92:c3:c9:d5:93:48:e8:dc:16:ce:61:ec: 6f:ce:91:5c:91:2e:59:18:1f:fc:a8:ff:52:51:cf: 10:c0:be:a0:ad:cb:63:98:30:66:0e:42:e3:ca:6b: 2d:f8:92:c7:24:a7:03:65:96:0a:9c:ce:09:e7:ae: c2:a7:ea:6c:54:bb:e8:24:62:31:48:fb:d0:df:e1: a2:3c:5f:d2:89:29:de:4f:6b:73:88:2a:68:57:08: 7a:1e:aa:bd:70:79:e7:dc:f5:e1:9f:39:83:7b:70: 55:bb Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Key Usage: Digital Signature X509v3 Extended Key Usage: TLS Web Client Authentication X509v3 Subject Alternative Name: DNS:localhost, IP Address:127.0.0.1 Signature Algorithm: sha256WithRSAEncryption 92:04:8c:a5:56:c0:01:37:65:ff:d2:0e:5b:be:dd:78:9c:e2: 45:3d:fc:34:e5:23:f3:75:fb:70:3b:06:9f:e9:63:e9:f0:8a: 14:54:3d:d9:6a:22:af:04:00:25:95:80:e8:83:0b:c7:6f:f0: f1:58:2b:07:86:6b:e3:eb:b0:ea:09:b2:5e:15:05:14:89:2b: 02:99:09:97:6d:49:19:ac:c2:50:91:2b:03:e6:75:ce:27:9d: 8f:c0:b5:cd:b2:1f:7d:66:75:c7:d1:a7:16:b3:cf:8b:1e:9b: e4:46:da:e2:02:2c:55:74:56:8c:e6:d9:27:53:9f:b2:f5:09: ba:fe:df:e2:e1:b7:7d:43:8a:9d:bb:f0:3d:b9:d4:ce:26:8f: d9:cc:e6:2e:1c:81:fc:6e:a0:5f:01:23:68:9d:fe:1b:ee:03: 69:f1:10:af:5a:0e:dc:96:e2:56:ae:ca:35:b3:08:61:34:37: e1:e6:53:ef:68:84:87:f4:56:c5:49:45:08:90:46:3e:1c:b5: 40:08:f7:09:51:d7:24:53:49:b5:b1:2f:85:39:b9:0b:0e:f9: 
05:ea:a3:d0:47:6d:69:6b:9c:25:8e:ad:61:01:86:96:28:3b: fd:6f:78:79:66:b1:cc:de:fc:18:45:cf:84:f1:d0:e2:46:4f: f8:9d:95:a4 -----BEGIN CERTIFICATE----- MIIDnzCCAoegAwIBAgIDCTBiMA0GCSqGSIb3DQEBCwUAMHUxFzAVBgNVBAMTDlJ1 YnkgRHJpdmVyIENBMRAwDgYDVQQLEwdEcml2ZXJzMRAwDgYDVQQKEwdNb25nb0RC MRYwFAYDVQQHEw1OZXcgWW9yayBDaXR5MREwDwYDVQQIEwhOZXcgWW9yazELMAkG A1UEBhMCVVMwHhcNMTkwOTA0MjExNzQyWhcNMzkwOTA0MjExNzQyWjBtMRIwEAYD VQQDEwlsb2NhbGhvc3QxDTALBgNVBAsTBHg1MDkxEDAOBgNVBAoTB01vbmdvREIx FjAUBgNVBAcTDU5ldyBZb3JrIENpdHkxETAPBgNVBAgTCE5ldyBZb3JrMQswCQYD VQQGEwJVUzCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMTmII9YQlNR JGS00CXPeeBx+J6XNajf+tlj62OytHdgr44JbXJQvOp8Vz8XUbAFheg7Z0uXhGG7 aAm0ltrIO31TuBD+DXEvuF2Dhj8GV+Js1S/IbHT72G9337ptUmE8M3alX2Jor6To 3DYquVRHkexPCbku7zdN1wTbSPyNwkTxn3kh8Ab+tOVQPM/RP1m1jdzQOTFTlULX ksPJ1ZNI6NwWzmHsb86RXJEuWRgf/Kj/UlHPEMC+oK3LY5gwZg5C48prLfiSxySn A2WWCpzOCeeuwqfqbFS76CRiMUj70N/hojxf0okp3k9rc4gqaFcIeh6qvXB559z1 4Z85g3twVbsCAwEAAaNAMD4wCwYDVR0PBAQDAgeAMBMGA1UdJQQMMAoGCCsGAQUF BwMCMBoGA1UdEQQTMBGCCWxvY2FsaG9zdIcEfwAAATANBgkqhkiG9w0BAQsFAAOC AQEAkgSMpVbAATdl/9IOW77deJziRT38NOUj83X7cDsGn+lj6fCKFFQ92WoirwQA JZWA6IMLx2/w8VgrB4Zr4+uw6gmyXhUFFIkrApkJl21JGazCUJErA+Z1ziedj8C1 zbIffWZ1x9GnFrPPix6b5Eba4gIsVXRWjObZJ1OfsvUJuv7f4uG3fUOKnbvwPbnU ziaP2czmLhyB/G6gXwEjaJ3+G+4DafEQr1oO3JbiVq7KNbMIYTQ34eZT72iEh/RW xUlFCJBGPhy1QAj3CVHXJFNJtbEvhTm5Cw75Beqj0EdtaWucJY6tYQGGlig7/W94 eWaxzN78GEXPhPHQ4kZP+J2VpA== -----END CERTIFICATE----- -----BEGIN RSA PRIVATE KEY----- MIIEpQIBAAKCAQEAxOYgj1hCU1EkZLTQJc954HH4npc1qN/62WPrY7K0d2Cvjglt clC86nxXPxdRsAWF6DtnS5eEYbtoCbSW2sg7fVO4EP4NcS+4XYOGPwZX4mzVL8hs dPvYb3ffum1SYTwzdqVfYmivpOjcNiq5VEeR7E8JuS7vN03XBNtI/I3CRPGfeSHw Bv605VA8z9E/WbWN3NA5MVOVQteSw8nVk0jo3BbOYexvzpFckS5ZGB/8qP9SUc8Q wL6grctjmDBmDkLjymst+JLHJKcDZZYKnM4J567Cp+psVLvoJGIxSPvQ3+GiPF/S iSneT2tziCpoVwh6Hqq9cHnn3PXhnzmDe3BVuwIDAQABAoIBAD9rdC9XmT1m1FcP mj+jfTka3J6QS7tSMWUV9vqM0+3jmYghZzw73y2T0TJEG46bqM7tW08QxZYJG/CM V06u0eKDNbVbbw729OZB99qS+3m13lDeuHhRqhv1O327up4RGu5rQ7bZy0FNs6hK yJjp2ImJx7L6+BgTHV+2FeMq8djsffJDvsLn65W2Fw0pBw+1pFYJMMLodNrSXkGi FaE+XLO/FMmFfI6fc4uqMgXd+RLmGC3DY4lnbZMf14nlNn+SNMr08v+wipJNqE3K OERRkgm/uXIbo+a275suUZ8kVRlVMtIXVYrwMj1JQY7YJ2uiOy91QoWvzPu6wUGH g/ehttECgYEA4yuEU4rdJs/roEy+rY6bEjgcvN6uzoEjv4z4MLLHk61p88RDykYZ C0crmwiXItWPZ6rm458+TwIqawrLQAmJU6iSSmVy/ed/5C3vKDyEq84N3eglac/U yj6kk5vztCtUrr8Z2dnrBAz1LuAYUqPs8fVmYGYiPfM0+jLaZK2L+uMCgYEA3eMj xANChhiTCdaw3hEL57c2pbZ/xBwGi8VWZqJvxdJNbZgc+RDw4ytz1d3DZCRWfxIF w4n69wjiakZ9DA5YdzIvplv8YfZ1bAo0JSGyybERXKTVUj3AqBCf4bGRZJCgD+/g aGZpJrfD+7ho8FyOvt9LvLos8UPaJD1Llse1+UkCgYEAyyNi1QHb+JT88v8tky1u ZcBfklTepDK+sM9yMLnt1ZTApgbfR8WfJ4Kg76Wi4Ldv4RfmF62SnjwlikrArabZ ckHPb0+AoKOerYCV17kmOiusIr8wlYoPkjqqGITgTEBjHVAt4a0Ihzq/FQe3OE71 1vfGcHVkMVmGCiXnPRgjkFkCgYEAu0TJGtXlf2eeMd+Qxtt8QMTQymuMyecdXzne AiF2VG96CdUoHs29gP1bdlUEY7CHkBeV5cK+nWBSN3/mahZxc6hXrwBTshpgYB78 g5o9WxymmppDsHWN9EqTpdhH7ahibxD1RSep95OBRSIO704u68lqEo7O/5FUuuFA urEzVIECgYEAyX44ZLYW7c68fS2zTvnGBBINgntookhRK0sMUwuYvDL5RobnKusP 2Fz4gZtTmpRfgxcglih+EJPUhqqn6UteXG/TNatrf27DOuQgJHliQa/GDcANkEkT UtGu2aCxd0Na9lPvEzor37PPzLKdkaiAmAnyLmTpn5whGFgpXa32Ups= -----END RSA PRIVATE KEY----- mongo-ruby-driver-2.21.3/spec/support/certificates/client.crt000066400000000000000000000101711505113246500242730ustar00rootroot00000000000000Certificate: Data: Version: 3 (0x2) Serial Number: 78625 (0x13321) Signature Algorithm: sha256WithRSAEncryption Issuer: CN = Ruby Driver CA, OU = Drivers, O = MongoDB, L = New York City, ST = New York, C = US Validity Not Before: Feb 14 20:57:58 2019 GMT Not After : Feb 14 20:57:58 2039 GMT 
Subject: CN = localhost, OU = Drivers, O = MongoDB, L = New York City, ST = New York, C = US Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public-Key: (2048 bit) Modulus: 00:96:2d:53:a9:0d:f6:fe:3c:34:5c:60:87:56:c8: 69:da:85:e7:10:96:c6:39:6e:5c:09:f3:42:e0:a7: bb:38:37:ef:a6:63:6c:28:8b:1d:1a:52:00:52:19: 94:63:f9:58:4e:9c:d2:ca:ab:2e:20:66:c3:7d:fb: a4:52:e6:7c:5b:bf:b4:06:1f:e4:9e:6b:77:f1:38: 14:b2:56:af:77:dd:23:99:1c:b8:07:e5:79:c6:b9: 10:18:ea:47:0c:b5:df:d0:a6:15:14:09:37:51:9e: f2:7c:2b:66:f8:e9:59:f6:51:e9:50:e2:11:52:d9: cf:00:0c:9e:15:55:51:e1:d8:96:9d:15:54:9c:78: db:5f:2e:f2:91:5f:55:3c:3f:18:6f:32:16:82:76: 9e:83:6c:25:22:b1:27:70:69:cd:aa:a1:52:64:60: e5:b3:24:ee:29:ef:2c:ad:de:09:53:02:08:39:10: 4f:4a:fc:8b:21:18:ce:f1:fc:54:0c:7f:a6:ec:b2: b1:d6:c7:61:bb:bb:3f:7e:31:80:f1:39:f8:4d:e8: c2:45:11:e1:ac:90:97:e5:4a:58:a6:07:1b:7f:61: c4:aa:f2:66:66:06:b2:c7:1b:71:df:dc:3f:53:fe: 85:e4:8b:97:11:c2:d0:7e:10:35:2b:a3:e7:7d:c7: 6b:f7 Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Subject Alternative Name: DNS:localhost, IP Address:127.0.0.1, IP Address:0:0:0:0:0:0:0:1 Signature Algorithm: sha256WithRSAEncryption 48:41:5e:7b:fd:f5:bf:32:52:cd:bc:f8:71:7e:0d:0d:22:05: 7d:a5:11:ed:86:ac:02:9c:c3:e1:f4:f4:36:d2:48:8d:a4:5b: 4a:d1:76:8f:25:17:72:07:99:49:2f:09:f0:25:f9:0a:a7:06: 99:ab:e8:f7:48:c7:4c:f1:a1:4e:f4:64:3a:d8:25:e4:76:30: 2a:f6:b8:71:ee:05:cd:b2:7a:7f:e6:c7:7a:c3:af:f1:d1:16: 73:a5:bf:db:14:71:c4:d8:f7:e7:ce:82:48:2f:ce:e5:fd:8f: 89:4b:a6:0c:1e:6b:42:9d:64:73:7e:37:00:07:b5:e6:b9:b9: 89:38:04:d6:67:dd:e1:26:98:e4:49:06:8e:2c:d3:ee:c1:ee: 09:b9:95:3a:bd:6a:61:c2:d2:19:6c:e5:86:49:63:ae:e4:93: 92:01:48:d2:32:94:2c:62:fd:04:2e:7f:a2:26:85:dd:99:78: da:9b:0c:84:19:29:b2:c6:55:e1:4d:97:d5:9a:63:e0:8d:f8: 67:4c:3f:0e:6b:67:13:58:ba:28:ec:40:e6:65:c1:18:23:ae: 16:1c:fb:7b:d9:bc:c2:84:71:fb:f5:a8:71:cc:a5:2f:28:3b: 45:97:d6:15:9e:e8:44:ec:9e:05:72:b2:0a:ac:31:fe:a9:0e: e1:ce:82:5f -----BEGIN CERTIFICATE----- MIIDkjCCAnqgAwIBAgIDATMhMA0GCSqGSIb3DQEBCwUAMHUxFzAVBgNVBAMTDlJ1 YnkgRHJpdmVyIENBMRAwDgYDVQQLEwdEcml2ZXJzMRAwDgYDVQQKEwdNb25nb0RC MRYwFAYDVQQHEw1OZXcgWW9yayBDaXR5MREwDwYDVQQIEwhOZXcgWW9yazELMAkG A1UEBhMCVVMwHhcNMTkwMjE0MjA1NzU4WhcNMzkwMjE0MjA1NzU4WjBwMRIwEAYD VQQDEwlsb2NhbGhvc3QxEDAOBgNVBAsTB0RyaXZlcnMxEDAOBgNVBAoTB01vbmdv REIxFjAUBgNVBAcTDU5ldyBZb3JrIENpdHkxETAPBgNVBAgTCE5ldyBZb3JrMQsw CQYDVQQGEwJVUzCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAJYtU6kN 9v48NFxgh1bIadqF5xCWxjluXAnzQuCnuzg376ZjbCiLHRpSAFIZlGP5WE6c0sqr LiBmw337pFLmfFu/tAYf5J5rd/E4FLJWr3fdI5kcuAfleca5EBjqRwy139CmFRQJ N1Ge8nwrZvjpWfZR6VDiEVLZzwAMnhVVUeHYlp0VVJx4218u8pFfVTw/GG8yFoJ2 noNsJSKxJ3BpzaqhUmRg5bMk7invLK3eCVMCCDkQT0r8iyEYzvH8VAx/puyysdbH Ybu7P34xgPE5+E3owkUR4ayQl+VKWKYHG39hxKryZmYGsscbcd/cP1P+heSLlxHC 0H4QNSuj533Ha/cCAwEAAaMwMC4wLAYDVR0RBCUwI4IJbG9jYWxob3N0hwR/AAAB hxAAAAAAAAAAAAAAAAAAAAABMA0GCSqGSIb3DQEBCwUAA4IBAQBIQV57/fW/MlLN vPhxfg0NIgV9pRHthqwCnMPh9PQ20kiNpFtK0XaPJRdyB5lJLwnwJfkKpwaZq+j3 SMdM8aFO9GQ62CXkdjAq9rhx7gXNsnp/5sd6w6/x0RZzpb/bFHHE2PfnzoJIL87l /Y+JS6YMHmtCnWRzfjcAB7XmubmJOATWZ93hJpjkSQaOLNPuwe4JuZU6vWphwtIZ bOWGSWOu5JOSAUjSMpQsYv0ELn+iJoXdmXjamwyEGSmyxlXhTZfVmmPgjfhnTD8O a2cTWLoo7EDmZcEYI64WHPt72bzChHH79ahxzKUvKDtFl9YVnuhE7J4FcrIKrDH+ qQ7hzoJf -----END CERTIFICATE----- mongo-ruby-driver-2.21.3/spec/support/certificates/client.key000066400000000000000000000032131505113246500242720ustar00rootroot00000000000000-----BEGIN RSA PRIVATE KEY----- MIIEowIBAAKCAQEAli1TqQ32/jw0XGCHVshp2oXnEJbGOW5cCfNC4Ke7ODfvpmNs KIsdGlIAUhmUY/lYTpzSyqsuIGbDffukUuZ8W7+0Bh/knmt38TgUslavd90jmRy4 
B+V5xrkQGOpHDLXf0KYVFAk3UZ7yfCtm+OlZ9lHpUOIRUtnPAAyeFVVR4diWnRVU nHjbXy7ykV9VPD8YbzIWgnaeg2wlIrEncGnNqqFSZGDlsyTuKe8srd4JUwIIORBP SvyLIRjO8fxUDH+m7LKx1sdhu7s/fjGA8Tn4TejCRRHhrJCX5UpYpgcbf2HEqvJm Zgayxxtx39w/U/6F5IuXEcLQfhA1K6Pnfcdr9wIDAQABAoIBAClvRB/mrHkk30WF lJHLJfmW7FPVZce+vUI5jgAyByPRuxtrXxIon9T9Pv1n9VtIFqdJ+ZbVeBqUf+eo oIQG99TQpbjy378d46/4Sy2RYURvDT1XgSccl2bO9LQLH6NQIvqMeFBY4pxwgHLl /rk6mQmvO/KHDUSQt95JnOxB6B+plv1prpQVGHVqzre8LAZdJdv/8wVqFHbrAGoU 62wUQ66y5oSxLN9YUzTQNb8ECBRTfmF9WCF9aXx4TJyX0+WkyiztUx0ArBm/ANGv k5GzPoL0UFRjVr+ObM755SHrJQ9pIOjJnZAA0DW8fHAQcV8XcA2UcqrA+jw/XX9Q ku4cz5ECgYEA5iopSEu8WLEkCmVAYVurd1gQSlJQ6aYrcMas+p2kqW1p/hkDvREO dt55evAKIENgVo9N4daN4K5ijftxxTQyX0HlO/0eeC6LpWIPjkkJ02H5htKvvIhw zxJd7XTKyQA1/wOc/Ooo5MfGO44eBVVBzErX6rGzc/PFfLzlGDa567sCgYEApwiz h9SppQwsbjXDjcOfJoZbmnJjzdMJ1rDUnLY7iCaVdf7cIaHCY72xSlgtruVkGQ6m vYwjh/YnwU3qoBi1tbu+ByHWxImLwsLZMA4ct4IF5Qvi2d2v4GtOq/sQcqB+35d/ dJ4CcR1sHe6R5H0uvDDmNoJiJFESbtTD0Wt4VvUCgYAehOSoc3JsCEERJ8/bmP4p ewHd+QBFmwUTlSSGrrSQyrNNQB/gyAw08tcE2CNfl8+EasgW9A4oBreGwBqb3Yn4 W5J729pYcUOPEGujoEevQcSGfhVTWHws2PCfdecVs/N09xOv7ZSykVLVvsh4SI/K +PmcYye6bk53dcyi407P1QKBgAUbiwHYSue1G5azJiurk65F5X8viEW+8koSVi4E lIVxSJi3Flwg6iTKpCU11Q/IC+uIOykIo/2AVW7fxxDmMIhCGWl2a27PFer6slF5 3P7vhuaeGm23Da27GkjAAJzAs6B6rXcPbduvnqK7rNJj0Y4HoMKB8iZSJFInR2Wb 964RAoGBAIvzvWDusSLOmkAN7KKwbmVdrKLgKPGtyNjJyKdp0zUxslOQsSraOq/K hsBgotDdq8igR6iUHfmcSs7FVU1QBBMfqe5XVJhj4GWMe2VzRIvBcuvaMLyUuZ61 TwPjfJRUCACo4MSQHEBlbWq9ZYzO8nZ+FyCldN/n5Uevp84BOyBO -----END RSA PRIVATE KEY----- mongo-ruby-driver-2.21.3/spec/support/certificates/client.pem000066400000000000000000000134041505113246500242660ustar00rootroot00000000000000Certificate: Data: Version: 3 (0x2) Serial Number: 78625 (0x13321) Signature Algorithm: sha256WithRSAEncryption Issuer: CN = Ruby Driver CA, OU = Drivers, O = MongoDB, L = New York City, ST = New York, C = US Validity Not Before: Feb 14 20:57:58 2019 GMT Not After : Feb 14 20:57:58 2039 GMT Subject: CN = localhost, OU = Drivers, O = MongoDB, L = New York City, ST = New York, C = US Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public-Key: (2048 bit) Modulus: 00:96:2d:53:a9:0d:f6:fe:3c:34:5c:60:87:56:c8: 69:da:85:e7:10:96:c6:39:6e:5c:09:f3:42:e0:a7: bb:38:37:ef:a6:63:6c:28:8b:1d:1a:52:00:52:19: 94:63:f9:58:4e:9c:d2:ca:ab:2e:20:66:c3:7d:fb: a4:52:e6:7c:5b:bf:b4:06:1f:e4:9e:6b:77:f1:38: 14:b2:56:af:77:dd:23:99:1c:b8:07:e5:79:c6:b9: 10:18:ea:47:0c:b5:df:d0:a6:15:14:09:37:51:9e: f2:7c:2b:66:f8:e9:59:f6:51:e9:50:e2:11:52:d9: cf:00:0c:9e:15:55:51:e1:d8:96:9d:15:54:9c:78: db:5f:2e:f2:91:5f:55:3c:3f:18:6f:32:16:82:76: 9e:83:6c:25:22:b1:27:70:69:cd:aa:a1:52:64:60: e5:b3:24:ee:29:ef:2c:ad:de:09:53:02:08:39:10: 4f:4a:fc:8b:21:18:ce:f1:fc:54:0c:7f:a6:ec:b2: b1:d6:c7:61:bb:bb:3f:7e:31:80:f1:39:f8:4d:e8: c2:45:11:e1:ac:90:97:e5:4a:58:a6:07:1b:7f:61: c4:aa:f2:66:66:06:b2:c7:1b:71:df:dc:3f:53:fe: 85:e4:8b:97:11:c2:d0:7e:10:35:2b:a3:e7:7d:c7: 6b:f7 Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Subject Alternative Name: DNS:localhost, IP Address:127.0.0.1, IP Address:0:0:0:0:0:0:0:1 Signature Algorithm: sha256WithRSAEncryption 48:41:5e:7b:fd:f5:bf:32:52:cd:bc:f8:71:7e:0d:0d:22:05: 7d:a5:11:ed:86:ac:02:9c:c3:e1:f4:f4:36:d2:48:8d:a4:5b: 4a:d1:76:8f:25:17:72:07:99:49:2f:09:f0:25:f9:0a:a7:06: 99:ab:e8:f7:48:c7:4c:f1:a1:4e:f4:64:3a:d8:25:e4:76:30: 2a:f6:b8:71:ee:05:cd:b2:7a:7f:e6:c7:7a:c3:af:f1:d1:16: 73:a5:bf:db:14:71:c4:d8:f7:e7:ce:82:48:2f:ce:e5:fd:8f: 89:4b:a6:0c:1e:6b:42:9d:64:73:7e:37:00:07:b5:e6:b9:b9: 89:38:04:d6:67:dd:e1:26:98:e4:49:06:8e:2c:d3:ee:c1:ee: 
09:b9:95:3a:bd:6a:61:c2:d2:19:6c:e5:86:49:63:ae:e4:93: 92:01:48:d2:32:94:2c:62:fd:04:2e:7f:a2:26:85:dd:99:78: da:9b:0c:84:19:29:b2:c6:55:e1:4d:97:d5:9a:63:e0:8d:f8: 67:4c:3f:0e:6b:67:13:58:ba:28:ec:40:e6:65:c1:18:23:ae: 16:1c:fb:7b:d9:bc:c2:84:71:fb:f5:a8:71:cc:a5:2f:28:3b: 45:97:d6:15:9e:e8:44:ec:9e:05:72:b2:0a:ac:31:fe:a9:0e: e1:ce:82:5f -----BEGIN CERTIFICATE----- MIIDkjCCAnqgAwIBAgIDATMhMA0GCSqGSIb3DQEBCwUAMHUxFzAVBgNVBAMTDlJ1 YnkgRHJpdmVyIENBMRAwDgYDVQQLEwdEcml2ZXJzMRAwDgYDVQQKEwdNb25nb0RC MRYwFAYDVQQHEw1OZXcgWW9yayBDaXR5MREwDwYDVQQIEwhOZXcgWW9yazELMAkG A1UEBhMCVVMwHhcNMTkwMjE0MjA1NzU4WhcNMzkwMjE0MjA1NzU4WjBwMRIwEAYD VQQDEwlsb2NhbGhvc3QxEDAOBgNVBAsTB0RyaXZlcnMxEDAOBgNVBAoTB01vbmdv REIxFjAUBgNVBAcTDU5ldyBZb3JrIENpdHkxETAPBgNVBAgTCE5ldyBZb3JrMQsw CQYDVQQGEwJVUzCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAJYtU6kN 9v48NFxgh1bIadqF5xCWxjluXAnzQuCnuzg376ZjbCiLHRpSAFIZlGP5WE6c0sqr LiBmw337pFLmfFu/tAYf5J5rd/E4FLJWr3fdI5kcuAfleca5EBjqRwy139CmFRQJ N1Ge8nwrZvjpWfZR6VDiEVLZzwAMnhVVUeHYlp0VVJx4218u8pFfVTw/GG8yFoJ2 noNsJSKxJ3BpzaqhUmRg5bMk7invLK3eCVMCCDkQT0r8iyEYzvH8VAx/puyysdbH Ybu7P34xgPE5+E3owkUR4ayQl+VKWKYHG39hxKryZmYGsscbcd/cP1P+heSLlxHC 0H4QNSuj533Ha/cCAwEAAaMwMC4wLAYDVR0RBCUwI4IJbG9jYWxob3N0hwR/AAAB hxAAAAAAAAAAAAAAAAAAAAABMA0GCSqGSIb3DQEBCwUAA4IBAQBIQV57/fW/MlLN vPhxfg0NIgV9pRHthqwCnMPh9PQ20kiNpFtK0XaPJRdyB5lJLwnwJfkKpwaZq+j3 SMdM8aFO9GQ62CXkdjAq9rhx7gXNsnp/5sd6w6/x0RZzpb/bFHHE2PfnzoJIL87l /Y+JS6YMHmtCnWRzfjcAB7XmubmJOATWZ93hJpjkSQaOLNPuwe4JuZU6vWphwtIZ bOWGSWOu5JOSAUjSMpQsYv0ELn+iJoXdmXjamwyEGSmyxlXhTZfVmmPgjfhnTD8O a2cTWLoo7EDmZcEYI64WHPt72bzChHH79ahxzKUvKDtFl9YVnuhE7J4FcrIKrDH+ qQ7hzoJf -----END CERTIFICATE----- -----BEGIN RSA PRIVATE KEY----- MIIEowIBAAKCAQEAli1TqQ32/jw0XGCHVshp2oXnEJbGOW5cCfNC4Ke7ODfvpmNs KIsdGlIAUhmUY/lYTpzSyqsuIGbDffukUuZ8W7+0Bh/knmt38TgUslavd90jmRy4 B+V5xrkQGOpHDLXf0KYVFAk3UZ7yfCtm+OlZ9lHpUOIRUtnPAAyeFVVR4diWnRVU nHjbXy7ykV9VPD8YbzIWgnaeg2wlIrEncGnNqqFSZGDlsyTuKe8srd4JUwIIORBP SvyLIRjO8fxUDH+m7LKx1sdhu7s/fjGA8Tn4TejCRRHhrJCX5UpYpgcbf2HEqvJm Zgayxxtx39w/U/6F5IuXEcLQfhA1K6Pnfcdr9wIDAQABAoIBAClvRB/mrHkk30WF lJHLJfmW7FPVZce+vUI5jgAyByPRuxtrXxIon9T9Pv1n9VtIFqdJ+ZbVeBqUf+eo oIQG99TQpbjy378d46/4Sy2RYURvDT1XgSccl2bO9LQLH6NQIvqMeFBY4pxwgHLl /rk6mQmvO/KHDUSQt95JnOxB6B+plv1prpQVGHVqzre8LAZdJdv/8wVqFHbrAGoU 62wUQ66y5oSxLN9YUzTQNb8ECBRTfmF9WCF9aXx4TJyX0+WkyiztUx0ArBm/ANGv k5GzPoL0UFRjVr+ObM755SHrJQ9pIOjJnZAA0DW8fHAQcV8XcA2UcqrA+jw/XX9Q ku4cz5ECgYEA5iopSEu8WLEkCmVAYVurd1gQSlJQ6aYrcMas+p2kqW1p/hkDvREO dt55evAKIENgVo9N4daN4K5ijftxxTQyX0HlO/0eeC6LpWIPjkkJ02H5htKvvIhw zxJd7XTKyQA1/wOc/Ooo5MfGO44eBVVBzErX6rGzc/PFfLzlGDa567sCgYEApwiz h9SppQwsbjXDjcOfJoZbmnJjzdMJ1rDUnLY7iCaVdf7cIaHCY72xSlgtruVkGQ6m vYwjh/YnwU3qoBi1tbu+ByHWxImLwsLZMA4ct4IF5Qvi2d2v4GtOq/sQcqB+35d/ dJ4CcR1sHe6R5H0uvDDmNoJiJFESbtTD0Wt4VvUCgYAehOSoc3JsCEERJ8/bmP4p ewHd+QBFmwUTlSSGrrSQyrNNQB/gyAw08tcE2CNfl8+EasgW9A4oBreGwBqb3Yn4 W5J729pYcUOPEGujoEevQcSGfhVTWHws2PCfdecVs/N09xOv7ZSykVLVvsh4SI/K +PmcYye6bk53dcyi407P1QKBgAUbiwHYSue1G5azJiurk65F5X8viEW+8koSVi4E lIVxSJi3Flwg6iTKpCU11Q/IC+uIOykIo/2AVW7fxxDmMIhCGWl2a27PFer6slF5 3P7vhuaeGm23Da27GkjAAJzAs6B6rXcPbduvnqK7rNJj0Y4HoMKB8iZSJFInR2Wb 964RAoGBAIvzvWDusSLOmkAN7KKwbmVdrKLgKPGtyNjJyKdp0zUxslOQsSraOq/K hsBgotDdq8igR6iUHfmcSs7FVU1QBBMfqe5XVJhj4GWMe2VzRIvBcuvaMLyUuZ61 TwPjfJRUCACo4MSQHEBlbWq9ZYzO8nZ+FyCldN/n5Uevp84BOyBO -----END RSA PRIVATE KEY----- mongo-ruby-driver-2.21.3/spec/support/certificates/crl.pem000066400000000000000000000012521505113246500235660ustar00rootroot00000000000000-----BEGIN X509 CRL----- MIIBzzCBuAIBATANBgkqhkiG9w0BAQsFADB1MRcwFQYDVQQDEw5SdWJ5IERyaXZl 
ciBDQTEQMA4GA1UECxMHRHJpdmVyczEQMA4GA1UEChMHTW9uZ29EQjEWMBQGA1UE BxMNTmV3IFlvcmsgQ2l0eTERMA8GA1UECBMITmV3IFlvcmsxCzAJBgNVBAYTAlVT Fw0xOTAyMTQyMjE2NDJaFw0zOTAyMTQyMjE2NDJaoA8wDTALBgNVHRQEBAICEAEw DQYJKoZIhvcNAQELBQADggEBAAZlakGW3JKfp6ZfMZqlyDvZZwXMYP4YEFT8ML81 vtOGyvDGqYVUCFSiXQK+pUaf9/TRijI5PH772hrD75/4n3t7tTLIvisJZTAFRXTd UFM1jAvxPBaQ7/GIQIYOgsE5d5g9LIM4S9J2h3tUWMJOk2FR9a5J3BMfSHCl3OdQ L7PXLq5EXc9NB871BQLB/O5IQyrDQUrB0gksFAryYGErZG39WrflB9Wdi5g5Vi4U JcH7QQ3M7EnsWPHjO1qCUl1Q0B6aAtQRi+R8DDgWeAi7I77xIU/FcK8W2f1C5pjA wLqusCP+pwuVidi9Rpsb687GnTEczdsdLn5RPWCQ0NT9lYY= -----END X509 CRL----- mongo-ruby-driver-2.21.3/spec/support/certificates/crl_client_revoked.pem000066400000000000000000000013131505113246500266410ustar00rootroot00000000000000-----BEGIN X509 CRL----- MIIB5zCB0AIBATANBgkqhkiG9w0BAQsFADB1MRcwFQYDVQQDEw5SdWJ5IERyaXZl ciBDQTEQMA4GA1UECxMHRHJpdmVyczEQMA4GA1UEChMHTW9uZ29EQjEWMBQGA1UE BxMNTmV3IFlvcmsgQ2l0eTERMA8GA1UECBMITmV3IFlvcmsxCzAJBgNVBAYTAlVT Fw0xOTAyMTQyMjE4MjJaFw0zOTAyMTQyMjE4MjJaMBYwFAIDATMhFw0xOTAyMTQy MjE3NDBaoA8wDTALBgNVHRQEBAICEAIwDQYJKoZIhvcNAQELBQADggEBABXDRWzZ lkzXwgISQbBeg5+6hX9C3oPRXu83bVgs96tpR0H9Eb69cbDOAucPuZdwhwL4SOqO 1dNAjjpQt+yCvzkLKLVtXHe7ElFWDQkJKG22vUJ+fXGt9Yz+IhXW40wa+eZrHZ1n BWl6AXx62cXQnTqmhEP0pz8YaVQXUpzqaCAg3CKzINllpIg5C5c380Hu4bZgWeIC VtJtO+jYsYz2I9nd2Gqn+8l/1Py6a2Ju08RAW6EvQ1DRYjde/bDf06t5gdlAgfsC n+NpGFfQNahgaMnZ02d9RG/1oUt2sx/wihXfepG2WXZG31WGurHAsEuF8Xy+41hw s8Cbtws19Tn+YxU= -----END X509 CRL----- mongo-ruby-driver-2.21.3/spec/support/certificates/multi-ca.crt000066400000000000000000000205101505113246500245260ustar00rootroot00000000000000Certificate: Data: Version: 3 (0x2) Serial Number: 554512 (0x87610) Signature Algorithm: sha1WithRSAEncryption Issuer: CN = Python Driver CA, OU = Drivers, O = MongoDB, L = Palo Alto, ST = California, C = US Validity Not Before: May 23 20:13:08 2016 GMT Not After : May 23 20:13:08 2036 GMT Subject: CN = Python Driver CA, OU = Drivers, O = MongoDB, L = Palo Alto, ST = California, C = US Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public-Key: (2048 bit) Modulus: 00:8a:38:78:1f:f1:be:14:3b:aa:69:ac:33:2e:9f: 6b:cb:49:12:22:f6:49:0a:c4:49:99:8c:01:f3:f2: 46:5e:3c:4a:06:8c:02:5b:9c:da:bf:05:1d:59:8a: e4:d2:91:f9:41:1c:7d:26:f3:b0:b9:bd:25:a5:84: e5:db:a3:0b:7c:d5:68:3d:ed:73:2f:e6:a1:87:28: 83:be:86:c9:aa:74:0e:3f:1c:6e:f3:ab:39:e9:b3: 4a:f9:76:41:ec:60:50:1a:84:2b:aa:6d:b9:cb:23: 83:75:10:51:6c:37:5e:32:a3:93:de:94:86:77:b2: 24:15:d1:56:15:56:ec:f5:1a:36:1b:35:00:73:d4: 9d:7e:80:14:dc:22:48:0f:d5:29:5b:71:9d:fe:ec: 4f:22:7d:1a:fe:86:0a:3c:f6:d9:ce:f0:19:d1:05: d5:7b:88:f4:d6:78:ce:4e:7f:e2:ef:46:ed:09:ac: 8d:3a:cf:fc:ba:fc:01:10:34:96:19:d1:99:04:70: 3e:75:f4:da:20:6e:9c:46:73:c2:ae:11:d4:20:a5: b4:60:80:77:78:ec:b7:b1:db:b5:15:48:85:41:7c: 6b:30:ba:95:bd:a5:71:c9:f0:ba:0f:5b:9b:1c:0c: 2e:ac:af:6b:ca:0b:86:d9:aa:56:bf:83:83:72:4a: 0b:cd Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Basic Constraints: CA:TRUE X509v3 Subject Alternative Name: DNS:localhost, IP Address:127.0.0.1 Signature Algorithm: sha1WithRSAEncryption 5d:bb:a4:ff:e8:06:62:f2:ce:8a:c4:87:51:d7:e5:58:72:6b: 1b:57:74:4e:9b:e5:c5:17:6e:f2:e0:d4:3e:d1:b9:aa:aa:74: 6f:b2:fa:e6:35:08:ed:20:6e:e9:81:77:c1:ae:47:b3:ec:53: 3c:69:d9:e1:21:b9:d1:52:7a:d8:22:5c:91:a8:d5:be:b8:4d: 93:54:d2:d0:d9:d9:74:20:84:dc:f5:d6:4a:6f:c2:50:97:7e: 59:02:e6:6a:1e:62:39:d6:f8:58:e1:8c:89:7a:01:01:2c:d6: 79:ed:72:83:da:71:03:1a:99:6e:a7:65:53:fd:c9:93:bb:29: f7:ca:71:bf:3d:c5:a9:01:ac:11:61:9e:81:55:12:9a:77:46: 
5f:e0:eb:dd:86:dc:11:4a:e6:0f:00:19:76:78:cf:57:6d:8d: e0:79:5b:1d:78:59:86:7d:18:1f:d9:0d:d3:a9:b3:b1:93:43: fd:fc:70:e2:d0:a4:15:b2:c7:ea:3b:97:c8:fb:99:9d:18:fa: 4d:f3:7d:99:b8:0d:2c:ff:3b:91:59:d8:d5:03:a0:30:99:75: bb:04:d8:40:e2:b5:4b:33:20:f7:bd:cb:1e:ed:67:2b:3c:39: 56:96:3e:17:58:16:66:9b:4f:ff:76:44:fa:fa:60:23:a0:a8: c1:9d:cb:13 -----BEGIN CERTIFICATE----- MIIDkzCCAnugAwIBAgIDCHYQMA0GCSqGSIb3DQEBBQUAMHUxGTAXBgNVBAMTEFB5 dGhvbiBEcml2ZXIgQ0ExEDAOBgNVBAsTB0RyaXZlcnMxEDAOBgNVBAoTB01vbmdv REIxEjAQBgNVBAcTCVBhbG8gQWx0bzETMBEGA1UECBMKQ2FsaWZvcm5pYTELMAkG A1UEBhMCVVMwHhcNMTYwNTIzMjAxMzA4WhcNMzYwNTIzMjAxMzA4WjB1MRkwFwYD VQQDExBQeXRob24gRHJpdmVyIENBMRAwDgYDVQQLEwdEcml2ZXJzMRAwDgYDVQQK EwdNb25nb0RCMRIwEAYDVQQHEwlQYWxvIEFsdG8xEzARBgNVBAgTCkNhbGlmb3Ju aWExCzAJBgNVBAYTAlVTMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA ijh4H/G+FDuqaawzLp9ry0kSIvZJCsRJmYwB8/JGXjxKBowCW5zavwUdWYrk0pH5 QRx9JvOwub0lpYTl26MLfNVoPe1zL+ahhyiDvobJqnQOPxxu86s56bNK+XZB7GBQ GoQrqm25yyODdRBRbDdeMqOT3pSGd7IkFdFWFVbs9Ro2GzUAc9SdfoAU3CJID9Up W3Gd/uxPIn0a/oYKPPbZzvAZ0QXVe4j01njOTn/i70btCayNOs/8uvwBEDSWGdGZ BHA+dfTaIG6cRnPCrhHUIKW0YIB3eOy3sdu1FUiFQXxrMLqVvaVxyfC6D1ubHAwu rK9ryguG2apWv4ODckoLzQIDAQABoywwKjAMBgNVHRMEBTADAQH/MBoGA1UdEQQT MBGCCWxvY2FsaG9zdIcEfwAAATANBgkqhkiG9w0BAQUFAAOCAQEAXbuk/+gGYvLO isSHUdflWHJrG1d0TpvlxRdu8uDUPtG5qqp0b7L65jUI7SBu6YF3wa5Hs+xTPGnZ 4SG50VJ62CJckajVvrhNk1TS0NnZdCCE3PXWSm/CUJd+WQLmah5iOdb4WOGMiXoB ASzWee1yg9pxAxqZbqdlU/3Jk7sp98pxvz3FqQGsEWGegVUSmndGX+Dr3YbcEUrm DwAZdnjPV22N4HlbHXhZhn0YH9kN06mzsZND/fxw4tCkFbLH6juXyPuZnRj6TfN9 mbgNLP87kVnY1QOgMJl1uwTYQOK1SzMg973LHu1nKzw5VpY+F1gWZptP/3ZE+vpg I6CowZ3LEw== -----END CERTIFICATE----- Certificate: Data: Version: 3 (0x2) Serial Number: 210471 (0x33627) Signature Algorithm: sha256WithRSAEncryption Issuer: CN = Ruby Driver CA, OU = Drivers, O = MongoDB, L = New York City, ST = New York, C = US Validity Not Before: Feb 14 20:57:50 2019 GMT Not After : Feb 14 20:57:50 2039 GMT Subject: CN = Ruby Driver CA, OU = Drivers, O = MongoDB, L = New York City, ST = New York, C = US Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public-Key: (2048 bit) Modulus: 00:96:71:17:e8:aa:87:dc:16:8e:cb:90:4c:2c:61: 11:d1:1d:9d:b8:04:75:18:8a:f1:41:37:2e:06:e6: cb:67:2c:16:f3:24:f4:53:02:33:06:1c:6e:e7:7e: 83:14:44:a4:43:b6:5d:f1:4d:68:e7:8f:fe:4c:f7: ca:01:e5:d2:c1:2b:a5:93:2c:cd:12:58:c3:e1:6f: b2:31:c6:05:44:5b:99:61:99:f5:06:d0:a3:ad:de: 8f:a2:73:a1:46:94:30:e7:f7:4b:5d:fb:34:76:7e: 87:a5:26:89:0e:f9:8a:e7:12:5b:ff:11:71:e4:dd: 87:2d:e0:a9:26:a3:1b:7d:c4:00:b8:11:3a:05:f7: 00:f6:3b:80:7d:1b:0c:a3:38:42:0b:a2:17:e4:4a: c8:00:09:c8:a0:ad:d0:73:12:66:60:3d:ce:41:07: 56:11:e5:06:9a:af:9b:ec:29:65:b6:56:b1:2a:b3: b2:2d:10:c4:75:05:eb:1d:cb:c4:b4:2d:8f:e9:08: 3a:6d:67:e3:0a:81:6a:d5:97:9d:a0:08:f2:70:1c: 9d:9e:4b:e3:9b:42:4d:02:91:93:b8:bf:e7:e9:69: 7e:ef:ab:fc:a6:6a:69:35:37:ee:d9:b7:6f:c5:12: 38:93:4f:09:ea:84:f4:21:df:5a:50:e0:89:c8:da: 94:e1 Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Basic Constraints: CA:TRUE X509v3 Subject Alternative Name: DNS:localhost, IP Address:127.0.0.1 Signature Algorithm: sha256WithRSAEncryption 40:d9:19:82:d2:54:f5:eb:d5:f9:e1:85:b1:38:eb:d3:60:c2: be:b7:7c:0a:59:90:0f:00:30:09:c9:7e:e1:83:7d:ce:d2:d6: 28:e8:21:3e:4e:ea:ee:47:eb:89:c0:e4:13:72:51:d2:3c:48: 06:06:86:51:55:da:24:0f:86:fa:1f:27:d6:98:58:ef:13:3f: 8f:2b:57:05:ad:d1:40:99:8f:35:2d:f7:13:9e:19:a5:1a:23: 5e:29:28:b8:cb:e4:7c:7a:2f:81:7f:1f:72:2f:2c:d2:a5:cc: f1:fe:83:45:30:8d:23:d0:42:a5:f0:9d:e9:02:b5:09:ff:05: 72:af:00:ea:8b:38:41:88:3a:3c:75:6e:8b:5e:f3:b0:30:d3: 
fb:ff:6f:4e:68:62:2a:30:6b:3e:06:3f:a2:a6:02:91:f1:f5: 5d:31:e7:f4:f0:07:9d:a6:1f:04:fa:23:7f:1e:d3:d3:30:d1: 3d:55:46:d8:2f:da:4b:fc:4d:d2:93:0a:51:bf:78:e4:07:3f: 15:77:7a:2b:20:81:54:9a:9f:21:09:86:47:81:85:dc:e4:50: 37:34:18:b0:43:91:2a:a2:9c:97:fe:a2:1a:02:91:6d:71:b3: 65:e1:c7:00:17:d5:26:d9:69:17:3b:ec:e1:5f:77:e8:19:4b: a3:8c:2a:e0 -----BEGIN CERTIFICATE----- MIIDkzCCAnugAwIBAgIDAzYnMA0GCSqGSIb3DQEBCwUAMHUxFzAVBgNVBAMTDlJ1 YnkgRHJpdmVyIENBMRAwDgYDVQQLEwdEcml2ZXJzMRAwDgYDVQQKEwdNb25nb0RC MRYwFAYDVQQHEw1OZXcgWW9yayBDaXR5MREwDwYDVQQIEwhOZXcgWW9yazELMAkG A1UEBhMCVVMwHhcNMTkwMjE0MjA1NzUwWhcNMzkwMjE0MjA1NzUwWjB1MRcwFQYD VQQDEw5SdWJ5IERyaXZlciBDQTEQMA4GA1UECxMHRHJpdmVyczEQMA4GA1UEChMH TW9uZ29EQjEWMBQGA1UEBxMNTmV3IFlvcmsgQ2l0eTERMA8GA1UECBMITmV3IFlv cmsxCzAJBgNVBAYTAlVTMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA lnEX6KqH3BaOy5BMLGER0R2duAR1GIrxQTcuBubLZywW8yT0UwIzBhxu536DFESk Q7Zd8U1o54/+TPfKAeXSwSulkyzNEljD4W+yMcYFRFuZYZn1BtCjrd6PonOhRpQw 5/dLXfs0dn6HpSaJDvmK5xJb/xFx5N2HLeCpJqMbfcQAuBE6BfcA9juAfRsMozhC C6IX5ErIAAnIoK3QcxJmYD3OQQdWEeUGmq+b7ClltlaxKrOyLRDEdQXrHcvEtC2P 6Qg6bWfjCoFq1ZedoAjycBydnkvjm0JNApGTuL/n6Wl+76v8pmppNTfu2bdvxRI4 k08J6oT0Id9aUOCJyNqU4QIDAQABoywwKjAMBgNVHRMEBTADAQH/MBoGA1UdEQQT MBGCCWxvY2FsaG9zdIcEfwAAATANBgkqhkiG9w0BAQsFAAOCAQEAQNkZgtJU9evV +eGFsTjr02DCvrd8ClmQDwAwCcl+4YN9ztLWKOghPk7q7kfricDkE3JR0jxIBgaG UVXaJA+G+h8n1phY7xM/jytXBa3RQJmPNS33E54ZpRojXikouMvkfHovgX8fci8s 0qXM8f6DRTCNI9BCpfCd6QK1Cf8Fcq8A6os4QYg6PHVui17zsDDT+/9vTmhiKjBr PgY/oqYCkfH1XTHn9PAHnaYfBPojfx7T0zDRPVVG2C/aS/xN0pMKUb945Ac/FXd6 KyCBVJqfIQmGR4GF3ORQNzQYsEORKqKcl/6iGgKRbXGzZeHHABfVJtlpFzvs4V93 6BlLo4wq4A== -----END CERTIFICATE----- mongo-ruby-driver-2.21.3/spec/support/certificates/python-ca.crt000066400000000000000000000102421505113246500247160ustar00rootroot00000000000000Certificate: Data: Version: 3 (0x2) Serial Number: 554512 (0x87610) Signature Algorithm: sha1WithRSAEncryption Issuer: CN = Python Driver CA, OU = Drivers, O = MongoDB, L = Palo Alto, ST = California, C = US Validity Not Before: May 23 20:13:08 2016 GMT Not After : May 23 20:13:08 2036 GMT Subject: CN = Python Driver CA, OU = Drivers, O = MongoDB, L = Palo Alto, ST = California, C = US Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public-Key: (2048 bit) Modulus: 00:8a:38:78:1f:f1:be:14:3b:aa:69:ac:33:2e:9f: 6b:cb:49:12:22:f6:49:0a:c4:49:99:8c:01:f3:f2: 46:5e:3c:4a:06:8c:02:5b:9c:da:bf:05:1d:59:8a: e4:d2:91:f9:41:1c:7d:26:f3:b0:b9:bd:25:a5:84: e5:db:a3:0b:7c:d5:68:3d:ed:73:2f:e6:a1:87:28: 83:be:86:c9:aa:74:0e:3f:1c:6e:f3:ab:39:e9:b3: 4a:f9:76:41:ec:60:50:1a:84:2b:aa:6d:b9:cb:23: 83:75:10:51:6c:37:5e:32:a3:93:de:94:86:77:b2: 24:15:d1:56:15:56:ec:f5:1a:36:1b:35:00:73:d4: 9d:7e:80:14:dc:22:48:0f:d5:29:5b:71:9d:fe:ec: 4f:22:7d:1a:fe:86:0a:3c:f6:d9:ce:f0:19:d1:05: d5:7b:88:f4:d6:78:ce:4e:7f:e2:ef:46:ed:09:ac: 8d:3a:cf:fc:ba:fc:01:10:34:96:19:d1:99:04:70: 3e:75:f4:da:20:6e:9c:46:73:c2:ae:11:d4:20:a5: b4:60:80:77:78:ec:b7:b1:db:b5:15:48:85:41:7c: 6b:30:ba:95:bd:a5:71:c9:f0:ba:0f:5b:9b:1c:0c: 2e:ac:af:6b:ca:0b:86:d9:aa:56:bf:83:83:72:4a: 0b:cd Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Basic Constraints: CA:TRUE X509v3 Subject Alternative Name: DNS:localhost, IP Address:127.0.0.1 Signature Algorithm: sha1WithRSAEncryption 5d:bb:a4:ff:e8:06:62:f2:ce:8a:c4:87:51:d7:e5:58:72:6b: 1b:57:74:4e:9b:e5:c5:17:6e:f2:e0:d4:3e:d1:b9:aa:aa:74: 6f:b2:fa:e6:35:08:ed:20:6e:e9:81:77:c1:ae:47:b3:ec:53: 3c:69:d9:e1:21:b9:d1:52:7a:d8:22:5c:91:a8:d5:be:b8:4d: 93:54:d2:d0:d9:d9:74:20:84:dc:f5:d6:4a:6f:c2:50:97:7e: 
59:02:e6:6a:1e:62:39:d6:f8:58:e1:8c:89:7a:01:01:2c:d6: 79:ed:72:83:da:71:03:1a:99:6e:a7:65:53:fd:c9:93:bb:29: f7:ca:71:bf:3d:c5:a9:01:ac:11:61:9e:81:55:12:9a:77:46: 5f:e0:eb:dd:86:dc:11:4a:e6:0f:00:19:76:78:cf:57:6d:8d: e0:79:5b:1d:78:59:86:7d:18:1f:d9:0d:d3:a9:b3:b1:93:43: fd:fc:70:e2:d0:a4:15:b2:c7:ea:3b:97:c8:fb:99:9d:18:fa: 4d:f3:7d:99:b8:0d:2c:ff:3b:91:59:d8:d5:03:a0:30:99:75: bb:04:d8:40:e2:b5:4b:33:20:f7:bd:cb:1e:ed:67:2b:3c:39: 56:96:3e:17:58:16:66:9b:4f:ff:76:44:fa:fa:60:23:a0:a8: c1:9d:cb:13 -----BEGIN CERTIFICATE----- MIIDkzCCAnugAwIBAgIDCHYQMA0GCSqGSIb3DQEBBQUAMHUxGTAXBgNVBAMTEFB5 dGhvbiBEcml2ZXIgQ0ExEDAOBgNVBAsTB0RyaXZlcnMxEDAOBgNVBAoTB01vbmdv REIxEjAQBgNVBAcTCVBhbG8gQWx0bzETMBEGA1UECBMKQ2FsaWZvcm5pYTELMAkG A1UEBhMCVVMwHhcNMTYwNTIzMjAxMzA4WhcNMzYwNTIzMjAxMzA4WjB1MRkwFwYD VQQDExBQeXRob24gRHJpdmVyIENBMRAwDgYDVQQLEwdEcml2ZXJzMRAwDgYDVQQK EwdNb25nb0RCMRIwEAYDVQQHEwlQYWxvIEFsdG8xEzARBgNVBAgTCkNhbGlmb3Ju aWExCzAJBgNVBAYTAlVTMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA ijh4H/G+FDuqaawzLp9ry0kSIvZJCsRJmYwB8/JGXjxKBowCW5zavwUdWYrk0pH5 QRx9JvOwub0lpYTl26MLfNVoPe1zL+ahhyiDvobJqnQOPxxu86s56bNK+XZB7GBQ GoQrqm25yyODdRBRbDdeMqOT3pSGd7IkFdFWFVbs9Ro2GzUAc9SdfoAU3CJID9Up W3Gd/uxPIn0a/oYKPPbZzvAZ0QXVe4j01njOTn/i70btCayNOs/8uvwBEDSWGdGZ BHA+dfTaIG6cRnPCrhHUIKW0YIB3eOy3sdu1FUiFQXxrMLqVvaVxyfC6D1ubHAwu rK9ryguG2apWv4ODckoLzQIDAQABoywwKjAMBgNVHRMEBTADAQH/MBoGA1UdEQQT MBGCCWxvY2FsaG9zdIcEfwAAATANBgkqhkiG9w0BAQUFAAOCAQEAXbuk/+gGYvLO isSHUdflWHJrG1d0TpvlxRdu8uDUPtG5qqp0b7L65jUI7SBu6YF3wa5Hs+xTPGnZ 4SG50VJ62CJckajVvrhNk1TS0NnZdCCE3PXWSm/CUJd+WQLmah5iOdb4WOGMiXoB ASzWee1yg9pxAxqZbqdlU/3Jk7sp98pxvz3FqQGsEWGegVUSmndGX+Dr3YbcEUrm DwAZdnjPV22N4HlbHXhZhn0YH9kN06mzsZND/fxw4tCkFbLH6juXyPuZnRj6TfN9 mbgNLP87kVnY1QOgMJl1uwTYQOK1SzMg973LHu1nKzw5VpY+F1gWZptP/3ZE+vpg I6CowZ3LEw== -----END CERTIFICATE----- mongo-ruby-driver-2.21.3/spec/support/certificates/retrieve-atlas-cert000077500000000000000000000016541505113246500261210ustar00rootroot00000000000000#!/usr/bin/env ruby # frozen_string_literal: true # rubocop:todo all require 'tmpdir' host = 'ac-ulwcmzm-shard-00-00.g6fyiaq.mongodb-dev.net' output = `openssl s_client -showcerts -servername #{host} -connect #{host}:27017 e # While waiting for secondaries to catch up before stepping down, this node decided to step down for other reasons (189) if e.code == 189 # success else raise end end # Attempts to elect the server at the specified address as the new primary # by asking it to step up. # # @param [ Mongo::Address ] address def step_up(address) client = direct_client(address) start = Mongo::Utils.monotonic_time loop do begin client.database.command(replSetStepUp: 1) break rescue Mongo::Error::OperationFailure::Family => e # Election failed. (125) if e.code == 125 # Possible reason is the node we are trying to elect has deny-listed # itself. This is where {replSetFreeze: 0} should make it eligible # for election again but this seems to not always work. 
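# For reference, the freeze in question is cleared by issuing a command
# of the form below directly against the node (this is what
# #unfreeze_server, defined later in this file, does):
#
#   direct_client(address).use('admin').database.command(replSetFreeze: 0)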
else raise end if Mongo::Utils.monotonic_time > start + 10 raise e end end reset_server_states end # The recommended guidance for changing a primary is: # # - turn off election handoff # - pick a server to be the new primary # - set the target's priority to 10, existing primary's priority to 1, # other servers' priorities to 0 # - call step down on the existing primary # - call step up on the target in a loop until it becomes the primary def change_primary start = Mongo::Utils.monotonic_time existing_primary = admin_client.cluster.next_primary existing_primary_address = existing_primary.address target = admin_client.cluster.servers_list.detect do |server| !server.arbiter? && server.address != existing_primary_address end cfg = get_rs_config cfg['members'].each do |member| member['priority'] = case member['host'] when existing_primary_address.to_s 1 when target.address.to_s 10 else 0 end end set_rs_config(cfg) if unfreeze_server(target.address) # Target server self-elected as primary, no further action is needed. return end step_down persistently_step_up(target.address) new_primary = admin_client.cluster.next_primary puts "#{Time.now} [CT] Primary changed to #{new_primary.address}. Time to change primaries: #{Mongo::Utils.monotonic_time - start}" end def persistently_step_up(address) start = Mongo::Utils.monotonic_time loop do puts "#{Time.now} [CT] Asking #{address} to step up" step_up(address) if admin_client.cluster.next_primary.address == address break end if Mongo::Utils.monotonic_time - start > 10 raise "Unable to get #{address} instated as primary after 10 seconds" end end end # Attempts to elect the server at the specified address as the new primary # by manipulating priorities. # # This method requires that there is an active primary in the replica set at # the time of the call (presumably a different one). # # @param [ Mongo::Address ] address def force_primary(address) current_primary = admin_client.cluster.next_primary if current_primary.address == address raise "Attempting to set primary to #{address} but it is already the primary" end encourage_primary(address) if unfreeze_server(address) # Target server self-elected as primary, no further action is needed. return end step_down persistently_step_up(address) admin_client.cluster.next_primary.unknown! new_primary = admin_client.cluster.next_primary if new_primary.address != address raise "Elected primary #{new_primary.address} is not what we wanted (#{address})" end end # Adjusts replica set configuration so that the next election is likely # to result in the server at the specified address becoming a primary. # Address should be a Mongo::Address object. # # This method requires that there is an active primary in the replica set at # the time of the call. # # @param [ Mongo::Address ] address def encourage_primary(address) existing_primary = admin_client.cluster.next_primary cfg = get_rs_config found = false cfg['members'].each do |member| if member['host'] == address.to_s member['priority'] = 10 found = true elsif member['host'] == existing_primary.address.to_s member['priority'] = 1 else member['priority'] = 0 end end unless found raise "No RS member for #{address}" end set_rs_config(cfg) end # Allows the server at the specified address to run for elections and # potentially become a primary. Use after issuing a step down command # to clear the prohibition that prevents the stepped-down server from # becoming a primary. # # Returns true if the server at address became a primary, such that # a step up command is not necessary.
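# A minimal usage sketch (hypothetical address), mirroring the pattern
# used by #change_primary and #force_primary above:
#
#   address = Mongo::Address.new('localhost:27018')
#   unless unfreeze_server(address)
#     step_down
#     persistently_step_up(address)
#   end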
def unfreeze_server(address) begin direct_client(address).use('admin').database.command(replSetFreeze: 0) rescue Mongo::Error::OperationFailure::Family => e # Mongo::Error::OperationFailure: cannot freeze node when primary or running for election. state: Primary (95) if e.code == 95 # The server we want to become primary may have already become the # primary by holding a spontaneous election and winning due to the # priorities we have set. admin_client.cluster.servers_list.each do |server| server.unknown! end if admin_client.cluster.next_primary.address == address puts "#{Time.now} [CT] Primary self-elected to #{address}" return true end end raise end false end def unfreeze_all admin_client.cluster.servers_list.each do |server| next if server.arbiter? client = direct_client(server.address) # Primary refuses to be unfrozen with this message: # cannot freeze node when primary or running for election. state: Primary (95) if server != admin_client.cluster.next_primary client.use('admin').database.command(replSetFreeze: 0) end end end # Gets the current replica set configuration. def get_rs_config result = admin_client.database.command(replSetGetConfig: 1) doc = result.reply.documents.first if doc['ok'] != 1 raise 'Failed to get RS config' end doc['config'] end # Reconfigures the replica set with the specified configuration. # Automatically increases RS version in the process. def set_rs_config(config) config = config.dup config['version'] += 1 cmd = {replSetReconfig: config} if ClusterConfig.instance.fcv_ish >= '4.4' # Workaround for https://jira.mongodb.org/browse/SERVER-46894 cmd[:force] = true end result = admin_client.database.command(cmd) doc = result.reply.documents.first if doc['ok'] != 1 raise 'Failed to reconfigure RS' end end def admin_client # Since we are triggering elections, we need to have a higher server # selection timeout applied. The default timeout for tests assumes a # stable deployment. ( @admin_client ||= ClientRegistry.instance.global_client('root_authorized'). with(server_selection_timeout: 15).use(:admin) ).tap do |client| ClientRegistry.reconnect_client_if_perished(client) end end def direct_client(address, options = {}) connect = if SpecConfig.instance.connect_options[:connect] == :load_balanced :load_balanced else :direct end @direct_clients ||= {} cache_key = {address: address}.update(options) ( @direct_clients[cache_key] ||= ClientRegistry.instance.new_local_client( [address.to_s], SpecConfig.instance.test_options.merge( SpecConfig.instance.auth_options).merge( connect: connect, server_selection_timeout: 10).merge(options)) ).tap do |client| ClientRegistry.reconnect_client_if_perished(client) end end def close_clients if @admin_client @admin_client.close @admin_client = nil end if @direct_clients @direct_clients.each do |cache_key, client| client.close end @direct_clients = nil end end def each_server(&block) admin_client.cluster.servers_list.each(&block) end def direct_client_for_each_data_bearing_server(&block) each_server do |server| next if server.arbiter? yield direct_client(server.address) end end private def reset_server_states each_server do |server| server.unknown! 
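# Marking every server unknown forces the driver to re-run discovery,
# so subsequent operations observe the post-election topology rather
# than cached state.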
end end end mongo-ruby-driver-2.21.3/spec/support/common_shortcuts.rb000066400000000000000000000307101505113246500235720ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all module CommonShortcuts module ClassMethods # Declares a topology double, which is configured to accept summary # calls as those are used in SDAM event creation def declare_topology_double let(:topology) do double('topology').tap do |topology| allow(topology).to receive(:summary) end end end # For tests which require clients to connect, clean slate asks all # existing clients to be closed prior to the test execution. # Note that clean_slate closes all clients for each test in the scope. def clean_slate before do ClientRegistry.instance.close_all_clients BackgroundThreadRegistry.instance.verify_empty! end end # Similar to clean slate but closes clients once before all tests in # the scope. Use when the tests do not create new clients but do not # want any background output from previously existing clients. def clean_slate_for_all before(:all) do ClientRegistry.instance.close_all_clients BackgroundThreadRegistry.instance.verify_empty! end end # If only the lite spec helper was loaded, this method does nothing. # If the full spec helper was loaded, this method performs the same function # as clean_slate_for_all. def clean_slate_for_all_if_possible before(:all) do if defined?(ClusterTools) ClientRegistry.instance.close_all_clients BackgroundThreadRegistry.instance.verify_empty! end end end # For some reason, there are tests which fail on evergreen, either # intermittently or reliably, yet always succeed locally. # Debugging of tests in evergreen is difficult/impossible, # thus this workaround. def clean_slate_on_evergreen before(:all) do if SpecConfig.instance.ci? ClientRegistry.instance.close_all_clients end end end # Applies environment variable overrides in +env+ to the global environment # (+ENV+) for the duration of each test. # # If a key's value in +env+ is nil, this key is removed from +ENV+. # # When the test finishes, the values in original +ENV+ that were overridden # by +env+ are restored. If a key was not in original +ENV+ and was # overridden by +env+, this key is removed from +ENV+ after the test. # # If the environment variables are not known at test definition time # but are determined at test execution time, pass a block instead of # the +env+ parameter and return the desired environment variables as # a Hash from the block. def local_env(env = nil, &block) around do |example| env ||= block.call # This duplicates ENV. # Note that ENV.dup produces an Object which does not behave like # the original ENV, and hence is not usable. saved_env = ENV.to_h env.each do |k, v| if v.nil? ENV.delete(k) else ENV[k] = v end end begin example.run ensure env.each do |k, v| if saved_env.key?(k) ENV[k] = saved_env[k] else ENV.delete(k) end end end end end def clear_ocsp_cache before do Mongo.clear_ocsp_cache end end def with_ocsp_mock(ca_file_path, responder_cert_path, responder_key_path, fault: nil, port: 8100 ) clear_ocsp_cache around do |example| args = [ SpecConfig.instance.ocsp_files_dir.join('ocsp_mock.py').to_s, '--ca_file', ca_file_path.to_s, '--ocsp_responder_cert', responder_cert_path.to_s, '--ocsp_responder_key', responder_key_path.to_s, '-p', port.to_s, ] if SpecConfig.instance.client_debug? # Use when debugging - tests run faster without -v. args << '-v' end if fault args += ['--fault', fault] end process = ChildProcess.new(*args) process.io.inherit!
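# The spawn below is retried once after a short delay, since starting
# the mock responder can fail transiently, e.g. while the port from a
# previous run is still being released.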
retried = false begin process.start rescue if retried raise else sleep 1 retried = true retry end end begin sleep 0.4 example.run ensure if process.exited? raise "Spawned process exited before we stopped it" end process.stop process.wait end end end def with_openssl_debug around do |example| v = OpenSSL.debug OpenSSL.debug = true begin example.run ensure OpenSSL.debug = v end end end end module InstanceMethods def kill_all_server_sessions begin ClientRegistry.instance.global_client('root_authorized').command(killAllSessions: []) # killAllSessions also kills the implicit session which the driver uses # to send this command, as a result it always fails rescue Mongo::Error::OperationFailure::Family => e # "operation was interrupted" unless e.code == 11601 raise end end end def wait_for_all_servers(cluster) # Cluster waits for initial round of sdam until the primary # is discovered, which means by the time a connection is obtained # here some of the servers in the topology may still be unknown. # This messes with event expectations below. Therefore, wait for # all servers in the topology to be checked. # # This wait here assumes all addresses specified for the test # suite are for working servers of the cluster; if this is not # the case, this test will fail due to exceeding the general # test timeout eventually. while cluster.servers_list.any? { |server| server.unknown? } warn "Waiting for unknown servers in #{cluster.summary}" sleep 0.25 end end def make_server(mode, options = {}) tags = options[:tags] || {} average_round_trip_time = if mode == :unknown nil else options[:average_round_trip_time] || 0 end if mode == :unknown config = {} else config = { 'isWritablePrimary' => mode == :primary, 'secondary' => mode == :secondary, 'arbiterOnly' => mode == :arbiter, 'isreplicaset' => mode == :ghost, 'hidden' => mode == :other, 'msg' => mode == :mongos ? 'isdbgrid' : nil, 'tags' => tags, 'ok' => 1, 'minWireVersion' => 2, 'maxWireVersion' => 8, } if [:primary, :secondary, :arbiter, :other].include?(mode) config['setName'] = 'mongodb_set' end end listeners = Mongo::Event::Listeners.new monitoring = Mongo::Monitoring.new address = options[:address] cluster = double('cluster') allow(cluster).to receive(:topology).and_return(topology) allow(cluster).to receive(:app_metadata) allow(cluster).to receive(:options).and_return({}) allow(cluster).to receive(:run_sdam_flow) allow(cluster).to receive(:monitor_app_metadata) allow(cluster).to receive(:push_monitor_app_metadata) allow(cluster).to receive(:heartbeat_interval).and_return(10) server = Mongo::Server.new(address, cluster, monitoring, listeners, monitoring_io: false) # Since the server references a double for the cluster, the server # must be closed in the scope of the example. 
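# register_server (defined below) attaches a finalizer via
# LocalResourceRegistry so that the server is closed when the example's
# local resources are cleaned up.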
register_server(server) description = Mongo::Server::Description.new( address, config, average_round_trip_time: average_round_trip_time, ) server.tap do |s| allow(s).to receive(:description).and_return(description) end end def make_protocol_reply(payload) Mongo::Protocol::Reply.new.tap do |reply| reply.instance_variable_set('@flags', []) reply.instance_variable_set('@documents', [payload]) end end def make_not_master_reply make_protocol_reply( 'ok' => 0, 'code' => 10107, 'errmsg' => 'not master' ) end def make_node_recovering_reply make_protocol_reply( 'ok' => 0, 'code' => 11602, 'errmsg' => 'InterruptedDueToStepDown' ) end def make_node_shutting_down_reply make_protocol_reply( 'ok' => 0, 'code' => 91, 'errmsg' => 'shutdown in progress' ) end def register_cluster(cluster) finalizer = lambda do |cluster| cluster.close end LocalResourceRegistry.instance.register(cluster, finalizer) end def register_server(server) finalizer = lambda do |server| if server.connected? server.close end end LocalResourceRegistry.instance.register(server, finalizer) end def register_background_thread_object(bgt_object) finalizer = lambda do |bgt_object| bgt_object.stop! end LocalResourceRegistry.instance.register(bgt_object, finalizer) end def register_pool(pool) finalizer = lambda do |pool| if !pool.closed? pool.close(wait: true) end end LocalResourceRegistry.instance.register(pool, finalizer) end # Stop monitoring threads on the specified clients, after ensuring # each client has a writable server. Used for tests which assert on # global side effects like log messages being generated, to prevent # background threads from interfering with assertions. def stop_monitoring(*clients) clients.each do |client| client.cluster.next_primary client.cluster.close # We have tests that stop monitoring to reduce the noise happening in # the background. These tests perform operations which require the pools # to function. See also RUBY-3102. client.cluster.servers_list.each do |server| if pool = server.pool pool.instance_variable_set('@closed', false) # Stop the populator so that we don't have leftover threads. pool.instance_variable_get('@populator').stop! end end end end DNS_INTERFACES = [ [:udp, "0.0.0.0", 5300], [:tcp, "0.0.0.0", 5300], ] # Starts the DNS server and returns it; should be run from within an # Async block. Prefer #mock_dns instead, which does the setup for you. def start_dns_server(config) RubyDNS::run_server(DNS_INTERFACES) do config.each do |(query, type, *answers)| resource_cls = Resolv::DNS::Resource::IN.const_get(type.to_s.upcase) resources = answers.map do |answer| resource_cls.new(*answer) end match(query, resource_cls) do |req| req.add(resources) end end end end # Starts and runs a DNS server, then yields to the attached block. def mock_dns(config) # only require rubydns when we need it; it's MRI-only. require 'rubydns' Async do |task| server = start_dns_server(config) yield ensure server.stop end end # Wait for snapshot reads to become available to prevent this error: # [246:SnapshotUnavailable]: Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1646666892, 4).
Collection minimum is Timestamp(1646666892, 5) (on localhost:27017, modern retry, attempt 1) def wait_for_snapshot(db: nil, collection: nil, client: nil) client ||= authorized_client client = client.use(db) if db collection ||= 'any' start_time = Mongo::Utils.monotonic_time begin client.start_session(snapshot: true) do |session| client[collection].aggregate([{'$match': {any: true}}], session: session).to_a end rescue Mongo::Error::OperationFailure::Family => e # Retry them as the server demands... if e.code == 246 # SnapshotUnavailable if Mongo::Utils.monotonic_time < start_time + 10 retry end end raise end end # Make the server usable for operations after it was marked closed. # Used for tests that e.g. mock network operations to avoid interference # from server monitoring. def reset_pool(server) if pool = server.pool_internal pool.close end server.remove_instance_variable('@pool') server.pool.ready end end end mongo-ruby-driver-2.21.3/spec/support/constraints.rb000066400000000000000000000022571505113246500225400ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all module Constraints # Some tests hardcode the TLS certificates shipped with the driver's # test suite, and will fail when using TLS connections that use other # certificates. def require_local_tls require_tls before(:all) do # TODO This isn't actually the foolproof check if ENV['OCSP_ALGORITHM'] skip 'Driver TLS certificate required, OCSP certificates are not acceptable' end end end def minimum_mri_version(version) require_mri before(:all) do if RUBY_VERSION < version skip "Ruby #{version} or greater is required" end end end def forbid_x509_auth before(:all) do skip 'X.509 auth not allowed' if SpecConfig.instance.x509_auth? end end def max_bson_version(version) required_version = version.split('.').map(&:to_i) actual_version = bson_version(required_version.length) before(:all) do if (actual_version <=> required_version) > 0 skip "bson-ruby version #{version} or lower is required" end end end def bson_version(precision) BSON::VERSION.split('.')[0...precision].map(&:to_i) end end mongo-ruby-driver-2.21.3/spec/support/crypt.rb000066400000000000000000000276151505113246500213370ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all # Copyright (C) 2009-2020 MongoDB Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
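# A usage sketch (hypothetical spec) showing how the shared contexts
# defined below are typically combined in FLE tests:
#
#   describe 'client-side field level encryption' do
#     include_context 'define shared FLE helpers'
#     include_context 'with local kms_providers'
#
#     it 'encrypts a value' do
#       # ... exercise encryption using key_vault_namespace,
#       # kms_providers, schema_map and the other lets defined
#       # by the contexts above ...
#     end
#   end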
module Crypt LOCAL_MASTER_KEY_B64 = 'Mng0NCt4ZHVUYUJCa1kxNkVyNUR1QURhZ2h2UzR2d2RrZzh0cFBwM3R6NmdWMDFBMUN3' + 'YkQ5aXRRMkhGRGdQV09wOGVNYUMxT2k3NjZKelhaQmRCZGJkTXVyZG9uSjFk'.freeze LOCAL_MASTER_KEY = Base64.decode64(LOCAL_MASTER_KEY_B64) # For all FLE-related tests shared_context 'define shared FLE helpers' do # 96-byte binary string, base64-encoded local master key let(:local_master_key_b64) do Crypt::LOCAL_MASTER_KEY_B64 end let(:local_master_key) { Crypt::LOCAL_MASTER_KEY } # Data key id as a binary string let(:key_id) { data_key['_id'] } # Data key alternate name let(:key_alt_name) { 'ssn_encryption_key' } # Deterministic encryption algorithm let(:algorithm) { 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic' } # Local KMS provider options let(:local_kms_providers) { { local: { key: local_master_key } } } # AWS KMS provider options let(:aws_kms_providers) do { aws: { access_key_id: SpecConfig.instance.fle_aws_key, secret_access_key: SpecConfig.instance.fle_aws_secret, } } end # Azure KMS provider options let(:azure_kms_providers) do { azure: { tenant_id: SpecConfig.instance.fle_azure_tenant_id, client_id: SpecConfig.instance.fle_azure_client_id, client_secret: SpecConfig.instance.fle_azure_client_secret, } } end let(:gcp_kms_providers) do { gcp: { email: SpecConfig.instance.fle_gcp_email, private_key: SpecConfig.instance.fle_gcp_private_key, } } end let(:kmip_kms_providers) do { kmip: { endpoint: SpecConfig.instance.fle_kmip_endpoint, } } end # Key vault database and collection names let(:key_vault_db) { 'keyvault' } let(:key_vault_coll) { 'datakeys' } let(:key_vault_namespace) { "#{key_vault_db}.#{key_vault_coll}" } # Example value to encrypt let(:ssn) { '123-456-7890' } let(:key_vault_collection) do authorized_client.with( database: key_vault_db, write_concern: { w: :majority } )[key_vault_coll] end let(:extra_options) do { mongocryptd_spawn_args: ["--port=#{SpecConfig.instance.mongocryptd_port}"], mongocryptd_uri: "mongodb://localhost:#{SpecConfig.instance.mongocryptd_port}", } end let(:kms_tls_options) do {} end let(:default_kms_tls_options_for_provider) do { ssl_ca_cert: SpecConfig.instance.fle_kmip_tls_ca_file, ssl_cert: SpecConfig.instance.fle_kmip_tls_certificate_key_file, ssl_key: SpecConfig.instance.fle_kmip_tls_certificate_key_file, } end let(:encrypted_fields) do BSON::ExtJSON.parse(File.read('spec/support/crypt/encrypted_fields/encryptedFields.json')) end %w[DecimalNoPrecision DecimalPrecision Date DoubleNoPrecision DoublePrecision Int Long].each do |type| let("range_encrypted_fields_#{type.downcase}".to_sym) do BSON::ExtJSON.parse( File.read("spec/support/crypt/encrypted_fields/range-encryptedFields-#{type}.json"), mode: :bson ) end end let(:key1_document) do BSON::ExtJSON.parse(File.read('spec/support/crypt/keys/key1-document.json')) end end # For tests that require local KMS to be configured shared_context 'with local kms_providers' do let(:kms_provider_name) { 'local' } let(:kms_providers) { local_kms_providers } let(:data_key) do BSON::ExtJSON.parse(File.read('spec/support/crypt/data_keys/key_document_local.json')) end let(:schema_map_file_path) do 'spec/support/crypt/schema_maps/schema_map_local.json' end let(:schema_map) do BSON::ExtJSON.parse(File.read(schema_map_file_path)) end let(:data_key_options) { {} } let(:encrypted_ssn) do "ASzggCwAAAAAAAAAAAAAAAAC/OvUvE0N5eZ5vhjcILtGKZlxovGhYJduEfsR\n7NiH68Ft" + "tXzHYqT0DKgvn3QjjTbS/4SPfBEYrMIS10Uzf9R1Ky4D5a19mYCp\nmv76Z8Rzdmo=\n" end end shared_context 'with local kms_providers and key alt names' do include_context 'with local 
  # For tests that require local KMS to be configured
  shared_context 'with local kms_providers' do
    let(:kms_provider_name) { 'local' }
    let(:kms_providers) { local_kms_providers }

    let(:data_key) do
      BSON::ExtJSON.parse(File.read('spec/support/crypt/data_keys/key_document_local.json'))
    end

    let(:schema_map_file_path) do
      'spec/support/crypt/schema_maps/schema_map_local.json'
    end

    let(:schema_map) do
      BSON::ExtJSON.parse(File.read(schema_map_file_path))
    end

    let(:data_key_options) { {} }

    let(:encrypted_ssn) do
      "ASzggCwAAAAAAAAAAAAAAAAC/OvUvE0N5eZ5vhjcILtGKZlxovGhYJduEfsR\n7NiH68Ft" +
        "tXzHYqT0DKgvn3QjjTbS/4SPfBEYrMIS10Uzf9R1Ky4D5a19mYCp\nmv76Z8Rzdmo=\n"
    end
  end

  shared_context 'with local kms_providers and key alt names' do
    include_context 'with local kms_providers'

    let(:schema_map_file_path) do
      'spec/support/crypt/schema_maps/schema_map_local_key_alt_names.json'
    end

    let(:schema_map) do
      BSON::ExtJSON.parse(File.read(schema_map_file_path))
    end
  end

  # For tests that require AWS KMS to be configured
  shared_context 'with AWS kms_providers' do
    before do
      unless SpecConfig.instance.fle_aws_key &&
        SpecConfig.instance.fle_aws_secret &&
        SpecConfig.instance.fle_aws_region &&
        SpecConfig.instance.fle_aws_arn

        reason = "This test requires the MONGO_RUBY_DRIVER_AWS_KEY, " +
          "MONGO_RUBY_DRIVER_AWS_SECRET, MONGO_RUBY_DRIVER_AWS_REGION, " +
          "MONGO_RUBY_DRIVER_AWS_ARN environment variables to be set with information from AWS."

        if SpecConfig.instance.fle?
          fail(reason)
        else
          skip(reason)
        end
      end
    end

    let(:kms_provider_name) { 'aws' }
    let(:kms_providers) { aws_kms_providers }

    let(:data_key) do
      BSON::ExtJSON.parse(File.read('spec/support/crypt/data_keys/key_document_aws.json'))
    end

    let(:schema_map_file_path) do
      'spec/support/crypt/schema_maps/schema_map_aws.json'
    end

    let(:schema_map) do
      BSON::ExtJSON.parse(File.read(schema_map_file_path))
    end

    let(:data_key_options) do
      {
        master_key: {
          region: aws_region,
          key: aws_arn,
          endpoint: "#{aws_endpoint_host}:#{aws_endpoint_port}"
        }
      }
    end

    let(:aws_region) { SpecConfig.instance.fle_aws_region }
    let(:aws_arn) { SpecConfig.instance.fle_aws_arn }
    let(:aws_endpoint_host) { "kms.#{aws_region}.amazonaws.com" }
    let(:aws_endpoint_port) { 443 }

    let(:encrypted_ssn) do
      "AQFkgAAAAAAAAAAAAAAAAAACX/YG2ZOHWU54kARE17zDdeZzKgpZffOXNaoB\njmvdVa/" +
        "yTifOikvxEov16KxtQrnaKWdxQL03TVgpoLt4Jb28pqYKlgBj3XMp\nuItZpQeFQB4=\n"
    end
  end

  shared_context 'with AWS kms_providers and key alt names' do
    include_context 'with AWS kms_providers'

    let(:schema_map_file_path) do
      'spec/support/crypt/schema_maps/schema_map_aws_key_alt_names.json'
    end

    let(:schema_map) do
      BSON::ExtJSON.parse(File.read(schema_map_file_path))
    end
  end
  shared_context 'with Azure kms_providers' do
    before do
      unless SpecConfig.instance.fle_azure_client_id &&
        SpecConfig.instance.fle_azure_client_secret &&
        SpecConfig.instance.fle_azure_tenant_id &&
        SpecConfig.instance.fle_azure_identity_platform_endpoint

        reason = 'This test requires the MONGO_RUBY_DRIVER_AZURE_TENANT_ID, ' +
          'MONGO_RUBY_DRIVER_AZURE_CLIENT_ID, MONGO_RUBY_DRIVER_AZURE_CLIENT_SECRET, ' +
          'MONGO_RUBY_DRIVER_AZURE_IDENTITY_PLATFORM_ENDPOINT environment variables to be set with information from Azure.'

        if SpecConfig.instance.fle?
          fail(reason)
        else
          skip(reason)
        end
      end
    end

    let(:kms_provider_name) { 'azure' }
    let(:kms_providers) { azure_kms_providers }

    let(:data_key) do
      BSON::ExtJSON.parse(File.read('spec/support/crypt/data_keys/key_document_azure.json'))
    end

    let(:schema_map_file_path) do
      'spec/support/crypt/schema_maps/schema_map_azure.json'
    end

    let(:schema_map) do
      BSON::ExtJSON.parse(File.read(schema_map_file_path))
    end

    let(:data_key_options) do
      {
        master_key: {
          key_vault_endpoint: SpecConfig.instance.fle_azure_key_vault_endpoint,
          key_name: SpecConfig.instance.fle_azure_key_name,
        }
      }
    end

    let(:encrypted_ssn) do
      "AQGVERAAAAAAAAAAAAAAAAACFq9wVyHGWquXjaAjjBwI3MQNuyokz/+wWSi0\n8n9iu1cKzTGI9D5uVSNs64tBulnZpywtuewBQtJIphUoEr5YpSFLglOh3bp6\nmC9hfXSyFT4="
    end
  end

  shared_context 'with Azure kms_providers and key alt names' do
    include_context 'with Azure kms_providers'

    let(:schema_map_file_path) do
      'spec/support/crypt/schema_maps/schema_map_azure_key_alt_names.json'
    end

    let(:schema_map) do
      BSON::ExtJSON.parse(File.read(schema_map_file_path))
    end
  end

  shared_context 'with GCP kms_providers' do
    before do
      unless SpecConfig.instance.fle_gcp_email &&
        SpecConfig.instance.fle_gcp_private_key &&
        SpecConfig.instance.fle_gcp_project_id &&
        SpecConfig.instance.fle_gcp_location &&
        SpecConfig.instance.fle_gcp_key_ring &&
        SpecConfig.instance.fle_gcp_key_name

        reason = 'This test requires the MONGO_RUBY_DRIVER_GCP_EMAIL, ' +
          'MONGO_RUBY_DRIVER_GCP_PRIVATE_KEY, ' +
          'MONGO_RUBY_DRIVER_GCP_PROJECT_ID, MONGO_RUBY_DRIVER_GCP_LOCATION, ' +
          'MONGO_RUBY_DRIVER_GCP_KEY_RING, MONGO_RUBY_DRIVER_GCP_KEY_NAME ' +
          'environment variables to be set with information from GCP.'

        if SpecConfig.instance.fle?
          fail(reason)
        else
          skip(reason)
        end
      end
    end

    let(:kms_provider_name) { 'gcp' }
    let(:kms_providers) { gcp_kms_providers }

    let(:data_key) do
      BSON::ExtJSON.parse(File.read('spec/support/crypt/data_keys/key_document_gcp.json'))
    end

    let(:schema_map_file_path) do
      'spec/support/crypt/schema_maps/schema_map_gcp.json'
    end

    let(:schema_map) do
      BSON::ExtJSON.parse(File.read(schema_map_file_path))
    end

    let(:data_key_options) do
      {
        master_key: {
          project_id: SpecConfig.instance.fle_gcp_project_id,
          location: SpecConfig.instance.fle_gcp_location,
          key_ring: SpecConfig.instance.fle_gcp_key_ring,
          key_name: SpecConfig.instance.fle_gcp_key_name,
        }
      }
    end

    let(:encrypted_ssn) do
      "ARgjwAAAAAAAAAAAAAAAAAACxH7FeQ7bsdbcs8uiNn5Anj2MAU7eS5hFiQsH\nYIEMN88QVamaAgiE+EIYHiRMYGxUFaaIwD17tjzZ2wyQbDd1qMO9TctkIFzn\nqQTOP6eSajU="
    end
  end

  shared_context 'with GCP kms_providers and key alt names' do
    include_context 'with GCP kms_providers'

    let(:schema_map_file_path) do
      'spec/support/crypt/schema_maps/schema_map_gcp_key_alt_names.json'
    end

    let(:schema_map) do
      BSON::ExtJSON.parse(File.read(schema_map_file_path))
    end
  end

  shared_context 'with KMIP kms_providers' do
    let(:kms_provider_name) { 'kmip' }
    let(:kms_providers) { kmip_kms_providers }

    let(:kms_tls_options) do
      {
        kmip: default_kms_tls_options_for_provider
      }
    end

    let(:data_key) do
      BSON::ExtJSON.parse(File.read('spec/support/crypt/data_keys/key_document_kmip.json'))
    end

    let(:schema_map_file_path) do
      'spec/support/crypt/schema_maps/schema_map_kmip.json'
    end

    let(:schema_map) do
      BSON::ExtJSON.parse(File.read(schema_map_file_path))
    end

    let(:data_key_options) do
      {
        master_key: {
          key_id: "1"
        }
      }
    end

    let(:encrypted_ssn) do
      "ASjCDwAAAAAAAAAAAAAAAAAC/ga87lE2+z1ZVpLcoP51EWKVgne7f5/vb0Jq\nt3odeB0IIuoP7xxLCqSJe+ueFm86gVA1gIiip5CKe/043PD4mquxO2ARwy8s\nCX/D4tMmvDA="
    end
  end

  shared_context 'with KMIP kms_providers and key alt names' do
    include_context 'with KMIP kms_providers'

    let(:schema_map_file_path) do
      'spec/support/crypt/schema_maps/schema_map_kmip_key_alt_names.json'
    end

    let(:schema_map) do
      BSON::ExtJSON.parse(File.read(schema_map_file_path))
    end
  end
end
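# Usage sketch (illustrative only; the describe block below is hypothetical
# and not part of this file): a spec composes the contexts above to obtain
# ready-made KMS options, key vault names, and fixture values, and can then
# drive the driver's Mongo::ClientEncryption API, assuming the key vault has
# been seeded with data_key as in the note earlier in this file:
#
#   describe 'explicit encryption round trip' do
#     include_context 'define shared FLE helpers'
#     include_context 'with local kms_providers'
#
#     let(:client_encryption) do
#       Mongo::ClientEncryption.new(
#         authorized_client,
#         key_vault_namespace: key_vault_namespace,
#         kms_providers: kms_providers
#       )
#     end
#
#     it 'decrypts what it encrypted' do
#       encrypted = client_encryption.encrypt(
#         ssn,
#         key_id: key_id,
#         algorithm: algorithm
#       )
#       expect(client_encryption.decrypt(encrypted)).to eq(ssn)
#     end
#   end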
mongo-ruby-driver-2.21.3/spec/support/crypt/corpus/corpus-encrypted.json

{ "_id": "client_side_encryption_corpus", "altname_aws": "aws", "altname_local": "local",
"aws_double_rand_auto_id": { "kms": "aws", "type": "double", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AgFkgAAAAAAAAAAAAAAAAAABchrWPF5OPeuFpk4tUV325TmoNpGW+L5iPSXcLQIr319WJFIp3EDy5QiAHBfz2rThI7imU4eLXndIUrsjM0S/vg==", "subType": "06" } } },
"aws_double_rand_auto_altname": { "kms": "aws", "type": "double", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AgFkgAAAAAAAAAAAAAAAAAABga5hXFiFvH/wOr0wOHSHFWRZ4pEs/UCC1XJWf46Dod3GY9Ry5j1ZyzeHueJxc4Ym5M8UHKSmJuXmNo9m9ZnkiA==", "subType": "06" } } },
"aws_double_rand_explicit_id": { "kms": "aws", "type": "double", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AgFkgAAAAAAAAAAAAAAAAAABjTYZbsro/YxLWBb88qPXEIDQdzY7UZyK4UaZZ8h62OTxp43Zp9j6WvOEzKhXt4oJPMxlAxyTdqO6MllX5bsDrw==", "subType": "06" } } },
"aws_double_rand_explicit_altname": { "kms": "aws", "type": "double", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AgFkgAAAAAAAAAAAAAAAAAABqkyXdeS3aWH2tRFoKxsIIL3ZH05gkiAEbutrjrdfw0b110iPhuCCOb0gP/nX/NRNCg1kCFZ543Vu0xZ0BRXlvQ==", "subType": "06" } } },
"aws_double_det_explicit_id": { "kms": "aws", "type": "double", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$numberDouble": "1.234" } },
"aws_double_det_explicit_altname": { "kms": "aws", "type": "double", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$numberDouble": "1.234" } },
"aws_string_rand_auto_id": { "kms": "aws", "type": "string", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AgFkgAAAAAAAAAAAAAAAAAACAsI5E0rVT8TpIONY3TnbRvIxUjKsiy9ynVd/fE7U1lndE7KR6dTzs8QWK13kdKxO+njKPeC2ObBX904QmJ65Sw==", "subType": "06" } } },
"aws_string_rand_auto_altname": { "kms": "aws", "type": "string", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AgFkgAAAAAAAAAAAAAAAAAACgBE6J6MRxPSDe+gfJPL8nBvuEIRBYxNS/73LqBTDJYyN/lsHQ6UlFDT5B4EkIPmHPTe+UBMOhZQ1bsP+DK8Aog==", "subType": "06" } } },
"aws_string_rand_explicit_id": { "kms": "aws", "type": "string", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AgFkgAAAAAAAAAAAAAAAAAACbdTVDBWn35M5caKZgLFoiSVeFGKRj5K/QtupKNc8/dPIyCE+/a4PU51G/YIzFpYmp91nLpyq7lD/eJ/V0q66Zw==", "subType": "06" } } },
"aws_string_rand_explicit_altname": { "kms": "aws", "type": "string", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AgFkgAAAAAAAAAAAAAAAAAACa4O+kE2BaqM0E+yiBrbCuE0YEGTrZ7L/+SuWm9gN3UupxwAQpRfxXAuUCTc9u1CXnvL+ga+VJMcWD2bawnn/Rg==", "subType":
"06" } } }, "aws_string_det_auto_id": { "kms": "aws", "type": "string", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AQFkgAAAAAAAAAAAAAAAAAACyvOW8NcqRkZYzujivwVmYptJkic27PWr3Nq3Yv5Njz8cJdoyesVaQan6mn+U3wdfGEH8zbUUISdCx5qgvXEpvw==", "subType": "06" } } }, "aws_string_det_explicit_id": { "kms": "aws", "type": "string", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AQFkgAAAAAAAAAAAAAAAAAACyvOW8NcqRkZYzujivwVmYptJkic27PWr3Nq3Yv5Njz8cJdoyesVaQan6mn+U3wdfGEH8zbUUISdCx5qgvXEpvw==", "subType": "06" } } }, "aws_string_det_explicit_altname": { "kms": "aws", "type": "string", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AQFkgAAAAAAAAAAAAAAAAAACyvOW8NcqRkZYzujivwVmYptJkic27PWr3Nq3Yv5Njz8cJdoyesVaQan6mn+U3wdfGEH8zbUUISdCx5qgvXEpvw==", "subType": "06" } } }, "aws_object_rand_auto_id": { "kms": "aws", "type": "object", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AgFkgAAAAAAAAAAAAAAAAAADI+/afY6Eka8j1VNThWIeGkDZ7vo4/l66a01Z+lVUFFnVLeUV/nz9kM6uTTplNRUa+RXmNmwkoR/BHRnGc7wRNA==", "subType": "06" } } }, "aws_object_rand_auto_altname": { "kms": "aws", "type": "object", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AgFkgAAAAAAAAAAAAAAAAAADzN4hVXWXKerhggRRtwWnDu2W2wQ5KIWb/X1WCZJKTjQSQ5LNHVasabBCa4U1q46PQ5pDDM1PkVjW6o+zzl/4xw==", "subType": "06" } } }, "aws_object_rand_explicit_id": { "kms": "aws", "type": "object", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AgFkgAAAAAAAAAAAAAAAAAADhSs5zKFMuuux3fqFFuPito3N+bp5TgmkUtJtFXjmA/EnLuexGARvEeGUsMJ/n0VzKbbsiE8+AsUNY3o9YXutqQ==", "subType": "06" } } }, "aws_object_rand_explicit_altname": { "kms": "aws", "type": "object", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AgFkgAAAAAAAAAAAAAAAAAADpj8MSov16h26bFDrHepsNkW+tOLOjRP7oj1Tnj75qZ+uqxxVkQ5B/t/Ihk5fikHTJGAcRBR5Vv6kJ/ulMaDnvQ==", "subType": "06" } } }, "aws_object_det_explicit_id": { "kms": "aws", "type": "object", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": { "x": { "$numberInt": "1" } } }, "aws_object_det_explicit_altname": { "kms": "aws", "type": "object", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "x": { "$numberInt": "1" } } }, "aws_array_rand_auto_id": { "kms": "aws", "type": "array", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AgFkgAAAAAAAAAAAAAAAAAAETWDOZ6zV39H2+W+BkwZIoxI3BNF6phKoiBZ9+i4T9uEoyU3TmoTPjuI0YNwR1v/p5/9rlVCG0KLZd16eeMb3zxZXjqh6IAJqfhsBQ7bzBYI=", "subType": "06" } } }, "aws_array_rand_auto_altname": { "kms": "aws", "type": "array", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AgFkgAAAAAAAAAAAAAAAAAAE1xeHbld2JjUiPB1k+xMZuIzNSai7mv1iusCswxKEfYCZ7YtR0GDQTxN4676CwhcodSDiysjgOxSFIGlptKCvl0k46LNq0EGypP9yWBLvdjQ=", "subType": "06" } } }, "aws_array_rand_explicit_id": { "kms": "aws", "type": "array", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AgFkgAAAAAAAAAAAAAAAAAAEFVa4U2uW65MGihhdOmpZFgnwGTs3VeN5TXXbXJ5cfm0CwXF3EPlzAVjy5WO/+lbvFufpQnIiLH59/kVygmwn+2P9zPNJnSGIJW9gaV8Vye8=", "subType": 
"06" } } }, "aws_array_rand_explicit_altname": { "kms": "aws", "type": "array", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AgFkgAAAAAAAAAAAAAAAAAAE11VXbfg7DJQ5/CB9XdBO0hCrxOkK3RrEjPGJ0FXlUo76IMna1uo+NVmDnM63CRlGE3/TEbZPpp0w0jn4vZLKvBmGr7o7WQusRY4jnRf5oH4=", "subType": "06" } } }, "aws_array_det_explicit_id": { "kms": "aws", "type": "array", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": [ { "$numberInt": "1" }, { "$numberInt": "2" }, { "$numberInt": "3" } ] }, "aws_array_det_explicit_altname": { "kms": "aws", "type": "array", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": [ { "$numberInt": "1" }, { "$numberInt": "2" }, { "$numberInt": "3" } ] }, "aws_binData=00_rand_auto_id": { "kms": "aws", "type": "binData=00", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AgFkgAAAAAAAAAAAAAAAAAAFpZYSktIHzGLZ6mcBFxywICqxdurqLVJcQR34ngix5YIOOulCYEhBSDzzSEyixEPCuU6cEzeuafpZRHX4qgcr9Q==", "subType": "06" } } }, "aws_binData=00_rand_auto_altname": { "kms": "aws", "type": "binData=00", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AgFkgAAAAAAAAAAAAAAAAAAFshzESR9SyR++9r2yeaEjJYScMDez414s8pZkB3C8ihDa+rsyaxNy4yrF7qNEWjFrdFaH7zD2LdlPx+TKZgROlg==", "subType": "06" } } }, "aws_binData=00_rand_explicit_id": { "kms": "aws", "type": "binData=00", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AgFkgAAAAAAAAAAAAAAAAAAFpYwZRPDom7qyAe5WW/QNSq97/OYgRT8xUEaaR5pkbQEFd/Cwtl8Aib/3Bs1CT3MVaHVWna2u5Gcc4s/v18zLhg==", "subType": "06" } } }, "aws_binData=00_rand_explicit_altname": { "kms": "aws", "type": "binData=00", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AgFkgAAAAAAAAAAAAAAAAAAFBq1RIU1YGHKAS1SAtS42fKtQBHQ/BCQzRutirNdvWlrXxF81LSaS7QgQyycZ2ePiOLsSm2vZS4xaQETeCgRC4g==", "subType": "06" } } }, "aws_binData=00_det_auto_id": { "kms": "aws", "type": "binData=00", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AQFkgAAAAAAAAAAAAAAAAAAF6SJGmfD3hLVc4tLPm4v2zFuHoRxUDLumBR8Q0AlKK2nQPyvuHEPVBD3vQdDi+Q7PwFxmovJsHccr59VnzvpJeg==", "subType": "06" } } }, "aws_binData=00_det_explicit_id": { "kms": "aws", "type": "binData=00", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AQFkgAAAAAAAAAAAAAAAAAAF6SJGmfD3hLVc4tLPm4v2zFuHoRxUDLumBR8Q0AlKK2nQPyvuHEPVBD3vQdDi+Q7PwFxmovJsHccr59VnzvpJeg==", "subType": "06" } } }, "aws_binData=00_det_explicit_altname": { "kms": "aws", "type": "binData=00", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AQFkgAAAAAAAAAAAAAAAAAAF6SJGmfD3hLVc4tLPm4v2zFuHoRxUDLumBR8Q0AlKK2nQPyvuHEPVBD3vQdDi+Q7PwFxmovJsHccr59VnzvpJeg==", "subType": "06" } } }, "aws_binData=04_rand_auto_id": { "kms": "aws", "type": "binData=04", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AgFkgAAAAAAAAAAAAAAAAAAFM5685zqlM8pc3xubtCFuf724g/bWXsebpNzw5E5HrxUqSBBVOvjs3IJH74+Supe169qejY358nOG41mLZvO2wJByvT14qmgUGpgBaLaxPR0=", "subType": "06" } } }, "aws_binData=04_rand_auto_altname": { "kms": "aws", "type": "binData=04", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { 
"$binary": { "base64": "AgFkgAAAAAAAAAAAAAAAAAAFfLqOzpfjz/XYHDLnliUAA5ehi6s+OIjvrLa59ubqEf8DuoCEWlO13Dl8X42IBB4hoSsO2RUeWtc9MeH4SdIUh/xJN3qS7qzjh/H+GvZRdAM=", "subType": "06" } } }, "aws_binData=04_rand_explicit_id": { "kms": "aws", "type": "binData=04", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AgFkgAAAAAAAAAAAAAAAAAAFkmKfKAbz9tqVaiM9MRhYttiY3vgDwXpdYLQ4uUgWX89KRayLADWortYL+Oq+roFhO3oiwB9vjeWGIdgbj5wSh/50JT/2Gs85TXFe1GFjfWs=", "subType": "06" } } }, "aws_binData=04_rand_explicit_altname": { "kms": "aws", "type": "binData=04", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AgFkgAAAAAAAAAAAAAAAAAAFKbufv83ddN+07Q5Ocq0VxUEV+BesSrVM7Bol3cMlWjHi7P+MrdwhNEa94xlxlDwU3b+RD6kW+AuNEQ2byA3CX2JjZE1gHwN7l0ukXuqpD0A=", "subType": "06" } } }, "aws_binData=04_det_auto_id": { "kms": "aws", "type": "binData=04", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AQFkgAAAAAAAAAAAAAAAAAAFlg7ceq9w/JMhHcNzQks6UrKYAffpUyeWuBIpcuLoB7YbFO61Dphseh77pzZbk3OvmveUq6EtCP2pmsq7hA+QV4hkv6BTn4m6wnXw6ss/qfE=", "subType": "06" } } }, "aws_binData=04_det_explicit_id": { "kms": "aws", "type": "binData=04", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AQFkgAAAAAAAAAAAAAAAAAAFlg7ceq9w/JMhHcNzQks6UrKYAffpUyeWuBIpcuLoB7YbFO61Dphseh77pzZbk3OvmveUq6EtCP2pmsq7hA+QV4hkv6BTn4m6wnXw6ss/qfE=", "subType": "06" } } }, "aws_binData=04_det_explicit_altname": { "kms": "aws", "type": "binData=04", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AQFkgAAAAAAAAAAAAAAAAAAFlg7ceq9w/JMhHcNzQks6UrKYAffpUyeWuBIpcuLoB7YbFO61Dphseh77pzZbk3OvmveUq6EtCP2pmsq7hA+QV4hkv6BTn4m6wnXw6ss/qfE=", "subType": "06" } } }, "aws_undefined_rand_explicit_id": { "kms": "aws", "type": "undefined", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$undefined": true } }, "aws_undefined_rand_explicit_altname": { "kms": "aws", "type": "undefined", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$undefined": true } }, "aws_undefined_det_explicit_id": { "kms": "aws", "type": "undefined", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$undefined": true } }, "aws_undefined_det_explicit_altname": { "kms": "aws", "type": "undefined", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$undefined": true } }, "aws_objectId_rand_auto_id": { "kms": "aws", "type": "objectId", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AgFkgAAAAAAAAAAAAAAAAAAHASE+V+LlkmwgF9QNjBK8QBvC973NaTMk6wbd57VB2EpQzrgxMtR5gYzVeqq4xaaHqrncyZCOIxDJkFlaim2NqA==", "subType": "06" } } }, "aws_objectId_rand_auto_altname": { "kms": "aws", "type": "objectId", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AgFkgAAAAAAAAAAAAAAAAAAHf/+9Qj/ozcDoUb8RNBnajU1d9hJ/6fE17IEZnw+ma6v5yH8LqZk9w3dtm6Sfw1unMhcMKrmIgs6kxqRWhNREJg==", "subType": "06" } } }, "aws_objectId_rand_explicit_id": { "kms": "aws", "type": "objectId", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": 
"AgFkgAAAAAAAAAAAAAAAAAAHzX8ejVLhoarQ5xgWsJitU/9eBm/Hlt2IIbZtS0SBc80qzkkWTaP9Zl9wrILH/Hwwx8RFnts855eKII3NJFa3BA==", "subType": "06" } } }, "aws_objectId_rand_explicit_altname": { "kms": "aws", "type": "objectId", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AgFkgAAAAAAAAAAAAAAAAAAHG5l6nUCY8f/6xO6TsPDrZHcdPRyMe3muMlY2DxHwv9GJNDR5Ne5VEAzUjnbgoy+B29SX4oY8cXJ6XhVz8mt3Eg==", "subType": "06" } } }, "aws_objectId_det_auto_id": { "kms": "aws", "type": "objectId", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AQFkgAAAAAAAAAAAAAAAAAAHTMY2l+gY8glm4HeSsGfCSfOsTVTzYU8qnQV8iqEFHrO5SBJac59gv3N/jukMwAnt0j6vIIQrROkVetU24YY7sQ==", "subType": "06" } } }, "aws_objectId_det_explicit_id": { "kms": "aws", "type": "objectId", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AQFkgAAAAAAAAAAAAAAAAAAHTMY2l+gY8glm4HeSsGfCSfOsTVTzYU8qnQV8iqEFHrO5SBJac59gv3N/jukMwAnt0j6vIIQrROkVetU24YY7sQ==", "subType": "06" } } }, "aws_objectId_det_explicit_altname": { "kms": "aws", "type": "objectId", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AQFkgAAAAAAAAAAAAAAAAAAHTMY2l+gY8glm4HeSsGfCSfOsTVTzYU8qnQV8iqEFHrO5SBJac59gv3N/jukMwAnt0j6vIIQrROkVetU24YY7sQ==", "subType": "06" } } }, "aws_bool_rand_auto_id": { "kms": "aws", "type": "bool", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AgFkgAAAAAAAAAAAAAAAAAAISm4UFt1HC2j0ObpTBg7SvF2Dq31i9To2ED4F3JcTihhq0fVzaSCsUz9VTJ0ziHmeNPNdfPPZO6qA/CDEZBO4jg==", "subType": "06" } } }, "aws_bool_rand_auto_altname": { "kms": "aws", "type": "bool", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AgFkgAAAAAAAAAAAAAAAAAAIj93KeAa96DmZXdB8boFvW19jhJSMmtSs5ag5FDSkH8MdKG2d2VoBOdUlBrL+LHYELqeDHCszY7qCirvb5mIgZg==", "subType": "06" } } }, "aws_bool_rand_explicit_id": { "kms": "aws", "type": "bool", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AgFkgAAAAAAAAAAAAAAAAAAIMbDFEuHIl5MNEsWnYLIand1vpK6EMv7Mso6qxrN4wHSVVwmxK+GCPgrKoUQsNuTssFWNCu0IhwrXOagDEfmlxw==", "subType": "06" } } }, "aws_bool_rand_explicit_altname": { "kms": "aws", "type": "bool", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AgFkgAAAAAAAAAAAAAAAAAAIkIaWfmPdxgAV5Rtb6on6T0NGt9GPFDScQD5I/Ch0ngiTCCKceJOjU0ljd3YTgfWRA1p/MlMIV0I5YAWZXKTHlg==", "subType": "06" } } }, "aws_bool_det_explicit_id": { "kms": "aws", "type": "bool", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": true }, "aws_bool_det_explicit_altname": { "kms": "aws", "type": "bool", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": true }, "aws_date_rand_auto_id": { "kms": "aws", "type": "date", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AgFkgAAAAAAAAAAAAAAAAAAJz1VG4+QnQXEE+TGu/pzfPugGMVTiC1xnenG1ByRdPvsERVw9WComWl1tb9tt9oblD7H/q0y1+y8HevkDqohB2Q==", "subType": "06" } } }, "aws_date_rand_auto_altname": { "kms": "aws", "type": "date", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": 
"AgFkgAAAAAAAAAAAAAAAAAAJa1kI2mIIYWjf7zjf5dD9+psvAQpjZ3nnsoXA5upcIwEtZaC8bxKKHVpOLOP3rTbvT5EV6vLhXkferGoyaqd/8w==", "subType": "06" } } }, "aws_date_rand_explicit_id": { "kms": "aws", "type": "date", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AgFkgAAAAAAAAAAAAAAAAAAJ9Q5Xe4UuOLQTUwosk47A6xx40XJcNoICCNtKrHqsUYy0QLCFRc5v4nA0160BVghURizbUtX8iuIp11pnsDyRtA==", "subType": "06" } } }, "aws_date_rand_explicit_altname": { "kms": "aws", "type": "date", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AgFkgAAAAAAAAAAAAAAAAAAJkHOdUc/4U82wxWJZ0SYABkJjQqNApkH2Iy/5S+PoatPgynoeSFTU9FmAbuWV/gbtIfBiaCOIjlsdonl/gf9+5w==", "subType": "06" } } }, "aws_date_det_auto_id": { "kms": "aws", "type": "date", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AQFkgAAAAAAAAAAAAAAAAAAJEEpQNsiqMWPqD4lhMkiOJHGE8FxOeYrKPiiAp/bZTrLKyCSS0ZL1WT9H3cGzxWPm5veihCjKqWhjatC/pjtzbQ==", "subType": "06" } } }, "aws_date_det_explicit_id": { "kms": "aws", "type": "date", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AQFkgAAAAAAAAAAAAAAAAAAJEEpQNsiqMWPqD4lhMkiOJHGE8FxOeYrKPiiAp/bZTrLKyCSS0ZL1WT9H3cGzxWPm5veihCjKqWhjatC/pjtzbQ==", "subType": "06" } } }, "aws_date_det_explicit_altname": { "kms": "aws", "type": "date", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AQFkgAAAAAAAAAAAAAAAAAAJEEpQNsiqMWPqD4lhMkiOJHGE8FxOeYrKPiiAp/bZTrLKyCSS0ZL1WT9H3cGzxWPm5veihCjKqWhjatC/pjtzbQ==", "subType": "06" } } }, "aws_null_rand_explicit_id": { "kms": "aws", "type": "null", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": false, "value": null }, "aws_null_rand_explicit_altname": { "kms": "aws", "type": "null", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": false, "value": null }, "aws_null_det_explicit_id": { "kms": "aws", "type": "null", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": null }, "aws_null_det_explicit_altname": { "kms": "aws", "type": "null", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": null }, "aws_regex_rand_auto_id": { "kms": "aws", "type": "regex", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AgFkgAAAAAAAAAAAAAAAAAALnhViSt3HqTDzyLN4mWO9srBU8TjRvPWsAJYfj/5sgI/yFuWdrggMs3Aq6G+K3tRrX3Yb+osy5CLiFCxq9WIvAA==", "subType": "06" } } }, "aws_regex_rand_auto_altname": { "kms": "aws", "type": "regex", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AgFkgAAAAAAAAAAAAAAAAAALbL2RS2tGQLBZ+6LtXLKAWFKcoKui+u4+gMIlFemLgpdO2eLqrMJB53ccqZImX8ons9UgAwDkiD68hWy8e7KHfg==", "subType": "06" } } }, "aws_regex_rand_explicit_id": { "kms": "aws", "type": "regex", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AgFkgAAAAAAAAAAAAAAAAAALa0+ftF6W/0Ul4J9VT/3chXFktE1o+OK4S14h2kyOqDVNA8yMKuyCK5nWl1yZvjJ76TuhEABte23oxcBP5QwalQ==", "subType": "06" } } }, "aws_regex_rand_explicit_altname": { "kms": "aws", "type": "regex", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AgFkgAAAAAAAAAAAAAAAAAALS4Yo9Fwk6OTx2CWdnObFT2L4rHngeIbdCyT4/YMJYd+jLU3mph14M1ptZZg+TBIgSPHq+BkvpRDifbMmOVr/Hg==", 
"subType": "06" } } }, "aws_regex_det_auto_id": { "kms": "aws", "type": "regex", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AQFkgAAAAAAAAAAAAAAAAAALpwNlokiTCUtTa2Kx9NVGvXR/aKPGhR5iaCT7nHEk4BOiZ9Kr4cRHdPCeZ7A+gjG4cKoT62sm3Fj1FwSOl8J8aQ==", "subType": "06" } } }, "aws_regex_det_explicit_id": { "kms": "aws", "type": "regex", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AQFkgAAAAAAAAAAAAAAAAAALpwNlokiTCUtTa2Kx9NVGvXR/aKPGhR5iaCT7nHEk4BOiZ9Kr4cRHdPCeZ7A+gjG4cKoT62sm3Fj1FwSOl8J8aQ==", "subType": "06" } } }, "aws_regex_det_explicit_altname": { "kms": "aws", "type": "regex", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AQFkgAAAAAAAAAAAAAAAAAALpwNlokiTCUtTa2Kx9NVGvXR/aKPGhR5iaCT7nHEk4BOiZ9Kr4cRHdPCeZ7A+gjG4cKoT62sm3Fj1FwSOl8J8aQ==", "subType": "06" } } }, "aws_dbPointer_rand_auto_id": { "kms": "aws", "type": "dbPointer", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AgFkgAAAAAAAAAAAAAAAAAAMfCVAnMNbRGsThnoVGb2KDsCIU2ehcPtebk/TFG4GZvEmculscLLih813lEz5NHS2sAXBn721EzUS7d0TKAPbmEYFwUBnijIQIPvUoUO8AQM=", "subType": "06" } } }, "aws_dbPointer_rand_auto_altname": { "kms": "aws", "type": "dbPointer", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AgFkgAAAAAAAAAAAAAAAAAAMvYJ5BtaMLVXV+qj85q5WqKRlzlHOBIIxZfUE/BBXUwqSTpJLdQQD++DDh6F2dtorBeYa3oUv2ef3ImASk5j23joU35Pm3Zt9Ci1pMNGodWs=", "subType": "06" } } }, "aws_dbPointer_rand_explicit_id": { "kms": "aws", "type": "dbPointer", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AgFkgAAAAAAAAAAAAAAAAAAMdsmYtPDw8kKjfB2kWfx5W1oNEkWWct1lRpesN303pUWsawDJpfBx40lg18So2X/g4yGIwpY3qfEKQZA4vCJeT+MTjhRXFjXA7eS/mxv8f3E=", "subType": "06" } } }, "aws_dbPointer_rand_explicit_altname": { "kms": "aws", "type": "dbPointer", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AgFkgAAAAAAAAAAAAAAAAAAM0hcvS5zmY3mlTp0SfME/rINlflF/sx2KvP0eJTdH+Uk0WHuTkFIJAza+bXvV/gB7iNC350qyzUX3M6NHx/9s/5yBpY8MawTZTZ7WCQIA+ZI=", "subType": "06" } } }, "aws_dbPointer_det_auto_id": { "kms": "aws", "type": "dbPointer", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AQFkgAAAAAAAAAAAAAAAAAAMp4QxbaEOij66L+RtaMekrDSm6QbfJBTQ8lQFhxfq9n7SVuQ9Zwdy14Ja8tyI3cGgQzQ/73rHUJ3CKA4+OYr63skYUkkkdlHxUrIMd5j5woc=", "subType": "06" } } }, "aws_dbPointer_det_explicit_id": { "kms": "aws", "type": "dbPointer", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AQFkgAAAAAAAAAAAAAAAAAAMp4QxbaEOij66L+RtaMekrDSm6QbfJBTQ8lQFhxfq9n7SVuQ9Zwdy14Ja8tyI3cGgQzQ/73rHUJ3CKA4+OYr63skYUkkkdlHxUrIMd5j5woc=", "subType": "06" } } }, "aws_dbPointer_det_explicit_altname": { "kms": "aws", "type": "dbPointer", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AQFkgAAAAAAAAAAAAAAAAAAMp4QxbaEOij66L+RtaMekrDSm6QbfJBTQ8lQFhxfq9n7SVuQ9Zwdy14Ja8tyI3cGgQzQ/73rHUJ3CKA4+OYr63skYUkkkdlHxUrIMd5j5woc=", "subType": "06" } } }, "aws_javascript_rand_auto_id": { "kms": "aws", "type": "javascript", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": 
"AgFkgAAAAAAAAAAAAAAAAAAN3HzAC9BTD7Jgi0PR4RS/Z6L6QtAQ7VhbKRbX+1smmnYniH6jVBM6zyxMDM8h9YjMPNs8EJrGDnisuf33w5KI/A==", "subType": "06" } } }, "aws_javascript_rand_auto_altname": { "kms": "aws", "type": "javascript", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AgFkgAAAAAAAAAAAAAAAAAANJpw+znlu3ecSiNyZ0EerVsow4aDRF2auI3Wy69EVexJkQlHO753PjRn8hG/x2kY8ROy5IUU43jaugP5AN1bwNQ==", "subType": "06" } } }, "aws_javascript_rand_explicit_id": { "kms": "aws", "type": "javascript", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AgFkgAAAAAAAAAAAAAAAAAANzoDiq8uI0+l8COY8YdM9S3rpLvPOHOWmJqJNtOyS0ZXUx1SB5paRJ4W3Eg8KuXEeoFwvBDe9cW9YT66CzkjlBw==", "subType": "06" } } }, "aws_javascript_rand_explicit_altname": { "kms": "aws", "type": "javascript", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AgFkgAAAAAAAAAAAAAAAAAAN/JhtRongJweLC5SdrXHhsFz3p82q3cwXf8Sru21DK6S39S997y3uhVLn0xlX5d94PxK1XVYSjz1oVuMxZouZ7Q==", "subType": "06" } } }, "aws_javascript_det_auto_id": { "kms": "aws", "type": "javascript", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AQFkgAAAAAAAAAAAAAAAAAANE39aEGiuUZ1WyakVEBgkGzLp5whkIjJ4uiaFLXniRszJL70FRkcf+aFXlA5Y4So9/ODKF76qbSsH4Jk6L+3mog==", "subType": "06" } } }, "aws_javascript_det_explicit_id": { "kms": "aws", "type": "javascript", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AQFkgAAAAAAAAAAAAAAAAAANE39aEGiuUZ1WyakVEBgkGzLp5whkIjJ4uiaFLXniRszJL70FRkcf+aFXlA5Y4So9/ODKF76qbSsH4Jk6L+3mog==", "subType": "06" } } }, "aws_javascript_det_explicit_altname": { "kms": "aws", "type": "javascript", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AQFkgAAAAAAAAAAAAAAAAAANE39aEGiuUZ1WyakVEBgkGzLp5whkIjJ4uiaFLXniRszJL70FRkcf+aFXlA5Y4So9/ODKF76qbSsH4Jk6L+3mog==", "subType": "06" } } }, "aws_symbol_rand_auto_id": { "kms": "aws", "type": "symbol", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AgFkgAAAAAAAAAAAAAAAAAAOBv1T9tleM0xwNe7efg/MlShyzvXe3Pmg1GzPl3gjFRHZGWXR578KqX+8oiz65eXGzNuyOFvcpnR2gYCs3NeKeQfctO5plEiIva6nzCI5SK8=", "subType": "06" } } }, "aws_symbol_rand_auto_altname": { "kms": "aws", "type": "symbol", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AgFkgAAAAAAAAAAAAAAAAAAOwLgGws8CMh+GgkEJFAx8tDIflyjsgG+/1FmZZobKAg8NOKqfXjtbnNCbvR28OCk6g/8SqBm8m53G6JciwvthJ0DirdfEexiUqu7IPtaeeyw=", "subType": "06" } } }, "aws_symbol_rand_explicit_id": { "kms": "aws", "type": "symbol", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AgFkgAAAAAAAAAAAAAAAAAAORQi3dNkXzZeruWu19kEhDu6fFD/h47ILzk+OVKQMoriAQC5YFyVRp1yAkIaWsrsPcyCHlfZ99FySSQeqSYbZZNj5FqyonWvDuPTduHDy3CI=", "subType": "06" } } }, "aws_symbol_rand_explicit_altname": { "kms": "aws", "type": "symbol", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AgFkgAAAAAAAAAAAAAAAAAAOj+Yl1pQPiJ6mESOISOyUYsKN/VIvC8f0derhxIPakXkwn57U0sxv+geUkrl3JZDxY3+cX5M1JZmY+PfjaYQhbTorf9RZaVC2Wwo2lMftWi0=", "subType": "06" } } }, "aws_symbol_det_auto_id": { "kms": "aws", "type": "symbol", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { 
"$binary": { "base64": "AQFkgAAAAAAAAAAAAAAAAAAO5IHripygBGEsVK8RFWZ1rIIVUap8KVDuqOspZpERaj+5ZEfqIcyrP/WK9KdvwOfdOWXfP/mOwuImYgNdbaQe+ejkYe4W0Y0uneCuw88k95Q=", "subType": "06" } } }, "aws_symbol_det_explicit_id": { "kms": "aws", "type": "symbol", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AQFkgAAAAAAAAAAAAAAAAAAO5IHripygBGEsVK8RFWZ1rIIVUap8KVDuqOspZpERaj+5ZEfqIcyrP/WK9KdvwOfdOWXfP/mOwuImYgNdbaQe+ejkYe4W0Y0uneCuw88k95Q=", "subType": "06" } } }, "aws_symbol_det_explicit_altname": { "kms": "aws", "type": "symbol", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AQFkgAAAAAAAAAAAAAAAAAAO5IHripygBGEsVK8RFWZ1rIIVUap8KVDuqOspZpERaj+5ZEfqIcyrP/WK9KdvwOfdOWXfP/mOwuImYgNdbaQe+ejkYe4W0Y0uneCuw88k95Q=", "subType": "06" } } }, "aws_javascriptWithScope_rand_auto_id": { "kms": "aws", "type": "javascriptWithScope", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AgFkgAAAAAAAAAAAAAAAAAAPT31GSNkY1RM43miv1XPYtDX1vU/xORiM3U0pumjqA+JLU/HMhH++75OcMhcAQqMjm2nZtZScxdGJsJJPEEzqjbFNMJgYc9sqR5uLnzk+2dg=", "subType": "06" } } }, "aws_javascriptWithScope_rand_auto_altname": { "kms": "aws", "type": "javascriptWithScope", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AgFkgAAAAAAAAAAAAAAAAAAPUxgaKAxSQ1uzOZtzsbtrxtDT2P/zWY6lYsbChXuRUooqvyjXSkNDqKBBA7Gp5BdGiVB/JLR47Tihpbcw1s1yGhwQRvnqeDvPrf91nvElXRY=", "subType": "06" } } }, "aws_javascriptWithScope_rand_explicit_id": { "kms": "aws", "type": "javascriptWithScope", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AgFkgAAAAAAAAAAAAAAAAAAPv8W0ZtquFCLTG0TqvRjdzKa/4mvqT2FuEGQ0mXG2k2BZh2LY5APr/kgW0tP4eLjHzVld6OLiM9ZKAvENCZ6/fKOvqSwpIfkdLWUIeB4REQg=", "subType": "06" } } }, "aws_javascriptWithScope_rand_explicit_altname": { "kms": "aws", "type": "javascriptWithScope", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AgFkgAAAAAAAAAAAAAAAAAAPMVhWjaxLffdAOkVgIJpjgNIldMS451NQs3C1jb+pzopHp3DlfZ+AHQpK9reMVVKjaqanhWBpL25q+feA60XVgZPCUDroiRYqMFqU//y0amw=", "subType": "06" } } }, "aws_javascriptWithScope_det_explicit_id": { "kms": "aws", "type": "javascriptWithScope", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$code": "x=1", "$scope": {} } }, "aws_javascriptWithScope_det_explicit_altname": { "kms": "aws", "type": "javascriptWithScope", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$code": "x=1", "$scope": {} } }, "aws_int_rand_auto_id": { "kms": "aws", "type": "int", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AgFkgAAAAAAAAAAAAAAAAAAQFV5b3vsoZe+MT4z8soetpmrWJpm7be41FNu/rdEqHWTG32jCym6762PCNYH5+vA7ldCWQkdt+ncneHsxzPrm9w==", "subType": "06" } } }, "aws_int_rand_auto_altname": { "kms": "aws", "type": "int", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AgFkgAAAAAAAAAAAAAAAAAAQY9+QenvU1Tk/dEGZP11uOZJLHAJ9hWHbEhxbtxItt1LsdU/8gOZfypilIO5BUkLT/15PUuXV28GISNh6yIuWhw==", "subType": "06" } } }, "aws_int_rand_explicit_id": { "kms": "aws", "type": "int", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": 
"AgFkgAAAAAAAAAAAAAAAAAAQruCugbneumhcinuXm89WW1PXVuSOewttp9cpsPPsCRVqe/uAkZOdJnZ2KaEZ9zki2GeqaJTs1qDmaJofc6GMEA==", "subType": "06" } } }, "aws_int_rand_explicit_altname": { "kms": "aws", "type": "int", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AgFkgAAAAAAAAAAAAAAAAAAQb15qXl/tejk4pmgkc4pUxzt4eJrv/cetgzgcPVaROAQSzd8ptbgCjaV8vP46uqozRoaDFZbQ06t65c3f0x/Ucw==", "subType": "06" } } }, "aws_int_det_auto_id": { "kms": "aws", "type": "int", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AQFkgAAAAAAAAAAAAAAAAAAQCXo6ieWvfoqkG+rP7J2BV013AVf/oNMmmGWe44VEHahF+qZHzW5I/F2qIA+xgKkk172pFq0iTSOpe+K2WHMKFw==", "subType": "06" } } }, "aws_int_det_explicit_id": { "kms": "aws", "type": "int", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AQFkgAAAAAAAAAAAAAAAAAAQCXo6ieWvfoqkG+rP7J2BV013AVf/oNMmmGWe44VEHahF+qZHzW5I/F2qIA+xgKkk172pFq0iTSOpe+K2WHMKFw==", "subType": "06" } } }, "aws_int_det_explicit_altname": { "kms": "aws", "type": "int", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AQFkgAAAAAAAAAAAAAAAAAAQCXo6ieWvfoqkG+rP7J2BV013AVf/oNMmmGWe44VEHahF+qZHzW5I/F2qIA+xgKkk172pFq0iTSOpe+K2WHMKFw==", "subType": "06" } } }, "aws_timestamp_rand_auto_id": { "kms": "aws", "type": "timestamp", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AgFkgAAAAAAAAAAAAAAAAAAR63xXG8mrlixkQzD5VBIPE6NHicaWcS5CBhiIJDcZ0x8D9c5TgRJUfCeWhKvWFD4o0DoxcBQ2opPormFDpvmq/g==", "subType": "06" } } }, "aws_timestamp_rand_auto_altname": { "kms": "aws", "type": "timestamp", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AgFkgAAAAAAAAAAAAAAAAAARAgY9LsUxP6gP4gYRvvzZ4iaHVQRNbycATiVag1YNSiDmEr4LYserYuBscdrIy4v3zgGaulFM9KV86bx0ItycZA==", "subType": "06" } } }, "aws_timestamp_rand_explicit_id": { "kms": "aws", "type": "timestamp", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AgFkgAAAAAAAAAAAAAAAAAARLneAZqPcHdzGGnXz2Ne5E7HP9cDC1+yoIwcA8OSF/IlzEjrrMAi3z6Izol6gWDlD7VOh7QYL3sASJOXyzF1hPQ==", "subType": "06" } } }, "aws_timestamp_rand_explicit_altname": { "kms": "aws", "type": "timestamp", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AgFkgAAAAAAAAAAAAAAAAAARH2bU7KNo5SHxiO8JFEcT9wryuHNXyM7ADop1oPcESyay1Nc0WHPD3nr0yMAK481NxOkE3qXyaslu7bcP/744WA==", "subType": "06" } } }, "aws_timestamp_det_auto_id": { "kms": "aws", "type": "timestamp", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AQFkgAAAAAAAAAAAAAAAAAARG7kGfx0ky+d4Hl/fRBu8oUR1Mph26Dkv3J7fxGYanpzOFMiHIfVO0uwYMvsfzG54y0DDNlS3FmmS13DzepbzGQ==", "subType": "06" } } }, "aws_timestamp_det_explicit_id": { "kms": "aws", "type": "timestamp", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AQFkgAAAAAAAAAAAAAAAAAARG7kGfx0ky+d4Hl/fRBu8oUR1Mph26Dkv3J7fxGYanpzOFMiHIfVO0uwYMvsfzG54y0DDNlS3FmmS13DzepbzGQ==", "subType": "06" } } }, "aws_timestamp_det_explicit_altname": { "kms": "aws", "type": "timestamp", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": 
"AQFkgAAAAAAAAAAAAAAAAAARG7kGfx0ky+d4Hl/fRBu8oUR1Mph26Dkv3J7fxGYanpzOFMiHIfVO0uwYMvsfzG54y0DDNlS3FmmS13DzepbzGQ==", "subType": "06" } } }, "aws_long_rand_auto_id": { "kms": "aws", "type": "long", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AgFkgAAAAAAAAAAAAAAAAAASZbes2EdR78crt2pXVElW2YwAQh8HEBapYYeav2VQeg2syXaV/qZuD8ofnAVn4v/DydTTMVMmK+sVU/TlnAu2eA==", "subType": "06" } } }, "aws_long_rand_auto_altname": { "kms": "aws", "type": "long", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AgFkgAAAAAAAAAAAAAAAAAASt+7fmMYH+fLHgybc+sng8/UmKP3YPUEPCz1SXVQljQp6orsCILSgtgGPsdeGnN5NSxh3XzerHs6zlR92fWpZCw==", "subType": "06" } } }, "aws_long_rand_explicit_id": { "kms": "aws", "type": "long", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AgFkgAAAAAAAAAAAAAAAAAAS01fF1uo6zYDToJnOT/EbDipzk7YZ6I+IspZF+avjU3XYfpRxT9NdAgKr0euWJwyAsdpWqqCwFummfrPeZOy04A==", "subType": "06" } } }, "aws_long_rand_explicit_altname": { "kms": "aws", "type": "long", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AgFkgAAAAAAAAAAAAAAAAAAS6tpH796bqy58mXf38rJvVtA1uBcxBE5yIGQ4RN44oypc/pvw0ouhFI1dkoneKMtAFU/5RygZV+RvQhRtgKn76A==", "subType": "06" } } }, "aws_long_det_auto_id": { "kms": "aws", "type": "long", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AQFkgAAAAAAAAAAAAAAAAAASC7O/8JeB4WTqQFPuMpFRsAuonPS3yu7IAPZeRPIr03CmM6HNndYIKMoFM13eELNZTdJSgg9u9ItGqRw+/XMHzQ==", "subType": "06" } } }, "aws_long_det_explicit_id": { "kms": "aws", "type": "long", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AQFkgAAAAAAAAAAAAAAAAAASC7O/8JeB4WTqQFPuMpFRsAuonPS3yu7IAPZeRPIr03CmM6HNndYIKMoFM13eELNZTdJSgg9u9ItGqRw+/XMHzQ==", "subType": "06" } } }, "aws_long_det_explicit_altname": { "kms": "aws", "type": "long", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AQFkgAAAAAAAAAAAAAAAAAASC7O/8JeB4WTqQFPuMpFRsAuonPS3yu7IAPZeRPIr03CmM6HNndYIKMoFM13eELNZTdJSgg9u9ItGqRw+/XMHzQ==", "subType": "06" } } }, "aws_decimal_rand_auto_id": { "kms": "aws", "type": "decimal", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AgFkgAAAAAAAAAAAAAAAAAATgf5zW9EgnWHPxj4HAGt472eN9UXP41TaF8V2J7S2zqSpiBZGKDuOIjw2FBSqaNp53vvfl9HpwAuQBJZhrwkBCKRkKV/AAR3/pTpuoqhSKaM=", "subType": "06" } } }, "aws_decimal_rand_auto_altname": { "kms": "aws", "type": "decimal", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AgFkgAAAAAAAAAAAAAAAAAATPRfvZWdupE2N0W1DXUx7X8Zz7g43jawJL7PbQtTYetI78xRETkMdygwSEHgs+cvnUBBtYIeKRVkOGZQkwf568OclhDiPxUeD38cR5blBq/U=", "subType": "06" } } }, "aws_decimal_rand_explicit_id": { "kms": "aws", "type": "decimal", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AgFkgAAAAAAAAAAAAAAAAAAT+ZnCg2lSMIohZ9RJ4CNs3LZ0g+nV04cYAmrxTSrTSBPGlZ7Ywh5A2rCss7AUijYZiKiYyZbuAzukbOuVRhdCtm+xo9+DyLAwTezF18okk6Y=", "subType": "06" } } }, "aws_decimal_rand_explicit_altname": { "kms": "aws", "type": "decimal", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": 
"AgFkgAAAAAAAAAAAAAAAAAATlnQYASsTZRRHzFjcbCClXartcXBVRrYv7JImMkDmAj6EAjf/ZqpjeykkS/wohMhXaNwyZBdREr+n+GDV7imYoL4WRBOLnqB6hrYidlWqNzE=", "subType": "06" } } }, "aws_decimal_det_explicit_id": { "kms": "aws", "type": "decimal", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$numberDecimal": "1.234" } }, "aws_decimal_det_explicit_altname": { "kms": "aws", "type": "decimal", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$numberDecimal": "1.234" } }, "aws_minKey_rand_explicit_id": { "kms": "aws", "type": "minKey", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$minKey": 1 } }, "aws_minKey_rand_explicit_altname": { "kms": "aws", "type": "minKey", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$minKey": 1 } }, "aws_minKey_det_explicit_id": { "kms": "aws", "type": "minKey", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$minKey": 1 } }, "aws_minKey_det_explicit_altname": { "kms": "aws", "type": "minKey", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$minKey": 1 } }, "aws_maxKey_rand_explicit_id": { "kms": "aws", "type": "maxKey", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$maxKey": 1 } }, "aws_maxKey_rand_explicit_altname": { "kms": "aws", "type": "maxKey", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$maxKey": 1 } }, "aws_maxKey_det_explicit_id": { "kms": "aws", "type": "maxKey", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$maxKey": 1 } }, "aws_maxKey_det_explicit_altname": { "kms": "aws", "type": "maxKey", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$maxKey": 1 } }, "local_double_rand_auto_id": { "kms": "local", "type": "double", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAABGF195CB8nRmK9+KxYO7T96MeXucC/ILQtEEQAS4zrwj3Qz7YEQrf/apvbKTCkn3siN2XSDLQ/7dmddZa9xa9yQ==", "subType": "06" } } }, "local_double_rand_auto_altname": { "kms": "local", "type": "double", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAABY8g18z6ZOjGtfNxaAmU95tXMdoM6qbtDMpB72paqiHZTW1UGB22HPXiEnVz05JTBzzX4fc6tOldX6aJel812Zg==", "subType": "06" } } }, "local_double_rand_explicit_id": { "kms": "local", "type": "double", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAABDlHwN8hYyScEhhx64TdJ2Qp2rmKRg8983zdqIL1914tyPwRQq7ySCOhmFif2S7v4KT+r0uOfimYvKD1n9rKHlg==", "subType": "06" } } }, "local_double_rand_explicit_altname": { "kms": "local", "type": "double", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAAB2VnTFlaCRzAZZTQiMWQORFNgXIuAJlHJXIHiYow2eO6JbVghWTpH+MsdafBNPVnc0zKuZBL0Qs2Nuk1xiQaqhA==", "subType": "06" } } }, "local_double_det_explicit_id": { "kms": "local", "type": "double", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$numberDouble": "1.234" } }, "local_double_det_explicit_altname": { "kms": "local", "type": "double", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": { 
"$numberDouble": "1.234" } }, "local_string_rand_auto_id": { "kms": "local", "type": "string", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAAC5NBAPM8q2n9fnkwQfE9so/XcO51plPBNs5VlBRbDw68k9T6/uZ2TWsAvTYtVooY59zHHr2QS3usKbGQB6J61rA==", "subType": "06" } } }, "local_string_rand_auto_altname": { "kms": "local", "type": "string", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAACM/EjGMrkYHvSZra26m74upuvLkfKXTs+tTWquGzrgWYLnLt8I6XBIwx1VymS9EybrCU/ewmtgjLUNUFQacIeXA==", "subType": "06" } } }, "local_string_rand_explicit_id": { "kms": "local", "type": "string", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAACn4tD26UG8lO9gTZaxen6yXzHo/a2lokeY1ClxHMtJODoJr2JZzIDHP3A9aZ8L4+Vu+nyqphaWyGaGONKu8gpcQ==", "subType": "06" } } }, "local_string_rand_explicit_altname": { "kms": "local", "type": "string", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAACZfoO2LjY+IB31FZ1Tq7pHr0DCFKGJqWcXcOrnZ7bV9Euc9f101motJc31sp8nF5CTCfd83VQE0319eQrxDDaSw==", "subType": "06" } } }, "local_string_det_auto_id": { "kms": "local", "type": "string", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ASzggCwAAAAAAAAAAAAAAAACW0cZMYWOY3eoqQQkSdBtS9iHC4CSQA27dy6XJGcmTV8EDuhGNnPmbx0EKFTDb0PCSyCjMyuE4nsgmNYgjTaSuw==", "subType": "06" } } }, "local_string_det_explicit_id": { "kms": "local", "type": "string", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ASzggCwAAAAAAAAAAAAAAAACW0cZMYWOY3eoqQQkSdBtS9iHC4CSQA27dy6XJGcmTV8EDuhGNnPmbx0EKFTDb0PCSyCjMyuE4nsgmNYgjTaSuw==", "subType": "06" } } }, "local_string_det_explicit_altname": { "kms": "local", "type": "string", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "ASzggCwAAAAAAAAAAAAAAAACW0cZMYWOY3eoqQQkSdBtS9iHC4CSQA27dy6XJGcmTV8EDuhGNnPmbx0EKFTDb0PCSyCjMyuE4nsgmNYgjTaSuw==", "subType": "06" } } }, "local_object_rand_auto_id": { "kms": "local", "type": "object", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAADlekcUsETAkkKTjCVx5EISJN+sftrQax/VhaWXLyRgRz97adXXmwZkMyt+035SHZsF91i2LaXziMA4RHoP+nKFw==", "subType": "06" } } }, "local_object_rand_auto_altname": { "kms": "local", "type": "object", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAADpaQmy5r6q9gLqEm+FIi/OyQgcuUnrICCP9rC4S3wR6qUHd82IW/3dFQUzwTkaXxgStjopamQMuZ4ESRj0xx0bA==", "subType": "06" } } }, "local_object_rand_explicit_id": { "kms": "local", "type": "object", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAADCHRJCINzWY0u4gZPWEmHg/JoQ8IW4yMfUyzYJCQrEMp4rUeupIuxqSuq2QyLBYZBBv0r7t3lNH49I5qDeav2vA==", "subType": "06" } } }, "local_object_rand_explicit_altname": { "kms": "local", "type": "object", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAADrHQQUnLF1jdNmFY/V266cS28XAB4nOKetHAcSbwkeUxNzgZT1g+XMQaYfcNMMv/ywypKU1KpgLMsEOpm4qcPkQ==", "subType": "06" } } }, "local_object_det_explicit_id": { 
"kms": "local", "type": "object", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": { "x": { "$numberInt": "1" } } }, "local_object_det_explicit_altname": { "kms": "local", "type": "object", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "x": { "$numberInt": "1" } } }, "local_array_rand_auto_id": { "kms": "local", "type": "array", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAAEXa7bQ5vGPNsLdklM/H+sop8aCL4vlDiVUoVjTAGjTngn2WLcdKLWxaNSyMdJpsI/NsxQJ58YrcwP+yHzi9rZVtRdbg7m8p+CYcq1vUm6UoQ=", "subType": "06" } } }, "local_array_rand_auto_altname": { "kms": "local", "type": "array", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAAEVlZlOvtRmGIhcYi/qPl3HKi/qf0yRQrkbVo9rScYkxDCBN9wA55pAWHDQ/5Sjy4d0DwL57k+M1G9e7xSIrv8xXKwoIuuabhSWaIX2eJHroY=", "subType": "06" } } }, "local_array_rand_explicit_id": { "kms": "local", "type": "array", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAAEYBLSYHHt2rohezMF4lMjNdqy9CY33EHf+pgRbJwVXZScLDgn9CcqeRsdU8bW5h2qgNpQvoSMBB7pW+Dgp1RauTHZSOd4PcZpAGjwoFDWSSM=", "subType": "06" } } }, "local_array_rand_explicit_altname": { "kms": "local", "type": "array", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAAES1IJ8S2NxWekolQockxLJvzFSGfKQ9Xbi55vO8LyWo0sIG9ZgPQXtVQkZ301CsdFduvx9A0vDqQ0MGYc4plxNnpUTizJPRUDyez5dOgZ9tI=", "subType": "06" } } }, "local_array_det_explicit_id": { "kms": "local", "type": "array", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": [ { "$numberInt": "1" }, { "$numberInt": "2" }, { "$numberInt": "3" } ] }, "local_array_det_explicit_altname": { "kms": "local", "type": "array", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": [ { "$numberInt": "1" }, { "$numberInt": "2" }, { "$numberInt": "3" } ] }, "local_binData=00_rand_auto_id": { "kms": "local", "type": "binData=00", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAAF+hgWs4ZCo9GnmhSM9SDSWzWX4E7Tlp4TwlEy3zfO/rrMREECGB4u8LD8Ju9b8YP+xcZhMI1tcz/vrQS87NffUg==", "subType": "06" } } }, "local_binData=00_rand_auto_altname": { "kms": "local", "type": "binData=00", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAAFtEvaXWpGfXC1GlKu0AeRDaeBKHryGoS0tAUr48vfYk7umCr+fJKyXCY9vSv7wCiQxWLe8V/EZWkHsu0zqhJw9w==", "subType": "06" } } }, "local_binData=00_rand_explicit_id": { "kms": "local", "type": "binData=00", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAAF/1L5bvmMX3Bk2nAw8KvvRd/7nZ82XHVasT0jrlPhSiJU7ehJMeUCOb7HCHU6KgCzZB9C2W3NoVhLKIhE9ZnYdg==", "subType": "06" } } }, "local_binData=00_rand_explicit_altname": { "kms": "local", "type": "binData=00", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAAFK0W5IWKzggR4UU+fhwA2p8YCHLfmx5y1OEtHc/9be9eEYTORACDmWY6207Vd4LhBJCedd+Q5qMm7NRZjjhyLEQ==", "subType": "06" } } }, "local_binData=00_det_auto_id": { "kms": "local", "type": "binData=00", "algo": "det", "method": 
"auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ASzggCwAAAAAAAAAAAAAAAAF1ofBnK9+ERP29P/i14GQ/y3muic6tNKY532zCkzQkJSktYCOeXS8DdY1DdaOP/asZWzPTdgwby6/iZcAxJU+xQ==", "subType": "06" } } }, "local_binData=00_det_explicit_id": { "kms": "local", "type": "binData=00", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ASzggCwAAAAAAAAAAAAAAAAF1ofBnK9+ERP29P/i14GQ/y3muic6tNKY532zCkzQkJSktYCOeXS8DdY1DdaOP/asZWzPTdgwby6/iZcAxJU+xQ==", "subType": "06" } } }, "local_binData=00_det_explicit_altname": { "kms": "local", "type": "binData=00", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "ASzggCwAAAAAAAAAAAAAAAAF1ofBnK9+ERP29P/i14GQ/y3muic6tNKY532zCkzQkJSktYCOeXS8DdY1DdaOP/asZWzPTdgwby6/iZcAxJU+xQ==", "subType": "06" } } }, "local_binData=04_rand_auto_id": { "kms": "local", "type": "binData=04", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAAFxq38aA4k/tYHPwJFRK0pahlo/3zjCe3VHJRqURRA+04lbJCvdkQTawxWlf8o+3Pcetl1UcPTQigdYp5KbIkstuPstLbT+TZXHVD1os9LTRw=", "subType": "06" } } }, "local_binData=04_rand_auto_altname": { "kms": "local", "type": "binData=04", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAAFTXNWchCPmCSY0+AL22/kCBmAoDJDX5T18jpJHLdvZtHs0zwD64b9hLvfRK268BlNu4P37KDFE6LT0QzjG7brqzFJf3ZaadDCKeIw1q7DWQs=", "subType": "06" } } }, "local_binData=04_rand_explicit_id": { "kms": "local", "type": "binData=04", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAAF7XgMgKjQmWYWmobrYWKiGYCKsy5kTgVweFBuzvFISaZjFsq2hrZB2DwUaOeT6XUPH/Onrdjc3fNElf3FdQDHif4rt+1lh9jEX+nMbRw9i3s=", "subType": "06" } } }, "local_binData=04_rand_explicit_altname": { "kms": "local", "type": "binData=04", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAAFGoA/1H0waFLor6LbkUCLC2Wm9j/ZT7yifPbf0G7WvO0+gBLlffr3aJIQ9ik5vxPbmDDMCoYlbEYgb8i9I5tKC17WPhjVH2N2+4l9y7aEmS4=", "subType": "06" } } }, "local_binData=04_det_auto_id": { "kms": "local", "type": "binData=04", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ASzggCwAAAAAAAAAAAAAAAAFwO3hsD8ee/uwgUiHWem8fGe54LsTJWqgbRCacIe6sxrsyLT6EsVIqg4Sn7Ou+FC3WJbFld5kx8euLe/MHa8FGYjxD97z5j+rUx5tt3T6YbA=", "subType": "06" } } }, "local_binData=04_det_explicit_id": { "kms": "local", "type": "binData=04", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ASzggCwAAAAAAAAAAAAAAAAFwO3hsD8ee/uwgUiHWem8fGe54LsTJWqgbRCacIe6sxrsyLT6EsVIqg4Sn7Ou+FC3WJbFld5kx8euLe/MHa8FGYjxD97z5j+rUx5tt3T6YbA=", "subType": "06" } } }, "local_binData=04_det_explicit_altname": { "kms": "local", "type": "binData=04", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "ASzggCwAAAAAAAAAAAAAAAAFwO3hsD8ee/uwgUiHWem8fGe54LsTJWqgbRCacIe6sxrsyLT6EsVIqg4Sn7Ou+FC3WJbFld5kx8euLe/MHa8FGYjxD97z5j+rUx5tt3T6YbA=", "subType": "06" } } }, "local_undefined_rand_explicit_id": { "kms": "local", "type": "undefined", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$undefined": true } }, "local_undefined_rand_explicit_altname": { "kms": "local", "type": "undefined", 
"algo": "rand", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$undefined": true } }, "local_undefined_det_explicit_id": { "kms": "local", "type": "undefined", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$undefined": true } }, "local_undefined_det_explicit_altname": { "kms": "local", "type": "undefined", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$undefined": true } }, "local_objectId_rand_auto_id": { "kms": "local", "type": "objectId", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAAHfvxWRZOzfao3faE3RglL0IcDpBcNwqiGL5KgSokmRxWjjWeiel88Mbo5Plo0SswwNQ2H7C5GVG21L+UbvcW63g==", "subType": "06" } } }, "local_objectId_rand_auto_altname": { "kms": "local", "type": "objectId", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAAHhd9lSOO7bHE7PM+Uxa2v3X1FF66IwyEr0wqnyTaOM+cHQLmec/RlEaRIQ1x2AiW7LwmmVgZ0xBMK9CMh0Lhbyw==", "subType": "06" } } }, "local_objectId_rand_explicit_id": { "kms": "local", "type": "objectId", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAAHETwT9bo+JtboBVW/8GzzMQCpn22iiNJnlxYfyO45jvYJQRs29RRIouCsnFkmC7cfAO3GlVxv113euYjIO7AlAg==", "subType": "06" } } }, "local_objectId_rand_explicit_altname": { "kms": "local", "type": "objectId", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAAHhsguAMBzQUFBAitpJDzKEaMDGUGfvCzmUUhf4rnp8xeall/p91TUudaSMcU11XEgJ0Mym4IbYRd8+TfUai0nvw==", "subType": "06" } } }, "local_objectId_det_auto_id": { "kms": "local", "type": "objectId", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ASzggCwAAAAAAAAAAAAAAAAH4ElF4AvQ+kkGfhadgKNy3GcYrDZPN6RpzaMYIhcCGDvC9W+cIS9dH1aJbPU7vTPmEZnnynPTDWjw3rAj2+9mOA==", "subType": "06" } } }, "local_objectId_det_explicit_id": { "kms": "local", "type": "objectId", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ASzggCwAAAAAAAAAAAAAAAAH4ElF4AvQ+kkGfhadgKNy3GcYrDZPN6RpzaMYIhcCGDvC9W+cIS9dH1aJbPU7vTPmEZnnynPTDWjw3rAj2+9mOA==", "subType": "06" } } }, "local_objectId_det_explicit_altname": { "kms": "local", "type": "objectId", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "ASzggCwAAAAAAAAAAAAAAAAH4ElF4AvQ+kkGfhadgKNy3GcYrDZPN6RpzaMYIhcCGDvC9W+cIS9dH1aJbPU7vTPmEZnnynPTDWjw3rAj2+9mOA==", "subType": "06" } } }, "local_bool_rand_auto_id": { "kms": "local", "type": "bool", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAAIxGld4J/2vSWg5tjQulpkm9C6WeUcLbv2yfKRXPAbmLpv3u4Yrmr5qisJtqmDPTcb993WosvCYAh0UGW+zpsdEg==", "subType": "06" } } }, "local_bool_rand_auto_altname": { "kms": "local", "type": "bool", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAAIpUFPiS2uoW1Aqs0WQkBa201OBmsuJ8WUKcv5aBPASkcwfaw9qSWs3QrbEDR2GyoU4SeYOByCAQMzXCPoIYAFdQ==", "subType": "06" } } }, "local_bool_rand_explicit_id": { "kms": "local", "type": "bool", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": 
"AizggCwAAAAAAAAAAAAAAAAIJuzu1a60meYlU3LMjw/7G4Vh/lqKopxdpGWoLXEmY/NoHgX6Fkv9iTwxv/Nv8rZwtawpFV+mQUG/6A1IHMBASQ==", "subType": "06" } } }, "local_bool_rand_explicit_altname": { "kms": "local", "type": "bool", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAAIn9VjxL5TdGgJLckNHRrIaL32L31q5OERRZG2M5OYKk66TnrlfEs+ykcDvGwMGKpr/PYjY5kBHDc/oELGJJbWRQ==", "subType": "06" } } }, "local_bool_det_explicit_id": { "kms": "local", "type": "bool", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": true }, "local_bool_det_explicit_altname": { "kms": "local", "type": "bool", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": true }, "local_date_rand_auto_id": { "kms": "local", "type": "date", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAAJPPv4MC5xzt2uxPGBHH9g2z03o9SQjjmuxt97Ub1UcKCCHsGED3bx6YSrocuEMiFFI4d5Fqgl8HNeS4j0PR0tYA==", "subType": "06" } } }, "local_date_rand_auto_altname": { "kms": "local", "type": "date", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAAJ6i2A9Hi4xWlOMjFMGpwaRctR1VFnb4El166n18RvjKic46V+WoadvLHS32RhPOvkLVYwIeU4C+vrO5isBNoUdw==", "subType": "06" } } }, "local_date_rand_explicit_id": { "kms": "local", "type": "date", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAAJHcniV7Q0C8ZTWrE0hp5i5bUPlrrRdNLZckfODw8XNVtVPDjbznglccQmI7w1t8kOVp65eKzVzUOXN0YkqA+1QA==", "subType": "06" } } }, "local_date_rand_explicit_altname": { "kms": "local", "type": "date", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAAJKCUCjC3hsmEKKYwGP3ceh3zR+ArE8LYFOQfN87aEsTr60VrzHXmsE8PvizRhhMnrp07ljzQkuat39L+0QSR2qQ==", "subType": "06" } } }, "local_date_det_auto_id": { "kms": "local", "type": "date", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ASzggCwAAAAAAAAAAAAAAAAJ1GMYQTruoKr6fv9XCbcVkx/3yivymPSMEkPCRDYxQv45w4TqBKMDfpRd1TOLOv1qvcb+gjH+z5IfVBMp2IpG/Q==", "subType": "06" } } }, "local_date_det_explicit_id": { "kms": "local", "type": "date", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ASzggCwAAAAAAAAAAAAAAAAJ1GMYQTruoKr6fv9XCbcVkx/3yivymPSMEkPCRDYxQv45w4TqBKMDfpRd1TOLOv1qvcb+gjH+z5IfVBMp2IpG/Q==", "subType": "06" } } }, "local_date_det_explicit_altname": { "kms": "local", "type": "date", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "ASzggCwAAAAAAAAAAAAAAAAJ1GMYQTruoKr6fv9XCbcVkx/3yivymPSMEkPCRDYxQv45w4TqBKMDfpRd1TOLOv1qvcb+gjH+z5IfVBMp2IpG/Q==", "subType": "06" } } }, "local_null_rand_explicit_id": { "kms": "local", "type": "null", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": false, "value": null }, "local_null_rand_explicit_altname": { "kms": "local", "type": "null", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": false, "value": null }, "local_null_det_explicit_id": { "kms": "local", "type": "null", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": null }, "local_null_det_explicit_altname": { "kms": "local", "type": "null", "algo": "det", "method": 
"explicit", "identifier": "altname", "allowed": false, "value": null }, "local_regex_rand_auto_id": { "kms": "local", "type": "regex", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAALXKw7zSgqQj1AKoWO0MoMxsBuu0cMB6KdJQCRKdupoLV/Y22owwsVpDDMv5sgUpkG5YIV+Fz7taHodXE07qHopw==", "subType": "06" } } }, "local_regex_rand_auto_altname": { "kms": "local", "type": "regex", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAALntOLXq7VW1+jwba/dSbidMo2bewNo7AtK9A1CPwk9XrjUQaEOQxfRpho3BYQEo2U67fQdsY/tyhaj4jduHn9JQ==", "subType": "06" } } }, "local_regex_rand_explicit_id": { "kms": "local", "type": "regex", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAALlMMG2iS/gEOEsVKR7sxBJP2IUzZ+aRbozDSkqADncresBvaPBSE17lng5NG7H1JRCAcP1rH/Te+0CrMd7JpRAQ==", "subType": "06" } } }, "local_regex_rand_explicit_altname": { "kms": "local", "type": "regex", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAAL1YNnlVu5+njDLxh1LMhIPOH19RykAXhxrUbCy6TI5MLQsAOSgAJbXOTXeKr0D8/Ff0phToWOKl193gOOIp8yZQ==", "subType": "06" } } }, "local_regex_det_auto_id": { "kms": "local", "type": "regex", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ASzggCwAAAAAAAAAAAAAAAALiZbL5nFIZl7cSLH5E3wK3jJeAeFc7hLHNITtLAu+o10raEs5i/UCihMHmkf8KHZxghs056pfm5BjPzlL9x7IHQ==", "subType": "06" } } }, "local_regex_det_explicit_id": { "kms": "local", "type": "regex", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ASzggCwAAAAAAAAAAAAAAAALiZbL5nFIZl7cSLH5E3wK3jJeAeFc7hLHNITtLAu+o10raEs5i/UCihMHmkf8KHZxghs056pfm5BjPzlL9x7IHQ==", "subType": "06" } } }, "local_regex_det_explicit_altname": { "kms": "local", "type": "regex", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "ASzggCwAAAAAAAAAAAAAAAALiZbL5nFIZl7cSLH5E3wK3jJeAeFc7hLHNITtLAu+o10raEs5i/UCihMHmkf8KHZxghs056pfm5BjPzlL9x7IHQ==", "subType": "06" } } }, "local_dbPointer_rand_auto_id": { "kms": "local", "type": "dbPointer", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAAMUdAA9uOSk1tXJVe/CG3Ps6avYTEF1eHj1wSlCHkFxqlMtTO+rIQpikpjH0MrcXvEEdAO8g5hFZ01I7DWyK5AAxTxDqVF+kOaQ2VfKs6hyuo=", "subType": "06" } } }, "local_dbPointer_rand_auto_altname": { "kms": "local", "type": "dbPointer", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAAMiNqvqLwZrPnsF235z+Obl1K9iEXdJ5GucMGpJdRG4lRvRE0Oy1vh6ztNTpYPY/tXyUFTBWlzl/lITalSEm/dT1Bnlh0iPAFrAiNySf662og=", "subType": "06" } } }, "local_dbPointer_rand_explicit_id": { "kms": "local", "type": "dbPointer", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAAM+Tn31YcKiowBTJWRYCYAEO7UARDE2/jTVGEKXCpiwEqqP3JSAS0b80zYt8dxo5mVhUo2a02ClKrB8vs+B6sU1kXrahSaVSEHZlRSGN9fWgo=", "subType": "06" } } }, "local_dbPointer_rand_explicit_altname": { "kms": "local", "type": "dbPointer", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": 
"AizggCwAAAAAAAAAAAAAAAAMdOZZUvpJIqG9qiOLy5x4BdftyHipPDZn/eeLEc7ir3v4jJsY3dsv6fQERo5U9lMynNGA9PJePVzq5tWsIMX0EcCQcMfGmosfkYDzN1OX99A=", "subType": "06" } } }, "local_dbPointer_det_auto_id": { "kms": "local", "type": "dbPointer", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ASzggCwAAAAAAAAAAAAAAAAMQWace2C1w3yqtmo/rgz3YtIDnx1Ia/oDsoHnnMZlEy5RoK3uosi1hvNAZCSg3Sen0H7MH3XVhGGMCL4cS69uJ0ENSvh+K6fiZzAXCKUPfvM=", "subType": "06" } } }, "local_dbPointer_det_explicit_id": { "kms": "local", "type": "dbPointer", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ASzggCwAAAAAAAAAAAAAAAAMQWace2C1w3yqtmo/rgz3YtIDnx1Ia/oDsoHnnMZlEy5RoK3uosi1hvNAZCSg3Sen0H7MH3XVhGGMCL4cS69uJ0ENSvh+K6fiZzAXCKUPfvM=", "subType": "06" } } }, "local_dbPointer_det_explicit_altname": { "kms": "local", "type": "dbPointer", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "ASzggCwAAAAAAAAAAAAAAAAMQWace2C1w3yqtmo/rgz3YtIDnx1Ia/oDsoHnnMZlEy5RoK3uosi1hvNAZCSg3Sen0H7MH3XVhGGMCL4cS69uJ0ENSvh+K6fiZzAXCKUPfvM=", "subType": "06" } } }, "local_javascript_rand_auto_id": { "kms": "local", "type": "javascript", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAANNL2AMKwTDyMIvxLKhBxZKx50C0tBdkLwuXmuMcrUqZeH8bsvjtttoM9LWkkileMyeTWgxblJ1b+uQ+V+4VT6fA==", "subType": "06" } } }, "local_javascript_rand_auto_altname": { "kms": "local", "type": "javascript", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAANBjBlHGw3K3TWQHpvfa1z0bKhNnVFC/lZArIexo3wjdGq3MdkGA5cuBIp87HHmOIv6o/pvQ9K74v48RQl+JH44A==", "subType": "06" } } }, "local_javascript_rand_explicit_id": { "kms": "local", "type": "javascript", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAANjvM7u3vNVyKpyI7g5kbzBpHPzXzOQToDSng5/c9yjMG+qi4TPtOyassobJOnMmDYBLyqRXCl/GsDLprbg5jxuA==", "subType": "06" } } }, "local_javascript_rand_explicit_altname": { "kms": "local", "type": "javascript", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAANMtO7KneuVx4gSOjX4MQjKL80zJhnt+efDBylkpNsqKyxBXB60nkiredGzwaK3/4QhIfGJrC1fQpwUwu/v1L17g==", "subType": "06" } } }, "local_javascript_det_auto_id": { "kms": "local", "type": "javascript", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ASzggCwAAAAAAAAAAAAAAAANmQsg9E/BzGJVNVhSNyunS/TH0332oVFdPS6gjX0Cp/JC0YhB97DLz3N4e/q8ECaz7tTdQt9JacNUgxo+YCULUA==", "subType": "06" } } }, "local_javascript_det_explicit_id": { "kms": "local", "type": "javascript", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ASzggCwAAAAAAAAAAAAAAAANmQsg9E/BzGJVNVhSNyunS/TH0332oVFdPS6gjX0Cp/JC0YhB97DLz3N4e/q8ECaz7tTdQt9JacNUgxo+YCULUA==", "subType": "06" } } }, "local_javascript_det_explicit_altname": { "kms": "local", "type": "javascript", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "ASzggCwAAAAAAAAAAAAAAAANmQsg9E/BzGJVNVhSNyunS/TH0332oVFdPS6gjX0Cp/JC0YhB97DLz3N4e/q8ECaz7tTdQt9JacNUgxo+YCULUA==", "subType": "06" } } }, "local_symbol_rand_auto_id": { "kms": "local", "type": "symbol", "algo": "rand", "method": 
"auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAAOOuO2b23mekwI8b6gWeEgRy1lLOCsNyBKvdmizK7/oOVKCvd+3kwUn9a6TxygooiVAN/Aohr1cjb8jRlMPWpkP0iO0+Tt6+vkizgFsQW4iio=", "subType": "06" } } }, "local_symbol_rand_auto_altname": { "kms": "local", "type": "symbol", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAAOhN4QPOcmGnFKGvTfhz6TQleDA02X6oWULLHTnOUJYfE3OUSyf2ULEQh1yhdKdwXMuYVgGl28pMosiwkBShrXYe5ZlMjiZCIMZWSdUMV0tXk=", "subType": "06" } } }, "local_symbol_rand_explicit_id": { "kms": "local", "type": "symbol", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAAO9aWi9RliwQHdXHoJME9VyN6XgyGd95Eclx+ZFYfLxBGAuUnPNjSfVuNZwYdyKC8JX79+mYhk7IXmcGV4z+4486sxyLk3idi4Kmpz2ESqV5g=", "subType": "06" } } }, "local_symbol_rand_explicit_altname": { "kms": "local", "type": "symbol", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAAO/qev3DPfpkQoSW9aHOyalwfI/VYDQVN5VMINx4kw2vEqHiI1HRdZRPOz3q74TlQEy3TMNMTYdCvh5bpN/PptRZCTQbzP6ugz9dTp79w5/Ok=", "subType": "06" } } }, "local_symbol_det_auto_id": { "kms": "local", "type": "symbol", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ASzggCwAAAAAAAAAAAAAAAAOsg5cs6VpZWoTOFg4ztZmpj8kSTeCArVcI1Zz2pOnmMqNv/vcKQGhKSBbfniMripr7iuiYtlgkHGsdO2FqUp6Jb8NEWm5uWqdNU21zR9SRkE=", "subType": "06" } } }, "local_symbol_det_explicit_id": { "kms": "local", "type": "symbol", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ASzggCwAAAAAAAAAAAAAAAAOsg5cs6VpZWoTOFg4ztZmpj8kSTeCArVcI1Zz2pOnmMqNv/vcKQGhKSBbfniMripr7iuiYtlgkHGsdO2FqUp6Jb8NEWm5uWqdNU21zR9SRkE=", "subType": "06" } } }, "local_symbol_det_explicit_altname": { "kms": "local", "type": "symbol", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "ASzggCwAAAAAAAAAAAAAAAAOsg5cs6VpZWoTOFg4ztZmpj8kSTeCArVcI1Zz2pOnmMqNv/vcKQGhKSBbfniMripr7iuiYtlgkHGsdO2FqUp6Jb8NEWm5uWqdNU21zR9SRkE=", "subType": "06" } } }, "local_javascriptWithScope_rand_auto_id": { "kms": "local", "type": "javascriptWithScope", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAAP5gLMvLOAc6vGAvC7bGmEC4eweptAiX3A7L0iCoHps/wm0FBLkfpF6F4pCjVYiY1lTID38wliRLPyhntCj+cfvlMfKSjouNgXMIWyQ8GKZ2c=", "subType": "06" } } }, "local_javascriptWithScope_rand_auto_altname": { "kms": "local", "type": "javascriptWithScope", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAAPVsw9Opn/P5SAdJhX4MTxIcsmaG8isIN4NKPi9k1u/Vj7AVkcxYqwurAghaJpmfoAgMruvzi1hcKvd05yHd9Nk0vkvODwDgnjJB6QO+qUce8=", "subType": "06" } } }, "local_javascriptWithScope_rand_explicit_id": { "kms": "local", "type": "javascriptWithScope", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAAPLUa+nsrqiHkVdE5K1xl/ZsiZqQznG2yVXyA3b3loBylbcL2NEBp1JUeGnPZ0y5ZK4AmoL6NMH2Io313rW3V8FTArs/OOQWPRJSe6h0M3wXk=", "subType": "06" } } }, "local_javascriptWithScope_rand_explicit_altname": { "kms": "local", "type": "javascriptWithScope", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { 
"base64": "AizggCwAAAAAAAAAAAAAAAAPzUKaXCH0JImSlY73HVop9g9c0YssNEiA7Dy7Vji61avxvnuJJfghDchdwwaY7Vc8+0bymoanUWcErRctLzjm+1uKeMnFQokR8wFtnS3PgpQ=", "subType": "06" } } }, "local_javascriptWithScope_det_explicit_id": { "kms": "local", "type": "javascriptWithScope", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$code": "x=1", "$scope": {} } }, "local_javascriptWithScope_det_explicit_altname": { "kms": "local", "type": "javascriptWithScope", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$code": "x=1", "$scope": {} } }, "local_int_rand_auto_id": { "kms": "local", "type": "int", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAAQHXpXb3KlHA2KFTBgl0VoLCu0CUf1ae4DckkwDorbredVSqxvA5e+NvVudY5yuea6bC9F57JlbjI8NWYAUw4q0Q==", "subType": "06" } } }, "local_int_rand_auto_altname": { "kms": "local", "type": "int", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAAQSxXMF4+TKV+a3lcxXky8VepEqdg5wI/jg+C4CAUgNurq2XhgrxyqiMjkU8z07tfyoLYyX6P+dTrwj6nzvvchCw==", "subType": "06" } } }, "local_int_rand_explicit_id": { "kms": "local", "type": "int", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAAQmzteYnshCI8HBGd7UYUKvcg4xl6M8PRyi1xX/WHbjyQkAJXxczS8hO91wuqStE3tBNSmulUejz9S691ufTd6ZA==", "subType": "06" } } }, "local_int_rand_explicit_altname": { "kms": "local", "type": "int", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAAQLCHLru//++QSoWVEyw2v6TUfCnlrPJXrpLLezWf16vK85jTfm8vJbb2X2UzX04wGzVL9tCFFsWX6Z5gHXhgSBg==", "subType": "06" } } }, "local_int_det_auto_id": { "kms": "local", "type": "int", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ASzggCwAAAAAAAAAAAAAAAAQIxWjLBromNUgiOoeoZ4RUJUYIfhfOmab0sa4qYlS9bgYI41FU6BtzaOevR16O9i+uACbiHL0X6FMXKjOmiRAug==", "subType": "06" } } }, "local_int_det_explicit_id": { "kms": "local", "type": "int", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ASzggCwAAAAAAAAAAAAAAAAQIxWjLBromNUgiOoeoZ4RUJUYIfhfOmab0sa4qYlS9bgYI41FU6BtzaOevR16O9i+uACbiHL0X6FMXKjOmiRAug==", "subType": "06" } } }, "local_int_det_explicit_altname": { "kms": "local", "type": "int", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "ASzggCwAAAAAAAAAAAAAAAAQIxWjLBromNUgiOoeoZ4RUJUYIfhfOmab0sa4qYlS9bgYI41FU6BtzaOevR16O9i+uACbiHL0X6FMXKjOmiRAug==", "subType": "06" } } }, "local_timestamp_rand_auto_id": { "kms": "local", "type": "timestamp", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAARntIycg0Xkd16GEa//VSJI4Rkl7dT6MpRa+D3MiTEeio5Yy8zGK0u2BtEP/9MCRQw2hJDYj5znVqwhdduM0OTiA==", "subType": "06" } } }, "local_timestamp_rand_auto_altname": { "kms": "local", "type": "timestamp", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAARWA9Ox5ejDPeWxfjbRgcGCtF/G5yrPMbBJD9ESDFc0NaVe0sdNNTisEVxsSkn7M/S4FCibKh+C8femr7xhu1iTw==", "subType": "06" } } }, "local_timestamp_rand_explicit_id": { "kms": "local", "type": "timestamp", "algo": "rand", "method": "explicit", "identifier": 
"id", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAARrEfOL4+4Qh7IkhHnHcBEANGfMF8n2wUDnsZ0lXEb0fACKzaN5OKaxMIQBs/3pFBw721qRfCHY+ByKeaQuABbzg==", "subType": "06" } } }, "local_timestamp_rand_explicit_altname": { "kms": "local", "type": "timestamp", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAARW8nwmnBt+LFIAcFWvOzX8llrGcveQKFhyYUIth9d7wtpTyc9myFp8GBQCnjDpKzA6lPmbqVYeLU0L9q0h6SHGQ==", "subType": "06" } } }, "local_timestamp_det_auto_id": { "kms": "local", "type": "timestamp", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ASzggCwAAAAAAAAAAAAAAAAR6uMylGytMq8QDr5Yz3w9HlW2MkGt6yIgUKcXYSaXru8eer+EkLv66/vy5rHqTfV0+8ryoi+d+PWO5U6b3Ng5Gg==", "subType": "06" } } }, "local_timestamp_det_explicit_id": { "kms": "local", "type": "timestamp", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ASzggCwAAAAAAAAAAAAAAAAR6uMylGytMq8QDr5Yz3w9HlW2MkGt6yIgUKcXYSaXru8eer+EkLv66/vy5rHqTfV0+8ryoi+d+PWO5U6b3Ng5Gg==", "subType": "06" } } }, "local_timestamp_det_explicit_altname": { "kms": "local", "type": "timestamp", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "ASzggCwAAAAAAAAAAAAAAAAR6uMylGytMq8QDr5Yz3w9HlW2MkGt6yIgUKcXYSaXru8eer+EkLv66/vy5rHqTfV0+8ryoi+d+PWO5U6b3Ng5Gg==", "subType": "06" } } }, "local_long_rand_auto_id": { "kms": "local", "type": "long", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAASrinKUOpHIB7MNRmCAPWcP4CjZwfr5JaRT3G/GqY9B/6csj3+N9jmo1fYvM8uHcnmf5hzDDOamaE2FF1jDKkrHw==", "subType": "06" } } }, "local_long_rand_auto_altname": { "kms": "local", "type": "long", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAAShWMPYDkCpTC2XLYyykPJMihASLKn6HHcB2Eh7jFwQb/8D1HCQoPmOHMyXaN4AtIKm1oqEfma6FSnEPENQoledQ==", "subType": "06" } } }, "local_long_rand_explicit_id": { "kms": "local", "type": "long", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAASd2h34ZLib+GiYayrm/FIZ/weg8wF41T0PfF8NCLTJCoT7gIkdpNRz2zkkQgZMR31efNKtsM8Bs4wgZbkrXsXWg==", "subType": "06" } } }, "local_long_rand_explicit_altname": { "kms": "local", "type": "long", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAASPAvdjz+a3FvXqDSjazaGqwZxrfXlfFB5/VjQFXQB0gpodCEaz1qaLSKfCWBg83ftrYKa/1sa44gU5NBthDfDwQ==", "subType": "06" } } }, "local_long_det_auto_id": { "kms": "local", "type": "long", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ASzggCwAAAAAAAAAAAAAAAASQk372m/hW3WX82/GH+ikPv3QUwK7Hh/RBpAguiNxMdNhkgA/y2gznVNm17t6djyub7+d5zN4P5PLS/EOm2kjtw==", "subType": "06" } } }, "local_long_det_explicit_id": { "kms": "local", "type": "long", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ASzggCwAAAAAAAAAAAAAAAASQk372m/hW3WX82/GH+ikPv3QUwK7Hh/RBpAguiNxMdNhkgA/y2gznVNm17t6djyub7+d5zN4P5PLS/EOm2kjtw==", "subType": "06" } } }, "local_long_det_explicit_altname": { "kms": "local", "type": "long", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": 
"ASzggCwAAAAAAAAAAAAAAAASQk372m/hW3WX82/GH+ikPv3QUwK7Hh/RBpAguiNxMdNhkgA/y2gznVNm17t6djyub7+d5zN4P5PLS/EOm2kjtw==", "subType": "06" } } }, "local_decimal_rand_auto_id": { "kms": "local", "type": "decimal", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAATLnMMDZhnGSn5F5xHhsJXxiTGXd61Eq6fgppOlxUNVlsZNYyr5tZ3owfTTqRuD9yRg97x65WiHewBBnJJSeirCTAy9zZxWPVlJSiC0gO7rbM=", "subType": "06" } } }, "local_decimal_rand_auto_altname": { "kms": "local", "type": "decimal", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAATenMh7NKQioGjpuEojIrYKFaJhbuGxUgu2yTTbe3TndhgHryhW9GXiUqo8WTpnXqpC5E/z03ZYLWfCbe7qGdL6T7bbrTpaTaWZnnAm3XaCqY=", "subType": "06" } } }, "local_decimal_rand_explicit_id": { "kms": "local", "type": "decimal", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAAT9vqXuKRh+2HxeCMr+pQYdhYNw7xrTdU4dySWz0X6tCK7LZO5AV72utmRJxID7Bqv1ZlXAk00V92oDLyKG9kHeG5+S34QE/aLCPsAWcppfxY=", "subType": "06" } } }, "local_decimal_rand_explicit_altname": { "kms": "local", "type": "decimal", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAATtqOCFMbOkls3LikQNXlnlkRr5gJns1+5Kvbt7P7texMa/QlXkYSHhtwESyfOcCQ2sw1T0eZ9DDuNaznpdK2KIqZBkVEC9iMoxqIqXF7Nab0=", "subType": "06" } } }, "local_decimal_det_explicit_id": { "kms": "local", "type": "decimal", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$numberDecimal": "1.234" } }, "local_decimal_det_explicit_altname": { "kms": "local", "type": "decimal", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$numberDecimal": "1.234" } }, "local_minKey_rand_explicit_id": { "kms": "local", "type": "minKey", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$minKey": 1 } }, "local_minKey_rand_explicit_altname": { "kms": "local", "type": "minKey", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$minKey": 1 } }, "local_minKey_det_explicit_id": { "kms": "local", "type": "minKey", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$minKey": 1 } }, "local_minKey_det_explicit_altname": { "kms": "local", "type": "minKey", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$minKey": 1 } }, "local_maxKey_rand_explicit_id": { "kms": "local", "type": "maxKey", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$maxKey": 1 } }, "local_maxKey_rand_explicit_altname": { "kms": "local", "type": "maxKey", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$maxKey": 1 } }, "local_maxKey_det_explicit_id": { "kms": "local", "type": "maxKey", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$maxKey": 1 } }, "local_maxKey_det_explicit_altname": { "kms": "local", "type": "maxKey", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$maxKey": 1 } }, "payload=0,algo=rand": { "kms": "local", "type": "string", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": 
"AizggCwAAAAAAAAAAAAAAAACcsBdT93ivCyvtkfQz9qb1A9Ll+I6hnGE0kFy3rmVG6xAvipmRJSoVq3iv7iUEDvaqmPXfjeH8h8cPYT86v3XSg==", "subType": "06" } } }, "payload=1,algo=rand": { "kms": "local", "type": "string", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAACQOzpNBEGSrANr3Wl8uYpqeIc7pjc8e2LS2FaSrb8tM9F3mR1FqGgfJtn3eD+HZf3Y3WEDGK8975a/1BufkMqIQ==", "subType": "06" } } }, "payload=2,algo=rand": { "kms": "local", "type": "string", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAACyGJEcuN1pG5oSEyxuKFwqddGHVU5Untbib7LkmtoJe9HngTofkOpeHZH/hV6Z3CFxLu6WFliJoySsFFbnFy9ag==", "subType": "06" } } }, "payload=3,algo=rand": { "kms": "local", "type": "string", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAACLbp4w6mx45lR1vvgmeRja/y8U+WnR2oH4IpfrDi4lKM+JPVnJweiN3/1wAy+sXSy0S1Yh9yxmhh9ISoTkAuVxw==", "subType": "06" } } }, "payload=4,algo=rand": { "kms": "local", "type": "string", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAACG0qMY/GPZ/2CR61cxbuizywefyMZVdeTCn5KFjqwejgxeBwX0JmGNHKKWbQIDQykRFv0q0WHUgsRmRhaotNCyQ==", "subType": "06" } } }, "payload=5,algo=rand": { "kms": "local", "type": "string", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAACJI1onNpQfZhaYWrPEzHvNaJRqUDZK2xoyonB5E473BPgp3zvn0Jmz1deL8GzS+HlkjCrx39OvHyVt3+3S0kYYw==", "subType": "06" } } }, "payload=6,algo=rand": { "kms": "local", "type": "string", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAAClyKY9tZBjl7SewSXr3MdoWRDUNgLaXDUjENpjyYvi/54EQ9a+J/LAAh1892i+mLpYxEUAmcftPyfX3VhbCgUQw==", "subType": "06" } } }, "payload=7,algo=rand": { "kms": "local", "type": "string", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAACAMbEA+kNvnVV7B//ds2/QoVot061kbazoMwB/psB5eFdLVB5qApAXEWgQEMwkNnsTUYbtSduQz6uGwdagtNBRw==", "subType": "06" } } }, "payload=8,algo=rand": { "kms": "local", "type": "string", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAACzdSK/d7Ni6D8qUgNopnEU5ia1K5llhBGk3O1Tf71t4ThnQjYW9eI/rIohWmev5CGWLHhwuvvKUtFcTAe+NMQww==", "subType": "06" } } }, "payload=9,algo=rand": { "kms": "local", "type": "string", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAACzQcEa+ktF2EZf35TtyatnSGGaIVvFhZNuo5P3VwQvoONJrK2cSad7PBDAv3xDAB+VPZAigXAGQvd051sHooOHg==", "subType": "06" } } }, "payload=10,algo=rand": { "kms": "local", "type": "string", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAACpfoDmApsR5xOD3TDhcHeD7Jco3kPFuuWjDpHtMepMOJ3S0c+ngGGhzPGZtEz2xuD/E7AQn1ryp/WAQ+WwkaJkQ==", "subType": "06" } } }, "payload=11,algo=rand": { "kms": "local", "type": "string", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAACICMRXmx3oKqYv0IpmzkSMBIGT4Li3MPBF4Lw1s5F69WvZApD58glIKB6b7koIrF5qc2Wrb1/Nw+stRv0zvQ8Y9CcFV4OHm6WoEw+XDlWXJ4=", "subType": "06" } } }, 
"payload=12,algo=rand": { "kms": "local", "type": "string", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAACTArUn0WUTojQC4fSvq3TwJVTsZNhWAK2WB057u2EnkUzMC0xsbU6611W6Okx6idZ7pMudXpBC34fRDrJPXOu3BxK+ZLCOWS2FqsvWq3HeTY=", "subType": "06" } } }, "payload=13,algo=rand": { "kms": "local", "type": "string", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAACU1Ojn7EM2i+5KK2Beh1gPLhryK3Y7PtaZ/v4JvstxuAV4OHOR9yROP7pwenHXxczkWXvcyMY9OCdmHO8pkQkXO21798IPkDDN/ejJUFI0Uw=", "subType": "06" } } }, "payload=14,algo=rand": { "kms": "local", "type": "string", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAAC0ZLwSliCbcr/e1uiYWk6gRuD/5qiyulQ7IUNWjhpBR6SLUfX2+yExLzps9hoOp53j9zRSKIzyleZ8yGLTLeN+Lz9BUe2ZT+sV8NiqZz3pkA=", "subType": "06" } } }, "payload=15,algo=rand": { "kms": "local", "type": "string", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAACQ9pmlQeFDr+jEhFwjL/eGVxdv70JdnkLaKdJ3/jkvCX1VPU5HmQIi+JWY3Rrw844E/6sBR6zIODn5aM0WfyP8a2zKRAWaVQZ7n+QE9hDN/8=", "subType": "06" } } }, "payload=16,algo=rand": { "kms": "local", "type": "string", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AizggCwAAAAAAAAAAAAAAAACiOcItInDGHqvkH0I3udp5nnX32XzDeqya/3KDjgZPT5GHek1vFTZ4924JVxFqFQz+No9rOVmyxm8O2fxjTK2vsjtADzKGnMTtFYZqghYCuc=", "subType": "06" } } }, "payload=0,algo=det": { "kms": "local", "type": "string", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ASzggCwAAAAAAAAAAAAAAAACijFptWQy7a1Y0rpXEvamXWI9v9dnx0Qj84/mKUsVpc3agkQ0B04uPYeROdt2MeEeiZoEKVWV0NjBocAQCEz7dw==", "subType": "06" } } }, "payload=1,algo=det": { "kms": "local", "type": "string", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ASzggCwAAAAAAAAAAAAAAAAChR90taVWsZk+++sgibX6CnFeQQHNoB8V+n2gmDe3CIT/t+WvhMf9D+mQipbAlrUyHgGihKMHcvAZ5RZ/spaH4Q==", "subType": "06" } } }, "payload=2,algo=det": { "kms": "local", "type": "string", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ASzggCwAAAAAAAAAAAAAAAAC67wemDv1Xdu7+EMR9LMBTOxfyAqsGaxQibwamZItzplslL/Dp3t9g9vPuNzq0dWwhnfxQ9GBe8OA3dtRaifYCA==", "subType": "06" } } }, "payload=3,algo=det": { "kms": "local", "type": "string", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ASzggCwAAAAAAAAAAAAAAAACVLxch+uC7weXrbtylCo1m4HYZmh0sd9JCrlTECO2M56JK1X9a30i2BDUdhPuoTvvODv74CGXkZKdist3o0mGAQ==", "subType": "06" } } }, "payload=4,algo=det": { "kms": "local", "type": "string", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ASzggCwAAAAAAAAAAAAAAAACexfIZGkOYaCGktOUc6cgAYg7Bd/C5ZYmdb7b8+rd5BKWbthW6N6CxhDIyh/DHvkPAeIzfTYA2/9w6tsjfD/TPQ==", "subType": "06" } } }, "payload=5,algo=det": { "kms": "local", "type": "string", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ASzggCwAAAAAAAAAAAAAAAACjUH/dPW4egOvFMJJnpWK8v27MeLkbXC4GFl1j+wPqTsIEeIWkzEmcXjHLTQGE2GplHHc/zxwRwD2dXdbzvsCDw==", "subType": "06" } } }, "payload=6,algo=det": { "kms": "local", "type": "string", "algo": "det", "method": 
"explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ASzggCwAAAAAAAAAAAAAAAACzvS+QkGlvb05pNn+vBMml09yKmE8yM6lwccNIST5uZSsUxXf2hrxPtO7Ylc4lmBAJt/9bcM59JIeT9fpYMc75w==", "subType": "06" } } }, "payload=7,algo=det": { "kms": "local", "type": "string", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ASzggCwAAAAAAAAAAAAAAAACSf2RxHJpRuh4j8nS1dfonUtsJEwgqfWrwOsfuT/tAGXgDN0ObUpzL2K7G2vmePjP4dwycCSIL3+2j34bqBJK1Q==", "subType": "06" } } }, "payload=8,algo=det": { "kms": "local", "type": "string", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ASzggCwAAAAAAAAAAAAAAAACu96YYeLXXoYdEZYNU9UAZjSd6G4fOE1edrA6/RjZKVGWKxftmvj5g1VAOiom0XuTZUe1ihbnwhvKexeoa3Vc8Q==", "subType": "06" } } }, "payload=9,algo=det": { "kms": "local", "type": "string", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ASzggCwAAAAAAAAAAAAAAAACX+UjBKo9+N0Z+mbyqZqkQv2ETMSn6aPTONWgJtw5nWklcxKjUSSLI+8LW/6M6Xf9a7177GsqmV2f/yCRF58Xtw==", "subType": "06" } } }, "payload=10,algo=det": { "kms": "local", "type": "string", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ASzggCwAAAAAAAAAAAAAAAACL6TVscFzIJ9+Zj6LsCZ9xhaZuTZdvz1nJe4l69nKyj9hCjnyuiV6Ve4AXwQ5W1wiPfkJ0fCZS33NwiHw7QQ/vg==", "subType": "06" } } }, "payload=11,algo=det": { "kms": "local", "type": "string", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ASzggCwAAAAAAAAAAAAAAAACPLq7IcWhTVwkKmy0flN7opoQzx7tTe1eD9JIc25FC9B6KGQkdcRDglDDR7/m6+kBtTnq88y63vBgomTxA8ZxQE+3pB7zCiBhX0QznuXvP44=", "subType": "06" } } }, "payload=12,algo=det": { "kms": "local", "type": "string", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ASzggCwAAAAAAAAAAAAAAAACxv7v4pKtom5z1g9FUuyjEWAbdzJ3ytPNZlOfVr6KZnUPhIH7PfCz3/lTdYYWBTj01+SUZiC/7ruof9QDhsSiNWP7nUyHpQ/C3joI/BBjtDA=", "subType": "06" } } }, "payload=13,algo=det": { "kms": "local", "type": "string", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ASzggCwAAAAAAAAAAAAAAAACZhiElQ/MvyVMwMkZPu8pT54Ap6TlpVSEbE4nIQzzeU3XKVuspMdI5IXvvgfULXKXc+AOu6oQXZ+wAJ1tErVOsb48HF1g0wbXbBA31C5qLEM=", "subType": "06" } } }, "payload=14,algo=det": { "kms": "local", "type": "string", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ASzggCwAAAAAAAAAAAAAAAACdp8mDOeDuDLhE0LzTOT2p0CMaUsAQrGCzmiK6Ab9xvaIcPPcejUcpdO3XXAS/pPab4+TUwO5GbI5pDJ29zwaOiOz2H3OJ2m2p5BHQp9mCys=", "subType": "06" } } }, "payload=15,algo=det": { "kms": "local", "type": "string", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ASzggCwAAAAAAAAAAAAAAAACmtLohoP/gotuon2IvnGeLEfCWHRMhG9Wp4tPu/vbJJkJkbQTP35HRG9VrMV7KKrEQbOsJ2Y6UDBra4tyjn0fIkwwc/0X9i+xaP+TrwpNabE=", "subType": "06" } } }, "payload=16,algo=det": { "kms": "local", "type": "string", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ASzggCwAAAAAAAAAAAAAAAAC6s9eUtSneKWj3/A7S+bPZLj3t1WtUh7ltW80b8jCRzA+kOI26j1MEb1tt68HgcnH1IJ3YQ/+UHlV95OgwSnIxlib/HJn3U0s8mpuCWe1Auo=", "subType": "06" } } }, "azure_double_rand_auto_id": { "kms": "azure", "type": "double", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { 
"$binary": { "base64": "AgGVERAAAAAAAAAAAAAAAAAB0S2kOZe54q6iZqeTLndkX+kehTKtb30jTP7FS+Zx+cxhFs626OrGY+jrH41cLfroCccacyNHUZFRinfqZPNOyw==", "subType": "06" } } }, "azure_double_rand_auto_altname": { "kms": "azure", "type": "double", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AgGVERAAAAAAAAAAAAAAAAABYViH7PLjCIdmTibW9dGCJADwXx2dRSMYxEmulPu89clAoeLDa8pwJ7YxLFQCcTGmZRfmp58dDDAzV8tyyE8QMg==", "subType": "06" } } }, "azure_double_rand_explicit_id": { "kms": "azure", "type": "double", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AgGVERAAAAAAAAAAAAAAAAABeRahSj4pniBp0rLIEZE8MdeyiIKcYuTZiuGzGiXbFbntEPow88DFHIBSxbMGR7p/8jCpPL+GqBwFkPkafXbMzg==", "subType": "06" } } }, "azure_double_rand_explicit_altname": { "kms": "azure", "type": "double", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AgGVERAAAAAAAAAAAAAAAAABdaa3vKtO4cAEUjYJfOPl1KbbgeWtphfUuJd6MxR9VReNSf1jc+kONwmkPVQs2WyZ1n+TSQMGRoBp1nHRttDdTg==", "subType": "06" } } }, "azure_double_det_explicit_id": { "kms": "azure", "type": "double", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$numberDouble": "1.2339999999999999858" } }, "azure_double_det_explicit_altname": { "kms": "azure", "type": "double", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$numberDouble": "1.2339999999999999858" } }, "azure_string_rand_auto_id": { "kms": "azure", "type": "string", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AgGVERAAAAAAAAAAAAAAAAACeoztcDg9oZ7ixHinReWQTrAumpsfyb0E1s3BGOFHgBCi1tW79CEXfqN8riFRc1YeRTlN4k5ShgHaBWBlax+XoQ==", "subType": "06" } } }, "azure_string_rand_auto_altname": { "kms": "azure", "type": "string", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AgGVERAAAAAAAAAAAAAAAAACov9cXQvDHeKOS5Gxcxa8vdAcTsTXDYgUucGzsCyh4TnTWKGQEVk3DHndUXX569TKCjq5QsC//oWEwweCn1nZ4g==", "subType": "06" } } }, "azure_string_rand_explicit_id": { "kms": "azure", "type": "string", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AgGVERAAAAAAAAAAAAAAAAACKU5qTdMdO0buQ/37ZRANUAAafcsoNMOTxJsDOfkqUb+/kRgM1ePlwVvk4EJiAGhJ/4SEmEOpwv05TT3PxGur2Q==", "subType": "06" } } }, "azure_string_rand_explicit_altname": { "kms": "azure", "type": "string", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AgGVERAAAAAAAAAAAAAAAAACX/ODKGHUyAKxoJ/c/3lEDBTc+eP/VS8OHrLhYoP96McpnFSgYi5jfUwvrFYa715fkass4N0nAHE6TzoGTYyk6Q==", "subType": "06" } } }, "azure_string_det_auto_id": { "kms": "azure", "type": "string", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AQGVERAAAAAAAAAAAAAAAAACmVI7YK4JLOzutEdQ79he817Vk5EDP/3hXwOlGmERZCtp8J8HcqClhV+pyvRLGbwmlh12fbSs9nEp7mrobQm9wA==", "subType": "06" } } }, "azure_string_det_explicit_id": { "kms": "azure", "type": "string", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AQGVERAAAAAAAAAAAAAAAAACmVI7YK4JLOzutEdQ79he817Vk5EDP/3hXwOlGmERZCtp8J8HcqClhV+pyvRLGbwmlh12fbSs9nEp7mrobQm9wA==", "subType": "06" } } }, "azure_string_det_explicit_altname": { "kms": "azure", "type": "string", "algo": "det", "method": "explicit", "identifier": 
"altname", "allowed": true, "value": { "$binary": { "base64": "AQGVERAAAAAAAAAAAAAAAAACmVI7YK4JLOzutEdQ79he817Vk5EDP/3hXwOlGmERZCtp8J8HcqClhV+pyvRLGbwmlh12fbSs9nEp7mrobQm9wA==", "subType": "06" } } }, "azure_object_rand_auto_id": { "kms": "azure", "type": "object", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AgGVERAAAAAAAAAAAAAAAAADWkZMsfCo4dOPMH1RXC7GkZFt1RCjJf0vaLDA09ih1Jl47SOetZELQ7B1TQjRQitktzrfD43jk8Fn4J5ZYZu1qQ==", "subType": "06" } } }, "azure_object_rand_auto_altname": { "kms": "azure", "type": "object", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AgGVERAAAAAAAAAAAAAAAAADJFMymfstltZP1oAqj4bgbCk8uLGtCd12eLqvSq0ZO+JDvls7PAovwmoWwigHunP8BBXT8sLydK+jn1sHfnhrlw==", "subType": "06" } } }, "azure_object_rand_explicit_id": { "kms": "azure", "type": "object", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AgGVERAAAAAAAAAAAAAAAAADCen+XrLYKg7gIVubVfdbQwuJ0mFHxhSUUyyBWj4RCeLeLUYXckboPGixXWB9XdwcOnInfF9u6qvktY67GtYASQ==", "subType": "06" } } }, "azure_object_rand_explicit_altname": { "kms": "azure", "type": "object", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AgGVERAAAAAAAAAAAAAAAAADnUyp/7eLmxxxOdsP+mNuJABK4PQoKFWDAY7lDrH6MYa03ryASOihPZWYZWXZLrbAf7cQQhElEkKqKwY8+NXgqg==", "subType": "06" } } }, "azure_object_det_explicit_id": { "kms": "azure", "type": "object", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": { "x": { "$numberInt": "1" } } }, "azure_object_det_explicit_altname": { "kms": "azure", "type": "object", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "x": { "$numberInt": "1" } } }, "azure_array_rand_auto_id": { "kms": "azure", "type": "array", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AgGVERAAAAAAAAAAAAAAAAAEtk14WyoatZcNPlg3y/XJNsBt6neFJeQwR06B9rMGV58oIsmeE5zMtUOBYTgzlnwyKpqI/XVAg8s1VxvsrvGCyLVPwGVyDztwtMgVSW6QM3s=", "subType": "06" } } }, "azure_array_rand_auto_altname": { "kms": "azure", "type": "array", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AgGVERAAAAAAAAAAAAAAAAAERTO63J4Nj1BpFlqVduA2IrAiGoV4jEOH3FnFgx7ZP7da/YBmLX/bc1EqdpC8v4faHxp74iU0xAB0yW4WgySDX7rriL5cw9sMpqgLRaBxGug=", "subType": "06" } } }, "azure_array_rand_explicit_id": { "kms": "azure", "type": "array", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AgGVERAAAAAAAAAAAAAAAAAEs09qQdNVwh+KFqKPREQkw0XFdRNHAvjYJzs5MDE9+QxvtKlmVKSK3wkxDdCrcH4r7ePV2nCy2h1IHYqaDnnt4s5dSawI2l88iTT+bBcCSrU=", "subType": "06" } } }, "azure_array_rand_explicit_altname": { "kms": "azure", "type": "array", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AgGVERAAAAAAAAAAAAAAAAAEaQ/YL50up4YIMJuVJSiAP06IQ+YjdKLIfkN/prbOZMiXErcD1Vq1hwGhfGdpEsLVu8E7IhJb4wakVC/2dLZoRP95az6HqRRauNNZAIQMKfY=", "subType": "06" } } }, "azure_array_det_explicit_id": { "kms": "azure", "type": "array", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": [ { "$numberInt": "1" }, { "$numberInt": "2" }, { "$numberInt": "3" } ] }, "azure_array_det_explicit_altname": { "kms": "azure", "type": "array", "algo": "det", "method": "explicit", "identifier": 
"altname", "allowed": false, "value": [ { "$numberInt": "1" }, { "$numberInt": "2" }, { "$numberInt": "3" } ] }, "azure_binData=00_rand_auto_id": { "kms": "azure", "type": "binData=00", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AgGVERAAAAAAAAAAAAAAAAAFl/leuLAHf1p6aRKHdFyN9FM6MW2XzBemql2xQgqkwJ6YOQXW6Pu/aI1scXVOrvrSu3+wBvByjHu++1AqFgzZRQ==", "subType": "06" } } }, "azure_binData=00_rand_auto_altname": { "kms": "azure", "type": "binData=00", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AgGVERAAAAAAAAAAAAAAAAAF4Nq/LwyufT/mx0LtFSkupNHTuyjbr4yUy1N5/37XhkpqZ1e4sWCHGNaTDEm5+cvdnbqZ/MMkBv855dc8N7vnGA==", "subType": "06" } } }, "azure_binData=00_rand_explicit_id": { "kms": "azure", "type": "binData=00", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AgGVERAAAAAAAAAAAAAAAAAFv1Kbv54uXJ76Ih63vtmszQtzkXqDlv8LDCFO3sjzu70+tgRXOhLm3J8uZpwoiNkgM6oNLn0en7tnEekYB9++CA==", "subType": "06" } } }, "azure_binData=00_rand_explicit_altname": { "kms": "azure", "type": "binData=00", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AgGVERAAAAAAAAAAAAAAAAAFgcYC1n7cGGXpv0qf1Kb8t9y/6kbhscGt2QJkQpAiqadFPPYDU/wwaKdDz94NpAHMZizUbhf9tvZ3UXl1bozhDA==", "subType": "06" } } }, "azure_binData=00_det_auto_id": { "kms": "azure", "type": "binData=00", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AQGVERAAAAAAAAAAAAAAAAAFvswfP3+jgia6rAyrypvbso3Xm4d7MEgJRUCWFYzA+9ov++vmeirgoTp/rFavTNOPb+61fvl1WKbVwrgODusaMg==", "subType": "06" } } }, "azure_binData=00_det_explicit_id": { "kms": "azure", "type": "binData=00", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AQGVERAAAAAAAAAAAAAAAAAFvswfP3+jgia6rAyrypvbso3Xm4d7MEgJRUCWFYzA+9ov++vmeirgoTp/rFavTNOPb+61fvl1WKbVwrgODusaMg==", "subType": "06" } } }, "azure_binData=00_det_explicit_altname": { "kms": "azure", "type": "binData=00", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AQGVERAAAAAAAAAAAAAAAAAFvswfP3+jgia6rAyrypvbso3Xm4d7MEgJRUCWFYzA+9ov++vmeirgoTp/rFavTNOPb+61fvl1WKbVwrgODusaMg==", "subType": "06" } } }, "azure_binData=04_rand_auto_id": { "kms": "azure", "type": "binData=04", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AgGVERAAAAAAAAAAAAAAAAAFMzMC3BLn/zWE9dxpcD8G0h4aifSY0zSHS9xTVJXgq21s2WU++Ov2UvHatVozmtZltsUN9JvSWqOBQRkFsrXvI7bc4lYfOoOmfpTHFcRDA/c=", "subType": "06" } } }, "azure_binData=04_rand_auto_altname": { "kms": "azure", "type": "binData=04", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AgGVERAAAAAAAAAAAAAAAAAFDlBN5hUTcjamOg/sgyeG0S52kphsjUgvlpuqHYz6VVdLtZ69cGHOVqqyml3x2rVqWUZJjd4ZodOhlwWq9p+i5IYNot2QaBvi8NZSaiThTc0=", "subType": "06" } } }, "azure_binData=04_rand_explicit_id": { "kms": "azure", "type": "binData=04", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AgGVERAAAAAAAAAAAAAAAAAFjvS2ozJuAL3rCvyBpraVtgL91OMdiskmgYnyfKlzd8EhYLd1cL4yxnTUjRXx+W+p8uN0/QZo+mynhcWnwcq83raY+I1HftSTx+S6rZ0qyDM=", "subType": "06" } } }, "azure_binData=04_rand_explicit_altname": { "kms": "azure", "type": "binData=04", "algo": "rand", "method": "explicit", "identifier": 
"altname", "allowed": true, "value": { "$binary": { "base64": "AgGVERAAAAAAAAAAAAAAAAAFqUMd/I0yOdy5W4THvFc6yrgSzB6arkRs/06b0M9Ii+QtAY6vbz+/aJ0Iy3Jm8TahC1wOZVmTj5luQpr+PHZMCEAFadv+0K/Nsx6xVhAh9gg=", "subType": "06" } } }, "azure_binData=04_det_auto_id": { "kms": "azure", "type": "binData=04", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AQGVERAAAAAAAAAAAAAAAAAFmN+KMrERGmfmue8/hG4D+ZcGzxC2HntdYBLjEolzvS9FV5JH/adxyUAnMpyL8FNznARL51rbv/G1nXPn9mPabsQ4BtWEAQbHx9TiXd+xbB0=", "subType": "06" } } }, "azure_binData=04_det_explicit_id": { "kms": "azure", "type": "binData=04", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AQGVERAAAAAAAAAAAAAAAAAFmN+KMrERGmfmue8/hG4D+ZcGzxC2HntdYBLjEolzvS9FV5JH/adxyUAnMpyL8FNznARL51rbv/G1nXPn9mPabsQ4BtWEAQbHx9TiXd+xbB0=", "subType": "06" } } }, "azure_binData=04_det_explicit_altname": { "kms": "azure", "type": "binData=04", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AQGVERAAAAAAAAAAAAAAAAAFmN+KMrERGmfmue8/hG4D+ZcGzxC2HntdYBLjEolzvS9FV5JH/adxyUAnMpyL8FNznARL51rbv/G1nXPn9mPabsQ4BtWEAQbHx9TiXd+xbB0=", "subType": "06" } } }, "azure_undefined_rand_explicit_id": { "kms": "azure", "type": "undefined", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$undefined": true } }, "azure_undefined_rand_explicit_altname": { "kms": "azure", "type": "undefined", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$undefined": true } }, "azure_undefined_det_explicit_id": { "kms": "azure", "type": "undefined", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$undefined": true } }, "azure_undefined_det_explicit_altname": { "kms": "azure", "type": "undefined", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$undefined": true } }, "azure_objectId_rand_auto_id": { "kms": "azure", "type": "objectId", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AgGVERAAAAAAAAAAAAAAAAAH3sYVJpCKi310YxndMwm5ltEbbiRO1RwZxxeEkzI8tptbNXC8t7RkrT8VSJZ43wbGYCiqH5RZy9v8pYwtUm4STw==", "subType": "06" } } }, "azure_objectId_rand_auto_altname": { "kms": "azure", "type": "objectId", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AgGVERAAAAAAAAAAAAAAAAAHD7agzVEc0JwesHHhkpGYIDAHQ+3Hc691kqic6YmVvK2N45fD5aRKftaZNs5OxSj3tNHSo7lQ+DVtPj8uSSpsVg==", "subType": "06" } } }, "azure_objectId_rand_explicit_id": { "kms": "azure", "type": "objectId", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AgGVERAAAAAAAAAAAAAAAAAHEgKgy2mpMLpfeEWqbvQOaRZAy+cEGXGon3e53/JoH6dZneEyyt4ZrcrK6uRqyUPWX0q104JbCYxfbtHtdzWgPQ==", "subType": "06" } } }, "azure_objectId_rand_explicit_altname": { "kms": "azure", "type": "objectId", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AgGVERAAAAAAAAAAAAAAAAAHqSv6Nruw3TIi7y0FPRjSfnJmWSdv5XMhAtnHNkT8MVuHeM32ayo0yc8dTA1wlkRtAI5JrGxTfERCXYuCojvvXg==", "subType": "06" } } }, "azure_objectId_det_auto_id": { "kms": "azure", "type": "objectId", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": 
"AQGVERAAAAAAAAAAAAAAAAAHcPRjIOyLDUJCDcdWkUySKCFS2AFkIa1OQyQAfC3Zh5HwJ1O7j2o+iYKRerhbni8lBiZH7EUMm1JcxM99lLC5jQ==", "subType": "06" } } }, "azure_objectId_det_explicit_id": { "kms": "azure", "type": "objectId", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AQGVERAAAAAAAAAAAAAAAAAHcPRjIOyLDUJCDcdWkUySKCFS2AFkIa1OQyQAfC3Zh5HwJ1O7j2o+iYKRerhbni8lBiZH7EUMm1JcxM99lLC5jQ==", "subType": "06" } } }, "azure_objectId_det_explicit_altname": { "kms": "azure", "type": "objectId", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AQGVERAAAAAAAAAAAAAAAAAHcPRjIOyLDUJCDcdWkUySKCFS2AFkIa1OQyQAfC3Zh5HwJ1O7j2o+iYKRerhbni8lBiZH7EUMm1JcxM99lLC5jQ==", "subType": "06" } } }, "azure_bool_rand_auto_id": { "kms": "azure", "type": "bool", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AgGVERAAAAAAAAAAAAAAAAAIYVWPvzSmiCs9LwRlv/AoQWhaS5mzoKX4W26M5eg/gPjOZbEVYOV80pWMxCcZWRAyV/NDWDUmKtRQDMU9b8lCJw==", "subType": "06" } } }, "azure_bool_rand_auto_altname": { "kms": "azure", "type": "bool", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AgGVERAAAAAAAAAAAAAAAAAIsAB01Ugqtw4T9SkuJBQN1y/ewpRAyz0vjFPdKI+jmPMmaXpMlXDJU8ZbTKm/nh6sjJCFcY5oZJ83ylbp2gHc6w==", "subType": "06" } } }, "azure_bool_rand_explicit_id": { "kms": "azure", "type": "bool", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AgGVERAAAAAAAAAAAAAAAAAIr8/qFd564X1mqHEhB0y7bzGFdrHuw+Gk45nXla3VvGHzeIJy6j2Wdl0uziWslMmBvNp8WweW+jQ6E2Fu7SiojQ==", "subType": "06" } } }, "azure_bool_rand_explicit_altname": { "kms": "azure", "type": "bool", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AgGVERAAAAAAAAAAAAAAAAAIWsca5FAnS2zhHnmKmexvvXMTgsZZ7uAFHnjQassUcay6mvIWH4hOnGiRxt5Zm0wO4S6cZq+PZrmEH5/n9rJcJQ==", "subType": "06" } } }, "azure_bool_det_explicit_id": { "kms": "azure", "type": "bool", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": true }, "azure_bool_det_explicit_altname": { "kms": "azure", "type": "bool", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": true }, "azure_date_rand_auto_id": { "kms": "azure", "type": "date", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AgGVERAAAAAAAAAAAAAAAAAJwKo7XW5daIFlwY1mDAnJdHlcUgF+74oViL28hQGhde63pkPyyS6lPkYrc1gcCK5DL7PwsSX4Vb9SsNAG9860xw==", "subType": "06" } } }, "azure_date_rand_auto_altname": { "kms": "azure", "type": "date", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AgGVERAAAAAAAAAAAAAAAAAJYZdWIqvqTztGKJkSASMEOjyrUFKnYql8fMIEzfEZWx2BYsIkxxOUUUCASg/Jsn09fTLVQ7yLD+LwycuI2uaXsw==", "subType": "06" } } }, "azure_date_rand_explicit_id": { "kms": "azure", "type": "date", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AgGVERAAAAAAAAAAAAAAAAAJuWzKqi3KV8GbGGnT7i9N4BACUuNjt5AgKsjWIfrWRXK1+jRQFq0bYlVWaliT9CNIygL2aTF0H4eHl55PAI84MQ==", "subType": "06" } } }, "azure_date_rand_explicit_altname": { "kms": "azure", "type": "date", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": 
"AgGVERAAAAAAAAAAAAAAAAAJ5JTtTuP4zTnEbaVlS/W59SrZ08LOC4ZIl+h+H4RnfHUfBXDwUou+APolVaYko+VZMKecrikdPeewgzWaqazJ1g==", "subType": "06" } } }, "azure_date_det_auto_id": { "kms": "azure", "type": "date", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AQGVERAAAAAAAAAAAAAAAAAJCREIp/SPolAZcVU1iOmaJaN2tFId5HhrjNmhp6xhA1AIPLnN+U7TAqesxFN7iebR9fXI5fZxYNgyWqQC1rqUJw==", "subType": "06" } } }, "azure_date_det_explicit_id": { "kms": "azure", "type": "date", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AQGVERAAAAAAAAAAAAAAAAAJCREIp/SPolAZcVU1iOmaJaN2tFId5HhrjNmhp6xhA1AIPLnN+U7TAqesxFN7iebR9fXI5fZxYNgyWqQC1rqUJw==", "subType": "06" } } }, "azure_date_det_explicit_altname": { "kms": "azure", "type": "date", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AQGVERAAAAAAAAAAAAAAAAAJCREIp/SPolAZcVU1iOmaJaN2tFId5HhrjNmhp6xhA1AIPLnN+U7TAqesxFN7iebR9fXI5fZxYNgyWqQC1rqUJw==", "subType": "06" } } }, "azure_null_rand_explicit_id": { "kms": "azure", "type": "null", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": false, "value": null }, "azure_null_rand_explicit_altname": { "kms": "azure", "type": "null", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": false, "value": null }, "azure_null_det_explicit_id": { "kms": "azure", "type": "null", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": null }, "azure_null_det_explicit_altname": { "kms": "azure", "type": "null", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": null }, "azure_regex_rand_auto_id": { "kms": "azure", "type": "regex", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AgGVERAAAAAAAAAAAAAAAAALsMm3W2ogEiI6m0l8dS5Xhqnw+vMBvN1EesOTqAZOk4tQleX6fWARwUUnjFxbuejU7ISb50fc/Ul+ntL9z/2nHQ==", "subType": "06" } } }, "azure_regex_rand_auto_altname": { "kms": "azure", "type": "regex", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AgGVERAAAAAAAAAAAAAAAAALITQNQI0hfCeMTxH0Hce1Cf5tinQG+Bq8EolUACvxUUQcDqIXfFXn19tV/Qyj4lIdnnwh/18hiswgEpJRK7uLGw==", "subType": "06" } } }, "azure_regex_rand_explicit_id": { "kms": "azure", "type": "regex", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AgGVERAAAAAAAAAAAAAAAAALw/1QI/bKeiGUrrtC+yXOTvxZ2mJjSelPPGOm1mge0ws8DsX0DPHmo6MjhnRO4u0c/LWiE3hwHG2rYjAFlFXZ5A==", "subType": "06" } } }, "azure_regex_rand_explicit_altname": { "kms": "azure", "type": "regex", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AgGVERAAAAAAAAAAAAAAAAAL6Sl58UfFCHCZzWIB4r19/ZjeSRAoWeTFCFedKiwyR8/xnL+8jzXK/9+vTIspP6j35lFapr+f4iBNB9WjdpYNKA==", "subType": "06" } } }, "azure_regex_det_auto_id": { "kms": "azure", "type": "regex", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AQGVERAAAAAAAAAAAAAAAAALxshM91Tsql/8kPe3dC16oP36XSUIN6godiRVIJLJ+NAwYtEkThthQsln7CrkIxIx6npN6A/hw1CBJERS/cqWhw==", "subType": "06" } } }, "azure_regex_det_explicit_id": { "kms": "azure", "type": "regex", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": 
"AQGVERAAAAAAAAAAAAAAAAALxshM91Tsql/8kPe3dC16oP36XSUIN6godiRVIJLJ+NAwYtEkThthQsln7CrkIxIx6npN6A/hw1CBJERS/cqWhw==", "subType": "06" } } }, "azure_regex_det_explicit_altname": { "kms": "azure", "type": "regex", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AQGVERAAAAAAAAAAAAAAAAALxshM91Tsql/8kPe3dC16oP36XSUIN6godiRVIJLJ+NAwYtEkThthQsln7CrkIxIx6npN6A/hw1CBJERS/cqWhw==", "subType": "06" } } }, "azure_dbPointer_rand_auto_id": { "kms": "azure", "type": "dbPointer", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AgGVERAAAAAAAAAAAAAAAAAMaAd1v/XCYM2Kzi/f4utR6aHOFORmzZ17EepEjkn5IeKshktUpPWjI/dBwSunn5Qxx2zI3nm06c3SDvp6tw8qb7u4qXjLQYhlsQ0bHvvm+vE=", "subType": "06" } } }, "azure_dbPointer_rand_auto_altname": { "kms": "azure", "type": "dbPointer", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AgGVERAAAAAAAAAAAAAAAAAM6VNjkN9bMIzfC7AX0ZhOEXPpyPE0nzYq3c5TNHrgeGWdZDR9GVdbO9t55zQrQJJ2Mmevh8c0WaAUV+YODv7ty6TDBsPbaKWWqMzu/v9RXHo=", "subType": "06" } } }, "azure_dbPointer_rand_explicit_id": { "kms": "azure", "type": "dbPointer", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AgGVERAAAAAAAAAAAAAAAAAM66tywuMhwdyUjxfl7EOdKHNCLeIPnct3PgKrAKlOQFjiNQUIA2ShVy0qYpJcvvFsuQ5e8Bjr0IqeBc8mC7n4euRSM1UXpLqI5XHgXMMaYpI=", "subType": "06" } } }, "azure_dbPointer_rand_explicit_altname": { "kms": "azure", "type": "dbPointer", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AgGVERAAAAAAAAAAAAAAAAAMtPQEbZ4gWoSYjVZLd5X6j0XxutWY1Ecrys2ErKRgZaxP0uGe8uw0cnr2Z5PYylaYmsSicLwD1PwWY42PKmaGBDraHmdfqDOPvrNxhBrfU/E=", "subType": "06" } } }, "azure_dbPointer_det_auto_id": { "kms": "azure", "type": "dbPointer", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AQGVERAAAAAAAAAAAAAAAAAMxUcVqq6RpAUCv08qGkmjuwVAIgLeYyh7xZnMeCYVGmhJKIP1Zdt1SvRGRV0jzwCQmXgxNd04adRwJnG/PRQIsL9aH3ilJgEnUbOo1nqR7yw=", "subType": "06" } } }, "azure_dbPointer_det_explicit_id": { "kms": "azure", "type": "dbPointer", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AQGVERAAAAAAAAAAAAAAAAAMxUcVqq6RpAUCv08qGkmjuwVAIgLeYyh7xZnMeCYVGmhJKIP1Zdt1SvRGRV0jzwCQmXgxNd04adRwJnG/PRQIsL9aH3ilJgEnUbOo1nqR7yw=", "subType": "06" } } }, "azure_dbPointer_det_explicit_altname": { "kms": "azure", "type": "dbPointer", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AQGVERAAAAAAAAAAAAAAAAAMxUcVqq6RpAUCv08qGkmjuwVAIgLeYyh7xZnMeCYVGmhJKIP1Zdt1SvRGRV0jzwCQmXgxNd04adRwJnG/PRQIsL9aH3ilJgEnUbOo1nqR7yw=", "subType": "06" } } }, "azure_javascript_rand_auto_id": { "kms": "azure", "type": "javascript", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AgGVERAAAAAAAAAAAAAAAAANWXPb5z3a0S7F26vkmBF3fV+oXYUj15OEtnSlXlUrc+gbhbPDxSvCPnTBEy5sNu4ndkvEZZxYgZInkF2q4rhlfQ==", "subType": "06" } } }, "azure_javascript_rand_auto_altname": { "kms": "azure", "type": "javascript", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AgGVERAAAAAAAAAAAAAAAAANN4mcwLz/J4eOUknhVsy6kdF1ThDP8cx6dNpOwJWAiyPHEsn+i6JmMTlfQMBrUp9HB/u3R+jLO5yz4XgLUKE8Tw==", "subType": "06" } } }, "azure_javascript_rand_explicit_id": { "kms": 
"azure", "type": "javascript", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AgGVERAAAAAAAAAAAAAAAAANJ+t5Z8hSQaoNzszzkWndAo4A0avDf9bKFa7euznz8ZYInnl9RUVqWMyxjSuIotAvTyYSJzxh+w2hKCgVf+MjEA==", "subType": "06" } } }, "azure_javascript_rand_explicit_altname": { "kms": "azure", "type": "javascript", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AgGVERAAAAAAAAAAAAAAAAANRLOQFpmkEg/KdWMmaurkNtUhy45rgtoipc9kQz6olgDWiMim81XC0AW5cOvjbHXL3w7Du28Kwdsp4j0PTTXHUQ==", "subType": "06" } } }, "azure_javascript_det_auto_id": { "kms": "azure", "type": "javascript", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AQGVERAAAAAAAAAAAAAAAAANUrNUS/7/dmKVWBd+2JKGEn1hxbFSyu3p5sDNatukG2m16t4WwxzmYAg8PuQbAxekprs7iaLA+7D2Kn3ZuMSQOw==", "subType": "06" } } }, "azure_javascript_det_explicit_id": { "kms": "azure", "type": "javascript", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AQGVERAAAAAAAAAAAAAAAAANUrNUS/7/dmKVWBd+2JKGEn1hxbFSyu3p5sDNatukG2m16t4WwxzmYAg8PuQbAxekprs7iaLA+7D2Kn3ZuMSQOw==", "subType": "06" } } }, "azure_javascript_det_explicit_altname": { "kms": "azure", "type": "javascript", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AQGVERAAAAAAAAAAAAAAAAANUrNUS/7/dmKVWBd+2JKGEn1hxbFSyu3p5sDNatukG2m16t4WwxzmYAg8PuQbAxekprs7iaLA+7D2Kn3ZuMSQOw==", "subType": "06" } } }, "azure_symbol_rand_auto_id": { "kms": "azure", "type": "symbol", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AgGVERAAAAAAAAAAAAAAAAAORMcgtQSU+/2Qlq57neRrVuAFSeSwkqdo+z1fh6IKjyEzhCy+u5bTzSzTopyKJQTCUZA2mSpRezWkM87oiGfhMFkBRVreMcE62eH+BLlgUaM=", "subType": "06" } } }, "azure_symbol_rand_auto_altname": { "kms": "azure", "type": "symbol", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AgGVERAAAAAAAAAAAAAAAAAOIKlAw/A3nwHn0tO2cYtJx0azB8MGmXtt+bRptzn8yHlUSpMpYaiU0ssBBiLkmMLAITYebLqDk3NHESyP7PvbSfX1E2XVn2Nf694ZqPWMec8=", "subType": "06" } } }, "azure_symbol_rand_explicit_id": { "kms": "azure", "type": "symbol", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AgGVERAAAAAAAAAAAAAAAAAO8SXW76AEr/6D6zyP1RYwmwdVM2AINaXZn3Ipy+fynWTUV6XIPIRR7xMTttNo2zlh7fgXDZ28PmjooGlQzn0q0JVQmXPCIPM3aqAmMcgyuqg=", "subType": "06" } } }, "azure_symbol_rand_explicit_altname": { "kms": "azure", "type": "symbol", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AgGVERAAAAAAAAAAAAAAAAAOtoJWm2Ucre0foHIiOutsX1WIyub7t3Lby3/F8zRXn+l6ixlTjAPgWFwpRnYg96Lt2ACDDQ9CO51ejr9qk0b8LDBwG3qU5Cuibsp7vo1VsdI=", "subType": "06" } } }, "azure_symbol_det_auto_id": { "kms": "azure", "type": "symbol", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AQGVERAAAAAAAAAAAAAAAAAOvp/FMMmWVMkiuN51uFMFBiRQAcc9jftlNsHsLoNtohZaGni26kgX94b+/EI8pdWF5xA/73JlGlij0Rt+vC9s/zTDItRpn0bJL54WPphDcmA=", "subType": "06" } } }, "azure_symbol_det_explicit_id": { "kms": "azure", "type": "symbol", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": 
"AQGVERAAAAAAAAAAAAAAAAAOvp/FMMmWVMkiuN51uFMFBiRQAcc9jftlNsHsLoNtohZaGni26kgX94b+/EI8pdWF5xA/73JlGlij0Rt+vC9s/zTDItRpn0bJL54WPphDcmA=", "subType": "06" } } }, "azure_symbol_det_explicit_altname": { "kms": "azure", "type": "symbol", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AQGVERAAAAAAAAAAAAAAAAAOvp/FMMmWVMkiuN51uFMFBiRQAcc9jftlNsHsLoNtohZaGni26kgX94b+/EI8pdWF5xA/73JlGlij0Rt+vC9s/zTDItRpn0bJL54WPphDcmA=", "subType": "06" } } }, "azure_javascriptWithScope_rand_auto_id": { "kms": "azure", "type": "javascriptWithScope", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AgGVERAAAAAAAAAAAAAAAAAPCw9NnvJyuTYIgZxr1w1UiG85PGZ4rO62DWWDF98HwVM/Y6u7hNdNjkaWjYFsPMl38ioHw/pS8GFR62QmH2RAw/BV0wI7pNy2evANr3i3gKg=", "subType": "06" } } }, "azure_javascriptWithScope_rand_auto_altname": { "kms": "azure", "type": "javascriptWithScope", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AgGVERAAAAAAAAAAAAAAAAAPXQzqnQ2UWkIYof8/OfadNMa7iVKAbOaiu7YGm8iVrx+W6uxKLPFugVqHtQ29hYXXf33xr8rqGNxDlAe7/x1OeYEif71f7LUkmKF9WxJV9Ko=", "subType": "06" } } }, "azure_javascriptWithScope_rand_explicit_id": { "kms": "azure", "type": "javascriptWithScope", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AgGVERAAAAAAAAAAAAAAAAAP0nxlppgPyjLx0eBempbOlL21G6KbABSrE6+YuNDcsjJjxCQuLR9+aoAwa+yCDEC7GZ1E3oP489edKUuNpE4Ts26jy4aRegu4DmyECUeBwAg=", "subType": "06" } } }, "azure_javascriptWithScope_rand_explicit_altname": { "kms": "azure", "type": "javascriptWithScope", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AgGVERAAAAAAAAAAAAAAAAAPO89afu9Sb+cK9wwM1cO1DPjvu5UNyObjjTScy1hy9PzllJGfj7b84f0Ah74jPYsMPwI0Eslu/IYF3+5jmquq5Qp/VUQESlxqRqRK0xIeMfs=", "subType": "06" } } }, "azure_javascriptWithScope_det_explicit_id": { "kms": "azure", "type": "javascriptWithScope", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$code": "x=1", "$scope": {} } }, "azure_javascriptWithScope_det_explicit_altname": { "kms": "azure", "type": "javascriptWithScope", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$code": "x=1", "$scope": {} } }, "azure_int_rand_auto_id": { "kms": "azure", "type": "int", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AgGVERAAAAAAAAAAAAAAAAAQUyy4uWmWdzypsK81q9egREg4s80X3L2hzxJzC+fL08Xzy1z9grpPPCfJrluUVKMMGmmZR8gJPJ70igN3unJbzg==", "subType": "06" } } }, "azure_int_rand_auto_altname": { "kms": "azure", "type": "int", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AgGVERAAAAAAAAAAAAAAAAAQr4gyoHKpGsSJo8CMsYSJk/KilFMJhsDCmxrha7yfNW1uR5sjyZj4B4s6uTXGw76x7aR/AvecDlY3QFJb8L1mjg==", "subType": "06" } } }, "azure_int_rand_explicit_id": { "kms": "azure", "type": "int", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AgGVERAAAAAAAAAAAAAAAAAQ0zgXYPV1MuEFksmDpVDoWkoZQelm3+rYrMiT64KYywO//75799W8TbR3a7O6Q/ErjKQOin2OCp8EWwZqTDdz5w==", "subType": "06" } } }, "azure_int_rand_explicit_altname": { "kms": "azure", "type": "int", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": 
"AgGVERAAAAAAAAAAAAAAAAAQG+qz00yizREbP3tla1elMiwf8TKLbUU2XWUP+E0vey/wvbjTTIzqwUlz/b9St77CHJhavypP3hMrngXR9GapbQ==", "subType": "06" } } }, "azure_int_det_auto_id": { "kms": "azure", "type": "int", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AQGVERAAAAAAAAAAAAAAAAAQCkJH+CataLqp/xBjO77QBprC2xPV+rE+goSZ3C6aqwXIeTYHTOqEbeaFb5iZcqYH5nWvNvnfbZSIMyvSfrPjhw==", "subType": "06" } } }, "azure_int_det_explicit_id": { "kms": "azure", "type": "int", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AQGVERAAAAAAAAAAAAAAAAAQCkJH+CataLqp/xBjO77QBprC2xPV+rE+goSZ3C6aqwXIeTYHTOqEbeaFb5iZcqYH5nWvNvnfbZSIMyvSfrPjhw==", "subType": "06" } } }, "azure_int_det_explicit_altname": { "kms": "azure", "type": "int", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AQGVERAAAAAAAAAAAAAAAAAQCkJH+CataLqp/xBjO77QBprC2xPV+rE+goSZ3C6aqwXIeTYHTOqEbeaFb5iZcqYH5nWvNvnfbZSIMyvSfrPjhw==", "subType": "06" } } }, "azure_timestamp_rand_auto_id": { "kms": "azure", "type": "timestamp", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AgGVERAAAAAAAAAAAAAAAAARwcXYtx+A7g/zGkjGdkyVxZGCO9Nzj3D70NIpl2TeH2j9qYGP4DenwL1xSgrL2Ez+X58d2BvNhKrjA9y2w1Z8kA==", "subType": "06" } } }, "azure_timestamp_rand_auto_altname": { "kms": "azure", "type": "timestamp", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AgGVERAAAAAAAAAAAAAAAAARQ0Pjx3l92Aqhn2e1hot2M9rQ6aLPE2Iw8AVhm5AD8FWywWih12Fn2p9+kiE33yKPOCyrTWQHKPtB4yYhqnJgGg==", "subType": "06" } } }, "azure_timestamp_rand_explicit_id": { "kms": "azure", "type": "timestamp", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AgGVERAAAAAAAAAAAAAAAAARvFMlIzh2IjpHkTJ8buqTOqBA0+CxVDsZacUhSHVMgJLN+0DJsJy8OfkmKMu9Lk5hULY00Udoja87x+79mYfmeQ==", "subType": "06" } } }, "azure_timestamp_rand_explicit_altname": { "kms": "azure", "type": "timestamp", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AgGVERAAAAAAAAAAAAAAAAAR+2SCd7V5ukAkh7CYpNPIatzTL8osNoA4Mb5jjjbos8eMamImw0fbH8YA+Rdm4CgGdQQ9VDX7MtMWlArkj0Jpew==", "subType": "06" } } }, "azure_timestamp_det_auto_id": { "kms": "azure", "type": "timestamp", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AQGVERAAAAAAAAAAAAAAAAARe72T/oC09QGE1vuljb6ZEHa6llEwMLT+C4s9u1fREkOKndpmrOlGE8zOey4teizY1ypOMkIZ8GDQJJ4kLSpNkQ==", "subType": "06" } } }, "azure_timestamp_det_explicit_id": { "kms": "azure", "type": "timestamp", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AQGVERAAAAAAAAAAAAAAAAARe72T/oC09QGE1vuljb6ZEHa6llEwMLT+C4s9u1fREkOKndpmrOlGE8zOey4teizY1ypOMkIZ8GDQJJ4kLSpNkQ==", "subType": "06" } } }, "azure_timestamp_det_explicit_altname": { "kms": "azure", "type": "timestamp", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AQGVERAAAAAAAAAAAAAAAAARe72T/oC09QGE1vuljb6ZEHa6llEwMLT+C4s9u1fREkOKndpmrOlGE8zOey4teizY1ypOMkIZ8GDQJJ4kLSpNkQ==", "subType": "06" } } }, "azure_long_rand_auto_id": { "kms": "azure", "type": "long", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": 
"AgGVERAAAAAAAAAAAAAAAAASSSgX7k8iw0xFe0AiIzOu0e0P7Ujyfsk/Cdl0fR5X8V3QLVER+1Qa47Qpb8iWL2VLBSh+55HvIEtvhWn8SwXaog==", "subType": "06" } } }, "azure_long_rand_auto_altname": { "kms": "azure", "type": "long", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AgGVERAAAAAAAAAAAAAAAAASUhKr5K7ulGTeFbhIvJ2DDE10gRAFn5+2zqnsIFSY8lYV2PBYcENdeNBXZs6kyIAYhJdQyuOChVCerTI5jmQWDw==", "subType": "06" } } }, "azure_long_rand_explicit_id": { "kms": "azure", "type": "long", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AgGVERAAAAAAAAAAAAAAAAASHxawpjTHdXYRWQSZ7Qi7gFC+o4dW2mPH8s5nQkPFY/EubcJbdAZ5HFp66NfPaDJ/NSH6Vy+TkpX3683RC+bjSQ==", "subType": "06" } } }, "azure_long_rand_explicit_altname": { "kms": "azure", "type": "long", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AgGVERAAAAAAAAAAAAAAAAASVaMAv6UjuBOUZMJ9qz+58TQWmgaMpS9xrJziJY80ml9aRlDTtRubP7U40CgbDvrtY1QgHbkF/di1XDCB6iXMMg==", "subType": "06" } } }, "azure_long_det_auto_id": { "kms": "azure", "type": "long", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AQGVERAAAAAAAAAAAAAAAAAS06L8oEPeMvVlA32VlobdOWG24OoyMbv9PyYsHLsbT0bHFwU7lYUSQG9EkYVRNPEDzvXpciE1jT7KT8CRY8XT/g==", "subType": "06" } } }, "azure_long_det_explicit_id": { "kms": "azure", "type": "long", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AQGVERAAAAAAAAAAAAAAAAAS06L8oEPeMvVlA32VlobdOWG24OoyMbv9PyYsHLsbT0bHFwU7lYUSQG9EkYVRNPEDzvXpciE1jT7KT8CRY8XT/g==", "subType": "06" } } }, "azure_long_det_explicit_altname": { "kms": "azure", "type": "long", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AQGVERAAAAAAAAAAAAAAAAAS06L8oEPeMvVlA32VlobdOWG24OoyMbv9PyYsHLsbT0bHFwU7lYUSQG9EkYVRNPEDzvXpciE1jT7KT8CRY8XT/g==", "subType": "06" } } }, "azure_decimal_rand_auto_id": { "kms": "azure", "type": "decimal", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AgGVERAAAAAAAAAAAAAAAAATJ6LZgPu9F+rPtYsMuvwOx62+g1dAk858BUtE9FjC/300DnbDiolhkHNcyoFs07NYUNgLthW2rISb/ejmsDCt/oqnf8zWYf9vrJEfHaS/Ocw=", "subType": "06" } } }, "azure_decimal_rand_auto_altname": { "kms": "azure", "type": "decimal", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AgGVERAAAAAAAAAAAAAAAAATX8eD6qFYWKwIGvXtQG79fXKuPW9hkIV0OwrmNNIqRltw6gPHl+/1X8Q6rgmjCxqvhB05AxTj7xz64gP+ILkPQY8e8VGuCOvOdwDo2IPwy18=", "subType": "06" } } }, "azure_decimal_rand_explicit_id": { "kms": "azure", "type": "decimal", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AgGVERAAAAAAAAAAAAAAAAATBjQ9E5wDdTS/iI1XDqGmDBC5aLbPB4nSyrjRLfv1zEoPRjmcHlQmMRJA0mori2VQv6EBFNHeczFCenJaSAkuh77czeXM2vH3T6qwEIDs4dw=", "subType": "06" } } }, "azure_decimal_rand_explicit_altname": { "kms": "azure", "type": "decimal", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AgGVERAAAAAAAAAAAAAAAAATtkjbhdve7MNuLaTm6qvaewuVUxeC1DMz1fd4RC4jeiBFMd5uZUVJTiOIerwQ6P5G5lkMlezKDWgKl2FUvZH6c7V3JknhsaWcV5iLWGUL6Zc=", "subType": "06" } } }, "azure_decimal_det_explicit_id": { "kms": "azure", "type": "decimal", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": { 
"$numberDecimal": "1.234" } }, "azure_decimal_det_explicit_altname": { "kms": "azure", "type": "decimal", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$numberDecimal": "1.234" } }, "azure_minKey_rand_explicit_id": { "kms": "azure", "type": "minKey", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$minKey": 1 } }, "azure_minKey_rand_explicit_altname": { "kms": "azure", "type": "minKey", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$minKey": 1 } }, "azure_minKey_det_explicit_id": { "kms": "azure", "type": "minKey", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$minKey": 1 } }, "azure_minKey_det_explicit_altname": { "kms": "azure", "type": "minKey", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$minKey": 1 } }, "azure_maxKey_rand_explicit_id": { "kms": "azure", "type": "maxKey", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$maxKey": 1 } }, "azure_maxKey_rand_explicit_altname": { "kms": "azure", "type": "maxKey", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$maxKey": 1 } }, "azure_maxKey_det_explicit_id": { "kms": "azure", "type": "maxKey", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$maxKey": 1 } }, "azure_maxKey_det_explicit_altname": { "kms": "azure", "type": "maxKey", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$maxKey": 1 } }, "gcp_double_rand_auto_id": { "kms": "gcp", "type": "double", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AhgjwAAAAAAAAAAAAAAAAAABFoHQxnh1XSC0k1B01uFFg7rE9sZVBn4PXo26JX8gx9tuxu+4l9Avb23H9BfOzuWiEc43iw87K/W2y0VfKp5CCg==", "subType": "06" } } }, "gcp_double_rand_auto_altname": { "kms": "gcp", "type": "double", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AhgjwAAAAAAAAAAAAAAAAAABRkZkEtQEFB/r268cNfYRQbN4u5Cxjl9Uh+8wq9TFWLQH2E/9wj2vTLlxQ2cQsM7Qd+XxR5idjfBf9CKAfvUa/A==", "subType": "06" } } }, "gcp_double_rand_explicit_id": { "kms": "gcp", "type": "double", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AhgjwAAAAAAAAAAAAAAAAAABDSUZ+0BbDDEZxCXA+J2T6Js8Uor2dfXSf7s/hpLrg6dxcW2chpht9XLiLOXG5w83TzCAI5pF8cQgBpBpYjR8RQ==", "subType": "06" } } }, "gcp_double_rand_explicit_altname": { "kms": "gcp", "type": "double", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AhgjwAAAAAAAAAAAAAAAAAABCYxugs7L+4S+1rr0VILSbtBm79JPTLuzluQAv0+8hbu5Z6zReOL6Ta1vQH1oA+pSPGYA4euye3zNl1X6ZewbPw==", "subType": "06" } } }, "gcp_double_det_explicit_id": { "kms": "gcp", "type": "double", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$numberDouble": "1.2339999999999999858" } }, "gcp_double_det_explicit_altname": { "kms": "gcp", "type": "double", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$numberDouble": "1.2339999999999999858" } }, "gcp_string_rand_auto_id": { "kms": "gcp", "type": "string", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": 
"AhgjwAAAAAAAAAAAAAAAAAACx3wSslJEiD80YLTH0n4Bbs4yWVPQl15AU8pZMLLQePqEtI+BJy3t2bqNP1098jS0CGSf+LQmQvXhJn1aNFeMTw==", "subType": "06" } } }, "gcp_string_rand_auto_altname": { "kms": "gcp", "type": "string", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AhgjwAAAAAAAAAAAAAAAAAAC5BTe5KP5UxSIk6dJlkz8aaZ/9fg44XPWHafiiL/48lcv3AWbu2gcBo1EDuc1sJQu6XMrtDCRQ7PCHsL7sEQMGQ==", "subType": "06" } } }, "gcp_string_rand_explicit_id": { "kms": "gcp", "type": "string", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AhgjwAAAAAAAAAAAAAAAAAACyJN55OcyXXJ71x8VphTaIuIg6kQtGgVKPhWx0LSdYc6JOjB6LTdA7SEWiSlSWWFZE26UmKcPbkbLDAYf4IVrzQ==", "subType": "06" } } }, "gcp_string_rand_explicit_altname": { "kms": "gcp", "type": "string", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AhgjwAAAAAAAAAAAAAAAAAACoa0d9gqfPP5s3+GoruwzxoQFgli8SmjpTVRLAOcFxqGdfrwSbpYffSw/OR45sZPxXCL6T2MtUvZsl7ukv0jBnw==", "subType": "06" } } }, "gcp_string_det_auto_id": { "kms": "gcp", "type": "string", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ARgjwAAAAAAAAAAAAAAAAAACTCkyETcWayIZ9YEoQEBVIF3i7iXEe6M3KjYYaSVCYdqSbSHBzlwKWYbP+Xj/MMYBYTLZ1aiRQWCMK4gWPYppZw==", "subType": "06" } } }, "gcp_string_det_explicit_id": { "kms": "gcp", "type": "string", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ARgjwAAAAAAAAAAAAAAAAAACTCkyETcWayIZ9YEoQEBVIF3i7iXEe6M3KjYYaSVCYdqSbSHBzlwKWYbP+Xj/MMYBYTLZ1aiRQWCMK4gWPYppZw==", "subType": "06" } } }, "gcp_string_det_explicit_altname": { "kms": "gcp", "type": "string", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "ARgjwAAAAAAAAAAAAAAAAAACTCkyETcWayIZ9YEoQEBVIF3i7iXEe6M3KjYYaSVCYdqSbSHBzlwKWYbP+Xj/MMYBYTLZ1aiRQWCMK4gWPYppZw==", "subType": "06" } } }, "gcp_object_rand_auto_id": { "kms": "gcp", "type": "object", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AhgjwAAAAAAAAAAAAAAAAAADy+8fkyeNYdIK001YogXfKc25zRXS1VGIFVWR6jRfrexy9C8LBBfX3iDwGNPbP2pkC3Tq16OoziQB6iNGf7s7yg==", "subType": "06" } } }, "gcp_object_rand_auto_altname": { "kms": "gcp", "type": "object", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AhgjwAAAAAAAAAAAAAAAAAADixoDdvm57gH8ooOaKI57WyZD5uaPmuYgmrgAFuV8I+oaalqYctnNSYlzQKCMQX/mIcTxvW3oOWY7+IzAz7npvw==", "subType": "06" } } }, "gcp_object_rand_explicit_id": { "kms": "gcp", "type": "object", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AhgjwAAAAAAAAAAAAAAAAAADvq0OAoijgHaVMhsoNMdfWFLyISDo6Y13sYM0CoBXS/oXJNIJJvhgKPbFSV/h4IgiDLy4qNYOTJQvpqt094RPgQ==", "subType": "06" } } }, "gcp_object_rand_explicit_altname": { "kms": "gcp", "type": "object", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AhgjwAAAAAAAAAAAAAAAAAADuTZF7/uqGjFbjzBYspPkxGWvvVAEN/ib8bfPOQrEobtTWuU+ju9H3TlT9DMuFy7RdUZnPB0D3HkM8+zky5xeBw==", "subType": "06" } } }, "gcp_object_det_explicit_id": { "kms": "gcp", "type": "object", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": { "x": { "$numberInt": "1" } } }, "gcp_object_det_explicit_altname": { "kms": "gcp", "type": "object", "algo": "det", 
"method": "explicit", "identifier": "altname", "allowed": false, "value": { "x": { "$numberInt": "1" } } }, "gcp_array_rand_auto_id": { "kms": "gcp", "type": "array", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AhgjwAAAAAAAAAAAAAAAAAAE085kJIBX6S93D94bcRjkOegEKsksi2R1cxoVDoOpSdHh3S6bZAOh50W405wvnOKf3KTP9SICDUehQKQZSC026Y5dwVQ2GiM7PtpSedthKJs=", "subType": "06" } } }, "gcp_array_rand_auto_altname": { "kms": "gcp", "type": "array", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AhgjwAAAAAAAAAAAAAAAAAAEk/FAXsaqyVr6I+MY5L0axeLhskcEfLZeB8whLMKbjLDLa8Iep+IdrFVSfKo03Zr/7Ah8Js01aT6+Vt4EDMJK0mGKZJOjsrAf3b6RS+Mzebg=", "subType": "06" } } }, "gcp_array_rand_explicit_id": { "kms": "gcp", "type": "array", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AhgjwAAAAAAAAAAAAAAAAAAEDY7J9JGiurctYr7ytakNjcryVm42fkubcVpQpUYEkpK/G9NLGjrJuFgNW5ZVjYiPKEBbDB7vEtJqGux0BU++hrvVHNJ3wUT2mbDE18NE4KE=", "subType": "06" } } }, "gcp_array_rand_explicit_altname": { "kms": "gcp", "type": "array", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AhgjwAAAAAAAAAAAAAAAAAAErFFlw8W9J2y+751RnYLw0TSK9ThD6sP3i4zPbZtiuhc90RFoJhScvqM9i4sDKuYePZZRLBxdX4EZhZClOmswCGDLCIWsQlSvCwgDcIsRR/w=", "subType": "06" } } }, "gcp_array_det_explicit_id": { "kms": "gcp", "type": "array", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": [ { "$numberInt": "1" }, { "$numberInt": "2" }, { "$numberInt": "3" } ] }, "gcp_array_det_explicit_altname": { "kms": "gcp", "type": "array", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": [ { "$numberInt": "1" }, { "$numberInt": "2" }, { "$numberInt": "3" } ] }, "gcp_binData=00_rand_auto_id": { "kms": "gcp", "type": "binData=00", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AhgjwAAAAAAAAAAAAAAAAAAF0R5BNkQKfm6wx/tob8nVGDEYV/pvy9UeCqc9gFNuB5d9KxCkgyxryV65rbB90OriqvWFO2jcxzchRYgRI3fQ+A==", "subType": "06" } } }, "gcp_binData=00_rand_auto_altname": { "kms": "gcp", "type": "binData=00", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AhgjwAAAAAAAAAAAAAAAAAAF4wcT8XGc3xNdKYDX5/cbUwPDdnkIXlWWCCYeSXSk2oWPxMZnPsVQ44nXKJJsKitoE3r/hL1sSG5239WzCWyx9g==", "subType": "06" } } }, "gcp_binData=00_rand_explicit_id": { "kms": "gcp", "type": "binData=00", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AhgjwAAAAAAAAAAAAAAAAAAF07OFs5mlx0AB6QBanaybLuhuFbG+19KxSqHlSgELcz6TQKI6equX97OZdaWSWf2SSeiYm5E6+Y3lgA5l4KxC2A==", "subType": "06" } } }, "gcp_binData=00_rand_explicit_altname": { "kms": "gcp", "type": "binData=00", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AhgjwAAAAAAAAAAAAAAAAAAFZ74Q7JMm7y2i3wRmjIRKefhmdnrhP1NXJgploi+44eQ2eRraZsW7peGPYyIfsXEbhgV5+aLmiYgvemBywfdogQ==", "subType": "06" } } }, "gcp_binData=00_det_auto_id": { "kms": "gcp", "type": "binData=00", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ARgjwAAAAAAAAAAAAAAAAAAFhwJkocj36WXoY3mg2GWUrJ5IQTo9MvkwEwRFKdkcxm9pX2PZPK7bN5ZWw3IFcQ/0GfaW6V4LYr8WarZdLF0p5g==", "subType": "06" } } }, "gcp_binData=00_det_explicit_id": { "kms": "gcp", "type": 
"binData=00", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ARgjwAAAAAAAAAAAAAAAAAAFhwJkocj36WXoY3mg2GWUrJ5IQTo9MvkwEwRFKdkcxm9pX2PZPK7bN5ZWw3IFcQ/0GfaW6V4LYr8WarZdLF0p5g==", "subType": "06" } } }, "gcp_binData=00_det_explicit_altname": { "kms": "gcp", "type": "binData=00", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "ARgjwAAAAAAAAAAAAAAAAAAFhwJkocj36WXoY3mg2GWUrJ5IQTo9MvkwEwRFKdkcxm9pX2PZPK7bN5ZWw3IFcQ/0GfaW6V4LYr8WarZdLF0p5g==", "subType": "06" } } }, "gcp_binData=04_rand_auto_id": { "kms": "gcp", "type": "binData=04", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AhgjwAAAAAAAAAAAAAAAAAAFmDO47RTVXzm8D4hfhLICILrQJg3yOwG3HYfCdz7yaanPow2Y6bMxvXxk+kDS29aS8pJKDqJQQoMGc1ZFD3yYKsLQHRi/8rW6TNDQd4sCQ00=", "subType": "06" } } }, "gcp_binData=04_rand_auto_altname": { "kms": "gcp", "type": "binData=04", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AhgjwAAAAAAAAAAAAAAAAAAFpiu9Q3LTuPmgdWBqo5Kw0vGF9xU1rMyE4xwR8GccZ7ZMrUcR4AnZnAP7ah5Oz8e7qonNYX4d09obesYSLlIjyK7J7qg+GWiEURgbvmOngaA=", "subType": "06" } } }, "gcp_binData=04_rand_explicit_id": { "kms": "gcp", "type": "binData=04", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AhgjwAAAAAAAAAAAAAAAAAAFHRy8dveGuMng9WMmadIp39jD7iEfl3bEjKmzyNoAc0wIcSJZo9kdGbNEwZ4p+A1gz273fmAt/AJwAxwvqdlanLWBr4wiSKz1Mu9VaBcTlyY=", "subType": "06" } } }, "gcp_binData=04_rand_explicit_altname": { "kms": "gcp", "type": "binData=04", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AhgjwAAAAAAAAAAAAAAAAAAFiqO+sKodqXuVox0zTbKuY4Ng0QE1If2hDLWXljAEZdYABPk20UJyL/CHR49WP2Cwvi4evJCf8sEfKpR+ugPiyxWzP3iVe6qqTzP93BBjqoc=", "subType": "06" } } }, "gcp_binData=04_det_auto_id": { "kms": "gcp", "type": "binData=04", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ARgjwAAAAAAAAAAAAAAAAAAFEp5Gut6iENHUqDMVdBm4cxQy35gnslTf7vSWW9InFh323BvaTTiubxbxTiMKIa/u47MfMprL9HNQSwgpAQc4lped+YnlRW8RYvTcG4frFtA=", "subType": "06" } } }, "gcp_binData=04_det_explicit_id": { "kms": "gcp", "type": "binData=04", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ARgjwAAAAAAAAAAAAAAAAAAFEp5Gut6iENHUqDMVdBm4cxQy35gnslTf7vSWW9InFh323BvaTTiubxbxTiMKIa/u47MfMprL9HNQSwgpAQc4lped+YnlRW8RYvTcG4frFtA=", "subType": "06" } } }, "gcp_binData=04_det_explicit_altname": { "kms": "gcp", "type": "binData=04", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "ARgjwAAAAAAAAAAAAAAAAAAFEp5Gut6iENHUqDMVdBm4cxQy35gnslTf7vSWW9InFh323BvaTTiubxbxTiMKIa/u47MfMprL9HNQSwgpAQc4lped+YnlRW8RYvTcG4frFtA=", "subType": "06" } } }, "gcp_undefined_rand_explicit_id": { "kms": "gcp", "type": "undefined", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$undefined": true } }, "gcp_undefined_rand_explicit_altname": { "kms": "gcp", "type": "undefined", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$undefined": true } }, "gcp_undefined_det_explicit_id": { "kms": "gcp", "type": "undefined", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$undefined": true } }, 
"gcp_undefined_det_explicit_altname": { "kms": "gcp", "type": "undefined", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$undefined": true } }, "gcp_objectId_rand_auto_id": { "kms": "gcp", "type": "objectId", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AhgjwAAAAAAAAAAAAAAAAAAH8Kt6coc8bPI4QIwS1tIdk6pPA05xlZvrOyAQgvoqaozMtWzG15OunQLDdS3yJ5WRiV7kO6CIKqRrvL2RykB5sw==", "subType": "06" } } }, "gcp_objectId_rand_auto_altname": { "kms": "gcp", "type": "objectId", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AhgjwAAAAAAAAAAAAAAAAAAHU5Yzmz2mbgNQrGSvglgVuv14nQWzipBkZUVSO4eYZ7wLrj/9t0fnizsu7Isgg5oA9fV0Snh/A9pDnHZWoccXUw==", "subType": "06" } } }, "gcp_objectId_rand_explicit_id": { "kms": "gcp", "type": "objectId", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AhgjwAAAAAAAAAAAAAAAAAAHsdq5/FLqbjMDiNzf+6k9yxUtFVjS/xSqErqaboOl21934pAzgkOzBGodpKKFuK0Ta4f3h21XS+84wlIYPMlTtw==", "subType": "06" } } }, "gcp_objectId_rand_explicit_altname": { "kms": "gcp", "type": "objectId", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AhgjwAAAAAAAAAAAAAAAAAAHokIdXxNQ/NBMdMAVNxyVuz/J5pMMdtfxxJxr7PbsRJ3FoD2QNjTgE1Wsz0G4o09Wv9UWD+/mIqPVlLgx1sRtPw==", "subType": "06" } } }, "gcp_objectId_det_auto_id": { "kms": "gcp", "type": "objectId", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ARgjwAAAAAAAAAAAAAAAAAAHkcbaj3Hy3b4HkjRkMgiw5h6jBW7Sc56QSJmAPmVSc2T4B8d79A49dW0RyEiInZJcnVRjrYzUTRtgRaG4/FRd8g==", "subType": "06" } } }, "gcp_objectId_det_explicit_id": { "kms": "gcp", "type": "objectId", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ARgjwAAAAAAAAAAAAAAAAAAHkcbaj3Hy3b4HkjRkMgiw5h6jBW7Sc56QSJmAPmVSc2T4B8d79A49dW0RyEiInZJcnVRjrYzUTRtgRaG4/FRd8g==", "subType": "06" } } }, "gcp_objectId_det_explicit_altname": { "kms": "gcp", "type": "objectId", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "ARgjwAAAAAAAAAAAAAAAAAAHkcbaj3Hy3b4HkjRkMgiw5h6jBW7Sc56QSJmAPmVSc2T4B8d79A49dW0RyEiInZJcnVRjrYzUTRtgRaG4/FRd8g==", "subType": "06" } } }, "gcp_bool_rand_auto_id": { "kms": "gcp", "type": "bool", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AhgjwAAAAAAAAAAAAAAAAAAIf7vUYS5XFrEU4g03lzj9dk8a2MkaQdlH8nE/507D2Gm5XKQLi2jCENZ9UaQm3MQtVr4Uqrgz2GZiQHt9mXcG3w==", "subType": "06" } } }, "gcp_bool_rand_auto_altname": { "kms": "gcp", "type": "bool", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AhgjwAAAAAAAAAAAAAAAAAAIdOC4Tx/TaVLRtOL/Qh8RUFIzHFB6nSegZoITwZeDethd8V3+R+aIAgzfN3pvmZzagHyVCm2nbNYJNdjOJhuDrg==", "subType": "06" } } }, "gcp_bool_rand_explicit_id": { "kms": "gcp", "type": "bool", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AhgjwAAAAAAAAAAAAAAAAAAIzB14mX2vaZdiW9kGc+wYEgTCXA0FB5AVEyuERD00+K7U5Otlc6ZUwMtb9nGUu+M7PnnfxiDFHCrUWrTkAZzSUw==", "subType": "06" } } }, "gcp_bool_rand_explicit_altname": { "kms": "gcp", "type": "bool", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": 
"AhgjwAAAAAAAAAAAAAAAAAAIhRLg79ACCMfeERBgG1wirirrZXZzbK11RxHkAbf14Fji2L3sdMBdLBU5I028+rmtDdC7khcNMt11V6XGKpAjnA==", "subType": "06" } } }, "gcp_bool_det_explicit_id": { "kms": "gcp", "type": "bool", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": true }, "gcp_bool_det_explicit_altname": { "kms": "gcp", "type": "bool", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": true }, "gcp_date_rand_auto_id": { "kms": "gcp", "type": "date", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AhgjwAAAAAAAAAAAAAAAAAAJL+mjI8xBmSahOOi3XkGRGxjhGNdJb445KZtRAaUdCV0vMKbrefuiDHJDPCYo7mLYNhRSIhQfs63IFYMrlKP26A==", "subType": "06" } } }, "gcp_date_rand_auto_altname": { "kms": "gcp", "type": "date", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AhgjwAAAAAAAAAAAAAAAAAAJbeyqO5FRmqvPYyOb0tdKtK6JOg8QKbCl37/iFeEm7N0T0Pjb8Io4U0ndB3O6fjokc3kDQrZcQkV+OFWIMuKFjw==", "subType": "06" } } }, "gcp_date_rand_explicit_id": { "kms": "gcp", "type": "date", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AhgjwAAAAAAAAAAAAAAAAAAJVz3rSYIcoYtM0tZ8pB2Ytgh8RvYPeZvW7aUVJfZkZlIhfUHOHEf5kHqxzt8E1l2n3lmK/7ZVCFUuCCmr8cZyWw==", "subType": "06" } } }, "gcp_date_rand_explicit_altname": { "kms": "gcp", "type": "date", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AhgjwAAAAAAAAAAAAAAAAAAJAiQqNyUcpuDEpFt7skp2NSHFCux2XObrIIFgXReYgtWoapL/n4zksJXl89PGavzNPBZbzgEa8uwwAe+S+Y6TLg==", "subType": "06" } } }, "gcp_date_det_auto_id": { "kms": "gcp", "type": "date", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ARgjwAAAAAAAAAAAAAAAAAAJmATV2A1P5DmrS8uES6AMD9y+EU3x7u4K4J0p296iSkCEgIdZZORhPIEnuJK3FHw1II6IEShW2nd7sOJRZSGKcg==", "subType": "06" } } }, "gcp_date_det_explicit_id": { "kms": "gcp", "type": "date", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ARgjwAAAAAAAAAAAAAAAAAAJmATV2A1P5DmrS8uES6AMD9y+EU3x7u4K4J0p296iSkCEgIdZZORhPIEnuJK3FHw1II6IEShW2nd7sOJRZSGKcg==", "subType": "06" } } }, "gcp_date_det_explicit_altname": { "kms": "gcp", "type": "date", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "ARgjwAAAAAAAAAAAAAAAAAAJmATV2A1P5DmrS8uES6AMD9y+EU3x7u4K4J0p296iSkCEgIdZZORhPIEnuJK3FHw1II6IEShW2nd7sOJRZSGKcg==", "subType": "06" } } }, "gcp_null_rand_explicit_id": { "kms": "gcp", "type": "null", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": false, "value": null }, "gcp_null_rand_explicit_altname": { "kms": "gcp", "type": "null", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": false, "value": null }, "gcp_null_det_explicit_id": { "kms": "gcp", "type": "null", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": null }, "gcp_null_det_explicit_altname": { "kms": "gcp", "type": "null", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": null }, "gcp_regex_rand_auto_id": { "kms": "gcp", "type": "regex", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AhgjwAAAAAAAAAAAAAAAAAALiebb3hWwJRqlgVEhLYKKvo6cnlU7BFnZnvlZ8GuIr11fUvcnS9Tg2m7vPmfL7WVyuNrXlR48x28Es49YuaxuIg==", "subType": "06" } } }, 
"gcp_regex_rand_auto_altname": { "kms": "gcp", "type": "regex", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AhgjwAAAAAAAAAAAAAAAAAALouDFNLVgBXqhJvBRj9DKacuD1AQ2NAVDW93P9NpZDFFwGOFxmKUcklbPj8KkHqvma8ovVUBTLLUDR+tKFRvC2Q==", "subType": "06" } } }, "gcp_regex_rand_explicit_id": { "kms": "gcp", "type": "regex", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AhgjwAAAAAAAAAAAAAAAAAALtdcT9+3R1he4eniT+1opqs/YtujFlqzBXssv+hCKhJQVY/IXde32nNpQ1WTgUc7jfIJl/v9HvuA9cDHPtDWWTg==", "subType": "06" } } }, "gcp_regex_rand_explicit_altname": { "kms": "gcp", "type": "regex", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AhgjwAAAAAAAAAAAAAAAAAALAwlRAlj4Zpn+wu9eOcs5CsNgrkVwrgmu1tc4wyQp0Lt+3UcplYsXQMrMPcTx3yB0JcI4Kh65n/DrAaA+G/a6iw==", "subType": "06" } } }, "gcp_regex_det_auto_id": { "kms": "gcp", "type": "regex", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ARgjwAAAAAAAAAAAAAAAAAALbCutQ7D94gk0djewcQiEdMFVVa21+Dn5enQf/mqPi3o7vPy7OejDBk9fiZRffsioRMhlx2cxqa8T3+AkeN96yg==", "subType": "06" } } }, "gcp_regex_det_explicit_id": { "kms": "gcp", "type": "regex", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ARgjwAAAAAAAAAAAAAAAAAALbCutQ7D94gk0djewcQiEdMFVVa21+Dn5enQf/mqPi3o7vPy7OejDBk9fiZRffsioRMhlx2cxqa8T3+AkeN96yg==", "subType": "06" } } }, "gcp_regex_det_explicit_altname": { "kms": "gcp", "type": "regex", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "ARgjwAAAAAAAAAAAAAAAAAALbCutQ7D94gk0djewcQiEdMFVVa21+Dn5enQf/mqPi3o7vPy7OejDBk9fiZRffsioRMhlx2cxqa8T3+AkeN96yg==", "subType": "06" } } }, "gcp_dbPointer_rand_auto_id": { "kms": "gcp", "type": "dbPointer", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AhgjwAAAAAAAAAAAAAAAAAAMG8P+Y2YNIgknxE0/yPDCHASBvCU1IJwsEyaJPuOjn03enxEN7z/wbjVMN0lGUptDP3SVL+OIZtQ35VRP84MtnbdhcfZWqMhLjzrCjmtHUEg=", "subType": "06" } } }, "gcp_dbPointer_rand_auto_altname": { "kms": "gcp", "type": "dbPointer", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AhgjwAAAAAAAAAAAAAAAAAAMKCLFUN6ApB5fSVEWazRddhKTEwgqI/mxfe0BBxht69pZQYhTjhOJP0YcIrtr+RCeHOa4FIJgQod1CFOellIzO5YH5CuV4wPxCAlOdbJcBK8=", "subType": "06" } } }, "gcp_dbPointer_rand_explicit_id": { "kms": "gcp", "type": "dbPointer", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AhgjwAAAAAAAAAAAAAAAAAAM7ULEA6uKKv4Pu4Sa3aAt7dXtEwfQC98aJoLBapHT+xXtn5GWPynOZQNtV3lGaYExQjiGdYbzOcav3SVy/sYTe3ktgkQnuZfe0tk0zyvKIMM=", "subType": "06" } } }, "gcp_dbPointer_rand_explicit_altname": { "kms": "gcp", "type": "dbPointer", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AhgjwAAAAAAAAAAAAAAAAAAMoMveHO1MadAKuT498xiKWWBUKRbH7k7P2YETDg/BufVw0swos07rk6WJa1vqyF61QEmACjy4pmlK/5P0VfKJBAIvif51YqHPQkobJVS3nVA=", "subType": "06" } } }, "gcp_dbPointer_det_auto_id": { "kms": "gcp", "type": "dbPointer", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ARgjwAAAAAAAAAAAAAAAAAAMz+9m1bE+Th9YeyPmJdtJPO0F5QYsGYtU/Eom/LSoYjDmTmV2ehkKx/cevIxJfZUc+Mvv/uGoeuubGl8tiX4l+f6yLrSIS6QBtIHYKXk+JNE=", "subType": 
"06" } } }, "gcp_dbPointer_det_explicit_id": { "kms": "gcp", "type": "dbPointer", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ARgjwAAAAAAAAAAAAAAAAAAMz+9m1bE+Th9YeyPmJdtJPO0F5QYsGYtU/Eom/LSoYjDmTmV2ehkKx/cevIxJfZUc+Mvv/uGoeuubGl8tiX4l+f6yLrSIS6QBtIHYKXk+JNE=", "subType": "06" } } }, "gcp_dbPointer_det_explicit_altname": { "kms": "gcp", "type": "dbPointer", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "ARgjwAAAAAAAAAAAAAAAAAAMz+9m1bE+Th9YeyPmJdtJPO0F5QYsGYtU/Eom/LSoYjDmTmV2ehkKx/cevIxJfZUc+Mvv/uGoeuubGl8tiX4l+f6yLrSIS6QBtIHYKXk+JNE=", "subType": "06" } } }, "gcp_javascript_rand_auto_id": { "kms": "gcp", "type": "javascript", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AhgjwAAAAAAAAAAAAAAAAAANqBD0ITMn4BaFnDp7BX7vXbRBkFwmjQRVUeBbwsQtv5WVlJMAd/2+w7tyH8Wc44x0/9U/DA5GVhpTrtdDyPBI3w==", "subType": "06" } } }, "gcp_javascript_rand_auto_altname": { "kms": "gcp", "type": "javascript", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AhgjwAAAAAAAAAAAAAAAAAANtA0q4mbkAaKX4x1xk0/094Mln0wnh2bYnI6s6dh+l2WLDH7A9JMZxCl6kc4uOsEfbOvjP/PLIYtdMGs14EjM5A==", "subType": "06" } } }, "gcp_javascript_rand_explicit_id": { "kms": "gcp", "type": "javascript", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AhgjwAAAAAAAAAAAAAAAAAANfrW3pmeiFdBFt5tJS6Auq9Wo/J4r/vMRiueLWxig5S1zYuf9kFPJMK/nN9HqQPIcBIJIC2i/uEPgeepaNXACCw==", "subType": "06" } } }, "gcp_javascript_rand_explicit_altname": { "kms": "gcp", "type": "javascript", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AhgjwAAAAAAAAAAAAAAAAAANL7UZNzpwfwhRn/HflWIE9CSxGYNwLSo9d86HsOJ42rrZKq6HQqm/hiEAg0lyqCxVIVFxYEc2BUWSaq4/+SSyZw==", "subType": "06" } } }, "gcp_javascript_det_auto_id": { "kms": "gcp", "type": "javascript", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ARgjwAAAAAAAAAAAAAAAAAANB2d97R8nUJqnG0JPsWzyFe5pct5jvUljdkPnlZvLN1ZH+wSu4WmLfjri6IzzYP//f8tywn4Il+R4lZ0Kr/RAeA==", "subType": "06" } } }, "gcp_javascript_det_explicit_id": { "kms": "gcp", "type": "javascript", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ARgjwAAAAAAAAAAAAAAAAAANB2d97R8nUJqnG0JPsWzyFe5pct5jvUljdkPnlZvLN1ZH+wSu4WmLfjri6IzzYP//f8tywn4Il+R4lZ0Kr/RAeA==", "subType": "06" } } }, "gcp_javascript_det_explicit_altname": { "kms": "gcp", "type": "javascript", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "ARgjwAAAAAAAAAAAAAAAAAANB2d97R8nUJqnG0JPsWzyFe5pct5jvUljdkPnlZvLN1ZH+wSu4WmLfjri6IzzYP//f8tywn4Il+R4lZ0Kr/RAeA==", "subType": "06" } } }, "gcp_symbol_rand_auto_id": { "kms": "gcp", "type": "symbol", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AhgjwAAAAAAAAAAAAAAAAAAOsGdnr6EKcBdOAvYrP0o1pWbhhJbYsqfVwwwS1zq6ZkBayOss2J3TuYwBGXhJFlq3iIiWLdxGQ883XIvuAECnqUNuvpK2rOLwtDg8xJLiH24=", "subType": "06" } } }, "gcp_symbol_rand_auto_altname": { "kms": "gcp", "type": "symbol", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": 
"AhgjwAAAAAAAAAAAAAAAAAAOpfa6CUSnJBvnWdd7pSZ2pXAbYm68Yka6xa/fuyhVx/Tc926/JpqmOmQtXqbOj8dZra0rQ3/yxHySwgD7s9Qr+xvyL7LvAguGkGmEV5H4Xz4=", "subType": "06" } } }, "gcp_symbol_rand_explicit_id": { "kms": "gcp", "type": "symbol", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AhgjwAAAAAAAAAAAAAAAAAAO085iqYGFdtjiFWHcNqE0HuKMNHmk49DVh+pX8Pb4p3ehB57JL1nRqaXqHPqhFenxSEInT/te9HQRr+ADcHADvUGsScfm/n85v85nq6X+5y4=", "subType": "06" } } }, "gcp_symbol_rand_explicit_altname": { "kms": "gcp", "type": "symbol", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AhgjwAAAAAAAAAAAAAAAAAAOiidb+2TsbAb2wc7MtDzb/UYsjgVNSw410Sz9pm+Uy7aZROE5SURKXdLjrCH2ZM2a+XCAl3o9yAoNgmAjEvYVxjmyzLK00EVjT42MBOrdA+k=", "subType": "06" } } }, "gcp_symbol_det_auto_id": { "kms": "gcp", "type": "symbol", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ARgjwAAAAAAAAAAAAAAAAAAOFBGo77joqvZl7QQMB9ebMsAI3uro8ILQTJsTUgAqNzSh1mNzqihGHZYe84xtgMrVxNuwcjkidkRbNnLXWLuarOx4tgmOLx5A5G1eYEe3s7Q=", "subType": "06" } } }, "gcp_symbol_det_explicit_id": { "kms": "gcp", "type": "symbol", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ARgjwAAAAAAAAAAAAAAAAAAOFBGo77joqvZl7QQMB9ebMsAI3uro8ILQTJsTUgAqNzSh1mNzqihGHZYe84xtgMrVxNuwcjkidkRbNnLXWLuarOx4tgmOLx5A5G1eYEe3s7Q=", "subType": "06" } } }, "gcp_symbol_det_explicit_altname": { "kms": "gcp", "type": "symbol", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "ARgjwAAAAAAAAAAAAAAAAAAOFBGo77joqvZl7QQMB9ebMsAI3uro8ILQTJsTUgAqNzSh1mNzqihGHZYe84xtgMrVxNuwcjkidkRbNnLXWLuarOx4tgmOLx5A5G1eYEe3s7Q=", "subType": "06" } } }, "gcp_javascriptWithScope_rand_auto_id": { "kms": "gcp", "type": "javascriptWithScope", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AhgjwAAAAAAAAAAAAAAAAAAPUsQHeXWhdmyfQ2Sq1ev1HMuMhBTc/FZFKO9tMMcI9qzjr+z4IdCOFCcx24/T/6NCsDpMiOGNnCdaBCCNRwNM0CTIkpHNLO+RSZORDgAsm9Q=", "subType": "06" } } }, "gcp_javascriptWithScope_rand_auto_altname": { "kms": "gcp", "type": "javascriptWithScope", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AhgjwAAAAAAAAAAAAAAAAAAPRZawtuu0gErebyFqiQw0LxniWhdeujGzaqfAXriGo/2fU7PalzTlWQa8wsv0y7Q/i1K4JbQwCEFpJWLppmtZshCGbVWjpPljB2BH4NNrLPE=", "subType": "06" } } }, "gcp_javascriptWithScope_rand_explicit_id": { "kms": "gcp", "type": "javascriptWithScope", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AhgjwAAAAAAAAAAAAAAAAAAP0qkQjuKmKIqdrsrR9djxt+1jFlEL7K9bP1oz7QWuY38dZJOoGwa6G1bP4wDzjsucJLCEgU2IY+t7BHraBFXvR/Aar8ID5eXcvJ7iOPIyqUw=", "subType": "06" } } }, "gcp_javascriptWithScope_rand_explicit_altname": { "kms": "gcp", "type": "javascriptWithScope", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AhgjwAAAAAAAAAAAAAAAAAAP6L41iuBWGLg3hQZuhXp4MupTQvIT07+/+CRY292sC02mehk5BkuSOEVrehlvyvBJFKia4Bqd/UWvY8PnUPLqFKTLnokONWbAuh36y3gjStw=", "subType": "06" } } }, "gcp_javascriptWithScope_det_explicit_id": { "kms": "gcp", "type": "javascriptWithScope", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$code": "x=1", "$scope": {} } }, "gcp_javascriptWithScope_det_explicit_altname": { "kms": "gcp", "type": 
"javascriptWithScope", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$code": "x=1", "$scope": {} } }, "gcp_int_rand_auto_id": { "kms": "gcp", "type": "int", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AhgjwAAAAAAAAAAAAAAAAAAQ+6oRKWMSvC+3UGrHSyGeVlR9bFnZtFTmYlUoGn04k6ndtCl8rsmBVUV6dMMYd7znnZtTSIGPI8q6jwf/NJjdIw==", "subType": "06" } } }, "gcp_int_rand_auto_altname": { "kms": "gcp", "type": "int", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AhgjwAAAAAAAAAAAAAAAAAAQnz5jAbrrdutTPFA4m3MvlVJr3bpurTKY5xjwO5k8DZpeWTJzr+kVEJjG6M8/RgC/0UFNgBBrDbDhYa8PZHRijw==", "subType": "06" } } }, "gcp_int_rand_explicit_id": { "kms": "gcp", "type": "int", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AhgjwAAAAAAAAAAAAAAAAAAQfRFoxUgjrv8up/eZ/fLlr/z++d/jFm30nYvKqsnQT7vkmmujJWc8yAtthR9OI6W5biBgAkounqRHhvatLZC6gA==", "subType": "06" } } }, "gcp_int_rand_explicit_altname": { "kms": "gcp", "type": "int", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AhgjwAAAAAAAAAAAAAAAAAAQY/ePk59RY6vLejx9a5ITwkT9000KAubVSqMoQwv7lNXO+GKZfZoLHG6k1MA/IxTvl1Zbz1Tw1bTctmj0HPEGNA==", "subType": "06" } } }, "gcp_int_det_auto_id": { "kms": "gcp", "type": "int", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ARgjwAAAAAAAAAAAAAAAAAAQE9RVV9pOuysUUEGKq0u6ztFM0gTpoOHcHsTFQstA7+L9XTvxWEgL3RgNeq5KtKdODlxl62niV8dnQwlSoDSSWw==", "subType": "06" } } }, "gcp_int_det_explicit_id": { "kms": "gcp", "type": "int", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ARgjwAAAAAAAAAAAAAAAAAAQE9RVV9pOuysUUEGKq0u6ztFM0gTpoOHcHsTFQstA7+L9XTvxWEgL3RgNeq5KtKdODlxl62niV8dnQwlSoDSSWw==", "subType": "06" } } }, "gcp_int_det_explicit_altname": { "kms": "gcp", "type": "int", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "ARgjwAAAAAAAAAAAAAAAAAAQE9RVV9pOuysUUEGKq0u6ztFM0gTpoOHcHsTFQstA7+L9XTvxWEgL3RgNeq5KtKdODlxl62niV8dnQwlSoDSSWw==", "subType": "06" } } }, "gcp_timestamp_rand_auto_id": { "kms": "gcp", "type": "timestamp", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AhgjwAAAAAAAAAAAAAAAAAARLnk1LpJIriKr6iiY1yBDGnfkRaHNwWcQyL+mORtYC4+AQ6oMv0qpGrJxS2QCbYY1tGmAISqZHCIExCG+TIv4bw==", "subType": "06" } } }, "gcp_timestamp_rand_auto_altname": { "kms": "gcp", "type": "timestamp", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AhgjwAAAAAAAAAAAAAAAAAARaqYXh9AVZI6gvRZrBwbprE5P3K5Qf4PIK1ca+mLRNOof0EExyAhtku7mYXusLeq0ww/tV6Zt1cA36KsT8a0Nog==", "subType": "06" } } }, "gcp_timestamp_rand_explicit_id": { "kms": "gcp", "type": "timestamp", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AhgjwAAAAAAAAAAAAAAAAAARLXzBjkCN8BpfXDIrb94kuZCD07Uo/DMBfMIWQtAb1++tTheUoY2ClQz33Luh4g8NXwuMJ7h8ufE70N2+b1yrUg==", "subType": "06" } } }, "gcp_timestamp_rand_explicit_altname": { "kms": "gcp", "type": "timestamp", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AhgjwAAAAAAAAAAAAAAAAAARe44QH9ZvTAuHsWhEMoue8eHod+cJpBm+Kl/Xtw7NI/6UTOOHC5Kkg20EvX3+GwXdAGk0bUSCFiTZb/yPox1OlA==", 
"subType": "06" } } }, "gcp_timestamp_det_auto_id": { "kms": "gcp", "type": "timestamp", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ARgjwAAAAAAAAAAAAAAAAAARzXjP6d6j/iQxiz1/TC/m+IfAGLFH9wY2ksS//i9x15QttlhcRrT3XmPvxaP5OjTHac4Gq3m2aXiJH56lETyl8A==", "subType": "06" } } }, "gcp_timestamp_det_explicit_id": { "kms": "gcp", "type": "timestamp", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ARgjwAAAAAAAAAAAAAAAAAARzXjP6d6j/iQxiz1/TC/m+IfAGLFH9wY2ksS//i9x15QttlhcRrT3XmPvxaP5OjTHac4Gq3m2aXiJH56lETyl8A==", "subType": "06" } } }, "gcp_timestamp_det_explicit_altname": { "kms": "gcp", "type": "timestamp", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "ARgjwAAAAAAAAAAAAAAAAAARzXjP6d6j/iQxiz1/TC/m+IfAGLFH9wY2ksS//i9x15QttlhcRrT3XmPvxaP5OjTHac4Gq3m2aXiJH56lETyl8A==", "subType": "06" } } }, "gcp_long_rand_auto_id": { "kms": "gcp", "type": "long", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AhgjwAAAAAAAAAAAAAAAAAASuGZs48eEyVBJ9vvM6cvRySfuR0WM4kL7lx52rSGXBKtkZywyP5rJwNtRn9WTBMDqc1O/4jUgYXpqHx39SLhUPA==", "subType": "06" } } }, "gcp_long_rand_auto_altname": { "kms": "gcp", "type": "long", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AhgjwAAAAAAAAAAAAAAAAAAS/62F71oKTX1GlvOP89uNhXpIyLZ5OdnuLeM/hvL5HWyOudSb06cG3+xnPg3QgppAYFK5X2PGgrEcrA87AykLPg==", "subType": "06" } } }, "gcp_long_rand_explicit_id": { "kms": "gcp", "type": "long", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AhgjwAAAAAAAAAAAAAAAAAASSgx+p4YzTvjZ+GCZCFHEKHNXJUSloPnLRHE4iJ515Epb8Tox7h8/aIAkB3ulnDS9BiT5UKdye2TWf8OBEwkXzg==", "subType": "06" } } }, "gcp_long_rand_explicit_altname": { "kms": "gcp", "type": "long", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AhgjwAAAAAAAAAAAAAAAAAAStqszyEfltpgd3aYeoyqaJX27OX861o06VhNX/N2fdSfKx0NQq/hWlWTkX6hK3hjCijiTtHmhFQR6QLkHD/6THw==", "subType": "06" } } }, "gcp_long_det_auto_id": { "kms": "gcp", "type": "long", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ARgjwAAAAAAAAAAAAAAAAAAS0wJHtZKnxJlWnlSu0xuq7bZR25UdwcbdCRSaXBC0EXEFuqlzrZSn1lcwKPKGZQO8EQ6SdQDqK95alMLmM8eQrQ==", "subType": "06" } } }, "gcp_long_det_explicit_id": { "kms": "gcp", "type": "long", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ARgjwAAAAAAAAAAAAAAAAAAS0wJHtZKnxJlWnlSu0xuq7bZR25UdwcbdCRSaXBC0EXEFuqlzrZSn1lcwKPKGZQO8EQ6SdQDqK95alMLmM8eQrQ==", "subType": "06" } } }, "gcp_long_det_explicit_altname": { "kms": "gcp", "type": "long", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "ARgjwAAAAAAAAAAAAAAAAAAS0wJHtZKnxJlWnlSu0xuq7bZR25UdwcbdCRSaXBC0EXEFuqlzrZSn1lcwKPKGZQO8EQ6SdQDqK95alMLmM8eQrQ==", "subType": "06" } } }, "gcp_decimal_rand_auto_id": { "kms": "gcp", "type": "decimal", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AhgjwAAAAAAAAAAAAAAAAAATg4U3nbHBX/Az3ie2yurEIJO6cFryQWKiCpBbx1z0NF7RXd7kFC1XzaY6zcBjfl2AfRO8FFmgjTmFXb6gTRSSF0iAZJZTslfe3n6YFtwSKDI=", "subType": "06" } } }, "gcp_decimal_rand_auto_altname": { "kms": "gcp", "type": "decimal", "algo": "rand", 
"method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AhgjwAAAAAAAAAAAAAAAAAATdSSyp0ewboV5zI3T3TV/FOrdx0UQbFHhqcH+yqpotoWPSw5dxE+BEoihYLeaPKuVU/rUIY4TUv05Egj7Ovg62Kpk3cPscxsGtE/T2Ppbt6o=", "subType": "06" } } }, "gcp_decimal_rand_explicit_id": { "kms": "gcp", "type": "decimal", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AhgjwAAAAAAAAAAAAAAAAAATl7k20T22pf5Y9knVwIDyOIlbHyZBJqyi3Mai8APEZIYjpSKDKs8QNAH69CIjupyge8Izw4Cuch0bRrvMbp6YFfrUgk1JIQ4iLKkqqzHpBTY=", "subType": "06" } } }, "gcp_decimal_rand_explicit_altname": { "kms": "gcp", "type": "decimal", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AhgjwAAAAAAAAAAAAAAAAAATF7YLkhkuLhXdxrQk2fJTs128tRNYHeodkqw7ha/TxW3Czr5gE272gnkdzfNoS7uu9XwOr1yjrC6y/8gHALAWn77WvGrAlBktLQbIIinsuds=", "subType": "06" } } }, "gcp_decimal_det_explicit_id": { "kms": "gcp", "type": "decimal", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$numberDecimal": "1.234" } }, "gcp_decimal_det_explicit_altname": { "kms": "gcp", "type": "decimal", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$numberDecimal": "1.234" } }, "gcp_minKey_rand_explicit_id": { "kms": "gcp", "type": "minKey", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$minKey": 1 } }, "gcp_minKey_rand_explicit_altname": { "kms": "gcp", "type": "minKey", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$minKey": 1 } }, "gcp_minKey_det_explicit_id": { "kms": "gcp", "type": "minKey", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$minKey": 1 } }, "gcp_minKey_det_explicit_altname": { "kms": "gcp", "type": "minKey", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$minKey": 1 } }, "gcp_maxKey_rand_explicit_id": { "kms": "gcp", "type": "maxKey", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$maxKey": 1 } }, "gcp_maxKey_rand_explicit_altname": { "kms": "gcp", "type": "maxKey", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$maxKey": 1 } }, "gcp_maxKey_det_explicit_id": { "kms": "gcp", "type": "maxKey", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$maxKey": 1 } }, "gcp_maxKey_det_explicit_altname": { "kms": "gcp", "type": "maxKey", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$maxKey": 1 } }, "kmip_double_rand_auto_id": { "kms": "kmip", "type": "double", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AijCDwAAAAAAAAAAAAAAAAAB1hL/nPkpQtqxQUANbIJr30PQ98vPvaoy4JWUoElOL+cCnrSra3o7W+12dydy0rCS2EKrVm7Fw0C8L9nf1hpWjw==", "subType": "06" } } }, "kmip_double_rand_auto_altname": { "kms": "kmip", "type": "double", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AijCDwAAAAAAAAAAAAAAAAABxlcphy2SxXlkRBvO1Z3nNUqchmeOhIhkdYBbbW7CwYeLVRDciXFsZN73Nb9Bm+W4IpUNpo6mqFEtfjevIjtFyg==", "subType": "06" } } }, "kmip_double_rand_explicit_id": { "kms": "kmip", "type": "double", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": 
"AijCDwAAAAAAAAAAAAAAAAABx5AfRSiblFc1DGwxRIaUSP2kaM76ryzPUKL9KnEgnX1kjIlFz5B15uMht2cxdrntHFe1qZZk8V9PxTBpWZhJ8Q==", "subType": "06" } } }, "kmip_double_rand_explicit_altname": { "kms": "kmip", "type": "double", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AijCDwAAAAAAAAAAAAAAAAABXUC9v9HPrmU9tINzFmr2sQM9f7GHDus+y5T4pWX28PRtfnTysN/ANCfB9RosoR/wuKsbznwwD2JfSzOvlKo3PQ==", "subType": "06" } } }, "kmip_double_det_explicit_id": { "kms": "kmip", "type": "double", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$numberDouble": "1.2339999999999999858" } }, "kmip_double_det_explicit_altname": { "kms": "kmip", "type": "double", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$numberDouble": "1.2339999999999999858" } }, "kmip_string_rand_auto_id": { "kms": "kmip", "type": "string", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AijCDwAAAAAAAAAAAAAAAAACGHmqW1qbfqVlfB0x0CkXCk9smhs3yXsxJ/8eypSgbDQqVLSW2nf5bbHpnoCHHNtQ7I7ZBXzPzDLH2GgMJpopeQ==", "subType": "06" } } }, "kmip_string_rand_auto_altname": { "kms": "kmip", "type": "string", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AijCDwAAAAAAAAAAAAAAAAAC9BJTD1pEMbslAjbJYt7yx/jzKkcZF3axu96+NYwp8afUCjXG5TOUZzODOwkbJuWgr7DBxa2GkZTvaAEk86h+Ow==", "subType": "06" } } }, "kmip_string_rand_explicit_id": { "kms": "kmip", "type": "string", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AijCDwAAAAAAAAAAAAAAAAACQlG28ECy8KHXC7GEPdC8+raBo2RMJwl5pofcPaTGkPUEbkreguMd1mYctNb90vXxby1nNeJY4o5zJJCMiNhNXg==", "subType": "06" } } }, "kmip_string_rand_explicit_altname": { "kms": "kmip", "type": "string", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AijCDwAAAAAAAAAAAAAAAAACbWuK+3nzeKSNVjmgHb0Ii7rA+CsAd+gYubPiMiHXZwE/o6i9FYWN+t/VK3p4K0CwIi6q3cycrMb2IgcvM27Q7Q==", "subType": "06" } } }, "kmip_string_det_auto_id": { "kms": "kmip", "type": "string", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ASjCDwAAAAAAAAAAAAAAAAAC5OZgr9keCXOIj5Fi06i4win1xt7gpsyPA4Os+HdFn1MIP9tnktvWNRb8Rqhuj2O9KO83brx74Hu3EQ4nT6uCMw==", "subType": "06" } } }, "kmip_string_det_explicit_id": { "kms": "kmip", "type": "string", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ASjCDwAAAAAAAAAAAAAAAAAC5OZgr9keCXOIj5Fi06i4win1xt7gpsyPA4Os+HdFn1MIP9tnktvWNRb8Rqhuj2O9KO83brx74Hu3EQ4nT6uCMw==", "subType": "06" } } }, "kmip_string_det_explicit_altname": { "kms": "kmip", "type": "string", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "ASjCDwAAAAAAAAAAAAAAAAAC5OZgr9keCXOIj5Fi06i4win1xt7gpsyPA4Os+HdFn1MIP9tnktvWNRb8Rqhuj2O9KO83brx74Hu3EQ4nT6uCMw==", "subType": "06" } } }, "kmip_object_rand_auto_id": { "kms": "kmip", "type": "object", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AijCDwAAAAAAAAAAAAAAAAADh2nGqaAUwHDRVjqYpj8JAPH7scmiHp1Z9SGBZQ6Fapxm+zWDdTBHyitM9U69BctJ5DaaafyqFOj5yr6sJ+ebJQ==", "subType": "06" } } }, "kmip_object_rand_auto_altname": { "kms": "kmip", "type": "object", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { 
"base64": "AijCDwAAAAAAAAAAAAAAAAAD1YhOKyNle4y0Qbeio1HlCULLeTCALCLgKSITd50bilD+oDyqQawixJAwphcdjhLdFzbFwst5RWqpsiWMPHx4hQ==", "subType": "06" } } }, "kmip_object_rand_explicit_id": { "kms": "kmip", "type": "object", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AijCDwAAAAAAAAAAAAAAAAADveILoWFgX7AhUWCv8UL52TUa75qHuoNadnTQydJlqd6PVmtRKj+8vS7VwxNWPaH4wB1Tk7emMyFEbZpvvzjxqQ==", "subType": "06" } } }, "kmip_object_rand_explicit_altname": { "kms": "kmip", "type": "object", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AijCDwAAAAAAAAAAAAAAAAADB/LN9V/4SROJn+ESHRLM7wwcUltQUx3+LbbYXjPDXiiV14HK76Iyy6ZxJ+M5qC9bRj3afhTKuWLBblB8WwksOg==", "subType": "06" } } }, "kmip_object_det_explicit_id": { "kms": "kmip", "type": "object", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": { "x": { "$numberInt": "1" } } }, "kmip_object_det_explicit_altname": { "kms": "kmip", "type": "object", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "x": { "$numberInt": "1" } } }, "kmip_array_rand_auto_id": { "kms": "kmip", "type": "array", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AijCDwAAAAAAAAAAAAAAAAAEasWXQam8XtOkSO0nEttMCQ0iZ4V8DDmhMKyQDFDsiNHyF2h98Ya/xFv4ZSlbpGWXPBvBATEGgov/PDg2vhVi53y4Pk33RHfY60hABuksp3o=", "subType": "06" } } }, "kmip_array_rand_auto_altname": { "kms": "kmip", "type": "array", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AijCDwAAAAAAAAAAAAAAAAAEj3A1DYSEHm/3SlEmusA+pewxRPUoZ2NAjs60ioEBlCw9n6yiiB+X8d/w40TKsjZcOSfh05NC0z3gnpqQvrNolkxkvi9dmFiZeiiv5vBZUPI=", "subType": "06" } } }, "kmip_array_rand_explicit_id": { "kms": "kmip", "type": "array", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AijCDwAAAAAAAAAAAAAAAAAEqeJW+L6lP0bn5QcD0FMI0C8vv2n5kV7SKgqKi1o5mxaxmp3Cjlspf7yumfSiQ5js6G9yJVAvHuxlqv14UFyR9RgXS0PIA8WzsAqkL0sJSw0=", "subType": "06" } } }, "kmip_array_rand_explicit_altname": { "kms": "kmip", "type": "array", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AijCDwAAAAAAAAAAAAAAAAAEnPlPwy0B1VKuNum1GzkZwQjZia5jNYL5bf/k+PbfhnToTRWGxx8+E3R7XXp6YT/rFkjPlzU8ww9+iZNo2oqNpYuHdrIC8ybhO6HZAlvcERo=", "subType": "06" } } }, "kmip_array_det_explicit_id": { "kms": "kmip", "type": "array", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": [ { "$numberInt": "1" }, { "$numberInt": "2" }, { "$numberInt": "3" } ] }, "kmip_array_det_explicit_altname": { "kms": "kmip", "type": "array", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": [ { "$numberInt": "1" }, { "$numberInt": "2" }, { "$numberInt": "3" } ] }, "kmip_binData=00_rand_auto_id": { "kms": "kmip", "type": "binData=00", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AijCDwAAAAAAAAAAAAAAAAAFliNDZ6DmjoVcYQBCKDI9njpBsDELg+TD6XLF7xbZnMaJCCHLHr7w3x2/xFfrFSN44CtGAKOniYPCMAspaxHqOA==", "subType": "06" } } }, "kmip_binData=00_rand_auto_altname": { "kms": "kmip", "type": "binData=00", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": 
"AijCDwAAAAAAAAAAAAAAAAAF/P8LPmHKGgG0l5/Xi7jdkwfxpGPxoY0417suCvN6zjM3JNdufytzkektrm9CbBb1SnZCGYF9c0FCMzFG+tN/dg==", "subType": "06" } } }, "kmip_binData=00_rand_explicit_id": { "kms": "kmip", "type": "binData=00", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AijCDwAAAAAAAAAAAAAAAAAFWI0N4RbnYdEiFrzNpbRN9p+bSLm8Lthiu4K3/CvBg6GQpLMVQFhjW01Bud0lxpT2ohRnOK+ASUhiFcUU/t/lWQ==", "subType": "06" } } }, "kmip_binData=00_rand_explicit_altname": { "kms": "kmip", "type": "binData=00", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AijCDwAAAAAAAAAAAAAAAAAFQZvAtpY4cjEr1rJWVoUGaZKmzocSJ0muHose7Tk5kRDczjFa4Jcu4hN7JLM9qz2z4g+WJC3KQTdW4ZBXStke/Q==", "subType": "06" } } }, "kmip_binData=00_det_auto_id": { "kms": "kmip", "type": "binData=00", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ASjCDwAAAAAAAAAAAAAAAAAFohIHrvzu8xLxVHsnYEDhZmv8BpEoEtFSjMUQzvBLUInvvTuU/rOzlVL88CkAEII7M3hcvrz8FKY7b7lC1veoYg==", "subType": "06" } } }, "kmip_binData=00_det_explicit_id": { "kms": "kmip", "type": "binData=00", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ASjCDwAAAAAAAAAAAAAAAAAFohIHrvzu8xLxVHsnYEDhZmv8BpEoEtFSjMUQzvBLUInvvTuU/rOzlVL88CkAEII7M3hcvrz8FKY7b7lC1veoYg==", "subType": "06" } } }, "kmip_binData=00_det_explicit_altname": { "kms": "kmip", "type": "binData=00", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "ASjCDwAAAAAAAAAAAAAAAAAFohIHrvzu8xLxVHsnYEDhZmv8BpEoEtFSjMUQzvBLUInvvTuU/rOzlVL88CkAEII7M3hcvrz8FKY7b7lC1veoYg==", "subType": "06" } } }, "kmip_binData=04_rand_auto_id": { "kms": "kmip", "type": "binData=04", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AijCDwAAAAAAAAAAAAAAAAAFn7rhdO8tYq77uVxcqd9Qjz84Yg7JnJMYf0ULTMTh1vJHacckkhXw+8fIMMiAKwuOVwGkMAtu5RBvrFqdfxryCg8RLTxu1YYVthufiClEIS0=", "subType": "06" } } }, "kmip_binData=04_rand_auto_altname": { "kms": "kmip", "type": "binData=04", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AijCDwAAAAAAAAAAAAAAAAAFwwXQx9dKyoyHq7GBMmHzYe9ysoJK/f/ZWzA6nErau9MtX1gqi7VRsYqkamb47/zVbsLZwPMmdgNyPxEh3kqbV2D61t5RG2A3VeqhO1pTF8c=", "subType": "06" } } }, "kmip_binData=04_rand_explicit_id": { "kms": "kmip", "type": "binData=04", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AijCDwAAAAAAAAAAAAAAAAAFALeGeinJ8DE+WZniLdCIW2gfJUj445Ukp9PvRLgBXLGedl8mIXlLF2eu3BA9vP6s5y9w6peQjhn+oEofrsUVYD2duyzeIRMKgNiNchjf6TU=", "subType": "06" } } }, "kmip_binData=04_rand_explicit_altname": { "kms": "kmip", "type": "binData=04", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AijCDwAAAAAAAAAAAAAAAAAF06Fx8CO3OSKE3fGri0VwK0e22YiG9LH2QkDTsRdFbT2lBm+bDD9FrEY8vKWS5RljMuysaxjBOzZ98d2LEs6k8LMOm83Nz/RESe4ZbbcfdQ0=", "subType": "06" } } }, "kmip_binData=04_det_auto_id": { "kms": "kmip", "type": "binData=04", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ASjCDwAAAAAAAAAAAAAAAAAFzmZI909fJgxOykJtvOlv5LsX8z6BxUX2Xg5TsIwOxJMPSC8usm/zR7sZawoVBOuJxtNVLY/8oNP/4pFtAmQo02bUOtTo1yxNz/IZa9x+Q5E=", "subType": "06" } } }, "kmip_binData=04_det_explicit_id": { "kms": "kmip", "type": "binData=04", "algo": 
"det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ASjCDwAAAAAAAAAAAAAAAAAFzmZI909fJgxOykJtvOlv5LsX8z6BxUX2Xg5TsIwOxJMPSC8usm/zR7sZawoVBOuJxtNVLY/8oNP/4pFtAmQo02bUOtTo1yxNz/IZa9x+Q5E=", "subType": "06" } } }, "kmip_binData=04_det_explicit_altname": { "kms": "kmip", "type": "binData=04", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "ASjCDwAAAAAAAAAAAAAAAAAFzmZI909fJgxOykJtvOlv5LsX8z6BxUX2Xg5TsIwOxJMPSC8usm/zR7sZawoVBOuJxtNVLY/8oNP/4pFtAmQo02bUOtTo1yxNz/IZa9x+Q5E=", "subType": "06" } } }, "kmip_undefined_rand_explicit_id": { "kms": "kmip", "type": "undefined", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$undefined": true } }, "kmip_undefined_rand_explicit_altname": { "kms": "kmip", "type": "undefined", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$undefined": true } }, "kmip_undefined_det_explicit_id": { "kms": "kmip", "type": "undefined", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$undefined": true } }, "kmip_undefined_det_explicit_altname": { "kms": "kmip", "type": "undefined", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$undefined": true } }, "kmip_objectId_rand_auto_id": { "kms": "kmip", "type": "objectId", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AijCDwAAAAAAAAAAAAAAAAAHZFzE908RuO5deEt3t2QQdT12ybwqbm8D+sMJrdKt2Wp4kVPsw4ocAGGsRYN6VXe46P5fmyG5HqVWn0hkflZnQg==", "subType": "06" } } }, "kmip_objectId_rand_auto_altname": { "kms": "kmip", "type": "objectId", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AijCDwAAAAAAAAAAAAAAAAAH3dPKyCCStvOtVGzlgIS33fsl8OAwQblt9i21pOVuLiliY1Tup9EtkSic88+nNEtXnq9gRknRzLthXv/k1ql+7Q==", "subType": "06" } } }, "kmip_objectId_rand_explicit_id": { "kms": "kmip", "type": "objectId", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AijCDwAAAAAAAAAAAAAAAAAHcEjxVfHDSfLzFxAuK/rs/Pn/XV7jLkgKXZYeY0PNlRi1MHojN2AvQqI3J2rOvAjuYfikGcpvGPp/goqUbV9HYw==", "subType": "06" } } }, "kmip_objectId_rand_explicit_altname": { "kms": "kmip", "type": "objectId", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AijCDwAAAAAAAAAAAAAAAAAHX65sNHnRYpx3VbWPCdQyFe7u0Y5ItabLEduqDeVsPk/iK4X3GjCSHQfw1yPi+CA+/veVpgdonwws6RiYV4ZZ5Q==", "subType": "06" } } }, "kmip_objectId_det_auto_id": { "kms": "kmip", "type": "objectId", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ASjCDwAAAAAAAAAAAAAAAAAHKU7mcdGEq2WGrDB6TicipLQstAk6G3PkiNt5F3bMavpKLjz04UBrd8aWGVG2gJTTON1UKRztiYFgRvb8f+LK/Q==", "subType": "06" } } }, "kmip_objectId_det_explicit_id": { "kms": "kmip", "type": "objectId", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ASjCDwAAAAAAAAAAAAAAAAAHKU7mcdGEq2WGrDB6TicipLQstAk6G3PkiNt5F3bMavpKLjz04UBrd8aWGVG2gJTTON1UKRztiYFgRvb8f+LK/Q==", "subType": "06" } } }, "kmip_objectId_det_explicit_altname": { "kms": "kmip", "type": "objectId", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": 
"ASjCDwAAAAAAAAAAAAAAAAAHKU7mcdGEq2WGrDB6TicipLQstAk6G3PkiNt5F3bMavpKLjz04UBrd8aWGVG2gJTTON1UKRztiYFgRvb8f+LK/Q==", "subType": "06" } } }, "kmip_bool_rand_auto_id": { "kms": "kmip", "type": "bool", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AijCDwAAAAAAAAAAAAAAAAAIw/xgJlKEvErmVtue3X3RFsOI2sttAbxnzh1INc9GUQ2vok1VwYt9k88RxMPiOwMAZG7P1MlAdx7zt865onPKOw==", "subType": "06" } } }, "kmip_bool_rand_auto_altname": { "kms": "kmip", "type": "bool", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AijCDwAAAAAAAAAAAAAAAAAIn8IuzlNHbpTgXOd1wEp364zJOBxj2Zf7a9B5osUV1sDY0G1OVpEnuDvZeUsdiUSyRjTTxzyuD/KZlKZ3+qrnrA==", "subType": "06" } } }, "kmip_bool_rand_explicit_id": { "kms": "kmip", "type": "bool", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AijCDwAAAAAAAAAAAAAAAAAI3Nz9PdjUYQRGfTtvYSR8EQuUKFL0wdlEdfSCTBmMBhBPuuF9KxqCgy+ldVu1DRRgg3346DOKEEtE9BJPPInJ6Q==", "subType": "06" } } }, "kmip_bool_rand_explicit_altname": { "kms": "kmip", "type": "bool", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AijCDwAAAAAAAAAAAAAAAAAIEGjqoerIZBk8Rw+YTO7jFKWzagDS8mEpD+9Wm1Q0r0ZHUmV0dQZcIqRV4oUk8U8uHUn0N3t2qGLr+rhUs4GH/g==", "subType": "06" } } }, "kmip_bool_det_explicit_id": { "kms": "kmip", "type": "bool", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": true }, "kmip_bool_det_explicit_altname": { "kms": "kmip", "type": "bool", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": true }, "kmip_date_rand_auto_id": { "kms": "kmip", "type": "date", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AijCDwAAAAAAAAAAAAAAAAAJgr0v4xetUXjlLcPcyKv/rzjtWOKp9CZJcm23Noglu5RR/rXJS0qKI+W9MmJ64TMf27KvaJ0UXwfTRrvOC1plCg==", "subType": "06" } } }, "kmip_date_rand_auto_altname": { "kms": "kmip", "type": "date", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AijCDwAAAAAAAAAAAAAAAAAJoeysAaiPsVK+JL1P1vD/9xF92m5kKidUdn6yklPlSKN4VVEBTymDetTLujULs1u1TlrS71jVLxo3xEwpG/KQvg==", "subType": "06" } } }, "kmip_date_rand_explicit_id": { "kms": "kmip", "type": "date", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AijCDwAAAAAAAAAAAAAAAAAJVwu4+Su0DktpnZvzTBHYpWbWTq5gho/SLijrcIrFJcvq4YrjjPCXv+odCl95tkH+J1RlJdQ5Cr0umEIazLa6GA==", "subType": "06" } } }, "kmip_date_rand_explicit_altname": { "kms": "kmip", "type": "date", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AijCDwAAAAAAAAAAAAAAAAAJWTYpjbDkIf82QXHMGrvd0SqhP8cBIakfYJf5aNcNrs86vxRhiG3KwETWPeOOlPZ6n1WjE2bOLB+DJTAxmJvahA==", "subType": "06" } } }, "kmip_date_det_auto_id": { "kms": "kmip", "type": "date", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ASjCDwAAAAAAAAAAAAAAAAAJ/+sQrUqQh+JADSVIKM0d68gDUhDy37M1z1uvROzQw6hUAbQeD0DWdztADKg560UTPM4uOgH4NAyhLyBLMrWWHg==", "subType": "06" } } }, "kmip_date_det_explicit_id": { "kms": "kmip", "type": "date", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ASjCDwAAAAAAAAAAAAAAAAAJ/+sQrUqQh+JADSVIKM0d68gDUhDy37M1z1uvROzQw6hUAbQeD0DWdztADKg560UTPM4uOgH4NAyhLyBLMrWWHg==", "subType": 
"06" } } }, "kmip_date_det_explicit_altname": { "kms": "kmip", "type": "date", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "ASjCDwAAAAAAAAAAAAAAAAAJ/+sQrUqQh+JADSVIKM0d68gDUhDy37M1z1uvROzQw6hUAbQeD0DWdztADKg560UTPM4uOgH4NAyhLyBLMrWWHg==", "subType": "06" } } }, "kmip_null_rand_explicit_id": { "kms": "kmip", "type": "null", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": false, "value": null }, "kmip_null_rand_explicit_altname": { "kms": "kmip", "type": "null", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": false, "value": null }, "kmip_null_det_explicit_id": { "kms": "kmip", "type": "null", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": null }, "kmip_null_det_explicit_altname": { "kms": "kmip", "type": "null", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": null }, "kmip_regex_rand_auto_id": { "kms": "kmip", "type": "regex", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AijCDwAAAAAAAAAAAAAAAAALi8avMfpxSlDsSTqdxO8O2B1M79gOElyUIdXySQo7mvgHlf4oHQ7r94lL9dnsA2t/jmUmBKoGypaUQUSQE+9x+A==", "subType": "06" } } }, "kmip_regex_rand_auto_altname": { "kms": "kmip", "type": "regex", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AijCDwAAAAAAAAAAAAAAAAALfHerZ/KolaBrb5qi3SpeNVW+i/nh5mkcdtQg5f1pHePr68KryHucM/XDAzbMqrPlag2/41STGYdJqzYO7Mbppg==", "subType": "06" } } }, "kmip_regex_rand_explicit_id": { "kms": "kmip", "type": "regex", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AijCDwAAAAAAAAAAAAAAAAALOhKDVAN5cuDyB1EuRFWgKKt0wGJ63E5pPY8Tq2TXMNgCxUUc5O+TE+Ux4ls/uMyOBA3gPzND0CZKiru0i7ACUQ==", "subType": "06" } } }, "kmip_regex_rand_explicit_altname": { "kms": "kmip", "type": "regex", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AijCDwAAAAAAAAAAAAAAAAALK3Hg8xX9gX+d3vKh7aosRP9CS2CIFeG9sapZv3OAPv1eWjY62Cp/G16kJ0BQt33RYD+DzD3gWupfUSyNZR0gng==", "subType": "06" } } }, "kmip_regex_det_auto_id": { "kms": "kmip", "type": "regex", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ASjCDwAAAAAAAAAAAAAAAAALaQXA8rItT7ELVxO8XtAWdHuiXFFPmnMhS5PMrUy/6mRtbq4fvU9dascW7ozonKOh8ad6+MIT7B/STv9dVBF4Kw==", "subType": "06" } } }, "kmip_regex_det_explicit_id": { "kms": "kmip", "type": "regex", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ASjCDwAAAAAAAAAAAAAAAAALaQXA8rItT7ELVxO8XtAWdHuiXFFPmnMhS5PMrUy/6mRtbq4fvU9dascW7ozonKOh8ad6+MIT7B/STv9dVBF4Kw==", "subType": "06" } } }, "kmip_regex_det_explicit_altname": { "kms": "kmip", "type": "regex", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "ASjCDwAAAAAAAAAAAAAAAAALaQXA8rItT7ELVxO8XtAWdHuiXFFPmnMhS5PMrUy/6mRtbq4fvU9dascW7ozonKOh8ad6+MIT7B/STv9dVBF4Kw==", "subType": "06" } } }, "kmip_dbPointer_rand_auto_id": { "kms": "kmip", "type": "dbPointer", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AijCDwAAAAAAAAAAAAAAAAAMoGkfmmUWTI+0aW7jVyCJ5Dgru1SCXBUmJSRzDL0D57pNruQ+79tVVcI6Uz5j87DhZFxShHbPjj583vLOOBNM3WGzZCpqH3serhHTWvXK+NM=", "subType": "06" } } }, "kmip_dbPointer_rand_auto_altname": { "kms": "kmip", 
"type": "dbPointer", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AijCDwAAAAAAAAAAAAAAAAAMwu1WaRhhv43xgxLNxuenbND9M6mxGtCs9o4J5+yfL95XNB9Daie3RcLlyngz0pncBie6IqjhTycXsxTLQ94Jdg6m5GD5cU541LYKvhbv5f4=", "subType": "06" } } }, "kmip_dbPointer_rand_explicit_id": { "kms": "kmip", "type": "dbPointer", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AijCDwAAAAAAAAAAAAAAAAAM+CIoCAisUwhhJtWQLolxQGQWafniwYyvaJQHmJC94Uwbf1gPfhMR42v2VtrmIVP0J0BaP/xf0cco2/qWRdKGZpgkK2CK6M972NtnZ/2x03A=", "subType": "06" } } }, "kmip_dbPointer_rand_explicit_altname": { "kms": "kmip", "type": "dbPointer", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AijCDwAAAAAAAAAAAAAAAAAMjbeE9+EaJYjGfeAuxsV8teOdsW8bfnlkvji/tE11Zq89UMGx+oUsZzeLjUgVZ5nxsZKCZjEAq+DPnwFVC+MgqNeqWL7fRChODFlPGH2ZC+8=", "subType": "06" } } }, "kmip_dbPointer_det_auto_id": { "kms": "kmip", "type": "dbPointer", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ASjCDwAAAAAAAAAAAAAAAAAM5B+fjbjYCZzCYUu4N/pJI3srCCXN+OCCHweeweqmpIEmB7yw87bQRIMGtCm6HuekcZ5J5q+nY5AQb0du/wh1YIoOrC3u4w7ZcLHkDmuAJPg=", "subType": "06" } } }, "kmip_dbPointer_det_explicit_id": { "kms": "kmip", "type": "dbPointer", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ASjCDwAAAAAAAAAAAAAAAAAM5B+fjbjYCZzCYUu4N/pJI3srCCXN+OCCHweeweqmpIEmB7yw87bQRIMGtCm6HuekcZ5J5q+nY5AQb0du/wh1YIoOrC3u4w7ZcLHkDmuAJPg=", "subType": "06" } } }, "kmip_dbPointer_det_explicit_altname": { "kms": "kmip", "type": "dbPointer", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "ASjCDwAAAAAAAAAAAAAAAAAM5B+fjbjYCZzCYUu4N/pJI3srCCXN+OCCHweeweqmpIEmB7yw87bQRIMGtCm6HuekcZ5J5q+nY5AQb0du/wh1YIoOrC3u4w7ZcLHkDmuAJPg=", "subType": "06" } } }, "kmip_javascript_rand_auto_id": { "kms": "kmip", "type": "javascript", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AijCDwAAAAAAAAAAAAAAAAANuzlkWs/c8xArrAxPgYuCeShjj1zCfIMHOTPohspcyNofo9iY3P5MlhEOprZDiS8dBFg6EB7fZDzDdczx6VCN2A==", "subType": "06" } } }, "kmip_javascript_rand_auto_altname": { "kms": "kmip", "type": "javascript", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AijCDwAAAAAAAAAAAAAAAAANwJ72y7UqCBJh1NwVRiE3vU1ex7FMv/X5YWCMuO9MHPMo4g1V5eaO4KfOr+K8+9NtkflgMpeDkvwP92rfR5ud5Q==", "subType": "06" } } }, "kmip_javascript_rand_explicit_id": { "kms": "kmip", "type": "javascript", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AijCDwAAAAAAAAAAAAAAAAANj5q+888itRnLsw9PNGsBLhgqpvem5IJBOE2292r6zwjVueoEK/2I2PesRnn0esnkwdia1ADoMkcLUegwcFRkWQ==", "subType": "06" } } }, "kmip_javascript_rand_explicit_altname": { "kms": "kmip", "type": "javascript", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AijCDwAAAAAAAAAAAAAAAAANnvbnmApys7OIe8LGTsZKDG1F1G1SI/rfZVmF6q1fq5U7feYPp1ejb2t2S2+v7LfcOHytsQWGcYuWCDcl+vosvQ==", "subType": "06" } } }, "kmip_javascript_det_auto_id": { "kms": "kmip", "type": "javascript", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": 
"ASjCDwAAAAAAAAAAAAAAAAANOR9R/Da8j5iVxllLiGFlv4U/bVn/PyN9/5WeGJkGJeE/j/osKrKx6IL1igI0YVI+pKKzsINqJGIv+bJX0s7MNw==", "subType": "06" } } }, "kmip_javascript_det_explicit_id": { "kms": "kmip", "type": "javascript", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ASjCDwAAAAAAAAAAAAAAAAANOR9R/Da8j5iVxllLiGFlv4U/bVn/PyN9/5WeGJkGJeE/j/osKrKx6IL1igI0YVI+pKKzsINqJGIv+bJX0s7MNw==", "subType": "06" } } }, "kmip_javascript_det_explicit_altname": { "kms": "kmip", "type": "javascript", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "ASjCDwAAAAAAAAAAAAAAAAANOR9R/Da8j5iVxllLiGFlv4U/bVn/PyN9/5WeGJkGJeE/j/osKrKx6IL1igI0YVI+pKKzsINqJGIv+bJX0s7MNw==", "subType": "06" } } }, "kmip_symbol_rand_auto_id": { "kms": "kmip", "type": "symbol", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AijCDwAAAAAAAAAAAAAAAAAOe+vXpJSkmBM3WkxZrn4ea9/C6iNyMXWUzkQIzIYlnbkyu8od8nfOdhobUhoFxcKnvdaxN1s5NhJ1FA97RN/upGYN+AI/7cTCElmFSpdSvkI=", "subType": "06" } } }, "kmip_symbol_rand_auto_altname": { "kms": "kmip", "type": "symbol", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AijCDwAAAAAAAAAAAAAAAAAOPpCgK6Hc/M2elOJkwIU9J7PZa+h1chody2yvfDu/UlB6T5sxnEZ6aEY/ISNLhJlhsRzuApSgFOmnrcG6Eg9VnSKin2yK0ll+VFxQEDHAcSA=", "subType": "06" } } }, "kmip_symbol_rand_explicit_id": { "kms": "kmip", "type": "symbol", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AijCDwAAAAAAAAAAAAAAAAAOVoHX9GaOn71L5D9TpZmmxkx/asr0FHCLG5ZgLLA04yIhZHsDjt2DiVGGO/Mf4KwvoBn7Cf08qMhW7rQh2LgvvSLBO3zbw5l+MZ/bSn+Jylo=", "subType": "06" } } }, "kmip_symbol_rand_explicit_altname": { "kms": "kmip", "type": "symbol", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AijCDwAAAAAAAAAAAAAAAAAOPobmcO/I4QObtCUEmGWpSCJ6tlYyhbO59q78LZBucSNl7DSkf/13tOJ9t+WKXACcMKVMmfPoFsgHbVj1nKWULBT07n1OWWDTZkuMD6C2+Fc=", "subType": "06" } } }, "kmip_symbol_det_auto_id": { "kms": "kmip", "type": "symbol", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ASjCDwAAAAAAAAAAAAAAAAAOPpwX4mafoQJYHuzYfbKW1JunpjpB7Nd2slTC3n8Hsas9wQYf9VkModQhe5M4wZHOIXpehaODRcjKKfKRmpnNBOURSLm/ORJvy+UxtSLsnqo=", "subType": "06" } } }, "kmip_symbol_det_explicit_id": { "kms": "kmip", "type": "symbol", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ASjCDwAAAAAAAAAAAAAAAAAOPpwX4mafoQJYHuzYfbKW1JunpjpB7Nd2slTC3n8Hsas9wQYf9VkModQhe5M4wZHOIXpehaODRcjKKfKRmpnNBOURSLm/ORJvy+UxtSLsnqo=", "subType": "06" } } }, "kmip_symbol_det_explicit_altname": { "kms": "kmip", "type": "symbol", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "ASjCDwAAAAAAAAAAAAAAAAAOPpwX4mafoQJYHuzYfbKW1JunpjpB7Nd2slTC3n8Hsas9wQYf9VkModQhe5M4wZHOIXpehaODRcjKKfKRmpnNBOURSLm/ORJvy+UxtSLsnqo=", "subType": "06" } } }, "kmip_javascriptWithScope_rand_auto_id": { "kms": "kmip", "type": "javascriptWithScope", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AijCDwAAAAAAAAAAAAAAAAAPW2VMMm+EvsYpVtJQhsxgxgvV35kr9nxqKxP2qqIOAOQ58R/1oyYScFkNwB/tw0A1/zdvhoo+ERa7c0tjLIojFrosXhX2N/8Z4VnbZruz0Nk=", "subType": "06" } } }, "kmip_javascriptWithScope_rand_auto_altname": { "kms": "kmip", 
"type": "javascriptWithScope", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AijCDwAAAAAAAAAAAAAAAAAPjPq9BQR4EwG/CD+RthOJY04m99LCl/shY6HnaU/QL627kN1dbBAG5vs+MXfa+glg8waVTNgB94vm3j72FMV1ZOKvbl4faWF1Rl2EOpOlR9U=", "subType": "06" } } }, "kmip_javascriptWithScope_rand_explicit_id": { "kms": "kmip", "type": "javascriptWithScope", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AijCDwAAAAAAAAAAAAAAAAAPtqebrCAidKzBMvp3B5/vBeetqeCoMKS+vo+hLAYooXrnBunWxwRHpr45XYUvroG3aqOMkLtVZSgw8sO6Y/3z1viO2G0sGQW1ZMoW0/PX5Uw=", "subType": "06" } } }, "kmip_javascriptWithScope_rand_explicit_altname": { "kms": "kmip", "type": "javascriptWithScope", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AijCDwAAAAAAAAAAAAAAAAAPtkJwXKlq8Fx1f1+9HFofM4uKi6lHQRFRyiOyUFJYxxZY1LR/2WXXTqWz3MWtrcJFCB+QSVOb1N/ieC7AZUboPgIuPJISM3Hu5VU2x/Isbdc=", "subType": "06" } } }, "kmip_javascriptWithScope_det_explicit_id": { "kms": "kmip", "type": "javascriptWithScope", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$code": "x=1", "$scope": {} } }, "kmip_javascriptWithScope_det_explicit_altname": { "kms": "kmip", "type": "javascriptWithScope", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$code": "x=1", "$scope": {} } }, "kmip_int_rand_auto_id": { "kms": "kmip", "type": "int", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AijCDwAAAAAAAAAAAAAAAAAQ50kE7Tby9od2OsmIGZhp9k/mj4vy/YdnmF6YsSPxihbjV1vXGMraI/nGCr+0H1riwzq3m4sCT7aPw2VgiuwKMA==", "subType": "06" } } }, "kmip_int_rand_auto_altname": { "kms": "kmip", "type": "int", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AijCDwAAAAAAAAAAAAAAAAAQkNL14OSMX/bJbsLtB/UumRoat6QOY7fvwZxRrkXTS3VJVHigthI1cUX7Is/uUsY8oHOfk/ZuHklQkifmfdcklQ==", "subType": "06" } } }, "kmip_int_rand_explicit_id": { "kms": "kmip", "type": "int", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AijCDwAAAAAAAAAAAAAAAAAQtN2gNVU9Itoj+vgcK/4jEB5baSUH+Qz2WqTY7m0XaA3bPWGFCiWY4Sdw+qovednrSSSbC+azWi1QYclFRraldQ==", "subType": "06" } } }, "kmip_int_rand_explicit_altname": { "kms": "kmip", "type": "int", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AijCDwAAAAAAAAAAAAAAAAAQk6uBqwXXFF9zEM4bc124goI3pBy2Jdi8Cd0ycKkjXrPG7GVCUm2UMbO+zEzYODeVo35N11g2yMXcv9RVgjWtNA==", "subType": "06" } } }, "kmip_int_det_auto_id": { "kms": "kmip", "type": "int", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ASjCDwAAAAAAAAAAAAAAAAAQgrkPEf+RBZMn/J7HZObqEfus8icYls6ecaUrlabI6v1ALgxLuv23WSIfTr6mqpQCounqdA14DWS/Wl3kSkVC0w==", "subType": "06" } } }, "kmip_int_det_explicit_id": { "kms": "kmip", "type": "int", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ASjCDwAAAAAAAAAAAAAAAAAQgrkPEf+RBZMn/J7HZObqEfus8icYls6ecaUrlabI6v1ALgxLuv23WSIfTr6mqpQCounqdA14DWS/Wl3kSkVC0w==", "subType": "06" } } }, "kmip_int_det_explicit_altname": { "kms": "kmip", "type": "int", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": 
"ASjCDwAAAAAAAAAAAAAAAAAQgrkPEf+RBZMn/J7HZObqEfus8icYls6ecaUrlabI6v1ALgxLuv23WSIfTr6mqpQCounqdA14DWS/Wl3kSkVC0w==", "subType": "06" } } }, "kmip_timestamp_rand_auto_id": { "kms": "kmip", "type": "timestamp", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AijCDwAAAAAAAAAAAAAAAAAR2Cu3o2e/u5o69MndeZPJU5ngVA1G2MNYn00t+up/GlmaUC1ni1CVl0ZR0EVZ0gCDUrfxwPISPib8y23tNjbsog==", "subType": "06" } } }, "kmip_timestamp_rand_auto_altname": { "kms": "kmip", "type": "timestamp", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AijCDwAAAAAAAAAAAAAAAAARgi8stgSQwqnN4Ws2ZBILOREsjreZcS1MBerL7dbGLVfzW99tqECglhGokkrE0aY69L0xMgcAUIaFRN4GanQAPg==", "subType": "06" } } }, "kmip_timestamp_rand_explicit_id": { "kms": "kmip", "type": "timestamp", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AijCDwAAAAAAAAAAAAAAAAARPxEEI8L5Q3Jybu88BLdf31T3uYEUbijgSlKlkTt141RYrlE8nxtiYU5/5H9GXBis0Qq1s2C+MauD2h/cNijTCA==", "subType": "06" } } }, "kmip_timestamp_rand_explicit_altname": { "kms": "kmip", "type": "timestamp", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AijCDwAAAAAAAAAAAAAAAAARh/QaU1dnGbii4LtXCpT5o6vencc8E2fzarjJFbSEd0ixW/UV1ppZdvD729d0umkaIwIEVA4q+XVvHfl/ckKPFg==", "subType": "06" } } }, "kmip_timestamp_det_auto_id": { "kms": "kmip", "type": "timestamp", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ASjCDwAAAAAAAAAAAAAAAAARqdpLb72mmzb75QBrE+ATMfS5LLqzAD/1g5ScT8zfgh0IHsZZBWCJlSVRNC12Sgr3zdXHMtYp8C3OZT6/tPkQGg==", "subType": "06" } } }, "kmip_timestamp_det_explicit_id": { "kms": "kmip", "type": "timestamp", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ASjCDwAAAAAAAAAAAAAAAAARqdpLb72mmzb75QBrE+ATMfS5LLqzAD/1g5ScT8zfgh0IHsZZBWCJlSVRNC12Sgr3zdXHMtYp8C3OZT6/tPkQGg==", "subType": "06" } } }, "kmip_timestamp_det_explicit_altname": { "kms": "kmip", "type": "timestamp", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "ASjCDwAAAAAAAAAAAAAAAAARqdpLb72mmzb75QBrE+ATMfS5LLqzAD/1g5ScT8zfgh0IHsZZBWCJlSVRNC12Sgr3zdXHMtYp8C3OZT6/tPkQGg==", "subType": "06" } } }, "kmip_long_rand_auto_id": { "kms": "kmip", "type": "long", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AijCDwAAAAAAAAAAAAAAAAASVv+ClXkh9spIaXWJYRV/o8UZjG+WWWrNpIjZ9LQn2bXakrKJ3REvdkrzGuxASmBhBYTplEyvxVCJwXuWRAGGYw==", "subType": "06" } } }, "kmip_long_rand_auto_altname": { "kms": "kmip", "type": "long", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AijCDwAAAAAAAAAAAAAAAAASeAz/dK+Gc4/jx3W07B2rNFvQ0LoyCllFRvRVGu1Xf1NByc4cRZLOMzlr99syz/fifF6WY30bOi5Pani9QtFuGg==", "subType": "06" } } }, "kmip_long_rand_explicit_id": { "kms": "kmip", "type": "long", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AijCDwAAAAAAAAAAAAAAAAASP1HD9uoDlwTldaznKxW71JUQcLsa4/cUWzeTnelQwdpohCbZsM8fBZBqgwwTWnjpYY/LBUipC6yhwLKfUXBoBQ==", "subType": "06" } } }, "kmip_long_rand_explicit_altname": { "kms": "kmip", "type": "long", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": 
"AijCDwAAAAAAAAAAAAAAAAASnGPH77bS/ETB1hn+VTvsBrxEvIHA6EAb8Z2SEz6BHt7SVeI+I7DLERvRVpV5kNJFcKgXDrvRmD+Et0rhSmk9sw==", "subType": "06" } } }, "kmip_long_det_auto_id": { "kms": "kmip", "type": "long", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ASjCDwAAAAAAAAAAAAAAAAAS+zKmtijSTPOEVlpwmaeMIOuzVNuZpV4Jw9zP8Yqa1xYtlItXDozqdibacRaA74KU49KNySdR1T7fxwxa2OOTrQ==", "subType": "06" } } }, "kmip_long_det_explicit_id": { "kms": "kmip", "type": "long", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "ASjCDwAAAAAAAAAAAAAAAAAS+zKmtijSTPOEVlpwmaeMIOuzVNuZpV4Jw9zP8Yqa1xYtlItXDozqdibacRaA74KU49KNySdR1T7fxwxa2OOTrQ==", "subType": "06" } } }, "kmip_long_det_explicit_altname": { "kms": "kmip", "type": "long", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "ASjCDwAAAAAAAAAAAAAAAAAS+zKmtijSTPOEVlpwmaeMIOuzVNuZpV4Jw9zP8Yqa1xYtlItXDozqdibacRaA74KU49KNySdR1T7fxwxa2OOTrQ==", "subType": "06" } } }, "kmip_decimal_rand_auto_id": { "kms": "kmip", "type": "decimal", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AijCDwAAAAAAAAAAAAAAAAATu/BbCc5Ti9SBlMR2B8zj3Q1yQ16Uob+10LWaT5QKS192IcnBGy4wmmNkIsTys060xUby9KKQF80dVPnjYfqJwEXCe/pVaPQZftE0DolKv78=", "subType": "06" } } }, "kmip_decimal_rand_auto_altname": { "kms": "kmip", "type": "decimal", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AijCDwAAAAAAAAAAAAAAAAATpq6/dtxq2ZUZHrK10aB0YjjPalEaXYcyAyRZjfXWAYCLZdT9sIybjX3Axjxisim+VSHx0QU7oXkKUfcbLgHyjUXj8g9059FHxKFkUsNv4Z8=", "subType": "06" } } }, "kmip_decimal_rand_explicit_id": { "kms": "kmip", "type": "decimal", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AijCDwAAAAAAAAAAAAAAAAATS++9KcfM7uiShZYxRpFPrBJquKv7dyvFRTjnxs6aaaPo0fiqpv6bco/cMLsldEVpWDEA/Tc2HtSXYPp4UJsMfASyBjoxCloL5SaRWyD9Ye8=", "subType": "06" } } }, "kmip_decimal_rand_explicit_altname": { "kms": "kmip", "type": "decimal", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AijCDwAAAAAAAAAAAAAAAAATREcETS5KoAGyj/P45owPrdFfy5ng8Z1ND+F+780lLddOyPeDnIsa7yg6uvhTZ65mHfGLvKcFocclYenq/AX1dY4xdjLRg/AfT088A27ORUA=", "subType": "06" } } }, "kmip_decimal_det_explicit_id": { "kms": "kmip", "type": "decimal", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$numberDecimal": "1.234" } }, "kmip_decimal_det_explicit_altname": { "kms": "kmip", "type": "decimal", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$numberDecimal": "1.234" } }, "kmip_minKey_rand_explicit_id": { "kms": "kmip", "type": "minKey", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$minKey": 1 } }, "kmip_minKey_rand_explicit_altname": { "kms": "kmip", "type": "minKey", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$minKey": 1 } }, "kmip_minKey_det_explicit_id": { "kms": "kmip", "type": "minKey", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$minKey": 1 } }, "kmip_minKey_det_explicit_altname": { "kms": "kmip", "type": "minKey", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$minKey": 1 } }, "kmip_maxKey_rand_explicit_id": { "kms": "kmip", 
"type": "maxKey", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$maxKey": 1 } }, "kmip_maxKey_rand_explicit_altname": { "kms": "kmip", "type": "maxKey", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$maxKey": 1 } }, "kmip_maxKey_det_explicit_id": { "kms": "kmip", "type": "maxKey", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$maxKey": 1 } }, "kmip_maxKey_det_explicit_altname": { "kms": "kmip", "type": "maxKey", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$maxKey": 1 } } }mongo-ruby-driver-2.21.3/spec/support/crypt/corpus/corpus-key-aws.json000066400000000000000000000020231505113246500260730ustar00rootroot00000000000000{ "status": { "$numberInt": "1" }, "_id": { "$binary": { "base64": "AWSAAAAAAAAAAAAAAAAAAA==", "subType": "04" } }, "masterKey": { "region": "us-east-1", "key": "arn:aws:kms:us-east-1:579766882180:key/89fcc2c4-08b0-4bd9-9f25-e30687b580d0", "provider": "aws" }, "updateDate": { "$date": { "$numberLong": "1557827033449" } }, "keyMaterial": { "$binary": { "base64": "AQICAHhQNmWG2CzOm1dq3kWLM+iDUZhEqnhJwH9wZVpuZ94A8gEqnsxXlR51T5EbEVezUqqKAAAAwjCBvwYJKoZIhvcNAQcGoIGxMIGuAgEAMIGoBgkqhkiG9w0BBwEwHgYJYIZIAWUDBAEuMBEEDHa4jo6yp0Z18KgbUgIBEIB74sKxWtV8/YHje5lv5THTl0HIbhSwM6EqRlmBiFFatmEWaeMk4tO4xBX65eq670I5TWPSLMzpp8ncGHMmvHqRajNBnmFtbYxN3E3/WjxmdbOOe+OXpnGJPcGsftc7cB2shRfA4lICPnE26+oVNXT6p0Lo20nY5XC7jyCO", "subType": "00" } }, "creationDate": { "$date": { "$numberLong": "1557827033449" } }, "keyAltNames": ["aws"] }mongo-ruby-driver-2.21.3/spec/support/crypt/corpus/corpus-key-azure.json000066400000000000000000000017751505113246500264440ustar00rootroot00000000000000{ "_id": { "$binary": { "base64": "AZUREAAAAAAAAAAAAAAAAA==", "subType": "04" } }, "keyMaterial": { "$binary": { "base64": "n+HWZ0ZSVOYA3cvQgP7inN4JSXfOH85IngmeQxRpQHjCCcqT3IFqEWNlrsVHiz3AELimHhX4HKqOLWMUeSIT6emUDDoQX9BAv8DR1+E1w4nGs/NyEneac78EYFkK3JysrFDOgl2ypCCTKAypkn9CkAx1if4cfgQE93LW4kczcyHdGiH36CIxrCDGv1UzAvERN5Qa47DVwsM6a+hWsF2AAAJVnF0wYLLJU07TuRHdMrrphPWXZsFgyV+lRqJ7DDpReKNO8nMPLV/mHqHBHGPGQiRdb9NoJo8CvokGz4+KE8oLwzKf6V24dtwZmRkrsDV4iOhvROAzz+Euo1ypSkL3mw==", "subType": "00" } }, "creationDate": { "$date": { "$numberLong": "1601573901680" } }, "updateDate": { "$date": { "$numberLong": "1601573901680" } }, "status": { "$numberInt": "0" }, "masterKey": { "provider": "azure", "keyVaultEndpoint": "key-vault-csfle.vault.azure.net", "keyName": "key-name-csfle" }, "keyAltNames": ["azure"] }mongo-ruby-driver-2.21.3/spec/support/crypt/corpus/corpus-key-gcp.json000066400000000000000000000016751505113246500260660ustar00rootroot00000000000000{ "_id": { "$binary": { "base64": "GCPAAAAAAAAAAAAAAAAAAA==", "subType": "04" } }, "keyMaterial": { "$binary": { "base64": "CiQAIgLj0WyktnB4dfYHo5SLZ41K4ASQrjJUaSzl5vvVH0G12G0SiQEAjlV8XPlbnHDEDFbdTO4QIe8ER2/172U1ouLazG0ysDtFFIlSvWX5ZnZUrRMmp/R2aJkzLXEt/zf8Mn4Lfm+itnjgo5R9K4pmPNvvPKNZX5C16lrPT+aA+rd+zXFSmlMg3i5jnxvTdLHhg3G7Q/Uv1ZIJskKt95bzLoe0tUVzRWMYXLIEcohnQg==", "subType": "00" } }, "creationDate": { "$date": { "$numberLong": "1601574333107" } }, "updateDate": { "$date": { "$numberLong": "1601574333107" } }, "status": { "$numberInt": "0" }, "masterKey": { "provider": "gcp", "projectId": "devprod-drivers", "location": "global", "keyRing": "key-ring-csfle", "keyName": "key-name-csfle" }, "keyAltNames": ["gcp"] 
}

mongo-ruby-driver-2.21.3/spec/support/crypt/corpus/corpus-key-kmip.json

{ "_id": { "$binary": { "base64": "KMIPAAAAAAAAAAAAAAAAAA==", "subType": "04" } }, "keyMaterial": { "$binary": { "base64": "eUYDyB0HuWb+lQgUwO+6qJQyTTDTY2gp9FbemL7ZFo0pvr0x6rm6Ff9OVUTGH6HyMKipaeHdiIJU1dzsLwvqKvi7Beh+U4iaIWX/K0oEg1GOsJc0+Z/in8gNHbGUYLmycHViM3LES3kdt7FdFSUl5rEBHrM71yoNEXImz17QJWMGOuT4x6yoi2pvnaRJwfrI4DjpmnnTrDMac92jgZehbg==", "subType": "00" } }, "creationDate": { "$date": { "$numberLong": "1634220190041" } }, "updateDate": { "$date": { "$numberLong": "1634220190041" } }, "status": { "$numberInt": "0" }, "masterKey": { "provider": "kmip", "keyId": "1" }, "keyAltNames": ["kmip"] }

mongo-ruby-driver-2.21.3/spec/support/crypt/corpus/corpus-key-local.json

{ "status": { "$numberInt": "1" }, "_id": { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } }, "masterKey": { "provider": "local" }, "updateDate": { "$date": { "$numberLong": "1557827033449" } }, "keyMaterial": { "$binary": { "base64": "Ce9HSz/HKKGkIt4uyy+jDuKGA+rLC2cycykMo6vc8jXxqa1UVDYHWq1r+vZKbnnSRBfB981akzRKZCFpC05CTyFqDhXv6OnMjpG97OZEREGIsHEYiJkBW0jJJvfLLgeLsEpBzsro9FztGGXASxyxFRZFhXvHxyiLOKrdWfs7X1O/iK3pEoHMx6uSNSfUOgbebLfIqW7TO++iQS5g1xovXA==", "subType": "00" } }, "creationDate": { "$date": { "$numberLong": "1557827033449" } }, "keyAltNames": [ "local" ] }

mongo-ruby-driver-2.21.3/spec/support/crypt/corpus/corpus-schema.json

{ "bsonType": "object", "properties": { "aws_double_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "AWSAAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "double" } } } }, "aws_double_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_aws", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "double" } } } }, "aws_double_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "aws_double_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "aws_string_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "AWSAAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "string" } } } }, "aws_string_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_aws", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "string" } } } }, "aws_string_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "aws_string_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "aws_string_det_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "AWSAAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic", "bsonType": "string" } } } }, "aws_string_det_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "aws_string_det_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData"
} } }, "aws_object_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "AWSAAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "object" } } } }, "aws_object_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_aws", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "object" } } } }, "aws_object_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "aws_object_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "aws_array_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "AWSAAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "array" } } } }, "aws_array_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_aws", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "array" } } } }, "aws_array_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "aws_array_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "aws_binData=00_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "AWSAAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "binData" } } } }, "aws_binData=00_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_aws", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "binData" } } } }, "aws_binData=00_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "aws_binData=00_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "aws_binData=00_det_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "AWSAAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic", "bsonType": "binData" } } } }, "aws_binData=00_det_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "aws_binData=00_det_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "aws_binData=04_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "AWSAAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "binData" } } } }, "aws_binData=04_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_aws", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "binData" } } } }, "aws_binData=04_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "aws_binData=04_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "aws_binData=04_det_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "AWSAAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic", 
"bsonType": "binData" } } } }, "aws_binData=04_det_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "aws_binData=04_det_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "aws_objectId_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "AWSAAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "objectId" } } } }, "aws_objectId_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_aws", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "objectId" } } } }, "aws_objectId_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "aws_objectId_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "aws_objectId_det_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "AWSAAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic", "bsonType": "objectId" } } } }, "aws_objectId_det_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "aws_objectId_det_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "aws_bool_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "AWSAAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "bool" } } } }, "aws_bool_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_aws", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "bool" } } } }, "aws_bool_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "aws_bool_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "aws_date_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "AWSAAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "date" } } } }, "aws_date_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_aws", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "date" } } } }, "aws_date_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "aws_date_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "aws_date_det_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "AWSAAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic", "bsonType": "date" } } } }, "aws_date_det_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "aws_date_det_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "aws_regex_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "AWSAAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": 
"regex" } } } }, "aws_regex_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_aws", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "regex" } } } }, "aws_regex_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "aws_regex_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "aws_regex_det_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "AWSAAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic", "bsonType": "regex" } } } }, "aws_regex_det_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "aws_regex_det_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "aws_dbPointer_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "AWSAAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "dbPointer" } } } }, "aws_dbPointer_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_aws", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "dbPointer" } } } }, "aws_dbPointer_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "aws_dbPointer_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "aws_dbPointer_det_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "AWSAAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic", "bsonType": "dbPointer" } } } }, "aws_dbPointer_det_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "aws_dbPointer_det_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "aws_javascript_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "AWSAAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "javascript" } } } }, "aws_javascript_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_aws", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "javascript" } } } }, "aws_javascript_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "aws_javascript_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "aws_javascript_det_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "AWSAAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic", "bsonType": "javascript" } } } }, "aws_javascript_det_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "aws_javascript_det_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "aws_symbol_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "AWSAAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], 
"algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "symbol" } } } }, "aws_symbol_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_aws", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "symbol" } } } }, "aws_symbol_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "aws_symbol_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "aws_symbol_det_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "AWSAAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic", "bsonType": "symbol" } } } }, "aws_symbol_det_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "aws_symbol_det_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "aws_javascriptWithScope_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "AWSAAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "javascriptWithScope" } } } }, "aws_javascriptWithScope_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_aws", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "javascriptWithScope" } } } }, "aws_javascriptWithScope_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "aws_javascriptWithScope_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "aws_int_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "AWSAAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "int" } } } }, "aws_int_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_aws", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "int" } } } }, "aws_int_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "aws_int_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "aws_int_det_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "AWSAAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic", "bsonType": "int" } } } }, "aws_int_det_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "aws_int_det_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "aws_timestamp_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "AWSAAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "timestamp" } } } }, "aws_timestamp_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_aws", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "timestamp" } } } }, "aws_timestamp_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, 
"aws_timestamp_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "aws_timestamp_det_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "AWSAAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic", "bsonType": "timestamp" } } } }, "aws_timestamp_det_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "aws_timestamp_det_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "aws_long_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "AWSAAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "long" } } } }, "aws_long_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_aws", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "long" } } } }, "aws_long_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "aws_long_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "aws_long_det_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "AWSAAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic", "bsonType": "long" } } } }, "aws_long_det_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "aws_long_det_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "aws_decimal_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "AWSAAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "decimal" } } } }, "aws_decimal_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_aws", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "decimal" } } } }, "aws_decimal_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "aws_decimal_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "local_double_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "double" } } } }, "local_double_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_local", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "double" } } } }, "local_double_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "local_double_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "local_string_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "string" } } } }, "local_string_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { 
"keyId": "/altname_local", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "string" } } } }, "local_string_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "local_string_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "local_string_det_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic", "bsonType": "string" } } } }, "local_string_det_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "local_string_det_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "local_object_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "object" } } } }, "local_object_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_local", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "object" } } } }, "local_object_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "local_object_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "local_array_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "array" } } } }, "local_array_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_local", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "array" } } } }, "local_array_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "local_array_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "local_binData=00_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "binData" } } } }, "local_binData=00_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_local", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "binData" } } } }, "local_binData=00_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "local_binData=00_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "local_binData=00_det_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic", "bsonType": "binData" } } } }, "local_binData=00_det_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "local_binData=00_det_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "local_binData=04_rand_auto_id": { "bsonType": "object", "properties": { "value": { 
"encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "binData" } } } }, "local_binData=04_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_local", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "binData" } } } }, "local_binData=04_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "local_binData=04_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "local_binData=04_det_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic", "bsonType": "binData" } } } }, "local_binData=04_det_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "local_binData=04_det_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "local_objectId_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "objectId" } } } }, "local_objectId_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_local", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "objectId" } } } }, "local_objectId_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "local_objectId_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "local_objectId_det_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic", "bsonType": "objectId" } } } }, "local_objectId_det_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "local_objectId_det_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "local_bool_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "bool" } } } }, "local_bool_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_local", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "bool" } } } }, "local_bool_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "local_bool_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "local_date_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "date" } } } }, "local_date_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_local", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "date" } } } }, "local_date_rand_explicit_id": { 
"bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "local_date_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "local_date_det_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic", "bsonType": "date" } } } }, "local_date_det_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "local_date_det_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "local_regex_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "regex" } } } }, "local_regex_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_local", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "regex" } } } }, "local_regex_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "local_regex_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "local_regex_det_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic", "bsonType": "regex" } } } }, "local_regex_det_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "local_regex_det_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "local_dbPointer_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "dbPointer" } } } }, "local_dbPointer_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_local", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "dbPointer" } } } }, "local_dbPointer_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "local_dbPointer_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "local_dbPointer_det_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic", "bsonType": "dbPointer" } } } }, "local_dbPointer_det_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "local_dbPointer_det_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "local_javascript_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "javascript" } } } }, "local_javascript_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_local", "algorithm": 
"AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "javascript" } } } }, "local_javascript_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "local_javascript_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "local_javascript_det_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic", "bsonType": "javascript" } } } }, "local_javascript_det_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "local_javascript_det_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "local_symbol_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "symbol" } } } }, "local_symbol_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_local", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "symbol" } } } }, "local_symbol_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "local_symbol_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "local_symbol_det_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic", "bsonType": "symbol" } } } }, "local_symbol_det_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "local_symbol_det_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "local_javascriptWithScope_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "javascriptWithScope" } } } }, "local_javascriptWithScope_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_local", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "javascriptWithScope" } } } }, "local_javascriptWithScope_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "local_javascriptWithScope_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "local_int_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "int" } } } }, "local_int_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_local", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "int" } } } }, "local_int_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "local_int_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "local_int_det_auto_id": { "bsonType": "object", "properties": 
{ "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic", "bsonType": "int" } } } }, "local_int_det_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "local_int_det_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "local_timestamp_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "timestamp" } } } }, "local_timestamp_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_local", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "timestamp" } } } }, "local_timestamp_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "local_timestamp_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "local_timestamp_det_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic", "bsonType": "timestamp" } } } }, "local_timestamp_det_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "local_timestamp_det_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "local_long_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "long" } } } }, "local_long_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_local", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "long" } } } }, "local_long_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "local_long_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "local_long_det_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic", "bsonType": "long" } } } }, "local_long_det_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "local_long_det_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "local_decimal_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "decimal" } } } }, "local_decimal_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_local", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "decimal" } } } }, "local_decimal_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "local_decimal_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, 
"azure_double_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "AZUREAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "double" } } } }, "azure_double_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_azure", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "double" } } } }, "azure_double_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "azure_double_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "azure_string_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "AZUREAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "string" } } } }, "azure_string_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_azure", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "string" } } } }, "azure_string_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "azure_string_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "azure_string_det_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "AZUREAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic", "bsonType": "string" } } } }, "azure_string_det_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "azure_string_det_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "azure_object_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "AZUREAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "object" } } } }, "azure_object_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_azure", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "object" } } } }, "azure_object_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "azure_object_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "azure_array_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "AZUREAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "array" } } } }, "azure_array_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_azure", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "array" } } } }, "azure_array_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "azure_array_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "azure_binData=00_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "AZUREAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": 
"binData" } } } }, "azure_binData=00_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_azure", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "binData" } } } }, "azure_binData=00_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "azure_binData=00_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "azure_binData=00_det_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "AZUREAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic", "bsonType": "binData" } } } }, "azure_binData=00_det_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "azure_binData=00_det_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "azure_binData=04_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "AZUREAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "binData" } } } }, "azure_binData=04_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_azure", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "binData" } } } }, "azure_binData=04_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "azure_binData=04_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "azure_binData=04_det_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "AZUREAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic", "bsonType": "binData" } } } }, "azure_binData=04_det_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "azure_binData=04_det_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "azure_objectId_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "AZUREAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "objectId" } } } }, "azure_objectId_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_azure", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "objectId" } } } }, "azure_objectId_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "azure_objectId_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "azure_objectId_det_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "AZUREAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic", "bsonType": "objectId" } } } }, "azure_objectId_det_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "azure_objectId_det_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "azure_bool_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { 
"base64": "AZUREAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "bool" } } } }, "azure_bool_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_azure", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "bool" } } } }, "azure_bool_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "azure_bool_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "azure_date_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "AZUREAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "date" } } } }, "azure_date_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_azure", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "date" } } } }, "azure_date_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "azure_date_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "azure_date_det_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "AZUREAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic", "bsonType": "date" } } } }, "azure_date_det_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "azure_date_det_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "azure_regex_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "AZUREAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "regex" } } } }, "azure_regex_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_azure", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "regex" } } } }, "azure_regex_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "azure_regex_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "azure_regex_det_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "AZUREAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic", "bsonType": "regex" } } } }, "azure_regex_det_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "azure_regex_det_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "azure_dbPointer_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "AZUREAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "dbPointer" } } } }, "azure_dbPointer_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_azure", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "dbPointer" } } } }, "azure_dbPointer_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, 
"azure_dbPointer_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "azure_dbPointer_det_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "AZUREAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic", "bsonType": "dbPointer" } } } }, "azure_dbPointer_det_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "azure_dbPointer_det_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "azure_javascript_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "AZUREAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "javascript" } } } }, "azure_javascript_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_azure", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "javascript" } } } }, "azure_javascript_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "azure_javascript_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "azure_javascript_det_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "AZUREAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic", "bsonType": "javascript" } } } }, "azure_javascript_det_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "azure_javascript_det_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "azure_symbol_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "AZUREAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "symbol" } } } }, "azure_symbol_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_azure", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "symbol" } } } }, "azure_symbol_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "azure_symbol_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "azure_symbol_det_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "AZUREAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic", "bsonType": "symbol" } } } }, "azure_symbol_det_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "azure_symbol_det_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "azure_javascriptWithScope_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "AZUREAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "javascriptWithScope" } } } }, "azure_javascriptWithScope_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_azure", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", 
"bsonType": "javascriptWithScope" } } } }, "azure_javascriptWithScope_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "azure_javascriptWithScope_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "azure_int_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "AZUREAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "int" } } } }, "azure_int_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_azure", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "int" } } } }, "azure_int_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "azure_int_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "azure_int_det_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "AZUREAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic", "bsonType": "int" } } } }, "azure_int_det_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "azure_int_det_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "azure_timestamp_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "AZUREAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "timestamp" } } } }, "azure_timestamp_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_azure", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "timestamp" } } } }, "azure_timestamp_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "azure_timestamp_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "azure_timestamp_det_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "AZUREAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic", "bsonType": "timestamp" } } } }, "azure_timestamp_det_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "azure_timestamp_det_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "azure_long_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "AZUREAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "long" } } } }, "azure_long_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_azure", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "long" } } } }, "azure_long_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "azure_long_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "azure_long_det_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "AZUREAAAAAAAAAAAAAAAAA==", "subType": 
"04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic", "bsonType": "long" } } } }, "azure_long_det_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "azure_long_det_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "azure_decimal_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "AZUREAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "decimal" } } } }, "azure_decimal_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_azure", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "decimal" } } } }, "azure_decimal_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "azure_decimal_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "gcp_double_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "GCPAAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "double" } } } }, "gcp_double_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_gcp", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "double" } } } }, "gcp_double_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "gcp_double_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "gcp_string_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "GCPAAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "string" } } } }, "gcp_string_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_gcp", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "string" } } } }, "gcp_string_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "gcp_string_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "gcp_string_det_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "GCPAAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic", "bsonType": "string" } } } }, "gcp_string_det_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "gcp_string_det_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "gcp_object_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "GCPAAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "object" } } } }, "gcp_object_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_gcp", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "object" } } } }, "gcp_object_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "gcp_object_rand_explicit_altname": { "bsonType": "object", 
"properties": { "value": { "bsonType": "binData" } } }, "gcp_array_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "GCPAAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "array" } } } }, "gcp_array_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_gcp", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "array" } } } }, "gcp_array_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "gcp_array_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "gcp_binData=00_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "GCPAAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "binData" } } } }, "gcp_binData=00_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_gcp", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "binData" } } } }, "gcp_binData=00_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "gcp_binData=00_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "gcp_binData=00_det_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "GCPAAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic", "bsonType": "binData" } } } }, "gcp_binData=00_det_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "gcp_binData=00_det_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "gcp_binData=04_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "GCPAAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "binData" } } } }, "gcp_binData=04_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_gcp", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "binData" } } } }, "gcp_binData=04_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "gcp_binData=04_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "gcp_binData=04_det_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "GCPAAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic", "bsonType": "binData" } } } }, "gcp_binData=04_det_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "gcp_binData=04_det_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "gcp_objectId_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "GCPAAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "objectId" } } } }, "gcp_objectId_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": 
"/altname_gcp", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "objectId" } } } }, "gcp_objectId_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "gcp_objectId_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "gcp_objectId_det_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "GCPAAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic", "bsonType": "objectId" } } } }, "gcp_objectId_det_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "gcp_objectId_det_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "gcp_bool_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "GCPAAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "bool" } } } }, "gcp_bool_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_gcp", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "bool" } } } }, "gcp_bool_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "gcp_bool_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "gcp_date_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "GCPAAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "date" } } } }, "gcp_date_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_gcp", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "date" } } } }, "gcp_date_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "gcp_date_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "gcp_date_det_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "GCPAAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic", "bsonType": "date" } } } }, "gcp_date_det_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "gcp_date_det_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "gcp_regex_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "GCPAAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "regex" } } } }, "gcp_regex_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_gcp", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "regex" } } } }, "gcp_regex_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "gcp_regex_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "gcp_regex_det_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "GCPAAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": 
"AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic", "bsonType": "regex" } } } }, "gcp_regex_det_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "gcp_regex_det_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "gcp_dbPointer_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "GCPAAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "dbPointer" } } } }, "gcp_dbPointer_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_gcp", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "dbPointer" } } } }, "gcp_dbPointer_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "gcp_dbPointer_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "gcp_dbPointer_det_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "GCPAAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic", "bsonType": "dbPointer" } } } }, "gcp_dbPointer_det_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "gcp_dbPointer_det_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "gcp_javascript_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "GCPAAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "javascript" } } } }, "gcp_javascript_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_gcp", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "javascript" } } } }, "gcp_javascript_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "gcp_javascript_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "gcp_javascript_det_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "GCPAAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic", "bsonType": "javascript" } } } }, "gcp_javascript_det_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "gcp_javascript_det_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "gcp_symbol_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "GCPAAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "symbol" } } } }, "gcp_symbol_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_gcp", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "symbol" } } } }, "gcp_symbol_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "gcp_symbol_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "gcp_symbol_det_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": 
"GCPAAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic", "bsonType": "symbol" } } } }, "gcp_symbol_det_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "gcp_symbol_det_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "gcp_javascriptWithScope_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "GCPAAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "javascriptWithScope" } } } }, "gcp_javascriptWithScope_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_gcp", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "javascriptWithScope" } } } }, "gcp_javascriptWithScope_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "gcp_javascriptWithScope_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "gcp_int_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "GCPAAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "int" } } } }, "gcp_int_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_gcp", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "int" } } } }, "gcp_int_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "gcp_int_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "gcp_int_det_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "GCPAAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic", "bsonType": "int" } } } }, "gcp_int_det_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "gcp_int_det_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "gcp_timestamp_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "GCPAAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "timestamp" } } } }, "gcp_timestamp_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_gcp", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "timestamp" } } } }, "gcp_timestamp_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "gcp_timestamp_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "gcp_timestamp_det_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "GCPAAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic", "bsonType": "timestamp" } } } }, "gcp_timestamp_det_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "gcp_timestamp_det_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "gcp_long_rand_auto_id": { "bsonType": "object", 
"properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "GCPAAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "long" } } } }, "gcp_long_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_gcp", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "long" } } } }, "gcp_long_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "gcp_long_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "gcp_long_det_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "GCPAAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic", "bsonType": "long" } } } }, "gcp_long_det_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "gcp_long_det_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "gcp_decimal_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "GCPAAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "decimal" } } } }, "gcp_decimal_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_gcp", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "decimal" } } } }, "gcp_decimal_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "gcp_decimal_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "kmip_double_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "KMIPAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "double" } } } }, "kmip_double_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_kmip", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "double" } } } }, "kmip_double_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "kmip_double_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "kmip_string_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "KMIPAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "string" } } } }, "kmip_string_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_kmip", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "string" } } } }, "kmip_string_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "kmip_string_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "kmip_string_det_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "KMIPAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic", "bsonType": "string" } } } }, "kmip_string_det_explicit_id": { "bsonType": "object", "properties": { "value": 
{ "bsonType": "binData" } } }, "kmip_string_det_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "kmip_object_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "KMIPAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "object" } } } }, "kmip_object_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_kmip", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "object" } } } }, "kmip_object_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "kmip_object_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "kmip_array_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "KMIPAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "array" } } } }, "kmip_array_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_kmip", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "array" } } } }, "kmip_array_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "kmip_array_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "kmip_binData=00_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "KMIPAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "binData" } } } }, "kmip_binData=00_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_kmip", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "binData" } } } }, "kmip_binData=00_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "kmip_binData=00_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "kmip_binData=00_det_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "KMIPAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic", "bsonType": "binData" } } } }, "kmip_binData=00_det_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "kmip_binData=00_det_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "kmip_binData=04_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "KMIPAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "binData" } } } }, "kmip_binData=04_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_kmip", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "binData" } } } }, "kmip_binData=04_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "kmip_binData=04_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "kmip_binData=04_det_auto_id": { "bsonType": "object", "properties": { "value": { 
"encrypt": { "keyId": [ { "$binary": { "base64": "KMIPAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic", "bsonType": "binData" } } } }, "kmip_binData=04_det_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "kmip_binData=04_det_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "kmip_objectId_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "KMIPAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "objectId" } } } }, "kmip_objectId_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_kmip", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "objectId" } } } }, "kmip_objectId_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "kmip_objectId_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "kmip_objectId_det_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "KMIPAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic", "bsonType": "objectId" } } } }, "kmip_objectId_det_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "kmip_objectId_det_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "kmip_bool_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "KMIPAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "bool" } } } }, "kmip_bool_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_kmip", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "bool" } } } }, "kmip_bool_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "kmip_bool_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "kmip_date_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "KMIPAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "date" } } } }, "kmip_date_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_kmip", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "date" } } } }, "kmip_date_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "kmip_date_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "kmip_date_det_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "KMIPAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic", "bsonType": "date" } } } }, "kmip_date_det_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "kmip_date_det_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "kmip_regex_rand_auto_id": { "bsonType": "object", 
"properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "KMIPAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "regex" } } } }, "kmip_regex_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_kmip", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "regex" } } } }, "kmip_regex_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "kmip_regex_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "kmip_regex_det_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "KMIPAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic", "bsonType": "regex" } } } }, "kmip_regex_det_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "kmip_regex_det_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "kmip_dbPointer_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "KMIPAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "dbPointer" } } } }, "kmip_dbPointer_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_kmip", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "dbPointer" } } } }, "kmip_dbPointer_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "kmip_dbPointer_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "kmip_dbPointer_det_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "KMIPAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic", "bsonType": "dbPointer" } } } }, "kmip_dbPointer_det_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "kmip_dbPointer_det_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "kmip_javascript_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "KMIPAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "javascript" } } } }, "kmip_javascript_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_kmip", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "javascript" } } } }, "kmip_javascript_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "kmip_javascript_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "kmip_javascript_det_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "KMIPAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic", "bsonType": "javascript" } } } }, "kmip_javascript_det_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "kmip_javascript_det_explicit_altname": { "bsonType": "object", "properties": { "value": 
{ "bsonType": "binData" } } }, "kmip_symbol_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "KMIPAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "symbol" } } } }, "kmip_symbol_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_kmip", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "symbol" } } } }, "kmip_symbol_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "kmip_symbol_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "kmip_symbol_det_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "KMIPAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic", "bsonType": "symbol" } } } }, "kmip_symbol_det_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "kmip_symbol_det_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "kmip_javascriptWithScope_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "KMIPAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "javascriptWithScope" } } } }, "kmip_javascriptWithScope_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_kmip", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "javascriptWithScope" } } } }, "kmip_javascriptWithScope_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "kmip_javascriptWithScope_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "kmip_int_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "KMIPAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "int" } } } }, "kmip_int_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_kmip", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "int" } } } }, "kmip_int_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "kmip_int_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "kmip_int_det_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "KMIPAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic", "bsonType": "int" } } } }, "kmip_int_det_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "kmip_int_det_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "kmip_timestamp_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "KMIPAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "timestamp" } } } }, "kmip_timestamp_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_kmip", 
"algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "timestamp" } } } }, "kmip_timestamp_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "kmip_timestamp_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "kmip_timestamp_det_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "KMIPAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic", "bsonType": "timestamp" } } } }, "kmip_timestamp_det_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "kmip_timestamp_det_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "kmip_long_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "KMIPAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "long" } } } }, "kmip_long_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_kmip", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "long" } } } }, "kmip_long_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "kmip_long_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "kmip_long_det_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "KMIPAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic", "bsonType": "long" } } } }, "kmip_long_det_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "kmip_long_det_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "kmip_decimal_rand_auto_id": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": [ { "$binary": { "base64": "KMIPAAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "decimal" } } } }, "kmip_decimal_rand_auto_altname": { "bsonType": "object", "properties": { "value": { "encrypt": { "keyId": "/altname_kmip", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random", "bsonType": "decimal" } } } }, "kmip_decimal_rand_explicit_id": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } }, "kmip_decimal_rand_explicit_altname": { "bsonType": "object", "properties": { "value": { "bsonType": "binData" } } } } }mongo-ruby-driver-2.21.3/spec/support/crypt/corpus/corpus.json000066400000000000000000005230711505113246500245300ustar00rootroot00000000000000{ "_id": "client_side_encryption_corpus", "altname_aws": "aws", "altname_local": "local", "altname_azure": "azure", "altname_gcp": "gcp", "altname_kmip": "kmip", "aws_double_rand_auto_id": { "kms": "aws", "type": "double", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$numberDouble": "1.234" } }, "aws_double_rand_auto_altname": { "kms": "aws", "type": "double", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$numberDouble": "1.234" } }, "aws_double_rand_explicit_id": { "kms": "aws", "type": "double", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$numberDouble": "1.234" } }, 
"aws_double_rand_explicit_altname": { "kms": "aws", "type": "double", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$numberDouble": "1.234" } }, "aws_double_det_explicit_id": { "kms": "aws", "type": "double", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$numberDouble": "1.234" } }, "aws_double_det_explicit_altname": { "kms": "aws", "type": "double", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$numberDouble": "1.234" } }, "aws_string_rand_auto_id": { "kms": "aws", "type": "string", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": "mongodb" }, "aws_string_rand_auto_altname": { "kms": "aws", "type": "string", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": "mongodb" }, "aws_string_rand_explicit_id": { "kms": "aws", "type": "string", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": "mongodb" }, "aws_string_rand_explicit_altname": { "kms": "aws", "type": "string", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": "mongodb" }, "aws_string_det_auto_id": { "kms": "aws", "type": "string", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": "mongodb" }, "aws_string_det_explicit_id": { "kms": "aws", "type": "string", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": "mongodb" }, "aws_string_det_explicit_altname": { "kms": "aws", "type": "string", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": "mongodb" }, "aws_object_rand_auto_id": { "kms": "aws", "type": "object", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "x": { "$numberInt": "1" } } }, "aws_object_rand_auto_altname": { "kms": "aws", "type": "object", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "x": { "$numberInt": "1" } } }, "aws_object_rand_explicit_id": { "kms": "aws", "type": "object", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "x": { "$numberInt": "1" } } }, "aws_object_rand_explicit_altname": { "kms": "aws", "type": "object", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "x": { "$numberInt": "1" } } }, "aws_object_det_explicit_id": { "kms": "aws", "type": "object", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": { "x": { "$numberInt": "1" } } }, "aws_object_det_explicit_altname": { "kms": "aws", "type": "object", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "x": { "$numberInt": "1" } } }, "aws_array_rand_auto_id": { "kms": "aws", "type": "array", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": [ { "$numberInt": "1" }, { "$numberInt": "2" }, { "$numberInt": "3" } ] }, "aws_array_rand_auto_altname": { "kms": "aws", "type": "array", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": [ { "$numberInt": "1" }, { "$numberInt": "2" }, { "$numberInt": "3" } ] }, "aws_array_rand_explicit_id": { "kms": "aws", "type": "array", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": [ { "$numberInt": "1" }, { "$numberInt": "2" }, { "$numberInt": "3" } ] }, "aws_array_rand_explicit_altname": { "kms": "aws", "type": "array", "algo": "rand", 
"method": "explicit", "identifier": "altname", "allowed": true, "value": [ { "$numberInt": "1" }, { "$numberInt": "2" }, { "$numberInt": "3" } ] }, "aws_array_det_explicit_id": { "kms": "aws", "type": "array", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": [ { "$numberInt": "1" }, { "$numberInt": "2" }, { "$numberInt": "3" } ] }, "aws_array_det_explicit_altname": { "kms": "aws", "type": "array", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": [ { "$numberInt": "1" }, { "$numberInt": "2" }, { "$numberInt": "3" } ] }, "aws_binData=00_rand_auto_id": { "kms": "aws", "type": "binData=00", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AQIDBA==", "subType": "00" } } }, "aws_binData=00_rand_auto_altname": { "kms": "aws", "type": "binData=00", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AQIDBA==", "subType": "00" } } }, "aws_binData=00_rand_explicit_id": { "kms": "aws", "type": "binData=00", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AQIDBA==", "subType": "00" } } }, "aws_binData=00_rand_explicit_altname": { "kms": "aws", "type": "binData=00", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AQIDBA==", "subType": "00" } } }, "aws_binData=00_det_auto_id": { "kms": "aws", "type": "binData=00", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AQIDBA==", "subType": "00" } } }, "aws_binData=00_det_explicit_id": { "kms": "aws", "type": "binData=00", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AQIDBA==", "subType": "00" } } }, "aws_binData=00_det_explicit_altname": { "kms": "aws", "type": "binData=00", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AQIDBA==", "subType": "00" } } }, "aws_binData=04_rand_auto_id": { "kms": "aws", "type": "binData=04", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AAECAwQFBgcICQoLDA0ODw==", "subType": "04" } } }, "aws_binData=04_rand_auto_altname": { "kms": "aws", "type": "binData=04", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AAECAwQFBgcICQoLDA0ODw==", "subType": "04" } } }, "aws_binData=04_rand_explicit_id": { "kms": "aws", "type": "binData=04", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AAECAwQFBgcICQoLDA0ODw==", "subType": "04" } } }, "aws_binData=04_rand_explicit_altname": { "kms": "aws", "type": "binData=04", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AAECAwQFBgcICQoLDA0ODw==", "subType": "04" } } }, "aws_binData=04_det_auto_id": { "kms": "aws", "type": "binData=04", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AAECAwQFBgcICQoLDA0ODw==", "subType": "04" } } }, "aws_binData=04_det_explicit_id": { "kms": "aws", "type": "binData=04", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AAECAwQFBgcICQoLDA0ODw==", "subType": "04" } } }, "aws_binData=04_det_explicit_altname": { 
"kms": "aws", "type": "binData=04", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AAECAwQFBgcICQoLDA0ODw==", "subType": "04" } } }, "aws_undefined_rand_explicit_id": { "kms": "aws", "type": "undefined", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$undefined": true } }, "aws_undefined_rand_explicit_altname": { "kms": "aws", "type": "undefined", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$undefined": true } }, "aws_undefined_det_explicit_id": { "kms": "aws", "type": "undefined", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$undefined": true } }, "aws_undefined_det_explicit_altname": { "kms": "aws", "type": "undefined", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$undefined": true } }, "aws_objectId_rand_auto_id": { "kms": "aws", "type": "objectId", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$oid": "01234567890abcdef0123456" } }, "aws_objectId_rand_auto_altname": { "kms": "aws", "type": "objectId", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$oid": "01234567890abcdef0123456" } }, "aws_objectId_rand_explicit_id": { "kms": "aws", "type": "objectId", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$oid": "01234567890abcdef0123456" } }, "aws_objectId_rand_explicit_altname": { "kms": "aws", "type": "objectId", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$oid": "01234567890abcdef0123456" } }, "aws_objectId_det_auto_id": { "kms": "aws", "type": "objectId", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$oid": "01234567890abcdef0123456" } }, "aws_objectId_det_explicit_id": { "kms": "aws", "type": "objectId", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$oid": "01234567890abcdef0123456" } }, "aws_objectId_det_explicit_altname": { "kms": "aws", "type": "objectId", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$oid": "01234567890abcdef0123456" } }, "aws_bool_rand_auto_id": { "kms": "aws", "type": "bool", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": true }, "aws_bool_rand_auto_altname": { "kms": "aws", "type": "bool", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": true }, "aws_bool_rand_explicit_id": { "kms": "aws", "type": "bool", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": true }, "aws_bool_rand_explicit_altname": { "kms": "aws", "type": "bool", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": true }, "aws_bool_det_explicit_id": { "kms": "aws", "type": "bool", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": true }, "aws_bool_det_explicit_altname": { "kms": "aws", "type": "bool", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": true }, "aws_date_rand_auto_id": { "kms": "aws", "type": "date", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$date": { "$numberLong": "12345" } } }, "aws_date_rand_auto_altname": { "kms": "aws", "type": "date", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": 
true, "value": { "$date": { "$numberLong": "12345" } } }, "aws_date_rand_explicit_id": { "kms": "aws", "type": "date", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$date": { "$numberLong": "12345" } } }, "aws_date_rand_explicit_altname": { "kms": "aws", "type": "date", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$date": { "$numberLong": "12345" } } }, "aws_date_det_auto_id": { "kms": "aws", "type": "date", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$date": { "$numberLong": "12345" } } }, "aws_date_det_explicit_id": { "kms": "aws", "type": "date", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$date": { "$numberLong": "12345" } } }, "aws_date_det_explicit_altname": { "kms": "aws", "type": "date", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$date": { "$numberLong": "12345" } } }, "aws_null_rand_explicit_id": { "kms": "aws", "type": "null", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": false, "value": null }, "aws_null_rand_explicit_altname": { "kms": "aws", "type": "null", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": false, "value": null }, "aws_null_det_explicit_id": { "kms": "aws", "type": "null", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": null }, "aws_null_det_explicit_altname": { "kms": "aws", "type": "null", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": null }, "aws_regex_rand_auto_id": { "kms": "aws", "type": "regex", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$regularExpression": { "pattern": ".*", "options": "" } } }, "aws_regex_rand_auto_altname": { "kms": "aws", "type": "regex", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$regularExpression": { "pattern": ".*", "options": "" } } }, "aws_regex_rand_explicit_id": { "kms": "aws", "type": "regex", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$regularExpression": { "pattern": ".*", "options": "" } } }, "aws_regex_rand_explicit_altname": { "kms": "aws", "type": "regex", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$regularExpression": { "pattern": ".*", "options": "" } } }, "aws_regex_det_auto_id": { "kms": "aws", "type": "regex", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$regularExpression": { "pattern": ".*", "options": "" } } }, "aws_regex_det_explicit_id": { "kms": "aws", "type": "regex", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$regularExpression": { "pattern": ".*", "options": "" } } }, "aws_regex_det_explicit_altname": { "kms": "aws", "type": "regex", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$regularExpression": { "pattern": ".*", "options": "" } } }, "aws_dbPointer_rand_auto_id": { "kms": "aws", "type": "dbPointer", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$dbPointer": { "$ref": "db.example", "$id": { "$oid": "01234567890abcdef0123456" } } } }, "aws_dbPointer_rand_auto_altname": { "kms": "aws", "type": "dbPointer", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$dbPointer": { "$ref": "db.example", 
"$id": { "$oid": "01234567890abcdef0123456" } } } }, "aws_dbPointer_rand_explicit_id": { "kms": "aws", "type": "dbPointer", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$dbPointer": { "$ref": "db.example", "$id": { "$oid": "01234567890abcdef0123456" } } } }, "aws_dbPointer_rand_explicit_altname": { "kms": "aws", "type": "dbPointer", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$dbPointer": { "$ref": "db.example", "$id": { "$oid": "01234567890abcdef0123456" } } } }, "aws_dbPointer_det_auto_id": { "kms": "aws", "type": "dbPointer", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$dbPointer": { "$ref": "db.example", "$id": { "$oid": "01234567890abcdef0123456" } } } }, "aws_dbPointer_det_explicit_id": { "kms": "aws", "type": "dbPointer", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$dbPointer": { "$ref": "db.example", "$id": { "$oid": "01234567890abcdef0123456" } } } }, "aws_dbPointer_det_explicit_altname": { "kms": "aws", "type": "dbPointer", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$dbPointer": { "$ref": "db.example", "$id": { "$oid": "01234567890abcdef0123456" } } } }, "aws_javascript_rand_auto_id": { "kms": "aws", "type": "javascript", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$code": "x=1" } }, "aws_javascript_rand_auto_altname": { "kms": "aws", "type": "javascript", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$code": "x=1" } }, "aws_javascript_rand_explicit_id": { "kms": "aws", "type": "javascript", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$code": "x=1" } }, "aws_javascript_rand_explicit_altname": { "kms": "aws", "type": "javascript", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$code": "x=1" } }, "aws_javascript_det_auto_id": { "kms": "aws", "type": "javascript", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$code": "x=1" } }, "aws_javascript_det_explicit_id": { "kms": "aws", "type": "javascript", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$code": "x=1" } }, "aws_javascript_det_explicit_altname": { "kms": "aws", "type": "javascript", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$code": "x=1" } }, "aws_symbol_rand_auto_id": { "kms": "aws", "type": "symbol", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$symbol": "mongodb-symbol" } }, "aws_symbol_rand_auto_altname": { "kms": "aws", "type": "symbol", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$symbol": "mongodb-symbol" } }, "aws_symbol_rand_explicit_id": { "kms": "aws", "type": "symbol", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$symbol": "mongodb-symbol" } }, "aws_symbol_rand_explicit_altname": { "kms": "aws", "type": "symbol", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$symbol": "mongodb-symbol" } }, "aws_symbol_det_auto_id": { "kms": "aws", "type": "symbol", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$symbol": "mongodb-symbol" } }, "aws_symbol_det_explicit_id": { "kms": "aws", "type": "symbol", "algo": "det", "method": 
"explicit", "identifier": "id", "allowed": true, "value": { "$symbol": "mongodb-symbol" } }, "aws_symbol_det_explicit_altname": { "kms": "aws", "type": "symbol", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$symbol": "mongodb-symbol" } }, "aws_javascriptWithScope_rand_auto_id": { "kms": "aws", "type": "javascriptWithScope", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$code": "x=1", "$scope": {} } }, "aws_javascriptWithScope_rand_auto_altname": { "kms": "aws", "type": "javascriptWithScope", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$code": "x=1", "$scope": {} } }, "aws_javascriptWithScope_rand_explicit_id": { "kms": "aws", "type": "javascriptWithScope", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$code": "x=1", "$scope": {} } }, "aws_javascriptWithScope_rand_explicit_altname": { "kms": "aws", "type": "javascriptWithScope", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$code": "x=1", "$scope": {} } }, "aws_javascriptWithScope_det_explicit_id": { "kms": "aws", "type": "javascriptWithScope", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$code": "x=1", "$scope": {} } }, "aws_javascriptWithScope_det_explicit_altname": { "kms": "aws", "type": "javascriptWithScope", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$code": "x=1", "$scope": {} } }, "aws_int_rand_auto_id": { "kms": "aws", "type": "int", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$numberInt": "123" } }, "aws_int_rand_auto_altname": { "kms": "aws", "type": "int", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$numberInt": "123" } }, "aws_int_rand_explicit_id": { "kms": "aws", "type": "int", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$numberInt": "123" } }, "aws_int_rand_explicit_altname": { "kms": "aws", "type": "int", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$numberInt": "123" } }, "aws_int_det_auto_id": { "kms": "aws", "type": "int", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$numberInt": "123" } }, "aws_int_det_explicit_id": { "kms": "aws", "type": "int", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$numberInt": "123" } }, "aws_int_det_explicit_altname": { "kms": "aws", "type": "int", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$numberInt": "123" } }, "aws_timestamp_rand_auto_id": { "kms": "aws", "type": "timestamp", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$timestamp": { "t": 0, "i": 12345 } } }, "aws_timestamp_rand_auto_altname": { "kms": "aws", "type": "timestamp", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$timestamp": { "t": 0, "i": 12345 } } }, "aws_timestamp_rand_explicit_id": { "kms": "aws", "type": "timestamp", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$timestamp": { "t": 0, "i": 12345 } } }, "aws_timestamp_rand_explicit_altname": { "kms": "aws", "type": "timestamp", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$timestamp": { "t": 0, "i": 12345 } } }, 
"aws_timestamp_det_auto_id": { "kms": "aws", "type": "timestamp", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$timestamp": { "t": 0, "i": 12345 } } }, "aws_timestamp_det_explicit_id": { "kms": "aws", "type": "timestamp", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$timestamp": { "t": 0, "i": 12345 } } }, "aws_timestamp_det_explicit_altname": { "kms": "aws", "type": "timestamp", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$timestamp": { "t": 0, "i": 12345 } } }, "aws_long_rand_auto_id": { "kms": "aws", "type": "long", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$numberLong": "456" } }, "aws_long_rand_auto_altname": { "kms": "aws", "type": "long", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$numberLong": "456" } }, "aws_long_rand_explicit_id": { "kms": "aws", "type": "long", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$numberLong": "456" } }, "aws_long_rand_explicit_altname": { "kms": "aws", "type": "long", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$numberLong": "456" } }, "aws_long_det_auto_id": { "kms": "aws", "type": "long", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$numberLong": "456" } }, "aws_long_det_explicit_id": { "kms": "aws", "type": "long", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$numberLong": "456" } }, "aws_long_det_explicit_altname": { "kms": "aws", "type": "long", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$numberLong": "456" } }, "aws_decimal_rand_auto_id": { "kms": "aws", "type": "decimal", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$numberDecimal": "1.234" } }, "aws_decimal_rand_auto_altname": { "kms": "aws", "type": "decimal", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$numberDecimal": "1.234" } }, "aws_decimal_rand_explicit_id": { "kms": "aws", "type": "decimal", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$numberDecimal": "1.234" } }, "aws_decimal_rand_explicit_altname": { "kms": "aws", "type": "decimal", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$numberDecimal": "1.234" } }, "aws_decimal_det_explicit_id": { "kms": "aws", "type": "decimal", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$numberDecimal": "1.234" } }, "aws_decimal_det_explicit_altname": { "kms": "aws", "type": "decimal", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$numberDecimal": "1.234" } }, "aws_minKey_rand_explicit_id": { "kms": "aws", "type": "minKey", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$minKey": 1 } }, "aws_minKey_rand_explicit_altname": { "kms": "aws", "type": "minKey", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$minKey": 1 } }, "aws_minKey_det_explicit_id": { "kms": "aws", "type": "minKey", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$minKey": 1 } }, "aws_minKey_det_explicit_altname": { "kms": "aws", "type": "minKey", "algo": "det", "method": "explicit", 
"identifier": "altname", "allowed": false, "value": { "$minKey": 1 } }, "aws_maxKey_rand_explicit_id": { "kms": "aws", "type": "maxKey", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$maxKey": 1 } }, "aws_maxKey_rand_explicit_altname": { "kms": "aws", "type": "maxKey", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$maxKey": 1 } }, "aws_maxKey_det_explicit_id": { "kms": "aws", "type": "maxKey", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$maxKey": 1 } }, "aws_maxKey_det_explicit_altname": { "kms": "aws", "type": "maxKey", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$maxKey": 1 } }, "local_double_rand_auto_id": { "kms": "local", "type": "double", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$numberDouble": "1.234" } }, "local_double_rand_auto_altname": { "kms": "local", "type": "double", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$numberDouble": "1.234" } }, "local_double_rand_explicit_id": { "kms": "local", "type": "double", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$numberDouble": "1.234" } }, "local_double_rand_explicit_altname": { "kms": "local", "type": "double", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$numberDouble": "1.234" } }, "local_double_det_explicit_id": { "kms": "local", "type": "double", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$numberDouble": "1.234" } }, "local_double_det_explicit_altname": { "kms": "local", "type": "double", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$numberDouble": "1.234" } }, "local_string_rand_auto_id": { "kms": "local", "type": "string", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": "mongodb" }, "local_string_rand_auto_altname": { "kms": "local", "type": "string", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": "mongodb" }, "local_string_rand_explicit_id": { "kms": "local", "type": "string", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": "mongodb" }, "local_string_rand_explicit_altname": { "kms": "local", "type": "string", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": "mongodb" }, "local_string_det_auto_id": { "kms": "local", "type": "string", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": "mongodb" }, "local_string_det_explicit_id": { "kms": "local", "type": "string", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": "mongodb" }, "local_string_det_explicit_altname": { "kms": "local", "type": "string", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": "mongodb" }, "local_object_rand_auto_id": { "kms": "local", "type": "object", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "x": { "$numberInt": "1" } } }, "local_object_rand_auto_altname": { "kms": "local", "type": "object", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "x": { "$numberInt": "1" } } }, "local_object_rand_explicit_id": { "kms": "local", "type": "object", "algo": "rand", "method": "explicit", "identifier": "id", 
"allowed": true, "value": { "x": { "$numberInt": "1" } } }, "local_object_rand_explicit_altname": { "kms": "local", "type": "object", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "x": { "$numberInt": "1" } } }, "local_object_det_explicit_id": { "kms": "local", "type": "object", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": { "x": { "$numberInt": "1" } } }, "local_object_det_explicit_altname": { "kms": "local", "type": "object", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "x": { "$numberInt": "1" } } }, "local_array_rand_auto_id": { "kms": "local", "type": "array", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": [ { "$numberInt": "1" }, { "$numberInt": "2" }, { "$numberInt": "3" } ] }, "local_array_rand_auto_altname": { "kms": "local", "type": "array", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": [ { "$numberInt": "1" }, { "$numberInt": "2" }, { "$numberInt": "3" } ] }, "local_array_rand_explicit_id": { "kms": "local", "type": "array", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": [ { "$numberInt": "1" }, { "$numberInt": "2" }, { "$numberInt": "3" } ] }, "local_array_rand_explicit_altname": { "kms": "local", "type": "array", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": [ { "$numberInt": "1" }, { "$numberInt": "2" }, { "$numberInt": "3" } ] }, "local_array_det_explicit_id": { "kms": "local", "type": "array", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": [ { "$numberInt": "1" }, { "$numberInt": "2" }, { "$numberInt": "3" } ] }, "local_array_det_explicit_altname": { "kms": "local", "type": "array", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": [ { "$numberInt": "1" }, { "$numberInt": "2" }, { "$numberInt": "3" } ] }, "local_binData=00_rand_auto_id": { "kms": "local", "type": "binData=00", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AQIDBA==", "subType": "00" } } }, "local_binData=00_rand_auto_altname": { "kms": "local", "type": "binData=00", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AQIDBA==", "subType": "00" } } }, "local_binData=00_rand_explicit_id": { "kms": "local", "type": "binData=00", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AQIDBA==", "subType": "00" } } }, "local_binData=00_rand_explicit_altname": { "kms": "local", "type": "binData=00", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AQIDBA==", "subType": "00" } } }, "local_binData=00_det_auto_id": { "kms": "local", "type": "binData=00", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AQIDBA==", "subType": "00" } } }, "local_binData=00_det_explicit_id": { "kms": "local", "type": "binData=00", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AQIDBA==", "subType": "00" } } }, "local_binData=00_det_explicit_altname": { "kms": "local", "type": "binData=00", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AQIDBA==", "subType": "00" } } }, 
"local_binData=04_rand_auto_id": { "kms": "local", "type": "binData=04", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AAECAwQFBgcICQoLDA0ODw==", "subType": "04" } } }, "local_binData=04_rand_auto_altname": { "kms": "local", "type": "binData=04", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AAECAwQFBgcICQoLDA0ODw==", "subType": "04" } } }, "local_binData=04_rand_explicit_id": { "kms": "local", "type": "binData=04", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AAECAwQFBgcICQoLDA0ODw==", "subType": "04" } } }, "local_binData=04_rand_explicit_altname": { "kms": "local", "type": "binData=04", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AAECAwQFBgcICQoLDA0ODw==", "subType": "04" } } }, "local_binData=04_det_auto_id": { "kms": "local", "type": "binData=04", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AAECAwQFBgcICQoLDA0ODw==", "subType": "04" } } }, "local_binData=04_det_explicit_id": { "kms": "local", "type": "binData=04", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AAECAwQFBgcICQoLDA0ODw==", "subType": "04" } } }, "local_binData=04_det_explicit_altname": { "kms": "local", "type": "binData=04", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AAECAwQFBgcICQoLDA0ODw==", "subType": "04" } } }, "local_undefined_rand_explicit_id": { "kms": "local", "type": "undefined", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$undefined": true } }, "local_undefined_rand_explicit_altname": { "kms": "local", "type": "undefined", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$undefined": true } }, "local_undefined_det_explicit_id": { "kms": "local", "type": "undefined", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$undefined": true } }, "local_undefined_det_explicit_altname": { "kms": "local", "type": "undefined", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$undefined": true } }, "local_objectId_rand_auto_id": { "kms": "local", "type": "objectId", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$oid": "01234567890abcdef0123456" } }, "local_objectId_rand_auto_altname": { "kms": "local", "type": "objectId", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$oid": "01234567890abcdef0123456" } }, "local_objectId_rand_explicit_id": { "kms": "local", "type": "objectId", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$oid": "01234567890abcdef0123456" } }, "local_objectId_rand_explicit_altname": { "kms": "local", "type": "objectId", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$oid": "01234567890abcdef0123456" } }, "local_objectId_det_auto_id": { "kms": "local", "type": "objectId", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$oid": "01234567890abcdef0123456" } }, "local_objectId_det_explicit_id": { "kms": "local", "type": "objectId", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, 
"value": { "$oid": "01234567890abcdef0123456" } }, "local_objectId_det_explicit_altname": { "kms": "local", "type": "objectId", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$oid": "01234567890abcdef0123456" } }, "local_bool_rand_auto_id": { "kms": "local", "type": "bool", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": true }, "local_bool_rand_auto_altname": { "kms": "local", "type": "bool", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": true }, "local_bool_rand_explicit_id": { "kms": "local", "type": "bool", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": true }, "local_bool_rand_explicit_altname": { "kms": "local", "type": "bool", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": true }, "local_bool_det_explicit_id": { "kms": "local", "type": "bool", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": true }, "local_bool_det_explicit_altname": { "kms": "local", "type": "bool", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": true }, "local_date_rand_auto_id": { "kms": "local", "type": "date", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$date": { "$numberLong": "12345" } } }, "local_date_rand_auto_altname": { "kms": "local", "type": "date", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$date": { "$numberLong": "12345" } } }, "local_date_rand_explicit_id": { "kms": "local", "type": "date", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$date": { "$numberLong": "12345" } } }, "local_date_rand_explicit_altname": { "kms": "local", "type": "date", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$date": { "$numberLong": "12345" } } }, "local_date_det_auto_id": { "kms": "local", "type": "date", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$date": { "$numberLong": "12345" } } }, "local_date_det_explicit_id": { "kms": "local", "type": "date", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$date": { "$numberLong": "12345" } } }, "local_date_det_explicit_altname": { "kms": "local", "type": "date", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$date": { "$numberLong": "12345" } } }, "local_null_rand_explicit_id": { "kms": "local", "type": "null", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": false, "value": null }, "local_null_rand_explicit_altname": { "kms": "local", "type": "null", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": false, "value": null }, "local_null_det_explicit_id": { "kms": "local", "type": "null", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": null }, "local_null_det_explicit_altname": { "kms": "local", "type": "null", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": null }, "local_regex_rand_auto_id": { "kms": "local", "type": "regex", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$regularExpression": { "pattern": ".*", "options": "" } } }, "local_regex_rand_auto_altname": { "kms": "local", "type": "regex", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, 
"value": { "$regularExpression": { "pattern": ".*", "options": "" } } }, "local_regex_rand_explicit_id": { "kms": "local", "type": "regex", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$regularExpression": { "pattern": ".*", "options": "" } } }, "local_regex_rand_explicit_altname": { "kms": "local", "type": "regex", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$regularExpression": { "pattern": ".*", "options": "" } } }, "local_regex_det_auto_id": { "kms": "local", "type": "regex", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$regularExpression": { "pattern": ".*", "options": "" } } }, "local_regex_det_explicit_id": { "kms": "local", "type": "regex", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$regularExpression": { "pattern": ".*", "options": "" } } }, "local_regex_det_explicit_altname": { "kms": "local", "type": "regex", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$regularExpression": { "pattern": ".*", "options": "" } } }, "local_dbPointer_rand_auto_id": { "kms": "local", "type": "dbPointer", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$dbPointer": { "$ref": "db.example", "$id": { "$oid": "01234567890abcdef0123456" } } } }, "local_dbPointer_rand_auto_altname": { "kms": "local", "type": "dbPointer", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$dbPointer": { "$ref": "db.example", "$id": { "$oid": "01234567890abcdef0123456" } } } }, "local_dbPointer_rand_explicit_id": { "kms": "local", "type": "dbPointer", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$dbPointer": { "$ref": "db.example", "$id": { "$oid": "01234567890abcdef0123456" } } } }, "local_dbPointer_rand_explicit_altname": { "kms": "local", "type": "dbPointer", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$dbPointer": { "$ref": "db.example", "$id": { "$oid": "01234567890abcdef0123456" } } } }, "local_dbPointer_det_auto_id": { "kms": "local", "type": "dbPointer", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$dbPointer": { "$ref": "db.example", "$id": { "$oid": "01234567890abcdef0123456" } } } }, "local_dbPointer_det_explicit_id": { "kms": "local", "type": "dbPointer", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$dbPointer": { "$ref": "db.example", "$id": { "$oid": "01234567890abcdef0123456" } } } }, "local_dbPointer_det_explicit_altname": { "kms": "local", "type": "dbPointer", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$dbPointer": { "$ref": "db.example", "$id": { "$oid": "01234567890abcdef0123456" } } } }, "local_javascript_rand_auto_id": { "kms": "local", "type": "javascript", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$code": "x=1" } }, "local_javascript_rand_auto_altname": { "kms": "local", "type": "javascript", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$code": "x=1" } }, "local_javascript_rand_explicit_id": { "kms": "local", "type": "javascript", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$code": "x=1" } }, "local_javascript_rand_explicit_altname": { "kms": "local", "type": "javascript", "algo": 
"rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$code": "x=1" } }, "local_javascript_det_auto_id": { "kms": "local", "type": "javascript", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$code": "x=1" } }, "local_javascript_det_explicit_id": { "kms": "local", "type": "javascript", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$code": "x=1" } }, "local_javascript_det_explicit_altname": { "kms": "local", "type": "javascript", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$code": "x=1" } }, "local_symbol_rand_auto_id": { "kms": "local", "type": "symbol", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$symbol": "mongodb-symbol" } }, "local_symbol_rand_auto_altname": { "kms": "local", "type": "symbol", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$symbol": "mongodb-symbol" } }, "local_symbol_rand_explicit_id": { "kms": "local", "type": "symbol", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$symbol": "mongodb-symbol" } }, "local_symbol_rand_explicit_altname": { "kms": "local", "type": "symbol", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$symbol": "mongodb-symbol" } }, "local_symbol_det_auto_id": { "kms": "local", "type": "symbol", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$symbol": "mongodb-symbol" } }, "local_symbol_det_explicit_id": { "kms": "local", "type": "symbol", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$symbol": "mongodb-symbol" } }, "local_symbol_det_explicit_altname": { "kms": "local", "type": "symbol", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$symbol": "mongodb-symbol" } }, "local_javascriptWithScope_rand_auto_id": { "kms": "local", "type": "javascriptWithScope", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$code": "x=1", "$scope": {} } }, "local_javascriptWithScope_rand_auto_altname": { "kms": "local", "type": "javascriptWithScope", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$code": "x=1", "$scope": {} } }, "local_javascriptWithScope_rand_explicit_id": { "kms": "local", "type": "javascriptWithScope", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$code": "x=1", "$scope": {} } }, "local_javascriptWithScope_rand_explicit_altname": { "kms": "local", "type": "javascriptWithScope", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$code": "x=1", "$scope": {} } }, "local_javascriptWithScope_det_explicit_id": { "kms": "local", "type": "javascriptWithScope", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$code": "x=1", "$scope": {} } }, "local_javascriptWithScope_det_explicit_altname": { "kms": "local", "type": "javascriptWithScope", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$code": "x=1", "$scope": {} } }, "local_int_rand_auto_id": { "kms": "local", "type": "int", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$numberInt": "123" } }, "local_int_rand_auto_altname": { "kms": "local", "type": "int", "algo": "rand", "method": "auto", "identifier": "altname", 
"allowed": true, "value": { "$numberInt": "123" } }, "local_int_rand_explicit_id": { "kms": "local", "type": "int", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$numberInt": "123" } }, "local_int_rand_explicit_altname": { "kms": "local", "type": "int", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$numberInt": "123" } }, "local_int_det_auto_id": { "kms": "local", "type": "int", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$numberInt": "123" } }, "local_int_det_explicit_id": { "kms": "local", "type": "int", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$numberInt": "123" } }, "local_int_det_explicit_altname": { "kms": "local", "type": "int", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$numberInt": "123" } }, "local_timestamp_rand_auto_id": { "kms": "local", "type": "timestamp", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$timestamp": { "t": 0, "i": 12345 } } }, "local_timestamp_rand_auto_altname": { "kms": "local", "type": "timestamp", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$timestamp": { "t": 0, "i": 12345 } } }, "local_timestamp_rand_explicit_id": { "kms": "local", "type": "timestamp", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$timestamp": { "t": 0, "i": 12345 } } }, "local_timestamp_rand_explicit_altname": { "kms": "local", "type": "timestamp", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$timestamp": { "t": 0, "i": 12345 } } }, "local_timestamp_det_auto_id": { "kms": "local", "type": "timestamp", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$timestamp": { "t": 0, "i": 12345 } } }, "local_timestamp_det_explicit_id": { "kms": "local", "type": "timestamp", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$timestamp": { "t": 0, "i": 12345 } } }, "local_timestamp_det_explicit_altname": { "kms": "local", "type": "timestamp", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$timestamp": { "t": 0, "i": 12345 } } }, "local_long_rand_auto_id": { "kms": "local", "type": "long", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$numberLong": "456" } }, "local_long_rand_auto_altname": { "kms": "local", "type": "long", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$numberLong": "456" } }, "local_long_rand_explicit_id": { "kms": "local", "type": "long", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$numberLong": "456" } }, "local_long_rand_explicit_altname": { "kms": "local", "type": "long", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$numberLong": "456" } }, "local_long_det_auto_id": { "kms": "local", "type": "long", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$numberLong": "456" } }, "local_long_det_explicit_id": { "kms": "local", "type": "long", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$numberLong": "456" } }, "local_long_det_explicit_altname": { "kms": "local", "type": "long", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, 
"value": { "$numberLong": "456" } }, "local_decimal_rand_auto_id": { "kms": "local", "type": "decimal", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$numberDecimal": "1.234" } }, "local_decimal_rand_auto_altname": { "kms": "local", "type": "decimal", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$numberDecimal": "1.234" } }, "local_decimal_rand_explicit_id": { "kms": "local", "type": "decimal", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$numberDecimal": "1.234" } }, "local_decimal_rand_explicit_altname": { "kms": "local", "type": "decimal", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$numberDecimal": "1.234" } }, "local_decimal_det_explicit_id": { "kms": "local", "type": "decimal", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$numberDecimal": "1.234" } }, "local_decimal_det_explicit_altname": { "kms": "local", "type": "decimal", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$numberDecimal": "1.234" } }, "local_minKey_rand_explicit_id": { "kms": "local", "type": "minKey", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$minKey": 1 } }, "local_minKey_rand_explicit_altname": { "kms": "local", "type": "minKey", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$minKey": 1 } }, "local_minKey_det_explicit_id": { "kms": "local", "type": "minKey", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$minKey": 1 } }, "local_minKey_det_explicit_altname": { "kms": "local", "type": "minKey", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$minKey": 1 } }, "local_maxKey_rand_explicit_id": { "kms": "local", "type": "maxKey", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$maxKey": 1 } }, "local_maxKey_rand_explicit_altname": { "kms": "local", "type": "maxKey", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$maxKey": 1 } }, "local_maxKey_det_explicit_id": { "kms": "local", "type": "maxKey", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$maxKey": 1 } }, "local_maxKey_det_explicit_altname": { "kms": "local", "type": "maxKey", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$maxKey": 1 } }, "azure_double_rand_auto_id": { "kms": "azure", "type": "double", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$numberDouble": "1.234" } }, "azure_double_rand_auto_altname": { "kms": "azure", "type": "double", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$numberDouble": "1.234" } }, "azure_double_rand_explicit_id": { "kms": "azure", "type": "double", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$numberDouble": "1.234" } }, "azure_double_rand_explicit_altname": { "kms": "azure", "type": "double", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$numberDouble": "1.234" } }, "azure_double_det_explicit_id": { "kms": "azure", "type": "double", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$numberDouble": "1.234" } }, 
"azure_double_det_explicit_altname": { "kms": "azure", "type": "double", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$numberDouble": "1.234" } }, "azure_string_rand_auto_id": { "kms": "azure", "type": "string", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": "mongodb" }, "azure_string_rand_auto_altname": { "kms": "azure", "type": "string", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": "mongodb" }, "azure_string_rand_explicit_id": { "kms": "azure", "type": "string", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": "mongodb" }, "azure_string_rand_explicit_altname": { "kms": "azure", "type": "string", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": "mongodb" }, "azure_string_det_auto_id": { "kms": "azure", "type": "string", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": "mongodb" }, "azure_string_det_explicit_id": { "kms": "azure", "type": "string", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": "mongodb" }, "azure_string_det_explicit_altname": { "kms": "azure", "type": "string", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": "mongodb" }, "azure_object_rand_auto_id": { "kms": "azure", "type": "object", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "x": { "$numberInt": "1" } } }, "azure_object_rand_auto_altname": { "kms": "azure", "type": "object", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "x": { "$numberInt": "1" } } }, "azure_object_rand_explicit_id": { "kms": "azure", "type": "object", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "x": { "$numberInt": "1" } } }, "azure_object_rand_explicit_altname": { "kms": "azure", "type": "object", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "x": { "$numberInt": "1" } } }, "azure_object_det_explicit_id": { "kms": "azure", "type": "object", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": { "x": { "$numberInt": "1" } } }, "azure_object_det_explicit_altname": { "kms": "azure", "type": "object", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "x": { "$numberInt": "1" } } }, "azure_array_rand_auto_id": { "kms": "azure", "type": "array", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": [ { "$numberInt": "1" }, { "$numberInt": "2" }, { "$numberInt": "3" } ] }, "azure_array_rand_auto_altname": { "kms": "azure", "type": "array", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": [ { "$numberInt": "1" }, { "$numberInt": "2" }, { "$numberInt": "3" } ] }, "azure_array_rand_explicit_id": { "kms": "azure", "type": "array", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": [ { "$numberInt": "1" }, { "$numberInt": "2" }, { "$numberInt": "3" } ] }, "azure_array_rand_explicit_altname": { "kms": "azure", "type": "array", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": [ { "$numberInt": "1" }, { "$numberInt": "2" }, { "$numberInt": "3" } ] }, "azure_array_det_explicit_id": { "kms": "azure", "type": "array", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": [ { 
"$numberInt": "1" }, { "$numberInt": "2" }, { "$numberInt": "3" } ] }, "azure_array_det_explicit_altname": { "kms": "azure", "type": "array", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": [ { "$numberInt": "1" }, { "$numberInt": "2" }, { "$numberInt": "3" } ] }, "azure_binData=00_rand_auto_id": { "kms": "azure", "type": "binData=00", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AQIDBA==", "subType": "00" } } }, "azure_binData=00_rand_auto_altname": { "kms": "azure", "type": "binData=00", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AQIDBA==", "subType": "00" } } }, "azure_binData=00_rand_explicit_id": { "kms": "azure", "type": "binData=00", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AQIDBA==", "subType": "00" } } }, "azure_binData=00_rand_explicit_altname": { "kms": "azure", "type": "binData=00", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AQIDBA==", "subType": "00" } } }, "azure_binData=00_det_auto_id": { "kms": "azure", "type": "binData=00", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AQIDBA==", "subType": "00" } } }, "azure_binData=00_det_explicit_id": { "kms": "azure", "type": "binData=00", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AQIDBA==", "subType": "00" } } }, "azure_binData=00_det_explicit_altname": { "kms": "azure", "type": "binData=00", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AQIDBA==", "subType": "00" } } }, "azure_binData=04_rand_auto_id": { "kms": "azure", "type": "binData=04", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AAECAwQFBgcICQoLDA0ODw==", "subType": "04" } } }, "azure_binData=04_rand_auto_altname": { "kms": "azure", "type": "binData=04", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AAECAwQFBgcICQoLDA0ODw==", "subType": "04" } } }, "azure_binData=04_rand_explicit_id": { "kms": "azure", "type": "binData=04", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AAECAwQFBgcICQoLDA0ODw==", "subType": "04" } } }, "azure_binData=04_rand_explicit_altname": { "kms": "azure", "type": "binData=04", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AAECAwQFBgcICQoLDA0ODw==", "subType": "04" } } }, "azure_binData=04_det_auto_id": { "kms": "azure", "type": "binData=04", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AAECAwQFBgcICQoLDA0ODw==", "subType": "04" } } }, "azure_binData=04_det_explicit_id": { "kms": "azure", "type": "binData=04", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AAECAwQFBgcICQoLDA0ODw==", "subType": "04" } } }, "azure_binData=04_det_explicit_altname": { "kms": "azure", "type": "binData=04", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AAECAwQFBgcICQoLDA0ODw==", "subType": "04" } } }, "azure_undefined_rand_explicit_id": { 
"kms": "azure", "type": "undefined", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$undefined": true } }, "azure_undefined_rand_explicit_altname": { "kms": "azure", "type": "undefined", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$undefined": true } }, "azure_undefined_det_explicit_id": { "kms": "azure", "type": "undefined", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$undefined": true } }, "azure_undefined_det_explicit_altname": { "kms": "azure", "type": "undefined", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$undefined": true } }, "azure_objectId_rand_auto_id": { "kms": "azure", "type": "objectId", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$oid": "01234567890abcdef0123456" } }, "azure_objectId_rand_auto_altname": { "kms": "azure", "type": "objectId", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$oid": "01234567890abcdef0123456" } }, "azure_objectId_rand_explicit_id": { "kms": "azure", "type": "objectId", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$oid": "01234567890abcdef0123456" } }, "azure_objectId_rand_explicit_altname": { "kms": "azure", "type": "objectId", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$oid": "01234567890abcdef0123456" } }, "azure_objectId_det_auto_id": { "kms": "azure", "type": "objectId", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$oid": "01234567890abcdef0123456" } }, "azure_objectId_det_explicit_id": { "kms": "azure", "type": "objectId", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$oid": "01234567890abcdef0123456" } }, "azure_objectId_det_explicit_altname": { "kms": "azure", "type": "objectId", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$oid": "01234567890abcdef0123456" } }, "azure_bool_rand_auto_id": { "kms": "azure", "type": "bool", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": true }, "azure_bool_rand_auto_altname": { "kms": "azure", "type": "bool", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": true }, "azure_bool_rand_explicit_id": { "kms": "azure", "type": "bool", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": true }, "azure_bool_rand_explicit_altname": { "kms": "azure", "type": "bool", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": true }, "azure_bool_det_explicit_id": { "kms": "azure", "type": "bool", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": true }, "azure_bool_det_explicit_altname": { "kms": "azure", "type": "bool", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": true }, "azure_date_rand_auto_id": { "kms": "azure", "type": "date", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$date": { "$numberLong": "12345" } } }, "azure_date_rand_auto_altname": { "kms": "azure", "type": "date", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$date": { "$numberLong": "12345" } } }, "azure_date_rand_explicit_id": { "kms": "azure", "type": "date", "algo": "rand", "method": "explicit", 
"identifier": "id", "allowed": true, "value": { "$date": { "$numberLong": "12345" } } }, "azure_date_rand_explicit_altname": { "kms": "azure", "type": "date", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$date": { "$numberLong": "12345" } } }, "azure_date_det_auto_id": { "kms": "azure", "type": "date", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$date": { "$numberLong": "12345" } } }, "azure_date_det_explicit_id": { "kms": "azure", "type": "date", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$date": { "$numberLong": "12345" } } }, "azure_date_det_explicit_altname": { "kms": "azure", "type": "date", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$date": { "$numberLong": "12345" } } }, "azure_null_rand_explicit_id": { "kms": "azure", "type": "null", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": false, "value": null }, "azure_null_rand_explicit_altname": { "kms": "azure", "type": "null", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": false, "value": null }, "azure_null_det_explicit_id": { "kms": "azure", "type": "null", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": null }, "azure_null_det_explicit_altname": { "kms": "azure", "type": "null", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": null }, "azure_regex_rand_auto_id": { "kms": "azure", "type": "regex", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$regularExpression": { "pattern": ".*", "options": "" } } }, "azure_regex_rand_auto_altname": { "kms": "azure", "type": "regex", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$regularExpression": { "pattern": ".*", "options": "" } } }, "azure_regex_rand_explicit_id": { "kms": "azure", "type": "regex", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$regularExpression": { "pattern": ".*", "options": "" } } }, "azure_regex_rand_explicit_altname": { "kms": "azure", "type": "regex", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$regularExpression": { "pattern": ".*", "options": "" } } }, "azure_regex_det_auto_id": { "kms": "azure", "type": "regex", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$regularExpression": { "pattern": ".*", "options": "" } } }, "azure_regex_det_explicit_id": { "kms": "azure", "type": "regex", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$regularExpression": { "pattern": ".*", "options": "" } } }, "azure_regex_det_explicit_altname": { "kms": "azure", "type": "regex", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$regularExpression": { "pattern": ".*", "options": "" } } }, "azure_dbPointer_rand_auto_id": { "kms": "azure", "type": "dbPointer", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$dbPointer": { "$ref": "db.example", "$id": { "$oid": "01234567890abcdef0123456" } } } }, "azure_dbPointer_rand_auto_altname": { "kms": "azure", "type": "dbPointer", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$dbPointer": { "$ref": "db.example", "$id": { "$oid": "01234567890abcdef0123456" } } } }, "azure_dbPointer_rand_explicit_id": { 
"kms": "azure", "type": "dbPointer", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$dbPointer": { "$ref": "db.example", "$id": { "$oid": "01234567890abcdef0123456" } } } }, "azure_dbPointer_rand_explicit_altname": { "kms": "azure", "type": "dbPointer", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$dbPointer": { "$ref": "db.example", "$id": { "$oid": "01234567890abcdef0123456" } } } }, "azure_dbPointer_det_auto_id": { "kms": "azure", "type": "dbPointer", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$dbPointer": { "$ref": "db.example", "$id": { "$oid": "01234567890abcdef0123456" } } } }, "azure_dbPointer_det_explicit_id": { "kms": "azure", "type": "dbPointer", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$dbPointer": { "$ref": "db.example", "$id": { "$oid": "01234567890abcdef0123456" } } } }, "azure_dbPointer_det_explicit_altname": { "kms": "azure", "type": "dbPointer", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$dbPointer": { "$ref": "db.example", "$id": { "$oid": "01234567890abcdef0123456" } } } }, "azure_javascript_rand_auto_id": { "kms": "azure", "type": "javascript", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$code": "x=1" } }, "azure_javascript_rand_auto_altname": { "kms": "azure", "type": "javascript", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$code": "x=1" } }, "azure_javascript_rand_explicit_id": { "kms": "azure", "type": "javascript", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$code": "x=1" } }, "azure_javascript_rand_explicit_altname": { "kms": "azure", "type": "javascript", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$code": "x=1" } }, "azure_javascript_det_auto_id": { "kms": "azure", "type": "javascript", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$code": "x=1" } }, "azure_javascript_det_explicit_id": { "kms": "azure", "type": "javascript", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$code": "x=1" } }, "azure_javascript_det_explicit_altname": { "kms": "azure", "type": "javascript", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$code": "x=1" } }, "azure_symbol_rand_auto_id": { "kms": "azure", "type": "symbol", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$symbol": "mongodb-symbol" } }, "azure_symbol_rand_auto_altname": { "kms": "azure", "type": "symbol", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$symbol": "mongodb-symbol" } }, "azure_symbol_rand_explicit_id": { "kms": "azure", "type": "symbol", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$symbol": "mongodb-symbol" } }, "azure_symbol_rand_explicit_altname": { "kms": "azure", "type": "symbol", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$symbol": "mongodb-symbol" } }, "azure_symbol_det_auto_id": { "kms": "azure", "type": "symbol", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$symbol": "mongodb-symbol" } }, "azure_symbol_det_explicit_id": { "kms": "azure", "type": "symbol", "algo": "det", "method": "explicit", 
"identifier": "id", "allowed": true, "value": { "$symbol": "mongodb-symbol" } }, "azure_symbol_det_explicit_altname": { "kms": "azure", "type": "symbol", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$symbol": "mongodb-symbol" } }, "azure_javascriptWithScope_rand_auto_id": { "kms": "azure", "type": "javascriptWithScope", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$code": "x=1", "$scope": {} } }, "azure_javascriptWithScope_rand_auto_altname": { "kms": "azure", "type": "javascriptWithScope", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$code": "x=1", "$scope": {} } }, "azure_javascriptWithScope_rand_explicit_id": { "kms": "azure", "type": "javascriptWithScope", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$code": "x=1", "$scope": {} } }, "azure_javascriptWithScope_rand_explicit_altname": { "kms": "azure", "type": "javascriptWithScope", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$code": "x=1", "$scope": {} } }, "azure_javascriptWithScope_det_explicit_id": { "kms": "azure", "type": "javascriptWithScope", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$code": "x=1", "$scope": {} } }, "azure_javascriptWithScope_det_explicit_altname": { "kms": "azure", "type": "javascriptWithScope", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$code": "x=1", "$scope": {} } }, "azure_int_rand_auto_id": { "kms": "azure", "type": "int", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$numberInt": "123" } }, "azure_int_rand_auto_altname": { "kms": "azure", "type": "int", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$numberInt": "123" } }, "azure_int_rand_explicit_id": { "kms": "azure", "type": "int", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$numberInt": "123" } }, "azure_int_rand_explicit_altname": { "kms": "azure", "type": "int", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$numberInt": "123" } }, "azure_int_det_auto_id": { "kms": "azure", "type": "int", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$numberInt": "123" } }, "azure_int_det_explicit_id": { "kms": "azure", "type": "int", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$numberInt": "123" } }, "azure_int_det_explicit_altname": { "kms": "azure", "type": "int", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$numberInt": "123" } }, "azure_timestamp_rand_auto_id": { "kms": "azure", "type": "timestamp", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$timestamp": { "t": 0, "i": 12345 } } }, "azure_timestamp_rand_auto_altname": { "kms": "azure", "type": "timestamp", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$timestamp": { "t": 0, "i": 12345 } } }, "azure_timestamp_rand_explicit_id": { "kms": "azure", "type": "timestamp", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$timestamp": { "t": 0, "i": 12345 } } }, "azure_timestamp_rand_explicit_altname": { "kms": "azure", "type": "timestamp", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": 
true, "value": { "$timestamp": { "t": 0, "i": 12345 } } }, "azure_timestamp_det_auto_id": { "kms": "azure", "type": "timestamp", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$timestamp": { "t": 0, "i": 12345 } } }, "azure_timestamp_det_explicit_id": { "kms": "azure", "type": "timestamp", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$timestamp": { "t": 0, "i": 12345 } } }, "azure_timestamp_det_explicit_altname": { "kms": "azure", "type": "timestamp", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$timestamp": { "t": 0, "i": 12345 } } }, "azure_long_rand_auto_id": { "kms": "azure", "type": "long", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$numberLong": "456" } }, "azure_long_rand_auto_altname": { "kms": "azure", "type": "long", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$numberLong": "456" } }, "azure_long_rand_explicit_id": { "kms": "azure", "type": "long", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$numberLong": "456" } }, "azure_long_rand_explicit_altname": { "kms": "azure", "type": "long", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$numberLong": "456" } }, "azure_long_det_auto_id": { "kms": "azure", "type": "long", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$numberLong": "456" } }, "azure_long_det_explicit_id": { "kms": "azure", "type": "long", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$numberLong": "456" } }, "azure_long_det_explicit_altname": { "kms": "azure", "type": "long", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$numberLong": "456" } }, "azure_decimal_rand_auto_id": { "kms": "azure", "type": "decimal", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$numberDecimal": "1.234" } }, "azure_decimal_rand_auto_altname": { "kms": "azure", "type": "decimal", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$numberDecimal": "1.234" } }, "azure_decimal_rand_explicit_id": { "kms": "azure", "type": "decimal", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$numberDecimal": "1.234" } }, "azure_decimal_rand_explicit_altname": { "kms": "azure", "type": "decimal", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$numberDecimal": "1.234" } }, "azure_decimal_det_explicit_id": { "kms": "azure", "type": "decimal", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$numberDecimal": "1.234" } }, "azure_decimal_det_explicit_altname": { "kms": "azure", "type": "decimal", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$numberDecimal": "1.234" } }, "azure_minKey_rand_explicit_id": { "kms": "azure", "type": "minKey", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$minKey": 1 } }, "azure_minKey_rand_explicit_altname": { "kms": "azure", "type": "minKey", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$minKey": 1 } }, "azure_minKey_det_explicit_id": { "kms": "azure", "type": "minKey", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": { 
"$minKey": 1 } }, "azure_minKey_det_explicit_altname": { "kms": "azure", "type": "minKey", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$minKey": 1 } }, "azure_maxKey_rand_explicit_id": { "kms": "azure", "type": "maxKey", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$maxKey": 1 } }, "azure_maxKey_rand_explicit_altname": { "kms": "azure", "type": "maxKey", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$maxKey": 1 } }, "azure_maxKey_det_explicit_id": { "kms": "azure", "type": "maxKey", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$maxKey": 1 } }, "azure_maxKey_det_explicit_altname": { "kms": "azure", "type": "maxKey", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$maxKey": 1 } }, "gcp_double_rand_auto_id": { "kms": "gcp", "type": "double", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$numberDouble": "1.234" } }, "gcp_double_rand_auto_altname": { "kms": "gcp", "type": "double", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$numberDouble": "1.234" } }, "gcp_double_rand_explicit_id": { "kms": "gcp", "type": "double", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$numberDouble": "1.234" } }, "gcp_double_rand_explicit_altname": { "kms": "gcp", "type": "double", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$numberDouble": "1.234" } }, "gcp_double_det_explicit_id": { "kms": "gcp", "type": "double", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$numberDouble": "1.234" } }, "gcp_double_det_explicit_altname": { "kms": "gcp", "type": "double", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$numberDouble": "1.234" } }, "gcp_string_rand_auto_id": { "kms": "gcp", "type": "string", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": "mongodb" }, "gcp_string_rand_auto_altname": { "kms": "gcp", "type": "string", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": "mongodb" }, "gcp_string_rand_explicit_id": { "kms": "gcp", "type": "string", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": "mongodb" }, "gcp_string_rand_explicit_altname": { "kms": "gcp", "type": "string", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": "mongodb" }, "gcp_string_det_auto_id": { "kms": "gcp", "type": "string", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": "mongodb" }, "gcp_string_det_explicit_id": { "kms": "gcp", "type": "string", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": "mongodb" }, "gcp_string_det_explicit_altname": { "kms": "gcp", "type": "string", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": "mongodb" }, "gcp_object_rand_auto_id": { "kms": "gcp", "type": "object", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "x": { "$numberInt": "1" } } }, "gcp_object_rand_auto_altname": { "kms": "gcp", "type": "object", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "x": { "$numberInt": "1" } } }, "gcp_object_rand_explicit_id": { "kms": "gcp", 
"type": "object", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "x": { "$numberInt": "1" } } }, "gcp_object_rand_explicit_altname": { "kms": "gcp", "type": "object", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "x": { "$numberInt": "1" } } }, "gcp_object_det_explicit_id": { "kms": "gcp", "type": "object", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": { "x": { "$numberInt": "1" } } }, "gcp_object_det_explicit_altname": { "kms": "gcp", "type": "object", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "x": { "$numberInt": "1" } } }, "gcp_array_rand_auto_id": { "kms": "gcp", "type": "array", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": [ { "$numberInt": "1" }, { "$numberInt": "2" }, { "$numberInt": "3" } ] }, "gcp_array_rand_auto_altname": { "kms": "gcp", "type": "array", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": [ { "$numberInt": "1" }, { "$numberInt": "2" }, { "$numberInt": "3" } ] }, "gcp_array_rand_explicit_id": { "kms": "gcp", "type": "array", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": [ { "$numberInt": "1" }, { "$numberInt": "2" }, { "$numberInt": "3" } ] }, "gcp_array_rand_explicit_altname": { "kms": "gcp", "type": "array", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": [ { "$numberInt": "1" }, { "$numberInt": "2" }, { "$numberInt": "3" } ] }, "gcp_array_det_explicit_id": { "kms": "gcp", "type": "array", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": [ { "$numberInt": "1" }, { "$numberInt": "2" }, { "$numberInt": "3" } ] }, "gcp_array_det_explicit_altname": { "kms": "gcp", "type": "array", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": [ { "$numberInt": "1" }, { "$numberInt": "2" }, { "$numberInt": "3" } ] }, "gcp_binData=00_rand_auto_id": { "kms": "gcp", "type": "binData=00", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AQIDBA==", "subType": "00" } } }, "gcp_binData=00_rand_auto_altname": { "kms": "gcp", "type": "binData=00", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AQIDBA==", "subType": "00" } } }, "gcp_binData=00_rand_explicit_id": { "kms": "gcp", "type": "binData=00", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AQIDBA==", "subType": "00" } } }, "gcp_binData=00_rand_explicit_altname": { "kms": "gcp", "type": "binData=00", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AQIDBA==", "subType": "00" } } }, "gcp_binData=00_det_auto_id": { "kms": "gcp", "type": "binData=00", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AQIDBA==", "subType": "00" } } }, "gcp_binData=00_det_explicit_id": { "kms": "gcp", "type": "binData=00", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AQIDBA==", "subType": "00" } } }, "gcp_binData=00_det_explicit_altname": { "kms": "gcp", "type": "binData=00", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AQIDBA==", "subType": 
"00" } } }, "gcp_binData=04_rand_auto_id": { "kms": "gcp", "type": "binData=04", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AAECAwQFBgcICQoLDA0ODw==", "subType": "04" } } }, "gcp_binData=04_rand_auto_altname": { "kms": "gcp", "type": "binData=04", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AAECAwQFBgcICQoLDA0ODw==", "subType": "04" } } }, "gcp_binData=04_rand_explicit_id": { "kms": "gcp", "type": "binData=04", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AAECAwQFBgcICQoLDA0ODw==", "subType": "04" } } }, "gcp_binData=04_rand_explicit_altname": { "kms": "gcp", "type": "binData=04", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AAECAwQFBgcICQoLDA0ODw==", "subType": "04" } } }, "gcp_binData=04_det_auto_id": { "kms": "gcp", "type": "binData=04", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AAECAwQFBgcICQoLDA0ODw==", "subType": "04" } } }, "gcp_binData=04_det_explicit_id": { "kms": "gcp", "type": "binData=04", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AAECAwQFBgcICQoLDA0ODw==", "subType": "04" } } }, "gcp_binData=04_det_explicit_altname": { "kms": "gcp", "type": "binData=04", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AAECAwQFBgcICQoLDA0ODw==", "subType": "04" } } }, "gcp_undefined_rand_explicit_id": { "kms": "gcp", "type": "undefined", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$undefined": true } }, "gcp_undefined_rand_explicit_altname": { "kms": "gcp", "type": "undefined", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$undefined": true } }, "gcp_undefined_det_explicit_id": { "kms": "gcp", "type": "undefined", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$undefined": true } }, "gcp_undefined_det_explicit_altname": { "kms": "gcp", "type": "undefined", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$undefined": true } }, "gcp_objectId_rand_auto_id": { "kms": "gcp", "type": "objectId", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$oid": "01234567890abcdef0123456" } }, "gcp_objectId_rand_auto_altname": { "kms": "gcp", "type": "objectId", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$oid": "01234567890abcdef0123456" } }, "gcp_objectId_rand_explicit_id": { "kms": "gcp", "type": "objectId", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$oid": "01234567890abcdef0123456" } }, "gcp_objectId_rand_explicit_altname": { "kms": "gcp", "type": "objectId", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$oid": "01234567890abcdef0123456" } }, "gcp_objectId_det_auto_id": { "kms": "gcp", "type": "objectId", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$oid": "01234567890abcdef0123456" } }, "gcp_objectId_det_explicit_id": { "kms": "gcp", "type": "objectId", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$oid": "01234567890abcdef0123456" } }, 
"gcp_objectId_det_explicit_altname": { "kms": "gcp", "type": "objectId", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$oid": "01234567890abcdef0123456" } }, "gcp_bool_rand_auto_id": { "kms": "gcp", "type": "bool", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": true }, "gcp_bool_rand_auto_altname": { "kms": "gcp", "type": "bool", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": true }, "gcp_bool_rand_explicit_id": { "kms": "gcp", "type": "bool", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": true }, "gcp_bool_rand_explicit_altname": { "kms": "gcp", "type": "bool", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": true }, "gcp_bool_det_explicit_id": { "kms": "gcp", "type": "bool", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": true }, "gcp_bool_det_explicit_altname": { "kms": "gcp", "type": "bool", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": true }, "gcp_date_rand_auto_id": { "kms": "gcp", "type": "date", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$date": { "$numberLong": "12345" } } }, "gcp_date_rand_auto_altname": { "kms": "gcp", "type": "date", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$date": { "$numberLong": "12345" } } }, "gcp_date_rand_explicit_id": { "kms": "gcp", "type": "date", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$date": { "$numberLong": "12345" } } }, "gcp_date_rand_explicit_altname": { "kms": "gcp", "type": "date", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$date": { "$numberLong": "12345" } } }, "gcp_date_det_auto_id": { "kms": "gcp", "type": "date", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$date": { "$numberLong": "12345" } } }, "gcp_date_det_explicit_id": { "kms": "gcp", "type": "date", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$date": { "$numberLong": "12345" } } }, "gcp_date_det_explicit_altname": { "kms": "gcp", "type": "date", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$date": { "$numberLong": "12345" } } }, "gcp_null_rand_explicit_id": { "kms": "gcp", "type": "null", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": false, "value": null }, "gcp_null_rand_explicit_altname": { "kms": "gcp", "type": "null", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": false, "value": null }, "gcp_null_det_explicit_id": { "kms": "gcp", "type": "null", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": null }, "gcp_null_det_explicit_altname": { "kms": "gcp", "type": "null", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": null }, "gcp_regex_rand_auto_id": { "kms": "gcp", "type": "regex", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$regularExpression": { "pattern": ".*", "options": "" } } }, "gcp_regex_rand_auto_altname": { "kms": "gcp", "type": "regex", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$regularExpression": { "pattern": ".*", "options": "" } } }, "gcp_regex_rand_explicit_id": { "kms": "gcp", "type": 
"regex", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$regularExpression": { "pattern": ".*", "options": "" } } }, "gcp_regex_rand_explicit_altname": { "kms": "gcp", "type": "regex", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$regularExpression": { "pattern": ".*", "options": "" } } }, "gcp_regex_det_auto_id": { "kms": "gcp", "type": "regex", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$regularExpression": { "pattern": ".*", "options": "" } } }, "gcp_regex_det_explicit_id": { "kms": "gcp", "type": "regex", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$regularExpression": { "pattern": ".*", "options": "" } } }, "gcp_regex_det_explicit_altname": { "kms": "gcp", "type": "regex", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$regularExpression": { "pattern": ".*", "options": "" } } }, "gcp_dbPointer_rand_auto_id": { "kms": "gcp", "type": "dbPointer", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$dbPointer": { "$ref": "db.example", "$id": { "$oid": "01234567890abcdef0123456" } } } }, "gcp_dbPointer_rand_auto_altname": { "kms": "gcp", "type": "dbPointer", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$dbPointer": { "$ref": "db.example", "$id": { "$oid": "01234567890abcdef0123456" } } } }, "gcp_dbPointer_rand_explicit_id": { "kms": "gcp", "type": "dbPointer", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$dbPointer": { "$ref": "db.example", "$id": { "$oid": "01234567890abcdef0123456" } } } }, "gcp_dbPointer_rand_explicit_altname": { "kms": "gcp", "type": "dbPointer", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$dbPointer": { "$ref": "db.example", "$id": { "$oid": "01234567890abcdef0123456" } } } }, "gcp_dbPointer_det_auto_id": { "kms": "gcp", "type": "dbPointer", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$dbPointer": { "$ref": "db.example", "$id": { "$oid": "01234567890abcdef0123456" } } } }, "gcp_dbPointer_det_explicit_id": { "kms": "gcp", "type": "dbPointer", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$dbPointer": { "$ref": "db.example", "$id": { "$oid": "01234567890abcdef0123456" } } } }, "gcp_dbPointer_det_explicit_altname": { "kms": "gcp", "type": "dbPointer", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$dbPointer": { "$ref": "db.example", "$id": { "$oid": "01234567890abcdef0123456" } } } }, "gcp_javascript_rand_auto_id": { "kms": "gcp", "type": "javascript", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$code": "x=1" } }, "gcp_javascript_rand_auto_altname": { "kms": "gcp", "type": "javascript", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$code": "x=1" } }, "gcp_javascript_rand_explicit_id": { "kms": "gcp", "type": "javascript", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$code": "x=1" } }, "gcp_javascript_rand_explicit_altname": { "kms": "gcp", "type": "javascript", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$code": "x=1" } }, "gcp_javascript_det_auto_id": { "kms": "gcp", "type": "javascript", "algo": "det", 
"method": "auto", "identifier": "id", "allowed": true, "value": { "$code": "x=1" } }, "gcp_javascript_det_explicit_id": { "kms": "gcp", "type": "javascript", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$code": "x=1" } }, "gcp_javascript_det_explicit_altname": { "kms": "gcp", "type": "javascript", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$code": "x=1" } }, "gcp_symbol_rand_auto_id": { "kms": "gcp", "type": "symbol", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$symbol": "mongodb-symbol" } }, "gcp_symbol_rand_auto_altname": { "kms": "gcp", "type": "symbol", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$symbol": "mongodb-symbol" } }, "gcp_symbol_rand_explicit_id": { "kms": "gcp", "type": "symbol", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$symbol": "mongodb-symbol" } }, "gcp_symbol_rand_explicit_altname": { "kms": "gcp", "type": "symbol", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$symbol": "mongodb-symbol" } }, "gcp_symbol_det_auto_id": { "kms": "gcp", "type": "symbol", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$symbol": "mongodb-symbol" } }, "gcp_symbol_det_explicit_id": { "kms": "gcp", "type": "symbol", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$symbol": "mongodb-symbol" } }, "gcp_symbol_det_explicit_altname": { "kms": "gcp", "type": "symbol", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$symbol": "mongodb-symbol" } }, "gcp_javascriptWithScope_rand_auto_id": { "kms": "gcp", "type": "javascriptWithScope", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$code": "x=1", "$scope": {} } }, "gcp_javascriptWithScope_rand_auto_altname": { "kms": "gcp", "type": "javascriptWithScope", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$code": "x=1", "$scope": {} } }, "gcp_javascriptWithScope_rand_explicit_id": { "kms": "gcp", "type": "javascriptWithScope", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$code": "x=1", "$scope": {} } }, "gcp_javascriptWithScope_rand_explicit_altname": { "kms": "gcp", "type": "javascriptWithScope", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$code": "x=1", "$scope": {} } }, "gcp_javascriptWithScope_det_explicit_id": { "kms": "gcp", "type": "javascriptWithScope", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$code": "x=1", "$scope": {} } }, "gcp_javascriptWithScope_det_explicit_altname": { "kms": "gcp", "type": "javascriptWithScope", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$code": "x=1", "$scope": {} } }, "gcp_int_rand_auto_id": { "kms": "gcp", "type": "int", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$numberInt": "123" } }, "gcp_int_rand_auto_altname": { "kms": "gcp", "type": "int", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$numberInt": "123" } }, "gcp_int_rand_explicit_id": { "kms": "gcp", "type": "int", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$numberInt": "123" } }, "gcp_int_rand_explicit_altname": { 
"kms": "gcp", "type": "int", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$numberInt": "123" } }, "gcp_int_det_auto_id": { "kms": "gcp", "type": "int", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$numberInt": "123" } }, "gcp_int_det_explicit_id": { "kms": "gcp", "type": "int", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$numberInt": "123" } }, "gcp_int_det_explicit_altname": { "kms": "gcp", "type": "int", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$numberInt": "123" } }, "gcp_timestamp_rand_auto_id": { "kms": "gcp", "type": "timestamp", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$timestamp": { "t": 0, "i": 12345 } } }, "gcp_timestamp_rand_auto_altname": { "kms": "gcp", "type": "timestamp", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$timestamp": { "t": 0, "i": 12345 } } }, "gcp_timestamp_rand_explicit_id": { "kms": "gcp", "type": "timestamp", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$timestamp": { "t": 0, "i": 12345 } } }, "gcp_timestamp_rand_explicit_altname": { "kms": "gcp", "type": "timestamp", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$timestamp": { "t": 0, "i": 12345 } } }, "gcp_timestamp_det_auto_id": { "kms": "gcp", "type": "timestamp", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$timestamp": { "t": 0, "i": 12345 } } }, "gcp_timestamp_det_explicit_id": { "kms": "gcp", "type": "timestamp", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$timestamp": { "t": 0, "i": 12345 } } }, "gcp_timestamp_det_explicit_altname": { "kms": "gcp", "type": "timestamp", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$timestamp": { "t": 0, "i": 12345 } } }, "gcp_long_rand_auto_id": { "kms": "gcp", "type": "long", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$numberLong": "456" } }, "gcp_long_rand_auto_altname": { "kms": "gcp", "type": "long", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$numberLong": "456" } }, "gcp_long_rand_explicit_id": { "kms": "gcp", "type": "long", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$numberLong": "456" } }, "gcp_long_rand_explicit_altname": { "kms": "gcp", "type": "long", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$numberLong": "456" } }, "gcp_long_det_auto_id": { "kms": "gcp", "type": "long", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$numberLong": "456" } }, "gcp_long_det_explicit_id": { "kms": "gcp", "type": "long", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$numberLong": "456" } }, "gcp_long_det_explicit_altname": { "kms": "gcp", "type": "long", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$numberLong": "456" } }, "gcp_decimal_rand_auto_id": { "kms": "gcp", "type": "decimal", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$numberDecimal": "1.234" } }, "gcp_decimal_rand_auto_altname": { "kms": "gcp", "type": "decimal", "algo": "rand", "method": "auto", "identifier": 
"altname", "allowed": true, "value": { "$numberDecimal": "1.234" } }, "gcp_decimal_rand_explicit_id": { "kms": "gcp", "type": "decimal", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$numberDecimal": "1.234" } }, "gcp_decimal_rand_explicit_altname": { "kms": "gcp", "type": "decimal", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$numberDecimal": "1.234" } }, "gcp_decimal_det_explicit_id": { "kms": "gcp", "type": "decimal", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$numberDecimal": "1.234" } }, "gcp_decimal_det_explicit_altname": { "kms": "gcp", "type": "decimal", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$numberDecimal": "1.234" } }, "gcp_minKey_rand_explicit_id": { "kms": "gcp", "type": "minKey", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$minKey": 1 } }, "gcp_minKey_rand_explicit_altname": { "kms": "gcp", "type": "minKey", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$minKey": 1 } }, "gcp_minKey_det_explicit_id": { "kms": "gcp", "type": "minKey", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$minKey": 1 } }, "gcp_minKey_det_explicit_altname": { "kms": "gcp", "type": "minKey", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$minKey": 1 } }, "gcp_maxKey_rand_explicit_id": { "kms": "gcp", "type": "maxKey", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$maxKey": 1 } }, "gcp_maxKey_rand_explicit_altname": { "kms": "gcp", "type": "maxKey", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$maxKey": 1 } }, "gcp_maxKey_det_explicit_id": { "kms": "gcp", "type": "maxKey", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$maxKey": 1 } }, "gcp_maxKey_det_explicit_altname": { "kms": "gcp", "type": "maxKey", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$maxKey": 1 } }, "kmip_double_rand_auto_id": { "kms": "kmip", "type": "double", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$numberDouble": "1.234" } }, "kmip_double_rand_auto_altname": { "kms": "kmip", "type": "double", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$numberDouble": "1.234" } }, "kmip_double_rand_explicit_id": { "kms": "kmip", "type": "double", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$numberDouble": "1.234" } }, "kmip_double_rand_explicit_altname": { "kms": "kmip", "type": "double", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$numberDouble": "1.234" } }, "kmip_double_det_explicit_id": { "kms": "kmip", "type": "double", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$numberDouble": "1.234" } }, "kmip_double_det_explicit_altname": { "kms": "kmip", "type": "double", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$numberDouble": "1.234" } }, "kmip_string_rand_auto_id": { "kms": "kmip", "type": "string", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": "mongodb" }, "kmip_string_rand_auto_altname": { "kms": "kmip", "type": "string", 
"algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": "mongodb" }, "kmip_string_rand_explicit_id": { "kms": "kmip", "type": "string", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": "mongodb" }, "kmip_string_rand_explicit_altname": { "kms": "kmip", "type": "string", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": "mongodb" }, "kmip_string_det_auto_id": { "kms": "kmip", "type": "string", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": "mongodb" }, "kmip_string_det_explicit_id": { "kms": "kmip", "type": "string", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": "mongodb" }, "kmip_string_det_explicit_altname": { "kms": "kmip", "type": "string", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": "mongodb" }, "kmip_object_rand_auto_id": { "kms": "kmip", "type": "object", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "x": { "$numberInt": "1" } } }, "kmip_object_rand_auto_altname": { "kms": "kmip", "type": "object", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "x": { "$numberInt": "1" } } }, "kmip_object_rand_explicit_id": { "kms": "kmip", "type": "object", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "x": { "$numberInt": "1" } } }, "kmip_object_rand_explicit_altname": { "kms": "kmip", "type": "object", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "x": { "$numberInt": "1" } } }, "kmip_object_det_explicit_id": { "kms": "kmip", "type": "object", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": { "x": { "$numberInt": "1" } } }, "kmip_object_det_explicit_altname": { "kms": "kmip", "type": "object", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "x": { "$numberInt": "1" } } }, "kmip_array_rand_auto_id": { "kms": "kmip", "type": "array", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": [ { "$numberInt": "1" }, { "$numberInt": "2" }, { "$numberInt": "3" } ] }, "kmip_array_rand_auto_altname": { "kms": "kmip", "type": "array", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": [ { "$numberInt": "1" }, { "$numberInt": "2" }, { "$numberInt": "3" } ] }, "kmip_array_rand_explicit_id": { "kms": "kmip", "type": "array", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": [ { "$numberInt": "1" }, { "$numberInt": "2" }, { "$numberInt": "3" } ] }, "kmip_array_rand_explicit_altname": { "kms": "kmip", "type": "array", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": [ { "$numberInt": "1" }, { "$numberInt": "2" }, { "$numberInt": "3" } ] }, "kmip_array_det_explicit_id": { "kms": "kmip", "type": "array", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": [ { "$numberInt": "1" }, { "$numberInt": "2" }, { "$numberInt": "3" } ] }, "kmip_array_det_explicit_altname": { "kms": "kmip", "type": "array", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": [ { "$numberInt": "1" }, { "$numberInt": "2" }, { "$numberInt": "3" } ] }, "kmip_binData=00_rand_auto_id": { "kms": "kmip", "type": "binData=00", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { 
"$binary": { "base64": "AQIDBA==", "subType": "00" } } }, "kmip_binData=00_rand_auto_altname": { "kms": "kmip", "type": "binData=00", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AQIDBA==", "subType": "00" } } }, "kmip_binData=00_rand_explicit_id": { "kms": "kmip", "type": "binData=00", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AQIDBA==", "subType": "00" } } }, "kmip_binData=00_rand_explicit_altname": { "kms": "kmip", "type": "binData=00", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AQIDBA==", "subType": "00" } } }, "kmip_binData=00_det_auto_id": { "kms": "kmip", "type": "binData=00", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AQIDBA==", "subType": "00" } } }, "kmip_binData=00_det_explicit_id": { "kms": "kmip", "type": "binData=00", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AQIDBA==", "subType": "00" } } }, "kmip_binData=00_det_explicit_altname": { "kms": "kmip", "type": "binData=00", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AQIDBA==", "subType": "00" } } }, "kmip_binData=04_rand_auto_id": { "kms": "kmip", "type": "binData=04", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AAECAwQFBgcICQoLDA0ODw==", "subType": "04" } } }, "kmip_binData=04_rand_auto_altname": { "kms": "kmip", "type": "binData=04", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AAECAwQFBgcICQoLDA0ODw==", "subType": "04" } } }, "kmip_binData=04_rand_explicit_id": { "kms": "kmip", "type": "binData=04", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AAECAwQFBgcICQoLDA0ODw==", "subType": "04" } } }, "kmip_binData=04_rand_explicit_altname": { "kms": "kmip", "type": "binData=04", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AAECAwQFBgcICQoLDA0ODw==", "subType": "04" } } }, "kmip_binData=04_det_auto_id": { "kms": "kmip", "type": "binData=04", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AAECAwQFBgcICQoLDA0ODw==", "subType": "04" } } }, "kmip_binData=04_det_explicit_id": { "kms": "kmip", "type": "binData=04", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$binary": { "base64": "AAECAwQFBgcICQoLDA0ODw==", "subType": "04" } } }, "kmip_binData=04_det_explicit_altname": { "kms": "kmip", "type": "binData=04", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$binary": { "base64": "AAECAwQFBgcICQoLDA0ODw==", "subType": "04" } } }, "kmip_undefined_rand_explicit_id": { "kms": "kmip", "type": "undefined", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$undefined": true } }, "kmip_undefined_rand_explicit_altname": { "kms": "kmip", "type": "undefined", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$undefined": true } }, "kmip_undefined_det_explicit_id": { "kms": "kmip", "type": "undefined", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, 
"value": { "$undefined": true } }, "kmip_undefined_det_explicit_altname": { "kms": "kmip", "type": "undefined", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$undefined": true } }, "kmip_objectId_rand_auto_id": { "kms": "kmip", "type": "objectId", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$oid": "01234567890abcdef0123456" } }, "kmip_objectId_rand_auto_altname": { "kms": "kmip", "type": "objectId", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$oid": "01234567890abcdef0123456" } }, "kmip_objectId_rand_explicit_id": { "kms": "kmip", "type": "objectId", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$oid": "01234567890abcdef0123456" } }, "kmip_objectId_rand_explicit_altname": { "kms": "kmip", "type": "objectId", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$oid": "01234567890abcdef0123456" } }, "kmip_objectId_det_auto_id": { "kms": "kmip", "type": "objectId", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$oid": "01234567890abcdef0123456" } }, "kmip_objectId_det_explicit_id": { "kms": "kmip", "type": "objectId", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$oid": "01234567890abcdef0123456" } }, "kmip_objectId_det_explicit_altname": { "kms": "kmip", "type": "objectId", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$oid": "01234567890abcdef0123456" } }, "kmip_bool_rand_auto_id": { "kms": "kmip", "type": "bool", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": true }, "kmip_bool_rand_auto_altname": { "kms": "kmip", "type": "bool", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": true }, "kmip_bool_rand_explicit_id": { "kms": "kmip", "type": "bool", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": true }, "kmip_bool_rand_explicit_altname": { "kms": "kmip", "type": "bool", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": true }, "kmip_bool_det_explicit_id": { "kms": "kmip", "type": "bool", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": true }, "kmip_bool_det_explicit_altname": { "kms": "kmip", "type": "bool", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": true }, "kmip_date_rand_auto_id": { "kms": "kmip", "type": "date", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$date": { "$numberLong": "12345" } } }, "kmip_date_rand_auto_altname": { "kms": "kmip", "type": "date", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$date": { "$numberLong": "12345" } } }, "kmip_date_rand_explicit_id": { "kms": "kmip", "type": "date", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$date": { "$numberLong": "12345" } } }, "kmip_date_rand_explicit_altname": { "kms": "kmip", "type": "date", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$date": { "$numberLong": "12345" } } }, "kmip_date_det_auto_id": { "kms": "kmip", "type": "date", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$date": { "$numberLong": "12345" } } }, "kmip_date_det_explicit_id": { "kms": "kmip", "type": "date", 
"algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$date": { "$numberLong": "12345" } } }, "kmip_date_det_explicit_altname": { "kms": "kmip", "type": "date", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$date": { "$numberLong": "12345" } } }, "kmip_null_rand_explicit_id": { "kms": "kmip", "type": "null", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": false, "value": null }, "kmip_null_rand_explicit_altname": { "kms": "kmip", "type": "null", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": false, "value": null }, "kmip_null_det_explicit_id": { "kms": "kmip", "type": "null", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": null }, "kmip_null_det_explicit_altname": { "kms": "kmip", "type": "null", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": null }, "kmip_regex_rand_auto_id": { "kms": "kmip", "type": "regex", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$regularExpression": { "pattern": ".*", "options": "" } } }, "kmip_regex_rand_auto_altname": { "kms": "kmip", "type": "regex", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$regularExpression": { "pattern": ".*", "options": "" } } }, "kmip_regex_rand_explicit_id": { "kms": "kmip", "type": "regex", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$regularExpression": { "pattern": ".*", "options": "" } } }, "kmip_regex_rand_explicit_altname": { "kms": "kmip", "type": "regex", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$regularExpression": { "pattern": ".*", "options": "" } } }, "kmip_regex_det_auto_id": { "kms": "kmip", "type": "regex", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$regularExpression": { "pattern": ".*", "options": "" } } }, "kmip_regex_det_explicit_id": { "kms": "kmip", "type": "regex", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$regularExpression": { "pattern": ".*", "options": "" } } }, "kmip_regex_det_explicit_altname": { "kms": "kmip", "type": "regex", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$regularExpression": { "pattern": ".*", "options": "" } } }, "kmip_dbPointer_rand_auto_id": { "kms": "kmip", "type": "dbPointer", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$dbPointer": { "$ref": "db.example", "$id": { "$oid": "01234567890abcdef0123456" } } } }, "kmip_dbPointer_rand_auto_altname": { "kms": "kmip", "type": "dbPointer", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$dbPointer": { "$ref": "db.example", "$id": { "$oid": "01234567890abcdef0123456" } } } }, "kmip_dbPointer_rand_explicit_id": { "kms": "kmip", "type": "dbPointer", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$dbPointer": { "$ref": "db.example", "$id": { "$oid": "01234567890abcdef0123456" } } } }, "kmip_dbPointer_rand_explicit_altname": { "kms": "kmip", "type": "dbPointer", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$dbPointer": { "$ref": "db.example", "$id": { "$oid": "01234567890abcdef0123456" } } } }, "kmip_dbPointer_det_auto_id": { "kms": "kmip", "type": "dbPointer", "algo": "det", "method": 
"auto", "identifier": "id", "allowed": true, "value": { "$dbPointer": { "$ref": "db.example", "$id": { "$oid": "01234567890abcdef0123456" } } } }, "kmip_dbPointer_det_explicit_id": { "kms": "kmip", "type": "dbPointer", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$dbPointer": { "$ref": "db.example", "$id": { "$oid": "01234567890abcdef0123456" } } } }, "kmip_dbPointer_det_explicit_altname": { "kms": "kmip", "type": "dbPointer", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$dbPointer": { "$ref": "db.example", "$id": { "$oid": "01234567890abcdef0123456" } } } }, "kmip_javascript_rand_auto_id": { "kms": "kmip", "type": "javascript", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$code": "x=1" } }, "kmip_javascript_rand_auto_altname": { "kms": "kmip", "type": "javascript", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$code": "x=1" } }, "kmip_javascript_rand_explicit_id": { "kms": "kmip", "type": "javascript", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$code": "x=1" } }, "kmip_javascript_rand_explicit_altname": { "kms": "kmip", "type": "javascript", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$code": "x=1" } }, "kmip_javascript_det_auto_id": { "kms": "kmip", "type": "javascript", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$code": "x=1" } }, "kmip_javascript_det_explicit_id": { "kms": "kmip", "type": "javascript", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$code": "x=1" } }, "kmip_javascript_det_explicit_altname": { "kms": "kmip", "type": "javascript", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$code": "x=1" } }, "kmip_symbol_rand_auto_id": { "kms": "kmip", "type": "symbol", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$symbol": "mongodb-symbol" } }, "kmip_symbol_rand_auto_altname": { "kms": "kmip", "type": "symbol", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$symbol": "mongodb-symbol" } }, "kmip_symbol_rand_explicit_id": { "kms": "kmip", "type": "symbol", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$symbol": "mongodb-symbol" } }, "kmip_symbol_rand_explicit_altname": { "kms": "kmip", "type": "symbol", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$symbol": "mongodb-symbol" } }, "kmip_symbol_det_auto_id": { "kms": "kmip", "type": "symbol", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$symbol": "mongodb-symbol" } }, "kmip_symbol_det_explicit_id": { "kms": "kmip", "type": "symbol", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$symbol": "mongodb-symbol" } }, "kmip_symbol_det_explicit_altname": { "kms": "kmip", "type": "symbol", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$symbol": "mongodb-symbol" } }, "kmip_javascriptWithScope_rand_auto_id": { "kms": "kmip", "type": "javascriptWithScope", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$code": "x=1", "$scope": {} } }, "kmip_javascriptWithScope_rand_auto_altname": { "kms": "kmip", "type": "javascriptWithScope", "algo": "rand", "method": "auto", 
"identifier": "altname", "allowed": true, "value": { "$code": "x=1", "$scope": {} } }, "kmip_javascriptWithScope_rand_explicit_id": { "kms": "kmip", "type": "javascriptWithScope", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$code": "x=1", "$scope": {} } }, "kmip_javascriptWithScope_rand_explicit_altname": { "kms": "kmip", "type": "javascriptWithScope", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$code": "x=1", "$scope": {} } }, "kmip_javascriptWithScope_det_explicit_id": { "kms": "kmip", "type": "javascriptWithScope", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$code": "x=1", "$scope": {} } }, "kmip_javascriptWithScope_det_explicit_altname": { "kms": "kmip", "type": "javascriptWithScope", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$code": "x=1", "$scope": {} } }, "kmip_int_rand_auto_id": { "kms": "kmip", "type": "int", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$numberInt": "123" } }, "kmip_int_rand_auto_altname": { "kms": "kmip", "type": "int", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$numberInt": "123" } }, "kmip_int_rand_explicit_id": { "kms": "kmip", "type": "int", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$numberInt": "123" } }, "kmip_int_rand_explicit_altname": { "kms": "kmip", "type": "int", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$numberInt": "123" } }, "kmip_int_det_auto_id": { "kms": "kmip", "type": "int", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$numberInt": "123" } }, "kmip_int_det_explicit_id": { "kms": "kmip", "type": "int", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$numberInt": "123" } }, "kmip_int_det_explicit_altname": { "kms": "kmip", "type": "int", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$numberInt": "123" } }, "kmip_timestamp_rand_auto_id": { "kms": "kmip", "type": "timestamp", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$timestamp": { "t": 0, "i": 12345 } } }, "kmip_timestamp_rand_auto_altname": { "kms": "kmip", "type": "timestamp", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$timestamp": { "t": 0, "i": 12345 } } }, "kmip_timestamp_rand_explicit_id": { "kms": "kmip", "type": "timestamp", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$timestamp": { "t": 0, "i": 12345 } } }, "kmip_timestamp_rand_explicit_altname": { "kms": "kmip", "type": "timestamp", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$timestamp": { "t": 0, "i": 12345 } } }, "kmip_timestamp_det_auto_id": { "kms": "kmip", "type": "timestamp", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$timestamp": { "t": 0, "i": 12345 } } }, "kmip_timestamp_det_explicit_id": { "kms": "kmip", "type": "timestamp", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$timestamp": { "t": 0, "i": 12345 } } }, "kmip_timestamp_det_explicit_altname": { "kms": "kmip", "type": "timestamp", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$timestamp": { "t": 0, "i": 
12345 } } }, "kmip_long_rand_auto_id": { "kms": "kmip", "type": "long", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$numberLong": "456" } }, "kmip_long_rand_auto_altname": { "kms": "kmip", "type": "long", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$numberLong": "456" } }, "kmip_long_rand_explicit_id": { "kms": "kmip", "type": "long", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$numberLong": "456" } }, "kmip_long_rand_explicit_altname": { "kms": "kmip", "type": "long", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$numberLong": "456" } }, "kmip_long_det_auto_id": { "kms": "kmip", "type": "long", "algo": "det", "method": "auto", "identifier": "id", "allowed": true, "value": { "$numberLong": "456" } }, "kmip_long_det_explicit_id": { "kms": "kmip", "type": "long", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$numberLong": "456" } }, "kmip_long_det_explicit_altname": { "kms": "kmip", "type": "long", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$numberLong": "456" } }, "kmip_decimal_rand_auto_id": { "kms": "kmip", "type": "decimal", "algo": "rand", "method": "auto", "identifier": "id", "allowed": true, "value": { "$numberDecimal": "1.234" } }, "kmip_decimal_rand_auto_altname": { "kms": "kmip", "type": "decimal", "algo": "rand", "method": "auto", "identifier": "altname", "allowed": true, "value": { "$numberDecimal": "1.234" } }, "kmip_decimal_rand_explicit_id": { "kms": "kmip", "type": "decimal", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": { "$numberDecimal": "1.234" } }, "kmip_decimal_rand_explicit_altname": { "kms": "kmip", "type": "decimal", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": true, "value": { "$numberDecimal": "1.234" } }, "kmip_decimal_det_explicit_id": { "kms": "kmip", "type": "decimal", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$numberDecimal": "1.234" } }, "kmip_decimal_det_explicit_altname": { "kms": "kmip", "type": "decimal", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$numberDecimal": "1.234" } }, "kmip_minKey_rand_explicit_id": { "kms": "kmip", "type": "minKey", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$minKey": 1 } }, "kmip_minKey_rand_explicit_altname": { "kms": "kmip", "type": "minKey", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$minKey": 1 } }, "kmip_minKey_det_explicit_id": { "kms": "kmip", "type": "minKey", "algo": "det", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$minKey": 1 } }, "kmip_minKey_det_explicit_altname": { "kms": "kmip", "type": "minKey", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$minKey": 1 } }, "kmip_maxKey_rand_explicit_id": { "kms": "kmip", "type": "maxKey", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": false, "value": { "$maxKey": 1 } }, "kmip_maxKey_rand_explicit_altname": { "kms": "kmip", "type": "maxKey", "algo": "rand", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$maxKey": 1 } }, "kmip_maxKey_det_explicit_id": { "kms": "kmip", "type": "maxKey", "algo": "det", "method": "explicit", "identifier": "id", 
"allowed": false, "value": { "$maxKey": 1 } }, "kmip_maxKey_det_explicit_altname": { "kms": "kmip", "type": "maxKey", "algo": "det", "method": "explicit", "identifier": "altname", "allowed": false, "value": { "$maxKey": 1 } }, "payload=0,algo=rand": { "kms": "local", "type": "string", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": "" }, "payload=1,algo=rand": { "kms": "local", "type": "string", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": "a" }, "payload=2,algo=rand": { "kms": "local", "type": "string", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": "aa" }, "payload=3,algo=rand": { "kms": "local", "type": "string", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": "aaa" }, "payload=4,algo=rand": { "kms": "local", "type": "string", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": "aaaa" }, "payload=5,algo=rand": { "kms": "local", "type": "string", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": "aaaaa" }, "payload=6,algo=rand": { "kms": "local", "type": "string", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": "aaaaaa" }, "payload=7,algo=rand": { "kms": "local", "type": "string", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": "aaaaaaa" }, "payload=8,algo=rand": { "kms": "local", "type": "string", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": "aaaaaaaa" }, "payload=9,algo=rand": { "kms": "local", "type": "string", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": "aaaaaaaaa" }, "payload=10,algo=rand": { "kms": "local", "type": "string", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": "aaaaaaaaaa" }, "payload=11,algo=rand": { "kms": "local", "type": "string", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": "aaaaaaaaaaa" }, "payload=12,algo=rand": { "kms": "local", "type": "string", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": "aaaaaaaaaaaa" }, "payload=13,algo=rand": { "kms": "local", "type": "string", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": "aaaaaaaaaaaaa" }, "payload=14,algo=rand": { "kms": "local", "type": "string", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": "aaaaaaaaaaaaaa" }, "payload=15,algo=rand": { "kms": "local", "type": "string", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": "aaaaaaaaaaaaaaa" }, "payload=16,algo=rand": { "kms": "local", "type": "string", "algo": "rand", "method": "explicit", "identifier": "id", "allowed": true, "value": "aaaaaaaaaaaaaaaa" }, "payload=0,algo=det": { "kms": "local", "type": "string", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": "" }, "payload=1,algo=det": { "kms": "local", "type": "string", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": "a" }, "payload=2,algo=det": { "kms": "local", "type": "string", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": "aa" }, "payload=3,algo=det": { "kms": "local", "type": "string", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": "aaa" }, "payload=4,algo=det": { "kms": "local", "type": "string", 
"algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": "aaaa" }, "payload=5,algo=det": { "kms": "local", "type": "string", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": "aaaaa" }, "payload=6,algo=det": { "kms": "local", "type": "string", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": "aaaaaa" }, "payload=7,algo=det": { "kms": "local", "type": "string", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": "aaaaaaa" }, "payload=8,algo=det": { "kms": "local", "type": "string", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": "aaaaaaaa" }, "payload=9,algo=det": { "kms": "local", "type": "string", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": "aaaaaaaaa" }, "payload=10,algo=det": { "kms": "local", "type": "string", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": "aaaaaaaaaa" }, "payload=11,algo=det": { "kms": "local", "type": "string", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": "aaaaaaaaaaa" }, "payload=12,algo=det": { "kms": "local", "type": "string", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": "aaaaaaaaaaaa" }, "payload=13,algo=det": { "kms": "local", "type": "string", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": "aaaaaaaaaaaaa" }, "payload=14,algo=det": { "kms": "local", "type": "string", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": "aaaaaaaaaaaaaa" }, "payload=15,algo=det": { "kms": "local", "type": "string", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": "aaaaaaaaaaaaaaa" }, "payload=16,algo=det": { "kms": "local", "type": "string", "algo": "det", "method": "explicit", "identifier": "id", "allowed": true, "value": "aaaaaaaaaaaaaaaa" } }mongo-ruby-driver-2.21.3/spec/support/crypt/data_keys/000077500000000000000000000000001505113246500227435ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/support/crypt/data_keys/key_document_aws.json000066400000000000000000000020061505113246500271740ustar00rootroot00000000000000{ "status": { "$numberInt": "1" }, "_id": { "$binary": { "base64": "AWSAAAAAAAAAAAAAAAAAAA==", "subType": "04" } }, "masterKey": { "provider": "aws", "region": "us-east-1", "key": "arn:aws:kms:us-east-1:579766882180:key/89fcc2c4-08b0-4bd9-9f25-e30687b580d0", "endpoint": "kms.us-east-1.amazonaws.com:443" }, "updateDate": { "$date": { "$numberLong": "1557827033449" } }, "keyAltNames": ["ssn_encryption_key"], "keyMaterial": { "$binary": { "base64": "AQICAHhQNmWG2CzOm1dq3kWLM+iDUZhEqnhJwH9wZVpuZ94A8gEqnsxXlR51T5EbEVezUqqKAAAAwjCBvwYJKoZIhvcNAQcGoIGxMIGuAgEAMIGoBgkqhkiG9w0BBwEwHgYJYIZIAWUDBAEuMBEEDHa4jo6yp0Z18KgbUgIBEIB74sKxWtV8/YHje5lv5THTl0HIbhSwM6EqRlmBiFFatmEWaeMk4tO4xBX65eq670I5TWPSLMzpp8ncGHMmvHqRajNBnmFtbYxN3E3/WjxmdbOOe+OXpnGJPcGsftc7cB2shRfA4lICPnE26+oVNXT6p0Lo20nY5XC7jyCO", "subType": "00" } }, "creationDate": { "$date": { "$numberLong": "1557827033449" } } } mongo-ruby-driver-2.21.3/spec/support/crypt/data_keys/key_document_azure.json000066400000000000000000000017021505113246500275320ustar00rootroot00000000000000{ "status": { "$numberInt": "1" }, "_id": { "$binary": { "base64": "AZUREAAAAAAAAAAAAAAAAA==", "subType": "04" } }, "masterKey": { "provider": "azure", "keyVaultEndpoint": "key-vault-csfle.vault.azure.net", "keyName": "key-name-csfle" }, 
"updateDate": { "$date": { "$numberLong": "1557827033449" } }, "keyAltNames": ["ssn_encryption_key"], "keyMaterial": { "$binary": { "base64": "GjKv7Q/e62E14noqXuwOWf/AI3IZJcdQ1Jcsh86MR/582kuHBQSy7hXYi1sL\n8fn8zkWe987/Ll2Oq43049djxQGEobmw8Qg3Gk2czRCzS8TMy6yASMfwROO7\nn0k+QJwiTqzLRfP+rkJVxSde1v+nPjmonup8T1L98WJywjHFDWaxI32o7X6U\nY9iTVdQ1o8RfyR9IUOg7asHWq1zbn7CwmHz264OBG79SKXN5AkG8X4QfGIQh\nu0v7H3n4r8ZpvIMa2XFrHGkoDgkCKiAmat5s5RkChT57Bu6h4Q98Fg1clIBU\n2G/BrdtbzKYktpq6CZFyvtXd48juya3w7UDLAu7h6Q==", "subType": "00" } }, "creationDate": { "$date": { "$numberLong": "1637319584426" } } } mongo-ruby-driver-2.21.3/spec/support/crypt/data_keys/key_document_gcp.json000066400000000000000000000017401505113246500271570ustar00rootroot00000000000000{ "status": { "$numberInt": "1" }, "_id": { "$binary": { "base64": "GCPAAAAAAAAAAAAAAAAAAA==", "subType": "04" } }, "masterKey": { "provider": "gcp", "projectId": "devprod-drivers", "location": "global", "keyRing": "key-ring-csfle", "keyName": "key-name-csfle" }, "updateDate": { "$date": { "$numberLong": "1557827033449" } }, "keyAltNames": [ "ssn_encryption_key" ], "keyMaterial": { "$binary": { "base64": "CiQAIgLj0fMxF5M7RuQbBultgQXS8zwxnJbKQPbdsHvLPvfiP1QSiQEAuvl7\nn4jiN8avA4SFq/K/Yns9jBBAiSKtA3OVxrAe4VEZ12U2lntLYHECCzp8OIP8\nBf/FqRjr3AHYKbfRDjngKgGDJBfSjqiq7SJN1OThwQxaBp2nvuvjn6UQ3t/f\noYL0FHW20+PL23+K/35rr8iSAyR4w+7spOJ6XmaQDPuhzKthLcrPaedcAQ==", "subType": "00" } }, "creationDate": { "$date": { "$numberLong": "1557827033449" } } }mongo-ruby-driver-2.21.3/spec/support/crypt/data_keys/key_document_kmip.json000066400000000000000000000013611505113246500273450ustar00rootroot00000000000000{ "status": { "$numberInt": "1" }, "_id": { "$binary": { "base64": "KMIPAAAAAAAAAAAAAAAAAA==", "subType": "04" } }, "masterKey": { "provider": "kmip", "keyId": "1" }, "updateDate": { "$date": { "$numberLong": "1557827033449" } }, "keyAltNames": ["ssn_encryption_key"], "keyMaterial": { "$binary": { "base64": "eUYDyB0HuWb+lQgUwO+6qJQyTTDTY2gp9FbemL7ZFo0pvr0x6rm6Ff9OVUTGH6HyMKipaeHdiIJU1dzsLwvqKvi7Beh+U4iaIWX/K0oEg1GOsJc0+Z/in8gNHbGUYLmycHViM3LES3kdt7FdFSUl5rEBHrM71yoNEXImz17QJWMGOuT4x6yoi2pvnaRJwfrI4DjpmnnTrDMac92jgZehbg==", "subType": "00" } }, "creationDate": { "$date": { "$numberLong": "1557827033449" } } } mongo-ruby-driver-2.21.3/spec/support/crypt/data_keys/key_document_local.json000066400000000000000000000014511505113246500274770ustar00rootroot00000000000000{ "status": { "$numberInt": "1" }, "_id": { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } }, "masterKey": { "provider": "local" }, "updateDate": { "$date": { "$numberLong": "1557827033449" } }, "keyMaterial": { "$binary": { "base64": "Ce9HSz/HKKGkIt4uyy+jDuKGA+rLC2cycykMo6vc8jXxqa1UVDYHWq1r+vZKbnnSRBfB981akzRKZCFpC05CTyFqDhXv6OnMjpG97OZEREGIsHEYiJkBW0jJJvfLLgeLsEpBzsro9FztGGXASxyxFRZFhXvHxyiLOKrdWfs7X1O/iK3pEoHMx6uSNSfUOgbebLfIqW7TO++iQS5g1xovXA==", "subType": "00" } }, "creationDate": { "$date": { "$numberLong": "1557827033449" } }, "keyAltNames": [ "ssn_encryption_key" ] } mongo-ruby-driver-2.21.3/spec/support/crypt/encrypted_fields/000077500000000000000000000000001505113246500243225ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/support/crypt/encrypted_fields/encryptedFields.json000066400000000000000000000012031505113246500303350ustar00rootroot00000000000000{ "escCollection": "enxcol_.default.esc", "ecocCollection": "enxcol_.default.ecoc", "fields": [ { "keyId": { "$binary": { "base64": "EjRWeBI0mHYSNBI0VniQEg==", "subType": "04" } }, "path": "encryptedIndexed", "bsonType": "string", 
"queries": { "queryType": "equality", "contention": { "$numberLong": "0" } } }, { "keyId": { "$binary": { "base64": "q83vqxI0mHYSNBI0VniQEg==", "subType": "04" } }, "path": "encryptedUnindexed", "bsonType": "string" } ] } mongo-ruby-driver-2.21.3/spec/support/crypt/encrypted_fields/range-encryptedFields-Date.json000066400000000000000000000011021505113246500323000ustar00rootroot00000000000000{ "fields": [ { "keyId": { "$binary": { "base64": "EjRWeBI0mHYSNBI0VniQEg==", "subType": "04" } }, "path": "encryptedDate", "bsonType": "date", "queries": { "queryType": "range", "sparsity": { "$numberLong": "1" }, "min": { "$date": { "$numberLong": "0" } }, "max": { "$date": { "$numberLong": "200" } } } } ] } range-encryptedFields-DecimalNoPrecision.json000066400000000000000000000006141505113246500350620ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/support/crypt/encrypted_fields{ "fields": [ { "keyId": { "$binary": { "base64": "EjRWeBI0mHYSNBI0VniQEg==", "subType": "04" } }, "path": "encryptedDecimalNoPrecision", "bsonType": "decimal", "queries": { "queryType": "range", "sparsity": { "$numberInt": "1" } } } ] } range-encryptedFields-DecimalPrecision.json000066400000000000000000000010361505113246500345640ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/support/crypt/encrypted_fields{ "fields": [ { "keyId": { "$binary": { "base64": "EjRWeBI0mHYSNBI0VniQEg==", "subType": "04" } }, "path": "encryptedDecimalPrecision", "bsonType": "decimal", "queries": { "queryType": "range", "sparsity": { "$numberInt": "1" }, "min": { "$numberDecimal": "0.0" }, "max": { "$numberDecimal": "200.0" }, "precision": { "$numberInt": "2" } } } ] } range-encryptedFields-DoubleNoPrecision.json000066400000000000000000000006131505113246500347350ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/support/crypt/encrypted_fields{ "fields": [ { "keyId": { "$binary": { "base64": "EjRWeBI0mHYSNBI0VniQEg==", "subType": "04" } }, "path": "encryptedDoubleNoPrecision", "bsonType": "double", "queries": { "queryType": "range", "sparsity": { "$numberLong": "1" } } } ] } range-encryptedFields-DoublePrecision.json000066400000000000000000000011251505113246500344370ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/support/crypt/encrypted_fields{ "fields": [ { "keyId": { "$binary": { "base64": "EjRWeBI0mHYSNBI0VniQEg==", "subType": "04" } }, "path": "encryptedDoublePrecision", "bsonType": "double", "queries": { "queryType": "range", "sparsity": { "$numberLong": "1" }, "min": { "$numberDouble": "0.0" }, "max": { "$numberDouble": "200.0" }, "precision": { "$numberInt": "2" } } } ] } mongo-ruby-driver-2.21.3/spec/support/crypt/encrypted_fields/range-encryptedFields-Int.json000066400000000000000000000007701505113246500321670ustar00rootroot00000000000000{ "fields": [ { "keyId": { "$binary": { "base64": "EjRWeBI0mHYSNBI0VniQEg==", "subType": "04" } }, "path": "encryptedInt", "bsonType": "int", "queries": { "queryType": "range", "sparsity": { "$numberLong": "1" }, "min": { "$numberInt": "0" }, "max": { "$numberInt": "200" } } } ] } mongo-ruby-driver-2.21.3/spec/support/crypt/encrypted_fields/range-encryptedFields-Long.json000066400000000000000000000007741505113246500323400ustar00rootroot00000000000000{ "fields": [ { "keyId": { "$binary": { "base64": "EjRWeBI0mHYSNBI0VniQEg==", "subType": "04" } }, "path": "encryptedLong", "bsonType": "long", "queries": { "queryType": "range", "sparsity": { "$numberLong": "1" }, "min": { "$numberLong": "0" }, "max": { "$numberLong": "200" } } } ] } 
mongo-ruby-driver-2.21.3/spec/support/crypt/external/000077500000000000000000000000001505113246500226215ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/support/crypt/external/external-key.json000066400000000000000000000013421505113246500261240ustar00rootroot00000000000000{ "status": { "$numberInt": "1" }, "_id": { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } }, "masterKey": { "provider": "local" }, "updateDate": { "$date": { "$numberLong": "1557827033449" } }, "keyMaterial": { "$binary": { "base64": "Ce9HSz/HKKGkIt4uyy+jDuKGA+rLC2cycykMo6vc8jXxqa1UVDYHWq1r+vZKbnnSRBfB981akzRKZCFpC05CTyFqDhXv6OnMjpG97OZEREGIsHEYiJkBW0jJJvfLLgeLsEpBzsro9FztGGXASxyxFRZFhXvHxyiLOKrdWfs7X1O/iK3pEoHMx6uSNSfUOgbebLfIqW7TO++iQS5g1xovXA==", "subType": "00" } }, "creationDate": { "$date": { "$numberLong": "1557827033449" } }, "keyAltNames": [ "local" ] } mongo-ruby-driver-2.21.3/spec/support/crypt/external/external-schema.json000066400000000000000000000007101505113246500265720ustar00rootroot00000000000000{ "properties": { "encrypted": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } } }, "bsonType": "object" } mongo-ruby-driver-2.21.3/spec/support/crypt/keys/000077500000000000000000000000001505113246500217525ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/support/crypt/keys/key1-document.json000066400000000000000000000013741505113246500253370ustar00rootroot00000000000000{ "_id": { "$binary": { "base64": "EjRWeBI0mHYSNBI0VniQEg==", "subType": "04" } }, "keyMaterial": { "$binary": { "base64": "sHe0kz57YW7v8g9VP9sf/+K1ex4JqKc5rf/URX3n3p8XdZ6+15uXPaSayC6adWbNxkFskuMCOifDoTT+rkqMtFkDclOy884RuGGtUysq3X7zkAWYTKi8QAfKkajvVbZl2y23UqgVasdQu3OVBQCrH/xY00nNAs/52e958nVjBuzQkSb1T8pKJAyjZsHJ60+FtnfafDZSTAIBJYn7UWBCwQ==", "subType": "00" } }, "creationDate": { "$date": { "$numberLong": "1648914851981" } }, "updateDate": { "$date": { "$numberLong": "1648914851981" } }, "status": { "$numberInt": "0" }, "masterKey": { "provider": "local" } } mongo-ruby-driver-2.21.3/spec/support/crypt/limits/000077500000000000000000000000001505113246500223005ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/support/crypt/limits/limits-doc.json000066400000000000000000000024271505113246500252440ustar00rootroot00000000000000{ "00": "a", "01": "a", "02": "a", "03": "a", "04": "a", "05": "a", "06": "a", "07": "a", "08": "a", "09": "a", "10": "a", "11": "a", "12": "a", "13": "a", "14": "a", "15": "a", "16": "a", "17": "a", "18": "a", "19": "a", "20": "a", "21": "a", "22": "a", "23": "a", "24": "a", "25": "a", "26": "a", "27": "a", "28": "a", "29": "a", "30": "a", "31": "a", "32": "a", "33": "a", "34": "a", "35": "a", "36": "a", "37": "a", "38": "a", "39": "a", "40": "a", "41": "a", "42": "a", "43": "a", "44": "a", "45": "a", "46": "a", "47": "a", "48": "a", "49": "a", "50": "a", "51": "a", "52": "a", "53": "a", "54": "a", "55": "a", "56": "a", "57": "a", "58": "a", "59": "a", "60": "a", "61": "a", "62": "a", "63": "a", "64": "a", "65": "a", "66": "a", "67": "a", "68": "a", "69": "a", "70": "a", "71": "a", "72": "a", "73": "a", "74": "a", "75": "a", "76": "a", "77": "a", "78": "a", "79": "a", "80": "a", "81": "a", "82": "a", "83": "a", "84": "a", "85": "a", "86": "a", "87": "a", "88": "a", "89": "a", "90": "a", "91": "a", "92": "a", "93": "a", "94": "a", "95": "a", "96": "a", "97": "a", "98": "a", "99": "a" } 
mongo-ruby-driver-2.21.3/spec/support/crypt/limits/limits-key.json000066400000000000000000000013421505113246500252620ustar00rootroot00000000000000{ "status": { "$numberInt": "1" }, "_id": { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } }, "masterKey": { "provider": "local" }, "updateDate": { "$date": { "$numberLong": "1557827033449" } }, "keyMaterial": { "$binary": { "base64": "Ce9HSz/HKKGkIt4uyy+jDuKGA+rLC2cycykMo6vc8jXxqa1UVDYHWq1r+vZKbnnSRBfB981akzRKZCFpC05CTyFqDhXv6OnMjpG97OZEREGIsHEYiJkBW0jJJvfLLgeLsEpBzsro9FztGGXASxyxFRZFhXvHxyiLOKrdWfs7X1O/iK3pEoHMx6uSNSfUOgbebLfIqW7TO++iQS5g1xovXA==", "subType": "00" } }, "creationDate": { "$date": { "$numberLong": "1557827033449" } }, "keyAltNames": [ "local" ] } mongo-ruby-driver-2.21.3/spec/support/crypt/limits/limits-schema.json000066400000000000000000000761511505113246500257440ustar00rootroot00000000000000{ "properties": { "10": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "11": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "12": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "13": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "14": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "15": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "16": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "17": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "18": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "19": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "20": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "21": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "22": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "23": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": 
"string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "24": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "25": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "26": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "27": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "28": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "29": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "30": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "31": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "32": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "33": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "34": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "35": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "36": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "37": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "38": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "39": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "40": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "41": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": 
"AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "42": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "43": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "44": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "45": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "46": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "47": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "48": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "49": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "50": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "51": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "52": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "53": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "54": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "55": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "56": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "57": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "58": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "59": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "60": { 
"encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "61": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "62": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "63": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "64": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "65": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "66": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "67": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "68": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "69": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "70": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "71": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "72": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "73": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "74": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "75": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "76": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "77": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "78": { "encrypt": { "keyId": [ { "$binary": { "base64": 
"LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "79": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "80": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "81": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "82": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "83": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "84": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "85": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "86": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "87": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "88": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "89": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "90": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "91": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "92": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "93": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "94": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "95": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "96": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": 
"string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "97": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "98": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "99": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "00": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "01": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "02": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "03": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "04": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "05": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "06": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "07": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "08": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }, "09": { "encrypt": { "keyId": [ { "$binary": { "base64": "LOCALAAAAAAAAAAAAAAAAA==", "subType": "04" } } ], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } } }, "bsonType": "object" } mongo-ruby-driver-2.21.3/spec/support/crypt/schema_maps/000077500000000000000000000000001505113246500232575ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/support/crypt/schema_maps/schema_map_aws.json000066400000000000000000000005221505113246500271200ustar00rootroot00000000000000{ "properties": { "ssn": { "encrypt": { "keyId": [{ "$binary": { "base64": "AWSAAAAAAAAAAAAAAAAAAA==", "subType": "04" } }], "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } } }, "bsonType": "object" } mongo-ruby-driver-2.21.3/spec/support/crypt/schema_maps/schema_map_aws_key_alt_names.json000066400000000000000000000003271505113246500320160ustar00rootroot00000000000000{ "properties": { "ssn": { "encrypt": { "keyId": "/altname", "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random" } } }, "bsonType": "object" } 
mongo-ruby-driver-2.21.3/spec/support/crypt/schema_maps/schema_map_azure.json000066400000000000000000000005221505113246500274540ustar00rootroot00000000000000{
  "properties": {
    "ssn": {
      "encrypt": {
        "keyId": [{
          "$binary": {
            "base64": "AZUREAAAAAAAAAAAAAAAAA==",
            "subType": "04"
          }
        }],
        "bsonType": "string",
        "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic"
      }
    }
  },
  "bsonType": "object"
}
mongo-ruby-driver-2.21.3/spec/support/crypt/schema_maps/schema_map_azure_key_alt_names.json000066400000000000000000000003271505113246500323520ustar00rootroot00000000000000{
  "properties": {
    "ssn": {
      "encrypt": {
        "keyId": "/altname",
        "bsonType": "string",
        "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random"
      }
    }
  },
  "bsonType": "object"
}
mongo-ruby-driver-2.21.3/spec/support/crypt/schema_maps/schema_map_gcp.json000066400000000000000000000005221505113246500270770ustar00rootroot00000000000000{
  "properties": {
    "ssn": {
      "encrypt": {
        "keyId": [{
          "$binary": {
            "base64": "GCPAAAAAAAAAAAAAAAAAA==",
            "subType": "04"
          }
        }],
        "bsonType": "string",
        "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic"
      }
    }
  },
  "bsonType": "object"
}
mongo-ruby-driver-2.21.3/spec/support/crypt/schema_maps/schema_map_gcp_key_alt_names.json000066400000000000000000000003271505113246500317750ustar00rootroot00000000000000{
  "properties": {
    "ssn": {
      "encrypt": {
        "keyId": "/altname",
        "bsonType": "string",
        "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random"
      }
    }
  },
  "bsonType": "object"
}
mongo-ruby-driver-2.21.3/spec/support/crypt/schema_maps/schema_map_kmip.json000066400000000000000000000005221505113246500272660ustar00rootroot00000000000000{
  "properties": {
    "ssn": {
      "encrypt": {
        "keyId": [{
          "$binary": {
            "base64": "KMIPAAAAAAAAAAAAAAAAAA==",
            "subType": "04"
          }
        }],
        "bsonType": "string",
        "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic"
      }
    }
  },
  "bsonType": "object"
}
mongo-ruby-driver-2.21.3/spec/support/crypt/schema_maps/schema_map_kmip_key_alt_names.json000066400000000000000000000003271505113246500321640ustar00rootroot00000000000000{
  "properties": {
    "ssn": {
      "encrypt": {
        "keyId": "/altname",
        "bsonType": "string",
        "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random"
      }
    }
  },
  "bsonType": "object"
}
mongo-ruby-driver-2.21.3/spec/support/crypt/schema_maps/schema_map_local.json000066400000000000000000000005331505113246500274220ustar00rootroot00000000000000{
  "properties": {
    "ssn": {
      "encrypt": {
        "keyId": [{
          "$binary": {
            "base64": "LOCALAAAAAAAAAAAAAAAAA==",
            "subType": "04"
          }
        }],
        "bsonType": "string",
        "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic"
      }
    }
  },
  "bsonType": "object"
}
mongo-ruby-driver-2.21.3/spec/support/crypt/schema_maps/schema_map_local_key_alt_names.json000066400000000000000000000003271505113246500323160ustar00rootroot00000000000000{
  "properties": {
    "ssn": {
      "encrypt": {
        "keyId": "/altname",
        "bsonType": "string",
        "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random"
      }
    }
  },
  "bsonType": "object"
}
mongo-ruby-driver-2.21.3/spec/support/json_ext_formatter.rb000066400000000000000000000012111505113246500240720ustar00rootroot00000000000000# frozen_string_literal: true
# rubocop:todo all

class JsonExtFormatter < RSpec::Core::Formatters::JsonFormatter
  RSpec::Core::Formatters.register self, :message, :dump_summary, :dump_profile, :stop, :seed, :close

  def format_example(example)
    super.tap do |hash|
      # Time format is chosen to be the same as driver's log entries
      hash[:started_at] = example.execution_result.started_at.strftime('%Y-%m-%d %H:%M:%S.%L %z')
      hash[:finished_at] = example.execution_result.finished_at.strftime('%Y-%m-%d %H:%M:%S.%L %z')
      hash[:sdam_log_entries] = SdamFormatterIntegration.example_log_entries(example.id)
    end
  end
end
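
# Usage sketch (illustrative; the exact require path and output file are
# assumptions, adjust to however your spec helper loads this formatter):
#
#   rspec --require ./spec/support/json_ext_formatter \
#         --format JsonExtFormatter --out tmp/rspec.json
#
# Each example in the resulting JSON then carries started_at/finished_at
# timestamps plus any SDAM log entries recorded for that example.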
mongo-ruby-driver-2.21.3/spec/support/keyword_struct.rb000066400000000000000000000013671505113246500232600ustar00rootroot00000000000000# frozen_string_literal: true
# rubocop:todo all

# Intermediate step between a Struct and an OpenStruct. Allows only designated
# field names to be read or written but allows passing fields to constructor
# as keyword arguments.
class KeywordStruct
  def self.new(*field_names, &block)
    Class.new.tap do |cls|
      cls.class_exec do
        define_method(:initialize) do |**fields|
          fields.each do |field, value|
            unless field_names.include?(field)
              raise ArgumentError, "Unknown field #{field}"
            end
            instance_variable_set("@#{field}", value)
          end
        end

        attr_accessor *field_names
      end

      if block_given?
        cls.class_exec(&block)
      end
    end
  end
end
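
# A minimal usage sketch (illustrative only; the Point class below is
# hypothetical and not used by the test suite):
#
#   Point = KeywordStruct.new(:x, :y) do
#     def to_a
#       [x, y]
#     end
#   end
#
#   point = Point.new(x: 1, y: 2)
#   point.to_a        # => [1, 2]
#   Point.new(z: 3)   # raises ArgumentError: Unknown field z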
mongo-ruby-driver-2.21.3/spec/support/local_resource_registry.rb000066400000000000000000000013151505113246500251140ustar00rootroot00000000000000# frozen_string_literal: true
# rubocop:todo all

require 'singleton'

class LocalResourceRegistry
  include Singleton

  def initialize
    @resources = []
  end

  def register(resource, finalizer)
    @resources << [resource, finalizer]
    # Return resource for chaining
    resource
  end

  def unregister(resource)
    @resources.delete_if do |_resource, finalizer|
      _resource == resource
    end
  end

  def close_all
    @resources.each do |resource, finalizer|
      if finalizer.is_a?(Symbol)
        resource.send(finalizer)
      elsif finalizer.is_a?(Proc)
        finalizer.call(resource)
      else
        raise "Unknown finalizer: #{finalizer}"
      end
    end
    @resources = []
  end
end
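
# Usage sketch (illustrative; the socket and directory are hypothetical):
#
#   # Symbol finalizer: the method is sent to the resource itself.
#   socket = LocalResourceRegistry.instance.register(
#     TCPSocket.new('localhost', 27017), :close)
#
#   # Proc finalizer: the proc is called with the resource as its argument.
#   LocalResourceRegistry.instance.register(
#     Dir.mktmpdir, ->(dir) { FileUtils.rm_rf(dir) })
#
#   LocalResourceRegistry.instance.close_all  # closes the socket, removes the dir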
mongo-ruby-driver-2.21.3/spec/support/macros.rb000066400000000000000000000010251505113246500214550ustar00rootroot00000000000000# frozen_string_literal: true
# rubocop:todo all

module Mongo
  module Macros
    def config_override(key, value)
      around do |example|
        existing = Mongo.send(key)
        Mongo.send("#{key}=", value)
        example.run
        Mongo.send("#{key}=", existing)
      end
    end

    def with_config_values(key, *values, &block)
      values.each do |value|
        context "when #{key} is #{value}" do
          config_override key, value
          class_exec(value, &block)
        end
      end
    end
  end
end
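
# Usage sketch (illustrative; assumes these macros are exposed to example
# groups, e.g. via `config.extend Mongo::Macros`, and that a global driver
# option is readable/writable as Mongo.<key> -- the option name below is
# hypothetical):
#
#   describe 'view options behavior' do
#     extend Mongo::Macros
#
#     with_config_values :broken_view_options, true, false do |value|
#       it "reflects the configured value" do
#         expect(Mongo.broken_view_options).to eq(value)
#       end
#     end
#   end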
mongo-ruby-driver-2.21.3/spec/support/matchers.rb000066400000000000000000000036451505113246500220010ustar00rootroot00000000000000# frozen_string_literal: true
# rubocop:todo all

RSpec::Matchers.define :be_int32 do |num|
  match do |actual|
    actual == [num].pack('l<')
  end
end

RSpec::Matchers.define :be_int64 do |num|
  match do |actual|
    actual == [num].pack('q<')
  end
end

RSpec::Matchers.define :be_int64_sequence do |array|
  match do |actual|
    actual == array.reduce(String.new) do |buffer, num|
      buffer << [num].pack('q<')
    end
  end
end

RSpec::Matchers.define :be_cstring do |string|
  match do |actual|
    actual == "#{string.dup.force_encoding(BSON::BINARY)}\0"
  end
end

RSpec::Matchers.define :be_bson do |hash|
  match do |actual|
    actual == hash.to_bson.to_s
  end
end

RSpec::Matchers.define :be_bson_sequence do |array|
  match do |actual|
    actual == array.map(&:to_bson).join
  end
end

RSpec::Matchers.define :be_ciphertext do
  match do |object|
    object.is_a?(BSON::Binary) && object.type == :ciphertext
  end
end

RSpec::Matchers.define :match_with_type do |event|
  match do |actual|
    Utils.match_with_type?(event, actual)
  end
end

RSpec::Matchers.define :be_uuid do
  match do |object|
    object.is_a?(BSON::Binary) && object.type == :uuid
  end
end

RSpec::Matchers.define :take_longer_than do |min_expected_time|
  match do |proc|
    start_time = Mongo::Utils.monotonic_time
    proc.call
    (Mongo::Utils.monotonic_time - start_time).should > min_expected_time
  end
end

RSpec::Matchers.define :take_shorter_than do |max_expected_time|
  match do |proc|
    start_time = Mongo::Utils.monotonic_time
    proc.call
    (Mongo::Utils.monotonic_time - start_time).should < max_expected_time
  end
end

RSpec::Matchers.define :be_explain_output do
  match do |actual|
    Hash === actual && (
      actual.key?('queryPlanner') ||
      actual.key?('allPlans')
    )
  end

  failure_message do |actual|
    "expected that #{actual} is explain output: is a hash with either allPlans or queryPlanner keys present"
  end
end
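
# Usage sketch (illustrative):
#
#   expect([42].pack('l<')).to be_int32(42)
#   expect(BSON::Binary.new("\x00" * 16, :uuid)).to be_uuid
#   expect { sleep 0.2 }.to take_longer_than(0.1)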
mongo-ruby-driver-2.21.3/spec/support/mongos_macros.rb000066400000000000000000000013111505113246500230270ustar00rootroot00000000000000# frozen_string_literal: true
# rubocop:todo all

module MongosMacros
  class << self
    attr_accessor :distinct_ran
  end
  self.distinct_ran = {}

  # Work around for SERVER-39704 when seeing a Mongo::Error::OperationFailure
  # SnapshotUnavailable error -- run the distinct command on each mongos.
  def run_mongos_distincts(db_name, collection='test')
    MongosMacros.distinct_ran[db_name] ||= ::Utils.mongos_each_direct_client do |direct_client|
      direct_client.use(db_name)[collection].distinct('foo').to_a
    end
  end

  def maybe_run_mongos_distincts(db_name, collection='test')
    if ClusterConfig.instance.topology == :sharded
      run_mongos_distincts(db_name, collection)
    end
  end
end
mongo-ruby-driver-2.21.3/spec/support/monitoring_ext.rb000066400000000000000000000006421505113246500232320ustar00rootroot00000000000000# frozen_string_literal: true
# rubocop:todo all

module Mongo
  class Monitoring
    # #subscribers writes to the subscribers even when reading them,
    # confusing the tests.
    # This method returns only events with populated subscribers.
    def present_subscribers
      subs = {}
      subscribers.each do |k, v|
        unless v.empty?
          subs[k] = v
        end
      end
      subs
    end
  end
end
mongo-ruby-driver-2.21.3/spec/support/ocsp000077700000000000000000000000001505113246500315262../../.mod/drivers-evergreen-tools/.evergreen/ocspustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/support/primary_socket.rb000066400000000000000000000007511505113246500232210ustar00rootroot00000000000000# frozen_string_literal: true
# rubocop:todo all

module PrimarySocket
  def self.included(base)
    base.class_eval do
      let(:primary_server) do
        client.cluster.next_primary
      end

      let(:primary_connection) do
        connection = primary_server.pool.check_out
        connection.connect!
        primary_server.pool.check_in(connection)
        connection
      end

      let(:primary_socket) do
        primary_connection.send(:socket)
      end
    end
  end
end
mongo-ruby-driver-2.21.3/spec/support/recording_logger.rb000066400000000000000000000011501505113246500234750ustar00rootroot00000000000000# frozen_string_literal: true
# rubocop:todo all

require 'stringio'

# A "Logger-alike" class, quacking like ::Logger, used for recording messages
# as they are written to the log
class RecordingLogger < Logger
  def initialize(*args, **kwargs)
    @buffer = StringIO.new
    super(@buffer, *args, **kwargs)
  end

  # Accesses the raw contents of the log
  #
  # @return [ String ] the raw contents of the log
  def contents
    @buffer.string
  end

  # Returns the contents of the log as individual lines.
  #
  # @return [ Array ] the individual log lines
  def lines
    contents.split(/\n/)
  end
end
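
# Usage sketch (illustrative; the exact formatted output depends on the
# default ::Logger format and is abbreviated here):
#
#   logger = RecordingLogger.new
#   client = Mongo::Client.new(['localhost:27017'], logger: logger)
#   client.close
#   logger.lines  # => ["D, [...] DEBUG -- : MONGODB | ...", ...]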
mongo-ruby-driver-2.21.3/spec/support/sdam_formatter_integration.rb000066400000000000000000000067221505113246500256030ustar00rootroot00000000000000# frozen_string_literal: true
# rubocop:todo all

$sdam_formatter_lock = Mutex.new

module SdamFormatterIntegration
  def log_entries
    @log_entries ||= []
  end
  module_function :log_entries

  def clear_log_entries
    @log_entries = []
  end
  module_function :clear_log_entries

  def assign_log_entries(example_id)
    $sdam_formatter_lock.synchronize do
      @log_entries_by_example_id ||= {}
      @log_entries_by_example_id[example_id] ||= []
      @log_entries_by_example_id[example_id] += log_entries
      clear_log_entries
    end
  end
  module_function :assign_log_entries

  def example_log_entries(example_id)
    $sdam_formatter_lock.synchronize do
      @log_entries_by_example_id ||= {}
      @log_entries_by_example_id[example_id]
    end
  end
  module_function :example_log_entries

  def subscribe
    topology_opening_subscriber = TopologyOpeningLogSubscriber.new
    server_opening_subscriber = ServerOpeningLogSubscriber.new
    server_description_changed_subscriber = ServerDescriptionChangedLogSubscriber.new
    topology_changed_subscriber = TopologyChangedLogSubscriber.new
    server_closed_subscriber = ServerClosedLogSubscriber.new
    topology_closed_subscriber = TopologyClosedLogSubscriber.new

    Mongo::Monitoring::Global.subscribe(Mongo::Monitoring::TOPOLOGY_OPENING,
      topology_opening_subscriber)
    Mongo::Monitoring::Global.subscribe(Mongo::Monitoring::SERVER_OPENING,
      server_opening_subscriber)
    Mongo::Monitoring::Global.subscribe(Mongo::Monitoring::SERVER_DESCRIPTION_CHANGED,
      server_description_changed_subscriber)
    Mongo::Monitoring::Global.subscribe(Mongo::Monitoring::TOPOLOGY_CHANGED,
      topology_changed_subscriber)
    Mongo::Monitoring::Global.subscribe(Mongo::Monitoring::SERVER_CLOSED,
      server_closed_subscriber)
    Mongo::Monitoring::Global.subscribe(Mongo::Monitoring::TOPOLOGY_CLOSED,
      topology_closed_subscriber)
  end
  module_function :subscribe

  class SDAMLogSubscriber
    def succeeded(event)
      SdamFormatterIntegration.log_entries <<
        Time.now.strftime('%Y-%m-%d %H:%M:%S.%L %z') + ' | ' + format_event(event)
    end
  end

  class TopologyOpeningLogSubscriber < SDAMLogSubscriber
    private

    def format_event(event)
      "Topology type '#{event.topology.display_name}' initializing."
    end
  end

  class ServerOpeningLogSubscriber < SDAMLogSubscriber
    private

    def format_event(event)
      "Server #{event.address} initializing."
    end
  end

  class ServerDescriptionChangedLogSubscriber < SDAMLogSubscriber
    private

    def format_event(event)
      "Server description for #{event.address} changed from " +
        "'#{event.previous_description.server_type}' to '#{event.new_description.server_type}'."
    end
  end

  class TopologyChangedLogSubscriber < SDAMLogSubscriber
    private

    def format_event(event)
      if event.previous_topology != event.new_topology
        "Topology type '#{event.previous_topology.display_name}' changed to " +
          "type '#{event.new_topology.display_name}'."
      else
        "There was a change in the members of the '#{event.new_topology.display_name}' " +
          "topology."
      end
    end
  end

  class ServerClosedLogSubscriber < SDAMLogSubscriber
    private

    def format_event(event)
      "Server #{event.address} connection closed."
    end
  end

  class TopologyClosedLogSubscriber < SDAMLogSubscriber
    private

    def format_event(event)
      "Topology type '#{event.topology.display_name}' closed."
    end
  end
end
mongo-ruby-driver-2.21.3/spec/support/shared/000077500000000000000000000000001505113246500211045ustar00rootroot00000000000000mongo-ruby-driver-2.21.3/spec/support/shared/app_metadata.rb000066400000000000000000000123701505113246500240540ustar00rootroot00000000000000# frozen_string_literal: true
# rubocop:todo all

def target_arch
  @target_arch ||= begin
    uname = `uname -a`.strip
    case uname
    when /aarch/ then "aarch64"
    when /x86/ then "x86_64"
    when /arm/ then "arm64"
    else raise "unrecognized architecture: #{uname.inspect}"
    end
  end
end

shared_examples 'app metadata document' do
  let(:app_metadata) do
    described_class.new({})
  end

  it 'includes Ruby driver identification' do
    document[:client][:driver][:name].should == 'mongo-ruby-driver'
    document[:client][:driver][:version].should == Mongo::VERSION
  end

  context 'linux' do
    before(:all) do
      unless SpecConfig.instance.linux?
        skip "Linux required, we have #{RbConfig::CONFIG['host_os']}"
      end
    end

    it 'includes operating system information' do
      document[:client][:os][:type].should == 'linux'
      if BSON::Environment.jruby? || RUBY_VERSION >= '3.0'
        document[:client][:os][:name].should == 'linux'
      else
        # Ruby 2.7.2 and earlier use linux-gnu.
        # Ruby 2.7.3 uses linux.
        %w(linux linux-gnu).should include(document[:client][:os][:name])
      end
      document[:client][:os][:architecture].should == target_arch
    end
  end

  context 'macos' do
    before(:all) do
      unless SpecConfig.instance.macos?
        skip "MacOS required, we have #{RbConfig::CONFIG['host_os']}"
      end
    end

    it 'includes operating system information' do
      document[:client][:os][:type].should == 'darwin'
      if BSON::Environment.jruby?
        document[:client][:os][:name].should == 'darwin'
      else
        document[:client][:os][:name].should =~ /darwin\d+/
      end
      document[:client][:os][:architecture].should == target_arch
    end
  end

  context 'mri' do
    require_mri

    it 'includes Ruby version' do
      document[:client][:platform].should start_with("Ruby #{RUBY_VERSION}")
    end

    context 'when custom platform is specified' do
      let(:app_metadata) do
        described_class.new(platform: 'foowidgets')
      end

      it 'starts with custom platform' do
        document[:client][:platform].should start_with("foowidgets, Ruby #{RUBY_VERSION}")
      end
    end
  end

  context 'jruby' do
    require_jruby

    it 'includes JRuby and Ruby compatibility versions' do
      document[:client][:platform].should start_with("JRuby #{JRUBY_VERSION}, like Ruby #{RUBY_VERSION}")
    end

    context 'when custom platform is specified' do
      let(:app_metadata) do
        described_class.new(platform: 'foowidgets')
      end

      it 'starts with custom platform' do
        document[:client][:platform].should start_with("foowidgets, JRuby #{JRUBY_VERSION}")
      end
    end
  end

  context 'when wrapping libraries are specified' do
    let(:app_metadata) do
      described_class.new(wrapping_libraries: wrapping_libraries)
    end

    context 'one' do
      let(:wrapping_libraries) { [wrapping_library] }

      context 'no fields' do
        let(:wrapping_library) do
          {}
        end

        it 'adds empty strings' do
          document[:client][:driver][:name].should == 'mongo-ruby-driver|'
          document[:client][:driver][:version].should == "#{Mongo::VERSION}|"
          document[:client][:platform].should =~ /\AJ?Ruby[^|]+\|\z/
        end
      end

      context 'some fields' do
        let(:wrapping_library) do
          {name: 'Mongoid'}
        end

        it 'adds the fields' do
          document[:client][:driver][:name].should == 'mongo-ruby-driver|Mongoid'
          document[:client][:driver][:version].should == "#{Mongo::VERSION}|"
          document[:client][:platform].should =~ /\AJ?Ruby[^|]+\|\z/
        end
      end

      context 'all fields' do
        let(:wrapping_library) do
          {name: 'Mongoid', version: '7.1.2', platform: 'OS9000'}
        end

        it 'adds the fields' do
          document[:client][:driver][:name].should == 'mongo-ruby-driver|Mongoid'
          document[:client][:driver][:version].should == "#{Mongo::VERSION}|7.1.2"
          document[:client][:platform].should =~ /\AJ?Ruby[^|]+\|OS9000\z/
        end
      end
    end

    context 'two' do
      context 'some fields' do
        let(:wrapping_libraries) do
          [
            {name: 'Mongoid', version: '42'},
            # All libraries should be specifying their versions, in theory,
            # but test not specifying a version.
            {version: '4.0', platform: 'OS9000'},
          ]
        end

        it 'adds the fields' do
          document[:client][:driver][:name].should == 'mongo-ruby-driver|Mongoid|'
          document[:client][:driver][:version].should == "#{Mongo::VERSION}|42|4.0"
          document[:client][:platform].should =~ /\AJ?Ruby[^|]+\|\|OS9000\z/
        end
      end

      context 'a realistic Mongoid & Rails wrapping' do
        let(:wrapping_libraries) do
          [
            {name: 'Mongoid', version: '7.1.2'},
            {name: 'Rails', version: '6.0.3'},
          ]
        end

        it 'adds the fields' do
          document[:client][:driver][:name].should == 'mongo-ruby-driver|Mongoid|Rails'
          document[:client][:driver][:version].should == "#{Mongo::VERSION}|7.1.2|6.0.3"
          document[:client][:platform].should =~ /\AJ?Ruby[^|]+\|\|\z/
        end
      end
    end
  end
end
mongo-ruby-driver-2.21.3/spec/support/shared/auth_context.rb000066400000000000000000000007141505113246500241400ustar00rootroot00000000000000# rubocop:todo all

shared_context 'auth unit tests' do
  let(:generation_manager) do
    Mongo::Server::ConnectionPool::GenerationManager.new(server: server)
  end

  let(:pool) do
    double('pool').tap do |pool|
      allow(pool).to receive(:generation_manager).and_return(generation_manager)
    end
  end

  let(:connection) do
    Mongo::Server::Connection.new(server, SpecConfig.instance.monitoring_options.merge(
      connection_pool: pool))
  end
end
mongo-ruby-driver-2.21.3/spec/support/shared/protocol.rb000066400000000000000000000015111505113246500232700ustar00rootroot00000000000000# frozen_string_literal: true
# rubocop:todo all

shared_examples 'message with a header' do
  let(:collection_name) { 'test' }

  describe 'header' do
    describe 'length' do
      let(:field) { bytes.to_s[0..3] }

      it 'serializes the length' do
        expect(field).to be_int32(bytes.length)
      end
    end

    describe 'request id' do
      let(:field) { bytes.to_s[4..7] }

      it 'serializes the request id' do
        expect(field).to be_int32(message.request_id)
      end
    end

    describe 'response to' do
      let(:field) { bytes.to_s[8..11] }

      it 'serializes the response to' do
        expect(field).to be_int32(0)
      end
    end

    describe 'op code' do
      let(:field) { bytes.to_s[12..15] }

      it 'serializes the op code' do
        expect(field).to be_int32(opcode)
      end
    end
  end
end
mongo-ruby-driver-2.21.3/spec/support/shared/scram_conversation.rb000066400000000000000000000046411505113246500253350ustar00rootroot00000000000000# frozen_string_literal: true
# rubocop:todo all

shared_context 'scram conversation context' do
  let(:connection) do
    double('connection').tap do |connection|
      features = double('features')
      allow(features).to receive(:op_msg_enabled?).and_return(true)
      allow(connection).to receive(:features).and_return(features)
      allow(connection).to receive(:server)
      allow(connection).to receive(:mongos?)
    end
  end
end

shared_examples 'scram conversation' do
  describe '#parse_payload' do
    let(:user) { double('user') }
    let(:mechanism) { :scram }

    shared_examples_for 'parses as expected' do
      it 'parses as expected' do
        conversation.send(:parse_payload, payload).should == expected
      end
    end

    context 'regular payload' do
      let(:payload) { 'foo=bar,hello=world' }
      let(:expected) do
        {'foo' => 'bar', 'hello' => 'world'}
      end

      it_behaves_like 'parses as expected'
    end

    context 'equal signs in value' do
      let(:payload) { 'foo=bar==,hello=world=is=great' }
      let(:expected) do
        {'foo' => 'bar==', 'hello' => 'world=is=great'}
      end

      it_behaves_like 'parses as expected'
    end

    context 'missing value' do
      let(:payload) { 'foo=,hello=' }
      let(:expected) do
        {'foo' => '', 'hello' => ''}
      end

      it_behaves_like 'parses as expected'
    end

    context 'missing key/value pair' do
      let(:payload) { 'foo=,,hello=' }
      let(:expected) do
        {'foo' => '', 'hello' => ''}
      end

      it_behaves_like 'parses as expected'
    end

    context 'missing key' do
      let(:payload) { '=bar' }

      it 'raises an exception' do
        lambda do
          conversation.send(:parse_payload, payload)
        end.should raise_error(Mongo::Error::InvalidServerAuthResponse,
          /Payload malformed: missing key/)
      end
    end

    context 'all keys missing' do
      let(:payload) { ',,,' }
      let(:expected) do
        {}
      end

      it_behaves_like 'parses as expected'
    end
  end
end

shared_context 'scram continue and finalize replies' do
  let(:continue_document) do
    BSON::Document.new(
      'conversationId' => 1,
      'done' => false,
      'payload' => continue_payload,
      'ok' => 1.0
    )
  end

  let(:finalize_document) do
    BSON::Document.new(
      'conversationId' => 1,
      'done' => false,
      'payload' => finalize_payload,
      'ok' => 1.0
    )
  end
end
mongo-ruby-driver-2.21.3/spec/support/shared/server_selector.rb000066400000000000000000000117441505113246500246450ustar00rootroot00000000000000# frozen_string_literal: true
# rubocop:todo all

shared_context 'server selector' do
  let(:max_staleness) { nil }
  let(:tag_sets) { [] }
  let(:hedge) { nil }

  let(:tag_set) do
    { 'test' => 'tag' }
  end

  let(:server_tags) do
    { 'test' => 'tag', 'other' => 'tag' }
  end

  let(:primary) { make_server(:primary) }
  let(:secondary) { make_server(:secondary) }

  let(:mongos) do
    make_server(:mongos).tap do |server|
      expect(server.mongos?).to be true
    end
  end

  let(:unknown) do
    make_server(:unknown).tap do |server|
      expect(server.unknown?).to be true
    end
  end

  let(:server_selection_timeout_options) do
    {
      server_selection_timeout: 0.1,
    }
  end

  let(:options) do
    {
      mode: name,
      tag_sets: tag_sets,
      max_staleness: max_staleness,
      hedge: hedge,
    }
  end

  let(:selector) { described_class.new(options) }

  let(:monitoring) do
    Mongo::Monitoring.new(monitoring: false)
  end

  declare_topology_double

  before do
    # Do not run monitors and do not attempt real TCP connections
    # in server selector tests
    allow_any_instance_of(Mongo::Server).to receive(:start_monitoring)
    allow_any_instance_of(Mongo::Server).to receive(:disconnect!)
  end
end

shared_examples 'a server selector mode' do
  describe '#name' do
    it 'returns the name' do
      expect(selector.name).to eq(name)
    end
  end

  describe '#secondary_ok?' do
    it 'returns whether the secondary_ok bit should be set' do
      expect(selector.secondary_ok?).to eq(secondary_ok)
    end
  end

  describe '#==' do
    context 'when mode is the same' do
      let(:other) do
        described_class.new
      end

      context 'tag sets are the same' do
        it 'returns true' do
          expect(selector).to eq(other)
        end
      end
    end

    context 'mode is different' do
      let(:other) do
        described_class.new.tap do |sel|
          allow(sel).to receive(:name).and_return(:other_mode)
        end
      end

      it 'returns false' do
        expect(selector).not_to eq(other)
      end
    end
  end
end

shared_examples 'a server selector accepting tag sets' do
  describe '#tag_sets' do
    context 'tags not provided' do
      it 'returns an empty array' do
        expect(selector.tag_sets).to be_empty
      end
    end

    context 'tag sets provided' do
      let(:tag_sets) do
        [ tag_set ]
      end

      it 'returns the tag sets' do
        expect(selector.tag_sets).to eq(tag_sets)
      end
    end
  end

  describe '#==' do
    context 'when mode is the same' do
      let(:other) { described_class.new }

      context 'tag sets are different' do
        let(:tag_sets) { { 'other' => 'tag' } }

        it 'returns false' do
          expect(selector).not_to eq(other)
        end
      end
    end
  end
end

shared_examples 'a server selector accepting hedge' do
  describe '#initialize' do
    context 'when hedge is not provided' do
      it 'initializes successfully' do
        expect do
          selector
        end.not_to raise_error
      end
    end

    context 'when hedge is not a Hash' do
      let(:hedge) { true }

      it 'raises an exception' do
        expect do
          selector
        end.to raise_error(Mongo::Error::InvalidServerPreference, /`hedge` value \(true\) is invalid/)
      end
    end

    context 'when hedge is an empty Hash' do
      let(:hedge) { {} }

      it 'raises an exception' do
        expect do
          selector
        end.to raise_error(Mongo::Error::InvalidServerPreference, /`hedge` value \({}\) is invalid/)
      end
    end

    context 'when hedge is a Hash with data' do
      let(:hedge) { { enabled: false } }

      it 'initializes successfully' do
        expect do
          selector
        end.not_to raise_error
      end
    end
  end

  describe '#hedge' do
    context 'when hedge is not provided' do
      it 'returns nil' do
        expect(selector.hedge).to be_nil
      end
    end

    context 'when hedge is a Hash with data' do
      let(:hedge) { { enabled: false } }

      it 'returns the same Hash' do
        expect(selector.hedge).to eq({ enabled: false })
      end
    end
  end

  describe '#==' do
    let(:other_selector) { described_class.new(hedge: { enabled: false }) }

    context 'when hedges are the same' do
      let(:hedge) { { enabled: false } }

      it 'returns true' do
        expect(selector).to eq(other_selector)
      end
    end

    context 'when hedges are different' do
      let(:hedge) { { enabled: true } }

      it 'returns false' do
        expect(selector).not_to eq(other_selector)
      end
    end
  end
end

shared_examples 'a server selector with sensitive data in its options' do
  describe '#inspect' do
    context 'when there is sensitive data in the options' do
      let(:options) do
        Mongo::Options::Redacted.new(:mode => name, :password => 'sensitive_data')
      end

      it 'does not print out sensitive data' do
        expect(selector.inspect).not_to match(options[:password])
      end
    end
  end
end
mongo-ruby-driver-2.21.3/spec/support/shared/session.rb000066400000000000000000000606771505113246500231300ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all shared_examples 'an operation using a session' do describe 'operation execution' do min_server_fcv '3.6' require_topology :replica_set, :sharded context 'when the session is created from the same client used for the operation' do let(:session) do client.start_session end let(:server_session) do session.instance_variable_get(:@server_session) end let!(:before_last_use) do server_session.last_use end let!(:before_operation_time) do
(session.operation_time || 0) end let!(:operation_result) do operation end after do session.end_session end it 'updates the last use value' do expect(server_session.last_use).not_to eq(before_last_use) end it 'updates the operation time value' do expect(session.operation_time).not_to eq(before_operation_time) end it 'does not close the session when the operation completes' do expect(session.ended?).to be(false) end end context 'when a session from another client is provided' do let(:session) do another_authorized_client.start_session end let(:operation_result) do operation end it 'raises an exception' do expect do operation_result end.to raise_exception(Mongo::Error::InvalidSession) end end context 'when the session is ended before it is used' do let(:session) do client.start_session end before do session.end_session end let(:operation_result) do operation end it 'raises an exception' do expect { operation_result }.to raise_exception(Mongo::Error::InvalidSession) end end end end shared_examples 'a failed operation using a session' do context 'when the operation fails' do min_server_fcv '3.6' require_topology :replica_set, :sharded let!(:before_last_use) do session.instance_variable_get(:@server_session).last_use end let!(:before_operation_time) do (session.operation_time || 0) end let!(:operation_result) do sleep 0.2 begin; failed_operation; rescue => e; e; end end let(:session) do client.start_session end it 'raises an error' do expect([Mongo::Error::OperationFailure::Family, Mongo::Error::BulkWriteError].any? { |e| e === operation_result }).to be true end it 'updates the last use value' do expect(session.instance_variable_get(:@server_session).last_use).not_to eq(before_last_use) end it 'updates the operation time value' do expect(session.operation_time).not_to eq(before_operation_time) end end end shared_examples 'an explicit session with an unacknowledged write' do context 'when sessions are supported' do min_server_fcv '3.6' let(:session) do client.start_session end it 'does not add a session id to the operation' do subscriber.clear_events! operation subscriber.non_auth_command_started_events.length.should == 1 expect(subscriber.non_auth_command_started_events.collect(&:command).collect { |cmd| cmd['lsid'] }.compact).to be_empty end end context 'when sessions are not supported' do max_server_version '3.4' let(:session) do nil end it 'does not add a session id to the operation' do expect(Mongo::Session).not_to receive(:new) subscriber.clear_events! operation subscriber.non_auth_command_started_events.length.should == 1 expect(subscriber.non_auth_command_started_events.collect(&:command).collect { |cmd| cmd['lsid'] }.compact).to be_empty end end end shared_examples 'an implicit session with an unacknowledged write' do context 'when sessions are supported' do min_server_fcv '3.6' it 'does not add a session id to the operation' do subscriber.clear_events! operation subscriber.non_auth_command_started_events.length.should == 1 expect(subscriber.non_auth_command_started_events.collect(&:command).collect { |cmd| cmd['lsid'] }.compact).to be_empty end end context 'when sessions are not supported' do max_server_version '3.4' it 'does not add a session id to the operation' do subscriber.clear_events! 
operation subscriber.non_auth_command_started_events.length.should == 1 expect(subscriber.non_auth_command_started_events.collect(&:command).collect { |cmd| cmd['lsid'] }.compact).to be_empty end end end shared_examples 'an operation supporting causally consistent reads' do let(:subscriber) { Mrss::EventSubscriber.new } let(:client) do authorized_client.tap do |client| client.subscribe(Mongo::Monitoring::COMMAND, subscriber) end end context 'when connected to a standalone' do min_server_fcv '3.6' require_topology :single context 'when the collection specifies a read concern' do let(:collection) do client[TEST_COLL, read_concern: { level: 'majority' }] end context 'when the session has causal_consistency set to true' do let(:session) do client.start_session(causal_consistency: true) end it 'does not add the afterClusterTime to the read concern in the command' do expect(command['readConcern']['afterClusterTime']).to be_nil end end context 'when the session has causal_consistency set to false' do let(:session) do client.start_session(causal_consistency: false) end it 'does not add the afterClusterTime to the read concern in the command' do expect(command['readConcern']['afterClusterTime']).to be_nil end end context 'when the session has causal_consistency not set' do let(:session) do client.start_session end it 'does not add the afterClusterTime to the read concern in the command' do expect(command['readConcern']['afterClusterTime']).to be_nil end end end context 'when the collection does not specify a read concern' do let(:collection) do client[TEST_COLL] end context 'when the session has causal_consistency set to true' do let(:session) do client.start_session(causal_consistency: true) end it 'does not include the read concern in the command' do expect(command['readConcern']).to be_nil end end context 'when the session has causal_consistency set to false' do let(:session) do client.start_session(causal_consistency: false) end it 'does not include the read concern in the command' do expect(command['readConcern']).to be_nil end end context 'when the session has causal_consistency not set' do let(:session) do client.start_session end it 'does not include the read concern in the command' do expect(command['readConcern']).to be_nil end end end end context 'when connected to replica set or sharded cluster' do min_server_fcv '3.6' require_topology :replica_set, :sharded context 'when the collection specifies a read concern' do let(:collection) do client[TEST_COLL, read_concern: { level: 'majority' }] end context 'when the session has causal_consistency set to true' do let(:session) do client.start_session(causal_consistency: true) end context 'when the session has an operation time' do before do client.database.command({ ping: 1 }, session: session) end let!(:operation_time) do session.operation_time end let(:expected_read_concern) do BSON::Document.new(level: 'majority', afterClusterTime: operation_time) end it 'merges the afterClusterTime with the read concern in the command' do expect(command['readConcern']).to eq(expected_read_concern) end end context 'when the session does not have an operation time' do let(:expected_read_concern) do BSON::Document.new(level: 'majority') end it 'leaves the read concern document unchanged' do expect(command['readConcern']).to eq(expected_read_concern) end end context 'when the operation time is advanced' do before do session.advance_operation_time(operation_time) end let(:operation_time) do BSON::Timestamp.new(0, 1) end let(:expected_read_concern) do 
BSON::Document.new(level: 'majority', afterClusterTime: operation_time) end it 'merges the afterClusterTime with the new operation time and read concern in the command' do expect(command['readConcern']).to eq(expected_read_concern) end end end context 'when the session has causal_consistency set to false' do let(:session) do client.start_session(causal_consistency: false) end context 'when the session does not have an operation time' do let(:expected_read_concern) do BSON::Document.new(level: 'majority') end it 'leaves the read concern document unchanged' do expect(command['readConcern']).to eq(expected_read_concern) end end context 'when the session has an operation time' do before do client.database.command({ ping: 1 }, session: session) end let(:expected_read_concern) do BSON::Document.new(level: 'majority') end it 'leaves the read concern document unchanged' do expect(command['readConcern']).to eq(expected_read_concern) end end context 'when the operation time is advanced' do before do session.advance_operation_time(operation_time) end let(:operation_time) do BSON::Timestamp.new(0, 1) end let(:expected_read_concern) do BSON::Document.new(level: 'majority') end it 'leaves the read concern document unchanged' do expect(command['readConcern']).to eq(expected_read_concern) end end end context 'when the session has causal_consistency not set' do let(:session) do client.start_session end context 'when the session does not have an operation time' do let(:expected_read_concern) do BSON::Document.new(level: 'majority') end it 'leaves the read concern document unchanged' do expect(command['readConcern']).to eq(expected_read_concern) end end context 'when the session has an operation time' do before do client.database.command({ ping: 1 }, session: session) end let!(:operation_time) do session.operation_time end let(:expected_read_concern) do BSON::Document.new(level: 'majority', afterClusterTime: operation_time) end it 'merges the afterClusterTime with the new operation time and read concern in the command' do expect(command['readConcern']).to eq(expected_read_concern) end end context 'when the operation time is advanced' do before do session.advance_operation_time(operation_time) end let(:operation_time) do BSON::Timestamp.new(0, 1) end let(:expected_read_concern) do BSON::Document.new(level: 'majority', afterClusterTime: operation_time) end it 'merges the afterClusterTime with the new operation time and read concern in the command' do expect(command['readConcern']).to eq(expected_read_concern) end end end end context 'when the collection does not specify a read concern' do let(:collection) do client[TEST_COLL] end context 'when the session has causal_consistency set to true' do let(:session) do client.start_session(causal_consistency: true) end context 'when the session does not have an operation time' do it 'does not include the read concern in the command' do expect(command['readConcern']).to be_nil end end context 'when the session has an operation time' do before do client.database.command({ ping: 1 }, session: session) end let!(:operation_time) do session.operation_time end let(:expected_read_concern) do BSON::Document.new(afterClusterTime: operation_time) end it 'merges the afterClusterTime with the read concern in the command' do expect(command['readConcern']).to eq(expected_read_concern) end end context 'when the operation time is advanced' do before do session.advance_operation_time(operation_time) end let(:operation_time) do BSON::Timestamp.new(0, 1) end let(:expected_read_concern) do 
BSON::Document.new(afterClusterTime: operation_time) end it 'merges the afterClusterTime with the new operation time in the command' do expect(command['readConcern']).to eq(expected_read_concern) end end end context 'when the session has causal_consistency set to false' do let(:session) do client.start_session(causal_consistency: false) end context 'when the session does not have an operation time' do it 'does not include the read concern in the command' do expect(command['readConcern']).to be_nil end end context 'when the session has an operation time' do before do client.database.command({ ping: 1 }, session: session) end it 'does not include the read concern in the command' do expect(command['readConcern']).to be_nil end end context 'when the operation time is advanced' do before do session.advance_operation_time(operation_time) end let(:operation_time) do BSON::Timestamp.new(0, 1) end let(:expected_read_concern) do BSON::Document.new(afterClusterTime: operation_time) end it 'does not include the read concern in the command' do expect(command['readConcern']).to be_nil end end end context 'when the session has causal_consistency not set' do let(:session) do client.start_session end context 'when the session does not have an operation time' do it 'does not include the read concern in the command' do expect(command['readConcern']).to be_nil end end context 'when the session has an operation time' do before do client.database.command({ ping: 1 }, session: session) end let!(:operation_time) do session.operation_time end let(:expected_read_concern) do BSON::Document.new(afterClusterTime: operation_time) end it 'merges the afterClusterTime with the read concern in the command' do expect(command['readConcern']).to eq(expected_read_concern) end end context 'when the operation time is advanced' do before do session.advance_operation_time(operation_time) end let(:operation_time) do BSON::Timestamp.new(0, 1) end let(:expected_read_concern) do BSON::Document.new(afterClusterTime: operation_time) end it 'merges the afterClusterTime with the new operation time in the command' do expect(command['readConcern']).to eq(expected_read_concern) end end end end end end # Since background operations can advance cluster time, exact cluster time # comparisons sometimes fail. Work around this by retrying the tests.
shared_examples 'an operation updating cluster time' do let(:cluster) do client.cluster end let(:session) do client.start_session end let(:subscriber) { Mrss::EventSubscriber.new } let(:client) do authorized_client.tap do |client| client.subscribe(Mongo::Monitoring::COMMAND, subscriber) end end shared_examples_for 'does not update the cluster time of the cluster' do retry_test it 'does not update the cluster time of the cluster' do bct = before_cluster_time reply_cluster_time expect(client.cluster.cluster_time).to eq(before_cluster_time) end end context 'when the command is run once' do context 'when the server is version 3.6' do min_server_fcv '3.6' context 'when the cluster is sharded or a replica set' do retry_test require_topology :replica_set, :sharded let(:reply_cluster_time) do operation_with_session subscriber.succeeded_events[-1].reply['$clusterTime'] end it 'updates the cluster time of the cluster' do rct = reply_cluster_time expect(cluster.cluster_time).to eq(rct) end it 'updates the cluster time of the session' do rct = reply_cluster_time expect(session.cluster_time).to eq(rct) end end context 'when the server is a standalone' do require_topology :single let(:before_cluster_time) do client.cluster.cluster_time end let!(:reply_cluster_time) do operation_with_session subscriber.succeeded_events[-1].reply['$clusterTime'] end it_behaves_like 'does not update the cluster time of the cluster' retry_test it 'does not update the cluster time of the session' do reply_cluster_time expect(session.cluster_time).to be_nil end end end context 'when the server is less than version 3.6' do max_server_version '3.4' let(:before_cluster_time) do client.cluster.cluster_time end let(:reply_cluster_time) do operation subscriber.succeeded_events[-1].reply['$clusterTime'] end it_behaves_like 'does not update the cluster time of the cluster' end end context 'when the command is run twice' do let(:reply_cluster_time) do operation_with_session subscriber.succeeded_events[-1].reply['$clusterTime'] end context 'when the cluster is sharded or a replica set' do min_server_fcv '3.6' require_topology :replica_set, :sharded context 'when the session cluster time is advanced' do before do session.advance_cluster_time(advanced_cluster_time) end let(:second_command_cluster_time) do second_operation subscriber.non_auth_command_started_events[-1].command['$clusterTime'] end context 'when the advanced cluster time is greater than the existing cluster time' do let(:advanced_cluster_time) do new_timestamp = BSON::Timestamp.new(reply_cluster_time[Mongo::Cluster::CLUSTER_TIME].seconds, reply_cluster_time[Mongo::Cluster::CLUSTER_TIME].increment + 1) new_cluster_time = reply_cluster_time.dup new_cluster_time.merge(Mongo::Cluster::CLUSTER_TIME => new_timestamp) end retry_test it 'includes the advanced cluster time in the second command' do expect(second_command_cluster_time).to eq(advanced_cluster_time) end end context 'when the advanced cluster time is not greater than the existing cluster time' do let(:advanced_cluster_time) do expect(reply_cluster_time[Mongo::Cluster::CLUSTER_TIME].increment > 0).to be true new_timestamp = BSON::Timestamp.new(reply_cluster_time[Mongo::Cluster::CLUSTER_TIME].seconds, reply_cluster_time[Mongo::Cluster::CLUSTER_TIME].increment - 1) new_cluster_time = reply_cluster_time.dup new_cluster_time.merge(Mongo::Cluster::CLUSTER_TIME => new_timestamp) end retry_test it 'does not advance the cluster time' do expect(second_command_cluster_time).to eq(reply_cluster_time) end end end context 'when the 
session cluster time is not advanced' do let(:second_command_cluster_time) do second_operation subscriber.non_auth_command_started_events[-1].command['$clusterTime'] end retry_test it 'includes the received cluster time in the second command' do reply_cluster_time expect(second_command_cluster_time).to eq(reply_cluster_time) end end end context 'when the server is a standalone' do min_server_fcv '3.6' require_topology :single let(:before_cluster_time) do client.cluster.cluster_time end let(:second_command_cluster_time) do second_operation subscriber.non_auth_command_started_events[-1].command['$clusterTime'] end it 'does not update the cluster time of the cluster' do bct = before_cluster_time second_command_cluster_time expect(client.cluster.cluster_time).to eq(bct) end end end context 'when the server is less than version 3.6' do max_server_version '3.4' let(:before_cluster_time) do client.cluster.cluster_time end it 'does not update the cluster time of the cluster' do bct = before_cluster_time operation expect(client.cluster.cluster_time).to eq(bct) end end end shared_examples 'an operation not using a session' do min_server_fcv '3.6' describe 'operation execution' do context 'when the client has a session' do let(:session) do client.start_session end let(:server_session) do session.instance_variable_get(:@server_session) end let!(:before_last_use) do server_session.last_use end let!(:before_operation_time) do session.operation_time end let!(:operation_result) do operation end after do session.end_session end it 'does not send session id in command' do expect(command).not_to have_key('lsid') end it 'does not update the last use value' do expect(server_session.last_use).to eq(before_last_use) end it 'does not update the operation time value' do expect(session.operation_time).to eq(before_operation_time) end it 'does not close the session when the operation completes' do expect(session.ended?).to be(false) end end context 'when the session is ended before it is used' do let(:session) do client.start_session end before do session.end_session end let(:operation_result) do operation end it 'does not raise an exception' do expect { operation_result }.not_to raise_exception end end end end shared_examples 'a failed operation not using a session' do min_server_fcv '3.6' context 'when the operation fails' do let!(:before_last_use) do session.instance_variable_get(:@server_session).last_use end let!(:before_operation_time) do session.operation_time end let!(:operation_result) do sleep 0.2 begin; failed_operation; rescue => e; e; end end let(:session) do client.start_session end it 'raises an error' do expect([Mongo::Error::OperationFailure, Mongo::Error::BulkWriteError]).to include(operation_result.class) end it 'does not update the last use value' do expect(session.instance_variable_get(:@server_session).last_use).to eq(before_last_use) end it 'does not update the operation time value' do expect(session.operation_time).to eq(before_operation_time) end end end mongo-ruby-driver-2.21.3/spec/support/spec_config.rb000066400000000000000000000515771505113246500224610ustar00rootroot00000000000000# frozen_string_literal: true # rubocop:todo all require 'singleton' require 'pathname' class SpecConfig include Singleton # NB: constructor should not do I/O as SpecConfig may be used by tests # only loading the lite spec helper. Do I/O eagerly in accessor methods. 
  def initialize
    @uri_options = {}
    @ruby_options = {}

    if ENV['MONGODB_URI']
      @mongodb_uri = Mongo::URI.get(ENV['MONGODB_URI'])
      @uri_options = Mongo::Options::Mapper.transform_keys_to_symbols(@mongodb_uri.uri_options)
      if ENV['TOPOLOGY'] == 'load-balanced'
        @addresses = @mongodb_uri.servers
        @connect_options = { connect: :load_balanced }
      elsif @uri_options[:replica_set]
        @addresses = @mongodb_uri.servers
        @connect_options = { connect: :replica_set, replica_set: @uri_options[:replica_set] }
      elsif @uri_options[:connect] == :sharded || ENV['TOPOLOGY'] == 'sharded-cluster'
        @addresses = @mongodb_uri.servers
        @connect_options = { connect: :sharded }
      elsif @uri_options[:connect] == :direct
        @addresses = @mongodb_uri.servers
        @connect_options = { connect: :direct }
      end
      if @uri_options[:ssl].nil?
        @ssl = (ENV['SSL'] == 'ssl') || (ENV['SSL_ENABLED'] == 'true')
      else
        @ssl = @uri_options[:ssl]
      end
    end

    @uri_tls_options = {}
    @uri_options.each do |k, v|
      k = k.to_s.downcase
      if k.start_with?('ssl')
        @uri_tls_options[k] = v
      end
    end

    @ssl ||= false

    if (server_api = ENV['SERVER_API']) && !server_api.empty?
      @ruby_options[:server_api] = BSON::Document.new(YAML.load(server_api))
      # Since the tests pass options provided by SpecConfig directly to
      # internal driver objects (e.g. connections), transform server api
      # parameters here as they would be transformed by Client constructor.
      if (v = @ruby_options[:server_api][:version]).is_a?(Integer)
        @ruby_options[:server_api][:version] = v.to_s
      end
    end
  end

  attr_reader :uri_options, :ruby_options, :connect_options

  def addresses
    @addresses ||= begin
      if @mongodb_uri
        @mongodb_uri.servers
      else
        client = Mongo::Client.new(['localhost:27017'], server_selection_timeout: 5.02)
        begin
          client.cluster.next_primary
          @addresses = client.cluster.servers_list.map do |server|
            server.address.to_s
          end
        ensure
          client.close
        end
      end
    end
  end

  def connect_options
    @connect_options ||= begin
      # Discover deployment topology.
      # TLS options need to be merged for evergreen due to
      # https://github.com/10gen/mongo-orchestration/issues/268
      client = Mongo::Client.new(addresses, Mongo::Options::Redacted.new(
        server_selection_timeout: 5.03,
      ).merge(ssl_options).merge(ruby_options))

      begin
        case client.cluster.topology.class.name
        when /LoadBalanced/
          { connect: :load_balanced }
        when /Replica/
          { connect: :replica_set, replica_set: client.cluster.topology.replica_set_name }
        when /Sharded/
          { connect: :sharded }
        when /Single/
          { connect: :direct }
        when /Unknown/
          raise "Could not detect topology because the test client failed to connect to MongoDB deployment"
        else
          raise "Weird topology #{client.cluster.topology}"
        end
      ensure
        client.close
      end
    end
  end

  # Environment

  def ci?
    %w(1 true yes).include?(ENV['CI']&.downcase)
  end

  def mri?
    !jruby?
  end

  def jruby?
    !!(RUBY_PLATFORM =~ /\bjava\b/)
  end

  def linux?
    !!(RbConfig::CONFIG['host_os'].downcase =~ /\blinux/)
  end

  def macos?
    !!(RbConfig::CONFIG['host_os'].downcase =~ /\bdarwin/)
  end

  def windows?
    ENV['OS'] == 'Windows_NT' && !RUBY_PLATFORM.match?(/cygwin/)
  end

  def platform
    RUBY_PLATFORM
  end

  def stress?
    %w(1 true yes).include?(ENV['STRESS']&.downcase)
  end

  def fork?
    %w(1 true yes).include?(ENV['FORK']&.downcase)
  end

  # OCSP tests require python and various dependencies.
  # Assumes an OCSP responder is running on port 8100 (configured externally
  # to the test suite).
  def ocsp?
    %w(1 true yes).include?(ENV['OCSP']&.downcase)
  end

  # OCSP tests require python and various dependencies.
  # When testing OCSP verifier, there cannot be a responder running on
  # port 8100 or the tests will fail.
  def ocsp_verifier?
    %w(1 true yes).include?(ENV['OCSP_VERIFIER']&.downcase)
  end

  def ocsp_connectivity?
    ENV.key?('OCSP_CONNECTIVITY') && ENV['OCSP_CONNECTIVITY'] != ''
  end

  # Detect whether specs are running against a MongoDB Atlas serverless
  # instance. This method does not do any magic; it just checks whether the
  # environment variable SERVERLESS is set. This is the recommended way to
  # inform spec runners that they are running against a serverless instance.
  #
  # @return [ true | false ] Whether specs are running against a serverless instance.
  def serverless?
    !!ENV['SERVERLESS']
  end

  def kill_all_server_sessions?
    !serverless? && # Serverless instances do not support killAllSessions command.
      ClusterConfig.instance.fcv_ish >= '3.6'
  end

  # Test suite configuration

  def client_debug?
    %w(1 true yes).include?(ENV['MONGO_RUBY_DRIVER_CLIENT_DEBUG']&.downcase)
  end

  def drivers_tools?
    !!ENV['DRIVERS_TOOLS']
  end

  def active_support?
    %w(1 true yes).include?(ENV['WITH_ACTIVE_SUPPORT'])
  end

  # What compressor to use, if any.
  def compressors
    uri_options[:compressors]
  end

  def retry_reads
    uri_option_or_env_var(:retry_reads, 'RETRY_READS')
  end

  def retry_writes
    uri_option_or_env_var(:retry_writes, 'RETRY_WRITES')
  end

  def uri_option_or_env_var(driver_option_symbol, env_var_key)
    case uri_options[driver_option_symbol]
    when true
      true
    when false
      false
    else
      case (ENV[env_var_key] || '').downcase
      when 'yes', 'true', 'on', '1'
        true
      when 'no', 'false', 'off', '0'
        false
      else
        nil
      end
    end
  end
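  # A sketch of the precedence above (hypothetical environment, for
  # illustration):
  #
  #   # URI contains no retryWrites option, ENV['RETRY_WRITES'] = 'off'
  #   uri_option_or_env_var(:retry_writes, 'RETRY_WRITES') # => false
  #
  #   # URI contains retryWrites=true; the env var is then ignored
  #   uri_option_or_env_var(:retry_writes, 'RETRY_WRITES') # => true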
  def retry_writes?
    if retry_writes == false
      false
    else
      # Current default is to retry writes
      true
    end
  end

  def ssl?
    @ssl
  end

  # Username, not user object
  def user
    @mongodb_uri && @mongodb_uri.credentials[:user]
  end

  def password
    @mongodb_uri && @mongodb_uri.credentials[:password]
  end

  def auth_source
    uri_options[:auth_source]
  end

  def connect_replica_set?
    connect_options[:connect] == :replica_set
  end

  def print_summary
    puts "Connection options: #{test_options}"
    client = ClientRegistry.instance.global_client('basic')
    client.cluster.next_primary
    puts <<-EOT
Topology: #{client.cluster.topology.class}
connect: #{connect_options[:connect]}
    EOT
  end

  # Derived data

  def any_port
    addresses.first.split(':')[1] || '27017'
  end

  def spec_root
    File.join(File.dirname(__FILE__), '..')
  end

  def ssl_certs_dir
    Pathname.new("#{spec_root}/support/certificates")
  end

  def ocsp_files_dir
    Pathname.new("#{spec_root}/../.mod/drivers-evergreen-tools/.evergreen/ocsp")
  end

  # TLS certificates & keys

  def local_client_key_path
    "#{ssl_certs_dir}/client.key"
  end

  def client_key_path
    if drivers_tools? && ENV['DRIVER_TOOLS_CLIENT_KEY_PEM']
      ENV['DRIVER_TOOLS_CLIENT_KEY_PEM']
    else
      local_client_key_path
    end
  end

  def local_client_cert_path
    "#{ssl_certs_dir}/client.crt"
  end

  def client_cert_path
    if drivers_tools? && ENV['DRIVER_TOOLS_CLIENT_CERT_PEM']
      ENV['DRIVER_TOOLS_CLIENT_CERT_PEM']
    else
      local_client_cert_path
    end
  end

  def local_client_pem_path
    # Use the OCSP server certificate when an OCSP algorithm is specified,
    # otherwise use the generic client certificate.
    if (algo = ENV['OCSP_ALGORITHM']) && !algo.empty?
      Pathname.new("#{spec_root}/support/ocsp/#{algo}/server.pem")
    else
      "#{ssl_certs_dir}/client.pem"
    end
  end

  def client_pem_path
    if drivers_tools? && ENV['DRIVER_TOOLS_CLIENT_CERT_KEY_PEM']
      ENV['DRIVER_TOOLS_CLIENT_CERT_KEY_PEM']
    else
      local_client_pem_path
    end
  end

  def client_x509_pem_path
    "#{ssl_certs_dir}/client-x509.pem"
  end

  def second_level_cert_path
    "#{ssl_certs_dir}/client-second-level.crt"
  end

  def second_level_key_path
    "#{ssl_certs_dir}/client-second-level.key"
  end

  def second_level_cert_bundle_path
    "#{ssl_certs_dir}/client-second-level-bundle.pem"
  end

  def local_client_encrypted_key_path
    "#{ssl_certs_dir}/client-encrypted.key"
  end

  def client_encrypted_key_path
    if drivers_tools? && ENV['DRIVER_TOOLS_CLIENT_KEY_ENCRYPTED_PEM']
      ENV['DRIVER_TOOLS_CLIENT_KEY_ENCRYPTED_PEM']
    else
      local_client_encrypted_key_path
    end
  end

  def client_encrypted_key_passphrase
    'passphrase'
  end

  def local_ca_cert_path
    "#{ssl_certs_dir}/ca.crt"
  end

  def ca_cert_path
    if drivers_tools? && ENV['DRIVER_TOOLS_CA_PEM']
      ENV['DRIVER_TOOLS_CA_PEM']
    else
      local_ca_cert_path
    end
  end

  def multi_ca_path
    "#{ssl_certs_dir}/multi-ca.crt"
  end

  # The default test database for all specs.
  def test_db
    'ruby-driver'.freeze
  end

  # Whether FLE tests should be enabled
  def fle?
    %w(1 true yes helper).include?(ENV['FLE']&.downcase)
  end

  # AWS IAM user access key id
  def fle_aws_key
    ENV['MONGO_RUBY_DRIVER_AWS_KEY']
  end

  # AWS IAM user secret access key
  def fle_aws_secret
    ENV['MONGO_RUBY_DRIVER_AWS_SECRET']
  end

  # Region of AWS customer master key
  def fle_aws_region
    ENV['MONGO_RUBY_DRIVER_AWS_REGION']
  end

  # Amazon resource name (ARN) of AWS customer master key
  def fle_aws_arn
    ENV['MONGO_RUBY_DRIVER_AWS_ARN']
  end

  # AWS temporary access key id (set by set-temp-creds.sh)
  def fle_aws_temp_key
    ENV['CSFLE_AWS_TEMP_ACCESS_KEY_ID']
  end

  # AWS temporary secret access key (set by set-temp-creds.sh)
  def fle_aws_temp_secret
    ENV['CSFLE_AWS_TEMP_SECRET_ACCESS_KEY']
  end

  # AWS temporary session token (set by set-temp-creds.sh)
  def fle_aws_temp_session_token
    ENV['CSFLE_AWS_TEMP_SESSION_TOKEN']
  end

  def fle_azure_tenant_id
    ENV['MONGO_RUBY_DRIVER_AZURE_TENANT_ID']
  end

  def fle_azure_client_id
    ENV['MONGO_RUBY_DRIVER_AZURE_CLIENT_ID']
  end

  def fle_azure_client_secret
    ENV['MONGO_RUBY_DRIVER_AZURE_CLIENT_SECRET']
  end

  def fle_azure_identity_platform_endpoint
    ENV['MONGO_RUBY_DRIVER_AZURE_IDENTITY_PLATFORM_ENDPOINT']
  end

  def fle_azure_key_vault_endpoint
    ENV['MONGO_RUBY_DRIVER_AZURE_KEY_VAULT_ENDPOINT']
  end

  def fle_azure_key_name
    ENV['MONGO_RUBY_DRIVER_AZURE_KEY_NAME']
  end

  def fle_gcp_email
    ENV['MONGO_RUBY_DRIVER_GCP_EMAIL']
  end

  def fle_gcp_private_key
    ENV['MONGO_RUBY_DRIVER_GCP_PRIVATE_KEY']
  end

  def fle_gcp_endpoint
    ENV['MONGO_RUBY_DRIVER_GCP_ENDPOINT']
  end

  def fle_gcp_project_id
    ENV['MONGO_RUBY_DRIVER_GCP_PROJECT_ID']
  end

  def fle_gcp_location
    ENV['MONGO_RUBY_DRIVER_GCP_LOCATION']
  end

  def fle_gcp_key_ring
    ENV['MONGO_RUBY_DRIVER_GCP_KEY_RING']
  end

  def fle_gcp_key_name
    ENV['MONGO_RUBY_DRIVER_GCP_KEY_NAME']
  end

  def fle_gcp_key_version
    ENV['MONGO_RUBY_DRIVER_GCP_KEY_VERSION']
  end

  def fle_kmip_endpoint
    "localhost:5698"
  end

  def fle_kmip_tls_ca_file
    "#{spec_root}/../.evergreen/x509gen/ca.pem"
  end

  def fle_kmip_tls_certificate_key_file
    "#{spec_root}/../.evergreen/x509gen/client.pem"
  end

  def mongocryptd_port
    if ENV['MONGO_RUBY_DRIVER_MONGOCRYPTD_PORT'] &&
        !ENV['MONGO_RUBY_DRIVER_MONGOCRYPTD_PORT'].empty?
      ENV['MONGO_RUBY_DRIVER_MONGOCRYPTD_PORT'].to_i
    else
      27020
    end
  end

  def crypt_shared_lib_path
    if @without_crypt_shared_lib_path
      nil
    else
      ENV['MONGO_RUBY_DRIVER_CRYPT_SHARED_LIB_PATH']
    end
  end

  def without_crypt_shared_lib_path
    saved, @without_crypt_shared_lib_path = @without_crypt_shared_lib_path, true
    yield
  ensure
    @without_crypt_shared_lib_path = saved
  end

  attr_accessor :crypt_shared_lib_required

  def require_crypt_shared
    saved, self.crypt_shared_lib_required = crypt_shared_lib_required, true
    yield
  ensure
    self.crypt_shared_lib_required = saved
  end

  def auth?
    x509_auth? || user
  end

  # Option hashes

  def auth_options
    if x509_auth?
      {
        auth_mech: uri_options[:auth_mech],
        auth_source: '$external',
      }
    else
      {
        user: user,
        password: password,
      }.tap do |options|
        if auth_source
          options[:auth_source] = auth_source
        end
        %i(auth_mech auth_mech_properties).each do |key|
          if uri_options[key]
            options[key] = uri_options[key]
          end
        end
      end
    end
  end

  def ssl_options
    return {} unless ssl?

    {
      ssl: true,
      ssl_verify: true,
    }.tap do |options|
      # We should use bundled certificates for ssl except for testing against
      # Atlas instances. Atlas instances have addresses in domains
      # mongodb.net or mongodb-dev.net.
      if @mongodb_uri.servers.grep(/mongodb.*\.net/).empty?
        options.merge!(
          {
            ssl_cert: client_cert_path,
            ssl_key: client_key_path,
            ssl_ca_cert: ca_cert_path,
          }
        )
      end
    end.merge(Utils.underscore_hash(@uri_tls_options))
  end

  def compressor_options
    if compressors
      {compressors: compressors}
    else
      {}
    end
  end

  def retry_writes_options
    {retry_writes: retry_writes?}
  end

  # The options needed for a successful socket connection to the server(s).
  # These exclude options needed to handshake (e.g. server api parameters).
  def connection_options
    ssl_options
  end

  # The options needed for successful monitoring of the server(s).
  # These exclude options needed to perform operations (e.g. credentials).
  def monitoring_options
    ssl_options.merge(
      server_api: ruby_options[:server_api],
    )
  end

  # Base test options.
  def base_test_options
    {
      # Automatic encryption tests require a minimum of three connections:
      # - The driver checks out a connection to build a command.
      # - It may need to encrypt the command, which could require a query to
      #   the key vault collection triggered by libmongocrypt.
      # - If the key vault client has auto encryption options, it will also
      #   attempt to encrypt this query, resulting in a third connection.
      # In the worst case using FLE may end up tripling the number of
      # connections that the driver uses at any one time.
      max_pool_size: 3,

      heartbeat_frequency: 20,

      # The test suite seems to perform a number of operations
      # requiring server selection. Hence a timeout of 1 here,
      # together with e.g. a misconfigured replica set,
      # means the test suite hangs for about 4 seconds before
      # failing.
      # Server selection timeout of 1 is insufficient for evergreen.
      server_selection_timeout: uri_options[:server_selection_timeout] || (ssl? ? 8.01 : 7.01),

      # Since connections are established under the wait queue timeout,
      # the wait queue timeout should be at least as long as the
      # connect timeout.
      wait_queue_timeout: 6.04,
      connect_timeout: 2.91,
      socket_timeout: 5.09,
      max_idle_time: 100.02,

      # Uncomment to have exceptions in background threads log complete
      # backtraces.
      #bg_error_backtrace: true,
    }.merge(ruby_options).merge(
      server_api: ruby_options[:server_api] && ::Utils.underscore_hash(ruby_options[:server_api])
    )
  end
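  # Roughly speaking (a sketch, not the authoritative definition below),
  # options for test suite clients are composed as:
  #
  #   base_test_options            # pool sizes, timeouts, server api
  #     .merge(connect_options)    # topology discovered or taken from the URI
  #     .merge(ssl_options)        # TLS certificates, if SSL is enabled
  #     .merge(compressor_options)
  #     .merge(retry_writes_options)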
  # Options for test suite clients.
  def test_options
    base_test_options.merge(connect_options).
      merge(ssl_options).merge(compressor_options).merge(retry_writes_options)
  end

  # TODO auth_options should probably be in test_options
  def all_test_options
    test_options.merge(auth_options)
  end

  def authorized_test_options
    test_options.merge(credentials_or_external_user(
      user: test_user.name,
      password: test_user.password,
      auth_source: auth_options[:auth_source],
    ))
  end

  # User objects

  # Gets the root system administrator user.
  def root_user
    Mongo::Auth::User.new(
      user: user || 'root-user',
      password: password || 'password',
      roles: [
        Mongo::Auth::Roles::USER_ADMIN_ANY_DATABASE,
        Mongo::Auth::Roles::DATABASE_ADMIN_ANY_DATABASE,
        Mongo::Auth::Roles::READ_WRITE_ANY_DATABASE,
        Mongo::Auth::Roles::HOST_MANAGER,
        Mongo::Auth::Roles::CLUSTER_ADMIN
      ]
    )
  end

  # Get the default test user for the suite on versions 2.6 and higher.
  def test_user
    # When testing against a serverless instance, we are not allowed to create
    # new users; we just have one user for everything.
    return root_user if serverless?

    Mongo::Auth::User.new(
      database: 'admin',
      user: 'ruby-test-user',
      password: 'password',
      roles: [
        { role: Mongo::Auth::Roles::READ_WRITE, db: test_db },
        { role: Mongo::Auth::Roles::DATABASE_ADMIN, db: test_db },
        { role: Mongo::Auth::Roles::READ_WRITE, db: 'invalid_database' },
        { role: Mongo::Auth::Roles::DATABASE_ADMIN, db: 'invalid_database' },

        # For transactions examples
        { role: Mongo::Auth::Roles::READ_WRITE, db: 'hr' },
        { role: Mongo::Auth::Roles::DATABASE_ADMIN, db: 'hr' },
        { role: Mongo::Auth::Roles::READ_WRITE, db: 'reporting' },
        { role: Mongo::Auth::Roles::DATABASE_ADMIN, db: 'reporting' },

        # For spec tests
        { role: Mongo::Auth::Roles::READ_WRITE, db: 'crud-tests' },
        { role: Mongo::Auth::Roles::DATABASE_ADMIN, db: 'crud-tests' },
        { role: Mongo::Auth::Roles::READ_WRITE, db: 'crud-default' },
        { role: Mongo::Auth::Roles::DATABASE_ADMIN, db: 'crud-default' },
        { role: Mongo::Auth::Roles::READ_WRITE, db: 'default_write_concern_db' },
        { role: Mongo::Auth::Roles::DATABASE_ADMIN, db: 'default_write_concern_db' },
        { role: Mongo::Auth::Roles::READ_WRITE, db: 'retryable-reads-tests' },
        { role: Mongo::Auth::Roles::DATABASE_ADMIN, db: 'retryable-reads-tests' },
        { role: Mongo::Auth::Roles::READ_WRITE, db: 'sdam-tests' },
        { role: Mongo::Auth::Roles::DATABASE_ADMIN, db: 'sdam-tests' },
        { role: Mongo::Auth::Roles::READ_WRITE, db: 'transaction-tests' },
        { role: Mongo::Auth::Roles::DATABASE_ADMIN, db: 'transaction-tests' },
        { role: Mongo::Auth::Roles::READ_WRITE, db: 'withTransaction-tests' },
        { role: Mongo::Auth::Roles::DATABASE_ADMIN, db: 'withTransaction-tests' },
        { role: Mongo::Auth::Roles::READ_WRITE, db: 'admin' },
        { role: Mongo::Auth::Roles::DATABASE_ADMIN, db: 'admin' },
        { role: Mongo::Auth::Roles::READ_WRITE, db: 'command-monitoring-tests' },
        { role: Mongo::Auth::Roles::DATABASE_ADMIN, db: 'command-monitoring-tests' },
        { role: Mongo::Auth::Roles::READ_WRITE, db: 'session-tests' },
        { role: Mongo::Auth::Roles::DATABASE_ADMIN, db: 'session-tests' },
        { role: Mongo::Auth::Roles::READ_WRITE, db: 'gridfs-tests' },
        { role: Mongo::Auth::Roles::DATABASE_ADMIN, db: 'gridfs-tests' },
        { role: Mongo::Auth::Roles::READ_WRITE, db: 'change-stream-tests' },
        { role: Mongo::Auth::Roles::DATABASE_ADMIN, db: 'change-stream-tests' },
        { role: Mongo::Auth::Roles::READ_WRITE, db: 'change-stream-tests-2' },
        { role: Mongo::Auth::Roles::DATABASE_ADMIN, db: 'change-stream-tests-2' },
        { role: Mongo::Auth::Roles::READ_WRITE, db: 'retryable-writes-tests' },
        { role: Mongo::Auth::Roles::DATABASE_ADMIN, db: 'retryable-writes-tests' },
        { role: Mongo::Auth::Roles::READ_WRITE, db: 'ts-tests' },
        { role: Mongo::Auth::Roles::DATABASE_ADMIN, db: 'ts-tests' },
        { role: Mongo::Auth::Roles::READ_WRITE, db: 'ci-tests' },
        { role: Mongo::Auth::Roles::DATABASE_ADMIN, db: 'ci-tests' },
        { role: Mongo::Auth::Roles::READ_WRITE, db: 'papi-tests' },
        { role: Mongo::Auth::Roles::DATABASE_ADMIN, db: 'papi-tests' },
      ]
    )
  end

  def x509_auth?
    uri_options[:auth_mech] == :mongodb_x509
  end

  # When we authenticate with a username & password mechanism (scram, cr)
  # we create a variety of users in the test suite for different purposes.
  # When we authenticate with passwordless mechanisms (x509, aws) we use
  # the globally specified user for all operations.
  def external_user?
    case uri_options[:auth_mech]
    when :mongodb_x509, :aws
      true
    when nil, :scram, :scram256
      false
    else
      raise "Unknown auth mechanism value: #{uri_options[:auth_mech]}"
    end
  end

  # When we use external authentication, omit all of the users we normally
  # create and authenticate with the external mechanism. This also ensures
  # our various helpers work correctly when the only users available are
  # the external ones.
  def credentials_or_external_user(creds)
    if external_user?
      auth_options
    else
      creds
    end
  end

  # Returns whether the test suite was configured with a single mongos.
  def single_mongos?
    %w(1 true yes).include?(ENV['SINGLE_MONGOS'])
  end
end
mongo-ruby-driver-2.21.3/spec/support/spec_setup.rb

# frozen_string_literal: true
# rubocop:todo all

require_relative './spec_config'
require_relative './client_registry'

class SpecSetup
  def run
    if SpecConfig.instance.external_user?
      warn 'Skipping user creation because the set of users is fixed'
      return
    end

    with_client do |client|
      # For historical reasons, the test suite always uses
      # password-authenticated users, even when authentication is not
      # requested in the configuration. When authentication is requested
      # and password authentication is used (i.e., not x509 and not kerberos),
      # a suitable user already exists (it's the one specified in the URI)
      # and no additional users are needed. In other cases, including x509
      # auth and kerberos, create the "root user".
      # TODO redo the test suite so that this password-authenticated root user
      # is not required and the test suite uses whichever user is specified
      # in the URI, which could be none.
      if !SpecConfig.instance.auth? || SpecConfig.instance.x509_auth?
        # Create the root user administrator as the first user to be added to
        # the database. This user will need to be authenticated in order to
        # add any more users to any other databases.
        begin
          create_user(client, SpecConfig.instance.root_user)
        rescue Mongo::Error::OperationFailure::Family => e
          # When testing a cluster that requires auth, the root user is
          # already set up and it is not creatable without auth.
          # Seems like every mongodb version has its own error message
          # for trying to make a user when not authenticated,
          # and prior to 4.0 or so the codes are supposedly not reliable either.
          # In order: 4.0, 3.6, 3.4 through 2.6
          if e.message =~ /command createUser requires authentication|there are no users authenticated|not authorized on admin to execute command.*createUser/
            # However, if the cluster is configured to require auth but the
            # test suite has wrong credentials, then admin_authorized_test_client
            # won't be authenticated and the following line will raise an
            # exception
            if client.use('admin').database.users.info(SpecConfig.instance.root_user.name).any?
              warn "Skipping root user creation, likely auth is enabled on cluster"
            else
              raise
            end
          else
            raise
          end
        end
      end

      # Adds the test user to the test database with permissions on all
      # databases that will be used in the test suite.
      create_user(client, SpecConfig.instance.test_user)
    end
  end

  def create_user(client, user)
    users = client.use('admin').database.users
    begin
      users.create(user)
    rescue Mongo::Error::OperationFailure::Family => e
      if e.message =~ /User.*already exists/
        users.remove(user.name)
        users.create(user)
      else
        raise
      end
    end
  end

  def with_client(&block)
    Mongo::Client.new(
      SpecConfig.instance.addresses,
      SpecConfig.instance.all_test_options.merge(
        socket_timeout: 5,
        connect_timeout: 5,
      ),
      &block
    )
  end
end
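
# A minimal usage sketch (hypothetical; the test suite would normally invoke
# this from its spec helpers before the first example runs):
#
#   SpecSetup.new.run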
mongo-ruby-driver-2.21.3/spec/support/using_hash.rb

# frozen_string_literal: true
# rubocop:todo all

class UsingHash < Hash
  class UsingHashKeyError < KeyError
  end

  def use(key)
    wrap(self[key]).tap do
      delete(key)
    end
  end

  def use!(key)
    begin
      value = fetch(key)
    rescue KeyError => e
      raise UsingHashKeyError, e.to_s
    end
    wrap(value).tap do
      delete(key)
    end
  end

  private

  def wrap(v)
    case v
    when Hash
      self.class[v]
    when Array
      v.map do |subv|
        wrap(subv)
      end
    else
      v
    end
  end
end
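# A minimal usage sketch (hypothetical values, for illustration): #use returns
# the value for a key (wrapping nested hashes as UsingHash) and removes the
# key; #use! does the same but raises UsingHashKeyError for a missing key.
#
#   h = UsingHash['a' => { 'b' => 1 }, 'c' => 2]
#   h.use('a')        # => { 'b' => 1 } (as a UsingHash); 'a' is removed
#   h                 # => { 'c' => 2 }
#   h.use!('missing') # raises UsingHash::UsingHashKeyError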
mongo-ruby-driver-2.21.3/spec/support/utils.rb

# frozen_string_literal: true

autoload :Base64, 'base64'
autoload :JSON, 'json'

module Net
  autoload :HTTP, 'net/http'
end

module Utils
  extend self

  # Used by #yamlify_command_events
  MAP_REDUCE_COMMANDS = %w[ map reduce ].freeze

  # Used by #yamlify_command_events
  AUTHENTICATION_COMMANDS = %w[ saslStart saslContinue authenticate getnonce ].freeze

  # The system command to invoke to represent a false result
  BIN_FALSE = File.executable?('/bin/false') ? '/bin/false' : 'false'

  # The system command to invoke to represent a true result
  BIN_TRUE = File.executable?('/bin/true') ? '/bin/true' : 'true'

  # Converts a 'camelCase' string or symbol to a :under_score symbol.
  def underscore(str)
    str = str.to_s
    str = str[0].downcase + str[1...str.length].gsub(/([A-Z]+)/) { |m| "_#{m.downcase}" }
    str.to_sym
  end

  # Creates a copy of a hash where all keys are converted to snake-case
  # symbols, recursing into nested hashes. Values are left as-is.
  #
  # For example, { 'fooBar' => { 'baz' => 1, :x => 1 } } converts to
  # { :foo_bar => { :baz => 1, :x => 1 } }.
  def underscore_hash(value)
    return value unless value.is_a?(Hash)

    value.reduce({}) do |hash, (k, v)|
      hash.tap do |h|
        h[underscore(k)] = underscore_hash(v)
      end
    end
  end

  # Like underscore_hash, but only converts the top-level keys and does not
  # recurse into values.
  #
  # For example, { 'fooBar' => { 'baz' => 1 } } converts to
  # { :foo_bar => { 'baz' => 1 } }.
  def shallow_underscore_hash(value)
    return value unless value.is_a?(Hash)

    value.reduce({}) do |hash, (k, v)|
      hash.tap do |h|
        h[underscore(k)] = v
      end
    end
  end

  # Creates a copy of a hash where all keys and string values are converted to
  # snake-case symbols, recursing into nested hashes and arrays of hashes.
  #
  # For example, { 'fooBar' => { 'baz' => 'bingBing', :x => 1 } } converts to
  # { :foo_bar => { :baz => :bing_bing, :x => 1 } }.
  def snakeize_hash(value)
    return underscore(value) if value.is_a?(String)

    case value
    when Array
      value.map do |sub|
        case sub
        when Hash
          snakeize_hash(sub)
        else
          sub
        end
      end
    when Hash
      value.reduce({}) do |hash, (k, v)|
        hash.tap do |h|
          h[underscore(k)] = snakeize_hash(v)
        end
      end
    else
      value
    end
  end

  # Like snakeize_hash but does not recurse.
  def shallow_snakeize_hash(value)
    return underscore(value) if value.is_a?(String)
    return value unless value.is_a?(Hash)

    value.reduce({}) do |hash, (k, v)|
      hash.tap do |h|
        h[underscore(k)] = v
      end
    end
  end

  # Creates a copy of a hash where all keys and symbol values are converted to
  # camel-case strings.
  #
  # For example, { :foo_bar => { :baz => :bing_bing, 'x' => 1 } } converts to
  # { 'fooBar' => { 'baz' => 'bingBing', 'x' => 1 } }.
  def camelize_hash(value, upcase_first = false)
    return camelize(value.to_s, upcase_first) if value.is_a?(Symbol)
    return value unless value.is_a?(Hash)

    value.reduce({}) do |hash, (k, v)|
      hash.tap do |h|
        h[camelize(k.to_s)] = camelize_hash(v, upcase_first)
      end
    end
  end

  def camelize(str, upcase_first = false)
    str = str.gsub(/_(\w)/) { |m| m[1].upcase }
    str = str[0].upcase + str[1...str.length] if upcase_first
    str
  end

  def downcase_keys(hash)
    hash.transform_keys(&:downcase)
  end

  def disable_retries_client_options
    {
      retry_reads: false,
      retry_writes: false,
      max_read_retries: 0,
      max_write_retries: 0,
    }
  end

  # Converts camel case clientOptions, as used in spec tests,
  # to Ruby driver underscore options.
  def convert_client_options(spec_test_options)
    mapper = Mongo::URI::OptionsMapper.new
    spec_test_options.each_with_object({}) do |(name, value), opts|
      if name == 'autoEncryptOpts'
        auto_encryption_options = convert_auto_encryption_client_options(value)
        opts[:auto_encryption_options] = auto_encryption_options
      else
        mapper.add_uri_option(name, value.to_s, opts)
      end

      opts
    end
  end
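  # A minimal usage sketch (hypothetical option values, for illustration;
  # the exact mapping is delegated to Mongo::URI::OptionsMapper):
  #
  #   Utils.convert_client_options('retryWrites' => true)
  #   # => { :retry_writes => true }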
  def order_hash(hash)
    hash.to_a.sort.to_h
  end

  # Transforms an array of CommandStarted events to an array of hashes
  # matching event specification in YAML spec files
  # rubocop:disable Metrics, Style/IfUnlessModifier
  def yamlify_command_events(events)
    events = events.map do |e|
      command = e.command.dup

      # Fake BSON::Code for map/reduce commands
      MAP_REDUCE_COMMANDS.each do |key|
        command[key] = BSON::Code.new(command[key]) if command[key].is_a?(String)
      end

      if command['readConcern']
        # The spec tests use an afterClusterTime value of 42 to indicate that we need to assert
        # that the field exists in the actual read concern rather than comparing the value, so
        # we replace any afterClusterTime value with 42.
        if command['readConcern']['afterClusterTime']
          command['readConcern']['afterClusterTime'] = 42
        end

        # Convert the readConcern level from a symbol to a string.
        if command['readConcern']['level']
          command['readConcern']['level'] = command['readConcern']['level'].to_s
        end
      end

      if command['recoveryToken']
        command['recoveryToken'] = 42
      end

      # The spec tests use 42 as a placeholder value for any getMore cursorId.
      command['getMore'] = command['getMore'].class.new(42) if command['getMore']

      # Remove fields if empty
      command.delete('query') if command['query'] && command['query'].empty?

      {
        'command_started_event' => order_hash(
          'command' => order_hash(command),
          'command_name' => e.command_name.to_s,
          'database_name' => e.database_name
        )
      }
    end

    # Remove any events from authentication commands.
    events.reject! do |e|
      command_name = e['command_started_event']['command_name']
      AUTHENTICATION_COMMANDS.include?(command_name)
    end

    events
  end
  # rubocop:enable Metrics, Style/IfUnlessModifier

  # rubocop:disable Metrics
  def convert_operation_options(options)
    if options
      options.map do |k, v|
        out_v =
          case k
          when 'readPreference'
            out_k = :read
            out_v = {}
            v.each do |sub_k, sub_v|
              if sub_k == 'mode'
                out_v[:mode] = Utils.underscore(v['mode'])
              else
                out_v[sub_k.to_sym] = sub_v
              end
            end
            out_v
          when 'defaultTransactionOptions'
            out_k = Utils.underscore(k).to_sym
            convert_operation_options(v)
          when 'readConcern'
            out_k = Utils.underscore(k).to_sym
            Mongo::Options::Mapper.transform_keys_to_symbols(v).tap do |out|
              out[:level] = out[:level].to_sym if out[:level]
            end
          when 'causalConsistency'
            out_k = Utils.underscore(k).to_sym
            v
          when 'writeConcern'
            # Tests added in SPEC-1352 specify {writeConcern: {}} but what
            # they mean is for the driver to use the default write concern,
            # which for Ruby means no write concern is specified at all.
            #
            # This nil return requires the compact call below to get rid of
            # the nils before outgoing options are constructed.
            next nil if v == {}

            # Write concern option is called :write on the client, but
            # :write_concern on all levels below the client.
            out_k = :write_concern
            # The client expects write concern value to only have symbol keys.
            v.transform_keys(&:to_sym)
          else
            raise "Unhandled operation option #{k}"
          end

        [ out_k, out_v ]
      end.compact.to_h
    else
      {}
    end
  end
  # rubocop:enable Metrics

  def int64_value(value)
    if value.respond_to?(:value)
      # bson-ruby >= 4.6.0
      value.value
    else
      value.instance_variable_get(:@integer)
    end
  end

  URI_OPTION_MAP = {
    app_name: 'appName',
    auth_mech: 'authMechanism',
    auth_source: 'authsource',
    replica_set: 'replicaSet',
    ssl_ca_cert: 'tlsCAFile',
    ssl_cert: 'tlsCertificateKeyFile',
    ssl_key: 'tlsCertificateKeyFile',
  }.freeze

  # rubocop:disable Metrics
  def create_mongodb_uri(address_strs, **opts)
    creds = opts[:username] ? "#{opts[:username]}:#{opts[:password]}@" : ''

    uri = +"mongodb://#{creds}#{address_strs.join(',')}/"
    uri << opts[:database] if opts[:database]

    if (uri_options = opts[:uri_options])
      uri << '?'

      uri_options.each do |k, v|
        uri << '&'

        write_k = URI_OPTION_MAP[k] || k

        case k
        when :compressors
          write_v = v.join(',')
        when :auth_mech
          next unless v

          write_v = Mongo::URI::AUTH_MECH_MAP.key(v)
          raise "Unhandled auth mech value: #{v}" unless write_v
        else
          write_v = v
        end

        uri << "#{write_k}=#{write_v}"
      end
    end

    uri
  end
  # rubocop:enable Metrics
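  # A minimal usage sketch (hypothetical credentials, for illustration):
  #
  #   Utils.create_mongodb_uri(['localhost:27017'],
  #     username: 'alice', password: 's3cret', database: 'admin')
  #   # => "mongodb://alice:s3cret@localhost:27017/admin"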
  # Client-Side encryption tests introduce the $$type syntax for determining
  # equality in command started events. The $$type key specifies which type of
  # BSON object is expected in the result. If the $$type key is present, only
  # check the class of the result.
  # rubocop:disable Metrics
  def match_with_type?(expected, actual)
    if expected.is_a?(Hash) && expected.key?('$$type')
      case expected['$$type']
      when 'binData'
        expected_class = BSON::Binary
        expected_key = '$binary'
      when 'long'
        expected_class = BSON::Int64
        expected_key = '$numberLong'
      when %w[int long]
        return actual.is_a?(Numeric) || actual.is_a?(BSON::Int32) || actual.is_a?(BSON::Int64)
      else
        raise "Tests do not currently support matching against $$type #{expected['$$type']}"
      end

      actual.is_a?(expected_class) || actual.key?(expected_key)
    elsif expected.is_a?(Hash) && actual.is_a?(Hash)
      has_all_keys = (expected.keys - actual.keys).empty?

      same_values = expected.keys.all? do |key|
        match_with_type?(expected[key], actual[key])
      end

      has_all_keys && same_values
    elsif expected.is_a?(Array) && actual.is_a?(Array)
      same_length = expected.length == actual.length

      same_values = expected.map.with_index do |_, idx|
        match_with_type?(expected[idx], actual[idx])
      end.all?

      same_length && same_values
    elsif expected == 42
      actual.is_a?(Numeric) || actual.is_a?(BSON::Int32) || actual.is_a?(BSON::Int64)
    else
      expected == actual
    end
  end
  # rubocop:enable Metrics

  # Takes a timeout and a block. Waits up to the specified timeout until
  # the value of the block is true. If timeout is reached, this method
  # returns normally and does not raise an exception. The block is invoked
  # every second or so.
  def wait_for_condition(timeout)
    deadline = Process.clock_gettime(Process::CLOCK_MONOTONIC) + timeout
    loop do
      break if yield || Process.clock_gettime(Process::CLOCK_MONOTONIC) > deadline

      sleep 1
    end
  end

  def ensure_port_free(port)
    TCPServer.open(port) do
      # Nothing
    end
  end

  def wait_for_port_free(port, timeout)
    wait_for_condition(timeout) do
      ensure_port_free(port)
      true
    rescue Errno::EADDRINUSE
      false
    end
  end
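  # A minimal usage sketch (hypothetical port, for illustration): block until
  # port 8100 is released, giving up silently after 30 seconds.
  #
  #   Utils.wait_for_port_free(8100, 30)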
  def get_ec2_metadata_token(ttl: 30, http: nil)
    http ||= Net::HTTP.new('169.254.169.254')
    # The TTL is required in order to obtain the metadata token.
    req = Net::HTTP::Put.new('/latest/api/token',
      { 'x-aws-ec2-metadata-token-ttl-seconds' => ttl.to_s })
    resp = http.request(req)
    raise "Metadata token request failed: HTTP #{resp.code}" if resp.code != '200'

    resp.body
  end

  def ec2_instance_id
    http = Net::HTTP.new('169.254.169.254')
    metadata_token = get_ec2_metadata_token(http: http)
    req = Net::HTTP::Get.new('/latest/dynamic/instance-identity/document',
      { 'x-aws-ec2-metadata-token' => metadata_token })
    resp = http.request(req)
    payload = JSON.parse(resp.body)
    payload.fetch('instanceId')
  end

  def ec2_instance_profile
    http = Net::HTTP.new('169.254.169.254')
    metadata_token = get_ec2_metadata_token(http: http)
    req = Net::HTTP::Get.new('/latest/meta-data/iam/info',
      { 'x-aws-ec2-metadata-token' => metadata_token })
    resp = http.request(req)
    return nil if resp.code == '404'

    payload = JSON.parse(resp.body)
    payload['InstanceProfileArn']
  end

  def wait_for_instance_profile
    deadline = Process.clock_gettime(Process::CLOCK_MONOTONIC) + 15
    loop do
      begin
        ip = ec2_instance_profile
        if ip
          puts "Instance profile assigned: #{ip}"
          break
        end
      rescue StandardError => e
        puts "Problem retrieving instance profile: #{e.class}: #{e}"
      end

      if Process.clock_gettime(Process::CLOCK_MONOTONIC) >= deadline
        raise 'Instance profile did not get assigned in 15 seconds'
      end

      sleep 3
    end
  end

  def wait_for_no_instance_profile
    deadline = Process.clock_gettime(Process::CLOCK_MONOTONIC) + 15
    loop do
      begin
        ip = ec2_instance_profile
        if ip.nil?
          puts 'Instance profile cleared'
          break
        end
      rescue StandardError => e
        puts "Problem retrieving instance profile: #{e.class}: #{e}"
      end

      if Process.clock_gettime(Process::CLOCK_MONOTONIC) >= deadline
        raise 'Instance profile did not get cleared in 15 seconds'
      end

      sleep 3
    end
  end

  def wrap_forked_child
    yield
  rescue StandardError => e
    warn "Failing process #{Process.pid} due to #{e.class}: #{e}"
    exec(BIN_FALSE)
  else
    # Exec so that we do not close any clients etc. in the child.
    exec(BIN_TRUE)
  end

  def subscribe_all(client, subscriber)
    subscribe_all_sdam_proc(subscriber).call(client)
  end

  def subscribe_all_sdam_proc(subscriber)
    lambda do |client|
      client.subscribe(Mongo::Monitoring::TOPOLOGY_OPENING, subscriber)
      client.subscribe(Mongo::Monitoring::SERVER_OPENING, subscriber)
      client.subscribe(Mongo::Monitoring::SERVER_DESCRIPTION_CHANGED, subscriber)
      client.subscribe(Mongo::Monitoring::TOPOLOGY_CHANGED, subscriber)
      client.subscribe(Mongo::Monitoring::SERVER_CLOSED, subscriber)
      client.subscribe(Mongo::Monitoring::TOPOLOGY_CLOSED, subscriber)
      client.subscribe(Mongo::Monitoring::SERVER_HEARTBEAT, subscriber)
      client.subscribe(Mongo::Monitoring::CONNECTION_POOL, subscriber)
      client.subscribe(Mongo::Monitoring::COMMAND, subscriber)
    end
  end

  # Creates an event subscriber, subscribes it to command events on the
  # specified client, invokes the passed block, asserts there is exactly one
  # command event published, asserts the command event published has the
  # specified command name, and returns the published event.
  def get_command_event(client, command_name, include_auth: false)
    subscriber = Mrss::EventSubscriber.new
    client.subscribe(Mongo::Monitoring::COMMAND, subscriber)
    begin
      yield client
    ensure
      client.unsubscribe(Mongo::Monitoring::COMMAND, subscriber)
    end

    subscriber.single_command_started_event(command_name, include_auth: include_auth)
  end
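  # A minimal usage sketch (hypothetical collection and document, for
  # illustration):
  #
  #   event = Utils.get_command_event(client, 'insert') do |c|
  #     c['widgets'].insert_one(a: 1)
  #   end
  #   event.command['insert'] # => "widgets"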
  # Drops and creates a collection for the purpose of starting the test from
  # a clean slate.
  #
  # @param [ Mongo::Client ] client
  # @param [ String ] collection_name
  def create_collection(client, collection_name)
    client[collection_name].drop
    client[collection_name].create
  end

  # If the deployment is a sharded cluster, creates a direct client
  # to each of the mongos nodes and yields each in turn to the
  # provided block. Does nothing in other topologies.
  # rubocop:disable Metrics
  def mongos_each_direct_client
    return unless ClusterConfig.instance.topology == :sharded

    client = ClientRegistry.instance.global_client('basic')
    client.cluster.next_primary
    client.cluster.servers.each do |server|
      direct_client = ClientRegistry.instance.new_local_client(
        [ server.address.to_s ],
        SpecConfig.instance.test_options.merge(
          connect: :sharded
        ).merge(SpecConfig.instance.auth_options)
      )
      yield direct_client
      direct_client.close
    end
  end
  # rubocop:enable Metrics

  # rubocop:disable Metrics
  def permitted_yaml_classes
    @permitted_yaml_classes ||= [
      BigDecimal,
      Date,
      Time,
      Range,
      Regexp,
      Symbol,
      BSON::Binary,
      BSON::Code,
      BSON::CodeWithScope,
      BSON::DbPointer,
      BSON::Decimal128,
      BSON::Int32,
      BSON::Int64,
      BSON::MaxKey,
      BSON::MinKey,
      BSON::ObjectId,
      BSON::Regexp::Raw,
      BSON::Symbol::Raw,
      BSON::Timestamp,
      BSON::Undefined,
    ].freeze
  end
  # rubocop:enable Metrics

  def load_spec_yaml_file(path)
    if RUBY_VERSION < '2.6'
      YAML.safe_load(File.read(path), permitted_yaml_classes, [], true)
    else
      # Here we have Ruby 2.6+ that supports the new syntax of `safe_load`.
      YAML.safe_load(File.read(path), permitted_classes: permitted_yaml_classes, aliases: true)
    end
  end

  private

  def convert_auto_encryption_client_options(opts)
    auto_encrypt_opts = Utils.snakeize_hash(opts)

    _apply_kms_providers(opts, auto_encrypt_opts)
    _apply_key_vault_namespace(opts, auto_encrypt_opts)
    _apply_schema_map(opts, auto_encrypt_opts)
    _apply_encrypted_fields_map(opts, auto_encrypt_opts)

    auto_encrypt_opts.merge!(extra_options: convert_auto_encryption_extra_options(auto_encrypt_opts))
  end

  def _apply_kms_provider_aws(opts, auto_encrypt_opts)
    return unless opts['kmsProviders']['aws']

    # The tests require that AWS credentials be filled in by the driver.
    auto_encrypt_opts[:kms_providers][:aws] = {
      access_key_id: SpecConfig.instance.fle_aws_key,
      secret_access_key: SpecConfig.instance.fle_aws_secret,
    }
  end

  def _apply_kms_providers(opts, auto_encrypt_opts)
    _apply_kms_provider_aws(opts, auto_encrypt_opts)
    _apply_kms_provider_azure(opts, auto_encrypt_opts)
    _apply_kms_provider_gcp(opts, auto_encrypt_opts)
    _apply_kms_provider_local(opts, auto_encrypt_opts)
  end

  def _apply_kms_provider_azure(opts, auto_encrypt_opts)
    return unless opts['kmsProviders']['azure']

    # The tests require that Azure credentials be filled in by the driver.
    auto_encrypt_opts[:kms_providers][:azure] = {
      tenant_id: SpecConfig.instance.fle_azure_tenant_id,
      client_id: SpecConfig.instance.fle_azure_client_id,
      client_secret: SpecConfig.instance.fle_azure_client_secret,
    }
  end

  def _apply_kms_provider_gcp(opts, auto_encrypt_opts)
    return unless opts['kmsProviders']['gcp']

    # The tests require that GCP credentials be filled in by the driver.
    auto_encrypt_opts[:kms_providers][:gcp] = {
      email: SpecConfig.instance.fle_gcp_email,
      private_key: SpecConfig.instance.fle_gcp_private_key,
    }
  end

  def _apply_kms_provider_local(opts, auto_encrypt_opts)
    return unless opts['kmsProviders']['local']

    auto_encrypt_opts[:kms_providers][:local] = {
      key: BSON::ExtJSON.parse_obj(opts['kmsProviders']['local']['key']).data
    }
  end

  def _apply_key_vault_namespace(opts, auto_encrypt_opts)
    auto_encrypt_opts[:key_vault_namespace] =
      opts['keyVaultNamespace'] || 'keyvault.datakeys'
  end

  def _apply_schema_map(opts, auto_encrypt_opts)
    return unless opts['schemaMap']

    auto_encrypt_opts[:schema_map] = BSON::ExtJSON.parse_obj(opts['schemaMap'])
  end

  def _apply_encrypted_fields_map(opts, auto_encrypt_opts)
    return unless opts['encryptedFieldsMap']

    auto_encrypt_opts[:encrypted_fields_map] = BSON::ExtJSON.parse_obj(opts['encryptedFieldsMap'])
  end

  # rubocop:disable Metrics
  def convert_auto_encryption_extra_options(opts)
    # Spawn mongocryptd on a non-default port for sharded cluster tests
    extra_options = {
      mongocryptd_spawn_args: [ "--port=#{SpecConfig.instance.mongocryptd_port}" ],
      mongocryptd_uri: "mongodb://localhost:#{SpecConfig.instance.mongocryptd_port}"
    }.merge(opts[:extra_options] || {})

    # If bypass_query_analysis has been explicitly specified, then we ignore
    # any requirement to use the shared library, as the two are not
    # compatible.
    if SpecConfig.instance.crypt_shared_lib_required && !opts[:bypass_query_analysis]
      extra_options[:crypt_shared_lib_required] = SpecConfig.instance.crypt_shared_lib_required
      extra_options[:crypt_shared_lib_path] = SpecConfig.instance.crypt_shared_lib_path
      extra_options[:mongocryptd_uri] = 'mongodb://localhost:27777'
    end

    extra_options
  end
  # rubocop:enable Metrics
end

mongo-ruby-driver-2.21.3/upload-api-docs

#!/usr/bin/env ruby
# frozen_string_literal: true

require 'bundler/inline'

gemfile true do
  source 'https://rubygems.org'

  gem 'nokogiri'
  gem 'aws-sdk-s3'
  gem 'yard', '>= 0.9.35'
end

require 'aws-sdk-s3'
require 'optparse'
require 'yard'

# This class contains logic for uploading API docs to S3.
class FileUploader
  def initialize(options)
    Aws.config.update({
      region: options[:region],
      credentials: Aws::Credentials.new(options[:access_key], options[:secret_key])
    })
    Aws.use_bundled_cert!
    @s3 = Aws::S3::Client.new
    @bucket = options[:bucket]
    @prefix = options[:prefix]
    @docs_path = options[:docs_path]
  end

  def upload_docs
    puts "Uploading to #{@bucket}"
    Dir.glob("#{@docs_path}/**/*").each do |file|
      next if File.directory?(file)

      upload_file(file, key(file))
      print '.'
      $stdout.flush
    end
    puts "\nDone!"
  end

  private

  def key(file)
    File.join(@prefix, file.gsub("#{@docs_path}/", ''))
  end

  def upload_file(file, key)
    mime_type = mime_type(file)
    @s3.put_object(bucket: @bucket, key: key, body: File.read(file), content_type: mime_type)
  end

  def mime_type(file)
    {
      '.html' => 'text/html',
      '.css' => 'text/css',
      '.js' => 'application/javascript',
    }.fetch(File.extname(file))
  end
end

# This class contains logic for parsing CLI and ENV options.
class Options
  def initialize
    @options = {}
    parse_cli_options!
    parse_env_options!
    @options[:prefix] = 'docs/ruby-driver/current/api'
    @options[:docs_path] = 'build/public/current/api'
  end

  def [](key)
    @options[key]
  end

  private

  def parse_cli_options!
    OptionParser.new do |opts|
      opts.banner = 'Usage: upload-api-docs [options]'

      opts.on('-b BUCKET', '--bucket=BUCKET', 'S3 Bucket to upload') do |b|
        @options[:bucket] = b
      end
      opts.on('-r REGION', '--region=REGION', 'AWS region') do |r|
        @options[:region] = r
      end
    end.parse!
    %i[bucket region].each do |opt|
      raise OptionParser::MissingArgument, "Option --#{opt} is required" unless @options[opt]
    end
  end

  def parse_env_options!
    @options[:access_key] = ENV.fetch('DOCS_AWS_ACCESS_KEY_ID') do
      raise ArgumentError, 'Please provide aws access key via DOCS_AWS_ACCESS_KEY_ID env variable'
    end
    @options[:secret_key] = ENV.fetch('DOCS_AWS_SECRET_ACCESS_KEY') do
      raise ArgumentError, 'Please provide aws secret key via DOCS_AWS_SECRET_ACCESS_KEY env variable'
    end
  end
end

def generate_docs(options)
  YARD::CLI::Yardoc.run(
    '.',
    '--exclude', './.evergreen',
    '--exclude', './.mod',
    '--exclude', './examples',
    '--exclude', './profile',
    '--exclude', './release',
    '--exclude', './spec',
    '--readme', './README.md',
    '-o', options[:docs_path]
  )
  begin
    File.delete(File.join(options[:docs_path], 'frames.html'))
  rescue StandardError
    nil
  end
end

options = Options.new
generate_docs(options)
FileUploader.new(options).upload_docs
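
# A hypothetical invocation, for illustration (bucket name and region are
# placeholders; real values come from the release environment):
#
#   DOCS_AWS_ACCESS_KEY_ID=... DOCS_AWS_SECRET_ACCESS_KEY=... \
#     ./upload-api-docs --bucket docs-bucket --region us-east-1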